\begin{document} \title[Planelike minimizers of nonlocal energies and fractional perimeters]{Planelike minimizers of nonlocal Ginzburg-Landau energies and fractional perimeters in periodic media} \author[Matteo Cozzi, Enrico Valdinoci]{ Matteo Cozzi${}^{(1,2)}$ \and Enrico Valdinoci${}^{(3,4)}$ } \subjclass[2010]{35R11, 82B26} \keywords{Nonlocal Ginzburg-Landau-Allen-Cahn equation, periodic media, density and energy estimates, planelike minimizers} \thanks{The first author is supported by the MINECO grants~MDM-2014-0445 and~MTM2017-84214-C2-1-P. The second author is supported by the Australian Research Council grant~N.E.W. ``Nonlocal Equations at Work''.} \maketitle {\scriptsize \begin{center} (1) -- BGSMath Barcelona Graduate School of Mathematics. \end{center} \scriptsize \begin{center}(2) -- Departament de Matem\`atiques\\ Universitat Polit\`ecnica de Catalunya\\ Diagonal 647, E-08028 Barcelona (Spain). \end{center} \scriptsize \begin{center} (3) -- School of Mathematics and Statistics\\ University of Melbourne\\ Grattan Street, Parkville, VIC-3010 Melbourne (Australia). \end{center} \scriptsize \begin{center} (4) -- Dipartimento di Matematica ``Federigo Enriques''\\ Universit\`a degli Studi di Milano\\ Via Saldini 50, I-20133 Milano (Italy). \end{center} \begin{center} E-mail addresses: [email protected], [email protected] \end{center} }
\begin{abstract} We consider here a nonlocal phase transition energy in a periodic medium and we construct solutions whose interfaces lie at a bounded distance from any given hyperplane. These solutions are either periodic or quasiperiodic, depending on whether or not the normal direction to the reference hyperplane is rational. Remarkably, the oscillations of the interfaces with respect to the reference hyperplane are bounded by a universal constant times the periodicity scale of the medium. This geometric property allows us to establish, in the limit, the existence of planelike nonlocal minimal surfaces in a periodic structure. The proofs rely on new optimal density and energy estimates. In particular, roughly speaking, the energy of phase transition minimizers is controlled, both from above and below, by the energy of one-dimensional transition layers. \end{abstract} \tableofcontents
\section{Introduction}\label{0orhteriteru6877}
In this paper, we consider a phase transition model in a periodic medium with long-range particle interactions. As customary, the phase coexistence is mathematically described by a double-well potential, the minimization of which tends to set a suitable state-parameter function into one of the two pure phases (which will be taken here to be~$-1$ and~$+1$). In order to make the coexistence of phases significant from both the mathematical and the physical point of view, the total energy of the system also has to take into account an elastic, or ferromagnetic, energy, which avoids the production of unnecessary phase changes and forces the interface between phases to be minimal, at least at large scales, with respect to a suitable notion of surface tension. The model that we study here considers an elastic energy of nonlocal type, which takes into account long-range particle interactions with polynomial decay. {F}rom the mathematical point of view, this elastic energy takes the form of a suitable seminorm of Gagliardo type which is related to fractional Sobolev spaces, see e.g.~\cite{SV12, SV14}.
For this, the nonlocal character of the energy is encoded into a fractional parameter~$s\in(0,1)$, and the smaller this parameter is, the stronger the nonlocal effect on the system. This type of model also finds natural applications in the description of boundary layer effects on phase transitions, see~\cite{ABS94, AB98, G09, SiV12}, and in classical equations subject to a nonlinear boundary reaction, see~\cite{CS-M05, SiV09, CC10, CC14, CS15}. Also, for recent books on nonlocal problems see e.g.~\cite{MBRS16, BV17, DMV17, G17}. At a large scale, the phase separation tends to minimize a suitable notion of surface tension, related to either local or nonlocal perimeters, see~\cite{SV12}. In this, the fractional parameter~$s=1/2$ provides a threshold between local and nonlocal behaviors of interfaces at a large scale: indeed, when~$s\in[1/2,\,1)$ these interfaces are related to the minimization of a classical perimeter functional, and the nonlocal effects are not involved in this process; conversely, when~$s\in(0,\,1/2)$ these interfaces are related to the minimization of the fractional perimeter functional introduced in~\cite{CRS10} and thus the nonlocal effects persist at any scale. The particular focus of this paper is on periodic media, and for this we suppose that both the potential and the elastic energies depend periodically on the space variable (of course, a natural interpretation of such a model comes from the study of crystals). The periodicity scale of the medium is given by a parameter~$\tau>0$. In crystals, one may think that~$\tau$ is small: for this, it is natural to seek results which possess good scaling properties with respect to~$\tau$. The main result of this paper is indeed the construction of phase transition solutions of minimal type whose interfaces lie at a bounded distance from any prescribed hyperplane. Roughly speaking, these interfaces separate the pure states in an ``almost flat'' way. We stress that the existence of these objects is not obvious, since the medium is not homogeneous, and the flatness property of the interfaces is global in the whole of the space. Moreover, and most importantly, such a flatness property will be shown to have optimal scaling features with respect to the periodicity size of the medium. Namely, the distance from the prescribed hyperplanes will be bounded by a structural constant times~$\tau$. That is, in the motivation coming from crystallography, the oscillation of these interfaces will be proven to be comparable with the size of the crystal itself. This invariance by scaling will also allow us to look at a rescaled version of this picture and, by developing an appropriate $\Gamma$-convergence theory in this framework, we will also obtain a result on fractional minimal surfaces in periodic media. Indeed, we will establish the existence of nonlocal minimal surfaces in a periodic setting which stay at a bounded distance from any prescribed hyperplane (the same result for classical minimal surfaces was obtained in~\cite{CdlL01}). A crucial step in the proof of our results lies in obtaining density and energy estimates that are sharp with respect to the size of the fundamental domain and that possess optimal scaling properties. As a matter of fact, the energy of the minimizers is not expected to behave in an additive way with respect to the domain (roughly, the energy in a double ball is not the sum of the energies of two balls).
This is due to the fact that the minimizers have the tendency to concentrate their energy along a codimension-one interface. In addition, the nonlocal features of the elastic energy contribute significantly to the total energy and this type of contribution changes dramatically depending on the fractional parameter~$s$: once again, the analysis for the cases~$s\in(0,\,1/2)$, $s=1/2$ and~$s\in (1/2,\, 1)$ requires different methods and gives different results. As a matter of fact, we will see that the energy of the minimizers in a ball of radius~$R$ is controlled from above and below by~$R^{n-\min\{1,2s\}}$ for~$s\in(0,1)\setminus\{1/2\}$, and a logarithmic correction is needed for the case~$s=1/2$. The precise mathematical setting in which we work is the following. For a domain~$\Omega \subseteq \mathbb{R}^n$, we define \begin{equation} \label{Edef} \mathscr{E}(u; \Omega) := \frac{1}{2} \iint_{\mathscr{C}_{\Omega}} \left| u(x) - u(y) \right|^2 K(x, y) \, dx dy + \int_\Omega W(x, u(x)) \, dx, \end{equation} where \begin{equation} \label{COmegadef} \mathscr{C}_\Omega := \Big( \mathbb{R}^n \times \mathbb{R}^n \Big) \setminus \Big( \left( \mathbb{R}^n \setminus \Omega \right) \times \left( \mathbb{R}^n \setminus \Omega \right) \Big). \end{equation} The kernel~$K: \mathbb{R}^n \times \mathbb{R}^n \to [0, +\infty]$ is a measurable function satisfying \begin{equation} \label{Ksymmetry} \tag{K1} K(x, y) = K(y, x) \quad \mbox{for a.a.~} x, y \in \mathbb{R}^n, \end{equation} and \begin{equation} \label{Kbounds} \tag{K2} \frac{\lambda \chi_{(0, \xi)}(|x - y|)}{|x - y|^{n + 2 s}} \leq K(x, y) \leq \frac{\Lambda}{|x - y|^{n + 2 s}} \quad \mbox{for a.a.~} x, y \in \mathbb{R}^n, \end{equation} for some~$s \in (0, 1)$,~$\Lambda \geq \lambda > 0$ and~$\xi > 0$. Assumption~\eqref{Kbounds} ensures that~$K$ is controlled from above and below (in a neighborhood of the diagonal of~$\mathbb{R}^{2 n}$) by the standard homogeneous, translation invariant, rotationally symmetric kernel \begin{equation} \label{FLker} K_s(x, y) := \frac{1}{|x - y|^{n + 2 s}}. \end{equation} When~$s \in [1/2, 1)$, we also impose the regularity assumption \begin{equation} \label{Kreg} \tag{K3} \left| K(x, x + w) - K(x, x - w) \right| \leq \Gamma |w|^{- n - 1 + \nu} \quad \mbox{for a.a.~} x, w \in \mathbb{R}^n, \end{equation} for some~$\nu \in (0, 1)$ and~$\Gamma > 0$. On the other hand, the potential~$W: \mathbb{R}^n \times \mathbb{R} \to [0, +\infty)$ is a measurable function for which \begin{equation} \label{Wzeros} \tag{W1} W(x, \pm 1) = 0 \quad \mbox{for a.a.~} x \in \mathbb{R}^n, \end{equation} and, for any~$\theta \in [0, 1)$, \begin{equation} \label{Wgamma} \tag{W2} \inf_{\substack{x \in \mathbb{R}^n \\ |r| \leq \theta}} W(x, r) \geq \gamma(\theta), \end{equation} where~$\gamma$ is a non-increasing positive function on the interval~$[0, 1)$.
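For instance, in the model case of the double-well potential~$W(x, r) = Q(x) \, (1 - r^2)^2$ with~$Q(x) \in [1, 2]$ (see the examples listed below), one may take the non-increasing choice
\begin{equation*}
\gamma(\theta) := (1 - \theta^2)^2,
\end{equation*}
since~$|r| \leq \theta < 1$ gives~$(1 - r^2)^2 \geq (1 - \theta^2)^2$ and~$Q \geq 1$, so that~\eqref{Wgamma} is satisfied.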
Moreover, we require~$W$ to be locally of class~$C^1$ in the second variable, to satisfy \begin{equation} \label{Wbound} \tag{W3} W(x, r), \, |W_r(x, r)| \leq \kappa^{-1} \quad \mbox{for a.a.~} x \in \mathbb{R}^n \mbox{ and any } r \in \mathbb{R}, \end{equation} and \begin{equation} \label{W''} \tag{W4} \begin{aligned} W(x, t) \geq W(x, r) + \kappa (1 + r) (t - r) + \kappa (t - r)^2 & \quad \mbox{for any } -1 \leq r \leq t \leq - 1 + \kappa, \\ W(x, r) \geq W(x, t) + \kappa (1 - t) (t - r) + \kappa (t - r)^2 & \quad \mbox{for any } 1 - \kappa \leq r \leq t \leq 1, \\ W(x, r) \leq \kappa^{-1} (1 - |r|) & \quad \mbox{for any } 1 - \kappa \leq |r| \leq 1, \end{aligned} \end{equation} for a.a.~$x \in \mathbb{R}^n$ and some small constant~$\kappa \in (0, 1)$. Condition~\eqref{W''} essentially says that~$W$ must have a superquadratic/sublinear detachment from its zeroes~$\pm 1$, uniformly in~$x$. We point out that any potential which is locally~$C^2$ in the second variable and satisfies $$ W_r(x, \pm 1) = 0 \mbox{ and } W_{rr}(x, \pm 1) \geq \kappa \quad \mbox{for a.a.~} x \in \mathbb{R}^n, $$ also fulfills~\eqref{W''}. But much more general behaviors are allowed. In order to model a periodic environment, we need to impose periodicity conditions on the kernel~$K$ and the potential~$W$. Given~$\tau > 0$, we assume \begin{equation} \label{Kper} \tag{K4} K(x + k, y + k) = K(x, y) \quad \mbox{for a.a.~} x, y \in \mathbb{R}^n \mbox{ and any } k \in \tau \mathbb{Z}^n, \end{equation} and \begin{equation} \label{Wper} \tag{W5} W(x + k, r) = W(x, r) \quad \mbox{for a.a.~} x \in \mathbb{R}^n \mbox{ and any } k \in \tau \mathbb{Z}^n, \end{equation} for any fixed~$r \in \mathbb{R}$. We notice that the parameter~$\tau$ allows us to modulate the periodicity scale of the medium. Typical examples of potentials $W(x,r)$ that we take into account are given by \begin{eqnarray*} && Q(x)\, (1-r^2)^2,\\ && Q(x)\, |1-r^2|^d \quad{\mbox{ with }} d\in (1,2),\\ && Q(x)\, \left(1+\cos(\pi r)\right)\\ {\mbox{and }}&& Q(x)\,\cos^2\left(\frac{\pi r}{2}\right), \end{eqnarray*} with~$Q(x)\in[1,2]$ for any~$x\in\mathbb{R}^n$. Typical examples of interaction kernels are given by $$ K(x, y) = \frac{a(x, y)}{|x - y|^{n + 2 s}}, $$ with~$a$ Lipschitz continuous and periodic in both~$x$ and~$y$. Sometimes we will adopt shorthand notations for the interaction and potential terms appearing in definition~\eqref{Edef}. Given any measurable sets~$E, F \subseteq \mathbb{R}^n$, we write \begin{equation} \label{kindef} \begin{aligned} \mathscr{K}_K(u; E, F) & = \mathscr{K}(u; E, F) := \frac{1}{2} \int_E \int_F \left| u(x) - u(y) \right|^2 K(x, y) \, dx dy, \\ \mathscr{K}_K(u; E) & = \mathscr{K}(u; E) := \frac{1}{2} \iint_{\mathscr{C}_E} \left| u(x) - u(y) \right|^2 K(x, y) \, dx dy, \end{aligned} \end{equation} with~$\mathscr{C}_E$ as in~\eqref{COmegadef}, and $$ \mathscr{P}_W(u; E) = \mathscr{P}(u; E) := \int_E W(x, u(x)) \, dx. $$ Under these conventions (and the symmetry assumption~\eqref{Ksymmetry}), we have \begin{align*} \mathscr{E}(u; \Omega) & = \mathscr{K}(u; \Omega, \Omega) + 2 \mathscr{K}(u; \Omega, \mathbb{R}^n \setminus \Omega) + \mathscr{P}(u; \Omega) \\ & = \mathscr{K}(u; \Omega) + \mathscr{P}(u; \Omega).
\end{align*} Often, we will consider the integral operator~$\mathscr{L}_K$ associated with~$\mathscr{E}$, that is defined by \begin{equation} \label{LKdef} \begin{aligned} \mathscr{L}_K u(x) := & \, \mbox{\normalfont P.V.} \int_{\mathbb{R}^n} \left( u(x) - u(y) \right) K(x, y) \, dy \\ = & \, \lim_{\varepsilon \rightarrow 0^+} \int_{\mathbb{R}^n \setminus B_\varepsilon(x)} \left( u(x) - u(y) \right) K(x, y) \, dy. \end{aligned} \end{equation} Such an operator naturally appears when considering the Euler-Lagrange equation of the functional~$\mathscr{E}$. We observe that~$\mathscr{L}_K u$ is well-defined pointwise, at least when~$u$ is a smooth bounded function and, for~$s \geq 1/2$, $K$ satisfies~\eqref{Kreg}. See Lemmata~\ref{LKlems<} and~\ref{LKlemsge} for estimates in this direction. We also notice that~$\mathscr{L}_K$ boils down to the well-known fractional Laplacian~$(-\Delta)^s$, when~$K$ is the standard kernel~$K_s$ as defined in~\eqref{FLker}. After these preliminary definitions, we are now almost ready to state the main results contained in this paper. In order to do this, we first need to make precise the notions of minimizers of the functional~$\mathscr{E}$ that we take into consideration. \begin{definition} \label{mindef} Let~$\Omega$ be an open subset of~$\mathbb{R}^n$. A measurable function~$u: \mathbb{R}^n \to \mathbb{R}$ is said to be a~\emph{minimizer} of~$\mathscr{E}$ in~$\Omega$ if~$\mathscr{E}(u; \Omega) < +\infty$ and $$ \mathscr{E}(u; \Omega) \leq \mathscr{E}(v; \Omega), $$ for any measurable function~$v$ that coincides with~$u$ outside of~$\Omega$. \end{definition} This definition may be extended to the whole space in the following way. \begin{definition} \label{classAmindef} A measurable function~$u: \mathbb{R}^n \to \mathbb{R}$ is said to be a~\emph{class~A minimizer} of~$\mathscr{E}$ if it is a minimizer for~$\mathscr{E}$ in every bounded open set~$\Omega \subset \mathbb{R}^n$. \end{definition} We stress that the simpler notion of~\emph{global minimizer} is too restrictive for our purposes. Indeed, the functions that we typically take into consideration have infinite energy over the whole space~$\mathbb{R}^n$ and therefore it is convenient to evaluate their energy on bounded domains only. However, class~A minimizers of~$\mathscr{E}$ are still (weak) solutions of the Euler-Lagrange equation $$ - 2 \, \mathscr{L}_K u = W_r(\cdot, u) \quad \mbox{in } \mathbb{R}^n. $$ The concept of class~A minimizers is frequently used in the literature, in contexts where one has to deal with objects having only locally finite energy. The terminology comes from~\cite{M24} and has been more recently adopted in e.g.~\cite{CdlL01,V04,PV05,CV17,CP16}. In what follows, we will construct class~A minimizers of~$\mathscr{E}$ and related functionals, which exhibit a close-to-one-dimensional geometry. More specifically, we will look for minimizers that connect the two pure phases~$-1$ and~$1$ of the potential~$W$ asymptotically in one fixed direction~$\omega \in \mathbb{R}^n \setminus \{ 0 \}$ of the space, and that are~\emph{planelike}, in the sense that their intermediate level sets (of levels between, say,~$-9/10$ and~$9/10$) are contained in a strip orthogonal to~$\omega$ and of width universally proportional to the periodicity scale~$\tau$ of the medium. Note that when~$s<1/2$ we shall call \emph{universal} any quantity that depends at most on~$n$,~$s$,~$\lambda$,~$\Lambda$,~$\kappa$ and the function~$\gamma$, but not on~$\xi$ and~$\tau$.
A similar notation is adopted when~$s \geq 1/2$, but in this case universal quantities may also depend on~$\nu$ and~$\Gamma$, according to condition~\eqref{Kreg}. Namely, when~$s \geq 1/2$, we shall call \emph{universal} any quantity that depends at most on~$n$,~$s$,~$\lambda$,~$\Lambda$,~$\kappa$, the function~$\gamma$, $\nu$ and~$\Gamma$, but not on~$\xi$ and~$\tau$. Our construction will heavily rely on the periodic structure of the ambient space and will be carried out in different ways, depending on whether the direction~$\omega$ belongs to~$\tau \mathbb{Q}^n$ or not. In the first, \emph{rational} case, the minimizers will naturally inherit a periodicity property from that of the medium, in a sense that may be made precise through the following definition. \begin{definition} \label{simperdef} Let~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$. We define the equivalence relation~$\, \sim_{\tau, \, \omega}$ in~$\mathbb{R}^n$, by setting $$ x \sim_{\tau, \, \omega} y \quad \mbox{if and only if} \quad x - y = k \in \tau \mathbb{Z}^n \mbox{ and } \omega \cdot k = 0. $$ We say that a function~$u: \mathbb{R}^n \to \mathbb{R}$ is~\emph{periodic with respect to~$\sim_{\tau, \, \omega}$} or simply~\emph{$\sim_{\tau, \, \omega}$-periodic} if $$ u(x) = u(y) \quad \mbox{for any } x, y \in \mathbb{R}^n \mbox{ such that } x \sim_{\tau, \, \omega} y. $$ \end{definition} When no confusion may arise, we will denote this equivalence relation simply by~$\sim$. We are now in a position to present the statements of the main contributions of this paper. Our first result (Theorem~\ref{tauPLthm}) improves the main theorem of~\cite{CV17} and allows its application to the scaled energies that provide $\Gamma$-limit results as a byproduct (see Theorems~\ref{epsPLthm} and~\ref{PerPLthm}). In this sense, our main results here consist in the forthcoming Theorem~\ref{tauPLthm} and in the sequence of arguments leading from it to Theorems~\ref{epsPLthm} and~\ref{PerPLthm}. We point out that the result in Theorem~\ref{tauPLthm} is valid for the whole fractional parameter range~$s\in(0,1)$, while the $\Gamma$-limit results focus on the strongly nonlocal regime~$s\in(0,1/2)$, in which the nonlocal features of the problem are preserved at any scale and produce, in the limit, a nonlocal perimeter functional (we think that it will also be interesting to investigate the $\Gamma$-limit in the weakly nonlocal regime~$s\in[1/2,1)$, and in this framework a limit functional of local type has to be expected). \begin{theorem} \label{tauPLthm} Let~$n \geq 2$ and~$s \in (0, 1)$. Assume that the kernel~$K$ and the potential~$W$ respectively satisfy~\eqref{Ksymmetry},~\eqref{Kbounds},~\eqref{Kper} and~\eqref{Wzeros},~\eqref{Wgamma},~\eqref{Wbound},~\eqref{W''},~\eqref{Wper}, with~$\xi = \tau \geq 1$. If~$s \in [1/2, 1)$, we also require~$K$ to fulfill~\eqref{Kreg}.\\ For any fixed~$\theta \in (0, 1)$, there exists a constant~$M_0 > 0$, depending only on~$\theta$ and on universal quantities, such that, given any direction~$\omega \in \mathbb{R}^n \setminus \{ 0 \}$, we can construct a class~A minimizer~$u$ of the energy~$\mathscr{E}$ for which \begin{equation} \label{tauPLcond} \bigg\{ x \in \mathbb{R}^n : \left| u(x) \right| < \theta \bigg\} \subset \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x \in \left[ 0, \tau M_0 \right] \bigg\}.
\end{equation} Furthermore, \begin{enumerate}[$\bullet$] \item if~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$, then~$u$ is periodic with respect to~$\sim_{\tau, \, \omega}$, while \item if~$\omega \in \mathbb{R}^n \setminus \tau \mathbb{Q}^n$, then~$u$ is the locally uniform limit of a sequence of periodic class~A minimizers of~$\mathscr{E}$. \end{enumerate} \end{theorem} Theorem~\ref{tauPLthm} has been proved in~\cite{CV17} for the case in which the periodicity scale~$\tau$ is equal to~$1$. Its proof in the more general setting of this paper requires several important modifications with respect to that of~\cite{CV17}. We shall comment more on the differences in the argument at the end of this section. In local contexts, similar results have been obtained in~\cite{V04} and~\cite{CdlL01}, where the authors respectively took into account an energy with a gradient interaction term and a geometric functional driven by a heterogeneous perimeter, instead of the one appearing in~\eqref{Edef}. See also~\cite{PV05,PV05b,NV07,BV08,D13} for related constructions. In nonlocal frameworks, Theorem~\ref{tauPLthm} here and~\cite[Theorem~1.4]{CV17} are the first available results on planelike minimizers, to the best of our knowledge. When the medium is homogeneous (i.e.~$K$ is translation invariant and~$W$ does not depend on~$x$), the existence of one-dimensional minimizers has been investigated in~\cite{CS-M05,PSV13,CS14,CS15,CP16}. When comparing Theorem~\ref{tauPLthm} to~\cite[Theorem~1.4]{CV17}, it is worth noting that the presence of a medium with~$\tau$-periodicity is mostly reflected, at the level of the minimizers, in the fact that the constructed minimizers have level sets contained in a strip of width proportional to~$\tau$, as can be seen in~\eqref{tauPLcond}. Besides being interesting in itself (and not obtainable with the techniques of~\cite{CV17}), this fact leads to important consequences when applied to the class of scaled functionals that we now introduce. Given a small~$\varepsilon > 0$, we define the scaled energy~$\mathscr{E}_\varepsilon$ on any measurable set~$\Omega \subset \mathbb{R}^n$ as \begin{equation} \label{Eepsdef} \mathscr{E}_\varepsilon(u; \Omega) = \frac{1}{2} \iint_{\mathscr{C}_{\Omega}} \left| u(x) - u(y) \right|^2 K(x, y) \, dx dy + \frac{1}{\varepsilon^{2 s}} \int_\Omega W(x, u(x)) \, dx. \end{equation} This modified functional was first studied in~\cite{SV12} for the model case of~$K$ given by~\eqref{FLker} and naturally arises when considering the rescaling \begin{equation} \label{Reps} \mathcal{R}_\varepsilon u(x) := u \left( \frac{x}{\varepsilon} \right). \end{equation} With the aid of~\eqref{Reps}, it is almost immediate to see that Theorem~\ref{tauPLthm} implies the following analogous result for the functional~$\mathscr{E}_\varepsilon$. \begin{theorem} \label{epsPLthm} Let~$n \geq 2$ and~$s \in (0, 1)$. Assume that the kernel~$K$ and the potential~$W$ respectively satisfy~\eqref{Ksymmetry},~\eqref{Kbounds},~\eqref{Kper} and~\eqref{Wzeros},~\eqref{Wgamma},~\eqref{Wbound},~\eqref{W''},~\eqref{Wper}, with~$\xi = \tau \geq 1$.
If~$s \in [1/2, 1)$, we also require~$K$ to fulfill~\eqref{Kreg}.\\ For any fixed~$\theta \in (0, 1)$, there exists a constant~$M_0 > 0$, depending only on~$\theta$ and on universal quantities, such that, given any~$\varepsilon \in (0, \tau]$ and any direction~$\omega \in \mathbb{R}^n \setminus \{ 0 \}$, we can construct a family of class~A minimizers~$u_\varepsilon$ of the energy~$\mathscr{E}_\varepsilon$ for which $$ \bigg\{ x \in \mathbb{R}^n : \left| u_\varepsilon(x) \right| < \theta \bigg\} \subset \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x \in \left[ 0, \tau M_0 \right] \bigg\}. $$ Furthermore, \begin{enumerate}[$\bullet$] \item if~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$, then~$u_\varepsilon$ is periodic with respect to~$\sim_{\tau, \, \omega}$, while \item if~$\omega \in \mathbb{R}^n \setminus \tau \mathbb{Q}^n$, then~$u_\varepsilon$ is the locally uniform limit of a sequence of periodic class~A minimizers of~$\mathscr{E}_\varepsilon$. \end{enumerate} \end{theorem} Observe that the family of minimizers~$\{ u_\varepsilon \}$ produced by Theorem~\ref{epsPLthm} is such that each minimizer has intermediate values confined in a strip of width independent of~$\varepsilon$. For this to be true, it is crucial that the value~$M_0$ found in Theorem~\ref{tauPLthm} does not depend on the periodicity scale~$\tau$. Such uniform-in-$\varepsilon$ width of the strips where the transitions of the~$u_\varepsilon$'s occur allows us to consider smaller and smaller values of~$\varepsilon$ and eventually take the limit as~$\varepsilon \rightarrow 0^+$. In the remaining part of this first section we shall focus on what happens when one takes this limit. In the classical Van der Waals-Cahn-Hilliard theory, a gradient term weighted by a small parameter~$\varepsilon$ is often introduced in the total energy functional in order to model phase coexistence phenomena that exhibit smooth transition interfaces. See e.g.~\cite{CGS84,G87} and the references therein for some more detailed explanations on the subject. In~\cite{M87}, in particular, the limit as~$\varepsilon \rightarrow 0^+$ of these~$\varepsilon$-scaled functionals has been analyzed in depth through the language of~$\Gamma$-convergence. It has been proved there that the interfaces of the minimizers of such functionals converge to a minimal surface, therefore building a bridge between the Allen-Cahn-Ginzburg-Landau energy and the De Giorgi perimeter. Nonlocal variants of this~$\Gamma$-convergence result have also been considered. Typically, one replaces the gradient penalization with a term that takes into account finite differences and allows for long-range interactions. In~\cite{ABS94,AB98,G09}, the authors obtained $\Gamma$-convergence results in which the target functional is still the classical perimeter, in conformity with the classical theory. More recently, a wider array of behaviors for the limit functional has been discovered in~\cite{SV12}. There, it is shown that (a suitable renormalization in $\varepsilon$ of) the family of energies~\eqref{Eepsdef}, with kernel~$K$ given by~\eqref{FLker},~$\Gamma$-converges to the standard perimeter when~$s \geq 1/2$, and to the new notion of~\emph{fractional perimeter} introduced in~\cite{CRS10} when~$s < 1/2$. In what follows, we study the~$\Gamma$-limit of the functional~$\mathscr{E}_\varepsilon$ in~\eqref{Eepsdef} in the strongly nonlocal regime~$s < 1/2$.
We impose this restriction since we are predominantly interested in the emergence of a nonlocal perimeter of the type of~\cite{CRS10} and, in particular, in deducing from Theorem~\ref{epsPLthm} an analogous statement for the minimal surfaces of such a perimeter. We nevertheless believe that it might be interesting to investigate the~$\Gamma$-limit also in the case of~$s \geq 1/2$, presumably obtaining a local, heterogeneous perimeter. As said right above, we now restrict our attention to kernels that satisfy condition~\eqref{Kbounds} in Section~\ref{0orhteriteru6877} with~$s \in (0, 1/2)$. Given an open set~$\Omega \subseteq \mathbb{R}^n$ and a measurable set~$E \subset \mathbb{R}^n$, we define the~\emph{$K$-perimeter} of~$E$ inside~$\Omega$ as \begin{equation} \label{PerKdef} \mathrm{Per}_K(E; \Omega) := \mathcal{L}_K(E \cap \Omega, \Omega \setminus E) + \mathcal{L}_K(E \cap \Omega, \mathbb{R}^n \setminus (E \cup \Omega)) + \mathcal{L}_K(E \setminus \Omega, \Omega \setminus E), \end{equation} where, for any two disjoint measurable sets~$A, B \subset \mathbb{R}^n$, we set $$ \mathcal{L}_K(A, B) := \int_A \int_B K(x, y) \, dx dy. $$ We stress that, when~$K$ is given by~\eqref{FLker}, the~$K$-perimeter boils down to the fractional perimeter introduced in~\cite{CRS10}. Anisotropic versions of this nonlocal perimeter were first studied in~\cite{L14}. The very recent paper~\cite{CSV16} deals with an even more general class of anisotropic perimeter functionals, driven by kernels which are not necessarily homogeneous. With definition~\eqref{PerKdef}, we consider a perimeter that may also be~\emph{space-dependent} and therefore model a completely heterogeneous environment. In the following, we study the minimizers of~$\mathrm{Per}_K$, especially in the whole space~$\mathbb{R}^n$. In analogy with Definitions~\ref{mindef} and~\ref{classAmindef}, we consider the following concepts of minimizers. \begin{definition} Let~$\Omega$ be an open subset of~$\mathbb{R}^n$. Given a measurable set~$E \subset \mathbb{R}^n$, we say that its boundary~$\partial E$ is a~\emph{minimal surface} for~$\mathrm{Per}_K$ in~$\Omega$ if~$\mathrm{Per}_K(E; \Omega) < +\infty$ and $$ \mathrm{Per}_K(E; \Omega) \leq \mathrm{Per}_K(F; \Omega), $$ for any measurable set~$F$ such that~$F \setminus \Omega = E \setminus \Omega$. \end{definition} \begin{definition} The boundary~$\partial E$ of a measurable set~$E \subset \mathbb{R}^n$ is said to be a~\emph{class A minimal surface} for~$\mathrm{Per}_K$ if it is a minimal surface for~$\mathrm{Per}_K$ in every bounded open set~$\Omega \subset \mathbb{R}^n$. \end{definition} As a first result, we show that~$\mathrm{Per}_K$ is the~$\Gamma$-limit of the functionals~$\mathscr{E}_\varepsilon$ defined in~\eqref{Eepsdef}, in the appropriate topology. Observe that, in the notation of~\eqref{kindef}, we may write \begin{equation} \label{PerKchi} \mathrm{Per}_K(E; \Omega) = \frac{1}{4} \, \mathscr{K}_K(\chi_E - \chi_{\mathbb{R}^n \setminus E}; \Omega). \end{equation} We then introduce the space of functions $$ \mathcal{X} := \bigg\{ u \in L^\infty(\mathbb{R}^n) : \| u \|_{L^\infty(\mathbb{R}^n)} \leq 1 \bigg\}, $$ and we endow it with the~$L^1_{\rm loc}(\mathbb{R}^n)$ topology. In view of the representation in~\eqref{PerKchi}, the~$K$-perimeter may be seen as acting on the subset of~$\mathcal{X}$ composed of the modified characteristic functions of the form~$\chi_E - \chi_{\mathbb{R}^n \setminus E}$, for measurable sets~$E$.
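To check identity~\eqref{PerKchi}, one may argue as follows. If~$u = \chi_E - \chi_{\mathbb{R}^n \setminus E}$, then~$\left| u(x) - u(y) \right|^2 = 4$ when exactly one of the two points~$x$, $y$ belongs to~$E$, and it vanishes otherwise. Hence, by the symmetry of~$K$ and of~$\mathscr{C}_\Omega$,
$$ \mathscr{K}_K(u; \Omega) = 4 \iint_{\mathscr{C}_\Omega} \chi_E(x) \, \chi_{\mathbb{R}^n \setminus E}(y) \, K(x, y) \, dx dy, $$
and splitting the pairs~$(x, y) \in \mathscr{C}_\Omega$ with~$x \in E$ and~$y \in \mathbb{R}^n \setminus E$ according to whether~$x$ and~$y$ lie inside or outside~$\Omega$ produces exactly the three terms~$\mathcal{L}_K(E \cap \Omega, \Omega \setminus E)$,~$\mathcal{L}_K(E \cap \Omega, \mathbb{R}^n \setminus (E \cup \Omega))$ and~$\mathcal{L}_K(E \setminus \Omega, \Omega \setminus E)$ appearing in~\eqref{PerKdef}.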
Actually, we may extend the~$K$-perimeter to a functional~$\mathscr{G}_K(\cdot \,; \Omega): \mathcal{X} \to [0, +\infty]$ by setting $$ \mathscr{G}_K(u; \Omega) := \begin{cases} \mathscr{K}_K(u; \Omega) & \quad \mbox{if } u|_\Omega = \chi_E - \chi_{\mathbb{R}^n \setminus E} \mbox{ for some measurable } E \subseteq \Omega, \\ +\infty & \quad \mbox{otherwise}. \end{cases} $$ When no confusion may arise, we omit the dependence of~$\mathscr{G}_K$ on the kernel~$K$ and simply refer to this functional as~$\mathscr{G}$. We have the following~$\Gamma$-convergence result. \begin{proposition} \label{Gammaconvprop} Let~$n \geq 1$ and~$s \in (0, 1/2)$. Assume the kernel~$K$ to be a non-negative function satisfying~\eqref{Ksymmetry} and the potential~$W$ to fulfill conditions~\eqref{Wzeros},~\eqref{Wgamma},~\eqref{Wbound}.\\ Then, the family of functionals~$\mathscr{E}_\varepsilon$~$\Gamma$-converges to~$\mathscr{G}$ on~$\mathcal{X}$. That is, \begin{enumerate}[$(i)$] \item for any~$u_\varepsilon$ converging to~$u$ in~$\mathcal{X}$, it holds $$ \mathscr{G}(u; \Omega) \leq \liminf_{\varepsilon \rightarrow 0^+} \mathscr{E}_\varepsilon(u_\varepsilon; \Omega) \quad \mbox{for any open set } \Omega \subseteq \mathbb{R}^n; $$ \item for any~$u \in \mathcal{X}$, there exists~$u_\varepsilon$ converging to~$u$ in~$\mathcal{X}$ such that $$ \mathscr{G}(u; \Omega) \geq \limsup_{\varepsilon \rightarrow 0^+} \mathscr{E}_\varepsilon(u_\varepsilon; \Omega) \quad \mbox{for any open set } \Omega \subseteq \mathbb{R}^n. $$ \end{enumerate} \end{proposition} In view of the above proposition, we see that the~$K$-perimeter is the correct geometric counterpart to the energy functional~$\mathscr{E}_\varepsilon$, when~$s < 1/2$. Consequently, the minimizers of~$\mathrm{Per}_K$ may be treated as limits of the minimizers of~$\mathscr{E}_\varepsilon$. In this spirit, we may take the limit as~$\varepsilon \rightarrow 0^+$ in Theorem~\ref{epsPLthm} and obtain the existence of planelike minimal surfaces for the~$K$-perimeter. Before doing this, we extend Definition~\ref{simperdef} to get a notion of~$\sim_{\tau, \omega}$-periodicity for the subsets of~$\mathbb{R}^n$. \begin{definition} We say that a set~$A \subseteq \mathbb{R}^n$ is~\emph{periodic with respect to~$\sim_{\tau, \, \omega}$} or simply~\emph{$\sim_{\tau, \, \omega}$-periodic} if $$ x \in A \mbox{ implies that } y \in A \mbox{ for any } y \in \mathbb{R}^n \mbox{ such that } y \sim_{\tau, \, \omega} x. $$ \end{definition} We are now in a position to state the following \begin{theorem} \label{PerPLthm} Let~$n \geq 2$ and~$s \in (0, 1/2)$. Assume that the kernel~$K$ satisfies conditions~\eqref{Ksymmetry},~\eqref{Kbounds} and~\eqref{Kper}, with~$\xi = \tau \geq 1$.\\ There exists a universal constant~$M_0 > 0$ for which, given any direction~$\omega \in \mathbb{R}^n \setminus \{ 0 \}$, we can construct a class~A minimal surface~$\partial E$ for the perimeter~$\mathrm{Per}_K$ such that $$ \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x < 0 \bigg\} \subset E \subset \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x \leq \tau M_0 \bigg\}. $$ Furthermore, \begin{enumerate}[$\bullet$] \item if~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$, then~$\partial E$ is periodic with respect to~$\sim_{\tau, \, \omega}$, while \item if~$\omega \in \mathbb{R}^n \setminus \tau \mathbb{Q}^n$, then~$\partial E$ is the locally uniform limit of a sequence of periodic class~A minimal surfaces for~$\mathrm{Per}_K$.
\end{enumerate} \end{theorem} Recently, we also obtained a version of Theorem~\ref{PerPLthm} with different methods, by taking a suitable limit of an Ising model on a lattice, see Theorem~1.7 of~\cite{CDV17b}. In local frameworks, a counterpart of Theorem~\ref{PerPLthm} was first obtained in~\cite{CdlL01} for a class of periodic perimeter functionals in~$\mathbb{R}^n$ and more general Riemannian contexts. This result generalizes to any dimension classical statements on the existence of geodesics on~$2$-dimensional periodic manifolds (\cite{M24,H32}). The result of~\cite{CdlL01} has then been obtained again in~\cite{V04} via a~$\Gamma$-convergence approach that motivates and inspires ours here. Before heading to the proofs of the results previously stated, we conclude this introductory section with a brief remark on the argument that we follow to prove our main contribution, Theorem~\ref{tauPLthm}. The strategy adopted is, in some steps, close to the one developed in~\cite{V04} and later translated to this nonlocal setting in~\cite{CV17}. Basically, we first restrict ourselves to directions~$\omega$ that belong to~$\tau \mathbb{Q}^n$, as the~\emph{irrational} ones may be dealt with via a limiting procedure. For such~\emph{rational}~$\omega$'s, we use the compactness provided by the equivalence relation~$\sim$ to construct an appropriately constrained minimizer~${u_\omega^M}$ for~$\mathscr{E}$ in the strip $$ \mathcal{S}_\omega^M = \Big\{ x \in \mathbb{R}^n : \omega \cdot x \in [0, M] \Big\}, $$ for any large~$M > 0$. The proof is then completed by showing that, for~$M$ large enough, the candidate~${u_\omega^M}$ is indeed a class~A minimizer, as desired. The essential, and important, difference between the proof provided here and that of~\cite{CV17} lies in the concluding step. In~\cite{CV17}, the proof that~${u_\omega^M}$ is a class~A minimizer relies on the coupling of uniform~$C^\alpha$ estimates on~${u_\omega^M}$ with suitable bounds from above on the growth of the energy~$\mathscr{E}$ on large balls. Such energy estimates were first proved in~\cite{CC10,CC14,SV14} for functionals related to the fractional Laplacian. A more general version of this result can be found in~\cite{CV17}. Setting \begin{equation} \label{Psidef} \Psi_s(t) := \begin{cases} t^{1 - 2 s} & \quad \mbox{if } s \in (0, 1/2) \\ \log t & \quad \mbox{if } s = 1/2 \\ 1 & \quad \mbox{if } s \in (1/2, 1), \end{cases} \end{equation} for~$t > 1$, it may be stated as follows. \begin{proposition}[\cite{CV17}] \label{enestprop} Let~$n \geq 1$,~$s \in (0, 1)$,~$x_0 \in \mathbb{R}^n$ and~$R \geq 3$. Assume that~$K$ and~$W$ satisfy~\eqref{Ksymmetry},~\eqref{Kbounds} and~\eqref{Wzeros},~\eqref{Wbound}, respectively. If~$u: \mathbb{R}^n \to [-1, 1]$ is a minimizer of~$\mathscr{E}$ in~$B_{R + 2}(x_0)$, then \begin{equation} \label{enest} \mathscr{E}(u; B_R(x_0)) \leq C R^{n - 1} \Psi_s(R), \end{equation} for some constant~$C \geq 1$ which depends on~$n$,~$s$,~$\Lambda$ and~$\kappa$. \end{proposition} Though simple and rather elementary, the proof of~\cite{CV17} does not scale well with the parameter~$\tau$, in the sense that, once applied in the framework of this paper, it does not provide information on the dependence on~$\tau$ of the width of the strip appearing on the right-hand side of inclusion~\eqref{tauPLcond}.
As one can easily verify, the dependence stated in~\eqref{tauPLcond} (that is, width of the strip~$\simeq M_0 \tau$) is crucial for obtaining Theorem~\ref{epsPLthm}. Furthermore, the reason why such a dependence cannot be detected is that the H\"older estimates of~\cite[Section~2]{CV17} do not rely at all on the double-well structure of the potential~$W$ and do not match appropriately with the rescaling~\eqref{Reps}. Here, we solve this issue by adjusting our strategy and making it closer to the ones traced in~\cite{CdlL01,V04}. In more concrete terms, we replace the use of the~$C^\alpha$ bounds with a powerful tool, frequently associated with Allen-Cahn equations and minimal surfaces: the density estimates. Density estimates are a classical device in geometric measure theory, where they are used to study minimal surfaces. In PDEs, they were first introduced in~\cite{CC95} to obtain the uniform convergence of the level sets of the minimizers of Ginzburg-Landau-type energies driven by the~$L^2$ norm of the gradient. Later on, they have been extended to more general functionals with gradient structure (see for instance~\cite{V04,PV05,PV05b,NV07}) and, more recently, to nonlocal energies driven by Gagliardo seminorms (see~\cite{SV11,SV14}). In this paper, we present a version of the density estimates compatible with our setting. To obtain such results, we suitably modify the arguments of~\cite{SV11,SV14}. Moreover, we couple these density estimates with a bound from below on the energy of non-trivial minimizers of~$\mathscr{E}$, which is a counterpart of Proposition~\ref{enestprop} and a proof of its general optimality. As we believe that these two results may be interesting in themselves and useful in other contexts, we include their statements here in the introduction. \begin{theorem} \label{densestthm} Let~$n \geq 2$ and~$s \in (0, 1)$. Assume~$K$ and~$W$ to respectively satisfy~\eqref{Ksymmetry},~\eqref{Kbounds} and~\eqref{Wzeros},~\eqref{Wgamma},~\eqref{Wbound},~\eqref{W''}. If~$s \in [1/2, 1)$, also suppose that~$K$ fulfills~\eqref{Kreg}.\\ Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. Fix~$\theta, \theta_0 \in (-1, 1)$. If~$u(x_0) \geq \theta_0$, then there exist two constants~$\bar{c} \in (0, 1)$, depending on universal quantities, and~$\bar{R} = \bar{R}(\theta, \theta_0) \geq 2$, that may also depend on~$\theta$ and~$\theta_0$, such that \begin{equation} \label{densest1} \left| \left\{ u > \theta \right\} \cap B_R(x_0) \right| \geq \bar{c} R^n, \end{equation} provided~$\bar{R} \leq R \leq \xi/3$. Similarly, if~$u(x_0) \leq \theta_0$, then \begin{equation} \label{densest2} \left| \left\{ u < \theta \right\} \cap B_R(x_0) \right| \geq \bar{c} R^n, \end{equation} provided~$\bar{R} \leq R \leq \xi/3$. \end{theorem} \begin{theorem} \label{enestbelowthm} Let~$n \geq 2$ and~$s \in (0, 1)$. Assume~$K$ and~$W$ to respectively satisfy~\eqref{Ksymmetry},~\eqref{Kbounds} and~\eqref{Wzeros},~\eqref{Wgamma},~\eqref{Wbound},~\eqref{W''}. If~$s \in [1/2, 1)$, also suppose that~$K$ fulfills~\eqref{Kreg}.\\ Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$.
If~$u(x_0) \in [-\theta_0, \theta_0]$, for some~$\theta_0 \in (0, 1)$, then there exist two constants~$c_0 \in (0, 1)$ and~$R_0 \geq 1$, depending only on~$\theta_0$ and on universal quantities, such that \begin{equation} \label{enestbelow} \mathscr{E}(u; B_R(x_0)) \geq c_0 R^{n - 1} \Psi_s(R), \end{equation} provided~$R_0 \leq R \leq \xi$. \end{theorem} See also Proposition~\ref{intdensprop} in Section~\ref{densec} for a bound from below on the measure of the interface of non-trivial minimizers. This result has already been announced\footnote{ We take this opportunity to correct some imprecisions contained in~\cite{CDV17a}. First of all, in formula~(3.3) the expressions ``$eithern$'' and ``$orn$'' should be replaced by~``either~$n$'' and ``or~$n$'', respectively. More interestingly, formula~(2.2) has to be replaced by \begin{gather*} \left| \{ |u_\varepsilon|<\vartheta_2 \} \cap B_1 \right| \geq c \varepsilon \\ \mbox{and, if } s\in(1/2, 1), \, \left| \{ |u_\varepsilon|<\vartheta_2\}\cap B_1 \right| \leq C \varepsilon. \end{gather*} Note that the lower bound is true for any~$s \in (0, 1)$ and can be easily deduced from Proposition~\ref{intdensprop} by scaling. On the other hand, the condition~$s\in(1/2, 1)$ for the upper bound slipped out of the original formula~(2.2) in~\cite{CDV17a}, and this in fact generates the very interesting question of whether this upper bound may hold true also when~$s\in (0, 1/2]$. To the best of our knowledge, the only known upper bound in such generality is $$ \left| \{ |u_\varepsilon|<\vartheta_2\}\cap B_1 \right| \leq C \begin{cases} \varepsilon^{2 s} & \quad \mbox{if } s \in (0, 1/2) \\ \varepsilon |\log \varepsilon| & \quad \mbox{if } s = 1/2, \end{cases} $$ which can be obtained from an appropriate rescaling of the energy estimate of Proposition~\ref{enestprop}, namely Proposition~\ref{epsenestprop} in Section~\ref{PerPLsec}. Actually, as a result of Theorem 1.1(v) of the recent preprint~\cite{MSW16}, the better bound~$ | \{ |u_\varepsilon| < \vartheta_2 \} \cap B_1 | \leq C_\alpha \varepsilon^{\min \{ 4 s, \alpha \}}$, for every~$\alpha \in (0, 1)$, holds true when~$ s \in (0, 1/2)$, for a special class of minimizers and, more generally, solutions of an Allen-Cahn equation driven by the fractional Laplace operator of order~$2s$. This result was obtained in~\cite{MSW16} by means of extension methods which cannot be applied directly to deal with general integrodifferential kernels. } in~\cite{CDV17a} in an equivalent formulation for the minimizers~$u_\varepsilon$ of the rescaled functional~\eqref{Eepsdef}. The remaining part of this paper is organized as follows. In Section~\ref{auxsec} we gather some technical and auxiliary results that will be used throughout the following sections. Sections~\ref{densec} and~\ref{ensec} are respectively devoted to the proofs of the density estimates of Theorem~\ref{densestthm} and the energy estimate of Theorem~\ref{enestbelowthm}. An important consequence of the combination of these two results - the so-called~\emph{clean ball condition} - is contained in Proposition~\ref{cleanballprop}, at the end of Section~\ref{ensec}. In Section~\ref{PLminsec} we address Theorem~\ref{tauPLthm}. As, for a large part, the proof closely follows the one displayed in~\cite[Sections~4 and~5]{CV17}, we only sketch the general argument and focus on the key differences.
Section~\ref{Gammasec} contains the proof of the~$\Gamma$-convergence result stated in Proposition~\ref{Gammaconvprop}. In the concluding Section~\ref{PerPLsec} we deal with Theorem~\ref{PerPLthm}, by showing how it can be deduced from Theorem~\ref{epsPLthm}, with the aid of some arguments closely related to Proposition~\ref{Gammaconvprop}. \section{Some auxiliary results} \label{auxsec} In this preliminary section we collect a few ancillary results of a different nature, which will be needed in the remaining part of the paper. We begin with a couple of lemmata, containing standard pointwise estimates on the integral operator~$\mathscr{L}_K$ defined in~\eqref{LKdef}. For~$s < 1/2$, we have the following result. \begin{lemma} \label{LKlems<} Let~$n \geq 1$ and~$s \in (0, 1/2)$. Assume that~$K$ satisfies~\eqref{Kbounds}. Then, given~$x \in \mathbb{R}^n$,~$\rho > 0$ and~$\psi \in L^\infty(\mathbb{R}^n) \cap C^{0, 1}(B_\rho(x))$, it holds \begin{equation} \label{LKests<} |\mathscr{L}_K \psi(x)| \leq C \Lambda \left( \| \psi \|_{L^\infty(\mathbb{R}^n)} \rho^{- 2 s} + \| \nabla \psi \|_{L^\infty(B_\rho(x))} \rho^{1 - 2 s} \right), \end{equation} for some constant~$C > 0$ which depends on~$n$ and~$s$. \end{lemma} \begin{proof} We split the integral defining~$\mathscr{L}_K \psi(x)$ as $$ \mathscr{L}_K \psi(x) = I_1 + I_2, $$ where \begin{align*} I_1 & := \int_{\mathbb{R}^n \setminus B_\rho(x)} \left( \psi(x) - \psi(y) \right) K(x, y) \, dy, \\ I_2 & := \int_{B_\rho(x)} \left( \psi(x) - \psi(y) \right) K(x, y) \, dy. \end{align*} Applying~\eqref{Kbounds}, we first compute $$ \left| I_1 \right| \leq 2 \Lambda \| \psi \|_{L^\infty(\mathbb{R}^n)} \int_{\mathbb{R}^n \setminus B_\rho(x)} |x - y|^{- n - 2 s} \, dy = \frac{n |B_1| \Lambda \| \psi \|_{L^\infty(\mathbb{R}^n)} \rho^{- 2 s}}{s}. $$ To control~$I_2$ we use the Lipschitz continuity of~$\psi$ together with~\eqref{Kbounds} again to get $$ \left| I_2 \right| \leq \Lambda \| \nabla \psi \|_{L^\infty(B_\rho(x))} \int_{B_\rho(x)} |x - y|^{1 - n - 2 s} \, dy = \frac{n |B_1| \Lambda \| \nabla \psi \|_{L^\infty(B_\rho(x))} \rho^{1 - 2 s}}{1 - 2 s}. $$ These two estimates combined lead to~\eqref{LKests<}. \end{proof} In the general case of~$s \in (0, 1)$ - and, most significantly, when~$s \geq 1/2$ - a similar statement holds, as long as we add the regularity assumption~\eqref{Kreg} on the kernel~$K$. \begin{lemma} \label{LKlemsge} Let~$n \geq 1$ and~$s \in (0, 1)$. Assume that~$K$ satisfies~\eqref{Kbounds} and~\eqref{Kreg}. Then, given~$x \in \mathbb{R}^n$,~$\rho > 0$ and~$\psi \in L^\infty(\mathbb{R}^n) \cap C^{1, 1}(B_\rho(x))$, it holds \begin{equation} \label{LKestsge} |\mathscr{L}_K \psi(x)| \leq C \left[ \Lambda \left( \| \psi \|_{L^\infty(\mathbb{R}^n)} \rho^{- 2 s} + \| \nabla^2 \psi \|_{L^\infty(B_\rho(x))} \rho^{2 (1 - s)} \right) + \Gamma |\nabla \psi(x)| \rho^\nu \right], \end{equation} for some constant~$C > 0$ which depends on~$n$,~$s$ and~$\nu$. \end{lemma} \begin{proof} We have \begin{align*} |\mathscr{L}_K \psi(x)| & \leq \int_{\mathbb{R}^n \setminus B_\rho(x)} |\psi(x) - \psi(y)| K(x, y) \, dy \\ & \quad + \int_{B_\rho(x)} |\psi(x) - \psi(y) + \nabla \psi(x) \cdot (y - x)| K(x, y) \, dy \\ & \quad + \left| \mbox{\normalfont P.V.} \int_{B_\rho(x)} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy \right| \\ & =: I_1 + I_2 + I_3.
\end{align*} The term~$I_1$ can be estimated exactly as in Lemma~\ref{LKlems<} to deduce \begin{equation} \label{I1} I_1 \leq \frac{n |B_1| \Lambda \| \psi \|_{L^\infty(\mathbb{R}^n)} \rho^{- 2 s}}{s}. \end{equation} On the other hand, the regularity of~$\psi$ and again~\eqref{Kbounds} imply that \begin{equation} \label{I2} I_2 \leq \Lambda \| \nabla^2 \psi \|_{L^\infty(B_\rho(x))} \int_{B_\rho(x)} |x - y|^{2 - n - 2 s} \, dy = \frac{n |B_1| \Lambda \| \nabla^2 \psi \|_{L^\infty(B_\rho(x))} \rho^{2 (1 - s)}}{2 (1 - s)}. \end{equation} Finally, we claim that \begin{equation} \label{I3} I_3 \leq \frac{n |B_1| \Gamma |\nabla \psi(x)| \rho^\nu}{\nu}. \end{equation} The proof of~\eqref{I3} is a little more involved. Of course, we may assume that~$\nabla \psi(x) \ne 0$, as otherwise~\eqref{I3} follows trivially. Write $$ \mbox{\normalfont P.V.} \int_{B_\rho(x)} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy = \lim_{\varepsilon \rightarrow 0^+} \int_{B_\rho(x) \setminus B_\varepsilon(x)} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy, $$ and consider the half-annuli \begin{align*} A_\varepsilon^+ & := \left\{ y \in B_\rho(x) \setminus B_\varepsilon(x) : \nabla \psi(x) \cdot (y - x) \geq 0 \right\}, \\ A_\varepsilon^- & := \left( B_\rho(x) \setminus B_\varepsilon(x) \right) \setminus A_\varepsilon^+, \end{align*} for any small~$\varepsilon > 0$. Then, \begin{align*} \int_{B_\rho(x) \setminus B_\varepsilon(x)} & \nabla \psi(x) \cdot (y - x) K(x, y) \, dy \\ & = \int_{A_\varepsilon^+} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy + \int_{A_\varepsilon^-} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy \\ & = \int_{A_\varepsilon^+} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy + \int_{A_\varepsilon^+} \nabla \psi(x) \cdot (x - z) K(x, 2 x - z) \, dz \\ & = \int_{A_\varepsilon^+} \nabla \psi(x) \cdot (y - x) \left[ K(x, y) - K(x, 2 x - y) \right] dy, \end{align*} where we applied the change of variables~$z := 2 x - y$ to the integral over~$A_\varepsilon^-$ and we noticed that this map is a diffeomorphism of~$A_\varepsilon^-$ onto~$A_\varepsilon^+$. Consequently, by virtue of~\eqref{Kreg} we obtain \begin{align*} \left| \int_{B_\rho(x) \setminus B_\varepsilon(x)} \nabla \psi(x) \cdot (y - x) K(x, y) \, dy \right| & \leq |\nabla \psi(x)| \int_{A_\varepsilon^+} |y - x| \left| K(x, y) - K(x, 2 x - y) \right| dy \\ & \leq \Gamma |\nabla \psi(x)| \int_{B_\rho(x)} |y - x|^{- n + \nu} \, dy \\ & = \frac{n |B_1| \Gamma |\nabla \psi(x)| \rho^{\nu}}{\nu}, \end{align*} and thus~\eqref{I3} is established. Formula~\eqref{LKestsge} then follows from~\eqref{I1},~\eqref{I2} and~\eqref{I3}. \end{proof} Next, we state a measure-theoretic result that assesses the size of the boundary of~\emph{dense} sets via a cubic grid decomposition. The proposition is based on the relative isoperimetric inequality and should probably be well-known to the experts in some equivalent form. However, as we have not been able to find a satisfactory reference in the literature, we present a proof of it in full detail. \begin{proposition} \label{densestAprop} Let~$Q_r \subset \mathbb{R}^n$ be a closed cube with sides of length~$r > 0$ and let~$A$ be an open subset of~$\mathbb{R}^n$.
Suppose that there exists a constant~$c_\sharp \in (0, 1)$ such that \begin{equation} \label{densestA} \min \Big\{ \left| A \cap Q_r \right|, \left| Q_r \setminus \overline{A} \right| \Big\} \geq c_\sharp r^n. \end{equation} For any~$k \in \mathbb{N}$, let~$\mathcal{P}$ be the non-overlapping (up to negligible sets) partition of~$Q_r$ into~$k^n$ closed cubes with sides of length~$r / k$, parallel to those of~$Q_r$. Then, $$ \card \left( \Big\{ Q \in \mathcal{P} : Q \cap \partial A \ne \varnothing \Big\} \right) \geq c_\star k^{n - 1}, $$ for some constant~$c_\star \in (0, 1)$ which depends only on~$n$ and~$c_\sharp$. \end{proposition} \begin{proof} Define the following subclasses of~$\mathcal{P}$: \begin{align*} \mathcal{Y} & := \left\{ Q \in \mathcal{P} : Q \subseteq A \right\}, \\ \mathcal{R} & := \left\{ Q \in \mathcal{P} : Q \subseteq Q_r \setminus \overline{A} \right\}, \\ \mathcal{G} & := \left\{ Q \in \mathcal{P} : Q \cap \partial A \ne \varnothing \right\} = \mathcal{P} \setminus \left( \mathcal{Y} \cup \mathcal{R} \right). \end{align*} With this notation, we need to show that~$\card(\mathcal{G}) \geq c_\star k^{n - 1}$. To do this, we first divide~$\mathcal{Y}$ into connected clusters of adjacent cubes, i.e.~we write $$\mathcal{Y} = \bigcup_{j = 1}^{N_\mathcal{Y}} \mathcal{Y}_j, $$ where each~$\mathcal{Y}_j$ is made up of adjacent cubes. Analogously, we write $$ \mathcal{R} = \bigcup_{j = 1}^{N_\mathcal{R}} \mathcal{R}_j. $$ Of course,~$\card(\mathcal{Y}_j), \card(\mathcal{R}_j) \leq k^n$, for any~$j$. Moreover, we adopt the notation $$ Y_j := \bigcup_{Q \in \mathcal{Y}_j} Q, \quad Y := \bigcup_{j = 1}^{N_\mathcal{Y}} Y_j, \quad R_j := \bigcup_{Q \in \mathcal{R}_j} Q, \quad R := \bigcup_{j = 1}^{N_\mathcal{R}} R_j \quad \mbox{and} \quad G := \bigcup_{Q \in \mathcal{G}} Q, $$ for the subsets of~$Q_r$ corresponding to the families~$\mathcal{Y}_j$,~$\mathcal{Y}$,~$\mathcal{R}_j$,~$\mathcal{R}$ and~$\mathcal{G}$. In view of~\eqref{densestA}, $$ \Big( \card(\mathcal{Y}) + \card(\mathcal{G}) \Big) \left( \frac{r}{k} \right)^n = \sum_{Q \in \mathcal{Y} \cup \mathcal{G}} |Q| \geq |A \cap Q_r| \geq c_\sharp r^n. $$ Hence, $$ \mbox{either } \card(\mathcal{Y}) \geq \frac{c_\sharp}{2} k^n, \mbox{ or } \card(\mathcal{G}) \geq \frac{c_\sharp}{2} k^n. $$ With the same argument we obtain that $$ \mbox{either } \card(\mathcal{R}) \geq \frac{c_\sharp}{2} k^n, \mbox{ or } \card(\mathcal{G}) \geq \frac{c_\sharp}{2} k^n. $$ Now, if the bound for~$\card(\mathcal{G})$ is true, we are done. Thus, we assume the other two options to hold, so that \begin{equation} \label{cardYj} \sum_{j = 1}^{N_\mathcal{Y}} \card(\mathcal{Y}_j)^{(n - 1) / n} = \sum_{j = 1}^{N_\mathcal{Y}} \frac{\card(\mathcal{Y}_j)}{\card(\mathcal{Y}_j)^{1/n}} \geq \frac{1}{k} \sum_{j = 1}^{N_\mathcal{Y}} \card(\mathcal{Y}_j) = \frac{\card(\mathcal{Y})}{k} \geq \frac{c_\sharp}{2} k^{n - 1}, \end{equation} and analogously for the~$\mathcal{R}_j$'s.
But then, by the relative isoperimetric inequality in cubes (see e.g.~\cite[Corollary~5.9.13]{P12}), \begin{equation} \label{perYj} \mbox{Per} \left( Y_j, \accentset{\circ}{Q}_r \right) \geq c_1 \min \left\{ |Y_j|, |\accentset{\circ}{Q}_r \setminus Y_j| \right\}^{(n - 1) / n}, \end{equation} for any~$j = 1, \ldots, N_\mathcal{Y}$ and for some dimensional constant~$c_1 \in (0, 1)$. Similarly for the~$R_j$'s. Notice now that $$ \mbox{either } |Y_j| \leq |\accentset{\circ}{Q}_r \setminus Y_j| \mbox{ for any } j = 1, \ldots, N_\mathcal{Y}, \mbox{ or } |R_j| \leq |\accentset{\circ}{Q}_r \setminus R_j| \mbox{ for any } j = 1, \ldots, N_\mathcal{R}. $$ We assume, without loss of generality, that the above property holds for the~$Y_j$'s. By using~\eqref{cardYj} and~\eqref{perYj}, we get \begin{align*} \mbox{Per} \left( Y, \accentset{\circ}{Q}_r \right) & = \sum_{j = 1}^{N_\mathcal{Y}} \mbox{Per} \left( Y_j, \accentset{\circ}{Q}_r \right) \geq c_1 \sum_{j = 1}^{N_\mathcal{Y}} |Y_j|^{(n - 1) / n} \\ & = c_1 \left( \frac{r}{k} \right)^{n - 1} \sum_{j = 1}^{N_\mathcal{Y}} \card(\mathcal{Y}_j)^{(n - 1) / n} \geq c_2 r^{n - 1}, \end{align*} for some~$c_2 \in (0, 1)$ depending only on~$n$ and~$c_\sharp$. But then, the~$Y_j$'s can only border on the set~$G$. Hence,~$\mbox{Per}(G, \accentset{\circ}{Q}_r) \geq c_2 r^{n - 1}$ and thus $$ 2 n \left( \frac{r}{k} \right)^{n - 1} \card(\mathcal{G}) = \sum_{Q \in \mathcal{G}} \mbox{Per}(Q) \geq \mbox{Per}(G, \accentset{\circ}{Q}_r) \geq c_2 r^{n - 1}, $$ from which the conclusion follows. \end{proof} The above result may be further sharpened as follows. In this enhanced form, the estimate will be used later in Section~\ref{ensec}. \begin{corollary} \label{densestAcor} Let~$Q_r \subset \mathbb{R}^n$ be a closed cube with sides of length~$r > 0$ and let~$A$ be an open subset of~$\mathbb{R}^n$ for which~\eqref{densestA} holds true, for some~$c_\sharp \in (0, 1)$. Then, for any~$k \in \mathbb{N}$, there exists a collection~$\mathcal{Q} = \{ Q^{(j)} \}_{j = 1}^N$ of non-overlapping closed cubes with sides of length~$r / k$, parallel to those of~$Q_r$, each~$Q^{(j)}$ contained in~$\accentset{\circ}{Q}_{2 r}$ and centered at some point $$ x_j \in Q_r \cap \partial A. $$ The cardinality~$N$ of the family can be chosen to satisfy $$ N \geq c_{\star \star} k^{n - 1}, $$ for some constant~$c_{\star \star} \in (0, 1)$ depending only on~$n$ and~$c_\sharp$. \end{corollary} \begin{proof} Given~$k \in \mathbb{N}$, divide~$Q_r$ into the non-overlapping partition~$\mathcal{P}$ described in Proposition~\ref{densestAprop}. By virtue of Proposition~\ref{densestAprop} itself, we already know that the number of cubes in this partition that have non-trivial intersection with~$\partial A$ is at least~$c_\star k^{n - 1}$, for some constant~$c_\star \in (0, 1)$ which depends on~$n$ and~$c_\sharp$. Denote these cubes by~$\widetilde{Q}^{(j)}$, for~$j = 1, \ldots, \tilde{N}$, with~$\tilde{N} \geq c_\star k^{n - 1}$, and let~$x_j \in \widetilde{Q}^{(j)} \cap \partial A$ be the intersection points. We now translate each~$\widetilde{Q}^{(j)}$ to obtain a new cube~$Q^{(j)}$ centered at~$x_j$. Notice that, by reducing the total number of the cubes~$Q^{(j)}$ to~$N := \tilde{N} / 3^n$, we may assume the new family to be non-overlapping. This concludes the proof.
\end{proof} We conclude the section with the following simple lemma on the~$L^1$ convergence of the superlevel sets of a particular class of pointwise converging sequences of functions. \begin{lemma} \label{diffsimmlimlem} Let~$\Omega$ be an open bounded subset of~$\mathbb{R}^n$. Let~$\{ u_j \}$ be a sequence of measurable functions~$u_j: \Omega \to \mathbb{R}$ converging a.e.~in~$\Omega$ to~$u = \chi_E - \chi_{\Omega \setminus E}$, for some measurable set~$E \subseteq \Omega$. Then, given any~$\eta \in (-1, 1)$, $$ \lim_{j \rightarrow +\infty} \leqslantslantft| E \Delta \{ u_j > \eta \} \right| = 0. $$ \end{lemma} \begin{proof} First, note that \begin{equation} \label{chitochi} \chi_{ \{ u_j > \eta \} } \longrightarrow \chi_E \quad \mbox{a.e.~in } \Omega. \end{equation} Indeed, for a.e.~$x \in E$, we have~$u_j(x) \rightarrow u(x) = \chi_E(x) - \chi_{\Omega \setminus E}(x) = 1$. Hence,~$u_j(x) > \eta$ for any large enough~$j$, i.e.~$\chi_{ \{ u_j > \eta \} }(x) = 1 = \chi_E(x)$. Analogously, one checks that, for a.a.~$x \in \Omega \setminus E$, it holds~$\chi_{ \{ u_j > \eta \} } = 0 = \chi_E(x)$, for~$j$ sufficiently large. Then,~\eqref{chitochi} holds true. To conclude the proof of the lemma, we simply apply formula~\eqref{chitochi} in combination with Lebesgue's dominated convergence theorem. We get \begin{equation*} \lim_{j \rightarrow +\infty} \leqslantslantft| E \Delta \{ u_j > \eta \} \right| = \lim_{j \rightarrow +\infty} \int_\Omega \leqslantslantft| \chi_E(x) - \chi_{ \{ u_j > \eta \}}(x) \right| \, dx = 0.\qedhere \end{equation*} \end{proof} \section{Density estimates. Proof of Theorem~\ref{densestthm}} \label{densec} In this section we establish the density estimates for the minimizers of~$\mathscr{E}$, thus proving Theorem~\ref{densestthm}. The argument follows the lines of that developed in~\cite{SV11,SV14}. Here we only sketch the general strategy and outline the main differences in our version. As a first step, we construct the following barrier, that was also considered in~\cite{CP16}. \begin{lemma} \label{barrierlem} Let~$n \geqslantslant 1$,~$s \in (0, 1)$ and assume that~$K$ satisfies~\eqref{Ksymmetry} and~\eqref{Kbounds}. Given any~$\delta > 0$ there exists a constant~$C \geqslantslant 1$, depending on~$\delta$ and on universal quantities, such that for any~$C \leqslantslant R$ we can construct a symmetric radially non-decreasing function $$ w \in C^{1, 1}\leqslantslantft( \mathbb{R}^n, \leqslantslantft[ -1 + C^{-1} R^{- 2 s}, 1 \right] \right), $$ with $$ w = 1 \quad \mbox{in } \mathbb{R}^n \setminus B_R, $$ which satisfies \begin{equation} \label{LKwbar} \leqslantslantft| \mathscr{L}_K w(x) \right| \leqslantslant \delta \leqslantslantft( 1 + w(x) \right), \end{equation} and \begin{equation} \label{1+wbarest} \frac{1}{C} \leqslantslantft( R + 1 - |x| \right)^{- 2 s} \leqslantslant 1 + w(x) \leqslantslant C \leqslantslantft( R + 1 - |x| \right)^{- 2 s}, \end{equation} for any~$x \in B_R$. \end{lemma} The proof of Lemma~\ref{barrierlem} is an adaptation of that of~\cite[Lemma~3.1]{SV14} for the fractional Laplacian. In our case, the computations involved are slightly more delicate, due to the more general form of the interaction kernel. \begin{proof}[Proof of Lemma~\ref{barrierlem}] Fix a value \begin{equation} \label{r0def} r_1 \geqslantslant 2^{3/s}, \end{equation} and let~$r \geqslantslant r_1$. 
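Although elementary, it is worth recording explicitly the role of this lower bound on~$r_1$: since~$s \in (0, 1)$,
$$
12 \, r_1^{-2 s} \leqslant 12 \cdot \left( 2^{3/s} \right)^{-2 s} = \frac{12}{64} \leqslant \frac{1}{2},
$$
and this simple numerical inequality is precisely what will be used right below to bound the normalizing constant~$\gamma_r$.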
Then, set~$\ell(t) := (r - t)^{- 2 s}$, for any~$0 \leqslantslant t < r$, and define $$ \gamma_r := \leqslantslantft[ \ell(r - 1) - \ell(r / 2) - \ell'(r / 2) \leqslantslantft( r/2 - 1 \right) \right]^{-1}. $$ Note that \begin{align*} \ell(r - 1) - \ell(r/2) - \ell'(r/2) (r/2 - 1) & = 1 - 2^{2 s} \leqslantslantft( 1 + 2 s - 4 s r^{- 1} \right) r^{- 2 s} \\ & \geqslantslant 1 - 12 {r_1}^{- 2 s} \\ & \geqslantslant 1 / 2, \end{align*} for any~$r \geqslantslant r_1$. Thus,~$\gamma_r$ is well-defined and \begin{equation} \label{alpharbounds} 1 < \gamma_r \leqslantslant 2. \end{equation} Consider the function~$h: [0, +\infty) \to [0, 1]$ defined by $$ h(t) := \begin{cases} 0 & \mbox{if } t \in [0, r/2) \\ \gamma_r \leqslantslantft( \ell(t) - \ell(r/2) - \ell'(r/2) (t - r/2) \right) & \mbox{if } t \in [r/2, r - 1) \\ 1 & \mbox{if } t \geqslantslant r - 1. \end{cases} $$ We have $$ h(r/2) = 0, \quad h'(r/2) = 0 \quad \mbox{and} \quad h(r - 1) = 1, $$ so that~$h \in C^{0, 1}([0, +\infty)) \cap C^{1, 1}([0, r - 1])$. Furthermore, recalling~\eqref{alpharbounds}, for~$t \in (r/2, r - 1)$ we have \begin{equation} \label{h'h''} \begin{aligned} |h'(t)| & = \gamma_r |\ell'(t) - \ell'(r / 2)| = 2 s \gamma_r \leqslantslantft[ (r - t)^{- 2 s - 1} - (r / 2)^{- 2 s - 1} \right] \leqslantslant 4 (r - t)^{- 2 s - 1} \\ |h''(t)| & = \gamma_r |\ell''(t)| = 2 s (2s + 1) \gamma_r (r - t)^{- 2 s - 2} \leqslantslant 12 (r - t)^{- 2 s - 2}. \end{aligned} \end{equation} As~$h$ is constant outside of~$(r/2, r - 1)$, the above estimates also holds for a.a.~$t \geqslantslant 0$. We want to modify~$h$ between~$r - 2$ and~$r - 1$ in order to obtain a new function~$g$ of class~$C^{1, 1}$ on the whole half-line. To do this, let~$\eta \in C^\infty([0, + \infty))$ be a cut-off function with~$0 \leqslantslant \eta \leqslantslant 1$,~$\eta = 1$ in~$[0, r - 7 / 4]$,~$\eta = 0$ in~$[r - 5 / 4, +\infty)$,~$- 4 \leqslantslant \eta' \leqslantslant 0$ and~$|\eta''| \leqslantslant 32$. We then set $$ g(t) := \eta(t) h(t) + 1 - \eta(t) \quad \mbox{for any } t \geqslantslant 0. $$ Of course,~$g \in C^{1, 1}([0, +\infty))$,~$0 \leqslantslant g \leqslantslant 1$ and~$g$ coincides with~$h$ outside of~$(r - 2, r - 1)$. On the other hand, by~\eqref{h'h''}, for~$t \in (r - 2, r - 1)$ we compute \begin{align*} |g'(t)| & \leqslantslant |h'(t)| \chi_{[r - 2, r - 5/4]}(t) + 4 (1 - h(t)) \\ & \leqslantslant 4 (r - t)^{- 2 s - 1} \chi_{[r - 2, r - 5/4]}(t) + 4 \\ & \leqslantslant 40 \min \leqslantslantft\{ 1, (r - t)^{- 2 s - 1} \right\}, \end{align*} and \begin{align*} |g''(t)| & \leqslantslant |h''(t)| \chi_{[r - 2, r - 5/4]}(t) + 8 |h'(t)| \chi_{[r - 7/4, r - 5/4]}(t) + 32 (1 - h(t)) \\ & \leqslantslant 12 (r - t)^{- 2 s - 2} \chi_{[r - 2, r - 5/4]}(t) + 32 (r - t)^{- 2 s -1} \chi_{[r - 7/4, r - 5/4]}(t) + 32 \\ & \leqslantslant 600 \min \{ 1, (r - t)^{- 2 s - 2} \}. \end{align*} By combining these last two estimates with~\eqref{h'h''} (recall that~$g = h$ outside of~$(r - 2, r - 1)$), we conclude that there exists a numerical constant~$c_1 > 0$ such that \begin{equation} \label{g'g''} |g'(t)| \leqslantslant c_1 \min\{ (r - t)^{- 2 s - 1}, 1 \} \quad \mbox{and} \quad |g''(t)| \leqslantslant c_1 \min \{ (r - t)^{- 2 s - 2}, 1 \}, \end{equation} for a.a.~$t \in [0, r]$. Moreover, we claim that \begin{equation} \label{gbounds} \min \leqslantslantft\{ (r - t)^{- 2 s}, 1 \right\} \leqslantslant g(t) + 16 r^{- 2 s} \leqslantslant 20 \min \leqslantslantft\{ (r - t)^{- 2 s}, 1 \right\} \quad \mbox{for any } t \in [0, r]. 
\end{equation} Since the right-hand inequality of~\eqref{gbounds} follows almost directly from the definition of~$g$, we focus on the left-hand inequality. The bound is clearly valid when~$t \geqslantslant r - 1$, as~$g = 1$ there. Also, when~$t \leqslantslant r/2$, it holds~$g = 0$ and~$(r - t)^{- 2 s} \leqslantslant 4 r^{- 2 s}$. Finally, when~$t \in (r/2, r - 1)$, using~\eqref{alpharbounds} we have \begin{align*} g(t) & \geqslantslant h(t) \geqslantslant \ell(t) - \ell(r/2) - \ell'(r/2) \leqslantslantft( t - r/2 \right) \\ & = (r - t)^{- 2 s} - (r/2)^{- 2 s} - 2 s (r/2)^{-2 s - 1} \leqslantslantft( t - r/2 \right) \\ & \geqslantslant (r - t)^{- 2 s} - 2^{2 s} \leqslantslantft( 1 + 2 s \right) r^{- 2 s} \\ & \geqslantslant (r - t)^{- 2 s} - 16 r^{- 2 s}. \end{align*} In any case,~\eqref{gbounds} is established. Let now~$v(x) := g(|x|)$, for any~$x \in \mathbb{R}^n$. By the properties of~$g$, we recover that~$v \in C^{1, 1}(\mathbb{R}^n)$ is radially symmetric, radially non-decreasing and satisfies~$v = 0$ in~$B_{r/2}$,~$v = 1$ in~$\mathbb{R}^n \setminus B_r$. Moreover, we infer from~\eqref{gbounds} that, for~$x \in B_r$, it holds \begin{equation} \label{vbounds} \min \leqslantslantft\{ (r - |x|)^{- 2 s}, 1 \right\} \leqslantslant v(x) + 16 r^{- 2 s} \leqslantslant 20 \min \leqslantslantft\{ (r - |x|)^{- 2 s}, 1\right\}. \end{equation} We claim that for any~$x \in B_r$ \begin{equation} \label{supnablav1} \| \nabla v \|_{L^\infty(B_{\max \leqslantslantft\{ (r - |x|) / 2, 1 \right\}}(x))} \leqslantslant c_2 \max \leqslantslantft\{ \frac{r - |x|}{2}, 1 \right\}^{- 2 s - 1}, \end{equation} and \begin{equation} \label{supnablav2} \| \nabla^2 v \|_{L^\infty (B_{\max \leqslantslantft\{ (r - |x|) / 2, 1 \right\}}(x))} \leqslantslant c_2 \max \leqslantslantft\{ \frac{r - |x|}{2}, 1 \right\}^{- 2 s - 2}, \end{equation} for some dimensional constant~$c_2 > 0$. Estimate~\eqref{supnablav1} is almost immediate. Indeed, recalling~\eqref{g'g''}, for any~$y \in B_r$ we get \begin{equation} \label{nablavy} |\nabla v(y)| = |g'(|y|)| \leqslantslant c_1 \min \{ (r - |y|)^{- 2 s - 1}, 1 \}. \end{equation} Fix now~$x \in B_r$ and take any~$y \in B_{(r - |x|) / 2}(x)$. Clearly,~$y \in B_r$. Also, $$ |y| \leqslantslant |y - x| + |x| \leqslantslant \frac{r - |x|}{2} + |x| = \frac{r + |x|}{2}, $$ and thus $$ r - |y| \geqslantslant r - \frac{r + |x|}{2} = \frac{r - |x|}{2}. $$ By this and~\eqref{nablavy}, it follows that \begin{equation} \label{nablavy2} \leqslantslantft| \nabla v(y) \right| \leqslantslant c_1 \min \leqslantslantft\{ \leqslantslantft( \frac{r - |x|}{2} \right)^{- 2 s - 1}, 1 \right\}, \end{equation} for any~$y \in B_{(r - |x|) / 2}(x)$. Finally, when~$(r - |x|) / 2 \leqslantslant 1$ we use again~\eqref{nablavy} to deduce that~\eqref{nablavy2} holds also for~$y \in B_1(x) \cap B_r$.
The proof of~\eqref{supnablav2} follows similarly, by noticing that, for~$y \in B_r \setminus \overline{B_{r/2}}$, by~\eqref{g'g''} it holds \begin{align*} \leqslantslantft| \nabla^2 v(y) \right| & \leqslantslant |g''(|y|)| + \sqrt{2 (n + 1)} \, \frac{|g'(|y|)|}{|y|} \\ & \leqslantslant 2 n c_1 \leqslantslantft( \min \leqslantslantft\{ \leqslantslantft( \frac{r - |y|}{2} \right)^{- 2 s - 2}, 1 \right\} + |y|^{-1} \min \leqslantslantft\{ \leqslantslantft( \frac{r - |y|}{2} \right)^{- 2 s - 1}, 1 \right\} \right) \\ & \leqslantslant 50 n c_1 \min \{ (r - |y|)^{- 2 s - 2}, 1 \}, \end{align*} where to obtain the last inequality we took advantage of the fact that~$|y| \geqslantslant r / 2 \geqslantslant r - |y|$ and~$r \geqslantslant r_1 > 2$, by~\eqref{r0def}. With this in hand, we can deduce an estimate for~$L_{K_\sigma} v$ in~$B_r$, where~$K_\sigma$ is the scaled kernel defined by $$ K_\sigma(x, y) := K(\sigma x, \sigma y), \quad \mbox{for a.a.~} x, y \in \mathbb{R}^n, $$ with~$\sigma \geqslantslant 1$. Observe that~$K_\sigma$ satisfies~\eqref{Ksymmetry},~\eqref{Kbounds} and~\eqref{Kreg} with~$\lambda_\sigma := \sigma^{- n - 2 s} \lambda$,~$\mathscr{L}ambda_\sigma := \sigma^{- n - 2 s} \mathscr{L}ambda$ and~$\mathscr{G}amma_\sigma := \sigma^{- n - 1 + \nu} \mathscr{G}amma$. We apply either Lemma~\ref{LKlems<}, if~$s < 1/2$, or Lemma~\ref{LKlemsge}, if~$s \geqslantslant 1/2$, with~$\rho = \max \leqslantslantft\{ (r - |x|) / 2, 1 \right\}$. In view of~\eqref{supnablav1},~\eqref{supnablav2} and~\eqref{vbounds}, it is easy to see that, for any~$x \in B_r$, \begin{equation} \label{LKv} \leqslantslantft| \mathscr{L}_{K_\sigma} v(x) \right| \leqslantslant c_3 \sigma^{- n - 1 + \bar{\nu}} \leqslantslantft( v(x) + 16 r^{-2 s} \right), \end{equation} for some constant~$c_3 > 0$, which may depend on~$n$,~$s$,~$\mathscr{L}ambda$ and also~$\mathscr{G}amma$, if~$s \geqslantslant 1/2$, and where we set $$ \bar{\nu} := \begin{cases} 1 - 2 s & \quad \mbox{if } s \in (0, 1/2) \\ \nu & \quad \mbox{if } s \in [1/2, 1). \end{cases} $$ Without loss of generality, we may take \begin{equation} \label{c3>delta} c_3 \geqslantslant \delta. \end{equation} We are now able to construct the function~$w$. We take~$R \geqslantslant R_0$, with \begin{equation} \label{R0def} R_0 := \leqslantslantft( \frac{c_3}{\delta} \right)^{\frac{1}{1 - \bar{\nu}}} r_1, \end{equation} and set \begin{equation} \label{rdef} r := \frac{r_1}{R_0} R. \end{equation} Notice that~$r \geqslantslant r_1$. We then define $$ w(x) := (2 - \beta) v\leqslantslantft( \frac{r}{R} x \right) + \beta - 1, $$ where~$\beta := 32 r^{- 2 s}$. Clearly,~$\beta \in (0, 1)$, since~$r \geqslantslant r_1$ and~\eqref{r0def} is in force. The function~$w$ thus obtained inherits all the qualitative properties of~$v$. That is,~$w$ is of class~$C^{1, 1}(\mathbb{R}^n)$, is radially symmetric and radially non-decreasing. Moreover,~$w = \beta - 1$ in~$B_{R / 2}$ and~$w = 1$ in~$\mathbb{R}^n \setminus B_R$. Now we check that~$w$ satisfies properties~\eqref{LKwbar} and~\eqref{1+wbarest}. 
By changing variables appropriately, applying~\eqref{LKv} with~$\sigma := R / r \geqslantslant 1$, which holds thanks to~\eqref{c3>delta}, and recalling definitions~\eqref{R0def}-\eqref{rdef}, we get \begin{align*} \leqslantslantft| \mathscr{L}_K w(x) \right| & = (2 - \beta) \leqslantslantft( \frac{R}{r} \right)^n \leqslantslantft| \mathscr{L}_{K_{R/r}} v \leqslantslantft( \frac{r}{R} x \right) \right| \\ & \leqslantslant c_3 (2 - \beta) \leqslantslantft( \frac{R}{r} \right)^{- 1 + \bar{\nu}} \leqslantslantft( v \leqslantslantft( \frac{r}{R} x \right) + 16 r^{- 2 s} \right) \\ & \leqslantslant c_3 \leqslantslantft( \frac{R}{r} \right)^{- 1 + \bar{\nu}} \leqslantslantft( (2 - \beta) v \leqslantslantft( \frac{r}{R} x \right) + 32 r^{- 2 s} \right) \\ & = \delta \leqslantslantft( 1 + w(x) \right), \end{align*} for any~$x \in B_R$. Thus,~\eqref{LKwbar} is established. The validity of the inequalities in~\eqref{1+wbarest} basically relies on~\eqref{vbounds}. Namely, being~$\beta$ positive and taking advantage of the upper estimate in~\eqref{vbounds}, along with~\eqref{R0def} and~\eqref{rdef}, we have, for any~$x \in B_R$, \begin{align*} 1 + w(x) & \leqslantslant 2 \leqslantslantft[ v\leqslantslantft( \frac{r}{R} x \right) + 16 r^{- 2 s} \right] \\ & \leqslantslant 40 \min \leqslantslantft\{ \leqslantslantft( \frac{c_3}{\delta} \right)^{\frac{2 s}{1 - \bar{\nu}}} \leqslantslantft( R - |x| \right)^{- 2 s}, 1 \right\} \\ & \leqslantslant c_4 \leqslantslantft( R + 1 - |x| \right)^{- 2 s}, \end{align*} for some constant~$c_4 > 0$ which depends on~$n, s, \mathscr{L}ambda, \mathscr{G}amma, \nu$ and~$\delta$. The left-hand inequality of~\eqref{1+wbarest} follows similarly. Indeed, we first note that~$2 - \beta \geqslantslant 1$, since~$\beta \leqslantslant 1$. Hence, by this,~\eqref{vbounds},~\eqref{R0def} and~\eqref{rdef}, we get \begin{align*} 1 + w(x) \geqslantslant v\leqslantslantft( \frac{r}{R} x \right) + 16 r^{- 2 s} \geqslantslant \min \leqslantslantft\{ \leqslantslantft( \frac{c_3}{\delta} \right)^{\frac{2 s}{1 - \bar{\nu}}} \leqslantslantft( R - |x| \right)^{- 2 s}, 1 \right\} \geqslantslant c_5 \leqslantslantft( R + 1 - |x| \right)^{- 2 s}, \end{align*} for some~$c_5 > 0$ which depends on the same parameters as~$c_4$. The proof of the lemma is therefore complete, by eventually taking \begin{equation*} C := \max \leqslantslantft\{ R_0, \frac{1}{32} \leqslantslantft( \frac{r_1}{R_0} \right)^{2 s}, c_4, \frac{1}{c_5} \right\}. \qedhere \end{equation*} \end{proof} We now proceed to the \begin{proof}[Proof of Theorem~\ref{densestthm}] Of course, it is enough to prove the result for~$x_0 = 0$. Also, we can restrict ourselves to show the validity of~\eqref{densest1} only, as the dual estimate~\eqref{densest2} can be then recovered in a completely analogous fashion or by just exchanging~$u$ with~$-u$. Note that, thanks to a simple argument involving the energy estimate~\eqref{enest}, we can assume without loss of generality~$\theta \leqslantslant \theta_0$. Furthermore, we are free to prove the existence of suitable constants~$\bar{c}$ and~$\bar{R}$ (as in the statement of the theorem) that \emph{both} depend on~$\theta$,~$\theta_0$, along with universal quantities. Then, employing again the energy estimates, we are able to prove that~$\bar{c}$ actually depends on universal quantities only. We refer to~\cite[Subsection~3.1]{SV14} for more details on this. 
By the H\"{o}lder continuity of~$u$ (see e.g.~\cite[Section~2]{CV17}), we infer from~$u(0) \geqslantslant \theta_0$, which holds by hypothesis, that there exist two constants~$R_o \in (0, R)$ and~$\mu > 0$ such that \begin{equation} \label{u>thetamu} \leqslantslantft| \leqslantslantft\{ u > \theta \right\} \cap B_{R_o} \right| \geqslantslant \mu. \end{equation} Moreover, after a scaling argument, we are allowed to suppose~$\mu \geqslantslant e^n$. See again~\cite[Subsection~3.1]{SV14}. Fix~$H > 2 (R_o + 1)$ and take~$R > 2 H$. Let~$w$ be the~$C^{1, 1}$ function constructed in Lemma~\ref{barrierlem}. Recall that \begin{equation} \label{w=1prop} w = 1 \quad \mbox{in } \mathbb{R}^n \setminus B_R. \end{equation} Define \begin{equation} \label{vuw} v := \min \leqslantslantft\{ u, w \right\}, \end{equation} and observe that, by~\eqref{w=1prop}, \begin{equation} \label{v=uout} v = u \quad \mbox{in } \mathbb{R}^n \setminus B_R. \end{equation} Writing for simplicity $$ \mathscr{K}(u; B_R) := \frac{1}{2} \iint_{\mathscr{C}_{B_R}} |u(x) - u(y)|^2 K(x, y) \, dx dy, $$ where~$\mathscr{C}_{B_R}$ is as in~\eqref{COmegadef}, in view of~\eqref{vuw},~\eqref{v=uout} and~\eqref{Ksymmetry} we have \begin{align*} & \mathscr{K}(u - v; B_R) + \mathscr{K}(v; B_R) - \mathscr{K}(u; B_R) \\ & \hspace{50pt} = - \iint_{\mathscr{C}_{B_R}} \leqslantslantft( (u - v)(x) - (u - v)(y) \right) \leqslantslantft( v(x) - v(y) \right) K(x, y) \, dx dy \\ & \hspace{50pt} = - \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \leqslantslantft( (u - v)(x) - (u - v)(y) \right) \leqslantslantft( v(x) - v(y) \right) K(x, y) \, dx dy \\ & \hspace{50pt} = - 2 \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \leqslantslantft( u(x) - v(x) \right) \leqslantslantft( v(x) - v(y) \right) K(x, y) \, dx dy \\ & \hspace{50pt} = - 2 \int_{B_R \cap \{ w = v < u \}} \leqslantslantft( u(x) - w(x) \right) \leqslantslantft( \mbox{\normalfont P.V.} \int_{\mathbb{R}^n} \leqslantslantft( w(x) - w(y) \right) K(x, y) \, dy \right) dx \\ & \hspace{50pt} \leqslantslant 2 \int_{B_R \cap \{ w < u \}} \leqslantslantft( u(x) - w(x) \right) \leqslantslantft| \mathscr{L}_K w(x) \right| dx. \end{align*} The proof then continues as in~\cite{SV14}. We take advantage of the minimality of~$u$, together with hypotheses~\eqref{Wbound},~\eqref{W''} on~$W$ and estimates~\eqref{LKwbar},~\eqref{1+wbarest} provided by Lemma~\ref{barrierlem} , to get \begin{align*} & \mathscr{K}(u - v; B_R) + \frac{\kappa}{2} \int_{B_R \cap \{ w < u \leqslantslant \theta_\star \}} \leqslantslantft( 1 + w \right) \leqslantslantft( u - w \right) \, dx \\ & \hspace{90pt} \leqslantslant c \int_{B_R \cap \{ u > \theta_\star \}} \leqslantslantft( R + 1 - |x| \right)^{- 2 s} \, dx - A(R), \end{align*} for some~$c > 0$, where~$\theta_\star$ is any fixed parameter that satisfies \begin{equation} \label{thetastardef} -1 < \theta_\star \leqslantslant \min \leqslantslantft\{ \theta, -1 + \kappa \right\}, \end{equation} and $$ A(R) := \kappa \int_{B_R \cap \{ w < u \leqslantslant \theta_\star \}} (u - w)^2 \, dx. 
$$ Setting $$ V(R) := \leqslantslantft| B_R \cap \leqslantslantft\{ u > \theta_\star \right\} \right|, $$ we use the coarea formula along with~\eqref{Kbounds} to obtain (recall that~$\xi \geqslantslant 3 R$) \begin{equation} \label{densestthmtech1} \begin{aligned} & \frac{\lambda}{2} \int_{B_R} \int_{B_{2R}} \frac{|(u - v)(x) - (u - v)(y)|^2}{|x - y|^{n + 2 s}} \, dx dy + \frac{\kappa}{2} \int_{B_R \cap \{ w < u \leqslantslant \theta_\star \}} \leqslantslantft( 1 + w \right) \leqslantslantft( u - w \right) \, dx \\ & \hspace{80pt} \leqslantslant \mathscr{K}(u - v; B_R) + \frac{\kappa}{2} \int_{B_R \cap \{ w < u \leqslantslant \theta_\star \}} \leqslantslantft( 1 + w \right) \leqslantslantft( u - w \right) \, dx \\ & \hspace{80pt} \leqslantslant c \int_0^R \leqslantslantft( R + 1 - t \right)^{- 2 s} \leqslantslantft( \int_{\partial B_t} \chi_{\{ u > \theta_\star \}}(x) \, d\mathcal{H}^{n - 1}(x) \right) dt - A(R) \\ & \hspace{80pt} = c \int_0^R \leqslantslantft( R + 1 - t \right)^{- 2 s} V'(t) \, dt - A(R). \end{aligned} \end{equation} Recalling the definition~\eqref{Psidef} of~$\mathscr{P}si_s$, we claim that \begin{equation} \label{VleV'} V(R - H)^{\frac{n - 1}{n}} \mathscr{P}si_s \leqslantslantft( \sqrt[n]{V(R - H)} \right) \leqslantslant \widetilde{C} \int_0^R (R + 1 - t)^{-2s} V'(t) \, dt, \end{equation} for some~$\widetilde{C} \geqslantslant 1$, provided~$H$ is large enough. To prove~\eqref{VleV'}, we consider separately the two cases~$s < 1/2$ and~$s \geqslantslant 1/2$. In the first situation we argue as in~\cite{SV11}. Recalling that~$u - v = 0$ outside of~$B_R$ and using the fractional Poincar\'e inequality, we compute \begin{align*} \int_{B_R} \int_{\mathbb{R}^n \setminus B_{2 R}} \frac{|(u - v)(x) - (u - v)(y)|^2}{|x - y|^{n + 2 s}} \, dx dy & = \int_{B_R} |(u - v)(x)|^2 \leqslantslantft( \int_{\mathbb{R}^n \setminus B_{2 R}} \frac{dy}{|x - y|^{n + 2 s}} \right) dx \\ & \leqslantslant \frac{c}{R^{2 s}} \int_{B_{2 R}} |(u - v)(x)|^2 \, dx \\ & \leqslantslant c \int_{B_R} \int_{B_{2 R}} \frac{|(u - v)(x) - (u - v)(y)|^2}{|x - y|^{n + 2 s}} \, dx dy, \end{align*} from which it follows immediately that $$ \int_{B_R} \int_{B_{2R}} \frac{|(u - v)(x) - (u - v)(y)|^2}{|x - y|^{n + 2 s}} \, dx dy \geqslantslant c \int_{\mathbb{R}^n} \int_{\mathbb{R}^n} \frac{|(u - v)(x) - (u - v)(y)|^2}{|x - y|^{n + 2 s}} \, dx dy. $$ By means of this and the fractional Sobolev inequality, we get \begin{equation} \label{densestthmtech2} \int_{B_R} \int_{B_{2R}} \frac{|(u - v)(x) - (u - v)(y)|^2}{|x - y|^{n + 2 s}} \, dx dy \geqslantslant c \| u - v \|_{L^{\frac{2 n}{n - 2 s}}(\mathbb{R}^n)}^2 = c \| u - v \|_{L^{\frac{2 n}{n - 2 s}}(B_R)}^2. \end{equation} Notice that, by taking~$H$ large enough (in dependence on~$\theta_\star$), by~\eqref{1+wbarest} we have $$ w \leqslantslant - 1 + \frac{1 + \theta_\star}{2} \quad \mbox{in } B_{R - H}. $$ Hence, \begin{equation} \label{u-vge} |u - v| \geqslantslant u - v \geqslantslant u - w \geqslantslant \frac{1 + \theta_\star}{2} \quad \mbox{in } \{ u > \theta_\star \} \cap B_{R - H}, \end{equation} so that \begin{align*} \| u - v \|_{L^{\frac{2 n}{n - 2 s}}(B_R)}^2 & \geqslantslant \leqslantslantft( \int_{\{ u > \theta_\star \} \cap B_{R - H}} |(u - v)(x)|^{\frac{2 n}{n - 2 s}} \, dx \right)^{\frac{n - 2 s}{n}} \\ & \geqslantslant \leqslantslantft( \frac{1 + \theta_\star}{2} \right)^2 \leqslantslantft| \{ u > \theta_\star \} \cap B_{R - H} \right|^{\frac{n - 2 s}{n}} \\ & = c \, V(R - H)^{\frac{n - 2 s}{n}}.
\end{align*} This,~\eqref{densestthmtech1} and~\eqref{densestthmtech2} imply claim~\eqref{VleV'} for~$s < 1/2$. On the other hand, when~$s \geqslantslant 1/2$ we follow the strategy displayed in~\cite{SV14}. Define \begin{align*} a_R & := \leqslantslantft\{ u - w \geqslantslant \frac{1 + \theta_\star}{4} \right\} \cap B_R,\\ b_R & := \leqslantslantft\{ \frac{1 + \theta_\star}{8} < u - w < \frac{1 + \theta_\star}{4} \right\} \cap B_R, \\ d_R & := \leqslantslantft( \mathbb{R}^n \setminus B_R \right) \cup \leqslantslantft( \leqslantslantft\{ u - w \leqslantslant \frac{1 + \theta_\star}{8} \right\} \cap B_R \right). \end{align*} Note that, by~\eqref{u-vge}, $$ a_R \supseteq \leqslantslantft\{ u > \theta_\star \right\} \cap B_{R - H} \supseteq \leqslantslantft\{ u > \theta_\star \right\} \cap B_{R_o}. $$ Hence, by~\eqref{u>thetamu} and~\eqref{thetastardef} we deduce that \begin{equation} \label{aRgemu} |a_R| \geqslantslant \mu, \end{equation} so that we may apply the geometric formula~(3.46) in~\cite{SV14} to obtain that $$ \int_{a_R} \int_{d_R} \frac{dx dy}{|x - y|^{n + 2 s}} + |b_R| \geqslantslant 2 c_1 \, |a_R|^{\frac{n - 1}{n}} \mathscr{P}si_s \leqslantslantft( \sqrt[n]{|a_R|} \right), $$ for some constant~$c_1 > 0$. We then observe that $$ \int_{a_R} \int_{\mathbb{R}^n \setminus B_{2 R}} \frac{dx dy}{|x - y|^{n + 2 s}} \leqslantslant \frac{c_2}{R^{2 s}} |a_R|, $$ for~$c_2 > 0$, and thus \begin{align*} \int_{a_R} \int_{d_R \cap B_{2 R}} \frac{dx dy}{|x - y|^{n + 2 s}} + |b_R| & = \int_{a_R} \int_{d_R} \frac{dx dy}{|x - y|^{n + 2 s}} + |b_R| - \int_{a_R} \int_{\mathbb{R}^n \setminus B_{2 R}} \frac{dx dy}{|x - y|^{n + 2 s}} \\ & \geqslantslant 2 c_1 \, |a_R|^{\frac{n - 1}{n}} \mathscr{P}si_s \leqslantslantft( \sqrt[n]{|a_R|} \right) - \frac{c_2}{R^{2 s}} |a_R| \\ & = |a_R|^{\frac{n - 1}{n}} \mathscr{P}si_s \leqslantslantft( \sqrt[n]{|a_R|} \right) \leqslantslantft[ 2 c_1 - \frac{c_2}{R^{2 s}} \frac{\sqrt[n]{|a_R|}}{\mathscr{P}si_s \leqslantslantft( \sqrt[n]{|a_R|} \right)} \right] \\ & \geqslantslant |a_R|^{\frac{n - 1}{n}} \mathscr{P}si_s \leqslantslantft( \sqrt[n]{|a_R|} \right) \leqslantslantft[ 2 c_1 - \frac{c_3}{R^{2 s - 1} \mathscr{P}si_s \leqslantslantft( R \right)} \right], \end{align*} for some constant~$c_3 > 0$. Note that in the last line we took advantage of the fact that the function~$t / \mathscr{P}si_s(t)$ is monotone increasing, at least\footnote{We point out that, by~\eqref{aRgemu} and the fact that we chose~$\mu \geqslantslant e^n$, it follows immediately that~$\sqrt[n]{|a_R|} \geqslantslant e$.} for~$t \geqslantslant e$. Since~$s \geqslantslant 1/2$, we may and do choose~$R$ sufficiently large for the quantity in square brackets to be larger than~$c_1$. This yields that $$ \int_{a_R} \int_{d_R \cap B_{2 R}} \frac{dx dy}{|x - y|^{n + 2 s}} + |b_R| \geqslantslant c_1 |a_R|^{\frac{n - 1}{n}} \mathscr{P}si_s \leqslantslantft( \sqrt[n]{|a_R|} \right). $$ With the aid of this estimate and~\eqref{densestthmtech1}, an approach identical to that followed in~\cite{SV14} easily implies that~\eqref{VleV'} holds true also when~$s \geqslantslant 1/2$. To conclude the proof of the theorem, we note that~\eqref{VleV'} is formula~(3.55) of~\cite{SV14}. An iterative procedure analogous to the one performed there then finishes the argument. \end{proof} The density estimates that we just proved ensure that both the sub- and superlevel sets of non-trivial minimizers occupy, inside every sufficiently large ball, a region whose measure is comparable to that of the ball itself.
In the next result we use such estimates to deduce some information on the size of the interfaces of those minimizers. \begin{proposition} \label{intdensprop} Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. Fix~$\theta, \theta_0 \in (0, 1)$ and suppose that~$u(x_0) \in [-\theta_0, \theta_0]$. Then, there exist two constants~$\tilde{c} \in (0, 1)$ and~$\tilde{R} \geqslantslant 4$ such that \begin{equation} \label{intdensest} \leqslantslantft| \leqslantslantft\{ |u| < \theta \right\} \cap B_R(x_0) \right| \geqslantslant \tilde{c} R^{n - 1}, \end{equation} provided~$\tilde{R} \leqslantslant R \leqslantslant \xi$. The constants~$\tilde{c}$ and~$\tilde{R}$ depend only on~$\theta$,~$\theta_0$ and on universal quantities. \end{proposition} \begin{proof} It is well-known that~$u$ is of class~$C^{0, \alpha}(B_{3 R / 4}(x_0))$, for some universal~$\alpha \in (0, 1)$, with H\"{o}lder norm bounded independently of~$R \geqslantslant 1$, that is \begin{equation} \label{Calphaesttech} \| u \|_{C^{0, \alpha}(B_{3 R / 4}(x_0))} \leqslantslant C_1, \end{equation} for some universal constant~$C_1 \geqslantslant 1$. See e.g.~\cite{C17} or~\cite[Section~2]{CV17} for a thorough proof of this fact. Now, we focus on the proof of~\eqref{intdensest}. Notice that we can restrict ourselves to take~$\theta > 0$ suitably small. We initially assume that~$u(x_0) = 0$. We claim that there exists~$\delta \in (0, 1)$, depending on~$\theta$ and on universal quantities, for which \begin{equation} \label{leveldist1} \leqslantslantft\{ |u| < \theta \right\} \cap B_{R / 3}(x_0) \supseteq \leqslantslantft\{ |d| < \delta \right\} \cap B_{R / 3}(x_0), \end{equation} where~$d$ denotes the signed distance function to the set~$\mathbb{R}^n \setminus \{ u > 0 \}$, that is $$ d(x) := \begin{cases} {\mbox{\normalfont dist}} \leqslantslantft( x, \mathbb{R}^n \setminus \{ u > 0 \} \right) & \quad \mbox{if } x \in \{ u > 0 \} \\ - {\mbox{\normalfont dist}} \leqslantslantft( x, \{ u > 0 \} \right) & \quad \mbox{if } x \in \mathbb{R}^n \setminus \{ u > 0 \}, \end{cases} $$ for any~$x \in \mathbb{R}^n$. Note that \begin{equation} \label{u>0d>0} \leqslantslantft\{ d > 0 \right\} \cap B_{R/3}(x_0) = \leqslantslantft\{ u > 0 \right\} \cap B_{R/3}(x_0). \end{equation} To check that~\eqref{leveldist1} holds true, let~$\delta \in (0, 1)$ and, given~$x \in \{ |d| < \delta \} \cap B_{R / 3}(x_0)$, take~$y_x \in \{ u = 0 \} \cap B_{3 R / 4}(x_0)$ to be a point at which~$|d(x)| = |x - y_x|$. Then, by~\eqref{Calphaesttech} we have $$ |u(x)| = |u(x) - u(y_x)| \leqslantslant C_1 \leqslantslantft| x - y_x \right|^\alpha = C_1 |d(x)|^\alpha < C_1 \delta^\alpha, $$ and hence~$|u(x)| < \theta$, if we choose~$\delta \leqslantslant \leqslantslantft( \theta / C_1 \right)^{1 / \alpha}$. Thus,~\eqref{leveldist1} follows. Also observe that, in particular, formulae~\eqref{leveldist1} and~\eqref{u>0d>0} yield that \begin{equation} \label{leveldist2} \leqslantslantft\{ u > \theta \right\} \cap B_{R / 3}(x_0) \subseteq \leqslantslantft\{ d > \delta \right\} \cap B_{R / 3}(x_0). \end{equation} After this preliminary work, we are in a position to show that~\eqref{intdensest} is valid.
We use~\eqref{leveldist1} and the coarea formula (applied to the function~$d$) to compute \begin{equation} \label{leveldtech} \begin{aligned} \leqslantslantft| \leqslantslantft\{ |u| < \theta \right\} \cap B_{R / 3}(x_0) \right| & \geqslantslant \leqslantslantft| \leqslantslantft\{ |d| < \delta \right\} \cap B_{R / 3}(x_0) \right| = \int_{\leqslantslantft\{ |d| < \delta \right\} \cap B_{R / 3}(x_0)} |\nabla d(x)| \, dx \\ & = \int_{-\delta}^\delta \mathscr{P}er \leqslantslantft( \leqslantslantft\{ d = t \right\}, B_{R / 3}(x_0) \right) \, dt, \end{aligned} \end{equation} as the gradient of the distance function has modulus equal to~$1$. Notice that~$\mathscr{P}er(E, \Omega)$ indicates the perimeter of a Borel set~$E$ inside a domain~$\Omega$. Now, we assume without loss of generality (up to changing~$u$ with~$-u$) that $$ \leqslantslantft| \{ u > 0 \} \cap B_{R / 3}(x_0) \right| \leqslantslant \leqslantslantft| B_{R / 3}(x_0) \setminus \{ u > 0 \} \right|, $$ and we consider~$t \in [\delta / 2, \delta]$. Thanks to this reduction, the fact that~$\{ d > t \} \subset \{ d > 0\}$ and identity~\eqref{u>0d>0}, it is clear that $$ \leqslantslantft| \{ d > t \} \cap B_{R / 3}(x_0) \right| \leqslantslant \leqslantslantft| B_{R / 3}(x_0) \setminus \{ d > t \} \right|, $$ so that the relative isoperimetric inequality yields \begin{align*} \mathscr{P}er \leqslantslantft( \{ d = t \}, B_{R / 3}(x_0) \right) & \geqslantslant c_2 \min \leqslantslantft\{ \leqslantslantft| \{ d > t \} \cap B_{R / 3}(x_0) \right|, \leqslantslantft| B_{R / 3}(x_0) \setminus \{ d > t \} \right| \right\}^{(n - 1) / n} \\ & = c_2 \leqslantslantft| \{ d > t \} \cap B_{R / 3}(x_0) \right|^{(n - 1) / n} \\ & \geqslantslant c_2 \leqslantslantft| \{ d > \delta \} \cap B_{R / 3}(x_0) \right|^{(n - 1) / n}, \end{align*} for some dimensional constant~$c_2 > 0$. Hence, by taking advantage of this and of estimate~\eqref{leveldist2}, in virtue of~\eqref{leveldtech} we get \begin{align*} \leqslantslantft| \leqslantslantft\{ |u| < \theta \right\} \cap B_{R/3}(x_0) \right| & \geqslantslant c_2 \int_{\delta/2}^\delta \leqslantslantft| \{ d > \delta \} \cap B_{R / 3}(x_0) \right|^{(n - 1) / n} dt \\ & \geqslantslant c_3 \leqslantslantft| \leqslantslantft\{ u > \theta \right\} \cap B_{R / 3}(x_0) \right|^{(n - 1) / n}, \end{align*} for some constant~$c_3 > 0$ depending on~$\theta$ and on universal quantities. The thesis then follows by virtue of Theorem~\ref{densestthm}. Now we only have to deal with the more general case of~$u(x_0) \in [-\theta_0, \theta_0]$. In this scenario, we know by Theorem~\ref{densestthm} that there exist two points~$x_1, x_2 \in B_{R / 3}(x_0)$ such that~$u(x_1) > 1/2$ and~$u(x_2) < - 1/2$. By continuity, we may thus find~$\bar{x} \in B_{R/3}(x_0)$ at which~$u$ vanishes. But then, we may apply the estimate obtained above to the ball~$B_{R / 3}(\bar{x}) \subset B_R(x_0)$ to conclude the proof. \end{proof} \section{Energy estimates. Proof of Theorem~\ref{enestbelowthm}} \label{ensec} In this section, we show that the bound provided by Proposition~\ref{enestprop} is sharp, in the sense that the energy of every non-trivial (i.e. different from the constant functions~$1$ and~$-1$) minimizer can be also controlled from below by a term that has the same growth in~$R$ as the right-hand side of~\eqref{enest}. That is, we prove Theorem~\ref{enestbelowthm}. The argument leading to~\eqref{enestbelow} changes significantly as~$s$ takes different values in the interval~$(0, 1)$. 
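In this respect, it is useful to keep in mind that, up to dimensional constants, the function~$\mathscr{P}si_s$ defined in~\eqref{Psidef} behaves, for large values of its argument, like
$$
\mathscr{P}si_s(R) \simeq \begin{cases} R^{1 - 2 s} & \mbox{if } s \in (0, 1/2), \\ \log R & \mbox{if } s = 1/2, \\ 1 & \mbox{if } s \in (1/2, 1). \end{cases}
$$
Roughly speaking, this is what dictates the three different regimes in the argument below.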
In particular, we first establish~\eqref{enestbelow} for the case~$s > 1/2$, by inspecting the potential term of~$\mathscr{E}$. Then, we look at the kinetic term~$\mathscr{K}$ to obtain the estimate when~$s < 1/2$. Finally, a deeper analysis of the contributions coming from~$\mathscr{K}$ yields the desired result also for~$s = 1/2$. We stress that such differences in the proof of~\eqref{enestbelow} as~$s$ varies in~$(0, 1)$ are the effect of the competition between the local potential~$\mathscr{P}$ and the nonlocal interaction term~$\mathscr{K}$. The precise result that we address here is slightly stronger than Theorem~\ref{enestbelowthm} and can be stated as follows. Note that throughout the section we always implicitly assume~$K$ and~$W$ to satisfy the hypotheses listed in Theorem~\ref{enestbelowthm}. \begin{proposition} \label{enestbelowprop} Let~$u: \mathbb{R}^n \to [-1. 1]$ be a minimizer for~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. If~$u(x_0) \in [-\theta_0, \theta_0]$, for some~$\theta_0 \in (0, 1)$, then there exist two constants~$c_0 \in (0, 1)$ and~$R_0 \geqslantslant 1$, depending only on~$\theta_0$ and on universal quantities, such that \begin{equation} \label{enestbelowbis} \mathscr{K}(u; B_R(x_0), B_R(x_0)) + \mathscr{P}(u; B_R(x_0)) \geqslantslant c_0 R^{n - 1} \mathscr{P}si_s(R), \end{equation} provided~$R_0 \leqslantslant R \leqslantslant \xi$. \end{proposition} Observe that~\eqref{enestbelowbis} provides a sharper estimate than~\eqref{enestbelow}, as the former does not involve the interaction term over~$B_R(x_0) \times (\mathbb{R}^n \setminus B_R(x_0) )$. As anticipated at the beginning of the section, we remark that the method to prove~\eqref{enestbelowbis} is very sensitive with respect to the fractional parameter~$s$. Indeed, when~$s\in(1/2,\,1)$, the proof of~\eqref{enestbelowbis} is considerably simpler than in the other cases, and it follows essentially from Proposition~\ref{intdensprop}: as a matter of fact, when~$s\in(1/2,\,1)$ the problem is ``sufficiently close to the local case'' that the optimal bounds are provided directly by the potential energy (which is, in a sense, of local nature). When~$s\in (0,\,1/2]$ the situation is more complicated and the kinetic energy plays a dominant role in the estimate. In particular, a fine computation of the interactions at all scales will be needed to detect the logarithmic correction in the case~$s=1/2$. In any case, it might be interesting to determine whether an estimate like~\eqref{enestbelowbis} holds true for both terms~$\mathscr{K}$ and~$\mathscr{P}$ separately. After this preliminary discussion, we now head to the proof of Proposition~\ref{enestbelowprop}. As a first result, we estimate from below the growth of the potential term. \begin{lemma} \label{potestbelowlem} Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. If~$u(x_0) \in [-\theta_0, \theta_0]$, for some~$\theta_0 \in (0, 1)$, then there exist two constants~$c_1 \in (0, 1)$ and~$R_1 \geqslantslant 1$, depending on~$\theta_0$ and on universal quantities, such that \begin{equation} \label{potestbelow} \mathscr{P}(u; B_R(x_0)) \geqslantslant c_1 R^{n - 1}, \end{equation} provided~$R_1 \leqslantslant R \leqslantslant \xi$. \end{lemma} \begin{proof} Estimate~\eqref{potestbelow} is a simple consequence of Proposition~\ref{intdensprop} and hypothesis~\eqref{Wgamma}. 
Indeed, $$ \mathscr{P}(u; B_R(x_0)) \geqslantslant \gamma(\theta_0) \leqslantslantft| \{ |u| < \theta_0 \} \cap B_R(x_0) \right| \geqslantslant c_1 R^{n - 1}, $$ for some~$c_1 > 0$ and provided~$R$ is large enough. \end{proof} From Lemma~\ref{potestbelowlem} we immediately deduce the validity of the bound~\eqref{enestbelowbis}, when~$s \in (1/2, 1)$. For other values of~$s$ we still get a bound from below for the total energy, which however is strictly weaker than the one claimed in Theorem~\ref{enestbelowthm}/Proposition~\ref{enestbelowprop}. To cover the case of~$s \in (0, 1/2)$ we analyze the behavior of the interaction term~$\mathscr{K}$. A first computation in this direction is given by \begin{lemma} \label{enestbelowlem} Let~$u: B_r(x_0) \to \mathbb{R}$ be a measurable function, for some~$x_0 \in \mathbb{R}^n$ and~$r > 0$. Fix~$\theta_1 < \theta_2$ and let~$E, F \subset \mathbb{R}^n$ be two measurable sets on which~$u > \theta_2$ and~$u < \theta_1$, respectively. Suppose that \begin{equation} \label{ABdens} \min \leqslantslantft\{ \leqslantslantft| E \cap B_r(x_0) \right|, \leqslantslantft| F \cap B_r(x_0) \right| \right\} \geqslantslant c_\flat r^n, \end{equation} for some~$c_\flat > 0$. Then, there exists a constant~$c_* > 0$, depending on~$\theta_2 - \theta_1$,~$c_\flat$ and on universal quantities, for which $$ \mathscr{K}(u; E \cap B_r(x_0), F \cap B_r(x_0)) \geqslantslant c_* r^{n - 2 s}, $$ provided~$2 r \leqslantslant \xi$. \end{lemma} \begin{proof} By taking advantage of~\eqref{Kbounds},~\eqref{ABdens} and the way~$E, F$ are chosen, we compute \begin{align*} \mathscr{K}(u; E \cap B_r(x_0), F \cap B_r(x_0)) & \geqslantslant \frac{\lambda}{2} \int_{E \cap B_r(x_0)} \int_{F \cap B_r(x_0)} \frac{|u(x) - u(y)|^2}{|x - y|^{n + 2 s}} \, dx dy \\ & \geqslantslant \frac{\lambda (\theta_2 - \theta_1)^2}{2 (2 r)^{n + 2 s}} \leqslantslantft| E \cap B_r(x_0) \right| \leqslantslantft| F \cap B_r(x_0) \right| \\ & \geqslantslant \frac{\lambda (\theta_2 - \theta_1)^2 c_\flat^2}{2^{n + 1 + 2 s}} \, r^{n - 2 s}, \end{align*} if~$2 r \leqslantslant \xi$. This concludes the proof. \end{proof} By combining this lemma with the density estimates of Theorem~\ref{densestthm}, we obtain the next corollary, which in particular establishes~\eqref{enestbelowbis} when~$s \in (0, 1/2)$. \begin{corollary} \label{enestbelowcor2} Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. If~$u(x_0) \in [-\theta_0, \theta_0]$, for some~$\theta_0 \in (0, 1)$, then there exist two constants~$c_2 \in (0, 1)$ and~$R_2 \geqslantslant 3$, depending on~$\theta_0$ and on universal quantities, for which \begin{equation} \label{enestbelow2} \mathscr{K}(u; B_R(x_0), B_R(x_0)) \geqslantslant c_2 R^{n - 2 s}, \end{equation} provided that~$R_2 \leqslantslant R \leqslantslant \xi$. \end{corollary} \begin{proof} Apply Lemma~\ref{enestbelowlem} with, say,~$r = R/2$,~$\theta_1 = - 1/2$,~$\theta_2 = 1/2$,~$E = \{ u > 1/2 \}$,~$F = \{ u < - 1 / 2 \}$. Note that condition~\eqref{ABdens} is satisfied by virtue of Theorem~\ref{densestthm}, provided that we take~$R \geqslantslant 2 \max \{ \bar{R}(1/2, \theta_0), \bar{R}(- 1/2, \theta_0) \}$. \end{proof} In view of Lemma~\ref{potestbelowlem} and Corollary~\ref{enestbelowcor2}, we are only left to prove~\eqref{enestbelowbis} in the case~$s = 1/2$.
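Before giving the details, let us sketch, at a purely heuristic level, where the logarithmic correction comes from when~$s = 1/2$. On the one hand, Corollary~\ref{densestAcor} provides, at every scale~$\rho = 10^\ell$, with~$\ell$ ranging from~$1$ up to a largest value comparable to~$\log R$, about~$(R / \rho)^{n - 1}$ essentially disjoint balls of radius comparable to~$\rho$ centered at interface points; on the other hand, by Lemma~\ref{enestbelowlem} and Theorem~\ref{densestthm}, each such ball carries an interaction energy of order~$\rho^{n - 2 s} = \rho^{n - 1}$. Summing these contributions over the~$\simeq \log R$ available scales formally yields
$$
\sum_{\ell} \left( \frac{R}{10^\ell} \right)^{n - 1} \left( 10^\ell \right)^{n - 1} \simeq R^{n - 1} \log R,
$$
which is the growth that we now rigorously establish.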
This task is carried out in the following lemma, where we use the bound given in~\eqref{enestbelow2} repeatedly and at different scales, to gain the desired logarithmic correction (see also~\cite[Lemma~6.7]{CP16} for a similar computation in dimension~$n = 1$; of course, the case~$n\geqslantslant2$ that we consider here provides additional geometric and analytic difficulties and a finer estimate is needed in order to detect the optimal contribution of all the interactions). \begin{lemma} Assume~$s = 1/2$. Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. If~$u(x_0) \in [-\theta_0, \theta_0]$, for some~$\theta_0 \in (0, 1)$, then there exists two constants~$c_3 \in (0, 1)$ and~$R_3 \geqslantslant 1$, depending on~$\theta_0$ and on universal quantities, such that \begin{equation} \label{enestbelows=1/2} \mathscr{K}(u; B_R(x_0), B_R(x_0)) \geqslantslant c_3 R^{n - 1} \log R, \end{equation} provided~$R_3 \leqslantslant R \leqslantslant \xi$. \end{lemma} \begin{proof} Fix~$R_3 \geqslantslant \max \{ \bar{R}(0, \theta_0), \bar{R}(1/2, 0), \bar{R}(-1/2,0) \}$, with~$\bar{R}$ as given by Theorem~\ref{densestthm}, and take~$R \geqslantslant R_3$. Let~$k$ be the largest integer such that~$\sqrt{n} R_3 10^k \leqslantslant R$. Notice that, for this choice, we have~$Q_{R_3 10^k}(x_0) \subset B_{R/2}(x_0)$ and \begin{equation} \label{kNest} R < \sqrt{n} R_3 10^{k + 1}. \end{equation} Note that we may suppose without loss of generality that~$k \geqslantslant 2$, since otherwise~\eqref{enestbelows=1/2} would immediately follow from Corollary~\ref{enestbelowcor2}. Let~$\ell \in \{ 1, \ldots, k - 1 \}$. We claim that there exist two families of sets~$\{ E_i^{(\ell)} \}_{i = 1}^{N_\ell}, \{ F_i^{(\ell)} \}_{i = 1}^{N_\ell} \subset B_R(x_0)$, such that~$E_i^{(\ell)} \cap E_j^{(\ell)} = \emptyset$ for any~$i \ne j$, \begin{equation} \label{EFdistclaim} {\mbox{\normalfont dist}} \leqslantslantft( E_i^{(\ell)}, F_i^{(\ell)} \right) \geqslantslant \frac{R_3}{2} 10^\ell, \quad \sup_{x \in E_i^{(\ell)}, \, y \in F_i^{(\ell)}} |x - y| \leqslantslant \frac{7 R_3}{2} 10^\ell, \end{equation} \begin{equation} \label{Kbelowestclaim} \mathscr{K}(u; E_i^{(\ell)}, F_i^{(\ell)}) \geqslantslant c_\circ 10^{\ell (n - 1)}, \end{equation} for any~$i$, and~$N_\ell \geqslantslant c_\circ 10^{(k - \ell) (n - 1)}$, for some constant~$c_\circ \in (0, 1)$ independent of~$k$,~$\ell$ and~$i$. To prove the claim, we first use Theorem~\ref{densestthm} to deduce that $$ \min\Big\{ | \{ u > 0 \} \cap Q_{R_3 10^k}(x_0) |, \; | \{ u < 0 \} \cap Q_{R_3 10^k}(x_0) | \Big\}\geqslantslant c_\sharp 10^{n k}, $$ for some~$c_\sharp \in (0,1)$ independent of~$k$. Then, Corollary~\ref{densestAcor} yields the existence of a family~$\{ Q_i^{(\ell)} \}_{i = 1}^{N_\ell} \subset B_R(x_0)$ of~$N_\ell$ non-overlapping cubes with sides of length~$R_3 10^\ell$, each centered at a point at which~$u$ vanishes and such that~$N_\ell \geqslantslant c_{\star \star} 10^{(k - \ell) (n - 1)}$, for some~$c_{\star \star} \in (0,1)$ independent of~$k$ and~$\ell$. For any~$i$, denote with~$B_i^{(\ell)} \subset Q_i^{(\ell)}$ the ball of radius~$R_3 10^\ell / 2$ concentric to~$Q_i^{(\ell)}$. Then, consider another ball~$\widetilde{B}_i^{(\ell)} \subset B_R(x_0)$ of radius~$R_3 10^\ell / 2$ and centered at any point at distance~$2 R_3 10^\ell$ from the center of~$B_i^{(\ell)}$. 
It holds that either \begin{enumerate}[$(i)$] \item $\widetilde{B}_i^{(\ell)} \subset \{ u > 0 \}$, or \item $\widetilde{B}_i^{(\ell)} \subset \{ u < 0 \}$, or \item $u(\widetilde{x}) = 0$ at some point~$\widetilde{x} \in \widetilde{B}_i^{(\ell)}$. \end{enumerate} In case~$(i)$ we set $E_i^{(\ell)} := \{ u < -1/2 \} \cap B_i^{(\ell)}$, while in case~$(ii)$,~$E_i^{(\ell)} := \{ u > 1/2 \} \cap B_i^{(\ell)}$; in both cases we set~$F_i^{(\ell)} := \widetilde{B}_i^{(\ell)}$. On the other hand, in case~$(iii)$ we slightly translate~$\widetilde{B}_i^{(\ell)}$ (along a vector of length at most~$R_3 10^\ell / 2$) to make it centered at~$\widetilde{x}$ and set \begin{eqnarray*} && E_i^{(\ell)} := \{ u < -1/2 \} \cap B_i^{(\ell)}\\ {\mbox{and }}&& F_i^{(\ell)} := \{ u > 1/2 \} \cap \widetilde{B}_i^{(\ell)}.\end{eqnarray*} In any case, we employ Lemma~\ref{enestbelowlem} in combination with Theorem~\ref{densestthm} to see that~\eqref{Kbelowestclaim} is true. Inequalities~\eqref{EFdistclaim} also hold by construction. The claim is therefore proved. In view of this, $$ \leqslantslantft( E_i^{(\ell)} \times F_i^{(\ell)} \right) \cap \leqslantslantft( E_j^{(m)} \times F_j^{(m)} \right) = \emptyset, $$ for any~$i, \in \{ 1, \ldots, N_\ell \}$,~$j \in \{ 1, \ldots, N_m \}$ and~$\ell, m \in \{ 1, \ldots, k - 1 \}$ such that~$(i, \ell) \ne (j, m)$. Accordingly, we can sum up each contribution coming from~\eqref{Kbelowestclaim} and obtain that \begin{align*} \mathscr{K}(u; B_R(x_0), B_R(x_0)) & \geqslantslant \sum_{\ell = 1}^{k - 1} \sum_{i = 1}^{N_\ell} \mathscr{K}(u; E_i^{(\ell)}, F_i^{(\ell)}) \geqslantslant c_\circ \sum_{\ell = 1}^{k - 1} N_\ell 10^{\ell(n - 1)} \\ & \geqslantslant c_\circ^2 \sum_{\ell = 1}^{k - 1} 10^{k(n - 1)} \geqslantslant \frac{c_\circ^2}{2} 10^{k(n - 1)} k. \end{align*} Recalling now~\eqref{kNest} and possibly enlarging~$R_3$, estimate~\eqref{enestbelows=1/2} plainly follows. \end{proof} The proof of Proposition~\ref{enestbelowprop} (and, consequently, of Theorem~\ref{enestbelowthm}) is therefore concluded. As a consequence of Proposition~\ref{enestbelowprop}, we deduce a~\emph{clean ball condition} for the sub- and superlevel sets of the minimizers of~$\mathscr{E}$, that improves Theorem~\ref{densestthm}. \begin{proposition} \label{cleanballprop} Let~$u: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$, for some~$x_0 \in \mathbb{R}^n$ and~$R > 0$. Fix~$\theta, \theta_0 \in (0, 1)$ and a parameter~$\delta \in (0, 1]$. If~$u(x_0) \in [- \theta_0, \theta_0]$, then there exists two constants~$\kappa \in (0, 1]$,~$\widehat{R} > 0$ and two points~$z_1, z_2 \in \mathbb{R}^n$ such that \begin{equation} \label{cleanball} B_{\kappa R}(z_1) \subseteq \leqslantslantft\{ u > \theta \right\} \cap B_R(x_0) \quad \mbox{and} \quad B_{\kappa R}(z_2) \subseteq \leqslantslantft\{ u < -\theta \right\} \cap B_R(x_0), \end{equation} provided~$R \geqslantslant \widehat{R}$ and~$\delta \kappa R \leqslantslant \xi$. The constants~$\kappa$ and~$\widehat{R}$ only depend on~$\delta$,~$\theta$,~$\theta_0$ and on universal quantities. \end{proposition} \begin{proof} We restrict ourselves to the proof of the first inclusion in~\eqref{cleanball}, the other one being completely analogous. Set~$\kappa := 1/(2 \sqrt{n} N)$, for some~$N \in \mathbb{N}$ to be later determined. Let~$Q$ be a cube centered at~$x_0$, of sides~$R / \sqrt{n}$. By construction,~$Q \subset B_{R / 2}(x_0)$. 
Subdivide~$Q$ into a family~$\{ Q_j \}_{j = 1}^{N^n}$ of non-overlapping cubes of sides~$R/(\sqrt{n} N) = 2 \kappa R$ parallel to those of~$Q$. Let~$\mathbb{Q}Q$ be the family of cubes~$Q_j$ having non-empty intersection with the level set~$\{ u > \theta \}$ and denote with~$\widetilde{N} \in \mathbb{N} \cup \{ 0 \}$ the cardinality of~$\mathbb{Q}Q$. We relabel the cubes belonging to~$\mathbb{Q}Q$ in such a way that $$ \mathbb{Q}Q = \leqslantslantft\{ \widetilde{Q}_j : j = 1, \ldots, \widetilde{N} \right\}. $$ We claim that there exist two constants~$\widetilde{R} > 0$ and~$\tilde{c} \in (0, 1)$, depending on~$\theta$,~$\theta_0$ and on universal quantities, such that \begin{equation} \label{tildeN} \widetilde{N} \geqslantslant \tilde{c} N^n, \end{equation} provided \begin{equation} \label{RNcond1} R \geqslantslant \widetilde{R}. \end{equation} To see this, we consider the ball~$\widetilde{B} = B_{R / (2 \sqrt{n})}(x_0) \subset Q$ and we apply to it the density estimate~\eqref{densest1}. Setting~$\widetilde{R} := 2 \sqrt{n} \bar{R}$, with~$\bar{R} = \bar{R}(\theta, - \theta_0)$ as in Theorem~\ref{densestthm}, by~\eqref{RNcond1} we compute $$ \bar{c} \leqslantslantft( \frac{R}{2 \sqrt{n}} \right)^n \leqslantslant \leqslantslantft| \{ u > \theta \} \cap \widetilde{B} \right| = \sum_{j = 1}^{\widetilde{N}} \leqslantslantft| \widetilde{Q}_j \cap \{ u > \theta \} \cap \widetilde{B} \right| \leqslantslant \sum_{j = 1}^{\widetilde{N}} \leqslantslantft| \widetilde{Q}_j \right| = \widetilde{N} \leqslantslantft( \frac{R}{\sqrt{n} N} \right)^n, $$ which leads to~\eqref{tildeN}. Now, either \begin{enumerate}[$(i)$] \item there exists~$j = 1, \ldots, \widetilde{N}$ such that~$\widetilde{Q}_j \cap \{ |u| \leqslantslant \theta \} = \varnothing$, or \item for any~$j = 1, \ldots, \widetilde{N}$, there exists~$y_j \in \widetilde{Q}_j$ at which~$|u(y_j)| \leqslantslant \theta$. \end{enumerate} We claim that the latter possibility cannot occur, at least if~$N$ and~$R$ are sufficiently large, in dependence of~$\theta$,~$\theta_0$ and universal quantities only. If this is the case, condition~$(i)$ would then be valid. By the way in which the family~$\mathbb{Q}Q$ is chosen and the continuity of~$u$, we might then conclude that the first assertion in~\eqref{cleanball} holds true. By contradiction, suppose~$(ii)$ to be in force. By reducing, if necessary, the number~$\widetilde{N}$ of cubes in~$\mathbb{Q}Q$ by a factor~$3^n$ and changing the position of the~$y_j$, we may assume without loss of generality that each~$\widetilde{Q}_j$ is centered at~$y_j$. Let now~$B_j^{(\delta)} \subset \widetilde{Q}_j$ be the ball of radius~$\delta R / (2 \sqrt{n} N)$ centered at~$y_j$. In view of~\eqref{tildeN} and Proposition~\ref{enestbelowprop}, we estimate \begin{equation} \label{enestbelowtech} \begin{aligned} \mathscr{E}(u; B_R(x_0)) & \geqslantslant \sum_{j = 1}^{\widetilde{N}} \leqslantslantft[ \mathscr{K}(u; B_j^{(\delta)}, B_j^{(\delta)}) + \mathscr{P}(u; B_j^{(\delta)}) \right] \\ & \geqslantslant c_0 \widetilde{N} \leqslantslantft( \frac{\delta R}{2 \sqrt{n} N} \right)^{n - 1} \mathscr{P}si_s \leqslantslantft( \frac{\delta R}{2 \sqrt{n} N} \right) \\ & \geqslantslant c_1 N \delta^{n - 1} R^{n - 1} \mathscr{P}si_s \leqslantslantft( \frac{\delta R}{N} \right), \end{aligned} \end{equation} for some~$c_0, c_1 \in (0, 1)$ depending only on~$\theta$ and on universal quantities. 
This is true provided \begin{equation} \label{RNcond2} R_* \leqslantslant \frac{\delta R}{N} \leqslantslant 2 \sqrt{n} \xi, \end{equation} with~$R_* \geqslantslant 1$ depending only on universal quantities and~$\theta$. On the other hand, with the aid of Proposition~\ref{enestprop} we estimate \begin{equation} \label{enestabovetech} \mathscr{E}(u; B_R(x_0)) \leqslantslant C_2 R^{n - 1} \mathscr{P}si_s(R), \end{equation} for some universal~$C_2 \geqslantslant 1$, if \begin{equation} \label{RNcond3} R \geqslantslant 3. \end{equation} By comparing the two estimates~\eqref{enestbelowtech} and~\eqref{enestabovetech}, we find that \begin{equation} \label{RNcond4} N \leqslantslant \bar{C}_\delta := \begin{dcases} C_\star^{\frac{1}{2 s}} \delta^{1 - \frac{n}{2s}} & \quad \mbox{if } s < 1/2, \\ 2 C_\star \delta^{1 - n} & \quad \mbox{if } s = 1/2, \\ C_\star \delta^{1 - n} & \quad \mbox{if } s > 1/2, \end{dcases} \qquad \mbox{with } C_\star := \frac{C_2}{c_1}, \end{equation} under the further restriction \begin{equation} \label{RNcond5} \frac{N}{\delta} \leqslantslant \sqrt{R}, \quad \mbox{when } s = 1/2. \end{equation} But then we reach a contradiction, as we can choose~$N$ and~$\widehat{R}$ in such a way that conditions~\eqref{RNcond1},~\eqref{RNcond2},~\eqref{RNcond3} and~\eqref{RNcond5} are satisfied, but not~\eqref{RNcond4}. For instance, we may take $$ N := \lfloor \bar{C}_\delta \rfloor + 2 \quad \mbox{and} \quad \widehat{R} := \max \leqslantslantft\{ 3, \widetilde{R}, N R_* / \delta, (N / \delta)^2 \right\}, $$ and any~$R \geqslantslant \widehat{R}$ such that~$\delta R / N \leqslantslant 2 \sqrt{n} \xi$. \end{proof} \section{Planelike minimizers of~$\mathscr{E}$. Proof of Theorem~\ref{tauPLthm}} \label{PLminsec} After having established in Proposition~\ref{cleanballprop} the appropriate clean ball condition for our problem, the proof of Theorem~\ref{tauPLthm} now follows the strategy exploited in~\cite{V04} and~\cite{CV17}. In particular, we shall follow the latter reference closely and only point out the most important differences in the argument. First, we restrict ourselves to periodicities~$\tau$ larger than a large constant~$\tau_0$, chosen in dependence on~$\theta$ and universal quantities only. Note that this assumption does not affect the generality of our framework. Indeed, one can deal with a periodicity~$1 \leqslantslant \tau \leqslantslant \tau_0$ by scaling down and reducing it to the case of periodicity~$1$ treated in~\cite{CV17}. More specifically, we take~$\tau > \tau_0$, with~$\tau_0$ equal to twice the constant~$\widehat{R}$ of Proposition~\ref{cleanballprop} (applied with~$\delta = 1 / \sqrt{n}$ and~$\theta_0 = \theta$). We only consider the case of a \emph{rational} direction~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$ and a kernel~$K$ with rapid decay at infinity, i.e. that satisfies $$ K(x, y) \leqslantslant \frac{\mathscr{L}ambda_0}{|x - y|^{n + \beta}} \quad \mbox{for a.a.~} x, y \in \mathbb{R}^n \mbox{ such that } |x - y| \geqslantslant \rho_0, \mbox{ with } \beta > 1, $$ for some~$\mathscr{L}ambda_0, \rho_0 > 0$. The general case can then be dealt with via approximation arguments analogous to those presented in Subsection~4.7 and Section~5 of~\cite{CV17}, respectively.
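We remark, in passing, that the decay assumption on~$K$ is what makes the far-away interactions between the two pure phases prescribed on the admissible functions summable. Indeed, for a function which is larger than~$\theta$ on~$\{ \omega \cdot x \leqslant 0 \}$ and smaller than~$-\theta$ on~$\{ \omega \cdot x \geqslant M \}$, a formal computation (integrating out the directions orthogonal to~$\omega$) bounds the interaction between these two half-spaces, per unit of cross-sectional area, by a quantity of the order of
$$
\int_0^{+\infty} \frac{dh}{\left( h + M / |\omega| \right)^{\beta}},
$$
which is finite precisely because~$\beta > 1$. This is, roughly speaking, what allows the functional~${\mathscr{F}_\omega}$ introduced below to be finite on suitable competitors; compare with the analogous discussion in~\cite{CV17}.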
Under these additional assumptions, for~$M > 0$ we define the set of periodic, locally~$L^2$ functions $$ L^2_{\rm loc}({\widetilde{\R}^n}) := \Big\{ u \in L^2_{\rm loc}(\mathbb{R}^n) : u \mbox{ is periodic with respect to~$\sim$} \Big\}, $$ the class of admissible functions $$ {\mathcal{A}_\omega^M} := \leqslantslantft\{ u \in L^2_{\rm loc}({\widetilde{\R}^n}) : u(x) \geqslantslant \theta \mbox{ if } \omega \cdot x \leqslantslant 0 \mbox{ and } u(x) \leqslantslant - \theta \mbox{ if } \omega \cdot x \geqslantslant M \right\}, $$ the auxiliary functional \begin{align*} {\mathscr{F}_\omega}(u) := & \, \mathscr{K}(u; {\widetilde{\R}^n}, \mathbb{R}^n) + \mathscr{P}(u; {\widetilde{\R}^n}) \\ = & \, \frac{1}{2} \int_{{\widetilde{\R}^n}} \int_{\mathbb{R}^n} |u(x) - u(y)|^2 K(x, y) \, dx dy + \int_{{\widetilde{\R}^n}} W(x, u(x)) \, dx, \end{align*} and the set of absolute minimizers $$ {\mathcal{M}_\omega^M} := \Big\{ u \in {\mathcal{A}_\omega^M} : {\mathscr{F}_\omega}(u) \leqslantslant {\mathscr{F}_\omega}(v) \mbox{ for all } v \in {\mathcal{A}_\omega^M} \Big\}. $$ We have \begin{proposition} \label{uMexprop} There exists a particular minimizer~${u_\omega^M} \in {\mathcal{M}_\omega^M}$ that satisfies the following properties: \begin{enumerate}[$(i)$] \item ${u_\omega^M}$ is the~(unique)~\emph{minimal minimizer} of~${\mathcal{M}_\omega^M}$, that is $$ {u_\omega^M}(x) = \inf_{u \in {\mathcal{M}_\omega^M}} u(x) \quad \mbox{for a.a.~} x \in \mathbb{R}^n, $$ at least in the sense of~\cite[Definition~4.2.1]{CV17}; \item ${u_\omega^M}$ is a minimizer of~$\mathscr{E}$ in every bounded open set~$\Omega$ compactly contained in the strip $$ \mathcal{S}_\omega^M := \Big\{ x \in \mathbb{R}^n : \omega \cdot x \in [0, M] \Big\}; $$ \item ${u_\omega^M}$ has the~\emph{doubling property}, i.e. it coincides with the minimal minimizers corresponding to the weaker equivalence relations~$\sim_m$, defined for any~$m \in \mathbb{N}^{n - 1}$ by setting $$ x \sim_m y \quad \mbox{iff} \quad x - y \mbox{ belongs to the lattice generated by } m_1 \vec{z}_1, \ldots, m_{n - 1} \vec{z}_{n - 1}, $$ where~$\vec{z}_1, \ldots, \vec{z}_{n - 1} \in \tau \mathbb{Z}^n \setminus \{ 0 \}$ are vectors orthogonal to~$\omega$, forming a basis for the lattice induced by~$\sim$ (see~\cite[Subsection~4.3]{CV17} for more details); \item the superlevel~[sublevel] sets of~${u_\omega^M}$ enjoy the~\emph{$\tau$-Birkhoff property} with respect to direction~$\omega$~[$-\omega$], that is \begin{align*} \leqslantslantft\{ \pm {u_\omega^M} > \eta \right\} + k & \subseteq \leqslantslantft\{ \pm {u_\omega^M} > \eta \right\} \mbox{ for any } k \in \tau \mathbb{Z}^n \mbox{ such that } \pm \omega \cdot k \leqslantslant 0, \mbox{ and} \\ \leqslantslantft\{ \pm {u_\omega^M} > \eta \right\} + k & \supseteq \leqslantslantft\{ \pm {u_\omega^M} > \eta \right\} \mbox{ for any } k \in \tau \mathbb{Z}^n \mbox{ such that } \pm \omega \cdot k \geqslantslant 0, \end{align*} for any~$\eta \in \mathbb{R}$. \end{enumerate} \end{proposition} Proposition~\ref{uMexprop} can be proved by repeating the arguments of~\cite[Subsections~4.1-4.5]{CV17}. The differences are purely formal and are due to the fact that here the equivalence relation~$\sim$ is chosen coherently with the~$\tau \mathbb{Z}^n$-periodicity of~$K$ and~$W$ (which were instead~$\mathbb{Z}^n$-periodic in~\cite{CV17}). To conclude the proof of Theorem~\ref{tauPLthm}, we now need to show that for large, universal values of~$M / (|\omega| \tau)$, the minimal minimizer~${u_\omega^M}$ is~\emph{unconstrained}, i.e. 
that~${u_\omega^M}$ is a minimizer for~$\mathscr{E}$ in~\emph{any} bounded open subset~$\Omega$ of~$\mathbb{R}^n$ (compare this with point~$(ii)$ of Proposition~\ref{uMexprop}). The essential step in this direction is given by \begin{proposition} \label{disttauprop} There exists a constant~$M_0 > 0$, depending only on~$\theta$ and on universal quantities, such that if~$M \geqslantslant M_0 |\omega| \tau$, then the superlevel set~$\{ {u_\omega^M} > - \theta \}$ is at least at distance~$\tau$ from the upper constraint~$\{ \omega \cdot x = M \}$ delimiting~$\mathcal{S}_\omega^M$. \end{proposition} \begin{proof} First, we claim that \begin{equation} \label{cleanballtauclaim} \mbox{there exists a ball } B \mbox{ of radius } \sqrt{n} \tau \mbox{ contained in } \mathcal{S}_\omega^M, \mbox{ such that } |{u_\omega^M}| > \theta \mbox{ on } B, \end{equation} if~$M \geqslantslant M_0 |\omega| \tau$, for some~$M_0 > 0$. To prove claim~\eqref{cleanballtauclaim}, we fix~$\bar{x} \in \mathcal{S}_\omega^M$ in such a way that~$\omega \cdot \bar{x} = M / 2$. Consider the ball~$B_{\sqrt{n} \tau}(\bar{x})$, which is itself contained in~$\mathcal{S}_\omega^M$, provided~$M > 2 \sqrt{n} |\omega| \tau$. Now, either \begin{enumerate}[$(i)$] \item $|{u_\omega^M}| > \theta$ on~$B_{\sqrt{n} \tau}(\bar{x})$, or \item there exists~$x_0 \in B_{\sqrt{n} \tau}(\bar{x})$ at which~$|{u_\omega^M}(x_0)| \leqslantslant \theta$. \end{enumerate} In case~$(i)$, we clearly have a ball~$B$ as desired, and~\eqref{cleanballtauclaim} follows. Thus, we suppose that~$(ii)$ is valid. Consider a large ball~$B_R(x_0)$, for~$R > 0$, and observe that~$B_R(x_0) \subset \subset \mathcal{S}_\omega^M$, if we take~$M \geqslantslant 4 (\sqrt{n} \tau + R) |\omega|$. Observe that, by point~$(ii)$ of Proposition~\ref{uMexprop},~${u_\omega^M}$ is a minimizer of~$\mathscr{E}$ in~$B_R(x_0)$. Therefore, we may apply Proposition~\ref{cleanballprop} (with~$\delta = 1 / \sqrt{n}$ and~$\theta_0 = \theta$) to find a ball~$\widetilde{B} \subset B_R(x_0)$ of radius~$\kappa R$,~$\kappa \in (0, 1]$, on which, say,~${u_\omega^M} < - \theta$, provided~$R \geqslantslant \widehat{R}$ and~$\kappa R \leqslantslant \sqrt{n} \tau$.\footnote{Note that the resulting interval for~$R$ is not the empty set, in view of the assumptions we made on~$\tau$ at the beginning of the section.} By choosing exactly~$\kappa R = \sqrt{n} \tau$ we are led once again to~\eqref{cleanballtauclaim}, at least if~$M \geqslantslant 4 \sqrt{n} (1 + \kappa^{-1}) |\omega| \tau$. Claim~\eqref{cleanballtauclaim} being proved, the rest of the proof follows in a more or less straightforward way. In view of the continuity of~${u_\omega^M}$ (see e.g.~\cite{C17} or~\cite[Section~2]{CV17}), we see that either~${u_\omega^M} > \theta$ on~$B$ or~${u_\omega^M} < - \theta$ on~$B$. By a general result on Birkhoff sets (an appropriately scaled version of~\cite[Proposition~4.5.2]{CV17}, for instance) and point~$(iv)$ of Proposition~\ref{uMexprop}, it then follows that either~${u_\omega^M} > \theta$ on a half-space of the form~$\{ \omega \cdot x < t_1 \}$, with~$t_1 > \sqrt{n} |\omega| \tau$, or that~${u_\omega^M} < - \theta$ on a half-space of the form~$\{ \omega \cdot x > t_2 \}$, with~$t_2 < M - \sqrt{n} |\omega| \tau$. As it is not hard to see, the former possibility leads to a contradiction,~${u_\omega^M}$ being the minimal minimizer. Consequently, the latter conclusion is true and the proposition is proved. 
\end{proof} Thanks to Proposition~\ref{disttauprop}, we see that~${u_\omega^M}$ starts attaining values smaller than~$-\theta$ well before meeting the upper constraint, once~$M$ is chosen sufficiently large (in relation to~$\tau$ and~$|\omega|$). By this fact and the minimality properties of~${u_\omega^M}$, it is not hard to see that the following stronger statement holds true. \begin{proposition} Let~$M \geqslantslant M_0 |\omega| \tau$, with~$M_0$ as in Proposition~\ref{disttauprop}. Then,~${u_\omega^M} = u_\omega^{M + a}$, for any~$a \geqslantslant 0$. \end{proposition} We refer to the arguments of~\cite[Corollary~4.6.2]{CV17} for a detailed proof of this result. We remark that the above proposition shows in particular that~${u_\omega^M}$ is a minimizer of~$\mathscr{E}$ in each bounded subset of the half-space~$\{ \omega \cdot x > 0 \}$ (recall point~$(ii)$ of Proposition~\ref{uMexprop}). By exploiting another time the fact that~${u_\omega^M}$ is a minimal minimizer, it can be proved that its minimizing properties do not stop at the lower constraint~$\{ \omega \cdot x = 0 \}$, but in fact extends to the whole space~$\mathbb{R}^n$. We refer once again to the arguments displayed in~\cite[Subsection~4.6]{CV17} for more details on this. Consequently,~${u_\omega^M}$ is a class~A minimizer and the proof of Theorem~\ref{tauPLthm} is concluded. \section{The~$\mathscr{G}amma$-convergence result. Proof of Proposition~\ref{Gammaconvprop}} \label{Gammasec} The proof is an obvious modification of the argument contained in~\cite[Section~2]{SV12}. We report it here for the sake of completeness. We first establish the~$\mathscr{G}amma$-liminf inequality claimed in~$(i)$. To this aim, note that we may assume $$ \ell := \liminf_{\varepsilon \rightarrow 0^+} \mathscr{E}_\varepsilon(u_\varepsilon; \Omega) < +\infty. $$ Let~$u_{\varepsilon_j}$ be a subsequence attaining the liminf above. Up to extracting a further subsequence, we may also assume that~$u_{\varepsilon_j}$ converges to~$u$ a.e.~in~$\mathbb{R}^n$. In view of this, $$ \lim_{j \rightarrow +\infty} \frac{1}{\varepsilon_j^{2 s}} \int_{\Omega} W(x, u_{\varepsilon_j}(x)) \, dx \leqslantslant \ell, $$ that is, by the continuity of~$W$ and Fatou's lemma, $$ \int_{\Omega} W(x, u(x)) \, dx = \lim_{j \rightarrow +\infty} \int_{\Omega} W(x, u_{\varepsilon_j}(x)) \, dx = 0. $$ By~\eqref{Wgamma}, we then deduce that~$u$ only takes the values~$1$ and~$-1$ and, hence, may be written in~$\Omega$ as~$u = \chi_E - \chi_{\mathbb{R}^n \setminus E}$, for some measurable set~$E$. Accordingly, using again Fatou's lemma, $$ \mathscr{G}(u; \Omega) = \mathscr{K}(u; \Omega) \leqslantslant \lim_{j \rightarrow +\infty} \mathscr{K}(u_{\varepsilon_j}; \Omega) \leqslantslant \ell, $$ and the inequality in~$(i)$ holds true. On the other hand, the proof of~$(ii)$ is almost immediate. We simply select~$u_\varepsilon = u$ as recovery sequence. Of course we may restrict ourselves to suppose~$\mathscr{G}(u; \Omega) < +\infty$ and consequently deduce that~$u = \chi_E - \chi_{\mathbb{R}^n \setminus E}$ in~$\Omega$, for some measurable set~$E$. As, by~\eqref{Wzeros},~$W(x, u(x)) = 0$ for a.a.~$x \in \Omega$, we get $$ \mathscr{G}(u; \Omega) = \mathscr{K}(u; \Omega) = \mathscr{E}_\varepsilon(u; \Omega) = \mathscr{E}_\varepsilon(u_\varepsilon; \Omega), \quad \mbox{for any } \varepsilon > 0, $$ and the thesis plainly follows. The proof of Proposition~\ref{Gammaconvprop} is therefore concluded. \section{Planelike minimal surfaces for~$\mbox{\normalfont Per}_K$. 
Proof of Theorem~\ref{PerPLthm}} \label{PerPLsec} In this final section, we obtain planelike class~A minimal surfaces for the~$K$-perimeter as limits, as~$\varepsilon \rightarrow 0^+$, of the minimizers constructed in Theorem~\ref{epsPLthm}. That is, we prove Theorem~\ref{PerPLthm}. Note that, in contrast with the previous sections, here we always consider~$s \in (0, 1/2)$. For the sake of clarity, we also stress that, while~$K$ is of course assumed throughout the section to satisfy the hypotheses explicitly stated in Theorem~\ref{PerPLthm}, the functional~$\mathscr{E}_\varepsilon$ that we often consider is to be understood as given by any~$W$ fulfilling requirements~\eqref{Wzeros}-\eqref{Wper}. As a first step towards the proof of Theorem~\ref{PerPLthm}, we need uniform-in-$\varepsilon$ versions of the density and energy estimates for the functional~$\mathscr{E}_\varepsilon$. Such results are reported in the next two propositions. \begin{proposition} \label{epsenestprop} Let~$\varepsilon \in (0, 1)$,~$x_0 \in \mathbb{R}^n$ and~$R \geqslant 3 \varepsilon$. If~$u_\varepsilon: \mathbb{R}^n \to [-1, 1]$ is a minimizer of~$\mathscr{E}_\varepsilon$ in~$B_{R + 2 \varepsilon}(x_0)$, then $$ \mathscr{E}_\varepsilon(u_\varepsilon; B_R(x_0)) \leqslant C R^{n - 2 s}, $$ for some universal constant~$C \geqslant 1$. \end{proposition} \begin{proposition} \label{epsdensestprop} Let~$\varepsilon \in (0, 1)$,~$x_0 \in \mathbb{R}^n$ and~$R > 0$. Let~$u_\varepsilon: \mathbb{R}^n \to [-1, 1]$ be a minimizer of~$\mathscr{E}_\varepsilon$ in~$B_R(x_0)$. Fix~$\theta, \theta_0 \in (-1, 1)$. If~$u_\varepsilon(x_0) \geqslant \theta_0$, then there exist two constants~$c \in (0, 1)$, depending only on universal quantities, and~$\bar{R} > 0$, which may also depend on~$\theta$ and~$\theta_0$, such that $$ \left| \left\{ u_\varepsilon > \theta \right\} \cap B_R(x_0) \right| \geqslant c R^n, $$ provided~$\bar{R} \varepsilon \leqslant R \leqslant \xi/3$. Similarly, if~$u_\varepsilon(x_0) \leqslant \theta_0$, then $$ \left| \left\{ u_\varepsilon < \theta \right\} \cap B_R(x_0) \right| \geqslant c R^n, $$ provided~$\bar{R} \varepsilon \leqslant R \leqslant \xi/3$. \end{proposition} Both statements can be easily deduced from Proposition~\ref{enestprop} and Theorem~\ref{densestthm}, respectively, by using~$\varepsilon$-rescalings of the form~\eqref{Reps}. Next is a simple remark on the convergence of sequences having equibounded energies. \begin{lemma} \label{directlem} Let~$\Omega \subset \mathbb{R}^n$ be an open bounded set with Lipschitz boundary and~$\{ \varepsilon_j \}$ an infinitesimal sequence of positive numbers. Let~$\{ u_j \} \subset \mathcal{X}$ be a sequence of functions such that~$\mathscr{E}_{\varepsilon_j}(u_j; \Omega)$ is bounded uniformly in~$j$. Then, there exists a subsequence~$\{ j_k \}$ such that, as~$k \rightarrow +\infty$, $$ u_{j_k} \longrightarrow u := \chi_{E} - \chi_{\mathbb{R}^n \setminus E} \quad \mbox{in } L^1(\Omega), $$ for some measurable set~$E \subseteq \Omega$.
\end{lemma} \begin{proof} Recalling~\eqref{Kbounds} and the fact that~$|u_j| \leqslantslant 1$, we estimate \begin{align*} [u_j]_{H^s(\Omega)}^2 & \leqslantslant \int_{\Omega} \leqslantslantft[ \frac{1}{\lambda} \int_{B_\tau(x)} \leqslantslantft| u_j(x) - u_j(y) \right|^2 K(x, y) \, dy + 4 \int_{\mathbb{R}^n \setminus B_\tau(x)} \frac{dy}{|x - y|^{n + 2 s}} \right] dx \\ & \leqslantslant \frac{2}{\lambda} \, \mathscr{E}_{\varepsilon_j}(u_j; \Omega) + \frac{2 n |B_1| |\Omega|}{s \tau^{2 s}}. \end{align*} The equiboundedness of the energies allows us to apply e.g.~\cite[Corollary~7.2]{DPV12} and deduce the existence of a subsequence~$u_{j_k}$ converging to some~$u$ in~$L^1(\Omega)$. Also, by using again the equiboundedness of~$\{ \mathscr{E}_j(u_{\varepsilon_j}; \Omega) \}$ and Fatou's lemma, we have $$ \int_{\Omega} W(x, u(x)) \, dx \leqslantslant \liminf_{k \rightarrow +\infty} \int_{\Omega} W(x, u_{j_k}(x)) \, dx \leqslantslant \liminf_{k \rightarrow +\infty} \varepsilon_{j_k}^{2 s} \, \mathscr{E}_{\varepsilon_{j_k}}(u_{j_k}; \Omega) = 0. $$ By~\eqref{Wzeros}, the function~$u$ only takes values~$1$ and~$-1$, i.e.~$u = \chi_E - \chi_{\mathbb{R}^n \setminus E}$, for some~$E \subseteq \Omega$. \end{proof} The following result ensures that the limit of a sequence of class~A minimizers of~$\mathscr{E}_\varepsilon$ is a class~A minimizer of~$\mathscr{G}$. Note that this does not immediately follow from Proposition~\ref{Gammaconvprop} as a consequence of a standard application of the Fundamental Theorem of~$\mathscr{G}amma$-convergence, since class~A minimizers are not \emph{global} minimizers on~$\mathcal{X}$. \begin{lemma} \label{FTGClem} Let~$\{ \varepsilon_j \}$ be an infinitesimal sequence of positive numbers. For any~$j \in \mathbb{N}$, let~$u_j \in \mathcal{X}$ be a class~A minimizer of~$\mathscr{E}_{\varepsilon_j}$ and suppose that the sequence~$\{ u_j \}$ converges in~$\mathcal{X}$ to~$u = \chi_E - \chi_{\mathbb{R}^n \setminus E}$, for some measurable set~$E \subseteq \mathbb{R}^n$, such that~$\mathscr{G}(u; \Omega) < +\infty$, for any bounded open~$\Omega \subset \mathbb{R}^n$. Then,~$u$ is a class~A minimizer of~$\mathscr{G}$ or, equivalently,~$\partial E$ is a class~A minimal surface for~$\mathscr{P}er_K$. \end{lemma} \begin{proof} To prove the result it is enough to check that $$ \mathscr{G}(u; B_R) \leqslantslant \mathscr{G}(v; B_R) \quad \mbox{for any } v \in \mathcal{X} \mbox{ such that } v = u \mbox{ in } \mathbb{R}^n \setminus B_R \mbox{ and any } R > 0. $$ Thus, fix~$R > 0$ and consider any such~$v$. Let~$\{ v_\varepsilon \}$ be the sequence converging to~$v$ in~$\mathcal{X}$ given by Proposition~\ref{Gammaconvprop}$(ii)$. As we also know that~$\{ u_j \}$ converges to~$u$ in~$\mathcal{X}$, Proposition~\ref{Gammaconvprop} yields \begin{equation} \label{FTGCtech1} \mathscr{G}(u; B_R) \leqslantslant \liminf_{j \rightarrow +\infty} \mathscr{E}_{\varepsilon_j}(u_j; B_R) \quad \mbox{and} \quad \limsup_{j \rightarrow +\infty} \mathscr{E}_{\varepsilon_j}(v_{\varepsilon_j}; B_R) \leqslantslant \mathscr{G}(v; B_R). \end{equation} For any~$j \in \mathbb{N}$, we set $$ \bar{v}_j := \begin{cases} v_{\varepsilon_j} & \quad \mbox{in } B_R \\ u_j & \quad \mbox{in } \mathbb{R}^n \setminus B_R. \end{cases} $$ Observe that~$\bar{v}_j = u_j$ outside of~$B_R$, so that, by the minimality of~$u_j$, \begin{equation} \label{FTGCtech2} \mathscr{E}_{\varepsilon_j}(u_j; B_R) \leqslantslant \mathscr{E}_{\varepsilon_j}(\bar{v}_j; B_R). 
\end{equation} Define, for~$y \in \mathbb{R}^n \setminus B_R$, \begin{equation} \label{Phidef} \mathscr{P}hi(y) := \int_{B_R} \frac{dx}{|x - y|^{n + 2 s}}. \end{equation} Note that \begin{equation} \label{PhiL1} \mathscr{P}hi \in L^1(\mathbb{R}^n \setminus B_R). \end{equation} Indeed, \begin{align*} \int_{\mathbb{R}^n \setminus B_R} \leqslantslantft| \mathscr{P}hi(y) \right| dy & = \int_{B_R} \leqslantslantft[ \int_{\mathbb{R}^n \setminus B_R} \frac{dy}{|x - y|^{n + 2 s}} \right] dx \leqslantslant \int_{B_R} \leqslantslantft[ \int_{\mathbb{R}^n \setminus B_{R - |x|}(x)} \frac{dy}{|x - y|^{n + 2 s}} \right] dx \\ & = \frac{n |B_1|}{2 s} \int_{B_R} \frac{dx}{\leqslantslantft( R - |x| \right)^{2 s}} \leqslantslant \frac{n |B_1| R^{n - 1}}{2 s} \int_0^R \frac{d\rho}{\leqslantslantft( R - \rho \right)^{2 s}} = \frac{n |B_1| R^{n - 2 s}}{2 s (1 - 2 s)} \\ & < +\infty, \end{align*} as~$s < 1/2$. For~$x \in B_R$ and~$y \in \mathbb{R}^n \setminus B_R$, we then compute \begin{align*} \leqslantslantft| \leqslantslantft| \bar{v}_j(x) - \bar{v}_j(y) \right|^2 - \leqslantslantft| v_{\varepsilon_j}(x) - v_{\varepsilon_j}(y) \right|^2 \right| & = \leqslantslantft| \leqslantslantft| v_{\varepsilon_j}(x) - u_j(y) \right|^2 - \leqslantslantft| v_{\varepsilon_j}(x) - v_{\varepsilon_j}(y) \right|^2 \right| \\ & = \leqslantslantft| 2 v_{\varepsilon_j}(x) - u_j(y) - v_{\varepsilon_j}(y) \right| \leqslantslantft| v_{\varepsilon_j}(y) - u_j(y) \right| \\ & \leqslantslant 4 \leqslantslantft| v_{\varepsilon_j}(y) - u_j(y) \right|. \end{align*} Hence, recalling the definition of~$\bar{v}_j$ and~\eqref{Kbounds} we get \begin{equation} \label{FTGCtech3} \begin{aligned} & \lim_{j \rightarrow +\infty} \leqslantslantft| \mathscr{E}_{\varepsilon_j}(\bar{v}_j; B_R) - \mathscr{E}_{\varepsilon_j}(v_{\varepsilon_j}; B_R) \right| \\ & \hspace{30pt} = 2 \lim_{j \rightarrow +\infty} \leqslantslantft| \mathscr{K}(\bar{v}_j; B_R, \mathbb{R}^n \setminus B_R) - \mathscr{K}(v_{\varepsilon_j}; B_R, \mathbb{R}^n \setminus B_R) \right| \\ & \hspace{30pt} \leqslantslant \lim_{j \rightarrow +\infty} \int_{B_R} \leqslantslantft[ \int_{\mathbb{R}^n \setminus B_R} \leqslantslantft| \leqslantslantft| \bar{v}_j(x) - \bar{v}_j(y) \right|^2 - \leqslantslantft| v_{\varepsilon_j}(x) - v_{\varepsilon_j}(y) \right|^2 \right| K(x, y) \, dy \right] dx \\ & \hspace{30pt} \leqslantslant 4 \mathscr{L}ambda \lim_{j \rightarrow +\infty} \int_{\mathbb{R}^n \setminus B_R} \leqslantslantft| v_{\varepsilon_j}(y) - u_j(y) \right| \mathscr{P}hi(y) \, dy \\ & \hspace{30pt} = 0, \end{aligned} \end{equation} where the last limit vanishes in view of Lebesgue's dominated convergence theorem, as both~$v_{\varepsilon_j}$ and~$u_j$ converge to~$u$ a.e.~in~$\mathbb{R}^n \setminus B_R$ and~\eqref{PhiL1} holds true. By putting together~\eqref{FTGCtech1},~\eqref{FTGCtech2} and~\eqref{FTGCtech3}, we obtain \begin{align*} \mathscr{G}(u; B_R) & \leqslantslant \liminf_{j \rightarrow +\infty} \mathscr{E}_{\varepsilon_j}(u_j; B_R) \leqslantslant \liminf_{j \rightarrow +\infty} \mathscr{E}_{\varepsilon_j}(\bar{v}_j; B_R) \\ & \leqslantslant \limsup_{j \rightarrow +\infty} \mathscr{E}_{\varepsilon_j}(v_{\varepsilon_j}; B_R) + \lim_{j \rightarrow +\infty} \leqslantslantft[ \mathscr{E}_{\varepsilon_j}(\bar{v}_j; B_R) - \mathscr{E}_{\varepsilon_j}(v_{\varepsilon_j}; B_R) \right] \\ & \leqslantslant \mathscr{G}(v; B_R), \end{align*} and the thesis follows. \end{proof} Analogously, the limit of class~A minimal surfaces for the~$K$-perimeter is itself a class~A minimal surface. 
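Before stating the next lemma, we record a quick numerical sanity check of the integrability property~\eqref{PhiL1} and of the role of the restriction~$s < 1/2$. The following sketch is only an illustration in dimension~$n = 1$ with~$R = 1$ (it is not part of the proofs, and it assumes a Python environment with SciPy available): it compares a numerical evaluation of the integral over~$\mathbb{R} \setminus B_1$ of the function defined in~\eqref{Phidef} with its closed-form value~$2^{1-2s}/(s(1-2s))$, which blows up as~$s \uparrow 1/2$.
\begin{verbatim}
# Illustrative check (n = 1, R = 1): integrate Phi(y) = \int_{-1}^{1} |x-y|^{-1-2s} dx
# over |y| > 1 and compare with the closed form 2^{1-2s} / (s (1-2s)), valid for s < 1/2.
from scipy.integrate import quad

def total_mass(s):
    # For y > 1 one has Phi(y) = ((y-1)^{-2s} - (y+1)^{-2s}) / (2s) explicitly.
    phi = lambda y: ((y - 1.0) ** (-2 * s) - (y + 1.0) ** (-2 * s)) / (2 * s)
    # Split at y = 2 to help quad with the integrable singularity at y = 1.
    near, _ = quad(phi, 1.0, 2.0)
    far, _ = quad(phi, 2.0, float("inf"))
    return 2.0 * (near + far)      # factor 2 accounts for y < -1 by symmetry

for s in (0.1, 0.25, 0.4, 0.49):
    exact = 2.0 ** (1 - 2 * s) / (s * (1 - 2 * s))
    print(f"s = {s:4.2f}: numerical = {total_mass(s):10.4f}, closed form = {exact:10.4f}")
\end{verbatim}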
\begin{lemma} \label{Perstablem} For any~$k \in \mathbb{N}$, let~$\partial E_k$ be a class~A minimal surface for~$\mathscr{P}er_K$ and suppose that the sequence~$\{ E_k \}$ converges in~$\mathcal{X}$ to a set~$E \subset \mathbb{R}^n$, such that~$\mathscr{P}er_K(E; \Omega) < +\infty$, for any bounded open~$\Omega \subset \mathbb{R}^n$. Then,~$\partial E$ is a class~A minimal surface for~$\mathscr{P}er_K$. \end{lemma} \begin{proof} The proof of this result is quite similar to that of Lemma~\ref{FTGClem}. By possibly passing to a subsequence, we may suppose that~$\chi_{E_k} \rightarrow \chi_E$ a.e.~in~$\mathbb{R}^n$. Given any~$R > 0$, we need to prove that \begin{equation} \label{Perstabthesis} \mathscr{P}er_K(E; B_R) \leqslant \mathscr{P}er_K(F; B_R) \quad \mbox{for any set } F \mbox{ such that } F \setminus B_R = E \setminus B_R. \end{equation} Given such an~$F$, we set~$\bar{F}_k := \left( F \cap B_R \right) \cup \left( E_k \setminus B_R \right)$. Since~$\bar{F}_k \setminus B_R = E_k \setminus B_R$, we know that \begin{equation} \label{Perstabtech1} \mathscr{P}er_K(E_k; B_R) \leqslant \mathscr{P}er_K(\bar{F}_k; B_R). \end{equation} Also, thanks to representation~\eqref{PerKchi} and Fatou's lemma, it easily follows that \begin{equation} \label{Perstabtech2} \mathscr{P}er_K(E; B_R) \leqslant \liminf_{k \rightarrow +\infty} \mathscr{P}er_K(E_k; B_R). \end{equation} Observe now that \begin{align*} \mathscr{P}er_K(\bar{F}_k; B_R) - \mathscr{P}er_K(F; B_R) & = \mathcal{L}_K(F \cap B_R, \mathbb{R}^n \setminus (E_k \cup B_R)) - \mathcal{L}_K(F \cap B_R, \mathbb{R}^n \setminus (E \cup B_R)) \\ & \quad + \mathcal{L}_K(E_k \setminus B_R, B_R \setminus F) - \mathcal{L}_K(E \setminus B_R, B_R \setminus F). \end{align*} As $$ \left| \mathcal{L}_K(F \cap B_R, \mathbb{R}^n \setminus (E_k \cup B_R)) - \mathcal{L}_K(F \cap B_R, \mathbb{R}^n \setminus (E \cup B_R)) \right| \leqslant \mathcal{L}_K((E_k \Delta E) \setminus B_R, F \cap B_R), $$ and $$ \left| \mathcal{L}_K(E_k \setminus B_R, B_R \setminus F) - \mathcal{L}_K(E \setminus B_R, B_R \setminus F) \right| \leqslant \mathcal{L}_K((E_k \Delta E) \setminus B_R, B_R \setminus F), $$ we may compute $$ \left| \mathscr{P}er_K(\bar{F}_k; B_R) - \mathscr{P}er_K(F; B_R) \right| \leqslant \mathcal{L}_K((E_k \Delta E) \setminus B_R, B_R) \leqslant \Lambda \int_{\mathbb{R}^n \setminus B_R} \chi_{E_k \Delta E}(y) \mathscr{P}hi(y) \, dy, $$ where~$\mathscr{P}hi \in L^1(\mathbb{R}^n \setminus B_R)$ is the function defined in~\eqref{Phidef}. Since~$\chi_{E_k \Delta E} \rightarrow 0$ a.e.~in~$\mathbb{R}^n$, using Lebesgue's dominated convergence theorem we deduce that $$ \lim_{k \rightarrow +\infty} \mathscr{P}er_K(\bar{F}_k; B_R) = \mathscr{P}er_K(F; B_R). $$ Consequently, by this,~\eqref{Perstabtech1} and~\eqref{Perstabtech2}, we conclude that \begin{align*} \mathscr{P}er_K(E; B_R) \leqslant \liminf_{k \rightarrow +\infty} \mathscr{P}er_K(E_k; B_R) \leqslant \lim_{k \rightarrow +\infty} \mathscr{P}er_K(\bar{F}_k; B_R) \leqslant \mathscr{P}er_K(F; B_R), \end{align*} that is~\eqref{Perstabthesis}. \end{proof} With all these preliminary results at hand, we may now prove the main proposition of the section. Observe that, as a consequence of it, we deduce the validity of Theorem~\ref{PerPLthm}, at least in the case of~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$. \begin{proposition} \label{omegaratprop} Let~$\theta \in (0, 1)$ and~$\omega \in \mathbb{R}^n \setminus \{ 0 \}$ be fixed.
For~$\varepsilon > 0$, let~$u_\varepsilon$ be the class~A minimizer of~$\mathscr{E}_{\varepsilon}$ associated to~$\theta$ and~$\omega$, constructed in Theorem~\ref{epsPLthm}. Then, there exists an infinitesimal sequence~$\{ \varepsilon_j \}$ of positive numbers such that \begin{enumerate}[$(i)$] \item $u_{\varepsilon_j}$ converges in~$\mathcal{X}$ to a function~$u = \chi_E - \chi_{\mathbb{R}^n \setminus E}$, for some measurable set~$E \subset \mathbb{R}^n$; \item for any~$\eta \in (0, 1)$, the set~$\{ |u_{\varepsilon_j}| \leqslantslant \eta \}$ converges locally uniformly to~$\partial E$; \item $\partial E$ is a class~A minimal surface for~$\mathscr{P}er_K$ and, for any bounded set~$\Omega \subset \mathbb{R}^n$, $$ \mathscr{P}er_K(E; \Omega) \leqslantslant C_\Omega, $$ where~$C_\Omega > 0$ is a constant depending only on~$\Omega$ and universal quantities; \item there exists a universal constant~$c \in (0, 1)$ such that, given any point~$x_0 \in \partial E$, $$ \leqslantslantft| E \cap B_R(x_0) \right| \geqslantslant c R^n \quad \mbox{and} \quad \leqslantslantft| B_R(x_0) \setminus E \right| \geqslantslant c R^n, $$ for any~$0 < R < \xi$; \item the inclusions $$ \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x < 0 \bigg\} \subset E \subset \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x \leqslantslant \tau M_0 \bigg\}, $$ hold true, where~$M_0 > 0$ is the constant found in Theorem~\ref{epsPLthm}; \item if~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$, then~$\partial E$ is periodic with respect to~$\sim_{\tau, \, \omega}$. \end{enumerate} \end{proposition} \begin{proof} Let~$\{ R_k \}$ be an increasing sequence of positive numbers that diverges to~$+\infty$, and~$\{ \varepsilon_j \}$ an infinitesimal sequence of positive numbers. Thanks to Proposition~\ref{epsenestprop}, for any~$k \in \mathbb{N}$ there exists a constant~$C_k > 0$ such that $$ \mathscr{E}_{\varepsilon_j}(u_{\varepsilon_j}; B_{R_k}) \leqslantslant C_k \quad \mbox{for any } j \in \mathbb{N}. $$ Then, after a standard diagonal argument and repeated applications of Lemma~\ref{directlem}, it is easy to see that a subsequence of~$\{ u_{\varepsilon_j} \}$ (that we label in the same way) converges in~$\mathcal{X}$ and pointwise a.e.~in~$\mathbb{R}^n$ to a function~$u = \chi_{E} - \chi_{\mathbb{R}^n \setminus E}$, for some measurable set~$E \subseteq \mathbb{R}^n$. Moreover, given any bounded set~$\Omega \subset \mathbb{R}^n$, we may select a sufficiently large~$k \in \mathbb{N}$ so that~$\Omega \subseteq B_{R_k}$ and hence by Proposition~\ref{Gammaconvprop}$(i)$, $$ 4 \mathscr{P}er_K(E; \Omega) = \mathscr{G}(u; \Omega) \leqslantslant \mathscr{G}(u; B_{R_k}) \leqslantslant \liminf_{j \rightarrow +\infty} \mathscr{E}_{\varepsilon_j}(u_{\varepsilon_j}; B_{R_k}) \leqslantslant C_k < +\infty. $$ Consequently, Lemma~\ref{FTGClem} ensures that~$\partial E$ is a class~A minimal surface for~$\mathscr{P}er_K$. Also, when~$\omega \in \tau \mathbb{Q}^n \setminus \{ 0 \}$, then~$\partial E$ clearly inherits the periodicity properties shared by each element of the approximating sequence~$\{ u_{\varepsilon_j}\}$. We have therefore showed that~$(i)$,~$(iii)$ and~$(vi)$ hold true. Concerning~$(v)$, observe that, by Theorem~\ref{epsPLthm}, we may assume without loss of generality that~$u_{\varepsilon_j} \geqslantslant \theta$ on~$\{ \omega \cdot x \leqslantslant 0 \}$. 
Accordingly, $$ u(x) = \lim_{j \rightarrow +\infty} u_{\varepsilon_j}(x) \geqslantslant \theta > 0 \quad \mbox{for a.a.~} x \mbox{ such that } \omega \cdot x \leqslantslant 0. $$ As~$u$ only attains the values~$1$ and~$-1$, we conclude that, up to changing~$u$ on a negligible set, it holds~$u = 1$ on~$\{ \omega \cdot x < 0 \}$. Similarly, one shows that~$u = -1$ on~$\{ \omega \cdot x \geqslantslant \tau M_0|\omega| \}$ and~$(v)$ readily follows. Now we deal with the proof of~$(ii)$. We argue by contradiction and suppose that there exist a compact set~$K \subset \mathbb{R}^n$, a value~$\delta \in (0, \xi)$ and a sequence of points~$\{ x_j \} \subset K$, such that~$|u_{\varepsilon_j}(x_j)| \leqslantslant \eta$ and~$B_\delta(x_j) \subset E$. Up to a subsequence,~$\{x_j\}$ converges to some point~$x_0 \in K$, with~$B_{\delta/2}(x_0) \subset E$. That is, \begin{equation} \label{u=1delta/2} u = 1 \mbox{ on } B_{\delta/2}(x_0). \end{equation} By Proposition~\ref{epsdensestprop}, for any large enough~$j$ we have $$ \leqslantslantft| \{ u_{\varepsilon_j} < -1/2 \} \cap B_{\delta/2}(x_0) \right| \geqslantslant \leqslantslantft| \{ u_{\varepsilon_j} < -1/2 \} \cap B_{\delta / 4}(x_j) \right| \geqslantslant c \leqslantslantft| B_{\delta/2} \right|, $$ for some~$c \in (0, 1)$ independent of~$j$. Accordingly, \begin{align*} \int_{B_{\delta/2}(x_0)} u_{\varepsilon_j}(x) \, dx & = \int_{\{ u_{\varepsilon_j} < -1/2 \} \cap B_{\delta/2}(x_0)} u_{\varepsilon_j}(x) \, dx + \int_{\{ u_{\varepsilon_j} \geqslantslant -1/2 \} \cap B_{\delta/2}(x_0)} u_{\varepsilon_j}(x) \, dx \\ & \leqslantslant - \frac{1}{2} \leqslantslantft| \{ u_{\varepsilon_j} < -1/2 \} \cap B_{\delta/2}(x_0) \right| + \leqslantslantft| \{ u_{\varepsilon_j} \geqslantslant -1/2 \} \cap B_{\delta/2}(x_0) \right| \\ & \leqslantslant \leqslantslantft( 1 - \frac{c}{2} \right) \leqslantslantft| B_{\delta/2} \right|. \end{align*} But then, taking advantage of this,~\eqref{u=1delta/2} and the fact that~$u_{\varepsilon_j} \to u$ in~$L^1(B_{\delta/2}(x_0))$, we get \begin{align*} \leqslantslantft| B_{\delta/2} \right| = \int_{B_{\delta / 2}(x_0)} u(x) \, dx & = \lim_{j \rightarrow +\infty} \int_{B_{\delta/2}(x_0)} u_{\varepsilon_j}(x) \, dx \leqslantslant \leqslantslantft( 1 - \frac{c}{2} \right) |B_{\delta/2}|, \end{align*} which is a contradiction. Hence,~$(ii)$ holds true. Finally, we show the validity of the density estimates stated in~$(iv)$. By the uniform convergence result of item~$(ii)$, we infer the existence of a sequence of points~$\{ x_j \} \subset B_{R/2}(x_0)$ at which~$|u_{\varepsilon_j}(x_j)| \leqslantslant 1/2$. Proposition~\ref{epsdensestprop} then ensures that $$ \leqslantslantft| \{ u_{\varepsilon_j} > 0 \} \cap B_R(x_0) \right| \geqslantslant \leqslantslantft| \{ u_{\varepsilon_j} > 0 \} \cap B_{R/3}(x_j) \right| \geqslantslant c R^n, $$ for some universal constant~$c \in (0, 1)$. As, by point~$(i)$,~$u_{\varepsilon_j} \rightarrow u$ a.e.~in~$\mathbb{R}^n$, making use of Lemma~\ref{diffsimmlimlem} we obtain $$ \leqslantslantft| E \cap B_{R}(x_0) \right| = \lim_{j \rightarrow +\infty} \leqslantslantft| \{ u_{\varepsilon_j} > 0 \} \cap {B_R(x_0)} \right| \geqslantslant c R^n. $$ Similarly, one checks the validity of the estimate for the complement of~$E$. \end{proof} To complete the proof of Theorem~\ref{PerPLthm} we now only need to deal with directions~$\omega \in \mathbb{R}^n \setminus \tau \mathbb{Q}^n$. This is done in the following proposition, via an approximation argument. 
\begin{proposition} Let~$\omega \in \mathbb{R}^n \setminus \tau \mathbb{Q}^n$. Then, there exists a class~A minimal surface~$\partial E$ for~$\mathscr{P}er_K$, such that \begin{equation} \label{Eplanelike} \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x < 0 \bigg\} \subset E \subset \bigg\{ x \in \mathbb{R}^n : \frac{\omega}{|\omega|} \cdot x \leqslant \tau M_0 \bigg\}. \end{equation} Moreover, there exists a sequence~$\{ \omega_k \} \subset \tau \mathbb{Q}^n \setminus \{ 0 \}$ such that~$\omega_k \rightarrow \omega$ and, denoting by~$\partial E_k$ the class~A minimal surface for~$\mathscr{P}er_K$ associated with~$\omega_k$ given by Proposition~\ref{omegaratprop}, it holds~$E_k \rightarrow E$ in~$L^1_{\rm loc}(\mathbb{R}^n)$ and~$\partial E_k \rightarrow \partial E$ locally uniformly in~$\mathbb{R}^n$. \end{proposition} \begin{proof} As a preliminary observation, notice that, if~$u = \chi_A - \chi_{\mathbb{R}^n \setminus A}$, for some measurable~$A \subseteq \mathbb{R}^n$, then $$ \mathscr{P}er_K(A; \Omega) = \frac{1}{4} \, \mathscr{E}_\varepsilon(u; \Omega), $$ for any~$\varepsilon > 0$ and any bounded set~$\Omega \subset \mathbb{R}^n$. Let now~$\{ \omega_k \} \subset \tau \mathbb{Q}^n \setminus \{ 0 \}$ be any sequence converging to~$\omega$ and denote by~$\partial E_k$ the corresponding class~A minimal surface constructed in Proposition~\ref{omegaratprop}. Let~$\{ R_i \}$ be any monotone sequence of positive numbers diverging to~$+\infty$, and let~$\{ \varepsilon_k \}$ be any infinitesimal sequence. Then, by Proposition~\ref{omegaratprop}$(iii)$, $$ \mathscr{E}_{\varepsilon_k}(\chi_{E_k} - \chi_{\mathbb{R}^n \setminus E_k}; B_{R_i}) = 4 \mathscr{P}er_K(E_k; B_{R_i}) \leqslant C_i, $$ for some constant~$C_i > 0$ that only depends on~$i$ and universal quantities. Consequently, with the aid of Lemma~\ref{directlem} and a diagonal argument analogous to that presented in the proof of Proposition~\ref{omegaratprop}, we obtain that, up to a subsequence,~$E_k$ converges in~$L^1_{\rm loc}(\mathbb{R}^n)$ and pointwise a.e.~in~$\mathbb{R}^n$ to some measurable~$E \subseteq \mathbb{R}^n$. The inclusions in~\eqref{Eplanelike} then readily follow from the analogous ones obtained in Proposition~\ref{omegaratprop}$(v)$ for each~$E_k$. Moreover, using Fatou's lemma it is immediate to check that the~$K$-perimeter of~$E$ in any compact set is finite. Hence, by Lemma~\ref{Perstablem}, the set~$\partial E$ is a class~A minimal surface for~$\mathscr{P}er_K$. We are therefore only left to show the locally uniform convergence of~$\partial E_k$ to~$\partial E$. The argument is similar to the one adopted in the proof of Proposition~\ref{omegaratprop}$(ii)$. Suppose by contradiction that there exist a compact set~$K$, a number~$\delta \in (0, \xi)$ and a sequence of points~$\{ x_k \}$ such that~$x_k \in \partial E_k \cap K$ and~$B_\delta(x_k) \cap E = \varnothing$. Up to a subsequence, we see that~$x_k \rightarrow x_0$, for some~$x_0 \in K$, and \begin{equation} \label{EoutBdelta} B_{\delta/2}(x_0) \cap E = \varnothing. \end{equation} In view of Proposition~\ref{omegaratprop}$(iv)$, there exists a universal constant~$c \in (0,1)$ for which $$ \left| E_k \cap B_{\delta/2}(x_0) \right| \geqslant \left| E_k \cap B_{\delta / 4}(x_k) \right| \geqslant c \left( \frac{\delta}{4} \right)^n, $$ for any~$k \in \mathbb{N}$ sufficiently large.
By the~$L^1_{\rm loc}$ convergence of the~$E_k$'s, we then have that $$ \left| E \cap B_{\delta/2}(x_0) \right| = \lim_{k \rightarrow +\infty} \left| E_k \cap B_{\delta/2}(x_0) \right| \geqslant c \left( \frac{\delta}{4} \right)^n > 0, $$ in contradiction with~\eqref{EoutBdelta}. The proof of the proposition is thus complete. \end{proof} \end{document}
\begin{document} \title{Quantum key distribution with asymmetric channel noise} \author{Xiang-Bin Wang}\email{[email protected]} \affiliation{Imai Quantum Computation and Information Project, ERATO, JST, Hongo White Building 201, 5-28-3, Hongo, Bunkyo, Tokyo 113-0033, Japan} \date{\today} \begin{abstract}We show that one may take advantage of the asymmetry of channel noise. With appropriate modifications to the standard protocols, both the key rate and the tolerable total channel noise can be increased if the channel noise is asymmetric. \end{abstract} \maketitle \section {Introduction} Unlike classical key distribution, quantum key distribution (QKD) is built on the fact that measuring an unknown quantum state will almost surely disturb the state. The first published QKD protocol, proposed by Bennett and Brassard in 1984\cite{BB}, is called BB84. For a history of the subject, one may refer to, for example, Ref.~\cite{gisin}. Since then, studies on QKD have been extensive. Strict mathematical proofs of unconditional security have already been given\cite{qkd,mayersqkd,others,others2,shorpre}. The security proof is greatly simplified if one connects it with the quantum entanglement purification protocol (EPP)\cite{BDSW,deutsch,qkd,shorpre}. For all QKD protocols, the first important requirement is unconditional security, i.e., whatever type of attack is used, the eavesdropper (Eve) can never obtain non-negligible information about the final key shared by Alice and Bob. A conditionally secure protocol without a testable condition is not so useful, since it is essentially the same as a classical protocol as far as security is concerned. In particular, one cannot base the security on certain specific properties of physical channels, since those specific properties may be changed by Eve at the time the QKD protocol is running. For example, based on certain properties of the physical channel, Alice and Bob may apply some physical treatment to decrease the error rate of their physical channel. Given such a treatment, suppose the error rate of the improved physical channel is given by the function $f(E_0)$, where $E_0$ is the error rate of the original physical channel. Suppose that in the QKD protocol Alice and Bob use the improved physical channel. It will be insecure if Alice and Bob in their QKD protocol only test the errors of the original physical channel, then use the function $f$ to calculate the error rate for the improved channel, and then use the calculated value as the error rate. This is because Eve may change the properties of the original physical channel, in which case the mapping $f$ could be incorrect. However, it will be $secure$ if Alice and Bob directly test the error rate of the improved physical channel and use the tested results as the input of error correction and privacy amplification. Besides the security requirement, we also hope to improve the feasibility of a protocol in practical use, e.g., to obtain a larger tolerable channel error rate and a higher key rate. In contrast with the security issue, one can improve the feasibility of a protocol based on the properties of the physical channel. Alice and Bob may first investigate certain properties of their physical channel and then modify the protocol according to those properties. Given the specific properties of the physical channel, the modified protocol will have certain advantages, e.g., a lower error rate in their error test.
Note that here they should first be sure that the modified QKD protocol is unconditionally secure $even$ those assumed properties of physical channel don't exist. An $unconditional$ secure protocol with certain $conditional$ advantage is allowed in QKD. That is to say, after taking the modification, we expect certain advantageous result of the error test. In running the modified protocol, after the error test step, if Alice and Bob find that the result is close to their expected result, the advantage holds; if Alice and Bob find that the result is quite different from the expected one, the advantage is seriously undermined or even totally lost, but the protocol is still secure. In case they find the error test result quite different from the expected, their only loss is in the issue of key rate. Conditional advantage is useful if we have substantial probability that the error test result in QKD is close to the expected one. Although Eve may in principle do whatever to change the original physical properties, if Eve wants to hide her presence, she must not significantly change the error test result of the QKD protocol, since Alice and Bob will judge that there must be an Eve if their error test result is too much different from the expected result. Moreover, the final key rate is only dependent on the error test results rather than the full properties of the channel. If the QKD protocol itself is unconditionally secure, two $different$ channels are $equivalent$ for the QKD purpose provided that they cause the same error test result. This is to say, given an unconditionally secure protocol, if Eve hides her presence, Eve's action does not affect the error test result done by Alice and Bob, i.e., Eve's channel is $equivalent$ to the supposed physical channel. The conditional advantage $always$ holds if Eve hides herself. If Eve does not hide herself, she allows her action to cause the error test result much different from the expected one. This may decrease the final key rate, or even destroy the protocol if the error test result is too far away from the expected one. Even in such a case Eve cannot obtain a nonnegligible amount of information to the final key, given an unconditionally secure protocol. In this work, in evaluating the feasibility of our protocol, we only consider an invisible Eve, i.e., Eve always hides her presence. Note that no protocol can work as efficiently as it is expected with a visible Eve. A visible Eve can always destroy {\it any} protocol. We shall propose protocols with higher key rate and larger channel error rate threshold, given an asymmetric physical channel and invisible Eve. The key rate and channel error threshold of our protocol are dependent on the physical channel itself, but the security is independent of the physical channel, i.e., our protocols are unconditionally secure under whatever attack outside Alice's and Bob's labs. Most of the known prepare-and-measure protocols assume the symmetric channel to estimate the noise threshold for the protocol. Also, most of the protocols use the symmetrization method: Alice randomly chooses bases from certain set to prepare her initial state. All bases in the set is chosen with equal probability. In such a way, even the noise of the channel (Eve's channel) is not symmetric, the symmetrization make the error rate to the key bits be always symmetric. 
In this work we show that actually we can let all key bits be prepared in a single basis, and that we can have advantages in key rate or noise threshold provided that the channel noise is asymmetric. \section {Channel error, tested error and key-bits error} Normally, Alice will transmit qubits in different bases (e.g., the Z basis and the X basis) to Bob, and Bob will also measure them in different bases. The Hadamard transformation $H = \frac{1}{\sqrt{2}} \left(\begin{array}{rr}1 & 1 \\ 1 & -1 \end{array} \right) $ interchanges the $Z-$basis $\{|0\rangle,|1\rangle\}$ and $X-$basis $\{|\pm\rangle=\frac{1}{\sqrt 2}(|0\rangle\pm |1\rangle)\}$. We shall use the term key bits for those raw bits which are used to distill the final key, and the term check bits for those bits whose values are announced publicly to test the error rates. Our purpose is to know the bit-flip rate and phase-flip rate of the key bits. We do it in this way: first test the flipping rates of the check bits, then deduce the channel flip rates, and then determine the error rates of the key bits. As was shown in Ref.~\cite{gl}, the flipping rates of qubits in different bases are in general different, due to the basis transformation. Here we give a more detailed study of this issue. We first consider the 4-state protocol with CSS code, where only two bases, the $Z-$basis and the $X-$basis, are involved. In the case of the 4-state protocol, we define an asymmetric channel as a channel whose bit flip error rate differs from its phase flip error rate. The check bits will be discarded after the error test. We use the term Z-bits for those qubits which are prepared and measured in the Z basis, and the term $X$-bits for those bits which are prepared and measured in the X basis. For clarity we shall regard Alice's action of preparing a state in the X basis as the joint action of state preparation in the Z basis followed by a Hadamard transform. We shall also regard Bob's measurement in the X basis as the joint action of first taking a Hadamard transform and then taking the measurement in the Z basis. Therefore we shall also call the Z-bits $I-$bits and the $X$-bits H-bits. To let the CSS code work properly, we need to know the values of the average bit-flip rate and the average phase-flip rate over all key bits. We define three Pauli matrices: \[ \sigma_x = \left(\begin{array}{rr}0 & 1 \\ 1 & 0 \end{array} \right), \quad \sigma_y = \left(\begin{array}{rr}0 & -i \\ i & 0 \end{array} \right), \quad \sigma_z = \left(\begin{array}{rr}1 & 0 \\ 0 & -1 \end{array} \right). \] The matrix $\sigma_x$ applies a bit flip error but no phase flip error to a qubit, $\sigma_z$ applies a phase flip error but no bit flip error, and $\sigma_y$ applies both errors. We assume the $\sigma_x,\sigma_y,\sigma_z$ rates of the physical channel are $q_{x0},q_{y0},q_{z0}$, respectively. Note that the phase flip rate and the bit flip rate of the channel are the sums $q_{z0}+q_{y0}$ and $q_{x0}+q_{y0}$, respectively. Explicitly we have \begin{eqnarray} \nonumber p_{x0}=q_{x0}+q_{y0}\\ p_{z0}=q_{z0}+q_{y0}\\ p_{y0}=q_{x0}+q_{z0} \end{eqnarray} and $p_{y0}$ is the channel flipping rate of the qubits prepared in the $Y$-basis ($\frac{1}{\sqrt 2}(|0\rangle\pm i|1\rangle)$). After the error test on the check bits, Alice and Bob know the values of $p_{z0}$ and $p_{x0}$ of the channel. Given the channel flipping rates $q_{x0},q_{y0},q_{z0}$, one can calculate the bit-flip rate and phase-flip rate for the remaining qubits.
For those $I-$bits, the bit-flip rate and phase-flip rate are just $q_{x0}+q_{y0}$ and $q_{z0}+q_{y0}$ respectively, which are just the channel bit flip rate and phase flip rate. Therefore the channel bit-flip rate $p_{x0}$ is identical to the tested flip rate of the $I-$bits. The channel phase-flip rate $p_{z0}$ can be determined by testing the flip rate of the $H-$bits. An $H-$qubit is a qubit treated in the following order \\{\it prepared in $Z$ basis, Hadamard transform, transmitted over the noisy channel, Hadamard transform, measurement in $Z$ basis.} If the channel noise offers a $\sigma_y$ error, the net effect is \begin{eqnarray} H \sigma_y H \left(\begin{array}{c}|0\rangle\\|1\rangle\end{array}\right) =\sigma_y \left(\begin{array}{c}|0\rangle\\|1\rangle\end{array}\right). \end{eqnarray} This shows that a channel $\sigma_y$ error will also cause a $\sigma_y$ error to an $H-$qubit. Similarly, due to the facts \begin{eqnarray} H\sigma_z H=\sigma_x\nonumber\\ H\sigma_x H =\sigma_z, \end{eqnarray} a channel $\sigma_x$ flip or a channel $\sigma_z$ flip will cause a net $\sigma_z$ error or $\sigma_x$ error, respectively, to an $H-$bit. Consequently, a channel phase flip causes a bit flip error to an H-bit, and a channel bit flip causes a phase flip error to an H-bit. That is to say, {\it the measured error of the H-bits is just the channel phase flipping rate.} Therefore the average bit-flip error rate and phase-flip error rate of each type of key bits will be \begin{eqnarray} p_z^I=p_x^H=p_{z0},\nonumber\\ p_z^H=p_{x}^I=p_{x0} \end{eqnarray} Here $p_x^H,p_x^I$ ($p_z^H,p_z^I$) denote the bit flip (phase flip) error rates of the $H$-bits and $I$-bits among the key bits, respectively. Suppose that a fraction $\eta$ of the key bits are $I$-bits and a fraction $1-\eta$ are $H$-bits; the average flip errors of the key bits are then \begin{eqnarray} p_x=\eta p_x^I+(1-\eta)p_x^H=\eta p_{x0}+(1-\eta)p_{z0},\nonumber\\ p_z=\eta p_z^I+(1-\eta)p_z^H=\eta p_{z0}+(1-\eta)p_{x0}. \end{eqnarray} \section{ Key rate of QKD protocols with one way communication.} We first consider an almost trivial application of our analysis of the asymmetric channel above. In the standard BB84 protocol, since the preparation bases of the key bits are symmetrized, the average bit flip error and phase flip error of those key bits are always equal, no matter whether the channel noise itself is symmetric or not. That is to say, when half of the key bits are $X$-bits and half of them are Z-bits, the average flip rates over all key bits are always \begin{eqnarray} p_x=p_z=(p_{x0}+p_{z0})/2. \end{eqnarray} Therefore, for whatever asymmetric channel, the key rate of the standard BB84 protocol (Shor-Preskill protocol)\cite{shorpre} is $1-2H(\frac{p_{x0}+p_{z0}}{2})$\cite{shorpre}, where $H(t)=-(t\log_2t+(1-t)\log_2(1-t))$. (Note that in the 4-state protocol, an asymmetric channel is simply defined by $p_{x0}\not= p_{z0}$.) However, if all key bits had been prepared and measured in the $Z$ basis, then the bit flip and phase flip rates of the key bits would equal the flipping rates of the channel itself. In such a case the key rate is \begin{eqnarray} R=1-H(p_{x0})-H(p_{z0}).\label{unrate} \end{eqnarray} Obviously, by the concavity of the binary entropy $H$, except in the special case $p_{x0}=p_{z0}$ this is always larger than the key rate of the standard BB84 protocol with CSS code (Shor-Preskill protocol), where the key bits are prepared in the Z-basis and the $X$-basis with equal probability. For a higher key rate, one should $always$ use the above modified BB84 protocol with all key bits prepared and measured in a single basis, the Z-basis.
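As an illustration of the comparison between the two key rates above, the following minimal sketch (in Python, purely for illustration; the channel rates used are hypothetical values, not taken from any experiment) evaluates $1-2H(\frac{p_{x0}+p_{z0}}{2})$ and $1-H(p_{x0})-H(p_{z0})$ for a few asymmetric choices of $(p_{x0},p_{z0})$; by the concavity of $H$, the single-basis rate is never smaller.
\begin{verbatim}
# Compare the symmetrized BB84 key rate with the single-(Z)-basis key rate of Eq. (unrate).
from math import log2

def H(t):
    # binary entropy; H(0) = H(1) = 0 by convention
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -(t * log2(t) + (1.0 - t) * log2(1.0 - t))

for px0, pz0 in [(0.05, 0.01), (0.08, 0.02), (0.10, 0.10)]:   # hypothetical channel rates
    r_symmetrized  = 1.0 - 2.0 * H((px0 + pz0) / 2.0)
    r_single_basis = 1.0 - H(px0) - H(pz0)
    print(f"p_x0={px0:.2f}, p_z0={pz0:.2f}: "
          f"symmetrized {r_symmetrized:.4f} <= single basis {r_single_basis:.4f}")
\end{verbatim}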
In fact, this type of 4-state QKD protocol with one single basis for the key bits had already been proposed in the past for different purposes; see, e.g., Ref.~\cite{onep}. Now we consider the case of the 6-state protocol. In the standard protocol\cite{6state}, the key bits are equally distributed over all 3 different bases. In distilling the final key, each type of flipping error used is the average value over the 3 bases, i.e., \begin{eqnarray}\label{average} \bar q_x=\frac{q_{x0}+2q_{z0}}{3}, \bar q_z=\frac{q_{x0}+q_{z0}+q_{y0}}{3}, \bar q_y=\frac{q_{x0}+2q_{y0}}{3}. \end{eqnarray} The key rate is \begin{eqnarray} r=1-H(\bar q)\nonumber\\ H(\bar q)=-\bar q_x \log_2 \bar q_x - \bar q_y \log_2 \bar q_y - \bar q_z \log_2 \bar q_z - q_{I0} \log_2 q_{I0}. \label{6mix} \end{eqnarray} This is the key rate of the standard 6-state protocol, where we mix all qubits in different bases together. However, such a mixing is unnecessary. We can choose to simply distill 3 batches of final keys from the $Z-$bits, $X-$bits and $Y-$bits separately. If we do it in such a way, the key rate is increased to \begin{eqnarray} r'=1-q_{x0}\log_2 q_{x0}-q_{y0}\log_2 q_{y0}-q_{z0}\log_2 q_{z0}- q_{I0}\log_2 q_{I0}.\label{6sep} \end{eqnarray} Obviously, $r'$ is $never$ less than $r$, since the mixing operation never decreases the entropy. The only case where $r'=r$ is $q_{x0}=q_{y0}=q_{z0}$. Therefore, for a higher key rate, we propose to $always$ distill 3 batches of final key separately. The advantage in such a case is unconditional: there is no loss for whatever channel. Now we start to consider something more subtle: the advantage conditional on prior information about the asymmetry of the noise of the physical channel. \section{ 6-state protocol with 2-way classical communications.} Symmetric channel noise for a 6-state protocol is defined by $q_{x0}=q_{y0}=q_{z0}$; if this condition is broken, we regard the channel as asymmetric for 6-state protocols. In the standard 6-state protocol\cite{6state,gl,chau}, symmetrization is used, i.e., the key bits consist equally of $X-,Y-,Z-$bits. When the channel noise itself is symmetric, i.e., $q_{x0}=q_{y0}=q_{z0}$, a 6-state protocol can have a higher noise threshold than that of a 4-state protocol. This is because in the 6-state protocol the $\sigma_y$ type of channel error rate is also detected. In removing the bit-flip error, the $\sigma_y$ error is also reduced; therefore the phase-flip error is partially removed. However, in a 4-state protocol, the $\sigma_y$ error is never tested; therefore we have to assume the worst situation, that $q_{y0}=0$\cite{gl}. We shall show that one can have a higher tolerable channel error rate if one modifies the existing protocols, given an asymmetric channel (i.e., a channel whose $Y$-bits flipping error differs from that of the $X$-bits or Z-bits). For example, consider the case $q_{y0}=0$ and $q_{x0}=q_{z0}$. The different types of error rates of the transmitted qubits are $$ q_{x}=q_{x0}, q_{y}=0,q_z=q_{z0} $$ for those $Z-$bits; $$ q_{x}=q_{z0}, q_{y}=0,q_z=q_{x0} $$ for those $X-$bits and $$ q_{x}=q_{z0}, q_{y}=q_{x0},q_z=0 $$ for $Y-$bits. The average error rates over all transmitted bits are: \begin{eqnarray} \bar q_x=q,\bar q_y=q/3, \bar q_z=2q/3, q=q_{x0}=q_{z0}.\label{trans} \end{eqnarray} With this fact, the threshold of total channel noise $q_{t0}=q_{x0}+q_{y0}+q_{z0}$ for the protocol of Ref.~\cite{chau} is $41.4\%$, the same as in the case of symmetric noise\cite{chau}.
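To illustrate the mechanism behind these threshold figures, the following minimal numerical sketch (in Python, for illustration only) iterates the bit-error-rejection recursion of Ref.~\cite{chau}, recalled later in Eq.~(\ref{errorrate}), on the effective error rates of $Y$-basis key bits for the channel $q_{x0}=q_{z0}=22\%$, $q_{y0}=0$ considered below. Since the $\sigma_z$-type rate of the $Y$-bits starts (and stays) at zero, all error rates are driven to zero by bit-error rejection alone; the sketch ignores finite-size effects and the phase-error (parity) steps of the full protocol.
\begin{verbatim}
# One B-step (bit-error rejection) acting on the error rates (qI, qx, qy, qz); cf. Eq. (errorrate).
def b_step(qI, qx, qy, qz):
    D = (qI + qz) ** 2 + (qx + qy) ** 2    # probability that a random pair passes the parity test
    return ((qI**2 + qz**2) / D, (qx**2 + qy**2) / D, 2*qx*qy / D, 2*qI*qz / D)

# Effective rates of Y-basis key bits for the channel q_x0 = q_z0 = 0.22, q_y0 = 0:
# channel (sigma_x, sigma_y, sigma_z) -> key-bit rates (q_z0, q_x0, q_y0) = (0.22, 0.22, 0).
qI, qx, qy, qz = 0.56, 0.22, 0.22, 0.0
for step in range(8):
    bit, phase = qx + qy, qz + qy
    print(f"round {step}: bit-flip rate = {bit:.4f}, phase-flip rate = {phase:.4f}")
    qI, qx, qy, qz = b_step(qI, qx, qy, qz)
\end{verbatim}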
Actually, by our numerical calculation we find that the threshold of total channel noise for the Chau protocol\cite{chau} is almost unchanged for whatever value of $q_{y0}$. However, if $all$ key bits were prepared in the $Y-$basis (the basis $\{|y\pm\rangle=\frac{1}{\sqrt 2}(|0\rangle\pm i|1\rangle)\}$), there would be no $\sigma_z$ type of error, and therefore one would only need to correct the bit-flip error. To see this we can regard a $Y-$qubit as a qubit treated in the following order \\{\it prepared in $Z$ basis, $T$ transform, transmitted over the noisy channel, $T^{-1}$ transform, measurement in $Z$ basis.} Here $T=\frac{1}{\sqrt 2}\left(\begin{array}{cc} 1 & i\\1 & -i \end{array}\right)$; it changes the states $|0\rangle,|1\rangle$ into $|y\pm\rangle$. The following facts \begin{eqnarray} T\sigma_x T^{-1}=\sigma_y; T\sigma_y T^{-1}=\sigma_z; T\sigma_z T^{-1}=\sigma_x \end{eqnarray} imply that a channel flip of type $\sigma_x,\sigma_y,\sigma_z$ will cause an error of type $\sigma_y,\sigma_z,\sigma_x$, respectively, to the $Y-$bits. With this fact, if the channel error rate $q_{y0}$ is 0, the $\sigma_z$ type of error to the $Y-$qubits is also 0. Using the iteration formula given in Ref.~\cite{chau}, once all bit-flip errors are removed, all errors are removed. Therefore the error rate threshold is \begin{eqnarray} Q_x=Q_z=25\%, \end{eqnarray} i.e., a total error rate of $50\%$. In practice, it is not likely that the $\sigma_y$ type of channel flip rate is exactly 0. Numerical calculation (Fig.~1) shows that using the $Y-$basis as the only basis for all key bits always has an advantage, provided that the channel flipping rates satisfy $q_{y0}<q_{x0}=q_{z0}$. For the purpose of improving the noise threshold, we propose the following protocol: \begin{itemize} \setlength{\itemsep}{-\parskip} \item [\bf 1:] Alice creates a random binary string $b$ with $(6+\delta)n$ bits. \item [\bf 2:]Alice generates $(6+\delta)n$ quantum states according to the elements of $b$. For each bit of $b$, if it is 0, she produces the state $|0\rangle$, $|+\rangle$ or $|y+\rangle$ with probability $1/4,1/4,1/2$, respectively; if it is $1$, she produces the state $|1\rangle$, $|-\rangle$ or $|y-\rangle$ with probability $1/4,1/4,1/2$, respectively. \item [\bf 3:] Alice sends all qubits to Bob. \item [\bf 4:] Bob receives the $(6+\delta)n$ qubits and measures each one in the basis $\{|0\rangle,|1\rangle\}$, $\{|\pm\rangle\}$ or $\{|y\pm\rangle\}$, chosen randomly with equal probability. \item [\bf 5:] Alice announces the basis information of each qubit. \item [\bf 6:] Bob discards all the qubits that he measured in a wrong basis. With high probability, there are at least $2n$ bits left (if not, they abort the protocol). Alice randomly chooses $n$ $Y$-bits to use for the distillation of the final key, and uses the remaining $n$ bits as check bits. (Among the check bits, approximately one half are $X$-bits and one half are Z-bits.) \item [\bf 7:] Alice and Bob announce the values of their check bits. If too many of them are different, they abort the protocol. \item [\bf 8:] Alice randomly groups the key bits into groups of 2 bits. Alice and Bob compare the parity values of each group on each side. If the values agree, they discard one bit and keep the other one. If the values disagree, they discard both bits of that group.
They repeat this for a number of rounds, until they believe that they can find a certain integer $k$ such that both the bit-flip error rate and the phase-flip error rate are less than $5\%$ after the following step is done. \item [\bf 9:] They randomly group the remaining key bits into groups of $k$ bits. They use the parity value of each group as the new bits after this step. \item [\bf 10:] Alice and Bob use a classical CSS code to distill the final key. \end{itemize} Remark 1: Step 9 is meant to remove the phase flip error of the final key. Although, in the extreme case, the phase flip error rate is always 0 if the initial $\sigma_y$ type of error is 0, in practice the initial $\sigma_y$ type of error rate is not exactly 0. Even in the case where the tested error rate on the check bits is zero, we still have to assume a small error rate on the key bits to increase the confidence level. \\Remark 2: The above protocol is unconditionally secure. This means that, under whatever type of intercept-and-resend attack, Eve's information about the final key is exponentially small. The security proof can be done through the purification and reduction procedure given in Ref.~\cite{shorpre}. The only thing that is a bit different here is that Alice and Bob will measure in the $Y$ basis to make the final key, after the distillation. \\Remark 3: We do not recommend using the above protocol blindly. Before doing the QKD, the users should test their physical channel and decide whether there is an advantage. Numerical calculation shows that, in the case $q_{x0}=q_{z0}$, $q_{y0}<q_{x0}$, our protocol always has a higher error rate threshold than the corresponding 6-state protocol with the key bits' bases equally distributed over all 3 bases. This fact is shown in Fig.~(1). \\Remark 4: The conditional advantage requires the users to first test the properties of the physical channel before doing QKD, and we assume that the physical channel itself is stable. Note that we do not require anything of Eve's channel. As we have discussed in the beginning, the physical channel is in general different from Eve's channel, since Eve may take over the whole channel at the time Alice and Bob do the QKD. However, if Eve wants to hide her presence, she must respect the expected results of the error test in the protocol. In our protocol, Eve's operation must not change the error rates of the physical channel, though she can change the error pattern: if these values are changed, Alice and Bob will find that their error test result is very different from the expected one, and therefore Eve cannot hide her presence. \\One may still worry whether the conditional advantage in our protocol is really useful, especially for the conditional threshold part. To make it clear, we consider a specific game. Suppose now that Alice and Bob are imprisoned in two separate places. They are offered a chance to be freed immediately. The rule is as follows: if they can make an unconditionally secure final key, they will be freed immediately; if any third party obtains a non-negligible amount of information about the final key, they will be shot immediately. Suppose both Alice and Bob want to be freed immediately, but they hold that being alive is more important than freedom\cite{free}. The noise of the physical channel is known: $q_{y0}=0$ and $q_{x0}=q_{z0}=22\%$. In such a case, conditionally secure protocols with untestable conditions cannot be used, since those protocols would expose Alice and Bob to the risk of being shot.
For example, suppose some protocol $T$ is conditionally secure under individual attacks, but we do not know how to check whether Eve has only used an individual attack. Even though $T$ has a very large tolerable channel noise, Alice and Bob cannot use this protocol because they would run the risk of being shot immediately. Previously known unconditionally secure protocols will not expose Alice and Bob to the risk of being shot, but those protocols cannot bring them liberty, since none of them can tolerate such a high channel flipping error rate. However, the protocol in this work can help Alice and Bob to be freed without any non-negligible risk of being shot. In such a case, our protocol is the only protocol that may help Alice and Bob, while all previously known unconditionally secure protocols cannot. Besides the advantage of a higher tolerable error rate, there are also advantages in the key rate of our protocol with asymmetric channel noise. Obviously, when the error rate is higher than other protocols' thresholds but lower than our protocol's threshold, our protocol always has an advantage in key rate. More interestingly, even in the case that the error rate is significantly lower than the threshold of the Shor-Preskill protocol, we may modify our protocol and the advantage in key rate may still hold. We modify our protocol as follows: take one round of bit-error rejection with two-way communication and then use a CSS code to distill the final key. As was shown in Ref.~\cite{chau}, the various flipping rates will change by the following formulas after the bit-flip-error rejection \begin{equation} \left\{ \begin{array}{rcl} q_I & = & \displaystyle\frac{q_{I0}^2 + q_{z0}^2}{(q_{I0} + q_{z0})^2 + (q_{x0} + q_{y0})^2} , \\ \\ q_x & = & \displaystyle\frac{q_{x0}^2 + q_{y0}^2}{(q_{I0} + q_{z0})^2 + (q_{x0} + q_{y0})^2} , \\ \\ q_y & = & \displaystyle\frac{2q_{x0} q_{y0}}{(q_{I0} + q_{z0})^2 + (q_{x0} + q_{y0})^2} , \\ \\ q_z & = & \displaystyle\frac{2q_{I0} q_{z0}}{(q_{I0} + q_{z0})^2 + (q_{x0} + q_{y0})^2} . \end{array} \right. \label{errorrate} \end{equation} Also, it can be shown that the fraction of key bits surviving this step is \begin{eqnarray} f=\frac{1}{2}\left[(q_{I0} + q_{z0})^2 + (q_{x0} + q_{y0})^2\right]. \end{eqnarray} The key rate of our protocol is given by \begin{eqnarray} R=f\cdot (1+q_x \log_2 q_x + q_y \log_2 q_y + q_z \log_2 q_z + q_I \log_2 q_I). \end{eqnarray} We shall compare this with the key rate of our modified six-state protocol where the key bits are equally distributed over the 3 different bases but we distill 3 batches of final key, i.e. \begin{eqnarray} r'=1+ q_{x0} \log_2 q_{x0} + q_{y0} \log_2 q_{y0} + q_{z0} \log_2 q_{z0} + q_{I0} \log_2 q_{I0}. \end{eqnarray} We need not compare our results with the standard 6-state protocol\cite{6state}, since its key rate, given by Eq.~(\ref{6mix}), is superseded by Eq.~(\ref{6sep}). The numerical results are given in Fig.~(2). \begin{figure} \caption{ Comparison of the channel error thresholds of different protocols. All values are in units of one percent; $Q_{t0}$ denotes the total channel noise.} \end{figure} \begin{figure} \caption{ Comparison of the key rate of our protocol (solid line) and of the six-state Shor-Preskill protocol (dashed line). The $Y$-axis is the key rate; the $X$-axis is $q_{x0}$.} \end{figure} \section{ Summary} In summary, we have shown that, given asymmetric channel flip rates, one can gain advantages in the tolerable flip rate and in efficiency if one uses a single basis for the key bits.
We have demonstrated this point with both the 4-state protocol with CSS code and the 6-state protocol with two-way communication. It would be interesting to investigate, for the 6-state protocol, the most general case in which $p_{x0},p_{y0},p_{z0}$ are all different. \\{\bf Acknowledgement:} I thank Prof.~H.~Imai for support. Valuable conversations with H.~F.~Chau are also gratefully acknowledged. \end{document}
\begin{document} \pagestyle{myheadings} \markboth{\centerline{Jen\H o Szirmai}} {Bisector surfaces and circumscribed spheres of tetrahedra $\dots$} \title {Bisector surfaces and circumscribed spheres of tetrahedra derived by translation curves in $\mathbf{Sol}$ geometry \footnote{Mathematics Subject Classification 2010: 53A20, 53A35, 52C35, 53B20. \newline Key words and phrases: Thurston geometries, $\mathbf{Sol}$ geometry, translation and geodesic triangles, interior angle sum \newline }} \author{Jen\H o Szirmai \\ \normalsize Budapest University of Technology and \\ \normalsize Economics Institute of Mathematics, \\ \normalsize Department of Geometry \\ \normalsize Budapest, P. O. Box: 91, H-1521 \\ \normalsize [email protected] \date{\normalsize{\today}}} \maketitle \begin{abstract} In the present paper we study the $\mathbf{Sol}$ geometry, which is one of the eight homogeneous Thurston 3-geomet\-ri\-es. We determine the equation of the translation-like bisector surface of any two points. We prove that the isosceles property of a translation triangle is not equivalent to two angles of the triangle being equal, and that the triangle inequalities do not remain valid for translation triangles in general. Moreover, we develop a method to determine the centre and the radius of the circumscribed translation sphere of a given {\it translation tetrahedron}. In our work we will use for computations and visualizations the projective model of $\mathbf{Sol}$ described by E. Moln\'ar in \cite{M97}. \end{abstract} \section{Introduction} \label{section1} The Dirichlet--Voronoi (briefly $D-V$) cell is a fundamental concept in geometry and crystallography. In particular, $D-V$ cells play important roles in the study of ball packing and ball covering. In $3$-dimensional spaces of constant curvature the $D-V$ cells are widely investigated, but in the other Thurston geometries $\bS^2\!\times\!\bR$, $\bH^2\!\times\!\bR$, $\mathbf{Nil}$, $\mathbf{Sol}$, $\widetilde{\bS\bL_2\bR}$ there are few results on this topic. Let $X$ be one of the above five geometries and let $\Gamma$ be one of its discrete isometry groups. Moreover, we distinguish two types of distance functions: $d^g$ is the usual geodesic distance function and $d^t$ is the translation distance function (see Section 3). Therefore, we obtain two types of $D-V$ cells with respect to the two distance functions. We define the Dirichlet-Voronoi cell with kernel point $K$ of a given discrete isometry group $\Gamma$: \begin{defn} We say that the point set \[ \mathcal{D}(K)=\left\{ Y \in X : ~ d^i(K,Y)\le d^i(K^{\mathbf{g}},Y) ~ \text{for all} ~ \mathbf{g}\in \Gamma \right\}\subset X \] is the {\it Dirichlet-Voronoi cell} of $\Gamma$ around its {\it kernel point} $K$, where $d^i$ is the geodesic or translation distance function of $X$. \end{defn} The first step in obtaining the $D-V$ cell of a given point set of $X$ is the determination of the translation- or geodesic-like bisector (or equidistant) surface of two arbitrary points of $X$, because these surface types contain the faces of $D-V$ cells. In \cite{PSSz10}, \cite{PSSz11-1}, \cite{PSSz11-2} we studied the geodesic-like equidistant surfaces in the $\bS^2\!\times\!\bR$, $\bH^2\!\times\!\bR$ and $\mathbf{Nil}$ geometries, but there are no results concerning the translation-like equidistant surfaces in the $\mathbf{Nil}$, $\widetilde{\bS\bL_2\bR}$ and $\mathbf{Sol}$ geometries. In the Thurston spaces, translations mapping each point to any point can be introduced in a natural way (see \cite{M97}).
Consider a unit vector at the origin. The translations postulated at the beginning carry this vector to any point by their tangent mappings. If a curve $t\rightarrow (x(t),y(t),z(t))$ has just the translated vector as tangent vector at each point, then the curve is called a {\it translation curve}. This assumption leads to a system of first order differential equations; thus translation curves are simpler than geodesics and differ from them in $\mathbf{Nil}$, $\widetilde{\bS\bL_2\bR}$ and $\mathbf{Sol}$ geometries. In $\mathbf E^3$, $\bS^3$, $\bH^3$, $\bS^2\!\times\!\bR$ and $\bH^2\!\times\!\bR$ geometries the translation and geodesic curves coincide with each other. Therefore, the translation curves also play an important role in $\mathbf{Nil}$, $\widetilde{\bS\bL_2\bR}$ and $\mathbf{Sol}$ geometries and often seem to be more natural in these geometries than their geodesic lines. {\it In this paper} we study the translation-like bisector surfaces of two points in $\mathbf{Sol}$ geometry, determine their equations and visualize them. The translation-like bisector surfaces play an important role in the construction of the $D-V$ cells because the faces of the cells lie on bisector surfaces. The $D-V$ cells are relevant in the study of tilings, ball packing and ball covering. E.g., if the point set is the orbit of a point, generated by a discrete isometry group of $\mathbf{Sol}$, then we obtain a monohedral $D-V$ cell decomposition (tiling) of the considered space, and it is interesting to examine its optimal ball packing and covering (see \cite{Sz13-1}, \cite{Sz13-2}). Moreover, we prove that the isosceles property of a translation triangle is not equivalent to two angles of the triangle being equal and that the triangle inequalities do not remain valid for translation triangles in general. Using the above bisector surfaces we develop a procedure to determine the centre and the radius of the circumscribed translation sphere of an arbitrary $\mathbf{Sol}$ tetrahedron. This is useful to determine the least dense ball covering radius of a given periodic polyhedral $\mathbf{Sol}$ tiling, because the tiling can be decomposed into tetrahedra. \begin{rem} We note here that nowadays the $\mathbf{Sol}$ geometry is a widely investigated space concerning its manifolds, tilings, geodesic and translation ball packings and probability theory (see e.g. \cite{BT}, \cite{CaMoSpSz}, \cite{KV}, \cite{MSz}, \cite{MSz12}, \cite{MSzV}, \cite{Sz13-2} and the references given there). \end{rem} \section{On Sol geometry} \label{sec:1} In this section we summarize the significant notions and notations of real $\mathbf{Sol}$ geometry (see \cite{M97}, \cite{S}). $\mathbf{Sol}$ is defined as a 3-dimensional real Lie group with multiplication \begin{equation} \begin{gathered} (a,b,c)(x,y,z)=(x + a e^{-z},y + b e^z ,z + c). \end{gathered} \tag{2.1} \end{equation} We note that the conjugacy by $(x,y,z)$ leaves invariant the plane $(a,b,c)$ with fixed $c$: \begin{equation} \begin{gathered} (x,y,z)^{-1}(a,b,c)(x,y,z)=(x(1-e^{-c})+a e^{-z},y(1-e^c)+b e^z ,c). \end{gathered} \tag{2.2} \end{equation} Moreover, for $c=0$, the action of $(x,y,z)$ is only by its $z$-component, where $(x,y,z)^{-1}=(-x e^{z}, -y e^{-z} ,-z)$. Thus the $(a,b,0)$ plane is distinguished as a {\it base plane} in $\mathbf{Sol}$, or in other words, $(x,y,0)$ is a normal subgroup of $\mathbf{Sol}$.
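As a quick numerical sanity check of the multiplication law (2.1), of the inverse formula above and of the conjugacy rule (2.2), one may run the following minimal Python sketch; it is only an illustration added here, and the function names \texttt{mul} and \texttt{inv} are ours, not part of the original computations.
\begin{verbatim}
import math, random

def mul(p, q):
    # Sol multiplication (2.1): (a,b,c)(x,y,z) = (x + a e^{-z}, y + b e^{z}, z + c)
    a, b, c = p
    x, y, z = q
    return (x + a * math.exp(-z), y + b * math.exp(z), z + c)

def inv(p):
    # inverse noted after (2.2): (x,y,z)^{-1} = (-x e^{z}, -y e^{-z}, -z)
    x, y, z = p
    return (-x * math.exp(z), -y * math.exp(-z), -z)

random.seed(0)
p, q, r = [tuple(random.uniform(-1.0, 1.0) for _ in range(3)) for _ in range(3)]
close = lambda u, v: all(abs(s - t) < 1e-12 for s, t in zip(u, v))
assert close(mul(mul(p, q), r), mul(p, mul(q, r)))   # associativity
assert close(mul(inv(p), p), (0.0, 0.0, 0.0))        # left inverse
# conjugacy (2.2): the z-coordinate of (x,y,z)^{-1}(a,b,c)(x,y,z) equals c
assert abs(mul(mul(inv(q), p), q)[2] - p[2]) < 1e-12
\end{verbatim}
The three assertions confirm associativity, the inverse formula and the invariance of the third coordinate under conjugation, in agreement with (2.2).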
$\mathbf{Sol}$ multiplication can also be affinely (projectively) interpreted by ``right translations'' on its points, as the following matrix formula shows, according to (2.1): \begin{equation} \begin{gathered} (1,a,b,c) \to (1,a,b,c) \begin{pmatrix} 1&x&y&z \\ 0&e^{-z}&0&0 \\ 0&0&e^z&0 \\ 0&0&0&1 \\ \end{pmatrix} =(1,x + a e^{-z},y + b e^z ,z + c) \end{gathered} \tag{2.3} \end{equation} by row-column multiplication. This defines the ``translations'' $\mathbf{L}(\mathbf{R})= \{(x,y,z): x,~y,~z\in \mathbf{R} \}$ on the points of the space $\mathbf{Sol}= \{(a,b,c):a,~b,~c \in \mathbf{R}\}$. These translations are not commutative, in general. Here we can consider $\mathbf{L}$ as a projective collineation group with right actions in homogeneous coordinates, as usual in classical affine-projective geometry. We will use the Cartesian homogeneous coordinate simplex $E_0(\mathbf{e}_0)$, $E_1^{\infty}(\mathbf{e}_1)$, \ $E_2^{\infty}(\mathbf{e}_2)$, \ $E_3^{\infty}(\mathbf{e}_3), \ (\{\mathbf{e}_i\}\subset \mathbf{V}^4$ \ $\text{with the unit point}$ $E(\mathbf{e} = \mathbf{e}_0 + \mathbf{e}_1 + \mathbf{e}_2 + \mathbf{e}_3 ))$ which is distinguished by an origin $E_0$ and by the ideal points of the coordinate axes, respectively. Thus {$\mathbf{Sol}$} can be visualized in the affine 3-space $\mathbf{A}^3$ (so in Euclidean space $\mathbf{E}^3$) as well. In this affine-projective context E. Moln\'ar derived in \cite{M97} the usual infinitesimal arc-length square at any point of $\mathbf{Sol}$, by pull back translation, as follows \begin{equation} \begin{gathered} (ds)^2:=e^{2z}(dx)^2 +e^{-2z}(dy)^2 +(dz)^2. \end{gathered} \tag{2.4} \end{equation} Hence we get an infinitesimal Riemannian metric, invariant under translations, given by the symmetric metric tensor field $g$ on $\mathbf{Sol}$ with the usual components. It will be important for us that the full isometry group Isom$(\mathbf{Sol})$ has eight components, since the stabilizer of the origin is isomorphic to the dihedral group $\mathbf{D_4}$, generated by two involutive (involutory) transformations preserving (2.4): \begin{equation} \begin{gathered} (1) \ \ y \leftrightarrow -y; \ \ (2) \ x \leftrightarrow y; \ \ z \leftrightarrow -z; \ \ \text{i.e. first by $3\times 3$ matrices}:\\ (1) \ \begin{pmatrix} 1&0&0 \\ 0&-1&0 \\ 0&0&1 \\ \end{pmatrix}; \ \ \ (2) \ \begin{pmatrix} 0&1&0 \\ 1&0&0 \\ 0&0&-1 \\ \end{pmatrix}; \\ \end{gathered} \tag{2.5} \end{equation} together with their products, generating a cyclic group $\mathbf{C_4}$ of order 4: \begin{equation} \begin{gathered} \begin{pmatrix} 0&1&0 \\ -1&0&0 \\ 0&0&-1 \\ \end{pmatrix};\ \ \begin{pmatrix} -1&0&0 \\ 0&-1&0 \\ 0&0&1 \\ \end{pmatrix}; \ \ \begin{pmatrix} 0&-1&0 \\ 1&0&0 \\ 0&0&-1 \\ \end{pmatrix};\ \ \mathbf{Id}=\begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \\ \end{pmatrix}. \end{gathered} \notag \end{equation} Or, written as collineations fixing the origin $O=(1,0,0,0)$: \begin{equation} (1) \ \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&-1&0 \\ 0&0&0&1 \\ \end{pmatrix}, \ \ (2) \ \begin{pmatrix} 1&0&0&0 \\ 0&0&1&0 \\ 0&1&0&0 \\ 0&0&0&-1 \\ \end{pmatrix} \ \ \text{of form (2.3)}. \tag{2.6} \end{equation} A general isometry of $\mathbf{Sol}$ to the origin $O$ is defined by a product $\gamma_O \tau_X$, first $\gamma_O$ of form (2.6) then $\tau_X$ of (2.3). To a general point $A=(1,a,b,c)$, this will be a product $\tau_A^{-1} \gamma_O \tau_X$, mapping $A$ into $X=(1,x,y,z)$.
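The matrix form (2.3) can be checked against (2.1) in the same spirit. The short sketch below (again a purely illustrative addition, assuming the numpy package) verifies that the row-column action reproduces the multiplication law and that the product of two translation matrices is the matrix of the composed translation.
\begin{verbatim}
import numpy as np

def M(x, y, z):
    # translation matrix of (2.3), acting on row vectors (1, a, b, c) from the right
    return np.array([[1.0, x,          y,         z  ],
                     [0.0, np.exp(-z), 0.0,       0.0],
                     [0.0, 0.0,        np.exp(z), 0.0],
                     [0.0, 0.0,        0.0,       1.0]])

rng = np.random.default_rng(1)
a, b, c = rng.uniform(-1, 1, 3)
x, y, z = rng.uniform(-1, 1, 3)
# row-column action reproduces the multiplication law (2.1)
assert np.allclose(np.array([1.0, a, b, c]) @ M(x, y, z),
                   [1.0, x + a * np.exp(-z), y + b * np.exp(z), z + c])
# composing two translations yields the translation by their product
assert np.allclose(M(a, b, c) @ M(x, y, z),
                   M(x + a * np.exp(-z), y + b * np.exp(z), z + c))
\end{verbatim}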
Conjugacy of a translation $\tau$ by an isometry $\gamma$ above, denoted by $\tau^{\gamma}=\gamma^{-1}\tau\gamma$, will also be used, either via (2.3) and (2.6) or in coordinates with the above conventions. We remark only that the role of $x$ and $y$ can be exchanged throughout the paper, but this leads to the mirror interpretation of $\mathbf{Sol}$. As formula (2.4) fixes the metric of $\mathbf{Sol}$, the change above is not an isometry of a fixed $\mathbf{Sol}$ interpretation. Other conventions are also accepted and used in the literature. {\it $\mathbf{Sol}$ is an affine metric space (an affine-projective one in the sense of the unified formulation of \cite{M97}). Therefore, its linear, affine, unimodular, etc. transformations are defined as those of the embedding affine space.} \subsection{Translation curves} We consider a $\mathbf{Sol}$ curve $(1,x(t), y(t), z(t) )$ with a given starting tangent vector at the origin $O=(1,0,0,0)$ \begin{equation} \begin{gathered} u=\dot{x}(0),\ v=\dot{y}(0), \ w=\dot{z}(0). \end{gathered} \tag{2.7} \end{equation} For a translation curve let its tangent vector at the point $(1,x(t), y(t), z(t) )$ be defined by the matrix (2.3) with the following equation: \begin{equation} \begin{gathered} (0,u,v,w) \begin{pmatrix} 1&x(t)&y(t)&z(t) \\ 0&e^{-z(t)}&0&0 \\ 0&0&e^{z(t)}& 0 \\ 0&0&0&1 \\ \end{pmatrix} =(0,\dot{x}(t),\dot{y}(t),\dot{z}(t)). \end{gathered} \tag{2.8} \end{equation} Thus, {\it translation curves} in $\mathbf{Sol}$ geometry (see \cite{MoSzi10} and \cite{MSz}) are defined by the first order differential equation system $\dot{x}(t)=u e^{-z(t)}, \ \dot{y}(t)=v e^{z(t)}, \ \dot{z}(t)=w,$ whose solution is the following: \begin{equation} \begin{gathered} x(t)=-\frac{u}{w} (e^{-wt}-1), \ y(t)=\frac{v}{w} (e^{wt}-1), \ z(t)=wt, \ \mathrm{if} \ w \ne 0 \ \mathrm{and} \\ x(t)=u t, \ y(t)=v t, \ z(t)=z(0)=0 \ \ \mathrm{if} \ w =0. \end{gathered} \tag{2.9} \end{equation} We assume that the starting point of a translation curve is the origin, because we can transform a curve into an arbitrary starting point by a translation (2.3); moreover, unit velocity can be assumed: \begin{equation} \begin{gathered} x(0)=y(0)=z(0)=0; \\ \ u=\dot{x}(0)=\cos{\theta} \cos{\phi}, \ \ v=\dot{y}(0)=\cos{\theta} \sin{\phi}, \ \ w=\dot{z}(0)=\sin{\theta}; \\ - \pi < \phi \leq \pi, \ -\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}. \tag{2.10} \end{gathered} \end{equation} \begin{defn} The translation distance $d^t(P_1,P_2)$ between the points $P_1$ and $P_2$ is defined by the arc length of the above translation curve from $P_1$ to $P_2$. \end{defn} Thus we obtain the parametric equation of the {\it translation curve segment} $t(\phi,\theta,t)$ with starting point at the origin in direction \begin{equation} \mathbf{t}(\phi, \theta)=(\cos{\theta} \cos{\phi}, \cos{\theta} \sin{\phi}, \sin{\theta}) \tag{2.11} \end{equation} where $t \in [0,r]$, $r \in \mathbf{R}^+$. If $\theta \ne 0$ then the system of equations is: \begin{equation} \begin{gathered} \left\{ \begin{array}{ll} x(\phi,\theta,t)=-\cot{\theta} \cos{\phi} (e^{-t \sin{\theta}}-1), \\ y(\phi,\theta,t)=\cot{\theta} \sin{\phi} (e^{t \sin{\theta}}-1), \\ z(\phi,\theta,t)=t \sin{\theta}. \end{array} \right. \\ \text{If $\theta=0$ then}: ~ x(t)=t\cos{\phi} , \ y(t)=t \sin{\phi}, \ z(t)=0.
\tag{2.12} \end{gathered} \end{equation} \begin{defn} The sphere of radius $r >0$ with centre at the origin, denoted by $S^t_O(r)$, is specified by the equations (2.12) with $t=r$, where $\phi$ and $\theta$ are the usual longitude and altitude parameters $- \pi < \phi \leq \pi$, $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, respectively, of (2.10). \end{defn} \begin{defn} The body of the translation sphere of centre $O$ and of radius $r$ in the $\mathbf{Sol}$ space is called a translation ball, denoted by $B^t_{O}(r)$, i.e. $Q \in B^t_{O}(r)$ iff $0 \leq d^t(O,Q) \leq r$. \end{defn} \begin{figure}[ht] \centering \includegraphics[width=13cm]{Fig1.eps} \caption{Translation ball of radius $r=5/2$ and its plane sections parallel to the $[x,y]$ coordinate plane in $\mathbf{Sol}$ space} \label{} \end{figure} In \cite{Sz13-2} we proved the volume formula of the translation ball $B^t_{O}(r)$ of radius $r$: \begin{thm} \begin{equation} \begin{gathered} Vol(B^t_{O}(r))=\int_{V} \mathrm{d}x ~ \mathrm{d}y ~ \mathrm{d}z = \\ = \int_{0}^{r} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \int_{-\pi}^{\pi} \frac{\cos{\theta}}{\sin^2{\theta}} (e^{\rho \sin{\theta}}+e^{-\rho \sin{\theta}}-2) \ \mathrm{d}\phi \ \mathrm{d}\theta \ \mathrm{d}\rho = \\ = 4 \pi \int_{0}^{r} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\cos{\theta}}{\sin^2{\theta}} (\cosh(\rho \sin{\theta})-1) \ \mathrm{d}\theta \ \mathrm{d}\rho. \notag \end{gathered} \end{equation} \end{thm} An easy power series expansion with the substitution $\rho \sin{\theta}=:z$ can also be applied; we do not detail it here. From the equation of the translation spheres $S^t_O(r)$ (see (2.12)) it follows that the plane sections of the sphere given by parameters $\theta$ and $r$, parallel to the $[x,y]$ plane, are ellipses with equations (see Fig.~1, $r=5/2$): \begin{equation} \begin{gathered} \frac{x^2}{k_1^2}+\frac{y^2}{k_2^2}=1 \ \mathrm{where} \\ k_1^2=(-\cot{\theta} (e^{-r \sin{\theta}}-1))^2, \ \ \ k_2^2=(\cot{\theta} (e^{r \sin{\theta}}-1))^2. \tag{2.13} \end{gathered} \end{equation} \section{Translation-like bisector surfaces} One of our further goals is to examine and visualize the Dirichlet-Voronoi cells of $\mathbf{Sol}$ geometry. In order to get $D-V$ cells we have to determine their ``faces'', which are parts of bisector (or equidistant) surfaces of given point pairs. The definition below comes naturally: \begin{defn} The equidistant surface $\mathcal{S}_{P_1P_2}$ of two arbitrary points $P_1,P_2 \in \mathbf{Sol}$ consists of all points $P'\in \mathbf{Sol}$ for which $d^t(P_1,P')=d^t(P',P_2)$. \end{defn} \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig2.eps} \caption{Translation-like bisector (equidistant surface) with $P_1=(1,0,0,0)$ and $P_2=(1,-1,1,1/2)$.} \label{pic:surf1} \end{figure} It can be assumed by the homogeneity of $\mathbf{Sol}$ that the starting point of a given translation curve segment is $E_0=P_1=(1,0,0,0)$. The other endpoint will be given by its homogeneous coordinates $P_2=(1,a,b,c)$. We consider the translation curve segment $t_{P_1P_2}$ and determine its parameters $(\phi,\theta,t)$ expressed by the real coordinates $a$, $b$, $c$ of $P_2$. We obtain directly from the equation system (2.12) the following lemma (see \cite{Sz17}): \begin{lem} \begin{enumerate} \item Let $(1,a,b,c)$ $(b,c \in \mathbf{R} \setminus \{0\}, a \in \mathbf{R})$ be the homogeneous coordinates of the point $P \in \mathbf{Sol}$.
The parameters of the corresponding translation curve $t_{E_0P}$ are the following: \begin{equation} \begin{gathered} \phi=\mathrm{arccot}\Big(-\frac{a}{b} \frac{\mathrm{e}^{c}-1}{\mathrm{e}^{-c}-1}\Big),~\theta=\mathrm{arccot}\Big( \frac{b}{\sin\phi(\mathrm{e}^{c}-1)}\Big),\\ t=\frac{c}{\sin\theta}, ~ \text{where} ~ -\pi < \phi \le \pi, ~ -\pi/2\le \theta \le \pi/2, ~ t\in \mathbf{R}^+. \end{gathered} \tag{3.1} \end{equation} \item Let $(1,a,0,c)$ $(a,c \in \mathbf{R} \setminus \{0\})$ be the homogeneous coordinates of the point $P \in \mathbf{Sol}$. The parameters of the corresponding translation curve $t_{E_0P}$ are the following: \begin{equation} \begin{gathered} \phi=0~\text{or}~ \pi, ~\theta=\mathrm{arccot}\Big( \mp \frac{a}{(\mathrm{e}^{-c}-1)}\Big),\\ t=\frac{c}{\sin\theta}, ~ \text{where} ~ -\pi/2\le \theta \le \pi/2, ~ t\in \mathbf{R}^+. \end{gathered} \tag{3.2} \end{equation} \item Let $(1,a,b,0)$ $(a,b \in \mathbf{R})$ be the homogeneous coordinates of the point $P \in \mathbf{Sol}$. The parameters of the corresponding translation curve $t_{E_0P}$ are the following: \begin{equation} \begin{gathered} \phi=\arccos\Big(\frac{a}{\sqrt{a^2+b^2}}\Big),~ \theta=0,\\ t=\sqrt{a^2+b^2}, ~ \text{where} ~ -\pi < \phi \le \pi, ~ t\in \mathbf{R}^+.~ ~ \square \end{gathered} \tag{3.3} \end{equation} \end{enumerate} \end{lem} {\it In order to determine the translation-like bisector surface $\mathcal{S}_{P_1P_2}(x,y,z)$ of two given points $E_0=P_1=(1,0,0,0)$ and $P_2=(1,a,b,c)$ we define the \emph{translation} $\mathbf{T}_{P_2}$ as the element of the isometry group of $\mathbf{Sol}$ that maps the origin $E_0$ onto $P_2$} (see Fig.~2); moreover, let $P_3=(1,x,y,z)$ be a point in the $\mathbf{Sol}$ space. This isometry $\mathbf{T}_{P_2}$ and its inverse (up to a positive determinant factor) can be given by: \begin{equation} \mathbf{T}_{P_2}= \begin{pmatrix} 1 & a & b & c \\ 0 & \mathrm{e}^{-c} & 0 & 0 \\ 0 & 0 & \mathrm{e}^{c} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} , ~ ~ ~ \mathbf{T}_{P_2}^{-1}= \begin{pmatrix} 1 & -a\mathrm{e}^{c} & -b\mathrm{e}^{-c} & -c \\ 0 & \mathrm{e}^{c} & 0 & 0 \\ 0 & 0 & \mathrm{e}^{-c} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} , \tag{3.4} \end{equation} and the images $\mathbf{T}^{-1}_{P_2}(P_i)$ of the points $P_i$ $(i \in \{1,2,3\})$ are the following (see also Fig.~2): \begin{equation} \begin{gathered} \mathbf{T}^{-1}_{P_2}(P_1=E_0)=P_1^2=(1,-a\mathrm{e}^{c},-b\mathrm{e}^{-c},-c),~ \mathbf{T}^{-1}_{P_2}(P_2)=E_0=(1,0,0,0), \\ \mathbf{T}^{-1}_{P_2}(P_3)=P_3^2=(1,(x-a)\mathrm{e}^{c},(y-b)\mathrm{e}^{-c},z-c). \tag{3.5} \end{gathered} \end{equation} It is clear that $P_3=(1,x,y,z) \in \mathcal{S}_{P_1P_2} ~ \text{iff} ~ d^t(P_1,P_3)=d^t(P_3,P_2) \Leftrightarrow d^t(P_1,P_3)=d^t(E_0,P_3^2)$ where $P_3^2=\mathbf{T}^{-1}_{P_2}(P_3)$ (see (3.4), (3.5)).
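The distance computation behind this criterion is easy to carry out numerically. The Python sketch below is a small illustration of ours, not part of the original paper: the function names and the \texttt{atan2}-based branch handling (equivalent to the arccot expressions of (3.1)--(3.3) up to the choice of branches) are our own. It recovers $(\phi,\theta,t)$ from the coordinates of an endpoint and checks the result against the forward equations (2.12).
\begin{verbatim}
import math

def endpoint(phi, theta, t):
    # forward equations (2.12): end point of the unit-speed translation curve
    # issued from E_0 in direction (phi, theta), after arc length t
    if theta == 0.0:
        return (t * math.cos(phi), t * math.sin(phi), 0.0)
    s, cot = math.sin(theta), 1.0 / math.tan(theta)
    return (-cot * math.cos(phi) * (math.exp(-t * s) - 1.0),
            cot * math.sin(phi) * (math.exp(t * s) - 1.0),
            t * s)

def curve_parameters(a, b, c):
    # recover (phi, theta, t) of the curve t_{E_0 P}, P = (1,a,b,c), as in Lemma 3.2;
    # the branches are chosen so that t > 0
    if c == 0.0:                            # case (3.3)
        return (math.atan2(b, a), 0.0, math.hypot(a, b))
    A = -a / (math.exp(-c) - 1.0)           # A = cot(theta) * cos(phi)
    B = b / (math.exp(c) - 1.0)             # B = cot(theta) * sin(phi)
    k = math.copysign(math.hypot(A, B), c)  # cot(theta), signed so that t = c/sin(theta) > 0
    theta = math.atan(1.0 / k) if k != 0.0 else math.copysign(math.pi / 2.0, c)
    sgn = math.copysign(1.0, c)
    return (math.atan2(sgn * B, sgn * A), theta, c / math.sin(theta))

def translation_distance(a, b, c):
    # d^t(E_0, P): for a unit-speed translation curve the distance is the parameter t
    return curve_parameters(a, b, c)[2]

# round-trip check on the point P_2 = (1,-1,1,1/2) of Fig. 2
P2 = (-1.0, 1.0, 0.5)
phi, theta, t = curve_parameters(*P2)
assert max(abs(u - v) for u, v in zip(endpoint(phi, theta, t), P2)) < 1e-9
print("d^t(E_0, P_2) =", t)
\end{verbatim}
With such a routine, membership of a point $P_3$ in $\mathcal{S}_{P_1P_2}$ can be tested by comparing the distance of $P_3$ from $E_0$ with the distance of $\mathbf{T}^{-1}_{P_2}(P_3)$ from $E_0$, as in (3.4)--(3.5).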
This method leads to \begin{lem} The implicit equations of the equidistant surface $\mathcal{S}_{P_1P_2}(x,y,z)$ of two points $P_1=(1,0,0,0)$ and $P_2=(1,a,b,c)$ in the $\mathbf{Sol}$ space are the following (see Fig.~2,3): \begin{enumerate} \item $c \ne 0$ \begin{equation}\label{bis1} \begin{gathered} z\ne 0, c~:~ \frac{|c-z|}{|\mathrm{e}^{c}-\mathrm{e}^z|}\sqrt{(a-x)^2 \mathrm{e}^{2(c+z)}+(\mathrm{e}^{c}-\mathrm{e}^{z})^2+(b-y)^2}=\\ =\frac{|z|}{|\mathrm{e}^{z}-1|}\sqrt{x^2 \mathrm{e}^{2z}+(\mathrm{e}^{z}-1)^2+y^2},\\ z=c~:~\sqrt{(x-a)^2\mathrm{e}^{2c}+(y-b)^2\mathrm{e}^{-2c}} =\frac{|z|}{|\mathrm{e}^{z}-1|}\sqrt{x^2 \mathrm{e}^{2z}+(\mathrm{e}^{z}-1)^2+y^2},\\ z=0~:~ \frac{|c|}{|\mathrm{e}^{c}-1|}\sqrt{(a-x)^2 \mathrm{e}^{2c}+(\mathrm{e}^{c}-1)^2+(b-y)^2}=\sqrt{x^2+y^2}, \end{gathered} \tag{3.6} \end{equation} \item $c=0$ \begin{equation}\label{bis2} \begin{gathered} z\ne 0~:~ \frac{|z|}{|\mathrm{e}^z-1|}\sqrt{(a-x)^2 \mathrm{e}^{2z}+(\mathrm{e}^{z}-1)^2+(b-y)^2}=\\ =\frac{|z|}{|\mathrm{e}^{z}-1|}\sqrt{x^2 \mathrm{e}^{2z}+(\mathrm{e}^{z}-1)^2+y^2}\Leftrightarrow \mathrm{e}^{2z}a(a-2x)+b(b-2y)=0,\\ z=0~:~ \sqrt{(x-a)^2+(y-b)^2}=\sqrt{x^2+y^2}\Leftrightarrow xa+yb-\frac{a^2+b^2}{2}=0.~\square \end{gathered} \tag{3.7} \end{equation} \end{enumerate} \end{lem} \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig3.eps} \caption{Translation-like bisectors (equidistant surfaces) of point pairs $(P_1,P_2)$ with coordinates $((1,0,0,0), (1,0,0,2))$ (left) and $((1,0,0,0), (1,1,1,0))$ (right) } \label{pic:surf2} \end{figure} \subsection{On isosceles and equilateral translation triangles} We consider $3$ points $A_1$, $A_2$, $A_3$ in the projective model of the $\mathbf{Sol}$ space. The {\it translation segments} connecting the points $A_i$ and $A_j$ $(i<j,~i,j,k \in \{1,2,3\})$ are called sides of the {\it translation triangle} $A_1A_2A_3$. The length of the side $a_k$ $(k\in \{1,2,3\})$ of a translation triangle $A_1A_2A_3$ is the translation distance $d^t(A_i,A_j)$ between the vertices $A_i$ and $A_j$ $(i<j,~i,j,k \in \{1,2,3\}, k \ne i,j$). Similarly to Euclidean geometry we can define the notions of isosceles and equilateral translation triangles. An isosceles translation triangle is a triangle with (at least) two equal sides, and a triangle with all sides equal is called an equilateral translation triangle (see Fig.~4) in the $\mathbf{Sol}$ space. We note here that if in a translation triangle $A_1A_2A_3$ e.g. $a_1=a_2$, then the bisector surface $\mathcal{S}_{A_1A_2}$ contains the vertex $A_3$ (see Fig.~4). \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig4.eps} \caption{Equilateral translation triangle with vertices $A_1=(1,0,0,0)$, $A_2=(1,2,1,-3/4)$, $A_3=(1,1,\approx 1.46717, \approx 1.04627)$ (left) and the above triangle with bisector $\mathcal{S}_{A_1A_2}$ containing the vertex $A_3$ (right).} \label{} \end{figure} In Euclidean space the isosceles property of a triangle is equivalent to two angles of the triangle being equal; an isosceles triangle therefore has both two equal sides and two equal angles. An equilateral triangle is a special case of an isosceles triangle having not just two, but all three sides and angles equal. \begin{prop} The isosceles property of a translation triangle is not equivalent to two angles of the triangle being equal in the $\mathbf{Sol}$ space.
\end{prop} {\bf Proof:}~ The coordinates $y^3$, $z^3$ of the vertex $A_3$ can be determined from the equation system $d^t(A_1,A_2)=d^t(A_1,A_3)=d^t(A_2,A_3)$: $y^3 \approx 1.46717$, $z^3 \approx 1.04627$ $(a_3=d^t(A_1,A_2)=a_2=d^t(A_1,A_3)=a_1=d^t(A_2,A_3) \approx 2.09436)$ (see Fig.~4). The {\it interior angles} of translation triangles are denoted at the vertex $A_i$ by $\omega_i$ $(i\in\{1,2,3\})$. We note here that the angle of two intersecting translation curves depends on the orientation of their tangent vectors. {\it In order to determine the interior angles of a translation triangle $A_1A_2A_3$ and its interior angle sum $\sum_{i=1}^3(\omega_i)$,} we apply the method (not discussed here) developed in \cite{Sz17}, using the infinitesimal arc-length square of $\mathbf{Sol}$ geometry (see (2.4)). Our method (see \cite{Sz17}) provides the following results: $$ \omega_1 \approx 0.94694,~\omega_2 \approx 1.04250,~ \omega_3 \approx 1.44910,~\sum_{i=1}^3(\omega_i) \approx 3.43854 > \pi. $$ The statement follows from the above results. We note here that if the vertices of the translation triangle lie in the $[x,y]$ plane, then the Euclidean isosceles property remains true in the $\mathbf{Sol}$ geometry as well. ~$\square$ Using the above methods we obtain the following \begin{lem} The triangle inequalities do not remain valid for translation triangles in general. \end{lem} {\bf Proof:}~ We consider the translation triangle $A_1A_2A_3$ where $A_1=(1,0,0,0)$, $A_2=(1,-1,2,1)$, $A_3=(1,3/4,3/4,1/2)$. We obtain directly from the equation systems (3.1), (3.2), (3.3) (see Lemma 3.2 and \cite{Sz17}) the lengths of the translation segments $A_iA_j$ $(i,j \in \{1,2,3\}$, $i<j)$: \begin{equation} \begin{gathered} d^t(A_1A_2) \approx 2.20396,~d^t(A_1A_3) \approx 1.22167, ~ d^t(A_2A_3) \approx 3.74623, \\ \text{therefore} ~ d^t(A_1A_2)+d^t(A_1A_3) < d^t(A_2A_3). ~\square \tag{3.8} \end{gathered} \end{equation} We note here that if the vertices of a translation triangle lie on the $[x,y]$ plane of the model then the corresponding triangle inequalities are true (see (2.12) and Lemma 3.2). \subsection{The locus of all points equidistant from three given points} A point is said to be equidistant from a set of objects if the distances between that point and each object in the set are equal. Here we study the case where the objects are the vertices of a $\mathbf{Sol}$ translation triangle $A_1A_2A_3$ and determine the locus of all points that are equidistant from $A_1$, $A_2$ and $A_3$. We consider $3$ points $A_1$, $A_2$, $A_3$ that do not all lie on the same translation curve in the projective model of the $\mathbf{Sol}$ space. The {\it translation segments} connecting the points $A_i$ and $A_j$ $(i<j,~i,j,k \in \{1,2,3\}, k \ne i,j$) are called sides of the {\it translation triangle} $A_1A_2A_3$. The locus of all points that are equidistant from the vertices $A_1$, $A_2$ and $A_3$ is denoted by $\mathcal{C}$. In the previous section we determined the equation of the translation-like bisector (equidistant) surface of any two points in the $\mathbf{Sol}$ space. It is clear that all points of the locus $\mathcal{C}$ must lie on the equidistant surfaces $\mathcal{S}_{A_iA_j}$ $(i<j,~i,j \in \{1,2,3\})$, therefore $\mathcal{C}=\mathcal{S}_{A_1A_2} \cap \mathcal{S}_{A_1A_3}$, and the coordinates of the points of that locus, and only those points, must satisfy the corresponding equations of Lemma 3.3.
Thus, the non-empty point set $\mathcal{C}$ can be determined and visualized for any given translation triangle (see Fig.~5 and 6). \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig5.eps} \caption{Translation triangle with vertices $A_1=(1,0,0,0)$, $A_2=(1,2,1,-3/4)$, $A_3=(1,1,-1/2,2/3)$ with translation-like bisector surfaces $\mathcal{S}_{A_1A_2}$ and $\mathcal{S}_{A_1A_3}$ (left) and a part of the locus $\mathcal{C}=\mathcal{S}_{A_1A_2} \cap \mathcal{S}_{A_1A_3}$ of all points equidistant from three given points $A_1$, $A_2$, $A_3$ (right).} \label{} \end{figure} If the vertices of the translation triangle lie in the $[x,y]$ plane, $A_1=(1,0,0,0)$, $A_2=(1,a,b,0)$, $A_3=(1,a_1,b_1,0)$, then the parametric equation $(z\in \mathbf{R})$ of $\mathcal{C}$ is the following (see Lemma 3.3 and Fig.~6): \begin{equation} \mathcal{C}(z)=\Big(\frac{bb_1(b-b_1)\mathrm{e}^{-2z}+a^2b_1-a_1^2b}{2(ab_1-a_1b)},\frac{-aa_1(a-a_1)\mathrm{e}^{2z}+ab_1^2-a_1b^2}{2(ab_1-a_1b)},z\Big). \tag{3.9} \end{equation} \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig6.eps} \caption{Translation triangle with vertices $A_1=(1,0,0,0)$, $A_2=(1,1/3,1/5,$ $0)$, $A_3=(1,1/2,-2/7,0)$ with translation-like bisector surfaces $\mathcal{S}_{A_1A_2}$ and $\mathcal{S}_{A_1A_3}$ (left) and a part of the locus $\mathcal{C}=\mathcal{S}_{A_1A_2} \cap \mathcal{S}_{A_1A_3}$ of all points equidistant from three given points $A_1$, $A_2$, $A_3$ (right).} \label{} \end{figure} \subsection{Translation tetrahedra and their circumscribed spheres} We consider $4$ points $A_1$, $A_2$, $A_3$, $A_4$ in the projective model of the $\mathbf{Sol}$ space (see Section 2). These points are the vertices of a {\it translation tetrahedron} in the $\mathbf{Sol}$ space if no two {\it translation segments} connecting the points $A_i$ and $A_j$ $(i<j,~i,j \in \{1,2,3,4\}$) have common inner points and no three vertices lie on the same translation curve. Now, the translation segments $A_iA_j$ are called edges of the translation tetrahedron $A_1A_2A_3A_4$. \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig7.eps} \caption{Translation tetrahedron with vertices $A_1=(1,0,0,0)$, $A_2=(1,\sqrt{3}/8,$ $1/8,1/40)$, $A_3=(1,1/8,\sqrt{3}/8,-1/40)$, $A_4=(1,1/20,3/20,1/5)$ and its circumscribed sphere of radius $r \approx 0.14688$ with circumcenter $C=(1,\approx 0.08198, \approx 0.10540, \approx 0.06319)$.} \label{} \end{figure} The circumscribed sphere of a translation tetrahedron is a translation sphere (see Definition 2.2, (2.12) and Fig.~1) that passes through each of the tetrahedron's vertices. As in the Euclidean case, the radius of a translation sphere circumscribed around a tetrahedron $T$ is called the circumradius of $T$, and the centre point of this sphere is called the circumcenter of $T$. \begin{lem} For any translation tetrahedron there exists a unique translation sphere (called the circumsphere) on which all four vertices lie. \end{lem} \begin{figure}[ht] \centering \includegraphics[width=12cm]{Fig8.eps} \caption{Translation tetrahedron with vertices $A_1=(1,0,0,0)$, $A_2=(1,\sqrt{3}/8,$ $1/8,1/40)$, $A_3=(1,1/8,\sqrt{3}/8,-1/40)$, $A_4=(1,-3/20,-3/20,$ $3/10)$ and its circumscribed sphere of radius $r \approx 0.36332$ with circumcenter $C=(1,\approx 0.04904, \approx 0.17721, \approx 0.32593)$. } \label{} \end{figure} {\bf Proof:}~ The lemma follows directly from the properties of the translation distance function (see Definition 2.1 and (2.12)).
The procedure to determine the radius and the circumcenter of a given translation tetrahedron is the following: the circumcenter $C=(1,x,y,z)$ of a given translation tetrahedron $A_1A_2A_3A_4$ $(A_i=(1,x^i,y^i,z^i), ~ i \in \{1,2,3,4\})$ has to satisfy the following system of equations: \begin{equation} d^t(A_1,C)=d^t(A_2,C)=d^t(A_3,C)=d^t(A_4,C), \tag{3.10} \end{equation} therefore it lies on the translation-like bisector surfaces $\mathcal{S}_{A_i,A_j}$ $(i<j,~i,j \in \{1,2,3,4\}$), whose equations are determined in Lemma 3.3. The coordinates $x,y,z$ of the circumcenter of the circumscribed sphere around the tetrahedron $A_1A_2A_3A_4$ are obtained from the system of equations derived from the fact that \begin{equation} C \in \mathcal{S}_{A_1A_2}, \mathcal{S}_{A_1A_3}, \mathcal{S}_{A_1A_4}. \tag{3.11} \end{equation} Finally, we get the circumradius $r$ as a translation distance, e.g. $r=d^t(A_1,C)$. We applied the above procedure to two tetrahedra and determined the centres and the radii of their circumscribed balls, as described in Fig.~7 and 8.~ $\square$ \begin{thebibliography}{12} \bibitem{BT} {Brieussel,~J.~--~Tanaka,~R.,} Discrete random walks on the group $\mathbf{Sol}$. \textit{Isr. J. Math.,} {\bf 208/1}, 291-321 (2015). \bibitem{Ch} Chavel,~I., {\it Riemannian Geometry: A Modern Introduction}. Cambridge Studies in Advanced Mathematics, (2006). \bibitem{KN} Kobayashi,~S.~--~Nomizu,~K., {\it Foundations of Differential Geometry, I}. Interscience, Wiley, New York (1963). \bibitem{Mi} Milnor,~J., Curvatures of left invariant metrics on Lie groups. {\it Advances in Math.,} {\bf 21}, 293--329 (1976). \bibitem{M97} Moln{\'a}r,~E., The projective interpretation of the eight 3-di\-men\-sional homogeneous geometries. {\it Beitr. Algebra Geom.,} {\bf 38}(2), 261--288 (1997). \bibitem{CaMoSpSz} {Cavichioli,~A.~--~Moln\'ar,~E.~--~Spaggiari,~F.~--~Szirmai,~J.,} Some tetrahedron manifolds with $\mathbf{Sol}$ geometry. \textit{J. Geom.,} {\bf 105/3}, 601-614 (2014). \bibitem{KV} {Kotowski,~M.~--~Vir\'ag,~B.,} Dyson's spike for random Schroedinger operators and Novikov-Shubin invariants of groups. \textit{Manuscript (2016)} arXiv:1602.06626. \bibitem{MoSzi10} {Moln{\'a}r,~E.~--~Szil\'agyi,~B.,} Translation curves and their spheres in homogeneous geometries. \textit{Publ. Math. Debrecen,} {\bf 78/2}, 327-346 (2010). \bibitem{MSz} {Moln{\'a}r,~E.~--~Szirmai,~J.,} Symmetries in the 8 homogeneous 3-geometries. \textit{Symmetry Cult. Sci.,} {\bf 21/1-3}, 87-117 (2010). \bibitem{MSz12} {Moln{\'a}r,~E.~--~Szirmai,~J.,} Classification of $\mathbf{Sol}$ lattices. \textit{Geom. Dedicata,} {\bf 161/1}, 251-275 (2012). \bibitem{MSzV} Moln{\'a}r,~E.~--~Szirmai,~J.~--~Vesnin,~A., Projective metric realizations of cone-manifolds with singularities along 2-bridge knots and links. {\it J. Geom.,} {\bf 95}, 91-133 (2009). \bibitem{PSSz10} {Pallagi,~J.~--~Schultz,~B.~--~Szirmai,~J.}, Visualization of geodesic curves, spheres and equidistant surfaces in $\bS^2\!\times\!\bR$ space. \emph{KoG}, {\bf 14}, 35-40 (2010). \bibitem{PSSz11-1} {Pallagi,~J.~--~Schultz~B.~--~Szirmai,~J.}, {Equidistant surfaces in $\mathbf{Nil}$ space,} \emph{Stud. Univ. Zilina, Math. Ser.,} {\bf 25}, 31--40 (2011). \bibitem{PSSz11-2} {Pallagi,~J.~--~Schultz,~B.~--~Szirmai,~J.}, Equidistant surfaces in $\bH^2\!\times\!\bR$ space.
\emph{KoG}, {\bf 15}, 3-6 (2011). \bibitem{S} Scott,~P., The geometries of 3-manifolds. {\it Bull. London Math. Soc.} {\bf 15}, 401--487 (1983). \bibitem{Sz13-1} Szirmai,~J., A candidate to the densest packing with equal balls in the Thurston geometries. {\it Beitr. Algebra Geom.,} {\bf 55}(2), 441--452 (2014). \bibitem{Sz13-2} Szirmai,~J., The densest translation ball packing by fundamental lattices in $\mathbf{Sol}$ space. {\it Beitr. Algebra Geom.,} {\bf 51}(2) 353--373 (2010). \bibitem{Sz17} Szirmai,~J., Triangle angle sums related to translation curves in $\mathbf{Sol}$ geometry. {\it Manuscript [2017]}. \bibitem{T} Thurston,~W.~P. (and Levy,~S. editor), {\it Three-Dimensional Geometry and Topology}. Princeton University Press, Princeton, New Jersey, vol. {\bf 1} (1997). \end{thebibliography} \end{document}
\begin{document} \title{Engineering Framework for Optimizing Superconducting Qubit Designs} \author{\mbox{Fei Yan}} \thanks{[email protected]} \altaffiliation{Current address: Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Youngkyu Sung} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Philip Krantz} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Archana Kamal} \altaffiliation{Current address: Department of Physics and Applied Physics, University of Massachusetts Lowell, Lowell, MA 01854, USA} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{David K. Kim} \affiliation{MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA} \author{Jonilyn L. Yoder} \affiliation{MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA} \author{Terry P. Orlando} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{Simon Gustavsson} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \author{\mbox{William D. Oliver}} \affiliation{Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \affiliation{MIT Lincoln Laboratory, 244 Wood Street, Lexington, MA 02421, USA} \affiliation{Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA} \begin{abstract} Superconducting quantum technologies require qubit systems whose properties meet several often conflicting requirements, such as long coherence times and high anharmonicity. Here, we provide an engineering framework based on a generalized superconducting qubit model in the flux regime, which abstracts multiple circuit design parameters and thereby supports design optimization across multiple qubit properties. We experimentally investigate a special parameter regime which has both high anharmonicity ($\sim\!1$\,GHz) and long quantum coherence times ($T_1\!=\!40\!-\!80\,\mathrm{\mu s}$ and $T_\mathrm{2Echo}\!=\!2T_1$). \end{abstract} \maketitle Since the first direct observation of quantum coherence in a superconducting qubit more than 20 years ago \cite{Nakamura-Nature-1999}, many variants have been designed and studied \cite{krantz2019quantum}, such as the Cooper-pair box (CPB) \cite{Nakamura-Nature-1999}, the persistent-current flux qubit (PCFQ) \cite{Orlando-PRB-1999,Mooij-Science-1999}, the transmon \cite{Koch-PRA-2007,Paik-PRL-2011}, the fluxonium \cite{Manucharyan-Science-2009}, and the capacitively-shunted flux qubit (CSFQ) \cite{Steffen-PRL-2010,Yan-NComms-2016}. 
These superconducting qubit designs were usually categorized according to the ratio between the effective charging energy $E_\mathrm{C}$ and Josephson energy $E_\mathrm{J}$, into the charge ($E_\mathrm{J} \leq E_\mathrm{C}$) or flux ($E_\mathrm{J} \gg E_\mathrm{C}$) regime \cite{Clarke-Nature-2008}. The CPB (Fig.~\ref{fig:circuit_main}), a representative in the charge regime, provides large anharmonicity that facilitates fast gate operations. However, strong background charge noise limits its coherence time \cite{Nakamura-Nature-1999}, and the dispersion from quasiparticle tunneling causes severe frequency instability \cite{schreier2008suppressing}. Likewise, qubits in the flux regime, including the transmon, PCFQ, CSFQ, fluxonium and the rf-SQUID qubit, have been studied extensively as potential elements for gate-based quantum computing \cite{Kelly-Nature-2015,Ofek-Nature-2016,corcoles2015demonstration,Riste-NComms-2015}, quantum annealing \cite{Johnson-Nature-2011,Barends-Nature-2016}, quantum simulations \cite{Barends-NComms-2015} and many other applications, largely due to the flexibility in engineering their Hamiltonians and due to their relative insensitivity to charge noise. In this work, we provide an engineering framework based on a generalized flux qubit (GFQ) model which accommodates most (if not all) contemporary qubit variants. The framework facilitates an understanding of how key qubit properties are related to circuit parameters. The increased complexity, i.e., the use of both a shunt capacitor and an array of Josephson junctions, enables better control over coherence, anharmonicity and qubit frequency. As an example of implementing this framework, we experimentally demonstrate a special parameter regime, the ``quarton'' regime, named after its quartic potential profile. In comparison with other state-of-the-art designs, the quarton can simultaneously maintain a desirable qubit frequency ($3\!-\!4$\,GHz), large anharmonicity ($\sim\!1$\,GHz), and high coherence ($T_1\!=\!40\!-\!80\,\mathrm{\mu s}$, $T_\mathrm{2Echo}\!=\!2T_1$). Such a configurable energy level structure is advantageous with respect to the problem of frequency crowding in highly connected qubit systems. We show experimentally that quarton qubits with as few as 8 and 16 array junctions and with a much smaller shunt capacitor allow for a compact design, promising better scalability and reproducibility. \begin{figure}\label{fig:circuit_main} \end{figure} In the GFQ circuit with $N$ array junctions (Fig.~\ref{fig:circuit_main}), each junction is associated with a gauge-invariant (branch) phase $\varphi_i$. These phases satisfy the fluxoid quantization condition, $\sum_{k=1}^{N+1}\!\varphi_k \,+\, \varphi_\mathrm{e} = 2\pi z$ ($z\in\mathbb{Z}$), where $\varphi_\mathrm{e} = 2\pi \Phi_\mathrm{e}/\Phi_0$. $\Phi_\mathrm{e}$ is the external magnetic flux threading the qubit loop and $\Phi_0=h/2\mathrm{e}$ is the superconducting flux quantum. Although the full Hamiltonian is $N$-dimensional, the symmetry among the array junctions allows the full Hamiltonian to be approximated by a one-dimensional Hamiltonian; and this dimension coincides with a lower-energy mode that describes the qubit \cite{Ferguson-PRX-2013}. 
This one-dimensional Hamiltonian is given by \begin{align} \label{eq:H_plus} \mathcal{H} &= - 4 E_\mathrm{C} \, {\partial_{\phi}}^2 + E_\mathrm{J} \Big( - \gamma N \cos(\phi/N) - \cos(\phi + \varphi_\mathrm{e}) \Big) \;, \end{align} where $\phi=\varphi_1+\varphi_2+\cdots+\varphi_N$ is the phase variable of the qubit mode, and $\gamma$ is the size ratio between an array junction and the smaller principal junction. The effective charging energy is $E_\mathrm{C} = \mathrm{e}^2/2C_\Sigma$, where $C_\Sigma = C_\mathrm{sh} + C_\mathrm{J} + \gamma C_\mathrm{J}/N + C_\mathrm{g}$ is the total capacitance across the principal junction. $C_\mathrm{g}$ is the correction from stray capacitances from superconducting islands to ground, which may become significant for large $N$ \cite{Ferguson-PRX-2013,Viola-PRB-2015}. The principal junction has a Josephson energy $E_\mathrm{J} = I_\mathrm{c} \Phi_0/2\pi$. In the $E_\mathrm{J}\!\gg\!E_\mathrm{C}$ limit, $\phi$ is well-defined and has small quantum fluctuations. Such a multi-junction qubit (total junction number $\geqslant3$) achieves the best coherence when biased at $\varphi_\mathrm{e}=\pi$, where the qubit frequency is (at least) first-order insensitive to flux fluctuations. At this working point, we may expand the potential part of Eq.~(\ref{eq:H_plus}) to fourth order, \begin{align} \label{eq:H_plus_4th} \mathcal{H} &= - 4 E_\mathrm{C} \, {\partial_{\phi}}^2 + E_\mathrm{J} \Big( \frac{\gamma/N-1}{2} \phi^2 + \frac{1}{24} \phi^4 \Big) \;, \end{align} where we have assumed $N^3\!\gg\!\gamma$. Depending on the value of $\gamma/N$, the problem can be categorized into one of the three regimes illustrated in Fig.~\ref{fig:circuit_main}(b). The fluxon regime (\textbf{\romannumeral 1}), $1\!<\!\gamma\!<\!N$, was first demonstrated in the traditional PCFQ with $N=2$ \cite{VanDerWal-Science-2000}, where the potential assumes a double-well profile, providing strong anharmonicity. The fluxonium extends the case to $N\!\approx\!100$ \cite{Pop-Nature-2014}. The energy eigenstates can be treated as hybridized states via quantum-mechanical tunneling between neighboring wells. The plasmon regime (\textbf{\romannumeral 2}), $\gamma\!>\!N$, was explored in the CSFQ, where the potential assumes a single-well profile. A leading quadratic term and a minor quartic term lead to weak anharmonicity, though the CSFQ is still more anharmonic than the transmon due to partial cancellation of the quadratic term \cite{Yan-NComms-2016}. The quarton regime (\textbf{\romannumeral 3}), $\gamma \approx N$, reduces the problem to that of a particle in a quartic potential. As we will show later, the quarton design has desirable features in its energy level configuration. We notice that a similar design with $\gamma\approx N=3$ was used in a parametric amplifier, but it operates at non-degenerate bias to exploit the cubic potential term and, on the contrary, eliminate the quartic one \cite{frattini20173}. \begin{figure}\label{fig:parameter_tradespace} \end{figure} To find qubit designs with predetermined desirable properties, we may expand the parameter space beyond $E_\mathrm{J}/E_\mathrm{C}$, to include $I_\mathrm{c}$, $C_\Sigma$, $N$ and $\gamma/N$ as the four independent design parameters. Our engineering framework provides an abstraction that captures the underlying physics to develop a set of rules or guidelines by which one can understand the parameter-property tradespace, as illustrated in Fig.~\ref{fig:parameter_tradespace}(a).
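To make the regime picture concrete, the low-lying spectrum of Eq.~(\ref{eq:H_plus_4th}) at $\varphi_\mathrm{e}=\pi$ can be obtained by a straightforward finite-difference diagonalization. The Python sketch below is only an illustration added here (it assumes the numpy package, and the energy values are arbitrary placeholders rather than device parameters); for $\gamma/N=1$ it reproduces an anharmonicity-to-frequency ratio close to $1/3$, consistent with the quartic-potential levels discussed below.
\begin{verbatim}
import numpy as np

def low_levels(EJ, EC, gamma_over_N, nlev=3, L=6.0, npts=1201):
    # lowest eigenvalues of H = -4*EC*d^2/dphi^2
    #   + EJ*((gamma/N - 1)/2 * phi^2 + phi^4/24),
    # discretized by second-order finite differences on phi in [-L, L]
    phi, h = np.linspace(-L, L, npts, retstep=True)
    V = EJ * ((gamma_over_N - 1.0) / 2.0 * phi**2 + phi**4 / 24.0)
    off = -4.0 * EC / h**2 * np.ones(npts - 1)
    H = np.diag(8.0 * EC / h**2 + V) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:nlev]

EJ, EC = 20.0, 1.0                 # illustrative energies (e.g. in GHz), not device values
for r in (0.9, 1.0, 1.1):          # fluxon-like, quarton, plasmon-like
    E0, E1, E2 = low_levels(EJ, EC, r)
    f01, anh = E1 - E0, (E2 - E1) - (E1 - E0)
    print("gamma/N = %.1f:  f01 = %.3f,  anharmonicity = %.3f" % (r, f01, anh))
\end{verbatim}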
Two of the circuit parameters, $I_\mathrm{c}$ and $C_\Sigma$, have been studied extensively. In general, a lower $I_\mathrm{c}$ and a higher $C_\Sigma$ are preferred for reducing sensitivity to flux and charge noise respectively. The energy level structure is also generally sensitive to their values, depending on the specific case. In the following, we focus on the discussion of the other two quantities, $\gamma/N$ and $N$. First, we consider $\gamma/N$ as an independent variable instead of $\gamma$, because the Hamiltonian in Eq.~(\ref{eq:H_plus_4th}) is parameterized by $E_\mathrm{J}$, $E_\mathrm{C}$, and $\gamma/N$. We find that a smaller $\gamma/N$ leads to a smaller qubit frequency and a larger anharmonicity, except for certain cases such as very small $N$ or $\gamma\approx1$. An example is shown in Fig.~\ref{fig:parameter_tradespace}(b). An intuitive explanation is as follows. With a symmetric potential profile, the wavefunctions of the ground state $\ket{0}$ and the second-excited state $\ket{2}$ have even parity while the excited state $\ket{1}$ has odd parity. Reducing $\gamma/N$ will raise the potential energy around $\phi = 0$, pushing up $\ket{0}$ and $\ket{2}$ due to their non-zero amplitudes at $\phi = 0$. In contrast, the odd-parity $\ket{1}$ state is unaffected, leading to a smaller $\omega_{01}$ and a greater $\omega_{12}$. At the critical value $\gamma/N=1$, the quadratic term in Eq.~(\ref{eq:H_plus_4th}) is canceled, resulting in a quartic potential. We can find the solutions numerically with $E_n = \lambda_n ( \frac{2}{3} E_\mathrm{J} {E_\mathrm{C}}^2 )^{1/3}$. For the lowest three levels, we find $\lambda_0=1.0604$, $\lambda_1=3.7997$ and $\lambda_2=7.4557$. Note that $\lambda_2-\lambda_1\approx \frac{4}{3}(\lambda_1-\lambda_0)$, suggesting that the anharmonicity of the quarton qubit is about 1/3 of its qubit frequency. This interesting finding is useful in practice, as it is common to operate qubits in the frequency range of 3-6\,GHz, in part for better qubit initialization ($\omega_{01}\!\gg\!k_\mathrm{B}T/\hbar$), and in part for compatibility with high-performance microwave control electronics, although, exceptions exist using non-adiabatic control \cite{campbell2020universal,zhang2020universal}. One third of the qubit frequency gives 1-2\,GHz anharmonicity, sufficient for suppressing leakage to non-computational states and alleviating the frequency-crowding problem, so that higher single- and two-qubit gate fidelities are achievable. Second, we find that the charge dispersion that causes qubit frequency instability and dephasing can be efficiently suppressed by increasing $N$. The size of the charge dispersion $\Delta\epsilon$ can be estimated from the tight-binding hopping amplitude between neighboring lattice sites in the potential landscape, which becomes exponentially small with respect to the height of the inter-lattice barrier. In the example of a single junction, it was shown that $\Delta\epsilon \propto \exp(-\sqrt{8 E_\mathrm{J}/E_\mathrm{C}})$ \cite{Koch-PRA-2007}. It is more complicated for flux qubits due to the multi-dimensionality of the Hamiltonian. In the quarton, a simulation of the full $N$-dimensional Hamiltonian shows that the suppression is even more efficient, because the barrier height scales with $N^2$ [Fig.~\ref{fig:parameter_tradespace}(c)]. Typically, $N\geqslant6$ is sufficient to suppress charge dispersion down to the kilohertz level. 
This implies that it is not necessary to use $\sim\!\!100$ junctions to suppress the charge noise for better coherence~\cite{Manucharyan-Science-2009,Pop-Nature-2014}. A more compact and easier-to-fabricate design can be achieved with the quarton. There are other advantages of using a junction array. For example, $T_1$ relaxation due to quasiparticle tunneling across an array junction may be improved by increasing $N$, since the corresponding matrix element $\bra{0} \sin (\frac{\varphi_i}{2}) \ket{1} \approx \bra{0} \sin (\frac{\phi}{2N}) \ket{1}$ scales as $1/N$ (it vanishes at $\varphi_\mathrm{e}=\pi$ for tunneling across the principal junction) \cite{Catelani-PRB-2011}. However, it is also important to limit the array size, since more junctions lead to more decohering channels from the parasitic capacitance to ground and from the Aharonov-Casher effect, and because the qubit loop area should be kept small to avoid excess flux noise. In general, a trade-off has to be found. Such an example is discussed within the context of optimizing coherence time given only a few noise sources \cite{mizel2019rightsizing}. The enhanced controllability over qubit properties in the GFQ model allows more flexibility in qubit design. For example, the transmon requires a large shunt capacitance to suppress charge dispersion. This unavoidably lowers $E_\mathrm{C}$ and anharmonicity, as well as increases the qubit footprint. With the introduction of the junction array in the GFQ, one gains more freedom in configuring the qubit frequency and anharmonicity while, at the same time, the junction array helps suppress charge dispersion. \begin{figure}\label{fig:modality_compare} \end{figure} To demonstrate the concept, we implemented the quarton design with $N=8,16$. As a practical matter, we have found it best to decide first on the target qubit frequency and its anharmonicity. Since $\mathcal{A}/\omega_{01}$ mostly depends on $\gamma/N$, one may fix $\gamma/N$ before optimizing other parameters, simplifying the design process. The circuit layout, fabrication process, and measurement setup are similar to those presented in our previous work \cite{Yan-NComms-2016}. The aluminum metallization layer is patterned with square-shaped pads for the shunt capacitor and a half-wave-length transmission-line resonator for readout. A slightly larger qubit loop, about $10\times20\,\mathrm{\mu m}^2$, is used here for housing the array junctions. The junctions are made in the standard Dolan-bridge style. We tested multiple samples with varying parameters. The typical design parameters are $I_\mathrm{c}=15-40$\,nA, $C_\Sigma=20-30$\,fF, $\gamma/N=0.85-1.1$. Results are shown in Table I and Fig.~\ref{fig:modality_compare}. Most samples have qubit frequencies spread within 2-4\,GHz and anharmonicities above 800\,MHz, including some ideal cases like sample H ($\omega_{01}=3.4$\,GHz, $\mathcal{A}=1.9$\,GHz). Fig.~\ref{fig:modality_compare}(b) shows that the $\mathcal{A}/\omega_{01}$ ratios of these samples spread around the 1/3 line. In comparison to transmon-type qubits and CSFQs, which have much lower anharmonicities (200-300\,MHz and 500\,MHz, respectively), and to fluxonium qubits, whose qubit frequencies are consistently below 1\,GHz, the quarton qubits demonstrate a practically useful parameter regime where trade-offs between qubit frequencies and anharmonicities can be made.
However, by comparing the predicted and experimentally inferred values of $\gamma/N$, we find that variation during fabrication and subsequent aging may cause significant fluctuations in actual values. Qubit frequencies in many samples undershoot our target values ($\geqslant3$\,GHz), possibly due to junction-aging effects that reduce $I_\mathrm{c}$. Studying and improving reproducibility will be a main objective in the future. The quarton qubits show $T_1$ times comparable to those of 2D transmon-type qubits and CSFQs [Fig.~\ref{fig:modality_compare}(c)]. We believe surface participation is the common key factor affecting coherence. Fluxonium devices generally have longer $T_1$ times, in some cases on the order of 1\,ms \cite{Pop-Nature-2014}. The enhancement is due to the suppressed dipole matrix element in the deep fluxon regime, and due to weaker low-frequency noise from Ohmic or super-Ohmic dielectric loss, i.e., power spectral densities $S(\omega)\propto\omega^d$ ($d\geqslant1$) \cite{Nguyen_2019}. The quarton qubits also show long spin-echo times $T_\mathrm{2Echo}$. The highlighted samples in Table I approach the $T_\mathrm{2Echo}\!=\!2T_1$ limit, indicating low residual thermal cavity photons due to our optimized measurement setup \cite{yan2018distinguishing}. To conclude, our GFQ framework facilitates the understanding of how key qubit properties are related to circuit parameters. In particular, we find that $\gamma/N$ is effective in tuning the ratio between anharmonicity and qubit frequency, and that increasing $N$ is effective in suppressing charge dispersion. We experimentally demonstrate how to take advantage of these findings by testing the quarton design, which simultaneously achieves a desirable qubit frequency, large anharmonicity, and high coherence while maintaining a compact design. The configurable energy level structure alleviates the problem of frequency crowding, promising better two-qubit gate performance from schemes such as parametric gates \cite{mckay2016universal,caldwell2018parametrically}. Future improvement in reproducibility can transform such designs into powerful building blocks in quantum information processing.
\begin{table*}[t] \begin{tabular}{ccccccccccc} \hline Device & $N$ & $I_\mathrm{c}$ [nA] & $C_\mathrm{sh}$ [fF] & $\gamma/N$ & $\omega_{01}$ [GHz] & $\mathcal{A}$ [GHz] & $\mathcal{A}/\omega_{01}$ & $T_1$ [$\mu$s] & $T_\mathrm{2Echo}$ [$\mu$s] & Local Bias \\ \hline \rowcolor{green!25} A & 8 & 21 & 20 & 0.92 & 3.6 & 1.0 & 0.28 & $43.1\pm7.5$ & 70-100 & No \\ \hline B & 8 & 21 & 30 & 0.95 & 2.8 & 0.8 & 0.29 & 23 & - & No \\ \hline \rowcolor{green!25} C & 8 & 21 & 30 & 0.92 & 2.6 & 1.0 & 0.38 & $82.9\pm7.9$ & 100-125 & No \\ \hline D & 8 & 18 & 30 & 0.92 & 2.4 & 0.9 & 0.38 & 20 & - & No \\ \hline E & 8 & 18 & 30 & 0.93 & 2.0 & 1.0 & 0.50 & 50 & 12 & Yes \\ \hline F & 8 & 40 & 20 & 0.93 & 3.7 & 1.6 & 0.43 & - & - & Yes \\ \hline G & 8 & 40 & 20 & 0.98 & 4.7 & 1.2 & 0.26 & 10 & 7 & No \\ \hline H & 8 & 40 & 20 & 1.0 & 3.4 & 1.9 & 0.56 & 23 & 15 & Yes \\ \hline \rowcolor{green!25} I & 16 & 14 & 20 & 0.84 & 3.0 & 0.9 & 0.30 & $50.6\pm9.2$ & 110-140 & No \\ \hline J & 16 & 14 & 20 & 1.09 & 3.8 & 0.6 & 0.16 & 30 & - & No \\ \hline K & 16 & 27 & 20 & 0.88 & 2.6 & 1.0 & 0.38 & 30 & 20 & Yes \\ \hline \rowcolor{blue!25} CSFQ & 2 & 60 & 50 & 1.2 & 4.7 & 0.5 & 0.11 & 35-55 & 70-90 & No \\ \hline \end{tabular} \vspace*{1mm} \caption[] { Design parameters ($N$, $I_\mathrm{c}$, $C_\mathrm{sh}$, $\gamma/N$) and measurement results ($T_1$ and spin-echo dephasing time $T_\mathrm{2Echo}$) of quarton qubits. The $\gamma/N$ ratios are designed to be in the vicinity of 1 with slight variations. We note that the measured frequencies mostly have better agreement with a 30-40\% lower $I_\mathrm{c}$ than the designed value, possibly due to junction aging. Highlighted are samples with repeated measurements of coherence times (typically 50-100 times over $\sim$10 hours). Results for the other samples lack statistical confidence and are presented for reference. The CSFQ results from Ref.~\cite{Yan-NComms-2016} are also listed for comparison. Note that all quartons have a much smaller shunt capacitance than the CSFQ. The reduced footprint promises better scalability. }\label{table1} \end{table*} \end{document}
\begin{document} \begin{abstract} In \cite{AC1} we introduced a method combining an observability inequality and a spectral decomposition to get a logarithmic stability estimate for the inverse problem of determining both the potential and the damping coefficient in a dissipative wave equation from boundary measurements. The present work deals with an adaptation of that method to obtain a logarithmic stability estimate for the inverse problem of determining a boundary damping coefficient from boundary measurements. As in our preceding work, the different boundary measurements are generated by varying one of the initial conditions. \noindent {\bf Keywords}: inverse problem, wave equation, boundary damping coefficient, logarithmic stability, boundary measurements. \noindent {\bf MSC}: 35R30. \end{abstract} \maketitle \tableofcontents \section{Introduction} We are concerned with an inverse problem for the wave equation when the spatial domain is the square $\Omega=(0,1)\times (0,1)$. To this end we consider the following initial-boundary value problem (abbreviated to IBVP in the sequel): \begin{equation}\label{1.1} \left\{ \begin{array}{lll} \partial _t^2 u - \Delta u = 0 \;\; &\mbox{in}\; Q=\Omega \times (0,\tau), \\ u = 0 &\mbox{on}\; \Sigma _0=\Gamma _0 \times (0,\tau), \\ \partial _\nu u +a\partial _tu=0 &\mbox{on}\; \Sigma _1=\Gamma _1 \times (0,\tau), \\ u(\cdot ,0) = u^0,\; \partial_t u (\cdot ,0) = u^1. \end{array} \right. \end{equation} Here \begin{align*} &\Gamma _0=((0,1)\times \{1\})\cup (\{1\}\times (0,1)), \\ &\Gamma _1=((0,1)\times \{0\})\cup (\{0\}\times (0,1)) \end{align*} and $\partial _\nu =\nu \cdot \nabla$ is the derivative along $\nu$, the unit normal vector pointing outward of $\Omega$. We note that $\nu$ is everywhere defined except at the vertices of $\Omega$, and we set $\Gamma = \Gamma_0 \cup \Gamma_1$. The boundary coefficient $a$ is usually called the boundary damping coefficient. In the rest of this text we identify $a_{|(0,1)\times \{0\}}$ with $a_1=a_1(x)$, $x\in (0,1)$, and $a_{|\{0\}\times (0,1)}$ with $a_2=a_2(y)$, $y\in (0,1)$. In that case it is natural to identify $a$, defined on $\Gamma _1$, with the pair $(a_1,a_2)$. \subsection{The IBVP} We fix $1/2<\alpha \le 1$ and we assume that $a\in \mathscr{A}$, where \[ \mathscr{A}=\{ b=(b_1,b_2)\in C^\alpha ([0,1])^2,\; b_1(0)=b_2(0),\; b_j\geq 0\}. \] This assumption guarantees that the multiplication operator by $a_j$, $j=1,2$, defines a bounded operator on $H^{1/2}((0,1))$. This fact will be proved in Appendix A. Let $V=\{ u\in H^1(\Omega );\; u=0\; \textrm{on}\; \Gamma _0\}$ and consider on $V\times L^2(\Omega )$ the linear unbounded operator $A_a$ given by \[ A_a(v,w)= (w,\Delta v),\quad D(A_a)=\{ (v,w)\in V\times V;\; \Delta v\in L^2(\Omega )\; \textrm{and}\; \partial _\nu v=-aw\; \textrm{on}\; \Gamma _1\}. \] One can prove that $A_a$ is an m-dissipative operator on the Hilbert space $V\times L^2(\Omega )$ (for the reader's convenience we detail the proof in Appendix B). Therefore, $A_a$ is the generator of a strongly continuous semigroup of contractions $e^{tA_a}$. Hence, for each $(u^0,u^1)$, the IBVP \eqref{1.1} possesses a unique solution denoted by $u_a=u_a(u^0,u^1)$ so that \[ (u_a,\partial _tu_a)\in C([0,\infty );D(A_a))\cap C^1([0,\infty ),V\times L^2(\Omega )). \] \subsection{Main result} For $0<m\leq M$, we set \[ \mathscr{A}_{m,M}=\{ b=(b_1,b_2)\in \mathscr{A}\cap H^1(0,1)^2;\; m\leq b_j,\; \|b_j\|_{H^1(0,1)}^2\leq M\}.
\] Let $\mathcal{U}_0$ be given by \[ \mathcal{U}_0=\{ v\in V;\; \Delta v\in L^2(\Omega )\; \textrm{and}\; \partial _\nu v=0\; \textrm{on}\; \Gamma _1\}. \] We observe that $\mathcal{U}_0\times \{0\}\subset D(A_a)$, for any $a\in \mathscr{A}$. Let $C_a\in \mathscr{B}(D(A_a);L^2(\Sigma _1))$ be defined by \[ C_a(u^0,u^1)=\partial _\nu u_a(u^0,u^1){_{|\Gamma _1}}. \] We define the initial-to-boundary operator \[ \Lambda _a :u^0\in \mathcal{U}_0 \longrightarrow C_a(u^0,0)\in L^2(\Sigma _1). \] Clearly $C_a\in \mathscr{B}(D(A_a);L^2(\Sigma _1))$ implies that $\Lambda _a\in \mathscr{B}(\mathcal{U}_0;L^2(\Sigma _1))$, when $\mathcal{U}_0$ is identified with a subspace of $D(A_a)$ endowed with the graph norm of $A_a$. Precisely, the norm in $\mathcal{U}_0$ is the following: \[ \|u^0\|_{\mathcal{U}_0}=\left(\|u^0\|_V^2+\|\Delta u^0\|_{L^2(\Omega )}^2\right)^{1/2}. \] Henceforth, for the sake of simplicity, the norm of $\Lambda_a-\Lambda _0$ in $\mathscr{B}(\mathcal{U}_0;L^2(\Sigma _1))$ will be denoted by $\| \Lambda_a-\Lambda _0\|$. \begin{theorem}\label{theorem1.1} There exists $\tau _0>0$ so that for any $\tau >\tau _0$, we find a constant $c>0$ depending only on $\tau$ such that \begin{equation}\label{1.2} \|a-0\|_{L^2((0,1))^2} \le cM\left(\left| \ln \left(m^{-1}\| \Lambda_a-\Lambda _0\|\right)\right|^{-1/2}+m^{-1}\|\Lambda_a-\Lambda _0\|\right), \end{equation} for each $a\in \mathscr{A}_{m,M}$. \end{theorem} We point out that our choice of the domain $\Omega$ is motivated by the fact that the spectral analysis of the Laplacian under mixed boundary conditions is very simple in that case. However, this choice has the inconvenience that the square domain $\Omega$ is no longer smooth. So we need to prove an observability inequality associated with this non-smooth domain. This is done by adapting the existing results. We note that the key point in establishing this observability inequality relies on a Rellich type identity for the domain $\Omega$. The inverse problem we discuss in the present paper remains largely open for an arbitrary (smooth) domain, as well as for the stability around a nonzero damping coefficient. Uniqueness and directional Lipschitz stability, around the origin, were established by the authors in \cite{AC2}. The determination of a potential and/or the sound speed coefficient in a wave equation from the so-called Dirichlet-to-Neumann map has been extensively studied in the last decades. We refer to the comments in \cite{AC1} for more details. \section{Preliminaries} \subsection{Extension lemma} We decompose $\Gamma _1$ as follows: $\Gamma _1=\Gamma _{1,1}\cup \Gamma_{1,2}$, where $\Gamma _{1,1}=(0,1)\times \{0\}$ and $\Gamma_{1,2}=\{0\}\times (0,1)$. Similarly, we write $\Gamma _0=\Gamma_{0,1}\cup\Gamma _{0,2}$, with $\Gamma_{0,1}=\{1\}\times (0,1)$ and $\Gamma_{0,2}=(0,1)\times \{1\}$. Let $(g_1,g_2)\in L^2((0,1))^2$. We say that the pair $(g_1,g_2)$ satisfies the compatibility condition of the first order at the vertex $(0,0)$ if \begin{equation}\label{1.3} \int_0^1|g_1(t)-g_2(t)|^2\frac{dt}{t}<\infty . \end{equation} Similarly, we can define the compatibility condition of the first order at the other vertices of $\Omega$. We also need to introduce compatibility conditions of the second order. Let $(f_j,g_j)\in H^1((0,1))\times L^2((0,1))$, $j=1,2$.
We say that the pair $[(f_1,g_1),(f_2,g_2)]$ satisfies the compatibility conditions of second order at the vertex $(0,0)$ when \begin{equation} f_1(0)=f_2(0), \quad \int_0^1|f'_1(t)-g_2(t)|^2\frac{dt}{t}<\infty \quad \textrm{and}\quad \int_0^1|g_1(t)-f'_2(t)|^2\frac{dt}{t}<\infty . \end{equation} The compatibility conditions of the second order at the other vertices of $\Omega$ are defined in the same manner. The following theorem is a special case of \cite[Theorem 1.5.2.8, page 50]{Gr}. \begin{theorem}\label{theorem1.3} (1) The mapping \[ w\longrightarrow (w_{|\Gamma _{0,1}},w_{|\Gamma _{0,2}},w_{|\Gamma _{1,1}},w_{|\Gamma _{1,2}})=(g_1,\ldots ,g_4), \] defined on $\mathscr{D}(\overline{\Omega} )$ is extended from $H^1(\Omega )$ onto the subspace of $H^{1/2}((0,1))^4$ consisting in functions $(g_1,\ldots ,g_4)$ so that the compatibility condition of the first order is satisfied at each vertex of $\Omega$ in a natural way with the pairs $(g_j,g_k)$. \\ (2) The mapping \[ w\rightarrow (w_{|\Gamma _{0,1}}, \partial _x w_{|\Gamma _{0,1}}, w_{|\Gamma _{0,2}},\partial _y w_{|\Gamma _{0,2}} w_{|\Gamma _{1,1}},-\partial _y w_{|\Gamma _{1,1}}, w_{|\Gamma _{1,2}},-\partial _xw_{|\Gamma _{1,2}})=((f_1,g_1),\ldots (f_4,g_4)) \] defined on $\mathscr{D}(\overline{\Omega} )$ is extended from $H^2(\Omega )$ onto the subspace of $[H^{3/2}((0,1))\times H^{1/2}((0,1))]^4$ of functions $((f_1,g_1),\ldots (f_4,g_4))$ so that the compatibility conditions of the second order are satisfied at each vertex of $\Omega$ in a natural way with the pairs $[(f_j,g_j),(f_k,g_k)]$. \end{theorem} \begin{lemma}\label{lemma1.2} (Extension lemma) Let $g_j\in H^{1/2}((0,1))$, $j=1,2$, so that $(g_1,g_2)$, $(g_1,0)$ and $(g_2,0)$ satisfy the first order compatibility condition respectively at the vertices $(0,0)$, $(1,0)$ and $(0,1)$. Then there exists $u\in H^2(\Omega )$ so that $u=0$ on $\Gamma _0$ and $\partial _\nu u=g_j$ on $\Gamma _{1,j}$, $j=1,2$. \end{lemma} \begin{proof} (i) We define $f_1(t)=\int_0^tg_2(s)ds $ and $f_2(t)=\int_0^tg_1(s)ds$. Then $(f_1,g_1)$ and $(f_2,g_2)$ satisfy the compatibility conditions of the second order at the vertex $(0,0)$. (ii) Let $\widetilde{g}_1\in H^{1/2}((0,1))$ be such that $\int_0^1\frac{|\widetilde{g}_1(t)|^2}{t}dt<\infty$. Let $\widetilde{f}_1(t)=\int_0^tg_2(s)ds$. Hence, it is straightforward to check that $(\widetilde{f}_1,\widetilde{g}_1)$ and $(0,g_2)$ satisfy the compatibility conditions of the second order at $(0,0)$. (iii) From steps (i) and (ii) we derive that the pairs $[(f_1,g_1),(f_2,g_2)]$, $[(f_1,g_1),(0,g_2)]$ and $[(0,g_1),(f_2,g_2)]$ satisfy the second order compatibility conditions respectively at the vertices $(0,0)$, $(1,0)$ and $(0,1)$. We see that unfortunately the pair $[(0,g_1),(0,g_2)]$ doesn't satisfy necessarily the compatibility conditions of the second order at the vertex $(1,1)$. We pick $\chi \in C^\infty (\mathbb{R})$ so that $\chi =1$ in a neighborhood of $0$ and $\chi =0$ in a neighborhood of $1$. Then $[(0,\chi g_1),(0,\chi g_2)]$ satisfies the compatibility condition of the second order at the vertex $(1,1)$. Since this construction is of local character at each vertex, the cutoff function at the vertex $(1,1)$ doesn't modify the construction at the other vertices. In other words, the compatibility conditions of the second order are preserved at the other vertices. We complete the proof by applying Theorem \ref{theorem1.3}. 
\end{proof} \begin{corollary}\label{corollary2.1} Let $a=(a_1,a_2)\in \mathscr{A}$ and $g_j\in H^{1/2}((0,1))$, $j=1,2$, so that $(g_1,g_2)$, $(g_1,0)$ and $(g_2,0)$ satisfy the first order compatibility condition respectively at the vertices $(0,0)$, $(1,0)$ and $(0,1)$. Then there exists $u\in H^2(\Omega )$ so that $u=0$ on $\Gamma _0$ and $\partial _\nu u=a_jg_j$ on $\Gamma _{1,j}$, $j=1,2$. \end{corollary} \begin{proof} It is sufficient to prove that $(a_1g_1, a_2g_2)$ and $(a_jg_j, 0)$, $j=1,2$, satisfy the first order compatibility condition at $(0,0)$, using $a_1(0)=a_2(0)$ for the first pair and no condition on $a_j$ for the second pairs. Using $a_1(0)=a_2(0)$, we get \begin{align*} t^{-1}|a_1(t)-a_2(t)|^2 &\le 2t^{-1}|a_1(t)-a_1(0)|^2+2t^{-1}|a_2(t)-a_2(0)|^2 \\ &\le 2t^{-1+2\alpha}([a_1]_\alpha^2 +[a_2]_\alpha^2 ) \\ &\le 2([a_1]_\alpha^2 +[a_2]_\alpha ^2). \end{align*} This estimate together with the following one \[ |a_1(t)g_1(t)-a_2(t)g_2(t)|^2\le 2|a_1(t)-a_2(t)|^2|g_1(t)|^2+2|a_2(t)|^2|g_1(t)-g_2(t)|^2 \] yields \[ \int_0^1|a_1(t)g_1(t)-a_2(t)g_2(t)|^2\frac{dt}{t}\le 4([a_1]_\alpha^2 +[a_2]_\alpha^2 )\|g_1\|_{L^2((0,1))}^2+2\|a_2\|_{L^\infty ((0,1))}^2\int_0^1|g_1(t)-g_2(t)|^2\frac{dt}{t}. \] Hence \[ \int_0^1|g_1(t)-g_2(t)|^2\frac{dt}{t} <\infty \Longrightarrow \int_0^1|a_1(t)g_1(t)-a_2(t)g_2(t)|^2\frac{dt}{t}<\infty . \] If $(g_j,0)$ satisfies the first order compatibility condition at the vertex $(0,0)$, then \[ \int_0^1|g_j(t)|^2\frac{dt}{t}<\infty. \] Therefore \[ \int_0^1|a_jg_j(t)|^2\frac{dt}{t} \le \|a_j\|_{L^\infty ((0,1))}^2\int_0^1|g_j(t)|^2\frac{dt}{t} <\infty . \] Thus $(a_jg_j,0)$ also satisfies the first order compatibility condition at the vertex $(0,0)$. \end{proof} \subsection{Observability inequality} We briefly discuss how we can adapt the existing results to get an observability inequality corresponding to our IBVP. We first note that \begin{align*} &\Gamma _0\subset \{ x\in \Gamma ;\; m(x)\cdot \nu (x)<0\}, \\ &\Gamma _1\subset \{ x\in \Gamma ;\; m(x)\cdot \nu (x)>0\}, \end{align*} where $m(x)=x-x_0$, $x\in \mathbb{R}^2$, and $x_0=(\alpha ,\alpha )$ with $\alpha >1$. The following Rellich identity is a particular case of identity \cite[(3.5), page 227]{Gr2}: for each $3/2<s<2$ and $\varphi \in H^s(\Omega )$ satisfying $\Delta \varphi \in L^2(\Omega )$, \begin{equation}\label{1.5} 2\int_\Omega \Delta \varphi (m\cdot \nabla \varphi )dx= 2\int_\Gamma \partial _\nu \varphi (m\cdot \nabla \varphi )d\sigma -\int_\Gamma (m\cdot \nu ) |\nabla \varphi |^2d\sigma . \end{equation} \begin{lemma}\label{lemma1.1} Let $(v,w)\in D(A_a)$. Then \[ 2\int_\Omega \Delta v (m\cdot \nabla v )dx= 2\int_\Gamma \partial _\nu v (m\cdot \nabla v)d\sigma -\int_\Gamma (m\cdot \nu ) |\nabla v|^2d\sigma . \] \end{lemma} \begin{proof} Let $(v,w)\in D(A_a)$. By Corollary \ref{corollary2.1}, there exists $\widetilde{v}\in H^2(\Omega )$ so that $\widetilde{v}=0$ on $\Gamma_0$ and $\partial _\nu \widetilde{v}=-aw$ on $\Gamma _1$. In light of the fact that $z=v-\widetilde{v}$ is such that $\Delta z\in L^2(\Omega )$, $z=0$ on $\Gamma _0$ and $\partial _\nu z=0$ on $\Gamma _1$, we get $z\in H^s(\Omega )$ for some $3/2<s<2$ by \cite[Theorem 5.2, page 237]{Gr2}. Therefore $v\in H^s(\Omega )$. We complete the proof by applying the Rellich identity \eqref{1.5}. \end{proof} With Lemma \ref{lemma1.1} at hand, we can mimic the proof of \cite[Theorem 7.6.1, page 252]{TW} in order to obtain the following theorem: \begin{theorem}\label{theorem1.4} We assume that $a\geq \delta$ on $\Gamma_1$, for some $\delta >0$. 
There exist $M\geq 1$ and $\omega >0$, depending only on $\delta$, so that \[ \|e^{tA_a}(v,w)\|_{V\times L^2(\Omega )}\leq Me^{-\omega t}\|(v,w)\|_{V\times L^2(\Omega )},\;\; (v,w)\in D(A_a),\; t\geq 0. \] \end{theorem} An immediate consequence of Theorem \ref{theorem1.4} is the following observability inequality. \begin{corollary}\label{corollary1.1} We fix $0<\delta _0 <\delta _1$. Then there exist $\tau _0>0$ and $\kappa$, depending only on $\delta _0$ and $\delta_1$ so that for any $\tau \geq \tau _0$ and $a\in \mathscr{A}$ satisfying $\delta _0 \le a \le \delta _1$ on $\Gamma _1$, \[ \| (u^0,u^1)\|_{V\times L^2(\Omega )}\leq \kappa \|C_a(u^0,u^1)\|_{L^2(\Sigma _1)}. \] Moreover, $C_a$ is admissible for $e^{tA_a}$ and $(C_a, A_a)$ is exactly observable. \end{corollary} We omit the proof of this corollary. It is quite similar to that of \cite[Corollary 7.6.5, page 256]{TW}. \section{The inverse problem} \subsection{An abstract framework for the inverse source problem} In the present subsection we consider an inverse source problem for an abstract evolution equation. The result of this subsection is the main ingredient in the proof of Theorem \ref{theorem1.1}. Let $H$ be a Hilbert space and $A :D(A) \subset H \rightarrow H$ be the generator of continuous semigroup $(T(t))$. An operator $C \in \mathscr{B}(D(A),Y)$, $Y$ is a Hilbert space which is identified with its dual space, is called an admissible observation for $(T(t))$ if for some (and hence for all) $\tau >0$, the operator $\Psi \in \mathscr{B}(D(A),L^2((0,\tau ),Y))$ given by \[ (\Psi x)(t)=CT(t)x,\;\; t\in [0,\tau ],\;\; x\in D(A), \] has a bounded extension to $H$. We introduce the notion of exact observability for the system \begin{align}\label{2.1} &z'(t)=Az(t),\;\; z(0)=x, \\ &y(t)=Cz(t),\label{2.2} \end{align} where $C$ is an admissible observation for $T(t)$. Following the usual definition, the pair $(A,C)$ is said exactly observable at time $\tau >0$ if there is a constant $\kappa $ such that the solution $(z,y)$ of \eqref{2.1} and \eqref{2.2} satisfies \[ \int_0^\tau \|y(t)\|_Y^2dt\geq \kappa ^2 \|x\|_H^2,\;\; x\in D(A). \] Or equivalently \begin{equation}\label{2.3} \int_0^\tau \|(\Psi x)(t)\|_Y^2dt\geq \kappa ^2 \|x\|_H^2,\;\; x\in D(A). \end{equation} Let $\lambda \in H^1((0,\tau ))$ such that $\lambda (0)\ne 0$. We consider the Cauchy problem \begin{equation}\label{2.4} z'(t)=Az(t)+ \lambda (t)x,\;\; z(0)=0 \end{equation} and we set \begin{equation}\label{2.5} y(t)=Cz(t),\;\; t\in [0,\tau ]. \end{equation} We fix $\beta$ in the resolvent set of $A$. Let $H_1$ be the space $D(A)$ equipped with the norm $\|x\|_1=\|(\beta -A)x\|$ and denote by $H_{-1}$ the completion of $H$ with respect to the norm $\|x\|_{-1}=\| (\beta -A)^{-1}x\|$. As it is observed in \cite[Proposition 4.2, page 1644]{tucsnak} and its proof, when $x\in H_{-1}$ (which is the dual space of $H_1$ with respect to the pivot space $H$) and $\lambda \in H^1((0,T))$, then, according to the classical extrapolation theory of semigroups, the Cauchy problem \eqref{2.4} has a unique solution $z\in C([0,\tau ];H)$. Additionally $y$ given in \eqref{2.5} belongs to $L^2((0,\tau ) ,Y)$. When $x\in H$, we have by Duhamel's formula \begin{equation}\label{2.6} y(t)=\int_0^t \lambda (t-s)CT(s)xds=\int_0^t \lambda (t-s)(\Psi x)(s)ds. \end{equation} Let \[ H^1_\ell ((0,\tau), Y) = \left\{u \in H^1((0,\tau), Y); \; u(0) = 0 \right\}. 
\] We define the operator $S :L^2((0,\tau), Y)\longrightarrow H^1_\ell ((0,\tau ) ,Y)$ by \begin{equation}\label{2.7} (S h)(t)=\int_0^t \lambda (t-s)h(s)ds. \end{equation} If $E =S \Psi$, then \eqref{2.6} takes the form \[ y(t)=(E x)(t). \] Let $\mathcal{Z}=(\beta -A^\ast)^{-1}(H+C^\ast Y)$. \begin{theorem}\label{theorem2.1} We assume that $(A,C)$ is exactly observable at time $\tau$. Then \\ (i) $E$ is one-to-one from $H$ onto $H^1_\ell ((0,\tau), Y)$. \\ (ii) $E$ can be extended to an isomorphism, denoted by $\widetilde{E}$, from $\mathcal{Z}'$ onto $L^2((0,\tau );Y)$. \\ (iii) There exists a constant $\widetilde{\kappa}$, independent of $\lambda$, so that \begin{equation}\label{2.8} \|x\|_{\mathcal{Z}'}\leq \widetilde{\kappa}|\lambda (0)|e^{\frac{\|\lambda '\|^2_{L^2((0,\tau ))}}{|\lambda (0)|^2}\tau}\|\widetilde{E}x\|_{L^2 ((0,\tau), Y)}. \end{equation} \end{theorem} \begin{proof} (i) and (ii) are contained in \cite[Theorem 4.3, page 1645]{tucsnak}. We only need to prove (iii). To do this, we start by observing that \[ S^\ast: L^2((0,\tau ),Y)\rightarrow H_r^1((0,\tau );Y)=\left\{u \in H^1((0,\tau), Y); \; u(\tau ) = 0 \right\}, \] the adjoint of $S$, is given by \[ S^\ast h(t)=\int_t^\tau \lambda (s-t)h(s)ds,\;\; h\in L^2((0,\tau ),Y). \] We fix $h\in H_r^1((0,\tau );Y)$ and we set $k=S^\ast h$. Then \[ k'(t)= -\lambda (0)h(t)-\int_t^\tau \lambda '(s-t)h(s)ds. \] Hence \begin{align*} \left[|\lambda (0)| \|h(t)\|\right]^2&\le \left( \int_t^\tau \frac{|\lambda '(s-t)|}{|\lambda (0)|}[ |\lambda (0)|\|h(s)\|]ds +\|k'(t)\|\right)^2 \\ &\le 2\left( \int_t^\tau \frac{|\lambda '(s-t)|}{|\lambda (0)|}[ |\lambda (0)|\|h(s)\|]ds\right)^2 +2\|k'(t)\|^2 \\ &\le 2\frac{\|\lambda '\|_{L^2((0,\tau ))}^2}{|\lambda (0)|^2}\int_t^\tau [|\lambda (0)|\|h(s)\|]^2ds+2\|k'(t)\|^2. \end{align*} The last estimate is obtained by applying the Cauchy-Schwarz inequality. A simple application of Gronwall's lemma (backward in time) entails \[ \int_0^\tau [|\lambda (0)|\|h(t)\|]^2dt\le 2e^{2\frac{\|\lambda '\|_{L^2((0,\tau ))}^2}{|\lambda (0)|^2}\tau }\int_0^\tau \|k'(t)\|^2dt. \] Therefore, \[ \|h\|_{L^2((0,\tau) ;Y)}\leq \frac{\sqrt{2}}{|\lambda (0)|}e^{\frac{\|\lambda '\|_{L^2((0,\tau ))}^2}{|\lambda (0)|^2}\tau }\|k'\|_{L^2((0,\tau) ;Y)}. \] This inequality yields \begin{equation}\label{2.8.1} \|h\|_{L^2((0,\tau) ;Y)}\le \frac{\sqrt{2}}{|\lambda (0)|}e^{\frac{\|\lambda '\|_{L^2((0,\tau ))}^2}{|\lambda (0)|^2}\tau }\|S^\ast h\|_{H^1_r((0,\tau) ;Y)}. \end{equation} The adjoint of $S^\ast$, acting as a bounded operator from $[H^1_r((0,\tau );Y)]'$ into $L^2((0,\tau );Y)$, gives an extension of $S$. We denote by $\widetilde{S}$ this operator. By \cite[Proposition 4.1, page 1644]{tucsnak}, $\widetilde{S}$ defines an isomorphism from $[H^1_r((0,\tau );Y)]'$ onto $L^2((0,\tau );Y)$. In light of the fact that \[ \|\widetilde{S}\|_{\mathscr{B}([H^1_r((0,\tau );Y)]';L^2((0,\tau );Y))}=\|S^\ast \|_{\mathscr{B}(L^2((0,\tau );Y);H^1_r((0,\tau );Y))}, \] \eqref{2.8.1} implies \begin{equation}\label{2.8.1b} \frac{|\lambda (0)|}{\sqrt{2}}e^{-\frac{\|\lambda '\|_{L^2((0,\tau ))}^2}{|\lambda (0)|^2}\tau }\le \|\widetilde{S}\|_{\mathscr{B}([H^1_r((0,\tau );Y)]';L^2((0,\tau );Y))}. \end{equation} On the other hand, according to \cite[Proposition 2.13, page 1641]{tucsnak}, $\Psi $ possesses a unique bounded extension, denoted by $\widetilde{\Psi}$, from $\mathcal{Z}'$ into $[H^1_r((0,\tau );Y)]'$ and there exists a constant $c>0$ so that \begin{equation}\label{2.8.2} \|\widetilde{\Psi}\|_{\mathscr{B}(\mathcal{Z}';[H^1_r((0,\tau );Y)]')}\geq c. 
\end{equation} Consequently, $\widetilde{E}=\widetilde{S}\widetilde{\Psi}$ gives a unique extension of $E$ to an isomorphism from $\mathcal{Z}'$ onto $L^2((0,\tau );Y)$. We complete the proof by noting that \eqref{2.8} is a consequence of \eqref{2.8.1} and \eqref{2.8.2}. \end{proof} \subsection{An inverse source problem for an IBVP for the wave equation} In the present subsection we are going to apply the result of the preceding subsection to $H=V\times L^2(\Omega )$, $H_1=D(A_a)$ equipped with its graph norm and $Y=L^2(\Gamma _1)$. We consider the IBVP \begin{equation}\label{2.9} \left\{ \begin{array}{lll} \partial _t^2 u - \Delta u = \lambda (t)w \;\; &\mbox{in}\; Q, \\ u = 0 &\mbox{on}\; \Sigma _0, \\ \partial _\nu u +a\partial _tu=0 &\mbox{on}\; \Sigma _1, \\ u(\cdot ,0) =0,\; \partial_t u (\cdot ,0) =0. \end{array} \right. \end{equation} Let $(0,w)\in H_{-1}$ and $\lambda \in H^1((0,\tau ))$. From the comments in the preceding subsection, \eqref{2.9} has a unique solution $u_w$ so that $(u_w,\partial _tu_w) \in C([0,\tau ]; V\times L^2(\Omega ))$ and $\partial_\nu u_w{_{|\Gamma _1}}\in L^2(\Sigma _1 )$. We consider the inverse problem of determining $w$, with $(0,w)\in H_{-1}$, appearing in the IBVP \eqref{2.9} from the boundary measurement $\partial_\nu u_w{_{|\Sigma _1}}$. Here the function $\lambda$ is assumed to be known. Taking into account that $\{0\}\times V' \subset H_{-1}$, where $V'$ is the dual space of $V$, we obtain as a consequence of Theorem \ref{theorem2.1} and Corollary \ref{corollary1.1}: \begin{proposition}\label{proposition2.1} There exists a constant $C>0$ so that for any $\lambda \in H^1((0,\tau ))$ and $w\in V'$, \begin{equation}\label{2.10} \|w\|_{V'}\leq C|\lambda (0)|e^{\frac{\|\lambda '\|^2_{L^2((0,\tau ))}}{|\lambda (0)|^2}\tau}\|\partial _\nu u\|_{L^2 (\Sigma _1)}. \end{equation} \end{proposition} \subsection{Proof of Theorem \ref{theorem1.1}} We start by observing that $u_a$ is also the unique solution of \begin{equation*} \left\{ \begin{array}{lll} \int_\Omega u''(t)vdx=-\int_\Omega \nabla u(t)\cdot \nabla vdx-\int_{\Gamma _1}au'(t)v,\;\; \textrm{for all}\; v\in V, \\ u(0)=u^0,\;\; u'(0)=u^1. \end{array} \right. \end{equation*} Let $u=u_a-u_0$. Then $u$ is the solution of the following problem \begin{equation}\label{2.11} \left\{ \begin{array}{lll} \int_\Omega u''(t)vdx=-\int_\Omega \nabla u(t)\cdot \nabla vdx-\int_{\Gamma _1}au'(t)v -\int_{\Gamma _1}au_0'(t)v,\;\; \textrm{for all}\; v\in V, \\ u(0)=0,\;\; u'(0)=0. \end{array} \right. \end{equation} For $k$, $\ell \in \mathbb{Z}$, we set \begin{align*} &\lambda_{k\ell}=[ (k+1/2)^2+(\ell +1/2)^2]\pi ^2, \\ &\phi_{k\ell}(x,y)=2\cos ((k+1/2)\pi x)\cos ((\ell +1/2)\pi y). \end{align*} We check in a straightforward manner that $u_0=\cos(\sqrt{\lambda_{k\ell}}t)\phi_{k\ell}$ when $(u^0,u^1)=(\phi_{k\ell},0)$. In the sequel $k$, $\ell$ are arbitrarily fixed. We set $\lambda (t)=\cos(\sqrt{\lambda_{k\ell}}t)$ and we define $w_a\in V'$ by \[ w_a(v)=-\sqrt{\lambda_{k\ell}}\int_{\Gamma _1}a\phi_{k\ell}v. \] In that case \eqref{2.11} becomes \begin{equation*} \left\{ \begin{array}{lll} \int_\Omega u''(t)vdx=-\int_\Omega \nabla u(t)\cdot \nabla vdx-\int_{\Gamma _1}au'(t)v +\lambda (t)w_a(v),\;\; \textrm{for all}\; v\in V, \\ u(0)=0,\;\; u'(0)=0. \end{array} \right. \end{equation*} Consequently, $u$ is the solution of \eqref{2.9} with $w=w_a$. Applying Proposition \ref{proposition2.1}, we find \begin{equation}\label{2.12} \|w_a\|_{V'}\leq Ce^{\lambda_{k\ell}\tau ^2}\| \partial _\nu u\|_{L^2(\Sigma _1)}. 
\end{equation} But \begin{equation}\label{2.13} a_1(0)\left| \int_{\Gamma _1}(a\phi_{k\ell})^2d\sigma \right| =\frac{1}{\sqrt{\lambda_{k\ell}}}\left| w_a ((a_1\otimes a_2)\phi_{k\ell})\right|\leq \frac{1}{\sqrt{\lambda_{k\ell}}}\|w_a\|_{V'}\|(a_1\otimes a_2)\phi_{k\ell}\|_V, \end{equation} where we used $a_1(0)=a_2(0)$, and \begin{equation}\label{2.14} \|(a_1\otimes a_2)\phi_{k\ell}\|_V \leq C_0\sqrt{\lambda _{k\ell}}\| a_1\otimes a_2\|_{H^1(\Omega )}. \end{equation} Here $C_0$ is a constant independent of $a$ and $\phi_{k\ell}$. We note that $(a_1\otimes a_2)\phi_{k\ell}\in V$ even if $a_1\otimes a_2\not\in V$. Now a combination of \eqref{2.12}, \eqref{2.13} and \eqref{2.14} yields \begin{equation*} a_1(0)\left(\|a_1\phi _k\|_{L^2((0,1))}^2+\|a_2\phi _\ell\|_{L^2((0,1))}^2\right)\le C\| a_1\|_{H^1(0,1)}\| a_2\|_{H^1(0,1)}e^{\lambda_{k\ell}\tau ^2/2}\| \partial _\nu u\|_{L^2(\Sigma _1)}, \end{equation*} where $\phi_k (s)=\sqrt{2}\cos ((k+1/2)\pi s)$. This and the fact that $m\leq a_j(0)$ and $\|a_j\|_{H^1((0,1))}\leq M$ imply \begin{equation*} \|a_1\phi _k\|_{L^2((0,1))}^2+\|a_2\phi _\ell\|_{L^2((0,1))}^2\le C\frac{M^2}{m}e^{\lambda_{k\ell}\tau ^2/2}\| \partial _\nu u\|_{L^2(\Sigma _1)}. \end{equation*} Hence, for $j=1$ or $2$, \begin{equation*} \|a_j\phi _k\|_{L^2((0,1))}^2\le C\frac{M^2}{m}e^{k^2\tau ^2\pi^2}\| \partial _\nu u\|_{L^2(\Sigma _1)}. \end{equation*} Let \[ a_j^k= \int_0^1a_j(x)\phi _k(x)dx,\;\; j=1,2. \] Since \[ |a_j^k|=\left| \int_0^1a_j(x)\phi _k(x)dx\right| \le \|a_j\phi _k\|_{L^1((0,1))}\leq \|a_j\phi _k\|_{L^2((0,1))}, \] we get \[ (a_j^k)^2\le C\frac{M^2}{m}e^{k^2\tau ^2\pi^2}\| \partial _\nu u\|_{L^2(\Sigma _1)}. \] On the other hand \[ \| \partial _\nu u\|_{L^2(\Sigma _1)}=\|\Lambda _a (\phi_{k\ell})-\Lambda _0 (\phi_{k\ell})\|_{L^2(\Sigma _1)}\leq Ck^2\|\Lambda _a -\Lambda _0 \|. \] Hence \begin{equation}\label{2.15} (a_j^k)^2\le C\frac{M^2}{m}e^{k^2(\tau ^2\pi^2+1)}\|\Lambda _a -\Lambda _0 \|. \end{equation} Let $q=\frac{M^2}{m}$ and $\alpha =\tau ^2\pi^2+2$. We obtain in a straightforward manner from \eqref{2.15} \[ \sum_{|k|\leq N}(a_j^k)^2\le Cqe^{\alpha N^2}\|\Lambda _a -\Lambda _0 \|. \] Consequently, \begin{align*} \|a_j\|_{L^2((0,1))}^2 &\le \sum_{|k|\leq N}(a_j^k)^2+\frac{1}{N^2}\sum_{|k|> N}k^2(a_j^k)^2 \\ &\le C\left(qe^{\alpha N^2}\|\Lambda _a -\Lambda _0 \|+ \frac{\|a_j\|_{H^1((0,1))}^2}{N^2}\right) \\ &\le C\left(qe^{\alpha N^2}\|\Lambda _a -\Lambda _0 \|+ \frac{M^2}{N^2}\right) \\ & \le CM^2\left( \frac{1}{m}e^{\alpha N^2}\|\Lambda _a -\Lambda _0 \|+ \frac{1}{N^2}\right). \end{align*} That is, \begin{equation}\label{2.16} \|a_j\|_{L^2((0,1))}^2\leq CM^2\left( \frac{1}{m}e^{\alpha N^2}\|\Lambda _a -\Lambda _0 \|+ \frac{1}{N^2}\right). \end{equation} Assume that $\|\Lambda _a -\Lambda _0 \|\le \delta =me^{-\alpha}$. Let then $N_0\geq 1$ be the greatest integer so that \[ \frac{C}{m}e^{\alpha N_0^2}\|\Lambda _a -\Lambda _0 \|\leq \frac{1}{N_0^2}. \] By the maximality of $N_0$, we have \[ \frac{1}{m}e^{\alpha (N_0+1)^2}\|\Lambda _a -\Lambda _0 \|\ge \frac{1}{(N_0+1)^2}, \] and therefore \[ (2N_0)^2\geq (N_0+1)^2\ge \frac{1}{\alpha +1}\ln \left( \frac{m}{\|\Lambda _a -\Lambda _0 \|}\right). \] This estimate in \eqref{2.16} with $N=N_0$ gives \begin{equation}\label{2.17} \|a_j\|_{L^2((0,1))} \le 2C\sqrt{\alpha +1}M\left| \ln \left(m^{-1}\|\Lambda _a -\Lambda _0 \|\right)\right|^{-1/2}. \end{equation} When $\|\Lambda _a -\Lambda _0 \|\ge \delta$, we have \begin{equation}\label{2.18} \|a_j\|_{L^2((0,1))} \le \frac{M}{\delta}\|\Lambda _a -\Lambda _0 \|. 
\end{equation} In light of \eqref{2.17} and \eqref{2.18}, we find a constant $c >0$, depending only on $\tau$, such that \[ \|a_j\|_{L^2((0,1))} \le cM\left(\left| \ln \left(m^{-1}\|\Lambda _a -\Lambda _0 \|\right)\right|^{-1/2}+m^{-1}\|\Lambda _a -\Lambda _0 \|\right). \] \appendix \section{}\label{appendixA} We prove the following lemma. \begin{lemma}\label{lemmaA1} Let $1/2<\alpha \leq1$ and $a\in C^\alpha ([0,1])$. Then the mapping $f \mapsto af$ defines a bounded operator on $H^{1/2}((0,1))$. \end{lemma} \begin{proof} We recall that $H^{1/2}((0,1))$ consists of the functions $f\in L^2((0,1))$ with finite norm \[ \|f\|_{H^{1/2}((0,1))}=\left( \|f\|^2_{L^2((0,1))}+\int_0^1\int_0^1\frac{|f(x)-f(y)|^2}{|x-y|^2}dxdy\right)^{1/2}. \] Let $a\in C^\alpha( [0,1])$. We have \[ \frac{|a(x)f(x)-a(y)f(y)|^2}{|x-y|^2} \leq 2\|a\|_{L^\infty (0,1)}^2\frac{|f(x)-f(y)|^2}{|x-y|^2}+2|f(y)|^2\frac{[a]_\alpha^2}{|x-y|^{2(1-\alpha)}}, \] where \[ [a]_\alpha =\sup \{|a(x)-a(y)||x-y|^{-\alpha};\; x,y\in [0,1],\; x\neq y\}. \] Using that $1/2<\alpha \leq1$, we find that $x\rightarrow |x-y|^{-2(1-\alpha)}\in L^1((0,1))$, $y\in [0,1]$, and \[ \int_0^1\frac{dx}{|x-y|^{2(1-\alpha)}}\leq \frac{2}{2\alpha -1},\;\; y\in [0,1]. \] Hence $af\in H^{1/2}((0,1))$ with \[ \|af\|_{H^{1/2}((0,1))}\leq \frac{4}{2\alpha -1}\|a\|_{C^\alpha ([0,1])}\|f\|_{H^{1/2}((0,1))}. \] Here \[ \|a\|_{C^\alpha ([0,1])}=\|a\|_{L^\infty ((0,1))}+[a]_\alpha . \] \end{proof} \section{}\label{appendixB} We give the proof of the following lemma. \begin{lemma}\label{lemmaB1} Let $a\in \mathscr{A}$ and $A_a$ be the unbounded operator defined on $V\times L^2(\Omega )$ by \[ A_a(v,w)= (w,\Delta v),\quad D(A_a)=\{ (v,w)\in V\times V;\; \Delta v\in L^2(\Omega )\; \textrm{and}\; \partial _\nu v=-aw\; \textrm{on}\; \Gamma _1\}. \] Then $A_a$ is m-dissipative. \end{lemma} \begin{proof} Let $\langle \cdot ,\cdot \rangle$ be the scalar product on $V\times L^2(\Omega )$. That is, \[ \langle (v_1,w_1),(v_2,w_2)\rangle =\int_\Omega \nabla v_1\cdot \nabla \overline{v_2}dx+\int_\Omega w_1\overline{w_2}dx,\;\; (v_j,w_j)\in V\times L^2(\Omega ),\; j=1,2. \] For $(v_1,w_1)\in D(A_a)$, we have \begin{align} \langle A_a(v_1,w_1),(v_1,w_1)\rangle &= \langle (w_1,\Delta v_1),(v_1,w_1)\rangle \label{B1} \\ &=\int_\Omega \nabla w_1\cdot \nabla \overline{v_1}dx+\int_\Omega \Delta v_1\overline{w_1}dx. \nonumber \end{align} Applying Green's formula twice, we get \begin{align} &\int_\Omega \nabla w_1\cdot \nabla \overline{v_1}dx=-\int_\Omega w_1\Delta \overline{v_1}dx+\int_{\Gamma _1}w_1\partial _\nu \overline{v_1}d\sigma ,\label{B2} \\ &\int_\Omega \Delta v_1\overline{w_1}dx =- \int_\Omega \nabla v_1\cdot \nabla \overline{w_1}dx - \int_{\Gamma _1}aw_1\overline{w_1}d\sigma.\label{B3} \end{align} We add identities \eqref{B2} and \eqref{B3} side by side. Using that $\partial_\nu v_1=-aw_1$ on $\Gamma _1$, we obtain \begin{align*} \int_\Omega \nabla w_1\cdot \nabla \overline{v_1}dx+\int_\Omega \Delta v_1\overline{w_1}dx &=-\int_\Omega w_1\Delta \overline{v_1}dx- \int_\Omega \nabla v_1\cdot \nabla \overline{w_1}dx - 2 \, \int_{\Gamma_1} a \left|w_1\right|^2 d\sigma \\ &=-\langle (v_1,w_1),A_a(v_1,w_1)\rangle - 2 \, \int_{\Gamma_1} a \left|w_1\right|^2d\sigma. \end{align*} This and \eqref{B1} yield \[ \Re \langle A_a(v_1,w_1),(v_1,w_1)\rangle = - \, \int_{\Gamma_1} a \left|w_1\right|^2d\sigma \leq 0. \] In other words, $A_a$ is dissipative. We complete the proof by showing that $A_a$ is onto, which implies that $A_a$ is m-dissipative. 
To this end we are going to show that for each $(f,g)\in V\times L^2(\Omega )$, the problem \[ w=f, \quad -\Delta v=g \] has a unique solution $(v,w)\in D(A_a)$. In light of the fact that $\psi \mapsto \left(\int_\Omega |\nabla \psi |^2dx\right)^{1/2}$ defines an equivalent norm on $V$, we can apply the Lax-Milgram lemma. We get that there exists a unique $v\in V$ satisfying \[ \int_\Omega \nabla v\cdot \nabla \overline{\psi}dx=\int_\Omega g\overline{\psi}dx -\int_{\Gamma _1}aw\overline{\psi}d\sigma ,\;\; \psi \in V. \] From this identity, we deduce in a standard way that $-\Delta v=g$ and $\partial _\nu v=-aw$ on $\Gamma _1$. The proof is then complete. \end{proof} \end{document}
\begin{document} \begin{center} {Quantum correlations induced by local von Neumann measurement} \end{center} \begin{center} {Ming-Jing Zhao$^{1}$}, {Ting-Gui Zhang$^{1}$}, {Zong-Guo Li$^{2}$}, {Xianqing Li-Jost$^{1}$}, {Shao-Ming Fei$^{1,3}$}, and {De-Shou Zhong$^{4}$} \small {$^1$Max-Planck-Institute for Mathematics in the Sciences, Leipzig, 04103, Germany\\ {$^{2}$College of Science, Tianjin University of Technology, Tianjin, 300191, China}\\ $^3$School of Mathematical Sciences, Capital Normal University, Beijing 100048, China\\ $^4$Center of Mathematics, China Youth University for Political Sciences, Beijing, 100089, China} \end{center} {\bf Abstract} We study the total quantum correlation, semiquantum correlation and joint quantum correlation induced by local von Neumann measurement in bipartite system. We analyze the properties of these quantum correlations and obtain analytical formula for pure states. The experiment witness for these quantum correlations is further provided and the significance of these quantum correlations is discussed in the context of local distinguishability of quantum states. {\bf Keywords} Quantum correlation $\cdot$ Semiquantum correlation $\cdot$ Joint quantum correlation \section{Introduction} Quantum systems are correlated in a way that is inaccessible to classical ones. Furthermore, the correlations have advantages for quantum computing and information processing. In recent years, many attention therefore have been paid to quantify quantum correlation and different measures have been proposed from different aspects. In bipartite system, discord \cite{H. Ollivier,L. Henderson} is first introduced to quantify quantum correlation, which is defined as the difference between two quantum analogues of the classical mutual information. It has been shown that almost all quantum states have nonvanishing discord \cite{A. Ferraro}. Based on this, Ref. \cite{B. Daki} puts forward a geometric way of quantifying discord and obtains a closed form of expression for two-qubit state. In Refs. \cite{S. Luo2008,S. Luo2010}, they investigate the quantum correlation induced by local von Neumann measurement and reveal its equivalence with the geometry of discord. In multipartite systems, a unified view of correlations has been discussed using relative entropy and square norm as the distance respectively \cite{K. Modi,B. Bellomo}. Then Ref. \cite{G. L. Giorgi} defines the genuine correlation as the amount of correlation that can not be accounted for considering any of the possible subsystems. Recently, postulates for measures of quantum correlations have been proposed \cite{A. Brodutch,C. H. Bennett2011}, but the quantum correlation is still far from being understood. Since quantum states can be divided into classical state, semiquantum state and truly quantum state, quantum correlations are classified into total quantum correlation, semiquantum correlation and joint quantum correlation in terms of local von Neumann measurement \cite{ M. Gessner, S. Luo2008,S. Luo2010}. We study the properties of these quantum correlations and calculate the analytical formula of these quantum correlations for pure states. It is shown that these quantities coincide for pure state and are proportional to the squared concurrence. Furthermore, we provide a witness for experimental detection of these quantum correlations by employing the strategy in Ref. \cite{M. Gessner}. 
The result is finally applied to local distinguishability of quantum states, showing that the nonexisting joint quantum correlation is the necessary condition for distinguishing two-qubit separable and orthogonal pure states locally. \section{Quantum correlations induced by local von Neumann measurement} Let $H_m$ and $H_n$ denote $m$ and $n$ dimensional complex Hilbert spaces, with $\{\vert i\rangle\}_{i=1}^m$ and $\{\vert j\rangle\}_{j=1}^n$ the orthonormal basis for $H_m$ and $H_n$ respectively. Let $\rho$ be a density matrix defined on $H_m\otimes H_n$. We call $\rho$ a classical correlated (C-C) state if $\rho=\sum_{ij} p_{ij} |ij\rangle \langle ij|$, $0\leq p_{ij} \leq 1$, $\sum_{ij} p_{ij}=1$. $\rho$ is called classical-quantum (C-Q) correlated if $\rho=\sum_i p_i |i\rangle \langle i|\otimes \rho_i$, with $\rho_i$ the density matrices on $H_n$, $0\leq p_{i} \leq 1$, $\sum_{i} p_{i}=1$. Analogously, $\rho=\sum_j p_j \rho_j \otimes |j\rangle \langle j|$ is said to be a quantum-classical (Q-C) correlated state. For short, the C-Q and Q-C states are called semiquantum ones which can be of both quantum and classical correlations \cite{L. Henderson,K. Modi}. Let $\Phi_1=\{\pi^{(1)}_u\}$ and $\Phi_2=\{\pi^{(2)}_v\}$ stand for the local von Neumann measurements acting unilaterally on the first and second subsystems respectively, \begin{equation} \begin{array}{rcl} \Phi_1(\rho)=\sum_{u} \pi^{(1)}_u\otimes I \rho \pi^{(1)}_u \otimes I, \\ \Phi_2(\rho)=\sum_{v} I\otimes\pi^{(2)}_v \rho I \otimes \pi^{(2)}_v. \end{array} \end{equation} Let $\Phi_{12}\equiv\Phi_1\circ \Phi_2$ be the local von Neumman measurements acting bilaterally on $\rho$, \begin{eqnarray} \Phi_{12}(\rho)=\sum_{u,v} \pi^{(1)}_u \otimes \pi^{(2)}_v \rho \pi^{(1)}_u\otimes \pi^{(2)}_v. \end{eqnarray} Generally, $\Phi_{i}(\rho)$ is semiquantum state and $\Phi_{12}(\rho)$ is classical state. The quantum correlation is then defined as the change of the quantum state $\rho$ induced by the local von Neumann measurement. First, for arbitrary quantum state $\rho$, the distance minimized under all local von Neumann measurements $\Phi_{i}$, \begin{eqnarray} Q_{i}(\rho)&\equiv& \min_{\Phi_{i}}Q_{i}(\Phi_{i},\rho)\nonumber\\ &=&\min_{\Phi_{i}}||\rho-\Phi_{i}(\rho)||^2\nonumber\\ &=&tr(\rho^2)-\max_{\Phi_{i}}tr[(\Phi_{i}(\rho))^2], \end{eqnarray} is defined as quantum correlation with respect to the $i$-th part, $i=1,2$ \cite{S. Luo2010}. Here the Hilbert-Schmidt norm $||A||=\sqrt{tr(A^\dagger A)}$ has been used as the measure of distance. We call $Q_{i}(\rho)$ semiquantum correlation typically compared with the total quantum correlation, $i=1,2$. $Q_{1}(\rho)=0$ (resp. $Q_{2}(\rho)=0$) if and only if $\rho$ is a C-Q (resp. Q-C) state. Based on these, we define the total quantum correlation $Q_{12}(\rho)$ as the minimized distance \begin{eqnarray} Q_{12}(\rho)&\equiv&\min_{\Phi_{12}}Q_{12}(\Phi_{12},\rho)\nonumber\\ &=&\min_{\Phi_{12}}||\rho-\Phi_{12}(\rho)||^2\nonumber\\ &=&tr(\rho^2)-\max_{\Phi_{12}}tr[(\Phi_{12}(\rho))^2]. \end{eqnarray} $Q_{12}(\rho)=0$ if and only if $\rho$ is a C-C state. The sum of the semiquantum correlations is generally larger than the total quantum correlation. We call this discrepancy the joint quantum correlation, \begin{eqnarray} \delta(\rho)\equiv Q_{1}(\rho)+Q_{2}(\rho)-Q_{12}(\rho). \end{eqnarray} The geometric picture of these quantum correlations is clear and illustrated in Fig. 1. In fact the set of C-Q states, Q-C states and C-C states are not convex. 
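As a concrete two-qubit illustration of this non-convexity (an example added here for clarity, with $|+\rangle=(|0\rangle+|1\rangle)/\sqrt{2}$), the states $|00\rangle\langle 00|$ and $|{+}{+}\rangle\langle{+}{+}|$ are both C-C, while their equal mixture
\begin{eqnarray*}
\rho_{\rm mix}=\frac{1}{2}\,|0\rangle\langle 0|\otimes|0\rangle\langle 0|+\frac{1}{2}\,|+\rangle\langle +|\otimes|+\rangle\langle +|
\end{eqnarray*}
is not: the reduced state of the first subsystem is diagonal only in the eigenbasis of $|0\rangle\langle 0|+|+\rangle\langle +|$, and in that basis $\rho_{\rm mix}$ still has nonvanishing off-diagonal blocks, so $\rho_{\rm mix}$ can not be written as $\sum_i p_i|i\rangle\langle i|\otimes\rho_i$. Hence $\rho_{\rm mix}$ is neither C-Q nor C-C, although it is a convex combination of C-C states.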
The overlap between the set of C-Q states and the set of Q-C states are the set of C-C states. For any given quantum state $\rho$, $Q_1(\rho)$ (resp. $Q_2(\rho)$) is the minimum distance between $\rho$ and the set $\{\Phi_1(\rho)\}$ (resp. $\{\Phi_2(\rho)$\}) under all local von Neumann measurements on the first (resp. second) subsystem, and $Q_{12}(\rho)$ is the minimum distance between $\rho$ and the set $\{\Phi_{12}(\rho)\}$ under all local von Neumann measurements on both subsystems. \begin{center} \begin{figure} \caption{The geometric picture of the quantum correlations $Q_1(\rho)$, $Q_2(\rho)$ and $Q_{12} \label{geo} \end{figure} \end{center} In Ref. \cite{S. Luo2010}, it shows $Q_1({\rho})$ is equal to the geometric measure of discord which is defined as $D_1({\rho})=\min_{\sigma\in\Omega_1}||\rho-\sigma||^2$ with $\Omega_1$ the set of all C-Q states \cite{B. Daki}, i.e. $Q_1({\rho})=D_1({\rho})$. Thus for any quantum state $\rho$, the nearest C-Q state induced by local von Neumann measurement on the first subsystem is just the nearest one comparing all the C-Q states. Furthermore, we can get a general conclusion that, for any quantum state, the nearest semiquantum state or classical state induced by local von Neumann measurement is the nearest one comparing all the semiquantum states or classical states. The total quantum correlation $Q_{12}(\rho)$, semiquantum correlations $Q_{i}(\rho)$ ($i=1,2$) and the joint quantum correlation $\delta(\rho)$ have the following properties. (i) $0\leq \delta(\rho)\leq Q_{i}(\rho)\leq Q_{12}(\rho)<1$, for $i=1,2$. First, $Q_{12}(\rho)<1$ is obvious by definition. Second, for any quantum state $\rho$, assume $\Phi^{\prime\prime}_{1}$ and $\Phi^{\prime\prime}_{2}$ are the optimal local von Neumann measurements acting on the first and second subsystems respectively such that they reach the minimum of total quantum correlation, i.e. $Q_{12}(\rho)=||\rho-\Phi^{\prime\prime}_{12}(\rho)||^2$, $\Phi^{\prime\prime}_{12}=\Phi^{\prime\prime}_{1}\circ \Phi^{\prime\prime}_{2}$, we have \begin{eqnarray*} Q_{12}(\rho)- Q_{i}(\rho) \geq ||\rho-\Phi^{\prime\prime}_{12}(\rho)||^2-||\rho-\Phi^{\prime\prime}_i(\rho)||^2 =tr\{[\Phi^{\prime\prime}_i(\rho)-\Phi^{\prime\prime}_{12}(\rho)]^2\}\geq 0, \end{eqnarray*} for $i=1,2$. Third, since \begin{eqnarray*} Q_1(\rho)-\delta(\rho)=Q_{12}(\rho)- Q_{2}(\rho)\geq 0,\\ Q_2(\rho)-\delta(\rho)=Q_{12}(\rho)- Q_{1}(\rho)\geq 0, \end{eqnarray*} so $Q_i(\rho)\geq \delta(\rho)$ for $i=1,2$. Fourth, to show $\delta(\rho)\geq 0$, we suppose $\Phi^\prime_{i}$ is the optimal local von Neumann measurement acting on the $i$-th subsystem that reaches the minimum of the semiquantum correlation for $\rho$, i.e. $Q_{i}(\rho)=||\rho-\Phi^\prime_{i}(\rho)||^2$ for $i=1,2$. Then \begin{eqnarray*} \delta(\rho)\geq tr\{[\rho-\Phi^\prime_1(\rho)-\Phi^\prime_2(\rho)+\Phi^\prime_{12}(\rho)]^2\}\geq 0. \end{eqnarray*} (ii) If $Q_{i}(\rho)=0$ or $Q_{12}(\rho)=0$, $i=1,2$, then $\rho$ has no joint quantum correlation, $\delta(\rho)=0$. (iii) All these quantum correlations are invariant under local unitary operations. According to the definitions and properties of these quantum correlations, we have the following theorem whose format is analogous to the one in Ref. \cite{S. Luo2012}. \begin{theorem}\label{th} Let $|\psi\rangle$ be any bipartite pure state with Schmidt decomposition $|\psi\rangle=\sum_{i} \lambda_i |ii\rangle$, $0\leq \lambda_{i}\leq 1$ and $\sum_i \lambda_i^2=1$. 
The total quantum correlation, semiquantum correlation and joint quantum correlation coincide, i.e. $Q_{12}(|\psi\rangle)=Q_{1}(|\psi\rangle)=Q_{2}(|\psi\rangle)=\delta(|\psi\rangle)=1-\sum_i \lambda_i^4$. \end{theorem} Proof. Let $\Phi_1=\{\pi_j^{(1)}\}=\{|\psi_j\rangle \langle\psi_j|\}$ and $\Phi_2=\{\pi_j^{(2)}\}=\{|\phi_j\rangle \langle\phi_j|\}$ be two arbitrary local von Neumann measurements acting on the first and second subsystem respectively, with \begin{equation}\label{constraint th} \begin{array}{rcl} &&|\psi_j\rangle=\sum_i a_{ij}|i\rangle,\ \sum_j |\psi_j\rangle \langle\psi_j|=I, \ \langle\psi_j|\psi_{j^\prime}\rangle=\delta_{jj^\prime},\\[1mm] &&|\phi_j\rangle=\sum_i b_{ij}|i\rangle, \ \sum_j |\phi_j\rangle \langle\phi_j|=I, \ \langle\phi_j|\phi_{j^\prime}\rangle=\delta_{jj^\prime}. \end{array} \end{equation} After the local von Neumann measurement $\Phi_1$ acting on the first subsystem, quantum state $|\psi\rangle$ becomes \begin{eqnarray*} \Phi_1(|\psi\rangle) =\sum_j |\psi_j\rangle\langle\psi_j|\otimes (\sum_i a_{ij}^*\lambda_i |i\rangle)(\sum_i a_{ij} \lambda_i \langle i|), \end{eqnarray*} and $tr[(\Phi_1(|\psi\rangle))^2] =\sum_j (\sum_i \lambda_i^2 |a_{ij}|^2)^2$. In order to find the minimum of $Q_1(|\psi\rangle)$ and the nearest C-Q state $\Phi_1(|\psi\rangle)$ to $|\psi\rangle$, we need to find the maximum of $tr[(\Phi_1 (|\psi\rangle))^2]$ under the constraints in Eq. (\ref{constraint th}). Consider the multivariable function \begin{eqnarray*} f(x)=\sum_j (\sum_i \lambda_i^2 x_{ij})^2 \end{eqnarray*} with the restrictions \begin{eqnarray*} 0\leq x_{ij}\leq 1;~~ \sum_{j}x_{ij}=1, \, \forall i;~~ \sum_{i}x_{ij}=1, \, \forall j, \end{eqnarray*} it can be verified that the function $f(x)$ is convex with respect to the variables $x_{ij}$, as its Hessian matrix is nonnegative. Therefore its maximum is attained at the boundary, $x_{ij}=0,1$, $\forall i,j$. Hence we further derive that the maximum of $f(x)$ is $\sum_i \lambda_i^4$ which is attained when the matrix $X=(x_{ij})$ is a permutation matrix. Therefore we have $\max_{\Phi_1} tr[(\Phi_1(|\psi\rangle))^2]=\sum_i \lambda_i^4$ if we choose $|\psi_j\rangle=|j\rangle$, $\forall j$. This implies $Q_{1}(|\psi\rangle)=1-\sum_i \lambda_i^4$ and $\rho^\prime=\sum_i \lambda_i^2 |i\rangle\langle i|\otimes |i\rangle\langle i|$ is one of the nearest C-Q states to $|\psi\rangle$. Similarly, one can get the result for $Q_{2}(|\psi\rangle)$. For the total quantum correlation $Q_{12}(|\psi\rangle)$, since $Q_1(|\psi\rangle)\leq Q_{12}(|\psi\rangle)$, we get $$\max_{\Phi_{12}}tr [(\Phi_{12}(|\psi\rangle))^2]\leq \max_{\Phi_1}tr [(\Phi_1(|\psi\rangle))^2] =\sum_i \lambda_i^4$$ and the inequality becomes equality when $|\psi_j\rangle=|\phi_j\rangle=|j\rangle$, $\forall j$. As a result $Q_{12}(|\psi\rangle)=1-\sum_i \lambda_i^4$ and $\rho^\prime$ is also the nearest C-C state to $|\psi\rangle$. Finally, it is direct to get $\delta(|\psi\rangle)=1-\sum_i \lambda_i^4$ by definition. \qed Here it is worth to mention that for any bipartite pure state $|\psi\rangle$, its quantum correlations are proportional to the squared concurrence \cite{concurrence,fei,Rungta}, $Q_{1}(|\psi\rangle)=Q_{2}(|\psi\rangle)=Q_{12}(|\psi\rangle)=\delta(|\psi\rangle)=C^2(|\psi\rangle)/2$. Hence the quantum correlations for pure states are just the quantum entanglement \cite{L. Henderson}. 
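As a direct check of Theorem \ref{th} (a simple example added here for illustration), consider the two-qubit maximally entangled state $|\psi\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle)$, for which $\lambda_1=\lambda_2=\frac{1}{\sqrt{2}}$. Then
\begin{eqnarray*}
Q_{12}(|\psi\rangle)=Q_{1}(|\psi\rangle)=Q_{2}(|\psi\rangle)=\delta(|\psi\rangle)=1-\sum_i\lambda_i^4=1-\frac{1}{4}-\frac{1}{4}=\frac{1}{2},
\end{eqnarray*}
in agreement with $C^2(|\psi\rangle)/2=1/2$, since the concurrence of a maximally entangled two-qubit state equals $1$.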
In addition, if one restricts the local von Neumann measurements to the ones that keep the marginal states invariant, one can similarly define total quantum correlation, semiquantum correlation and joint quantum correlation. In this way, the properties (i)-(iii) and Theorem \ref{th} still hold true. For general bipartite mixed state $\rho$, the quantum correlation can be estimated in view of Ref. \cite{S. Luo2010} as follows. \begin{theorem} For any mixed state $\rho\in H_m\otimes H_n$ ($m\leq n$), $\rho=\sum_{i=1}^{m^2}\sum_{j=1}^{n^2} c_{ij} X_{i}\otimes Y_{j}$, where $X_1=I_m$, $Y_1=I_n$, $X_{i}$ and $Y_i$ are the generators of $SU(m)$ and $SU(n)$ respectively, $i=2,\cdots, m^2$, $j=2,\cdots, n^2$, we have \begin{equation*} \begin{array}{rcl} Q_1{(\rho)}&=&tr(CC^T)-\max_A tr(ACC^TA^T),\\ Q_2{(\rho)}&=&tr(CC^T)-\max_B tr(BC^TCB^T),\\ Q_{12}(\rho)&=&tr(CC^T)-\max_{A,B} tr(ACB^TBC^TA^T), \end{array} \end{equation*} where $C=(c_{ij})$, $A=(a_{ki})$ such that $a_{ki}=tr (|k\rangle\langle k| X_i)$ for $k=1,\cdots, m$ and $i=1, \cdots, m^2$, $B=(b_{lj})$ such that $b_{lj}=tr (|l\rangle\langle l| Y_i)$ for $k=1,\cdots, n$ and $i=1, \cdots, n^2$ with $\{|k\rangle\}$ and $\{|l\rangle\}$ any orthonormal basis for the first and second subsystems respectively. Especially, $Q_1(\rho),\ Q_{12}(\rho)\geq \sum_{i=m+1}^{m^2}\lambda_i$, and $Q_2(\rho)\geq \sum_{i=n+1}^{n^2}\lambda_i$, where $\lambda_i$ are the eigenvalues of $CC^T$ listed in decreasing order. \end{theorem} In fact, these quantum correlations can be calculated exactly for some special quantum states. For the isotropic states \cite{M. Horodecki1999}: \begin{eqnarray*} \rho_1(f_1) &=&\frac{1-f_1}{n^2-1} I_n + \frac{n^2f_1-1}{n^2-1} |\psi^+ \rangle \langle \psi^+|, \end{eqnarray*} with $f_1=\langle \psi^+| \rho_1(f_1) |\psi^+ \rangle$ satisfying $0\leq f_1 \leq 1$, $|\psi^+ \rangle=\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}|ii\rangle$, we have the semiquantum correlation and total quantum correlation are $\frac{(n^2f_1-1)^2}{n(n+1)^2(n-1)}$. While, for the Werner states \cite{werner}: \begin{eqnarray*} \rho_2(f_2) = \frac{n-f_2}{n^3-n} I_n + \frac{nf_2-1}{n^3-n}{V}, \end{eqnarray*} where ${V}=\sum_{i,j=0}^{n-1} |ij\rangle \langle ji|$ and $f_2=\langle \psi^+| \rho_2(f_2) |\psi^+ \rangle$, $-1\leq f_2 \leq 1$, we get the semiquantum correlation and total quantum correlation are $\frac{(nf_2-1)^2}{n(n+1)^2(n-1)}$. So for these two classes of states, the joint quantum correlation is the same as the semiquantum correlation and total quantum correlation. Furthermore they have no quantum correlation if and only if they are the maximally mixed state. Thus almost all isotropic states and Werner states have nonzero quantum correlations. For quantum correlation detection, notice that $tr(\rho^2)=tr[(P^+-P^-)\rho^{\otimes 2}]=1-2tr(P^-\rho^{\otimes 2})$, where $P^\pm$ are the projectors on the symmetric and antisymmetric subspaces respectively \cite{M. Hendrych}, so this provides an experimental way of measuring quantum correlations $Q_{i}(\rho)$ and $Q_{12}(\rho)$, $i=1,2$. For any given local von Neumann measurements $\Phi_1$ and $\Phi_2$, the experimentally accessible witness for $Q_{1}(\Phi_{1},\rho)$, $Q_{2}(\Phi_{2},\rho)$ and $\delta(\Phi_1,\Phi_2,\rho)\equiv Q_{1}(\Phi_{1},\rho)+Q_{2}(\Phi_{2},\rho)-Q_{12}(\Phi_{1},\Phi_{2},\rho)$ can also be constructed by using the strategy proposed in Ref. \cite{M. Gessner}. Consider now the dynamics of the total system given by some unitary operator $U$. If $\rho$ has nonzero total quantum correlation (resp. 
semiquantum correlation), namely, $\rho$ and $\Phi_{12}(\rho)$ (resp. $\Phi_{i}(\rho)$) are not identical, then it has \begin{equation}\label{correlation witness-1} \langle||tr_B\{U(\rho-\Phi_{12(i)}(\rho))U^\dagger\}||^2\rangle=f(m,n)Q_{12(i)}(\rho), \end{equation} for $i=1,2$, and \begin{eqnarray}\label{correlation witness-2} \langle||tr_B\{U(\rho-\Phi_1(\rho)-\Phi_2(\rho)+\Phi_{12}(\rho))U^\dagger\}||^2\rangle =f(m,n)\delta(\rho), \end{eqnarray} where $f(m,n)=\frac{m^2n-n}{m^2n^2-1}$, the expectation value of the function $F(U)$ is $\langle F(U)\rangle \equiv \int d\mu(U) F(U)$, $d\mu(U)$ is the probability measure on the unitary group. It shows the expectation value of the distance between the quantum state and the corresponding classical states (resp. semiquantum states) is proportional to the total quantum correlation (resp. semiquantum correlation), with the prefactor depending only on the dimensions of the subsystems. The expectation value of this witness in Eq. (\ref{correlation witness-1}) or Eq. (\ref{correlation witness-2}) with respect to randomly drawn unitaries is nonzero if and only if the initial state contains total quantum correlation (resp. semiquantum correlation) or joint quantum correlation respectively. As an application of the joint quantum correlation we consider the problem of local distinguishability of quantum states. A set of bipartite pure states is exactly locally distinguishable if there is some sequence of local operations and classical communications (LOCC) that determines with certainty which state it is. The Bell states present a simple example of an orthogonal set that is not locally distinguishable \cite{S. Ghosh, J. Walgate}. In Ref. \cite{J. Walgate} it has been proved in two-qubit system that if two separable and orthogonal states $|\psi_1\rangle$ and $|\psi_2\rangle$ are locally distinguished, then they must be of the form $\{|\psi_1\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|10\rangle),\ |\psi_2\rangle=\frac{1}{\sqrt{2}}(|01\rangle+|11\rangle)\}$ or $\{|\psi_1\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|01\rangle),\ |\psi_2\rangle=\frac{1}{\sqrt{2}}(|10\rangle+|11\rangle)\}$. Three separable and orthogonal states are locally distinguished if and only if they have the form, $\{|\psi_1\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|10\rangle),\ |\psi_2\rangle=|01\rangle,\ |\psi_3\rangle=|11\rangle\}$ or $\{|\psi_1\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|01\rangle),\ |\psi_2\rangle=|10\rangle,\ |\psi_3\rangle=|11\rangle\}$. And four orthogonal states can be locally distinguished if and only if all of them are product states. There does not exist any set with more than four orthogonal states that can be distinguished under LOCC. Therefore if a set of separable and orthogonal states $\{|\psi_i\rangle\}$ in two-qubit system is locally distinguishable, then the corresponding ensemble, $\rho_{s}=\sum_i p_i |\psi_i\rangle \langle \psi_i|$, $0\leq p_i\leq 1$, $\sum_i p_i=1$, is classical or semiquantum, which implies $Q_{12}(\rho_s)=0$ or $Q_{i}(\rho_s)=0$, $i=1,2$. The joint quantum correlation of $\rho_{s}$ is then zero, i.e. $\delta(\rho_{s})=0$. Namely, if the joint quantum correlation of an ensemble is positive, then these separable pure two-qubit states in the ensemble can not be locally distinguished. \section{Summary} To summarize, we have investigated the quantum correlations induced by the local von Neumann measurement and classified the quantum correlation into total quantum correlation, semiquantum correlation and joint quantum correlation. 
The properties of these quantum correlations have been discussed. Furthermore, an analytical formula for these quantum correlations has been obtained for pure states, which shows that the quantum correlation of a pure state is proportional to its squared concurrence. Additionally, an experimentally accessible witness for these quantum correlations has been provided. As an application, the vanishing of the joint quantum correlation is shown to be a necessary condition for the local distinguishability of two-qubit separable and orthogonal pure states. This method can be generalized to the multipartite case, and we hope this work gives insight into a comprehensive interpretation of quantum correlations. \end{document}
\begin{document} \title[fractional kinetic Fokker-Planck]{An operator splitting scheme for the fractional kinetic Fokker-Planck equation} \begin{abstract} In this paper, we develop an operator splitting scheme for the fractional kinetic Fokker-Planck equation (FKFPE). The scheme consists of two phases: a fractional diffusion phase and a kinetic transport phase. The first phase is solved exactly using the convolution operator while the second one is solved approximately using a variational scheme that minimizes an energy functional with respect to a certain Kantorovich optimal transport cost functional. We prove the convergence of the scheme to a weak solution to FKFPE. As a by-product of our analysis, we also establish a variational formulation for a kinetic transport equation that is relevant in the second phase. Finally, we discuss some extensions of our analysis to more complex systems. \end{abstract} \author[M. H. Duong]{Manh Hong Duong } \address[M. H. Duong]{Department of Mathematics, Imperial College London, London SW7 2AZ, UK} \email{[email protected]} \author[Y. Lu]{Yulong Lu} \address[Y. Lu]{Department of Mathematics, Duke University, Durham NC 27708, USA} \email{[email protected]} \keywords{Operator splitting methods, variational methods, fractional kinetic Fokker-Planck equation, kinetic transport equation, optimal transportation.} \subjclass[2010]{Primary: 49S05, 35Q84; Secondary: 49J40.} \maketitle \section{Introduction} In this paper, we study the existence of solutions to the following fractional kinetic Fokker-Planck equation (FKFPE) \begin{equation} \begin{cases} \partial_t f+v\cdot\nabla_x f=\div_v(\nabla \Psi(v)f)-(-\triangle_v)^s f \quad \text{in} \quad \mathbb{R}^d\times \mathbb{R}^d\times (0,\infty),\\ f(x,v,0)=f_0(x,v)\quad \text{in} \quad \mathbb{R}^d\times \mathbb{R}^d, \end{cases} \label{eq:fracKramers} \end{equation} with $s\in (0,1]$. In the above, $\mathrm{div}$ denotes the divergence operator; the differential operators $\nabla, \div$ and $\triangle$ with subscripts $x$ and $v$ indicate that these operators act only on the corresponding variables; the operator $-(-\triangle_v)^s$ is the fractional Laplacian operator on the variable $v$, where the fractional Laplacian $-(-\triangle)^{s}$ is defined by $$ -(-\triangle)^s f (x) := -\mathcal{F}^{-1} (|\xi|^{2s} \mathcal{F}[f] (\xi)) (x). $$ Here $\mathcal{F}$ denotes the Fourier transform on $\mathbb{R}^d$, i.e. $\mathcal{F} [f] (\xi) = \frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^d} f(x) e^{-ix\cdot \xi}dx$. Note that the fractional Laplacian operator with $0<s<1$ is a non-local operator since it can also be expressed as the singular integral $$ -(-\triangle)^s f (x) = -C_{d, s} \int_{\mathbb{R}^d} \frac{f(x) - f(y)}{|x - y|^{d+2s}} dy, $$ where the normalisation constant is given by $ C_{d, s} = s 2^{2s} \Gamma(\frac{d + 2s}{2})/ (\pi^{\frac{d}{2}} \Gamma(1-s)) $ and $\Gamma(t)$ is the Gamma function. 
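As a simple illustration of this definition (standard facts recalled here for the reader's convenience; the kernel notation $K^{(s)}_t$ is introduced only for this remark), the semigroup generated by $-(-\triangle)^s$ acts as a Fourier multiplier and hence as a convolution:
\[
\mathcal{F}\big[e^{-t(-\triangle)^s}f\big](\xi)=e^{-t|\xi|^{2s}}\,\mathcal{F}[f](\xi),
\qquad
e^{-t(-\triangle)^s}f=K^{(s)}_t\ast f,
\]
with, for instance,
\[
K^{(1)}_t(z)=(4\pi t)^{-d/2}e^{-\frac{|z|^2}{4t}},
\qquad
K^{(1/2)}_t(z)=\frac{\Gamma\big(\frac{d+1}{2}\big)}{\pi^{\frac{d+1}{2}}}\,\frac{t}{(t^2+|z|^2)^{\frac{d+1}{2}}},
\]
the Gauss and the Poisson kernels respectively. It is in this convolution form that the fractional diffusion phase of the splitting scheme described below is solved exactly.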
See \cite{kwasnicki2017ten} for more equivalent definitions of the fractional Laplacian operator. The equation \eqref{eq:fracKramers} is interesting to us because it can be viewed as the Fokker-Planck (forward Kolmogorov) equation of the following generalized Langevin equation \begin{equation} \label{SDE} \begin{aligned} & \frac{d X_t}{d t} = V_t,\\ & \frac{d V_t}{d t} = - \nabla \Psi (V_t) + L_t^s, \end{aligned} \end{equation} where $L_t^s$ is the L\'evy stable process with exponent $2s$. The stochastic differential equation (SDE) \eqref{SDE} describes the motion of a particle moving under the influence of a (generalized) frictional force and a stochastic noise and in the absence of an external force field. FKFPE \eqref{eq:fracKramers} governs the evolution of the probability distribution of $(X_t,V_t)$. In particular, the fractional operator $-(-\triangle)^s$ is the Markov generator of the process $L_t^s$. When $s=1$ and $\Psi(v)=\frac{|v|^2}{2}$, equation \eqref{eq:fracKramers} becomes the classical kinetic Fokker-Planck (or Kramers) equation (without external force field), which is a local PDE and has been used widely in chemistry as a simplified model for chemical reactions~\cite{Kramers40,HTB90} and in statistical mechanics~\cite{Nelson1967,Risken}. The non-local L\'evy process plays an important role in modelling systems that include jumps and long-distance interactions, such as anomalous diffusion or transport in confined plasma~\cite{Applebaum2009}. Singular limits of Equation~\eqref{eq:fracKramers} with $\Psi(v)=\frac{|v|^2}{2}$ were studied in~\cite{Cesbron2012}, see also~\cite{Cesbron2017} for a similar result for the same equation but on a spatially bounded domain. In a recent work~\cite{SanchezCesbron2016TMP}, the authors have extended~\cite{Cesbron2012} to a system that contains an additional external force field and they have also proved its well-posedness by means of the Lax-Milgram theorem. We will prove the existence of solutions of \eqref{eq:fracKramers} for a general $\Psi$ based on the trick of operator splitting. For more recent developments on PDEs involving the fractional Laplacian operator, we refer the interested reader to the expository surveys~\cite{Vazquez2017, Vazquez2014,Vazquez2012}. The aim of this paper is to develop a variational formulation for approximating solutions to equation~\eqref{eq:fracKramers}. The theory of variational formulations for PDEs took off with the introduction of Wasserstein gradient flows by the seminal work of Jordan, Kinderlehrer and Otto~\cite{JKO98}. Such a variational structure has important applications for the analysis of an evolution equation, such as providing general methods for proving well-posedness~\cite{AGS08} and characterizing large time behaviour~(e.g.,~\cite{CarrilloMcCannVillani03}), giving rise to natural numerical discretizations (e.g.,~\cite{DuringMatthesMilisic10}), and offering techniques for the analysis of singular limits (e.g.,~\cite{SandierSerfaty04, Stefanelli08, AMPSV12, DLPS17}). There are now a significantly large number of papers exploring variational structures for local PDEs, see the aforementioned papers and references therein as well as the monographs~\cite{AGS08, Vil03} for more details. However, variational formulations for non-local PDEs are less understood. 
Erbar~\cite{erbar2014} showed that the fractional heat equation is a gradient flow of the Boltzmann entropy with respect to a new modified Wasserstein distance that is built from the L\'evy measure and based on the Benamou-Brenier variant of the Wasserstein distance. Bowles and Agueh~\cite{AguelBowles2015} proved the existence of solutions to the fractional Fokker-Planck equation \begin{equation}\label{eq:FFP} \begin{cases} \partial_t f =\div_v(\nabla \Psi(v)f)-(-\triangle_v)^s f \quad \text{in} \quad \mathbb{R}^d\times (0,\infty),\\ f(v,0)=f_0(v)\quad \text{in} \quad \mathbb{R}^d, \end{cases} \end{equation} which can be viewed as the spatially homogeneous version of equation~\eqref{eq:fracKramers} or the fractional heat equation with a drift. Erbar's proof is variational, based on the so-called ``evolution variational inequality" concept introduced in~\cite{AGS08}. However, it seems that his method can not be extended to the fractional Fokker-Planck equation since the distance that he introduced was particularly tailored for the Boltzmann entropy. Instead, Bowles and Agueh's proof is ``semi-variational", based on a novel splitting argument which we sketch now. They split up the original dynamics \eqref{eq:FFP} into two processes: a fractional diffusion process, namely $\partial_t f=-(-\triangle)^s f$, and a transport process in the field of the potential $\Psi$, namely $\partial_t f=\div(\nabla \Psi f)$, and then alternately run these processes on a small time interval. Furthermore, the transport process can be understood as a Wasserstein gradient flow of the potential energy. By adopting a suitable interpolation of the individual processes, they were able to show that the constructed splitting scheme converges to a weak solution of \eqref{eq:FFP}. In the literature, the technique of operator splitting is often used to construct numerical methods for solving PDEs, see \cite{HKLR10}. On the theoretical side, the idea of splitting has also been used to study the well-posedness of PDEs, see \cite{CG04, Agueh16} on kinetic equations and \cite{Alibaud07, DGV03} on fractional PDEs. In the present work, we adopt the same splitting argument as in \cite{AguelBowles2015} to construct a weak solution to the fractional kinetic equation \eqref{eq:fracKramers}. More specifically, we split the dynamics described in \eqref{eq:fracKramers} into two phases: \begin{enumerate} \item Fractional diffusion phase. At every fixed position $x\in \mathbb{R}^d$, the probability density $f(x, v, t)$, as a function of the velocity $v$, evolves according to the fractional heat equation \begin{equation}\label{eq:fhe} \partial_t f = - (-\triangle_v)^{s} f. \end{equation} \item Kinetic transport phase. The density $f(x,v, t)$ evolves according to the following equation \begin{equation}\label{eq:kt} \partial_t f + v \cdot \nabla_x f = \mathrm{div}_v(\nabla \Psi(v)f). \end{equation} \end{enumerate} We expect that successively alternating the above two phases, with a vanishing period of time, gives an approximation to the dynamics \eqref{eq:fracKramers}. The key difference between our splitting scheme above and the scheme in \cite{AguelBowles2015} is that the transport process here is not only driven by the potential energy but also by the kinetic energy; a schematic of one step of the scheme is sketched below. 
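To fix ideas, one step of this alternation can be sketched as follows (the time step $h>0$ and the iterates $f^k$ are notation of this sketch only; the precise scheme, in which the kinetic transport phase is replaced by a variational approximation, is formulated in Section~\ref{sec: Scheme}):
\[
\begin{aligned}
&\text{fractional diffusion phase:}\qquad && f^{k+\frac12}(x,\cdot)=e^{-h(-\triangle_v)^{s}}f^{k}(x,\cdot)\quad\text{for a.e. }x\in\mathbb{R}^d,\\
&\text{kinetic transport phase:}&& f^{k+1}=\text{the solution of \eqref{eq:kt} at time }h\text{ with initial datum }f^{k+\frac12}.
\end{aligned}
\]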
In~\cite{AguelBowles2015}, the transport process is approximated by a discrete Wasserstein gradient flow based on the work~\cite{KinderlehrerTudorascu06}. However, due to the presence of the kinetic term, the kinetic transport equation is not a Wasserstein gradient flow; thus one can no longer use the Wasserstein distance. To overcome this obstacle, we employ instead the minimal acceleration cost function and the associated Kantorovich optimal transportation cost functional, which has been used in~\cite{Hua00, DPZ13a} for the kinetic Fokker-Planck equation and in~\cite{GW09} for the isentropic Euler system; see Section~\ref{sec: kinetic transport}.
\subsection{Main result}
Throughout the paper, we make the following important assumption on the potential $\Psi$.
\begin{assumption}\label{ass:psi}
$\Psi$ is non-negative and $\Psi \in C^{1,1}(\mathbf{R}^d)\cap C^{2,1}(\mathbf{R}^d)$.
\end{assumption}
We adopt the following notion of weak solution to FKFPE \eqref{eq:fracKramers}.
\begin{definition}\label{def:weak}
Let $f_0$ be a non-negative function such that $f_0\in \mathcal{P}^2_a(\mathbf{R}^{2d}) \cap L^p(\mathbf{R}^{2d})$ for some $1 < p \leq \infty$ and $\int_{\mathbf{R}^{2d}} f_0(x, v) \Psi(v)\,dvdx < \infty$. We say that $f(x,v,t)$ is a weak solution to \eqref{eq:fracKramers} if it satisfies the following:
\begin{enumerate}
\item $\int_{\mathbf{R}^{2d}} f(x, v, t)\, dxdv = \int_{\mathbf{R}^{2d}} f_0(x, v)\, dxdv = 1$ for any $t\in(0,T)$.
\item $f(x,v, t) \geq 0$ for a.e. $(x,v,t) \in \mathbf{R}^{2d}\times (0, T)$.
\item For any test function $\varphi\in C_c^\infty(\mathbf{R}^{2d} \times (-T, T))$,
$$
\begin{aligned}
& \int_{0}^T \int_{\mathbf{R}^{2d}}f(x, v, t) \big(\partial_t \varphi + v \cdot \nabla_x \varphi - \nabla_v \Psi \cdot \nabla_v \varphi - (-\triangle_v)^{s} \varphi\big)\, dt\, dx\, dv \\
& \qquad + \int_{\mathbf{R}^{2d}} f_0(x, v) \varphi(0, x, v)\, dx\, dv = 0.
\end{aligned}
$$
\end{enumerate}
\end{definition}
The main result of the paper is the following theorem.
\begin{theorem}\label{thm:main}
Suppose that Assumption \ref{ass:psi} holds. Given $f_0\in \mathcal{P}^2_a(\mathbf{R}^{2d}) \cap L^p(\mathbf{R}^{2d})$ for some $1 < p \leq \infty$ with $\int_{\mathbf{R}^{2d}} f_0(x, v) \Psi(v)\,dvdx < \infty$, there exists a weak solution $f(x, v, t)$ to \eqref{eq:fracKramers} in the sense of Definition \ref{def:weak}.
\end{theorem}
The proof of Theorem \ref{thm:main} is constructive; that is, we will build a sequence converging to a solution of \eqref{eq:fracKramers} from the splitting scheme discussed above, which will be rigorously formulated in Section~\ref{sec: Scheme}.
The proof is based on a series of lemmas and is postponed to Section \ref{sec:proofmain}. As a by-product of the analysis, we also construct a discrete variational scheme and obtain its convergence for the kinetic transport equation, see Theorem~\ref{thm: kinetic transport} in Section~\ref{sec: kinetic transport}; thus extending the work~\cite{KinderlehrerTudorascu06} to include the kinetic feature. Furthermore, some possible extensions to more complex systems are discussed in Section~\ref{sec: extension}. It is not clear to us how to obtain the uniqueness and regularity result. The bootstrap argument in~\cite{JKO98} to prove smoothness of weak solutions (and hence also uniqueness) seems not working for the fractional Laplacian operator due to the lack of a product rule. It should be mentioned that in the recent paper \cite{Lafleche2018}, the author has proved the existence and uniqueness of a solution to the fractional Fokker-Planck equation \eqref{eq:FFP} in some weighted Lebesgue spaces. It would be an interesting problem to generalize \cite{Lafleche2018} to FKFPE. This is to be investigated in future work. \subsection{Organization of the paper} The rest of the paper is organized as follows. Section~\ref{sec: FHE} summarizes some basic results about the fractional heat equation. Section~\ref{sec: kinetic transport} studies the kinetic transport equation and its variational formulation. The splitting scheme of the paper is formulated explicitly in Section~\ref{sec: Scheme} and some a priori estimates are established for the discrete sequences as well as their time-interpolation. The proof of the main result is presented in Section~\ref{sec:proofmain}. Finally, in Section~\ref{sec: extension} we discuss several possible extensions of the analysis to more complex systems. \subsection{Notation} Let $\mathcal{P}^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$ be the collection of probability measures on $\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d$ with finite second moments. Let $\mathcal{P}^2_a(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$ be the subset of probability measures in $\mathcal{P}^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$ that are absolutely continuous with respect to the Lebesgue measure on $\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d$. For $\mu, \nu \in \mathcal{P}^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$, the 2-Wasserstein distance $W_2(\mu, \nu)$ is defined by $$ W_2(\mu, \nu) := \mathcal{B}ig(\inf \mathcal{B}ig\{\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} |x - y|^2 p(dx, dy) : p \in \mathcal{P}(\mu, \nu)\mathcal{B}ig\}\mathcal{B}ig)^{\frac{1}{2}} $$ where $\mathcal{P}(\mu,\nu)$ is the set of probability measures on $\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}$ with marginals $\mu, \nu$, i.e. $p \in \mathcal{P}(\mu, \nu)$ if and only if $$ p(A\times \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d) = \mu(A), \quad p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d \times A) = \nu(A) $$ hold every Borel set $A\in \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d$. In the case that $\mu, \nu \in \mathcal{P}^2_a(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$ with densities $f, g$, we may write $W_2(f, g)$ instead of $W_2(\mu, \nu)$. 
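As a concrete illustration of the distance just introduced (our own aside, not needed for the proofs), note that for two uniform empirical measures with the same number of atoms the infimum in the definition of $W_2$ is attained at a permutation, so it can be computed exactly by an assignment solver; the Gaussian samples below are purely illustrative.
\begin{verbatim}
# W_2 between two uniform empirical measures (1/n) sum delta_{X_i} and
# (1/n) sum delta_{Y_j}, computed exactly via optimal assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_empirical(X, Y):
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # squared costs
    i, j = linear_sum_assignment(C)
    return np.sqrt(C[i, j].mean())

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
Y = rng.normal(loc=1.0, size=(200, 2))
print(w2_empirical(X, Y))   # close to the mean shift sqrt(2) for large n
\end{verbatim}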
We use the notation $F^\# \mu$ to denote the push-forward of a probability measure $\mu$ on $\mathbf{R}^{2d}$ under a map $F$, that is, the probability measure on $\mathbf{R}^{2d}$ satisfying, for all smooth test functions $\varphi$,
\[
\int_{\mathbf{R}^{2d}}\varphi(x,v)\,dF^\#\mu =\int_{\mathbf{R}^{2d}}\varphi(F(x,v))\,d\mu.
\]
\section{The fractional heat equation}
\label{sec: FHE}
This section collects some basic results on the fractional heat equation. We start by defining the fractional heat kernel
\begin{equation}\label{eq:phis}
\Phi_s (v, t) := \mathcal{F}^{-1} (e^{-t |\cdot|^{2s}})(v).
\end{equation}
Recall that the fractional Laplacian operator in \eqref{eq:fracKramers} acts only in the $v$-variable. With the fractional heat kernel, the solution to the fractional heat equation \eqref{eq:fhe} with initial condition $f_0(x,v)$ can be expressed as
\begin{equation}\label{eq:rcon}
f(x, v, t) = \Phi_s (\cdot, t) \ast_v f_0(x, v),
\end{equation}
where $\ast_v$ is the convolution operator in the $v$-variable. The following elementary result is immediate from the definition of the kernel; see also \cite{AguelBowles2015}.
\begin{lemma}\label{lem:phi}
\begin{enumerate}
\item[(1)] For any $t>0$, $\|\Phi_s(\cdot,t)\|_{L^1(\mathbf{R}^d)} =1$.
\item[(2)] For any $t > 0$ and $p\in (1, \infty)$, $\|\Phi_s (\cdot, t) \ast_v f_0\|_{L^p(\mathbf{R}^{2d})} \leq \|f_0\|_{L^p(\mathbf{R}^{2d})}$.
\item[(3)] $\int_{\mathbf{R}^d} |v|^2 \Phi_s(v, t)\,dv = +\infty$ for all $s\in (0,1)$ and $t > 0$.
\end{enumerate}
\end{lemma}
Lemma \ref{lem:phi} (3) demonstrates a significant difference between the fractional heat kernel and the standard Gaussian kernel: the former has infinite second moment. The loss of the second moment bound may lead to infinite potential energy, for example when the potential is $\Psi(v) = |v|^2$. To overcome this issue, it is more convenient to work with a renormalised truncation of the fractional heat kernel. To be more precise, for any $h > 0$, let us denote $\Phi_s^h(v) := \Phi_s(v, h)$ and set $\Phi_{s, R}^h(v) := \Phi_s^h(v) \mathbf{1}_{B_R}(v)$, where $\mathbf{1}_{B_R}$ is the indicator function of the centred ball of radius $R$. Given a function $f \in \mathcal{P}_a^2(\mathbf{R}^d)$, we can define the renormalised convolution
\begin{equation}\label{eq:rcon2}
\bar{f}_{h, R} := \frac{\Phi_{s, R}^h \ast_v f}{\|\Phi_{s, R}^h\|_{L^1(\mathbf{R}^d)}}.
\end{equation}
It is clear that the newly defined convolution satisfies $\bar{f}_{h, R} \rightarrow \Phi_{s}^h \ast_v f$ pointwise as $R \to \infty$. Moreover, we have the following lemma.
\begin{lemma}\label{lem:fbarF}
Let $F$ be a function such that $F\in C^{1,1}(\mathbf{R}^d) \cap C^2(\mathbf{R}^d)$.
Suppose that $f\in \mathcal{P}^2_a(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$ and with $\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} f(x, v) F(v) dv dx < \infty$. Then \item[(1)] $\bar{f}_{h, R} \in \mathcal{P}^2_a(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$. \item[(2)] \begin{equation}\label{eq:fbarF} \begin{equation}a \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \bar{f}_{h, R}(x, v) F(v) dx dv &\leq \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} f(x, v) F(v) dx dv \\ &\qquad + \frac{1}{2} \|D^2 F\|_{L^\infty(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)} \frac{\int_{B_R} |w|^2 \mathcal{P}hi_s^h(w)dw}{\int_{B_R} \mathcal{P}hi_s^h(w)dw}. \end{equation}a \end{equation} \end{equation}d{lemma} \begin{equation}gin{proof} Notice that it suffices to prove part (2) since part (1) follows directly from part (2) by setting $F(v) = |v|^2$. The proof is similar to that of \cite[Lemma 4.1]{AguelBowles2015}, but for completeness we give the proof below. First from the definition of $\bar{f}_{h, R}$, one sees that $$\begin{equation}a \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\bar{f}_{h, R}(x, v)F(v)\,dxdv =\frac{\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}F(v)\int_{B_R}\mathcal{P}hi_s^h(w)f(x,v-w)dw\, dxdv}{\int_{B_R}\mathcal{P}hi_s^h(w)\,dw}. \end{equation}a $$ Using change of variable $z=v-w$ and Taylor's expansion, we can write the numerator as \begin{equation}gin{align} &\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}F(v)\int_{B_R}\mathcal{P}hi_s^h(w)f(x,v-w)dw\, dxdv\notag \\&=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}F(w+z)\int_{B_R}\mathcal{P}hi_s^h(w)f(x,z)dw\, dxdz\notag \\&=\int_{B_R}\mathcal{P}hi_s^h(w)\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}F(w+z)f(x,z)\,dxdzdw\notag \\&=\int_{B_R}\mathcal{P}hi_s^h(w)\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{B}ig[F(z)+w\cdot \nabla F(z)+\frac{1}{2}w^TD^2F(\mathbf{x}i_{w,z}) w\mathcal{B}ig]f(x,z)\,dxdzdw \notag \\&\leq \int_{B_R}\mathcal{P}hi_s^h(w)\,dw\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}F(z)f(x,z)\,dxdz+\left|\int_{B_R}\mathcal{P}hi_s^h(w)\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}w\cdot\nabla F(z) f(x,z)\,dxdzdw\right|\notag \\&\qquad +\frac{1}{2}\partialarallel D^2 F\partialarallel_{\infty}\,\int_{B_R}|w|^2\mathcal{P}hi_s^h(w)\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f(x,z)\,dxdz\notag \\&=\int_{B_R}\mathcal{P}hi_s^h(w)\,dw\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}F(z)f(x,z)\,dxdz+\frac{1}{2}\partialarallel D^2 F\partialarallel_{\infty}\,\int_{B_R}|w|^2\mathcal{P}hi_s^h(w).\notag \end{equation}d{align} Note that in the above $\mathbf{x}i_{w,z}$ is an intermediate point between $w$ and $z$ and the term with the modulus vanishes since the kernel $\mathcal{P}hi_s^h$ is symmetric with respect to the origin. \end{equation}d{proof} The following lemma provides an upper bound for the ratio on the right side of \eqref{eq:fbarF}. 
\begin{lemma}\label{lem:ratioconv}
For any $s\in (0,1]$, there exists a constant $C > 0$ such that
\begin{equation}
\frac{\int_{B_R} |w|^2 \Phi_s^h(w)\,dw}{\int_{B_R} \Phi_s^h(w)\,dw} \leq C (h^{\frac{1}{s}} + h R^{2 -2s})
\end{equation}
holds for all $R,h > 0$.
\end{lemma}
\begin{proof}
This lemma follows directly from a two-sided point-wise estimate on $\Phi_s^h(w)$, as shown in \cite[Proposition 2.1]{AguelBowles2015}. See also equation (16) in \cite{AguelBowles2015}.
\end{proof}
\section{The kinetic transport equation and its variational formulation}
\label{sec: kinetic transport}
\subsection{The minimum acceleration cost}
Consider the kinetic transport equation with initial value $f_0$
\begin{equation}\label{eq:ktransport}
\begin{aligned}
&\partial_t f(x, v, t) + v\cdot \nabla_x f = \div_v(\nabla \Psi(v)f(x, v, t)),\\
& f(x, v, 0) = f_0(x, v).
\end{aligned}
\end{equation}
We are interested in the variational structure of \eqref{eq:ktransport}, which is an interesting problem in its own right. In~\cite{KinderlehrerTudorascu06}, Kinderlehrer and Tudorascu proved that the transport equation $\partial_t f(v,t)=\div_v(\nabla \Psi f)$, which is the spatially homogeneous version of~\eqref{eq:ktransport}, is a Wasserstein gradient flow of the energy $\int_{\mathbf{R}^d} \Psi f$. Their proof proceeds by constructing a discrete variational scheme as in~\cite{JKO98}. However, due to the absence of the entropy term, which is super-linear, several non-trivial technicalities were introduced to obtain the compactness of the discrete approximations and thus establish the convergence of the scheme. The kinetic transport equation~\eqref{eq:ktransport}, due to the presence of the kinetic term, is not a Wasserstein gradient flow in the phase space, so the Wasserstein distance can no longer be used. Therefore, to construct a discrete variational scheme for this equation, we need a different Kantorovich optimal transportation cost functional. To this end, we will employ the Kantorovich optimal transportation cost functional associated to the minimal acceleration cost. This cost functional has been used before in~\cite{Hua00, DPZ13a} for the kinetic Fokker-Planck equation and in~\cite{GW09} for the isentropic Euler system.

We follow the heuristics of defining the minimal acceleration cost as in \cite{GW09}. Consider the motion of a particle going from position $x$ with velocity $v$ to a new position $x'$ with velocity $v'$, within a time interval of length $h$. Suppose that the particle follows a curve $\xi : [0, h] \to \mathbf{R}^d$ such that
$$
(\xi, \dot{\xi})|_{t=0} = (x, v)\ \text{ and }\ (\xi, \dot{\xi})|_{t=h} = (x', v')
$$
and such that the average acceleration cost along the curve, that is, $\frac{1}{h} \int_0^h | \ddot{\xi}(t)|^2 \,dt$, is minimized. Then the curve is actually a cubic polynomial and the minimal average acceleration cost is given by $C_h(x, v; x', v')/h^2$, where
\begin{equation}\label{eq:ch}
C_h(x, v; x', v') := |v' - v|^2 + 12\Big|\frac{x' - x}{h} - \frac{v' + v}{2}\Big|^2.
\end{equation} The Kantorovich functional $\mathcal{W}_h(\mu,\nu)$ associated with the cost function $C_h$, is defined by, for any $\mu, \nu \in \mathcal{P}^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$, \begin{equation}gin{equation} \label{eq: Wh distance} \mathcal{W}_h(\mu,\nu)^2 = \inf_{p\in\mathcal{P}(\mu,\nu)}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}C_h(x,v;x',v')p(dxdvdx'dv'), \end{equation}d{equation} where $\mathcal{P}(\mu, \nu)$ is the set of all couplings between $\mu$ and $\nu$. It is important to notice that $\mathcal{W}_h$ is not a distance. In fact, $\mathcal{W}_h$ is not symmetric in the arguments $\mu,\nu$, due to the asymmetry of the cost function $C_h$. In addition, $\mathcal{W}_h(\mu,\nu)$ does not vanish when $\mu=\nu$. Instead, we have that \[ \mathcal{W}_h(\mu,\nu)=0 \quad\Longleftrightarrow\quad \nu = (F_h)_\#\mu, \] where $F_h$ is the free transport map defined by \begin{equation}gin{align}\label{eq:fh} F_h: \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\times\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d&\to \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\times\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\notag \\(x,v)&\mapsto F_h(x,v)=(x+hv, v). \end{equation}d{align} It is also useful to define the map \begin{equation}gin{align}\label{eq:gh} G_h: \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\times\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d&\to \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\times\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\notag \\(x,v)&\mapsto G_h(x,v)=\left(\sqrt{3}\mathcal{B}ig(\frac{2x}{h}-v\mathcal{B}ig),v\right).\end{equation}d{align} The composition $G_h\circ F_h$ is then given by \[ (G_h\circ F_h)(x,v)=\left(\sqrt{3}\mathcal{B}ig(\frac{2x}{h}+v\mathcal{B}ig),v\right). \] Although the Kantorovich functional $\mathcal{W}_h(\mu, \nu)$ is not a distance, the next lemma shows that $\mathcal{W}_h$ can be expressed in terms of the usual Wasserstein distance $W_2$. \begin{equation}gin{lemma}\cite[Proposition 4.4]{GW09} \label{lem: aux1} Let $F_h$ and $G_h$ be given by \eqref{eq:fh} and \eqref{eq:gh} respectively. The Kantorovich functional $\mathcal{W}_h$ can be expressed in terms of the 2-Wasserstein distance $W_2$ as \begin{equation}gin{equation} \mathcal{W}_h(\mu,\nu)=W_2((G_h\circ F_h)^{\#} \mu, G_h^{\#} \nu)\quad\text{for all}~~\mu,\nu\in\mathcal{P}^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}). \end{equation}d{equation} As a consequence, the infimum in \eqref{eq: Wh distance} is attained and thus $\mathcal{W}_h(\mu,\nu)$ is a minimum. \end{equation}d{lemma} \subsection{Variational formulation} With $\mathcal{W}_h$ being defined, we want to interpret \eqref{eq:ktransport} as a generalized gradient flow of the potential energy $\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \mathcal{P}si(v) f(x,v)dxdv$ with respect to $\mathcal{W}_h$. For doing so, we consider the variational problem \begin{equation}\label{eq:kineticvp} \inf_{f\in \mathcal{P}_a^2} \mathcal{A}(f) := \frac{1}{2h} \mathcal{W}_h (f_0, f)^2 + \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \mathcal{P}si(v) f(x,v)dxdv. 
\end{equation} Here $f_0 \in \mathcal{P}_a^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$ is an initial probability density with $\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \mathcal{P}si(v) f_0(x, v)\ dxdv < \infty$ and $h > 0$ is the time step. The next lemma establishes some properties about the minimizer to \eqref{eq:kineticvp}. \begin{equation}gin{lemma}\label{lem:vp} \item[(1)] For $h$ being sufficiently small, the variational problem \eqref{eq:kineticvp} has a unique minimizer $f\in \mathcal{P}_a^2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$. \item[(2)] Let $h>0$ be small enough such that $\det (I+hD^2(\mathcal{P}si(v)))\leq 1+\alpha h$ for some fixed $\alpha>\partialarallel D^2\mathcal{P}si\partialarallel_{L^\infty(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)}$. If $f_0\in L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$ for $1<p<\infty$, then \begin{equation}gin{equation} \label{eq: ineq minimizer} \| f\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}\leq (1 - \alpha h)^{p-1}\|f_0\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}. \end{equation}d{equation} \item[(3)] $f$ satisfies the following Euler-Lagrange equation: for any $\varphi \in C^\infty_c(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$, \begin{equation} \begin{equation}a \label{eq: EL} &\frac{1}{h}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}\left[(x'-x)\cdot\nabla_{x'}\varphi(x',v')+(v'-v)\cdot\nabla_{v'}\varphi(x',v')\right]P^{*}(dxdvdx'dv') \\&\quad -\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}v'\cdot\nabla_{x'}\varphi(x',v')f(x',v')dx'dv'\\ & \quad +\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\nabla_{v'} \mathcal{P}si(v')\cdot\nabla_{v'}\varphi(x',v')f(x',v')dx'dv'=\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c, \end{equation}a \end{equation} where $P^*$ is the optimal coupling in $\mathcal{W}_h(f_0,f)$ and \begin{equation}gin{equation} \begin{equation}a \label{eq: errorEL} \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c & =-\frac{h}{2}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}\nabla_{v'}\mathcal{P}si(v')\cdot\nabla_{x'}\varphi(x',v')P^*(dxdvdx'dv')\\ & =-\frac{h}{2}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\nabla_{v'}\mathcal{P}si(v')\cdot\nabla_{x'}\varphi(x',v')f(x',v')\,dx'dv'. \end{equation}a \end{equation}d{equation} \end{equation}d{lemma} \begin{equation}gin{proof}\item[(1)] Thanks to Lemma \ref{lem: aux1}, we can rewrite the functional $\mathcal{A}$ as $$ \begin{equation}a \mathcal{A}(f) & = \frac{1}{2h} W_2((G_h\circ F_h)^{\#} f_0, (G_h)^{\#} f)^2 + \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \mathcal{P}si(v) (G_h)^{\#} f(dx dv)\\ & = \frac{1}{2h} W_2(\tilde{f_0}, \tilde{f})^2 + \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \mathcal{P}si(v) \tilde{f}(dx dv) =: \tilde{A}(\tilde{f}), \end{equation}a $$ where $\tilde{f_0} = (G_h\circ F_h)^{\#} f_0$ and $ \tilde{f} = (G_h)^{\#} f$. According to \cite[Proposition 1]{KinderlehrerTudorascu06} (see also \cite[Proposition 3.1 ]{AguelBowles2015}), the functional $\tilde{A}$ has a unique minimizer, denoted by $\tilde{f}$. 
Therefore, the problem \eqref{eq:kineticvp} has a unique minimizer $f=(G_h^{-1})^\# \tilde{f}$. \item[(2)] This follows directly from \cite[Proposition 1]{KinderlehrerTudorascu06} and the fact that if $\tilde{f} = (G_h)^\# f$ then $$ \|\tilde{f}\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p = \left(\frac{2h}{\sqrt{3}}\right)^{d(p-1)} \|f\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p. $$ \item[(3)] The derivation of the Euler-Langrange equation for the minimizer $f$ of the variational problem \eqref{eq:kineticvp} follows the now well-established procedure (see e.g.~\cite{JKO98,Hua00}). For the reader's convenience, we sketch the main steps here. First, we consider the perturbation of $f$ defined by push-forwarding $f$ under the flows $\partialhi,\partialsi\colon[0,\infty)\times\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}\rightarrow\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d$: \begin{equation}gin{eqnarray*} &&\frac{\partialartial\partialsi_s}{\partialartial s}=\zeta(\partialsi_s,\partialhi_s),~ \frac{\partialartial\partialhi_s}{\partialartial s}=\eta(\partialsi_s,\partialhi_s), \\ && \partialsi_0(x,v)=x,~\partialhi_0(x,v)=v, \end{equation}d{eqnarray*} where $\zeta,\eta\in C_0^\infty(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d},\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$ will be chosen later. Let us denote $\gamma_s$ to be the push forward of $f$ under the flow $(\partialsi_s,\partialhi_s)$. Since $(\partialsi_0,\partialhi_0)=\mathrm{Id}$, it follows that $\gamma_0=f$, and an explicit calculation gives \begin{equation}gin{equation} \partialartial_s\gamma_s\big|_{s=0} =-\text{div}_x (f \zeta)-\text{div}_v (f \eta) \end{equation}d{equation} in the sense of distributions. Second, thanks to the optimality of $f$, we have that $\mathcal{A}(\gamma_s) \geq \mathcal{A}(f)$ for all $\gamma_s$ defined via the flow above. Then the standard variational arguments as in \cite{JKO98,Hua00} leads to the following stationary equation on $f$: \begin{equation}gin{align} &\frac{1}{2h}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}\left[\nabla_{x'}C_h(x,v;x',v')\cdot\zeta(x',v')+\nabla_{v'}C_h(x,v;x',v')\cdot\eta(x',v')\right]P^*(dxdvdx'dv')\nonumber \\&\qquad+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f(x,v)\nabla \mathcal{P}si(v)\cdot\eta(x,v)dxdv=0, \label{EuLageqn} \end{equation}d{align} where $\mathcal{P}^*$ is the optimal coupling in the definition of $\mathcal{W}_h(f_0,f)$. Third, we choose $\zeta$ and $\eta$ with a given $\varphi\in C_0^\infty(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d},\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N})$ as follows \begin{equation}gin{equation} \label{xiphi} \begin{equation}a \zeta(x',v')&=-\frac{h^2}{6}\nabla_{x'}\varphi(x',v')+\frac{1}{2}h\nabla_{v'}\varphi(x',v'),\\ \eta(x',v')&=-\frac{1}{2}h\nabla_{x'}\varphi(x',v')+\nabla_{v'}\varphi(x',v'). \end{equation}a \end{equation}d{equation} Now from the definition of the cost functional $C_h(x,v;x',v')$ in \eqref{eq:ch}, we have that \begin{equation}gin{align*} \nabla_{x'}C_h&=\frac{24}{h}\left(\frac{x'-x}{h}-\frac{v'+v}{2}\right), \\\nabla_{v'}C_h&=2(v'-v)-12\left(\frac{x'-x}{h}-\frac{v'+v}{2}\right). 
\end{align*}
Therefore, together with \eqref{xiphi}, we calculate
\begin{align*}
\nabla_{x'}C_h\cdot\zeta+\nabla_{v'}C_h\cdot\eta
&=\frac{24}{h}\left(\frac{x'-x}{h}-\frac{v'+v}{2}\right)\cdot\left(-\frac{h^2}{6}\nabla_{x'}\varphi(x',v')+\frac{1}{2}h\nabla_{v'}\varphi(x',v')\right)\\
&\quad+\left(2(v'-v)-12\left(\frac{x'-x}{h}-\frac{v'+v}{2}\right)\right)\cdot\left(-\frac{1}{2}h\nabla_{x'}\varphi(x',v')+\nabla_{v'}\varphi(x',v')\right) \\
&=2\left((x'-x)-hv'\right)\cdot\nabla_{x'}\varphi+2(v'-v)\cdot\nabla_{v'}\varphi.
\end{align*}
The Euler-Lagrange equation \eqref{eq: EL} for the minimizer $f$ follows directly by substituting the equation above back into \eqref{EuLageqn}.
\end{proof}
We can now build a discrete variational scheme for the kinetic transport equation as follows. Given $f_0 \in \mathcal{P}_a^2(\mathbf{R}^{2d})$ with $\int_{\mathbf{R}^{2d}} \Psi(v) f_0(x, v)\, dxdv < \infty$ and a time step $h > 0$, for every integer $k\geq 1$ we define $f_k$ as the minimizer of the minimization problem
\begin{equation}\label{eq:kineticvp2}
\inf_{f\in \mathcal{P}_a^2}\Big\{\frac{1}{2h} \mathcal{W}_h (f_{k-1}, f)^2 + \int_{\mathbf{R}^{2d}} \Psi(v) f(x,v)\,dxdv\Big\}.
\end{equation}
The following theorem extends the work~\cite{KinderlehrerTudorascu06} to the kinetic transport equation.
\begin{theorem} \label{thm: kinetic transport}
Suppose that Assumption \ref{ass:psi} holds. Given $f_0\in \mathcal{P}^2_a(\mathbf{R}^{2d}) \cap L^p(\mathbf{R}^{2d})$ for some $1 < p \leq \infty$ with $\int_{\mathbf{R}^{2d}} f_0(x, v) \Psi(v)\,dvdx < \infty$, there exists a weak solution $f(x, v, t)$ to equation~\eqref{eq:ktransport} in the sense of Definition \ref{def:weak}, but with the fractional Laplacian term removed.
\end{theorem}
\begin{proof}
The proof of this theorem follows the same lines as that of Theorem~\ref{thm:main}, namely showing that the discrete variational scheme \eqref{eq:kineticvp2} above converges to a weak solution of the kinetic transport equation. Since the proof of Theorem~\ref{thm:main} will be carried out in detail in Section~\ref{sec:proofmain}, we omit this proof here.
\end{proof}
\section{A splitting scheme for FKFPE}
\label{sec: Scheme}
\subsection{Definition of splitting scheme}
As mentioned in the introduction, our objective is to construct an operator splitting scheme for equation \eqref{eq:fracKramers} by continuously alternating the processes \eqref{eq:fhe} and \eqref{eq:kt}, where the latter is approximated by the generalized gradient flow of the potential energy; equivalently, the density after a short time step $h$ is approximately given by the solution to the variational problem \eqref{eq:kineticvp}. However, there is an issue associated with iterating \eqref{eq:fhe} and \eqref{eq:kineticvp}: the solution of the fractional heat equation may not have a finite second moment (see Lemma \ref{lem:phi} (3)). Hence it cannot be used as the initial condition in the variational problem \eqref{eq:kineticvp}, since the potential energy might be infinite.
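Before turning to the remedy, we record a small numerical aside (ours, purely for illustration): on uniform empirical measures the functional $\mathcal{W}_h$ can again be evaluated exactly by an assignment solver, and the identity of Lemma~\ref{lem: aux1} can be checked directly. The Gaussian point clouds and the parameter values below are arbitrary.
\begin{verbatim}
# Kantorovich functional W_h on uniform empirical measures in R^{2d} (d=1),
# with a numerical check of Lemma aux1: W_h(mu,nu) = W_2((G_h o F_h)# mu, G_h# nu).
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_cost(C):
    i, j = linear_sum_assignment(C)
    return np.sqrt(C[i, j].mean())

def Wh(P, Q, h):                      # P, Q: arrays (n, 2d) of (x, v) pairs
    d = P.shape[1] // 2
    x, v = P[:, None, :d], P[:, None, d:]
    xp, vp = Q[None, :, :d], Q[None, :, d:]
    C = (np.sum((vp - v) ** 2, -1)
         + 12.0 * np.sum(((xp - x) / h - (vp + v) / 2.0) ** 2, -1))
    return assignment_cost(C)

def W2(P, Q):
    return assignment_cost(np.sum((P[:, None, :] - Q[None, :, :]) ** 2, -1))

def Gh(P, h, composed_with_Fh=False):
    # (x,v) -> (sqrt(3)(2x/h - v), v); with composed_with_Fh=True this is G_h o F_h
    d = P.shape[1] // 2
    x, v = P[:, :d], P[:, d:]
    sgn = 1.0 if composed_with_Fh else -1.0
    return np.hstack([np.sqrt(3.0) * (2.0 * x / h + sgn * v), v])

rng = np.random.default_rng(0)
h = 0.1
P, Q = rng.normal(size=(100, 2)), rng.normal(size=(100, 2))
print(Wh(P, Q, h), W2(Gh(P, h, composed_with_Fh=True), Gh(Q, h)))  # agree
\end{verbatim}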
To get around this issue, we define an approximate fractional diffusion process by using the renormalised convolution \eqref{eq:rcon2} based on the truncated fractional heat kernel. To be more precise, given a fixed $N\in \mathbb{N}$, let us consider a uniform partition $0 = t_0 < t_1 < \cdots <t_N = T$ of the time interval $[0, T]$ with $t_k = kh$ and $h = T/N$. With the initial condition $f^0_{h, R} = f_0$, for $n=1, \cdots, N$ we iteratively compute the following:
\begin{itemize}
\item Given a truncation parameter $R > 0$, compute the renormalised convolution
\begin{equation}\label{eq:step1}
\bar{f}^n_{h, R} := \frac{\Phi_{s, R}^h \ast_v f_{h, R}^{n-1}}{\|\Phi_{s, R}^h\|_{L^1(\mathbf{R}^d)}}.
\end{equation}
\item Solve for the minimizer $f^{n}_{h, R}$ of the problem
\begin{equation}\label{eq:step2}
f^{n}_{h, R} := \mathrm{argmin}_{f\in \mathcal{P}_a^2(\mathbf{R}^{2d})}\ \frac{1}{2h} \mathcal{W}_h (\bar{f}^n_{h, R}, f)^2 + \int_{\mathbf{R}^{2d}} \Psi(v) f(x,v)\,dxdv.
\end{equation}
\end{itemize}
Note that, thanks to Lemma \ref{lem:vp} (1), the minimizer $f^{n}_{h, R}$ in \eqref{eq:step2} is well-defined and unique. Moreover, it follows from Lemma \ref{lem:vp} (3) that $f^{n}_{h, R}$ satisfies the following equation
\begin{equation}\label{eq: ELn}
\begin{aligned}
&\int_{\mathbf{R}^{4d}}\left[(x'-x)\cdot\nabla_{x'}\varphi(x',v')+(v'-v)\cdot\nabla_{v'}\varphi(x',v')\right]P^{n}_{h,R}(dxdvdx'dv') \\
& = h\int_{\mathbf{R}^{2d}}\big(v'\cdot\nabla_{x'}\varphi(x',v') - \nabla_{v'} \Psi(v')\cdot\nabla_{v'}\varphi(x',v')\big)f^{n}_{h,R}(x',v')\,dx'dv' +\mathcal{R}^{n}_{h,R},
\end{aligned}
\end{equation}
where $P^{n}_{h,R}$ is the optimal coupling in $\mathcal{W}_h(\bar{f}^{n}_{h,R},f^{n}_{h,R})$ and
\begin{equation}\label{eq: errorELn}
\mathcal{R}^{n}_{h,R}=\frac{h^2}{2}\int_{\mathbf{R}^{2d}}\nabla_{v}\Psi(v)\cdot\nabla_{x}\varphi(x,v)\,f^{n}_{h,R}(dxdv).
\end{equation}
With the scheme defined above, we obtain a discrete approximating sequence $\{f^n_{h, R}\}_{0 \leq n \leq N}$. Below we define a time-interpolation based on $\{f^n_{h, R}\}$; our ultimate goal is to prove that this sequence converges to a weak solution of \eqref{eq:fracKramers}.

{\bf Time-interpolation:} We define $f_{h, R}$ by setting
\begin{equation}\label{eq:tintp}
f_{h, R}(t) := \Phi_s(t - t_n)\ast_v f^n_{h, R} \quad \text{ for } t\in [t_n, t_{n+1}).
\end{equation}
It is clear that, by definition, $f_{h, R}$ solves the fractional heat equation on every $[t_n, t_{n+1})$ with initial condition $f^n_{h, R}$. Notice also that $f_{h, R}$ is only right-continuous in general. For convenience, we also define
\begin{equation}\label{eq:tildefn}
\tilde{f}^{n+1}_{h, R} = \lim_{t \uparrow t_{n+1}} f_{h, R}(t).
\end{equation}
\subsection{A priori estimates}
In this section, we prove some useful a priori estimates for the discrete-time sequence $\{f^n_{h, R}\}$ as well as for the time-interpolation sequence $\{f_{h, R}(t)\}$.
We start by proving an upper bound for the sum of the Kantorovich functionals $\mathcal{W}_h(\bar{f}^n_{h, R}, f^n_{h, R})$. \begin{equation}gin{lemma}\label{lem:sumWh} Let $\{\overline{f}^{n}_{h,R}\}$ and $\{f^{n}_{h,R}\}$ be the sequences constructed from the splitting scheme. Then there exists a constant $C>0$, independent of $h$ and $R$, such that \begin{equation}gin{equation} \label{eq: priori estimate 1} \sum_{n=1}^N{\mathcal{W}}_h(\overline{f}^n_{h,R},f^n_{h,R})^2\leq C\mathcal{B}ig(h\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)\,f_0(x,v)\,dxdv+T\partialarallel D^2 \mathcal{P}si\partialarallel_{\infty}(h^{1/2}+ h R^{2-2s})\mathcal{B}ig). \end{equation}d{equation} \end{equation}d{lemma} \begin{equation}gin{proof} Since $f^{n}_{h,R}$ minimizes the functional $f\mapsto\frac{1}{2h}\mathcal{W}_h(\overline{f}^n_{h,R},f)+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f(x,v)\,dxdv$, for all $f\in\mathcal{P}_2(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$, we have \begin{equation}gin{equation*} \frac{1}{2h}\mathcal{W}_h(\overline{f}^n_{h,R},f^n_{h,R})^2+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f^n_{h,R}\,dxdv\leq \frac{1}{2h}\mathcal{W}_h(\overline{f}^n_{h,R},f)^2+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f(x,v)\,dxdv. \end{equation}d{equation*} In particular, if we set $f=f^\ast := F_h^\# \overline{f}^n_{h,R}$ where $F_h$ is the free transport map defined in \eqref{eq:fh}, then since $\mathcal{W}_h(\overline{f}^n_{h,R},f^*)=0$ we obtain \begin{equation}gin{align} \label{eq: Wh1} \mathcal{W}_h(\overline{f}^n_{h,R},f^n_{h,R})^2 &\leq 2h\left(\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f^*(x,v)\,dxdv-\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f^n_{h,R}\,dxdv\right)\notag \\&=2h\left(\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)\overline{f}^n_{h,R}(x,v)\,dxdv-\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f^n_{h,R}\,dxdv\right). \end{equation}d{align} We have also used the fact that the free transport map $F_h$ has unit Jacobian in the last equality. According to Lemma \ref{lem:fbarF} (2), we have \begin{equation}gin{equation} \label{eq: Wh2} \begin{equation}a \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v) \overline{f}^n_{h,R}(x,v)\,dxdv & \leq \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v) f^{n-1}_{h,R}(x,v)\,dxdv\\ & \qquad +\frac{1}{2}\partialarallel D^2 \mathcal{P}si\partialarallel_{\infty}\,\frac{\int_{B_R}|w|^2\mathcal{P}hi_s^h(w)\,dw}{\int_{B_R}\mathcal{P}hi_s^h(w)\,dw}. 
\end{equation}a \end{equation}d{equation} Substituting \eqref{eq: Wh2} into \eqref{eq: Wh1}, we obtain \begin{equation}gin{equation*} \begin{equation}a \mathcal{W}_h(\overline{f}^n_{h,R},f^n_{h,R})^2 & \leq 2h\left(\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v) f^{n-1}_{h,R}(x,v)\,dxdv-\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v)f^n_{h,R}\,dxdv\right)\\ & \qquad + h\partialarallel D^2 \mathcal{P}si\partialarallel_{\infty}\,\frac{\int_{B_R}|w|^2\mathcal{P}hi_s^h(w)\,dw}{\int_{B_R}\mathcal{P}hi_s^h(w)\,dw}, \end{equation}a \end{equation}d{equation*} from which, by summing over $n$ from $1$ to $N$ we obtain \begin{equation}gin{equation}\label{eq: Wh3} \sum_{n=1}^N\mathcal{W}_h(\overline{f}^n_{h,R},f^n_{h,R})^2\leq 2h\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}si(v) f_0(x,v)\,dxdv +T\partialarallel D^2 \mathcal{P}si\partialarallel_{\infty}\,\frac{\int_{B_R}|w|^2\mathcal{P}hi_s^h(w)\,dw}{\int_{B_R}\mathcal{P}hi_s^h(w)\,dw}. \end{equation}d{equation} Then the desired estimate follows directly from \eqref{eq: Wh3} and Lemma \ref{lem:ratioconv}. \end{equation}d{proof} We also need some second moment bounds on $f$ with respect to variable $v$. Given a density function $f$, let us set $M_{2,v}(f):= \int |v|^2 f(x, v)dx dv$. \begin{equation}gin{lemma}\label{priorbound2-2}There exist positive constants $C, h_0$ such that when $0 < h < h_0$, it holds for any index $i > 0$ that \begin{equation}gin{equation}\label{eq:m2v} M_{2,v}(f^i_{h, R}) \leq M_{2, v}(f^{i-1}_{h, R}) +4\mathcal{W}_h(\overline{f}^{i}_{h,R},f^{i}_{h,R})^2 +C(h^{1/s}+hR^{2-2s}). \end{equation}d{equation} It follows that \begin{equation}\label{eq:2ndmoment} \begin{equation}a & \max\mathcal{B}ig\{ M_{2, v}(f^n_{h, R}), M_{2, v}(\bar{f}^n_{h, R})\mathcal{B}ig\} \leq M_{2, v}(f^0) \\ & \qquad \qquad + 4 \sum_{i=1}^n \mathcal{W}_h(\overline{f}^{i}_{h,R},f^{i}_{h,R})^2 + C(n+1) (h^{1/s}+hR^{2-2s}). \end{equation}a \end{equation} In addition, let $\tilde{P}^i_h$ be the optimal coupling in the definition of $\mathcal{W}_h(\overline{f}^i_{h, R}, f^{i+1}_{h, R})$. Then \begin{equation}gin{equation}\label{eq:dm2xv} \begin{equation}a \int \mathcal{B}ig(|x - x'|^2 + |v - v'|^2 \mathcal{B}ig)\widetilde{P}^i_h(dxdv dx^{\partialrime}dv^{\partialrime}) & \leq C\mathcal{W}_h(\overline{f}^{i}_{h,R},f^{i}_{h,R})^2\\ & \qquad \qquad + Ch^2 \mathcal{B}ig(M_{2, v}(\overline{f}^{i}_{h, R}) + M_{2, v}(f^{i}_{h, R})\mathcal{B}ig). \end{equation}a \end{equation}d{equation} \begin{equation}gin{proof} First from the definition of the cost function $C_h$ in \eqref{eq:ch} we have the following inequalities: \begin{equation}gin{align} &|v'-v|^2\leq C_h(x,v;x',v'); \label{eq: v-moment} \\&|x'-x|^2=h^2\left|\frac{x'-x}{h}-\frac{v'+v}{2}+\frac{v'+v}{2}\right|^2\notag \\& \leq h^2\left[2\mathcal{B}ig|\frac{x'-x}{h}-\frac{v'+v}{2}\mathcal{B}ig|^2+\frac{|v' + v|^2 }{2} \right]\notag \\&\leq h^2\left( \frac{1}{6}C_h(x,v;x',v')+ \frac{|v'|^2 +|v|^2 }{2} \right).\label{eq: x-moment} \end{equation}d{align} Then there exist constants $C, h_0>0$ such that when $h < h_0$, \begin{equation}gin{equation}\label{eq:ineqxv} |x'-x|^2+|v'-v|^2\leq C C_h(x,v;x',v')+h^2(|v'|^2+|v|^2). 
\end{equation}d{equation} Now for any fixed $i > 0$, we have \begin{equation}gin{align} \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}|v|^2f^{i}_{h,R}&= \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}|v'|^2\widetilde P_{i}^h(dxdvdx'dv')\nonumber \\&\leq \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}|v'-v|^2\widetilde P_{i}^h(dxdvdx'dv')+ \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}|v|^2\widetilde P_{i}^h(dxdvdx'dv')\nonumber \\&\overset{\eqref{eq: v-moment}}{\leq} 4 \mathcal{W}_h(\overline{f}^{i}_{h,R},f^{i}_{h,R})^2+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}|v|^2\overline{f}^i_{h,R}\,dxdv\nonumber \\&\overset{\eqref{eq:fbarF}}{\leq} 4 \mathcal{W}_h(\overline{f}^{i}_{h,R},f^{i}_{h,R})^2+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}|v|^2 f^{i-1}_{h,R}dxdv +C(h^{1/s}+hR^{2-2s}).\nonumber \end{equation}d{align} This proves \eqref{eq:m2v}. The estimate \eqref{eq:2ndmoment} follows by summing the estimate \eqref{eq:m2v} over the index $i$ from $1$ to $n$ and inequality \eqref{eq:fbarF} with $F(v) = |v|^2$. Finally, the estimate \eqref{eq:dm2xv} follows directly from inequality \eqref{eq:ineqxv} and the definition of $\mathcal{W}_h(\overline{f}^{i}_{h,R},f^{i}_{h,R})$. \end{equation}d{proof} \end{equation}d{lemma} In the next lemma, we prove a uniform $L^p$-bound for the time-interpolation sequence $\{f_{h, R}\}$. \begin{equation}gin{lemma}\label{lem:lp} Let $h>0$ be small enough such that $\det (I+hD^2(\mathcal{P}si(v)))\leq 1+\alpha h$ for some fixed $\alpha>\partialarallel D^2\mathcal{P}si\partialarallel_{L^\infty(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)}$. If $f_0\in L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$ for $1<p<\infty$, then \begin{equation}gin{equation} \label{eq:lpbd} \| f_{h, R}(t)\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}\leq e^{\alpha T(1-p)}\|f_0\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}. \end{equation}d{equation} \end{equation}d{lemma} \begin{equation}gin{proof} First, according to Lemma \ref{lem:vp} (2), we have that $$ \|f^n_{h, R}\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p \leq (1 - \alpha h)^{p-1}\|\bar{f}^n_{h, R}\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p. $$ In addition, by the definition of $\bar{f}^n_{h, R}$ (see \eqref{eq:step1}) and Young's inequality for convolution, $$ \|\bar{f}^n_{h, R}\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p \leq \|f^{n-1}_{h, R}\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p. $$ This implies that for any $n > 0$, $$ \|f^n_{h, R}\|_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}^p \leq (1 - \alpha h)^{n(p-1)}\|f_0\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}. 
$$ Then by the definition of the time-interpolation $f_{h, R}$ in \eqref{eq:tintp}, we have for any $t \in (t_n, t_{n+1})$ that $$ \begin{equation}a \|f_{h, R}(t)\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})} & = \|\mathcal{P}hi_s(t - t_n)\ast_v f^n_{h, R}\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}\\ & \leq \|f^n_{h, R}\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}\\ & \leq (1 - \alpha h)^{n(p-1)}\|f_0\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}\\ & \leq e^{aT(1-p)}\|f_0\|^p_{L^p(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})}. \end{equation}a $$ \end{equation}d{proof} \section{Proof of Theorem \ref{thm:main}}\label{sec:proofmain} \subsection{Approximate equation} We first show in the next lemma that the time-interpolation $f_{h, R}$ satisfies an approximate equation. \begin{equation}gin{lemma}\label{lem:aweqn} Let $\varphi\in C_c^\infty([0,T)\times \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d\times \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^d)$ with time support in $[-T,T]$. Then \begin{equation} \begin{equation}a \label{eq:approxeqn} & \int_0^T\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}[\partialartial_t\varphi+v\cdot\nabla_x\varphi-\nabla_v \mathcal{P}si\cdot\nabla_v\varphi-(-\triangle_v)^s\varphi]\,dxdvdt\\ & \quad \quad +\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_0(x,v)\varphi(0,x,v)\,dxdv=\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c(h,R), \end{equation}a \end{equation} where $\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c(h,R) = \sum_{j=1}^4 \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c_{j}(h, R) + \tilde{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c}(h, R)$ and \begin{equation}gin{align} \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c_{1}(h, R) &=\sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n})(\tilde{f}^{n}_{h,R}-\overline{f}^{n}_{h,R})\,dx\,dv, \label{eq: approx term1} \\ \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c_{2}(h, R)& =\sum_{n=1}^{N-1}\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{B}ig(\big(v\cdot\nabla_{x}\varphi(t,x,v)-\nabla_{v} F(v)\cdot\nabla_{v}\varphi(t,x,v)\big)\,f_{h,R}(t,x,v)\nonumber \\& \qquad -\big(v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} F(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big)\,f^{n}_{h,R}(x,v) \mathcal{B}ig)\,dxdvdt, \label{eq: approx term2} \\\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c_{3}(h, R)& =\int_0^h\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}hi_s(t)\ast f_0 \big(v\cdot\nabla_{x}\varphi(t,x,v)-\nabla_{v} F(v)\cdot\nabla_{v}\varphi(t,x,v)\big)\,dxdvdt, \label{eq: approx term3} \\ \mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c_{4}(h, R)& = \frac{h^2}{2} \sum_{n=1}^N \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^2}\nabla_v \mathcal{P}si(v) \cdot \nabla_x \varphi(x, v) f^n_{h, R}(dxdv). 
\end{equation}d{align} Moreover, $$ \begin{equation}a \tilde{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c}(h, R) & \leq \frac{1}{2}\sum_{n=1}^N\|\nabla^2 \varphi(t_n)\|_{\infty} \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^4} \mathcal{B}ig(|x - x'|^2 + |v - v'|^2\mathcal{B}ig)P^{n}_{h, R}(dxdvdx'dv'). \end{equation}a $$ Here $P^{n}_{h, R}$ is the optimal coupling in the definition of $\mathcal{W}_{h}(\bar{f}^n_{h,R}, f^n_{h, R})$. \end{equation}d{lemma} \begin{equation}gin{proof} From the definition of $f_{h, R}$ (see \eqref{eq:tintp}) and integration by parts, we obtain that \begin{equation} \begin{equation}a\label{eq: conv1} &\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}(t)\partialartial_t\varphi(t)\,dt\,dx\,dv \\&=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})\tilde{f}^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv-\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t)\partialartial_t f_{h,R}(t)\,dt\,dx\,dv \\&=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})\tilde{f}^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv+\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t)(-\triangle_v)^s f_{h,R}(t)\,dt\,dx\,dv \\&=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})\tilde{f}^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv+\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}(t)(-\triangle_v)^s\varphi(t)\,dt\,dx\,dv, \end{equation}a \end{equation} where the second equality holds because $f_{h, R}$ solves the fractional heat equation. By adding and subtracting a few tems, we can write the first term on the right hand side of \eqref{eq: conv1} as \begin{equation} \begin{equation}a\label{eq:conv2} &\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})\tilde{f}^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv \\&\qquad=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})f^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n+1})(\tilde{f}^{n+1}_{h,R}-f^{n+1}_{h,R})\,dx\,dv \\&\qquad=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})f^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv +\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n+1})(\tilde{f}^{n+1}_{h,R}-\overline{f}^{n+1}_{h,R})\,dx\,dv \\&\qquad\qquad+\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n+1})(\overline{f}^{n+1}_{h,R}-f^{n+1}_{h,R})\,dx\,dv. 
\end{equation}a \end{equation} Now substituting \eqref{eq:conv2} back into \eqref{eq: conv1} and then summing over index $n$ from $0$ to $N-1$ yields \begin{equation}\label{eq:cov4} \begin{equation}a &\int_0^T\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}(t)\partialartial_t\varphi(t)\,dt\,dx\,dv \\&=\sum_{n=0}^{N-1}\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}(t)\partialartial_t\varphi(t)\,dt\,dx\,dv \\&=\sum_{n=0}^{N-1}\mathcal{B}igg[\int_{t_n}^{t_{n+1}}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}(t)(-\triangle_v)^s\varphi(t)\,dt\,dx\,dv\\ & \qquad +\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}(\varphi(t_{n+1})f^{n+1}_{h,R}-\varphi(t_n)f^n_{h,R})\,dx\,dv +\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n+1})(\tilde{f}^{n+1}_{h,R}-\overline{f}^{n+1}_{h,R})\,dx\,dv\\ & \qquad +\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n+1})(\overline{f}^{n+1}_{h,R}-f^{n+1}_{h,R})\,dx\,dv\mathcal{B}igg] \\&\qquad=\int_0^T\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f_{h,R}(t)(-\triangle_v)^s\varphi(t)\,dt\,dx\,dv-\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(0)f_0(x,v)\,dxdv \\&\qquad\qquad+\sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n})(\tilde{f}^{n}_{h,R}-\overline{f}^{n}_{h,R})\,dx\,dv+\sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\varphi(t_{n})(\overline{f}^{n}_{h,R}-f^{n}_{h,R})\,dx\,dv. \end{equation}a \end{equation} In the above we also used the fact that $\varphi$ is compactly supported in $(-T, T)$ so that $\varphi(t_{N}) = 0$. Let $P^{n}_{h,R}(dxdvdx'dv')$ be the optimal coupling in $\mathcal{W}_h(\bar{f}^{n}_{h,R},f^{n}_{h,R})$. Then it is easy to see that \begin{equation} \begin{equation}a\label{timederivativeappro} &\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[f^{n}_{h,R}-\overline{f}_{h,R}^{n}\big]\,\varphi(t_{n})dxdv \\ &=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}f^{n}_{h,R}\varphi(t_{n},x',v')dx'dv'-\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\overline{f}_{h,R}^{n}(x,v)\varphi(t_{n},x,v)dxdv \\ &=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}\big[\varphi(t_{n},x',v')-\varphi(t_{n},x,v)\big]P^{n}_{h,R}(dxdvdx'dv')\, \\&=\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}\big[(x'-x)\cdot\nabla_{x'}\varphi(t_{n},x',v')+(v'-v)\cdot\nabla_{v'}\varphi(t_{n},x',v')\big]P^{n}_{h,R}(dxdvdx'dv')\\ & \qquad \qquad +\varepsilon_{n}, \end{equation}a \end{equation} where we have used Taylor expansion in the last equality and the error term $\varepsilon_{n}$ can be bounded as \begin{equation} |\varepsilon_{n}|\leq \frac{1}{2} \partialarallel\nabla^2\varphi(t_{n})\partialarallel_{\infty}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{4d}}\big[|x'-x|^2+|v'-v|^2\big]\,P^{n}_{h,R}(dxdvdx'dv'). 
\end{equation} In view of \eqref{eq: ELn}, \eqref{eq: errorELn} and \eqref{timederivativeappro}, we have that \begin{equation}\label{eq: approx1} \begin{equation}a & \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}[f^{n}_{h,R}(x,v)-\overline{f}_{h,R}^{n}(x,v)]\,\varphi(t_{n},x,v)dxdv \\ & = h \,\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big]\,f^{n}_{h,R}(x,v)\,dxdv\\ & \qquad + \frac{h^2}{2}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} \nabla_{v} \mathcal{P}si(v) \cdot \nabla_x \varphi(t_n, x, v)f^{n}_{h,R}(dxdv) + \varepsilon_n \end{equation}a \end{equation} and that \begin{equation}\begin{equation}a \label{eq: approx1} & \int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}[f^{n}_{h,R}(x,v)-\overline{f}_{h,R}^{n}(x,v)]\,\varphi(t_{n},x,v)dxdv \\ & =h\,\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big]\,f^{n}_{h,R}(x,v)\,dxdv\\ & \qquad + \frac{h^2}{2}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\nabla_{v}\mathcal{P}si(v)\cdot\nabla_{x} \varphi(t_n, x,v) f^{n}_{h,R}(dxdv) + \varepsilon_n. \end{equation}a \end{equation} As a result the last term on the right-hand side of \eqref{eq:cov4} can be written as \begin{equation}\label{eq:approx2} \begin{equation}a &\sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}[\overline{f}_{h,R}^{n}(x,v)-f^{n}_{h,R}(x,v)]\,\varphi(t_{n},x,v)dxdv\\ & \qquad = -h\,\sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big]\,f^{n}_{h,R}(x,v)\,dxdv\\ & \qquad -\frac{h^2}{2} \sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\nabla_{v}\mathcal{P}si(v) \cdot \nabla_{x} \varphi(t_n, x,v) f^{n}_{h,R}(dxdv) - \sum_{n=1}^{N} \varepsilon_n. 
\end{equation}a \end{equation} Now using again the fact that $\varphi(t_N) = 0$, we rewrite the first term on the right side of \eqref{eq:approx2} as follows \begin{equation}\label{eq:approx3} \begin{equation}a &-h\,\sum_{n=1}^{N}\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big]\,f^{n}_{h,R}(x,v)\,dxdv\\ & = -\sum_{n=1}^{N-1}\int_{t_n}^{t_{n+1}}\,\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big]\,f^{n}_{h,R}(x,v)\,dxdvdt\\ & = -\int_{0}^{T}\,\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\big[v\cdot\nabla_{x}\varphi(t,x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t,x,v)\big]\,f_{h,R}(x,v)\,dxdvdt\\ & \quad+ \sum_{n=1}^{N-1}\int_{t_n}^{t_{n+1}}\,\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{B}igg( \big[v\cdot\nabla_{x}\varphi(t,x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t,x,v)\big]\,f_{h,R}(x,v)\\ & \qquad \qquad \qquad - \big[v\cdot\nabla_{x}\varphi(t_{n},x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t_{n},x,v)\big]\,f^{n}_{h,R}(x,v)\mathcal{B}igg)\,dxdvdt\\ & \quad+\int_0^h\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}}\mathcal{P}hi_s(t)\ast f_0 \big[v\cdot\nabla_{x}\varphi(t,x,v)-\nabla_{v} \mathcal{P}si(v)\cdot\nabla_{v}\varphi(t,x,v)\big]\,dxdvdt. \end{equation}a \end{equation} Therefore the lemma follows by combining \eqref{eq:cov4}, \eqref{eq:approx2} and \eqref{eq:approx3}. \end{equation}d{proof} \subsection{Passing to the limit} Now we set the truncation parameter $R = h^{-1/2}$ and define \begin{equation}\label{eq:fh2} f_{h}(t):= f_{h, h^{-1/2}}(t), t\in [0, T]. \end{equation} Our aim is to prove that $f_h$ converges to a weak solution of \eqref{eq:fracKramers}. To this end, we first show that the residual term in the last lemma goes to zero when $h \rightarrow 0$. \begin{equation}gin{lemma} Let $f_0$ be a non-negative function such that $f_0\in \mathcal{P}^2_a(\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d})$ and $\int_{\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}^{2d}} f_0(x, v) \mathcal{P}si(v)dvdx < \infty$. Then as $h \rightarrow 0$, we have that \begin{equation}\label{eq:ineqR} |\mathbf{R}} \def\mathbb{Z}Z{\mathbb{Z}} \def\mathbb{N}N{\mathbb{N}c(h, h^{-1/2})| \leq C(h^2 + h + h^s + h^{1/s}) \rightarrow 0. \end{equation} \end{equation}d{lemma} \begin{equation}gin{proof} The proof follows closely the proof of Lemma 5.3 of \cite{AguelBowles2015}. 
In particular, by using the same arguments there, we can first obtain the following estimates:
$$
\begin{aligned}
\mathcal{R}_1(h, R) & \leq C T \sup_{t \in [0, T]}\|\varphi(t)\|_{\infty}\, R^{-2s},\\
\mathcal{R}_2(h, R) & \leq \frac{Th}{2}\sup_{t \in [0, T]} \|v\cdot\nabla_{x}\, \partial_t \varphi(t,x,v)-\nabla_{v} \Psi(v)\cdot\nabla_{v}\, \partial_t\varphi(t,x,v)\|_{\infty}\\
& + \frac{Th}{2}\sup_{t \in [0, T]} \left\|(-\triangle)^s \Big(v\cdot\nabla_{x}\varphi(t,x,v)-\nabla_{v} \Psi(v)\cdot\nabla_{v}\varphi(t,x,v)\Big)\right\|_{\infty},\\
\mathcal{R}_3(h, R) & \leq h \sup_{t \in [0, T]} \|v\cdot\nabla_{x} \varphi(t,x,v)-\nabla_{v} \Psi(v)\cdot\nabla_{v}\varphi(t,x,v)\|_{\infty}.
\end{aligned}
$$
Notice that the supremum norms appearing above are finite, since $\varphi \in C^\infty_0((-T, T)\times \mathbf{R}^{2d})$ and $\Psi\in C^{1,1}\cap C^{2,1}(\mathbf{R}^d)$. Next, we can bound $\mathcal{R}_4(h,R)$ as
$$
\mathcal{R}_4(h, R) \leq \frac{Th}{2}\sup_{t \in [0, T]} \|\nabla_v \Psi(v) \cdot \nabla_x \varphi(t,x,v)\|_{\infty}.
$$
In addition, thanks to inequality \eqref{eq:dm2xv} and Lemma \ref{lem:sumWh}, the error term $\tilde{\mathcal{R}}$ can be bounded as follows:
$$
\begin{aligned}
\tilde{\mathcal{R}}(h, R) & \leq C\sum_{n=1}^N \mathcal{W}_h(\bar{f}^n_{h, R}, f^n_{h, R})^2 + C h^2 \sum_{n=1}^N \Big(M_{2, v}(\overline{f}^{n}_{h, R}) + M_{2, v}(f^{n}_{h, R})\Big)\\
&\leq C(1 + h^2)\sum_{n=1}^N \mathcal{W}_h(\bar{f}^n_{h, R}, f^n_{h, R})^2 + C h^2 M_{2, v}(f^0) \\
& \qquad + C(N+1)Nh^2 (h^{1/s} + h R^{2-2s})\\
& \leq C \Big(h\int_{\mathbf{R}^{2d}}\Psi(v) f_0(x,v)\,dxdv + T \|D^2 \Psi\|_\infty (h^{1/s} + h R^{2- 2s}) \Big)\\
& \qquad+ C h^2 M_{2, v}(f^0) + C(T+1)T (h^{1/s} + h R^{2-2s}).
\end{aligned}
$$
Finally, the desired estimate \eqref{eq:ineqR} follows by combining the above estimates and by setting $R = h^{-1/2}$.
\end{proof}
Now we are ready to prove the main result, Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
First, thanks to Lemma \ref{lem:lp} and the assumption that $f_0\in L^p(\mathbf{R}^{2d})$ for some $1<p < \infty$, the constructed time-interpolation $\{f_{h}\}$ in \eqref{eq:fh2} is uniformly bounded in $L^p(\mathbf{R}^{2d}\times (0,T))$. Therefore, up to extracting a subsequence (not relabelled), there exists an $f\in L^p(\mathbf{R}^{2d}\times (0,T))$ such that $f_h \rightharpoonup f$ in $L^p(\mathbf{R}^{2d}\times (0,T))$.
In view of equation \eqref{eq:approxeqn} of Lemma \ref{lem:aweqn}, and by using the fact that $\partial_t \varphi +v\cdot \nabla_x \varphi - \nabla_v \Psi \cdot \nabla_v \varphi - (-\triangle_v)^s \varphi \in L^{p'}(\mathbf{R}^{2d}\times (0,T))$, we obtain by letting $h\rightarrow 0$ that
$$
\begin{aligned}
& \int_0^T\int_{\mathbf{R}^{2d}}f\, [\partial_t\varphi+v\cdot\nabla_x\varphi-\nabla_v \Psi\cdot\nabla_v\varphi-(-\triangle_v)^s\varphi]\,dxdvdt\\
& \qquad \qquad +\int_{\mathbf{R}^{2d}}f_0(x,v)\varphi(0,x,v)\,dxdv = 0.
\end{aligned}
$$
\end{proof}
\begin{remark}
By using a technique similar to that in the proof of Lemma 5.8 of \cite{AguelBowles2015}, one can show that the weak solution $f$ of \eqref{eq:fracKramers} is indeed a probability density for every $t \in (0,T)$, i.e. $\int_{\mathbf{R}^{2d}} f(t,x ,v)\,dxdv = \int_{\mathbf{R}^{2d}} f_0(x, v)\,dxdv = 1$.
\end{remark}
\section{Possible extensions to more complex systems}
\label{sec: extension}
With suitable adaptations, it should be possible, in principle, to extend the analysis of the present work to deal with more complex systems. Below we briefly discuss two such systems.
\subsection{FKFPE with external force fields}
When an external force field, which is assumed to be conservative, is present, the SDE \eqref{SDE} becomes
\begin{equation}
\begin{aligned}
& \frac{d X_t}{d t} = V_t,\\
& \frac{d V_t}{d t} = -\nabla U(X_t)- \nabla \Psi (V_t) + L_t^s,
\end{aligned}
\label{SDE2}
\end{equation}
where $U:\mathbf{R}^d\to\mathbf{R}$ is the external potential. The corresponding FKFPE \eqref{eq:fracKramers} is then given by
\begin{equation}
\begin{cases}
\partial_t f+v\cdot\nabla_x f=\div_v (\nabla U(x)f)+\div_v(\nabla \Psi(v)f)-(-\triangle_v)^s f ~~ \text{in} ~~ \mathbf{R}^d\times \mathbf{R}^d\times (0,\infty),\\
f(x,v,0)=f_0(x,v)~~\text{in}~ \mathbf{R}^d\times \mathbf{R}^d.
\end{cases}
\label{eq:fracKramers ext}
\end{equation}
One can view \eqref{SDE2} as a dissipative (friction and stochastic noise) perturbation of the classical Hamiltonian system
\begin{equation*}
\begin{aligned}
& \frac{d X_t}{d t} = V_t,\\
& \frac{d V_t}{d t} = -\nabla U(X_t).
\end{aligned}
\end{equation*}
Thus FKFPE \eqref{eq:fracKramers ext} contains both conservative and dissipative effects.
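To illustrate this interplay in the simplest situation, consider the classical case $s=1$ together with the quadratic choice $\Psi(v)=\frac{1}{2}|v|^2$; we stress that both choices are made only for this illustration and are not assumed elsewhere in the paper. If, in addition, $e^{-U}\in L^1(\mathbf{R}^d)$, then the Gibbs density
\begin{equation*}
f_\infty(x,v)=\frac{1}{Z}\,e^{-U(x)-\frac{|v|^2}{2}}, \qquad Z=\int_{\mathbf{R}^{2d}}e^{-U(x)-\frac{|v|^2}{2}}\,dxdv,
\end{equation*}
is a stationary solution of \eqref{eq:fracKramers ext}: indeed, $v\cdot\nabla_x f_\infty=-v\cdot\nabla U(x)\, f_\infty=\div_v(\nabla U(x)f_\infty)$, while $\div_v(\nabla\Psi(v)f_\infty)+\triangle_v f_\infty=\div_v\big(vf_\infty+\nabla_v f_\infty\big)=0$. For $s\in(0,1)$ the velocity part of the equilibrium is in general no longer Gaussian, which is one of the features distinguishing the fractional from the classical setting.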
To construct an approximation scheme for it, instead of the minimal acceleration cost function \eqref{eq:ch}, one would use the following minimal Hamiltonian cost function, which was introduced in \cite{DPZ13a} for the development of a variational scheme for the classical Kramers equation:
\begin{multline}
\label{def:widetildeCh}
\widetilde{C}_h(x,v;x',v'):= h\inf \bigg\{\int_0^h\bigl|\ddot{\xi}(t)+ \nabla U(\xi(t))\bigr|^2\,dt: \xi\in C^1([0,h],\mathbf{R}^d) ~~ \text{such that}\\~~ (\xi,\dot\xi)(0)=(x,v),\ (\xi,\dot\xi)(h)=(x',v')\bigg\}.
\end{multline}
Physically, the optimal value $\widetilde{C}_h(x,v;x',v')$ measures the least deviation from a Hamiltonian flow that connects $(x,v)$ and $(x',v')$ in the time interval $[0,h]$. Under the assumption that $U\in C^2(\mathbf{R}^d)$ with $\|\nabla^2 U\|\leq C$, and using the properties of the cost function $\widetilde C_h$ established in \cite{DPZ13a}, we expect that the splitting scheme \eqref{eq:step1}-\eqref{eq:step2}, where in \eqref{eq: Wh distance} the Kantorovich optimal cost functional $C_h$ is replaced by $\widetilde C_h$, can be proved to converge to a weak solution of FKFPE \eqref{eq:fracKramers ext}.
\subsection{A multi-component FKFPE equation}
The second system is an extension of FKFPE \eqref{eq:fracKramers} on the phase space $(x,v)\in \mathbf{R}^{2d}$ to a multi-component FKFPE on the space $\mathbf{x}=(x_1,\ldots, x_n)\in \mathbf{R}^{nd}$:
\begin{equation}
\label{eq: Kramers eqn ext2}
\begin{cases}
\partial_t f+\sum_{i=2}^{n}x_{i}\cdot\nabla_{x_{i-1}}f=\div_{x_n}(\nabla V(x_n)f)-(-\triangle_{x_n})^s f \quad \text{in} \quad \mathbf{R}^{nd}\times (0,\infty),\\
f(x_1,\ldots,x_n,0)=f_0(x_1,\ldots,x_n)\quad \text{in} \quad \mathbf{R}^{nd}.
\end{cases}
\end{equation}
Equation \eqref{eq: Kramers eqn ext2} with $n>2$ and $s=1$ has been studied extensively in the mathematical literature and has found many applications in different fields. For instance, it has been used as a simplified model of a finite Markovian approximation for the generalised Langevin dynamics \cite{OP11, Duong15NA} or as a model of a harmonic chain of oscillators that arises in the context of non-equilibrium statistical mechanics \cite{EH00,BL08, DelarueMenozzi10}. It has also appeared in mathematical finance \cite{Pascucci2005}. Regularity properties of solutions to equation~\eqref{eq: Kramers eqn ext2} with $s \in (0,1]$ have been investigated recently \cite{HMP2017TMP,ChenZhang2017TMP, ChenZhang2017TMP2}.
To construct an approximation scheme for equation~\eqref{eq: Kramers eqn ext2}, instead of the minimal acceleration cost function \eqref{eq:ch}, one would use the so-called mean squared derivative cost function
\begin{equation*}
\bar{C}_{n,h}(x_{1},x_{2},\ldots,x_{n};y_{1},y_{2},\ldots,y_{n}):=h\inf\limits_{\xi}\int_0^h|{\xi}^{(n)}(t)|^{2}\,dt,
\end{equation*}
where $\mathbf{x}=(x_{1},\ldots,x_{n})\in\mathbf{R}^{nd}$, $\mathbf{y}=(y_{1},\ldots,y_{n})\in\mathbf{R}^{nd}$, and the infimum is taken over all curves $\xi\in C^{n}([0,h],\mathbf{R}^d)$ that satisfy the boundary conditions
\begin{equation*}
(\xi,\dot{\xi},\ldots,\xi^{(n-1)})(0)=(x_{1},x_{2},\ldots,x_{n})\quad\text{and}\quad (\xi,\dot{\xi},\ldots,\xi^{(n-1)})(h)=(y_{1},y_{2},\ldots,y_{n}).
\end{equation*}
Several properties of the mean squared derivative cost function, including an explicit representation, have been studied in \cite{DuongTran2017a}, and a variational formulation using this cost function for equation \eqref{eq: Kramers eqn ext2} with $s=1$ has been developed recently in~\cite{DuongTran2018}. Using the properties of the cost function $\bar C_{n,h}$ established in \cite{DuongTran2017a}, it should be possible, in principle, to adapt the analysis of the present paper to show that, under suitable assumptions, the splitting scheme \eqref{eq:step1}-\eqref{eq:step2}, with $C_h$ replaced by $\bar C_{n,h}$, converges to a weak solution of the multi-component FKFPE \eqref{eq: Kramers eqn ext2}.
\section*{Acknowledgments}
This work was partially carried out during the authors' stay at the Warwick Mathematics Institute. The authors thank WMI for its great academic and administrative support. M. H. Duong was also supported by ERC Starting Grant 335120.
\end{document}
\begin{document}
\renewcommand{\baselinestretch}{1.3}
\title{On Fan's conjecture about $4$-flow}
\author{Deping Song\textsuperscript{a}\quad Shuang Li\textsuperscript{a}\quad Xiao Wang\textsuperscript{b} \\ \\{\footnotesize \textsuperscript{a} \em Laboratory of Mathematics and Complex System $($Ministry of Education$)$,}\\{\footnotesize \em School of Mathematical Sciences, Beijing Normal University, Beijing, 100875, China } \\{\footnotesize \textsuperscript{b} \em College of Mathematics and Computer Application, Shangluo University, Shangluo, 726000, China } }
\date{}
\maketitle
\footnote{\scriptsize\qquad{\em E-mail address:} [email protected] ~~(D. Song), [email protected] ~~(S. Li),\\ [email protected] ~~(X. Wang).}
\begin{abstract}
Let $G$ be a bridgeless graph and let $C$ be a circuit of $G$. Fan proposed a conjecture that if $G/C$ admits a nowhere-zero 4-flow, then $G$ admits a 4-flow $(D,f)$ such that $E(G)-E(C)\subseteq$ supp$(f)$ and $|\textrm{supp}(f)\cap E(C)|>\frac{3}{4}|E(C)|$. The purpose of this conjecture is to find shorter circuit covers in bridgeless graphs. Fan showed that the conjecture holds for $|E(C)|\le 19$. Wang, Lu and Zhang showed that the conjecture holds for $|E(C)|\le 27$. In this paper, we prove that the conjecture holds for $|E(C)|\le 35$.

\noindent {\em Key words:} Integer 4-flow,~$\mathbb{Z}_2\times \mathbb{Z}_2$ flow,~circuit
\end{abstract}
\section{Introduction}
Graphs considered in this paper may have loops and multiple edges. For terminology and notation not defined here, we follow~\cite{Bondy,Zhang_1}. A \it circuit \rm is a $2$-regular connected graph. A \it cycle \rm is a graph such that the degree of each vertex is even. \it Contracting \rm an edge means deleting the edge and then identifying its ends. For a subgraph $H$~of a graph $G$,~let $G/H$ be the graph obtained from $G$ by contracting all edges of $H$. A \it weight \rm of a graph $G$ is a function $\omega : E(G)\to \mathbb{Z}^+$,~where $\mathbb{Z}^+$ is the set of positive integers. For a subgraph $H$ of $G$,~denote
$$ \omega (H)=\omega(E(H))=\sum\limits_{e\in E(H)}\omega(e).$$
Let $G$ be a graph and let $(D,f)$~be an ordered pair, where $D$ is an orientation of $E(G)$~and $f:E(G)\to \Gamma$~is a function, where $\Gamma$ is an abelian group (an additive group with ``$0$'' as its identity). An oriented edge of $G$ (under the orientation $D$)~is called an \it arc. \rm The graph $G$ under the orientation $D$ is denoted by $D(G)$. For a vertex $v\in V(G)$,~let $E^+(v)$~(or $E^-(v)$)~be the set of all arcs of $D(G)$ with their tails~(or,~heads, respectively)~at the vertex $v$ and let
$$f^+(v)=\sum\limits_{e\in E^+(v)}f(e)$$ and $$f^-(v)=\sum\limits_{e\in E^-(v)}f(e).$$
A \it flow \rm of a graph $G$ is an ordered pair $(D,f)$ such that $$f^+(v)=f^-(v)$$ for every vertex $v\in V(G).$ An \it integer flow \rm $(D,f)$ is a flow of $G$ with an integer-valued function $f$. An \it integer $k$-flow \rm of $G$ is an integer flow $(D,f)$ such that $|f(e)|<k$ for every edge $e\in E(G)$. The \it support \rm of $f$ is the set of all edges of $G$ with $f(e)\ne 0$ and is denoted by $supp(f)$. A flow $(D,f)$ of a graph $G$ is \it nowhere-zero \rm if $supp(f)=E(G)$.
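For concreteness, here is a minimal (and entirely standard) illustration of these notions; the circuit below is generic and is not one of the configurations studied later in the paper. Let $C=u_1u_2\cdots u_ku_1$ be a circuit, orient its edges cyclically and set $f(e)=1$ for every edge $e$ of $C$. At each vertex exactly one arc enters and one arc leaves, so
$$f^+(u_i)=f(u_iu_{i+1})=1=f(u_{i-1}u_i)=f^-(u_i) \qquad (\text{indices taken modulo } k),$$
and hence $(D,f)$ is a flow of $C$. Since $|f(e)|=1<2$ for every edge, it is a nowhere-zero integer $2$-flow (in particular, a $4$-flow) with $supp(f)=E(C)$.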
For a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow~$(D,f)$,~it is not difficult to check that for each orientation~$D'$,~$(D',f)$~is a~$\mathbb{Z}_2\times \mathbb{Z}_2$-flow.~For convenience,~$(D,f)$~is denoted by $f$.~If $C$ is a cycle of a graph $G$ and $a\in \mathbb{Z}_2\times \mathbb{Z}_2-\left\lbrace (0,0)\right\rbrace $,~define $f_{C,a}$~as follows:
\begin{equation}
f_{C,a}(e)=\left\lbrace \begin{array}{lllllllll} a&&&\rm if~\it e\in E(C);\\ (0,0)&&&\rm if~\it e\in E(G)\backslash E(C) . \end{array} \nonumber \right.
\end{equation}
Then $f_{C,a}$ is a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow of $G$,~and $f_{C,(1,1)}$ is denoted by $f_{C}$.

In 2017,~in order to find shorter circuit covers of graphs,~Fan~\cite{Fan}~proposed the following conjecture.
\begin{con}\rm(\cite{Fan})\label{1.1} \it Let $G$ be a graph with a circuit $C$.~If $G/C$ admits a nowhere-zero $4$-flow,~then $G$ admits a $4$-flow $(D,f)$ such that $E(G)- E(C)\subseteq supp(f)$ and $ | E_{f=0}(C)|<\frac{1}{4}|E(C)|.$
\end{con}
Fan~\cite{Fan} proved that Conjecture \ref{1.1} is true for~$|E(C)|\le 19$.~As an application of this result,~Fan proved that if $G$ is a bridgeless graph with minimum degree at least three,~then $cc(G)< \frac{278}{171}|E(G)|\approx 1.62573|E(G)|$,~which improved the bound $cc(G)< \frac{44}{27}|E(G)|\approx1.62963|E(G)|$ of Kaiser et al.~\cite{K}.~This conjecture can be refined as follows:
\begin{con}\label{Conjecture 1.2} \it Let $G$ be a graph with a circuit $C$ and $\omega:E(G)\to \mathbb{Z}$.~If there is a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow $f$~such that $E(G)-E(C)\subseteq supp(f)$,~then there is a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow $g$~in $G$ such that $E(G)-E(C)\subseteq supp(g)$ and $\omega(E_{g=(0,0)}(C))< \frac{1}{4}\omega(C)$.
\end{con}
Recently,~Wang,~Lu and Zhang~\cite{WLG} proved that Conjecture \ref{Conjecture 1.2} is true for $\omega(C)\le 27$.~As an application of this result,~they proved that if $G$ is a bridgeless graph with minimum degree at least three,~then $cc(G)<\frac{394}{243}|E(G)|\approx 1.6214|E(G)|$;~if $G$ is a bridgeless and loopless graph with minimum degree at least three,~then $cc(G)< \frac{334}{207}|E(G)|\approx 1.6135|E(G)|$. We will prove that Conjecture \ref{Conjecture 1.2} is true for $\omega(C)\le 35$.
\begin{thm}\label{1.3} \it Let $G$ be a graph with a circuit $C$ and $\omega:E(G)\to \mathbb{Z}$.~If~$ \omega(C)\le 35$~and there is a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow $f$~in $G$~such that $E(G)- E(C)\subseteq supp(f)$,~then there is a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow $g$~in $G$ such that $E(G)- E(C)\subseteq supp(g)$ and $\omega(E_{g=(0,0)}(C))< \frac{1}{4}\omega(C)$.
\end{thm}
Theorem $\ref{1.3}$ can be restated as follows.
\begin{thm} \it Let $G$ be a graph with a circuit $C$.~If~$ |E(C)|\le 35$~and there is a $4$-flow $(D,f)$~in $G$~such that $E(G)- E(C)\subseteq supp(f)$,~then there is a $4$-flow $(D,g)$~in $G$ such that $E(G)- E(C)\subseteq supp(g)$ and $|E_{g=0}(C)|< \frac{1}{4}|E(C)|$.
\end{thm}
As an application of this result,~one can argue as in \cite{Fan} to check that if $G$ is a bridgeless graph with minimum degree at least three,~then $cc(G)< \frac{34}{21}|E(G)|\approx 1.6190|E(G)|$;~if $G$ is a bridgeless and loopless graph with minimum degree at least three,~then $cc(G)< \frac{50}{31}|E(G)|\approx 1.6129|E(G)|$.
\section{Preliminaries and Lemmas}
Let $G$ be a graph and $H$ be a subgraph of $G$.
For a $\mathbb{Z}_2\times\mathbb{Z}_2$-flow~$f$,~define an \it equivalence class \rm of $f$ with respect to $H$ by
$$\mathcal{R}_H(f)=\left\lbrace g:g~\textrm{is a}~\mathbb{Z}_2\times\mathbb{Z}_2\textrm{-flow in $G$ and $supp(g)-E(H)=supp(f)-E(H)$}\right\rbrace .$$
And for $a\in\mathbb{Z}_2\times\mathbb{Z}_2$,~let
$$ E_{f=a}(H)=\left\lbrace e\in E(H):f(e)=a \right\rbrace. $$
Let $f$ be a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow in $G$,~let~$\omega:E(G)\to \mathbb{Z}^+$ be a weight and let $C$~be a circuit of $G$.~For a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow $g\in \mathcal{R}_C(f)$~and $xy,yz\in E(C)$ with $g(xy)=g(yz)$,~we denote by $G^*$ the graph obtained from $G$ by \it lifting \rm $xy,yz\in E(C)$,~that is,~deleting $xy,yz$ and adding a new edge $e^*=xz$,~and we let
\begin{equation}
\omega^*(e)=\left\lbrace \begin{array}{lllllllll} \omega(xy)+\omega(yz)&&&\rm if~\it e=e^*;\\ \omega(e)&&&\rm if~\it e\in E(G^*)-\left\lbrace e^*\right\rbrace . \end{array} \nonumber \right.
\end{equation}
The resulting circuit is denoted by $C^*$. Let
\begin{equation}
f^*(e)=\left\lbrace \begin{array}{lllllllll} g(xy)&&&\rm if~\it e=e^*;\\ g(e)&&&\rm if~\it e\in E(G^*)-\left\lbrace e^*\right\rbrace . \end{array} \nonumber \right.
\end{equation}
Then $f^*$~is also a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow in $G^*$ and $supp(f^*)-E(C^*)=supp(f)-E(C)$.~In addition,~$|E(G^*)|<|E(G)|$.

Let $G$ be a graph admitting a $\mathbb{Z}_2\times\mathbb{Z}_2$-flow~$f$ and let $C$~be a circuit of $G$.~A segment $S$ is \it $E_{f=(0,0)}$-alternating \rm in $C$ if the edges of $S$ are alternately in $E_{f=(0,0)}\cap E(C)$ and $E(C)-E_{f=(0,0)}$. An $E_{f=(0,0)}$-alternating segment is \it maximal \rm if for any $g\in \mathcal{R}_C(f)$,~there is no $E_{g=(0,0)}$-alternating segment~$S'$~such that
$$ E_{f=(0,0)}(S)\subseteq E_{g=(0,0)}(S')\textrm{~~~and~~~$|E(S)|<|E(S')|$.}$$
The following two lemmas are used in the proof of Theorem $\ref{1.3}$.
\begin{lemma}\rm(\cite{T}) \it Let $\Gamma$ be an abelian group of order $k$. A graph $G$ admits a nowhere-zero $k$-flow if and only if $G$ admits a nowhere-zero $\Gamma$-flow.
\end{lemma}
The following lemma is similar to Lemma $3.2$ in \cite{WLG}.
\begin{lemma}\label{lem1}
Let $G$ be a graph~and~$\sigma$ be a permutation on $\left\lbrace (1,0),(0,1),(1,1)\right\rbrace $.~If there is a nowhere-zero~$\mathbb{Z}_2\times\mathbb{Z}_2 $-flow~$f$,~then $\sigma f$ is a nowhere-zero~$\mathbb{Z}_2\times\mathbb{Z}_2$-flow.
\end{lemma}
\proof By the definition of a $\mathbb{Z}_2\times\mathbb{Z}_2$-flow,~for each $a\in\mathbb{Z}_2\times\mathbb{Z}_2-\left\lbrace (0,0)\right\rbrace $ the subgraph $G-E_{f=a}(G)$~is a cycle,~and hence $E_{f=a}(G)$~is a parity subgraph of $G$.~Let $S_1=E_{f=\sigma^{-1}(1,0)}(G)\cup E_{f=\sigma^{-1}(1,1)}(G)$ and $S_2=E_{f=\sigma^{-1}(0,1)}(G)\cup E_{f=\sigma^{-1}(1,1)}(G)$;~then $\left\lbrace S_1,~S_2\right\rbrace$ is a cycle cover of $G$. Let
\begin{equation}
g(e)=\left\lbrace \begin{array}{lllllllll} (1,0)&&&\rm if~\it e\in E(S_{\rm 1})-E(S_{\rm 2});\\ (0,1)&&&\rm if~\it e\in E(S_{\rm 2})-E(S_{\rm 1});\\ (1,1)&&&\rm if~\it e\in E(S_{\rm 1})\cap E(S_{\rm 2}).\\ \end{array} \nonumber \right.
\end{equation}
We can verify that $g$ is a nowhere-zero~$\mathbb{Z}_2\times\mathbb{Z}_2$-flow of $G$ and that $g=\sigma f$. $ \qed$
\section{Maximal alternating segments of the circuit $C$}
From now on,~we suppose that there is a quadruple $(G, f, C,\omega )$ such that

(S1) $f$ is a $\mathbb{Z}_2\times \mathbb{Z}_2$-flow in $G$, $C$ is a circuit in $G$ and $\omega:E(G)\to \mathbb{Z}^+$ with $\omega(C)\le35$;

(S2) Subject to (S1), $\omega(E_{g=(0,0)}(C))\geq \frac{1}{4}\omega(C)$ for any $g\in \mathcal{R}_C(f)$;

(S3) Subject to (S1) and (S2), $\left|E(G)\right|$ is as small as possible.

If $\omega(C)\not\equiv 0 \left( \textrm{mod }4\right) $,~then there is an $a\in \mathbb{Z}_2\times\mathbb{Z}_2$ such that $ \omega(E_{f=a}(C))<\frac{1}{4}\omega(C)$.~If $a=(0,0)$,~we are done;~if $a\ne (0,0)$,~then by Lemma \ref{lem1} we can assume that $a=(1,1)$,~and letting $g=f+f_C$ we are done.~So, from now on we always assume that $\omega(C)\le 32$ and $\omega(C)\equiv 0 \left( \textrm{mod }4\right) $.

For $g\in \mathcal{R}_C(f) $ and $\left\lbrace a,b \right\rbrace \subset \left\lbrace (1,0),(0,1),(1,1)\right\rbrace $,~let $P_{a,b}$ be the subgraph of $G$ induced by the edges in $E_{g=a}(G)\cup E_{g=b}(G)-E(C)$. Then the vertices of odd degree of $P_{a,b}$ are in $V(C)$. Assume that $e_1$ and $e_2$ are two edges in $C$ with exactly one common end $x$ and that $g(e_1)=a $,~$g(e_2)=b$.~Then $x$ is immediately seen to be a vertex of odd degree in $ P_{a,a+b}$.~Let $y$ be another vertex of odd degree in the same component and let $e_3$,~$e_4$ be two edges in $C$ with exactly one common end $y$.

Suppose $C=v_0v_1\cdots v_{|E(C)|-1}$,~$S=v_1\cdots v_{|E(S)|+1}$ and denote $v_{-1}=v_{|E(C)|-1}$. By Lemma \ref*{lem1}, we can assume that $ g(v_0v_1)=(1,0)=a$,~$g(v_0v_{-1})=(1,1)=b$,~$g(v_{|E(S)|+1}v_{|E(S)|+2})=c$ and $g(v_{|E(S)|+2}v_{|E(S)|+3})=d$. Then $v_0$ is a vertex of odd degree in $ P_{a,a+b}$ and $v_{|E(S)|+1}$ is a vertex of odd degree in $ P_{c,c+d}$;~let $v_t$ be another vertex of odd degree in the same component of $ P_{a,a+b}$ and let $v_{\bar{t}}$ be another vertex of odd degree in the same component of $ P_{c,c+d}$.~Moreover,~$v_1$ is a vertex of odd degree in $ P_{a,b}$;~let $v_{t'}$ be another vertex of odd degree in the same component of $ P_{a,b}$.~If $t'\ge |E(S)|+1$,~then there is a circuit $ C_1$~consisting of a path $P_1$ from $v_1$ to $v_{t'}$ and the path $v_1v_0v_{-1}\cdots v_{t'}$.~Let $g'=g+f_{C_1,(a+b)}$;~then $v_0$ is a vertex of odd degree in $ P_{b,a+b}$ with respect to $g'$, and we let $v_{t''}$ be another vertex of odd degree in the same component of $ P_{b,a+b}$ with respect to $g'$. Then we have the following lemmas.
\begin{lemma}\label{l1} $\omega(E_{g=a}(C))=\frac{\omega(C)}{4}\le 8$ for any $g\in \mathcal{R}_C(f)$ and each $a\in \mathbb{Z}_2\times \mathbb{Z}_2$.
\end{lemma}
\begin{proof}
Suppose to the contrary that there is a flow $g\in \mathcal{R}_C(f)$ and $a\in \mathbb{Z}_2\times \mathbb{Z}_2$ such that $\omega(E_{g=a}(C))< \frac{\omega(C)}{4}$. By (S2), $\omega(E_{g=(0,0)}(C))\ge\frac{\omega(C)}{4}$. By Lemma \ref{lem1}, we can assume that $\omega(E_{g=(1,1)}(C))<\frac{\omega(C)}{4}$. Let $g'=g+f_C$; then $(G, g', C, \omega)$ is a new quadruple satisfying (S1). Moreover, $g'\in \mathcal{R}_C(f)$ and
$$\omega(E_{g'=(0,0)}(C))<\frac{\omega(C)}{4}.$$
This contradicts (S2).
\end{proof}
\begin{lemma}\label{l2} $E_{g=a}(C)$ is a matching for any $g\in \mathcal{R}_C(f)$ and each $a\in \mathbb{Z}_2\times \mathbb{Z}_2$.
\end{lemma} \begin{proof} Suppose to be contrary that there is a function $g\in \mathcal{R}_C(f)$ and $a\in \mathbb{Z}_2\times \mathbb{Z}_2$ such that $E_{g=a}$ is not a matching. Then there are two edges $xy, yz\in E(C)$ such that $g(xy)=g(yz)=a$. By \it lifting \rm $xy$ and $yz$, we can obtain a new quadruple $(G, f^*, C^*,\omega ^*)$ which satisfies (S1) and (S2), however, $\left|E(G^*)\right|=\left|E(G)\right|-1,$ which is a contradiction as condition (S3). \end{proof} \begin{lemma}\label{l3} Let $e_1$ and $e_2$ be two edges in $C$ with exactly one common end $x$ and $g(e_1)=a $,~$g(e_2)=b$,~$y$ be another vertex with odd degree in the same component of $ P_{a,a+b}$ and $e_3$,~$e_4$ be two edges in $C$ with exactly one common end $y$. Then $g(e_3)+g(e_4)\ne b$ Let $Q$ be a path of $C$ from $x$ to $y$,~then for each $c\in \mathbb{Z}_2\times \mathbb{Z}_2$,~then $ \omega(E_{g=c}(Q))=\omega(E_{g=c+b}(Q))$. \end{lemma} \begin{proof} If $g(e_3)+g(e_4)= b$, then $g'(e_3)=g'(e_4)$, which is a contradiction as Lemma \ref{l2}. Thus, we have $g(e_3)+g(e_4)\ne b$. If $\omega(E_{g=c}(Q))\ne\omega(E_{g=c+b}(Q))$, then $ \omega(E_{g=c}(Q))\ne \frac{\omega(C)}{4}$, which is a contradiction as Lemma $\ref{l1}$. Thus, we have $\omega(E_{g=c}(Q))=\omega(E_{g=c+b}(Q))$. \end{proof} \begin{lemma}\label{l6} Let $P_1=v_1v_2\cdots v_r$,~$P_2=v_{s}v_{s+1}\cdots v_{t}$ be two paths of $C$ $(r\ge s+1)$.~Let $Q_1$~be a path in $ P_{a,a+b}$ from $v_1$ to $v_r$ and $Q_2$~be a path in $ P_{a,a+b}$ from $v_s$ to $v_t$,~then $$ \omega(E_{g=c}(v_1\cdots v_{t}))=\omega(E_{g=c+b}(v_1\cdots v_{s}))$$ \end{lemma} \begin{proof} By Lemma \ref{l3},~$ \omega(E_{g=c}(v_1\cdots v_{t}))=\omega(E_{g=c+b}(v_1\cdots v_{t}))$.~Let $C_i=P_i\cup Q_i$,~$i=1,2$. We can check that $g_1=g+f_{C_1,b}+f_{C_2,b}\in \mathcal{R}_C(f)$,~then $\omega(E_{g=c}(v_1\cdots v_{s}))+\omega(E_{g=c}(v_r\cdots v_{t}))=\omega(E_{g=c+b}(v_1\cdots v_{s}))+\omega(E_{g=c+b}(v_r\cdots v_{t}))$, we have $$\omega(E_{g=c}(v_1\cdots v_{t}))=\omega(E_{g=c+b}(v_1\cdots v_{s}))$$ \end{proof} Let $\Omega$ be the maximum weight of $C$. \begin{lemma}\label{l4} Let $g\in \mathcal{R}_C(f)$,~$S$~be a maximal $E_{g=(0,0)}$-alternating segment of $C$. If there is an edge $e\in E_{g=(0,0)}(S)$ with $\omega(e)=\Omega$,~then $|E(S)|\le |E(C)|-2$.~If~$\Omega= 3$ and there is an edge $e\in E_{f=(0,0)}(S)$ such that $\omega(e)=2$,~then $|E(S)|\le |E(C)|-2$. \end{lemma} \proof Since $E_{g=\left( 0,0\right) }(S)$ is a matching by Lemma $\ref{l2}$~and the definition of $S$,~$|E(S)|\le |E(C)|-1$. Suppose that $|E(S)|=|E(C)|-1$.~If there is an edge $e\in E_{g=(0,0)}(S)$ with $\omega(e)=\Omega$,~then~ \begin{equation} \begin{array}{lllllllllll} 0=\omega(C)-\omega(C)&\le |E_{f=(0,0)}\cap E(C)|+\Omega|E(C)-E_{f=(0,0)}|-\omega(C)\\ &\le \frac{\omega(C)}{4}+\Omega(\frac{\omega(C)}{4}+1-\Omega)-\omega(C)\\ &\le \frac{1}{64}\left(\omega(C) -20\right)^2-6<0, \end{array} \nonumber \end{equation} a contradiction.~If~$\Omega= 3$ and there is an edge $e\in E_{f=(0,0)}(S)$ such that $\omega(e)=2$,~then \begin{equation} \begin{array}{lllllllllll} 0=\omega(C)-\omega(C)&\le |E_{f=(0,0)}\cap E(C)|+\Omega|E(C)-E_{f=(0,0)}|-\omega(C)\\ &\le \frac{\omega(C)}{4}+\Omega(\frac{\omega(C)}{4}+2-\Omega)-\omega(C)\\ &=-3<0, \end{array} \nonumber \end{equation} a contradiction. $ \qed$ \begin{lemma}\label{l5} Let $g\in \mathcal{R}_C(f)$,~$S$~be a maximal $E_{g=(0,0)}$-alternating segment of $C$, then $5\le t\le |E(S)|$ and $t\ne 6$. If $t=7$, then $g(v_2v_3)=(1,1)$ and $g(v_4v_5)=(1,1)$. 
Furthermore, $t\ge 7$ if there is an edge in $ \left\lbrace v_1v_2,v_3v_4\right\rbrace $ weighted $\Omega$. \end{lemma} \begin{proof} Let $P_1=v_0v_1v_2v_3$,~$P_2=v_3v_4v_5v_6$.~By Lemma \ref{l3},~$g(v_1v_2)=g(v_3v_4)=(0,0)$,~$g(v_0v_1)=a$,~we have $t\ge 5$ and $t\ge 7$ if there is an edge in $ \left\lbrace v_1v_2,v_3v_4\right\rbrace $ weighted $\Omega$. If $ t\ge|E(S)|+1$,~then there is a circuit $ C_1$~consist of a path $P_1$ from $v_1$ to $v_{t}$ and $v_0v_{-1}\cdots v_{t}$.~Let $g'=g+f_{C_1,(b)}$, then $g'\in \mathcal{R}_C(f)$ and $ g'(v_0v_{-1})=g'(v_0v_1)=(0,0)$,~a contradiction as Lemma \ref{l2}. Let's prove that $t\ne 6$. Assume that $t=6$, we have $g(v_2v_3)=(0,1)$ or $g(v_4v_5)=(0,1)$. If $g(v_2v_3)=(0,1)$, then $g(v_4v_5)=(1,1)$. Consider $P_{(0,1),(1,0)}$, $v_{3}$ be a vertex with odd degree on $ P_{(0,1),(1,0)}$, let $v_{t_0}$ be another vertex with odd degree in the same component of $ P_{(0,1),(1,0)}$. ~If $v_{t_0}\notin \left\lbrace v_0,v_1,\dots,v_6\right\rbrace $, then $\omega(E_{g=(0,0)}(P_1))=\omega(E_{g=(1,1)}(P_1))$, which is a contradiction as $g(v_0v_1)=(1,0)$, $g(v_1v_2)=(0,0)$ and $g(v_2v_3)= (0,1)$. Similarily, we have $t_0\notin \left\lbrace 0,1\right\rbrace $, thus $g(v_2v_3)\ne (0,1).$ Similarily, we can prove that $g(v_4v_5)\ne (0,1)$. This is a contradiction. Finally, if $t=7$, let's prove that $g(v_2v_3)=(1,1)$ and $g(v_4v_5)=(1,1)$. Since $t=7$ and Lemma \ref{l3}, $g(v_6v_7)=(0,1)$ or $(1,0)$. If $g(v_2v_3)=(0,1)$ or $(1,0)$, then $g(v_4v_5)=(1,1)$. Consider $P_{(0,1),(1,0)}$, $v_{3}$ be a vertex with odd degree on $ P_{(0,1),(1,0)}$, let $v_{t^*}$ be another vertex with odd degree in the same component of $ P_{(0,1),(1,0)}$. If $v_{t^*}\notin \left\lbrace v_0,v_1,\dots,v_7\right\rbrace $, then $\omega(E_{g=(0,0)}(P_1))=\omega(E_{g=(1,1)}(P_1))$, which is a contradiction as $g(v_0v_1)=(1,0)$, $g(v_1v_2)=(0,0)$ and $g(v_2v_3)=(1,0)$ or $(0,1)$. Similarily, we have $t^*\notin \left\lbrace 0,1,7 \right\rbrace $, thus $t^*=6$. Since $\omega(v_4v_5)=\omega(v_1v_2)+\omega(v_3v_4)+\omega(v_5v_6)$, $\omega(v_4v_5)\ne \omega(v_3v_4)+\omega(v_5v_6)$, which is a contradiction as $\omega(E_{g=(0,0)}(P_2))=\omega(E_{g=(1,1)}(P_2))$. Thus, $g(v_2v_3)=(1,1)$. Similarily, we can prove that $g(v_4v_5)=(1,1)$. \end{proof} \begin{Remark}\label{re1} Let $g\in \mathcal{R}_C(f)$,~$S$~be a maximal $E_{g=(0,0)}$-alternating segment of $C$,~then by Lemma $\ref{l5}$,~$t''\le |E(S)|$ if $t''$ is defined. \end{Remark} Now,~adding a condiction,~we suppose that $S$ is a maximal $E_{g=(0,0)}$-alternating segment in $C$ such that there is an edge $e\in E(S)$ which is weighted $\Omega$ and $g(e)=(0,0)$. \begin{lemma}\label{l41} $|E(S)|\ge 9$ and $\Omega\le4$. \end{lemma} \begin{proof} Suppose to be contrary that $|E(S)|<9$.~By symmetry,~assume there is an edge in $ \left\lbrace v_1v_2,v_3v_4\right\rbrace $ weighted $\Omega$.~By Lemma $\ref{l5}$,~$|E(S)|=7$ and $t=7$.~By Lemma \ref{l3} $g(v_3v_4)=g(v_5v_6)=(1,1)$,~$g(v_7v_8)=(0,1)$ and~$ t'\ge 8$.~Futhermore, $t''\ge 8$ by Lemma \ref{l3},~which is a contradiction as Lemma \ref{l5}. Since\begin{equation} \begin{array}{lllllllllll} 35\ge\omega(C)&= 4|E_{g=(0,0)}\cap E(C)|\\ &\ge 4(\Omega+4)\\ \end{array} \nonumber \end{equation} Thus,~$\Omega\le4$. \end{proof} By symmetry,~assume there is an edge in $ \left\lbrace v_1v_2,v_3v_4,v_5v_6\right\rbrace $ weighted $\Omega$ if $9\le|E(S)|\le 11$.~If $|E(S)|=9$,~let the number of edges in $\left\lbrace v_1v_2,v_3v_4\right\rbrace$ as small as possible. 
\begin{lemma}\label{l42} If~ $\Omega=4$, then $S$ satisfy the following two conditions: (a) $\omega(v_{5}v_{6})=4$ and $\omega(e)=1$ for any $e\in E_{g=(0,0)}(S)\backslash\left\lbrace v_{5}v_{6}\right\rbrace $; (b) $\omega(v_{2}v_{3})=\omega(v_{8}v_{9})=2$. \end{lemma} \begin{proof} Combining $\Omega=4$ with Lemma \ref{l41}, $\left|E(S)\right|= 9$ and $\omega(C)=32$,~there are four edges weighted $1$ in $E_{g=(0,0)}(S)$ and there is one edge weighted $4$ in $E_{g=(0,0)}(S)$. By Lemma \ref{l5}, $7\leq t\leq \left|E(S)\right|=9$ or $t=5$. If $t\ge7$, then $g(v_2v_3)=(1,1)$ by the same proof of Lemma \ref{l5},~then $t'\ge 10$,~then $t''\ge 10$ a contradiction as Remark \ref*{re1}. So $t=5$. Combining with Lemma \ref{l3}, $\omega(v_{5}v_{6})=4$, $g(v_{2}v_{3})=(1,1)$, $\omega(v_{2}v_{3})=2$, $g(v_{4}v_{5})=(0,1)$ and $\omega(v_{4}v_{5})=\omega(v_{0}v_{1})$. By the definition of $\bar{t}$,~similarly,~we can obtain $\bar{t}=v_{6}$, $\omega(v_{8}v_{9})=2$ and $\omega(v_{6}v_{7})=\omega(v_{10}v_{11})$. \end{proof} \begin{lemma}\label{l43} If~ $\Omega=3$, then $S$ satisfy the following two conditions: (a) $\omega(v_{5}v_{6})=3$; (b) $\omega(v_{2}v_{3})=2$ and $\omega(v_{1}v_{2})=\omega(v_{3}v_{4})=1$. \end{lemma} \begin{proof} Combining $\Omega=3$ with Lemma \ref{l41}, $\left|E(S)\right|= 9$ or $|E(S)|=11$.~By Lemma \ref{l5}, $7\leq t\leq \left|E(S)\right|$ or $t=5$. For $|E(S)|=9$,~if $t\ge7$, then $g(v_2v_3)=(1,1)$. By the same proof of Lemma \ref{l5},~then $t'\ge 10$,~then $t''\ge 10$,~which is a contradiction as Lemma \ref{l5}.~Thus $t=5$ and there are at least three edges weighted $1$ in $E_{g=(0,0)}(S)$ by Lemma $\ref{l1}$. Therefore,~ $\omega(v_{2}v_{3})=2$ and $\omega(v_{1}v_{2})=\omega(v_{3}v_{4})=1$; or $\omega(v_{8}v_{9})=2$ and $\omega(v_{7}v_{8})=\omega(v_{9}v_{10})=1$;~by our assumption,~$\omega(v_{2}v_{3})=2$ and $\omega(v_{1}v_{2})=\omega(v_{3}v_{4})=1$. For $|E(S)|=11$, if $t\ge7$, then $g(v_2v_3)=(1,1)$ by the same proof of Lemma \ref{l5},~then $t''\ge 12$,~a contradiction as Lemma \ref{l5}. Thus $t=5$. Therefore,~ $\omega(v_{2}v_{3})=2$ and $\omega(v_{1}v_{2})=\omega(v_{3}v_{4})=1$. \end{proof} Let $S'=v_{1}'v_{2}'\dots v_{\left|E(S')\right|+1}'$ be a maximal $E_{g'=(0,0)}(G)$-alternating containing $v_{2}v_{3}$ and $g'(v_{2}v_{3})=(0,0)$ and $\omega(v_{2}v_{3})=2$ on $C$, $v_{2}v_{3}$ is shown in Figure \ref*{fig1} and \ref*{fig6}. Futhermore, if $\Omega=2$, let $S'=S$. \begin{figure*} \caption{$\omega(v_5v_6)=4$,~$\omega_(v_2v_3)=\omega_(v_8v_9)=2$,~$\omega_(v_1v_2)=\omega_(v_3v_4)=1$~and~\\$\text{~~~~~~~~~~~~~} \label{fig1} \end{figure*} \begin{figure*} \caption{$\omega(v_5v_6)=3$,~$\omega_(v_2v_3)=2$,~$\omega_(v_1v_2)=\omega_(v_3v_4)=1$} \label{fig6} \end{figure*} \begin{lemma}\label{l44} $|E(S')|\le |E(C)|-2$. \end{lemma} \begin{proof} Suppose to be contrary that $|E(S')|> |E(C)|-2$. Since $E_{g'=\left( 0,0\right) }(S')$ is a matching and $S\subset C$ , we have $|E(S')|= |E(C)|-1$. By Lemma \ref{l4}, $\Omega=4$. Let $S''$ be a maximal $E_{g''=(0,0)}(G)$-alternating of $C$ containing $e$ with $g''(e)=(0,0)$ and $\omega(e)=4$. By Lemma \ref{l41}, $|E(S'')|\geq 9$, then $\omega(C)=32$. Since $S'$ contains an edge $e_1$ with $g'(e_1)=(0,0)$ and $\omega(e_1)=2$, $|E(S')|\leq 13$. Combining with Lemma \ref{l42}, $g'(e)\ne (0,0)$. Since $\omega(e)=4$, there are two edges weighted 1. 
Then \begin{equation} \begin{array}{lllllllllll} 0=\omega(C)-\omega(C)&= |E_{g'=(0,0)}\cap E(C)|+|E(C)-E_{g'=(0,0)}|-\omega(C)\\ &\le \frac{\omega(C)}{4}+\Omega+1+1+\Omega(7-1-2)-\omega(C)\\ &=-2<0, \end{array} \nonumber \end{equation} a contradiction. \end{proof} \begin{lemma}\label{l45} $|E(S')|\geq 9$. \end{lemma} \begin{proof} Suppose to be contrary that $|E(S')|=5$ or $7$. If $|E(S')|=5$, then $t=5$. As shown in Figure Consider $P_{(1,1),(1,0)}$, $v_{1}'$ be a vertex with odd degree on $ P_{(1,1),(1,0)}$, let $v_{t'}'$ be another vertex with odd degree in the same component of $ P_{(1,1),(1,0)}$. By Lemma \ref{1.3}, $t'\geq 6$. Combining the definition of $t''$ and Figure , $t''\geq 6$, which is a contradiction as Remark \ref{re1}. If $|E(S')|=7$, by symmetry, we can assume that exist edge $e\in \left\lbrace v_{1}'v_{2}', v_{3}'v_{4}'\right\rbrace$ such that $\omega(e)=2$,~especially,~assume $e=v_3v_4$~for $\Omega\ne2$. By Lemma \ref{l5}, $t=5$ or $7$. If $t=5$, $g'(v_{2}'v_{3}')=(1,1)$, combining with $\omega(e)=2$, $\Omega\geq 3$. By Figure \ref{fig1} and Figure \ref{fig6}. , $\omega(v_{2}' v_{3}')=1$, which is a contradiction as Lemma \ref{l3}. If $t=7$, by Lemma \ref{l5}, $g'(v_{2}'v_{3}')=g'(v_{4}'v_{5}')=(1,1)$ and $g'(v_{6}'v_{7}')=(0,1)$. Consider $P_{(1,1),(1,0)}$, $v_{1}'$ be a vertex with odd degree on $ P_{(1,1),(1,0)}$, let $v_{t'}'$ be another vertex with odd degree in the same component of $ P_{(1,1),(1,0)}$. Combining with Lemma \ref{l3}, $t'\geq 8$. Since the definition of $t''$ and Lemma \ref{l3} $t''\geq 8$, which is a contradiction as Remark \ref{re1}. Thus, $|E(S')|\geq 9$. \end{proof} \section{Proof of Theorem \ref{1.3}} We still use the same notation as section $3$ and we suppose that $S$ is a maximal $E_{g=(0,0)}$-alternating segment in $C$ such that there is an edge $e\in E(S)$ which is weighted $\Omega$ and $g(e)=(0,0)$.~We only need to proof such $S'$ does not exist.~If there is an $S'$,~then $9\le|E(S')|\le13$ by Lemma \ref{l1} and Lemma \ref{l45}. \begin{stepp}\label{step1} $\left|E(S')\right|\ne 9$. \end{stepp} Suppose to be contrary that $|E(S')|=9$. By symmetry, we can assume that exist edge $e\in \left\lbrace v_{1}'v_{2}', v_{3}'v_{4}', v_{5}'v_{6}'\right\rbrace$ such that $\omega(e)=2$,~especially,~assume $e=v_3v_4$~for $\Omega\ne2$. If $t=5$, then $g'(v_{2}'v_{3}')=(1,1)$ and $g'(v_{4}'v_{5}')=(0,1)$. If $\Omega\geq 3$, combining \ref*{fig1} and \ref*{fig6}, $\omega(v_{1}'v_{2}')\ne 2$ and $\omega(v_{3}'v_{4}')\ne 2$. Thus, $\omega(v_{5}'v_{6}')=2$. Futhermore, $\omega(v_{4}'v_{5}')=\omega(v_{6}'v_{7})=1$. Consider $P_{(1,1),(1,0)}$, $v_{1}'$ be a vertex with odd degree on $ P_{(1,1),(1,0)}$, let $v_{t'}'$ be another vertex with odd degree in the same component of $ P_{(1,1),(1,0)}$. Combining with Lemma \ref{l3}, $t'\geq 10$. Since the definition of $t''$ and Lemma \ref{l3},~ $t''\geq 10$, which is a contradiction as Remark \ref{re1}. If $\Omega= 2$, similarily, we have $t''\geq 10$, which is a contradiction. If $t=7$, by Lemma \ref{l5}, $g(v_{2}'v_{3}')=(1,1)$, $g(v_{4}'v_{5}')=(1,1)$ and $g(v_{6}'v_{7}')=(0,1)$. If $\Omega\leq 3$, then $t'\ge10$,~thus~$t''\geq 10$, which is a contradiction as Remark \ref{re1}. If $\Omega=4$, then $t'\geq 9$. If $t'=9$. By Figure $\omega(v_{1}'v_{2}')\ne 2$ and $\omega(v_{3}'v_{4}')\ne 2$. Thus, $\omega(v_{5}'v_{6}')=2$. Since $t'=9$, $\omega(E_{g'=(0,0)}(v_{1}'v_{2}'\dots v_{9}'))=\omega(E_{g'=(0,1)}(v_{1}'v_{2}'\dots v_{9}'))$. Thus, $\omega(v_{6}'v_{7}')=4$ and $\omega(v_{8}'v_{9}')=4$. 
By $\omega(v_{8}'v_{9}')=4$ and Lemma \ref{l43}, $\omega(v_{6}'v_{7}')=1$, which is a contradiction. If $t=8$, then $\omega(E_{g'=(0,0)}(v_{0}'v_{1}'\dots v_{8}'))\geq 5$. Thus $\left| E_{g'=(1,1)}(v_{0}'v_{1}'\dots v_{8}')\right| \geq 2$. Similar to Lemma \ref{l5}, we can prove that $g(v_{2}'v_{3}')=(1,1)$ and $g(v_{6}'v_{7}')=(1,1)$. Thus, $g(v_{4}'v_{5})=(0,1)$. Futhermore, we have $t'\geq 10$ and $t''\geq 10$, which is a contradiction as Remark \ref{re1}. If $t=9$, then $g'(v_{8}'v_{9}')=(1,0)$ or $(0,1)$. Similarily, we can prove that $g(v_{2}'v_{3}')=(1,1)$ and $g(v_{6}'v_{7}')=(1,1)$. Futhermore, we have $t'\geq 10$ and $t''\geq 10$, which is a contradiction as Remark \ref{re1}. Thus, we have $\left|E(S')\right|\ne 9$. \begin{stepp}\label{step2} $\left|E(S')\right|\ne 11$. \end{stepp} Suppose to be contrary that $|E(S')|=11$. By symmetry, we can assume that exist edge $e\in \left\lbrace v_{1}'v_{2}', v_{3}'v_{4}', v_{5}'v_{6}'\right\rbrace$ such that $\omega(e)=2$,~especially,~assume $e=v_3v_4$~for $\Omega\ne2$. If $t=5$, then $g'(v_{2}'v_{3}')=(1,1)$ and $g'(v_{4}'v_{5}')=(0,1)$. If $\Omega\geq 3$, combining Figure \ref*{fig1} and Figure \ref*{fig6}, $\omega(v_{1}'v_{2}')\ne 2$ and $\omega(v_{3}'v_{4}')\ne 2$. Thus, $\omega(v_{5}'v_{6}')=2$. Futhermore, $\omega(v_{4}'v_{5}')=\omega(v_{6}'v_{7})=1$. Consider $P_{(1,1),(1,0)}$, $v_{1}'$ be a vertex with odd degree on $ P_{(1,1),(1,0)}$, let $v_{t}'$ be another vertex with odd degree in the same component of $ P_{(1,1),(1,0)}$. Combining with Lemma \ref{l3}, $t'\geq 11$. If $t'= 11$. Since $\omega(E_{g'=(0,0)}(v_{1}'v_{2}'\dots v_{11}'))\geq 6$ and Lemma \ref{l3}, $\omega(E_{g'=(0,1)}(v_{1}'v_{2}'\dots v_{11}'))\geq 6$. By Lemma \ref{l3}, $g'(v_{10}'v_{11}')\ne (0,1)$. Since $g'(v_{4}'v_{5}')=(0,1)$ and $\omega(v_{4}'v_{5}')=1$, $g'(v_{6}'v_{7}')=g'(v_{8}'v_{9}')=(0,1)$. By $\omega(v_{6}'v_{7}')=1$, $\omega(v_{8}'v_{9}')=4$. Combining with Lemma \ref{l43}, $\omega(v_{10}'v_{11}')=1$. Thus, $\omega(E_{g'=(1,1)}(v_{1}'v_{2}'\dots v_{11}'))\ne \omega(E_{g'=(1,0)}(v_{1}'v_{2}'\dots v_{11}'))$, which is a contradiction. Finally, we have $t'\geq 12$. Similarily, we can obtain that $t''\geq 12$, which is a contradiction as Remark \ref{re1}. If $\Omega=2$. By Lemma \ref{l3}, $t'\geq 11$. If $t'= 11$. Since $\omega(E_{g'=(0,0)}(v_{1}'v_{2}'\dots v_{11}'))\geq 6$, $\Omega=2$ and $g'(v_{10}'v_{11}')\ne (0,1)$, $g'(e)=(0,1)$ and $\omega(e)=2$ for any $e\in \left\lbrace v_{4}'v_{5}', v_{6}'v_{7}', v_{8}'v_{9}'\right\rbrace$. By $\omega(E_{g'=(0,0)}(v_{0}'v_{1}'\dots v_{5}'))\geq 2$ and Lemma \ref{l3}, $\omega(v_{2}'v_{3}')=2$. Thus, $\omega(v_{10}'v_{11}')=2$. Let $S^*$ is a maximal $E_{g^*=(0,0)}(G)$-alternating containing $v_{4}'v_{5}'\dots v_{9}'$ and $g^*(v_{4}'v_{5}')=g^*(v_{6}'v_{7}')=g^*(v_{8}'v_{9}')=(0,0)$ on $C$, since $\omega(E_g^*(0,0))\leq 8$, $\left| E(S^*)\right|= 7$, which is a contradiction as Lemma \ref{l41}. Thus, $t'\geq 12$. Similarily, $t''\geq 12$, which is a contradiction as Remark \ref{re1}. If $t=7$, by Lemma \ref{l5}, $g'(v_{2}'v_{3}')=(1,1)$, $g(v_{4}'v_{5}')=(1,1)$ and $g'(v_{6}'v_{7}')=(0,1)$. If $\Omega\leq 3$. Consider $t'$, Since $\omega(v_{2}'v_{3}')+\omega(v_{4}'v_{5}')\geq 4$, $\omega(v_{1}'v_{2}')+\omega(v_{3}'v_{4}')+\omega(v_{5}'v_{6}')\ge 4$ and Lemma \ref{l3}, then $t'\geq 12$, futher, we have $t''\geq 12$, which is a contradiction as Remark \ref{re1}. If $\Omega=4$, then $t'\geq 11$. If $t'=11$. 
Since $\omega(E_{g'=(0,0)}(v_{1}'v_{2}'\dots v_{11}'))\geq 6$, $\Omega=4$ and Lemma \ref{l3}, $\left| E_{g'=(0,1)}(v_{1}'v_{2}'\dots v_{11}')\right| \geq 2$. By Lemma \ref{l3}, $g'(v_{10}'v_{11}')\ne (0,1)$, $g'(v_{8}'v_{9}')= (0,1)$ and $g'(v_{10}'v_{11}')= (1,0)$. Since $\omega(v_{2}'v_{3}')+\omega(v_{4}'v_{5}')\geq 4$ and Lemma \ref{l3}, $\omega(v_{10}'v_{11}')=4$. By Lemma \ref{l43}, $\omega(v_{8}'v_{9}')=\omega(v_{6}'v_{7}')=1$, which is a contradiction as $\omega(E_{g'=(0,0)}(v_{1}'v_{2}'\dots v_{11}'))\geq 6$. Thus, $t'\geq 12$. Similarily, $t''\geq 12$, which is a contradiction as Remark \ref{re1}. If $t=8$, then $\omega(E_{g'=(0,0)}(v_{0}'v_{1}'\dots v_{8}'))\geq 5$. Thus $\left| E_{g'=(1,1)}(v_{0}'v_{1}'\dots v_{8}')\right| \geq 2$. Similar to Lemma \ref{l5}, we can prove that $g(v_{2}'v_{3}')=(1,1)$ and $g(v_{6}'v_{7}')=(1,1)$. Thus, $g(v_{4}'v_{5})=(0,1)$. Futher, we have $t'\geq 10$ and $t''\geq 10$, which is a contradiction as Remark \ref{re1}. The proof of $t\ne9$ is similar to the proof of $t=8$. If $t=10$,~then $\omega(E_{g'=(0,0)}(v_0'\cdots v_{10}'))\ge6$,~thus $|E_{g'=(1,1)}(v_0'\cdots v_{10}')|\ge 2$.~If $t'=4$,~then $g'(v_2'v_3')=(0,1)$.~If $ g'(v_4'v_5')=g'(v_6'v_7')=g'(v_8'v_9')=(1,1)$,~similar to the proof of Lemma \ref{l5},~a contradiction.~Thus,~there are exactly two edges in $\left\lbrace v_4'v_5',~v_6'v_7',~v_8'v_9'\right\rbrace\cap E_{g'=(1,1)}(C) $,~neither of the weight of them are $1$ as $\Omega\le4$ and $\omega(E_{g'=(0,0)}(v_0'\cdots v_{10}'))=\omega(E_{g'=(1,1)}(v_0'\cdots v_{10}'))$,~thus $\Omega=2$,~which is a contradiction as Lemma $\ref{l3}$.~Thus $t'\ge 7$,~if $t'=7$,~then $\Omega\ne 2$ and $v_2v_3\in\left\lbrace v_1'v_2',v_3'v_4',v_5'v_6'\right\rbrace $ by Lemma \ref{l3},~Lemma $\ref{l42}$ and Lemma $\ref{l43}$.~By Lemma \ref{l3} and \ref{l42} unique edge of $\left\lbrace v_1'v_2',v_3'v_4',v_5'v_6'\right\rbrace \cap E_{g'=(1,1)}$ is weighted $1$,~a contadiction as Lemma \ref{l3}.~Therefore,~$t'\ge 12$.~Similarily,~$t''\ge12$,~a contradiction as Remark \ref{re1}. The proof of $t\ne11$ is similar to $t\ne10$. \begin{stepp}\label{step3} $\left|E(S')\right|\ne 13$. \end{stepp} \rm Suppose to be contrary that $|E(S')|=13$. By symmetry,~assume there is an edge in $ \left\lbrace v_1'v_2',v_3'v_4',v_5'v_6',v_7'v_8'\right\rbrace $ weighted $2$ which is $v_2v_3$ if $\Omega\ne2$. Since $\omega(E_{g'=(0,0)}(C))\leq 8$ and $\left| E_{g'=(0,0)}(S) \right|=7$, there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$. If $\Omega=4$, combining with Figure \ref{fig1}, then $\omega(v_7v_8)\ne 2$. \bf Case 3.1 \rm $t=5$. Since $t=5$,~we have $g'(v_2'v_3')=(1,1)$ and $g'(v_4'v_5')=(0,1)$ by Lemma \ref{l3},~then $t'\ge7$ and $t'\ne8.$ If $t'=7$,~then $g'(v_6'v_7')=(1,0)$,~$\omega(v_6'v_7')=\omega(v_2'v_3')\ge 2$ and $ \omega(v_4'v_5')\ge 3$ by Lemma \ref{l3},~then $\Omega\ge3.$ By Lemma \ref{l42} and By Lemma \ref{l43} $\omega(v_6'v_7')=\omega(v_2'v_3')\ne 1 $,~then $\omega(v_4'v_5')\ne\Omega$.~Futhermore $\Omega=4$~and~$ \omega(v_4'v_5')=3$. Combining with Lemma \ref{l43},~we have $\omega(v_7'v_8')=2$ and $\omega(v_6'v_7')=1$,~a contradiction as $\omega(v_6'v_7')\ne 1$. If $t'=9$,~combining with $\omega(E_{g'=(0,0)}(v_1'\cdots v_9'))=\omega(E_{g'=(0,1)}(v_1'\cdots v_9'))\geq 5$, then $\left| E_{g'=(0,1)}(v_1'\cdots v_9')\right| \geq 2$. By Lemma \ref{l3}, $g'(v_8'v_9')\ne (0,1)$. Thus, $g'(v_6'v_7')=g'(v_8'v_9')=(1,0)$. Since $\omega(E_{g'=(0,1)}(v_1'\cdots v_9'))\geq 5$ and $\left| E_{g'=(0,1)}(v_1'\cdots v_9')\right|=2$, $\Omega\geq 3$. 
By Lemma \ref{l42} and By Lemma \ref{l43}, $\omega(v_4'v_5')\ne\Omega$ and $\omega(v_6'v_7')\ne\Omega$. Futher, $\Omega=4$, $\omega(v_4'v_5')\ne1$ and $\omega(v_6'v_7')\ne1$. By Lemma \ref{l43}, and ,~a contradiction as $\omega(v_2'v_3')\ne 1$. If $t'=10$,~then $\left\lbrace v_6'v_7',v_8'v_9'\right\rbrace=(1,0),(0,1)$. if one edge in $\left\lbrace v_4'v_5',v_6'v_7',v_8'v_9'\right\rbrace $ has a weight of $4$, then the remaining two edge weights are 1, which is a contradiction as Lemma \ref{l3}. Thus, $\omega(E_{g'=(0,1)}(v_1'\cdots v_10'))=6$ and $\Omega\geq 3$. By Lemma \ref{l42} and By Lemma \ref{l43},$\omega(v_1'v_2')=2$. Futher, $\omega(v_2'v_3')=1$, which is a contradiction as $\omega(v_2'v_3')=\omega(v_1'v_2')+\omega(v_3'v_4')\geq 3$. If $t'=11$,~then there is no edge in $\left\lbrace v_2'v_3',v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $4$ and there is no edge in $\left\lbrace v_1'v_2',v_3'v_4'\right\rbrace $ weighted $2$. Since there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$ and Lemma \ref{l42}, combining $\omega(v_2'v_3')\ne 1$,~$\Omega\ne 4$. Since $\omega(E_{g'=(0,0)}(v_1'\cdots v_{11}'))\geq 6$ and Lemma $\ref{l43}$, there is no edge in $\left\lbrace v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $3$. Thus, $\Omega\ne 3$. Futher, $\omega(v_2'v_3')=\omega(v_4'v_5')=\omega(v_6'v_7')'=\omega(v_8'v_9')=\omega(v_{10}'v_{11}')=2$ and $g'(v_4'v_5')=g'(v_6'v_7')=g'(v_8'v_9')=(0,1)$,~a contradiction as Lemma \ref{l41}. If $t'=12$,~then there is no edge in $\left\lbrace v_2'v_3',v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $4$ and there is no edge in $\left\lbrace v_1'v_2',v_3'v_4'\right\rbrace $ weighted $2$ by there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$. Combining $\omega(v_2'v_3')\ne 1$,~$\Omega\ne 4$. Since $\omega(E_{g'=(0,0)}(v_1'\cdots v_{12}'))\geq 7$ and Lemma $\ref{l43}$, there is no edge in $\left\lbrace v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $3$. Then $\Omega\ne 3$,~a contradiction as Lemma $\ref{l3}$. If $t'=13$,~then there is no edge in $\left\lbrace v_2'v_3',v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $4$ and there is no edge in $\left\lbrace v_1'v_2',v_3'v_4'\right\rbrace $ weighted $2$ by there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$. Combining $\omega(v_2'v_3')\ne 1$,~$\Omega\ne 4$. Since $\omega(E_{g'=(0,0)}(v_1'\cdots v_{12}'))\geq 7$ and Lemma $\ref{l43}$, there is no edge in $\left\lbrace v_6'v_7',v_8'v_9'\right\rbrace $ weighted $3$. If $\Omega=3$, then one of the weights of $v_{4}'v_{5}'$ and $v_{10}'v_{11}'$ is $3$. If $\omega(v_{4}'v_{5}')=3$, then $\omega(v_6'v_7')=\omega(v_8'v_9')=1$ and $\omega(v_7'v_8')=2$ by Lemma \ref{l413}. Since $\omega(E_{g'=(0,0)}(v_1'\cdots v_{12}'))\geq 7$, $g'(v_6'v_7')=g'(v_8'v_9')=g'(v_{10}'v_{11}')=(0,1)$ and $\omega(v_{10}'v_{11}')\geq 2$. If $\omega(v_{10}'v_{11}')=2$, then $\omega(v_2'v_3')=2$. Consider the maximal $E_{g''=(0,0)}(G)$-alternating $S_0$ containing $v_{4}'v_{5}'$, $v_6'v_7'$, $v_8'v_9'$ and $v_{10}'v_{11}'$. Thus, $\left| E(S_0)\right| \leq 7$, which is a contradiction as Lemma \ref{l41}. Thus, $\omega(v_{10}'v_{11}')=3$. Similarily, if $\omega(v_{10}'v_{11}')=3$, then we can also have $\omega(v_{4}'v_{5}')=3$ and $\omega(v_6'v_7')=\omega(v_8'v_9')=1$. 
Consider $P_{(1,0),(0,1)}$, $v_{10}$ be a vertex with odd degree in $P_{(1,0),(0,1)}$ and another vertex with odd degree in the same component is not in $S$ by Lemma \ref{l3},~a contradiction as Lemma $\ref{l6}$. Thus, $\Omega=2$. Since $\omega(E_{g'=(0,0)}(v_1'\cdots v_{12}'))\geq 7$ and $\Omega=2$, $\left| E_{g'=(0,0)}(v_1'\cdots v_{12}')\right| =4$. Consider the maximal $E_{g'''=(0,0)}(G)$-alternating $S_1$ containing $E_{g'=(0,0)}(v_1'\cdots v_{12}')$, thus $\left| E(S_1)\right| \leq 7$, which is a contradiction as Lemma \ref{l41}. Therefore, $t'\geq 14$. Let us consider $t''$. If $t''=7$,~then $g'(v_6'v_7')=(1,0)$. Since $\omega(E_{g'=(0,0)}(v_0'\cdots v_{5}'))=\omega(E_{g'=(1,1)}(v_0'\cdots v_{5}'))$ and $\omega(E_{g'=(0,1)}(v_0'\cdots v_{7}'))=\omega(E_{g'=(1,1)}(v_0'\cdots v_{7}'))$, $\omega(v_0'v_1')=\omega(v_4'v_5')=\omega(v_0'v_1')+\omega(v_2'v_3')$,~a contradiction. If $t''=9$,~then $g'(v_6'v_7')=(1,0)$ and $g'(v_8'v_9')\ne (1,0)$ by Lemma \ref{l3}. Since $\omega(E_{g'=(0,0)}(v_0'\cdots v_{9}'))=\omega(E_{g'=(1,1)}(v_0'\cdots v_{9}'))\geq 5$, $g'(v_6'v_7')=g'(v_8'v_9')=(1,0)$, which is a contradiction. If $t''=10$,~then $g'(v_6'v_7')=g'(v_8'v_9')=(1,0)$~by Lemma \ref{l3}. Since $\omega(E_{g'=(1,0)}(v_0'\cdots v_{5}'))=\omega(E_{g'=(0,1)}(v_0'\cdots v_{5}'))$ and $\omega(E_{g'=(0,1)}(v_0'\cdots v_{10}'))=\omega(E_{g'=(1,1)}(v_0'\cdots v_{10}'))$, $\omega(v_0'v_1')=\omega(v_4'v_5')=\omega(v_0'v_1')+\omega(v_2'v_3')$,~a contradiction. If $t''=11$,~then $g'(v_6'v_7')=g'(v_8'v_9')=(1,0)$,~$g'(v_{10}'v_{11}')=(0,1)$~by Lemma \ref{l3}. We can easily check that there is no edge in $\left\lbrace v_2'v_3',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $ 1$.~Therefore,~$\Omega=2$ by Lemma \ref*{l42} and Lemma \ref*{l43}. Thus, $\omega(v_6'v_7')+\omega(v_8'v_{9}')\leq 4$, which is a contradiction as $\omega(E_{g'=(0,0)}(v_0'\cdots v_{11}'))=\omega(v_6'v_7')+\omega(v_8'v_{9}')\geq 6$. If $t''=12$,~then $ |\left\lbrace v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace \cap E_{g'=(1,0)}|=2$ and $|\left\lbrace v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace \cap E_{g'=(0,1)}|=1$.~We can easily check that there is no edge in $\left\lbrace v_2'v_3',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $ 1$.~Therefore,~$\Omega=2$ by Lemma \ref*{l42} and Lemma \ref*{l43},~Thus, $\omega(E_{g'=(1,0)}\\(v_0'\cdots v_{12}'))\le 4$,~which is a contradiction as $\omega(E_{g'=(1,0)}(v_0'\cdots v_{12}')=E_{g'=(0,0)}\\(v_0'\cdots v_{12}')\ge7.$ If $t''=13$,~then there is no edge in $\left\lbrace v_2'v_3',v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}',v_{12}'v_{13}'\right\rbrace $ weighted $4$ by Lemma \ref{l42},~futhermore $\Omega\ne 4$. Thus,~ $\Omega=3$.~Since $\omega(E_{g'=(1,0)})\\(v_0'\cdots v_{13}')=\omega(E_{g'=(0,0)})(v_0'\cdots v_{13}')=7$.~Consider the maximal $E_{g''=(0,0)}$-alternating segment~$S^*$ containing $v_6'v_7'$,~$v_8'v_9'$ and $v_{10}'v_{11}'$ in $E_{g''=(0,0)}(S^*)$.~Thus $|E(S^*)|\le7$,~a contradiction as Lemma \ref{l41}. Therefore $t''\ge14$,~a contradiction as \ref{l5}. \bf Case 3.2 \rm $t=7$. By Lemma $\ref{l5}$,~$g'(v_2'v_3')=g'(v_4'v_5')=(1,1)$~and $g'(v_6'v_7')=(0,1)$. Then $t'\ge 11$ by Lemma \ref{l3}. If $t'=11$,~then $g'(v_6'v_7')=g'(v_8'v_9')=(0,1)$,~$g'(v_{10}'v_{11}')=(1,0)$~and $\omega(v_6'v_7')+\omega(v_8'v_9')=6$.~Futhermore,~by Lemma \ref{l41},~we have $\Omega=4$ and $\omega(v_6'v_7')=\omega(v_8'v_9')=3$ ,~then $\omega(v_1'v_2')=2$ and $\omega(v_0'v_1')=1$,~which is a contradiction as $\omega(v_0'v_1')=\omega(v_6'v_7')$ by Lemma \ref{l3}. 
If $t'=12$,~the one of $g'(v_6'v_7'),g'(v_8'v_9')$,~$g'(v_{10}'v_{11}')$ is ~$(1,0)$ and the other two are $(0,1)$.~Since these total weight is $7$,~one of which is weighted $4$.~By Lemma \ref{l42},~the other one is weighted $1$,~a contradiction. If $t'=13$,~then $g'(v_8'v_9')\ne(0,1)$,~$g'(v_{12}'v_{13}')\ne(0,1) $~$g'(v_{10}'v_{11}')=(0,1)$ and $\omega(v_6'v_7')+\omega(v_{10}'v_{11}')=7$ by Lemma \ref{l3} and Lemma \ref{l41}.~Since these total weight is $7$,~one of which is weighted $4$.~By Lemma \ref{l42},~the other one is weighted $1$,~a contradiction. Finally, we have $t'\ge14$,~then $\omega(v_7v_8)\ne 2$ if $\Omega=4$,~$t''\ge 13$ by Lemma \ref{l3}.~ If $t''=13$,~then $g'(v_8'v_9')=g'(v_{12}'v_{13}')=(1,0)$,~$g'(v_{10}'v_{11}')=(0,1) $~ and $\omega(v_8'v_9')+\omega(v_{12}'v_{13}')=7$ by Lemma \ref{l3} and Lemma \ref{l41}.~Since these total weight is $7$,~one of which is weighted $4$.~By Lemma \ref{l42},~the other one is weighted $1$,~a contradiction. Therefore $t''\ge14$,~a contradiction as Lemma \ref{l5}. \bf Case 3.3 \rm $t=8$. By the similar proof of Lemma \ref{l5},~we have $g'(v_2'v_3')=g'(v_6'v_7')=(1,1)$ and $g'(v_4'v_5')=(0,1)$.~Then $t'\ge 13 $ by Lemma \ref{l3}. If $t'=13$,~then $g'(v_8'v_9')=g'(v_{12}'v_{13}')=(1,0)$ and $g'(v_4'v_5')=g'(v_{10}'v_{11}')=(0,1)$ by Lemma \ref{l3} and Lemma \ref{l41}.~Since $\omega(v_4'v_5')\ne 4$ and $\omega(v_{10}'v_{11}')\ne 4$ as there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$,~this is a contradiction as Lemma \ref{l3}. Therefore, $t'\ge14$,~then $t''\ge 13$ by Lemma \ref{l3}. If $t''=13$,~then $g'(v_4'v_5')=g'(v_{12}'v_{13}')=(0,1)$,~$g'(v_{10}'v_{11}')=g'(v_8'v_9')=(1,0) $.~Since $\omega(v_4'v_5')+\omega(v_{12}'v_{13}')=7$ by Lemma \ref{l3}, $\omega(v_4'v_5')\ne 4$ and $\omega(v_{12}'v_{13}')\ne 4$ as there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$,~this is a contradiction as Lemma \ref{l3}. Therefore $t''\ge14$,~a contradiction as Lemma \ref{l5}. \bf Case 3.4 \rm $t=9$. By the similar proof of Lemma \ref{l5},~we have $g'(v_2'v_3')=g'(v_6'v_7')=(1,1)$,~$g'(v_4'v_5')\ne(1,1)$ and $g'(v_8'v_9')\ne(1,1)$.~Then $t'\ge 13 $ by Lemma \ref{l3}. If $t'=13$,~then $g'(v_{12}'v_{13}')=(1,0)$ and there are exactly two edges in of $\left\lbrace v_4'v_5',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ in $E_{g'=(0,1)}(C)$ and the total weight is $7$.~Futhermore,~there is an edge in $\left\lbrace v_4'v_5',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $4$,~which is a contradiction as there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$. Therefore, $t'\ge14$,~then $t''\ge 13$ by Lemma \ref{l3}. If $t''=13$,~then $g'(v_{12}'v_{13}')=(0,1)$,~since the only edge which can be weighted $4$ in $S$ is $v_2'v_3'$ which is in $E_{g''=(1,1)}(C)$ by the similar proof of Lemma \ref{l5}.~But the total weight of $E_{g'=(0,0)}(C)$ in $v_0\cdots v_{13}$ is $7$,~a contradiction as Lemma \ref{l3}. Therefore $t''\ge14$,~a contradiction as Lemma \ref{l5}. \bf Case 3.5 \rm $t=10$. If $t'=4$,~then $g'(v_{2}'v_{3}')=(0,1)$ by Lemma \ref{l3},~then there are exactly two edges of $\left\lbrace v_4'v_5',v_6'v_7',v_8'v_9'\right\rbrace \cap E_{g'=(1,1)}$ and total weight is $6$ by the proof similar to Lemma \ref{l5}.~Thus their weight is not $1$,~then $\Omega=2$.~By Lemma \ref{l3},~$ g'(v_4'v_5')=g'(v_6'v_7')=g'(v_8'v_9')=(1,1)$.~Similar to the proof Lemma \ref{l5},~this is a contradiction. 
If $t'=7$,~then there is only one edge in $v_0\cdots \cap E_{g'=(0,1)}$ and it is weighted $4$,~then this edge is $v_2'v_3'$.~By Lemma \ref{l3},~$g'(v_8'v_9')=(1,1)$ is weighted $2$. We can easily find a contradiction by Lemma \ref{l42} and Lemma \ref{l3}. If $t'=13$,~then $\Omega=4$ and $g'(v_2'v_3')=(0,1)$ is weighted $4$ since the only edge which can be weighted $4$ in $S$ is $v_2'v_3'$ and the total weght of $E_{g'=(0,0)}(v_1\cdots v_{13}) $ is $7$. We can obtain a contradiction as Lemma \ref{l3} and $g'(v_8'v_9')\ne 4$. Therefore, $t'\ge14$,~then $t''\ge 13$ by Lemma \ref{l3}. If $t''=13$,~then the total weight of $E_{g'=(0,0)}(C)$ and $E_{g'=(1,1)}(C)$ are at least $7$,~a contradiction as the only edge which can be weighted in $S$ is $v_2'v_3'$. Therefore $t''\ge14$,~a contradiction as Lemma \ref{l5}. Similarly,~we obtain that $t\ne 11$. \bf Case 3.6 \rm $t=12$. By there is only one edge in $E(S)\cup E_{g'=(0,0)}(C)$ weighted $2$ and Lemma \ref{l42},~ $\omega(e)\ne4$ for each edge in $S$. If $t'=4$ and $g'(v_4'v_5')=g'(v_6'v_7')=g'(v_8'v_9')=g'(v_{10}'v_{11}')=(1,1)$,~then we can proof a contradiction which is similar to Lemma \ref{l5}.~Thus,~by $\omega(E_{g'=(1,1)}(S))=\omega(E_{g=(0,0)}(S))=7$ and $\omega(e)=\ne4$ for each edge in $S$,~we have $\Omega\ge3$.~If $\Omega=4$,~then $ \omega(v_5'v_6')\ne 2$ by Lemma \ref{l42}. Then there is only one edge weighted $2$ in $\left\lbrace v_1'v_2',v_3'v_4'\right\rbrace$ a contradiction as $\omega(E_{g=(0,1)}(v_1v_2v_3))=\omega(E_{g=(0,0)}(v_1v_2v_3))$.~Therefore,~ $\Omega=3$,~then there is an edge in $\left\lbrace v_4'v_5',v_6'v_7',v_8'v_9',v_{10}'v_{11}'\right\rbrace $ weighted $3$.~Then $v_1'$ and $v_3'$ are vertices of odd degree in $P_{(1,0),(0,1)}$ assume ${v_{t_1}}$ be a vertex with odd degree in the same component of $P_{(1,0),(0,1)}$ of $v_1'$ and ${v_{t_3}}$ be a vertex with odd degree in the same component of $P_{(1,0),(0,1)}$ of $v_3'$.~by Lemma \ref{l3},~$t_1\ge 13$ or $t_3\ge 13$,~a contradiction as Lemma \ref{l6}. If $t'=6$,~then $ g'(v_6'v_7')=g'(v_8'v_9')=g'(v_{10}'v_{11}')=(1,1)$ and $\omega(v_6'v_7')+\omega(v_8'v_9')+\omega(v_{10}'v_{11}')=7$,~by finding a maximal $E_{g''=(0,0)}$-alternating segment of $C$ such $g''(v_6'v_7')=g''(v_8'v_9')=g''(v_{10}'v_{11}')=(0,0)$,~then the length of this segment is at most $7$,~a contradiction as Lemma \ref{l41}. Thus $t'\ge 9$.~If $t'=9$,~then there are exactly two edges of in $E_{g'=(1,1)}(v_0'\cdots v_{12}')$. By Lemma \ref{l3},~$g'(v_2'v_3')=(1,1)$ is weghted $4$,~a contradiction as the total weight of $E_{g'=(1,1)}(C)$ is at least $9$. Similar to $t'\ne 9$,~we have $t'\ne 10$,~thus $t'\ge 14$ by Lemma \ref{l3} and Lemma $\ref{l42}$. Then $t''\ge 14$,~a contradiction as Lemma \ref{l5}. Similarly,~we obtain that $t\ne 13$. Therefore,~$|E(S')|\ne 13$ by Lemma \ref{l41}. By $|E(S')|$ is odd and $9\le|E(S')|\le13$,~a contradiction as $|E(S')|\notin \left\lbrace 9,11,13\right\rbrace $. $ \qed$ \end{document}
\begin{document}

\title{A measure of physical reality}
\author{A. L. O. Bilobran}
\author{R. M. Angelo}
\affiliation{Department of Physics, Federal University of Paran\'a, P.O.Box 19044, 81531-980, Curitiba, PR, Brazil}

\begin{abstract}
From the premise that an observable is real after it is measured, we envisage a tomography-based protocol that allows us to propose a quantifier for the degree of indefiniteness of an observable given a quantum state. Then, we find that the reality of an observable can be inferred locally only if this observable is not quantumly correlated with an informer. In other words, quantum discord precludes Einstein's notion of separable realities. Also, by monitoring changes in the local reality upon measurements on a remote system, we are led to define a measure of nonlocality. Proved upper bounded by discordlike correlations and requiring indefiniteness as a basic ingredient, our measure signals nonlocality even for separable states, thus revealing nonlocal aspects that are not captured by Bell inequality violations.
\end{abstract}

\pacs{03.65.Ta, 03.65.Ud, 03.67.Mn}

\maketitle

{\em Introduction}. From Descartes' dictum {\em cogito ergo sum} one might be tempted to conclude that a brain-endowed system living in an empty universe could ensure its own reality. This position, however, cannot be maintained within a scientific framework. The reason is that physics is relative by essence, so it is not possible for an object to ascribe any physical state to itself; a reference frame is needed. It follows, therefore, that any attempt to build an empirically accessible notion of physical reality demands in the first place the definition of two entities, namely, an ``observer'' and an ``observed'', whose roles are interchangeable. These objects can interact and get to know about each other, in which case it then makes sense for one to speak of the physical reality of the other. This is precisely what we have in ordinary situations. Our notion of reality is nurtured by the process of repeated measurements that takes place, e.g., every time we watch an object at rest. The huge amount of photons that reach our retina after being scattered by the object---without appreciably disturbing it---informs us that the object ``exists'' in a ``definite'' position, so it is {\em real}. Although the sensation of reality is granted to the person who receives the scattered photons, the information about the object was already encoded in the photons. Such information, which manifests as correlations in the system ``object + photons'', was generated via physical interactions which by no means depend on the very existence of a retina or a brain. Hence, the capability of a system of informing the presence of another is a primordial condition for the existence of some element of physical reality. Moreover, without such a mechanism, reality cannot be empirically probed.
In 1935, an attempt was put forward by Einstein, Podolsky, and Rosen (EPR)~\cite{EPR} aiming at defining the notion of {\em element of reality}: ``If, without in any way disturbing a system, we can predict with certainty the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.'' Along with the conception that ``every element of the physical reality must have a counterpart'' in a {\em complete theory}, this definition immediately implies that either ``(A) the quantum-mechanical description of reality given by the wavefunction is not complete or (B) when the operators corresponding to two physical quantities do not commute the two quantities cannot have simultaneous reality.'' By tacitly assuming locality and arguing that there are quantum states for which noncommuting observables can be simultaneously real, EPR proved (B) wrong and claimed the incompleteness of quantum theory. Later on, Bell showed that any theory aiming at completing quantum mechanics would be unavoidably nonlocal~\cite{bell}, Bohmian mechanics~\cite{bohm} being a prominent illustration of that. Other approaches appeared defending purely statistical interpretations for the quantum formalism~\cite{ballentine}, in consonance with Einstein's view. Discussions about foundational aspects of quantum theory, particularly on the wave function interpretation, entered the 21st century with physicists polarized in two main lines of thought, both supported by substantial amount of theoretical work. While on one hand $\psi$-ontic models ascribe to the wave function a realistic nature, on the other $\psi$-epistemic models suggest that it actually is knowledge about an underlying reality. Recent years have witnessed significant contributions in favor of $\psi$-ontic models~\cite{PBR,rudolph12,renner12,hardy13,massar13,lowther13,leifer14,maroney14,branciard14} against a more modest number of works exploring $\psi$-epistemic ones~\cite{spekkens07,spekkens10,veitch13,farr14}. Although general and powerful for their purposes, these works offer no clear interpretation for mixed states. In a different vein, considerable progress has recently been made towards a deep understanding of the quantum measurement problem and the emergence of objective classicality in the framework of the {\em quantum Darwinism}~\cite{zurek09,horodecki15}. Here, we aim at discussing the notion of physical reality by focusing not only on the quantum state but also on observables and their measurements. In particular, in what follows we will formulate a quantifier of reality that is grounded on an empirical protocol. Then, some relevant implications are presented, in particular a notion of nonlocality that is not captured by Bell inequality violations. We close the paper with a summary of our results and a brief remark positioning our proposal in the context delineated by EPR's and Bell's works. {\em Elements of reality}. Consider an experimental procedure that prepares a physical state for a multipartite system. A task is defined which consists of determining, via state tomography, the most complete description for this preparation. We are allowed to repeat the procedure as many times as necessary to get an ideal tomography. Thus, at the end of the day, we get to know that, every time the procedure runs, the quantum mechanical description for the system will be, say, $\rho$ (Fig.~\ref{fig1}a). Then, we are exposed to a different scheme (Fig.~\ref{fig1}b). 
Again, we are asked to propose a complete description for the system state, given the same preparation and tomography process, but now a measurement of an observable $\mathcal{O}_1=\sum_ko_{1k}\mathrm{O}_{1k}$, with projectors $\mathrm{O}_{1k}=\ket{o_{1k}}\bra{o_{1k}}$ acting on $\mathcal{H}_1$, is secretly performed by an agent in between the preparation and the tomography, in every run of the procedure. Quantum theory predicts that the system will be in the state $\mathrm{O}_{1k}\otimes\rho_{2|o_{1k}}$ with probability $p_{o_{1k}}$ after the measurement is performed, where $\rho_{2|o_{1k}}=\text{Tr}_1(\mathrm{O}_{1k}\,\rho\,\mathrm{O}_{1k})/p_{o_{1k}}$ is the state of the rest of the system given the outcome $o_{1k}$ and $p_{o_{1k}}=\text{Tr}(\mathrm{O}_{1k}\rho\,\mathrm{O}_{1k})$ is the probability associated with this particular outcome. Without any information about the agent's measurements, after the state tomography our best description will be \begin{eqnarray}\label{Phi(rho)} \Phi_{\mathcal{O}_1}(\rho)\equiv\sum_k\mathrm{O}_{1k}\,\rho\,\mathrm{O}_{1k}=\sum_kp_{o_{1k}}\mathrm{O}_{1k}\otimes\rho_{2|o_{1k}}. \end{eqnarray} Now, the agent is certain, by EPR's criterion, that the observable is real after each measurement is made. It follows, therefore, that our description \eqref{Phi(rho)} is {\em epistemic} with respect to $\mathcal{O}_1$, i.e., $p_{o_{1k}}$ reflects only our subjective ignorance about the actual value of $\mathcal{O}_1$.

\begin{figure}[t] \centerline{\includegraphics[scale=0.1]{fig1.pdf}} \caption{(Color online). (a) A preparation $\rho$ is determined by state tomography. (b) An observable $\mathcal{O}_1$ is secretly measured after the preparation, so that it is surely real before the tomography, which then predicts a state $\Phi_{\mathcal{O}_1}(\rho)$. If $\Phi_{\mathcal{O}_1}(\rho)=\rho$, then the secret measurement has just revealed a pre-existing element of reality.} \label{fig1} \end{figure}

The protocol proceeds with the comparison of the descriptions obtained in (a) and (b). When $\Phi_{\mathcal{O}_1}(\rho)=\rho$, the situation is such that the agent can conclude that an element of reality for $\mathcal{O}_1$ was implied by the very preparation. In this case, the agent's measurements did not create reality, but revealed a pre-existing one. This suggests the following criterion of reality. \vskip2mm \noindent{\bf Definition} (Element of reality). {\em An observable $\mathcal{O}_1=\sum_k o_{1k}\mathrm{O}_{1k}$, with projectors $\mathrm{O}_{1k}=\ket{o_{1k}}\bra{o_{1k}}$ acting on $\mathcal{H}_1$, is real for a preparation $\rho\in\bigotimes_{i=1}^N\mathcal{H}_i$ if and only if} \begin{eqnarray}\label{def1} \Phi_{\mathcal{O}_1}(\rho)=\rho. \end{eqnarray} \vskip2mm The map $\Phi_{\mathcal{O}_1}$ defined by Eq.~\eqref{Phi(rho)} denotes a procedure of unread measurements, as delineated by our protocol. Clearly, the criterion \eqref{def1} agrees with EPR's on the reality of $\mathcal{O}_1$ when the preparation is an eigenstate of this observable, i.e., $\rho=\mathrm{O}_{1k}$. But it also predicts an element of reality for a mixture of eigenstates, $\rho=\sum_kp_{o_{1k}}\mathrm{O}_{1k}$, as in this case $\Phi_{\mathcal{O}_1}(\rho)=\rho$. Another interesting point is that the criterion \eqref{def1} automatically incorporates the fact that a measurement preserves a pre-existing reality, i.e., $\Phi_{\mathcal{O}_1\mathcal{O}_1}(\rho)\equiv\Phi_{\mathcal{O}_1}(\Phi_{\mathcal{O}_1}(\rho))=\Phi_{\mathcal{O}_1}(\rho)$.
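For concreteness, the unread-measurement map $\Phi_{\mathcal{O}_1}$ and the criterion \eqref{def1} can be explored with a minimal numerical sketch (Python; the helper name \texttt{dephase} is ours and the example is a single qubit measured in the computational basis): a mixture of eigenstates passes the reality test, a coherent superposition does not, and $\Phi_{\mathcal{O}_1}$ is idempotent.

\begin{verbatim}
import numpy as np

def dephase(rho, basis):
    # Unread projective measurement: Phi_O(rho) = sum_k P_k rho P_k,
    # with P_k = |o_k><o_k| built from the columns of `basis`.
    out = np.zeros_like(rho)
    for k in range(basis.shape[1]):
        P = np.outer(basis[:, k], basis[:, k].conj())
        out += P @ rho @ P
    return out

z_basis = np.eye(2, dtype=complex)

# A mixture of eigenstates: the criterion Phi(rho) = rho holds.
rho_mix = np.diag([0.3, 0.7]).astype(complex)
print(np.allclose(dephase(rho_mix, z_basis), rho_mix))      # True

# A coherent superposition |+><+|: the observable is not real.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_plus = np.outer(plus, plus.conj())
print(np.allclose(dephase(rho_plus, z_basis), rho_plus))    # False

# Idempotence: a measurement preserves a pre-existing reality.
print(np.allclose(dephase(dephase(rho_plus, z_basis), z_basis),
                  dephase(rho_plus, z_basis)))              # True
\end{verbatim}

Nothing in the sketch depends on the dimension of $\mathcal{H}_1$ or on the presence of further subsystems.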
Finally, since a state in the form $\Phi_{\mathcal{O}_1}(\rho)$ can be viewed as an epistemic state with respect to $\mathcal{O}_1$, its collapse upon an eventual measurement of $\mathcal{O}_1$ can be interpreted as mere information updating rather than a physical process. The above criterion naturally induces a measure of how far a given state $\rho$ is from a state with $\mathcal{O}_1$ real. We define the {\em indefiniteness} (or {\em irreality}) of the observable $\mathcal{O}_1$ given the preparation $\rho\in\mathcal{H}$ as the entropic distance \begin{eqnarray}\label{frakI} \mathfrak{I}(\mathcal{O}_1|\rho)\equiv S(\Phi_{\mathcal{O}_1}(\rho))-S(\rho), \end{eqnarray} where $S(\rho)$ is the von Neumann entropy. Because projective measurements can never reduce the entropy~\cite{chuang}, one has that $\mathfrak{I}(\mathcal{O}_1|\rho)\geq 0$. Since the von Neumann entropy is strictly concave, it follows that $\mathfrak{I}$ will be zero (i.e., $\mathcal{O}_1$ will be real) if and only if the condition \eqref{def1} holds~(see Ref.~\cite{ana13} for a related demonstration).

{\em Implications}. The question then arises as to whether the measure \eqref{frakI} can furnish insights into further aspects of quantum theory. To start with, we invoke the Stinespring theorem~\cite{spehner14}, $\Phi(\rho)=\text{Tr}_A\left(U\rho\otimes\ket{a_0}\bra{a_0}U^{\dag}\right)$, according to which any quantum operation $\Phi$ can be viewed as a reduced evolution of the system coupled to an ancillary system $\mathcal{A}$, where $U$ is a unitary operator acting on $\mathcal{H}\otimes\mathcal{H}_{\mathcal{A}}$ and $\ket{a_0}\in\mathcal{H}_{\mathcal{A}}$. This observation suggests that reality emerges upon the dynamical generation of correlations between the system and some {\em informer}, i.e., a degree of freedom that records the information about the physical state of the system and is discarded (or secretly read, as in the protocol of Fig.~\ref{fig1}). This point is illustrated by Bohr's floating-slit experiment~\cite{bohr} (see Ref.~\cite{miron15} for a recent realization of Bohr's thought experiment). After interacting with a light floating slit S, a particle P moves towards a double-slit system, which is rigidly attached to the laboratory. Momentum conservation implies that in order for P to move towards the upper (lower) slit, S has to move downwards (upwards). If $m$ and $M$ denote the masses of P and S, respectively, then the correlation generated in this experiment can be described by the state $\ket{\Psi}=\tfrac{1}{\sqrt{2}}(\ket{v}_{\text{\tiny P}}\ket{-\tfrac{mv}{M}}+\ket{-v}_{\text{\tiny P}}\ket{\tfrac{mv}{M}})$, where $v$ and $mv/M$---the speeds of P and S, respectively---are treated as discrete variables, for simplicity. It is just an exercise to show that $\mathfrak{I}(\mathrm{v}|\rho_{\text{\tiny P}})$, with $\rho_{\text{\tiny P}}=\text{Tr}_{\text{\tiny S}}\ket{\Psi}\bra{\Psi}$, is a monotonically increasing function of $x=|\langle\tfrac{mv}{M}|-\tfrac{mv}{M}\rangle|$ and that $\mathfrak{I}(\mathrm{v}|\rho_{\text{\tiny P}})=0$ only if $x=0$. This shows that the velocity $\mathrm{v}$ of P given $\rho_{\text{\tiny P}}$ will be real only if the motion of S can be unambiguously identified, i.e., if the slit can properly play the role of an informer, in which case no interference pattern will be seen. Clearly, the reality of the velocity can be adjusted by the ratio $\tfrac{m}{M}$, whose value is previously chosen by the observer (in consonance with Bohr's view~\cite{bohr}).
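The floating-slit discussion can be made quantitative with a few lines of code. Assuming, as above, that the overlap $\langle\tfrac{mv}{M}|-\tfrac{mv}{M}\rangle$ can be taken real and equal to $x$, the following Python sketch (function names ours) evaluates $\mathfrak{I}(\mathrm{v}|\rho_{\text{\tiny P}})$ and exhibits its monotonic growth from $0$ at $x=0$ to $\ln 2$ at $x=1$.

\begin{verbatim}
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def irreality_v(x):
    # Irreality of the particle velocity, as a function of the
    # overlap x = |<mv/M|-mv/M>| between the two slit states.
    rho_P = 0.5 * np.array([[1.0, x], [x, 1.0]])   # reduced particle state
    rho_dephased = np.diag(np.diag(rho_P))         # unread measurement of v
    return entropy(rho_dephased) - entropy(rho_P)

for x in (0.0, 0.5, 1.0):
    print(x, irreality_v(x))   # 0 at x = 0, growing to ln 2 at x = 1
\end{verbatim}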
In the limit $m\ll M$ (a nearly fixed slit), momentum conservation will not be able to reveal the path of the particle, so the velocity will be maximally indefinite and interference fringes will appear.

{\em Reality inseparability}. Consider the indefiniteness $\mathfrak{I}(\mathcal{O}_1|\rho)$ of an observable $\mathcal{O}_1$ given a preparation $\rho$. It is straightforward to check that \begin{eqnarray}\label{IO1} \mathfrak{I}(\mathcal{O}_1|\rho)=\mathfrak{I}(\mathcal{O}_1|\rho_1)+D_{[\mathcal{O}_1]}(\rho), \end{eqnarray} where $\rho_1=\text{Tr}_2\rho$ and $D_{[\mathcal{O}_1]}(\rho)=I_{1:2}(\rho)-I_{1:2}(\Phi_{\mathcal{O}_1}(\rho))$. Up to a minimization, $D_{[\mathcal{O}_1]}$ is a discord-like measure~\cite{ollivier01,henderson01,rulli11} written in terms of the mutual information $I_{j:k}(\rho)=S(\rho||\rho_j\otimes\rho_k)$ of the parts $j$ and $k$, where $S(\rho||\sigma)=\text{Tr}(\rho\ln\rho-\rho\ln\sigma)$ is the relative entropy. $\mathfrak{I}(\mathcal{O}_1|\rho_1)$ can be viewed as a measure of {\em local indefiniteness}, as it quantifies the indefiniteness of $\mathcal{O}_1$ given the local state $\rho_1=\text{Tr}_2\rho$. This quantity has recently been used to quantify waviness and coherence~\cite{angelo15,plenio13}. The relation \eqref{IO1} can be rewritten as \begin{eqnarray} \label{D1} \mathfrak{I}(\mathcal{O}_1|\rho)-\mathfrak{I}(\mathcal{O}_1|\rho_1)\geq \mathcal{D}_1(\rho), \end{eqnarray} where $\mathcal{D}_1(\rho)=\min_{\mathcal{O}_1}D_{[\mathcal{O}_1]}(\rho)$ is the quantum discord. Interestingly, this shows that an amount $\mathcal{D}_1$ of quantum correlation prevents the indefiniteness of $\mathcal{O}_1$ given the tomography $\rho$ from being equal to its indefiniteness given the local tomography $\rho_1$. Since this means that the reality of $\mathcal{O}_1$ cannot be assessed separately from the other subsystems, even when they are far apart, it constitutes a violation of Einstein's separability principle~\cite{howard85}. It is worth noticing that, in a related context, Wiseman has recently identified the fundamental role of quantum discord in defining Bohr's notion of disturbance in the Bohr-EPR debate~\cite{wiseman13}. Turning to the floating-slit experiment, when the slit is light enough we have that $\mathfrak{I}(\mathrm{v}|\rho_{\text{\tiny P}})=0$, so $\mathrm{v}$ is {\em locally definite} and, accordingly, no interference fringe will be observed. On the other hand, $\mathfrak{I}(\mathrm{v}|\ket{\Psi})=\mathcal{D}_{\text{\tiny P}}(\ket{\Psi})=\ln 2$ (the amount of entanglement in $\ket{\Psi}$), so $\mathrm{v}$ is {\em globally indefinite}. After all, is there an element of reality associated with $\mathrm{v}$? The answer depends on how the reality is probed. In an interference experiment, only the particle is monitored, so that it is the local reality that is accessed. If we look at both the particle and the slit, then we will be able to identify correlations, which will blur our inference about the reality of the particle.
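Before moving on, we note that the decomposition \eqref{IO1} is an exact identity and can be checked numerically for arbitrary preparations. The following Python sketch (two qubits; the helpers \texttt{ptrace} and \texttt{dephase1} are ours) draws a random state and a random local basis and verifies Eq.~\eqref{IO1} to machine precision.

\begin{verbatim}
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def dephase1(rho, basis):
    # Unread measurement of O_1 on qubit 1 of a two-qubit state.
    out = np.zeros_like(rho)
    for k in range(2):
        P = np.kron(np.outer(basis[:, k], basis[:, k].conj()), np.eye(2))
        out += P @ rho @ P
    return out

def ptrace(rho, keep):
    # Partial trace of a 4x4 two-qubit density matrix; keep = 0 or 1.
    r = rho.reshape(2, 2, 2, 2)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('jijk->ik', r)

def mutual_info(rho):
    return entropy(ptrace(rho, 0)) + entropy(ptrace(rho, 1)) - entropy(rho)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real
basis, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

irr_global = entropy(dephase1(rho, basis)) - entropy(rho)          # I(O_1|rho)
rho1 = ptrace(rho, 0)
rho1_deph = np.diag(np.real(np.diag(basis.conj().T @ rho1 @ basis)))
irr_local = entropy(rho1_deph) - entropy(rho1)                     # I(O_1|rho_1)
D = mutual_info(rho) - mutual_info(dephase1(rho, basis))           # D_[O_1](rho)
print(np.isclose(irr_global, irr_local + D))                       # True
\end{verbatim}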
{\em Irreality of incompatible observables}. Consider a preparation $\rho$ for a multipartite system and two mutually unbiased bases (MUBs), $\{\ket{o_{1k}}\}$ and $\{\ket{o'_{1k}}\}$ in $\mathcal{H}_1$, associated with maximally incompatible observables (MIO) $\mathcal{O}_1$ and $\mathcal{O}'_1$, respectively. Let us compute $\mathfrak{I}(\mathcal{O}'_1|\Phi_{\mathcal{O}_1}(\rho))$, i.e., the irreality of $\mathcal{O}'_1$ given a state $\Phi_{\mathcal{O}_1}(\rho)$ with $\mathcal{O}_1$ real. Since $|\langle o_{1k}|o'_{1k'}\rangle|^2=\tfrac{1}{d_1}$ for MUBs, where $d_1=\dim(\mathcal{H}_1)$, one shows that $\Phi_{\mathcal{O}'_1\mathcal{O}_1}(\rho)=\tfrac{\mathbbm{1}_1}{d_1}\otimes \rho_2$, where $\rho_2=\text{Tr}_1\rho$. It follows that $\mathfrak{I}(\mathcal{O}'_1|\Phi_{\mathcal{O}_1}(\rho))+\mathfrak{I}(\mathcal{O}_1|\rho)=I_{1:2}(\rho)+\mathcal{I}(\rho_1)$, where $\mathcal{I}(\rho_1)\equiv \ln d_1-S(\rho_1)\geq 0$ is the available information~\cite{costa14}. Now, consider a preparation for which $\mathcal{O}_1$ is real, i.e., $\rho=\Phi_{\mathcal{O}_1}(\rho)$. Then, $\mathfrak{I}(\mathcal{O}_1|\rho)=0$ and \begin{eqnarray} \label{Ixz>I1:2} \mathfrak{I}(\mathcal{O}'_1|\Phi_{\mathcal{O}_1}(\rho))=I_{1:2}(\Phi_{\mathcal{O}_1}(\rho))+\mathcal{I}(\Phi_{\mathcal{O}_1}(\rho_1)). \end{eqnarray} Hence, two MIO will be simultaneously real only if both terms on the right-hand side vanish. This will be the case only if $\rho=\tfrac{\mathbbm{1}_1}{d_1}\otimes\rho_2$, which is a fully uncorrelated state with a maximally incoherent reduced state. In this circumstance, all observables acting on $\mathcal{H}_1$ are simultaneously real, which endows $\rho$ with a fully classical essence. Clearly, this is not so for either an entangled state (for which $I_{1:2}>0$) or a projectively measured state (for which $\mathcal{I}>0$). Thus, in contrast with EPR's view, our notion of reality validates alternative (B) (see the Introduction), namely, that incompatible observables cannot be simultaneously real in general.

{\em Nonlocality}. Given two spacelike separated systems, can a physical action on one influence the reality of the other? Let us consider a typical EPR scenario, in which two subsystems, 1 and 2, prepared in a state $\rho$, are sent to distinct laboratories separated by a distance $d$. An ancillary system $A$, prepared in the state $\rho_A=\ket{a_0}\bra{a_0}$, is allowed to locally interact with the subsystem 2 through a unitary transformation $U_{2A}(t)$. Let $\tau$ be the time interval necessary for the completion of the injective information storage $U_{2A}(\tau)\ket{o_{2k}}\ket{a_0}=\ket{o_{2k}}\ket{a_k}$, where $\{\ket{a_k}\}$ is an orthonormal basis in the space $\mathcal{H}_{\mathrm{A}}$ and $\{\ket{o_{2k}}\}$ an orthonormal basis in $\mathcal{H}_2$. After the interaction ceases, the global state reads $\varrho(\tau)=U_{2A}(\tau)\rho\otimes\rho_AU_{2A}^{\dag}(\tau)$. The assumption of spacelike separated systems demands that $d\gg c\tau$. By focusing on the time evolution of the system of interest, we consider the following measure of nonlocality: \begin{eqnarray} \label{NO1UA} \mathcal{N}(\mathcal{O}_1,U_{2A}|\rho)\equiv \mathfrak{I}(\mathcal{O}_1|\text{Tr}_A\varrho(0))-\mathfrak{I}(\mathcal{O}_1|\text{Tr}_A\varrho(\tau)). \end{eqnarray} Clearly, this measure can signal alterations in the reality of $\mathcal{O}_1$ after the remote subsystem 2 has suffered a local perturbation. It is not difficult to show, from the above assumptions, that $\text{Tr}_A\varrho(\tau)=\Phi_{\mathcal{O}_2}(\rho)$, where $\mathcal{O}_2=\sum_ko_{2k}\ket{o_{2k}}\bra{o_{2k}}$. Thus, we arrive at \begin{eqnarray} \label{NO1O2} \mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\rho)=\mathfrak{I}(\mathcal{O}_1|\rho)-\mathfrak{I}(\mathcal{O}_1|\Phi_{\mathcal{O}_2}(\rho)), \end{eqnarray} whose nonnegativity is implied by the theorem given below.
Invariant under permutation of indices, as can be noted from its symmetric form $\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\rho)=S(\Phi_{\mathcal{O}_1}(\rho))+S(\Phi_{\mathcal{O}_2}(\rho))-S(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\rho))-S(\rho)$, this measure quantifies nonlocal aspects associated with the couple $\{\mathcal{O}_1,\mathcal{O}_2\}$ given $\rho$. We also define the {\em minimal nonlocality} of a preparation $\rho$ as the maximally restrictive optimization over the observables, i.e., \begin{eqnarray} \label{N} \mathcal{N}_{\mathrm{min}}(\rho)\equiv \min\limits_{\text{\tiny $\{\mathcal{O}_1,\mathcal{O}_2\}$}}\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\rho), \end{eqnarray} whose bounds are defined by the following result. Let $\mathcal{D}_{12}(\rho)=\min_{\text{\tiny $\{\mathcal{O}_1,\mathcal{O}_2\}$}}D_{[\mathcal{O}_1,\mathcal{O}_2]}(\rho)$ be the {\em global quantum discord}~\cite{rulli11}, with $D_{[\mathcal{O}_1,\mathcal{O}_2]}(\rho)=I_{1:2}(\rho)-I_{1:2}(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\rho))$.

\vskip2mm \noindent {\bf Theorem.} {\em Given an arbitrary preparation $\rho\in\mathcal{H}_1\otimes\mathcal{H}_2$, it holds that $0\leq \mathcal{N}_{\mathrm{min}}(\rho)\leq \mathcal{D}_{12}(\rho)$. In particular, if $\rho$ is pure, then $\mathcal{N}_{\mathrm{min}}(\rho)=0$.} \vskip2mm

\noindent {\em Proof.} From Eqs.~\eqref{IO1} and \eqref{NO1O2}, one shows that $\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\rho)=D_{[\mathcal{O}_1]}(\rho)+D_{[\mathcal{O}_2]}(\rho)-D_{[\mathcal{O}_1,\mathcal{O}_2]}(\rho)$, where $D_{[\mathcal{O}_1,\mathcal{O}_2]}(\rho)=I_{1:2}(\rho)-I_{1:2}(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\rho))$. From the non-negativity of the quantum discord $\mathcal{D}_k$, it follows that $I_{1:2}(\rho)\geq I_{1:2}(\Phi_{\mathcal{O}_k}(\rho))$, $k=1,2$. With the replacement $\rho\to \Phi_{\mathcal{O}_j}(\rho)$ we get $I_{1:2}(\Phi_{\mathcal{O}_j}(\rho))\geq I_{1:2}(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\rho))$ and \begin{eqnarray} I_{1:2}(\Phi_{\mathcal{O}_1}(\rho))+ I_{1:2}(\Phi_{\mathcal{O}_2}(\rho))\geq 2I_{1:2}(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\rho)).\nonumber \end{eqnarray} By rewriting $I_{1:2}$ in terms of its discordlike quantity $D$ [see the relation following Eq.~(4)] we obtain \begin{eqnarray} \mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\rho)=D_{[\mathcal{O}_1]}+D_{[\mathcal{O}_2]}-D_{[\mathcal{O}_1,\mathcal{O}_2]}\leq D_{[\mathcal{O}_1,\mathcal{O}_2]},\nonumber \end{eqnarray} which, upon minimization, gives \begin{eqnarray} \label{N<D} \mathcal{N}_{\mathrm{min}}(\rho)\leq \mathcal{D}_{12}(\rho). \end{eqnarray} This upper bound reveals that the absence of quantum discord allows for the existence of a couple of observables for which nonlocality will not manifest. This point reveals a hierarchy which is similar to that exhibited between entanglement and Bell nonlocality. The proof of the lower bound goes as follows. Consider a quadripartite state $\varrho_0=\rho\otimes\rho_A\otimes\rho_B$, where $\rho\in\mathcal{H}_1\otimes\mathcal{H}_2$, $\rho_A=\ket{a_0}\bra{a_0}\in\mathcal{H}_A$, and $\rho_B=\ket{b_0}\bra{b_0}\in\mathcal{H}_B$. Let $\mathcal{O}_1=\sum_ko_{1k}\mathrm{O}_{1k}$ and $\mathcal{O}_2=\sum_jo_{2j}\mathrm{O}_{2j}$ be observables acting on $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively, where $\mathrm{O}_{1k}=\ket{k}\bra{k}$ and $\mathrm{O}_{2j}=\ket{j}\bra{j}$. Consider a unitary transformation $U=U_{1A}\otimes U_{2B}$ such that $\varrho=U\varrho_0 U^{\dag}$, $U_{1A}\ket{k}\ket{a_0}=\ket{k}\ket{a_k}$, and $U_{2B}\ket{j}\ket{b_0}=\ket{j}\ket{b_j}$.
It follows that \begin{eqnarray} \varrho\!=\!\sum\limits_{\text{\tiny $\begin{smallmatrix} k, k' \\ j,j' \end{smallmatrix}$}} \langle kj|\rho|k'j'\rangle \ket{k}\bra{k'}\otimes\ket{j}\bra{j'}\otimes\ket{a_k}\bra{a_{k'}}\otimes\ket{b_j}\bra{b_{j'}}.\nonumber \end{eqnarray} Now consider the following partitions: $X=A$, $Y=B$, and $Z=12$. Using the above state, one may show that \begin{eqnarray} \begin{array}{lll} S(\rho_{XYZ})=S(\rho), & \quad & S(\rho_Z)=S(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\rho)), \\ \\ S(\rho_{XZ})=S(\Phi_{\mathcal{O}_2}(\rho)), & & S(\rho_{YZ})=S(\Phi_{\mathcal{O}_1}(\rho)), \end{array}\nonumber \end{eqnarray} where $\rho_Z$, $\rho_{XZ}$, and $\rho_{YZ}$ are reductions of $\varrho=\rho_{XYZ}$. From the strong subadditivity of the von Neumann entropy, $S(\rho_{XYZ})+S(\rho_Z)\leq S(\rho_{XZ})+S(\rho_{YZ})$ \cite{chuang}, it follows that $\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\rho)\geq 0$ for all $\{\mathcal{O}_1,\mathcal{O}_2\}$, so that \begin{eqnarray} \label{N>0} \mathcal{N}_{\mathrm{min}}(\rho)\geq 0. \end{eqnarray} The result for pure states is proved as follows. Take observables $\mathcal{O}_1=\sum_ko_{1k}\ket{k}\bra{k}$ and $\mathcal{O}_2=\sum_jo_{2j}\ket{j}\bra{j}$ that are connected with the Schmidt decomposition $\ket{\psi}=\sum_k\sqrt{\lambda_k}\ket{k}\ket{k}$ in the following peculiar way: $\mathcal{O}_1$'s eigenstates $\{\ket{k}\}$ correspond to the Schmidt sub-basis $\{\ket{k}\}$ whereas $\mathcal{O}_2$'s eigenstates $\{\ket{j}\}$ form a MUB with the Schmidt sub-basis $\{\ket{k}\}$, i.e., $|\langle k|j\rangle|^2=\tfrac{1}{d_2}$, where $d_2=\dim(\mathcal{H}_2)$. For these observables one shows that $S(\Phi_{\mathcal{O}_2}(\ket{\psi}))=\ln d_2$ and $S(\Phi_{\mathcal{O}_1\mathcal{O}_2}(\ket{\psi}))=\ln d_2+S(\Phi_{\mathcal{O}_1}(\ket{\psi}))$, from which it immediately follows that $\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\ket{\psi})=0$. Therefore, \begin{eqnarray} \label{N=0} \mathcal{N}_{\mathrm{min}}(\ket{\psi})=0. \end{eqnarray} This result also follows from \eqref{N>0} and from the fact that pure states saturate the strong subadditivity (see Ref.~\cite{costa14}). Equations \eqref{N<D}-\eqref{N=0} define the content of the theorem. {\scriptsize $\blacksquare$}

It can be checked by inspection that, while $\mathcal{N}=0$ for a fully uncorrelated state $\rho=\rho_1\otimes\rho_2$, the same cannot be ensured for a separable state $\rho=\sum_kp_k\rho_{1k}\otimes\rho_{2k}$. To illustrate this point, let us compute $\mathcal{N}_{\mathrm{min}}$ for specific two-qubit preparations, namely, the Werner state $\rho_W=\tfrac{(1-f)}{4}\mathbbm{1}\otimes\mathbbm{1}+f\ket{s}\bra{s}$, where $f$ measures the fidelity of $\rho_W$ with the singlet $\ket{s}=\tfrac{1}{\sqrt{2}}\big(\ket{+r}\ket{-r}-\ket{-r}\ket{+r}\big)$ ($r=x,y,z$), and the $\alpha$-state, $\rho_{\alpha}=\tfrac{\mathbbm{1}\otimes\mathbbm{1}}{4}+\tfrac{\alpha}{4}(\sigma_1\otimes\sigma_1-\sigma_2\otimes\sigma_2)+\tfrac{2\alpha-1}{4}\sigma_3\otimes\sigma_3$. Although analytical results have been computed, they are not insightful and will be omitted. They are shown in Fig.~\ref{fig2} along with the results for the global quantum discord $\mathcal{D}_{12}$ and the entanglement $E$ (as quantified by concurrence). Two aspects are remarkable. First, unlike Bell nonlocality, $\mathcal{N}$ may exist even in the absence of entanglement (see Refs.~\cite{bennett99,walgate02,luo11,zhang14,modi12} for other examples of nonlocality without entanglement and a link with discord).
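Since the analytical expressions behind Fig.~\ref{fig2} are omitted, it may be useful to indicate how the relevant quantities can be evaluated numerically. The Python sketch below fixes $\mathcal{O}_1=\mathcal{O}_2=\sigma_z$ (no minimization over observables is attempted, so it probes the Theorem's bounds rather than $\mathcal{N}_{\mathrm{min}}$ and $\mathcal{D}_{12}$ themselves) and evaluates $\mathcal{N}$, via its symmetric form, together with $D_{[\mathcal{O}_1,\mathcal{O}_2]}$ for the Werner state.

\begin{verbatim}
import numpy as np

def entropy(rho):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log(ev)).sum())

def dephase_z(rho, site):
    # Unread measurement of sigma_z on qubit `site` (0 or 1).
    out = np.zeros_like(rho)
    for k in range(2):
        p = np.zeros((2, 2)); p[k, k] = 1.0
        P = np.kron(p, np.eye(2)) if site == 0 else np.kron(np.eye(2), p)
        out += P @ rho @ P
    return out

def ptrace(rho, keep):
    r = rho.reshape(2, 2, 2, 2)
    return np.einsum('ijkj->ik', r) if keep == 0 else np.einsum('jijk->ik', r)

def mutual_info(rho):
    return entropy(ptrace(rho, 0)) + entropy(ptrace(rho, 1)) - entropy(rho)

s = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)   # singlet in the z basis
singlet = np.outer(s, s)

for f in (0.2, 0.5, 1.0):                          # f <= 1/3: separable Werner state
    rho = (1 - f) / 4 * np.eye(4) + f * singlet
    phi1, phi2 = dephase_z(rho, 0), dephase_z(rho, 1)
    phi12 = dephase_z(phi1, 1)
    # Symmetric form of the nonlocality measure for this pair of observables.
    N = entropy(phi1) + entropy(phi2) - entropy(phi12) - entropy(rho)
    D = mutual_info(rho) - mutual_info(phi12)      # D_[O1,O2] for this choice
    print(f, round(N, 4), round(D, 4), bool(0 <= N <= D + 1e-12))
\end{verbatim}

For this (non-optimized) choice of observables the bound is saturated, $\mathcal{N}=D_{[\mathcal{O}_1,\mathcal{O}_2]}$, and $\mathcal{N}>0$ already at $f=0.2$, i.e., for a separable preparation, illustrating the point made above; reproducing the curves of Fig.~\ref{fig2} additionally requires the minimization over $\{\mathcal{O}_1,\mathcal{O}_2\}$.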
This behavior suggests an interesting analogy: $\mathcal{N}$ is able to capture nonlocal aspects to which Bell inequalities are insensitive, just like quantum discord can detect correlations that are invisible to entanglement measures. Second, $\mathcal{N}_{\mathrm{min}}$ vanishes for pure states ($f=\alpha=1$), as anticipated by the Theorem. This does not mean that pure states prevent nonlocality from manifesting, but that there exists at least a couple of observables for which $\mathcal{N}$ vanishes (see the theorem proof). In fact, it is not difficult to show that, if we take observables $\mathcal{O}_1$ and $\mathcal{O}_2$ whose eigenstates define the Schmidt basis, then $\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\ket{\psi})=E(\ket{\psi})$, where $E(\ket{\psi})=-\text{Tr}_1(\rho_1\ln\rho_1)$, with $\rho_1=\text{Tr}_2\ket{\psi}\bra{\psi}$, is the entanglement entropy of $\ket{\psi}$. The take-away message here comes as follows. Take the singlet $\rho_s=\ket{s}\bra{s}$ as an example. By direct calculation we can check that $\mathcal{N}(\sigma_{1r},\sigma_{2r'}|\rho_s)=\delta_{r,r'}E(\ket{s})$, where $r,r'=x,y,z$ and $\delta_{r,r'}$ is the Kronecker delta. This shows that nonlocal aspects appear when we look at the couple of observables in which the entanglement has been encoded, namely, the observables that define the Schmidt basis. No nonlocality is detected when we look at a couple of MIO, one of them composing the Schmidt basis. This explains why $\mathcal{N}_{\mathrm{min}}(\ket{\psi})=0$ and stresses that $\mathcal{N}$ is conceptually different from quantum discord, which reduces to the entanglement entropy for pure states.

\begin{figure}[ht] \includegraphics[width=\columnwidth]{fig2.pdf} \caption{(Color online). Minimal nonlocality $\mathcal{N}_{\mathrm{min}}$ (thick black line), global quantum discord $\mathcal{D}_{12}$ (blue line) and entanglement $E$ (dashed red line) for (a) $\rho_W$ and (b) $\rho_{\alpha}$. In the pink shaded area, $E=0$ whereas $\mathcal{N}\geq 0$ for all $\{\mathcal{O}_1,\mathcal{O}_2\}$.} \label{fig2} \end{figure}

{\em Nonlocality from irreality and measurement}. When a correlation is created via physical interactions in a closed quantum system, a {\em constraint} is established which manifests as a conservation law. For instance, for the singlet $\ket{s}=\ket{S=0,M=0}$ one has that $\mathcal{S}_{z}=\mathcal{S}_{1z}+\mathcal{S}_{2z}=0$. Here the situation is such that even though the total $z$-spin $\mathcal{S}_z$ is real (i.e., fully definite, as ensured by the physical interaction), the individual spins $\mathcal{S}_{1z}$ and $\mathcal{S}_{2z}$ are not. The crux is that the constraint $\mathcal{S}_{2z}=-\mathcal{S}_{1z}$ reduces the {\em a priori} independent indefiniteness of $\mathcal{S}_{1z}$ and $\mathcal{S}_{2z}$ to a state of {\em conditional reality}, a situation in which the reality of an observable becomes conditioned on the reality of another. Once one of them gets real, so does the other. By separating the subsystems without degrading the constraint, one will be able to define the reality of an observable by means of an action at a remote site. This does not occur classically, because even though the notion of a nonlocally spread constraint remains valid, there is no fundamental indefiniteness underlying the observables, i.e., their realities are already established before the separation. It is immediately seen, therefore, that irreality is a basic condition for the manifestation of quantum nonlocality.
Indeed, as predicted by $\mathcal{N}(\mathcal{O}_1,\mathcal{O}_2|\Phi_{\mathcal{O}_k}(\rho))=0$ $(k=1,2)$, there is no nonlocality for a preparation in which one of the observables is already real. There is, however, a second condition for nonlocality to manifest: the reality at a given site has to be established in order to promote the aforementioned conditionalization at the remote site. This is precisely the point behind the conception of Eq.~\eqref{NO1UA}, where the generation of correlations and the posterior discard of the ancilla played the role of an unread measurement. In fact, when we are able to access the whole system, including the ancilla, then the reality of the subsystem does not get defined, as we have seen in Bohr's floating-slit thought experiment. This point can be further illustrated with some generality as follows. Consider a preparation $\varrho$ for a multipartite system and an arbitrary partition $\mathcal{H}_x\otimes\mathcal{H}_y$ for the Hilbert space. Let $U_y$ be a unitary transformation on $\mathcal{H}_y$ and let $\mathcal{O}_1$ be an observable acting on a factor $\mathcal{H}_1$ of $\mathcal{H}_x$. Given that $\Phi_{\mathcal{O}_1}$ and $U_y$ commute, it follows that \begin{eqnarray} \label{noNL} \mathfrak{I}(\mathcal{O}_1|\varrho)-\mathfrak{I}(\mathcal{O}_1|U_y\varrho U_y^{\dag})=0. \end{eqnarray} We see that the reality of a given observable can never be changed by {\em unitary} actions occurring in remote parts of the system. This result shows that, if we could access the whole system, no nonlocality would be detected in nature. (Recently, a similar conclusion has been reached, via arguments grounded on the framework of the many-worlds interpretation~\cite{tipler14}.) However, physics is fundamentally concerned with experiments, where discarding a system---a nonunitary operation---is an irremediable fact. Indeed, in a measurement process, the degrees of freedom of the probed system always remain veiled to the observer; this is why we have to use an accessible pointer in the first place.

{\em Conclusion}. Based on the premise that an element of reality exists for an observable that has been measured, and using a protocol involving two observers, we developed a quantifier that extends current views of reality in two ways. First, our measure moves the focus from the quantum state to the couple state-observable. Second, it is able to diagnose reality also when mixed states are involved. In particular, our approach allows for the identification of epistemic states in quantum theory. Once one accepts our proposition, the following framework is implied. i)~Upon discarding the system of interest, the apparatus is left in a state for which the pointer is real, so that the state reduction can be viewed as mere information updating. ii)~Noncommuting observables cannot be simultaneously real for entangled states, this being a result that is in contrast with EPR's claim. iii)~Quantum correlations forbid the concept of a separable reality, even when the subsystems are arbitrarily far apart from each other. iv)~Alterations induced in the reality of a given observable by means of measurements performed in a remote system reveal nonlocal aspects which are not captured by Bell inequality violations. Finally, it is worth noting that many relevant questions concerning our notions of reality and nonlocality can be formulated in the contexts of multipartite systems, quantum reference frames, weak measurements, and thermodynamics~\cite{comment}. This paves the way for an interesting research program.
Put in the perspective of EPR's work, our results propose a scenario in which quantum mechanics can be viewed as a complete theory, provided we accept that observables can be fundamentally indefinite and nonlocality is unavoidable. To our perception, these aspects are by now viewed as established facts by a significant part of the scientific community, for which our measures then emerge as relevant tools.

\acknowledgments This work was supported by CNPq/Brazil and the National Institute for Science and Technology of Quantum Information (INCT-IQ/CNPq). The authors acknowledge A. D. Ribeiro, F. Parisio, and M. S. Sarandy for discussions.

\begin{thebibliography}{99}
\bibitem{EPR} A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. {\bf 47}, 777 (1935).
\bibitem{bell} J. S. Bell, Physics {\bf 1}, 195 (1964).
\bibitem{bohm} D. Bohm, Phys. Rev. {\bf 85}, 166 (1952); Phys. Rev. {\bf 85}, 180 (1952).
\bibitem{ballentine} L. E. Ballentine, Rev. Mod. Phys. {\bf 42}, 358 (1970).
\bibitem{PBR} M. F. Pusey, J. Barrett, and T. Rudolph, Nature Phys. {\bf 8}, 475 (2012).
\bibitem{rudolph12} P. G. Lewis, D. Jennings, J. Barrett, and T. Rudolph, Phys. Rev. Lett. {\bf 109}, 150404 (2012).
\bibitem{renner12} R. Colbeck and R. Renner, Phys. Rev. Lett. {\bf 108}, 150402 (2012).
\bibitem{hardy13} L. Hardy, Int. J. Theor. Phys. {\bf 27}, 1345012 (2013).
\bibitem{massar13} M. K. Patra, S. Pironio, and S. Massar, Phys. Rev. Lett. {\bf 111}, 090402 (2013).
\bibitem{lowther13} S. Aaronson, A. Bouland, L. Chua, and G. Lowther, Phys. Rev. A {\bf 88}, 032111 (2013).
\bibitem{leifer14} M. S. Leifer, Phys. Rev. Lett. {\bf 112}, 160404 (2014).
\bibitem{maroney14} J. Barrett, E. G. Cavalcanti, R. Lal, and O. J. E. Maroney, Phys. Rev. Lett. {\bf 112}, 250403 (2014).
\bibitem{branciard14} C. Branciard, Phys. Rev. Lett. {\bf 113}, 020409 (2014).
\bibitem{spekkens07} R. W. Spekkens, Phys. Rev. A {\bf 75}, 032110 (2007).
\bibitem{spekkens10} N. Harrigan and R. W. Spekkens, Found. Phys. {\bf 40}, 125 (2010).
\bibitem{veitch13} J. Emerson, D. Serbin, C. Sutherland, V. Veitch, arXiv:1312.1345.
\bibitem{farr14} D. J. Miller and M. Farr, arXiv:1405.2757.
\bibitem{zurek09} W. H. Zurek, Nature Phys. {\bf 5}, 181 (2009).
\bibitem{horodecki15} R. Horodecki, J. K. Korbicz, and P. Horodecki, Phys. Rev. A {\bf 91}, 032122 (2015).
\bibitem{chuang} M. A. Nielsen and I. L. Chuang, {\em Quantum Computation and Quantum Information} (Cambridge University Press, Cambridge, 2000).
\bibitem{ana13} A. C. S. Costa and R. M. Angelo, Phys. Rev. A {\bf 87}, 032109 (2013).
\bibitem{spehner14} D. Spehner, J. Math. Phys. {\bf 55}, 075211 (2014).
\bibitem{bohr} N. Bohr, Phys. Rev. {\bf 48}, 696 (1935).
\bibitem{miron15} X.-J. Liu {\em et al}, Nature Phot. {\bf 9}, 120 (2015).
\bibitem{ollivier01} H. Ollivier and W. H. Zurek, Phys. Rev. Lett. {\bf 88}, 017901 (2001).
\bibitem{henderson01} L. Henderson and V. Vedral, J. Phys. A {\bf 34}, 6899 (2001).
\bibitem{rulli11} C. C. Rulli and M. S. Sarandy, Phys. Rev. A {\bf 84}, 042109 (2011).
\bibitem{angelo15} R. M. Angelo and A. D. Ribeiro, Found. Phys. {\bf 45}, 1407 (2015).
\bibitem{plenio13} T. Baumgratz, M. Cramer, M. B. Plenio, Phys. Rev. Lett. {\bf 113}, 140401 (2014).
\bibitem{howard85} D. Howard, Stud. Hist. Phil. Sci. {\bf 16}, 171 (1985).
\bibitem{wiseman13} H. M. Wiseman, Ann. Phys. {\bf 338}, 361 (2013).
\bibitem{costa14} A. C. S. Costa, R. M. Angelo, and M. W. Beims, Phys. Rev. A {\bf 90}, 012322 (2014).
\bibitem{bennett99} C. H. Bennett {\em et al}, Phys. Rev. A {\bf 59}, 1070 (1999).
\bibitem{walgate02} J. Walgate and L. Hardy, Phys. Rev. Lett. {\bf 89}, 147901 (2002). \bibitem{luo11} S. Luo and S. Fu, Phys. Rev. Lett. {\bf 106}, 120401 (2011). \bibitem{zhang14} Z.-C. Zhang {\em et al}, Phys. Rev. A {\bf 90}, 022313 (2014). \bibitem{modi12} K. Modi {\em et al}, Rev. Mod. Phys. {\bf 84}, 1655 (2012). \bibitem{tipler14} F. J. Tipler, PNAS {\bf 111}, 11281 (2014). \bibitem{zurek03} W. H. Zurek, Phys. Rev. A {\bf 67}, 012320 (2003). \bibitem{horodecki05} M. Horodecki, P. Horodecki, R. Horodecki, J. Oppenheim, A. Sen, U. Sen, and B. Synak-Radtke, Phys. Rev. A {\bf 71}, 062307 (2005). \bibitem{comment} In particular, employing the relation $W(\rho)=k_BTI(\rho)$ \cite{zurek03,horodecki05}, where $I(\rho)=\ln{d}-S(\rho)$, $\rho\in\mathcal{H}$, $d=\dim{\mathcal{H}}$, and $k_B$ is the Boltzmann constant, one may follow the proposals of Refs.~\cite{ana13,angelo15,zurek03} to conceive an operational interpretation for $\mathfrak{I}(\mathcal{O}_1|\rho)$ in terms of the thermodynamic work $W(\rho)$ that can be extracted from a reservoir at temperature $T$. For a single system, one can readily write $k_BT\mathfrak{I}(\mathcal{O}|\rho)=W(\rho)-W(\Phi_{\mathcal{O}}(\rho))$, which shows that the irreality of $\mathcal{O}$ given the preparation $\rho$ provides thermodynamic advantage in relation to a preparation $\Phi_{\mathcal{O}}(\rho)$ for which $\mathcal{O}$ is real. Research along this line is now in progress. \end{thebibliography} \end{document}
\begin{document} \newcommand{{\varepsilon}}{{\varepsilon}} \newcommand{$\Box$ }{$\Box$ } \newcommand{{\mathbb C}}{{\mathbb C}} \newcommand{{\mathbb Q}}{{\mathbb Q}} \newcommand{{\mathbb R}}{{\mathbb R}} \newcommand{{\mathbb Z}}{{\mathbb Z}} \newcommand{{\mathbb R}P}{{\mathbf {RP}}} \newcommand{{\mathbb C}P}{{\mathbf {CP}}} \newcommand{\rm Tr}{\rm Tr} \def\paragraph{Proof.}{\paragraph{Proof.}} \title{Skewers} \author{Serge Tabachnikov\footnote{ Department of Mathematics, Penn State University, University Park, PA 16802; [email protected]} } \date{} \maketitle \section{Introduction} \label{intro} Two lines in 3-dimensional space are skew if they are not coplanar. Two skew lines share a common perpendicular line that we call their {\it skewer}. We denote the skewer of lines $a$ and $b$ by $S(a,b)$.\footnote{One can also define the skewer of two intersecting lines: it's the line through the intersection point, perpendicular to both lines.} Consider your favorite configuration theorem of plane projective geometry that involves points and lines. For example, it may be the Pappus theorem, see Figure \ref{Pappus}: if $A_1,A_2,A_3$ and $B_1,B_2,B_3$ are two triples of collinear points, then the three intersection points $A_1B_2\cap A_2B_1$, $A_1B_3\cap A_3B_1$, and $A_2B_3\cap A_3B_2$ are also collinear (we refer to \cite{RG} for a modern viewpoint on projective geometry). \begin{figure} \caption{The Pappus theorem.} \label{Pappus} \end{figure} The Pappus theorem has a skewer analog in which both points and lines are replaced by lines in 3-space and the incidence between a line and a point translates as the intersection of the two respective lines at right angle. The basic 2-dimensional operations of connecting two points by a line or by intersecting two lines at a point translate as taking the skewer of two lines. \begin{theorem}[Skewer Pappus theorem I] \label{skPappus} Let $a_1,a_2,a_3$ be a triple of lines with a common skewer, and let $b_1,b_2,b_3$ be another triple of lines with a common skewer. Then the lines $$ S(S(a_1,b_2),S(a_2,b_1)),\ S(S(a_1,b_3),S(a_3,b_1)),\ {\rm and}\ \ S(S(a_2,b_3),S(a_3,b_2)) $$ share a skewer. \end{theorem} In this theorem, we assume that the lines involved are in general position in the following sense: each time one needs to draw a skewer of two lines, this operation is well defined and unique. This assumption holds in a Zariski open subset of the set of the initial lines (in this case, two triples of lines with common skewers, $a_1,a_2,a_3$ and $b_1,b_2,b_3$). A similar general position assumption applies to other theorems in this paper.\footnote{The configuration theorems of plane geometry also rely on similar general position assumptions.} Another skewer analog of the Pappus theorem was discovered by R. Schwartz. \begin{theorem}[Skewer Pappus theorem II] \label{othPapp} Let $L$ and $M$ be a pair of skew lines. Choose a triple of points $A_1,A_2,A_3$ on $L$ and a triple of points $B_1,B_2,B_3$ on $M$. Then the lines $$ S((A_1 B_2), (A_2 B_1)), \ S((A_2 B_3), (A_3 B_2)),\ {\rm and}\ \ S((A_3 B_1), (A_1 B_3)) $$ share a skewer. \end{theorem} Although the formulation of Theorem \ref{othPapp} is similar to that of Theorem \ref{skPappus}, we failed to prove it along the lines of the proofs of other results in this paper, and the `brute force' proof of Theorem \ref{othPapp} is postponed until Section \ref{Papprev}. 
\begin{figure} \caption{The Desargues theorem.} \label{Desargues} \end{figure} Another classical example is the Desargues theorem, see Figure \ref{Desargues}: if the three lines $A_1 B_1$, $A_2B_2$ and $A_3B_3$ are concurrent, then the three intersection points $A_1A_2\cap B_1B_2$, $A_1A_3\cap B_1B_3$, and $A_2A_3\cap B_2B_3$ are collinear. And one has a skewer version: \begin{theorem}[Skewer Desargues theorem] \label{skDesargues} Let $a_1,a_2,a_3$ and $b_1,b_2,b_3$ be two triples of lines such that the lines $S(a_1,b_1), S(a_2,b_2)$ and $S(a_3,b_3)$ share a skewer. Then the lines $$ S(S(a_1,a_2),S(b_1,b_2)),\ S(S(a_1,a_3),S(b_1,b_3)),\ {\rm and}\ \ S(S(a_2,a_3),S(b_2,b_3)) $$ also share a skewer. \end{theorem} The projective plane ${\mathbb R}P^2$ is the projectivization of 3-dimensional vector space $V$. Assume that the projective plane is equipped with a polarity, a projective isomorphism $\varphi: {\mathbb R}P^2 \to ({\mathbb R}P^2)^*$ induced by a self-adjoint linear isomorphism $V \to V^*$. \begin{figure} \caption{Point $P$ is polar dual to the line $AB$.} \label{polar} \end{figure} In particular, in 2-dimensional spherical geometry, polarity is the correspondence between great circles and their poles.\footnote{On $S^2$, this is a 1-1 correspondence between oriented great circles and points; in its quotient ${\mathbb R}P^2$, the elliptic plane, the orientation of lines becomes irrelevant.} In terms of 2-dimensional hyperbolic geometry, polarity is depicted in Figure \ref{polar}: in the projective model, $H^2$ is represented by the interior of a disc in ${\mathbb R}P^2$, and the polar points of lines lie outside of $H^2$, in the de Sitter world. As a fourth example, consider a theorem that involves polarity, namely, the statement that the altitudes of a (generic) spherical or a hyperbolic triangle are concurrent (in the hyperbolic case, the intersection point may also lie in the de Sitter world). The altitude of a spherical triangle $ABC$ dropped from vertex $C$ is the great circle through $C$ and the pole $P$ of the line $AB$, see Figure \ref{sphere}. Likewise, the line $PQ$ in Figure \ref{polar} is orthogonal in $H^2$ to the line $AB$. \begin{figure} \caption{Altitude of a spherical triangle.} \label{sphere} \end{figure} In the skewer translation, we do not distinguish between polar dual objects, such as the line $AB$ and its pole $P$ in Figure \ref{sphere}. This yields the following theorem. \begin{theorem}[Petersen-Morley \cite{Mo}] \label{skAlt} Given three lines $a,b,c$, the lines $$ S(S(a,b),c),\ S(S(b,c),a),\ \ {\rm and}\ \ S(S(c,a),b) $$ share a skewer.\footnote{This result is also known as Hjelmslev-Morley theorem, see \cite{Fe}.} \end{theorem} In words, {\it the common normals of the opposite sides of a rectangular hexagon have a common normal}; see Figure \ref{ten}, borrowed from \cite{Mo2}. \begin{figure} \caption{Petersen-Morley configuration in Euclidean space.} \label{ten} \end{figure} These `skewer' theorems hold not only in the Euclidean, but also in the elliptic and hyperbolic geometries. In $H^3$, two non-coplanar lines have a unique skewer. In elliptic space ${\mathbb R}P^3$, a pair of generic lines has two skewers; we shall address this subtlety in Section \ref{spherical}. In the next section we shall formulate a general correspondence principle, Theorem \ref{principle}, establishing skewer versions of plane configuration theorems. 
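Theorem \ref{skAlt} is also easy to test numerically in the Euclidean setting. The following Python sketch (the helpers \texttt{skewer} and \texttt{meets\_at\_right\_angle} are ours) generates three random lines, builds the three lines of the theorem, and checks that the skewer of the first two also meets the third at a right angle, i.e., that the three lines share a skewer.

\begin{verbatim}
import numpy as np

def skewer(L1, L2):
    # Common perpendicular of two skew lines, each given as
    # (point, unit direction).
    (p1, d1), (p2, d2) = L1, L2
    n = np.cross(d1, d2)
    b = np.dot(d1, d2)
    w = p1 - p2
    t1 = (b * np.dot(d2, w) - np.dot(d1, w)) / (1.0 - b * b)
    foot = p1 + t1 * d1        # closest point of L1 to L2; it lies on the skewer
    return foot, n / np.linalg.norm(n)

def meets_at_right_angle(L1, L2, tol=1e-8):
    (p1, d1), (p2, d2) = L1, L2
    n = np.cross(d1, d2)
    dist = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)
    return dist < tol and abs(np.dot(d1, d2)) < tol

rng = np.random.default_rng(1)

def rand_line():
    d = rng.normal(size=3)
    return rng.normal(size=3), d / np.linalg.norm(d)

a, b, c = rand_line(), rand_line(), rand_line()
l1 = skewer(skewer(a, b), c)
l2 = skewer(skewer(b, c), a)
l3 = skewer(skewer(c, a), b)
m = skewer(l1, l2)             # candidate common skewer
print(meets_at_right_angle(m, l1), meets_at_right_angle(m, l2),
      meets_at_right_angle(m, l3))   # True True True (up to round-off)
\end{verbatim}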
This correspondence principle will imply the above formulated theorems, except for Theorem \ref{othPapp}, whose proof will be given in Section \ref{Papprev}. The correspondence principle concerns line geometry of 3-dimensional projective space, a subject that was thoroughly studied in the 19th century by many an eminent mathematician (Cayley, Chasles, Klein, Kummer, Lie, Pl\"ucker, Study, ...) See \cite{Je} for a classical and \cite{PW} for a modern account. Although we did not see the formulation of our Theorem \ref{principle} in the literature, we believe that classical geometers would not be surprised by it. Similar ideas were expressed earlier. In the last section of \cite{Co}, H. S. M. Coxeter writes: \begin{quote} ... every projective statement in which one conic plays a special role can be translated into a statement about hyperbolic space. \end{quote} Coxeter illustrated this by the hyperbolic version of the Petersen-Morley theorem. Earlier F. Morley \cite{Mo1} also discussed the hyperbolic Petersen-Morley theorem, along with a version of Pascal's theorem for lines in $H^3$ (the ``celestial sphere" in the title of this paper is the sphere at infinity of hyperbolic space). We are witnessing a revival of projective geometry \cite{PW,RG}, not least because of the advent of computer-based methods of study, including interactive geometry software (such as Cinderella\footnote{Which was used to create illustrations in this paper.} and GeoGebra). Elementary projective geometry has served as a source of interesting dynamical systems \cite{Sc1,Sc2}, and it continues to yield surprises \cite{ST}. We hope that this paper will contribute to the renewal of interest in this classical area. {\bf Acknowledgments}. The `godfather' of this paper is Richard Schwartz whose question was the motivation for this project, and who discovered Theorem \ref{othPapp} and helped with its proof. I am grateful to Rich for numerous stimulating discussions on this and other topics. I am also grateful to I. Dolgachev, M. Skopenkov, V. Timorin, and O. Viro for their insights and contributions. Many thanks to A. Barvinok who introduced me to the chains of circles theorems. I was supported by NSF grants DMS-1105442 and DMS-1510055. Part of this work was done during my stay at ICERM; it is a pleasure to thank the Institute for the inspiring, creative, and friendly atmosphere. \section{Correspondence principle} \label{2proofs} \subsection{What is a configuration theorem?} \label{what} We adopt the following `dynamic' view of configuration theorems. One starts with an initial data, a collection of labelled points $a_i$ and lines $b_j$ in ${\mathbb R}P^2$, such that, for some pairs of indices $(i,j)$, the point $a_i$ lies on the line $b_j$. One also has an ordered list of instructions consisting of two operations: draw a line through a certain pair of points, or intersect a certain pair of lines at a point. These new lines and points also receive labels. The statement of a configuration theorem is that, among so constructed points and lines, certain incidence relations hold, that is, certain points lie on certain lines. Assume, in addition, that a polarity $\varphi: {\mathbb R}P^2 \to ({\mathbb R}P^2)^*$ is given. We may think of lines in ${\mathbb R}P^2$ as points in $({\mathbb R}P^2)^*$. The polarity takes one back to ${\mathbb R}P^2$, assigning the polar point to each line and vice versa. 
Given a polarity, one adds to the initial data that, for some pairs of indices $(k,l)$, the point $a_k$ is polar dual to the line $b_l$. One also adds to a list of instructions the operation of taking the polar dual object (point $\leftrightarrow$ line). Accordingly, one adds to the statement of a configuration theorem that certain points are polar dual to certain lines. We assume that the conclusion of a configuration theorem holds for almost every initial configuration of points and lines satisfying the initial conditions, that is, holds for a Zariski open set of such initial configurations (this formulation agrees well with interactive geometry software that makes it possible to perturb the initial data without changing its combinatorics). In this sense, a configuration theorem is not the same as a configuration of points and lines as described in Chapter 3 of \cite{HC} or in \cite{Gr}: there, the focus is on whether a combinatorial incidence is realizable by points and lines in the projective plane. For example, the configuration theorem in Figure \ref{hyptriangle} has three points $A,B$ and $C$ as an initial data. One draws the lines $AB, BC$ and $CA$, and constructs their polar dual points $c, a$ and $b$, respectfully. Then one connects points $a$ and $A$, $b$ and $B$, and $c$ and $C$. The claim is that these three lines are concurrent (that is, the intersection point of the lines $aA$ and $bB$ lies on the line $cC$). \begin{figure} \caption{Concurrence of the altitudes of a hyperbolic triangle.} \label{hyptriangle} \end{figure} A configuration theorem for lines in space is understood similarly: one has an initial collection of labelled lines $\ell_i$ such that, for some pairs of indices $(i,j)$, the lines $\ell_i$ and $\ell_j$ intersect at right angle. There is only one operation, taking the skewer of two lines. The statement of a configuration theorem is that certain pairs of thus constructed lines again intersect at right angle. This conclusion holds for almost all initial configurations of lines (i.e., a Zariski open set) satisfying the initial conditions. \subsection{Correspondence principle} \label{form} The correspondence principle provides a dictionary that translates a plane configuration theorem, involving points and lines, to a configuration theorem in space involving lines. \begin{theorem}[Correspondence principle] \label{principle} To a plane configuration theorem with the initial data consisting of points $a_i$, lines $b_j$, and incidences between them, there corresponds a configuration theorem for lines in space (elliptic, Euclidean, or hyperbolic), so that: \begin{itemize} \item to each point $a_i$ and line $b_j$ of the initial data there corresponds a line in space; \item whenever a point $a_i$ and a line $b_j$ are incident, the respective lines in space intersect at right angle; \item the operations of connecting two points by a line and of intersecting two lines at a point are replaced by the operation of taking the skewer of two lines. \end{itemize} If, in addition, a plane configuration theorem involves a polarity, then each pair of polar dual points and lines involved corresponds to the same line in space, and the operation of taking the polar dual object in the plane (point $\leftrightarrow$ line) corresponds to the trivial operation of leaving a line in space intact. 
\end{theorem} The reader might enjoy formulating the skewer version of the whole {\it hexagrammum mysticum}, the collection of results, ramifying the Pappus theorem, due to Steiner, Pl\"ucker, Kirkman, Cayley and Salmon; see \cite{CR1,CR2,Ho} for a modern treatment. We shall present two proofs of the Correspondence principle, one concerning the elliptic, and another the hyperbolic geometry. Either proof implies the Correspondence principle for the other two classical geometries: if a configuration theorem holds in the elliptic geometry, then it also holds in the hyperbolic geometry, and vice versa, by `analytic continuation'. And either non-zero curvature version implies the Euclidean one as a limiting case. This analytic continuation principle is well known in geometry; we refer to \cite{AP1,Pa} where it is discussed in detail. \subsection{Elliptic proof} \label{spherical} A line in elliptic space ${\mathbb R}P^3$ is the projectivization of a 2-dimensional subspace of ${\mathbb R}^4$, and the geometry of lines in ${\mathbb R}P^3$ is the Euclidean geometry of 2-planes in ${\mathbb R}^4$. The space of oriented lines is the Grassmannian $G(2,4)$ of oriented 2-dimensional subspaces in ${\mathbb R}^4$. To every oriented line $\ell$ in ${\mathbb R}P^3$ there corresponds its dual oriented line $\ell^*$: the respective oriented planes in ${\mathbb R}^4$ are the orthogonal complements of each other (the orientation of the orthogonal complement is induced by the orientation of the plane and the ambient space). The dual lines are equidistant and they have infinitely many skewers. The preimage of a pair of dual lines in $S^3$ is a Hopf link. The next lemma collects the properties of the Grassmannian $G(2,4)$ that we shall use. These properties are well known, see \cite{GW} for a detailed discussion. \begin{lemma} \label{Grass1} 1) The Grassmannian is a product of two spheres: $G(2,4)=S^2_-\times S^2_+$. This provides an identification of an oriented line in ${\mathbb R}P^3$ with a pair of points of the unit sphere $S^2$: $\ell\leftrightarrow (\ell_-,\ell_+)$. \\ 2) The antipodal involutions of the spheres $S^2_-$ and $S^2_+$ generate the action of the Klein group ${\mathbb Z}_2\times{\mathbb Z}_2$ on the space of oriented lines. The action is generated by reversing the orientation of a line and by taking the dual line.\\ 3) Two lines $\ell$ and $m$ intersect at right angle if and only if $d(\ell_-,m_-)=d(\ell_+,m_+)=\pi/2$, where $d$ denotes the spherical distance in $S^2$.\\ 4) The set of lines that intersect $\ell$ at right angle coincides with the set of lines that intersect $\ell$ and $\ell^*$.\\ 5) A line $n$ is a skewer of lines $\ell$ and $m$ if and only if $n_-$ is a pole of the great circle $\ell_- m_-$, and $n_+$ is a pole of the great circle $\ell_+ m_+$.\\ 6) A pair of generic lines has exactly two skewers (four, if orientation is taken into account), and they are dual to each other. \end{lemma} \paragraph{Proof.} Given two planes in ${\mathbb R}^4$, there are two angles, say $0\leq\alpha\leq\beta\le\pi/2$, between them: $\alpha$ is the smallest angle made by a line in the first plane with the second plane, and $\beta$ is the largest such angle. Recall the classical construction of Klein quadric (see, e.g., \cite{Do,PW}). Given an oriented plane $P$ in ${\mathbb R}^4$, choose a positive basis $u,v$ in $P$, and let $\omega_P$ be the bivector $u\wedge v$, normalized to be unit. In this way we assign to every oriented plane a unit decomposable element in $\Lambda^2 {\mathbb R}^4$. 
The decomposability condition $\omega\wedge\omega=0$ defines a quadratic cone in $\Lambda^2 {\mathbb R}^4$, and the image of the Grassmannian is the spherization of this cone (the Klein quadric is its projectivization). Consider the star operator in $\Lambda^2 {\mathbb R}^4$, and let $E_-$ and $E_+$ be its eigenspaces with eigenvalues $\pm1$. These spaces are 3-dimensional, and $\Lambda^2 {\mathbb R}^4=E_-\oplus E_+$. Let $S^2_{\pm}$ be the spheres of radii $1/\sqrt{2}$ in $E_{\pm}$. Then the bivector $\omega_P$ has the components in $E_{\pm}$ of lengths $1/\sqrt{2}$, and hence $G(2,4)=S^2_-\times S^2_+$. We rescale the radii of the spheres to unit. Thus an oriented plane $P$ becomes a pair of points $P_\pm$ of a unit sphere. Let us prove claim 2). Orientation reversing of a plane $P$ changes the sign of the bivector $\omega_P$ corresponding to the antipodal involutions of both spheres. Let $e_1,\ldots,e_4$ be an orthonormal basis in ${\mathbb R}^4$. Then the following vectors form bases of the spaces $E_{\pm}$: $$ u_{\pm}=\frac{e_1\wedge e_2 \pm e_3\wedge e_4}{2},\ v_{\pm}=\frac{e_1\wedge e_3 \mp e_2\wedge e_4}{2},\ w_{\pm}=\frac{e_1\wedge e_4 \pm e_2\wedge e_3}{2}. $$ Without loss of generality, assume that a plane $P$ is spanned by $e_1$ and $e_2$. Then $P^{\perp}$ is spanned by $e_3$ and $e_4$. Since $e_1\wedge e_2=u_+ + u_-, e_3\wedge e_4=u_+ - u_-$, the antipodal involution of $S^2_-$ sends $P$ to $P^{\perp}$. Given two planes $P$ and $Q$, one has two pairs of points on $S^2$: $(P_-,Q_-)$ and $(P_+,Q_+)$. Let $\alpha$ and $\beta$ be the two angles between $P$ and $Q$. Then $$ d(P_-,Q_-)=\alpha+\beta,\quad d(P_+,Q_+)=\beta-\alpha, $$ see \cite{GW}. In particular, $P$ and $Q$ have a nonzero intersection when $\alpha=0$, that is, when $d(P_-,Q_-)=d(P_+,Q_+)$. Likewise, $P$ and $Q$ are orthogonal when $\beta=\pi/2$. It follows that the respective lines intersect at right angle when $d(P_-,Q_-)=d(P_+,Q_+)=\pi/2$. This proves 3) and implies 5). In terms of bivectors, two lines intersect if and only if $\omega_P \cdot * \omega_Q =0,$ and they intersect at right angle if, in addition, $\omega_P \cdot \omega_Q =0$. Here dot means the dot product in $\Lambda^2 {\mathbb R}^4$ induced by the Euclidean metric. The duality $\ell \leftrightarrow \ell^*$ corresponds to the star operator on bivectors. This implies 4). Finally, given two lines, $\ell$ and $m$, consider the distance between a point of $\ell$ and a point of $m$. This distance attains a minimum, and the respective line is a skewer of $\ell$ and $m$. By the above discussion, the skewers of lines $\ell$ and $m$ are the lines that intersect the four lines $\ell, \ell^*, m$ and $m^*$. This set is invariant under duality and, by an elementary application of Schubert calculus (see, e.g., \cite{Do}), generically consists of two lines. This proves 6). $\Box$ Thus taking the skewer of a generic pair of lines is a 2-valued operation. However, by the above lemma, the choice of the skewer does not affect the statement of the respective configuration theorem. One can also avoid this indeterminacy by factorizing the Grassmannnian $G(2,4)$ by the Klein group, replacing it by the product of two elliptic planes ${\mathbb R}P^2_-\times{\mathbb R}P^2_+$. In this way, we ignore orientation of the lines and identify dual lines with each other. As a result, a generic pair of lines has a unique skewer. Now to the Correspondence principle. 
Given a plane configuration theorem, we realize it in the elliptic geometry: the initial data consists of points $a_i$ and lines $b_j$ in ${\mathbb R}P^2$ with some incidences between them, and the polarity in ${\mathbb R}P^2$ is induced by the spherical duality (pole $\leftrightarrow$ equator). Let us replace the lines by their polar points. Thus the initial data is a collection of points $\{a_i, b_j^*\}$ in the projective plane such that $d(a_i, b_j^*)=\pi/2$ when the point $a_i$ is incident with the line $b_j$. Likewise, instead of connecting two points, say $p$ and $q$, by a line, we take the polar dual point to this line, that is, the cross-product $p\times q$ of vectors in ${\mathbb R}^3$, considered up to a factor. In this way, our configuration theorem will involve only points, and its statement is that certain pairs of points are at distance $\pi/2$. Take another such initial collection, $\{\bar a_i, \bar b_j^*\}$, and consider the collection of pairs $\{(a_i,\bar a_i), (b_j^*,\bar b_j^*)\}$ in ${\mathbb R}P^2_-\times {\mathbb R}P^2_+$. According to Lemma \ref{Grass1}, one obtains a configuration of lines $\{\ell_i, \ell_j\}$ in elliptic space such that if a point $a_i$ is incident with a line $b_j$ then the corresponding lines $\ell_i$ and $\ell_j$ intersect at right angle. This is the initial data for the skewer configuration theorem. By varying the generic choices of $\{a_i, b_j^*\}$ and $\{\bar a_i, \bar b_j^*\}$ satisfying the initial incidences, we obtain a dense open set of initial configurations of lines $\{\ell_i, \ell_j\}$. Likewise, the operations that comprise the configuration theorem (connecting pairs of points by lines and intersecting pairs of lines) become the operation of taking the skewer of a pair of lines, and the conclusion of the theorem is that the respective pairs of lines intersect at right angle. \subsection{Hyperbolic proof} \label{hyperbolic} In a nutshell, a skewer configuration theorem in 3-dimensional hyperbolic space is a complexification of a configuration theorem in the hyperbolic plane. We use ideas of F. Morley \cite{Mo2} and V. Arnold \cite{Ar}. Consider the 3-dimensional space of real binary quadratic forms $ax^2+2bxy+cy^2$ in variables $x,y$, equipped with the discriminant quadratic form $\Delta=ac-b^2$ and the respective bilinear form. We view the Cayley-Klein model of the hyperbolic plane as the projectivization of the set $\Delta >0$, the circle at infinity being given by $\Delta=0$. The projectivization of the set $\Delta <0$ is the 2-dimensional de Sitter world. Thus points of $H^2$ are elliptic (sign-definite) binary quadratic forms, considered up to a factor. To a line in $H^2$ there corresponds its polar point that lies in the de Sitter world, see Figure \ref{polar}. Hence lines in $H^2$ are hyperbolic (sign-indefinite) binary quadratic forms, also considered up to a factor. Consider the standard area form $dx\wedge dy$ in the $x,y$-plane. The space of smooth functions is a Lie algebra with respect to the Poisson bracket (the Jacobian), and the space of quadratic forms is its 3-dimensional subalgebra $sl(2,{\mathbb R})$. The following observations are made in \cite{Ar}. \begin{lemma} \label{bracket} A point is incident to a line in $H^2$ if and only if the corresponding quadratic forms are orthogonal with respect to the bilinear form $\Delta$. Given two points of $H^2$, the Poisson bracket of the respective elliptic quadratic forms is a hyperbolic one, corresponding to the line through these points.
Likewise, for two lines in $H^2$, the Poisson bracket of the respective hyperbolic quadratic forms is an elliptic one, corresponding to the intersection point of these lines. \end{lemma} A complexification of this lemma also holds: one replaces ${\mathbb R}P^2$ by ${\mathbb C}P^2$, viewed as the projectivization of the space of binary quadratic forms (and losing the distinction between sign-definite and sign-indefinite forms). The conic $\Delta =0$ defines a polarity in ${\mathbb C}P^2$. Lemma \ref{bracket} makes it possible to reformulate a configuration theorem involving points and lines in $H^2$ as a statement about the Poisson algebra of quadratic forms. For example, the statement that the three altitudes of a hyperbolic triangle are concurrent, see Figure \ref{sphere} right, becomes the statement that the commutators $$ \{\{f,g\},h\},\ \ \{\{g,h\},f\},\ \ {\rm and}\ \ \{\{h,f\},g\} $$ are linearly dependent, which is an immediate consequence of the Jacobi identity $$ \{\{f,g\},h\}+ \{\{g,h\},f\}+ \{\{h,f\},g\} =0 $$ in the Poisson Lie algebra. Likewise, the Pappus theorem follows from Tomihisa's identity $$ \{f_1, \{\{f_2, f_3\}, \{f_4, f_5\}\}\} + \{f_3, \{\{f_2, f_5\}, \{f_4, f_1\}\}\} + \{f_5, \{\{f_2, f_1\}, \{f_4, f_3\}\}\} = 0 $$ that holds in $sl(2,{\mathbb R})$, see \cite{To}, and also \cite{Ai,Iv,Sk} for this approach to configuration theorems. Now consider 3-dimensional hyperbolic space $H^3$ in the upper half-space model. The orientation-preserving isometry group is $PSL(2,{\mathbb C})$, and the sphere at infinity is the Riemann sphere ${\mathbb C}P^1$. A line in $H^3$ intersects the sphere at infinity at two points, hence the space of (non-oriented) lines is the configuration space of unordered pairs of points, that is, the symmetric square of ${\mathbb C}P^1$ with the diagonal deleted. Note that $S^2({\mathbb C}P^1)={\mathbb C}P^2$ (this is a particular case of the Fundamental Theorem of Algebra, one of whose formulations is that the $n$th symmetric power of ${\mathbb C}P^1$ is ${\mathbb C}P^n$). Namely, to two points of the projective line one assigns the binary quadratic form having zeros at these points: $$ (a_1:b_1,a_2:b_2) \longmapsto (a_1y-b_1x)(a_2y-b_2x). $$ Thus a line in $H^3$ can be thought of as a complex binary quadratic form up to a factor. The next result is contained in \S 52 of \cite{Mo2}. \begin{lemma} \label{Jacobian} Two lines in $H^3$ intersect at right angle if and only if the respective binary quadratic forms $f_i=a_i x^2 + 2 b_i xy + c_i y^2,\ i=1,2$, are orthogonal with respect to $\Delta$: \begin{equation} \label{ort} a_1c_2-2b_1b_2+a_2c_1=0. \end{equation} If two lines correspond to binary quadratic forms $f_i=a_i x^2 + 2 b_i xy + c_i y^2,\ i=1,2$, then their skewer corresponds to the Poisson bracket (the Jacobian) $$ \{f_1,f_2\} = (a_1b_2-a_2b_1)x^2 + (a_1c_2-a_2c_1) xy + (b_1c_2-b_2c_1) y^2. $$ \end{lemma} If $(a_1:b_1:c_1)$ and $(a_2:b_2:c_2)$ are homogeneous coordinates in the projective plane and the dual projective plane, then (\ref{ort}) describes the incidence relation between points and lines. In particular, the set of lines in $H^3$ that meet a fixed line at right angle corresponds to a line in ${\mathbb C}P^2$. Suppose a configuration theorem involving polarity is given in ${\mathbb R}P^2$. The projective plane with a conic provides the projective model of the hyperbolic plane, see Figures \ref{polar} and \ref{hyptriangle}, so the configuration is realized in $H^2$.
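Before passing to the complexification, here is a quick symbolic check of the two bracket identities used above (a sketch of ours in SymPy; it plays no formal role in the argument):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

def bracket(f, g):
    # Poisson bracket = Jacobian with respect to the area form dx ^ dy
    return sp.expand(sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x))

def quad(name):
    # a generic binary quadratic form a x^2 + 2 b x y + c y^2
    a, b, c = sp.symbols('a_%s b_%s c_%s' % (name, name, name))
    return a * x**2 + 2 * b * x * y + c * y**2

f, g, h = quad('f'), quad('g'), quad('h')
# Jacobi identity: the three iterated brackets sum to zero, hence they are
# linearly dependent, which is the altitude theorem in the hyperbolic plane.
assert sp.expand(bracket(bracket(f, g), h) + bracket(bracket(g, h), f)
                 + bracket(bracket(h, f), g)) == 0

f1, f2, f3, f4, f5 = [quad(str(i)) for i in range(1, 6)]
# Tomihisa's identity for binary quadratic forms, which yields the Pappus theorem.
assert sp.expand(bracket(f1, bracket(bracket(f2, f3), bracket(f4, f5)))
                 + bracket(f3, bracket(bracket(f2, f5), bracket(f4, f1)))
                 + bracket(f5, bracket(bracket(f2, f1), bracket(f4, f3)))) == 0
\end{verbatim}
Both expressions expand to zero: the first is the Jacobi identity, valid for all functions, while the second is specific to quadratic forms, that is, to $sl(2,{\mathbb R})$.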
Consider the complexification, the respective configuration theorem in ${\mathbb C}P^2$ with the polarity induced by $\Delta$. According to Lemma \ref{Jacobian}, this yields a configuration of lines in $H^3$ such that the pairs of incident points and lines correspond to pairs of lines intersecting at right angle. Another way of saying this is by way of comparing Lemmas \ref{bracket} and \ref{Jacobian}: the relations in the Lie algebras $sl(2,{\mathbb R})$ and $sl(2,{\mathbb C})$ are the same, hence to a configuration theorem in $H^2$ there corresponds a skewer configuration theorem in $H^3$. \subsection{Euclidean picture} \label{Eucpict} The following description of the Euclidean case is due to I. Dolgachev (private communication). Add the plane at infinity to ${\mathbb R}^3$; call this plane $H$. A point of $H$ represents a family of parallel lines in ${\mathbb R}^3$. For a line $L$ in ${\mathbb R}^3$, let $q(L)=L\cap H$ be its direction, that is, the respective point at infinity. One has a polarity in $H$ defined as follows. Let $A$ be a point in $H$. This point corresponds to a direction in ${\mathbb R}^3$. The set of orthogonal directions constitutes a line $A^*$ in $H$; this is the line polar to $A$. \begin{lemma} \label{lineinf} Let $L$ and $M$ be skew lines in ${\mathbb R}^3$. Then $$ q(S(L,M))= q(L)^* \cap q(M)^*. $$ \end{lemma} \paragraph{Proof.} The direction $q(L)^* \cap q(M)^*$ is orthogonal to $L$ and to $M$, and so is the skewer $S(L,M)$. This implies the result. $\Box$ Thus the skewer $S(L,M)$ is constructed as follows: find points $q(L)$ and $q(M)$ of the plane at infinity $H$, intersect their polar lines, and construct the line through point $q(L)^* \cap q(M)^*$ that intersect $L$ and $M$. This line exists and is, generically, unique: it is the intersection of the planes through point $q(L)^* \cap q(M)^*$ and line $L$, and through point $q(L)^* \cap q(M)^*$ and line $M$. To summarize, a skewer configuration in ${\mathbb R}^3$ has a `shadow' in the plane $H$: to a line $L$ there corresponds the point $q(L)$ that is also identified with its polar line $q(L)^*$. In this way, the shadow of a skewer configuration is the respective projective configuration in the plane $H$. For example, both Theorems \ref{skPappus} and \ref{othPapp} become the usual Pappus theorem in $H$. \subsection{Odds and ends} \label{rmks} 1). {\it Legendrian lift}. One can associate a skewer configuration in ${\mathbb R}P^3$ to a configuration in $S^2$ using contact geometry. A cooriented contact element in $S^2$ is a pair consisting of a point and a cooriented line through this point. The space of cooriented contact elements is $SO(3)={\mathbb R}P^3$. We consider ${\mathbb R}P^3$ with its metric of constant positive curvature (elliptic space). The projection ${\mathbb R}P^3\to S^2$ that sends a contact element to its foot point is a Hopf fibration. The space of contact elements carries a contact structure generated by two tangent vector fields: $u$ is the rotation of a contact element about its foot point, and $v$ is the motion of the foot point along the respective geodesic. The fields $u$ and $v$ are orthogonal to each other. A curve tangent to the contact structure is called Legendrian. A smooth cooriented curve in $S^2$ has a unique Legendrian lift: one assigns to a point of the curve the tangent line at this point. Consider a configuration of points and (oriented) lines in $S^2$. One can lift each point as a Legendrian line in ${\mathbb R}P^3$, consisting of the contact elements with this foot point. 
Likewise, one can lift each line as a Legendrian line, consisting of the contact elements whose foot point lies on this line. As a result, a configuration of lines and points in $S^2$ lifts to a configuration of lines in ${\mathbb R}P^3$ intersecting at right angle, as described in Theorem \ref{principle}. The family of (oriented) Legendrian lines in ${\mathbb R}P^3$ is 3-dimensional; it forms the Lagrangian Grassmannian $\Lambda(2) \subset G(2,4)$. In the classical terminology, the 3-parameter family of Legendrian lines in projective space is the null-system, \cite{Je,Do}. 2). {\it Comparing the elliptic and hyperbolic approaches}. The approaches of Sections \ref{spherical} and \ref{hyperbolic} are parallel. The sphere $S^2$ in Section \ref{spherical} is the spherization of ${\mathbb R}^3=so(3)$, the Lie bracket being the cross-product of vectors. The pole of a line $uv$ in $S^2$ corresponds to the vector $u\times v$ in ${\mathbb R}^3$. Thus the operations of connecting two points by a line and of intersecting two lines are encoded by the Lie bracket of $so(3)$. Likewise, the Poisson bracket of two quadratic forms in Section \ref{hyperbolic} can be identified with the Minkowski cross-product that encodes the operations of connecting two points by a line and of intersecting two lines. Note that $so(3)$ is the Lie algebra of motions of $S^2$, whereas $sl(2,{\mathbb R})$ is the Lie algebra of motions of $H^2$, and the complex forms of these Lie algebras coincide. Interestingly, this Lie algebraic approach to configuration theorems fails in the Euclidean plane, see \cite{Iv} for a discussion; however, Euclidean skewer configurations, such as the Petersen-Morley theorem, can be described in terms of the Lie algebra of motions of ${\mathbb R}^3$, see \cite{Sk}. In both proofs, one goes from the Lie algebra of motions in dimension 2 to that in dimension 3. In the elliptic situation, we have $so(4)=so(3)\oplus so(3)$, and in the hyperbolic situation, the Lie algebra of motions of $H^3$ is $sl(2,{\mathbb C})$. Accordingly, an elliptic skewer configuration splits into the product of two configurations in $S^2$, and a hyperbolic skewer configuration is obtained from a configuration in $H^2$ by complexification. 3). {\it Skewers in ${\mathbb R}^3$ via dual numbers}. One can approach skewer configurations in ${\mathbb R}^3$ using Study's dual numbers \cite{St}; see \cite{PW} for a modern account. Dual numbers are defined similarly to complex numbers: $$ a+\varepsilon b,\ {\rm where}\ a,b\in{\mathbb R},\ {\rm and}\ \varepsilon^2=0. $$ Dual vectors are defined analogously. To an oriented line $\ell$ in ${\mathbb R}^3$ one assigns the dual vector $\xi_{\ell}=u+\varepsilon v$, where $u\in S^2$ is the unit directing vector of $\ell$, and $v$ is the moment vector: $v=P\times u$ where $P$ is any point of $\ell$. The vectors $\xi_{\ell}$ form the Study sphere: $\xi_{\ell}\cdot \xi_{\ell}=1$. This construction provides an isomorphism between the isometry group of ${\mathbb R}^3$ and the group of dual spherical motions. Two lines $\ell$ and $m$ intersect at right angle if and only if $\xi_{\ell}\cdot \xi_{m}=0$. Thus skewer configurations in ${\mathbb R}^3$ correspond to configurations of lines and points in the Study sphere whose real part are the respective configurations in $S^2$. \section{Circles} Denote the set of lines in 3-space that share a skewer $\ell$ by ${\cal N}_\ell$. We saw in Section \ref{2proofs} that ${\cal N}_\ell$ is an analog of a line in the plane. 
Two-parameter families of lines in 3-space are called congruences. ${\cal N}_\ell$ is a linear congruence: it is the intersection of the Klein quadric with a 3-dimensional subspace ${\mathbb R}P^3 \subset {\mathbb R}P^5$, that is, it is defined by two linear equations in Pl\"ucker coordinates. Now we describe line analogs of circles. Let $\ell$ be an oriented line in 3-space (elliptic, Euclidean, or hyperbolic). Let $G_\ell$ be the subgroup of the group of orientation preserving isometries that preserve $\ell$. This group is 2-dimensional. Following \cite{Ri}, we call the orbit $G_\ell(m)$ of an oriented line $m$ an {\it axial congruence} with $\ell$ as axis. In particular, ${\cal N}_\ell$ is an axial congruence. In ${\mathbb R}^3$ (the case considered in \cite {Ri}), the lines of an axial congruence with axis $\ell$ are at equal distances $d$ from $\ell$ and make equal angles $\varphi$ with it. One defines the dual angle between two oriented lines $\varphi + \varepsilon d$, see \cite{PW}. The dual angle between the lines of an axial congruence and its axis is constant. Thus, in ${\mathbb R}^3$, an axial congruence consists of a regulus (one family of ruling of a hyperboloid of one sheet) and its parallel translations along its axis. Likewise, one defines a complex distance between oriented lines $\ell$ and $m$ in $H^3$. Let $d$ be the distance from $\ell$ to $m$ along their skewer $S(\ell,m)$, and let $\varphi$ be the angle between $m$ and the line $\ell'$, orthogonal to $S(\ell,m)$ in the plane spanned by $\ell$ and $S(\ell,m)$, and intersecting $m$. (Both $d$ and $\varphi$ have signs determined by a choice of orientation of the skewer). Then the complex distance is given by the formula $\chi(\ell,m)=d+i\varphi$, see \cite{Ma}. Again, the complex distance between the lines of an axial congruence and its axis is constant. If $\ell_{1,2}$ and $m_{1,2}$ are the respective points on the sphere at infinity ${\mathbb C}P^1$ then $$ \cosh^2\left(\frac{\chi(\ell,m)}{2}\right)=[\ell_1,m_1,m_2,\ell_2], $$ where the cross-ratio is given by the formula $$ [a,b,c,d]=\frac{(a-c)(b-d)}{(a-d)(b-c)}, $$ see \cite{Ma}. In the next lemma, ${\mathbb C}P^1$ is the `celestial sphere', that is, the sphere at infinity of $H^3$. \begin{lemma} \label{round} Let $\psi:{\mathbb C}P^1\to{\mathbb C}P^1$ be a M\"obius (projective) transformation having two distinct fixed points. The family of lines connecting point $z\in {\mathbb C}P^1$ with the point $\psi(z)$ is an axial congruence, and all axial congruences are obtained in this way. \end{lemma} \paragraph{Proof.} Without loss of generality, assume that the fixed points of $\psi$ are $0$ and $\infty$, and let $\ell$ be the line through these points. Then $\psi(z)=cz$ for some constant $c\in{\mathbb C}$. One has $ [0,z,cz,\infty]=[0,1,c,\infty]=c/(c-1). $ Hence, for the lines $m$ connecting $z$ and $\psi(z)$, the complex distance $\chi(\ell,m)$ is the same. Conversely, given an axial congruence, we may assume, without loss of generality, that its axis $\ell$ connects $0$ and $\infty$. Then $G_{\ell}$ consists of the transformations $z\mapsto kz,\ k\in{\mathbb C}$. Let $m$ be the line connecting points $w_1$ and $w_2$. Then the axial congruence $G_{\ell}(m)$ consists of the lines connecting points $k w_1$ and $kw_2=\psi(kw_1)$, with $\psi: z\mapsto (w_2/w_1) z$. $\Box$ In $S^3$, an axial congruence is characterized by the condition that the angles $\alpha$ and $\beta$ (see the proof of Lemma \ref{Grass1}) between the axis and the lines of the congruence are constant. 
It follows from the proof of Lemma \ref{Grass1} that an axial congruence is a torus, a product of circles, one in $S^2_-$ and another in $S^2_+$. Thus an axial congruence of lines is an analog of a circle in 2-dimensional geometry. The arguments from Section \ref{spherical} imply analogs of the basic properties of circles: \begin{enumerate} \item If two generic axial congruences share a line then they share a unique other line. \item Three generic oriented lines belong to a unique axial congruence. \end{enumerate} (A direct proof of the first property: if the axes of the congruences are $\ell_1$ and $\ell_2$, and the shared line is $m$, then the second shared line is obtained from $m$ by reflecting in $S(\ell_1,\ell_2)$ and reverting the orientation). Using the approach of Section \ref{2proofs}, one extends the Correspondence principle to theorems involving circles. For example, one has \begin{theorem}[Skewer Pascal theorem] \label{skPascal} Let $A_1,\ldots,A_6$ be lines from an axial congruence. Then $$ S(S(A_1,A_2),S(A_4,A_5)),\ S(S(A_2,A_3),S(A_5,A_6)), \ {\rm and}\ S(S(A_3,A_4),S(A_6,A_1)) $$ share a skewer, see Figure \ref{Pascal}. \end{theorem} \begin{figure} \caption{Pascal's theorem for a circle.} \label{Pascal} \end{figure} As another example, consider the Clifford's Chain of Circles. This chain of theorems starts with a number of concurrent circles labelled $1,2,3,\ldots, n$. In Figure \ref{Clifford}, $n=5$, and the initial circles are represented by straight lines (so that their common point is at infinity).\footnote{As usual, lines are considered as circles of infinite radius.} The intersection point of circles $i$ and $j$ is labelled $ij$. The circle through points $ij, jk$ and $ki$ is labelled $ijk$. The first statement of the theorem is that the circles $ijk, jkl, kli$ and $lij$ share a point; this point is labelled $ijkl$. The next statement is that the points $ijkl, jklm, klmi, lmij$ and $mijk$ are cocyclic; this circle is labelled $ijklm$. And so on, with the claims of being concurrent and cocyclic alternating; see \cite{Coo,Mo2}, and \cite{KS,SK} for a relation with completely integrable systems. \begin{figure} \caption{Clifford's Chain of Circles ($n=5$).} \label{Clifford} \end{figure} A version of this theorem for lines in ${\mathbb R}^3$ is due to Richmond \cite{Ri}. The approach of Section \ref{2proofs} provides an extension to the elliptic and hyperbolic geometries. \begin{theorem}[Clifford's Chain of Lines] \label{skClifford} 1) Consider axial congruences ${\cal C}_i,\ i=1,2,3,4$, sharing a line. For each pair of indices $i,j \in \{1,2,3,4\}$, denote by $\ell_{ij}$ the line shared by ${\cal C}_i$ and ${\cal C}_j$, as described in statement 1 above. For each triple of indices $i,j,k \in \{1,2,3,4\}$, denote by ${\cal C}_{ijk}$ the axial congruence containing the lines $\ell_{ij},\ell_{jk},\ell_{ki}$, as described in the statement 2. Then the congruences ${\cal C}_{123}, {\cal C}_{234}, {\cal C}_{341}$ and ${\cal C}_{412}$ share a line. \\ 2) Consider axial congruences ${\cal C}_i,\ i=1,2,3,4,5$, sharing a line. Each four of the indices determine a line, as described in the previous statement of the theorem. One obtains five lines, and they all belong to an axial congruence.\\ 3) Consider axial congruences ${\cal C}_i,\ i=1,2,3,4,5,6$, sharing a line. Each five of them determine an axial congruence, as described in the previous statement of the theorem. One obtains six axial congruences, and they all share a line. And so on... 
\end{theorem} Next, we present an analog of the Poncelet Porism, see, e.g., \cite{DR,Fl}. This theorem states that if there exists an $n$-gon inscribed into a conic and circumscribed about a nested conic then every point of the outer conic is a vertex of such an $n$-gon, see Figure \ref{Poncelet}. \begin{figure} \caption{Poncelet Porism, $n=3$.} \label{Poncelet} \end{figure} Consider a particular case when both conics are circles (a pair of nested conics can be sent to a pair of circles by a projective transformation). The translation to the language of lines in space is as follows. Consider two generic axial congruences ${\cal C}_1$ and ${\cal C}_2$, and assume that there exists a pair of lines $\ell_1\in {\cal C}_1$ and $\ell_2\in {\cal C}_2$ that intersect at right angle. That is, ${\cal C}_1$ and ${\cal N}_{\ell_2}$ share the line $\ell_1$. By property 1) above, there exists a unique other line $\ell_1'\in {\cal C}_1$, shared with ${\cal N}_{\ell_2}$, that is, $\ell_1'$ intersects $\ell_2$ at right angle. Then there exists a unique other line $\ell_2'\in {\cal C}_2$ that intersects $\ell_1'$ at right angle, etc. We obtain a chain of intersecting orthogonal lines, alternating between the two axial congruences. The following theorem holds in the three classical geometries. \begin{theorem}[Skewer Poncelet theorem] \label{skPoncelet} If this chain of lines closes up after $n$ steps, then the same holds for any starting pair of lines from ${\cal C}_1$ and ${\cal C}_2$ that intersect at right angle. \end{theorem} \paragraph{Proof.} Arguing as in Section \ref{spherical}, we interpret one axial congruence as the set of points of a spherical circle, and another one as the set of geodesic circles tangent to a spherical circle. The incidence between a geodesic and a point corresponds to two lines in space intersecting at right angle. Thus the claim reduces to a version of the Poncelet theorem in $S^2$ where a spherical polygon is inscribed in a spherical circle and circumscribed about a spherical circle. This spherical version of the Poncelet theorem is well known, see, e.g., \cite{CS,Ve}. For a proof, the central projection sends a pair of disjoint circles to a pair of nested conics in the plane, and the geodesic circles to straight lines, and the result follows from the plane Poncelet theorem. $\Box$ A pair of nested circles in the Euclidean plane is characterized by three numbers: their radii, $r<R$, and the distance between the centers, $d$. The conditions for the existence of an $n$-gon inscribed into one circle and circumscribed about the other (a bicentric $n$-gon) are known as the Fuss relations. The first ones, for $n=3$ and $n=4$, are $$ R^2-d^2=2rR,\quad (R^2-d^2)^2=2r^2 (R^2+d^2); $$ the case $n=3$ is due to Euler; Fuss found the relations for $n=4,\ldots,8$. More generally, Cayley gave conditions for Poncelet polygons to close up after $n$ steps for a pair of conics, see \cite{DR,Fl}. It would be interesting to find an analog of the Fuss and Cayley relations; up to isometry, a pair of axial congruences depends on 6 parameters (two characterizing each congruence and two describing the mutual position of the axes). \section{Projections and conics} \label{conics} In this section, we propose a definition-construction of a skewer analog of a conic. Let us first describe a skewer analog of a projection of a line to a line. Figure \ref{projective} depicts the central projection $\varphi_O: a \to b$ between two lines in the plane.
\begin{figure} \caption{A central projection of a line to a line.} \label{projective} \end{figure} Consider three lines in space, $a,b$ and $O$, and define a map $\varphi_O: {\cal N}_a \to {\cal N}_b$ as follows: for $\ell \in {\cal N}_a,$ set $\varphi_O (\ell) = S(S(\ell,O),b)$. This is a skewer analog of the central projection. Like in the plane, this operation is involutive: swapping the roles of $a$ and $b$, and applying it to the line $S(S(\ell,O),b)$, takes one back to line $\ell$. Following Section \ref{hyperbolic}, one can describe the hyperbolic case of this construction in ${\mathbb C}P^2$; the result is (a complex version of) the central projection in Figure \ref{projective}. Recall the Braikenridge-Maclaurin construction of a conic depicted in Figure \ref{conic}; see \cite{Mi} for the history of this result. \begin{figure} \caption{The Braikenridge-Maclaurin construction of a conic.} \label{conic} \end{figure} Fix two lines, $p$ and $q$, and three points, $O, A$ and $B$. Identify $p$ with the pencil of lines through point $A$, and $q$ with the pencil of lines through point $B$. The central projection $\varphi_O:p \to q$ induces a projective transformation between the two pencils of lines. Then the locus of intersection points of the corresponding lines from these pencils is a conic. One can use the skewer version of the Braikenridge-Maclaurin construction to define a line analog of a conic. Start with five lines $O,p,q,A,B$. For each line $\ell \in {\cal N}_p$, we have the corresponding line $m=S(S(\ell,O),q) \in {\cal N}_q$. Then the 2-parameter family of lines $$ S(S(\ell,A),S(m,B)),\ \ \ell \in {\cal N}_p $$ is a skewer analog of a conic. In the hyperbolic case, this set is identified with a conic in ${\mathbb C}P^2$. \section{Sylvester Problem} \label{Sylv} Given a finite set $S$ of points in the plane, assume that the line through every pair of points in $S$ contains at least one other point of $S$. J.J.Sylvester asked in 1893 whether $S$ necessarily consists of collinear points. See \cite{BM} for the history of this problem and its generalizations. In ${\mathbb R}^2$, the Sylvester question has an affirmative answer (the Sylvester-Galai theorem), but in ${\mathbb C}^2$ one has a counter-example: the 9 inflection points of a cubic curve (of which at most three can be real, according to a theorem of Klein), connected by 12 lines. Note that the dual Sylvester-Galai theorem holds as well: if a finite collection of pairwise non-parallel lines in ${\mathbb R}^2$ has the property that through the intersection point of any two lines there passes at least one other line, then all the lines are concurrent. The skewer version of the Sylvester Problem concerns a finite collection $S$ of pairwise skew lines in space such that the skewer of any pair intersects at least one other line at right angle. We say that $S$ has the skewer Sylvester property. The question is whether a collection of lines with the skewer Sylvester property necessarily consists of lines that share a skewer. \begin{theorem} \label{Sylvth} The skewer version of the Sylvester-Galai theorem holds in the elliptic and Euclidean geometries, but fails in the hyperbolic one. \end{theorem} \paragraph{Proof.} In the elliptic case, we argue as in Section \ref{spherical}. A collection of lines becomes two collections of points, in ${\mathbb R}P^2_-$ and in ${\mathbb R}P^2_+$. 
The skewer Sylvester property implies that each of these sets enjoys the property that the line through a pair of points contains another point, and one applies the Sylvester-Galai theorem to each of the two sets. In the hyperbolic case, we argue as in Section \ref{hyperbolic}. Let $a_1,\ldots,a_9$ be the nine inflection points of a cubic curve in ${\mathbb C}P^2$, and let $b_1,\ldots,b_{12}$ be the respective lines (the counterexample to the complex Sylvester-Galai theorem). Let $b_1^*,\ldots,b_{12}^*$ be the polar dual points. As described in Section \ref{hyperbolic}, the points $a_i$ correspond to nine lines in $H^3$, and the points $b_j^*$ to their skewers. We obtain a collection of nine lines that has the skewer Sylvester property but does not possess a common skewer. In the intermediate case of ${\mathbb R}^3$, the following argument is due to V. Timorin (private communication). The approach is the same as in Section \ref{Eucpict}. It follows from the discussion there that if three lines in ${\mathbb R}^3$ share a skewer then their intersections with the plane at infinity $H$ are collinear. Let $L_1,\ldots, L_n$ be a collection of lines enjoying the skewer Sylvester property. Then, by the Sylvester-Galai theorem in $H$, the points $q(L_1),\ldots, q(L_n)$ are collinear. This means that the lines $L_1,\ldots, L_n$ lie in parallel planes, say, the horizontal ones. Consider the vertical projection of these lines. We obtain a finite collection of non-parallel lines such that through the intersection point of any two there passes at least one other line. By the dual Sylvester-Galai theorem, all these lines are concurrent. Therefore the horizontal lines in $R^3$ share a vertical skewer. $\Box$ \section{Pappus revisited} \label{Papprev} In this section we prove Theorem \ref{othPapp}. This computational proof is joint with R. Schwartz. As before, it suffices to establish the hyperbolic version of Theorem \ref{othPapp}. We use the approach to 3-dimensional hyperbolic geometry, in the upper half-space model, developed by Fenchel \cite{Fe}; see also \cite{Iv,Ma}. The relevant features of this theory are as follows. To a line $\ell$ in $H^3$, one assigns the reflection in this line, an orientation preserving isometry of the hyperbolic space, an element of the group $PGL(2,{\mathbb C})$. One can lift it to a matrix ${M_{\ell}} \in GL(2,{\mathbb C})$, defined up to a complex scalar. Since reflection is an involution, one has ${\rm Tr} (M_{\ell})=0$. More generally, a traceless matrix $M \in GL(2,{\mathbb C})$ is called a {\it line matrix}; it satisfies $M^2 = -\det(M) E$ where $E$ is the identity matrix. The skewer relations translate to the language of matrices as follows: \begin{itemize} \item two lines $\ell$ and $n$ intersect at right angle if and only if ${\rm Tr} (M_{\ell} M_n)=0$; \item the skewer of two lines $\ell$ and $n$ corresponds to the commutator $[M_{\ell},M_n]$; \item three lines $\ell,m,n$ share a skewer if and only if the matrices $M_{\ell}, M_m$, and $M_n$ are linearly dependent. \end{itemize} Likewise, one assigns matrices to points. The reflection in a point $P$ is an orientation-reversing isometry of $H^3$; one assigns to it a matrix $N_P$ in $GL(2,{\mathbb C})$, defined up to a real scalar, with $\det N_P >0$ and satisfying $N_P {\overline N_P} = -\det(N_P) E$, where bar means the entry-wise complex conjugation of a matrix. Such matrices are called {\it point matrices}. 
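Before describing point matrices more explicitly, we pause for a small numerical check of the three line-matrix rules above (a sketch of ours, assuming Python with NumPy). A line is encoded by the endpoints $p,q\in{\mathbb C}$ of the corresponding geodesic on the sphere at infinity; the explicit lift used below for its line matrix, namely the unique M\"obius involution fixing $p$ and $q$, is our own choice of normalization.
\begin{verbatim}
import numpy as np

def line_matrix(p, q):
    # traceless lift of the rotation by pi about the geodesic with endpoints p, q
    return np.array([[p + q, -2 * p * q], [2, -(p + q)]], dtype=complex)

def vec(M):
    # a traceless matrix viewed as the 3-vector (m11, m12, m21)
    return np.array([M[0, 0], M[0, 1], M[1, 0]])

# The geodesics with endpoints (1,-1) and (i,-i) meet at right angle at (0,0,1):
A, B = line_matrix(1, -1), line_matrix(1j, -1j)
assert np.isclose(np.trace(A @ B), 0)                       # rule 1

# The geodesics (1,-1) and (2,-2) are disjoint coaxial semicircles; their skewer
# is the vertical axis with endpoints 0 and infinity, whose line matrix is
# proportional to diag(1,-1), and that is exactly what the commutator gives:
C, D = line_matrix(1, -1), line_matrix(2, -2)
K = C @ D - D @ C                                           # rule 2
assert np.isclose(K[0, 1], 0) and np.isclose(K[1, 0], 0)
assert not np.isclose(K[0, 0], 0)

# Rule 3: the lines (1,-1), (2,-2), (3,-3) all meet the vertical axis at right
# angle, and their matrices are linearly dependent; a generic triple is not.
dependent = np.column_stack([vec(line_matrix(k, -k)) for k in (1, 2, 3)])
generic = np.column_stack([vec(line_matrix(p, q))
                           for p, q in ((1, 2), (3, -1), (1j, 2j))])
assert np.isclose(np.linalg.det(dependent), 0)
assert not np.isclose(np.linalg.det(generic), 0)
\end{verbatim}
In particular, the commutator of the two coaxial semicircles is a multiple of ${\rm diag}(1,-1)$, the line matrix of their common perpendicular, the vertical axis. We now return to point matrices.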
Equivalently, point matrices $N$ satisfy $n_{22}=-{\bar n_{11}}, n_{12}\in{\mathbb R}, n_{21}\in{\mathbb R}$, that is, the real part of $N$ is a traceless matrix, and the imaginary part is a scalar matrix. It is convenient to normalize so that the imaginary part is $E$, and then $N$ can be thought of as a real 3-vector consisting of three entries of the real part of $N$. Incidence properties translate as follows: \begin{itemize} \item a point $P$ lies on a line $\ell$ if and only if $M_{\ell} N_P = N_P {\overline M_{\ell}}$; \item three points are collinear if and only if the respective point matrices are linearly dependent (equivalently, over ${\mathbb R}$ or ${\mathbb C}$). \end{itemize} We need a formula for a line matrix corresponding to the line through two given points. Let $N_1$ and $N_2$ be point matrices corresponding to the given points. Then the desired line matrix $M\in GL(2,{\mathbb C})$ satisfies the system of linear equations \begin{equation} \label{linepoint} M N_1 = N_1 \overline {M},\ M N_2 = N_2 {\overline M},\ {\rm Tr} (M) =0. \end{equation} This system is easily solved and it defines $M$ up to a factor (we do not reproduce the explicit formulas here). With these preliminaries, the proof proceeds in the following steps. \begin{enumerate} \item Start with two triples of linearly dependent point matrices, corresponding to the triples of points $A_1,A_2,A_3$ and $B_1,B_2,B_3$. \item Compute the line matrices, corresponding to the lines $(A_1 B_2)$ and $(A_2 B_1)$, $(A_2 B_3)$ and $(A_3 B_2)$, and $(A_3 B_1)$ and $(A_1 B_3)$ by solving the respective systems (\ref{linepoint}). \item Compute the commutators of these three pairs of line matrices. \item Check that the obtained three matrices are linearly dependent. \end{enumerate} We did these computations in Mathematica. Since a line matrix is traceless, it can be viewed as a complex 3-vector, and the last step consists in computing the determinant of the $3\times 3$ matrix formed by these three vectors. The result of this last computation was zero (for arbitrary initial point matrices), which proves the theorem. \begin{remark} {\rm Theorem \ref{othPapp} can be restated somewhat similarly to Theorem \ref{skPappus}. Given two skew lines $L$ and $M$, consider the 1-parameter family of lines ${\cal F}(L,M)$ consisting of the lines that pass through a point $A\in L$ and are orthogonal to the plane spanned by the point $A$ and the line $M$. Likewise, one has the 1-parameter family of lines ${\cal F}(M,L)$. These families, ${\cal F}(L,M)$ and ${\cal F}(M,L)$, replace the 2-parameter families of lines ${\cal N}_L$ and ${\cal N}_M$ in the formulation of Theorem \ref{skPappus}, and yield Theorem \ref{othPapp}. } \end{remark} \begin{remark} {\rm F. Bachmann \cite{Ba} developed an approach to 2-dimensional geometry (elliptic, Euclidean, and hyperbolic) based on the notion of reflection and somewhat similar to Fenchel's approach to 3-dimensional hyperbolic geometry \cite{Fe}. Namely, to a point $P$ there corresponds the reflection $\sigma_P$ in this point, and to a line $\ell$ -- the reflection $\sigma_{\ell}$ in this line. The incidence relation $P \in \ell$ is expressed as $\sigma_P \sigma_{\ell} = \sigma_{\ell} \sigma_P$. Two lines, $\ell$ and $m$, are orthogonal if and only if $\sigma_{\ell} \sigma_m = \sigma_m \sigma_{\ell}$. More generally, one has a system of axioms of plane geometry in terms of involutions in the group of motions. At the present writing, it is not clear how to deduce the Correspondence principle using this approach. } \end{remark} \end{document}
\begin{document} \begin{frontmatter} \title{Composing Dinatural Transformations: Towards a Calculus of Substitution\tnoteref{t1}} \tnotetext[t1]{This is the \emph{accepted manuscript} version of a published journal article accessible at \url{https://doi.org/10.1016/j.jpaa.2021.106689}.} \author{Guy McCusker} \ead{[email protected]} \author{Alessio Santamaria\corref{cor1}\fnref{fn1}} \ead{[email protected]} \cortext[cor1]{Corresponding author} \fntext[fn1]{Present address: Dipartimento di Informatica, Università degli Studi di Pisa, Largo B.\ Pontecorvo 3, 56127 Pisa, Italy} \address{Department of Computer Science, University of Bath, BA2 7AY Bath, United Kingdom} \date{} \begin{abstract} Dinatural transformations, which generalise the ubiquitous natural transformations to the case where the domain and codomain functors are of mixed variance, fail to compose in general; this has been known since they were discovered by Dubuc and Street in 1970. Many ad hoc solutions to this remarkable shortcoming have been found, but a general theory of compositionality was missing until Petri\'c, in 2003, introduced the concept of g-dinatural transformations, that is, dinatural transformations together with an appropriate graph: he showed how acyclicity of the composite graph of two arbitrary dinatural transformations is a sufficient and essentially necessary condition for the composite transformation to be in turn dinatural. Here we propose an alternative, semantic rather than syntactic, proof of Petri\'c's theorem, which the authors independently rediscovered with no knowledge of its prior existence; we then use it to define a generalised functor category, whose objects are functors of mixed variance in many variables, and whose morphisms are transformations that happen to be dinatural only in some of their variables. We also define a notion of horizontal composition for dinatural transformations, extending the well-known version for natural transformations, and prove it is associative and unitary. Horizontal composition embodies substitution of functors into transformations and vice versa, and is intuitively reflected from the string-diagram point of view by substitution of graphs into graphs. This work represents the first, fundamental steps towards a substitution calculus for dinatural transformations as sought originally by Kelly, with the intention then to apply it to describe coherence problems abstractly. There are still fundamental difficulties that are yet to be overcome in order to achieve such a calculus, and these will be the subject of future work; however, our contribution places us well on track along the path traced by Kelly towards a calculus of substitution for dinatural transformations. \end{abstract} \begin{keyword} Dinatural transformation \sep compositionality \sep substitution \sep coherence \sep Petri Net \MSC[2010] 18A05 \sep 18A23 \sep 18A25 \sep 18A30 \sep 18A32 \sep 18A40 \sep 18C10 \sep 18D05 \sep 18D15 \end{keyword} \end{frontmatter} \section{Introduction} The problem of coherence for a certain theory (like monoidal, monoidal closed\dots) consists in understanding which diagrams necessarily commute as a consequence of the axioms. One of the most famous results is Mac Lane's theorem on coherence for monoidal categories~\cite{mac_lane_natural_1963}: every diagram built up only using associators and unitors, which are the data that come with the definition of a monoidal category, commutes.
One of the consequences of this fact is that every monoidal category is monoidally equivalent to a \emph{strict} monoidal category, where associators and unitors are, in fact, identities. What this tells us is that those operations that one would like to regard as not important--such as the associators and unitors etc.--really are not important. Solving the coherence problem for a theory, therefore, is fundamental to the complete understanding of the theory itself. In this article we aim to set down the foundations for the answer to an open question left by Kelly in his task to study the coherence problem abstractly, started with~\cite{kelly_abstract_1972,kelly_many-variable_1972}. Kelly argued that coherence problems are concerned with categories carrying an extra structure: a collection of functors and natural transformations subject to various equational axioms. For example, in a monoidal category $\A$ we have $\otimes \colon \A^2\! \to \A$, $I \colon \A^0 \to \A$; if $\A$ is also closed then we would have a functor of mixed variance $(-) \implies (-) \colon \Op\A\! \times \A \to \A$. The natural transformations that are part of the data, like associativity in the monoidal case: \[ \alpha_{A,B,C} \colon (A \otimes B) \otimes C \to A \otimes (B \otimes C), \] connect not the basic functors directly, but rather functors obtained from them by \emph{iterated substitution}. By ``substitution'' we mean the process where, given functors \[ K \colon \A \times \Op\B \times \C \to \D, \quad F \colon \E \times \mathbb G \to \A, \quad G \colon \mathbb H \times \Op\L \to \B, \quad H \colon \Op\M \to \C \] we obtain the new functor \[ K(F,\Op G, H) \colon \E \times \mathbb G \times \Op{\mathbb H} \times \L \times \Op\M \to \D\label{substitution functors example} \] sending $(A,B,C,D,E)$ to $K(F(A,B),\Op G (C,D), H(E))$. Hence substitution generalises composition of functors, to which it reduces if we only consider one-variable functors. In the same way, the equational axioms for the structure, like the pentagonal axiom for monoidal categories: \[ \begin{tikzcd}[column sep={3.5em,between origins},row sep=2em] & & & (A \otimes B) \otimes (C \otimes D) \ar[drr,"\alpha_{A,B,C\otimes D}"] \\ \bigl( (A \otimes B) \otimes C \bigr) \otimes D \ar[urrr,"\alpha_{A\otimes B, C, D}"] \ar[dr,"\alpha_{A,B,C} \otimes D"'] & & & & & A \otimes \bigl( B \otimes (C \otimes D) \bigr) \\ & \bigl( A \otimes (B \otimes C) \bigr) \otimes D \ar[rrr,"\alpha_{A,B \otimes C, D}"'] & & & A \otimes \bigl( (B \otimes C) \otimes D \bigr) \ar[ur,"A \otimes \alpha_{B,C,D}"'] \end{tikzcd} \] involve natural transformations obtained from the basic ones by ``substituting functors into them and them into functors'', like $\alpha_{A \otimes B, C, D}$ and $\alpha_{A,B,C} \otimes D$ above. By substitution of functors into transformations and transformations into functors we mean therefore a generalised \emph{whiskering} operation or, more broadly, a generalised \emph{horizontal composition} of transformations. For these reasons Kelly argued in~\cite{kelly_many-variable_1972} that an abstract theory of coherence requires ``a tidy calculus of substitution'' for functors of many variables and appropriately general kinds of natural transformations, generalising the usual Godement calculus~\cite[Appendice]{godement_topologie_1958} for ordinary functors in one variable and ordinary natural transformations. 
(The ``five rules of the functorial calculus'' set down by Godement are in fact equivalent to saying that sequential composition of functors and vertical and horizontal composition of natural transformations are associative, unitary and satisfy the usual interchange law; see~\cite[Introduction]{santamaria_towards_2019} for more details.) One could ask why bother introducing the notion of substitution, given that it is not primitive, as the functor $K(F,\Op G, H)$ above can be easily seen to be the usual composite $K \circ (F \times \Op G \times H)$. Kelly's argument is that there is \emph{no need} to consider functors whose codomain is a product of categories, like $F \times \Op G \times H$, or the twisting functor $T(A,B) = (B,A)$, or the diagonal functor $\Delta \colon \A \to \A \times \A$ given by $\Delta(A) = (A,A)$, if we consider substitution as an operation on its own. However, take a Cartesian closed category $\A$, and consider the diagonal transformation $\delta_A \colon A \to A \times A$, the symmetry $\gamma_{A,B} \colon A \times B \to B \times A$ and the evaluation transformation $\eval A B \colon A \times (A \implies B) \to B$. It is true that we can see $\delta$ and $\gamma$ as transformations $\id \A \to \Delta$ and $\times \to \times \circ T$, but there is no way to involve $\Delta$ into the codomain of $\eval{}{}$, given that the variable $A$ appears covariantly and contravariantly at once. Kelly suggested adapting the notion of \emph{graph} for \emph{extranatural} transformations that he had introduced with Eilenberg~\cite{eilenberg_generalization_1966} to handle the case of natural transformations; that is, he proposed to consider natural transformations $\phi \colon F \to G$ between functors of many variables together with a graph $\Gamma(\phi)$ that tells us which arguments of $F$ and $G$ are to be equated when we write down the general component of $\phi$. The information carried by the graph is what allows us to get by without explicit mention of functors like $T$ and $\Delta$ and, moreover, it paves the way to the substitution calculus he sought. With the notion of ``graph of a natural transformation'', Kelly constructed a full Godement calculus for covariant functors only. His starting point was the observation that the usual Godement calculus essentially asserts that $\Cat$ is a 2-category, but this is saying less than saying that $\Cat$ is actually \emph{Cartesian closed}, $- \times \B$ having a right adjoint $[\B,-]$ where $[\B,\C]$ is the functor category. Since every Cartesian closed category is enriched over itself, we have that $\Cat$ is a $\Cat$-category, which is just another way to say 2-category. Now, vertical composition of natural transformations is embodied in $[\B,\C]$, but sequential composition of functors and horizontal composition of natural transformations are embodied in the functor \[ M \colon [\B,\C] \times [\A,\B] \to [\A,\C] \] given by the closed structure (using the adjunction and the evaluation map twice). What Kelly does, therefore, is to create a generalised functor category $\FC \B \C $ over a category of graphs $\Per$ and to show that the functor $\FC - -$ is the internal-hom of $\catover\Per$, which is then monoidal closed (in fact, far from being Cartesian or even symmetric), the left adjoint of $\FC \B -$ being denoted as $\ring - \B$. The analogue of the $M$ above, now of the form $\ring {\FC \B \C} {\FC \A \B} \to \FC \A \C,$ is what provides the desired substitution calculus. 
When trying to deal with the mixed-variance case, however, Kelly ran into problems. He considered the every-variable-twice extranatural transformations of~\cite{eilenberg_generalization_1966} and, although he got ``tantalizingly close'', to use his words, to a sensible calculus, he could not find a way to define a category of graphs that can handle cycles in a proper way. This is the reason for the ``I'' in the title \emph{Many-Variable Functorial Calculus, I} of~\cite{kelly_many-variable_1972}: he hoped to solve these issues in a future paper, which sadly has never seen the light of day. What we do in this article is, in fact, consider transformations between mixed-variance functors whose type is even more general than Eilenberg and Kelly's, corresponding to $\text{\uuline{G}}^*$ in~\cite{kelly_many-variable_1972}, recognising that they are a straightforward generalisation of \emph{dinatural transformations}~\cite{dubuc_dinatural_1970} in many variables. This poses an immediate, major obstacle: dinatural transformations notoriously fail to compose, as already observed by Dubuc and Street when they introduced them in 1970. There are certain conditions, known already to their discoverers, under which two dinatural transformations $\phi$ and $\psi$ compose: if either of them is natural, or if a certain square happens to be a pullback or a pushout, then the composite $\psi\circ\phi$ turns out to be dinatural. However, these are far from being satisfactory solutions for the compositionality problem, for either they are too restrictive (as in the first case), or they speak of properties enjoyed not by $\phi$ and $\psi$ themselves, but rather by other structures, namely one of the functors involved. Many studies have been conducted on them~\cite{bainbridge_functorial_1990,blute_linear_1993,freyd_dinaturality_1992,girard_normal_1992,lataillade_dinatural_2009,mulry_categorical_1990,pare_dinatural_1998,pistone_dinaturality_2017,plotkin_logic_1993,simpson_characterisation_1993,wadler_theorems_1989}, and many attempts have been made to find a proper calculus for dinatural transformations, but until recently only \emph{ad hoc} solutions have been found and, ultimately, they have remained poorly understood. In 2003, Petri\'c~\cite{petric_g-dinaturality_2003} studied coherence results for bicartesian closed categories, and found himself in need, much like Kelly in his more general case, of understanding the compositionality properties of \emph{g-dinatural transformations}, which are slightly more general than the dinatural transformations of Dubuc and Street~\cite{dubuc_dinatural_1970} in that their domain and codomain functors are allowed to have different variance and, moreover, they always come with a graph (whence the ``g'' in ``g-dinatural'') which reflects their signature. Petri\'c successfully managed to find a sufficient and essentially necessary condition for two consecutive g-dinatural transformations $\phi$ and $\psi$ to compose: if the composite graph, obtained by appropriately ``glueing'' together the graphs of $\phi$ and $\psi$, is acyclic, then $\psi\circ\phi$ is again g-dinatural.
This result, which effectively solves the compositionality problem of dinatural transformations, surprisingly does not appear to be well known: fifteen years after Petri\'c's paper, the authors of the present article, completely oblivious to Petri\'c's contribution, independently re-discovered the same theorem, which was one of the results of~\cite{mccusker_compositionality_2018} and of the second author's PhD thesis~\cite{santamaria_towards_2019}\footnotemark. We, too, associated to each dinatural transformation a graph, inspired by Kelly's work of~\cite{kelly_many-variable_1972}, this graph being slightly different from Petri\'c's; we also proved that acyclicity of the composite graph of $\phi$ and $\psi$ is ``essentially enough'' for $\psi\circ\phi$ to be dinatural. Our proof and Petri\'c's are, deep down, based on the same argument, but the main difference lies in the approach taken to formalise it: Petri\'c's is purely syntactic, using re-writing rules to show how the arbitrary morphism of the universal quantification of the dinaturality property for $\psi\circ\phi$ can ``travel through the composite graph'' when the graph is acyclic, whereas we showed this by interpreting the composite graph as a \emph{Petri Net}~\cite{petri_kommunikation_1962} and re-casting the dinaturality property of $\psi\circ\phi$ into a \emph{reachability} problem. We then proceeded to solve it by exploiting the general theory of Petri Nets: in other words, we took a more semantic approach. \footnotetext{We also presented our result as novel on various occasions, including in a plenary talk at the Category Theory conference in Edinburgh in 2019, yet nobody redirected us to Petri\'c's paper, which we found by chance only in September 2019.} Because of this appreciable difference between Petri\'c's proof of the compositionality result for dinatural transformations and ours, we believe it is worth presenting our theorem in this paper despite the non-novelty of its statement; moreover, we give here a more direct proof of it than the one in~\cite{mccusker_compositionality_2018}: this is done in Section~\ref{section vertical compositionality}. In Section~\ref{chapter horizontal}, we define a working notion of horizontal composition, which we believe will play the role of substitution of dinaturals into dinaturals, precisely as horizontal composition of natural transformations does, as shown by Kelly in~\cite{kelly_many-variable_1972}. Next, we form a generalised functor category $\FC \B \C$ for these transformations (Definition~\ref{def: generalised functor category}). Finally, we prove that $\FC \B -$ has indeed a left adjoint $\ring - \B$, which gives us the definition of a category of formal substitutions $\ring \A \B$ generalising Kelly's. Although the road paved by Kelly towards a substitution calculus for dinatural transformations still stretches a long way, our work sets the first steps in the right direction for a full understanding of the compositionality properties of dinaturals, which hopefully will be achieved soon. \paragraph{Notations} $\N$ is the set of natural numbers, including 0, and we shall ambiguously write $n$ for both the natural number $n$ and the set $\{1,\dots,n\}$. We denote by $\I$ the category with one object and one morphism. Let $\alpha\in \List{\{+,-\}}$, $\length\alpha=n$, with $\length{-}$ denoting the length function (and also the cardinality of an ordinary finite set). We refer to the $i$-th element of $\alpha$ as $\alpha_i$.
Given a category $\C$, if $n\ge 1$, then we define $\C^\alpha=\C^{\alpha_1} \times \dots \times \C^{\alpha_n}$, with $\C^+=\C$ and $\C^-=\Op\C$, otherwise $\C^\alpha=\I$. Composition of morphisms $f \colon A \to B$ and $g \colon B \to C$ will be denoted by $g\circ f$, $gf$ or also $f;g$. The identity morphism of an object $A$ will be denoted by $\id A$, $1_A$ (possibly without subscripts, if there is no risk of confusion), or $A$ itself. Given $A$, $B$ and $C$ objects of a category $\C$ with coproducts, and given $f \colon A \to C$ and $g \colon B \to C$, we denote by $[f,g] \colon A + B \to C$ the unique map granted by the universal property of $+$. We use boldface capital letters $\bfA,\bfB\dots$ for tuples of objects, whose length will be specified in context. Say $\bfA=(A_1,\dots,A_n) \in \C^n$: we can see $\bfA$ as a function from the set $n$ to the objects of $\C$. If $\sigma \colon k \to n$ is a function of sets, the composite $\bfA \sigma$ is the tuple $(A_{\sigma 1}, \dots, A_{\sigma k})$. For $\bfB \in \C^n$ and $i \in \{1,\dots,n\}$, we denote by $\subst B X i$ the tuple obtained from $\bfB$ by replacing its $i$-th entry with $X$, and by $\subst B \cdot i$ the tuple obtained from $\bfB$ by removing its $i$-th entry altogether. In particular, the tuple $\subst A X i \sigma$ is equal to $(Y_1,\dots,Y_k)$ where \[ Y_j = \begin{cases} X & \sigma j=i \\ A_{\sigma j} & \sigma j \ne i \end{cases}. \] Let $\alpha \in \List\{+,-\}$, $\bfA = (A_1,\dots,A_n)$, $\sigma \colon \length\alpha \to n$, $i \in \{1,\dots,n\}$. We shall write $\substMV A X Y i \sigma$ for the tuple $(Z_1,\dots,Z_{\length\alpha})$ where \[ Z_j = \begin{cases} X & \sigma j = i, \alpha_j = - \\ Y & \sigma j = i, \alpha_j = + \\ A_{\sigma j} & \sigma j \ne i \end{cases}\label{not:A[X,Y/i]sigma} \] We shall also write $\subst B {\bfA} i$ for the tuple obtained from $\bf B$ by substituting $\bfA$ into its $i$-th entry. For example, if $\bfA = (A_1,\dots,A_n)$ and $\bfB = (B_1,\dots,B_m)$, we have \[ \subst B {\bfA} i = (B_1,\dots, B_{i-1},A_1,\dots A_n, B_{i+1}, \dots B_m). \] If $F \colon \B^{\alpha} \to \C$ is a functor, we define $\funminplus F {A_i} {B_i} i {\length\alpha}$ to be the following object (if $A_i$, $B_i$ are objects) or morphism (if they are morphisms) of $\C$: \[ \funminplus F {A_i} {B_i} i {\length\alpha}= F(X_1,\dots,X_{\length\alpha}) \text{ where } X_i = \begin{cases} A_i & \alpha_i = - \\ B_i & \alpha_i = + \end{cases} \] If $A_i = A$ and $B_i = B$ for all $i \in \length\alpha$, then we will simply write $\funminplusconst F A B$ for the above. We denote by $\Not\alpha$ the list obtained from $\alpha$ by swapping the signs. Also, we call $\Op F \colon \B^{\Not\alpha} \to \Op\C$ the \emph{opposite functor}, which is the obvious functor that acts like $F$ between opposite categories. \section{Vertical compositionality of dinatural transformations}\label{section vertical compositionality} We begin by introducing the notion of \emph{transformation} between two functors of arbitrary variance and arity, which is simply a family of morphisms that does not have to satisfy any naturality condition. (This simple idea is, unsurprisingly, not new: it appears, for example, in~\cite{power_premonoidal_1997}.) A transformation comes equipped with a cospan in $\finset$ that tells us which variables of the functors involved are to be equated to each other in order to write down the general component of the family of morphisms. 
\begin{definition}\label{def:transformation} Let $\alpha$, $\beta \in \List\{+,-\}$, $F \colon \B^\alpha \to \C$, $G \colon \B^\beta \to \C$ be functors. A \emph{transformation} $\phi \colon F \to G$ \emph{of type} $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ (with $n$ a positive integer) is a family of morphisms in $\C$ \[ \bigl( \phi_{\bfA} \colon F(\bfA\sigma) \to G(\bfA\tau) \bigr)_{\bfA \in \B^n} \] (i.e., according to our notations, a family $\phi_{A_1,\dots,A_n} \colon F(A_{\sigma 1}, \dots, A_{\sigma\length\alpha}) \to G(A_{\tau1},\dots,A_{\tau\length\beta})$). Notice that $\sigma$ and $\tau$ need not be injective or surjective, so we may have repeated or unused variables. Given another transformation $\phi' \colon F' \to G'$ of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma'"] & n & \length\beta \ar[l,"\tau'"'] \end{tikzcd}, $ we say that \[ \phi \sim {\phi'} \text{ if and only if there exists } \pi \colon n \to n \text{ permutation such that } \begin{cases} \sigma' = \pi\sigma \\ \tau' = \pi\tau \\ \phi'_\bfA = \phi_{\bfA\pi} \end{cases}. \] $\sim$ so defined is an equivalence relation and we denote by $\class\phi$ the equivalence class of $\phi$. \end{definition} \begin{remark} Two transformations are equivalent precisely when they differ only by a permutation of the indices in the cospan describing their type: they are ``essentially the same''. For this reason, from now on we shall drop an explicit reference to the equivalence class $\class\phi$ and just reason with the representative $\phi$, except when defining new operations on transformations, like the vertical composition below. \end{remark} \begin{definition}\label{def:vertical composition} Let $\phi \colon F \to G$ be a transformation as in Definition~\ref{def:transformation}, let $H \colon \B^\gamma \to \C$ be a functor and $\psi \colon G \to H$ a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \ar[l,"\theta"'] \length\gamma \end{tikzcd} $. The \emph{vertical composition} $\class\psi \circ \class\phi$ is defined as the equivalence class of the transformation $\psi\circ\phi$ of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\zeta\sigma"] & l & \ar[l,"\xi\theta"'] \length\gamma \end{tikzcd} $, where $\zeta$ and $\xi$ are given by a choice of a pushout \begin{equation}\label{eqn:pushout composite type} \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr,phantom,very near end,"\ulcorner"] & m \ar[d,"\xi",dotted] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta",dotted] & l \end{tikzcd} \end{equation} and the general component $(\psi\circ\phi)_{\bfA}$, for $\bfA \in \B^l$, is the composite: \[ \begin{tikzcd} F(\bfA\zeta\sigma) \ar[r,"\phi_{\bfA\zeta}"] & G(\bfA\zeta\tau)=G(\bfA\xi\eta) \ar[r,"\psi_{\bfA\xi}"] & H(\bfA\xi\theta) \end{tikzcd}. \] (Notice that by definition $\phi_{\bfA\zeta} = \phi_{(A_{\zeta1},\dots,A_{\zeta n})}$ requires that the $i$-th variable of $F$ be the $\sigma i$-th element of the list $(A_{\zeta1},\dots,A_{\zeta n})=\bfA\zeta$, which is $A_{\zeta\sigma i}$, hence the domain of $\phi_{\bfA\zeta}$ is indeed $F(\bfA\zeta\sigma)$.) \end{definition} Before giving some examples, we introduce the definition of dinaturality of a transformation in one of its variables, as a straightforward generalisation of the classical notion of dinatural transformation in one variable. 
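Before that, here is a small computational sketch (ours, in plain Python, with no formal role in the development) of how the composite type of Definition~\ref{def:vertical composition} is obtained: the pushout in $\finset$ is the quotient of the disjoint union by the relation generated by $\tau(b) \sim \eta(b)$, implemented below with a union-find structure and instantiated on the composite of the evaluation and diagonal transformations of Examples~\ref{ex:eval} and~\ref{ex:delta} below.
\begin{verbatim}
def pushout(n, m, tau, eta):
    # pushout in FinSet of the span  n <--tau-- |beta| --eta--> m :
    # quotient of the disjoint union n + m by the relation tau(b) ~ eta(b)
    parent = {('L', i): ('L', i) for i in range(1, n + 1)}
    parent.update({('R', j): ('R', j) for j in range(1, m + 1)})

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for b in tau:                                   # b ranges over |beta|
        parent[find(('L', tau[b]))] = find(('R', eta[b]))

    classes = sorted({find(v) for v in list(parent)})
    index = {c: k + 1 for k, c in enumerate(classes)}     # l = {1, ..., len(classes)}
    zeta = {i: index[find(('L', i))] for i in range(1, n + 1)}
    xi = {j: index[find(('R', j))] for j in range(1, m + 1)}
    return len(classes), zeta, xi

# phi = eval  : T -> Id, type 3 --sigma--> 2 <--tau-- 1
# psi = delta : Id -> x, type 1 --eta--> 1 <--theta-- 2
sigma, tau = {1: 1, 2: 1, 3: 2}, {1: 2}
eta, theta = {1: 1}, {1: 1, 2: 1}

l, zeta, xi = pushout(2, 1, tau, eta)
print(l)                                    # 2
print({p: zeta[sigma[p]] for p in sigma})   # {1: 1, 2: 1, 3: 2}  (= zeta . sigma)
print({q: xi[theta[q]] for q in theta})     # {1: 2, 2: 2}        (= xi . theta)
\end{verbatim}
The composite $\class{\delta} \circ \class{\eval{}{}}$ therefore has two variables, with general component $\delta_B \circ \eval A B \colon A \times (A \implies B) \to B \times B$.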
Recall from p.~\pageref{not:A[X,Y/i]sigma} the meaning of the notation $\substMV \bfA X Y i \sigma$ for $\bfA\in\B^n$, $\sigma \colon \length\alpha \to n$ and $i \in \{1,\dots,n\}$. \begin{definition}\label{def:dinaturality in i-th variable} Let $\phi = (\phi_{A_1,\dots,A_n}) \colon F \to G$ be a transformation as in Definition~\ref{def:transformation}. For $i \in \{1,\dots,n\}$, we say that $\phi$ is \emph{dinatural in $A_i$} (or, more precisely, \emph{dinatural in its $i$-th variable}) if and only if for all $A_1,\dots,A_{i-1}, A_{i+1},\dots,A_n$ objects of $\B$ and for all $f \colon A \to B$ in $\B$ the following hexagon commutes:
\[ \begin{tikzcd} & F(\subst \bfA A i \sigma) \ar[r,"\phi_{\subst \bfA A i}"] & G(\subst \bfA A i \tau) \ar[dr,"G(\substMV \bfA A f i \tau)"] \\ F(\substMV \bfA B A i \sigma) \ar[ur,"F(\substMV \bfA f A i \sigma)"] \ar[dr,"F(\substMV \bfA B f i \sigma)"'] & & & G(\substMV \bfA A B i \tau) \\ & F(\subst \bfA B i \sigma) \ar[r,"\phi_{\subst \bfA B i}"'] & G(\subst \bfA B i \tau) \ar[ur,"G(\substMV \bfA f B i \tau)"'] \end{tikzcd} \]
where $\bfA$ denotes the $n$-tuple $(A_1,\dots,A_n)$ formed by the objects above, with an arbitrary (and here irrelevant) object $A_i$ of $\B$ in the $i$-th position. \end{definition} Definition~\ref{def:dinaturality in i-th variable} reduces to the well-known notion of dinatural transformation when $\alpha=\beta=[-,+]$ and $n=1$. Our generalisation allows multiple variables at once and the possibility for $F$ and $G$ of having an arbitrary number of copies of $\B$ and $\Op\B$ in their domain, for each variable $i \in \{1,\dots,n\}$.
\begin{example}\label{ex:delta} Let $\C$ be a cartesian category. The diagonal transformation $\delta=(\delta_A \colon A \to A \times A)_{A \in \C}$, classically a natural transformation from $\id\C$ to the diagonal functor, can be equivalently seen in our notations as a transformation $\delta \colon \id\C \to \times$ of type $ \begin{tikzcd}[cramped,sep=small] 1 \ar[r] & 1 & \ar[l] 2 \end{tikzcd}. $ Of course $\delta$ is dinatural (in fact, natural) in its only variable. \end{example}
\begin{example}\label{ex:eval} Let $\C$ be a cartesian closed category and consider the functor \[ \begin{tikzcd}[row sep=0em] \C \times \Op\C \times \C \ar[r,"T"] & \C \\ (X,Y,Z) \ar[r,|->] & X \times (Y \Rightarrow Z) \end{tikzcd} \] The evaluation $ \eval{}{} = \left(\eval A B \colon A \times (A \implies B) \to B\right)_{A,B \in \C} \colon T \to \id\C $ is a transformation of type \[ \begin{tikzcd}[row sep=0em] 3 \ar[r] & 2 & 1 \ar[l] \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210]& 2 & \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] which is dinatural in both its variables. \end{example}
\begin{example}\label{ex:Church numeral} Let $\C$ be any category, and call $\hom\C \colon \Op \C \times \C \to \Set$ the hom-functor of $\C$. The $n$-th numeral~\cite{dubuc_dinatural_1970}, for $n \in \N$, is the transformation $n \colon \hom\C \to \hom\C$ of type $ \begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} $ whose general component $n_A \colon \C(A,A) \to \C(A,A)$ is given, for $A \in \C$ and $g \colon A \to A$, by \[ n_A (g) = g^n, \] with $0_A (g) = \id A$.
Then $n$ is dinatural because for all $f \colon A \to B$ the following hexagon commutes: \[ \begin{tikzcd} & \C(B,B) \ar[r,"n_B"] & \C(B,B) \ar[dr,"-\circ f"] \\ \C(B,A) \ar[ur,"f\circ -"] \ar[dr,"-\circ f"'] & & & \C(A,B) \\ & \C(A,A) \ar[r,"n_A"'] & \C(A,A) \ar[ur,"f \circ -"'] \end{tikzcd} \] It is indeed true that for $h \colon B \to A$, $(f \circ h)^n \circ f = f \circ (h \circ f)^n$: for $n=0$ it follows from the identity axiom; for $n \ge 1$ it is a consequence of associativity of composition. \end{example} \paragraph{The graph of a transformation} Given a transformation $\phi$, we now define a graph that reflects its signature, which we shall use to prove our version of Petri\'c's theorem on compositionality of dinatural transformations~\cite{petric_g-dinaturality_2003}. This graph is, as a matter of fact, a \emph{string diagram} for the transformation. String diagrams were introduced by Eilenberg and Kelly in~\cite{eilenberg_generalization_1966} (indeed our graphs are inspired by theirs) and have had a great success in the study of coherence problems (\cite{kelly_coherence_1980,mac_lane_natural_1963}) and monoidal categories in general (\cite{joyal_geometry_1991,joyal_traced_1996}, a nice survey can be found in~\cite{selinger_survey_2010}). \begin{definition}\label{def:standard graph} Let $F \colon \B^\alpha \to \C$ and $G \colon \B^\beta \to \C$ be functors, and let $\phi \colon F \to G$ be a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \ar[l,"\tau"'] \length\beta \end{tikzcd} $. We define its \emph{standard graph} $\graph\phi = (P,T,\inp{(-)},\out{(-)})$ as a directed, bipartite graph as follows: \begin{itemize} \item $P=\length{\length\alpha + \length\beta}$ and $T=n$ are distinct finite sets of vertices; \item $\inp{(-)},\out{(-)} \colon T \to \parts P$ are the input and output functions for elements in $T$: there is an arc from $p \in P$ to $t \in T$ if and only if $p \in \inp t$, and there is an arc from $t$ to $p$ if and only if $p \in \out t$. Indicating with $\injP {\length\alpha} \colon \length\alpha \to P$ and $\injP {\length\beta} \colon \length\beta \to P$ the injections defined as follows: \[ \injP{\length\alpha} (x) = x, \quad \injP{\length\beta} (x) = \length\alpha + x, \] we have: \begin{align*} \inp{t} &= \{ \injP {\length\alpha} (p) \mid \sigma (p) = t,\, \alpha_p = + \} \, \cup \, \{ \injP {\length\beta} (p) \mid \tau (p) = t,\, \beta_p = - \} \\ \out{t} &= \{ \injP {\length\alpha} (p) \mid \sigma(p) = t,\, \alpha_p = - \} \, \cup \, \{ \injP {\length\beta} (p) \mid \tau (p) = t,\, \beta_p = + \} \end{align*} \end{itemize} In other words, elements of $P$ correspond to the arguments of $F$ and $G$, while those of $T$ to the variables of $\phi$. For $t \in T$, its inputs are the covariant arguments of $F$ and the contravariant arguments of $G$ which are mapped by $\sigma$ and $\tau$ to $t$; similarly for its outputs (swapping `covariant' and `contravariant'). \end{definition} Graphically, we draw elements of $P$ as white or grey boxes (if corresponding to a covariant or contravariant argument of a functor, respectively), and elements of $T$ as black squares. The boxes for the domain functor are drawn at the top, while those for the codomain at the bottom; the black squares in the middle. 
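The construction in Definition~\ref{def:standard graph} is purely combinatorial; the following minimal Python sketch (the encoding is ours and purely illustrative) computes the places, transitions and input/output sets of $\graph\phi$ from the variance lists and the type of $\phi$, and recovers the graph of the numerals of Example~\ref{ex:Church numeral}.
\begin{verbatim}
# Illustrative sketch (encoding ours): variance lists alpha, beta are strings
# over '+'/'-', and sigma, tau are 1-based lists of length |alpha|, |beta|.
# Places 1..|alpha| are the arguments of F, places |alpha|+1..|alpha|+|beta|
# those of G; transitions are the variables 1..n.

def standard_graph(alpha, beta, sigma, tau, n):
    places = list(range(1, len(alpha) + len(beta) + 1))
    transitions = list(range(1, n + 1))
    inputs = {t: set() for t in transitions}
    outputs = {t: set() for t in transitions}
    for p, (s, v) in enumerate(zip(sigma, alpha), start=1):   # F-boxes
        (inputs if v == '+' else outputs)[s].add(p)
    for p, (s, v) in enumerate(zip(tau, beta), start=1):      # G-boxes
        (inputs if v == '-' else outputs)[s].add(len(alpha) + p)
    return places, transitions, inputs, outputs

# The n-th numeral (hom-functor on both sides, one variable): its only
# transition has inputs {2, 3} and outputs {1, 4}, as in the picture below.
print(standard_graph("-+", "-+", [1, 1], [1, 1], 1))
\end{verbatim}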
The graphs of the transformations given in examples \ref{ex:delta}-\ref{ex:Church numeral} are the following: \begin{itemize} \item $\delta=(\delta_A \colon A \to A \times A)_{A \in \C}$ (example \ref{ex:delta}): \[ \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {}; \\ & \node (A) [component] {}; \\ \node (2) [category] {}; & & \node (3) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] \item $\eval{}{} = \left(\eval A B \colon A \times (A \implies B) \to B\right)_{A,B \in \C}$ (example \ref{ex:eval}): \[ \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \] \item $n=(n_A \colon \C(A,A) \to \C(A,A))_{A \in \C}$ (example \ref{ex:Church numeral}): \[ \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {}; & & \node (2) [category] {};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {}; & & \node (4) [category] {};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \] \end{itemize} \begin{remark} Each connected component of $\graph\phi$ corresponds to one variable of $\phi$: the arguments of the domain and codomain of $\phi$ corresponding to (white, grey) boxes belonging to the same connected component are all computed on the same object, when we write down the general component of $\phi$. \end{remark} \label{discussion:informal-reading-morphisms-in-a-box}This graphical counterpart of a transformation $\phi \colon F \to G$ permits us to represent, in an informal fashion, the dinaturality properties of $\phi$. By writing inside a box a morphism $f$ and reading a graph from top to bottom as ``compute $F$ in the morphisms as they are written in its corresponding boxes, compose that with an appropriate component of $\phi$, and compose that with $G$ computed in the morphisms as they are written in its boxes (treating an empty box as an identity)'', we can express the commutativity of a dinaturality diagram as an informal equation of graphs. (We shall make this precise in Proposition~\ref{prop:fired labelled marking is equal to original one}.) 
For instance, the dinaturality of examples~\ref{ex:delta}-\ref{ex:Church numeral} can be depicted as follows, where the upper leg of the diagrams are the left-hand sides of the equations: \begin{itemize} \item $\delta=(\delta_A \colon A \to A \times A)_{A \in \C}$ (example \ref{ex:delta}): \[ \begin{tikzcd} A \ar[r,"f"] \ar[d,"\delta_A"'] & B \ar[d,"\delta_B"] \\ A \times A \ar[r,"f \times f"] & B \times B \end{tikzcd} \qquad \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {$f$}; \\ & \node (A) [component] {}; \\ \node (2) [category] {}; & & \node (3) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {}; \\ & \node (A) [component] {}; \\ \node (2) [category] {$f$}; & & \node (3) [category] {$f$}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] \item $\eval{}{} = \left(\eval A B \colon A \times (A \implies B) \to B\right)_{A,B \in \C}$ (example \ref{ex:eval}): \[\scriptstyle{ \begin{tikzcd} A \times (A' \implies B) \ar[r,"f\times (1 \implies 1)"] \ar[d,"1\times(f\implies 1)"'] & A' \times (A' \implies B) \ar[d,"\eval {A'} B"] \\ A \times (A \implies B) \ar[r,"\eval A B"] & B \end{tikzcd}} \qquad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {$f$}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {}; & & \node (2) [opCategory] {$f$}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \] \[\scriptstyle{ \begin{tikzcd} A \times (A \implies B) \ar[r,"1\times(1\implies g)"] \ar[d,"\eval A B"'] & A \times (A \implies B') \ar[d,"\eval A {B'}"] \\ B \ar[r,"g"] & B' \end{tikzcd}} \qquad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {$g$}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=.5em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {$g$}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture} \] \item $n=(n_A \colon \C(A,A) \to \C(A,A))_{A \in \C}$ (example \ref{ex:Church numeral}): \[\scriptstyle{ \begin{tikzcd}[column sep={.5cm}] & \C(B,B) \ar[r,"n_B"] & \C(B,B) \ar[dr,"{\C(f,1)}"] \\ \C(B,A) \ar[ur,"{\C(1,f)}"] \ar[dr,"{\C(f,1)}"'] & & & \C(A,B) \\ & \C(A,A) \ar[r,"n_A"] & \C(A,A) \ar[ur,"{\C(1,f)}"'] \end{tikzcd}} \qquad \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {}; & & \node (2) [category] {$f$};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {$f$}; & & \node (4) [category] {};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) 
[opCategory] {$f$}; & & \node (2) [category] {};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {}; & & \node (4) [category] {$f$};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \] \end{itemize} All in all, the dinaturality condition becomes, in graphical terms, as follows: \emph{$\phi$ is dinatural if and only if having in $\graph\phi$ one $f$ in all white boxes at the top and grey boxes at the bottom is the same as having one $f$ in all grey boxes at the top and white boxes at the bottom}. Not only does $\graph\phi$ give an intuitive representation of the dinaturality properties of $\phi$, but also of the process of composition of transformations. Given two transformations $\phi \colon F \to G$ and $\psi \colon G \to H$ as in Definition~\ref{def:vertical composition}, the act of computing the pushout~(\ref{eqn:pushout composite type}) corresponds to ``glueing together'' $\graph\phi$ and $\graph\psi$ along the boxes corresponding to the functor $G$ (more precisely, one takes the disjoint union of $\graph\phi$ and $\graph\psi$ and then identifies the $G$-boxes), obtaining a composite graph which we will call ${\graph\psi} \circ {\graph\phi}$. The number of its connected components is, indeed, the result of the pushout. That being done, $\graph{\psi\circ\phi}$ is obtained by collapsing each connected component of $\graph\psi\circ\graph\phi$ into a single black square together with the $F$- and $H$-boxes. The following example shows this process. The graph $\graph\psi\circ\graph\phi$ will play a crucial role into the compositionality problem of $\psi\circ\phi$. \begin{example}\label{ex:acyclic-example} Suppose that $\C$ is cartesian closed, fix an object $R$ in $\C$, consider functors \[ \begin{tikzcd}[row sep=0em,column sep=1em] \C \times \Op\C \ar[r,"F"] & \C \\ (A,B) \ar[r,|->] & A \times (B \Rightarrow R) \end{tikzcd} \quad \begin{tikzcd}[row sep=0em,column sep=1em] \C \times \C \times \Op\C \ar[r,"G"] & \C \\ (A,B,C) \ar[r,|->] & A \times B \times (C \Rightarrow R) \end{tikzcd} \quad \begin{tikzcd}[row sep=0em,column sep=1.5em] \C \ar[r,"H"] & \C \\ A \ar[r,|->] & A \times R \end{tikzcd} \] and transformations $\phi = \delta \times \id{(-)\Rightarrow R} \colon F \to G$ and $\psi = \id\C \times \eval {(-)} R \colon G \to H$ of types, respectively, \[ \begin{tikzcd}[row sep=0em] 2 \ar[r,"\sigma"] & 2 & \ar[l,"\tau"'] 3 \\ 1 \ar[r,|->] & 1 & \ar[l,|->] 1 \\[-3pt] 2 \ar[r,|->] & 2 & \ar[ul,|->,out=180,in=-30] 2 \\[-3pt] & & \ar[ul,|->,out=180,in=-20] 3 \end{tikzcd} \quad\text{and}\quad \begin{tikzcd}[row sep=0em] 3 \ar[r,"\eta"] & 2 & \ar[l,"\theta"'] 1 \\ 1 \ar[r,|->] & 1 & \ar[l,|->] 1 \\[-3pt] 2 \ar[r,|->] & 2 \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] so that \[ \phi_{A,B} = \delta_A \times \id{B\implies R} \colon F(A,B) \to G(A,A,B), \, \psi_{A,B} = \id A \times \eval B R \colon G(A,B,B) \to H(A). 
\] Then $\psi \circ \phi$ has type $\begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 1 \end{tikzcd}$ and $\graph{\psi}\circ\graph{\phi}$ is: \[ \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ & \node (A) [category] {}; & & & \node(F) [opCategory] {};\\ & \node (B) [component] {}; & & & \node(J) [component] {};\\ \node (C) [category] {}; & & \node(D) [category] {}; & & \node(E) [opCategory] {};\\ \node (H) [component] {}; & & & \node(I) [component] {};\\ \node (G) [category] {}; & & & \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \] The two upper boxes at the top correspond to the arguments of $F$, the three in the middle to the arguments of $G$, and the bottom one to the only argument of $H$. This is a connected graph (indeed, $\psi\circ\phi$ depends only on one variable) and by collapsing it into a single black box we obtain $\graph{\psi\circ\phi}$ as it is according to Definition~\ref{def:standard graph}: \[ \begin{tikzpicture} \matrix[column sep=.5em,row sep=1em]{ \node (1) [category] {}; & & \node (2) [opCategory] {};\\ & \node (A) [component] {}; \\ & \node (3) [category] {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] We have that $\psi \circ \phi$ is a dinatural transformation. (This is one of the transformations studied by Girard, Scedrov and Scott in~\cite{girard_normal_1992}.) The following string-diagrammatic argument proves that: \[ \begin{split} \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {$f$}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \quad &= \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {$f$}; \& \& \node(D) [category] {$f$}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {$f$}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {$f$}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \\ &= \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {}; \& \& \node(E) [opCategory] {$f$};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {$f$}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> 
G; D -> I -> E -> J -> F; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture}[ampersand replacement=\&] \matrix[column sep=2.4mm,row sep=0.4cm]{ \& \node (A) [category] {}; \& \& \& \node(F) [opCategory] {$f$};\\ \& \node (B) [component] {}; \& \& \& \node(J) [component] {};\\ \node (C) [category] {}; \& \& \node(D) [category] {}; \& \& \node(E) [opCategory] {};\\ \node (H) [component] {}; \& \& \& \node(I) [component] {};\\ \node (G) [category] {$f$}; \& \& \& \\ }; \graph[use existing nodes]{ A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F; }; \end{tikzpicture} \end{split} \] The first equation is due to dinaturality of $\phi$ in its first variable; the second to dinaturality of $\psi$ in its first variable; the third to dinaturality of $\psi$ in its second variable; the fourth equation holds by dinaturality of $\phi$ in its second variable. \end{example} The string-diagrammatic argument above is the essence of our proof of Petrić's theorem: we will interpret $\graph\psi \circ \graph\phi$, for arbitrary transformations $\phi$ and $\psi$ as a \emph{Petri Net} whose set of places is $P$ and of transitions is $T$. The dinaturality of $\psi\circ\phi$ will be expressed as a reachability problem and we will prove that, if $\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is always dinatural because we can always ``move the $f$'s'' from the upper-white boxes and lower-grey boxes all the way to the upper-grey boxes and lower-white boxes, as we did in Example~\ref{ex:acyclic-example}. \paragraph{Petri Nets}\label{section: Petri Nets} Petri Nets were invented by Carl Adam Petri in 1962 in \cite{petri_kommunikation_1962}, and have been used since then to model concurrent systems, resource sensitivity and many dynamic systems. A nice survey of their properties was written by Murata in \cite{murata_petri_1989}, to which we refer the reader for more details and examples. Here we shall limit ourselves only to the definitions and the properties of which we will make use in the paper. \begin{definition}\label{def:Petri Net} A \emph{Petri Net} $N$ is a tuple $(P,T,\inp{(-)},\out{(-)})$ where $P$ and $T$ are distinct, finite sets, and $\inp{(-)},\out{(-)}\colon T \to \parts{P}$ are functions. Elements of $P$ are called \emph{places}, while elements of $T$ are called \emph{transitions}. For $t$ a transition, $\inp t$ is the set of \emph{inputs} of $t$, and $\out t$ is the set of its \emph{outputs}. A \emph{marking} for $N$ is a function $M \colon P \to \N$. \end{definition} Graphically, the elements of $P$ and $T$ are drawn as light-blue circles and black bars respectively. Notice that the graph of a transformation is, as a matter of fact, a Petri Net. We can represent a marking $M$ by drawing, in each place $p$, $M(p)$ \emph{tokens} (black dots). Note that there is at most one arrow from a node to another. With little abuse of notation, we extend the input and output notation for places too, where \[ \inp p = \{ t \in T \mid p \in \out{t} \}, \qquad \out p = \{ t \in T \mid p \in \inp t \}. \] A pair of a place $p$ and a transition $t$ where $p$ is both an input and an output of $t$ is called \emph{self-loop}. For the purposes of this article, we shall only consider Petri Nets that contain no self-loops. \begin{definition} Let $N$ be a Petri Net. A place $p$ of $N$ is said to be a \emph{source} if $\inp p = \emptyset$, whereas is said to be a \emph{sink} if $\out p = \emptyset$. 
A source (or sink) place $p$ is said to be \emph{proper} if $\out p \ne \emptyset$ (or $\inp p \ne \emptyset$, respectively). \end{definition} We shall need a notion of (directed) path in a Petri Net, which we introduce now. It coincides with the usual notion of path in a graph. \begin{definition} Let $N$ be a Petri Net. A \emph{path} from a vertex $v$ to a vertex $w$ is a finite sequence of vertices $\pi=(v_0,\dots,v_l)$ where $l \ge 1$, $v_0=v$, $v_l=w$ and for all $i \in \{0,\dots,l-1\}$ $v_{i+1} \in v_i \! \LargerCdot \! \cup \! \LargerCdot \! v_i $. Two vertices are said to be \emph{connected} if there is a path from one to the other. If every vertex in $N$ is connected with every other vertex, then $N$ is said to be \emph{weakly connected}. A \emph{directed path} from a vertex $v$ to a vertex $w$ is a finite sequence of vertices $\pi=(v_0,\dots,v_l)$ such that $v=v_0$, $w=v_l$ and for all $i \in \{0,\dots,l-1\}\,$ $v_{i+1} \in v_i \! \LargerCdot \!$. In this case we say that the path $\pi$ has length $l$. A directed path from a vertex to itself is called a \emph{cycle}, or \emph{loop}; if $N$ does not have cycles, then it is said to be \emph{acyclic}. Two vertices $v$ and $w$ are said to be \emph{directly connected} if there is a directed path either from $v$ to $w$ or from $w$ to $v$. \end{definition} We can give a dynamic flavour to Petri Nets by allowing the tokens to “flow” through the nets, that is allowing markings to change according to the following \emph{transition firing rule}. \begin{definition} Let $N=(P,T,\inp{(-)},\out{(-)})$ be a Petri Net, and $M$ a marking for $N$. A transition $t$ is said to be \emph{enabled} if and only if for all $p \in \inp t$ we have $M(p) \ge 1$. An enabled transition may \emph{fire}; the firing of an enabled transition $t$ removes one token from each $p \in \inp t$ and adds one token to each $p \in \out t$, generating the following new marking $M'$: \[ M'(p) = \begin{cases} M(p) -1 & p \in \inp t \\ M(p)+1 & p \in \out t \\ M(p) & \text{otherwise} \end{cases} \] \end{definition} \begin{example}\label{my-example} Consider the following net: \[ \begin{tikzpicture}[yscale=0.5,xscale=0.70] \foreach \i/\u in {1/1,2/1,3/2} { \foreach \j/\v in {1/0,2/0,3/0,4/1} { \node[place,tokens=\u,label=above:$p_\i$](p\i) at (2*\i,0){}; \node[place,tokens=\v,label=below:$q_\j$](q\j) at (2*\j-2,-4){}; \node[place,label=below:$q_5$](q5) at (8,-4){}; \node[place,tokens=1,label=above:$p_4$](p4) at (8,0){}; \node[place,label=above:$p_5$](p5) at (10,0){}; \node[transition,label=right:{$t$}] at (4,-2) {} edge [pre] (p\i) edge [post] (q\j) edge [post] (q5); \node[transition,label=right:$t'$] at (10,-2) {} edge [pre] (q5) edge [pre] (p4) edge [post] (p5); } } \end{tikzpicture} \] There are two transitions, $t$ and $t'$, but only $t$ is enabled. 
Firing $t$ will change the state of the net as follows: \[ \begin{tikzpicture}[yscale=0.5,xscale=0.70] \foreach \i/\u in {1/0,2/0,3/1} { \foreach \j/\v in {1/1,2/1,3/1,4/2} { \node[place,tokens=\u,label=above:$p_\i$](p\i) at (2*\i,0){}; \node[place,tokens=\v,label=below:$q_\j$](q\j) at (2*\j-2,-4){}; \node[place,tokens=1,label=below:$q_5$](q5) at (8,-4){}; \node[place,tokens=1,label=above:$p_4$](p4) at (8,0){}; \node[place,label=above:$p_5$](p5) at (10,0){}; \node[transition,label=right:{$t$}] at (4,-2) {} edge [pre] (p\i) edge [post] (q\j) edge [post] (q5); \node[transition,label=right:$t'$] at (10,-2) {} edge [pre] (q5) edge [pre] (p4) edge [post] (p5); } } \end{tikzpicture} \] Now $t$ is disabled, but $t'$ is enabled, and by firing it we obtain: \[ \begin{tikzpicture}[yscale=0.5,xscale=0.70] \foreach \i/\u in {1/0,2/0,3/1} { \foreach \j/\v in {1/1,2/1,3/1,4/2} { \node[place,tokens=\u,label=above:$p_\i$](p\i) at (2*\i,0){}; \node[place,tokens=\v,label=below:$q_\j$](q\j) at (2*\j-2,-4){}; \node[place,label=below:$q_5$](q5) at (8,-4){}; \node[place,label=above:$p_4$](p4) at (8,0){}; \node[place,tokens=1,label=above:$p_5$](p5) at (10,0){}; \node[transition,label=right:{$t$}] at (4,-2) {} edge [pre] (p\i) edge [post] (q\j) edge [post] (q5); \node[transition,label=right:$t'$] at (10,-2) {} edge [pre] (q5) edge [pre] (p4) edge [post] (p5); } } \end{tikzpicture} \] \end{example} \paragraph{The reachability problem and dinaturality} Suppose we have a Petri Net $N$ and an initial marking $M_0$. The firing of an enabled transition in $N$ will change the distribution of tokens from $M_0$ to $M_1$, according to the firing transition rule, therefore a sequence of firings of enabled transitions yields a sequence of markings. A \emph{firing sequence} is denoted by $\sigma = (t_0,\dots,t_n)$ where the $t_i$'s are transitions which fire. \begin{definition} A marking $M$ for a Petri Net $N$ is said to be \emph{reachable} from a marking $M_0$ if there exists a firing sequence $(t_1,\dots,t_n)$ and markings $M_1,\dots,M_n$ where $M_i$ is obtained from $M_{i-1}$ by firing transition $t_i$, for $i \in \{1,\dots,n\}$, and $M_{n}=M$. \end{definition} The reachability problem for Petri Nets consists in checking whether a marking $M$ is or is not reachable from $M_0$. It has been shown that the reachability problem is decidable \cite{kosaraju_decidability_1982,mayr_algorithm_1981}. \begin{remark}\label{rem:preliminary-discussion} The crucial observation that will be at the core of our proof of Petri\'c's theorem is that the firing of an enabled transition in the graph of a dinatural transformation $\phi$ corresponds, under certain circumstances, to the dinaturality condition of $\phi$ in one of its variables. Take, for instance, the $n$-th numeral transformation (see example~\ref{ex:Church numeral}). 
Call the only transition $t$, and consider the following marking $M_0$: \[ \begin{tikzpicture}[scale=0.7] \node[opCategory] (1) at (-1,1) {}; \node[category,tokens=1] (2) at (1,1) {}; \node[opCategory,tokens=1] (3) at (-1,-1) {}; \node[category] (4) at (1,-1) {}; \node[component,label=left:$t$] {} edge[pre] (2) edge[pre] (3) edge[post] (1) edge[post] (4); \end{tikzpicture} \] Transition $t$ is enabled, and once it fires we obtain the following marking $M_1$: \[ \begin{tikzpicture}[scale=0.7] \node[opCategory] (1) at (-1,1) {}; \node[category,tokens=1] (2) at (1,1) {}; \node[opCategory,tokens=1] (3) at (-1,-1) {}; \node[category] (4) at (1,-1) {}; \node[component,label=left:$t$] {} edge[pre] (2) edge[pre] (3) edge[post] (1) edge[post] (4); \draw[->,snake=snake,segment amplitude=.4mm,segment length=2mm,line after snake=1mm] (1.5,0) -- node[above]{$t$} node[below]{fires} (3.5,0); \begin{scope}[xshift=5cm] \node[opCategory,tokens=1] (1) at (-1,1) {}; \node[category] (2) at (1,1) {}; \node[opCategory] (3) at (-1,-1) {}; \node[category,tokens=1] (4) at (1,-1) {}; \node[component,label=left:$t$] {} edge[pre] (2) edge[pre] (3) edge[post] (1) edge[post] (4); \end{scope} \end{tikzpicture} \] The striking resemblance with the graphical version of the dinaturality condition for $n$ is evident: \[ \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {}; & & \node (2) [category] {$f$};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {$f$}; & & \node (4) [category] {};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \quad = \quad \begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [opCategory] {$f$}; & & \node (2) [category] {};\\ & \node (A) [component] {};\\ \node (3) [opCategory] {}; & & \node (4) [category] {$f$};\\ }; \graph[use existing nodes]{ 2 -> A -> 1; 3 -> A -> 4; }; \end{tikzpicture} \] By treating the ``morphism $f$ in a box'' as a ``token in a place'' of $\graph n$, we have seen that the firing of $t$ generates an equation in $\Set$, namely the one that expresses the dinaturality of $n$. \end{remark} Suppose now we have two composable transformations $\phi$ and $\psi$ dinatural in all their variables, in a category $\C$, together with a graph. We shall make precise how certain markings of $\graph\psi\circ\graph\phi$ correspond to morphisms in $\C$, and how the firing of an enabled transition corresponds to applying the dinaturality of $\phi$ or $\psi$ in one of their variables, thus creating an equation of morphisms in $\C$. Therefore, if the firing of a single transition generates an equality in the category, a sequence of firings of enabled transitions yields a chain of equalities. By individuating two markings $M_0$ and $M_d$, each corresponding to a leg of the dinaturality hexagon for $\psi\circ\phi$ we want to prove is commutative, and by showing that $M_d$ is reachable from $M_0$, we shall have proved that $\psi\circ\phi$ is dinatural. We are now ready to present and prove the first main result of this article. For the rest of this section, fix transformations $\phi \colon F_1 \to F_2$ and $\psi \colon F_2 \to F_3$ where \begin{itemize} \item $F_i \colon \B^{\alpha^i} \to \C$ is a functor for all $i \in \{1,2,3\}$, \item $\phi$ and $\psi$ have type, respectively, \[ \begin{tikzcd} \length{\alpha^1} \ar[r,"\sigma_1"] & k_1 & \length{\alpha^{2}} \ar[l,"\tau_1"'] \end{tikzcd} \qquad \text{and} \qquad \begin{tikzcd} \length{\alpha^2} \ar[r,"\sigma_2"] & k_2 & \length{\alpha^{3}}. 
\ar[l,"\tau_2"'] \end{tikzcd} \] \end{itemize} We shall establish a sufficient condition for the dinaturality of $\psi \circ \phi$ in some of its variables. However, since we are interested in analysing the dinaturality of the composition in each of its variables \emph{separately}, we start by assuming that $\psi\circ\phi$ depends on only one variable, i.e. has type $ \begin{tikzcd}[cramped,sep=small] \length{\alpha^1} \ar[r] & 1 & \length{\alpha^{3}} \ar[l], \end{tikzcd} $ and that $\phi$ and $\psi$ are dinatural in all their variables. In this case, we have to show that the following hexagon commutes for all $f \colon A \to B$, recalling that $\funminplusconst {F_1} B A$ is the result of applying functor $F_1$ in $B$ in all its contravariant arguments and in $A$ in all its covariant ones: \begin{equation}\label{eqn:compositionality-hexagon} \begin{tikzcd}[column sep=1cm] & \funminplusconst {F_1} A A \ar[r,"\phi_{A\dots A}"] & \funminplusconst {F_2} A A \ar[r,"\psi_{A \dots A}"] & \funminplusconst {F_3} A A \ar[dr,"\funminplusconst {F_3} 1 f"] \\ \funminplusconst {F_1} B A \ar[ur,"\funminplusconst {F_1} f 1"] \ar[dr,"\funminplusconst {F_1} 1 f"'] & & & & \funminplusconst {F_3} A B \\ & \funminplusconst {F_1} B B \ar[r,"\phi_{B\dots B}"'] & \funminplusconst {F_2} B B \ar[r,"\psi_{B \dots B}"'] & \funminplusconst {F_3} B B \ar[ur,"\funminplusconst {F_3} f 1"'] \end{tikzcd} \end{equation} The theorem we want to prove is then the following. \begin{theorem}\label{theorem:acyclic implies dinatural} Let $\phi$ and $\psi$ be transformations which are dinatural in all their variables and such that $\psi\circ\phi$ depends on only one variable. If \,$\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is a dinatural transformation. \end{theorem} The above is a direct generalisation of Eilenberg and Kelly's result on \emph{extranatural transformations} \cite{eilenberg_generalization_1966}, which are dinatural transformations where either the domain or the codomain functor is constant. For example, $\eval{}{}$ is extranatural in its first variable. They worked with the additional assumption that $\graph\phi$ and $\graph\psi$ do not contain any ramifications, that is, the white and grey boxes are always linked in pairs, and they also proved that if the composite graph is acyclic, then the composite transformation is again extranatural. Their condition is also ``essentially necessary'' in the sense that if we do create a cycle upon constructing $\graph\psi \circ \graph\phi$, then that means we are in a situation like this: \[ \begin{tikzpicture} \matrix[column sep=1em,row sep=1em]{ & \node[component] (A) {}; \\ \node[opCategory] (1) {}; & & \node[category] (2) {};\\ & \node[component] (B) {};\\ }; \graph[use existing nodes]{ 1 -> A -> 2 -> B -> 1; }; \end{tikzpicture} \] where we have a transformation between constant functors. Such a family of morphisms is (extra)natural precisely when it is constant (that is, if every component is equal to the same morphism) on each connected component of the domain category. As already said in Remark~\ref{rem:preliminary-discussion}, the key to prove this theorem is to see $\graph\psi \circ \graph\phi$ as a Petri Net, reducing the dinaturality of $\psi\circ\phi$ to the reachability problem for two markings we shall individuate. 
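Before doing so, we record a minimal computational reading of the firing rule recalled above (cf.\ Remark~\ref{rem:preliminary-discussion}); the Python sketch below uses our own, purely illustrative encoding of nets and markings and will serve as a reference point when we phrase dinaturality as a reachability property.
\begin{verbatim}
# Illustrative sketch (encoding ours): a net is given by the input and output
# place-sets of each transition, and a marking is a map  place -> tokens.

def enabled(inputs, marking, t):
    return all(marking[p] >= 1 for p in inputs[t])

def fire(inputs, outputs, marking, t):
    assert enabled(inputs, marking, t)
    new = dict(marking)
    for p in inputs[t]:
        new[p] -= 1
    for p in outputs[t]:
        new[p] += 1
    return new

# The graph of the n-th numeral, read as a net: firing its only transition
# moves the tokens exactly as in its dinaturality hexagon.
inputs, outputs = {1: {2, 3}}, {1: {1, 4}}
M0 = {1: 0, 2: 1, 3: 1, 4: 0}
print(fire(inputs, outputs, M0, 1))   # {1: 1, 2: 0, 3: 0, 4: 1}
\end{verbatim}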
We begin by unfolding the definition of $\graph\psi \circ \graph\phi$: we have $\graph\psi \circ \graph\phi = (P,T,\inp{(-)},\out{(-)})$ where $P = \length{\alpha^1} + \length{\alpha^2} + \length{\alpha^{3}}$, $T = k_1 + k_2$ and, indicating with $\injP i \colon \length{\alpha^i} \to P$ and $\injT i \colon k_i \to T$ the injections defined similarly to $\injP{\length\alpha}$ and $\injP{\length\beta}$ in Definition~\ref{def:standard graph}, \begin{equation}\label{input-output-transitions} \begin{aligned} \inp{(\injT i (t))} &= \, \{ \injP i (p) \mid \sigma_i(p) = t,\, \alpha^i_p = + \} \, \cup \, \{ \injP {i+1} (p) \mid \tau_i(p) = t,\, \alpha^{i+1}_p = - \}, \\ \out{(\injT i (t))} &= \, \{ \injP i (p) \mid \sigma_i(p) = t,\, \alpha^i_p = - \} \, \cup \, \{ \injP {i+1} (p) \mid \tau_i(p) = t,\, \alpha^{i+1}_p = + \}. \end{aligned} \end{equation} For the rest of this section, we shall reserve the names $P$ and $T$ for the sets of places and transitions of $\graph\psi \circ \graph\phi$. \begin{remark}\label{rem:graph of a transformation is FBCF} Since $\sigma_i$ and $\tau_i$ are functions, we have that $\length{\inp p}, \length{\out p} \le 1$ and also that $\length{\inp p \cup \out p }\ge 1$ for all $p\in P$. With a little abuse of notation then, if $\inp p = \{t\}$ then we shall simply write $\inp p = t$, and similarly for $\out p$. \end{remark} \paragraph{Labelled markings as morphisms} We now show how to formally translate certain markings of $\graph\psi \circ \graph\phi$ in actual morphisms of $\C$. The idea is to treat every token in the net as a fixed, arbitrary morphism $f \colon A \to B$ of $\C$ and then use the idea discussed on p.~\pageref{discussion:informal-reading-morphisms-in-a-box}. However, not all possible markings of $\graph\psi \circ \graph\phi$ have a corresponding morphism in $\C$. For example, if $M$ is a marking and $p$ is a place such that $M(p)>1$, it makes no sense to ``compute a functor $F_i$ in $f$ twice'' in the argument of $F_i$ corresponding to $p$. Hence, only markings $M \colon P \to \{0,1\}$ can be considered. Moreover, we have to be careful with \emph{where} the marking puts tokens: if a token corresponds to a morphism $f \colon A \to B$, we have to make sure that there are no two consecutive tokens (more generally, we have to make sure that there is at most one token in every directed path), otherwise a naive attempt to assign a morphism to that marking might end up with type-checking problems. For instance, consider the diagonal transformation in a Cartesian category $\C$ (example \ref{ex:delta}) and the following marking: \[ \begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category,tokens=1] {}; \\ & \node (A) [component] {}; \\ \node (2) [category,tokens=1] {}; & & \node (3) [category,tokens=1] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture} \] The token on the top white box should be interpreted as $\id\C(f) \colon A \to B$, hence the black middle box should correspond to the $B$-th component of the family $\delta$, that is $\delta_B \colon B \to B \times B$. However, the bottom two white boxes are read as $f \times f \colon A \times A \to B \times B$, which cannot be composed with $\delta_B$. We therefore introduce the notion of \emph{labelled marking}, which consists of a marking together with a labelling of the transitions, such that a certain coherence condition between the two is satisfied. This constraint will ensure that every labelled marking corresponds to a morphism of $\C$. 
We will then use only \emph{some} labelled markings to prove our compositionality theorem. \begin{definition}\label{def:labelled marking} Let $f \colon A \to B$ be a morphism in $\C$. A \emph{labelled marking} for $\graph\psi \circ \graph\phi$ is a triple $(M,L,f)$ where functions $M \colon P \to \{0,1\}$ and $L \colon T \to \{A,B\}$ are such that for all $p \in P$ \[ M(p)=1 \implies L(\inp p) = A, \, L(\out p ) = B \] \[ M(p)=0 \implies L(\inp p ) = L(\out p ) \] These conditions need to be satisfied only when they make sense; for example if $M(p) = 1$ and $\inp p = \emptyset$, condition $L(\inp p) = A$ is to be ignored. \end{definition} We are now ready to assign a morphism in $\C$ to every labelled marking by reading a token in a place as a morphism $f$ in one of the arguments of a functor, while an empty place corresponds to the identity morphism of the label of the transition of which the place is an input or an output. \begin{definition}\label{def:morphism for labelled marking} Let $(M,L,f\colon A \to B)$ be a labelled marking. We define a morphism $\mor M L f$ in $\C$ as follows: \[ \mor M L f = F_1(x^1_1,\dots,x^1_{\length{\alpha^1}});\phi_{X^1_1\dots X^1_{k_1}} ; F_2(x^2_1,\dots,x^2_{\length{\alpha^2}}); \psi_{X^2_1\dots X^2_{k_2}} ; F_{3}(x^{3}_1,\dots,x^{3}_{\length{\alpha^{3}}}) \] where \[ x^i_j = \begin{cases} f \quad &M(\injP i (j)) = 1 \\ \id{L(t)} \quad & M(\injP i (j)) = 0 \land t \in \inp {\injP i (j)} \cup \out {\injP i (j)} \end{cases} \qquad X^i_j = L(\injT i (j)) \] for all $i \in \{1,2,3\}$ and $j\in\{1,\dots,\length{\alpha^i}\}$. (Recall that $\injP i \colon \length{\alpha^i} \to P$ and $\injT i \colon k_i \to T$ are the injections defined similarly to $\injP{\length\alpha}$ and $\injP{\length\beta}$ in Definition~\ref{def:standard graph}; note also that $\id{L(t)}$ is well defined, since when $M(\injP i (j)) = 0$ all the transitions attached to the place $\injP i (j)$ carry the same label.) \end{definition} It is easy to see that $\mor M L f$ is indeed a morphism in $\C$, by checking that the maps it is made of are actually composable using the definition of labelled marking and of $\graph\psi \circ \graph\phi$. What are the labelled markings corresponding to the two legs of diagram~(\ref{eqn:compositionality-hexagon})? In the lower leg of the hexagon, $f$ appears in all the covariant arguments of $F_1$ and the contravariant ones of $F_{3}$, which correspond in $\graph\psi \circ \graph\phi$ to those places which have no inputs (in Petri net terminology, \emph{sources}), and all the variables of $\phi$ and $\psi$ are equal to $B$; in the upper leg, $f$ appears in those arguments corresponding to places with no outputs (\emph{sinks}), and $\phi$ and $\psi$ are computed in $A$ in each variable. Hence, the lower leg is $\mor {M_0} {L_0} f$ while the upper leg is $\mor {M_d} {L_d} f$, where: \begin{equation}\label{eqn:markings-definitions} \begin{aligned} M_0(p)&=\begin{cases} 1 & \inp p = \emptyset \\ 0 & \text{otherwise} \end{cases} \quad & M_d(p)&=\begin{cases} 1 & \out p = \emptyset \\ 0 & \text{otherwise} \end{cases} \\[.5em] L_0(t) &= B & L_d(t) &= A \end{aligned} \end{equation} for all $p\in P$ and $t \in T$. It is an immediate consequence of the definition that $(M_0,L_0,f)$ and $(M_d,L_d,f)$ so defined are labelled markings. We aim to show that $M_d$ is reachable from $M_0$ by means of a firing sequence that preserves the morphism $\mor {M_0} {L_0} f$. In order to do so, we now prove that firing a $B$-labelled transition in an arbitrary labelled marking $(M,L,f)$ generates a new labelled marking, whose associated morphism in $\C$ is still equal to $\mor M L f$.
\begin{proposition}\label{prop:fired labelled marking is equal to original one} Let $(M,L,f)$ be a labelled marking, $t \in T$ an enabled transition such that $L(t) = B$. Consider \begin{equation}\label{markings after firing definition} \begin{tikzcd}[row sep=0em,column sep=1em,ampersand replacement=\&] P \ar[r,"M'"] \& \{0,1\} \& \& \& \& \& T \ar[r,"L'"] \& \{A,B\} \\ p \ar[r,|->] \& \begin{cases} 0 & p \in \inp t \\ 1 & p \in \out t \\ M(p) & \text{otherwise} \end{cases} \& \& \& \& \& s \ar[r,|->] \& \begin{cases} A & s = t \\ L(s) & s \ne t \end{cases} \end{tikzcd} \end{equation} Then $(M',L',f)$ is a labelled marking and $\mor M L f = \mor {M'} {L'} f$. \end{proposition} \begin{proof} By definition of labelled marking, if $\out t \ne \emptyset$ and $L(t) = B$ then $M(p) = 0$ for all $p \in \out t$, because if there were a $p \in \out t$ with $M(p) = 1$, then we would have $L(t) = A$. $M'$ is therefore the marking obtained from $M$ when $t$ fires once. It is easy to see that $(M',L',f)$ is a labelled marking by simply checking the definition. We now have to prove that $\mor M L f = \mor {M'} {L'} f$. Since $t \in T$, we have $t = \injT u (i)$ for some $u \in \{1,2\}$ and $i \in \{1,\dots,k_u\}$. The fact that $t$ is enabled in $M$, together with the definition of $\graph\psi \circ \graph\phi$ (\ref{input-output-transitions}) and Definition~\ref{def:morphism for labelled marking}, ensures that, in the notations of Definition~\ref{def:morphism for labelled marking}, \begin{align*} \sigma_u(j) = i \land \alpha^u_j = + &\implies x^u_j = f \\ \sigma_u(j) = i \land \alpha^u_j = - &\implies x^u_j = \id B \\ \tau_u(j) = i \land \alpha^{u+1}_j = + &\implies x^{u+1}_j = \id B \\ \tau_u(j) = i \land \alpha^{u+1}_j = - &\implies x^{u+1}_j = f \end{align*} hence we can apply the dinaturality of $\phi$ or $\psi$ (if, respectively, $u=1$ or $u=2$) in its $i$-th variable. To conclude, one has to show that the morphism obtained in doing so is the same as $\mor {M'} {'L'} f$, which is just a matter of checking identities. The details can be found in the second author's thesis~\cite{santamaria_towards_2019}.\qed \end{proof} It immediately follows that a sequence of firings of $B$-labelled transitions gives rise to a labelled marking whose associated morphism is still equal to the original one, as the following Corollary states. \begin{corollary}\label{cor:reachability-implies-equality} Let $(M,L,f)$ be a labelled marking, let $M'$ be a marking reachable from $M$ by firing only $B$-labelled transitions $t_1,\dots,t_m$, and let $L' \colon T \to \{A,B\}$ be defined as: \[ L'(s) = \begin{cases} A & s = t_i \text{ for some $i \in \{1,\dots,m\}$} \\ L(s) & \text{otherwise} \end{cases} \] Then $(M', L',f)$ is a labelled marking and $\mor M L f = \mor {M'} {L'} f$. \end{corollary} Now all we have to show is that $M_d$ is reachable from $M_0$ (see~(\ref{eqn:markings-definitions})) by only firing $B$-labelled transitions: it is enough to make sure that each transition is fired at most once to satisfy this condition. We shall work with a special class of Petri Nets, to which our $\graph\psi \circ \graph\phi$ belongs (Remark~\ref{rem:graph of a transformation is FBCF}), where all places have at most one input and at most one output. \begin{definition}\label{def:FBCF petri net} A Petri Net is said to be \emph{forward-backward conflict free} (FBCF) if for every place $p$ we have $\length{\inp p} \le 1$ and $\length{\out p} \le 1$.
\end{definition} \begin{theorem}\label{thm:acyclic-implies-reachable} Let $N$ be an acyclic FBCF Petri Net and let $M_0$, $M_d$ be the only-source and only-sink markings as in~(\ref{eqn:markings-definitions}). Then $M_d$ is reachable from $M_0$ by firing each transition exactly once. \end{theorem} \begin{proof} We proceed by induction on the number of transitions in $N$. If $N$ has no transitions at all, then every place is both a source and a sink, and $M_0$ and $M_d$ coincide, therefore there is nothing to prove. Now let $n \ge 0$, suppose that the theorem holds for Petri Nets with $n$ transitions, and assume that $N$ has $n+1$ transitions. Define, given $t$ and $t'$ two transitions, $t \le t'$ if and only if there exists a directed path from $t$ to $t'$. The relation $\le$ so defined is reflexive, transitive and antisymmetric (because $N$ is acyclic), hence it is a partial order on $T$, the set of transitions of $N$. Now, $T$ is finite by definition, hence it has at least one minimal element $t_0$. Since $t_0$ is minimal, every input of $t_0$ (if any) is a source, therefore $t_0$ is enabled in $M_0$. Now, fire $t_0$ and call $M_1$ the resulting marking. Consider the subnet $N'$ obtained from $N$ by removing $t_0$ and all its inputs. Since $N$ is forward-backward conflict free, we have that all the outputs of $t_0$ are sources in $N'$. This means that $N'$ is an acyclic FBCF Petri Net, and the restrictions of $M_1$ and $M_d$ to $N'$ are precisely its only-source and only-sink markings: by inductive hypothesis, we have that $M_d$ (restricted to $N'$) is reachable from $M_1$ in $N'$, and therefore $M_d$ is reachable from $M_0$ in $N$.\qed \end{proof} \begin{remark} Theorem~\ref{thm:acyclic-implies-reachable} is an instance of Hiraishi and Ichikawa's result on reachability for arbitrary markings in arbitrary acyclic Petri Nets~\cite{hiraishi_class_1988}. Our proof is an adapted version of theirs for the special case of FBCF Petri Nets and the particular markings $M_0$ and $M_d$ that put one token precisely in every source and in every sink respectively. \end{remark} We are now ready to give an alternative proof of the first half of Petri\'{c}'s theorem~\cite{petric_g-dinaturality_2003}, which solved the compositionality problem of dinatural transformations. \begin{proofMainTheorem} Let $f \colon A \to B$ be a morphism in $\C$, and define labelled markings $(M_0,L_0,f)$ and $(M_d,L_d,f)$ as in~(\ref{eqn:markings-definitions}). Then $\mor {M_0} {L_0} f$ is the lower leg of~(\ref{eqn:compositionality-hexagon}), while $\mor {M_d} {L_d} f$ is the upper leg. By Theorem~\ref{thm:acyclic-implies-reachable}, the marking $M_d$ is reachable from $M_0$ by firing each transition of $\graph\psi \circ \graph\phi$ exactly once, hence by only firing $B$-labelled transitions. By Corollary~\ref{cor:reachability-implies-equality}, we have that the hexagon~(\ref{eqn:compositionality-hexagon}) commutes. \qed \end{proofMainTheorem} Theorem~\ref{theorem:acyclic implies dinatural} can then be straightforwardly generalised to the case in which $\psi\circ\phi$ depends on $n$ variables for an arbitrary $n$.
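Before turning to this generalisation, we record a small computational counterpart of the procedure used in the proof of Theorem~\ref{thm:acyclic-implies-reachable}: in an acyclic FBCF net, repeatedly firing any not-yet-fired enabled transition turns $M_0$ into $M_d$, with each transition firing exactly once, independently of the order in which enabled transitions are picked. The Python sketch below (the encoding is ours and purely illustrative, following the conventions of the earlier sketches) runs this procedure on the composite graph of Example~\ref{ex:acyclic-example}.
\begin{verbatim}
# Illustrative sketch (encoding ours).  M0 puts one token in every source,
# Md one token in every sink; in an acyclic FBCF net an unfired enabled
# transition always exists until all have fired (proof of the theorem above).

def source_sink_markings(places, inputs, outputs):
    with_in = {p for t in outputs for p in outputs[t]}   # places with an input transition
    with_out = {p for t in inputs for p in inputs[t]}    # places with an output transition
    M0 = {p: int(p not in with_in) for p in places}
    Md = {p: int(p not in with_out) for p in places}
    return M0, Md

def fire_all(transitions, inputs, outputs, marking):
    M, remaining, order = dict(marking), set(transitions), []
    while remaining:
        t = next(t for t in remaining if all(M[p] >= 1 for p in inputs[t]))
        for p in inputs[t]:
            M[p] -= 1
        for p in outputs[t]:
            M[p] += 1
        remaining.discard(t)
        order.append(t)
    return M, order

# Composite graph of the example: places 1-2 (arguments of F), 3-5 (of G),
# 6 (of H); transitions 1-2 (variables of phi), 3-4 (variables of psi).
places = [1, 2, 3, 4, 5, 6]
inputs = {1: {1}, 2: {5}, 3: {3}, 4: {4}}
outputs = {1: {3, 4}, 2: {2}, 3: {6}, 4: {5}}
M0, Md = source_sink_markings(places, inputs, outputs)
M, order = fire_all([1, 2, 3, 4], inputs, outputs, M0)
print(M == Md, order)   # True, with each transition fired exactly once
\end{verbatim}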
Suppose then that the type of $\psi\circ\phi$ is given by the following pushout: \begin{equation}\label{eqn:pushout2} \mkern0mu \begin{tikzcd} & & \length{\alpha^3} \ar[d, "\tau_2"] \\ & \length{\alpha^2} \ar[d, "\tau_1"'] \ar[dr, phantom, "\ulcorner" very near start] \ar[r, "\sigma_2"] & k_2 \ar[d, dotted, "\xi"] \\ \length{\alpha^1} \ar[r, "\sigma_1"] & k_1 \ar[r, dotted, "\zeta"] & n \end{tikzcd} \end{equation} $\graph\psi \circ \graph\phi$ now has $n$ connected components, and a sufficient condition for the dinaturality of $\psi\circ\phi$ in its $i$-th variable is that $\phi$ and $\psi$ are dinatural in all those variables of theirs which are ``involved'', as it were, in the $i$-th connected component of $\graph\psi \circ \graph\phi$ \emph{and} such connected component is acyclic. \begin{theorem}\label{theorem:acyclicity implies dinaturality GENERAL} In the notations above, let $i\in \{1,\dots,n\}$. If $\phi$ and $\psi$ are dinatural in all the variables in, respectively, $\zeta^{-1}\{i\}$ and $\xi^{-1}\{i\}$ (with $\zeta$ and $\xi$ given by the pushout~(\ref{eqn:pushout2})), and if the $i$-th connected component of $\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is dinatural in its $i$-th variable. \end{theorem} We then have a straightforward corollary. \begin{corollary} Let $\phi\colon F \to G$ and $\psi \colon G \to H$ be transformations which are dinatural in all their variables. If $\graph\psi \circ \graph\phi$ is acyclic, then $\psi\circ\phi$ is dinatural in all its variables. \end{corollary} One can generalise even further Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL} by considering $k$ consecutive dinatural transformations $\phi_1,\dots,\phi_k$, instead of just two, and reasoning on the acyclicity of the connected components of the composite graph $\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$, obtained by ``glueing together'' the standard graphs of the $\phi_i$'s along the common interfaces (formally this would be a composite performed in the category $\gc$ introduced in Definition~\ref{definition:graph category}). \begin{theorem}\label{theorem:compositionality with complicated graphs} Let $\phi_j \colon F_j \to F_{j+1}$ be transformations of type $ \begin{tikzcd}[cramped,sep=small] \length{\alpha^j} \ar[r,"\sigma_j"] & n_j & \ar[l,"\tau_j"'] \length{\alpha^{j+1}} \end{tikzcd} $ for $j \in \{1,\dots,k\}$. Suppose that the type of $\phi_k \circ \dots \phi_1$ is computed by the following pushout-pasting: \[ \begin{tikzcd} & & & & \length{\alpha^{k+1}} \ar[d,"\tau_k"'] \\ & & & \length{\alpha^k} \ar[r,"\sigma_k"] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & n_k \ar[d] \\ & & \length{\alpha^3} \ar[r] \ar[ur,sloped,phantom,"\dots"] \ar[d,"\tau_2"'] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[r] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[d] \\ & \length{\alpha^2} \ar[r,"\sigma_2"] \ar[d,"\tau_1"'] \ar[dr, phantom, "\ulcorner" very near start] & n_2 \ar[r] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[r] \ar[d] \ar[dr, phantom, "\ulcorner" very near start] & \dots \ar[d] \\ \length{\alpha^1} \ar[r,"\sigma_1"] & n_1 \ar[r] & \dots \ar[r] & \dots \ar[r] & l \end{tikzcd} \] Let $\xi_j \colon n_j \to l$ be the map given by any path of morphisms from $n_j$ to $l$ in the above diagram. 
If the $i$-th connected component of $\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$ (composite calculated in $\gc$) is acyclic and if for all $j \in \{1,\dots,k\}$, for all $x \in \xi_j^{-1} \{i\}$ the transformation $\phi_j$ is dinatural in its $x$-th variable, then $\phi_k \circ \dots \circ \phi_1$ is dinatural in its $i$-th variable. \end{theorem} \begin{proof} The proof is essentially the same as that of Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL}, where instead of two transformations we have $k$: one defines labelled markings $(M_0, L_0,f)$ and $(M_d,L_d,f)$ corresponding to the two legs of the dinaturality hexagon of $\phi_k \circ \dots \circ \phi_1$ in its $i$-th variable, and uses Theorem~\ref{thm:acyclic-implies-reachable} to prove that $M_d$ is reachable from $M_0$, thus showing the hexagon commutes. \qed \end{proof} \begin{remark} In~\cite{girard_normal_1992}, the authors had to prove the dinaturality of families of morphisms obtained by composing several transformations that are dinatural by assumption. They showed that the dinaturality hexagons for such composites commute by filling them with a trellis of commutative diagrams, stating functoriality properties and dinaturality of the building blocks. Theorem~\ref{theorem:compositionality with complicated graphs} provides an alternative way to do that: one can simply draw the composite graph of the involved transformations, notice that the resulting Petri Net is always acyclic, and thus infer the dinaturality of the composite. \end{remark} \paragraph{An ``essentially necessary'' condition for compositionality} The other half of Petri\'c's theorem can also be shown with the help of the theory of Petri Nets. One can prove that if $N$ is a weakly connected FBCF Petri Net with at least one proper source or one proper sink and $M_0$ and $M_d$ are the only-source and only-sink markings as before, then a necessary condition for the reachability of $M_d$ from $M_0$ is that every transition in $N$ must fire at least once. The intuition behind this is that there must be at least one transition $t$ which fires, because $M_0$ and $M_d$ are not equal (in the hypothesis that $N$ has at least one proper sink or proper source), and if a transition $t$ fires once, then all the transitions that are connected to it must fire as well: in order for $t$ to fire it must be enabled, hence those transitions which are between the source places and $t$ must fire to move the tokens to the input places of $t$; equally, if $t$ fires, then also all those transitions ``on the way'' from $t$ to the sink places must fire, otherwise some tokens would get stuck in the middle of the net, in disagreement with $M_d$. As a consequence of this fact, we have a sort of inverse of Theorem~\ref{thm:acyclic-implies-reachable}. \begin{theorem}\label{thm:reachability-implies-acyclicity} Let $N$ be a weakly connected FBCF Petri Net with at least one proper source or one proper sink place. If $M_d$ is reachable from $M_0$, then $N$ is acyclic. \end{theorem} \begin{proof} Suppose that $N$ contains a directed, circular path $\pi=(v_0,\dots,v_{2l})$ where $v_0 = v_{2l}$ is a place. Then each $v_{2i}$ is not a source, given that it is the output of $v_{2i-1}$, hence $M_0(v_{2i})=0$ for all $i \in \{1,\dots,l\}$. This means that $v_{2i+1}$ is disabled in $M_0$, therefore it will not fire when transforming $M_0$ into $M_1$. Then also $M_1(v_{2i})=0$, since the only input transition of $v_{2i}$ is $v_{2i-1}$, which has not fired. Using the same argument we can see that none of the transitions in the loop $\pi$ can ever fire; since, as observed above, reaching $M_d$ would require every transition of $N$ to fire at least once, $M_d$ cannot be reached from $M_0$.
\qed \end{proof} In other words, if $N$ contains a loop (under the hypothesis that $N$ is weakly connected and has at least one proper source or sink place), then $M_d$ is \emph{not} reachable from $M_0$. In the case of $N=\graph\psi \circ \graph\phi$, given the correspondence between the dinaturality condition of $\phi$ and $\psi$ in each of their variables and the firing of the corresponding transitions, this intuitively means that $\psi\circ\phi$ cannot be proved to be dinatural as a sole consequence of the dinaturality of $\phi$ and $\psi$ when $\graph\psi \circ \graph\phi$ is cyclic. Therefore, acyclicity is not only a \emph{sufficient} condition for the dinaturality of the composite transformation, but also ``essentially necessary'': if the composite happens to be dinatural despite the cyclicity of the graph, then this is due to some ``third'' property, like the fact that certain squares of morphisms are pullbacks or pushouts. The interested reader can find a detailed formalisation of this intuition in the second author's thesis~\cite{santamaria_towards_2019}, where a syntactic category generated by the equations determined by the dinaturality conditions of $\phi$ and $\psi$ was considered, and where it was shown that there $\psi \circ \phi$ is \emph{not} dinatural, in a way similar to Petri\'c's approach in~\cite{petric_g-dinaturality_2003}. \section{Horizontal compositionality of dinatural transformations}\label{chapter horizontal} Horizontal composition of natural transformations is, together with vertical composition, one of the protagonists of the classical Godement calculus. In this section we define a new operation of horizontal composition for dinatural transformations, generalising the well-known version for natural transformations. We also study its algebraic properties, proving it is associative and unitary. Remarkably, horizontal composition behaves better than vertical composition, as it is \emph{always} defined between dinatural transformations of matching type. \subsection{From the Natural to the Dinatural} Horizontal composition of natural transformations \cite{mac_lane_categories_1978} is a well-known operation which is rich in interesting properties: it is associative, unitary and compatible with vertical composition. As such, it makes $\mathbb{C}\mathrm{at}$ a strict 2-category. Also, it plays a crucial role in the calculus of substitution of functors and natural transformations developed by Kelly in \cite{kelly_many-variable_1972}; in fact, as we have seen in the introduction, it is at the heart of Kelly's abstract approach to coherence. An appropriate generalisation of this notion for dinatural transformations seems to be absent from the literature: in this section we propose a working definition. The best place to start is to take a look at the usual definition for the natural case.
\begin{definition}\label{def:horizontal composition natural transformations} Consider (classical) natural transformations \[ \begin{tikzcd} \A \ar[r,bend left,"F"{above},""{name=F,below}]{} \ar[r,bend right,"G"{below},""{name=G}] & \B \ar[r,bend left,"H"{above},""{name=H,below}]{} \ar[r,bend right,"K"{below},""{name=K}] & \C \arrow[Rightarrow,from=F,to=G,"\phi"] \arrow[Rightarrow,from=H,to=K,"\psi"] \end{tikzcd} \] The horizontal composition $\hc \fst \snd \colon HF \to KG$ is the natural transformation whose $A$-th component, for $A \in \A$, is either leg of the following commutative square: \begin{equation}\label{eqn:horCompNatTransfSquare} \mkern0mu \begin{tikzcd} HF(A) \ar[r,"\psi_{F(A)}"] \ar[d,"H(\phi_A)"'] & KF(A) \ar[d,"K(\phi_A)"] \\ HG(A) \ar[r,"\psi_{G(A)}"] & KG(A) \end{tikzcd} \end{equation} \end{definition} Now, the commutativity of (\ref{eqn:horCompNatTransfSquare}) is due to the naturality of $\psi$; the fact that $\hc \phi \psi$ is in turn a natural transformation is due to the naturality of both $\phi$ and $\psi$. However, in order to \emph{define} the family of morphisms $\hc \phi \psi$, all we have to do is to apply the naturality condition of $\psi$ to the components of $\phi$, one by one. We apply the very same idea to dinatural transformations, leading to the following preliminary definition for classical dinatural transformations. \begin{definition}\label{def:horCompDef} Let $\fst\colon\fstDom \to \fstCoDom$ and $\snd \colon \sndDom \to \sndCoDom$ be dinatural transformations of type $ \begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & 2 \ar[l] \end{tikzcd} $, where $\fstDom, \fstCoDom \colon \Op\A \times \A \to \B$ and $\sndDom, \sndCoDom \colon \Op\B \times \B \to \C$. The \emph{horizontal composition} $\hc \fst \snd$ is the family of morphisms \[ \bigl((\hc \fst \snd)_A\colon \sndDom(\fstCoDom(A,A), \fstDom(A,A)) \to \sndCoDom(\fstDom(A,A),\fstCoDom(A,A))\bigr)_{A \in \A} \] where the general component $(\hc \fst \snd)_A$ is given, for any object $A \in \A$, by either leg of the following commutative hexagon: \[ \begin{tikzcd}[column sep=.8cm,font=\normalsize] & \sndDom(\fstDom(A,A),\fstDom(A,A)) \ar[r,"{\snd_{\fstDom(A,A)}}"] & \sndCoDom(\fstDom(A,A),\fstDom(A,A)) \ar[dr,"{\sndCoDom(1,\fst_A)}"] \\ \sndDom(\fstCoDom(A,A),\fstDom(A,A)) \ar[ur,"{\sndDom(\fst_A,1)}"] \ar[dr,"{\sndDom(1,\fst_A)}"'] & & & \sndCoDom(\fstDom(A,A),\fstCoDom(A,A)) \\ & \sndDom(\fstCoDom(A,A),\fstCoDom(A,A)) \ar[r,"{\snd_{\fstCoDom(A,A)}}"] & \sndCoDom(\fstCoDom(A,A),\fstCoDom(A,A)) \ar[ur,"{\sndCoDom(\fst_A,1)}"'] \end{tikzcd} \] \end{definition} \begin{remark}\label{rem:our definition of hc generalises natural case} If $F$, $G$, $H$ and $K$ all factor through the second projection $\Op\A \times \A \to \A$ or $\Op\B \times \B \to \B$, then $\phi$ and $\psi$ are just ordinary natural transformations and Definition~\ref{def:horCompDef} reduces to the usual notion of horizontal composition, Definition~\ref{def:horizontal composition natural transformations}. \end{remark} As in the classical natural case, we can deduce the dinaturality of $\hc \fst \snd$ from the dinaturality of $\fst$ and $\snd$, as the following Theorem states. (Recall that for $F \colon \A \to \B$ a functor, $\Op F \colon \Op\A \to \Op\B$ is the obvious functor which behaves like $F$.) \begin{theorem}\label{thm:horCompTheorem} Let $\fst$ and $\snd$ be dinatural transformations as in Definition \ref{def:horCompDef}. 
Then $\hc \fst \snd$ is a dinatural transformation \[ \hc \fst \snd \colon \sndDom(\Op\fstCoDom , \fstDom) \to \sndCoDom(\Op\fstDom,\fstCoDom) \] of type $ \begin{tikzcd}[cramped,sep=small] 4 \ar[r] & 1 & 4 \ar[l] \end{tikzcd} $, where $\sndDom(\Op\fstCoDom , \fstDom), \sndCoDom(\Op\fstDom,\fstCoDom) \colon \A^{[+,-,-,+]} \to \C$ are defined on objects as \begin{align*} \sndDom(\Op\fstCoDom , \fstDom)(A,B,C,D) &= \hcd A B C D \\ \sndCoDom(\Op\fstDom,\fstCoDom)(A,B,C,D) &= \hcc A B C D \end{align*} and similarly on morphisms. \end{theorem} \begin{proof} The proof consists in showing that the diagram that asserts the dinaturality of $\hc \fst \snd$ commutes: this is done in Figure~\ref{fig:DinaturalityHorizontalCompositionFigure}. \qed \end{proof} \begin{sidewaysfigure}[p] \centering \begin{tikzpicture}[every node/.style={scale=0.5}] \matrix[column sep=1cm, row sep=1.5cm]{ \node(1) {$\H(\G(A,A),\F(A,A))$}; & & & \node(2) {$\H(\F(A,A),\F(A,A))$}; & \node(3) {$\K(\F(A,A),\F(A,A))$}; & & & \node(4) {$\K(\F(A,A),\G(A,A))$};\\ & \node(5){$\H(\G(A,A),\F(B,A))$}; & \node(6) {$\H(\F(A,A),\F(B,A))$}; & & & \node(7) {$\K(\F(B,A),\F(A,A))$}; & \node (8){$\K(\F(B,A),\G(A,A))$};\\ \node(9) {$\H(\G(A,B),\F(B,A))$}; & & &\node(10){$\H(\F(B,A),\F(B,A))$}; & \node(11){$\K(\F(B,A),\F(B,A))$}; & & &\node(12){$\K(\F(B,A),\G(A,B))$};\\ &\node(13){$\H(\G(B,B),\F(B,A))$}; & \node(14){$\H(\F(B,B),\F(B,A))$}; & & & \node(15){$\K(\F(B,A),\F(B,B))$}; & \node(16){$\K(\F(B,A),\G(B,B))$};\\ \node(17){$\H(\G(B,B),\F(B,B))$}; & & &\node(18){$\H(\F(B,B),\F(B,B))$}; & \node(19){$\K(\F(B,B),\F(B,B))$}; & & &\node(20){$\K(\F(B,B),\G(B,B))$};\\ }; \graph[use existing nodes,edge quotes={sloped,anchor=south}]{ 9 ->["$\H(\G(1,f),\F(f,1))$"] 1 ->["$\H(\fst_A,1)$"] 2 ->["$\snd_{\F(A,A)}$"] 3 ->["$\K(1,\fst_A)$"] 4 ->["$\K(\F(f,1),\G(1,f))$",] 12; 9 ->["$\H(\G(1,f))$"] 5 ->["$\H(\fst_A,1)$"] 6 ->["$\H(1,\F(f,1))$"] 2; 3 ->["$\K(\F(f,1),1)$"] 7 ->["$\K(1,\fst_A)$"] 8 ->["$\K(1,\G(1,f))$"] 12; 6 ->["$\H(\F(f,1),1)$"] 10 ->["$\snd_{\F(B,A)}$"] 11 ->["$\K(1,\F(f,1))$"] 7; 9 ->["$\H(\G(f,1),1)$"] 13 ->["$\H(\fst_B,1)$"] 14 ->["$\H(\F(1,f),1)$"] 10; 11->["$\K(1,\F(1,f))$"] 15 ->["$\K(1,\fst_B)$"] 16 ->["$\K(1,\G(f,1))$"] 12; 14->["$\H(1,F(1,f))$"] 18 ->["$\snd_{\F(B,B)}$"] 19 ->["$\K(\F(1,f),1)$"] 15; 18 <-["$\H(\fst_B,1)$"] 17 <-["$\H(\G(f,1),\F(1,f))$"] 9; 12 <-["$\K(\F(1,f),\G(f,1))$"] 20 <-["$\K(1,\fst_B)$"] 19; 1 ->[bend left=20,dashed,"$(\hc \fst \snd)_A$"] 4; 17->[bend right=20,dashed,"$(\hc \fst \snd)_B$"] 20; }; \path[] (9) to [out=-80,in=170] node[anchor=mid,red]{Functoriality of $\H$} (18); \path[] (9) to [out=80,in=-170] node[anchor=mid,red]{Functoriality of $\H$} (2); \path[] (19) to [out=10,in=260] node[anchor=mid,red]{Functoriality of $\K$} (12); \path[] (3) to [out=-10,in=-260] node[anchor=mid,red]{Functoriality of $\K$} (12); \path (6) to node[anchor=mid,red]{Dinaturality of $\snd$} (7); \path (14) to node[anchor=mid,red]{Dinaturality of $\snd$} (15); \path (9) to node[anchor=mid,red]{Dinaturality of $\fst$} (10); \path (11) to node[anchor=mid,red]{Dinaturality of $\fst$} (12); \end{tikzpicture} \caption{Proof of Theorem \ref{thm:horCompTheorem}: dinaturality of horizontal composition in the classical case. Here $f\colon A \to B$.} \label{fig:DinaturalityHorizontalCompositionFigure} \end{sidewaysfigure} We can now proceed with the general definition, which involves transformations of arbitrary type. 
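Before doing so, we remark that the classical case just treated also admits a concrete functional reading. Modelling functors $\Op\C \times \C \to \C$ as Haskell \texttt{Profunctor}s, the upper leg of the hexagon of Definition~\ref{def:horCompDef} becomes the sketch below; it is only illustrative and all names are ad hoc, while Theorem~\ref{thm:horCompTheorem} is what guarantees, categorically, that the resulting family is again dinatural.
\begin{verbatim}
{-# LANGUAGE RankNTypes #-}

import Data.Profunctor (Profunctor (dimap))

-- phiA : F(A,A) -> G(A,A) at a fixed object A, and
-- psi  : H(X,X) -> K(X,X), a family dinatural in X.
-- The result is the A-th component of their horizontal composition,
-- namely the upper leg K(1,phi_A) . psi_{F(A,A)} . H(phi_A,1).
hcompDin
  :: (Profunctor h, Profunctor k)
  => (f a a -> g a a)            -- phi_A
  -> (forall x. h x x -> k x x)  -- psi
  -> h (g a a) (f a a)           -- H(G(A,A), F(A,A))
  -> k (f a a) (g a a)           -- K(F(A,A), G(A,A))
hcompDin phiA psi = dimap id phiA . psi . dimap phiA id
-- By dinaturality of psi, the lower leg
-- K(phi_A,1) . psi_{G(A,A)} . H(1,phi_A) yields the same morphism.
\end{verbatim}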
As the idea behind Definition~\ref{def:horCompDef} is to apply the dinaturality of $\snd$ to the general component of $\fst$ in order to define $\hc \fst \snd$, if $\snd$ is a transformation with many variables, then we have many dinaturality conditions we can apply to $\fst$, namely one for each variable of $\snd$ in which $\snd$ is dinatural. Hence, the general definition will depend on the variable of $\snd$ we want to use. For the sake of simplicity, we shall consider only the one-category case, that is when all functors in the definition involve one category $\C$; the general case follows with no substantial complications except for a much heavier notation. \begin{definition}\label{def:generalHorizontalCompositionDef} Let $\F \colon \C^\fstDomVar \to \C$, $\G \colon \C^\fstCoDomVar \to \C$, $\H \colon \C^\sndDomVar \to \C$, $\K \colon \C^\sndCoDomVar \to \C$ be functors, $\fst = (\fst_\bfA)_{\bfA \in \C^n} \colon \fstDom \to \fstCoDom$ be a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\fstDomVar \ar[r,"\fstTypL"] & \fstVarNo & \length\fstCoDomVar \ar[l,"\fstTypR"'] \end{tikzcd} $ and $\snd = (\snd_{\bf B})_{\bf B \in \C^m}\colon \sndDom \to \sndCoDom$ of type $ \begin{tikzcd}[cramped,sep=small] \length\sndDomVar \ar[r,"\sndTypL"] & \sndVarNo & \length\sndCoDomVar \ar[l,"\sndTypR"'] \end{tikzcd} $ a transformation which is dinatural in its $i$-th variable. Denoting with $\concat$ the concatenation of a family of lists, let \[ \ghcDom \colon \C^{{\concat_{u=1}^{\length\sndDomVar} \lambda^u}} \to \C, \quad \ghcCoDom \colon \C^{\concat_{v=1}^{\length\sndCoDomVar}\mu^v} \to \C \] be functors, defined similarly to $\sndDom(\Op\fstCoDom , \fstDom)$ and $\sndCoDom(\Op\fstDom,\fstCoDom)$ in Theorem \ref{thm:horCompTheorem}, where for all $j \in \sndVarNo$, $u \in\length\gamma$, $v\in\length\delta$: \[ \begin{tikzcd}[ampersand replacement=\&,row sep=.5em] \F_j= \begin{cases} F & j=i \\ \id\C & j \ne i \end{cases} \& \G_j= \begin{cases} G & j=i \\ \id\C & j \ne i \end{cases} \\ \lambda^u = \begin{cases} \alpha & \eta u = i \land \gamma_u=+ \\ \Not\beta\footnotemark & \eta u = i \land \gamma_u = - \\ [\gamma_u] & \eta u \ne i \end{cases} \& \mu^v = \begin{cases} \beta & \theta v = i \land \delta_v=+ \\ \Not\alpha & \theta v = i \land \delta_v = - \\ [\delta_v] & \theta v \ne i \end{cases} \end{tikzcd} \] \footnotetext{Remember that for any $\beta\in\List\{+,-\}$ we denote $\Not\beta$ the list obtained from $\beta$ by swapping the signs.}Define for all $u \in\length\gamma$ and $v\in\length\delta$ the following functions: \[ a_u = \begin{cases} \iota_n \sigma & \eta u = i \land \gamma_u=+ \\ \iota_n \tau & \eta u = i \land \gamma_u = - \\ \iota_m K_{\eta u} & \eta u \ne i \end{cases} \quad b_v = \begin{cases} \iota_n \tau & \theta v = i \land \delta_v=+ \\ \iota_n \sigma & \theta v = i \land \delta_v = - \\ \iota_m K_{\theta v} & \theta v \ne i \end{cases} \] with $K_{\eta u} \colon 1 \to m$ the constant function equal to $\eta u$, while $\iota_n$ and $\iota_m$ are defined as: \[ \begin{tikzcd}[row sep=0em] n \ar[r,"\iota_n"] & (i-1)+n+(m-i) \\ x \ar[|->,r] & i-1+x \end{tikzcd} \qquad \begin{tikzcd}[row sep=0em,ampersand replacement=\&] m \ar[r,"\iota_m"] \& (i-1)+n+(m-i) \\ x \ar[|->,r] \& \begin{cases} x & x < i \\ x +n -1 & x \ge i \end{cases} \end{tikzcd} \] The \emph{$i$-th horizontal composition} $\HC {[\fst]} {[\snd]} i$ is the equivalence class of the transformation \[ \HC \fst \snd i \colon \ghcDom \to \ghcCoDom \] of type \[ \begin{tikzcd}[column sep=1.5cm] 
\displaystyle\sum_{u=1}^{\length\gamma} \length{\lambda^u} \ar[r,"{[a_1,\dots, a_{\length\gamma}]}"] & (i-1) + n + (m-i) & \displaystyle\sum_{v=1}^{\length\delta} \length{\mu^v} \ar[l,"{[b_1,\dots, b_{\length\delta}]}"'] \end{tikzcd} \] whose general component, $(\HC \fst \snd i)_{\subst B \bfA i}$, is the diagonal of the commutative hexagon obtained by applying the dinaturality of $\snd$ in its $i$-th variable to the general component $\fst_\bfA$ of $\fst$: \[ \begin{tikzcd}[column sep=2em] & \H(\subst \bfB {F(\bfA\sigma)} i \eta) \ar[r,"\psi_{\subst \bfB {\F(\bfA\sigma)} i}"] & \K(\subst \bfB {\F(\bfA\sigma)} i \theta) \ar[dr,"\K(\substMV \bfB {\F(\bfA\sigma)} {\phi_\bfA} i \theta)"] \\ \H(\substMV \bfB {\G(\bfA\tau)} {\F(\bfA\sigma)} i \eta) \ar[ur,"\H(\substMV \bfB {\phi_\bfA} {\F(\bfA\sigma)} i \eta)"] \ar[dr,"\H(\substMV \bfB {\G(\bfA\tau)} {\phi_\bfA} \eta)"'] \ar[rrr,dotted,"(\HC \fst \snd i)_{\subst B \bfA i}"] & & & \K(\substMV \bfB {\F(\bfA\sigma)} {\G(\bfA\tau)} i \theta) \\ & \H(\subst \bfB {\G(\bfA\tau)} i \eta) \ar[r,"\psi_{\subst \bfB {\G(\bfA\tau)} i}"'] & \K(\subst \bfB {G(\bfA\tau)} i \theta) \ar[ur,"\K(\substMV \bfB {\phi_\bfA} {\G(\bfA\tau)} i \theta)"'] \end{tikzcd} \] \end{definition} In other words, the domain of $\HC \fst \snd i$ is obtained by substituting the arguments of $\H$ (the domain of $\snd$) that are in the $i$-th connected component of $\graph\snd$ with $\F$ (the domain of $\fst$) if they are covariant, and with $\Op\G$ (the opposite of the codomain of $\fst$) if they are contravariant; those arguments not in the $i$-th connected component are left untouched. Similarly the codomain. The type of $\HC \fst \snd i$ is obtained by replacing the $i$-th variable of $\snd$ with all the variables of $\fst$ and adjusting the type of $\snd$ with $\fstTypL$ and $\fstTypR$ to reflect this act. In the following example, we see what happens to $\graph\fst$ and $\graph\snd$ upon horizontal composition. \begin{example}\label{ex:hc example} Consider transformations $\delta$ and $\eval{}{}$ (see examples~\ref{ex:delta},\ref{ex:eval}). In the notations of Definition~\ref{def:generalHorizontalCompositionDef}, we have $F=\id\C \colon \C \to \C$, $G = \times \colon \C^{[+,+]} \to \C$, $H \colon \C^{[+,-,+]} \to \C$ defined as $H(X,Y,Z) = X \times (Y \implies Z)$ and $K = \id\C \colon \C \to \C$. The types of $\delta$ and $\eval{}{}$ are respectively \[ \begin{tikzcd}[font=\small] 1 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} \qquad \text{and} \qquad \begin{tikzcd}[row sep=0em,font=\small] 3 \ar[r] & 2 & 1 \ar[l] \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210]& 2 & \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] The transformation $\eval{}{}$ is extranatural in its first variable and natural in its second: we have two horizontal compositions. 
$(\HC \delta {\eval{}{}} 1 )_{A,B}$ is given by either leg of the following commutative square: \begin{equation}\label{delta inside eval} \begin{tikzcd} A \times \bigl( (A \times A) \implies B \bigr) \ar[r,"\delta_A \times (1 \implies 1)"] \ar[d,"1 \times (\delta_A \implies 1)"'] & (A \times A) \times \bigl( (A \times A) \implies B \bigr) \ar[d,"\eval {A \times A} B"] \\ A \times (A \implies B) \ar[r,"\eval A B"] & B \end{tikzcd} \end{equation} We have $\HC \delta {\eval{}{}} 1 \colon H(\id\C,\times,\id\C) \to \id\C(\id\C)$ where $\id\C(\id\C) = \id\C$ and \[ \begin{tikzcd}[row sep=0em] \C^{[+,-,-,+]} \ar[r,"{H(\id\C,\times,\id\C)}"] & \C \\ (X,Y,Z,W) \ar[|->,r] & \quad X \times \bigl( (Y \times Z) \implies W \bigr) \end{tikzcd} \] and it is of type \[ \begin{tikzcd}[row sep=0em] 4 \ar[r] & 2 & \ar[l] 1 \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210] & 2 \\[-3pt] 3 \ar[uur,|->,out=0,in=230] \\[-3pt] 4 \ar[uur,|->,out=0,in=230] \\ \end{tikzcd} \] Intuitively, $\graph{\HC \delta {\eval{}{}} 1}$ is obtained by substituting $\graph{\delta}=\begin{tikzpicture} \matrix[row sep=1em,column sep=0.5em]{ & \node (1) [category] {}; \\ & \node (A) [component] {}; \\ \node (2) [category] {}; & & \node (3) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; }; \end{tikzpicture}$ into the first connected component of $\graph{\eval{}{}}=\begin{tikzpicture} \matrix[row sep=1em, column sep=1em]{ \node (1) [category] {}; & & \node (2) [opCategory] {}; & & \node (3) [category] {}; \\ & \node (A) [component] {}; & & & \node (B) [component] {}; \\ & & & & \node (4) [category] {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> 4; }; \end{tikzpicture}$, by ``bending'', as it were, $\graph\delta$ into the $U$-turn that is the first connected component of $\graph{\eval{}{}}$: \[ \begin{tikzpicture} \matrix[column sep=1em,row sep=2em]{ &\node[category] (1) {}; & & \node[opCategory] (2) {}; & & \node[opCategory] (3) {}; & & \node[category] (4) {};\\ &\node[component] (A) {};& & & & & & \node[component] (B) {};\\ \node[coordinate] (fake1) {}; & & \node[coordinate] (fake2) {}; &\node[coordinate] (fake3) {};& & \node[coordinate] (fake4) {}; & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -- fake1 --[out=-90,in=-90] fake4 -> 3; A -- fake2 --[out=-90,in=-90] fake3 -> 2; 4 -> B -> 5; }; \end{tikzpicture} \quad \text{or} \quad \begin{tikzpicture} \matrix[column sep=1em,row sep=2em]{ \node[category] (1) {};& & \node[opCategory] (2) {}; & & \node[opCategory] (3) {}; & & \node[category] (4) {};\\ \node[coordinate] (fake1) {}; & & & \node[component] (A) {}; & & & \node[component] (B) {};\\ & & & & & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -- fake1 ->[out=-90,in=-90] A -> {2,3}; 4 -> B -> 5; }; \end{tikzpicture} \] Here the first graph corresponds to the upper leg of (\ref{delta inside eval}) , the second to the lower one. Notice how the component $\eval {A \times A} B$ has now \emph{two} wires, one per each $A$ in the graph on the left. 
The result is therefore \[ \graph{\HC \delta {\eval{}{}} 1} = \begin{tikzpicture} \matrix[column sep=1em,row sep=1.5em]{ \node[category] (1) {}; & \node[opCategory] (2) {}; & \node[opCategory] (3) {}; & \node[category] (4) {}; \\ & \node[component] (A) {}; & & \node[component] (B) {};\\ & & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 4 -> B -> 5; }; \end{tikzpicture} \] Turning now to the other possible horizontal composition, we have that $\HC \delta {\eval{}{}} 2 \colon H(\id\C,\id\C,\id\C) \to \id\C(\times)$ where $ H(\id\C,\id\C,\id\C) = H$ and $\id\C(\times)=\times$ by definition; it is of type \[ \begin{tikzcd}[row sep=0em] 3 \ar[r] & 2 & \ar[l] 2 \\ 1 \ar[r,|->] & 1 & 1 \ar[dl,|->,out=180,in=30] \\[-3pt] 2 \ar[ur,|->,out=0,in=210]& 2 & 2 \ar[l,|->] \\[-3pt] 3 \ar[ur,|->,out=0,in=210] \end{tikzcd} \] and $(\HC \delta {\eval{}{}} 2)_{A,B}$ is given by either leg of the following commutative square: \[ \begin{tikzcd}[column sep=3em] A \times (A \implies B) \ar[r,"1 \times (1 \implies \delta_B)"] \ar[d,"\eval A B"'] & A \times \bigl( A \implies (B \times B) \bigr) \ar[d,"\eval A {B \times B}"] \\ B \ar[r,"\delta_B"] & B \times B \end{tikzcd} \] Substituting $\graph\delta$ into the second connected component of $\graph{\eval{}{}}$, which is just a ``straight line'', results into the following graph: \[ \graph{\HC \delta {\eval{}{}} 2} = \begin{tikzpicture} \matrix[column sep=.5em,row sep=1em]{ \node[category] (1) {}; & & \node[opCategory] (2) {}; & & \node[category] (3) {}; \\ & \node[component] (A) {}; & & & \node[component] (B) {};\\ & & & \node[category] (4) {}; & & \node[category] (5) {};\\ }; \graph[use existing nodes]{ 1 -> A -> 2; 3 -> B -> {4,5}; }; \end{tikzpicture} \] \end{example} \subsection{Dinaturality of horizontal composition}\label{section dinaturality of horizontal composition} We aim to prove here that our definition of horizontal composition, which we have already noticed generalises the well-known version for classical natural transformations (Remark~\ref{rem:our definition of hc generalises natural case}), is a closed operation on dinatural transformations. For the rest of this section, we shall fix transformations $\fst$ and $\snd$ with the notations used in Definition~\ref{def:generalHorizontalCompositionDef} for their signature; we also fix the ``names'' of the variables of $\fst$ as $\bfA=(A_1,\dots,A_n)$ and of $\snd$ as $\bfB=(B_1,\dots,B_m)$. In this spirit, $i$ is a fixed element of $\{1,\dots,m\}$, we assume $\snd$ to be dinatural in $B_i$ and we shall sometimes refer to $\HC \fst \snd i$ also as $\HC \fst \snd {B_i}$. As in the classical natural case (Definition~\ref{def:horizontal composition natural transformations}), only the dinaturality of $\snd$ in $B_i$ is needed to \emph{define} the $i$-th horizontal composition of $\fst$ and $\snd$. Here we want to understand in which variables the $i$-th horizontal composition \[ \HC \fst \snd {B_i} = \bigl( (\HC \fst \snd {B_i})_{\subst \bfB \bfA i} \bigr)= \bigl( (\HC \fst \snd {B_i})_{B_1,\dots, B_{i-1},A_1,\dots, A_n, B_{i+1}, \dots, B_m} \bigr) \] itself is in turn dinatural. It is straightforward to see that $\HC \fst \snd {B_i}$ is dinatural in all its $B$-variables where $\snd$ is dinatural, since the act of horizontally composing $\fst$ and $\snd$ in $B_i$ has not ``perturbed'' $\sndDom$, $\sndCoDom$ and $\snd$ in any way except in those arguments involved in the $i$-th connected component of $\graph\snd$, see example~\ref{ex:hc example}. 
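Concretely, both horizontal compositions of Example~\ref{ex:hc example} are ordinary polymorphic functions, and the commutativity of the two defining squares can be checked by direct computation. The following Haskell sketch is purely illustrative (all names are ad hoc) and reproduces the two composites together with their legs.
\begin{verbatim}
-- delta_A : A -> A x A   and   eval_{A,B} : A x (A => B) -> B
delta :: a -> (a, a)
delta x = (x, x)

eval :: (a, a -> b) -> b
eval (x, f) = f x

-- First horizontal composition (in the extranatural variable of eval):
-- either leg of the square gives f (x, x).
hc1 :: (a, (a, a) -> b) -> b
hc1 (x, f) = eval (delta x, f)        -- upper leg
-- hc1 (x, f) = eval (x, f . delta)   -- lower leg, same result

-- Second horizontal composition (in the natural variable of eval):
-- either leg of the square gives (f x, f x).
hc2 :: (a, a -> b) -> (b, b)
hc2 (x, f) = eval (x, delta . f)      -- upper leg
-- hc2 (x, f) = delta (eval (x, f))   -- lower leg, same result
\end{verbatim}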
Hence we have the following preliminary result. \begin{proposition} If $\snd$ is dinatural in $B_j$, for $j \ne i$, then $\HC \fst \snd {B_i}$ is also dinatural in $B_j$. \end{proposition} More interestingly, it turns out that $\HC \fst \snd {B_i}$ is also dinatural in all those $A$-variables where $\fst$ is dinatural in the first place. We then aim to prove the following Theorem. \begin{theorem}\label{thm:horCompIsDinat} If $\fst$ is dinatural in its $k$-th variable and $\snd$ in its $i$-th one, then $\HC \fst \snd i$ is dinatural in its $(i-1+k)$-th variable. In other words, if $\fst$ is dinatural in $A_k$ and $\snd$ in $B_i$, then $\HC \fst \snd {B_i}$ is dinatural in $A_k$. \end{theorem} The proof of this theorem relies on the fact that we can reduce, without loss of generality, to the situation of Theorem~\ref{thm:horCompTheorem}. To prove that, we introduce the notion of \emph{focalisation} of a transformation on one of its variables: essentially, the focalisation of a transformation $\varphi$ is a transformation, depending on only one variable, between functors that have only one covariant and one contravariant argument; it is obtained by fixing all the parts of the data involving variables different from the one we are focusing on. \begin{definition}\label{def:focalisation def} Let $\varphi = (\varphi_\bfA) = (\varphi_{A_1,\dots,A_p}) \colon T \to S$ be a transformation of type \[ \begin{tikzcd} \length\alpha \ar[r,"\sigma"] & p & \ar[l,"\tau"'] \length\beta \end{tikzcd} \] with $T \colon \C^\alpha \to \C$ and $S \colon \C^\beta \to \C$. Fix $k\in\{1,\dots,p\}$ and objects $A_1,\dots,A_{k-1}$, $A_{k+1},\dots,A_p$ in $\C$. Consider functors $\bar T k$, $\bar S k \colon \Op\C \times \C \to \C$ defined by \begin{align*} \bar T k (A,B) &= T(\substMV \bfA A B k \sigma) \\ \bar S k (A,B) &= S(\substMV \bfA A B k \tau) \end{align*} The \emph{focalisation of $\varphi$ on its $k$-th variable} is the transformation \[ \bar \varphi k \colon \bar T k \to \bar S k \] of type $ \begin{tikzcd}[cramped, sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} $ where \[ \bar \varphi k_X = \varphi_{\subst \bfA X k} = \varphi_{A_1\dots A_{k-1},X,A_{k+1}\dots A_p}. \] Sometimes we may also write $\bar \varphi {A_k} \colon \bar T {A_k} \to \bar S {A_k}$, when we fix $A_1,\dots,A_p$ as the names of the variables of $\varphi$. \end{definition} \begin{remark}\label{rem:focalisationIsDinaturalRemark} $\varphi$ is dinatural in its $k$-th variable if and only if $\bar \varphi k$ is dinatural in its only variable for all objects $A_1,\dots,A_{k-1},A_{k+1},\dots,A_p$ fixed by the focalisation of $\varphi$. \end{remark} The $\bar{} k$ construction depends on the $p-1$ objects we fix but, in order not to make the notation too heavy, we shall always call those (arbitrary) objects $A_1,\dots,A_{k-1},A_{k+1},\dots,A_n$ for $\bar \fst k$ and $B_1,\dots,B_{i-1}$, $B_{i+1},\dots,B_m$ for $\bar \snd i$. \begin{lemma}\label{lemma:focalisationLemma} It is the case that $\HC \fst \snd i$ is dinatural in its $(i-1+k)$-th variable if and only if $\hc {\bar \fst k} {\bar \snd i}$ is dinatural in its only variable for all objects $B_1,\dots,B_{i-1}$, $A_1,\dots,A_{k-1}$, $A_{k+1},\dots,A_n$, $B_{i+1},\dots,B_m$ in $\C$ fixed by the focalisations of $\fst$ and $\snd$. \end{lemma} \begin{proof} The proof consists in unwrapping the two definitions and showing that they require the exact same hexagon to commute: see~\cite[Lemma 2.14]{santamaria_towards_2019}. \qed \end{proof} We can now prove that horizontal composition preserves dinaturality.
\begin{proofDinTheorem} Consider transformations $\bar \fst k$ and $\bar \snd i$. By Remark \ref{rem:focalisationIsDinaturalRemark}, they are both dinatural in their only variable. Hence, by Theorem \ref{thm:horCompTheorem}, $\hc {\bar \fst k} {\bar \snd i}$ is dinatural, and by Lemma \ref{lemma:focalisationLemma} we conclude. \qed \end{proofDinTheorem} It is straightforward to see that horizontal composition has a left and a right unit, namely the identity (di)natural transformation on the appropriate identity functor. \begin{theorem} Let $T \colon \B^\alpha \to \C$, $S \colon \B^\beta \to \C$ be functors, and let $\varphi \colon T \to S$ be a transformation of any type. Then \[ \hc \varphi {\id{\id \C}} = \varphi. \] If $\varphi$ is dinatural in its $i$-th variable, for an appropriate $i$, then also \[ \HC {\id{\id \B}} \varphi i = \varphi. \] \end{theorem} \begin{proof} Direct consequence of the definition of horizontal composition.\qed \end{proof} \subsection{Associativity of horizontal composition}\label{section associativity horizontal composition} Associativity is a crucial property of any respectable algebraic operation. In this section we show that our notion of horizontal composition is at least this respectable. We begin by considering classical dinatural transformations $\fst \colon \F \to \G$, $\snd \colon \H \to \K$ and $\trd \colon \U \to \V$, for $\F,\G,\H,\K,\U,\V \colon \Op\C \times \C \to \C$ functors, all of type $ \begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd} $. \begin{theorem}\label{thm:associativity simple case} $\hc {\left( \hc \fst \snd \right)} \trd = \hc \fst {\left( \hc \snd \trd \right)}$. \end{theorem} \begin{proof} We first prove that the two transformations have the same domain and codomain functors. Since they both depend on one variable, this also immediately implies that they have the same type. We have $\hc \fst \snd \colon \H(\Op\G,\F) \to \K(\Op\F,\G)$, hence \[ \hc {\left( \hc \fst \snd \right)} \trd \colon \U\Bigl(\Op{\K(\Op\F,\G)},\H(\Op\G,\F)\Bigr) \to \V\Bigl( \Op{\H(\Op\G,\F)}, \K(\Op\F,\G) \Bigr). \] Notice that $\Op{\K(\Op\F,\G)} = \Op\K (\F, \Op \G)$ and $\Op{\H(\Op\G,\F)} = \Op\H (\G, \Op \F)$. Next, we have $\hc \snd \trd \colon \U(\Op\K, \H) \to \V(\Op\H, \K)$. Given that $\U(\Op\K,\H), \V(\Op\H,\K) \colon \C^{[+,-,-,+]} \to \C$, we have \[ \hc \fst {\left( \hc \snd \trd \right)} \colon \underbrace{\U(\Op\K, \H)(\F,\Op\G,\Op\G,\F)}_{\U\bigl(\Op\K(\F,\Op\G),\H(\Op\G,\F)\bigr)} \to \underbrace{\V(\Op\H, \K)(\G,\Op\F,\Op\F,\G)}_{\V\bigl( \Op\H(\G,\Op\F), \K(\Op\F,\G) \bigr)}. \] This proves that $\hc {\left( \hc \fst \snd \right)} \trd$ and $\hc \fst {\left( \hc \snd \trd \right)}$ have the same signature. Only the equality of the single components is left to show. Fix then an object $A$ in $\C$. Figure~\ref{fig:Associativity} shows how to pass from $(\trd \ast \snd) \ast \fst$ to $\trd \ast (\snd \ast \fst)$ by pasting three commutative diagrams. In order to save space, we simply wrote ``$\H(\G,\F)$'' instead of the full ``$\H(\Op\G(A,A),\F(A,A))$'', and similarly for all the other instances of functors in the nodes of the diagram in Figure~\ref{fig:Associativity}; we also dropped the subscript for the components of $\fst$, $\snd$ and $\trd$ when they appear as arrows, that is, we simply wrote $\fst$ instead of $\fst_A$, since there is only one object involved and there is no risk of confusion.
\qed \end{proof} \begin{sidewaysfigure} \footnotesize \begin{tikzpicture} \matrix[column sep=1cm, row sep=1cm]{ \node(1) {$\U(\K(\F,\G),\H(\G,\F))$}; & & \node(2) {$\U(\K(\F,\F),\H(\F,\F))$}; & \node(3) {$\U(\H(\F,\F),\H(\F,\F))$};\\ \node(4) {$\U(\K(\F,\F),\H(\G,\F))$}; & & \node(5) {$\U(\H(\F,\F),\H(\F,\F))$}; & \node(6) {$\V(\H(\F,\F),\H(\F,\F))$}; & \node(7) {$\V(\H(\F,\F),\K(\F,\F))$}; & & \node(8) {$\V(\H(\G,\F),\K(\F,\G))$};\\ \node(9) {$\U(\H(\F,\F),\H(\G,\F))$};&&&& \node(10){$\V(\H(\G,\F),\H(\F,\F))$};&& \node(11){$\V(\H(\G,\F),\K(\F,\F))$};\\ & & \node(12){$\U(\H(\G,\F),\H(\G,\F))$}; & \node(13){$\V(\H(\G,\F),\H(\G,\F))$};\\ }; \graph[use existing nodes]{ 1 ->["$\U(\K(1,\fst),\H(\fst,1))$"] 2 ->["$\U(\snd,1)$"] 3 ->["$\trd$"] 6; 1 ->["$\U(\K(1,\fst),1)$"'] 4 ->["$\U(\snd,1)$"'] 9 ->["$\U(1,\H(\fst,1))$",sloped] 5 ->["$\trd$"] 6; 6 ->["$\V(1,\snd)$"] 7 ->["$\V(\H(\fst,1),\K(1,\fst))$"] 8; 6 ->["$\V(\H(\fst,1),1)$",sloped] 10 ->["$\V(1,\snd)$"] 11 ->["$\V(1,\K(1,\fst))$"'] 8; 9 ->["$\U(\H(\fst,1),1)$"',sloped] 12 ->["$\trd$"] 13 ->["$\V(1,\H(\fst,1))$"', sloped] 10; }; \path (2) -- node[anchor=center,red]{\footnotesize Functoriality of $\U$} (5); \path (9) -- node[anchor=center,red]{\footnotesize Dinaturality of $\trd$} (10); \path (10) --node[anchor=center,red]{\footnotesize Functoriality of $\V$} (8); \end{tikzpicture} \caption{Associativity of horizontal composition in the classical case. The upper leg is $(\trd \ast \snd) \ast \fst $, whereas the lower one is $\trd \ast (\snd \ast \fst)$.} \label{fig:Associativity} \end{sidewaysfigure} We can now start discussing the general case for transformations with an arbitrary number of variables; we shall prove associativity by reducing ourselves to Theorem~\ref{thm:associativity simple case} using focalisation (see Definition~\ref{def:focalisation def}). For the rest of this section, fix transformations $\fst$, $\snd$ and $\trd$, dinatural in all their variables, with signatures: \begin{itemize} \item $\fst \colon \fstDom \to \fstCoDom$, for $\fstDom \colon \C^\fstDomVar \to \C$ and $\fstCoDom \colon \C^\fstCoDomVar \to \C$, of type $ \begin{tikzcd}[cramped,sep=small] \length\fstDomVar \ar[r,"\fstTypL"] & \fstVarNo & \ar[l,"\fstTypR"'] \length\fstCoDomVar \end{tikzcd} $; \item $\snd \colon \sndDom \to \sndCoDom$, for $\sndDom \colon \C^\sndDomVar \to \C$ and $\sndCoDom \colon \C^\sndCoDomVar \to \C$, of type $ \begin{tikzcd}[cramped,sep=small] \length\sndDomVar \ar[r,"\sndTypL"] & \sndVarNo & \ar[l,"\sndTypR"'] \length\sndCoDomVar \end{tikzcd} $; \item $\trd \colon \trdDom \to \trdCoDom$, for $\trdDom \colon \C^\trdDomVar \to \C$ and $\trdCoDom \colon \C^\trdCoDomVar \to \C$, of type $ \begin{tikzcd}[cramped,sep=small] \length\trdDomVar \ar[r,"\trdTypL"] & \trdVarNo & \ar[l,"\trdTypR"'] \length\trdCoDomVar \end{tikzcd} $ \end{itemize} For sake of simplicity, let us fix the name of the variables for $\fst$ as $\fstVariables{}{} = (A_1,\dots,A_n)$, for $\snd$ as $\sndVariables{}{} = (B_1,\dots,B_m)$ and for $\trd$ as $\trdVariables{}{} = (C_1,\dots,C_l)$. In this spirit we also fix the variables of the horizontal compositions, so for $i \in \{1,\dots,\sndVarNo\}$, the variables of $\HC \fst \snd i$ are \[ \sndVariables i {\fstVariables{}{}} = B_1,\dots,B_{i-1},A_1,\dots,A_n,B_{i+1},\dots,B_m \] and, similarly, for $j \in \{1,\dots,\trdVarNo\}$ the variables of $\HC \snd \trd j$ are $ \trdVariables j {\sndVariables{}{}}. 
$ The theorem asserting associativity of horizontal composition, which we prove in the rest of this section, is the following. \begin{theorem}\label{thm:associativityTheorem} For $i \in \{1,\dots,\sndVarNo\}$ and $j \in \{1,\dots,\trdVarNo\}$, \[ \HC {\left( \HC \fst \snd i \right)} \trd j = \HC \fst {\left(\HC \snd \trd j\right)} {j-1+i} \] or, in alternative notation, \begin{equation}\label{eqn:associativity equation} \HC {\left( \HC \fst \snd {B_i} \right)} \trd {C_j} = \HC \fst {\left(\HC \snd \trd {C_j}\right)} {B_i}. \end{equation} \end{theorem} We shall require the following, rather technical, Lemma, whose proof is a matter of identity checking. \begin{lemma}\label{lemma:associativity techincal lemma} Let $\Phi = (\Phi_{V_1,\dots,V_p})$ and $\Psi = (\Psi_{W_1,\dots,W_q})$ be transformations in $\C$ such that $\Psi$ is dinatural in $W_s$, for $s \in \{1,\dots,q\}$. Let $V_1,\dots,V_{r-1}$, $V_{r+1},\dots,V_p$, $W_1,\dots,W_{s-1}$, $W_{s+1},\dots,W_q$ be objects of $\C$, and let $\bar \Phi {V_r}$ and $\bar \Psi {W_s}$ be the focalisation of $\Phi$ and $\Psi$ in its $r$-th and $s$-th variable respectively using the fixed objects above. Let also $X$ be an object of $\C$. Then \begin{enumerate}[(i)] \item $ \left( \hc { \bar \Phi {V_r} } { \bar \Psi {W_s} } \right)_X = \left( \HC \Phi \Psi {W_s} \right)_{W_1,\dots,W_{s-1},V_1,\dots,V_{r-1},X,V_{r+1},\dots,V_p,W_{s+1},\dots,W_q} = \left( \bar {\HC \Phi \Psi {W_s}} {V_r} \right)_X $ \item $\mathit{(co)dom}\left( \bar {\HC \Phi \Psi {W_s}} {V_r} \right) (x,y) = \mathit{(co)dom}\left( \hc {\bar \Phi {V_r}} {\bar \Psi {W_s}} \right) (x,y,y,x) $ for any morphisms $x$ and $y$. \end{enumerate} \end{lemma} \begin{remark}\label{rem:associativity techincal lemma remark} Part (i) asserts an equality between \emph{morphisms} and not \emph{transformations}, as $ \hc { \bar \Phi {V_r} } { \bar \Psi {W_s} }$ and $\HC \Phi \Psi {W_s}$ have different types and even different domain and codomain functors. \end{remark} \begin{proofAssociativity} One can show that $\HC {\left( \HC \fst \snd {B_i} \right)} \trd {C_j}$ and $ \HC \fst {\left(\HC \snd \trd {C_j}\right)} {B_i}$ have the same domain, codomain and type simply by computing them and observing they coincide. In particular, notice that they both depend on the following variables: $\trdVariables j {\sndVariables i {\fstVariables{}{}}}$. Here we show that their components are equal. Let us fix then $ C_1,\dots, C_{j-1}$, $B_1$, $\dots$, $B_{i-1}$, $A_1$, $\dots$, $A_{k-1}$, $X$, $A_{k+1}$, $\dots$, $A_n$, $B_{i+1}$, $\dots$, $B_m$, $C_{j+1}$, $\dots$, $C_l$ objects in $\C$. Writing just $V$ for this long list of objects, we have, by Lemma~\ref{lemma:associativity techincal lemma}, that \[ \left(\HC \fst {\left(\HC \snd \trd {C_j}\right)} {B_i}\right)_V = \left( \hc {\bar \fst {A_k}} {\bar {\HC \snd \trd {C_j}} {B_i}} \right)_X . 
\] Now, we cannot apply again Lemma~\ref{lemma:associativity techincal lemma} to $\bar {\HC \snd \trd {C_j}} {B_i}$ because of the observation in Remark~\ref{rem:associativity techincal lemma remark}, but we can use the definition of horizontal composition to write down explicitly the right-hand side of the equation above: it is the morphism \[ \codom{ \bar {\HC \snd \trd {C_j}} {B_i} } (\id{\bar \F {} (X,X)} , (\bar \fst {A_k})_X) \circ \left( \bar{\HC \snd \trd {C_j}}{B_i} \right)_{\bar \F {} (X,X)} \circ \dom{ \bar {\HC \snd \trd {C_j}} {B_i} }( {(\bar \fst {A_k})}_X ,\id{\bar \F {} (X,X)}) \] (Remember that $\bar \fst {A_k} \colon \bar \F {A_k} \to \bar \G {A_k}$, here we wrote $\bar F {} (X,X)$ instead of $\bar F {A_k}(X,X)$ to save space.) Now we \emph{can} use Lemma~\ref{lemma:associativity techincal lemma} to ``split the bar'', as it were: \begin{multline*} \codom{\hc {\bar \snd {B_i}} {\bar \trd {C_j}}} \bigl( {(\bar \fst {A_k})}_X, \id{\bar \F {} (X,X)}, \id{\bar \F {} (X,X)}, {(\bar \fst {A_k})}_X \bigr) \circ \\[.5em] \left( \hc {\bar \snd {B_i}} {\bar \trd {C_j}} \right)_{\bar \F {} (X,X)} \circ \\[.5em] \dom{\hc {\bar \snd {B_i}} {\bar \trd {C_j}}} \bigl(\id{\bar \F {} (X,X)}, {(\bar \fst {A_k})}_X, {(\bar \fst {A_k})}_X, \id{\bar \F {} (X,X)}\bigr) \end{multline*} This morphism is equal, by definition of horizontal composition, to \[ \left( \hc {\bar \fst {A_k}} {\left( \hc {\bar \snd {B_i}} {\bar \trd {C_j}} \right)} \right)_X \] which, by Theorem~\ref{thm:associativity simple case}, is the same as \[ \left( \hc {\left( \hc {\bar \fst {A_k}} {\bar \snd {B_i}} \right)} {\bar \trd {C_j}} \right)_X. \] An analogous series of steps shows how this is equal to $\left( \HC {\left(\HC \fst \snd {B_i}\right)} \trd {C_j} \right)_V$, thus concluding the proof. \qed \end{proofAssociativity} \subsection{Whiskering and horizontal composition} Let $\phi \colon F \to G$, with $F \colon \C^\alpha \to \C$ and $G \colon \C^\beta \to \C$, and $\psi \colon H \to K$, with $H,K \colon \Op\C \times \C\to \C$, be dinatural transformations of type $\begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}$ and $\begin{tikzcd}[cramped,sep=small] 2 \ar[r] & 1 & \ar[l] 2 \end{tikzcd}$ respectively. 
Then $\hc \phi \psi \colon H(\Op G , F) \to K(\Op F, G)$ is of type \[ \begin{tikzcd} \length\beta + \length\alpha \ar[r,"{[\tau,\sigma]}"] & n & \ar[l,"{[\sigma,\tau]}"'] \length\alpha + \length\beta \end{tikzcd} \] and its general component $(\hc \phi \psi)_{\bfA}$, with $\bfA=(A_1,\dots,A_n)$, is either leg of \[ \begin{tikzcd}[column sep=2em] & H\bigl( F(\bfA\sigma), F(\bfA\sigma) \bigr) \ar[r,"\psi_{F(\bfA\sigma)}"] & K \bigl( F(\bfA\sigma), F(\bfA\sigma) \bigr) \ar[dr,"{K\bigl( F(\bfA\sigma),\phi_\bfA \bigr)}"] \\ H \bigl( G(\bfA\tau), F(\bfA\sigma) \bigr) \ar[ur,"{H \bigl( \phi_\bfA , F(\bfA\sigma) \bigr)}"] \ar[dr,"{ H \bigl( G(\bfA\tau), \phi_\bfA \bigr)}"'] & & & K \bigl( F(\bfA\sigma), G(\bfA\tau) \bigr) \\ & H \bigl( G(\bfA\tau), G(\bfA\tau) \bigr) \ar[r,"\psi_{G(\bfA\tau)}"'] & K \bigl( G(\bfA\tau), G(\bfA\tau) \bigr) \ar[ur,"{K \bigl( \phi_\bfA, G(\bfA\tau) \bigr)}"'] \end{tikzcd} \] If we look at the upper leg of the above hexagon, we may wonder: is it, in fact, the general component of the vertical composition \begin{equation}\label{eqn:whiskering vs horizontal composition} \bigl(\hc {(F,\phi)} K \bigr) \circ \bigl( \hc F \psi \bigr) \circ \bigl( \hc {(\phi,F)} H \bigr), \end{equation} where by $\hc {(F,\phi)} K$ we mean the simultaneous horizontal composition of $\id K$ with $\id F$ in its first variable and with $\phi$ in its second, namely $\bigl(\HC {\id F} {\id K} 1 \bigr) \circ \bigl( \HC \phi {\id K} 2 \bigr) = \bigl( \HC \phi {\id K} 2 \bigr) \circ \bigl( \HC {\id F} {\id K} 1 \bigr)$? In other words, is horizontal composition a vertical composite of \emph{whiskerings}, analogously to the classical natural case? \emph{No}, but it is intimately related to it, as we show now by computing the composite~\eqref{eqn:whiskering vs horizontal composition}. Let $\bfA=(A_1,\dots,A_n)$, $\bfB=(B_1,\dots,B_{\length\alpha})$ and $\bfC=(C_1,\dots,C_{\length\alpha})$. Then \begin{multline*} \bigl(\hc {(\phi,F)} H \bigr)_{\bfA \concat \bfB} = \begin{tikzcd}[ampersand replacement=\&,column sep=4em] H \bigl( G(\bfA\tau), F(\bfB) \bigr) \ar[r,"{H\bigl( \phi_\bfA, F(\bfB)\bigr)}"] \& H \bigl( F(\bfA\sigma), F(\bfB) \bigr) \end{tikzcd} \\ (\hc F \psi)_\bfC = \begin{tikzcd}[ampersand replacement=\&,column sep=4em] H \bigl( F(\bfC), F(\bfC) \bigr) \ar[r,"\psi_{F(\bfC)}"] \& K \bigl( F(\bfC), F(\bfC) \bigr) \end{tikzcd} \end{multline*} Therefore, upon composing $\hc {(\phi,F)} H $ with $\hc F \psi$, we have to impose $\bfA\sigma = \bfC = \bfB$, which means $A_{\sigma_i} = C_i = B_i$ for all $i \in \length\alpha$. If we take also $\bfD=(D_1,\dots,D_{\length\alpha})$ and $\bfE=(E_1,\dots,E_n)$, we have: \begin{multline*} \Bigl(\bigl( \hc F \psi \bigr) \circ \bigl( \hc {(\phi,F)} H \bigr)\Bigr)_{\bfA} = \begin{tikzcd}[ampersand replacement=\&,column sep=6em] H \bigl( G(\bfA\tau), F(\bfA\sigma) \bigr) \ar[r,"{\psi_{F(\bfA\sigma)} \circ H \bigl( \phi_\bfA, F(\bfA\sigma)\bigr)}"] \& K \bigl( F(\bfA\sigma), F(\bfA\sigma) \bigr) \end{tikzcd} \\ \bigl(\hc {(F,\phi)} K \bigr)_{\bfD \concat \bfE} = \begin{tikzcd}[ampersand replacement=\&,column sep=3em] K \bigl( F(\bfD), F(\bfE\sigma) \bigr) \ar[r,"{K\bigl( F(\bfD), \phi_\bfE \bigr)}"] \& K \bigl( F(\bfD), G(\bfE\tau) \bigr) \end{tikzcd} \end{multline*} So, to compose $\bigl( \hc F \psi \bigr) \circ \bigl( \hc {(\phi,F)} H \bigr)$ with $\bigl(\hc {(F,\phi)} K \bigr)$, we only need $\bfD = \bfA\sigma = \bfE\sigma$. Crucially, for all $k \in n \setminus \mathrm{Im}(\sigma)$, $A_k$ and $E_k$ need not to be equal. 
This means that, if $l=\length{n \setminus \mathrm{Im}(\sigma)}$, the composite~\eqref{eqn:whiskering vs horizontal composition} is a transformation depending on $(n-l) + 2l$ variables (equivalently, $n+l$ variables), where the two instances of $\phi$, at the beginning and at the end of the general component of~\eqref{eqn:whiskering vs horizontal composition}, are computed in variables $\bfA$ and $\bfE$ respectively, where $A_{\sigma i} = E_{\sigma i}$ for all $i \in \length\alpha$, while $A_k$ and $E_k$ are allowed to differ for $k \in n \setminus \mathrm{Im}(\sigma)$. Instead, the horizontal composition $\hc \phi \psi$ uses \emph{the same} general component of $\phi$ in both instances, as it is obtained by ``applying the dinaturality condition of $\psi$ to $\phi_\bfA$'': it requires a stronger constraint, namely $A_k = E_k$ for all $k\in n$. This means that $\hc \phi \psi$ is a sub-family of~\eqref{eqn:whiskering vs horizontal composition}, and it coincides with it precisely when $\sigma$ is surjective. Analogously, $\hc \phi \psi$ is in general a sub-family of \[ \bigl(\hc {(\phi,G)} K \bigr) \circ \bigl( \hc G \psi \bigr) \circ \bigl( \hc {(G,\phi)} H \bigr) \] and they coincide precisely when $\tau$ is surjective. This issue is part of the wider problem of the incompatibility of horizontal and vertical composition, which we discuss in the next section. \begin{remark} If $\phi$ and $\psi$ are classical dinatural transformations as in Definition~\ref{def:horCompDef}, $\hc \phi \psi$ is indeed equal to the vertical composite of whiskerings~\eqref{eqn:whiskering vs horizontal composition}. In this case, one can alternatively infer the dinaturality of $\hc \phi \psi$ (Theorem~\ref{thm:horCompTheorem}) from the easy-to-check dinaturality of $\hc {(\phi,F)} H$, $\hc F \psi$ and $\hc {(F,\phi)} K$ by applying Theorem~\ref{theorem:compositionality with complicated graphs}: one can draw the composite graph of the three whiskerings (Figure~\ref{fig:composite graph whiskerings}) and notice that the resulting Petri Net is acyclic. The trellis of commutative diagrams in Figure~\ref{fig:DinaturalityHorizontalCompositionFigure}, which proves Theorem~\ref{thm:horCompTheorem}, is the algebraic counterpart of firing transitions in Figure~\ref{fig:composite graph whiskerings}: the dinaturality of $\phi$ corresponds to firing the top-left and bottom-right transitions, while the dinaturality of $\psi$ corresponds to firing the two transitions in the middle layer. \end{remark} \begin{figure} \caption{The composite graph of the three whiskerings in~\eqref{eqn:whiskering vs horizontal composition}}\label{fig:composite graph whiskerings} \end{figure} \subsection{(In?)Compatibility with vertical composition}\label{section compatibility} Looking at the classical natural case, there is one last property to analyse: the \emph{interchange law}~\cite{mac_lane_categories_1978}.
In the following situation, \[ \begin{tikzcd}[column sep=1.5cm] \A \arrow[r, out=60, in=120, ""{name=U, below}] \arrow[r, ""{name=D, }] \arrow[r,phantom,""{name=D1,below}] \arrow[r, bend right=60,""{name=V,above}] & \B \arrow[r, bend left=60, ""{name=H, below}] \arrow[r,""{name=E}] \arrow[r,phantom,""{name=E1,below}] \arrow[r, bend right=60,""{name=K,above}] & \C \arrow[Rightarrow, from=U, to=D,"\phi"] \arrow[Rightarrow, from=D1, to=V,"\psi"] \arrow[Rightarrow, from=H, to=E, "\phi'"] \arrow[Rightarrow, from=E1,to=K, "\psi'"] \end{tikzcd} \] with $\phi,\phi',\psi$ and $\psi'$ natural transformations, we have: \begin{equation}\label{interchange law} \hc {(\psi \circ \phi)} {(\psi' \circ \phi')} = (\hc \psi {\psi'}) \circ (\hc \phi {\phi'}). \tag{$\dagger$} \end{equation} The interchange law is the crucial property that makes $\C\mathrm{at}$ a 2-category. It is then natural to wonder whether a similar property also holds for the more general notion of horizontal composition for dinatural transformations. As we know all too well, dinatural transformations are far from being as well-behaved as natural transformations, given that they do not, in general, vertically compose; on the other hand, their horizontal composition always works just fine. Are these two operations compatible, at least when vertical composition is defined? The answer, unfortunately, is \emph{No}, at least if by ``compatible'' we mean ``compatible as in the natural case (\ref{interchange law})''. Indeed, consider classical dinatural transformations \begin{equation}\label{compatibility situation} \begin{tikzcd}[column sep=0.75cm] \Op\A \times \A \arrow[rr, bend left=60, ""{name=U,below}] \arrow[rr, phantom, bend left=60, "F"{above}] \arrow[rr, "G"{name=D,anchor=center,fill=white,pos=0.34}] \arrow[rr, bend right=60,""{name=V,above}] \arrow[rr, bend right=60,"H"{below}] & & \B & \Op\B \times \B \arrow[rr, bend left=60, ""{name=H, below,}] \arrow[rr,phantom, bend left=60, "J"{above}] \arrow[rr,"K"{name=E,anchor=center,fill=white},pos=0.35] \arrow[rr, bend right=60,""{name=K,above}] \arrow[rr,phantom, bend right=60,"L"{below}] & & \C \arrow[Rightarrow, from=U, to=D,"\phi"] \arrow[Rightarrow, from=D, to=V,"\psi"] \arrow[Rightarrow, from=H, to=E, "\phi'"] \arrow[Rightarrow, from=E,to=K, "\psi'"] \end{tikzcd} \end{equation} such that $\psi\circ\phi$ and $\psi'\circ\phi'$ are dinatural. Then \[ \hc \phi {\phi'} \colon J(\Op G,F) \to K(\Op F,G) \qquad \hc \psi {\psi'} \colon K(\Op H,G) \to L(\Op G,H) \] which means that $\hc \phi {\phi'}$ and $\hc \psi {\psi'}$ are not even composable as families of morphisms, as the codomain of the former is not the domain of the latter. The problem stems from the fact that the codomain of the horizontal composition $\hc \phi {\phi'}$ depends on the codomain of $\phi'$ and also on the domain \emph{and} codomain of $\phi$, which are not the same as the domain and codomain of $\psi$: indeed, in order to be vertically composable, $\phi$ and $\psi$ must share only one functor, and not both. This does not happen in the natural case: the presence of mixed variance, which forces us to consider the codomain of $\phi$ in $\hc \phi {\phi'}$ and so on, is the real culprit here. The failure of (\ref{interchange law}) is not completely unexpected: after all, our definition of horizontal composition is strictly more general than the classical one for natural transformations, as it considerably enlarges the class of functors and transformations to which it can be applied.
Hence it is not surprising that this comes at the cost of losing one of its properties, albeit a desirable one. Of course, one can wonder whether a different definition of horizontal composition exists for which (\ref{interchange law}) holds. Although we cannot exclude this possibility \emph{a priori}, the fact that ours not only is a very natural generalisation of the classical definition for natural transformations (as it follows the same idea, see discussion after Definition~\ref{def:horizontal composition natural transformations}), but also enjoys associativity and unitarity, leads us to think that we \emph{do} have the right definition at hand. (As a side point, behold Figure~\ref{fig:DinaturalityHorizontalCompositionFigure}: its elegance cannot be the fruit of a wrong definition!) What we suspect, instead, is that a different \emph{interchange law} should be formulated, one that can accommodate the hexagonal shape of the dinaturality condition. Indeed, what proves (\ref{interchange law}) in the natural case is the naturality of either $\phi'$ or $\psi'$. For instance, the following diagrammatic proof uses the latter, for $\phi \colon F \to G$, $\psi \colon G \to H$, $\phi' \colon J \to K$, $\psi' \colon K \to L$ natural: \[ \begin{tikzcd}[row sep=2.5em,column sep=2.5em,font=\small] JF(A) \ar[r,"{\phi'_{F(A)}}"] \ar[rd,dashed,"{(\hc \phi {\phi'})_A}"'] & KF(A) \ar[r,"\psi'_{F(A)}"] \ar[d,"K(\phi_A)"'] & LF(A) \ar[d,"L(\phi_A)"] \\ & KG(A) \ar[r,"\psi'_{G(A)}"] \ar[dr,dashed,"{(\hc \psi {\psi'})}_A"'] & LG(A) \ar[d,"L(\psi_A)"] \\ & & LH(A) \end{tikzcd} \] (The upper leg of the diagram is $ \hc {(\psi \circ \phi)} {(\psi' \circ \phi')}$.) The naturality condition of $\psi'$ is what causes $\phi$ and $\psi'$ to swap places, allowing now $\phi$ and $\phi'$ to interact with each other via horizontal composition; and similarly for $\psi$ and $\psi'$. However, for $\phi, \psi, \phi',\psi'$ dinatural as in (\ref{compatibility situation}), this does not happen: \[ \begin{tikzcd}[column sep=.5cm,font=\small] & & J(F,F) \ar[r,"\phi'"] & K(F,F) \ar[r,"\psi'"] & L(F,F) \ar[dr,"{L(1,\phi)}"] \\ & J(G,F) \ar[ur,"{J(\phi,1)}"] \ar[rrrr,dashed,"\hc {\phi} {({\psi'}\circ{\phi'})}"] & & & & L(F,G) \ar[dr,"{L(1,\psi)}"]\\ J(H,F) \ar[ur,"{J(\psi,1)}"] & & & & & & L(F,H) \end{tikzcd} \] Here, the upper leg of the diagram is again $\hc {(\psi \circ \phi)} {(\psi' \circ \phi')}$; we have dropped the subscripts of the transformations and we have written ``$J(H,F)$'' instead of ``$J(H(A,A),F(A,A))$'' to save space. The dinaturality conditions of $\phi'$ and $\psi'$ do not allow a place-swap for $\phi$ and $\phi'$ or for $\phi$ and $\psi'$; in fact, they cannot be applied at all! The only thing we can notice is that we can isolate $\phi$ from $\phi'$, obtaining the following: \[ \hc {(\psi \circ \phi)} {(\psi' \circ \phi')} = L(1,\psi) \circ \Bigl(\hc {\phi} {({\psi'}\circ{\phi'})}\Bigr) \circ J(\psi,1). \] Notice that the right-hand side is \emph{not} $\hc \psi {\Bigl(\hc {\phi} {({\psi'}\circ{\phi'})}\Bigr)}$, as one might suspect at first glance, simply because the domain of $\hc {\phi} {({\psi'}\circ{\phi'})}$ is not $J$ and its codomain is not $L$. It is clear then that the mere assumption that $\psi\circ\phi$ and $\psi'\circ\phi'$ are dinatural (for whatever reason) is not enough.
One chance of success could come from involving the graphs of our transformations; for example, if the composite graphs $\graph\psi \circ \graph\phi$ and $\graph{\psi'} \circ \graph{\phi'}$ are acyclic (so that the composites are dinatural, and for a ``good'' reason), then perhaps we could deduce a suitably more general, ``hexagonal'' version of (\ref{interchange law}) for dinatural transformations. It may also well be, of course, that there is simply no interchange law of any sort. This is still an open question, and a matter for further study. In the conclusions we shall make some additional comments in light of the calculus we will build in the rest of the article. \section{A category of partial dinatural transformations} Since dinatural transformations do not always compose, they do not form a category. However, the work done in Section~\ref{section vertical compositionality} permits us to define a category whose objects are functors of mixed variance and whose morphisms are transformations that are dinatural only in \emph{some} of their variables, as we shall see. A first attempt would be to construct $\fc \B \C$ by defining: \begin{itemize}\label{first attempt} \item objects: pairs $(\alpha, F \colon \B^\alpha \to \C)$; \item morphisms: a morphism $(\alpha, F) \to (\beta, G)$ would be a tuple $(\phi, \graph\phi, \Delta_\phi)$ where $\phi \colon F \to G$ is a transformation whose standard graph is $\graph\phi$, and if $n$ is the number of connected components of $\graph\phi$ (hence, the number of variables of $\phi$), then $\Delta_\phi \colon n \to \{0,1\}$ would be the ``discriminant'' function that tells us in which variables $\phi$ is dinatural: if $\Delta_\phi(i)=1$, then $\phi$ is dinatural in its $i$-th variable; \item composition: given $(\phi,\graph\phi,\Delta_\phi) \colon (\alpha,F) \to (\beta,G)$ and $(\psi,\graph\psi,\Delta_\psi) \colon (\beta,G) \to (\gamma,H)$ morphisms, their composite would be $(\psi\circ\phi,\graph{\psi\circ\phi},\Delta_{\psi\circ\phi})$, where $\psi\circ\phi$ is simply the vertical composition of transformations $\phi$ and $\psi$, $\graph{\psi\circ\phi}$ is its standard graph, and $\Delta_{\psi\circ\phi}(x)$ is defined to be $1$ if and only if the $x$-th connected component of $\graph\psi \circ \graph\phi$ is acyclic and $\phi$ and $\psi$ are dinatural in all variables involved in the $x$-th connected component of the composite graph $\graph{\psi}\circ\graph\phi$, in the sense of Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL}. \end{itemize} However, composition so defined fails to be associative in the $\Delta$ component.
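In pseudocode (a Haskell-style sketch with ad hoc names, meant only to make the rule in the last item explicit), the composite discriminant reads as follows; the counterexample that follows then shows why it cannot be associative.
\begin{verbatim}
-- Delta_{psi . phi}(x) = 1 iff the x-th connected component of the
-- composite graph is acyclic and phi, psi are dinatural in every one of
-- their variables meeting that component.
composeDelta
  :: (Int -> Bool)                  -- Delta_phi
  -> (Int -> Bool)                  -- Delta_psi
  -> (Int -> ([Int], [Int], Bool))  -- x |-> (variables of phi involved,
                                    --        variables of psi involved,
                                    --        is the component acyclic?)
  -> (Int -> Bool)                  -- Delta_{psi . phi}
composeDelta dPhi dPsi component x =
  let (phiVars, psiVars, acyclic) = component x
  in  acyclic && all dPhi phiVars && all dPsi psiVars
\end{verbatim}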
Suppose we have three consecutive transformations $\phi$, $\psi$ and $\chi$, dinatural in all their variables, where \[ \graph\phi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ & \node [category] (1) {}; & & & & \node [opCategory] (6) {}; \\ & \node [component] (A) {}; & & & & \node [component] (B) {};\\ \node [category] (2) {}; & & \node [category] (3){}; & & \node [opCategory] (5) {}; & & \node [opCategory] (7) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 5 -> B -> 6; 7 -> B; }; \end{tikzpicture} \quad \graph\psi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node [category] (2) {}; & & \node [category] (3){}; & & \node [opCategory] (5) {}; & & \node [opCategory] (7) {};\\ \node [component] (C) {}; & & & \node [component] (D) {}; & & &\node [component] (E) {}; \\ \node [category] (4) {}; & & & & & & \node [opCategory] (8) {};\\ }; \graph[use existing nodes]{ 2 -> C -> 4; 3 -> D -> 5; 8 -> E -> 7; }; \end{tikzpicture} \quad \graph\chi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node[category](2){}; & & & & \node[opCategory](5){}; \\ & & \node[component](B){};\\ }; \graph[use existing nodes]{ 2 -> B -> 5; }; \end{tikzpicture} \] Of course vertical composition of transformations \emph{is} associative, therefore $(\chi \circ \psi) \circ \phi = \chi \circ (\psi \circ \phi)$ and $\graph{(\chi \circ \psi) \circ \phi} = \graph{\chi \circ (\psi \circ \phi)}$. Yet, $\Delta_{(\chi \circ \psi) \circ \phi} \ne \Delta_{\chi \circ (\psi \circ \phi)}$: indeed, by computing $\graph\chi \circ \graph\psi$ and then collapsing the connected components, we obtain \[ \graph{\chi\circ\psi} = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node [category] (1) {}; & \node[category] (7) {}; & & \node[opCategory] (8) {}; & \node[opCategory] (6) {}; \\ & & \node[component](D){}; \\ & & \node[component](B){};\\ }; \graph[use existing nodes]{ 1 -> B -> 6; 7 -> D -> 8; }; \end{tikzpicture} \quad \text{hence } \graph{\chi \circ \psi} \circ \graph\phi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ &\node[category](1){}; & & & &\node[opCategory](6){};\\ &\node[component](A){};& & & &\node[component](C){};\\ \node[category](2){}; & &\node[category](7){}; & &\node[opCategory](8){}; & &\node[opCategory](5){};\\ & & &\node[component](D){};\\ & & &\node[component](B){};\\ }; \graph[use existing nodes]{ 1 -> A -> 2 -> B ; B -> 5 -> C -> 6; A -> 7 -> D -> 8 -> C; }; \end{tikzpicture} \] Since $\graph{\chi \circ \psi} \circ \graph\phi$ is acyclic, we have that $(\chi\circ\psi)\circ\phi$ is dinatural, thus $\Delta_{(\chi\circ\psi)\circ\phi} \colon 1 \to \{0,1\}$ is the function returning 1. 
On the other hand, however, we have \[ \graph\psi \circ \graph\phi = \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ & \node [category] (1) {}; & & & & \node [opCategory] (6) {}; \\ & \node [component] (A) {}; & & & & \node [component] (B) {};\\ \node [category] (2) {}; & & \node [category] (3){}; & & \node [opCategory] (5) {}; & & \node [opCategory] (7) {};\\ \node [component] (C) {}; & & & \node [component] (D) {}; & & &\node [component] (E) {}; \\ \node [category] (4) {}; & & & & & & \node [opCategory] (8) {};\\ }; \graph[use existing nodes]{ 1 -> A -> {2,3}; 2 -> C -> 4; 3 -> D -> 5 -> B -> 6; 8 -> E -> 7 -> B; }; \end{tikzpicture} \quad \text{so } \graph{\psi\circ\phi} = \begin{tikzpicture} \matrix[column sep=3.5mm,row sep=0.4cm]{ \node [category] (1) {}; & & \node [opCategory] (2) {}; \\ & \node [component] (A) {}; \\ \node [category] (3) {}; & & \node [opCategory] (4) {}; \\ }; \graph[use existing nodes]{ 1 -> A -> 3; 4 -> A -> 2; }; \end{tikzpicture} \] which means that, when we glue together $\graph\chi$ and $\graph{\psi\circ\phi}$, we obtain: \[ \graph{\chi}\circ\graph{\psi\circ\phi}= \begin{tikzpicture} \matrix[column sep=3.5mm,row sep=0.4cm]{ \node[category](1){}; & & \node[opCategory](6){};\\ &\node[component](A){};\\ \node[category](2){}; & & \node[opCategory](5){};\\ &\node[component](B){};\\ }; \graph[use existing nodes]{ 1->A->2->B->5->A->6; }; \end{tikzpicture} \] which is cyclic, so $\Delta_{\chi\circ(\psi\circ\phi)} \colon 1 \to \{0,1\}$ returns 0. What went wrong? In the graph of $\psi\circ\phi$ there is a path from the bottom-right node to the bottom-left node, which then extends to a cycle once connected to $\graph{\chi}$. That path was created upon collapsing the composite graph $\graph\psi \circ \graph\phi$ into $\graph{\psi \circ \phi}$: but in $\graph\psi \circ \graph\phi$ there was no path from the bottom-right node to the bottom-left one. And rightly so: to get a token moved to the bottom-left vertex of $\graph\psi \circ \graph\phi$, we have no need to put one token in the bottom-right vertex. Therefore, once we have formed $\graph{\psi\circ\phi}$, we have lost crucial information about which sources and sinks are \emph{directly} connected with which others, because we have collapsed the entire connected component into a single internal transition, with no internal places. As it happens, by computing the composite graph in a different order, instead, no new paths have been created, hence no cycles appear where there should not be. After all, by Theorem~\ref{theorem:acyclicity implies dinaturality GENERAL} we know that $\chi \circ \psi \circ \phi$ is dinatural because it can be written as the composite of two dinatural transformations, namely $\chi \circ \psi$ and $\phi$, whose composite graph is acyclic. This tells us that the crucial reason for which associativity fails in our preliminary definition of the category $\fc \B \C$ is that only keeping track of which connected component each of the arguments of the domain and codomain functors belongs to is not enough: we are forgetting too much information, namely the paths that directly connect the white and grey boxes. Hence our transformations will have to be equipped with more complicated Petri Nets than their standard graph that do contain internal places, and upon composition we shall simply link the graphs together along the common interface, without collapsing entire connected components into a single transition. 
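To fix ideas before the formal development, here is a minimal Haskell sketch of nets carrying input/output interfaces of places, and of composition by gluing along the common interface, in the spirit of the cospan construction given below. It is only a sketch under simplifying assumptions: all names are ad hoc, the labels of the two nets are assumed distinct, and the interfaces being glued are assumed to have the same length.
\begin{verbatim}
-- Places and transitions are labelled by Ints; each transition records
-- its input and output places.  inPorts/outPorts are interface places.
data Net = Net
  { places      :: [Int]
  , transitions :: [(Int, [Int], [Int])]  -- (name, inputs, outputs)
  , inPorts     :: [Int]
  , outPorts    :: [Int]
  } deriving Show

-- Glue n2 after n1, identifying the i-th output port of n1 with the
-- i-th input port of n2; no transition and no internal place is
-- collapsed, which is exactly the information the first attempt lost.
glue :: Net -> Net -> Net
glue n1 n2 = Net
  { places      = places n1 ++ filter (`notElem` inPorts n2) (places n2)
  , transitions = transitions n1 ++ map rename (transitions n2)
  , inPorts     = inPorts n1
  , outPorts    = map subst (outPorts n2)
  }
  where
    table            = zip (inPorts n2) (outPorts n1)
    subst p          = maybe p id (lookup p table)
    rename (t, i, o) = (t, map subst i, map subst o)
\end{verbatim}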
Recall from Definition~\ref{def:FBCF petri net} that a FBCF Petri Net is a net where all the places have at most one input and at most one output transition. We now introduce the category of FBCF Petri Nets, using the usual definition of morphism for bipartite graphs. \begin{definition} The category $\PN$ consists of the following data: \begin{itemize} \item objects are FBCF Petri Nets $N=(P_N,T_N,\inp{},\out{})$ together with a fixed ordering of its connected components. Such an ordering will allow us to speak about the ``$i$-th connected component'' of $N$; \item a morphism $f \colon N \to M$ is a pair of functions $(f_P,f_T)$, for $f_P \colon P_N \to P_M$ and $f_T \colon T_N \to T_M$, such that for all $t \in T_N$ \[ \inp{f_T(t)} = \{f_P(p) \mid p \in \inp t \} \quad \text{and} \quad \out{f_T(t)} = \{ f_P(p) \mid p \in \out t \}. \] \end{itemize} \end{definition} Note that if $f \colon N \to M$ is a morphism in $\PN$ then $f$ preserves (undirected) paths, hence for $C$ a connected component of $N$ we have that $f(C)$ is connected. In particular, if $f$ is an isomorphism then $f(C)$ is a connected component of $M$. \begin{remark}\label{remark:finite sets are in PN} We have a canonical inclusion $\finset \to \PN$ by seeing a set as a Petri Net with only places and no transitions. \end{remark} For a function $x \colon A \to B$ of sets we call $\parts x \colon \parts A \to \parts B$ the action of the covariant powerset functor on $x$, that is the function such that $\parts x (S) = \{x(a) \mid a \in S\} $ for $S \subseteq A$. We then have that if $f \colon N \to M$ is a morphism in $\PN$, then \[ \begin{tikzcd} T_N \ar[r,"f_T"] \ar[d,"\inp{}"'] & T_M \ar[d,"\inp{}"] \\ \parts{P_N} \ar[r,"\parts{f_P}"] & \parts{P_M} \end{tikzcd} \quad \text{and} \quad \begin{tikzcd} T_N \ar[r,"f_T"] \ar[d,"\out{}"'] & T_M \ar[d,"\out{}"] \\ \parts{P_N} \ar[r,"\parts{f_P}"] & \parts{P_M} \end{tikzcd} \] commute by definition of the category $\PN$. It turns out that $\PN$ admits pushouts, hence we can form a category $\cospan\PN$. \begin{proposition}\label{prop: pushouts in PN} Let $N,M,L$ be in $\PN$, and consider the following diagram in $\PN$: \begin{equation}\label{diagram: pushout in PN} \begin{tikzcd}[column sep=3em] (P_N,T_N,\Inp {N} ,\Out {N} ) \ar[r,"{(g_P,g_T)}"] \ar[d,"{(f_P,f_T)}"'] & (P_L,T_L,\Inp L,\Out L ) \ar[d,"{(k_P,k_T)}"] \\ (P_M,T_M,\Inp M,\Out M) \ar[r,"{(h_P,h_T)}"] & (P_Q,T_Q,\Inp Q,\Out Q) \end{tikzcd} \end{equation} where \[ \begin{tikzcd} P_N \ar[r,"g_P"] \ar[d,"f_P"'] & P_L \ar[d,"k_P"] \\ P_M \ar[r,"h_P"] & P_Q \end{tikzcd} \quad {and} \quad \begin{tikzcd} T_N \ar[r,"g_T"] \ar[d,"f_T"'] & T_L \ar[d,"k_T"] \\ T_M \ar[r,"h_T"] & T_Q \end{tikzcd} \] are pushouts and $\Inp Q \colon T_Q \to \parts{P_Q}$ is the unique map (the dashed one) that makes the following diagram commute: \[ \begin{tikzcd} \parts{P_N} \ar[dddr,bend angle=20,bend right,"\parts{f_P}"'] \ar[rrrd,bend angle=20,bend left,"\parts{g_P}"] \\ & T_N \ar[r,"g_T"] \ar[d,"f_T"'] \ar[ul,"\Inp N"] & T_L \ar[d,"k_T"] \ar[r,"\Inp L"] & \parts{P_L} \ar[dd,"\parts{k_P}"] \\ & T_M \ar[r,"h_T"] \ar[d,"\Inp M"] & T_Q \ar[dr,dashed] \\ & \parts{P_M} \ar[rr,"\parts{h_P}"] & & \parts{P_Q} \end{tikzcd} \] $\Out Q \colon T_Q \to \parts{P_Q}$ is defined analogously. Then (\ref{diagram: pushout in PN}) is a pushout. 
\end{proposition} \begin{proof} It is easily checked that (\ref{diagram: pushout in PN}) satisfies the definition of pushout.\qed \end{proof} Remember from Remark~\ref{remark:finite sets are in PN} that finite sets can be seen as places-only Petri Nets: if $S$ is a set and $N$ is an object in $\PN$, then a morphism $f \colon S \to N$ in $\PN$ is a pair of functions $f=(f_P,f_T)$ where $f_T$ is the empty map $\emptyset \colon \emptyset \to T_N$. Hence, by little abuse of notation, we will refer to $f_P$ simply as $f$. For later convenience, we consider the following subcategory of $\cospan\PN$, whose morphisms are essentially Petri Nets $N$ in $\PN$ with ``interfaces'', that is specific places seen as ``inputs'' and ``outputs'' of $N$. Composition will then be computed by ``gluing together'' two consecutive nets along the common interface. \begin{definition}\label{definition:graph category} The category $\gc$ consists of the following data: \begin{itemize} \item objects are lists in $\List\{+,-\}$; \item morphisms $f \colon \alpha \to \beta$ are (equivalence classes of) cospans in $\PN$ of the form \[ \begin{tikzcd} \length\alpha \ar[r,"\lambda"] & N & \ar[l,"\rho"'] \length\beta \end{tikzcd} \] where \begin{itemize}[leftmargin=*] \item $\lambda \colon \length\alpha \to P_N$ and $\rho \colon \length\beta \to P_N$ are injective functions, hence we can see $\length\alpha$ and $\length\beta$ as subsets of $P_N$; \item $\mathit{sources}(N) = \{ \lambda(i) \mid \alpha_i=+ \} \cup \{ \rho(i) \mid \beta_i = - \}$; \item $\mathit{sinks}(N) = \{ \lambda(i) \mid \alpha_i=- \} \cup \{ \rho(i) \mid \beta_i = + \}$. \end{itemize} Two such cospans are in the same class if and only if they differ by an isomorphism of Petri Nets on $N$ coherent with $\lambda$, $\rho$ and the ordering of the connected components of $N$; \item composition is that of $\cospan\PN$. \end{itemize} \end{definition} \begin{proposition} Composition in $\gc$ is well defined. \end{proposition} \begin{proof} Consider $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\lambda"] & M & \ar[l,"\rho"'] \length\beta \end{tikzcd} $ and $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\lambda'"] & L & \ar[l,"\rho'"'] \length\gamma \end{tikzcd} $ two morphisms in $\gc$. By Proposition~\ref{prop: pushouts in PN} then, their composite is given by computing the pushouts \[ \begin{tikzcd} \length\beta \ar[r,"\lambda'"] \ar[d,"\rho"'] & P_L \ar[d,"k_P"] \\ P_M \ar[r,"h_P"] & P_Q \end{tikzcd} \quad \text{and} \quad \begin{tikzcd} \emptyset \ar[r,"\emptyset"] \ar[d,"\emptyset"'] & T_L \ar[d,"k_T"] \\ T_M \ar[r,"h_T"] & T_Q \end{tikzcd} \] Now, the injectivity of $\rho$ and $\lambda'$ implies that $k_P$ and $h_P$ are also injective, as the pushout (in $\Set$) of an injective map against another yields injective functions. $P_Q$, in particular, can be seen as the quotient of $P_M + P_L$ where the elements of $P_M$ and $P_L$ with a common pre-image in $\length\beta$ are identified. Next, the pushout of the empty map against itself yields as a result the coproduct, thus $T_Q = T_M + T_L$ where $h_T$ and $k_T$ are the injections. Hence, the input function of the composite is defined as follows: \[ \begin{tikzcd}[row sep=0em,ampersand replacement=\&] T_M + T_L \ar[r,"\inp{}"] \& \parts{P_Q} \\ t \ar[r,|->] \& \begin{cases} \inp{_M(t)} & t \in T_M \\ \inp{_L(t)} & t \in T_L \end{cases} \end{tikzcd} \] and similarly for the output function. 
All in all, therefore, composition in $\gc$ is computed by ``gluing'' together the Petri Nets $M$ and $L$ along the common $\length\beta$-places; the resulting morphism of $\gc$ is
\[
\begin{tikzcd}[column sep=3em]
\length\alpha \ar[r,"h_P \circ \lambda"] & L \circ M & \length\gamma \ar[l,"k_P \circ \rho'"'].
\end{tikzcd}
\]
Now, for all $i \in \length\beta$, if $\beta_i=+$ then $\rho(i)$ is a sink of $M$ and $\lambda'(i)$ a source of $L$; if $\beta_i=-$ instead then $\rho(i)$ is a source of $M$ and $\lambda'(i)$ a sink of $L$: in every case, once we glue together $M$ and $L$ along the $\length\beta$-places to form the composite net $L \circ M$, these become internal places of $L \circ M$, with at most one input and one output transition each (depending on whether they are proper sources or sinks in $M$ and $L$). Hence $L \circ M$ is still an FBCF Petri Net, and
\begin{align*}
\mathit{sources}(L \circ M) &= \bigl(\mathit{sources} (M) \setminus \rho(\length\beta) \bigr) \cup \bigl(\mathit{sources} (L)\setminus \lambda'(\length\beta) \bigr) \\
&= \{h_P \circ \lambda(i) \mid \alpha_i = +\} \cup \{k_P \circ \rho'(i) \mid \gamma_i =-\}
\end{align*}
and similarly for $\mathit{sinks}(L \circ M)$. \qed \end{proof}
\paragraph{Generalised graphs of a transformation}
We can now start working towards the definition of a category $\fc \B \C$ of functors of mixed variance and transformations that are dinatural only on some of their variables; $\fc \B \C$ will be a category over $\gcf$ in the sense that transformations in $\fc \B \C$ will carry along, as part of their data, certain cospans in $\PN$. The category of graphs $\gcf$ will be built from $\fc \B \C$ by forgetting the transformations. As such, $\gcf$ will be defined \emph{after} $\fc \B \C$. It is clear how to define the objects of $\fc \B \C$: they will be pairs $(\alpha,F \colon \B^\alpha \to \C)$. Morphisms are less obvious to define, as we learnt in our preliminary attempt on p.~\pageref{first attempt}. A morphism $(\alpha,F) \to (\beta,G)$ will consist of a transformation $\phi \colon F \to G$ of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $, together with a morphism $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma "] & N & \length\beta \ar[l,"\overline\tau "'] \end{tikzcd} $ in $\gc$ coherent with the type of $\phi$, in the sense that the Petri Net $N$, under certain conditions, looks exactly like $\graph\phi$ as in Definition~\ref{def:standard graph} except that it allows for internal places as well. For example, if $\psi_1$ and $\psi_2$ are two arbitrary consecutive transformations, $\graph{\psi_2} \circ \graph{\psi_1}$ will be coherent with the type of $\psi_2\circ\psi_1$. In other words, $N$ will have $n$ connected components, its sources (sinks) are exactly the places corresponding to the positive (negative) entries of $\alpha$ and the negative (positive) entries of $\beta$, and elements in $\length\alpha$ ($\length\beta$) mapped by $\sigma$ ($\tau$) into the same $i \in \{1,\dots,n\}$ will belong to the $i$-th connected component of $N$. A priori $N$ can contain places with no inputs or outputs: this will be useful for the special case of $\phi = \id F$ as we shall see in Theorem~\ref{theorem: {B,C} is a category}; however, if all sources and sinks in $N$ are proper, then $N$ plays the role of a generalised $\graph{\phi}$.
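The gluing used to compose morphisms of $\gc$ can also be read as a concrete computation. The following Python sketch (a naive encoding, with hypothetical names and the simplifying assumption that interface places carry the same name in both nets; it is only an illustration) composes two FBCF nets along a common interface and computes the connected components of the result, which is exactly the information that the coherence conditions of the next definition extract from a generalised graph.
\begin{verbatim}
# Naive encoding (hypothetical names): a net is a pair
# (places, transitions) with transitions = {name: (inputs, outputs)},
# inputs/outputs being sets of place names.

def glue(net_m, net_l, interface):
    # compose along the interface: shared place names are identified,
    # everything else (assumed to have distinct names) stays disjoint,
    # so T_Q = T_M + T_L and the place set is the pushout
    places_m, trans_m = net_m
    places_l, trans_l = net_l
    assert interface <= places_m and interface <= places_l
    return places_m | places_l, {**trans_m, **trans_l}

def components(net):
    # connected components of the underlying undirected bipartite graph
    places, transitions = net
    adj = {x: set() for x in places | set(transitions)}
    for t, (ins, outs) in transitions.items():
        for p in ins | outs:
            adj[t].add(p)
            adj[p].add(t)
    seen, comps = set(), []
    for x in adj:
        if x in seen:
            continue
        comp, stack = set(), [x]
        while stack:
            y = stack.pop()
            if y not in comp:
                comp.add(y)
                stack.extend(adj[y] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# e.g. two one-transition nets sharing the interface place "b":
M = ({"a", "b"}, {"t1": ({"a"}, {"b"})})
L = ({"b", "c"}, {"t2": ({"b"}, {"c"})})
Q = glue(M, L, {"b"})
print(len(components(Q)))   # 1: the glued net is connected
\end{verbatim}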
\begin{definition}\label{definition: generalised graph of transformation} Let $\phi \colon F \to G$ be a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $. A cospan $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd} $ in $\PN$, which is a representative of a morphism in $\gc$ (hence $\overline\sigma$ and $\overline\tau$ are injective), is said to be \emph{coherent with the type of $\phi$} if and only if the following conditions are satisfied: \begin{itemize}[leftmargin=*] \item $N$ has $n$ connected components; \item for all $i \in \length\alpha$ and $j \in \length\beta$, $\overline\sigma(i)$ belongs to the $\sigma(i)$-th connected component of $N$ and $\overline\tau(j)$ belongs to the $\tau(j)$-th connected component of $N$. \end{itemize} In this case we say that $N$ is a \emph{generalised graph of $\phi$}. \end{definition} \begin{example}\label{example: graph and type are a generalised graph} For $\phi \colon F \to G$ a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $, recall that the set of places of $\graph\phi$ is $P = \length\alpha + \length\beta$. If we call $\injP {\length\alpha}$ and $\injP {\length\beta}$ the injections as in Definition~\ref{def:standard graph}, then \[ \begin{tikzcd} \length\alpha \ar[r,"\injP{\length\alpha}"] & \Gamma(\phi) & \ar[l,"\injP{\length\beta}"'] \length\beta \end{tikzcd} \] is indeed coherent with the type of $\phi$. Also $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ itself, seen as a cospan in $\PN$, is coherent with itself. \end{example} \begin{remark} If $N$ is a generalised graph of $\phi$ as in the notations of Definition~\ref{definition: generalised graph of transformation} and does not have any place which is a source and a sink at once, then $N$ has exactly $\length\alpha + \length\beta$ sources and sinks and their union coincides with the joint image of $\overline\sigma$ and $\overline\tau$. Moreover, $\overline\sigma$ and $\overline\tau$ have to make sure that they map elements of their domain into places belonging to the correct connected component: in this way, $N$ reflects the type of $\phi$ in a Petri Net like $\graph\phi$, with the possible addition of internal places. \end{remark} We shall now show how composition in $\gc$ preserves generalised graphs, in the following sense. \begin{proposition}\label{proposition: composition in G preservers generalised graphs} Let $\phi \colon F \to G$ and $\psi \colon G \to H$ be transformations of type, respectively, $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ and $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \length\gamma \ar[l,"\theta"'] \end{tikzcd} $; let also $u= \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd} $ and $v= \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\overline\eta"] & N' & \length\gamma \ar[l,"\overline\theta"'] \end{tikzcd} $ be cospans in $\PN$ coherent with the type of $\phi$ and $\psi$, respectively. 
Suppose the type of $\psi \circ \phi$ is given by
\[
\begin{tikzcd}
& & \length\gamma \ar[d,"\theta"] \\
& \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[d,"\xi"] \\
\length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l
\end{tikzcd}
\]
and that the composite in $\gc$ of $u$ and $v$ is given by
\begin{equation}\label{composite generalised graphs}
\begin{tikzcd}
& & \length\gamma \ar[d,"\overline\theta"] \\
& \length\beta \ar[d,"\overline\tau"'] \ar[r,"\overline\eta"] \ar[dr, phantom, "\ulcorner" very near start] & N' \ar[d,"\overline\xi"] \\
\length\alpha \ar[r,"\overline\sigma"] & N \ar[r,"\overline\zeta"] & N' \circ N
\end{tikzcd}
\end{equation}
Then $v\circ u$ is coherent with the type of $\psi \circ \phi$.
\end{proposition}
\begin{proof}
As we said in the discussion after Definition~\ref{definition:graph category}, $N' \circ N$ is obtained by gluing together $N$ and $N'$ along the $\length\beta$ places which they have in common. The number of connected components of $N' \circ N$ is indeed $l$ by construction. The morphisms $\overline\zeta$ and $\overline\xi$ in $\PN$ are pairs of injections that map each place and transition of $N$ and $N'$ to itself in the composite $N' \circ N$. This means that $\overline\zeta \overline\sigma(i)$ does belong to the $\zeta\sigma(i)$-th connected component of $N' \circ N$, as the latter contains the $\sigma(i)$-th c.c.\ of $N$; similarly $\overline\xi \overline\theta (j)$ belongs to the $\xi\theta(j)$-th c.c.\ of $N' \circ N$. \qed
\end{proof}
The morphisms of our generalised functor category $\fc \B \C$ will be, therefore, transformations $\phi$ equipped with a generalised graph $N$ and a discriminant function that tells us in which variables $\phi$ is dinatural. The Petri Net $N$ will not be arbitrary though: unless $\phi$ is an identity transformation, $N$ can be either $\graph\phi$ or $\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$, for some consecutive transformations $\phi_1,\dots,\phi_k$ such that $\phi = \phi_k \circ \dots \circ \phi_1$. Therefore, only transformations which are \emph{explicitly} recognisable as the composite of two or more families of morphisms are allowed to have an associated Petri Net, containing internal places, that is not their standard graph.
\begin{definition}\label{def: generalised functor category}
Let $\B$ and $\C$ be categories.
The \emph{generalised functor category} $\fc \B \C$ consists of the following data: \begin{itemize}[leftmargin=*] \item objects are pairs $(\alpha,F)$, for $\alpha \in \List\{+,-\}$ and $F \colon \B^\alpha \to \C$ a functor; \item morphisms $(\alpha,F) \to (\beta,G)$ are equivalence classes of tuples \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \] where: \begin{itemize}[leftmargin=*] \item $\phi \colon F \to G$ is a transformation of type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $, \item $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd} $ is a representative of a morphism in $\gc$ coherent with the type of $\phi$, \item $\Delta_\Phi \colon n \to \{0,1\}$ is a function such that $\Delta_\Phi (i) = 1$ implies that the $i$-th connected component of $N$ is acyclic and $\phi$ is dinatural in its $i$-th variable. \end{itemize} Moreover: \begin{itemize}[leftmargin=*] \item If $N$ consists of $n$ places and no transitions, then $(\alpha,F) = (\beta,G)$, $\phi=\id F$, $\sigma=\tau=\overline\sigma=\overline\tau=\id{\length\alpha}$ and $\Delta_\Phi = K_1$, the constant function equal to $1$; in this case $\Phi$ is the identity morphism of the object $(\alpha,F)$. \item If $N = \graph\phi$, $\overline\sigma = \injP{\length\alpha}$ and $\overline\tau=\injP{\length\beta}$, we say that $\Phi$ is \emph{atomic}. \item If $N \ne \graph\phi$ and $\Phi \ne \id{(\alpha,F)}$, then there exist $\Phi_1, \dots, \Phi_k$ atomic such that $\Phi = \Phi_k \circ \dots \circ \Phi_1$ in $\fc \B \C$, according to the composition law to follow in this Definition. \end{itemize} We say that $\Phi \eq \Phi'$, for $\Phi' = (\phi', \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma'"] & n & \length\beta \ar[l,"\tau'"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline{\sigma'}"] & N' & \length\beta \ar[l,"\overline{\tau'}"'] \end{tikzcd}, \Delta_{\Phi'} )$, if and only if the transformations differ only by a permutation of their variables (in a coherent way with the rest of the data) and $N$ and $N'$ are coherently isomorphic: more precisely, when \begin{itemize}[leftmargin=*] \item there is a permutation $\pi \colon n \to n$ such that $\sigma'=\pi\sigma$, $\tau'=\pi\tau$, $\phi_{A_1,\dots,A_n}'=\phi_{A_{\pi 1},\dots,A_{\pi n}}$, $\Delta_{\Phi}=\Delta_{\Phi'} \pi$; \item there is an isomorphism $f=(f_P,f_T) \colon N \to N'$ in $\PN$ such that the following diagram commutes: \[ \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] \ar[dr,"\overline{\sigma'}"'] & N \ar[d,"f"] & \length\beta \ar[l,"\overline\tau"'] \ar[dl,"\overline{\tau'}"] \\ & N' \end{tikzcd} \] mapping the $i$-th connected component of $N$ to the $\pi(i)$-th connected component of $N'$. 
\end{itemize} \item Composition of $\Phi$ as above and \[ \Psi = (\psi, \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \ar[l,"\theta"'] \length\gamma \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\overline\eta"] & N' & \ar[l,"\overline\theta"'] \length\gamma \end{tikzcd}, \Delta_\Psi ) \colon (\beta,G) \to (\gamma, H) \] is component-wise: it is the equivalence class of the tuple \begin{equation}\label{eqn:composition in {B,C}} \Psi \circ \Phi = ( \psi\circ\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\zeta\sigma"] & l & \ar[l,"\xi\theta"'] \length\gamma \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\zeta \overline\sigma"] & {N'} \circ N & \ar[l,"\overline\xi \overline\theta"'] \length\gamma \end{tikzcd}, \Delta_{\Psi\circ\Phi} ) \end{equation} where $\psi\circ\phi$ is the transformation of type given by the result of the pushout: \[ \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[d,"\xi"] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l \end{tikzcd} \] $N' \circ N$ is computed by composing in $\gc$, that is by performing the pushout in $\PN$: \[ \begin{tikzcd} & & \length\gamma \ar[d,"\overline\theta"] \\ & \length\beta \ar[d,"\overline\tau"'] \ar[r,"\overline\eta"] \ar[dr, phantom, "\ulcorner" very near start] & N' \ar[d,"\overline\xi"] \\ \length\alpha \ar[r,"\overline\sigma"] & N \ar[r,"\overline\zeta"] & N' \circ N \end{tikzcd} \] and the discriminant $\Delta_{\Psi\circ\Phi} \colon l \to \{0,1\}$ is obtained by setting $\Delta_{\Psi\circ\Phi} (x) = 1$ if and only if the $x$-th connected component of $N'\circ N$ is acyclic \emph{and} for all $y \in \zeta^{-1}\{x\}$ and $z \in \xi^{-1}\{x\}$ we have that $\Delta_\Phi(y) = 1 = \Delta_\Psi(z)$. The latter condition is tantamount to asking that $\phi$ and $\psi$ are dinatural in all the variables involved by the $x$-th connected component of the composite graph ${N'}\circ N$ of $\psi\circ\phi$. \end{itemize} \end{definition} \begin{theorem}\label{theorem: {B,C} is a category} $\fc \B \C$ is indeed a category. \end{theorem} \begin{proof} First of all, if $\Phi$ and $\Psi$ as above are in $\fc \B \C$, it is not difficult to check that the equivalence class of $\Psi \circ \Phi$ as in~(\ref{eqn:composition in {B,C}}) does not depend on the choice of representatives for the classes of $\Phi$ and $\Psi$. Next, we aim to prove that $\Psi \circ \Phi$ is again a morphism of $\fc \B \C$. By Proposition~\ref{proposition: composition in G preservers generalised graphs} we have that ${N'} \circ N$ is a generalised graph for $\psi\circ\phi$. In order to prove that $\Delta_{\Psi\circ\Phi}$ correctly defines a morphism of $\fc \B \C$, that is that if $\Delta_{\Psi\circ\Phi}(i)=1$ then $\psi\circ\phi$ is indeed dinatural in its $i$-th variable, we first show that composition in $\fc \B \C$ is associative: once we have done that we will use Theorem~\ref{theorem:compositionality with complicated graphs} to conclude. 
Consider \begin{align*} \Phi_1 &= (\phi_1, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\alpha \ar[r,"\sigma_1"] \& n \& \length\beta \ar[l,"\tau_1"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\alpha \ar[r,"\overline{\sigma_1}"] \& N_1 \& \length\beta \ar[l,"\overline{\tau_1}"'] \end{tikzcd}, \Delta_\Phi ) \colon (\alpha, F) \to (\beta, G), \\ \Phi_2 &= ( \phi_2, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\beta \ar[r,"\sigma_2"] \& m \& \length\gamma \ar[l,"\tau_2"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\beta \ar[r,"\overline{\sigma_2}"] \& N_2 \& \length\gamma \ar[l,"\overline{\tau_2}"'] \end{tikzcd}, \Delta_{\Phi_2} ) \colon (\beta, G) \to (\gamma, H), \\ \Phi_3 &= ( \phi_3, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\gamma \ar[r,"\sigma_3"] \& p \& \length\delta \ar[l,"\tau_3"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small,ampersand replacement=\&] \length\gamma \ar[r,"\overline{\sigma_3}"] \& N_3 \& \length\delta \ar[l,"\overline{\tau_3}"'] \end{tikzcd}, \Delta_{\Phi_3} ) \colon (\gamma,H) \to (\delta,K). \end{align*} We know that composition of cospans via pushout is associative, as well as composition of transformations; suppose therefore that $\phi_3 \circ \phi_2 \circ \phi_1$ has type given by: \[ \begin{tikzcd} & & & \length\delta \ar[d,"\tau_3"] \\ & & \length\gamma \ar[r,"\sigma_3"] \ar[d,"\tau_2"'] \ar[dr, phantom, "\ulcorner" very near start] & p \ar[d,"\xi_2"] \\ & \length\beta \ar[r,"\sigma_2"] \ar[d,"\tau_1"'] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[r,"\zeta_2"] \ar[d,"\xi_1"'] \ar[dr, phantom, "\ulcorner" very near start] & q \ar[d,"\xi_3"] \\ \length\alpha \ar[r,"\sigma_1"] & n \ar[r,"\zeta_1"] & l \ar[r,"\zeta_3"] & r \end{tikzcd} \] and the generalised graph $N_3 \circ N_2 \circ N_1$ is obtained as the result of the following pushout-pasting: \[ \begin{tikzcd} & & & \length\delta \ar[d,"\overline{\tau_3}"] \\ & & \length\gamma \ar[r,"\overline{\sigma_3}"] \ar[d,"\overline{\tau_2}"'] \ar[dr, phantom, "\ulcorner" very near start] & N_3 \ar[d,"\overline{\xi_2}"] \\ & \length\beta \ar[r,"\overline{\sigma_2}"] \ar[d,"\overline{\tau_1}"'] \ar[dr, phantom, "\ulcorner" very near start] & N_2 \ar[r,"\overline{\zeta_2}"] \ar[d,"\overline{\xi_1}"'] \ar[dr, phantom, "\ulcorner" very near start] & N_3 \circ N_2 \ar[d,"\overline{\xi_3}"] \\ \length\alpha \ar[r,"\overline{\sigma_1}"] & N_1 \ar[r,"\overline{\zeta_1}"] & N_2 \circ N_1 \ar[r,"\overline{\zeta_3}"] & N_3 \circ N_2 \circ N_1 \end{tikzcd} \] We prove that $\Delta_{\Phi_3 \circ (\Phi_2 \circ \Phi_1)} = \Delta_{(\Phi_3 \circ \Phi_2) \circ \Phi_1}$. 
We have that $\Delta_{\Phi_3 \circ (\Phi_2 \circ \Phi_1)}(x) = 1$ if and only if, by definition:
\begin{enumerate}[labelindent=0pt]
\item[(1)] the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$ is acyclic;
\item[(2)] $\forall y \in \zeta_3^{-1}\{x\} \ldotp \Delta_{\Phi_2 \circ \Phi_1}(y) = 1$;
\item[(3)] $\forall z \in (\xi_3 \circ \xi_2)^{-1}\{x\} \ldotp \Delta_{\Phi_3}(z) = 1$;
\end{enumerate}
which is equivalent to saying that:
\begin{enumerate}[labelindent=0pt]
\item[(1)] the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$ is acyclic;
\item[(2a)] $\forall y \in l \ldotp \Bigl[ \zeta_3(y) = x \implies \text{$y$-th c.c.\ of $N_2 \circ N_1$ is acyclic} \Bigr] $;
\item[(2b)] $\forall y \in l \ldotp \Bigl[ \zeta_3(y) = x \implies \forall a \in n \ldotp \Bigl( \zeta_1(a)=y \implies \Delta_{\Phi_1}(a)=1 \Bigr) \Bigr] $;
\item[(2c)] $\forall y \in l \ldotp \Bigl[ \zeta_3(y) = x \implies \forall b \in m \ldotp \Bigl( \xi_1(b)=y \implies \Delta_{\Phi_2}(b)=1 \Bigr) \Bigr] $;
\item[(3)] $\forall z \in p \ldotp \Bigl[ \xi_3\bigl(\xi_2(z)\bigr) = x \implies \Delta_{\Phi_3} (z) = 1 \Bigr] $.
\end{enumerate}
Call $A$ the conjunction of the conditions above. Next, we have that $\Delta_{(\Phi_3 \circ \Phi_2) \circ \Phi_1}(x)=1$ if and only if:
\begin{enumerate}[labelindent=0pt]
\item[(i)] the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$ is acyclic;
\item[(ii)] $\forall a \in n \ldotp \Bigl[ \zeta_3 \bigl( \zeta_1(a) \bigr) =x \implies \Delta_{\Phi_1}(a)=1\Bigr]$;
\item[(iiia)] $\forall w \in q \ldotp \Bigl[ \xi_3(w)=x \implies \text{ $w$-th c.c.\ of $N_3 \circ N_2$ is acyclic } \Bigr]$;
\item[(iiib)] $\forall w \in q \ldotp \Bigl[ \xi_3(w)=x \implies \forall b \in m \ldotp \Bigl( \zeta_2(b)=w \implies \Delta_{\Phi_2}(b)=1 \Bigr) \Bigr]$;
\item[(iiic)] $\forall w \in q \ldotp \Bigl[ \xi_3(w)=x \implies \forall z \in p \ldotp \Bigl( \xi_2(z)=w \implies \Delta_{\Phi_3}(z)=1 \Bigr) \Bigr]$.
\end{enumerate}
Call $B$ the conjunction of these last five conditions. We prove that $A$ implies $B$; in a similar way one can prove the converse as well.
\begin{enumerate}[labelindent=0pt]
\item[(ii)] Let $a \in n$, suppose $\zeta_3\bigl(\zeta_1(a)\bigr) = x$. By (2b), with $y = \zeta_1(a)$, we have $\Delta_{\Phi_1}(a) = 1$.
\item[(iiia)] Let $w \in q$, suppose $\xi_3(w)=x$. Then the $w$-th c.c.\ of $N_3 \circ N_2$ must be acyclic as it is part of the $x$-th c.c.\ of $N_3 \circ N_2 \circ N_1$, which is acyclic.
\item[(iiib)] Let $w \in q$, suppose $\xi_3(w)=x$. Let also $b \in m$ and suppose $\zeta_2(b) = w$. Then $x = \xi_3\bigl( \zeta_2(b)\bigr) = \zeta_3 \bigl(\xi_1(b)\bigr)$. By (2c), with $y = \xi_1(b)$, we have $\Delta_{\Phi_2}(b)=1$.
\item[(iiic)] Let $w \in q$, suppose $\xi_3(w)=x$. Let $z \in p$ be such that $\xi_2(z)=w$. Then $\xi_3\bigl(\xi_2(z)\bigr)=x$: by (3), we have $\Delta_{\Phi_3}(z)=1$.
\end{enumerate}
Hence composition is associative. Take now $\Phi$ and $\Psi$, consecutive morphisms of $\fc \B \C$ as in the Definition of $\fc \B \C$. Then $\Phi=\Phi_k \circ \dots \circ \Phi_1$ for some $\Phi_j$'s, in particular $\phi=\phi_k \circ \dots \circ \phi_1$ for some $\phi_j$'s, and $\Delta_\Phi (i) =1 $ precisely when the $i$-th connected component of $N$ is acyclic and for all $j \in \{1,\dots,k\}$ the transformation $\phi_j$ is dinatural in all its variables involved in the $i$-th c.c.\ of $N$: one can see this by simply unfolding the definition of $\Delta_{\Phi_k \circ \dots \circ \Phi_1}$, extending the case of $\Delta_{\Phi_3 \circ \Phi_2 \circ \Phi_1}$ above.
Similarly for $\Psi=\Psi_{k'} \circ \dots \Psi_1$, with $\psi = \psi_{k'} \circ \dots \psi_1$. We have then that if \[ N' \circ N = \graph{\psi_{k'}} \circ \dots \circ \graph{\psi_1} \circ \graph{\phi_k} \circ \dots \circ \graph{\phi_1} \] is acyclic in its $x$-th connected component and for all $y \in \zeta^{-1}\{x\}$ and $z \in \xi^{-1}\{x\}$ we have that $\Delta_\Phi(y) = 1 = \Delta_\Psi(z)$, then all the $\phi_j$'s and $\psi_j$'s are dinatural in all their variables involved in the $x$-th connected component of $N' \circ N$: by Theorem~\ref{theorem:compositionality with complicated graphs}, we have that $\psi\circ\phi$ is dinatural in its $x$-th variable. Hence $\Psi\circ\Phi$ is still a morphism of $\fc \B \C$. All that is left to prove is that composition is unitary where the identity morphism of $(\alpha,F)$ is given by the equivalence class of \[ ( \id F, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\id{}"] & \length\alpha & \ar[l,"\id{}"'] \length\alpha \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\id{}"] & \length\alpha & \ar[l,"\id{}"'] \length\alpha \end{tikzcd}, K_1 ), \] which is indeed a morphism of $\fc\B\C$ because, as discussed in Example~\ref{example: graph and type are a generalised graph} we have that $\length\alpha$ is a generalised graph for $\id F$; moreover, the identity transformation is indeed (di)natural in all its variables, therefore the constant function equal to $1$, $K_1$, is a valid discriminant function for $\id{\length\alpha}$. Let \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \colon (\alpha,F) \to (\beta,G). \] We prove that $\Phi \circ \id{(\alpha,F)} = \Phi$ and ${\id{(\beta,G)}} \circ \Phi = \Phi$ (by ``$\Phi$'' here we mean its equivalence class). It is clear that $\Phi \circ {\id{(\alpha,F)}}$ consists of $\phi$ together with its type and generalised graph as specified in $\Phi$. Also, $\Delta_{\Phi \circ \id{(\alpha,F)}}(x) = 1$ precisely when the $x$-th connected component of $N$ is acyclic and $\Delta_{\Phi}(x)=1$, by definition. Given that $\Delta_\Phi(x)=1$ implies that the $x$-th c.c.\ of $N$ is acyclic, we have that $\Delta_{\Phi \circ \id{(\alpha,F)}} = \Delta_\Phi$. One can prove in a similar way the other identity law. \qed \end{proof} \begin{remark} The condition ``$\Delta_\Phi(i)=1$ implies that the $i$-th connected component of $N$ is acyclic'' in Definition~\ref{def: generalised functor category} is designed to ignore dinaturality properties that happen to be satisfied ``by accident'', as it were, which could cause problems upon composition. 
Indeed, suppose that we have a transformation $\phi$ which is the composite of four transformations $\phi_1,\dots,\phi_4$, whose resulting generalised graph, obtained by pasting together $\Gamma(\phi_1),\dots,\Gamma(\phi_4)$, is as follows: \[ N= \quad \begin{tikzpicture} \matrix[column sep=2.4mm,row sep=0.4cm]{ \node (1) [category] {}; \\ \node (2) [component] {}; & & & \node (7) [component] {}; \\ \node (3) [category] {}; & & \node (8) [category] {}; & & \node (6) [opCategory] {}; \\ & \node (4) [component] {}; & & & \node (5) [component] {}; \\ & \node (A) [category] {}; & & & \node(F) [opCategory] {};\\ & \node (B) [component] {}; & & & \node(J) [component] {};\\ \node (C) [category] {}; & & \node(D) [category] {}; & & \node(E) [opCategory] {};\\ \node (H) [component] {}; & & & \node(I) [component] {};\\ \node (G) [category] {}; & & & \\ }; \graph[use existing nodes]{ 1 -> 2 -> 3 -> 4 -> A -> B -> {C, D}; C -> H -> G; D -> I -> E -> J -> F -> 5 -> 6 -> 7 -> 8 -> 4; }; \node[coordinate](p) at (-2,0) {}; \node[coordinate](q) at (2,0) {}; \draw [dashed] (3.west -| p) -- (6.east -| q); \draw [dashed] (A.west -| p) -- (F.east -| q); \draw [dashed] (C.west -| p) -- (E.east -| q); \end{tikzpicture} \] Call $\Phi$ the tuple in $\fc \B \C$ consisting of $\phi$ with its type $ \begin{tikzcd}[cramped,sep=small] 1 \ar[r] & 1 & 1 \ar[l] \end{tikzcd} $ and $N$ as a generalised graph, as a composite of the atomic morphisms of $\fc \B \C$ given by $\phi_1,\dots,\phi_4$. Suppose that $\phi$ happens to be dinatural in its only variable for some reason (extreme example: the category $\C$ is the terminal category). If in the definition of $\fc \B \C$ the only condition on $\Delta$ were ``$\Delta_\Phi(i) = 1$ implies $\phi$ dinatural in its $i$-th variable'', without requiring that the $i$-th connected component of $N$ be acyclic if $\Delta_\Phi(i)=1$, then equipping $\phi$ in $\Phi$ with a discriminant function $\Delta_\Phi$ defined as \[ \begin{tikzcd}[row sep=0pt] 1 \ar[r,"\Delta_\Phi"] & 1 \\ 1 \ar[r,|->] & 1 \end{tikzcd} \] would be legitimate. Compose now $\Phi$ with the identity morphism of $\fc \B \C$: by definition we would obtain again $\Phi$ except for the discriminant function, which would be defined as $\Delta_{\Phi \circ \id{}}(1)=0$ because the composite graph, which is $N$, is not acyclic. Composition would not be unitary! The condition ``the $i$-th connected component of $N$ is acyclic whenever $\Delta_\Phi(i)=1$'' in Definition~\ref{def: generalised functor category} is therefore not only sufficient, but also necessary for unitarity of composition in $\fc \B \C$. \end{remark} \begin{remark}\label{remark:non-atomic morphisms of {B,C}} Although it is impossible, in general, to judge whether a transformation is or is not a composite of others by looking at its type, one can distinguish atomic morphisms of $\FC \B \C$ from composite morphisms by looking at the generalised graph $N$ they come with. Indeed, if \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \] is a non-identity morphism of $\FC \B \C$, then $\Phi$ is atomic if and only if $N=\graph\phi$. 
If $N \ne \graph\phi$, then $N$ contains internal places as a result of composing together ``atomic'' graphs of transformations: that is, we have that $\phi = \phi_k \circ \dots \circ \phi_1$ for some transformations $\phi_i$, and $N=\graph{\phi_k} \circ \dots \circ \graph{\phi_1}$. This decomposition of $\phi$ and $N$ is not necessarily unique.
\end{remark}
\paragraph{The category of graphs}
We can now finally pin down the category $\gcf$ of graphs of transformations. To do so, we will first build a category $\GC$, which will consist of those morphisms in $\gc$ that are the generalised graph of a transformation in $\fc \B \C$, together with a discriminant function. The category of graphs $\gcf$ we seek will be defined as a subcategory of it. We begin by defining the notion of \emph{skeleton} of a morphism in $\gc$, as it will be useful later on.
\begin{definition}
Let $ f = \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd} $ be a morphism in $\gc$, and let $n$ be the number of connected components of $N$. The \emph{skeleton} of the cospan $f$ is an (equivalence class of) cospan(s) in $\finset$
\[
\begin{tikzcd}
\length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"']
\end{tikzcd}
\]
where $\sigma(i)$ is the number of the connected component of $N$ to which $\overline\sigma(i)$ belongs, and $\tau$ is defined similarly.
\end{definition}
\begin{remark}
If $\phi$ is a transformation and $N$ is a generalised graph of $\phi$, then the type of $\phi$ is the skeleton of $N$.
\end{remark}
The category $\GC$ will then consist of only part of the data of $\fc \B \C$, obtained, as it were, by discarding functors and transformations, and only considering the graphs and the discriminant functions.
\begin{definition}\label{definition:graph category definitive}
The category $\GC$ of graphs consists of the following data.
\begin{itemize}[leftmargin=*]
\item Objects are lists in $\List\{+,-\}$.
\item Morphisms $\alpha \to \beta$ are equivalence classes of pairs
\[
\bigl( \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd}, \Delta_N \bigr)
\]
where:
\begin{itemize}
\item $(\overline\sigma,\overline\tau,N)$ is a morphism in $\gc$,
\item let $n$ be the number of connected components of $N$: then $\Delta_N \colon n \to \{0,1\}$ is called the \emph{discriminant function} and it is such that $\Delta_N(i)=1$ implies that the $i$-th connected component of $N$ is acyclic.
\end{itemize}
A pair as above is equivalent to another $((\overline\sigma',\overline\tau',N'),\Delta_{N'})$, where $N'$ also has $n$ connected components, if and only if there exists $f \colon N \to N'$ an isomorphism in $\PN$ and $\pi \colon n \to n$ a permutation such that
\[
\begin{tikzcd}
\length\alpha \ar[r,"\overline\sigma"] \ar[dr,"\overline{\sigma'}"'] & N \ar[d,"f"] & \length\beta \ar[l,"\overline\tau"'] \ar[dl,"\overline{\tau'}"] \\
& N'
\end{tikzcd}
\quad \text{and} \quad
\begin{tikzcd}
n \ar[r,"\Delta_N"] \ar[d,"\pi"'] & \{0,1\} \\
n \ar[ur,"\Delta_{N'}"']
\end{tikzcd}
\]
commute and $f$ maps the $i$-th c.c.\ of $N$ to the $\pi(i)$-th c.c.\ of $N'$.
\item Composition is defined exactly as in $\fc \B \C$.
To wit, composition of
\[
\bigl( \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd}, \Delta_N \bigr)
\quad \text{and} \quad
\bigl( \begin{tikzcd} \length\beta \ar[r,"\overline\eta"] & N' & \ar[l,"\overline\theta"'] \length\gamma \end{tikzcd}, \Delta_{N'} \bigr)
\]
is the equivalence class of the pair
\[
( \begin{tikzcd} \length\alpha \ar[r,"\overline\zeta \overline\sigma"] & {N'} \circ N & \ar[l,"\overline\xi \overline\theta"'] \length\gamma \end{tikzcd}, \Delta_{N' \circ N} )
\]
where $N' \circ N$ is the Petri Net given by the result of the pushout
\[
\begin{tikzcd}
& & \length\gamma \ar[d,"\overline\theta"] \\
& \length\beta \ar[d,"\overline\tau"'] \ar[r,"\overline\eta"] \ar[dr, phantom, "\ulcorner" very near start] & N' \ar[d,"\overline\xi"] \\
\length\alpha \ar[r,"\overline\sigma"] & N \ar[r,"\overline\zeta"] & N' \circ N
\end{tikzcd}
\]
and $\Delta_{N' \circ N}$ is defined as follows. If $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ and $ \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\eta"] & m & \length\gamma \ar[l,"\theta"'] \end{tikzcd} $ are the skeletons of $(\overline\sigma,\overline\tau,N)$ and $(\overline\eta,\overline\theta,N')$ respectively, then the skeleton of $(\overline\zeta\overline\sigma,\overline\xi\overline\theta,N'\circ N)$ is given by the pushout
\[
\begin{tikzcd}
& & \length\gamma \ar[d,"\theta"] \\
& \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr, phantom, "\ulcorner" very near start] & m \ar[d,"\xi"] \\
\length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l
\end{tikzcd}
\]
(cf.\ Proposition~\ref{proposition: composition in G preservers generalised graphs}). Define therefore $\Delta_{N' \circ N}(x)=1$ if and only if the $x$-th connected component of $N'\circ N$ is acyclic \emph{and} for all $y \in \zeta^{-1}\{x\}$ and $z \in \xi^{-1}\{x\}$ we have that $\Delta_N(y) = 1 = \Delta_{N'}(z)$.
\end{itemize}
\end{definition}
\begin{definition}
The category $\gcf$ of graphs is the wide subcategory of $\GC$ (that is, it contains all the objects of $\GC$) generated by equivalence classes of pairs
\[
\bigl( \begin{tikzcd} \length\alpha \ar[r,"\overline\sigma"] & N & \ar[l,"\overline\tau"'] \length\beta \end{tikzcd}, \Delta_N \bigr)
\]
where $P_N=\length\alpha + \length\beta$, $\overline\sigma=\injP{\length\alpha}$, $\overline\tau = \injP{\length\beta}$ and for every place $p$, $\length{\inp p} + \length{\out p} = 1$ (equivalently, $N$ has no internal places and every place is either a proper source or a proper sink). Hence, the general morphism of $\gcf$ is either:
\begin{itemize}
\item an identity $ \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\id{}"] & \length\alpha & \ar[l,"\id{}"'] \length\alpha \end{tikzcd}, K_1 \bigr), $
\item a generator satisfying the conditions above; such morphisms are called \emph{atomic},
\item a finite composite of atomic morphisms.
\end{itemize} \end{definition} The assignment $(\alpha,F) \mapsto \alpha$ and \[ \bigl[(\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\injP{\length\alpha}"] & \ggraph\phi & \length\beta \ar[l,"\injP{\length\beta}"'] \end{tikzcd}, \Delta_\Phi )\bigr] \mapsto \Bigl[\bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\injP{\length\alpha}"] & \ggraph\phi & \length\beta \ar[l,"\injP{\length\beta}"'] \end{tikzcd}, \Delta_\Phi \bigr)\Bigr] \] mapping atomic morphisms of $\FC \B \C$ to atomic morphisms of $\gcf$ uniquely extends to a functor $\gf \colon \FC \B \C \to \gcf$. Moreover, $\gf$ has two special properties, by virtue of the ``modularity'' of our $\FC \B \C$ and $\gcf$ and the fact that all and only atoms in $\FC \B \C$ have atomic images: it reflects compositions and identities. By ``reflects identities'' we mean that if $\Phi \colon (\alpha,F) \to (\alpha,F)$ is such that $\gf(\Phi)=\id{\length\alpha}$, then $\Phi=\id{(\alpha,F)}$. By ``reflects compositions'' we mean that if $\Phi$ is a morphism in $\FC \B \C$ and $\gf(\Phi)$ is not atomic, i.e.\ $\gf(\Phi) = (N_k,\Delta_k) \circ \dots \circ (N_1,\Delta_1)$ with $(N_i,\Delta_i)$ atomic in $\gcf$, then there must exist $\Phi_1,\dots,\Phi_k$ morphisms in $\FC \B \C$ such that: \begin{itemize} \item $\Phi = \Phi_k \circ \dots \circ \Phi_1$, \item $\gf(\Phi_i) = (N_i,\Delta_i)$. \end{itemize} Hence, say $\Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) $: then there must exist transformations $\phi_i$ with graph $\graph{\phi_i}$ (hence atomic), dinatural according to $\Delta_i$, such that $\phi = \phi_k \circ \dots \circ \phi_1$, cf.\ Remark~\ref{remark:non-atomic morphisms of {B,C}}. In other words, $\gf$ satisfies the following definition. \begin{definition} Let $\D,\E$ be any categories. A functor $P \colon \D \to \E$ is said to be a \emph{weak Conduché fibration} (WCF) if, given $f \colon A \to B$ in $\D$: \begin{itemize} \item $P(f)=\id{}$ implies $f=\id{}$; \item given a decomposition $P(f)=u \circ v$ in $\E$, we have that there exist $g,h$ in $\D$ such that $f = g \circ h$, $P(g) = u$, $P(h)=v$. \end{itemize} We define $\WCFover \E$ to be the full subcategory of $\catover\E$ whose objects are the categories over $\E$ whose augmentation is a weak Conduché fibration. \end{definition} We have then proved the following theorem. \begin{theorem} $\FC \B \C$ is an object of $\,\,\WCFover\gcf$. \end{theorem} Conduché fibrations were introduced in~\cite{conduche_au_1972} as a re-discovery after the original work of Giraud~\cite{giraud_methode_1964} on exponentiable functors in slice categories. Our notion is weaker in not requiring the additional property of uniqueness of the decomposition $f=g \circ h$ up to equivalence, where we say that two factorisations $g \circ h$ and $g' \circ h'$ are equivalent if there exists a morphism $j \colon \codom h \to \dom {g'}$ such that everything in sight commutes in the following diagram: \[ \begin{tikzcd} & \codom{h} \ar[r,"g"] \ar[d,"j"] & B \\ A \ar[ur,"h"] \ar[r,"h'"'] & \dom{g'} \ar[ur,"g'"'] \end{tikzcd} \] We will not, in fact, need such uniqueness; moreover, it is not evident whether our $\gf$ is a Conduché fibration or not. 
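Before turning to categories over $\gcf$, it may be worth recording a computational restatement of the composite discriminant used in Definition~\ref{def: generalised functor category} and Definition~\ref{definition:graph category definitive}. The Python sketch below (hypothetical names; the skeleton maps $\zeta$ and $\xi$ and the acyclicity of the components of the composite are taken as given) makes explicit that a component of the composite is marked dinatural exactly when it is acyclic and every component of the two factors merged into it was already marked so.
\begin{verbatim}
# Composite discriminant (hypothetical names).  zeta and xi are the
# skeleton maps into the components of N' o N, given as dictionaries;
# delta_n, delta_m are the discriminants of N and N'; acyclic records
# which components of N' o N are acyclic.

def compose_discriminants(zeta, xi, delta_n, delta_m, acyclic):
    comps = set(zeta.values()) | set(xi.values())
    return {x: int(acyclic[x]
                   and all(delta_n[y] for y in zeta if zeta[y] == x)
                   and all(delta_m[z] for z in xi if xi[z] == x))
            for x in comps}

# e.g. two one-component graphs merged into a single acyclic component:
print(compose_discriminants({1: 1}, {1: 1}, {1: 1}, {1: 0}, {1: True}))
# {1: 0}: the composite is not marked dinatural, because N' was not
\end{verbatim}
In particular, composing with the constantly-$1$ discriminant of an identity along identity skeleton maps returns the original discriminant, provided every component marked $1$ is acyclic; this is the unitarity checked in Theorem~\ref{theorem: {B,C} is a category}.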
\begin{remark} The fact that $\FC \B \C$ is not just an object of $\catover\gcf$, but even of $\WCFover\gcf$, will allow us to build the substitution category $\ring \A \B$ just for categories $\A$ over $\gcf$ whose augmentation is more than a mere functor: it is a weak Conduché fibration. The main advantage of restricting our attention to $\WCFover\gcf$ is that a category $\A$ in it inherits, in a sense, the modular structure of $\gcf$, as we shall see in the next Lemma. \end{remark} \begin{definition} Let $P \colon \D \to \gcf$ be an object of $\WCFover\gcf$. A morphism $d$ in $\D$ is said to be \emph{atomic} if $P(d)$ is atomic. \end{definition} \begin{lemma}\label{lemma:functors determined by atoms in WCF over E} Suppose that, in the following diagram, $P$ is a weak Conduché fibration and $Q$ is an ordinary functor. \[ \begin{tikzcd}[column sep={1cm,between origins}] \D \ar[rr,"Q"] \ar[dr,"P"'] & & \mathbb F \\ & \gcf \end{tikzcd} \] Then $Q$ is completely determined on morphisms by the image of atomic morphisms of $\D$. \end{lemma} \begin{proof} Let $d \colon D \to D'$ be a morphism in $\D$ with $P(D)=\alpha$, $P(D')=\beta$ and $P(d) = \bigl[ \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_d \bigr) \bigr] $. If $P(d)$ is not atomic, then either $P(d)=\id{}$, in which case $d=\id{}$ (because $P$ is a weak Conduché fibration), or $P(d)=(N_k,\Delta_k) \circ \dots \circ (N_1,\Delta_1)$ for some (not necessarily unique) atomic $(N_i,\Delta_i)$. Hence there must exist $d_1,\dots,d_k$ in $\D$ such that $d=d_k \circ \dots \circ d_1$ and $P(d_i)=(N_i,\Delta_i)$. Then $Q(d)$ will necessarily be defined as $\id{}$ in the first case, or as $Q(d_k) \circ \dots \circ Q(d_1)$ in the second case, otherwise $Q$ would not be a functor. \qed \end{proof} \section{The category of formal substitutions}\label{section:category of formal substitutions} Kelly~\cite{kelly_many-variable_1972}, after defining his generalised functor category $\fc \B \C$ for covariant functors and many-variable natural transformations only, proceeds by showing that the functor $\fc \B -$ has a left adjoint, which he denotes with $\ring - \B$. The category $\ring \A \B$ will be essential to capture the central idea of substitution. Here we aim to do the same in our more general setting where $\fc \B \C$ comprises mixed-variance functors and many-variable, partial dinatural transformations. First, we give an explicit definition of the functor $\FC \B - \colon \Cat \to \WCFover\gcf$. 
Given a functor $K \colon \C \to \C'$, we define $\FC \B K \colon \FC \B \C \to \FC \B {\C'}$ to be the functor mapping $(\alpha,F \colon \B^\alpha \to \C)$ to $(\alpha,KF \colon \B^\alpha \to \C')$; and if \[ \Phi = (\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ) \colon (\alpha,F) \to (\beta,G) \] is a morphism in $\FC \B \C$, then $\FC \B K (\Phi)$ is obtained by whiskering $K$ with $\phi$, obtaining therefore a transformation with the same type and generalised graph as before, with the same dinaturality properties: \[ \FC \B K (\Phi) = ( K\phi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\Phi ). \] In particular, $\FC \B K$ is clearly a functor over $\gcf$. It is a classic exercise in Category Theory to prove that $\fc \B -$ is continuous, see~\cite[Theorem 3.52]{santamaria_towards_2019}, which is a necessary condition for the existence of a left adjoint \[ \ring - \B \colon \WCFover\gcf \to \Cat. \] We shall prove that a left adjoint does exist by first constructing the category $\ring \A \B$ explicitly, and then showing the existence of a universal arrow $(\A \circ \B, F_\A \colon \A \to \FC \B {\ring \A \B})$ from $\A$ to $\FC \B -$: this will yield the desired adjunction. To see what $\ring \A \B$ looks like, we follow Kelly's strategy: we aim to prove that there is a natural isomorphism \[ \Cat ( \ring \A \B, \C) \cong \WCFover\gcf (\A, \FC \B \C) \] and we use this to deduce how $\ring \A \B$ must be. Write $\Gamma$ for all augmentations (as weak Conduché fibrations) over $\gcf$, and let $\Phi$ be an element of $\,\WCFover\gcf(\A,\FC \B \C)$. We now spell out all we can infer from this fact. To facilitate reading, and to comply with Kelly's notation in~\cite{kelly_many-variable_1972}, we shall now refer to the $\bfA$-th component of a transformation $\phi$, for $\bfA=(A_1,\dots,A_m)$ say, as $\phi(\bfA)$ instead of $\phi_{\bfA}$. \begin{enumerate}[(a),wide,labelindent=0pt] \item \label{PhiA} For all $A \in \A$, $\Gamma(A)=\alpha$ we have $\Phi A \colon \B^\alpha \to \C$ is a functor, hence \begin{enumerate}[label=(a.\roman*),wide,leftmargin=\parindent] \item for every $\bfB=(B_1,\dots,B_{\length\alpha})$ object of $\B^\alpha$, $\Phi A (\bfB)$ is an object of $\C$,\label{PhiA(B1...Balpha)} \item for all $\bfg=(g_1,\dots,g_{\length\alpha})$, with $g_i \colon B_i \to B_i'$ a morphism in $\B$, we have \[ \Phi(A)(\bfg) \colon \funminplus {\Phi A} {B_i'} {B_i} i {\length\alpha} \to \funminplus {\Phi A} {B_i} {B_i'} i {\length\alpha} \] is a morphism in $\C$.\label{PhiA(g1...galpha)} \end{enumerate} This data is subject to functoriality of $\Phi A$, that is: \begin{enumerate}[(1),wide,leftmargin=\parindent] \item For every $\bfB$ object of $\B^\alpha$, $\Phi A (\id\bfB) = \id{\Phi A (\bfB)}$\label{PhiA(1...1)=1PhiA}. \item For $\bfh=(h_1,\dots,h_{\length\alpha})$, with $h_i \colon B_i' \to B_i''$ morphism of $\B$, \[ \funminplus {\Phi A} {g_i \circ_{\Op\B} h_i} {h_i \circ_{\B} g_i} i {\length\alpha} = \funminplus {\Phi A} {g_i} {h_i} i {\length\alpha} \circ \funminplus {\Phi A} {h_i} {g_i} i {\length\alpha}. 
\]\label{PhiA(hg)=PhiA(h)PhiA(g)} \end{enumerate} \item \label{Phif}For all $f \colon A \to A'$ in $\A$ with $ \Gamma(f) = \Bigl[\bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd},\Delta_f \bigr) \Bigr] $, we have that $\Phi f$ is an equivalence class of transformations whose graphs are representatives of $\Gamma(f)$, such transformations being dinatural in some variables according to $\Delta_f$. Hence for all $\xi = \bigl((\overline\sigma, \overline\tau, N),\Delta_\xi\bigr) \in \Gamma(f)$ we have a transformation $\Phi f_\xi \colon \Phi A \to \Phi A'$ whose type $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ is the skeleton of $(\overline\sigma,\overline\tau,N)$ and with discriminant function $\Delta_\xi$ that tells us in which variables $\Phi f_\xi$ is dinatural. Therefore to give $\Phi f$ one has to provide, for all $\xi = \bigl((\overline\sigma, \overline\tau, N),\Delta_\xi\bigr) \in \Gamma(f)$, for every $\bfB=(B_1,\dots,B_n)$ object of $\B^n$, a morphism in $\C$ \[ \Phi f_\xi (\bfB) \colon \Phi A (\bfB\sigma) \to \Phi A' (\bfB\tau) \] such that: \begin{enumerate}[(1),start=3,wide,leftmargin=\parindent] \item for all $\pi \colon n \to n$ permutation, $\Phi f_{\pi\xi}(\bfB) = \Phi f_\xi (\bfB\pi)$,\label{Phif_pixi(Bi)=Phif_xi(Bpii)} \item \label{Phif_xi dinatural} for $\bfB'=(B_1',\dots,B_n')$ in $\B^n$ and for $\bfg=(g_1,\dots,g_n) \colon \bfB \to \bfB'$ in $\B^n$, where if $\Delta_\xi(i)=0$ then $B_i=B_i'$ and $g_i = \id {B_i}$, the following hexagon commutes: \[ \begin{tikzcd}[font=\normalsize, column sep={0.5cm}] & \Phi A (\bfB\sigma) \ar[rrr,"{\Phi f_\xi (\bfB)}"] & && \Phi A' (\bfB\tau) \ar[dr,"\funminplus{\Phi A'}{B_{\tau i}}{g_{\tau i}} i {\length\beta}"] \\ \funminplus{\Phi A}{B_{\sigma i}'}{B_{\sigma i}} i {\length\alpha} \ar[ur,"{\funminplus{\Phi A}{g_{\sigma i}}{B_{\sigma i}} i {\length\alpha}}"] \ar[dr,"\funminplus{\Phi A} {{B_{\sigma i}}} {{g_{\sigma i}}} i {\length\alpha}"'] & & && & \funminplus {\Phi A'} {B_{\tau i}}{B_{\tau i}'} i {\length\beta} \\ & \Phi A (\bfB'\sigma) \ar[rrr,"{\Phi f_\xi (\bfB')}"'] & & &\Phi A'(\bfB'\tau) \ar[ur,"\funminplus{\Phi A'}{g_{\tau i}}{B_{\tau i}} i {\length\beta}"'] \end{tikzcd} \] \end{enumerate} \item The data provided in \ref{PhiA} and \ref{Phif} is subject to the functoriality of $\Phi$ itself, hence: \begin{enumerate}[(1),start=5,wide,leftmargin=\parindent] \item $\Phi(\id A) = \id{\Phi A}$, \label{Phi(1A)=1_Phi(A)} \item for $f \colon A \to A'$ and $f' \colon A' \to A''$, $\Phi(f' \circ_{\A} f) = {\Phi f'} \circ_{\FC \B \C} {\Phi f}$ \label{Phi(f2 f1)=Phi(f2) Phi(f1)}. \end{enumerate} \end{enumerate} We now mirror all the data and properties of a functor $\Phi \colon \A \to \FC \B \C$ over $\gcf$ to define the category $\ring \A \B$. \begin{definition}\label{definition A ring B} Let $\A$ be a category over $\gcf$ via a weak Conduché fibration $\Gamma \colon \A \to \gcf$, and let $\B$ be any category. The category $\ring \A \B$ of \emph{formal substitutions} of elements of $\B$ into those of $\A$ is the free category generated by the following data. We use the same enumeration as above to emphasise the correspondence between each piece of information. \begin{itemize}[wide=0pt,leftmargin=*] \item[\ref{PhiA(B1...Balpha)}] Objects are of the form $A[\bfB]$, for $A$ an object of $\A$ with $\Gamma(A)=\alpha$, and for $\bfB=(B_1,\dots,B_{\length\alpha})$ in $\B^\alpha$. 
As it is standard in many-variable calculi, we shall drop a set of brackets and write $A[B_1,\dots,B_{\length\alpha}]$ instead of $A[(B_1,\dots,B_{\length\alpha})]$. \item[\ref{PhiA(g1...galpha)},\ref{Phif}] Morphisms are to be generated by \[ A[\bfg] \colon \funminplussq A {B_i'} {B_i} i {\length\alpha} \to \funminplussq A {B_i} {B_i'} i {\length\alpha} \] for $A$ in $\A$ with $\Gamma(A)=\alpha$, $\bfg=(g_1,\dots,g_{\length\alpha})$ and $g_i \colon B_i \to B_i'$ in $\B$, and by \[ f_{\xi}[\bfB] \colon A[\bfB\sigma] \to A'[\bfB\tau] \] for $f \colon A \to A'$ in $\A$, $ \xi = \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd},\Delta_\xi \bigr) $ a representative of $\Gamma(f)$, $(\sigma,\tau,n)$ the skeleton of $(\overline\sigma,\overline\tau,N)$, $\bfB=(B_1,\dots,B_n)$ object of $\B^n$. \end{itemize} Such data is subject to the following conditions: \begin{itemize}[wide=0pt,leftmargin=*] \item[\ref{Phif_pixi(Bi)=Phif_xi(Bpii)}] For every permutation $\pi \colon n \to n$ and for every $\bfB=(B_1,\dots,B_n)$ object of $\B^n$ \[ f_{\pi\xi}[\bfB] = f_\xi[\bfB\pi]. \] \item[\ref{PhiA(1...1)=1PhiA},\ref{Phi(1A)=1_Phi(A)}] For all $A\in\A$ with $\Gamma(A)=\alpha$ and for every $\bfB=(B_1,\dots,B_{\length\alpha})$ object of $\B^\alpha$ \[ A[\id\bfB] = \id{A[\bfB]} = {\id A}[\bfB]. \] \item[\ref{PhiA(hg)=PhiA(h)PhiA(g)}] For all $A \in \A$ with $\Gamma(A)=\alpha$, for all $g_i \colon B_i \to B_i'$ and $h_i \colon B_i' \to B_i''$ in $\B$, $i \in \{1,\dots,\length\alpha\}$ \[ \funminplussq { A} {g_i \circ_{\Op\B} h_i} {h_i \circ_{\B} g_i} i {\length\alpha} = \funminplussq { A} {g_i} {h_i} i {\length\alpha} \circ \funminplussq { A} {h_i} {g_i} i {\length\alpha}. \] \item[\ref{Phi(f2 f1)=Phi(f2) Phi(f1)}] For all $f \colon A \to A'$ and $f' \colon A' \to A''$ in $\A$, for all \[ \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd},\Delta \bigr) \in \Gamma(f) \quad \text{and} \quad \bigl( \begin{tikzcd}[cramped,sep=small] \length\beta \ar[r,"\overline\eta"] & M & \ar[l,"\overline\theta"'] \length\gamma \end{tikzcd},\Delta' \bigr) \in \Gamma(f'), \] with $(\sigma,\tau,n)$ and $(\eta,\theta,m)$ the skeletons of, respectively, $(\overline\sigma,\overline\tau,N)$ and $(\overline\eta,\overline\theta,M)$, and for all choices of a pushout \[ \begin{tikzcd} & & \length\gamma \ar[d,"\theta"] \\ & \length\beta \ar[d,"\tau"'] \ar[r,"\eta"] \ar[dr,phantom,very near start,"\ulcorner"] & m \ar[d,"\xi"] \\ \length\alpha \ar[r,"\sigma"] & n \ar[r,"\zeta"] & l \end{tikzcd} \] each choice determining the skeleton of (the first projection of) a representative of $\Gamma(f' \circ f)$, and for all $\bfB=(B_1,\dots,B_l)$ object of $\B^l$ \[ f'_{(\eta,\theta)}[\bfB\xi] \circ f_{(\sigma,\tau)}[\bfB\zeta] = (f'\circ f)_{(\zeta\sigma,\xi\theta)}[\bfB]. 
\] \item[\ref{Phif_xi dinatural}] For all $f \colon A \to A'$, $\xi= \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\xi \bigr) \in \Gamma(f) $, with $(\sigma,\tau,n)$ the skeleton of $(\overline\sigma,\overline\tau,N)$, for all $\bfB=(B_1,\dots,B_n)$, $\bfB'=(B_1',\dots,B_n')$ objects of $\B^n$ and for all $\bfg=(g_1,\dots,g_n) \colon \bfB \to \bfB'$, with $B_i=B_i'$ and $g_i=\id{B_i}$ if $\Delta_\xi(i)=0$, the following hexagon commutes: \begin{equation}\label{f[g1...gn]} \begin{tikzcd}[column sep={0.5cm}] & A[\bfB\sigma] \ar[rrr,"{f_\xi [\bfB]}"] & && A' [\bfB\tau] \ar[dr,"\funminplussq{A'}{B_{\tau i}}{g_{\tau i}} i {\length\beta}"] \\ \funminplussq{A}{B_{\sigma i}'}{B_{\sigma i}} i {\length\alpha} \ar[ur,"{\funminplussq{A}{g_{\sigma i}}{B_{\sigma i}} i {\length\alpha}}"] \ar[dr,"\funminplussq{A} {{B_{\sigma i}}} {{g_{\sigma i}}} i {\length\alpha}"'] & & && & \funminplussq { A'} {B_{\tau i}}{B_{\tau i}'} i {\length\beta} \\ & A [\bfB'\sigma] \ar[rrr,"{f_\xi [\bfB']}"'] & & & A'[\bfB'\tau] \ar[ur,"\funminplussq{A'}{g_{\tau i}}{B_{\tau i}} i {\length\beta}"'] \end{tikzcd} \end{equation} We will denote the diagonal of \ref{f[g1...gn]} as $f[\bfg]$. \end{itemize} \end{definition} \begin{remark} By \ref{Phi(1A)=1_Phi(A)} and \ref{PhiA(hg)=PhiA(h)PhiA(g)}, we have \[ A[\bfg] = \id A [\bfg] \] and by \ref{PhiA(1...1)=1PhiA}, we have \[ f[\bfB] = f[\id\bfB] \] which is coherent with the usual notation of $A$ for $\id A$. \end{remark} Since two consecutive morphisms both of type \ref{PhiA(g1...galpha)} or both of type \ref{Phif} can be merged together into a single one by \ref{PhiA(hg)=PhiA(h)PhiA(g)} and \ref{Phi(f2 f1)=Phi(f2) Phi(f1)}, we have no way, in general, to swap the order of a morphism of type $A[\bfg]$ followed by one of the form $f_\xi[\bfB]$, because the only axiom that relates the two generators is (\ref{f[g1...gn]}). Therefore, all we can say about the general morphism of $\ring \A \B$ is that it is a string of compositions of alternate morphisms of type \ref{PhiA(g1...galpha)} and \ref{Phif}, subject to the equations \ref{PhiA(1...1)=1PhiA}-\ref{Phi(f2 f1)=Phi(f2) Phi(f1)}. \begin{remark} If $\A$ is such that $\length{\Gamma(A)}=1$ for all objects $A$ in $\A$, then $\ring \A \B$ is highly reminiscent of the category $\A \otimes \B$ as described by Power and Robinson in~\cite{power_premonoidal_1997}. The authors studied the \emph{other} symmetric monoidal closed structure of $\Cat$, where the exponential $[\B,\C]$ is the category of functors from $\B$ to $\C$ and morphisms are simply transformations (not necessarily natural), and $\otimes \B$ is the tensor functor that is the left adjoint of $[\B,-]$. The category $\A \otimes \B$ has pairs $(A,B)$ of objects of $\A$ and $\B$, and a morphism from $(A,B)$ to $(A',B')$ is a finite sequence of non-identity arrows consisting of alternate chains of consecutive morphisms of $\A$ and $\B$. Composition is given by concatenation followed by cancellation accorded by the composition in $\A$ and $\B$, much like our $\ring \A \B$. The only difference with their case is that we have the additional dinaturality equality \ref{Phif_xi dinatural}. For an arbitrary category $\A$ over $\gcf$, our $\ring \A \B$ would be a sort of generalised tensor product, where the number of objects of $\B$ we ``pair up'' with an object $A$ of $\A$ depends on $\Gamma(A)$. \end{remark} We are now ready to show that $\fc \B -$ has indeed a left adjoint. 
This is going to be a crucial step towards a complete substitution calculus for dinatural transformations; we shall discuss some ideas and conjectures about the following steps in the conclusions. \begin{theorem}\label{theorem:{B,-} has a left adjoint} The functor $\FC \B -$ has a left adjoint \[ \begin{tikzcd}[column sep=2cm,bend angle=30] {\WCFover\gcf} \ar[r,bend left,"\ring - \B"{name=A},pos=.493] & \Cat \ar[l,bend left,"\FC \B -"{name=B},pos=.507] \ar[from=A,to=B,phantom,"\bot"] \end{tikzcd} \] therefore there is a natural isomorphism \begin{equation}\label{natural isomorphism (A circ B, C) -> (A,{B,C})} \Cat \bigl( \ring \A \B , \C \bigr) \cong \WCFover\gcf \bigl( \A, \FC \B \C \bigr). \end{equation} Moreover, $\ring {} {} \colon \WCFover\gcf \times \Cat \to \Cat$ is a functor. \end{theorem} \begin{proof} Recall that to give an adjunction $ (\ring - \B) \dashv \FC \B -$ is equivalent to give, for all $\A \in \WCFover\gcf$, a universal arrow $(\ring \A \B, F_\A \colon \A \to \FC \B {\ring \A \B})$ from $\A$ to the functor $\FC \B -$; $F_\A$ being a morphism of $\WCFover\gcf$. This means that, for a fixed $\A$, we have to define a functor over $\gcf$ that makes the following triangle commute: \[ \begin{tikzcd}[column sep={1.5cm,between origins}] \A \ar[rr,"F_\A"] \ar[dr,"\Gamma"'] & & \FC \B {\ring \A \B} \ar[dl,"\gf"] \\ & \gcf \end{tikzcd} \] and that is universal among all arrows from $\A$ to $\FC \B -$: for all arrows $(\C, \Phi \colon \A \to \FC \B \C)$ from $\A$ to $\FC \B -$ ($\Phi$ being a functor over $\gcf$), there must exist a unique morphism in $\Cat$, that is a functor, $H \colon \ring \A \B \to \C$ such that \[ \begin{tikzcd} \A \ar[r,"F_\A"] \ar[dr,"\Phi"'] & \FC \B {\ring \A \B} \ar[d,"{\FC \B H}"] \\ & \FC \B \C \end{tikzcd} \] commutes. In the proof we will refer to properties \ref{PhiA(1...1)=1PhiA}-\ref{Phi(f2 f1)=Phi(f2) Phi(f1)} as given in the definition of $\ring \A \B$. Let then $\A$ be a category over $\gcf$ with $\Gamma \colon \A \to \gcf$ a weak Conduché fibration. We define the action of $F_\A$ on objects first. If $A$ is an object of $\A$ with $\Gamma(A)=\alpha$, then the assignment \[ \begin{tikzcd}[row sep=0em] \B^{\alpha} \ar[r,"F_\A(A)"] & \ring \A \B \\ \bfB \ar[|->,r] \ar{d}[description,name=A]{\bfg} & A[\bfB] \ar{d}[description,name=B]{{A[\bfg]}} \\[2em] \bfB' \ar[|->,r] & A[\bfB'] \arrow[from=A,to=B,|->] \end{tikzcd} \] is a functor by virtue of \ref{PhiA(1...1)=1PhiA} and \ref{PhiA(hg)=PhiA(h)PhiA(g)}. By little abuse of notation, call $F_\A(A)$ also the pair $(\alpha,F_\A(A))$, which is an object of $\FC \B {\ring \A \B}$. To define $F_\A$ on morphisms, let $f \colon A \to A'$ be a morphism in $\A$, with $\Gamma(A)=\alpha$, $\Gamma(A')=\beta$, let \[ \xi = \bigl( \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\xi \bigr) \in \Gamma(f), \] and call $ \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd} $ the skeleton of $(\overline\sigma,\overline\tau,N)$. 
We define $F_\A (f) \colon F_\A(A) \to F_\A(A')$ to be the equivalent class of the tuple \[ \bigl( F_\A (f)_\xi, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\sigma"] & n & \length\beta \ar[l,"\tau"'] \end{tikzcd}, \begin{tikzcd}[cramped,sep=small] \length\alpha \ar[r,"\overline\sigma"] & N & \length\beta \ar[l,"\overline\tau"'] \end{tikzcd}, \Delta_\xi \bigr) \] where $F_\A(f)_\xi$ is a transformation whose general component is \[ \begin{tikzcd}[row sep=1em,column sep=4em] F_\A(A)(\bfB\sigma) \ar[d,phantom,"\rotatebox{90}="] \ar[r,"{f_\xi[\bfB]}"] & F_\A(A')(\bfB\tau) \ar[d,phantom,"\rotatebox{90} ="] \\ A[\bfB\sigma] & A'[\bfB\tau] \end{tikzcd} \] Then $F_\A(f)_\xi$ is indeed dinatural in its $i$-th variable whenever $\Delta_\xi(i)=1$ because of~\ref{Phif_xi dinatural}. Moreover, $F_\A$ is well-defined on morphisms because of \ref{Phif_pixi(Bi)=Phif_xi(Bpii)} and is in fact a functor thanks to \ref{Phi(1A)=1_Phi(A)} and \ref{Phi(f2 f1)=Phi(f2) Phi(f1)}. Finally, $F_\A(f)$ so defined is indeed a morphism of $\FC \B {\ring \A \B}$: if $f$ is such that $\Gamma(f)$ is atomic, then $F_\A(f)$ is an atomic morphism of $\FC \B {\ring \A \B}$; if instead $\Gamma(f)=(N_k,\Delta_k) \circ \dots \circ (N_1,\Delta_1)$ where $(N_i,\Delta_i)$ is atomic, then there exists a factorisation $f=f_k \circ \dots \circ f_1$ in $\A$ with $\Gamma(f_i)=(N_i,\Delta_i)$ because $\Gamma$ is a weak Conduché fibration. By functoriality of $F_\A$, we have that $F_\A(f)=F_\A(f_k) \circ \dots \circ F_\A(f_1)$, hence it is a composite of atomic morphisms of $\FC \B {\ring \A \B}$. We now prove that $F_\A$ is universal. Let then $\Phi \colon \A \to \FC \B \C$ be a morphism in $\WCFover\gcf$, that is a functor over $\gcf$. We define $H \colon \ring \A \B \to \C$ as follows: \begin{itemize}[wide=0pt,leftmargin=*] \item[\ref{PhiA(B1...Balpha)}] For $A \in \A$ with $\Gamma(A)=\alpha$ and $\bfB \in \B^\alpha$, \[ H\bigl(A[\bfB]\bigr) = \Phi(A)(\bfB); \] \item[\ref{PhiA(g1...galpha)}] For $A \in \A$ with $\Gamma(A)=\alpha$, for $\bfg$ in $\B^\alpha$, \[ H\bigl(A[\bfg]\bigr) = \Phi(A)(\bfg); \] \item[\ref{Phif}] For $f \colon A \to A'$ in $\A$, $\xi = (N_\xi,\Delta_\xi) \in \Gamma(f)$ where $N_\xi$ has $n$ connected components, for $\bfB \in \B^n$, \[ H\bigl(f_\xi[\bfB]\bigr) = \Phi(f)_\xi(\bfB), \] where $\Phi(f)_\xi$ is the representative of $\Phi(f)$ whose type is given by the skeleton of $N_\xi$, cf.~the discussion on the data entailed by a functor $\Phi \colon \A \to \FC \B \C$ over $\gcf$ preceding Definition~\ref{definition A ring B}. \end{itemize} $H$ so defined on the generators of $\ring \A \B$ extends to a unique functor provided that $H$ preserves the equalities \ref{PhiA(1...1)=1PhiA}-\ref{Phi(f2 f1)=Phi(f2) Phi(f1)} in $\ring \A \B$, which it does as they have been designed \emph{precisely} to reflect all the properties of a functor $\Phi \colon \A \to \FC \B \C$, and $H$ is defined using $\Phi$ accordingly. Finally, by construction \[ \begin{tikzcd} \A \ar[r,"F_\A"] \ar[dr,"\Phi"'] & \FC \B {\ring \A \B} \ar[d,"{\FC \B H}"] \\ & \FC \B \C \end{tikzcd} \] commutes. The uniqueness of $H$ follows from the fact that the commutativity of the above triangle implies that $\Phi(A)=H(F_\A(A))$ for all $A \in \A$ and $\Phi(f)=H(F_\A(f))$, hence any such functor $H$ \emph{must} be defined as we did to make the triangle commutative. With such a universal arrow $(\ring \A \B, F_\A \colon \A \to \FC \B {\ring \A \B})$ we can define a functor $\ring - \B$ which is the left adjoint of $\FC \B -$. 
Given $F \colon \A \to \A'$ a functor over $\gcf$, by universality of $F_\A$ there exists a unique functor $\ring F \B \colon \ring \A \B \to \ring {\A'} \B$ that makes the following square commute: \[ \begin{tikzcd} \A \ar[r,"F_\A"] \ar[d,"F"'] & \FC \B {\ring \A \B} \ar[d,"\FC \B {\ring F \B}"] \\ \A' \ar[r,"F_{\A'}"'] & \FC \B {\ring {\A'} \B} \end{tikzcd} \] Such $\ring F \B$ is defined on objects as $\ring F \B \bigl( A[\bfB] \bigr) = (F_{\A'} \circ F)(A)(\bfB) = FA[\bfB]$ and on morphisms as \[ \ring F \B \bigl( A[\bfg] \bigr) = FA[\bfg], \quad \ring F \B \bigl( f[\bfB] \bigr) = Ff[\bfB]. \] Finally, $\ring{}{}$ extends to a functor \[ \begin{tikzcd}[row sep=0em] \WCFover\gcf \times \Cat \ar[r,"\ring{}{}"] & \Cat \\ \A\quad\quad\,\B \ar[r,|->] \ar[d,shift right=17mu,"F"'] \ar[d,shift left=17mu,"G"] & \ring \A \B \ar[d,"\ring F G"] \\[3em] {\A'}\quad\quad\B' \ar[r,|->] & \ring {\A'} {\B'} \end{tikzcd} \] where $\ring F G$ is defined as follows on the generators: \begin{itemize} \item $\ring F G \bigl( A[\bfB] \bigr) = FA[G\bfB]$, \item $\ring F G \bigl( A[\bfg] \bigr) = FA[G\bfg]$, \item $\ring F G \bigl( f[\bfB] \bigr) = Ff[G\bfB]$ \end{itemize} (where $G\bfB=(GB_1,\dots,GB_{\length\alpha})$ if $\bfB=(B_1,\dots,B_{\length\alpha})$). It is easy to see that $\ring F G$ is well defined (i.e.\ it preserves equalities in $\ring \A \B$), thanks to the functoriality of $F$ and $G$. It is also immediate to verify that $\ring{}{}$ is indeed a functor.\qed \end{proof} \section{Conclusions}\label{section:coda} The ultimate goal to achieve a complete substitution calculus of dinatural transformations is to obtain an appropriate functor over $\gcf$ \[ M \colon \ring{\FC \B \C} {\FC \A \B} \to \FC \A \C \] which, \emph{de facto}, realises a \emph{formal} substitution of functors into functors and transformations into transformations as an \emph{actual} new functor or transformation. As in Kelly's case, {horizontal} composition of dinatural transformations will be at the core, we believe, of the desired functor; the rules of vertical composition are, instead, already embodied into the definition of $\FC \B \C$. Such $M$ will arise as a consequence of proving that $\WCFover\gcf$ is a monoidal closed category, much like Kelly did, by showing that the natural isomorphism~(\ref{natural isomorphism (A circ B, C) -> (A,{B,C})}) extends to \[ \WCFover\gcf (\ring \A \B , \C) \cong \WCFover\gcf (\A , \FC \B \C). \] Necessarily then, we will first have to show that the substitution category $\ring \A \B$ is itself an object of $\WCFover\gcf$. Following Kelly's steps described in~\cite[\S 2.1]{kelly_many-variable_1972}, this will be done by extending our functor $\ring{}{} \colon \WCFover\gcf \times \Cat \to \Cat$ to a functor \[ \ring{}{} \colon \WCFover\gcf \times \WCFover\gcf \to \WCFover\gcf, \] exhibiting $\WCFover\gcf$ as a monoidal category, with tensor $\ring{}{}$. To do so in his case, Kelly defined $\ring \A \B$ just as before, ignoring the augmentation on $\B$, and then augmented $\ring \A \B$ using the augmentations of $\A$ and $\B$. In fact, what he did, using the category $\Per$ of permutations, was to regard $\Per$ as a category over itself in the obvious way and then to define a functor $P \colon \ring \Per \Per \to \Per$ that computes substitution of permutations into permutations. 
That done, he set $\Gamma \colon \ring \A \B \to \Per$ as a composite
\[
\begin{tikzcd}
\ring \A \B \ar[d,"\ring {\Gamma_\A} {\Gamma_\B}"'] \ar[r] & \Per \\
\ring \Per \Per \ar[ur,"P"']
\end{tikzcd}
\]
This suggests, as usual, doing the same in our case. Hence, the next step will be to come up with a substitution functor
\[
S \colon \ring \gcf \gcf \to \gcf,
\]
which is tantamount to defining an operation of substitution of graphs, and then define $\Gamma \colon \ring \A \B \to \gcf$ as
\begin{equation}\label{augmentation of A ring B via G ring G}
\begin{tikzcd}
\ring \A \B \ar[d,"\ring {\Gamma_\A} {\Gamma_\B}"'] \ar[r] & \gcf\\
\ring \gcf \gcf \ar[ur,"S"']
\end{tikzcd}
\end{equation}
A possible hint as to how to do this is given by how we defined the horizontal composition of dinatural transformations in Chapter~\ref{chapter horizontal}, and what happened to the graphs of the transformations (that is, we consider the special case of $\A = \B = \FC \C \C$). Looking back at Example~\ref{ex:hc example}, when we computed the first horizontal composition of $\delta$ and $(\eval A B)_{A,B}$, in fact we considered the formal substitution $\eval{}{}\bigl[\delta,([+],\id\C)\bigr]$ in $\ring {\FC \C \C} {\FC \C \C}$, which we then realised into the transformation $\HC \delta {\eval{}{}} 1$. This realisation part is what the desired functor $M$ will do, once properly defined. Now, consider, in $\ring \gcf \gcf$, the formal substitution $\graph{\eval{}{}}\bigl[\graph\delta,[+]\bigr]$, which is the image of $\eval{}{}\bigl[\delta,([+],\id\C)\bigr]$ along the functor $\ring \gf \gf \colon \ring {\FC \C \C} {\FC \C \C} \to \ring \gcf \gcf$. Since $M \colon \ring {\FC \C \C} {\FC \C \C} \to \FC \C \C$ ought to be a functor over $\gcf$, we have that $S\bigl(\graph{\eval{}{}}\bigl[\graph\delta,[+]\bigr]\bigr)$ should be the graph that $\HC \delta {\eval{}{}} 1$ has, which is
\[
\begin{tikzpicture}
\matrix[column sep=1em,row sep=1.5em]{
\node[category] (1) {}; & \node[opCategory] (2) {}; & \node[opCategory] (3) {}; & \node[category] (4) {}; \\
& \node[component] (A) {}; & & \node[component] (B) {};\\
& & & \node[category] (5) {};\\
};
\graph[use existing nodes]{
1 -> A -> {2,3};
4 -> B -> 5;
};
\end{tikzpicture}
\]
The intuition for it was that we ``bent'' $\graph\delta$ into the U-turn that is the first connected component of $\graph{\eval{}{}}$. A possible approach to a general definition of substitution of graphs into graphs is the following: given two connected graphs $N_1$, $N_2$ in $\gcf$, the graph $S\bigl(N_1[N_2]\bigr)$ is the result of subjecting $N_2$ to all the ramifications and U-turns of $N_1$; in so doing, one would have to substitute a copy of $N_2$ in every \emph{directed path} of $N_1$. This idea is not original, as it was suggested by Bruscoli, Guglielmi, Gundersen and Parigot~\cite{guglielmi_substitution} in private communications to implement substitution of \emph{atomic flows}~\cite{GuglGundStra::Breaking:uq}, which are graphs extracted from certain formal proofs in \emph{Deep Inference}~\cite{Gugl:06:A-System:kl} and which look very much like morphisms in $\gcf$. How to put such an intuitive idea into a formal, working definition is the subject of current investigations, and this task has already proved to be far from trivial.
Once that is done, the rest should follow relatively easily, and we would expect that the correct compatibility law for horizontal and vertical composition sought in Section~\ref{section compatibility} will become apparent once the substitution functor $M$ above is found as part of a monoidal closed structure.
{\small \noindent \copyright 2021. This manuscript version is made available under the CC-BY-NC-ND 4.0 license \url{http://creativecommons.org/licenses/by-nc-nd/4.0/}.}
\noindent\includegraphics[scale=0.75]{by-nc-nd.eps}
\end{document}
\begin{document} \title{Long time behaviour of Ricci flow on open 3-manifolds} \author{Laurent Bessi\`eres \thanks{Institut de Math\'ematiques de Bordeaux, universit\'e de Bordeaux.} \and G\'erard Besson \thanks{Institut Fourier-CNRS, universit\'e de Grenoble.} \and Sylvain Maillot \thanks{Institut de Math\'ematiques et de mod\'elisation de Montpellier, universit\'e de Montpellier.}} \maketitle \begin{abstract} We study the long time behaviour of the Ricci flow with bubbling-off on a possibly noncompact $3$-manifold of finite volume whose universal cover has bounded geometry. As an application, we give a Ricci flow proof of Thurston's hyperbolisation theorem for $3$-manifolds with toral boundary that generalises Perelman's proof of the hyperbolisation conjecture in the closed case. \end{abstract} \thanks{This research was partially supported by ANR project GTO ANR-12-BS01-0014.} \section{Introduction} A Riemannian metric is \bydef{hyperbolic} if it is complete and has constant sectional curvature equal to $-1$. If $N$ is a $3$-manifold-with-boundary, then we say it is \bydef{hyperbolic} if its interior admits a hyperbolic metric. In the mid-1970s, W.~Thurston stated his \emph{Hyperbolisation Conjecture,} which gives a natural sufficient condition on the topology of a $3$-manifold-with-boundary $N$ which implies that it is hyperbolic. Recall that $N$ is \bydef{irreducible} if every embedded $2$-sphere in $N$ bounds a $3$-ball. It is \bydef{atoroidal} if every incompressible embedded $2$-torus in $N$ is parallel to a component of $\bord N$ or bounds a product neighbourhood $T^2\times [0,1)$ of an end of $N$. A version of Thurston's conjecture states that if $N$ is compact, connected, orientable, irreducible, and $\pi_1N$ is infinite and does not have any subgroup isomorphic to $\mathbf{Z}^2$, then $N$ is hyperbolic. If one replaces the hypotheses on the fundamental group by the assumption that $N$ is atoroidal then one gets the conclusion that $N$ is hyperbolic or Seifert fibred. Thurston proved his conjecture for the case of so-called \emph{Haken manifolds}, which includes the case where $\bord N$ is nonempty. The case where $N$ is closed was solved by G.~Perelman \cite{Per1,Per2} using Ricci flow with surgery, based on ideas of R.~Hamilton. It is natural to ask whether the Hamilton-Perelman approach works when $\bord N\neq\emptyset$. The interior $M$ of $N$, on which one wishes to construct a hyperbolic metric, is then noncompact. This question can be divided into two parts: first, is it possible to construct some version of Ricci flow with surgery on such an open manifold $M$, under reasonable assumptions on the initial metric? Second, does it converge (modulo scaling) to a hyperbolic metric? A positive answer to both questions would give a Ricci flow proof of the full Hyperbolisation Conjecture logically independent of Thurston's results. A positive answer to the first question was given in~\cite{bbm:openflow}, for initial metrics of bounded geometry, i.e. of bounded curvature and positive injectivity radius. If one considers irreducible manifolds, surgeries are topologically trivial: each surgery sphere bounds a $3$-ball. Hence a surgery splits off a $3$-sphere. In this situation we can refine the construction of the Ricci flow with surgery so that it is not necessary to perform the surgery topologically. We obtain a solution which is a piecewise smooth Ricci flow on a fixed manifold; at singular times, one performs only a metric surgery, changing the metric on some $3$-balls. 
This construction was defined in \cite{B3MP} in the case of closed irreducible nonspherical $3$-manifolds, and called Ricci flow with bubbling-off. One can extend it to the setting of bounded geometry. The purpose of this paper is to answer the second question, in the situation where the initial metric has a \emph{cusp-like structure}. \begin{defi}\label{def:cusp-like} We say that a metric $g$ on $M$ has a \bydef{cusp-like structure}, or is a \emph{cusp-like metric}, if $M$ has finitely many ends (possibly zero), and each end has a neighbourhood which admits a metric $g_\mathrm{cusp}$ homothetic to a rank two cusp neighbourhood of a hyperbolic manifold such that $g-g_\mathrm{cusp}$ goes to zero at infinity in $C^k$-norm for all positive integers $k$. (Thus if $M$ is closed, any metric is cusp-like.) \end{defi} Note that such a metric is automatically complete with bounded curvature and of finite volume, but its injectivity radius equals zero hence it does not have bounded geometry. However, except in the case where $M$ is homeomorphic to a solid torus, its universal covering does have bounded geometry (see Lemma~\ref{lem:bd}). Since solid tori are Seifert fibred, we will assume that $M$ is not homeomorphic to a solid torus when necessary. Also note that if $M$ admits a cusp-like metric, then $M$ admits a manifold compactification whose boundary is empty or a union of $2$-tori. This compactification is irreducible (resp.~atoroidal, resp.~Seifert-fibred) if and only if $M$ is irreducible (resp.~atoroidal, resp.~Seifert-fibred). In section \ref{sec:openflow} we construct a Ricci flow with bubbling-off on $M$, for any cusp-like initial metric, by passing to the universal cover and working equivariantly. For simplicity we restrict ourselves to the case where $M$ is nonspherical. This is not a problem since spherical manifolds are Seifert fibred. We also prove that the cusp-like structure is preserved by this flow (cf.~Theorem \ref{thm:cusp-like}). Using this tool, we can adapt Perelman's proof of geometrisation to obtain the following result: \begin{theo}\label{thm:geometrisation} Let $M$ be a connected, orientable, irreducible, atoroidal $3$-manifold and $g_0$ be a metric on $M$ which is cusp-like at infinity. Then $M$ is Seifert-fibred, or there exists a Ricci flow with bubbling-off $g(\cdot)$ on $M$ defined on $[0,\infty)$, such that $g(0)=g_0$, and as $t$ goes to infinity, $t^{-1}g(t)$ converges smoothly in the pointed topology for appropriate base points to some finite volume hyperbolic metric on $M$. Moreover, $g(\cdot)$ has finitely many singular times, and there are positive constants $T,C$ such that $|\mathop{\rm Rm}\nolimits| < Ct^{-1}$ for all $t \ge T$. \end{theo} \label{rem:geometrisation} If $N$ is a compact, connected, orientable $3$-manifold such that $\bord N$ is empty or a union of $2$-tori, then $M=\mathop{\rm int}\nolimits N$ always carries a cusp-like at infinity metric. Thus we obtain: \begin{corol}[Thurston, Perelman]\label{corol:geometrisation} Let $N$ be a compact, connected, orientable $3$-manifold-with-boundary such that $\bord N$ is empty or a union of $2$-tori. If $N$ is irreducible and atoroidal, then $N$ is Seifert-fibred or hyperbolic. \end{corol} Note that it should be possible to obtain this corollary directly from the closed case by a doubling trick. The point of this paper is to study the behaviour of Ricci flow in the noncompact case.\\ Let us review some results concerning global stability or convergence to finite volume hyperbolic metrics. 
In the case of surfaces, L. Ji, R. Mazzeo and N. Sesum \cite{Ji-Maz-Ses:cusps} show that if $(M,g_0)$ is complete, asymptotically hyperbolic of finite area with $\chi(M) < 0$, then the normalised Ricci flow with initial condition $g_0$ converges exponentially to the unique complete hyperbolic metric in its conformal class. G. Giesen and P. Topping \cite[Theorem 1.3]{Gie-Top:incomplete} show that if $g_0$, possibly incomplete and with unbounded curvature, is in the conformal class of a complete finite area hyperbolic metric $g_\mathrm{hyp}$, then there exists a unique Ricci flow with initial condition $g_0$ which is instantaneously complete and maximally stretched (see the precise definition in \cite{Gie-Top:incomplete}), defined on $[0,+\infty)$ and such that the rescaled solution $(2t)^{-1}g(t)$ converges smoothly locally to $g_\mathrm{hyp}$ as $t \to \infty$. Moreover, if $g_0 \leq C g_{\mathrm{hyp}}$ for some constant $C>0$ then the convergence is global: for any $k \in \NN$ and $\mu \in (0,1)$ there exists a constant $C'>0$ such that for all $t \geq 1$, $\vert (2t)^{-1}g(t) - g_\mathrm{hyp} \vert_{C^k(M,g_\mathrm{hyp})} < \frac{C'}{t^{1-\mu}}$. In dimensions greater than or equal to 3, R.~Bamler \cite{Bam:stability} shows that if $g_0$ is a small $C^0$-perturbation of a complete finite volume hyperbolic metric $g_{\mathrm{hyp}}$, that is if $\vert g_0 - g_\mathrm{hyp} \vert_{C^0(M,g_\mathrm{hyp})} <\epsi$ where $\epsi = \epsi(M,g_\mathrm{hyp}) >0$, then the normalised Ricci flow with initial condition $g_0$ is defined for all time and converges in the pointed Gromov-Hausdorff topology to $g_\mathrm{hyp}$. In dimension $3$ at least, there cannot be any global convergence result. Indeed, consider a complete finite volume hyperbolic manifold $(M^3,g_\mathrm{hyp})$ with at least one cusp. Let $g_0$ be a small $C^0$ perturbation of $g_\mathrm{hyp}$ such that $g_0$ remains cusp-like at infinity but with a different hyperbolic structure in the given cusp (change the cross-sectional flat structure on the cusp). By Bamler \cite{Bam:stability} a rescaling of $g(t)$ converges in the pointed topology to $g_{\mathrm{hyp}}$. The pointed convergence takes place on balls of radius $R$ for all $R$; however, our stability theorem~\ref{thm:stability2} implies that, outside these balls, the cusp-like structure of $g_0$ is preserved for all time, hence is different from that of the pointed limit. The convergence cannot be global.\\
The paper is organised as follows. In Section 2 we introduce the necessary definitions and we prove the existence of a Ricci flow with bubbling-off which preserves cusp-like structures. Section 3 is devoted to a thick-thin decomposition theorem which shows that the thick part of $(M,t^{-1}g(t))$ (sub)-converges to a complete finite volume hyperbolic manifold. We also give some estimates on the long time behaviour of our solutions. In Section 4 we prove the incompressibility of the tori bounding the thick part. Section 5 is devoted to a collapsing theorem, which is used to show that the thin part is a graph manifold. Finally, the main theorem, Theorem~\ref{thm:geometrisation}, is proved in Section 6. To obtain the curvature estimates on the thin part, we follow \cite{Bam:longtimeI}. An overview of the proof is given at the beginning of that section.\\
Throughout this paper, we will use the following convention: \emph{all $3$-manifolds are connected and orientable.} Finally, we acknowledge the support of the Agence Nationale de la Recherche through Grant ANR-12-BS01-0004.
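Let us conclude this introduction with the model computation which fixes our normalisations; this is only a sketch of the unperturbed case, where the initial metric is itself hyperbolic. If $g_\mathrm{hyp}$ is a complete finite volume hyperbolic metric on $M$, then $\mathop{\rm Ric}\nolimits(g_\mathrm{hyp})=-2g_\mathrm{hyp}$, so the Ricci flow starting at $g_\mathrm{hyp}$ is
$$g(t)=(1+4t)\,g_\mathrm{hyp},$$
whose sectional curvature equals $-\frac{1}{1+4t}$. Hence $t^{-1}g(t) \to 4g_\mathrm{hyp}$ as $t \to \infty$, a metric of sectional curvature $-1/4$, while $|\mathop{\rm Rm}\nolimits(g(t))|$ decays like $t^{-1}$. This is consistent with the curvature bound $|\mathop{\rm Rm}\nolimits| < Ct^{-1}$ of Theorem~\ref{thm:geometrisation} and with the normalisation of the hyperbolic limits obtained in Section 3.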
\section{Ricci flow with bubbling-off on open manifolds}\label{sec:openflow} \subsection{Definition and existence} In this section we define Ricci flow with bubbling-off and state the main existence theorem. For convenience of the reader we recall here the most important definitions involved, and refer to Chapters~2, 4, and~5 of the monograph~\cite{B3MP} for completeness. \begin{defi}[Evolving metric] Let $M$ be an $n$-manifold and $I\subset\mathbf{R}$ be an interval. An \bydef{evolving metric} on $M$ defined on $I$ is a map $t\mapsto g(t)$ from $I$ to the space of smooth Riemannian metrics on $M$. A \bydef{regular} time is a value of $t$ such that this map is $C^1$-smooth in a neighbourhood of $t$. If $t$ is not regular, then it is \bydef{singular}. We denote by $g_+(t)$ the right limit of $g$ at $t$, when it exists. An evolving metric is \bydef{piecewise $C^1$} if singular times form a discrete subset of $\RR$ and if $t \mapsto g(t)$ is left continuous and has a right limit at each point. A subset $N \times J \subset M\times I$ is \emph{unscathed} if $t \mapsto g(t)$ is smooth there. Otherwise it is \emph{scathed}. \end{defi} If $g$ is a Riemannian metric, we denote by $R_{\min}(g)$ (resp.~$R_{\max}(g)$) the infimum (resp.~the supremum) of the scalar curvature of $g$. For any $x \in M$, we denote by $\mathop{\rm Rm}\nolimits(x) : \Lambda^2 T_xM \to \Lambda^2 T_xM$ the \emph{curvature operator} defined by $$\langle \mathop{\rm Rm}\nolimits(X \wedge Y),Z \wedge T \rangle = \mathrm{Riem}(X,Y,Z,T),$$ where $\mathrm{Riem}$ is the Riemann curvature tensor and $\wedge$ and $\langle \cdot, \cdot \rangle$ are normalised so that $\{e_i \wedge e_j \mid i<j\}$ is an orthonormal basis if $\{e_i\}$ is. In particular, if $\lambda \geq \mu \geq \nu$ are the eigenvalues of $\mathop{\rm Rm}\nolimits$, then $\lambda$ (resp. $\nu$) is the maximal (resp. minimal) sectional curvature and $R=2(\lambda+\mu+\nu)$.\footnote{This convention is different from that used by Hamilton and other authors.} \begin{defi}[Ricci flow with bubbling-off] A piecewise $C^1$ evolving metric $t\mapsto g(t)$ on $M$ defined on $I$ is a \bydef{Ricci flow with bubbling-off} if \begin{enumerate}[(i)] \item The Ricci flow equation ${\partial g\over\partial t} = -2\mathop{\rm Ric}\nolimits$ is satisfied at all regular times; \item for every singular time $t\in I$ we have \begin{enumerate}[(a)] \item $R_{\min}(g_+(t)) \geqslant R_{\min}(g(t))$, and \item $g_+(t) \leqslant g(t)$. \end{enumerate} \end{enumerate} \end{defi} \begin{rem}\label{rem:finite volume} If $g(\cdot)$ is a complete Ricci flow with bubbling-off of bounded sectional curvature defined on an interval of type $[0,T]$ or $[0,\infty)$, and if $g(0)$ has finite volume, then $g(t)$ has finite volume for every $t$. \end{rem} A \bydef{parabolic neighbourhood} of a point $(x,t)\in M\times I$ is a set of the form $$P(x,t,r,-\Delta t) = \{ (x',t') \in M\times I \mid x'\in B(x,t,r), t'\in [t-\Delta t, t] \}.$$ \begin{defi}[$\kappa$-noncollapsing] For $\kappa,r>0$ we say that $g(\cdot)$ is \bydef{$\kappa$-collapsed} at $(x,t)$ on the scale $r$ if for all $(x',t')$ in the parabolic neighbourhood $P(x,t,r,-r^2)$ we have $|\mathop{\rm Rm}\nolimits (x',t')| \le r^{-2}$ and $\mathrm{vol}(B(x,t,r))<\kappa r^n$. Otherwise, $g(\cdot)$ is \bydef{$\kappa$-noncollapsed} at $(x,t)$ on the scale $r$. If this is true for all $(x,t)\in M\times I$, then we say that $g(\cdot)$ is \bydef{$\kappa$-noncollapsed on the scale $r$}.
\end{defi} Next is the definition of \emph{canonical neighbourhoods}. From now on and until the end of this section, $M$ is a $3$-manifold and $\epsi,C$ are positive numbers. \begin{defi}[$\epsi$-closeness, $\epsi$-homothety] If $U\subset M$ is an open subset and $g, g_0$ are two Riemannian metrics on $U$ we say that $g$ is $\epsi$-close to $g_0$ on $U$ if $$ ||g-g_0||_{[\epsi^{-1},U,g_0]} < \epsi,$$ where the norm is defined on page 26 of~\cite{B3MP}. We say that $g$ is $\epsi$-homothetic to $g_0$ on $U$ if there exists $\lambda>0$ such that $\lambda g$ is $\epsi$-close to $g_0$ on $U$. A pointed Riemannian manifold $(U,g,x)$ is $\epsi$-close to another Riemannian manifold $(U_0,g_0,x_0)$ if there exists a $C^{[\epsi^{-1}]+1}$-diffeomorphism $\psi$ from $U_0$ to $U$ sending $x_0$ to $x$ and such that the pullback metric $\psi^\ast(g)$ is $\epsi$-close to $g_0$ on $U$. We say that $(U,g,x)$ is $\epsi$-homothetic to $(U_0,g_0,x_0)$ if there exists $\lambda>0$ such that $(U,\lambda g,x)$ is $\epsi$-close to $(U_0,g_0,x_0)$. \end{defi} \begin{defi}[$\epsi$-necks, $\epsi$-caps] Let $g$ be a Riemannian metric on $M$. If $x$ is a point of $M$, then an open subset $U\subset M$ is an \bydef{$\epsi$-neck centred at $x$} if $(U,g,x)$ is $\epsi$-homothetic to $(S^2\times (-\epsi^{-1},\epsi^{-1}),g_\mathrm{cyl},(*,0))$, where $g_\mathrm{cyl}$ is the standard metric with unit scalar curvature. An open set $U$ is an \bydef{$\epsi$-cap centred at $x$} if $U$ is the union of two sets $V,W$ such that $x\in\mathop{\rm int}\nolimits V$, $V$ is a closed $3$-ball, $\bar W\cap V=\bord V$, and $W$ is an $\epsi$-neck. \end{defi} \begin{defi}[$(\epsi,C)$-cap] \label{def:epsi-C cap} An open subset $U\subset M$ is an \bydef{$(\epsi,C)$-cap centred at $x$} if $U$ is an $\epsi$-cap centred at $x$ and satisfies the following estimates: $R(x)>0$ and there exists $r\in (C^{-1} R(x)^{-1/2},C R(x)^{-1/2})$ such that \begin{enumerate}[(i)] \item $\overline{B(x,r)} \subset U \subset B(x,2r)$; \item The scalar curvature function restricted to $U$ has values in a compact subinterval of $(C^{-1} R(x),C R(x))$; \item $\mathrm{vol}(U) > C^{-1}R(x)^{-3/2}$ and if $B(y,s) \subset U$ satisfies $|\mathop{\rm Rm}\nolimits| \leqslant s^{-2}$ on $B(y,s)$ then $$ C^{-1} < \frac{\mathrm{vol} B(y,s)}{s^3} ~; $$ \item On $U$, $$ |\nabla R|< C R^{\frac32}~, $$ \item On $U$, \begin{equation} |\Delta R + 2|\mathop{\rm Ric}\nolimits|^2|< C R^2 ~, \label{eq:Delta R} \end{equation} \item On $U$, $$ |\nabla \mathop{\rm Rm}\nolimits|< C |\mathop{\rm Rm}\nolimits|^{\frac32}~,$$ \end{enumerate} \end{defi} \begin{rem}\label{rem:partial l} If $t \mapsto g(t)$ is a Ricci flow, then $\frac{\partial R}{\partial t} = \Delta R + 2|\mathop{\rm Ric}\nolimits|^2$ hence equation \eqref{eq:Delta R} implies that $|\dfrac{\partial R}{\partial t}| \leq C R^2$. \end{rem} \begin{defi}[Strong $\epsi$-neck] We call \bydef{cylindrical flow} the pointed evolving manifold $(S^2\times\mathbf{R},\{g_\mathrm{cyl}(t)\}_{t\in (-\infty,0]})$, where $g_\mathrm{cyl}(\cdot)$ is the product Ricci flow with round first factor, normalised so that the scalar curvature at time $0$ is $1$. 
If $g(\cdot)$ is an evolving metric on $M$, and $(x_0,t_0)$ is a point in spacetime, then an open subset $N\subset M$ is a \bydef{strong $\epsi$-neck centred at $(x_0,t_0)$} if there exists $Q>0$ such that $(N,\{g(t)\}_{t\in [t_0-Q^{-1},t_0]},x_0)$ is unscathed, and, denoting $\bar g(t)=Qg(t_0+tQ^{-1})$ the parabolic rescaling with factor $Q>0$ at time $t_0$, $(N,\{\bar g(t)\}_{t\in [-1,0]},x_0)$ is $\epsi$-close to $(S^2\times (-\epsi^{-1},\epsi^{-1}),\{g_\mathrm{cyl}(t)\}_{t\in [-1,0]},*)$. \end{defi} \begin{rem} A strong $\epsi$-neck satisfies the estimates (i)--(vi) of Definition \ref{def:epsi-C cap} for an appropriate constant $C=C(\epsi)$, at all times, that is, on all of $N \times [t_0-Q^{-1},t_0]$ for any $Q>0$ as above. \end{rem} \begin{defi}[$(\epsi,C)$-canonical neighbourhood] \label{def:VC} Let $\{g(t)\}_{t \in I}$ be an evolving metric on $M$. We say that a point $(x,t)$ admits (or is centre of) an \bydef{$(\epsi,C)$-canonical neighbourhood} if $x$ is centre of an $(\epsi,C)$-cap in $(M,g(t))$ or if $(x,t)$ is centre of a strong $\epsi$-neck $N$ which satisfies (i)--(vi) at all times. \end{defi} In~\cite[Section 5.1]{B3MP} we fix constants $\epsi_0,C_0$. For technical reasons, we need to take them slightly different here; this will be explained in the proof of Theorem~\ref{thm:existence}. \begin{defi}[Canonical Neighbourhood Property $(CN)_r$]\label{def:CN} Let $r>0$. An evolving metric satisfies the property $(CN)_r$ if every $(x,t)$ with $R(x,t) \geq r^{-2}$ is centre of an $(\epsi_0,C_0)$-canonical neighbourhood. \end{defi} Next we define a pinching property for the curvature tensor coming from work of Hamilton and Ivey \cite{hamilton:nonsingular,Ive}. We consider a family of positive functions $(\phi_{t})_{t\geqslant 0}$ defined as follows. Set $s_{t}:=\frac{e^2}{1+t}$ and define $$\phi_{t} :[-2s_{t},+\infty) \longrightarrow [s_{t},+\infty)$$ \index{$\phi_{t}$} as the inverse of the increasing function $$s \mapsto 2s(\ln(s) + \ln(1+t) -3).$$ Compared with the expression used in \cite{hamilton:nonsingular,Ive}, there is an extra factor $2$ here. This comes from our curvature conventions. A key property of this function is that $\frac{\phi_{t}(s)}{s} \to 0$ as $s \to +\infty$. \begin{defi}[Curvature pinched toward positive]\label{def:pinching} Let $I\subset [0,\infty)$ be an interval and $\{g(t)\}_{t\in I}$ be an evolving metric on $M$. We say that $g(\cdot)$ has \bydef{curvature pinched toward positive at time $t$} if for all $x \in M$ we have \begin{gather} R(x,t) \geqslant -\frac{6}{4t+1}, \label{eq:pinching 1} \\ \mathop{\rm Rm}\nolimits (x,t) \geqslant -\phi_{t}(R(x,t)). \label{eq:pinching 2} \end{gather} We say that $g(\cdot)$ has \bydef{curvature pinched toward positive} if it has curvature pinched toward positive at each $t \in I$. \end{defi} This allows us in particular to define the notion of \emph{surgery parameters} $r,\delta$ (cf.~\cite[Definition 5.2.5]{B3MP}). Using~\cite[Theorem~5.2.4]{B3MP} we also define their \emph{associated cutoff parameters} $h,\Theta$. Using the metric surgery theorem, we define the concept of a metric $g_+$ being \emph{obtained from $g(\cdot)$ by $(r,\delta)$-surgery at time $t_0$} (cf.~\cite[Definition 5.2.7]{B3MP}). This permits us to define the following central notion: \begin{defi}[Ricci flow with $(r,\delta)$-bubbling-off]\label{defi:r delta solution} Fix surgery parameters $r,\delta$ and let $h,\Theta$ be the associated cutoff parameters.
Let $I\subset [0,\infty)$ be an interval and $\{g(t)\}_{t\in I}$ be a Ricci flow with bubbling-off on $M$. We say that $\{g(t)\}_{t\in I}$ is a \bydef{Ricci flow with $(r,\delta)$-bubbling-off} if it has the following properties: \begin{enumerate}[(i)] \item $g(\cdot)$ has curvature pinched toward positive and satisfies $R(x,t)\leqslant \Theta$ for all $(x,t)\in M\times I$; \item For every singular time $t_0\in I$, the metric $g_+(t_0)$ is obtained from $g(\cdot)$ by $(r,\delta)$-surgery at time $t_0$; \item $g(\cdot)$ satisfies property $(CN)_r$. \end{enumerate} \end{defi} \begin{defi}[Ricci flow with $(r,\delta,\kappa)$-bubbling-off] Let $\kappa>0$. A Ricci flow with $(r,\delta)$-bubbling-off $g(\cdot)$ is called a \bydef{Ricci flow with $(r,\delta,\kappa)$-bubbling-off} if it is $\kappa$-noncollapsed on all scales less than or equal to 1. \end{defi} \begin{defi}\label{def:normalised} A metric $g$ on a $3$-manifold $M$ is \bydef{normalised} if it satisfies $\operatorname{tr} \mathop{\rm Rm}\nolimits^2\le 1$ and each ball of radius $1$ has volume at least half of the volume of the unit ball in Euclidean $3$-space. \end{defi} Note that a normalised metric always has bounded geometry. At last we can state our existence theorem: \begin{theo}\label{thm:existence} There exist decreasing sequences of positive numbers $r_k,\kappa_k>0$ and, for every continuous positive function $t\mapsto \bar \delta(t)$, a decreasing sequence of positive numbers $\delta_k$ with $\delta_k \leqslant \bar \delta(\cdot)$ on $]k,k+1]$ with the following property. For any complete, normalised, nonspherical, irreducible Riemannian $3$-manifold $(M,g_0)$, one of the following conclusions holds: \begin{enumerate}[(i)] \item There exists $T>0$ and a complete Ricci flow with bubbling-off $g(\cdot)$ of bounded geometry on $M$, defined on $[0,T]$, with $g(0)=g_0$, and such that every point of $(M,g(T))$ is centre of an $\epsi_0$-neck or an $\epsi_0$-cap, or \item \label{item:stop} There exists a complete Ricci flow with bubbling-off $g(\cdot)$ of bounded geometry on $M$, defined on $[0,+\infty)$, with $g(0)=g_0$, and such that for every nonnegative integer $k$, the restriction of $g(\cdot)$ to $]k,k+1]$ is a Ricci flow with $(r_k,\delta_k,\kappa_k)$-bubbling-off. \end{enumerate} \end{theo} \begin{defi}[Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off]\label{def:rdelta} We fix forever a function $r(\cdot)$ such that $r(t)=r_k$ on each interval $]k,k+1]$. Given $\delta(\cdot)$ satisfying $\delta(t)=\delta_k$ on all $]k,k+1]$, we call a solution as above a \emph{Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off}. We define similarly $h(\cdot)$ and $\Theta(\cdot)$ their associated cutoff parameters. \end{defi} \begin{addendum}[Ricci flow with bubbling-off on the quotient]\label{add:existence} With the same notation as in Theorem~\ref{thm:existence} and under the same hypotheses, if in addition $(M,g_0)$ is a Riemannian cover of some Riemannian manifold $(X,\bar g_0)$, then in either case there exists a Ricci flow with bubbling-off $\bar g(\cdot)$ on $X$ such that for each $t$, $(M,g(t))$ is a Riemannian cover of $(X,\bar g(t))$, and in Case~(ii), the restriction of $\bar g(\cdot)$ to $]k,k+1]$ is a Ricci flow with $(r_k,\delta_k)$-bubbling-off for every $k$. 
\end{addendum} The only differences between Theorem \ref{thm:existence} and Theorem 11.5 of~\cite{bbm:openflow} are that $M$ is assumed to be irreducible, that `surgical solution' is replaced with `Ricci flow with bubbling-off', and that there is the alternative conclusion (i). Theorem~\ref{thm:existence} follows from iteration of the following result, which is analogous to~\cite[Theorem 5.6]{bbm:openflow}: \begin{theo}\label{thm:a iterer} For every $Q_0,\rho_0$ and all $0\le T_A<T_\Omega<+\infty$, there exist $r,\kappa>0$ and for all $\bar \delta>0$ there exists $\delta \in (0,\bar \delta)$ with the following property. For any complete, nonspherical, irreducible Riemannian $3$-manifold $(M,g_0)$ which satisfies $|\mathop{\rm Rm}\nolimits| \le Q_0$, has injectivity radius at least $\rho_0$, and has curvature pinched toward positive at time $T_A$, one of the following conclusions holds: \begin{enumerate}[(i)] \item There exists $T\in (T_A,T_\Omega)$ and a Ricci flow with bubbling-off $g(\cdot)$ on $M$, defined on $[T_A,T]$, with $g(T_A)=g_0$, and such that every point of $(M,g(T))$ is centre of an $\epsi_0$-neck or an $\epsi_0$-cap, or \item There exists a Ricci flow with $(r,\delta,\kappa)$-bubbling-off $g(\cdot)$ on $M$, defined on $[T_A,T_\Omega]$, satisfying $g(T_A)=g_0$. \end{enumerate} \end{theo} The proof of Theorem~\ref{thm:a iterer} is the same as that of~\cite[Theorem 5.6]{bbm:openflow}. It follows from three propositions, which we do not write here, analogous to Propositions A, B, C of~\cite{bbm:openflow} (see the propositions on page 949). The only notable difference is that we have to modify Proposition A to add the alternative conclusion that in $(M,g(b))$, every point is centre of an $\epsi_0$-cap or an $\epsi_0$-neck. Let us explain the proof of this adapted Proposition A (see~\cite{bbm:openflow}, pages 959--961). It uses the surgical procedure of the monograph~\cite{B3MP} rather than that of~\cite{bbm:openflow}. If the curvature is large everywhere, that is if $R \geq 2r^{-2}$ on $(M,g(b))$ where $r$ is the surgery parameter, then by property $(CN)_r$ (Definitions 2.10 and 2.12 (iii)) every point has a canonical neighbourhood, so the alternative conclusion holds. Otherwise, we partition $M$ into three sets of small, large or very large curvature. Precisely, as in \cite[page 89]{B3MP}, we define $\mathcal{G}$ (resp.~$\mathcal{O}$, resp.~$\mathcal{R}$) as the set of points of $M$ of scalar curvature less than $2 r^{-2}$ (resp.~$\in [2r^{-2}, \Theta /2)$, resp.~$\geqslant \Theta /2$). By the assumption that $R_{\min}(b) < 2r^{-2}$ and $R_{\max}(b)=\Theta$, these sets are nonempty. One can find a locally finite collection of cutoff $\delta$-necks $\{N_i\}$ in $\mathcal{O}$ which separates $\mathcal{G}$ from $\mathcal{R}$, in the sense that any connected component of $M \setminus \bigcup_i N_i$ is contained in $\mathcal{G} \cup \mathcal{O}$ or in $\mathcal{O} \cup \mathcal{R}$. Since $M$ is irreducible and not homeomorphic to $S^3$, the middle sphere of each $N_i$ bounds a unique topological $3$-ball $B_i$. Then one of the following cases occurs: \paragraph{Case 1} Each $B_i$ is contained in a unique maximal $3$-ball $B_j$. If $\mathcal{O}$ is contained in the union of maximal $B_j$'s, we can perform the surgical procedure using the Metric surgery theorem 5.2.2 of~\cite{B3MP} on each maximal cap $B_j$, yielding a metric which has the desired properties. Otherwise one can see that each point of $M$ is centre of an $\epsi$-cap. Hence the alternative conclusion holds.
\paragraph{Case 2} $M$ is the union of the $B_i$'s. Then each point is separated from infinity by a cutoff neck, so each point is centre of a cap. Hence the alternative conclusion holds.\\ Finally, we need to explain how the addendum is proved. We already remarked in Section 11 of~\cite{bbm:openflow} that the construction can be made equivariant with respect to a properly discontinuous group action, by work of Dinkelbach and Leeb~\cite{dl:equi}. The only thing to check is that we still have the Canonical Neighbourhood Property for the quotient evolving metric $\bar g(\cdot)$. This is not obvious, since the projection map $p:M\to X$ might not be injective when restricted to a canonical neighbourhood. We use a classical trick: by adjusting the constants, we may assume that $g(\cdot)$ has the stronger property that each point $(x,t)$ such that $R(x,t)\ge r^{-2}$ has an $(\epsi_0/2,C_0)$-canonical neighbourhood. Take now $(x,t)\in X\times I$ such that $R(x,t)\ge r^{-2}$. Choose $\bar x\in M$ such that $p(\bar x)=x$. Then $R(\bar x,t)=R(x,t)\ge r^{-2}$, so $(\bar x,t)$ has an $(\epsi_0/2,C_0)$-canonical neighbourhood $U$. By truncation, it also has an $(\epsi_0,C_0)$-canonical neighbourhood $U'$ contained in $U$ (see the figure below): \begin{center} \input{truncation.pstex_t} \end{center} Precisely, if $U$ is an $\epsi_0/2$-neck with parametrisation $\phi : S^2\times (-2\epsi_{0}^{-1},2\epsi_{0}^{-1}) \to U$, we set $U':=\phi(S^2\times (-\epsi_{0}^{-1},\epsi_{0}^{-1}))$. If $U$ is a cap, then $U$ is the union of two sets $V,W$, where $\overline{W} \cap V=\bord V$ and $W$ is an $\epsi_0/2$-neck with parametrisation $\phi$. Then we set $W':= \phi(S^2\times (0,2\epsi_{0}^{-1}))$ and $U':=V \cup W'$. \begin{claim} The restriction of the projection map $p$ to $U'$ is injective. \end{claim} Once the claim is proved, we can just project $U'$ to $X$ and obtain an $(\epsi_0,C_0)$-canonical neighbourhood for $(x,t)$, so we are done. To prove the claim we consider two cases: \paragraph{Case 1} $U$ and $U'$ are caps. Assume by contradiction that there is an element $\gamma$ in the deck transformation group, different from the identity, and a point $y\in U'$ such that $\gamma y\in U'$. Following~\cite{dl:equi}, we consider the subset $N_{\epsi_0}$ of $M$ consisting of points which are centres of $\epsi_0$-necks. According to~\cite[Lemmas 3.6, 3.7]{dl:equi} there is an open set $F \supset \overline{N_{\epsi_0}}$ which has an equivariant foliation $\mathcal F$ by almost round $2$-spheres. All points sufficiently close to the centre of $W$ are centres of $\epsi_0$-necks. Pick a point $z$ in $N_{\epsi_0} \cap W\setminus W'$ sufficiently far from $W'$ so that the leaf $S$ of $\mathcal F$ through $z$ is disjoint from $U'$. By Alexander's theorem, $S$ bounds a $3$-ball $B\subset U$. Note that $B$ contains $U'$. If $S=\gamma S$, then $B=\gamma B$ or $M=B\cup \gamma B$. The former possibility is ruled out because the action is free, whereas any self-homeomorphism of the $3$-ball has a fixed point. The latter is ruled out by the assumption that $M$ is not diffeomorphic to $S^3$. Hence $S\neq \gamma S$. Since $S$ and $\gamma S$ are leaves of a foliation, they are disjoint. Then we have the following three possibilities: \paragraph{Subcase a} $\gamma S$ is contained in $B$. Then we claim that $\gamma B\subset B$. Indeed, otherwise we would have $M=B \cup \gamma B$, and $M$ would be diffeomorphic to $S^3$. Now $\gamma$ acts by isometry, so $\mathrm{vol} B=\mathrm{vol} \gamma B$.
This is impossible since the annular region between $S$ and $\gamma S$ has nonzero volume. \paragraph{Subcase b} $S$ is contained in $\gamma B$. This case is ruled out by a similar argument exchanging the roles of $S$ and $\gamma S$ (resp.~of $B$ and $\gamma B$.) \paragraph{Subcase c} $B$ and $\gamma B$ are disjoint. Then since $U'\subset B$, the sets $U'$ and $\gamma U'$ are also disjoint, contradicting the existence of $y$. \paragraph{Case 2} $U$ and $U'$ are necks. Seeking a contradiction, let $\gamma$ be an element of the deck transformation group, different from the identity, and $y$ be a point of $U'$ such that $\gamma y\in U'$. Consider again the set $N_{\epsi_0}$ defined above and the equivariant foliation $\mathcal F$. Since $U'$ is contained in the bigger set $U$, each point of $U'$ is centre of an $\epsi_0$-neck. Let $S$ (resp.~$\gamma S$) be the leaf of $\mathcal F$ passing through $y$ (resp.~$\gamma y$.) Since $M$ is irreducible, $S$ (resp.~$\gamma S$) bounds a $3$-ball $B$ (resp.~$B_\gamma$). As in the previous case, we argue that one of these balls is contained into the other, otherwise we could cover $M$ by $B,B_\gamma$ and possibly an annular region between them, and get that $M$ is diffeomorphic to $S^3$. Since $\gamma$ acts by an isometry, we must in fact have $B=B_\gamma$, and $\gamma$ has a fixed point, contradicting our hypotheses. This finishes the proof of the claim, hence that of Addendum~\ref{add:existence}. \subsection{Stability of cusp-like structures} In this section, we prove the stability of cusp-like structures under Ricci flow with bubbling-off. We consider a (nonspherical, irreducible) $3$-manifold $M$, endowed with a cusp-like metric $g_0$. To begin we remark that the universal cover of $M$ has bounded geometry, except in the case of solid tori: \begin{lem}\label{lem:bd} Assume that $M$ is not homeomorphic to a solid torus. Let $(\tilde M,\tilde g_0)$ denote the universal cover of $(M,g_0)$. Then $(\tilde M,\tilde g_0)$ has bounded geometry. \end{lem} \begin{proof} Sectional curvature is bounded on $(M,g_0)$, hence on the universal cover $(\tilde M,\tilde g_0)$ by the same constant. Observe that for any lift $\tilde x \in \tilde M$ of some $x\in M$, the injectivity radius at $\tilde x$ is not less than the injectivity radius at $x$. Fix a compact subset $K \subset M$ such that each connected component $C$ of $M \setminus K$ is $\epsi$-homothetic to a hyperbolic cusp neighbourhood, for some small $\epsi>0$. Let $\tilde K$ denote any lift of $K$ to $\tilde M$. Then the $5$-neighbourhood of $\tilde K$ has injectivity radius bounded below by $i_0>0$, the injectivity radius of the (compact) $5$-neighbourhood of $K$. Now consider a lift $\tilde C$ of a cuspidal component $C$. The boundary $\partial C$ is incompressible in $M$, otherwise $M$ would be homeomorphic to a solid torus (see Theorem A.3.1 in \cite{B3MP}). It follows that $\tilde C$ is simply connected with an incomplete metric of negative sectional curvature. Arguing as in the proof of the Hadamard theorem, it follows that the injectivity radius at a given point $p \in \tilde C$ is not less than $d(p,\partial \tilde C)$. Together with the previous estimate, this implies that $\mathop{\rm inj}\nolimits(\tilde M,\tilde g_0) \ge \min\{i_0,5\}>0$. 
\end{proof} Let us denote by $g_c$ a metric on $M$ which is hyperbolic on the complement of some compact subset of $M$, and such that, for each end $E$ of $M$ there is a factor $\lambda_E>0$ such that $\lambda_E g_0 - g_c$ goes to zero at infinity in the end, in $C^k$-norm for each integer $k$. Let $g(\cdot)$ be a Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off on $M$ such that $g(0)=g_0$, defined on $[0,T]$ for some $T>0$. Set $\lambda_E(t)=\frac{\lambda_E}{1+4\lambda_Et}$. We then have: \begin{theo}[Stability of cusp-like structures]\label{thm:cusp-like} \label{thm:stability2} For each end $E$ of $M$, $\lambda_E(t) g(t)-g_c$ goes to zero at infinity in this end, in $C^k$-norm for each integer $k$, uniformly for $t \in [0,T]$. \end{theo} \begin{proof} Let us first explain the idea. It is enough to work on each cusp. The main tool is the Persistence Theorem 8.1.3 from \cite{B3MP}, which proves that a Ricci flow remains close, on a parabolic neighbourhood where it has a priori curvature bounds, to a given Ricci flow model, if the initial data are sufficiently close on some larger balls. The model we use now is a hyperbolic Ricci flow on $T^2 \times \RR$. To obtain the required curvature bounds, we shall consider an interval $[0,t]$ where the closeness to the hyperbolic flow holds, and $\sigma>0$ fixed small enough so that Property $(CN)_r$, which prevents scalar curvature to explode too fast, gives curvature bounds on $[0,t+\sigma]$. The Persistence Theorem then gives closeness to the hyperbolic flow until time $t+\sigma$ on a smaller neighbourhood of the cusp. One can iterate this procedure, shrinking the neighbourhood of the cusp by a definite amount at each step, until time $T$. Let us now give the details. Let $E$ be an end of $M$ and $U$ be a neighbourhood of $E$ such that $(U,g_c)$ is isometric to $(\textbf{T}^2\times [0, +\infty), g_\mathrm{hyp}=e^{-2r}g_{\textbf{T}^2}+dr^2)$, where $g_{\textbf{T}^2}$ is flat. Let $\phi: \textbf{T}^2\times [0, +\infty) \to U$ be an isometric parametrisation (between $g_c$ and $g_\mathrm{hyp}$.) Then $\lambda_E \phi^\ast g_0 - g_\mathrm{hyp}$ and its derivatives go to zero at infinity. We may assume for simplicity that $\lambda_E=1$, and we define $\bar g(t):=\phi^\ast g(t)$ to be the pullback Ricci flow with bubbling-off on $\textbf{T}^2\times [0, +\infty)$. Let $g_\mathrm{hyp}(\cdot)$ denote the Ricci flow on $\textbf{T}^2\times \textbf{R}$ such that $g_\mathrm{hyp}(0)=e^{-2r}g_{\textbf{T}^2}+dr^2$, i.e. $g_\mathrm{hyp}(t)=(1+4t)g_\mathrm{hyp}$. We use it as the Ricci flow model, in the sense of \cite[Theorem 8.1.3.]{B3MP}. Our goal is to compare $g_\mathrm{hyp}(t)$ to $\bar g(t)$. By definition of our Ricci flow with bubbling-off, $r(\cdot)$ and $\Theta(\cdot)$ are piecewise constant. More precisely, there exist $0=t_0<t_1<\dots <t_N=T$ such that $r(t)=r_i$ and $\Theta (t)=\Theta_i$ on $(t_i, t_{i+1}]$. In fact, we can choose $t_i=i$ for $i<N$ (cf. Definition \ref{def:rdelta}). In particular, $g(t)$ satisfies the canonical neighbourhood property at scale $r_{i}$ on this interval (every point at which the scalar curvature is greater than $r_i^{-2}$ is centre of an $(\varepsilon_0, C_0)$ canonical neighbourhood) and the scalar curvature is bounded above by $\Theta_i$. The pinching assumption (cf.~Definition~\ref{def:pinching}) then implies that the full curvature tensor is bounded by some $K_i$ on the same interval. Set $K:=\sup_{i=1,\dots,N-1}\big\{ K_i\big\}$. 
Define a small number $\sigma >0$ by setting $$\sigma:=\frac{r_{N-1}^2}{2C_0}\leq \frac{r_i^2}{2C_0}\,\quad \forall i=0,\dots ,N-1\,.$$ This number is small enough so that $g(\cdot)$ cannot develop a singularity on a cusp on $[t,t+\sigma]$ if $R \leq 0$ at time $t$. Precisely, let us put ${\mathcal C}_s := \textbf{T}^2\times [s, +\infty)$, for $s \geq 0$. Then we have: \begin{lem}\label{lem:boundedcurv} If $\bar g(\cdot)$ is unscathed on ${\mathcal C}_s\times [0, \Delta ]$ and has scalar curvature $R\leqslant 0$ there, then it is also unscathed on ${\mathcal C}_s\times [0, \Delta +\sigma]$ and has curvature tensor bounded by $K$. \end{lem} \begin{proof} We know that singular times are discrete. Let $t \in [0,\sigma]$ be maximal such that $\mathcal{C}_s\times [0, \Delta +t]$ is unscathed for $\bar g(\cdot)$ (possibly $t=0$). We prove first that for $x\in \mathcal C_s$ and $t'\in [\Delta, \Delta +t]$ we have \begin{equation*} R(x,t')\leq 2r(t')^{-2}\ll h(t')^{-2}. \end{equation*} Indeed, since $r(\cdot)$ is nonincreasing, $g(\cdot)$ satisfies $(CN)_{r(\Delta +t)}$ on $[\Delta,t']$. If $R(x,\Delta)\leq 0$ and $R(x,t')>2r(t')^{-2}$, then we can find a subinterval $[t_1,t_2]\subset [\Delta, t']$ such that for $u\in [t_1,t_2]$, $R(x, u)\geq r(t')^{-2}$, $R(x,t_1)=r(t')^{-2}$, and $R(x,t_2)=2r(t')^{-2}$. Then the inequality $\vert \frac{\partial R}{\partial t}\vert <C_0 R^2$ holds on $\{x\}\times [t_{1},t_{2}]$, thanks to Property~\eqref{eq:Delta R} of canonical neighbourhoods (cf.~Remark \ref{rem:partial l}). The contradiction follows by integrating this inequality and using the fact that $t_2-t_1 <\sigma$: integration gives $t_2-t_1 > \frac{1}{C_0}\bigl(R(x,t_1)^{-1}-R(x,t_2)^{-1}\bigr)=\frac{r(t')^2}{2C_0}\geq \sigma$. Assume now that $t<\sigma $. Then there is a surgery at time $\Delta + t$ and, by definition of the maximal time, $\phi(\mathcal C_s)$ is scathed at time $\Delta +t$. The surgery spheres are disjoint from $\phi(\mathcal C_s)$, as they have curvature $\approx (h(\Delta+t))^{-2}$, where $h(\Delta+t)$ is the cutoff parameter, and the scalar curvature on $\phi(\mathcal C_s)$ is less than $2r(t')^{-2} \ll (h(\Delta+t))^{-2}$. By definition of our surgery, this means that $\phi(\mathcal C_s) \subset M$ is contained in a $3$-ball where the metric surgery is performed. But a cusp of $M$ cannot be contained in a $3$-ball of $M$, hence we get a contradiction. We conclude that $t=\sigma$ and $R(x,t')\leq 2r(t')^{-2}$, $\forall t'\in [\Delta , \Delta +\sigma ]$. The pinching assumption then implies $\vert \mathop{\rm Rm}\nolimits \vert < K$ there. \end{proof} For every $A>0$, let $\rho_A =\rho (A,T,K)$ be given by the Persistence Theorem 8.1.3 of \cite{B3MP}. The proof of Theorem \ref{thm:stability2} is obtained by iteration of Lemma \ref{lem:boundedcurv} and the Persistence Theorem as follows. Fix $A>0$. Let $s_0>0$ be large enough so that $\bar g(0)$ is $\rho_A^{-1}$-close to $g_\mathrm{hyp}(0)$ on $\mathcal{C}_{s_0}$. In particular $R \leq 0$ there, so by Lemma~\ref{lem:boundedcurv}, $\bar g(\cdot)$ is unscathed on $\mathcal{C}_{s_0}\times [0,\sigma]$, with curvature tensor bounded by $K$. The above-mentioned Persistence Theorem applied to $P(q,0,A,\sigma )$, for all $q\in {\mathcal C}_{s_0+\rho_A}$, shows that $\bar g(t)$ is $A^{-1}$-close to $g_\mathrm{hyp}(t)$ there. Hence on ${\mathcal C}_{s_0+\rho_A-A}\times [0, \sigma ]$, $\bar g(\cdot)$ is $A^{-1}$-close to $g_\mathrm{hyp}(\cdot)$, and in particular $R\leqslant 0$ there.
We then iterate this argument, applying Lemma~\ref{lem:boundedcurv} and the Persistence Theorem, $n=[T/\sigma]$ times and get that $\bar g(\cdot)$ is $A^{-1}$-close to $g_\mathrm{hyp}(\cdot)$ on $\mathcal{C}_{s_0+n(\rho_A-A)}\times [0,T]$. Letting $A$ go to infinity and rescaling appropriately finishes the proof of Theorem~\ref{thm:stability2}. \end{proof} \section{Thick-thin decomposition theorem} Let $(X,g)$ be a complete Riemannian $3$-manifold and $\epsi$ be a positive number. The \bydef{$\epsi$-thin part} of $(X,g)$ is the subset $X^-(\epsi)$ of points $x\in X$ for which there exists $\rho\in (0,1]$ such that on the ball $B(x,\rho)$ all sectional curvatures are at least $-\rho^{-2}$ and the volume of this ball is less than $\epsi \rho^3$. Its complement is called the \bydef{$\epsi$-thick part} of $(X,g)$ and denoted by $X^+(\epsi)$. The aim of this section is to gather curvature and convergence estimates on the $\epsi$-thick part of $(M,t^{-1}g(t))$ as $t \to \infty$, when $g(\cdot)$ is a Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off for suitably chosen surgery parameters $r(\cdot)$ and $\delta(\cdot)$. {\bf Here, we assume $M$ irreducible, nonspherical and not Seifert fibred. We assume also that $M$ is not homeomorphic to $\RR^3$}, which does not have cusp-like metrics. As a consequence, {\bf $M$ does not have a complete metric with $\mathop{\rm Rm}\nolimits \ge 0$}. In the compact case, this follows from Hamilton's classification theorem (Theorem~B.2.5 in Appendix~B of~\cite{B3MP}). In the noncompact case, this follows from the Cheeger-Gromoll theorem and the Soul theorem (cf.~B.2.3 in~\cite{B3MP}). Recall that $r(\cdot)$ has been fixed in Definition~\ref{def:rdelta}. In \cite[Definition 11.1.4]{B3MP}, we define a positive nonincreasing function $\bar \delta(\cdot)$ such that any Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off satisfies some technical theorems---Theorems 11.1.3 and 11.1.6, analogous to \cite[Propositions 6.3 and 6.8]{Per2}---if $\delta \leq \bar \delta$ and the initial metric is normalised. Both Theorems~11.1.3 and~11.1.6 remain true for a Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off on a noncompact nonspherical irreducible manifold, with the weaker assumption that the metric has normalised curvature at time $0$, i.e. $\mathop{\rm tr}\nolimits \mathop{\rm Rm}\nolimits^2 \leqslant 1$ for the initial metric, instead of being normalised in the sense of Definition~\ref{def:normalised}. In particular, they apply to metrics which are cusp-like at infinity. Indeed, the proofs of Theorems 11.1.3 and 11.1.6 do not use the assumption on the volume of unit balls for the initial metric; they only use the assumption on the curvature, mainly through the estimates \eqref{eq:pinching 1}--\eqref{eq:pinching 2}. They use neither the compactness of the manifold, nor the finiteness of the volume, nor the particular manifold. We recall that the core of Theorem~11.1.3 is to obtain the $\kappa$-noncollapsing property, canonical neighbourhoods and curvature controls relative to a distant ball satisfying a lower volume bound assumption. The parameters then depend on the distance to the ball and on its volume, not on time or initial data. These estimates are then used to control the thick part (Theorem 11.1.6). We gather below results following mainly from Perelman \cite[6.3, 6.8, 7.1-3]{Per2}. We need some definitions.
Given a Ricci flow with bubbling-off on $M$, we define $$ \rho(x,t):=\max \{\rho >0\ :\ \mathop{\rm Rm}\nolimits \geqslant -\rho^{-2}\ \textrm{ on } B(x,t,\rho)\ \}$$ and $\rho_{\sqrt{t}}(x,t) := \min\{\rho(x,t),\sqrt{t}\}$. We denote by $\tilde M$ the universal cover of $M$ and by $\tilde g(t)$ the lifted evolving metric, which by Addendum \ref{add:existence} is a Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off if $g(t)$ is. If $x \in M$, we denote by $\tilde x \in \tilde M$ a lift of $x$ and by $\tilde B(\tilde x,t,r)$ the $r$-ball in $(\tilde M,\tilde g(t))$ centered at $\tilde x$. An evolving metric $\{g(t)\}_{t\in I}$ on $M$ is said to have \bydef{finite volume} if $g(t)$ has finite volume for every $t\in I$. We denote this volume by $V(t)$. We then have: \begin{prop}\label{prop:thick thin} \label{prop:bar rho bis} \label{thm:thick thin} For every $w>0$ there exist $0 < \bar \rho(w) <\bar r(w) < 1$, $\bar T=\bar T(w)$ and $K=K(w) >0$ such that for any Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off $g(\cdot)$ on $M$ such that $\delta(\cdot)\le \bar\delta(\cdot)$ and with normalised curvature at time $0$, the following holds: \begin{enumerate}[(i)] \item \label{prop:bar r} For all $x \in M$, $t\geqslant \bar T$ and $0 < r \le \min\{\rho(x,t),\bar r \sqrt{t}\}$, if $\mathrm{vol} \tilde B(\tilde x,t,r) \geqslant wr^3$ for some lift $\tilde x$ of $x$, then $|\mathop{\rm Rm}\nolimits| \leqslant Kr^{-2}$, $|\nabla \mathop{\rm Rm}\nolimits| \leqslant Kr^{-3}$ and $|\nabla^2 \mathop{\rm Rm}\nolimits | \leqslant Kr^{-4}$ on $B(x,t,r)$. \item For all $x \in M$ and $t \geqslant \bar T$, if $\mathrm{vol} \tilde B(\tilde x,t,r) \ge wr^3$ for some lift $\tilde x$ of $x$, where $r=\rho(x,t)$, then $\rho(x,t) \geqslant \bar \rho \sqrt{t}$. \item If $g(\cdot)$ has finite volume, then: \begin{enumerate}[(a)] \item There exists $C>0$ such that $V(t)\le C t^{3/2}$. \item Let $w>0$, $x_n\in M$ and $t_n\to +\infty$. If $x_n$ is in the $w$-thick part of $(M,t_n^{-1}g(t_n))$ for every $n$, then the sequence of pointed manifolds $(M,t_n^{-1} g(t_n),x_n)$ subconverges smoothly to a complete finite volume pointed `hyperbolic' $3$-manifold of sectional curvature $-1/4$. \end{enumerate} \end{enumerate} \end{prop} \begin{proof} Note that $\mathrm{vol} \tilde B(\tilde x,t,r) \ge \mathrm{vol} B(x,t,r)$. Properties (i), (ii) with the stronger assumption $\mathrm{vol} B(x,t,r) \ge wr^3$ correspond to Perelman \cite[6.8, 7.3]{Per2}. For the extension to the universal cover see \cite[Propositions 4.1, 4.2]{Bam:longtimeI}. We remark that we extend the curvature controls to the full ball, as in \cite[Sec. 11.2.3]{B3MP} (cf. \cite[Remark 11.2.12]{B3MP}). Property (iii) follows from Perelman \cite[7.1,7.2]{Per2}. For more details one can see Section 11.2 in \cite{B3MP}, using technical Theorems 11.1.3 and 11.1.6. The assumption on the volume is used to prove that limits of rescaled parabolic neighbourhoods are hyperbolic (cf.\ Proposition~11.2.3). \end{proof} \begin{rem} The hypothesis that $M$ is irreducible is not essential here, but since our Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off is defined in this situation, it makes sense to keep this assumption throughout. \end{rem} For later purposes, namely to prove that cuspidal tori in the appearing hyperbolic pieces are incompressible in $M$, we need the following improvement of Proposition~\ref{prop:thick thin}(iii)(b), which gives convergence of flows rather than metrics. 
With the notations of Proposition~\ref{prop:thick thin}, we define $g_{n} := t_{n}^{-1}g(t_{n})$ and $g_{n}(t):=t_{n}^{-1}g(tt_{n})$, the latter being a Ricci flow with bubbling-off such that $g_{n}(1)=g_{n}$. If $g_{\mathrm{hyp}}$ denotes the `hyperbolic' metric of sectional curvature $-1/4$, then the Ricci flow $g_{\mathrm{hyp}}(t)$ satisfying $g_{\mathrm{hyp}}(1)=g_{\mathrm{hyp}}$ is simply $g_{\mathrm{hyp}}(t)=tg_{\mathrm{hyp}}$. Consider $w>0$, $t_{n} \to \infty$ and $x_{n}$ in the $w$-thick part of $(M,g_{n})$. By Proposition~\ref{prop:thick thin} there exists a subsequence of $(M,g_{n},x_{n})$ converging smoothly to $(H,g_{\mathrm{hyp}}, x_{\infty})$. By relabeling, we can assume that the sequence converges. Then we have: \begin{prop}\label{prop:thick thin persist} The sequence $(M\times[1,2], g_{n}(t),(x_{n},1))$ converges smoothly to $(H \times[1,2], g_{\mathrm{hyp}}(t),(x_{\infty},1))$. \end{prop} \begin{proof} We need to show that, for all $A>0$ and for all $n$ large enough, the rescaled parabolic ball $B(\bar x_{n},1,A)\times [1,2]$ is $A^{-1}$-close to $B(x_{\infty},1,A) \times [1,2]$. In what follows we put a bar on $x_n$ to indicate that the ball is taken w.r.t.\ $g_n(t)$.\\ We use the Persistence Theorem \cite[Theorem 8.1.3]{B3MP}, the hyperbolic limit $(H \times[1,2], g_{\mathrm{hyp}}(t),(x_{\infty},1))$ being the model ${\mathcal{M}}_{0}$ in the sense of \cite[page 89]{B3MP}. Fix $A>1$ and let $\rho:=\rho({\mathcal{M}}_{0},A,1)\geq A$ be the parameter from the Persistence Theorem. By definition of $(H,g_{\mathrm{hyp}}, x_{\infty})$, note that $(B(\bar x_{n},1,\rho),g_{n})$ is $\rho^{-1}$-close to $(B(x_{\infty},1,\rho),g_{\mathrm{hyp}})$ for all sufficiently large $n$, satisfying assumption (ii) of \cite[Theorem 8.1.3]{B3MP}. To verify the other assumptions, we adapt arguments of \cite[Lemma 88.1]{Kle-Lot:topology} to our situation. In particular we have to take care of hyperbolic pieces appearing in a large $3$-ball affected by a metric surgery. This is ruled out by a volume argument.\\ So we consider, for each $n$, $T_{n} \in [t_n,2t_n]$ maximal such that \begin{enumerate}[(i)] \item $B(x_{n},t_{n},\rho\sqrt{t_{n}}) \times [t_{n},T_{n}]$ is unscathed, \item $|2t\mathop{\rm Ric}\nolimits(x,t) + g(x,t)|_{g(t)} \leqslant 10^{-6}$ there. \end{enumerate} The case $T_{n}=t_n$, where $t_{n}$ is a singular time and a surgery affects the ball just at that time, is not \emph{a priori} excluded. Note that (ii) implies $|\mathop{\rm Rm}\nolimits_{g_{n}}| \leqslant 1$ on the considered neighbourhood: one has $\mathop{\rm Ric}\nolimits_{g(t)} \approx -\frac{1}{2t}g(t)$ for $t \in [t_n,T_n]$, or $\mathop{\rm Ric}\nolimits_{g(tt_{n})} \approx -\frac{1}{2tt_{n}}g(tt_{n})$ for $t\in[1,T_n/t_n]$, and then $\mathop{\rm Ric}\nolimits_{g_{n}(t)} = \mathop{\rm Ric}\nolimits_{t_{n}^{-1}g(tt_{n})} \approx -\frac{1}{2tt_{n}}g(tt_{n}) = -\frac{1}{2t}g_{n}(t)$. Thus the sectional curvatures of $g_{n}(t)$ remain in $[-\frac{1}{4}-\frac{1}{100},-\frac{1}{8}+\frac{1}{100}]$ for $A$ large enough. We let $\bar T_n:=T_n/t_n \in [1,2]$ denote the rescaled final time. The assumptions of \cite[Theorem 8.1.3]{B3MP} being satisfied on $B(\bar x_{n},1,\rho)\times [1,\bar T_n]$, the conclusion holds on $B(\bar x_{n},1,A)\times [1,\bar T_n]$, that is, $(B(\bar x_{n},1,A)\times [1,\bar T_n],g_n(t))$ is $A^{-1}$-close to $(B(x_{\infty},1,A) \times [1,\bar T_n],g_{\mathrm{hyp}}(t))$. \begin{claim}\label{claim:two} For all $n$ large enough, $\bar T_n = 2$. 
\end{claim} \begin{proof}[Proof of Claim~\ref{claim:two}] We first prove that there are at most finitely many integers $n$ such that $T_{n}$ is a singular time where $B(x_{n},t_{n},\rho\sqrt{t_{n}})$ is scathed, that is, $g_{+}(x,T_{n})\not=g(x,T_{n})$ for some $x\in B(x_{n},t_{n},\rho\sqrt{t_{n}})$. We first describe the idea of the proof. Assume that $T_{n}$ is such a singular time. By definition of our $(r,\delta)$-surgery, there is a surgery $3$-ball $B\ni x$ whose boundary $\partial B$ is the middle sphere of a strong $\delta$-neck with scalar curvature $\approx h^{-2}(T_{n})\gg 0$, where $h(T_{n})$ is the cutoff parameter at time $T_n$. By assumption (ii) above, $R<0$ at time $T_{n}$ on $B(x_{n},t_{n},\rho\sqrt{t_{n}})$, hence $\partial B \cap B(x_{n},t_{n},\rho\sqrt{t_{n}}) = \emptyset$. It follows that $B(x_{n},t_{n},\rho\sqrt{t_{n}}) \subset B$, which is an almost standard cap for $g_{+}(T_{n})$. For the pre-surgery metric, the persistence theorem implies that $(B(x_{n},t_{n},A\sqrt{t_{n}}),g(T_n))$ is almost homothetic to a (large) piece of the hyperbolic manifold $H$. Hence the surgery shrinks this piece to a small standard cap, decreasing the volume by a definite amount. As moreover $t^{-1}g(t)$ is volume decreasing along time, the volume would become negative if there were too many such singular times, yielding a contradiction. We now go into the details. Let $\mu >0$ be the volume of the unit ball in $(H,g_\mathrm{hyp}(1))$ centred at $x_\infty$, $B_\mathrm{hyp}:= B(x_\infty,1,1)$. For any $t\geqslant 1$ we then have $\mathrm{vol}_{g_{\mathrm{hyp}}(t)}(B_\mathrm{hyp})=t^{3/2}\mathrm{vol}_{g_{\mathrm{hyp}}}(B_\mathrm{hyp}) = t^{3/2}\mu$. We assume $A>1$, so that for $n$ large enough, by closeness at time $\bar T_n$ between $g_n(\cdot)$ and $g_\mathrm{hyp}(\cdot)$ we have: $$\mathrm{vol}_{g_{n}(\bar T_n)}(B(\bar x_{n},1,A)) \geqslant \frac{1}{2} \mathrm{vol}_{g_{\mathrm{hyp}}(\bar T_n)} (B_\mathrm{hyp}) = (\bar T_n)^{3/2}\frac{\mu}{2}.$$ Assume that $T_n$ is a singular time such that $g_{+}(x,T_{n})\not=g(x,T_{n})$ for some $x\in B(x_{n},t_{n},\rho\sqrt{t_{n}})$ and let $B\ni x$ be a surgery $3$-ball as discussed above. As $B$ contains $B(\bar x_{n},1,\rho)$ and $\rho \geq A$, we also have $$\mathrm{vol}_{g_{n}(\bar T_n)} (B) > \bar T_n^{3/2}\frac{\mu}{2}.$$ For the unscaled metric $g(T_n)=t_ng_n(T_n/t_n)=t_ng_n(\bar T_n)$ we then have, before surgery, $\mathrm{vol}_{g(T_n)}(B) = t_n^{3/2} \mathrm{vol}_{g_{n}(\bar T_n)}(B) \geq (t_n\bar T_n)^{3/2}\frac{\mu}{2}=T_n^{3/2}\frac{\mu}{2}$. After surgery, $\mathrm{vol}_{g_{+}(T_{n})}(B)$ is comparable to $h^{3}(T_{n})$. Computing the difference of volumes gives: \begin{eqnarray*} \mathrm{vol}_{g_{+}(T_{n})}(B) - \mathrm{vol}_{g(T_{n})}(B) & \leq & c\, h^{3}(T_{n}) - T_n^{3/2}\frac{\mu}{2} < -T_n^{3/2} \frac{\mu}{4} \end{eqnarray*} for all $n$ large enough. Since $g_{+}(t) \leqslant g(t)$ on the whole manifold, we have \begin{equation} \mathrm{vol}_{g_{+}(T_{n})}(M) - \mathrm{vol}_{g(T_{n})}(M) < -T_n^{3/2} \frac{\mu}{4}, \label{eq:volume} \end{equation} for all $n$ large enough. Now the proof of \cite[Proposition 11.2.1]{B3MP} shows that $(t+\frac{1}{4})^{-1}g(t)$ is volume non-increasing along a smooth Ricci flow. Since $g_{+}\leqslant g$ at singular times, this monotonicity holds for a Ricci flow with bubbling-off. One easily deduces, by comparing the $(t+1/4)^{-1}$ and the $t^{-1}$ scalings, that $t^{-1}g(t)$ is also volume decreasing. 
Precisely, let us now set $\bar g(t):=t^{-1}g(t)$; then for all $t' > t$: $$ \mathrm{vol}_{\bar g(t')}(M) \leqslant \left(\frac{t'+1/4}{t'} \frac{t}{t+1/4} \right)^{3/2} \mathrm{vol}_{\bar g(t)}(M)<\mathrm{vol}_{\bar g(t)}(M).$$ In particular, the sequence $\mathrm{vol}_{\bar g(t_{n})} (M)$ is decreasing. Moreover, if $[t_{n},t_{m}]$ contains a singular time $T_{n}$ as above, then using \eqref{eq:volume} in the second inequality, we get: $$ \mathrm{vol}_{\bar g(t_m)} (M) \leqslant \mathrm{vol}_{\bar g_{+}(T_{n})}(M) < \mathrm{vol}_{\bar g(T_{n})} (M) -\frac{\mu}{4} \leqslant \mathrm{vol}_{\bar g(t_{n})} (M) -\frac{\mu}{4}. $$ On the other hand, $\mathrm{vol}_{\bar g(t_n)}(M) >0$. Thus there are at most finitely many such singular times. We conclude that $B(x_{n},t_{n},\rho\sqrt{t_{n}})$ is unscathed at time $T_{n}$ for all $n$ large enough.\\ From now on we suppose $n$ large enough such that $B(x_{n},t_{n},\rho\sqrt{t_{n}}) \times [t_n,T_n]$ is unscathed. Recall that singular times form a discrete subset of $\RR$, hence there exists $\sigma_{n} >0$ such that $B(x_{n},t_{n},\rho\sqrt{t_{n}})$ is unscathed on $[t_{n},T_n+\sigma_{n}]$. By maximality of $\bar T_n$, when $\bar T_n < 2$ we must have $|2t\mathop{\rm Ric}\nolimits(x,t) + g(x,t)|_{g(t)} = 10^{-6}$ at time $T_n$ for some $x\in\overline{B(x_{n},t_{n},\rho\sqrt{t_{n}})}$. Otherwise by continuity we find $\sigma_{n}$ small enough such that (ii) holds on $[t_{n},T_n+\sigma_{n}]\subset [t_{n},2t_{n}]$, contradicting the maximality of $\bar T_n$.\\ We now show that for all large $n$, $|2t\mathop{\rm Ric}\nolimits(x,t) + g(x,t)|_{g(t)} < 10^{-6}$ at time $T_{n}$ on $\overline{B(x_{n},t_{n},\rho\sqrt{t_{n}})}$, which will imply that $\bar T_n =2$ by the discussion above. Using the $A^{-1}$-closeness of the rescaled parabolic ball $B(\bar x_{n},1,A) \times [1,\bar T_n]$ with $B(x_{\infty},1,A) \times [1,\bar T_n]$, one can check that $x_n$ is in the $w'$-thick part of $(M,{T_n}^{-1}g(T_n))$, for some fixed $w'>0$, for all $n$ large enough. Proposition \ref{prop:thick thin}(iii)(b) then implies that ${T_n}^{-1}g(T_n)$ becomes arbitrarily close to being hyperbolic on any fixed ball (w.r.t.\ ${T_n}^{-1}g(T_n)$) centred at $x_n$, when $n \to \infty$. Controlling the distortion of distances on $B(x_n,t_n,\rho\sqrt{t_{n}})\times [t_n, T_n]$ (with the estimates (ii)), one can conclude that $|2t\mathop{\rm Ric}\nolimits(x,t) + g(x,t)|_{g(t)} < 10^{-6}$ on $\overline{B(x_n,t_n,\rho\sqrt{t_{n}})}$ at time $T_n$ for $n$ large enough. The details are left to the reader. Together with the first part of the proof and the maximality of $\bar T_{n}$, this implies that $\bar T_{n} = 2$ for $n$ large enough, proving Claim~\ref{claim:two}. \end{proof} As already noted, we then have, by the Persistence Theorem, that $B(x_{n},1,A)\times [1,2]$, with the rescaled flow $g_{n}(t)$, is $A^{-1}$-close to $B(x_{\infty},1,A) \times [1,2]$ for all $n$ large enough. This concludes the proof of Proposition~\ref{prop:thick thin persist}. 
\end{proof} From Proposition \ref{prop:thick thin persist} one easily obtains: \begin{corol}\label{cor:persist} Given $w >0$ there exist a number $T=T(w)>0$ and a nonincreasing function $\beta = \beta_w : [T,+\infty) \to (0,+\infty)$ tending to $0$ at $+\infty$ such that if $(x,t)$ is in the $w$-thick part of $(M,t^{-1}g(t))$ with $t \ge T$, then there exists a pointed hyperbolic manifold $(H,g_{\mathrm{hyp}},\ast)$ such that: \begin{enumerate}[(i)] \item $P(x,t,\beta(t)^{-1}\sqrt{t},t)$ is $\beta(t)$-homothetic to $P(\ast,1,\beta(t)^{-1},1) \subset H\times [1,2]$, endowed with $g_{\mathrm{hyp}}(s)=sg_{\mathrm{hyp}}(1)$, \item For all $y \in B(x,t,\beta(t)^{-1}\sqrt{t})$ and $s \in [t,2t]$, $$ \lVert \bar g(y,s) - \bar g(y,t) \rVert < \beta(t),$$ where the norm is taken in the $C^{[\beta(t)^{-1}]}$-topology w.r.t.\ the metric $\bar g(t)=t^{-1}g(t)$. \end{enumerate} \end{corol} \section{Incompressibility of the boundary tori}\label{sec:incompressible} We prove that under the hypotheses of the previous section the tori that separate the thick part from the thin part are incompressible. More precisely, we consider $M$ nonspherical, irreducible, not homeomorphic to $\RR^3$, endowed with a complete finite volume Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off $g(\cdot)$ such that $\delta(\cdot)\le \bar\delta(\cdot)$, \emph{and whose universal cover has bounded geometry} (for each time slice). We call \bydef{hyperbolic limit} a pointed `hyperbolic' manifold of finite volume and sectional curvature $-1/4$ that appears as a pointed limit of $(M,t_n^{-1} g(t_n),x_n)$ for some sequence $t_n\to\infty$. {\bf In this section, we assume the existence of at least one hyperbolic limit $(H,g_\mathrm{hyp},*)$, which is supposed not to be closed.}\\ Given a hyperbolic limit $H$, we call \bydef{compact core of $H$} a compact submanifold $\bar H \subset H$ whose complement consists of finitely many product neighbourhoods of the cusps. Then for large $n$, we have an approximating embedding $f_n:\bar H\to M$ which is almost isometric with respect to the metrics $g_\mathrm{hyp}$ and $t_n^{-1} g(t_n)$. The goal of this section is to prove the following result: \begin{prop}\label{prop:incompressible} If $n$ is large enough, then for each component $T$ of $\bord \bar H$, the image $f_n(T)$ is incompressible in $M$. \end{prop} We argue following Hamilton's paper \cite{hamilton:nonsingular}. A key tool is the stability of the hyperbolic limit $H$: it is a limit along the flow, not just along a sequence of times. We give a statement following Kleiner-Lott (cf.~\cite[Proposition 90.1]{Kle-Lot:topology}). \begin{prop}[Stability of thick part]\label{prop:stability} There exist a number $T_0>0$, a nonincreasing function $\alpha:[T_0,+\infty) \to (0,+\infty)$ tending to $0$ at $+\infty$, a finite collection $\{(H_{1},\ast_{1}),\ldots,(H_{k},\ast_{k})\}$ of hyperbolic limits and a smooth family of smooth maps $$f(t) : B_{t} = \bigcup_{i=1}^{k}B(*_{i},\alpha(t)^{-1}) \to M$$ defined for $t\in [T_0,+\infty)$, such that \begin{enumerate}[(i)] \item The $C^{[\alpha(t)^{-1}]}$-norm of $t^{-1} f(t)^* g(t) - g_\mathrm{hyp}$ is less than $\alpha(t)$; \item For every $t_0\ge T_0$ and every $x_0\in B_{t_{0}}$, the time-derivative at $t_0$ of the function $t\mapsto f(t)(x_0)$ has norm less than $\alpha(t_{0}) t_{0}^{-1/2}$. \item $f(t)$ parametrises more and more of the thick part: the $\alpha(t)$-thick part of $(M,t^{-1}g(t))$ is contained in $\mathop{\rm im}\nolimits(f(t))$. 
\end{enumerate} \end{prop} The proof given in \cite{Kle-Lot:topology} transfers directly to our situation, using Corollary \ref{cor:persist}. \begin{rem}\label{rem:stability} Any hyperbolic limit $H$ is isometric to one of the $H_{i}$. Indeed, let $\ast \in H$ and $w>0$ be such that $\ast \in H^+(w)$. Then $x_n$ is in the $w/2$-thick part of $(M,t_{n}^{-1}g(t_{n}))$ for $n$ large enough. Assume that $f(t_{n})^{-1}(x_{n}) \in B(*_{i},\alpha(t_{n})^{-1})$ for a subsequence. Then $f(t_{n})^{-1}(x_{n})$ remains at bounded distance from $\ast_{i}$, otherwise it would go into a cusp, contradicting the $w/2$-thickness of $x_{n}$. It follows that $(M,x_{n})$ and $(M,f(t_{n})(\ast_{i}))$ will have the same limit, up to an isometry. \end{rem} \subsection{Proof of Proposition \ref{prop:incompressible}} The proof of Hamilton \cite{hamilton:nonsingular} is by contradiction. Assuming that some torus is compressible, one finds an embedded compressing disk at each subsequent time. Using Meeks and Yau \cite{Mee-Yau,Mee-Yau:smith}, the compressing disks can be chosen to have least area. By controlling the rate of change of the area of these disks, Hamilton shows that the area must go to zero in finite time, a contradiction. Due to the possible noncompactness of our manifold, the existence of the least area compressing disks is not ensured: an area minimising sequence of disks can go deeper and deeper into an almost hyperbolic cusp. We will tackle this difficulty by considering the universal cover, which has bounded geometry (cf. Lemma \ref{lem:bd} and Addendum \ref{add:existence}), when necessary. Let us fix some notation. For all small $a>0$ we denote by $\bar H_a$ the compact core in $H$ whose boundary consists of horospherical tori of diameter $a$. By Proposition \ref{prop:stability} and Remark \ref{rem:stability}, we can assume that the map $f(t)$ is defined on $B(\ast, \alpha(t)^{-1}) \supset \bar H_a$ for $t$ larger than some $T_a>0$. For all $t\geqslant T_a$ the image $f(t)(\bar H_a)$ is well defined and the compressibility in $M$ of a given boundary torus $f(t)(\partial \bar H_a)$ does not depend on $t$ or $a$. We assume that some torus $\mathbf T$ of $\partial \bar H_a$ has compressible image in $M$. Below we refine the choice of the torus $\mathbf T$. We define, for some fixed $a>0$, $$Y_{t} := f(t)(\bar H_a), \quad \mathbf{T}_{t}:=f(t)(\mathbf{T}) \quad \textrm{and} \quad W_{t} := M - \mathop{\rm int}\nolimits(Y_{t}).$$ Our first task is to find a torus in $\partial Y_t$ which is compressible in $W_t$. Note that $\mathbf{T}_t$ is compressible in $M$ and incompressible in $Y_t$, which is the core of a hyperbolic $3$-manifold, but not necessarily compressible in $W_t$: for example $Y_t$ could be contained in a solid torus and $\mathbf{T}_t$ compressible on this side. Consider the surface $\partial Y_t \subset M$ (not necessarily connected). As the induced map $\pi_1(\mathbf{T}_t) \to \pi_1(M)$, with base point chosen in $\mathbf{T}_t$, is noninjective by assumption, Corollary~3.3 of Hatcher~\cite{Hat} shows that there is a compressing disk $D\subset M$, with $\partial D \subset \partial Y_t$ homotopically nontrivial and $\mathop{\rm int}\nolimits(D) \subset M-\partial Y_t$. As $\mathop{\rm int}\nolimits(D)$ is not contained in $Y_t$, one has $\mathop{\rm int}\nolimits(D)\subset W_t$. We rename $\mathbf{T}_t$ the connected component of $\partial Y_t$ which contains $\partial D$ and $\mathbf{T}\subset \partial \bar H_a$ its $f(t)$-preimage. Then $\mathbf{T}_t$ is compressible in $W_t$. 
Let $X_t$ be the connected component of $W_t$ which contains $D$. Using \cite[Lemma A.3.1]{B3MP} we have two mutually exclusive possibilities: \begin{enumerate}[(i)] \item $X_t$ is a solid torus. It has convex boundary, hence Meeks-Yau \cite[Theorem 3]{Mee-Yau} provides a least area compressing disk $D^2_t \subset X_t$ where $\partial D^2_t \subset \mathbf{T}_t$ is in a given nontrivial free homotopy class. \item $\mathbf{T}_t$ does not bound a solid torus and $Y_t$ is contained in a $3$-ball $B$. Then $Y_t$ lifts isometrically to a $3$-ball in the universal cover $(\tilde M,\tilde g(t))$. Let $\tilde Y_t$ be a copy of $Y_t$ in $\tilde M$. By \cite{Hat} again, there is a torus $\tilde{\mathbf{T}}_t \subset \partial \tilde Y_t$ compressible in $\tilde M - \partial \tilde Y_t$, hence in $\tilde M - \tilde Y_t$. We denote by $\tilde X_t$ the connected component of $\tilde M - \mathop{\rm int}\nolimits(\tilde Y_t)$ in which $\tilde{\mathbf{T}}_t$ is compressible. As $(\tilde M,\tilde g(t))$ has bounded geometry, by \cite[Theorem 1]{Mee-Yau:smith} there is a compressing disk $D^2_t \subset \tilde X_t$ of least area with $\partial D^2_t \subset \tilde{\mathbf{T}}_t$ in a given nontrivial free homotopy class. \end{enumerate} We define a function $A : [T_a,+\infty) \to (0,+\infty)$ by letting $A(t)$ be the infimum of the areas of such embedded disks. Similarly to \cite[Lemma 91.12]{Kle-Lot:topology} we have \begin{lem}\label{lem:rate} For every $D>0$, there is a number $a_0>0$ with the following property. Given $a \in (0,a_0)$ there exists $T'_a>0$ such that for all $t_0\geqslant T'_a$ there is a piecewise smooth function $\bar A$ defined in a neighbourhood of $t_0$ such that $\bar A(t_0)=A(t_0)$, $\bar A \geqslant A$ everywhere, and $$ \bar A'(t_0) < \frac{3}{4}\left(\frac{1}{t_0+\frac{1}{4}} \right) A(t_0) -2\pi + D$$ if $\bar A$ is smooth at $t_0$, and $\lim_{t \to t_0,t>t_0} \bar A(t) \leqslant \bar A(t_0)$ if not. \end{lem} \begin{proof} The proof is similar to the proof of \cite[Lemma 91.12]{Kle-Lot:topology}, and somewhat simpler as we don't have topological surgeries. Recall that the metric of our Ricci flow with bubbling-off $g(t)$ is nonincreasing at singular times, hence the unscathedness of least area compressing disks (\cite[Lemma 91.10]{Kle-Lot:topology}) is not needed: we have $\lim_{t \to t_0,t>t_0} A(t) \leqslant A(t_0)$ if $t_0$ is singular. However, something must be said about \cite[Lemma 91.11]{Kle-Lot:topology}. This lemma asserts that given $D>0$, there is $a_0>0$ such that for $a \in (0,a_0)$ and $\mathbf{T} \subset H$ a horospherical torus of diameter $a$, for all $t$ large enough $\int_{\partial D^2_t} \kappa_{\partial D^2_t} ds \leqslant \frac{D}{2}$ and $\mathrm{length}(\partial D^2_t) \leqslant \frac{D}{2}\sqrt{t}$, where $\kappa_{\partial D^2_t}$ is the geodesic curvature of $\partial D^2_t$. Its proof relies on the fact that an arbitrarily large collar neighbourhood of $\mathbf{T}_t$ in $W_t$ is close (for the rescaled metric $t^{-1}g(t)$) to a hyperbolic cusp if $t$ is large enough. In case (i) above, this holds on $X_t \cap f(t)(B(\ast,\alpha(t)^{-1}))$ by Proposition \ref{prop:stability}. In case (ii), observe that $f(t)(B(\ast,\alpha(t)^{-1}))$ is homotopy equivalent to the compact core $\bar H_t$, hence lifts isometrically to $(\tilde M,\tilde g(t))$. It follows that $\tilde X_t$ also has an arbitrarily large collar neighbourhood of $\tilde{\mathbf{T}}_t$ close to a hyperbolic cusp. 
The rest of the proof is identical to the proof of \cite[Lemma 91.12]{Kle-Lot:topology} and hence omitted. \end{proof} In particular $A$ is upper semi-continuous from the right. Note also that, as $A$ is defined as an infimum and $g(t_k)$ and $g(t)$ are $(1+\epsi_k)$-bilipschitz when $t_k \nearrow t$, for some $\epsi_k \to 0$, $A$ is lower semi-continuous from the left. Fix $D<2\pi$, $a \in (0,a_0)$ and $T'_a$ as in Lemma \ref{lem:rate}. Then consider the solution $\hat A : [T'_a,+\infty) \to \RR$ of the ODE $$ \hat A' = \frac{3}{4}\left(\frac{1}{t+\frac{1}{4}} \right) \hat A -2\pi + D$$ with initial condition $\hat A(T'_a)=A(T'_a)$. By a continuity argument, $A(t) \leqslant \hat A(t)$ for all $t \geqslant T'_a$. However, from the ODE we have $$\hat A(t) \left(t+\frac{1}{4} \right)^{-3/4} =4(-2\pi +D) \left(t+\frac{1}{4}\right)^{1/4} + \mathrm{const},$$ which implies that $\hat A(t) <0$ for large $t$, contradicting the fact that $A(t)>0$. This finishes the proof of Proposition \ref{prop:incompressible}. \section{A Collapsing Theorem} In this section we state a version of the collapsing theorem \cite[Theorem 0.2]{Mor-Tia:collapsing} in the context of manifolds with cusp-like metrics. Let $(M_n,g_n)$ be a sequence of Riemannian $3$-manifolds. \begin{defi} We say that $g_n$ has \bydef{locally controlled curvature in the sense of Perelman} if for all $w>0$ there exist $\bar r(w)>0$ and $K(w) >0$ such that for $n$ large enough, if $0<r \le \bar r(w)$ and if $x \in (M_n,g_n)$ satisfies $\mathrm{vol} B(x,r) \ge w r^3$ and $\sec \ge -r^{-2}$ on $B(x,r)$, then $|\mathop{\rm Rm}\nolimits| \leqslant Kr^{-2}$, $|\nabla \mathop{\rm Rm}\nolimits| \leqslant Kr^{-3}$ and $|\nabla^2 \mathop{\rm Rm}\nolimits | \leqslant Kr^{-4}$ on $B(x,r)$. \end{defi} \begin{rem}\label{rem:curvature locally} Note that if $g_n = {t_n}^{-1}g(t_n)$, where $g(\cdot)$ is as in Proposition~\ref{thm:thick thin} and $t_n \to \infty$, then $g_n$ has locally controlled curvature in the sense of Perelman. \end{rem} \begin{defi} We say that $(g_n)$ \bydef{collapses} if there exists a sequence $w_n \to 0$ of positive numbers such that $(M_n,g_n)$ is $w_n$-thin for all $n$. \end{defi} From \cite[Theorem 0.2]{Mor-Tia:collapsing} we obtain: \begin{theo}\label{thm:collapsing} If $(M_n,g_n)$ is a sequence of complete oriented Riemannian $3$-manifolds such that \begin{enumerate}[\indent (i)] \item $g_n$ is a cusp-like metric for each $n$, \item $(g_n)$ collapses, \item $(g_n)$ has locally controlled curvature in the sense of Perelman, \end{enumerate} then for all $n$ large enough $M_n$ is a graph manifold. \end{theo} The manifolds in \cite[Theorem 0.2]{Mor-Tia:collapsing} are assumed to be compact and may have convex boundary. Our cusp-like assumption (i) allows us to apply their result by the following argument. First we deform each $g_n$ so that the sectional curvature is $-\frac{1}{4}$ on some neighbourhood of the ends, assumptions (ii) and (iii) remaining true. Let $w_n \to 0$ be a sequence of positive numbers such that $g_n$ is $w_n$-thin. For each $n$, we can take a neighbourhood $U_n$ of the ends of $M_n$, with horospherical boundary, small enough so that the complement $M'_n=M_n \setminus \mathop{\rm int}\nolimits U_n$ satisfies the assumptions of \cite[Theorem 0.2]{Mor-Tia:collapsing} with collapsing numbers $w_n$, except for the convexity of the added boundary. Then we deform the metric on $M'_n$ near the boundary into a reversed hyperbolic cusp so that the boundary becomes convex. 
It follows that $M'_n$, hence $M_n$, is a graph manifold for all $n$ large enough. In fact it should be clear from Morgan-Tian's proof that the convexity assumption is not necessary in this situation (see the more general \cite[Proposition 5.1]{bam:longtimeII}). \section{Proof of the main theorem} Here we prove Theorem~\ref{thm:geometrisation}. We sketch the organisation of the proof. Let $(M,g_0)$ be a Riemannian $3$-manifold satisfying the hypotheses of this theorem. We also assume that $M$ is not a solid torus, is nonspherical and does not have a metric with $\mathop{\rm Rm}\nolimits\ge 0$; otherwise it would be Seifert fibred and the conclusion of Theorem~\ref{thm:geometrisation} holds. We first define on $M$ a Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off $g(\cdot)$, issued from $g_0$ and defined on $[0,+\infty)$. As mentioned before, we may have to pass to the universal cover. By the existence theorem (Theorem \ref{thm:existence}), $g(\cdot)$ exists on a maximal interval $[0,T_\mathrm{max})$. The case $T_\mathrm{max} < +\infty$ is ruled out using the fact that $(M,g(T_\mathrm{max}))$ is covered by canonical neighbourhoods (see Claim \ref{claim:alltime} below). Proposition~\ref{thm:thick thin} then provides a sequence $t_n \nearrow +\infty$ and connected open subsets $H_{n} \subset M_n=(M,{t_n}^{-1}g(t_n))$, diffeomorphic to a complete, finite volume hyperbolic manifold $H$ (possibly empty). We set $G_n := M_n \setminus H_n$. Proposition \ref{prop:incompressible} proves that the tori of $\partial \overline H_{n}$ (if $H \not=\emptyset$) are incompressible in $M$ for large $n$. In this case, the atoroidality assumption on $M$ implies that $H_n$ is diffeomorphic to $M$ and that each component of $G_n$ is a cuspidal end $T^2 \times [0,\infty)$ of $M_n$. Then $t^{-1}g(t)$ converges (in the pointed topology) to a complete, finite volume hyperbolic metric on $M$. In both cases ($H = \emptyset$ or $H \neq\emptyset$), $G_{n}$ collapses with curvature locally controlled in the sense of Perelman. If $H = \emptyset$, we conclude by the collapsing theorem (Theorem \ref{thm:collapsing}) that $M_n=G_n$ is a graph manifold (hence Seifert fibred) for all $n$ large enough. If $H_n \neq \emptyset$, Proposition 4.2 gives a continuous decomposition $M=H_t \cup G_t$ where $H_t$ is diffeomorphic to $M$, $g(t)$ is smooth and $|\mathop{\rm Rm}\nolimits| \leq Ct^{-1}$ there, and $G_t$ is $\alpha(t)$-thin. We then use the topological/geometric description of the thin part presented in \cite{Bam:longtimeI, bam:longtimeII} to obtain that $|\mathop{\rm Rm}\nolimits| \le Ct^{-1}$ on $G_t$, by the same argument as in \cite[Theorem 1.1]{Bam:longtimeI}.\\ \subsection{Setting up the proof} Let $(\tilde M,\tilde g_0)$ be the Riemannian universal cover of $(M,g_0)$. By Lemma \ref{lem:bd} it has bounded geometry. Without loss of generality, we assume that it is normalised. If $M$ is compact, we can even assume that $g_0$ itself is normalised. We now define a Riemannian $3$-manifold $(\bar M,\bar g_0)$ by setting $(\bar M,\bar g_0):=(M,g_0)$ if $M$ is compact, and $(\bar M,\bar g_0):=(\tilde M,\tilde g_0)$ otherwise. In either case, $\bar g_0$ is complete and normalised. By~\cite{msy}, $\bar M$ is irreducible. If $\bar M$ is spherical, then $M$ is spherical, contrary to the assumption. Henceforth, we assume that $\bar M$ is nonspherical. Thus Theorem~\ref{thm:existence} applies to $(\bar M,\bar g_0)$, where $\bar \delta(\cdot)$ is chosen from Theorem \ref{thm:thick thin}. 
Let $\bar g(\cdot)$ be a Ricci flow with bubbling-off on $\bar M$ with initial condition $\bar g_0$. By Addendum~\ref{add:existence}, we also have a Ricci flow with bubbling-off $g(\cdot)$ on $M$ with initial condition $g_0$ covered by $\bar g(\cdot)$. \begin{claim}\label{claim:alltime} The evolving metrics $g(\cdot)$ and $\bar g(\cdot)$ are defined on $[0,+\infty)$. \end{claim} \begin{proof} If this is not true, then they are only defined up to some finite time $T$, and every point of $(\bar M,\bar g(T))$ is the centre of an $\epsi_0$-neck or an $\epsi_0$-cap. By Theorem~7.4 of~\cite{bbm:openflow}, $\bar M$ is diffeomorphic to $S^3$, $S^2\times S^1$, $S^2\times \RR$ or $\RR^3$.\footnote{This list is shorter than the corresponding list in~\cite{bbm:openflow} since we do not consider caps diffeomorphic to the punctured $\RR P^3$.} Since $\bar M$ is irreducible and nonspherical, $\bar M$ is diffeomorphic to $\RR^3$. The complement of the neck-like part (cf.~again~\cite{dl:equi}) is a $3$-ball, which must be invariant by the action of the deck transformation group. Since this group acts freely, it is trivial. Thus $M=\bar M$. Being covered by $\bar g(\cdot)$, the evolving metric $g(\cdot)$ is complete and of bounded sectional curvature. Hence by Remark~\ref{rem:finite volume}, $(M,g(T))$ has finite volume. By contrast, $(\bar M,\bar g(T))$ contains an infinite collection of pairwise disjoint $\epsi_0$-necks of controlled size, hence has infinite volume. This contradiction completes the proof of Claim~\ref{claim:alltime}. \end{proof} It follows from Claim~\ref{claim:alltime} that $\bar M$ carries an equivariant Ricci flow with bubbling-off $\bar g(\cdot)$ defined on $[0,+\infty)$ with initial condition $\bar g_0$. We denote by $g(\cdot)$ the quotient evolving metric on $M$. By Addendum~\ref{add:existence}, it is also a Ricci flow with $(r(\cdot),\delta(\cdot))$-bubbling-off. By Theorem \ref{thm:cusp-like}, $g(\cdot)$ remains cusp-like at infinity for all time. Now consider the following alternative, related to the conclusion of Proposition \ref{thm:thick thin}(iii): either \begin{enumerate}[(i)] \item there exist $w>0$ and $t_n \to \infty$ such that the $w$-thick part of $(M,t_n^{-1}g(t_n))$ is nonempty for all $n$, or \item there exist $w_n \to 0$ and $t_n \to \infty$ such that the $w_n$-thick part of $(M,t_n^{-1}g(t_n))$ is empty for all $n$. \end{enumerate} We refer to the first case as the \emph{noncollapsing case} and to the second as the \emph{collapsing case}.\\ We denote by $g_n$ the metric ${t_n}^{-1}g(t_n)$. Note that $g_n$ has curvature locally controlled in the sense of Perelman (cf.~Remark~\ref{rem:curvature locally}). We denote by $M_n$ the Riemannian manifold $(M,g_n)$, by $M_n^+(w)$ its $w$-thick part, and by $M_n^-(w)$ its $w$-thin part. In the collapsing case, $M_n=M_n^-(w_n)$ fits the assumptions of Theorem \ref{thm:collapsing}. Hence it is a graph manifold for $n$ large enough. Let us consider the other case. \subsection{The noncollapsing case} By assumption, there exist $w>0$ and a sequence $t_n \to \infty$ such that the $w$-thick part of $M_n$ is nonempty for all $n$. Choose a sequence $x_n \in M_n^+(w)$. Up to extracting a subsequence, by part (iii) of Proposition \ref{thm:thick thin}, $(M_n,x_n)$ converges to a complete hyperbolic manifold $(H,\ast)$ of finite volume. 
By definition of the convergence, there exist an exhaustion of $H$ by compact cores $\bar H_n \subset H$ and embeddings $f_n : (\bar H_n,\ast) \to (M,x_n)$ such that $|g_\mathrm{hyp} - f_{n}^\ast g_n|$ goes to zero. Proposition~\ref{prop:stability} (stability of the thick part) gives $T_0>0$ and a nonincreasing function $\alpha : [T_{0},\infty) \to (0,\infty)$ tending to zero at infinity, and for $t\ge T_0$ embeddings $f(t) : B(\ast,\alpha(t)^{-1})\subset H \to M$ satisfying conclusions (i)--(iii) of this proposition. If $H$ is closed, the desired conclusion follows. From now on we assume that $H$ is not closed. By Proposition \ref{prop:incompressible}, for each $m \in \NN$ and for all $n$ large enough, each component of $f_n(\partial \bar H_m)$ is an incompressible torus in $M$. Relabeling the $f_n$, we can assume that the property holds for $f_m(\partial \bar H_m)$ for all $m$. By atoroidality of $M$, it follows that $H_n := \mathop{\rm int}\nolimits f_n(\bar H_n) \subset M$ is diffeomorphic to $M$ for all $n$, and $G_n :=M \setminus H_n$ is a disjoint union of neighbourhoods of cuspidal ends of $M_n$. For large $t\ge T_0$, choose a compact core $\bar H_t \subset B(\ast,\alpha(t)^{-1})$ such that $\partial \bar H_t$ consists of horospherical tori whose diameter goes to zero as $t \to \infty$. We assume moreover that $t \mapsto \bar H_t$ is smooth. Set $H_t := f(t)(\bar H_t)\subset M$ and $G_t := M \setminus H_t$. Then $H_t$ is diffeomorphic to $M$, $t \mapsto g(t)$ is smooth there and $|\mathop{\rm Rm}\nolimits| \leq Ct^{-1}$ by closeness with $H$. On the other hand, $G_t$ is $w(t)$-thin for some $w(t) \to 0$ as $t\to \infty$. It remains to prove that $G_t$ also satisfies $|\mathop{\rm Rm}\nolimits| \leq Ct^{-1}$, which will imply its unscathedness. Consider a connected component $\mathcal{C}(t)$ of $G_t$. For all large $t$, $\partial \mathcal{C}(t)$ is an incompressible torus in $M$ with a collar neighbourhood $\alpha(t)$-close, w.r.t.\ $t^{-1}g(t)$, to a collar neighbourhood of a horospherical torus in $H$. On the other hand, $\mathcal{C}(t)$ is diffeomorphic to $T^2 \times [0,\infty)$ and its end has a cusp-like structure, hence curvature also bounded by $Ct^{-1}$. It remains to control what happens in the middle of $\mathcal{C}(t)$. We apply the topological/geometric description of the thin part obtained in \cite[Proposition 5.1]{bam:longtimeII} to a compact subset $\mathcal{C}'(t) \subset \mathcal{C}(t)$ which we define as follows. By Theorem \ref{thm:stability2} there is an embedding $f_{\mathrm{cusp}} : T^2 \times [0,+\infty) \to M$ and a function $b : [0,+\infty) \to [0,\infty)$ such that $$| (4t)^{-1}f_{\mathrm{cusp}}^{\ast}g(t) - g_{\mathrm{hyp}}|_{T^2 \times [b(t),+\infty)} < w(t)$$ and $f_\mathrm{cusp}(T^2\times [b(t),+\infty)) \subset \mathcal{C}(t)$ is a neighbourhood of its end. Here $g_{\mathrm{hyp}}$ denotes a hyperbolic metric $e^{-2s}g_{\mathrm{eucl}} + ds^2$ (with sectional curvature $-1$) on $T^2 \times [0,+\infty)$. The metric $4g_\mathrm{hyp}$ may differ from the one on $H$. We can assume $b(t) \to \infty$. We define $\mathcal{C}_{\mathrm{cusp}}(t):=f_{\mathrm{cusp}}(T^2 \times [b(t)+2,+\infty))$ and $$ \mathcal{C}'(t) := \mathcal{C}(t) \setminus \mathop{\rm int}\nolimits \mathcal{C}_{\mathrm{cusp}}(t).$$ Now we fix functions $\bar r$, $K$ given by Proposition~\ref{thm:thick thin}, $\mu_1>0$ given by \cite[Lemma 5.2]{bam:longtimeII}, and $w_1=w_1(\mu_1,\bar r,K)>0$ given by \cite[Proposition 5.1]{bam:longtimeII}. 
The closed subset $\mathcal{C}'(t)$ satisfies the assumptions of the latter proposition for $t \geq T_1$ large enough such that $w(t) < w_1$. We now follow the proof of \cite[Theorem 1.1 on p.~23]{Bam:longtimeI}. Decompose $\mathcal{C}'(t)$ into closed subsets $V_1,V_2,V'_2$ as given by the proposition. The two boundary components of $\mathcal{C}'(t)$ have to bound components of $V_1$. Either $\mathcal{C}'(t)=V_1$, or the boundary components of $\mathcal{C}'(t)$ bound components $\mathcal{C}_1,\mathcal{C}_2$ of $V_1$, which are diffeomorphic to $T^2 \times I$, and there is a component $\mathcal{C}_3$ of $V_2$ adjacent to $\mathcal{C}_1$. We prove that only the first case occurs, for all $t$ large enough. \begin{lem}\label{lem:vone} For all $t$ large enough, $\mathcal{C}'(t)=V_1$. \end{lem} Before proving this lemma, we explain how to conclude the proof of the theorem. First \cite[Lemma 5.2(ii)]{bam:longtimeII} applies to any $x \in V_{1}$, giving $w_1=w_1(\mu_1,\bar r,K)>0$ such that $$\mathrm{vol} \tilde B(\tilde x,t,\rho_{\sqrt{t}}(x,t)) \ge w_1(\rho_{\sqrt{t}}(x,t))^3,$$ for any lift $\tilde x \in \tilde M$ of $x$. Let $\bar \rho=\bar \rho(w_1)>0$ be given by Proposition \ref{prop:bar rho bis}. If $\rho_{\sqrt{t}}(x,t) < \rho(x,t)$ then $\rho_{\sqrt{t}}(x,t) = \sqrt{t} > \bar \rho \sqrt{t}$. If not, $\rho(x,t) = \rho_{\sqrt{t}}(x,t)$ and Proposition \ref{prop:bar rho bis}(ii) implies $$\rho_{\sqrt t}(x,t) \geq \bar \rho \sqrt{t}$$ if $t$ is large enough (larger than $\bar T =\bar T(w_1)$). In both cases, $\rho_{\sqrt t} \geq \bar \rho \sqrt{t}$. Then Proposition \ref{prop:bar rho bis}(i) with $r=\rho_{\sqrt{t}}(x,t)$ implies $|\mathop{\rm Rm}\nolimits| \le C(w_1)t^{-1}$ at $(x,t)$ for some $C=C(w_1)>0$. Thus the proof of the theorem is finished if $\mathcal{C}'(t) = V_{1}$ for all large $t$. \\ We now prove Lemma~\ref{lem:vone}, arguing by contradiction. Set $T_2 :=\max\{T_0,T_1,\bar T\}$. Assume that there exist arbitrarily large times $t \geq T_2$ such that $\mathcal{C}'(t) \not=V_{1}$. At any of these times, the $S^1$-fibres of $\mathcal{C}_3$ are homotopic to a fibre of $\partial \mathcal{C}_1$, by \cite[Proposition 5.1(b2)]{bam:longtimeII}. By incompressibility of $\partial \mathcal{C}_{1}$ in $M$, this curve generates an infinite cyclic subgroup in $\pi_1(M)$. Then \cite[Lemma 5.2(i)]{bam:longtimeII} applies to any $x \in \mathcal{C}_3 \cap V_{2,\mathrm{reg}}$ and gives $$ \mathrm{vol} \tilde B(\tilde x,t,\rho_{\sqrt{t}}(x,t)) \ge w_1(\rho_{\sqrt{t}}(x,t))^3,$$ for any lift $\tilde x$ of $x$, and hence $\rho_{\sqrt{t}}(x,t) \geq \bar \rho \sqrt{t}$ as above. Moreover \cite[Proposition 5.1(c3)]{bam:longtimeII} gives $s=s_2(\mu_1,\bar r,K) \in (0,1/10)$ and an open set $U$ such that \begin{equation} B(x,t,\frac{1}{2}s\rho_{\sqrt{t}}(x,t)) \subset U \subset B(x,t, s\rho_{\sqrt{t}}(x,t)), \label{eq:U} \end{equation} and a $2$-Lipschitz map $p : U \to \RR^2$ whose image contains $B(0,\frac{1}{4}s\rho_{\sqrt{t}} (x,t)) \subset \RR^2$ and whose fibres are homotopic to fibres of $\mathcal{C}_3$, hence noncontractible in $M$. Now consider any noncontractible loop $\gamma \subset \mathcal{C}'(T_2)$. Define, for all $t\geq T_2$, $\gamma_1(t) \subset \partial \mathcal{C}(t)$ freely homotopic to $\gamma$ such that $f(t)^{-1} \circ \gamma_1(t)$ is geodesic in $\partial \bar H_t$ and evolves by parallel transport in $H$ w.r.t.\ $t$. 
On the side of the cusp, define $\gamma_2(t) \subset \partial \mathcal{C}_{\mathrm{cusp}}(t)$ freely homotopic to $\gamma$ such that $f_{\mathrm{cusp}}^{-1}\circ \gamma_2(t) \subset T^2 \times \{b(t)+2\}$ is geodesic in $(T^2,g_\mathrm{eucl})$ and evolves by parallel transport (at speed $b'$). In particular $\gamma_1(t) \subset \mathcal{C}_1$ and $\gamma_2(t) \subset \mathcal{C}_{2}$ at each time when these sets are defined (that is, when $\mathcal{C}'(t) \not=V_{1}$), and these loops are freely homotopic in $\mathcal{C}'(t)$. Let $A(t)$ be the infimum of the areas of all smooth homotopies $H : S^1 \times [0,1] \to \mathcal{C}'(t)$ connecting $\gamma_1(t)$ to $\gamma_2(t)$. \begin{claim}\label{claim:four} $t^{-1}A(t) \to 0$ as $t \to \infty$. \end{claim} \begin{proof} It is identical to \cite[Lemma 8.2]{Bam:longtimeI}, except that we have to account for the fact that $\partial_{t} \gamma_{2}(t)$ may a priori not be bounded. This estimate appears when we compute the area added to the homotopy by moving the boundary curves. The infinitesimal area added to the homotopy due to the displacement of $\gamma_1$ is negative (we can assume $\alpha' >0$), and can therefore be neglected. The contribution of $\gamma_2$, by closeness with the hyperbolic cusp, is bounded by $Ct\,e^{-b}b'$. On the other hand, the normalised length satisfies $t^{-1/2} \ell(\gamma_i) \to 0$ and the normalised geodesic curvature satisfies $t\kappa(\gamma_i(t)) < C$, by closeness with the hyperbolic situation. Let us denote $L(t) = t^{-1/2}\left(\ell (\gamma_1(t))+\ell(\gamma_2(t))\right)$. Computations in \cite[Lemma 8.2]{Bam:longtimeI} give (compare with equation (8.1) there) \begin{equation*} \frac{d}{dt^+}(t^{-1}A(t)) \leq -\frac{A(t)}{4t^2}+ C\left(\frac{L(t)}{t} + e^{-b}b'\right).\label{eq:bamler} \end{equation*} Denoting $y(t)=t^{-1}A(t)$, this gives the differential inequality $$ \frac{d}{dt^+}y \leq -\frac{y}{4t} + C(t^{-1}L+ e^{-b}b').$$ Using the standard method (variation of constants), one writes $y(t)=K(t)t^{-1/4}$ where $$\frac{d}{dt^+}K \leq Ct^{1/4}\left(t^{-1}L+ e^{-b}b' \right).$$ We can assume that $L(t)$ is almost nonincreasing, that is, for any $T_2 \le a \le t$ one has $L(t) \le 2L(a)$. Then for $T_2 \le a \le t$, \begin{eqnarray*} K(t) - K(a) & \leq & C\left (\int_a^t \left(u^{-3/4} L(u) + u^{1/4}e^{-b}b'\right)\, du\right)\\ & \le & C\left(2L(a)\int_a^t u^{-3/4}\, du \, \, + t^{1/4} \int_a^t e^{-b}b'\, du\right)\\ & \leq & C\left( 8L(a)t^{1/4} + t^{1/4} e^{-b(a)} \right), \end{eqnarray*} hence \begin{eqnarray*} y(t) & \leq & \frac{K(a)}{t^{1/4}} + \frac{K(t)-K(a)}{t^{1/4}} \\ & \leq & \frac{K(a)}{t^{1/4}} + C\left(8L(a)+e^{-b(a)} \right), \end{eqnarray*} which is arbitrarily small by taking first $a$ and then $t$ large enough. \end{proof} We conclude the proof of Lemma~\ref{lem:vone}. The argument is the same as the one given in \cite{Bam:longtimeI}. Consider smooth loops $\gamma,\beta$ in $\mathcal{C}'(t)$ generating $\pi_1(\mathcal{C}'(t))$. Let $\gamma_i(t)$, resp. $\beta_i(t)$, $i=1,2$, be defined as above, freely homotopic to $\gamma$, resp. $\beta$. Let $A(t)$, resp. $B(t)$, be the infimum of the areas of all smooth homotopies connecting $\gamma_1(t)$ to $\gamma_2(t)$, resp. $\beta_1(t)$ to $\beta_2(t)$. By Claim~\ref{claim:four}, \begin{equation} t^{-1}A(t) + t^{-1} B(t) \to 0 \label{eq:area} \end{equation} as $t \to \infty$. On the other hand, let $H_\gamma$, resp. $H_\beta$, be any of these homotopies. 
At any time $t$ where $\mathcal{C}_3$ is defined, any fibre of the projection $p : U \to \RR^2$ is a noncontractible loop contained in $\mathcal{C}_3$, hence it intersects each of the homotopies $H_\gamma$, $H_\beta$ at least once. For all such times $t$ large enough one has, using the fact that $p$ is $2$-Lipschitz and equation \eqref{eq:U}, that $$\mathrm{area}(H_\gamma) + \mathrm{area}(H_\beta) \geq \frac{1}{4} \mathrm{area}(p(U)) \geq cs^2\bar \rho t,$$ for some constant $c=c(s,\bar \rho)>0$. This contradicts \eqref{eq:area}. \end{document}
\begin{document} \title{Vibrational response functions for multidimensional electronic spectroscopy \\ in non-adiabatic models} \author{Filippo Troiani} \affiliation{Centro S3, CNR-Istituto di Nanoscienze, I-41125 Modena, Italy} \email{[email protected]} \begin{abstract} The interplay of nuclear and electronic dynamics characterizes the multi-dimensional electronic spectra of various molecular and solid-state systems. Theoretically, the observable effect of such interplay can be accounted for by response functions. Here, we report analytical expressions for the response functions corresponding to a class of model systems. These are characterized by the coupling between the diabatic electronic states and the vibrational degrees of freedom, resulting in linear displacements of the corresponding harmonic oscillators, and by non-adiabatic couplings between pairs of diabatic states. In order to derive the linear response functions, we first perform the Dyson expansion of the relevant propagators with respect to the non-adiabatic component of the Hamiltonian, then derive the propagators at given interaction times and expand them with respect to the displacements, and finally provide analytical expressions for the time integrals that lead to the different contributions to the linear response function. The approach is then applied to the derivation of third-order response functions describing different physical processes: ground state bleaching, stimulated emission, excited state absorption and double quantum coherence. Comparisons between the results obtained up to sixth order in the Dyson expansion and independent numerical calculations of the response functions provide evidence of the series convergence in a few representative cases. \end{abstract} \date{\today} \maketitle \section{Introduction} Multidimensional coherent spectroscopy represents a powerful tool for investigating ultrafast dynamical processes occurring in molecular and solid-state systems\cite{Mukamel95a,Hamm11a,Scholes2017,Smallwood18a,Rozzi18a,Collini2021}. In fact, the dependence of the nonlinear spectra on multiple frequencies allows one to separate different and otherwise overlapping contributions, and to establish correlations between the observed excitation energies. These processes often involve an interplay between electronic and vibrational degrees of freedom, which plays an important role in phenomena such as charge or energy transfer and determines the observed coherent beatings \cite{Chin2013,Falke14a,Romero2014,OReilly2014,DeSio16a,Thouin2019,Rafiq2021}. In a semiclassical representation of the system dynamics, ultrashort laser pulses induce impulsive transitions to different electronic states. This triggers the wave packet motion on the corresponding potential energy surfaces, with features that depend on the specific form of the electron-phonon coupling. In many cases of interest, such coupling is described in terms of the linearly displaced-oscillator model, where each vibrational mode is represented as an independent harmonic oscillator, which undergoes an electronic-state dependent displacement of the origin \cite{Kumar01a,Egorova2007a,Mancal10a,Pollard1990a,Pollard1992a,Butkus12a,Cina2016,Le21a,Turner2020,Quintela2022a}. This adiabatic picture can be refined in a number of respects, including deviations from harmonicity \cite{Park2000a,Arpin21a}, coupling between different modes \cite{Schultz22a,Yan1986a}, and the dependence of the vibrational frequencies on the electronic state \cite{Fidler13a}. 
The interplay between electronic and nuclear degrees of freedom is even closer in the presence of vibronic couplings, which result in coherent population transfer between the diabatic states and hopping of the vibrational wave packet between the corresponding potential energy surfaces \cite{DeSio16a}. Its effects have been observed in a variety of physical systems, ranging from molecular crystals to J-aggregates \cite{Spano2010a}, from polymeric films to natural and artificial light-harvesting systems. A detailed and quantitative explanation of the observed multidimensional spectra requires an accurate theoretical description of these complex systems, and possibly of their interaction with the environment. The general understanding of the multidimensional spectra, and specifically the capability of disentangling the electronic and vibrational coherences, can be aided by the investigation of relatively simple systems, such as molecular dimers \cite{Hayes13a,Halpin2014}. On the other hand, a number of reduced models have been introduced in order to allow the rationalization of the observed spectra and to provide a semi-quantitative understanding of the underlying dynamics in terms of a few electronic levels and vibrational modes \cite{Ishizaki10a,Tiwari13a,Butkus14a,krcmar,Duan2016,DeSio16a,Li2021a,Caycedo-Soler2022}. Here we consider linear and nonlinear response functions in a class of multilevel non-adiabatic model systems defined as follows. The vibrational degrees of freedom are described by harmonic oscillators, which undergo a different displacement for each of the electronic diabatic states. The Hamiltonian also includes terms that coherently couple pairs of diabatic states, thus introducing non-adiabaticity. The linear response functions are identified (up to a prefactor) with specific propagators, which are computed in three steps. First, the propagators are expanded in a Dyson series with respect to the non-adiabatic component of the Hamiltonian: each term in the series thus corresponds to a given number of transitions between the diabatic electronic states. For a given number of transitions and for given values of the transition times, the propagator can be formally (though not physically) identified with the adiabatic response functions, whose analytical expressions have been derived in Ref. \onlinecite{Quintela2022a} within a coherent state approach. Second, we perform the Taylor expansion of these response functions with respect to the relevant displacements. Third, we integrate with respect to the interaction times, and obtain simple analytical expressions for each of the contributions. Third-order response functions are then derived, after decomposing them into the product of three propagators. The paper is organized as follows. In Section II we define the model systems to which the approach is applied. Section III contains the main results, namely the expressions of the single- and multiple-time propagators, and the corresponding (linear and nonlinear) response functions. Section IV contains the main steps in the formal derivation of the above results. Finally, we draw the conclusions in Section V. \section{The model} \begin{figure} \caption{First model system ($A$), which includes two non-adiabatically coupled excited electronic states $|1\rangle$ and $|2\rangle$. Optical transitions (green arrows) couple $|1\rangle$ with the ground state $|0\rangle$ and $|2\rangle$ with the doubly excited state $|3\rangle$. 
Each electronic state $|k\rangle$ implies a displacement by $-z_k$ of the harmonic oscillator corresponding to the vibrational mode. \label{fig:1}} \end{figure} The present approach allows the derivation of the response function in the presence of non-adiabatic couplings between electronic and vibrational degrees of freedom. More specifically, it applies to models where the vibrational modes can be described by harmonic oscillators and the coupling between these and the electronic degree of freedom results in an electronic-state dependent displacement of the oscillators. {\color{black} The eigenstates of such a displaced harmonic oscillator Hamiltonian are characterized by the factorization of the electronic and vibrational components, as results from the crude adiabatic approximation \cite{Azumi}. The non-adiabaticity is introduced by a direct coupling between two electronic states, with no involvement of the vibrational degrees of freedom \cite{Witkowski}. Within such a class of models, we consider in the following those that are complex enough to display the processes of interest, but otherwise as simple as possible. Throughout the paper, we assume that the non-adiabatic coupling only involves the first two excited states.} The corresponding Hamiltonian reads \begin{align}\label{eq:ham1} H = H_0 + V = \sum_{\xi=0}^{N-1} H_{0,\xi} + \hbar[\eta |1\rangle\langle 2| + \eta^*|2\rangle\langle 1|] , \end{align} where $V$ represents the non-adiabatic term, $H_0=\sum_{\xi=0}^{N-1} H_{0,\xi}$ includes all the adiabatic ones, and its electronic-state specific components are given by \begin{align}\label{eq:ham2} H_{0,\xi} = | \xi \rangle\langle \xi | \left[\hbar\bar\omega_\xi +\sum_{\zeta=1}^{G} \hbar\omega_\zeta (a_\zeta^\dagger + z_{\zeta,\xi}) (a_\zeta+z_{\zeta,\xi})\right] . \end{align} In the following we set $\hbar\equiv 1$. The eigenstates of the Hamiltonian $H$ coincide with those of the adiabatic part $H_0$ for $\xi=0$ or $\xi\ge 3$. In these subspaces and for $G=1$, the eigenstates of $H$ and $H_0$ are in fact given by $|\xi; n,-z_\xi\rangle$, where $| n,-z_\xi\rangle = \mathcal{D} (-z_\xi) |n\rangle $ are the displaced Fock states. Instead, due to the non-adiabatic term $V$, the eigenstates $|1; n,-z_1\rangle$ and $|2; n,-z_2\rangle$ of $H_{0,1}$ and $H_{0,2}$ do not coincide with those of $H$, which in general do not have a simple analytical expression. {\color{black} Interestingly, the form of the above Hamiltonian, and specifically that of the non-adiabatic term, changes qualitatively if one replaces the basis $\{|1\rangle,|2\rangle\}$ with $\{|+\rangle,|-\rangle\}$, formed by the states that diagonalize $H_e=\sum_{\xi=1,2} \hbar\bar\omega_\xi |\xi\rangle\langle\xi|+V$. In such a basis, the coupling between electronic and vibrational degrees of freedom has a non-diagonal component in the electronic basis, which can be identified with the non-adiabatic part of the Hamiltonian (see Appendix \ref{app:x}).} In the following, we refer to two simple and yet interesting model systems, corresponding to particular cases of the above Hamiltonian $H$. The first one, referred to as {\it model $A$}, is represented by a four-level system with a single vibrational mode ($N=4$, $G=1$, Fig. \ref{fig:1}). Within this model, we derive the expressions of the third-order response functions, which include contributions from processes such as excited state absorption, involving the doubly excited state $|3\rangle$. 
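Before introducing the second model, we note for completeness that the displaced Fock states used above indeed diagonalize the adiabatic terms. As a minimal check, written here for a single mode and a real displacement $z_\xi$, this follows from the standard transformation property of the displacement operator, $\mathcal{D}^\dagger(\alpha)\, a\, \mathcal{D}(\alpha)=a+\alpha$:
\begin{align*}
(a^\dagger+z_\xi)(a+z_\xi)\,\mathcal{D}(-z_\xi)|n\rangle
= \mathcal{D}(-z_\xi)\, a^\dagger a\, |n\rangle
= n\,\mathcal{D}(-z_\xi)|n\rangle ,
\end{align*}
so that, with $\hbar\equiv 1$ and $G=1$, one has $H_{0,\xi}\,|\xi; n,-z_\xi\rangle=(\bar\omega_\xi+n\,\omega)\,|\xi; n,-z_\xi\rangle$.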
The second model, referred to as {\it model $B$}, is represented by a three-level system with two vibrational modes ($N=3$, $G=2$, Fig. \ref{fig:2}) and can be taken to describe a pair of coupled monomers; each monomer is coupled to its own (localized) vibrational mode. For this model we compute the first-order response function, and show how this can be formally reduced to a single-mode response function in the case of a symmetric dimer. {\color{black} The dimer would in principle include a doubly excited state $|3\rangle$, which however does not play any role in the linear response functions considered for this model, and is thus disregarded. In fact, the present approach could in principle be applied to a single, more general model, which includes both a doubly excited state and two vibrational modes. However, this would complicate the analytical expressions and make their physical meaning less transparent, without introducing significantly new elements. For the sake of clarity, these two features are kept separate, and investigated independently from one another in the two models.} Finally, in deriving the response functions, we assume that the system dynamics is triggered by a Franck-Condon transition between the electronic states, induced by the interaction with the external electric field. This is followed by a free evolution of the system, resulting from the interplay between the electronic and vibrational degrees of freedom. \begin{figure} \caption{Second model system ($B$), which includes two non-adiabatically coupled excited electronic states $|1\rangle$ and $|2\rangle$. These correspond to the excitation respectively of the first and second monomer that form a dimer. Transitions between the ground ($g$) and excited state ($e$) of each monomer can be induced optically (green arrows). The excitation of each monomer results in a displacement by $-z_e$ of the corresponding vibrational mode (harmonic oscillator). \label{fig:2}} \end{figure} \section{\label{sec:main results} Main results} The central result of the present article is represented by the time propagators between the excited states belonging to the subspace $\mathcal{S}_e$. {\color{black} This result is then used to derive the expressions of the linear and nonlinear response functions. In the following, we present a brief discursive description of the method (Subsec. \ref{subsec:1}), followed by the presentation of the final expressions (Subsecs. \ref{subsec:2}--\ref{subsec:3}). The formal derivations of the results are presented in Section IV. \subsection{Brief description of the method\label{subsec:1}} The relevant time propagator is the matrix element of the time evolution operator between the states $|\sigma;0\rangle$ and $|\sigma';0\rangle$: these are given by the product of the diabatic electronic states that are coupled by the non-adiabatic interaction $V$ ($\sigma,\sigma'=1,2$), and of the vibrational ground state of the undisplaced harmonic oscillator. The time-evolution operator is computed by performing a Dyson expansion with respect to $V$: each term in the expansion corresponds to an {\it electronic pathway}, {\it i.e.} to a given sequence of electronic states $e_1,\dots,e_{M}$ (an alternating sequence of $|1\rangle$ and $|2\rangle$), $M-1$ being the order of the expansion. 
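Schematically, and in the notation of Eq. (\ref{eq:ham1}), the expansion underlying this discussion is the standard Dyson series for the matrix element of the time evolution operator (the precise bookkeeping of the interaction times is given in Section IV):
\begin{align*}
\langle \sigma';0|\, e^{-iHt}\, |\sigma;0\rangle
=\sum_{M\geq 1} (-i)^{M-1} \int_0^t\! dt_1\int_0^{t_1}\! dt_2 \cdots\int_0^{t_{M-2}}\! dt_{M-1}\,
\langle \sigma';0|\, e^{-iH_0(t-t_1)}\, V\, e^{-iH_0(t_1-t_2)}\, V \cdots V\, e^{-iH_0 t_{M-1}}\, |\sigma;0\rangle ,
\end{align*}
where the $M=1$ term reduces to the adiabatic propagator $\langle \sigma';0|e^{-iH_0 t}|\sigma;0\rangle$, and the term of order $M-1$ contains $M-1$ factors of $V$. Since $V$ only couples $|1\rangle$ and $|2\rangle$, the surviving electronic matrix elements correspond precisely to the alternating sequences $e_1,\dots,e_M$ referred to above.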
The overall time evolution of the system that one can associate to each electronic pathway consists of a sequence of sudden transitions between the two diabatic states, interleaved by time intervals during which the system remains in the same electronic state (Fig. \ref{fig:FA1}). For the vibrational state, each transition between the states $|1\rangle$ and $|2\rangle$ implies a hopping of the coherent state from one potential energy surface to the other, these being two relatively displaced parabolas. The resulting time evolution resembles that induced by sequences of delta-like laser pulses within the linearly displaced harmonic oscillator model \cite{Quintela2022a}. This formal analogy allows us to use in the present case the analytical expressions that have recently been derived for the vibrational component of the response function in the adiabatic case ($R$). The following step consists in the integration over all the possible values of the non-adiabatic interaction times. In order to perform such integration analytically, we perform a Taylor expansion of $R$, which can be written as the product of $M(M+1)/2$ double exponential functions. Each term in the Dyson expansion (order $M-1$ in the non-adiabatic coupling constant $\eta$) thus gives rise to an infinite number of terms, one for each set of orders $k_i$ ($i=1,\dots,M(M+1)/2$) of the Taylor expansions. Formally, each of these terms can be written as a product of exponential functions that oscillate during the intervals of duration $\tau_j$ ($j=1,\dots,M$) with a frequency $\Omega_j$. This is given by the sum of an electronic and a vibrational contribution. The former corresponds to the energy $\bar\omega_j$ ($\hbar\equiv 1$) of the electronic state for the relevant time interval (specified by $j$) and electronic pathway (specified by $M-1$); the latter one is the energy $q_j\omega$ of the $q_j$-th eigenstate of the undisplaced harmonic oscillator. The values of $q_j$ result from those of $k_l$ through a many-to-one correspondence. Physically, one can thus associate to each term of the Taylor expansion a {\it vibronic pathway}, defined by a sequence of electronic and vibrational states $|e_j;q_j\rangle$, with $j=1,\dots,M$. Besides, each of these terms is proportional to the displacements ($z_1$ or $z_2$) or their difference to the power of $k_T=\sum_{i=1}^{M(M+1)/2} k_i$. Since the modulus of the displacement is typically smaller than one, this series is expected to converge, even though the number of terms increases rapidly with $k_T$. These functions can be analytically integrated, and give a formally simple result, consisting - for each vibronic pathway - in the sum of $M$ terms, each one oscillating at a frequency $\Omega_j$. If all these frequencies differ from one another, the oscillating terms $e^{-i\Omega_j t}$ are multiplied by constants $A_j$. If $k$ of those frequencies coincide, then each of the multiple $e^{-i\Omega_j t}$ is multiplied by a monomial $a_j t^{r_j}$, with $r_j=0,\dots,k-1$ (it follows from the calculations that the number of identical frequencies for each vibronic pathway cannot exceed $(M-1)/2$). Since this feature is common to all the terms that result from the Taylor expansion, the entire contribution of order up to $M-1$ in $\eta$ is given by the sum of terms that oscillate at the frequencies $\bar\omega_1$ and $\bar\omega_2$ (diabatic state energies) and of their vibrational replicas, multiplied by polynomial functions of $t$, of order $(M-2)/2$ for even $M$ and $(M-1)/2$ for odd $M$. 
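As a minimal illustration of this structure, the following symbolic sketch (with arbitrary numerical frequencies, chosen only for readability) performs the time-ordered integration for $M=3$ and shows that the result is a combination of oscillating exponentials with constant prefactors, and that a prefactor linear in $t$ appears as soon as two of the frequencies coincide:
\begin{verbatim}
import sympy as sp

t, t1, t2 = sp.symbols('t t1 t2', positive=True)

def f(W1, W2, W3):
    """Time-ordered double integral of the phases accumulated during
    tau1 = t - t1, tau2 = t1 - t2, tau3 = t2 (illustrative M = 3 case)."""
    phase = sp.exp(-sp.I*(W1*(t - t1) + W2*(t1 - t2) + W3*t2))
    return sp.expand(sp.integrate(phase, (t2, 0, t1), (t1, 0, t)))

# Distinct frequencies: constant prefactors multiply exp(-I*Omega_j*t)
print(f(2, 5, 3))
# Two coincident frequencies (Omega_3 = Omega_1): a term linear in t appears
print(f(2, 5, 2))
\end{verbatim}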
The extension of this approach to the multimode case is rather straightforward, because the dynamics of the $G$ vibrational modes are independent from one another. The Dyson expansion of the propagator is not modified by the presence of multiple modes. On the other hand, the Taylor expansion has to be performed for each of the adiabatic response functions $R$, resulting in a larger number of vibronic pathways. Each of these is given by a sequence of states $|e_j;{\bf q}_j\rangle$, where ${\bf q}_j$ defines a $G$-dimensional vibrational (Fock) state. The final expression of the response function is thus identical to that discussed above, apart from the replacement - in the frequencies $\Omega_j$ - of the single-mode energies $q_j\omega$ with their multimode counterparts $\sum_{\zeta=1}^G \omega_\zeta q_{j,\zeta}$. In the multitime propagators of interest, the overall evolution of the system is divided into three time intervals ($T_L$, $T_C$, and $T_R$), delimited by optically-induced transitions between the subspace $\mathcal{S}_e$ and the ground or doubly excited states. The generalization of the above procedure thus requires two independent Dyson expansions, one for each of the time evolutions that take place in $\mathcal{S}_e$, during the waiting times $T_L$ and $T_R$ (the evolution in $T_C$ always takes place outside the subspace $\mathcal{S}_e$, and therefore does not require a further expansion). The overall function at defined interaction times can be written as a product of $M_L(M_L+1)/2+1+M_R(M_R+1)/2$ double exponential functions, where $M_L-1$ ($M_R-1$) is the order of the Dyson expansion for the first (third) time interval. In the final step, the integration is performed independently with respect to the interaction times belonging to the intervals $T_L$ and $T_R$. This gives rise to functions of order $M_L - 1$ and $M_R - 1$ in $\eta$, which depend respectively on $T_L$ and $T_R$ in the same way as the single-time propagators depend on $t$. } \subsection{Time propagators\label{subsec:2}} \begin{figure} \caption{\color{black}\label{fig:FA1}} \end{figure} We consider the case where the system modeled by the Hamiltonian $H$ [Eqs. (\ref{eq:ham1}-\ref{eq:ham2})] undergoes a Franck-Condon transition from the ground state $|0;0\rangle$ to $|1;0\rangle$, corresponding to a generic linear superposition of Hamiltonian eigenstates. This will evolve in time, under the combined effect of the adiabatic ($H_{0,e}$) and non-adiabatic ($V$) terms. {\color{black} More specifically, the propagators are written as the sum of different terms, each one corresponding to a given order ($M-1$) in the non-adiabatic interaction. The non-adiabatic interactions take place at times $t_k$, with $t_{M-1} < t_{M-2} < \dots < t_1 $ and result in transitions between, e.g., states $|1; q_{k+1}\rangle$ and $|2; q_k\rangle$ (where the $q_k$ specify the Fock states of the undisplaced harmonic oscillator). Between two consecutive non-adiabatic interactions, i.e., for time intervals of duration $\tau_k=t_{k-1}-t_k$, the system evolves freely under the effect of the Hamiltonian $H_0$ and accumulates the phase $\Omega_k\tau_k$. The response functions are eventually derived by integrating over the interaction times $t_k$. 
} \subsubsection{Off-diagonal elements} {\color{black} If the number of transitions that has taken place in the time $t$ is odd, $M-1=2n+1$, the initial and final excited states differ. The resulting time propagator, {\it i.e.} the matrix element of the time-evolution operator $U_S=e^{-i H t}$, can be written in the form: \begin{gather} \langle 2; 0 | U_S | 1; 0 \rangle = \sum_{n=0}^\infty \eta^* |\eta|^{2n} F_{2n+1}(t) \nonumber\\ = \sum_{n=0}^\infty \eta^* |\eta|^{2n} \sum_{\bf k} \left\{C\sum_{j=1}^{2n+2} A_{j} (t)\, e^{-i\Omega_j t}\right\}_{\bf k} ,\label{eq:a1} \end{gather} where it is intended that all the functions and parameters in the curly brackets depend on ${\bf k}$ (see below). The function $F_{2n+1}(t)$, corresponding to the order $2n+1$ in the Dyson expansion, is given by the sum of monomials $A_{j}(t)=a_j t^{r_j}$, with $r_j \le n$, multiplied by terms that oscillate at the frequencies: \begin{align}\label{eq:abc} \Omega_j = \left\{ \begin{array}{c} \omega\, q_j + \bar\omega_1\,,\ {\rm for\ even}\ j \\ \omega\, q_j + \bar\omega_2\,,\ {\rm for\ odd}\ j \end{array}\right. , \end{align} with $q_j$ non-negative integers. These frequencies are thus given by the sum of two terms: the energy of the diabatic states ($\bar\omega_1$ or $\bar\omega_2$), and an integer multiple of the vibrational frequency $\omega$. Each of the $(2n+1)$-th order terms in the Dyson expansion [Eq.~(\ref{eq:a1})] is given by the sum of different contributions, one for each vector ${\bf k}=(k_1,\dots,k_{M(M+1)/2})$. These contributions result from the Taylor expansion of the adiabatic propagator, and are of order $k_T=\sum_{i=1}^{M(M+1)/2} k_i$ in the displacements $z_{\zeta,\xi}$ [see Eq. (\ref{eq:ham2})]. The explicit dependence of the contributions in the sum on ${\bf k}$ and on the displacements can be expressed as follows: \begin{gather} \langle 2; 0 | U_S | 1; 0 \rangle = \sum_{n=0}^\infty \eta^* |\eta|^{2n} \nonumber\\ \sum_{{\bf k}} \left\{\left[(-i)^{2n+1} e^{h_{M}({\bf z})} \chi_{M} ({\bf z},{\bf k}) \right] \sum_{j=1}^{2n+2} A_j(t)\, e^{-i\Omega_j t} \right\}_{\bf k}. \end{gather} where $M=2n+2$. The frequencies $\Omega_j$ and the functions $A_j(t)$ depend on ${\bf k}$ only through the integers $q_j$, which specify the sequence of vibrational states in the related pathway. These integers are given by the expression: \begin{gather}\label{eq:qk} q_j = \sum_{x=1}^M \ \sum_{y=\max(1,j-x+1)}^{\min(j,M+1-x)} k_{(x-1)M-\frac{1}{2}(x-1)(x-2)+y}. \end{gather} We note that the relation between ${\bf k}$ and ${\bf q}$ is not one-to-one, since different vectors ${\bf k}$ can correspond to the same ${\bf q}$. The constant prefactor, denoted with $C$ in Eq.~(\ref{eq:a1}), depends both on ${\bf k}$ and on the vector ${\bf z}=({\rm z}_1,\dots,{\rm z}_M)$, whose components coincide with the displacements of the oscillator (here, these are given by ${\rm z}_k=z_2$ for odd $k$ and ${\rm z}_k=z_1$ for even $k$). Such dependence is expressed by the functions $h_M$ and $\chi_M$.} The former one, whose general expression is reported in Section \ref{sec:derivations}, is here given by \begin{gather} h_M ({\bf z}) = -\frac{1}{2} M z_{12}^2 - z_1 z_2, \end{gather} where $z_{ij}\equiv z_i-z_j$. 
The latter one $\chi_M$, which depends both on ${\bf z}$ and on ${\bf k}$, in the present case reads \begin{gather} \chi_M ({\bf k},{\bf z}) = \frac{\prod_{p=1}^{M-1} [(-1)^{p} z_1 z_{21}]^{k_{1+w}} [(-1)^{p} z_2 z_{12}]^{k_{M-p+1+w}}}{\prod_{l=1}^{M(M+1)/2} k_l!} \nonumber\\ \times (z_1 z_2)^{k_{M(M+1)/2}} \prod_{q=2}^{M-p}[(-1)^{p+1} z_{12}^2]^{k_{q+w}}, \label{eq:s1} \end{gather} where $w=(p-1)M-(p-1)(p-2)/2$. {\color{black} The zero-phonon line corresponds to ${\bf q}={\bf 0}$ and $\chi_M=1$.} The expansion in Eq. (\ref{eq:a1}) includes in principle an infinite number of terms, {\color{black} resulting from both the Dyson and the Taylor expansions. However, the relative importance of the terms in the former expansion is expected to decrease for increasing values of the order $2n+1$, especially in the short-time limit ($|\eta| t \lesssim 1$). As to the second expansion, since in general $|z_1|,|z_2|<1$, the value of the constant prefactor $C$ is also expected to rapidly decrease for increasing values of the order $k_T$, which defines the power in the displacements.} \subsubsection{Diagonal elements} If the number of transitions that has taken place in the time $t$ is even, the initial and final excited states coincide. The resulting propagators read: {\color{black} \begin{gather} \langle \sigma; 0 | U_S | \sigma; 0 \rangle = \sum_{n=0}^\infty |\eta|^{2n} F_{2n}(t) \nonumber\\ = \sum_{n=0}^\infty |\eta|^{2n} \sum_{\bf k} \left\{C\sum_{j=1}^{2n+1} A_{j} (t)\, e^{-i\Omega_j t}\right\}_{\bf k} , \label{eq:a2} \end{gather} where $\sigma=1,2$. In the $0$-th order contribution ($n=0$), the propagator reduces to that derived for the adiabatic case \cite{Quintela2022a}. Analogously to the case of the off-diagonal elements, the functions $F_{2n}$ are given by the sum of monomial functions $A_j(t) = a_j t^{r_j}$ (with $r_j \le n$), multiplied by terms that oscillate at the frequencies \begin{align}\label{eq:abc2} \Omega_j = \left\{ \begin{array}{c} \omega\, q_j + \bar\omega_{3-\sigma}\,,\ {\rm for\ even}\ j \\ \omega\, q_j + \bar\omega_\sigma\,,\ {\rm for\ odd}\ j \end{array}\right. . \end{align} As to the dependence of the different contributions on ${\bf k}$, resulting from the Taylor expansion, this is given by: \begin{gather} \langle \sigma; 0 | U_S | \sigma; 0 \rangle = \sum_{n=0}^\infty |\eta|^{2n} \nonumber\\ \sum_{{\bf k}} \left\{\left[(-i)^{2n} e^{h_{M}({\bf z})} \chi_{M} ({\bf z},{\bf k}) \right] \sum_{j=1}^{2n+1} A_j(t)\, e^{-i\Omega_j t} \right\}_{\bf k}, \end{gather} where $M=2n+1$. } The functions $h_M$ and $\chi_M$ take here different forms with respect to the previous case. In fact, the function $h_M$ of the displacements is given by \begin{gather}\label{eq:h1} h_M ({\bf z}) = -\frac{1}{2} (M-1) z_{12}^2 - z_\sigma^2, \end{gather} where the vector ${\bf z}$ has components ${\rm z}_k=z_\sigma$ for odd $k$ and ${\rm z}_k=z_{3-\sigma}$ for even $k$. The function $\chi_M$, which depends both on ${\bf z}$ and on ${\bf k}$, reads \begin{gather} \chi_M ({\bf k},{\bf z}) = \frac{\prod_{p=1}^{M-1} [(-1)^{p} z_\sigma z_{3-\sigma,\sigma}]^{k_{1+w}+k_{M-p+1+w}}}{\prod_{l=1}^{M(M+1)/2} k_l!} \nonumber\\ \label{eq:chi1} \times z_\sigma^{2k_{M(M+1)/2}} \prod_{q=2}^{M-p}[(-1)^{p+1} z_{12}^2]^{k_{q+w}}, \end{gather} where $w=(p-1)M-(p-1)(p-2)/2$. 
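The bookkeeping between the Taylor multi-indices ${\bf k}$ and the vibrational labels ${\bf q}$ can be made concrete with a short sketch that implements the summation formula of Eq.~(\ref{eq:qk}) (index conventions as stated in the text, $1$-based there and $0$-based in the code; the chosen values of ${\bf k}$ are arbitrary):
\begin{verbatim}
import numpy as np

def q_from_k(k, M):
    """Vibrational labels q_j (j = 1..M) from the Taylor multi-index
    k = (k_1, ..., k_{M(M+1)/2}), passed here as a 0-based sequence."""
    assert len(k) == M*(M + 1)//2
    q = np.zeros(M, dtype=int)
    for j in range(1, M + 1):
        for x in range(1, M + 1):
            for y in range(max(1, j - x + 1), min(j, M + 1 - x) + 1):
                q[j - 1] += k[(x - 1)*M - (x - 1)*(x - 2)//2 + y - 1]
    return q

# The map k -> q is many-to-one: for M = 2 the multi-indices (1,1,0) and
# (0,0,1) label different Taylor terms but the same vibrational sequence.
print(q_from_k([1, 1, 0], M=2), q_from_k([0, 0, 1], M=2))   # both give [1 1]
\end{verbatim}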
{\color{black} The zero-phonon line corresponds to ${\bf q}={\bf 0}$ and $\chi_M=1$.} \begin{figure} \caption{\color{black}\label{fig:FA2}} \end{figure} {\color{black} \subsection{Propagator in the frequency domain \label{subsec:Pitsd}} The propagators are shown to consist of a number of contributions, whose time dependence is given by functions $f_r(t)=A(t)\,e^{-(\gamma+i\Omega) t}$, where $A(t)=a t^{r}$ and the exponential decay ($\gamma >0$) results from decoherence (see Subsec. \ref{subsec:deco}). Therefore, the Fourier transform of the propagator is given by combinations, with equal coefficients, of the $\hat{f}_r(\omega) = {\rm FT} \{f_r(t)\}$, whose expressions read: \begin{align} \hat{f}_r(\omega) \!=\! a \!\int^{\infty}_0\! dt \, t^{r} \, e^{-[\gamma+i(\Omega-\omega)] t} \!=\! \frac{a\, r!}{[\gamma+i(\Omega-\omega)]^{r+1}} . \end{align} In Fig. \ref{fig:FA2}(a,b) we plot the real and imaginary parts of the functions $\hat{f}_r(\omega)$ corresponding to different values of $r$. Due to the complex character of the prefactors that appear in the expression of the functions $A_j(t)$ (see Appendix \ref{app:lof}), both the real and the imaginary part of each contribution in the Dyson expansion of the propagator's Fourier transform \begin{gather} \langle \sigma; 0 | \hat{U}_S (\omega) | \sigma'; 0 \rangle = {\rm FT} \{\langle \sigma; 0 | U_S (t) | \sigma'; 0 \rangle\} \end{gather} consist of combinations of real and imaginary parts of the functions $\hat{f}_r(\omega)$ [panels (c,d)], and thus present a mixed absorptive and dispersive character. We finally apply these results to the diagonal and off-diagonal propagators, up to different orders in the Dyson expansion. For the sake of simplicity, we show this in the case of the undisplaced oscillator ($z_1=z_2=0$), where $h_M=0$, $\chi_M=1$, $\Omega_{2k-1}=\bar\omega_\sigma$, and $\Omega_{2k}=\bar\omega_{3-\sigma}$ (since $q_j=0$ for all $j$). The propagators are given by the sum of two terms that oscillate at the diabatic state energies, $e^{-i\bar\omega_1 t}$ and $e^{-i\bar\omega_2 t}$, each one multiplied by a polynomial of order $n$ (the expressions of the monomials $A_j(t)$, in general and specifically for the case of the undisplaced oscillator, are given in Appendix \ref{app:lof} for $M \le 6$). In this particular case, the diagonal propagator is dominated by the zero-th order contribution ($M=1$, peak at $\bar\omega_1$), corresponding to a diabatic evolution within the initial state $|1\rangle$, with a minor contribution at $\bar\omega_2$, resulting mainly from the second-order term ($|\eta|^2$, transitions $|1\rangle\longrightarrow |2\rangle\longrightarrow |1\rangle$) [panel (e)]. The off-diagonal propagator presents two symmetric peaks at the two frequencies $\bar\omega_1$ and $\bar\omega_2$, mainly resulting from the first-order contribution ($M=2$), and corresponding to the occurrence of a single non-adiabatic transition $|1\rangle\longrightarrow |2\rangle$. From the expressions of the Fourier transforms, it follows that the relative weight of the contributions corresponding to different orders is determined by the values of the diabatic gap $\bar\omega_{12}$ and of the relevant decay rate $\gamma$, relative to the non-adiabatic coupling $\eta$. In fact, the terms of order $M-1$ resulting from a monomial $A_j(t)$ of order $r$ are proportional (at resonance) to \begin{gather} \frac{|\eta|^{M-1}}{|\bar\omega_{12}|^{M-r-1} \gamma^{r}}. 
\end{gather} The smaller $\gamma$, the larger the relative weight of the terms with high $r$. The convergence (the fact that the contributions lose weight for increasing $M$) results from the condition $|\eta| < |\bar\omega_{12}|,\gamma$. } \subsection{Linear response function} \begin{figure} \caption{\label{fig:3}} \end{figure} A first, straightforward application of the propagators reported in the previous Subsection is represented by the first-order response function, for the model systems $A$ and $B$, schematized respectively in Fig. \ref{fig:1} and Fig. \ref{fig:2}. \paragraph{\color{black} Model $A$.} Model $A$ is characterized by the presence of one vibrational mode, and only one excited state that can be optically addressed from the ground state. The resulting response function is given by: \begin{gather}\label{eqx02} \mathcal{R}^{(1)}_A (T_1) =i|\mu_{01}|^2 \sum_{n=0}^\infty (-i)^{2n} |\eta|^{2n} e^{h_{M}} \sum_{{\bf k}} \chi_{M} f_{M,{\bf q},1}(T_1) , \end{gather} where $h_{M}({\bf z})$ and $\chi_{M}({\bf k},{\bf z})$ are given respectively by Eq. (\ref{eq:h1}) and Eq. (\ref{eq:chi1}), while the time dependence is given by \begin{gather} f_{M,{\bf q},\sigma}(T_1) = \sum_{j=1}^M A_j(T_1)\, e^{i\Omega_j T_1} . \end{gather} In the response function, only even-order contributions in the non-adiabatic interaction matter, because the emission process at the end of the time evolution also has to take place from the excited state $|1\rangle$. Therefore, $M=2n+1$ and the vector ${\bf z}$ has components ${\rm z}_k=z_\sigma$ for odd $k$, and ${\rm z}_k=z_{3-\sigma}$ for even $k$. {\color{black} In the presence of relaxation and dephasing (Subsec. \ref{subsec:deco}), the above response function undergoes an exponential decay as a function of $T_1$. In particular, this results in a prefactor \begin{gather} F_A(T_1) = e^{-(\gamma_g+\gamma_e+\Gamma_e/2)T_1} \end{gather} which multiplies the above expression of $\mathcal{R}^{(1)}_A (T_1)$.} \paragraph{\color{black} Model $B$.} The case of model $B$ is conceptually equivalent to the previous one, but includes some additional contributions. This is due to the presence of a second vibrational mode and of a second allowed optical transition, that between the states $|0\rangle$ and $|2\rangle$. As a result, in the case of a symmetric dimer ($\bar\omega_1\!=\!\bar\omega_2\!\equiv\!\omega_e$, $\mu_{0,1}\!=\!\mu_{0,2}\!\equiv\!\mu_e$, $z_{1,1}\!=\!z_{2,2}\!\equiv\! z_e$), the linear response function reads: \begin{gather} \mathcal{R}^{(1)}_B (T_1) = i|\mu_{0e}|^2 \left[ \sum_{n=0}^\infty (-i)^{2n+1}(\eta +\eta^*) |\eta|^{2n} e^{-(2n+2)z_{e}^2} \right. \nonumber\\ \sum_{{\bf k}} \chi_{2n+2}' \, f_{2n+2,{\bf q}}(T_1) + 2 \sum_{n=0}^\infty (-i)^{2n} |\eta|^{2n} e^{-(2n+1)z_{e}^2} \nonumber\\ \left. \sum_{{\bf k}} \chi_{2n+1}' \, f_{2n+1,{\bf q}}(T_1)\right] . \end{gather} Here, the first (second) term in square brackets corresponds to pathways with an odd (even) number of non-adiabatic processes, such that the absorption and emission processes involve different (the same) excited states. We note that, due to the degeneracy between the two excited states, $\bar\omega_{12}=0$ and $f_{M,{\bf q},1}=f_{M,{\bf q},2}\equiv f_{M,{\bf q}}$. The fact that the two-level systems are identical implies that the vibrational modes are characterized by the same frequency and undergo the same displacement $z_e$ in passing from the ground state $|g\rangle$ to the excited state $|e\rangle$. 
This leads to a simplification of the propagator and of the resulting response function, which can be written formally as in the single-mode case, apart from the replacement of $h_M ({\bf z})$ with $h_M' ({\bf z}) = - M z_{e}^2 $ and of the function $\chi$ with \begin{gather} \chi_M' ({\bf k},z_e) = \frac{\prod_{p=1}^{M-1} [(-1)^{p+1} z^2_e]^{k_{1+w}+k_{M-p+1+w}}}{\prod_{l=1}^{M(M+1)/2} k_l!} \nonumber\\ \times \frac{1}{2} [1-(-1)^M] z_e^{2k_{M(M+1)/2}} \prod_{q=2}^{M-p}[2(-1)^{p+1} z_{e}^2]^{k_{q+w}}, \end{gather} where $w=(p-1)M-(p-1)(p-2)/2$. {\color{black} In the presence of dephasing and decoherence, the response function decays exponentially as a function of time. Such decay is described by the prefactor $F_B(T_1)=F_A(T_1)$.} \paragraph{\color{black} Verification against numerical results.} In order to test the approach, we compare the response function obtained with the present approach to one computed with a completely independent method. This consists in diagonalizing $H$ and propagating the initial state $|1;0\rangle$ by expanding it in the basis of the Hamiltonian eigenstates. As shown in Fig. \ref{fig:3}, the results of the perturbative approach (symbols) converge to the nonperturbative results (solid line) for increasing number of terms in the expansion. Terms of increasing order are clearly required for increasing time $t$. In this particular case, a good agreement for $t |\eta| > 1$ requires the inclusion of terms up to sixth order in the non-adiabatic coupling $V$. {\color{black} In general, from the expression of the functions $A_j(t)$ (see Appendix \ref{app:lof}) it follows that the expansion should converge for small values of $|\eta/\bar{\omega}_{12}|$ and of $t|\eta|$ ($\hbar\equiv 1$).} \subsection{Nonlinear response function\label{subsec:3}} The expression of the single-time propagator represents a starting point for the derivation of multi-time propagators, which can be directly related to nonlinear response functions. In particular, we focus hereafter on the response functions of third order in the light-matter interaction for model $A$ (Fig. \ref{fig:1}). Third-order response functions are expressed with respect to the waiting times $T_1$, $T_2$, and $T_3$, corresponding to the time intervals between consecutive interactions with the field. Besides, one can distinguish between the different contributions (pathways), based on the underlying physical process: ground-state bleaching, stimulated emission, excited-state absorption, and double quantum coherence. In the following, the two inequivalent contributions to the response functions are derived for each of these processes. The functions $h_M$ and $\chi_M$ of the displacements are however common to all the cases, and are reported hereafter. The function $h_M$ is given by \begin{gather}\label{eq:hM} h_M ({\bf z}) = -\sum_{p=1}^M \sum_{q=1}^{M-p+1} z_{j_{q-1} j_{q}} z_{j_{q+p-1} j_{q+p}}, \end{gather} where $z_{ij}\equiv z_i-z_j$. The function $\chi_M$, which also depends on ${\bf k}$, reads \begin{gather}\label{eq:chiM} \chi_M ({\bf k},{\bf z}) = \frac{\prod_{p=1}^M \prod_{q=1}^{M-p+1} (z_{j_{q-1} j_{q}} z_{j_{q+p-1} j_{q+p}})^{k_{q+w}}}{\prod_{l=1}^{M(M+1)/2} k_l!}, \end{gather} where $w=(p-1)M-(p-1)(p-2)/2$. \subsubsection{Ground-state bleaching} \begin{figure} \caption{\label{fig:4}} \end{figure} The ground state bleaching is associated with those pathways where both the ket and the bra are in the ground state during the second waiting time. 
It includes a rephasing and a non-rephasing contribution, which are treated separately hereafter. \paragraph{\color{black} Rephasing contribution.} The rephasing contribution corresponds in the perturbative (or Mukamelian) approach to the following sequence of transitions between operators: $| 0 \rangle\langle 0 | \longrightarrow | 0 \rangle\langle j | \longrightarrow | 0 \rangle\langle 0 | \longrightarrow | k \rangle\langle 0 | \longrightarrow | 0 \rangle\langle 0 | $, where $|j\rangle$ and $|k\rangle$ are optically excited states. In the case of model $A$, one has that $j=k=1$. The response function reads: \begin{gather} \mathcal{R}^{(3)}_2 =-i|\mu_{01}|^4\sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R)} e^{h_M} \nonumber\\ \sum_{{\bf k}} \chi_{M} \, f_{M_L,{\bf q}_L,1} (-T_1)\, f_{M_R,{\bf q}_R,1} (T_3)\, e^{i m_C \omega (T_2+T_3)} . \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$ and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M_L+1}=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)$ in the non-adiabatic coupling results from $2n_L$ ($2n_R$) virtual transitions in the evolution of the bra (ket) during the first (third) waiting time. {\color{black} In the presence of decoherence (Subsec. \ref{subsec:deco}), the above response function is multiplied by a factor $F_2$, which is given by the following expression: \begin{gather} F_2 (T_1,T_2,T_3) = e^{-(\gamma_e+\gamma_g+\Gamma_e/2)(T_1+T_3)}. \end{gather} This accounts for the decay of the coherence between the ground state $|0\rangle$ and an arbitrary linear superposition of the states $|1\rangle$ and $|2\rangle$ that takes place during the first and third waiting times, and for the relaxation of the excited states. } \paragraph{\color{black} Non-rephasing contribution.} The non-rephasing contribution corresponds to the following sequence of transitions: $| 0 \rangle\langle 0 | \longrightarrow | k \rangle\langle 0 | \longrightarrow | 0 \rangle\langle 0 | \longrightarrow | j \rangle\langle 0 | \longrightarrow | 0 \rangle\langle 0 | $, where $|j\rangle$ and $|k\rangle$ are optically excited states. In the case of model $A$, one has that $j=k=1$. The expression of this contribution reads: \begin{gather} \mathcal{R}^{(3)}_5 =-i|\mu_{01}|^4\sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R)} e^{h_M} \nonumber\\ \sum_{{\bf k}} \chi_M \, f_{M_L,{\bf q}_L,1} (T_3)\, f_{M_R,{\bf q}_R,1} (T_1)\, e^{-i m_C \omega T_2} . \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$ and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M_L+1}=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)$ in the non-adiabatic coupling results from $2n_L$ ($2n_R$) virtual transitions in the evolution of the ket during the third (first) waiting time. {\color{black} Decoherence affects the non-rephasing contribution in the same way as the rephasing one. Correspondingly, the above response function has to be multiplied by a factor $F_5(T_1,T_2,T_3)=F_2(T_1,T_2,T_3)$.} \paragraph{\color{black} Verification against numerical results.} In order to test these analytical results, we compare the third-order response function obtained for the rephasing contribution with that derived by numerical diagonalization of the Hamiltonian. As shown in Fig. 
\ref{fig:4}, the results of the perturbative approach (symbols) converge to the nonperturbative results (solid line) for increasing number of terms in the expansion. Terms of increasing order are clearly required for increasing values of $T_3$ and (not shown) of $T_1$. The value of $T_2$ is irrelevant in this perspective, because no non-adiabatic transitions can take place during the second waiting time, when the system state evolves within the ground state manifold. \subsubsection{Stimulated emission} The stimulated emission is associated with those paths where both the ket and the bra are in the excited-state subspace $\mathcal{S}_e$ during the second waiting time. It includes a rephasing and a non-rephasing contribution. \paragraph{\color{black} Rephasing contribution.} The rephasing contribution corresponds to transitions $| 0 \rangle\langle 0 | \longrightarrow | k \rangle\langle 0 | \longrightarrow | k \rangle\langle j | \longrightarrow | 0 \rangle\langle j | \longrightarrow | 0 \rangle\langle 0 | $, where $|j\rangle$ and $|k\rangle$ are optically excited states, here (model $A$) coinciding with $|1\rangle$. Its expression reads: \begin{gather} \mathcal{R}^{(3)}_1 =-i |\mu_{01}|^4 \sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R)} e^{h_M} \nonumber\\ \sum_{{\bf k}} \chi_M \, f_{M_L,{\bf q}_L,1} (-T_1)\, f_{M_R,{\bf q}_R,1} (T_2+T_3)\, e^{i m_C \omega T_3}. \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$ and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M_L+1}=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)$ in the non-adiabatic coupling results from $2n_L$ ($2n_R$) virtual transitions in the evolution of the bra (ket) during the first (second and third) waiting time(s). {\color{black} In the presence of decoherence (Subsec. \ref{subsec:deco}), the above response function is multiplied by a factor $F_1$, which is given by the following expression: \begin{gather} F_1 (T_1,T_2,T_3) = e^{-(\gamma_e+\gamma_g+\Gamma_e/2)(T_1+T_3)-\Gamma_eT_2}. \end{gather} This accounts not only for the dephasing and relaxation processes that affect the coherences during the waiting times $T_1$ and $T_3$ (as for the contributions related to ground state bleaching), but also for the relaxation taking place during the second waiting time $T_2$. } \paragraph{\color{black} Non-rephasing contribution.} The non-rephasing contribution corresponds to transitions $| 0 \rangle\langle 0 | \longrightarrow | j \rangle\langle 0 | \longrightarrow | j \rangle\langle k | \longrightarrow | j \rangle\langle 0 | \longrightarrow | 0 \rangle\langle 0 | $, where $|j\rangle$ and $|k\rangle$ are optically excited states, here coinciding with $|1\rangle$ (model $A$). Its expression reads: \begin{gather} \mathcal{R}^{(3)}_4 =-i |\mu_{01}|^4 \sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R)} e^{h_M} \sum_{{\bf k}} \chi_M \nonumber\\ f_{M_L,{\bf q}_L,1} (-T_2)\, f_{M_R,{\bf q}_R,1} (T_1\!+\!T_2\!+\!T_3)\, e^{i m_C \omega T_3} . \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$ and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M_L+1}=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)$ in the non-adiabatic coupling results from $2n_L$ ($2n_R$) virtual transitions in the evolution of the bra (ket) during the second waiting time (during all three waiting times). 
{\color{black} The effect of decoherence on the non-rephasing contribution coincides with that on the rephasing one. Therefore, the above response function has to be multiplied by a factor $F_4(T_1,T_2,T_3)=F_1(T_1,T_2,T_3)$.} \subsubsection{Excited state absorption} \begin{figure} \caption{\label{fig:5}} \end{figure} The excited state absorption is associated with those paths where both the ket and the bra are in the excited-state subspace $\mathcal{S}_e$ during the second waiting time, and the ket undergoes a further excitation process at the end of such period. \paragraph{\color{black} Rephasing contribution.} The response function associated to the rephasing contribution corresponds to transitions $| 0 \rangle\langle 0 | \longrightarrow | 0 \rangle\langle j | \longrightarrow | k \rangle\langle j | \longrightarrow | l \rangle\langle j | \longrightarrow | j \rangle\langle j | $, where $|j\rangle$ and $|k\rangle$ are singly excited states, while $|l\rangle$ is doubly excited. In the case of model $A$, one has that $j=k=1$ and $l=3$. The expression of the response function reads: \begin{gather} \mathcal{R}^{(3)}_3 =i |\mu_{01}\mu_{23}|^2 \sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R+1)} e^{h_M} \sum_{{\bf k}} \chi_M \nonumber\\ \, f_{M_L,{\bf q}_L,1} (\!-T_1\!-\!T_2\!-\!T_3)\, f_{M_R,{\bf q}_R,2} (T_2)\, e^{-i (m_C \omega +\omega_3) T_3} . \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$, apart from $j_{M_L+1}=3$, and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)+2$ in the non-adiabatic coupling results from $2n_R+1$ ($2n_L+1$) virtual transitions in the evolution of the ket (bra) during the second waiting time (during all three waiting times). {\color{black} Decoherence affects the above response function (Subsec. \ref{subsec:deco}). Its effect can be accounted for by including a factor $F_3$, which reads: \begin{gather} F_3 (T_1,T_2,T_3) = e^{-(\gamma_e+\gamma_g+\Gamma_e/2)T_1-\Gamma_eT_2}\nonumber\\ \times e^{-(\gamma_e+\gamma_b+\Gamma_e/2+\Gamma_b/2)T_3}. \end{gather} This accounts not only for the dephasing and relaxation processes that affect the coherences during the waiting times $T_1$ and $T_3$ (as for the contributions related to ground state bleaching), but also for the relaxation taking place during the second waiting time $T_2$. } \paragraph{\color{black} Non-rephasing contribution.} The response function associated to the non-rephasing contribution corresponds to transitions $| 0 \rangle\langle 0 | \longrightarrow | j \rangle\langle 0 | \longrightarrow | j \rangle\langle k | \longrightarrow | l \rangle\langle k | \longrightarrow | k \rangle\langle k | $, where $|j\rangle$ and $|k\rangle$ are singly excited states, while $|l\rangle$ is doubly excited. In the case of model $A$, one has that $j=k=1$ and $l=3$. The expression of the response function reads: \begin{gather} \mathcal{R}^{(3)}_6 =i |\mu_{01}\mu_{23}|^2 \sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R+1)} e^{h_M} \sum_{{\bf k}} \chi_M \nonumber\\ f_{M_L,{\bf q}_L,1} (\!-T_2\!-\!T_3)\, f_{M_R,{\bf q}_R,2} (T_1\!+\!T_2)\, e^{-i (m_C \omega +\omega_3) T_3} . \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$, apart from $j_{M_L+1}=3$, and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M+1}=0$. 
The overall order $M-3=2(n_L+n_R)+2$ in the non-adiabatic coupling results from $2n_R+1$ ($2n_L+1$) virtual transitions in the evolution of the ket (bra) during the first and second (second and third) waiting times. {\color{black} The effect of decoherence on the non-rephasing and rephasing contribution coincides. Therefore, also the above response function has to be multiplied by a factor $F_6(T_1,T_2,T_3)=F_3(T_1,T_2,T_3)$.} \paragraph{\color{black} Verification against numerical results.} In order to test these analytical results, we compare the third-order response function obtained for the rephasing contribution with that derived by numerical diagonalization of the Hamiltonian. As shown in Fig. \ref{fig:5}, the results of the perturbative approach (symbols) converge to the nonperturbative results (solid line) for increasing number of terms in the expansion. Terms of increasing order are clearly required for increasing values of $T_3$ and (not shown) of $T_1$, while the value of $T_2$ is irrelevant in this respect. \subsubsection{Double quantum coherence} We finally consider the pathways that involve coherences between the ground and a doubly excited state. These give rise to two kinds of contributions. \paragraph{\color{black} First contribution.} The response function associated to the first kind of contributions corresponds to transitions $| 0 \rangle\langle 0 | \longrightarrow | j \rangle\langle 0 | \longrightarrow | l \rangle\langle 0 | \longrightarrow | l \rangle\langle k | \longrightarrow | k \rangle\langle k | $, where $|j\rangle$ and $|k\rangle$ are singly excited states, while $|l\rangle$ is doubly excited. In the case of model $A$, one has that $j=k=1$ and $l=3$. The expression of the response function reads: \begin{gather} \mathcal{R}^{(3)}_7 =i |\mu_{01}\mu_{23}|^2 \sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R+1)} e^{h_M} \sum_{{\bf k}} \chi_M \nonumber\\ f_{M_L,{\bf q}_L,1} (-T_3)\, f_{M_R,{\bf q}_R,2} (T_1)\, e^{-i (m_C \omega +\omega_3) (T_2+T_3)} . \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$, apart from $j_{M_L+1}=3$, and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)+2$ in the non-adiabatic coupling results from $2n_R+1$ ($2n_L+1$) virtual transitions in the evolution of the ket (bra) during the first (third) waiting time. {\color{black} Decoherence affects the above response function by inducing a decay of the single and double coherences that evolve during the three waiting times (Subsec. \ref{subsec:deco}). As a result, the above response function has to be multiplied by a factor $F_7$, whose expression reads: \begin{gather} F_7 (T_1,T_2,T_3) = e^{-(\gamma_e+\gamma_g+\Gamma_e/2)T_1}\nonumber\\ \times e^{-(\gamma_b+\gamma_g+\Gamma_b/2)T_2-(\gamma_e+\gamma_b+\Gamma_e/2+\Gamma_b/2)T_3}. \end{gather}} \paragraph{\color{black} Second contribution.} The response function associated to the second kind of contributions corresponds to transitions $| 0 \rangle\langle 0 | \longrightarrow | j \rangle\langle 0 | \longrightarrow | l \rangle\langle 0 | \longrightarrow | k \rangle\langle 0 | \longrightarrow | 0 \rangle\langle 0 | $, where $|j\rangle$ and $|k\rangle$ are singly excited states, while $|l\rangle$ is doubly excited. In the case of model $A$, one has that $j=k=1$ and $l=3$. 
The expression of the response function reads: \begin{gather} \mathcal{R}^{(3)}_8 =-i |\mu_{01}\mu_{23}|^2 \sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R+1)} e^{h_M} \nonumber\\ \sum_{{\bf k}} \chi_M f_{M_L,{\bf q}_L,1} (T_3)\, f_{M_R,{\bf q}_R,2} (T_2)\, e^{-i (m_C \omega +\omega_3) T_2}. \end{gather} The $M$-dimensional vector ${\bf z}$ has the $l$-th component ${\rm z}_l=z_{j_l}$, where all the odd-numbered indices are $j_{2k+1}=1$, apart from $j_{M_L+1}=3$, and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M+1}=0$. The overall order $M-3=2(n_L+n_R)+2$ in the non-adiabatic coupling results from $2n_R+1$ ($2n_L+1$) virtual transitions in the evolution of the ket during the first (third) waiting time. {\color{black} The effect of decoherence on the second contribution that involves a double quantum coherence differs from that on the first contribution. In particular, the effect of dephasing and relaxation is accounted for by a factor $F_8$, whose expression reads: \begin{gather} F_8 (T_1,T_2,T_3) = e^{-(\gamma_e+\gamma_g+\Gamma_e/2)(T_1+T_3)} e^{-(\gamma_b+\gamma_g+\Gamma_b/2)T_2}. \end{gather}} {\color{black} \subsection{Nonlinear response functions \\ in the frequency domain} \begin{figure} \caption{\color{black}\label{fig:G}} \end{figure} The third-order response functions are given by the sum of terms corresponding to different orders $M_L+M_R-2$ in $\eta$. The same applies to the response functions in the frequency domain, $\mathcal{R}(\omega_1,T_2,\omega_3)$, obtained by performing the Fourier transform with respect to the times $T_1$ and $T_3$. In the following, we consider as a representative example the response function $\mathcal{R}_6^{(3)}$, related to excited state absorption, non-rephasing contribution, for $T_2=0$ and $z_1=z_2=0$ (Fig. \ref{fig:G}). We note in passing that, for $T_2=0$, this coincides with the response function related to the first contribution of the double quantum coherence, $\mathcal{R}_7^{(3)}$. The lowest nonzero contribution corresponds to $M_L=M_R=2$ (and thus to $|\eta|^2$). This physically corresponds to a single non-adiabatic transition $|1\rangle \longrightarrow |2\rangle$, taking place during the first two waiting times, and to a single non-adiabatic transition $\langle 1| \longrightarrow \langle 2|$, taking place during the last two waiting times. In the time domain, this term only includes terms that oscillate at the diabatic state energies ($\Omega_{k,L}=\bar\omega_k$ and $\Omega_{k,R}=\bar\omega_{3-k}$, with $k=1,2$), with constant prefactors $A_1$ and $A_2$. The resulting contribution [panels (a) and (b), real and imaginary parts, respectively] is characterized by the presence of two identical diagonal peaks at the diabatic state energies, and by off-diagonal peaks with opposite sign. The following nonzero contribution corresponds to $M_L=M_R=4$ (and thus to $|\eta|^6$). This physically corresponds to the triple transition $|1\rangle \longrightarrow |2\rangle \longrightarrow |1\rangle \longrightarrow |2\rangle$, taking place during $T_1+T_2$, and to the triple transition $\langle 1| \longrightarrow \langle 2|\longrightarrow \langle 1|\longrightarrow \langle 2|$, taking place during $T_2+T_3$. In the time domain, this term still includes terms that oscillate at the diabatic state energies ($\bar\omega_1$ and $\bar\omega_2$), but with prefactors that are linear in the relevant waiting times ($T_1$ and $T_3$). 
The resulting contribution [panels (c) and (d), real and imaginary parts, respectively] is characterized by the presence of more complex features in the diagonal and off-diagonal positions, with hybrid absorptive and dispersive character (see Subsec. \ref{subsec:Pitsd}). } \section{Derivations \label{sec:derivations}} In the following, we provide the formal derivation of the results reported in Sec. \ref{sec:main results}. \subsection{Time-evolution operator} The starting point is the introduction of an interaction picture, based on the separation of the adiabatic and non-adiabatic components of the Hamiltonian: $H=(H_g+H_{0,e}+H_b)+V\equiv H_0+V$. The terms corresponding to the ground ($H_g$) and doubly-occupied states ($H_b$) are by assumption adiabatic, while the projection of the Hamiltonian onto the subspace $\mathcal{S}_e = \{|1\rangle,|2\rangle \}$ includes both an adiabatic ($H_{0,e}$) and a non-adiabatic ($V$) term. Hereafter, the focus is on the free dynamics that takes place within the subspace $\mathcal{S}_e$, which can undergo optical transitions from and to the ground- and doubly-occupied states. In the interaction picture, the time-dependent state is given by: $|\psi_I(t)\rangle = e^{iH_0t} e^{-iHt} |\psi (0)\rangle \equiv U_I |\psi (0)\rangle$, where the time evolution operator reads \cite{Mahan}: \begin{align}\label{eq:02} U_I(t) \! = \! 1 \! +\sum_{n=1}^\infty (-i)^n\! \int^t_0 dt_1 \dots\int^{t_{n-1}}_0 dt_n V_I(t_1)\dots V_I(t_n) . \end{align} From this, one can obtain the time evolution operator in the Schr\"odinger picture: $U_S=e^{-iHt}=e^{-iH_0t} U_I$. The non-adiabatic operator corresponds to $V_I (t) = e^{iH_0t} V e^{-iH_0t} = e^{iH_{0,e}t} V e^{-iH_{0,e}t}$. The exponential operators can be expressed in terms of the displacement operators $\mathcal{D} (\alpha) = e^{\alpha a^\dagger - \alpha^* a}$ as follows: \begin{align} e^{\pm i H_{0,e} t} = \sum_{\sigma=1}^2 | \sigma \rangle\langle \sigma | \, e^{\pm i\bar\omega_\sigma t}\, \mathcal{D} (-z_\sigma)\, e^{\pm i \omega a^\dagger a t}\, \mathcal{D} (z_\sigma) , \end{align} being $z_\sigma$ the displacement corresponding to the electronic state $\sigma$. From this it follows that the non-adiabatic component of the Hamiltonian is given by: \begin{align} V_I (t) & = \eta |1\rangle\langle 2| \, e^{i\bar\omega_{12} t}\, \mathcal{D} (-z_1)\, e^{i \omega a^\dagger a t}\, \nonumber\\ & \mathcal{D} (z_{12})\, e^{- i \omega a^\dagger a t}\, \mathcal{D} (z_2) + {\rm H.c.}, \end{align} where $\bar\omega_{12}\equiv\bar\omega_1-\bar\omega_2=-\bar\omega_{21}$. The products of an odd number of operators $\hat{V}$ that appear in Eq. (\ref{eq:02}) can thus be written as \begin{align}\label{eq:03} V_I & (t_1) \dots V_I (t_{2n+1}) = \eta |\eta|^{2n} |1\rangle\langle 2| e^{i\bar\omega_{21} \sum_{k=1}^{2n+1} (-1)^{k} t_k} \nonumber\\ &\mathcal{D} (-z_2) \left\{\prod_{l=1}^{2n+2} \mathcal{D} [(-1)^{l} z_{12}] e^{-i \omega a^\dagger a \tau_l} \right\} \mathcal{D} (z_2) + {\rm H. c.} , \end{align} where $t_0=t_{2n+2}=0$. They physically correspond to contributions where the system undergoes $2n+1$ transitions between the states $|1\rangle$ and $|2\rangle$, at the times $t_{2n+1}<t_{2n}<\dots<t_1$, {\color{black} separated by time intervals of duration $\tau_k=t_{k-1}-t_k$}. 
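The displaced-oscillator factorization of $e^{\pm i H_{0,e}t}$ used above can be checked numerically. The following minimal sketch (truncated Fock space, assumed illustrative parameter values) compares the two sides of the identity on the low-lying Fock block, where truncation effects are negligible:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

n_f, omega, wbar_s, z_s, t = 40, 1.0, 5.0, 0.3, 0.7   # illustrative values

a  = np.diag(np.sqrt(np.arange(1, n_f)), 1)
ad = a.conj().T
Iv = np.eye(n_f)

def D(alpha):                        # displacement operator D(alpha)
    return expm(alpha*ad - np.conj(alpha)*a)

# Vibrational part of H_{0,sigma} for a single electronic state sigma
H_s = wbar_s*Iv + omega*(ad + z_s*Iv) @ (a + z_s*Iv)

lhs = expm(1j*H_s*t)
rhs = np.exp(1j*wbar_s*t) * D(-z_s) @ expm(1j*omega*t*(ad @ a)) @ D(z_s)

# Agreement on the low-lying Fock block, up to truncation error
print(np.abs(lhs - rhs)[:10, :10].max())
\end{verbatim}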
The products of an even number of non-adiabatic operators are diagonal in the basis of the adiabatic states and read: \begin{align}\label{eq:04} V_I & (t_1) \dots V_I (t_{2n}) = |\eta|^{2n} |1\rangle\langle 1| e^{i\bar\omega_{21} \sum_{k=1}^{2n} (-1)^{k} t_k} \nonumber\\ & \mathcal{D} (-z_2) \left\{\prod_{l=1}^{2n+1} \mathcal{D} [(-1)^{l} z_{12}] e^{-i \omega a^\dagger a \tau_l}\right\} \mathcal{D} (z_1) \nonumber\\ & + |\eta|^{2n} |2\rangle\langle 2| e^{i\bar\omega_{12} \sum_{k=1}^{2n} (-1)^{k} t_k} \nonumber\\ & \mathcal{D} (-z_1) \left\{\prod_{l=1}^{2n+1} \mathcal{D} [(-1)^{l} z_{21}] e^{-i \omega a^\dagger a \tau_l}\right\} \mathcal{D} (z_2) , \end{align} where $t_0=t_{2n+1}=0$. They physically correspond to contributions where the system undergoes $2n$ transitions between the states $|1\rangle$ and $|2\rangle$, at the times $t_{2n}<t_{2n-1}<\dots<t_1$, {\color{black} separated by time intervals of duration $\tau_k=t_{k-1}-t_k$}. \subsection{Propagators at defined interaction times} From the above equations it follows that the matrix element between the electronic states $|\sigma\rangle$ and $|\sigma'\rangle$ (where $\sigma,\sigma'=1,2$) of the products $e^{-i H_{0,e}t} V_I (t_1)\dots V_I(t_{M-1})$ can always be written as alternating sequences of displacement operators and free-oscillator time-evolution operators. The expectation value of such operator products in the vacuum state $|0\rangle$ of the vibrational mode, to which we refer in the following as {\it adiabatic response function}, has a well defined analytical expression, which reads \cite{Quintela2022a}: \begin{align}\label{eq:arf} R^{(v,M)}_{j_1\,j_M} & (\tau_1,\dots,\tau_M) = \exp[h_M({\bf z})] \times\nonumber\\ & \exp \left(-\sum_{k=1}^M \sum_{l=1}^{M-k+1} z_{j_{l-1},j_l} z_{j_{l+k-1},j_{l+k}} \prod_{p=l}^{l+k-1} v_p\right). \end{align} The function $h_M$ of the displacements is given in Eq. (\ref{eq:hM}). We stress that $R_{j_1\,j_M}^{(v,M)}$ formally coincides with the vibrational response function for the displaced harmonic oscillator model, but has here a different physical interpretation. In particular, the transitions between electronic states were induced there by the interaction of the system with the electric field, and here by the nonadiabatic term $V$. In order to stress such difference, the response function for the non-adiabatic model that is considered in the present paper is denoted with the symbol $\mathcal{R}$. The adiabatic response function $R_{j_1\,j_M}^{(v,M)}$ can be associated to a time evolution of the vibrational state induced by a Hamiltonian that is piecewise constant, and undergoes abrupt transitions as the system undergoes transitions between the electronic states $|1\rangle$ and $|2\rangle$. In particular, the Hamiltonian is constant during each of the $M$ time intervals, $\tau_k=t_{k-1}-t_k$ ($k=1,\dots,M$), delimited by two consecutive transitions. To each time interval one can associate a function $v_k=e^{-i\omega\tau_k}$, which appears in the expression of $R_{j_1\,j_M}^{(v,M)}$, and an index $j_k=1,2$, which specifies the electronic state and thus the Hamiltonian $H_{0,j_k}$ that induces the time evolution. The index $j_k$ also specifies the relevant displacement $z_{j_k}$, whose differences $z_{j_{k-1},j_k}\equiv z_{j_{k-1}} - z_{j_{k}}$ appear in Eq. (\ref{eq:arf}). 
\begin{figure} \caption{Representation in terms of the double-sided Feynman diagrams of the adiabatic response functions $R^{(v,4)}$.\label{fig:A}} \end{figure} In the case of the products of an odd number of operators $V_I$, the expectation value that enters the expression of the matrix element $\langle 1;0 | U_S | 2;0 \rangle $ reads: \begin{gather} \langle 1;0 | e^{-iH_{0,1} t} V_I (t_1)\dots V_I(t_{M-1}) |2; 0\rangle = e^{-i\bar\omega_1 t} \nonumber\\ \label{eq:100} \eta |\eta|^{2n} e^{i\bar\omega_{21} \sum_{k=1}^{2n+1} (-1)^{k} t_k} R^{(v,M)}_{12} (\tau_1,\dots,\tau_{M}) \end{gather} where $M=2n+2$, $j_{2k}=2$, $j_{2k+1}=1$, apart from $j_0=j_{M+1}=0$. The corresponding Feynman diagrams are characterized by $M+1$ arrows, all on the left side [Fig. \ref{fig:A}(b)], with the state before the first interaction and after the last one both coinciding with $|0\rangle$, and in between an alternation of states $|1\rangle$ and $|2\rangle$. The time-independent term in the exponent of the adiabatic response function is given by $h_M({\bf z}) = -\frac{1}{2} M z_{12}^2 - z_1 z_2 $. The expression of the propagator $\langle 2;0 | e^{-iH_{0,2} t} V_I (t_1)\dots V_I(t_{M-1}) |1; 0\rangle$ [see Fig. \ref{fig:A}(a)] can be obtained from the above expression by swapping the indices 1 and 2 that define the electronic states and by replacing $\eta$ with its complex conjugate. In the case of an even number of operators, the expectation value that enters the expression of the matrix element $\langle \sigma ;0 | U_S | \sigma ;0 \rangle $ reads: \begin{gather} \langle \sigma; 0 | e^{-iH_{0,\sigma} t} V_I (t_1)\dots V_I (t_{M-1}) |\sigma; 0\rangle = e^{-i\bar\omega_\sigma t} \nonumber\\ \label{eq:200} |\eta|^{2n} e^{i\bar\omega_{12} \sum_{k=1}^{2n} (-1)^{k+\sigma} t_k} R^{(v,M)}_{\sigma\sigma} (\tau_1,\dots,\tau_{M}) \end{gather} where $\sigma=1,2$, $M=2n+1$, $j_{2k}=3-\sigma$, $j_{2k+1}=\sigma$, apart from $j_0=j_{M+1}=0$. The corresponding Feynman diagrams are characterized by $M+1$ arrows, all on the left side [Fig. \ref{fig:A}(c,d)], with the state before the first interaction and after the last one both coinciding with $|0\rangle$, in between an alternation of $|1\rangle$ and $|2\rangle$. The time-independent term in the exponent of the adiabatic response function is given by $h_M({\bf z}) = -\frac{1}{2} (M-1) z_{12}^2 - z_\sigma^2 $. \subsection{Taylor expansion of the propagator \label{subsec:te}} In order to compute the integrals with respect to the interaction times, we expand the response functions $R_{\sigma\sigma'}^{(v,M)}$ in Taylor series with respect to all the exponentials that appear in the exponent. In particular, the exponent of the response function of order $M$ is given by the sum of $M(M+1)/2$ terms: the first $M$ terms $v_1,v_{2},\dots,v_M$ correspond to the individual time intervals $\tau_1,\tau_{2},\dots,\tau_{M}$, the following $M-1$ terms $v_1 v_{2}, v_{2} v_{3},\dots,v_{M-1} v_M$ correspond to the double time intervals $\tau_1 + \tau_{2},\tau_{2}+\tau_{3},\dots,\tau_{M-1}+\tau_{M}$; and so on until the last term $v_1 v_2 \dots v_{M-1} v_M$, which corresponds to the $M$-tuple time interval $t=\sum_{k=1}^M\tau_k$. 
The Taylor expansion thus gives: \begin{gather} R_{\sigma\sigma'}^{(v,M)}=e^{h_M({\bf z})}\sum_{\bf k} \frac{(z_\sigma z_{\sigma\bar\sigma})^{k_1}\dots (z_\sigma z_{\sigma'})^{k_{M(M+1)/2}}}{k_1!\dots k_{M(M+1)/2}!}\, \nonumber\\ e^{-i\omega k_1 t_{M-1}} \dots e^{-i\omega k_{M+1} t_{M-2}} \dots e^{-i\omega k_{M(M+1)/2} t} \nonumber\\ \label{eq:expansion} = e^{h_M} \sum_{\bf k} \chi_M \prod_{p=0}^{M-1} e^{-i\omega t_{p} m_{p}} = e^{h_M} \sum_{\bf k} \chi_M \prod_{p=1}^{M} e^{-i\omega \tau_{p} q_{p}} , \end{gather} where ${\bf z}=(z_1,\dots,z_M)$ and ${\bf k}=[k_1,\dots,k_{M(M+1)/2}]$, whose components $k_i$ vary from $0$ to $\infty$. In the last equation above, the $M(M+1)/2$ oscillating terms are reduced either to the $M$ terms that depend on one of the times $t_p$, or to the $M$ terms that depend on the time intervals $\tau_p$. In the former case, the exponent of each factor in the last line above, $m_{p}=l_{M-p}-l_{M-p+1}$ ($p=0,\dots,M-1$), depends on ${\bf k}$ through the expression \begin{align}\label{eq:lp} l_{M-p}\! =\! k_{M-p}\! + \sum_{q=2}^{M} \sum_{s=\max(0,q-p-1)}^{\min(q-1,M-p-1)} \!\!\!\!\!\!\!\!\! k_{q M-p-(q-1)(q-2)/2-s} . \end{align} The coefficients $q_j$ can be expressed as a function of the $m_i$, with $q_j=\sum_{i=0}^{j-1} m_i$. The function $\chi_M$ depends both on ${\bf z}$ and on ${\bf k}$, through the expression reported in Eq. (\ref{eq:chiM}). As a result, the propagator corresponding to defined interaction times is expressed as a sum of terms, each one given by a product of exponential functions of the times. \subsection{Integration over the interaction times \label{subsec:iit}} In order to derive the matrix elements of the time-evolution operator $U_S$, one finally needs to integrate the above quantities, multiplied by the additional oscillating terms [see Eqs. (\ref{eq:03},\ref{eq:04})], with respect to the $M-1$ times $t_p$. The multiple integral gives rise to the following expression: {\color{black} \begin{gather} f_{M,{\bf q},\sigma}(t) = e^{-i(\omega m_0+\bar\omega_\sigma)t} \prod_{p=1}^{M-1} \int_0^{t_{p-1}} dt_{p}\, e^{i\omega_{pp} t_{p}} \nonumber\\ = \sum_{j=1}^{M} A_{j}(t)\, e^{-i(\bar\omega_\sigma-\omega_{0,j-1})\, t} \equiv \sum_{j=1}^{M} A_{j}(t)\, e^{-i\Omega_{j} t} \label{eq:05} \end{gather} where ${\bf q}\equiv (q_1,\dots,q_{M})$. Besides, $A_{j}(t) = a_{j} t^{r_j}$, where $r_j$ is the number of zero frequencies amongst the $\omega_{k,j-1}$, with $k=1,\dots,j-1$. Besides the $\Omega_j$, which appear in the final expression above, it is thus necessary to introduce the frequencies $\omega_{kj}$, which take the value \begin{align}\label{eq:mfr} \omega_{kj} = - \omega\sum_{i=k}^j m_i - \frac{1}{2}[(-1)^{k+\sigma}+(-1)^{j+\sigma}]\,\bar\omega_{21} \end{align} for $k\le j$ and $\omega_{kj}=0$ for $k>j$. From the above expression of the $\omega_{kj}$ it follows that $r_j$ cannot be larger than $M/2-1$ for even values of $M$, and than $(M-1)/2$ for odd values. The frequencies $\omega_{kj}$ and $\Omega_j$ can be expressed as a function of one another, through the relations: \begin{align} \Omega_j = \bar\omega_\sigma-\omega_{0,j-1},\ \omega_{kj}=\Omega_{k}-\Omega_{j+1} . \end{align}} If none of the frequencies $\omega_{kj}$ vanishes, then one can define a set of constants $A_{M-1-k,j}$, with $k = 1,\dots,M-1$ and $j = 1,\dots,k+1$. By sequentially performing the integrals in Eq. 
(\ref{eq:05}), one can show that the following recurrence relations apply, starting from $A_{M-1,1}=1$: \begin{gather} A_{M-1-k,j} = \frac{A_{M-k,j-1}}{i\omega_{M-k,M-2-k+j}}\\ A_{M-1-k,1}=-\sum_{j=2}^{k+1} A_{M-1-k,j} . \end{gather} By combining the above equations, one can eventually express all the coefficients that enter the expression of the functions $f_{M,{\bf q},\sigma}(t)$ in terms of the frequencies $\omega_{ij}$: \begin{gather}\label{eq:frequencies} A_{M-m} = \frac{(-1)^m\,i^{1-M}}{\prod_{k=1}^{M-m-1} \omega_{k,M-m-1}\prod_{l=M-m}^{M-1} \omega_{M-m,l}} , \end{gather} where $A_{M-m} \equiv A_{0,M-m}$. In the presence of zero frequencies, the above recursive relations have to be modified. One can derive Eq. (\ref{eq:05}) by introducing functions $A_{ij}(t)=\sum_{k=0}^r a_{ijk} t^k$. If $\omega_{M-k,M-k+j-2}\neq 0$, then \begin{gather} a_{M-k-1,j,r} = \sum_{s=r}^{b} \frac{s!}{r!} \frac{(-1)^{s-r}\, a_{M-k,j-1,s}}{(i\omega_{M-k,M-k+j-2})^{s-r+1}}, \end{gather} where $b$ is the order of the polynomial $A_{M-k,j-1}$, and the constant term in the polynomial is given by \begin{gather} a_{M-k-1,1,0}=-\sum_{j=2}^{k+1} a_{M-k-1,j,0} . \end{gather} If instead $\omega_{M-k,M-k+j-2} = 0$, then \begin{gather} a_{M-1-k,j,r} = \frac{1}{r}\, a_{M-k,j-1,r-1} \end{gather} and $a_{M-k-1,j,0}=0$. \begin{figure} \caption{Representation in terms of the double-sided Feynman diagrams of the third-order response functions of the type $\mathcal{X}$.\label{fig:B}} \end{figure} {\color{black} \subsection{Decoherence\label{subsec:deco}} The effect of decoherence can be included in the present approach at a phenomenological level. In particular, such inclusion leads to simple time-dependent prefactors for the derived response functions under the condition that the environment couples symmetrically to the subspace $\mathcal{S}_{e}=\{|1\rangle,|2\rangle\}$ where the non-adiabatic term is defined. This implies that pure dephasing between $|1\rangle$ and $|2\rangle$ is not included, and that these two states are assumed to relax at an equal rate. In the presence of decoherence, the free evolution of the system between two consecutive transitions induced by the electric field can no longer be simulated by the Schr\"odinger equation. We thus refer to a master equation in the Lindblad form \cite{Breuer}, \begin{gather} \frac{d}{dt} \rho = i [\rho,H] + \sum_{i=1}^{N_L} \left[L_i \rho L_i^\dagger - \frac{1}{2} (L_i^\dagger L_i \rho + \rho L_i^\dagger L_i)\right], \end{gather} with $N_L=6$ Lindblad operators $L_i$. Three of these, namely \begin{gather}\label{lo1} L_1=\sqrt{\Gamma_e}\, |0\rangle\langle 1|,\ L_2=\sqrt{\Gamma_e}\, |0\rangle\langle 2|,\ L_3=\sqrt{\Gamma_b}\, |0\rangle\langle 3|, \end{gather} account for relaxation, respectively from the states $|1\rangle$, $|2\rangle$, and $|3\rangle$. The other three operators read: \begin{gather} L_4=\sqrt{2\gamma_g}\, |0\rangle\langle 0|,\ L_6=\sqrt{2\gamma_b}\, |3\rangle\langle 3|, \nonumber\\ \label{lo2} L_5=\sqrt{2\gamma_e}\, (|1\rangle\langle 1|+|2\rangle\langle 2|), \end{gather} and account respectively for the decay of coherences between the subspaces $\mathcal{S}_g=\{|0\rangle\}$, $\mathcal{S}_b=\{|3\rangle\}$, and $\mathcal{S}_e$, and any other subspace. It should be understood that each of the above operators $L_i$ is multiplied by an identity operator that applies to the vibrational degrees of freedom, and thus has no direct effect on the state of the harmonic oscillator(s). 
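As a minimal numerical cross-check of how these operators enter the dynamics, the following sketch (purely electronic part, with the vibrational identity left implicit and assumed illustrative rates) integrates the Lindblad equation and compares the decay of the coherence $\rho_{13}$ with the analytical rate discussed in the next paragraph:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates and diabatic energies (assumed values)
Gam_e, Gam_b = 0.05, 0.08                 # relaxation rates Gamma_e, Gamma_b
gam_g, gam_e, gam_b = 0.02, 0.03, 0.04    # dephasing rates gamma_g, gamma_e, gamma_b
wbar = np.array([0.0, 5.0, 5.3, 10.5])    # electronic energies \bar{omega}_xi

H = np.diag(wbar)
def ket(i):
    v = np.zeros(4); v[i] = 1.0; return v
op = lambda i, j: np.outer(ket(i), ket(j))

# Lindblad operators of Eqs. (lo1)-(lo2), electronic part only
Ls = [np.sqrt(Gam_e)*op(0, 1), np.sqrt(Gam_e)*op(0, 2), np.sqrt(Gam_b)*op(0, 3),
      np.sqrt(2*gam_g)*op(0, 0), np.sqrt(2*gam_e)*(op(1, 1) + op(2, 2)),
      np.sqrt(2*gam_b)*op(3, 3)]

def lindblad_rhs(t, r):
    rho = r.reshape(4, 4)
    d = -1j*(H @ rho - rho @ H)
    for L in Ls:
        d += L @ rho @ L.conj().T - 0.5*(L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return d.ravel()

rho0 = np.outer(ket(1) + ket(3), ket(1) + ket(3))/2   # coherence between |1> and |3>
sol = solve_ivp(lindblad_rhs, [0, 20], rho0.astype(complex).ravel(),
                t_eval=[20.0], rtol=1e-8, atol=1e-10)
rho_t = sol.y[:, -1].reshape(4, 4)

# |rho_13(t)| should decay as exp{-[gamma_e+gamma_b+(Gamma_e+Gamma_b)/2] t}
rate = gam_e + gam_b + 0.5*(Gam_e + Gam_b)
print(abs(rho_t[1, 3]), 0.5*np.exp(-rate*20.0))
\end{verbatim}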
The coherences between states belonging to different subspaces decay at a rate which is given by the sum of the respective dephasing rates and of the average relaxation rate. For example, $\dot\rho_{13}=-\rho_{13}[\gamma_e+\gamma_b+\frac{1}{2}(\Gamma_e+\Gamma_b)+i\bar\omega_{13}]$, with $\bar\omega_{ij}\equiv\bar\omega_i-\bar\omega_j$. Coherences between the states $|1\rangle$ and $|2\rangle$, instead, undergo an exponential decay only by virtue of the relaxation from the subspace $\mathcal{S}_e$: $\dot\rho_{12}=-\rho_{12}(\Gamma_e+i\bar\omega_{12})$. The same exponential decay affects the populations $\rho_{11}$ and $\rho_{22}$. As a result, the superoperators associated with all the Lindblad operators $L_i$ commute with the one related to the Hamiltonian, and the effect of decoherence on the time evolution of any $\rho_{ij}$ can be reduced to a multiplicative exponential decay, with a suitable decay rate. This set of Lindblad operators does not account for a pure dephasing term within the subspace $\mathcal{S}_e$. Its inclusion would require a generalization of the derivations presented in Sec. \ref{sec:derivations}, which is beyond the scope of the present article. In view of the above results, the effect of the Lindblad operators reported in Eqs. (\ref{lo1}-\ref{lo2}) can be effectively incorporated in the expression of the propagators and of the response function, through the inclusion of prefactors that decay exponentially with the waiting times. We note for completeness that this approach accounts for the effects of the population loss in the initial state of the relaxation process, but not for those of the population gain in the final state.} \subsection{Multi-mode case} The above results can be generalized to the case of multiple ($G >1$) vibrational modes. The procedure is the one that has been followed in the single-mode case: calculation of the operators $V_I(t)$ and of their products; identification of their expectation values in the ground state of the vibrational modes with the multimode adiabatic response functions; integration with respect to the interaction times. In the case of products of odd-order terms, such an expectation value reads: \begin{gather} \langle 1;{\bf 0} | e^{-iH_{0,1} t} V_I (t_1)\dots V_I(t_{2n+1}) |2; {\bf 0}\rangle = e^{-i\bar\omega_1 t} \nonumber\\ \eta |\eta|^{2n} e^{i\bar\omega_{21} \sum_{k=1}^{2n+1} (-1)^{k} t_k} \prod_{\zeta =1}^G R^{(v_\zeta,M)}_{12} (\tau_1,\dots,\tau_M) \end{gather} where $|{\bf 0}\rangle\equiv |0,\dots,0\rangle$ is the multimode ground state. As in the case $G=1$, the following relations hold: $M=2n+2$, $j_{2k}=2$, $j_{2k+1}=1$, apart from $j_0=j_{M+1}=0$. Physically, this term still refers to the occurrence of $2n+1$ hopping processes between the excited states, at the times $t_{2n+1} < t_{2n} < \dots < t_1 $, which eventually lead to a transition from $|2\rangle$ to $|1\rangle$. In the case of even-order terms, the expectation value in the vibrational ground state reads: \begin{gather} \langle \sigma; {\bf 0} | e^{-iH_{0,\sigma} t} V_I (t_1)\dots V_I (t_{2n}) |\sigma; {\bf 0} \rangle = e^{-i\bar\omega_\sigma t} \nonumber\\ |\eta|^{2n} e^{i\bar\omega_{12} \sum_{k=1}^{2n} (-1)^{k+\sigma} t_k} \prod_{\zeta =1}^G R^{(v_\zeta,M)}_{\sigma\sigma} (\tau_1,\dots,\tau_M) \end{gather} where, as in the case $G=1$, $\sigma=1,2$, $M=2n+1$, $j_{2k}=3-\sigma$, $j_{2k+1}=\sigma$, apart from $j_0=j_{M+1}=0$.
Physically, this term refers to the occurrence of $2n$ hopping processes between the states $|2\rangle$ and $|1\rangle$, at the times $t_{2n} < t_{2n-1} < \dots < t_1 $, which eventually bring the system back to its initial state. We are now in a position to write down the final expression of the propagators. In particular, the off-diagonal one in the basis $\{|1\rangle,|2\rangle\}$ reads: \begin{gather} \langle 1; {\bf 0} | U_S | 2; {\bf 0} \rangle = \sum_{n=0}^\infty (-i)^{2n+1}\eta |\eta|^{2n} \nonumber\\ \left[ \prod_{\zeta=1}^G e^{h_{M}({\bf z_\zeta})} \sum_{{\bf k}_\zeta} \chi_{M} ({\bf z}_\zeta,{\bf k}_\zeta) \right]\, f_{M,{\bf Q},1} (t), \end{gather} where $M=2n+2$, $j_{2k}=2$, $j_{2k+1}=1$, apart from $j_0=j_{M+1}=0$. Besides, ${\bf Q} \equiv ({\bf q}_1,\dots,{\bf q}_G) $, where the relation between the vectors ${\bf q}_\zeta$ and ${\bf k}_\zeta$ is given by Eq. (6). The diagonal part of the propagator is given by the following expression: \begin{gather} \langle \sigma; {\bf 0} | U_S | \sigma; {\bf 0} \rangle = \prod_{\zeta=1}^G u_{0,\zeta} + \sum_{n=1}^\infty (-i)^{2n} |\eta|^{2n} \nonumber\\ \left[ \prod_{\zeta=1}^G e^{h_{M}({\bf z_\zeta})} \sum_{{\bf k}_\zeta} \chi_{M} ({\bf z}_\zeta,{\bf k}_\zeta) \right]\, f_{M,{\bf q},\sigma} (t) \end{gather} where $M=2n+1$, $j_{2k}=3-\sigma$, $j_{2k+1}=\sigma$, apart from $j_0=j_{M+1}=0$, and $u_{0,\zeta}=\exp[z_{1,\zeta}^2(e^{-i\omega_\zeta t}-1)] e^{-i\bar\omega_\sigma t}$. As in the even-$M$ case, the time-dependent polynomials are obtained from the single-mode expressions by replacing $\omega q_p$ with $\sum_{\zeta=1}^G \omega_\zeta q_{p,\zeta}$. A simple and yet relevant case is the one where the two excited states correspond to electronic excitations localized in the first or second component of a dimer: $|1\rangle = |e,g\rangle$ and $|2\rangle=|g,e\rangle$ (model $B$, Fig. \ref{fig:2}). The model includes two vibrational modes ($G=2$), each one localized in one of the monomers. The oscillator displacement vanishes when the corresponding monomer is in the ground state ($z_{\zeta=1,2}=z_{\zeta=2,1}=0$). If the two units are identical, then the two vibrational frequencies and the displacements ($z_e\equiv z_{\zeta=1,1}=z_{\zeta=2,2}\neq 0$) coincide, and $\bar\omega_{12}=0$. In this case, the two-mode adiabatic response function can be written as a single-mode one, by replacing $h_M({\bf z})$ and $\chi_M({\bf z},{\bf k})$ respectively with $h_M'({\bf z})$ and $\chi_M'({\bf z},{\bf k})$. In particular, one can show that $h_{M}'({\bf z}) = - Mz_e^2 $ for all values of $\sigma,\sigma'=1,2$. As to the functions $\chi_M' ({\bf z},{\bf k})$ [Eqs. (8) and (13)], their numerators are given by products of terms $X_i^{k_i}$. The terms corresponding to $i=(p-1)M-(p-1)(p-2)/2+1$ and $i=pM-p(p-1)/2$ (with $p = 1, \dots, M-1$) are $X_i=(-1)^{p+1}z^2_e$, while the term corresponding to $i=M(M+1)/2$ is $z^2_e$ for $\sigma=\sigma'$ and $0$ otherwise; in all the other cases, $X_i= 2(-1)^{p+1} z^2_e$. \subsection{Multitime propagators \\ and nonlinear response functions} The present approach can also be applied to multitime propagators, such as the ones that enter the expressions of nonlinear response functions. We focus hereafter on the three-time propagators, which typically represent the most relevant ones in multidimensional coherent spectroscopy.
For the sake of simplicity, we consider the case where optical transitions are only allowed between the ground state $|0\rangle$ and the excited state $|1\rangle$, and between $|2\rangle$ and the doubly-excited state $|3\rangle$ (model $A$, Fig. \ref{fig:1}). The relevant and inequivalent propagators can thus be reduced to two. In the first one, the left and right propagators only involve the state $|1\rangle$, while the central one involves the ground state: \begin{gather} \mathcal{X}_1 = \langle 1;0| e^{-i H T_L} | 1\rangle \langle 0 | e^{-i H T_C} | 0 \rangle \langle 1 | e^{-i H T_R} | 1; 0\rangle . \end{gather} In order to derive the above quantities, one can proceed along the same lines as for the single-time propagators. In a first step, the time evolution operators associated to the non-adiabatic Hamiltonian $H_e$ are expanded in powers of $V_I$. As a result, one has, for given values of the intermediate times $t_{L,2n_L}<t_{L,2n_L-1}<\dots<t_{L,1}$ and $t_{R,2n_R}<t_{R,2n_R-1}<\dots <t_{R,1}$, an operator given by an alternating sequence of displacement operators and free oscillator time evolution operators: \begin{gather} \langle 1 ; 0 | e^{-iH_{0,1} T_L} V_I (t_{L,1})\dots V_I(t_{L,2n_L}) |1\rangle \langle 0| e^{-i H_{0,0} T_C} |0\rangle \nonumber\\ \langle 1 | e^{-iH_{0,1} T_R} V_I (t_{R,1})\dots V_I(t_{R,2n_R}) |1; 0\rangle \nonumber\\ = e^{-i\bar\omega_1(T_L+T_R)} |\eta|^{2(n_L+n_R)} e^{i\bar\omega_{21}\sum_{\xi=L,R} \sum_{k=1}^{2n_\xi} (-1)^k t_{\xi,k} } \nonumber\\ R^{(v,M)}_{11} (\tau_{L,1},\dots,\tau_{L,M_L},T_C,\tau_{R,1},\dots,\tau_{R,M_R}), \end{gather} where $M_L=2n_L+1$ and $M_R=2n_R+1$. This can be formally identified with an adiabatic response function of order $ M = 2(n_L + n_R) + 3$, where all the odd-numbered indices are $j_{2k+1}=1$ and all the even-numbered are $j_{2k}=2$, apart from $j_0=j_{M_L+1}=j_{M+1}=0$. In a second step, the adiabatic response function is expanded in powers of the exponentials that appear in the exponent. Finally, the multiple integration is performed independently with respect to the interaction times $t_{L,i}$ and $t_{R,j}$. As a result, one obtains \begin{gather} \mathcal{X}_1 =\sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R)} e^{h_M} \nonumber\\ \sum_{{\bf k}} \chi_M\, f_{M_L,{\bf q}_L,1} (T_L)\, f_{M_R,{\bf q}_R,1} (T_R)\, e^{-i m_C \omega T_C}. \end{gather} In the second case, the system undergoes a transition from $|1\rangle$ to $|2\rangle$ during the time $T_R$. The state occupied during $T_C$ necessarily coincides with $|3\rangle$, since this is the only electronic state that is optically coupled to $|2\rangle$: \begin{gather} \mathcal{X}_2 = \langle 1;0 | e^{-i H T_L} | 2\rangle \langle 3 | e^{-i H T_C} | 3 \rangle \langle 2 | e^{-i H T_R} | 1; 0\rangle . \end{gather} The expansion with respect to the non-adiabatic term $V_I$, where now only odd powers contribute, leads to: \begin{gather} \langle 1;0 | e^{-iH_{0,1} T_L} V_I (t_{L,1})\dots V_I(t_{L,2n_L+1}) |2\rangle \langle 3 | e^{-i H_{0,3} T_C} | 3 \rangle \nonumber\\ \langle 2 | e^{-iH_{0,2} T_R} V_I (t_{R,1})\dots V_I(t_{R,2n_R+1}) |1; 0\rangle = e^{-i\bar\omega_1 T_L} \nonumber\\ e^{-i (\omega_3 T_C + \bar\omega_2 T_R)} |\eta|^{2(n_L+n_R+1)} e^{i\bar\omega_{21}\sum_{\xi=L,R}\sum_{k=1}^{2n_\xi+1} (-1)^k t_{\xi,k}} \nonumber\\ R^{(v,M)}_{11} (\tau_{L,1},\dots,\tau_{L,M_L},T_C,\tau_{R,1},\dots,\tau_{R,M_R}) , \end{gather} where $M_L=2(n_L+1)$ and $M_R=2(n_R+1)$.
In the adiabatic response function of order $M=2(n_L+n_R)+5$, all the odd-numbered indices are $j_{2k+1}=1$, apart from $j_{M_L}=3$, and all the even-numbered are $j_{2k}=2$. After performing the Taylor expansion and integrating with respect to the interaction times, one obtains \begin{gather} \mathcal{X}_2 =\sum_{n_L,n_R=0}^\infty (-i|\eta|)^{2(n_L+n_R+1)} e^{h_M} \nonumber\\ \sum_{{\bf k}} \chi_M \, f_{M_L,{\bf q}_L,1} (T_L)\, f_{M_R,{\bf q}_R,2} (T_R)\, e^{-i (m_C \omega +\omega_3)T_C} . \end{gather} The two expressions above capture all the cases that are relevant for the third-order response functions, which can be obtained by suitably defining the times $T_L$, $T_C$, and $T_R$ in terms of the waiting times $T_1$, $T_2$, and $T_3$ and exploiting the fact that $U^\dagger (t)=U(-t)$. \subsubsection{Ground state bleaching} The rephasing component of the ground state bleaching contribution is associated to the quantity: \begin{gather} \langle 1,0 | U^\dagger_e(T_1) | 1\rangle \langle 0 | U^\dagger_g(T_2+T_3) | 0 \rangle \langle 1 | U_e(T_3) | 1,0 \rangle . \end{gather} This can be reduced to the function $\mathcal{X}_1$ by setting: $T_L=-T_1$, $T_C=-T_2-T_3$, $T_R=T_3$. The non-rephasing component of the ground-state bleaching contribution is associated to the quantity: \begin{gather} \langle 1,0 | U_e(T_3) | 1\rangle \langle 0 | U_g(T_2) | 0 \rangle \langle 1 | U_e(T_1) | 1,0 \rangle . \end{gather} This can be reduced to the function $\mathcal{X}_1$ by setting: $T_L=T_3$, $T_C=T_2$, $T_R=T_1$. \subsubsection{Stimulated emission} The rephasing component of the stimulated emission contribution is related to the function: \begin{gather} \langle 1,0 | U_e^\dagger(T_1+T_2) | 1\rangle \langle 0 | U_g^\dagger(T_3) | 0 \rangle \langle 1 | U_e(T_2+T_3) | 1,0 \rangle . \end{gather} This can be reduced to the quantity $\mathcal{X}_1$ by setting: $T_L=-T_1-T_2$, $T_C=-T_3$, $T_R=T_2+T_3$. The non-rephasing component of the stimulated emission contribution is related to the function: \begin{align} \langle 1,0 | U_e^\dagger(T_2) | 1\rangle \langle 0 | U_g^\dagger(T_3) | 0 \rangle \langle 1 | U_e(T_1+T_2+T_3) | 1,0 \rangle . \end{align} This can be reduced to the quantity $\mathcal{X}_1$ by setting: $T_L=-T_2$, $T_C=-T_3$, $T_R=T_1+T_2+T_3$. \subsubsection{Excited state absorption} The rephasing component of the excited state absorption is associated to the quantity: \begin{align} \langle 1,0 | U_e^\dagger(T_1+T_2+T_3) | 2\rangle \langle 3 | U_b (T_3) | 3 \rangle \langle 2 | U_e(T_2) | 1,0 \rangle . \end{align} This can be reduced to the function $\mathcal{X}_2$ by setting: $T_L=-T_1-T_2-T_3$, $T_C=T_3$, $T_R=T_2$. The non-rephasing component of the excited state absorption is associated to the quantity: \begin{align} \langle 1,0 | U_e^\dagger(T_2+T_3) | 2\rangle \langle 3 | U_b (T_3) | 3 \rangle \langle 2 | U_e(T_1+T_2) | 1,0 \rangle \end{align} This can be reduced to the function $\mathcal{X}_2$ by setting: $T_L=-T_2-T_3$, $T_C=T_3$, $T_R=T_1+T_2$. \subsubsection{Double quantum coherence} The first component of the double quantum coherence contribution is related to the function: \begin{align} \langle 1,0 | U_e^\dagger(T_3) | 2\rangle \langle 3 | U_b(T_2+T_3) | 3 \rangle \langle 2 | U_e(T_1) | 1,0 \rangle \end{align} This can be reduced to the quantity $\mathcal{X}_2$ by setting: $T_L=-T_3$, $T_C=T_2+T_3$, $T_R=T_1$.
The second component of the double quantum coherence contribution is related to the function: \begin{align} \langle 1,0 | U_e(T_3) | 2\rangle \langle 3 | U_b(T_2) | 3 \rangle \langle 2 | U_e(T_1) | 1,0 \rangle \end{align} This can be reduced to the quantity $\mathcal{X}_2$ by setting: $T_L=T_3$, $T_C=T_2$, $T_R=T_1$. \section{Conclusions} In conclusion, we have developed an approach for analytically deriving the response functions $\mathcal{R}$ in model systems that include non-adiabatic couplings. The approach is based on the perturbative expansion of the relevant propagators with respect to the non-adiabatic term in the Hamiltonian, and on the formal correspondence between the contributions in the expansion and adiabatic response functions $R$, recently derived for the displaced oscillator model. {\color{black} After performing the Taylor expansion of $R$ with respect to the displacements and integrating with respect to the interaction times, we derive analytical expressions for the one- and three-time propagators and, from these, the linear and nonlinear response functions. It has also been shown that the effect of a simple and yet relevant form of decoherence, including both dephasing and relaxation, can be accounted for by multiplying the above quantities by suitable exponential decay functions.} The approach has been applied to two prototypical model systems, which have been used for modeling a number of physical systems of interest. In these cases, the response functions have been compared with those obtained by an independent numerical approach, showing the convergence of the perturbative approach for time intervals of increasing duration, as the number of terms in the expansion increases. {\color{black} General criteria are given for the convergence of the Dyson expansion, both in the time and in the frequency domains.} The application of the present approach to higher-order response functions or to more complex models, which include more vibrational modes, electronic levels, allowed optical transitions, or more non-adiabatic terms in the Hamiltonian, is conceptually straightforward. {\color{black} In fact, it mainly requires applying the above procedure to the additional pathways that such extensions would allow. Other generalizations can also be envisaged, resulting from a different expression of the non-adiabatic term $V$ in the Hamiltonian. In particular, expressions of such a term that are proportional to the nuclear position operator are often encountered in the literature. This would require an analogous generalization of the adiabatic response function to the case of nuclear-position-dependent transition amplitudes (from Franck-Condon to Herzberg-Teller coupling), which is the object of ongoing investigations.} \acknowledgements The author acknowledges fruitful discussions with Frank Ernesto Quintela Rodriguez. \appendix {\color{black} \section{Equivalent expressions \\ of the Hamiltonian \label{app:x}} Within the subspace $\mathcal{S}_e$, the Hamiltonian $H$ given in Eqs. (1-2) can be written as the sum of a term ($\hbar=1$) \begin{gather} H_a = \alpha {\bf n} \cdot {\bf \sigma} + \beta \sigma_z (a^\dagger+a) \equiv H_{a,1} + H_{a,2} \end{gather} and of a term $H_b$ that is proportional to the identity operator $\mathcal{I}=| 1 \rangle\langle 1 |+| 2 \rangle\langle 2 |$, and plays no role in the following discussion. The components of ${\bf\sigma}$ are the Pauli matrices $\sigma_X$, $\sigma_Y$, and $\sigma_Z$ in the basis $\{|1\rangle,|2\rangle\}$.
The electronic part of the Hamiltonian, $H_{a,1}$, is characterized by the real coupling constant $\alpha$ and by the unit vector ${\bf n} =(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$. The same vector can also be expressed as a function of the Hamiltonian parameters in Eqs. (\ref{eq:ham1}-\ref{eq:ham2}): \begin{gather} {\bf n} = C \left[{\rm Re}(\eta),-{\rm Im}(\eta),\frac{1}{2}(\bar\omega_1-\bar\omega_2)\right], \end{gather} with $C=[|\eta|^2+\tfrac{1}{4}(\bar\omega_1-\bar\omega_2)^2]^{-1/2}$. This determines the eigenstates $|+\rangle$ and $|-\rangle$ of the electronic part, which can also be written as \begin{gather} H_{a,1}= \alpha {\bf n} \cdot {\bf \sigma} = \alpha (| + \rangle\langle + | - | - \rangle\langle - |) \equiv \alpha \tau_Z . \end{gather} With respect to the basis $\{|+\rangle,|-\rangle\}$ and to the corresponding Pauli matrices $\tau_X$, $\tau_Y$, and $\tau_Z$, the Hamiltonian reads: \begin{gather} H_a = \alpha\tau_Z + \beta{\bf m} \cdot {\bf \tau} (a^\dagger+a) , \end{gather} where ${\bf m}=(-\sin\theta\cos\phi,-\sin\theta\sin\phi,\cos\theta)$, or equivalently \begin{gather} {\bf m} = C \left[-{\rm Re}(\eta),{\rm Im}(\eta),\frac{1}{2}(\bar\omega_1-\bar\omega_2)\right]. \end{gather} Therefore, the kind of non-adiabatic Hamiltonian considered in the present paper, characterized by a transverse electronic term $V$ and an electron-vibrational coupling that is diagonal in the diabatic state basis, can also be written as the sum of a diagonal electronic term and of a more general, non-diagonal vibronic coupling.} \section{List of the functions $f_{M,{\bf q},\sigma}(t)$ for $M\le 6$\label{app:lof}} We consider for simplicity the case where the vibrational frequency $\omega$ and that corresponding to the electronic gap ($\bar\omega_{12}$) are incommensurate. Therefore, in view of Eq. (\ref{eq:mfr}), only the frequencies $\omega_{ij}$ where $j-i$ is an odd number can vanish. In particular, this happens if, in addition, $\sum_{k=i}^j m_k=0$. In the following we report, for each value of $M$: the expressions of the functions $A_j$ that apply if all the relevant frequencies $\omega_{ij}$ are nonzero; the expressions that change with respect to the above in case some of the frequencies vanish. If two frequencies $\omega_{ij}$ and $\omega_{mn}$, with $j \neq n$, vanish at the same time, the resulting changes in the functions $A_{j}$, with respect to the case where no frequencies vanish, are all the ones that are derived for $\omega_{ij}=0$ and $\omega_{mn}=0$ independently. {\color{black} \subsubsection{Zero-th order ($M=1$)} This is the contribution of lowest order in $\eta$ to the diagonal propagators $\langle\sigma;0|U_S|\sigma;0\rangle$ ($\sigma=1,2$). It is characterized by terms with \begin{gather} A_1 = 1, \end{gather} with corresponding frequencies $\Omega_1=\omega q_1 +\bar\omega_\sigma$. } \subsubsection{First order ($M=2$)} {\color{black} This is the contribution of lowest order in $\eta$ to the non-diagonal propagators $\langle\sigma;0|U_S|3-\sigma;0\rangle$ ($\sigma=1,2$).} From the general expressions of the coefficients $A_{j}$ [Eq. (\ref{eq:frequencies})] it follows that: \begin{gather} A_{2} = \frac{1}{i\omega_{11}},\ A_{1} = -\frac{1}{i\omega_{11}} . \end{gather} The frequency $\omega_{11}$ is always nonzero. 
{\color{black} Therefore, while considering the Fourier transforms, the first order contribution can only give rise to Lorentzian line shapes ($\hat{f}_0$), centered at the frequencies $\Omega_2=\omega q_2+\bar\omega_{3-\sigma}$ and $\Omega_1=\omega q_1+\bar\omega_{\sigma}$. In the absence of displacement ($z_1=z_2=0$), all the $q_j$ vanish, and the above expressions reduce to \begin{gather*} A_2=\frac{1}{i\bar\omega_{12}},\ A_1=-\frac{1}{i\bar\omega_{12}} . \end{gather*}} \subsubsection{Second order ($M=3$)} {\color{black} This is the contribution of lowest nonzero order in $\eta$ to the diagonal propagators.} From the general expressions of the coefficients $A_{j}$ [Eq. (\ref{eq:frequencies})], if all the frequencies $\omega_{ij}$ are nonzero, it follows that: \begin{gather} A_{3} = -\frac{1}{\omega_{12}\omega_{22}},\ A_{2} = \frac{1}{\omega_{11}\omega_{22}},\ A_{1} =-\frac{1}{\omega_{11}\omega_{12}}. \end{gather} {\color{black} These are multiplied by terms that oscillate at the frequencies $\Omega_{2k-1}=\omega q_{2k-1}+\bar\omega_{\sigma}$ and $\Omega_{2k}=\omega q_{2k}+\bar\omega_{3-\sigma}$.} If instead $\omega_{12}=0$, then the following expressions replace those reported above for the general case: \begin{gather} A_{3} = -\frac{it}{\omega_{22}},\ A_{1} = \frac{1}{\omega_{22}^2}. \end{gather} {\color{black} These coefficients correspond to the frequency $\Omega_1=\Omega_3$ (the equality follows from $\omega_{12}=\Omega_1-\Omega_3=0$). The coefficient $A_{2}$ remains unchanged. In the absence of displacement ($z_1=z_2=0$), the above expressions for $\omega_{12}=0$ reduce to \begin{gather*} A_3=\frac{it}{\bar\omega_{12}},\ A_2=-\frac{1}{\bar\omega_{12}^2},\ A_1=\frac{1}{\bar\omega_{12}^2} . \end{gather*}} \subsubsection{Third order ($M=4$)} From the general expressions of the coefficients $A_{j}$, if all the frequencies $\omega_{ij}$ are nonzero, it follows that: \begin{gather} A_{4} =-\frac{1}{i\omega_{13}\omega_{23}\omega_{33}},\ A_{3} =\frac{1}{i\omega_{12}\omega_{22}\omega_{33}} \nonumber\\ \label{eq:A01} A_{2} =-\frac{1}{i\omega_{11}\omega_{22}\omega_{23}},\ A_{1} =\frac{1}{i\omega_{11}\omega_{12}\omega_{13}} . \end{gather} {\color{black} The corresponding frequencies $\Omega_j$ are given by the same expressions specified for the previous orders.} If $\omega_{12}=0$ (and therefore $\Omega_1=\Omega_3$) and $\omega_{23}\neq 0$, then the following expressions replace those reported above for the general case: \begin{gather}\label{eq:a01} A_{3} = \frac{t}{\omega_{22}\omega_{33}},\ A_{1} = -\frac{1}{i\omega_{22}\omega_{33}} \left( \frac{1}{\omega_{22}} - \frac{1}{\omega_{33}} \right), \end{gather} while $A_{2}$ and $A_{4}$ remain unchanged. If $\omega_{23}=0$ (and therefore $\Omega_2=\Omega_4$) and $\omega_{12}\neq 0$, then the following expressions replace those reported above for the general case: \begin{gather}\label{eq:a02} A_{4} =-\frac{t}{\omega_{11}\omega_{33}},\ A_{2} = \frac{1}{i\omega_{11}\omega_{33}} \left( \frac{1}{\omega_{11}} + \frac{1}{\omega_{33}} \right), \end{gather} while $A_{1}$ and $A_{3}$ remain unchanged. If both $\omega_{12}=\omega_{23}=0$ are zero, then the functions $A_{j}$ are given by the expressions in Eqs. (\ref{eq:a01}-\ref{eq:a02}). {\color{black} In the absence of displacements, these reduce to: \begin{gather*} A_4=-\frac{t}{\bar\omega_{12}^2},\ A_3=-\frac{t}{\bar\omega_{12}^2}, \ A_2= \frac{2}{i\bar\omega_{12}^3},\ A_1=-\frac{2}{i\bar\omega_{12}^3} . 
\end{gather*}} \subsubsection{Fourth order ($M=5$)} From the general expressions of the coefficients $A_{j}$, if all the frequencies $\omega_{ij}$ are nonzero, it follows that: \begin{gather} A_{5}=\frac{1}{\omega_{14}\omega_{24}\omega_{34}\omega_{44}},\ A_{4}=-\frac{1}{\omega_{13}\omega_{23}\omega_{33}\omega_{44}} \nonumber\\ A_{3} = \frac{1}{\omega_{12}\omega_{22}\omega_{33}\omega_{34}},\ A_{2} = - \frac{1}{\omega_{11}\omega_{22}\omega_{23}\omega_{24}} \nonumber\\ \label{eq:A02} A_{1} = \frac{1}{\omega_{11}\omega_{12}\omega_{13}\omega_{14}}. \end{gather} If $\omega_{12}=0$ ($\Omega_1=\Omega_3$), then the above expressions of $A_{1}$ and $A_{3}$ are replaced by the following ones: \begin{gather} A_{3}=\frac{it}{\omega_{22}\omega_{33}\omega_{34}}\nonumber\\ A_{1} = -\frac{1}{\omega_{22}\omega_{33}\omega_{34}} \left( \frac{1}{\omega_{22}} -\frac{1}{\omega_{33}} -\frac{1}{\omega_{34}} \right). \end{gather} If $\omega_{14}=0$ ($\Omega_1=\Omega_5$) and $\omega_{34}\neq 0$, then: \begin{gather} A_{5}=\frac{it}{\omega_{24}\omega_{34}\omega_{44}}\nonumber\\ A_{1} = -\frac{1}{\omega_{24}\omega_{34}\omega_{44}} \left( \frac{1}{\omega_{24}} +\frac{1}{\omega_{34}} +\frac{1}{\omega_{44}} \right). \end{gather} If $\omega_{23}=0$ ($\Omega_2=\Omega_4$), then: \begin{gather} A_{4}=-\frac{it}{\omega_{13}\omega_{33}\omega_{44}} \nonumber\\ A_{2}=\frac{1}{\omega_{13}\omega_{33}\omega_{44}} \left( \frac{1}{\omega_{13}} +\frac{1}{\omega_{33}} -\frac{1}{\omega_{44}} \right). \end{gather} If $\omega_{34}=0$ ($\Omega_3=\Omega_5$) and $\omega_{14}\neq 0$, then: \begin{gather} A_{5}=\frac{it}{\omega_{14}\omega_{24}\omega_{44}}\nonumber\\ A_{3}=-\frac{1}{\omega_{14}\omega_{24}\omega_{44}} \left( \frac{1}{\omega_{14}} +\frac{1}{\omega_{24}} +\frac{1}{\omega_{44}} \right). \end{gather} If $\omega_{14}=\omega_{34}=0$ ($\Omega_1=\Omega_3=\Omega_5$): \begin{gather} A_{5}=-\frac{t^2}{2\omega_{24}\omega_{44}},\ A_{3}=-\frac{it}{\omega_{24}\omega_{44}}\left(\frac{1}{\omega_{24}}+\frac{1}{\omega_{44}}\right)\nonumber\\ A_{1}=\frac{1}{\omega_{24}\omega_{44}} \left(\frac{1}{\omega^2_{24}}+\frac{1}{\omega^2_{44}}+\frac{1}{\omega_{24}\omega_{44}}\right). \end{gather} {\color{black} If the oscillator doesn't undergo any displacement in the states $|1\rangle$ and $|2\rangle$ ($z_1=z_2=0$), then all the frequencies $\omega_{ij}$ with even (odd) $i$ and odd (even) $j$ vanish. The above equations reduce to: \begin{gather*} A_5=-\frac{t^2}{2\bar\omega_{12}^2}, \ A_4=\frac{it}{\bar\omega_{12}^3}, \ A_3=\frac{2it}{\bar\omega_{12}^3}, \\ A_2= -\frac{3}{\bar\omega_{12}^4}, \ A_1=\frac{3}{\bar\omega_{12}^4} . \end{gather*}} \subsubsection{Fifth order ($M=6$)} From the general expressions of the coefficients $A_{j}$, if all the frequencies $\omega_{ij}$ are nonzero, it follows that: \begin{gather} A_{6}=\frac{1}{i\omega_{15}\omega_{25}\omega_{35}\omega_{45}\omega_{55}},\ A_{5}=-\frac{1}{i\omega_{14}\omega_{24}\omega_{34}\omega_{44}\omega_{55}}\nonumber\\ A_{4}=\frac{1}{i\omega_{13}\omega_{23}\omega_{33}\omega_{44}\omega_{45}},\ A_{3}=-\frac{1}{i\omega_{12}\omega_{22}\omega_{33}\omega_{34}\omega_{35}} \nonumber\\ A_{2}=\frac{1}{i\omega_{11}\omega_{22}\omega_{23}\omega_{24}\omega_{25}},\ A_{1}=-\frac{1}{i\omega_{11}\omega_{12}\omega_{13}\omega_{14}\omega_{15}}. \end{gather} {\color{black} The Fourier transform of these contributions thus give rise to Lorentzian line shapes ($\hat{f}_0$), centered at the frequencies $\Omega_j$. Other vectors ${\bf q}$ in the Taylor expansion will give rise to vanishing frequencies. 
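Before turning to the cases with vanishing frequencies, we note that the nondegenerate coefficients listed above can be cross-checked numerically against the recurrence relations given after Eq. (\ref{eq:05}) and the closed form of Eq. (\ref{eq:frequencies}). A minimal sketch of such a check (in Python) is the following, where the frequencies are generated from randomly chosen $\Omega_j$, so that the constraint $\omega_{kj}=\Omega_k-\Omega_{j+1}$ is automatically satisfied.
\begin{verbatim}
# Cross-check of the nondegenerate coefficients: the recurrence relations
# given in the text are compared with the closed-form expression for A_{M-m}.
# The frequencies are built from random Omega_j, so that
# omega_{kj} = Omega_k - Omega_{j+1} holds by construction.
import numpy as np

rng = np.random.default_rng(0)
M = 6
Omega = rng.uniform(1.0, 2.0, M + 1)      # Omega[1..M] used, Omega[0] unused
w = lambda k, j: Omega[k] - Omega[j + 1]  # omega_{kj}

# recurrence, starting from A_{M-1,1} = 1
A = {(M - 1, 1): 1.0 + 0j}
for k in range(1, M):
    for j in range(2, k + 2):
        A[(M - 1 - k, j)] = A[(M - k, j - 1)] / (1j * w(M - k, M - 2 - k + j))
    A[(M - 1 - k, 1)] = -sum(A[(M - 1 - k, j)] for j in range(2, k + 2))

# closed form for A_{M-m} = A_{0,M-m}
def closed(m):
    num = (-1) ** m * 1j ** (1 - M)
    den = np.prod([w(k, M - m - 1) for k in range(1, M - m)]) \
        * np.prod([w(M - m, l) for l in range(M - m, M)])
    return num / den

print(max(abs(A[(0, M - m)] - closed(m)) for m in range(M)))
# should be of the order of machine precision
\end{verbatim}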
We start by considering the case where only one frequency vanishes within each group $\omega_{ik}$ ($i=1,\dots,k$).} If $\omega_{12}=0$ (and therefore $\Omega_1=\Omega_3$), then: \begin{gather} A_{3}=-\frac{t}{\omega_{22}\omega_{33}\omega_{34}\omega_{35}}\nonumber\\ A_{1}=\frac{1}{i\omega_{22}\omega_{33}\omega_{34}\omega_{35}} \left( \frac{1}{\omega_{22}} -\frac{1}{\omega_{33}} -\frac{1}{\omega_{34}} -\frac{1}{\omega_{35}} \right). \end{gather} If $\omega_{14}=0$ (and therefore $\Omega_1=\Omega_5$) and $\omega_{34}\neq 0$, then: \begin{gather} A_{5}=-\frac{t}{\omega_{24}\omega_{34}\omega_{44}\omega_{55}}\nonumber\\ A_{1}=\frac{1}{i\omega_{24}\omega_{34}\omega_{44}\omega_{55}} \left( \frac{1}{\omega_{24}} +\frac{1}{\omega_{34}} +\frac{1}{\omega_{44}} -\frac{1}{\omega_{55}} \right). \end{gather} If $\omega_{23}=0$ (and therefore $\Omega_2=\Omega_4$), then: \begin{gather} A_{4}=\frac{t}{\omega_{13}\omega_{33}\omega_{44}\omega_{45}},\ \nonumber\\ A_{2}=-\frac{1}{i\omega_{13}\omega_{33}\omega_{44}\omega_{45}} \left( \frac{1}{\omega_{13}} +\frac{1}{\omega_{33}} -\frac{1}{\omega_{44}} -\frac{1}{\omega_{45}} \right) . \end{gather} If $\omega_{25}=0$ (and therefore $\Omega_2=\Omega_6$) and $\omega_{45}\neq 0$, then: \begin{gather} A_{6}=\frac{t}{\omega_{15}\omega_{35}\omega_{45}\omega_{55}}\\ A_{2}=\frac{1}{i\omega_{15}\omega_{35}\omega_{45}\omega_{55}} \left( \frac{1}{\omega_{15}} +\frac{1}{\omega_{35}} +\frac{1}{\omega_{45}} +\frac{1}{\omega_{55}} \right) \end{gather} If $\omega_{34}=0$ (and therefore $\Omega_3=\Omega_5$) and $\omega_{14}\neq 0$, then: \begin{gather} A_{5}=-\frac{t}{\omega_{14}\omega_{24}\omega_{44}\omega_{55}}\\ A_{3}=\frac{1}{i\omega_{14}\omega_{24}\omega_{44}\omega_{55}} \left( \frac{1}{\omega_{14}} +\frac{1}{\omega_{24}} +\frac{1}{\omega_{44}} -\frac{1}{\omega_{55}} \right) . \end{gather} If $\omega_{45}=0$ (and therefore $\Omega_4=\Omega_6$) and $\omega_{25}\neq 0$, then: \begin{gather} A_{6}=\frac{t}{\omega_{15}\omega_{25}\omega_{35}\omega_{55}}\\ A_{4}=-\frac{1}{i\omega_{15}\omega_{25}\omega_{35}\omega_{55}} \left( \frac{1}{\omega_{15}} +\frac{1}{\omega_{25}} +\frac{1}{\omega_{35}} +\frac{1}{\omega_{55}} \right) . \end{gather} {\color{black} In all these cases, the Fourier transform gives rise to functions $\hat{f}_0$ and $\hat{f}_1$, both centered at the relevant frequencies $\Omega_j$. We finally consider the case where two frequencies vanish within each group $\omega_{ik}$ ($i=1,\dots,k$).} If $\omega_{14}=\omega_{34}=0$ (and therefore $\Omega_1=\Omega_3=\Omega_5$): \begin{gather} A_{5}=\frac{t^2}{2i\omega_{24}\omega_{44}\omega_{55}}\nonumber\\ A_{3}=\frac{t}{\omega_{24}\omega_{44}\omega_{55}}\left(\frac{1}{\omega_{24}}+\frac{1}{\omega_{44}}-\frac{1}{\omega_{55}}\right)\nonumber\\ A_{1}=-\frac{1}{i\omega_{24}\omega_{44}\omega_{55}} \left(\frac{1}{\omega^2_{24}}+\frac{1}{\omega^2_{44}}+\frac{1}{\omega^2_{55}}\right.\nonumber\\ \left.+\frac{1}{\omega_{24}\omega_{44}}-\frac{1}{\omega_{24}\omega_{55}}-\frac{1}{\omega_{44}\omega_{55}}\right). 
\end{gather} If $\omega_{25}=\omega_{45}=0$ ($\Omega_2=\Omega_4=\Omega_6$): \begin{gather} A_{6}=-\frac{t^2}{2i\omega_{15}\omega_{35}\omega_{55}}\nonumber\\ A_{4}=-\frac{t}{\omega_{15}\omega_{35}\omega_{55}}\left(\frac{1}{\omega_{15}}+\frac{1}{\omega_{35}}+\frac{1}{\omega_{55}}\right)\nonumber\\ A_{2}=\frac{1}{i\omega_{15}\omega_{35}\omega_{55}} \left(\frac{1}{\omega^2_{15}}+\frac{1}{\omega^2_{35}}+\frac{1}{\omega^2_{55}}\right.\nonumber\\ \left.+\frac{1}{\omega_{15}\omega_{35}}+\frac{1}{\omega_{15}\omega_{55}}+\frac{1}{\omega_{35}\omega_{55}}\right). \end{gather} {\color{black} If the displacements corresponding to the states $|1\rangle$ and $|2\rangle$ vanish, then $\omega_{ij}=0$ for even (odd) $i$ and odd (even) $j$. The above equations thus reduce to: \begin{gather*} A_6=-\frac{t^2}{2i\bar\omega_{12}^3},\ A_5=\frac{t^2}{2i\bar\omega_{12}^3},\ A_4=-\frac{3t}{\bar\omega_{12}^4}, \\ A_3=-\frac{3t}{\bar\omega_{12}^4},\ A_2=\frac{6}{i\bar\omega_{12}^5},\ A_1=-\frac{6}{i\bar\omega_{12}^5} . \end{gather*}} \end{document}
\begin{document} \title{Dual species matter qubit entangled with light} \date{\today } \author{S.-Y. Lan, S. D. Jenkins,$^{*}$ T. Chaneli\`{e}re,$^{\star}$ D. N. Matsukevich,$^{\dagger}$ C. J. Campbell, R. Zhao, T. A. B. Kennedy, and A. Kuzmich} \affiliation{School of Physics, Georgia Institute of Technology, Atlanta, Georgia 30332-0430} \pacs{42.50.Dv,03.65.Ud,03.67.Mn} \begin{abstract} We propose and demonstrate an atomic qubit based on a cold $^{85}$Rb-$^{87}$Rb isotopic mixture, entangled with a frequency-encoded optical qubit. The interface of an atomic qubit with a single spatial light mode, and the ability to independently address the two atomic qubit states, should provide the basic element of an interferometrically robust quantum network. \end{abstract} \maketitle Quantum mechanics permits the secure communication of information between remote parties \cite{bennett,ekert,bouwmeester}. However, direct optical fiber based quantum communication over distances greater than about 100 km is challenging due to intrinsic fiber losses. To overcome this limitation it is necessary to take advantage of quantum state storage at intermediate locations on the transmission channel. Interconversion of the information from light to matter to light is therefore essential. It was the necessity to interface photonic communication channels and storage elements that led to the proposal of the quantum repeater as an architecture for long-distance distribution of quantum information via qubits \cite{briegel,duan}. Recently there has been rapid progress in interfacing photonic and stored atomic qubits. Two-ensemble encoding of matter qubits was used to achieve entanglement of photonic and atomic rubidium qubits and quantum state transfer from matter to light \cite{matsukevich}. This was followed by a more robust single-ensemble qubit encoding \cite{matsukevich1}, which led to full light-matter-light qubit interconversion and entanglement of two remote atomic qubits \cite{matsukevich2}. More recently, both two-ensemble and single-ensemble atomic qubits were reported using cesium gas \cite{chou,riedma}. To realize scalable long-distance qubit distribution, telecommunication-wavelength photons and long-lived quantum memory elements are required \cite{chaneliere}. Although multiplexing of atomic memory elements vastly improves the dependence of entanglement distribution on storage lifetime \cite{collins}, there remains the problem of robust atomic and photonic qubits for long-distance communication. Two-ensemble encoding suffers from the problem of long-term interferometric phase stability, while qubit states encoded in a single ensemble are hard to individually address. \begin{figure} \caption{(a) Schematic shows two orthogonal qubit states (arrows) encoded in two atomic ensembles coupled to distinct spatial light modes \cite{duan,matsukevich}.\label{fig:1}} \end{figure} A protocol for implementing entanglement distribution with an atomic ensemble-based quantum repeater has been proposed \cite{duan}. It involves generating and transmitting each of the qubit basis states individually, in practice via two interferometrically separate channels. Under prevailing conditions of low overall efficiencies it provides improved scaling compared to direct qubit entanglement distribution \cite{briegel}.
Its disadvantage is the necessity to stabilize the length of both transmission channels to a small fraction of the optical wavelength, as the distribution of qubit entanglement is sensitive to the relative phase fluctuations in the two arms. In this Letter we propose an interferometrically robust quantum repeater element based on entangled mixed species atomic, and frequency-encoded photonic, qubits, Fig. 1. This avoids the use of two interferometrically separate paths for qubit entanglement distribution. The qubit basis states are encoded as single spin wave excitations in each one of the two atomic species co-trapped in the same region of space. The spectroscopically resolved transitions enable individual addressing of the atomic species. Hence one may perform independent manipulations in the two repeater arms which share a single mode transmission channel. Phase stability is achieved by eliminating the relative ground state energy shifts of the co-trapped atomic species, as is in any case essential to successfully read out an atomic excitation \cite{revival}. \begin{figure} \caption{Schematic of the experimental set-up showing the geometry of the addressing and scattered fields from the co-trapped isotope mixture of $^{85}$Rb and $^{87}$Rb.\label{fig:Schematic}} \end{figure} We consider a co-trapped isotope mixture of $^{85}$Rb and $^{87}$Rb, containing, respectively, $N_{85}$ and $N_{87}$ atoms cooled in a magneto optical trap, as shown in Fig. 2. Unpolarized atoms of isotope $\nu$ ($\nu \in \{85, 87\}$) are prepared in the ground hyperfine level $\isolevlab{a}{\nu }$, where $\isolevlab{a}{85} \equiv \levellabel{5S_{1/2},F_a^{(85)} = 3}$, $\isolevlab{a}{87} \equiv \levellabel{5S_{1/2},F_a^{(87)} = 2}$, and $F_f^{(\nu )}$ is the total atomic angular momentum for level $\isolevlab{f}{\nu }$. We consider the Raman configuration with ground levels $\isolevlab{a}{\nu}$ and $\isolevlab{b}{\nu}$ and excited level $\isolevlab{c}{\nu}$ with energies $\hbar\omega_{a}^{(\nu )}$, $\hbar\omega_{b}^{(\nu )}$, and $\hbar\omega_{c}^{(\nu )}$ respectively. Level $\isolevlab{b}{\nu }$ corresponds to the ground hyperfine level with smaller angular momentum, while level $\isolevlab{c}{\nu }$ is the $\levellabel{5P_{1/2}}$ hyperfine level with $F_c^{(\nu )}=F_a^{(\nu )}$. A 150 ns long \textit{write} laser pulse of wave vector $\mathbf{k}_w = k_w\unitvec{y}$, horizontal polarization $\mathbf{e}_H = \unitvec{z}$ and temporal profile $\varphi(t)$ (normalized to unity $\int dt~|\varphi(t)|^2 =1$) impinges on an electro-optic modulator (EOM), producing sidebands with frequencies $ck_w^{(85)} = ck_w + \delta \omega_w$ and $ck_w^{(87)} = ck_w - \delta\omega_w$ ($\delta \omega_w = 531.5$ MHz) nearly resonant on the respective isotopic $D_1$ ($\isolevlab{a}{\nu } \leftrightarrow \isolevlab{c}{\nu }$) transitions with detunings $\Delta_{\nu } = ck_w^{(\nu )} - (\omega_c^{(\nu )} - \omega_a^{(\nu )}) \approx -10 $ MHz. Spontaneous Raman scattering of the \textit{write} fields results in signal photons with frequencies $ck_s^{(\nu )} = ck_w^{(\nu )} + (\omega_b^{(\nu )} - \omega_a^{(\nu )})$ on the $\isolevlab{b}{\nu } \leftrightarrow \isolevlab{c}{\nu }$ transitions.
The positive frequency component of the detected signal electric field from isotope $\nu $ with vertical polarization $\mathbf{e}_V$ is given by \begin{eqnarray} \vecop{E}^{(\nu)(+)}(\mathbf{r},t) &=& \sqrt{\frac{\hbar k_s^{(\nu)}}{2\epsilon_0}} e^{-ick_s^{(\nu)} (t - \unitvec{k}_s^{(\nu)} \cdot \mathbf{r})} \nonumber \\ & & \times u_s(\mathbf{r}) \hat{\psi}_s^{(\nu)}(t-\unitvec{k}_s\cdot \mathbf{r}) \mathbf{e}_V \mbox{,} \label{eq:SigEfield} \end{eqnarray} where $u_s(\spvec{r})$ is the transverse spatial profile of the signal field (normalized to unity in its transverse plane), and $\hat{\psi}_s^{(\nu)}(t)$ is the annihilation operator for the signal field. These operators obey the usual free field, narrow bandwidth bosonic commutation relations $[ \hat{\psi}_s^{(\nu)}(t), \hat{\psi}_s^{(\nu')\dag}(t') ] = \delta_{\nu,\nu'} \delta(t-t')$. The emission of $V$-polarized signal photons creates correlated atomic spin-wave excitations with annihilation operators given by \begin{equation} \hat{s}^{(\nu)} = \cos\theta_\nu \hat{s}_{-1}^{(\nu)} - \sin\theta_\nu \hat{s}_{+1}^{(\nu)} \mbox{,} \label{eq:HspinWaveDef} \end{equation} where $$\cos^2\theta_\nu = \sum_{m=-F_a^{(\nu)}}^{F_a^{(\nu)}} X_{m,-1}^{(\nu)\,2} / \sum_{\alpha = \pm1} \sum_{m=-F_a^{(\nu)}}^{F_a^{(\nu)}} X_{m,\alpha}^{(\nu)\,2},$$ $X_{m,\alpha}^{(\nu)} \equiv \cgcoeff{F_a^{(\nu)}}{1}{F_c^{(\nu)}}{m}{0}{m} \cgcoeff{F_b^{(\nu)}}{1}{F_c^{(\nu)}}{m-\alpha}{\alpha}{m} $ is a product of Clebsch-Gordan coefficients, and the spherical vector components of the spin wave are given by \begin{equation} \hat{s}_\alpha^{(\nu)} = \sum_{m=-F_a^{(\nu)}}^{F_a^{(\nu)}} \frac{X_{m,\alpha}^{(\nu)}}{\sqrt{\sum_{m=-F_a^{(\nu)}}^{F_a^{(\nu)}} \left|X_{m,\alpha}^{(\nu)}\right|^2}} \hat{s}_{m,\alpha}^{(\nu)} \mbox{.} \label{eq:spSwaveComp} \end{equation} The spin wave Zeeman components of isotope $\nu$ are given in terms of the $\mu$-th $^\nu$Rb atom transition operators ${\sigma}_{\isolevind{a}{\nu},m;\: \isolevind{b}{\nu},m'}$ and the \textit{write} $u_w(\spvec{r})$ and signal $u_s(\spvec{r})$ field spatial profiles \begin{eqnarray} \hat{s}_{m,\alpha}^{(\nu)} &=& i{A}^{(\nu)}\sqrt{ \frac{(2F_a^{(\nu )}+1)}{N_\nu} } \sum_{\mu}^{N_\nu} {\sigma}_{\isolevind{a}{\nu},m;\: \isolevind{b}{\nu}, m}^{\mu} \nonumber\\ & &\times e^{i\left(\mathbf{k}_s^{(\nu)}-\mathbf{k}_w^{(\nu)}\right) \cdot \mathbf{r}_{\mu}} u_s(\mathbf{r}_{\mu}) u_w^{\ast}(\mathbf{r}_{\mu}) . \label{eq:swavemCompDef} \end{eqnarray} The effective overlap of the \textit{write} beam and the detected signal mode \cite{jenkins} is given by \begin{equation} {A}^{(\nu)} = \left(\int d^3r \left| u_s(\spvec{r}) u_w^{\ast}(\spvec{r}) \right|^2 \frac{n^{(\nu)}(\spvec{r})}{N_\nu} \right)^{-1/2}, \label{eq:AbarDef} \end{equation} where $n^{(\nu)}(\spvec{r})$ is the number density of isotope $\nu$. The interaction responsible for scattering into the collected signal mode is given by \begin{eqnarray} \hat{H}_s(t) &=& i\hbar\chi\varphi(t) \Bigl( \cos\eta\, \hat{\psi}_s^{(85)\dag}(t) \hat{s}^{(85)\dag} \nonumber\\ & & + \sin\eta\, \hat{\psi}_s^{(87)\dag}(t) \hat{s}^{(87)\dag} \Bigr) + h.c. 
\mbox{,} \label{eq:sigHam} \end{eqnarray} where $\chi \equiv \sqrt{\chi_{85}^2 + \chi_{87}^2}$ is a dimensionless interaction parameter, \begin{equation} \chi_\nu \equiv \frac{\sqrt{2}d_{cb}^{(\nu)}d_{ca}^{(\nu)}}{{A}^{(\nu)}\Delta_\nu} \frac{k_s^{(\nu)}k_w^{(\nu)}n_w^{(\nu)} N_\nu} {(2F_a^{(\nu )}+1)\hbar \epsilon_0 } \sqrt{\sum_{\alpha=\pm 1} \sum_{m=-F_a^{(\nu)}}^{F_a^{(\nu)}} \left|X_{m,\alpha}^{(\nu)}\right|^2} \mbox{,} \label{eq:chi_isoDef} \end{equation} $d_{ca}^{(\nu)} $ and $d_{cb}^{(\nu)} $ are reduced matrix elements, $n_w^{(\nu)}$ is the average number of photons in the \textit{write} pulse sideband with frequency $ck_w^{(\nu)}$, and the parametric mixing angle $\eta$ is given by $ \cos^2\eta = \chi_{85}^2/(\chi_{85}^2 + \chi_{87}^2)$. The interaction picture Hamiltonian also includes terms representing Rayleigh scattering and Raman scattering into undetected modes. One can show, however, that these terms commute with the signal Hamiltonian (Eq.~(\ref{eq:sigHam})) and with the operators $\hat{\psi}_s^{(\nu)}(t)$ and $\hat{s}^{(\nu)}$ to order $O(1/\sqrt{N})$. As a result, the interaction picture density operator for the signal-spin wave system (tracing over undetected field modes) is given by $\hat{U}\hat{\rho}_0\hat{U}^{\dag}$, where ${\hat{\rho}_0}$ is the initial density matrix of the unpolarized ensemble and the vacuum electromagnetic field, and the unitary operator $\hat{U}$ is given by \begin{equation} \ln \hat{U} = {\chi (\cos\eta \hat{a}^{(85)\dag}\hat{s}^{(85)\dag} + \sin\eta \hat{a}^{(87)\dag}\hat{s}^{(87)\dag} - h.c. )}, \label{eq:Udef} \end{equation} where $\hat{a}^{(\nu)} = \int dt \varphi^{\ast}(t) \hat{\psi}_s^{(\nu)}(t)$ is the discrete signal mode bosonic operator. When the \textit{write} pulse is sufficiently weak we may write $\hat{U}-1 = \chi (\cos\eta \hat{a}^{(85)\dag}\hat{s}^{(85)\dag} + \sin\eta \hat{a}^{(87)\dag}\hat{s}^{(87)\dag}) + O(\chi^2)$, i.e., the Raman scattering produces entanglement between a two-mode field (frequency qubit) and the isotopic spin wave (dual species matter qubit). Although we explicitly treat isotopically distinct species, it is clear that the analysis is easily generalized to chemically distinct atoms and/or molecules. To characterize the nonclassical correlations of this system, the signal field is sent to an electro-optic phase modulator (PM2 in Fig. 2) driven at a frequency $\delta\omega_s = \delta\omega_w - \big[ \big(\omega_a^{(87)} -\omega_b^{(87)}\big) - \big(\omega_a^{(85)} - \omega_b^{(85)}\big) \big] /2 = 1368$ MHz. The modulator combines the two signal frequency components into a central frequency $ck_s = c(k_s^{(85)} + k_s^{(87)})/2$ with a relative phase $\phi_s$. A photoelectric detector preceded by a filter (an optical cavity, E1 in Fig. 2) which reflects all but the central signal frequency is used to measure the statistics of the signal. We describe the detected signal field using the bosonic field operator, \begin{eqnarray} \hat{\psi}_s(t,\phi_s)=\sqrt{\frac{\epsilon_s^{(85)}}{2}} e^{-i\phi_s/2} \hat{\psi}_{s}^{(85)}(t) + \sqrt{\frac{\epsilon_s^{(87)}}{2}}e^{i\phi_s/2} \hat{\psi}_s^{(87)}(t) \nonumber\\ +\sqrt{\frac{1-\epsilon_s^{(85)}}{2}}e^{-i\phi_s/2} \hat{\xi}_s^{(85)}(t)+\sqrt{\frac{1-\epsilon_s^{(87)}}{2}}e^{i\phi_s/2} \hat{\xi}_s^{(87)}(t) \nonumber \label{eq:detSigFOp} \end{eqnarray} where $\epsilon_s^{(\nu)} \in [0,1]$ is the signal efficiency including propagation losses and losses to other frequency sidebands within PM2, and $\hat{\xi}_s^{(\nu)}(t)$ represents concomitant vacuum noise. 
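As a rough numerical cross-check of the modulator frequency $\delta\omega_s$ quoted above, one may insert the ground-state hyperfine splittings of the two isotopes; the values used in the sketch below, approximately $3.0357$ GHz for $^{85}$Rb and $6.8347$ GHz for $^{87}$Rb, are standard literature values assumed here rather than taken from the present measurements.
\begin{verbatim}
# Cross-check of the PM2 drive frequency (all values in MHz).  The hyperfine
# splittings below are standard literature values, not quoted in the text.
hf_85 = 3035.7   # 85Rb 5S_{1/2} ground-state hyperfine splitting
hf_87 = 6834.7   # 87Rb 5S_{1/2} ground-state hyperfine splitting
d_w = 531.5      # write-pulse sideband offset

d_s = d_w - (hf_87 - hf_85) / 2.0
print(d_s)       # about -1368; the text quotes the magnitude, 1368 MHz
\end{verbatim}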
While quantum memory times in excess of 30 $\mu$s have been demonstrated \cite{dspg}, here the spin wave qubit is retrieved after 150 ns by shining a vertically polarized \textit{read} pulse into a third electro-optic phase modulator (PM3 in Fig. 2), producing two sidebands with frequencies $ck_r^{(85)}$ and $ck_r^{(87)}$ resonant on the $\isolevlab{b}{85} \leftrightarrow \isolevlab{c}{85}$ and $\isolevlab{b}{87} \leftrightarrow \isolevlab{c}{87}$ transitions, respectively. This results in the transfer of the spin wave excitations to horizontally polarized idler photons emitted in the phase matched directions $\spvec{k}_i^{(\nu)} = \spvec{k}_w^{(\nu)} - \spvec{k}_s^{(\nu)} + \spvec{k}_r^{(\nu)}$. We treat the retrieval dynamics using the effective beam splitter relations $\hat{b}^{(\nu)} = \sqrt{\epsilon_r^{(\nu)}} \hat{s}^{(\nu)} + \sqrt{1-\epsilon_r^{(\nu)}} \hat{\xi}_r^{(\nu)}$, where $\epsilon_r^{(\nu)}$ is the retrieval efficiency of the spin wave stored in the isotope $^\nu$Rb, $\hat{b}^{(\nu)} = \int dt \varphi_i^{(\nu)\ast}(t) \hat{\psi}_i^{(\nu)}(t)$ is the discrete idler bosonic operator for an idler photon of frequency $ck_i^{(\nu)}$, $\varphi_i^{(\nu)}(t)$ is the temporal profile of an idler photon emitted from the $^\nu$Rb spin wave (normalized to unity), and $\hat{\psi}_i^{(\nu)}(t)$ is the annihilation operator for an idler photon emitted at time $t$. As with the signal operators, the idler field operators obey the usual free field, narrow bandwidth bosonic commutation relations $[\hat{\psi}_i^{(\nu)}(t), \hat{\psi}_i^{(\nu')\dag}(t')] = \delta_{\nu,\nu'} \delta(t-t')$. A fourth EOM, PM4, driven at a frequency $\delta\omega_i = \delta\omega_w - (\Delta_{85} + \Delta_{87})/2=531.5$ MHz combines the idler frequency components into a sideband with frequency $ck_i = c(k_i^{(85)} + k_i^{(87)})/2$ with a relative phase $\phi_i$. The combined idler field is measured by a photon counter preceded by a frequency filter (an optical cavity, E2 in Fig. 2) which only transmits fields of the central frequency $ck_i$. The detected idler field is described by the bosonic field operator, \begin{eqnarray} \hat{\psi}_i(t,\phi_i) = \sqrt{\frac{\epsilon_i^{(85)}}{2}} e^{i\phi_i/2} \hat{\psi}_{i}^{(85)}(t) + \sqrt{\frac{\epsilon_i^{(87)}}{2}}e^{-i\phi_i/2} \hat{\psi}_i^{(87)}(t) \nonumber\\ +\sqrt{\frac{1-\epsilon_i^{(85)}}{2}}e^{i\phi_i/2} \hat{\xi}_i^{(85)}(t)+\sqrt{\frac{1-\epsilon_i^{(87)}}{2}}e^{-i\phi_i/2} \hat{\xi}_i^{(87)}(t) \nonumber \end{eqnarray} where $\epsilon_i^{(\nu)} \in [0,1]$ is the idler efficiency including propagation losses and losses to other frequency sidebands within PM4, and $\hat{\xi}_i^{(\nu)}(t)$ represents associated vacuum noise. The {\it write-read} protocol in our experiment is repeated $2\cdot 10^5$ times per second. The signal-idler correlations result in phase-dependent coincidence rates given, up to detection efficiency factors, by $C_{si}(\phi_s,\phi_i) = \int dt_s \int dt_i \left\langle \hat{\psi}_s^{\dag}(t_s,\phi_s)\hat{\psi}_i^{\dag}(t_i,\phi_i) \hat{\psi}_i(t_i,\phi_i) \hat{\psi}_s(t_s,\phi_s) \right\rangle$. 
From the state of the atom-signal system after the \textit{write} process, $\hat{U}\hat{\rho}_0\hat{U}^{\dag}$, (Eq.\ref{eq:Udef}), we calculate the coincidence rates to second order in $\chi$, \begin{eqnarray} \lefteqn{C_{si}(\phi_s,\phi_i) = \frac{\chi^2 }{4} \biggl( \mu^{(85)}\cos^2\eta + \mu^{(87)} \sin^2\eta} \nonumber\\ & &+ \Upsilon \sqrt{\mu^{(85)}\mu^{(87)}} \sin 2\eta \cos\left(\phi_i-\phi_s+\phi_0\right) \biggr) \label{eq:fringes} \end{eqnarray} where $\mu^{(\nu)}\equiv \epsilon_r^{(\nu)} \epsilon_i^{(\nu)}\epsilon_s^{(\nu)}$, and $\Upsilon$ and $\phi_0$ represent a real amplitude and phase, respectively, such that \begin{equation} \Upsilon e^{-i\phi_0} = e^{-\left(\delta\phi_s^2 + \delta\phi_i^2\right)/2} \int dt \varphi_i^{(85)*}(t)\varphi_i^{(87)}(t) \mbox{,} \label{eq:upsphi0Def} \end{equation} and we account for classical phase noise in the rf driving of the EOM pairs PM1,4 and PM2,3, by treating $\phi_s$ and $\phi_i$ as Gaussian random variables with variances $\delta\phi_s^2$ and $\delta\phi_i^2$ respectively, see Fig. 2. When the write fields are detuned such that the rates of correlated signal-idler coincidences are equal (i.e., when $\mu^{(85)} \cos^2\eta = \mu^{(87)} \sin^2\eta$), the fringe visibility is maximized, and Eq. (\ref{eq:fringes}) reduces to \begin{equation} C_{si}(\phi_s,\phi_i)=\frac{\chi^2}{2} \mu^{(85)}\cos^2\eta [ 1 + \Upsilon \cos (\phi_i-\phi_s+\phi_0) ]. \label{eq:fringes1} \end{equation} \begin{figure} \caption{Measured $C_{si}$ as a function of $\phi_i$ for two values of $\phi_s$.\label{fig:cFringes}} \end{figure} Fig. 3 shows coincidence fringes as a function of $\phi_i$ taken for two different values of $\phi_s$. The detection rates measured separately for $^{85}$Rb and $^{87}$Rb were (a) $53$ Hz and $ 62$ Hz on D1 and (b) $95$ Hz and $107$ Hz on D2, respectively. These rates correspond to a level of random background counts about 2.5 times lower than the minima of the interference fringes. This implies that the observed value of visibility $\Upsilon = 0.86$ cannot be accounted for by random photoelectric coincidences alone. The additional reduction of visibility may be due to variations in the idler phases caused by temporal variations in the cloud densities during data accumulation, while the effects of rf phase noise are believed to be negligible. Following Ref. \cite{walls} we calculate the correlation function $E(\phi_s,\phi_i)$, given by \begin{equation} \frac{C_{si}(\phi_s,\phi_i) - C_{si}(\phi_s,\phi_i^\perp) - C_{si}(\phi_s^{\perp},\phi_i) + C_{si}(\phi_s^{\perp},\phi_i^{\perp})} {C_{si}(\phi_s,\phi_i) + C_{si}(\phi_s,\phi_i^\perp) + C_{si}(\phi_s^{\perp},\phi_i) + C_{si}(\phi_s^{\perp},\phi_i^{\perp})} \mbox{,} \label{eq:Edef} \end{equation} where $\phi_{s[i]}^{\perp} = \phi_{s[i]}+\pi$. We note that, by analogy with polarization correlations, the detected signal [idler] field $\hat{\psi}_{s[i]}(t,\phi_{s[i]}^{\perp})$ is orthogonal to $\hat{\psi}_{s[i]}(t,\phi_{s[i]})$, i.e., $\left[\hat{\psi}_{s[i]}(t,\phi_{s[i]}), \hat{\psi}_{s[i]}^{\dag}(t',\phi_{s[i]}^{\perp})\right] = 0$. One finds that a classical local hidden variable theory yields the Bell inequality $|S| \le 2$, where $S \equiv E(\phi_s,\phi_i)-E(\phi_s^{\prime},\phi_i) - E(\phi_s,\phi_i^{\prime}) - E(\phi_s^{\prime},\phi_i^{\prime})$ \cite{bell}.
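Under the fringe model of Eq. (\ref{eq:fringes1}), a direct numerical scan of this combination over the analyzer phases (a simple consistency check; the overall prefactor of $C_{si}$ cancels in $E$, and $\phi_0$ is set to zero for simplicity) shows that $|S|$ cannot exceed $2\sqrt{2}\,\Upsilon$, in agreement with the analytical result derived next.
\begin{verbatim}
# Scan of the CHSH combination S for the fringe model above.  The overall
# prefactor of C_si drops out of E; Upsilon is the measured visibility and
# phi_0 is set to zero for simplicity.
import numpy as np

Upsilon, phi0 = 0.86, 0.0

def C(phi_s, phi_i):
    return 1.0 + Upsilon * np.cos(phi_i - phi_s + phi0)

def E(phi_s, phi_i):
    a = C(phi_s, phi_i);          b = C(phi_s, phi_i + np.pi)
    c = C(phi_s + np.pi, phi_i);  d = C(phi_s + np.pi, phi_i + np.pi)
    return (a - b - c + d) / (a + b + c + d)

def S(ps, pi, ps2, pi2):
    return E(ps, pi) - E(ps2, pi) - E(ps, pi2) - E(ps2, pi2)

grid = np.arange(0.0, 2.0 * np.pi, np.pi / 8.0)
best = max(abs(S(a, b, c, d)) for a in grid for b in grid
                              for c in grid for d in grid)
print(best, 2.0 * np.sqrt(2.0) * Upsilon)   # both approximately 2.43
\end{verbatim}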
Using Eq. (\ref{eq:fringes1}), the correlation function is given by \begin{equation} E(\phi_s,\phi_i) = \Upsilon \cos (\phi_s-\phi_i+\phi_0) \mbox{.} \end{equation} Choosing, e.g., the angles $\phi_s=-\phi_0$, $\phi_i = \pi/4$, $\phi_s^{\prime} = -\phi_0 -\pi/2$, and $\phi_i^{\prime} = 3\pi/4$, we find the Bell parameter $S = 2\sqrt{2}\Upsilon$. \begin{table} \caption{\label{tab:table1} Measured correlation function $E(\phi _s, \phi _i)$ and $S$ for $\Delta t = 150$ ns delay between {\it write} and {\it read} pulses; all the errors are based on the statistics of the photon counting events.} \begin{ruledtabular} \begin{tabular}{ccc} $\phi_s$ & $\phi _i $& $E(\phi_s, \phi _i)$ \\ \hline $0$ & $\pi/4$ & $0.629 \pm 0.018$ \\ $0$ & $3\pi/4$ & $-0.591 \pm 0.018$ \\ $-\pi/2$ & $\pi/4$ & $-0.614 \pm 0.018$ \\ $-\pi/2$ & $3\pi/4$ & $-0.608 \pm 0.018$ \\ & & $S_{exp}=2.44 \pm 0.04$ \\ \end{tabular} \end{ruledtabular} \end{table} Table 1 presents measured values for the correlation function $E\left({\phi_s},\phi _i\right)$ using the canonical set of angles $\phi_s,\phi _i$. We find $S_{exp}=2.44 \pm 0.04 \nleq 2$, a clear violation of the Bell inequality. This value of $S_{exp}$ is consistent with the visibility of the fringes $\Upsilon \approx 0.86$ shown in Fig. 3. This agreement supports our observation that systematic phase drifts are negligible. We emphasize that no active phase stabilization of any optical frequency field is employed. In conclusion, we report the first realization of a dual species matter qubit and its entanglement with a frequency-encoded photonic qubit. Although we employed two different isotopes, our scheme should work for chemically different atoms (e.g., rubidium and cesium) and/or molecules. This work was supported by NSF, ONR, NASA, Alfred P. Sloan and Cullen-Peck Foundations. Present addresses: $^{*}$Dipartimento di Fisica e Matematica, Universit\`{a} dell' Insubria, 22100 Como, Italy; $^{\star}$Laboratoire Aim\'e Cotton, CNRS-UPR 3321, B\^atiment 505, Campus Universitaire, 91405 Orsay Cedex, France; $^{\dagger}$Department of Physics, University of Michigan, Ann Arbor, Michigan 48109. \end{document}
\begin{document} \title{\bf General entanglement} \author{Alexander A. Klyachko and Alexander S. Shumovsky} \affiliation{Faculty of Science, Bilkent University, Bilkent, Ankara, 06800 Turkey} \begin{abstract} The paper contains a brief review of an approach to quantum entanglement based on analysis of the dynamic symmetry of systems and of the quantum uncertainties accompanying the measurement of the mean values of certain basic observables. The latter are defined in terms of the orthogonal basis of the Lie algebra corresponding to the dynamic symmetry group. We discuss the relativity of entanglement with respect to the choice of basic observables and a way of stabilizing robust entanglement in physical systems. \end{abstract} \pacs{03.65.Ud, 03.67.-a} \maketitle \section{Introduction} Entanglement, which is considered nowadays as the main physical resource of quantum information processing and quantum computing, has been discovered as a physical phenomenon representing ``the characteristic trait of quantum mechanics'' (Schr\"{o}dinger 1935). According to the modern point of view, entangled states form a special class of quantum states closed under SLOCC (Stochastic Local Operations assisted by Classical Communications) (D\"{u}r {\it et al} 2000, Verstraete {\it et al} 2002, Miyake 2003). Two states belong to the same class iff they are converted into each other by SLOCC. Mathematically, SLOCC amounts to the action of the complexified dynamic symmetry group $G^c$ of the system (Verstraete {\it et al} 2002). This description puts entanglement in the general framework of geometric invariant theory and allows one to extend it to arbitrary quantum systems (Klyachko 2002). SLOCC cannot transform an entangled state into an unentangled one and vice versa (D\"{u}r {\it et al} 2000). We define {\it completely entangled\,} (CE) states, manifesting maximal entanglement in their SLOCC class, such that all entangled states of a given system can be constructed from them by means of SLOCC. It was shown recently that CE states manifest the maximal amount of quantum fluctuations (Can {\it et al} 2002(a), Klyachko and Shumovsky 2003, Klyachko and Shumovsky 2004). This property can be used as a physical definition of CE states. It should be stressed that quantum fluctuations, caused by the representation of observables in terms of Hermitian operators, are an undoubted ``characteristic trait'' of quantum systems. Within the classical description, the observables should be associated with c-numbers and hence are incapable of manifesting quantum fluctuations. We now note that the characterization of quantum states with respect to quantum fluctuations is common practice in quantum optics. Coherent (Glauber 1963, Perelomov 1986) and squeezed (Stoler 1970, Dodonov 2002) states provide important examples. In particular, it has been recognized recently that coherent states can in general be associated with the unentangled (separable) states (Klyachko 2002, Barnum {\it et al} 2003). In turn, there are also attempts to characterize entanglement in terms of quantum fluctuations. The aim of this article is to discuss the corollaries coming from the physical definition of CE states via quantum fluctuations. We mostly concentrate on the {\it relativity of entanglement} and on the creation of {\it robust entanglement}. Let us emphasize once more that as soon as CE states are defined, all other entangled states of the same system can be obtained from CE states by means of SLOCC. The paper is arranged as follows. In Sec.
2 we briefly discuss the specification of basic observables, based on the consideration of the dynamic symmetry properties of quantum systems, and express the definition of CE states in terms of a variational principle. Then, in Sec. 3 the relativity of entanglement with respect to the choice of basic observables is considered. In Sec. 4 we discuss the stabilization of entanglement. Finally, in Sec. 5 we briefly summarize the obtained results. \section{Basic observables} Quantum entanglement, as well as any other quantum phenomenon, manifests itself via the measurement of physical observables (Bell 1966). In the von Neumann approach (von Neumann 1996), all observables are supposed to be equally accessible. However, the physical nature of the system often imposes inevitable constraints. For example, the components of a composite system $\mathcal{H}_{AB}=\mathcal{H}_A\otimes\mathcal{H}_B$ may be spatially separated by tens of kilometers, as in EPR pairs used in quantum cryptography. In such circumstances only local observables $X_A$ and $X_B$ are available. As another example, consider a system of $N$ identical particles, each with a space of internal degrees of freedom $\mathcal{H}$. By the Pauli principle, the state space of such a system shrinks to {\it symmetric tensors\,} $S^N\mathcal{H}\subset\mathcal{H}^{\otimes N}$ for bosons, and to {\it skew symmetric tensors\,} $\wedge^N\mathcal{H}\subset\mathcal{H}^{\otimes N}$ for fermions. This superselection rule imposes a severe restriction on the manipulation of quantum states, effectively reducing the accessible measurements to those of a single particle. This consideration led many researchers to the conclusion that the available observables should be included in the description of any quantum system from the outset, see Hermann 1966, Emch 1984. Robert Hermann (1966) stated this thesis as follows \begin{quote} {\it ``The basic principles of quantum mechanics seem to require the postulation of a Lie algebra of observables and a representation of this algebra by skew-Hermitian operators.''} \end{quote} We denote this {\it Lie algebra of observables\,} by $\frak{L}$. The corresponding Lie group $$ G=\exp (i\frak{L}) $$ will be called the {\it dynamic symmetry group\,} of the system. We will refer to a unitary representation of the dynamical group $G$ in the state space $\mathcal{H}_S$ as a {\it quantum dynamical system}. Note finally that there is no place for entanglement in the von Neumann picture, where the full dynamical group $\mathrm{SU}(\mathcal{H})$ makes all states equivalent. Entanglement is an effect caused by superselection rules or symmetry breaking, which reduce the dynamical group to a subgroup $G\subset \mathrm{SU}(\mathcal{H})$ small enough to create an intrinsic difference between states. For example, entanglement in a two-component system $\mathcal{H}_A\otimes\mathcal{H}_B$ comes from the reduction of the dynamical group to $\mathrm{SU}(\mathcal{H}_A)\times\mathrm{SU}(\mathcal{H}_B)\subset \mathrm{SU}(\mathcal{H}_A\otimes\mathcal{H}_B)$. Entanglement essentially depends on the dynamical group and {\it must\,} be discussed in the framework of a given quantum dynamical system $G:\mathcal{H}$. This {\it relativity of entanglement} is one of the topics of this paper. For calculations we choose an arbitrary orthonormal basis $X_i, i=1\ldots N$ of $\frak{L}=\mathrm{Lie}(G)$ and call its elements $X_i$ {\it basic observables\,} (Klyachko 2002 and Klyachko and Shumovsky 2003).
For example, in the case of a qubit (a spin-$\frac{1}{2}$ ``object") the dynamic symmetry group is $G=\mathrm{SU}(2)$ and $\frak{L}=\frak{su}(2)$ is the algebra of traceless Hermitian $2\times 2$ matrices. One can choose the spin projection operators $J_x,J_y,J_z$ (or the Pauli matrices) as the basic observables. The level of quantum fluctuations of a basic observable $X_i$ in a state $\psi\in\mathcal{H}_S$ of the system $S$ is given by the variance \begin{eqnarray} \mathbb{V}(X_i,\psi)= \langle \psi|X_i^2|\psi \rangle - \langle \psi|X_i|\psi \rangle^2 \geq 0. \label{1} \end{eqnarray} Summation over all basic observables of the quantum dynamic system gives the {\it total uncertainty} (total variance) peculiar to the state $\psi$: \begin{eqnarray} \mathbb{V}(\psi)= \sum_{i} \mathbb{V}(X_i, \psi) = \sum_{i}\left(\langle \psi|X_i^2|\psi \rangle - \langle \psi|X_i|\psi \rangle^2\right) . \label{2} \end{eqnarray} This quantity is independent of the choice of the basic observables and measures the total level of quantum fluctuations in the system. Recall that the Casimir operator \begin{eqnarray} \widehat{C}= \sum_i X_i^2 , \nonumber \end{eqnarray} which appears in Eq. (2) is independent of the choice of the basis $X_i$ and acts as multiplication by a scalar $C$ if the representation $G:\mathcal{H}_S$ is irreducible. In this case, Eq. (2) takes the form \begin{eqnarray} \mathbb{V}(\psi) = C- \sum_i \langle \psi |X_i| \psi \rangle^2 . \label{3} \end{eqnarray} It has been observed that completely entangled (CE) states of an arbitrary number $n \geq 2$ of qubits obey certain conditions. Namely, the expectation values of all three spin-projection operators for all parties of the system vanish in a CE state $|\psi_{CE} \rangle$ (Can {\it et al} 2002). In general, the condition \begin{eqnarray} \forall i \quad \langle \psi_{CE}|X_i|\psi_{CE} \rangle =0, \quad |\psi_{CE} \rangle \in \mathcal{H}_S, \label{4} \end{eqnarray} can be used as a general physical definition of CE (Klyachko and Shumovsky 2004). This is an {\it operational} definition of CE (a definition in terms of what can be directly measured). From Eq. (3) it follows that the total variance attains its maximal value, equal to the Casimir constant $C$, in the case of CE states: \begin{eqnarray} \mathbb{V}(\psi_{CE})= \max_{\psi \in \mathcal{H}_S} \mathbb{V}(\psi)=C. \label{5} \end{eqnarray} Equation (5) is, in a sense, equivalent to the maximum entropy principle defining the equilibrium states in quantum statistical mechanics (Landau and Lifshitz 1980). From the operational point of view, a state $\psi\in\mathcal{H}$ is entangled if one can prepare a completely entangled state $\psi_{CE}$ from it using SLOCC operations. It should be emphasized that SLOCC transformations have been identified with the action of the {\it complexified dynamical group} $$G^c=\exp(\frak{L}\otimes\mathbb{C})$$ of the system (Verstraete 2003). This leads us to the following definition of the general entangled states $\psi_E$ of the system: \begin{eqnarray} \psi_{E} = g^c\psi_{CE} ,\quad \mbox{for some }g^c\in G^c. \label{6} \end{eqnarray} Thus, {\it the general entangled states can be defined as those obtained from the states manifesting maximal total uncertainty by the action of the complexified dynamic group.} Let us stress that the above definition of basic observables and the CE conditions (4) do not assume a composite nature of the system $S$. In other words, a single-particle system can manifest entanglement if its state obeys the conditions (4) (Can {\it et al} 2005).
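As an illustration of Eqs. (2)--(5), the following short numerical sketch (an editorial illustration, not part of the original text; it assumes Python with NumPy) computes the total variance for two qubits with the local Pauli operators as basic observables. A product state gives the minimal value $6-2=4$, attained on coherent (separable) states, while a Bell state saturates the Casimir value $C=6$, in agreement with the definition (5) of CE states.
\begin{verbatim}
import numpy as np

# Pauli matrices and the six local basic observables of two qubits,
# i.e. a basis of su(2) + su(2) acting on C^2 x C^2.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basic = [np.kron(p, I2) for p in (sx, sy, sz)] + \
        [np.kron(I2, p) for p in (sx, sy, sz)]

def total_variance(psi, observables):
    """Total uncertainty V(psi) = sum_i <X_i^2> - <X_i>^2, Eq. (2)."""
    psi = psi / np.linalg.norm(psi)
    V = 0.0
    for X in observables:
        mean = np.vdot(psi, X @ psi).real
        V += np.vdot(psi, X @ (X @ psi)).real - mean**2
    return V

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
product = np.kron(ket0, ket0)                                    # coherent (separable)
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # completely entangled

print(total_variance(product, basic))  # 4.0 (minimal value, up to rounding)
print(total_variance(bell, basic))     # 6.0 = Casimir C (up to rounding)
\end{verbatim}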
\section{Relativity of entanglement} The physics of a quantum system $S$ with a given Hilbert state space $\mathcal{H}_S$ may imply different dynamical groups. An important example is provided by a {\it qutrit} (three-state quantum system), which is widely discussed in the context of quantum ternary logic (Bechman-Pasquinucci and Peres 2000, Bru{\ss} and Macchiavello 2002, Kaszlikowski {\it et al} 2003). In this case, the general symmetry is given by $G=\mathrm{SU}(3)$, so that the local basic observables are given by the eight independent Hermitian generators of the $\frak{L}=\frak{su}(3)$ algebra (see Caves and Milburn 2000). In the special case of a spin-1 system, the symmetry is reduced to the $G'=\mathrm{SU}(2)$ group, and the corresponding local basic observables coincide with the three spin-1 operators (Can {\it et al} 2005). Since $\frak{su}(2) \subset \frak{su}(3)$, the qutrit entanglement with respect to $\frak{su}(3)$ observables implies entanglement in the $\frak{su}(2)$ domain but not vice versa. For example, a single spin-1 object can be entangled with respect to the $\frak{su}(2)$ basic observables but not in the $\frak{su}(3)$ sector (Can {\it et al} 2005). A general spin-1 state has the form \begin{eqnarray} |\psi \rangle = \sum_{s=-1}^1 \psi_s |s \rangle , \quad \sum_s |\psi_s|^2 =1, \label{7} \end{eqnarray} where $s=0, \pm 1$ denotes the spin projection. In the basis $|s \rangle$, the spin-1 operators have the form \begin{eqnarray} S_x= \frac{1}{\sqrt{2}} \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right) , \quad S_y= \frac{i}{\sqrt{2}} \left( \begin{array}{crr} 0 & -1 & 0 \\ 1 & 0 & -1 \\ 0 & 1 & 0 \end{array} \right) \nonumber \\ S_z= \left( \begin{array}{ccr} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{array} \right). \nonumber \end{eqnarray} Using the CE condition (4) with the basic observables $S_i$ and taking into account the normalization condition in (7), we obtain four equations for the six real parameters $\mathrm{Re}( \psi_s)$ and $\mathrm{Im}( \psi_s)$. In particular, the state $|0 \rangle$ with zero spin projection manifests CE. This state $|0 \rangle$ together with the states \begin{eqnarray} \frac{1}{\sqrt{2}} (|1 \rangle \pm |-1 \rangle ), \nonumber \end{eqnarray} form a basis of CE states in the three-dimensional Hilbert space of spin-1 states. The possibility of single spin-1 entanglement was also discussed by Viola {\it et al} (Viola {\it et al} 2004). To understand the physical meaning of this CE, we note that there is a certain correspondence between the states of two qubits and a single qutrit provided by the Clebsch-Gordan decomposition \begin{eqnarray} \mathcal{H}_{\frac{1}{2}} \otimes \mathcal{H}_{\frac{1}{2}}=\mathcal{H}_1 \oplus \mathcal{H}_0 . \nonumber \end{eqnarray} Here $\mathcal{H}_{\frac{1}{2}}$ denotes the two-dimensional Hilbert space of a single qubit. The three-dimensional Hilbert space $\mathcal{H}_1$ contains the symmetric states of two qubits \begin{eqnarray} |1 \rangle = |\uparrow \uparrow \rangle , \quad |0 \rangle = \frac{1}{\sqrt{2}} (|\uparrow \downarrow \rangle +|\downarrow \uparrow \rangle ), \quad |-1 \rangle =|\downarrow \downarrow \rangle \label{8} \end{eqnarray} while $\mathcal{H}_0$ corresponds to the antisymmetric state \begin{eqnarray} |A \rangle= \frac{1}{\sqrt{2}} (|\uparrow \downarrow \rangle -|\downarrow \uparrow \rangle ).
\label{9} \end{eqnarray} It is now seen that the state $|0 \rangle$ of spin-1 is CE in terms of a certain pair of spin-$\frac{1}{2}$ ``particles", which can be interpreted as intrinsic degrees of freedom of the spin-1 object. A vivid physical example is provided by the $\pi$-mesons. It is known that the three $\pi$-mesons form an isotriplet (Bogolubov and Shirkov 1982) \begin{eqnarray} \pi^+=|1 \rangle, \quad \pi^0=|0 \rangle , \quad \pi^-=|-1 \rangle, \label{10} \end{eqnarray} where $|\ell \rangle$ ($\ell =0, \pm 1$) denotes the states of isospin $I=1$. From the symmetry point of view, isospin is also specified by the $\mathrm{SU}(2)$ group. Thus, in view of our discussion one can conclude that the $\pi^0$ meson is CE with respect to internal degrees of freedom. The internal structure of mesons is provided by the quark model (Huang 1982). Namely, the fundamental representation of the isospin symmetry corresponds to the two doublets (qubits) that contain the so-called up ($u$) and down ($d$) quarks and anti-quarks ($\bar{u}$ and $\bar{d}$). In terms of quarks, the isotriplet (10) has the form \begin{eqnarray} \pi^+ =u \bar{d}, \quad \pi^0= \frac{1}{\sqrt{2}} (u \bar{u}+d \bar{d}), \quad \pi^-= \bar{u} d. \nonumber \end{eqnarray} It is now clearly seen that the $\pi^0$ meson represents a CE state with respect to the quark degrees of freedom. An indirect corroboration of this fact is given by the high instability of the $\pi^0$ meson in comparison with $\pi^{\pm}$. Such an instability may result from the much higher amount of quantum fluctuations peculiar to the CE state. Another example is given by a single dipole photon, which is emitted by a dipole transition in an atom or molecule and carries total angular momentum $J=1$ (Berestetskii {\it et al} 1982). In the state with total angular momentum projection $m=0$, it is completely entangled. In fact, such a photon carries two qubits. One of them is the polarization qubit, which is usually considered in the context of quantum information processing. The other qubit is provided by the orbital angular momentum, which can be observed (Padgett {\it et al} 2002) and used for quantum information purposes (Mair {\it et al} 2001). As in the case of the $\pi^0$ meson, these two qubits correspond to the intrinsic degrees of freedom of the photon. As one more example, let us consider the so-called {\it biphoton}, which consists of two photons of the same frequency, created simultaneously and propagating in the same direction (Burlakov {\it et al} 1999, Chechova {\it et al} 2004). Before splitting, the biphoton can be interpreted as a single ``particle". In the basis of linear polarizations, the states of the biphoton have the form \begin{eqnarray} \left\{ \begin{array}{lcl} |1 \rangle & = & |x,x \rangle \\ |0 \rangle & = & \frac{1}{\sqrt{2}} (|x,y \rangle +|y,x \rangle ) \\ |-1 \rangle & = & |y,y \rangle \end{array} \right. \label{11} \end{eqnarray} (the propagation direction is chosen as the $z$-axis). Thus, they formally coincide with the spin-1 states. It should be stressed that the antisymmetric state \begin{eqnarray} |A \rangle = \frac{1}{\sqrt{2}} (|x,y \rangle - |y,x \rangle ), \nonumber \end{eqnarray} is forbidden (Berestetskii {\it et al} 1982). The CE of the state $|0 \rangle$ in (11) is evident. The antisymmetric state is also forbidden in a system of two two-level atoms with dipole interaction in the Lamb-Dicke limit of short distances (\c{C}ak{\i}r {\it et al} 2005), so that this system can also be considered as a single spin-1 object.
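The CE conditions for a single spin-1 object can be checked directly. The following minimal sketch (an editorial illustration, not part of the original text; it assumes Python with NumPy) evaluates the mean values of the spin-1 operators of this section for the three CE basis states and for the coherent state $|{+}1\rangle$.
\begin{verbatim}
import numpy as np

# Spin-1 operators in the basis |+1>, |0>, |-1>, as given above
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sy = 1j * np.array([[0, -1, 0], [1, 0, -1], [0, 1, 0]], dtype=complex) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def means(psi):
    psi = psi / np.linalg.norm(psi)
    return [float(np.vdot(psi, S @ psi).real) for S in (Sx, Sy, Sz)]

p1 = np.array([1, 0, 0], dtype=complex)   # |+1>
z0 = np.array([0, 1, 0], dtype=complex)   # |0>
m1 = np.array([0, 0, 1], dtype=complex)   # |-1>

# CE basis states: all three mean spin projections vanish, i.e. condition (4) holds
for psi in (z0, (p1 + m1) / np.sqrt(2), (p1 - m1) / np.sqrt(2)):
    print(means(psi))        # [0.0, 0.0, 0.0] in each case

print(means(p1))             # [0.0, 0.0, 1.0]: coherent, unentangled state
\end{verbatim}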
Although a single qutrit can be prepared in a CE state in the $\mathrm{SU}(2)$ sector, it does not manifest entanglement in the $\mathrm{SU}(3)$ sector, where the local observables are given by the eight independent Hermitian generators of the $\frak{su}(3)$ algebra (Caves and Milburn 2000): \begin{eqnarray} \left( \begin{array}{ccc} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{crr} 0 & -i & 0 \\ i & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) , & \left( \begin{array}{crr} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 0 \end{array} \right) \nonumber \\ \left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right) , & \left( \begin{array}{ccr} 0 & 0 & -i \\ 0 & 0 & 0 \\ i & 0 & 0 \end{array} \right) , & \label{12} \\ \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{array} \right) , & \left( \begin{array}{ccr} 0 & 0 & 0 \\ 0 & 0 & -i \\ 0 & i & 0 \end{array} \right) , & \frac{1}{\sqrt{3}} \left( \begin{array}{ccr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{array} \right) . \nonumber \end{eqnarray} It is easily seen that the conditions (4) cannot be realized for the state (7) with the basic observables (12). Thus, a single three-state quantum system (qutrit) may or may not manifest entanglement, depending on which basic observables are accessible. Hence, there is a relativity of entanglement with respect to the choice of basic observables. \section{Stabilization of entanglement} Numerous applications of quantum entanglement require not an arbitrary entangled state but a {\it robust} one. This assumes a high amount of entanglement together with a long lifetime of the entangled state. This lifetime is usually determined by the interaction with a dissipative environment, which causes decoherence in the system. The approach under discussion reveals a way of obtaining robust entanglement. In conformity with the definition (5), we should first prepare a state of a given system with the maximal amount of quantum fluctuations of all basic observables. As the second step, we should decrease the energy of the system to a minimum (or local minimum) to stabilize the state, while keeping the level of quantum fluctuations. The state thus obtained would be stable (or metastable) and CE. As an example, consider atomic entanglement caused by photon exchange between two atoms in a cavity. In the simplest case of two-level atoms in an ideal cavity containing a single photon, the CE state in the atomic subsystem arises and decays periodically due to the Rabi oscillations (Plenio {\it et al} 1999). The above stabilization scheme can be used if instead we consider three-level atoms with $\Lambda$-type transitions (Can {\it et al} 2002(b), Can {\it et al} 2003). In this case, two three-level atoms with allowed dipole transitions $1 \leftrightarrow 2$ and $2 \leftrightarrow 3$ and a dipole-forbidden transition $1 \leftrightarrow 3$ are located in a cavity (Fig. 1) tuned to resonance with the transition $1 \leftrightarrow 2$. \begin{figure} \caption{Scheme of transitions in two three-level $\Lambda$-type atoms in a cavity. Transition $1 \leftrightarrow 2$ is resonant with the cavity (pumping) field, while S ($2 \leftrightarrow 3$) corresponds to the transition with creation of a Stokes photon.
The dashed arrow shows the discarding of the Stokes photon.} \label{f:1} \end{figure} If initially both atoms are in the ground state and the cavity contains one photon, then absorption of the photon by either atom leads to the creation of the CE atomic state \begin{eqnarray} |\psi^{(12)}_{CE} \rangle = \frac{1}{\sqrt{2}} (|2 \rangle_I \otimes |1 \rangle_{II} +|1 \rangle_I \otimes |2 \rangle_{II} ), \label{13} \end{eqnarray} where $|n \rangle_j$ denotes the state of the $j$-th atom. This state manifests the maximal amount of quantum fluctuations of the local basic observables (Pauli operators) \begin{eqnarray} \sigma_x^{(j)}= |2 \rangle_j \langle 1|+H.c., \quad \sigma_y^{(j)}=-i|2 \rangle_j \langle 1|+H.c., \nonumber \\ \sigma_z^{(j)}=|2 \rangle_j \langle 2|-|1 \rangle_j \langle 1|, \nonumber \end{eqnarray} so that \begin{eqnarray} \mathbb{V}(\psi^{(12)}_{CE})=6. \nonumber \end{eqnarray} The corresponding energy of the system is \begin{eqnarray} E^{(12)}=\epsilon_2 \sim \hbar \omega_C , \label{14} \end{eqnarray} where $\epsilon_j$ denotes the energy of the corresponding atomic level with respect to the ground state ($\epsilon_1=0$) and $\omega_C$ is the cavity mode frequency. This state (13) is unstable. There are two channels of decay of the excited atomic state: \begin{eqnarray} |2 \rangle_j \rightarrow \left\{ \begin{array}{ll} |1 \rangle_j & \mbox{with creation of a cavity photon} \\ |3 \rangle_j & \mbox{with creation of a Stokes photon} \end{array} \right. \nonumber \end{eqnarray} The first channel returns the system to the initial state, after which the process is repeated. The second decay channel creates a new CE state \begin{eqnarray} |\psi^{(13)} \rangle = \frac{1}{\sqrt{2}} (|3 \rangle_I \otimes |1 \rangle_{II}+|1 \rangle_I \otimes |3 \rangle_{II} ), \label{15} \end{eqnarray} which manifests the same amount of quantum fluctuations as the state (13), but with respect to the new local basic observables \begin{eqnarray} \sigma_x^{(j)}= |3 \rangle_j \langle 1|+H.c., \quad \sigma_y^{(j)}=-i|3 \rangle_j \langle 1|+H.c., \nonumber \\ \sigma_z^{(j)}=|3 \rangle_j \langle 3|-|1 \rangle_j \langle 1|. \nonumber \end{eqnarray} The corresponding energy is \begin{eqnarray} E^{(13)}= \epsilon_3+ \hbar \omega_S \sim \epsilon_2 , \nonumber \end{eqnarray} where $\omega_S$ denotes the frequency of the Stokes photon. This is the same energy as for the $(12)$ configuration (13). If the Stokes photon is now discarded, the energy is decreased, \begin{eqnarray} E^{(13)} \rightarrow E^{(13)}_{min}= \epsilon_3 \nonumber \end{eqnarray} and the state (15) becomes stable (at least with respect to dipole transitions). To discard the Stokes photon, we can consider either its absorption by the cavity walls or its free leakage out of the cavity. In the latter case, detection of the Stokes photon outside the cavity signals the creation of the robust atomic entangled state (15). For further discussion of the above scheme, see Biswas and Agarwal 2004, \c{C}ak{\i}r {\it et al} 2004, \c{C}ak{\i}r {\it et al} 2005. \section{Conclusion} Summarizing, we should stress the generality of the definition of CE states discussed in Sec. 2. Physically, it associates CE with the special behavior of the expectation values of basic observables and, in that way, with the maximal amount of quantum fluctuations. In a sense, it follows Bell's ideology (Bell 1966) that entanglement manifests itself in local measurements and their correlations.
The possible role of quantum fluctuations in the formation of entangled states was also noticed by G\"{u}hne {\it et al} (G\"{u}hne {\it et al} 2002) and Hofmann and Takeuchi (Hofmann and Takeuchi 2003). Since the classical level of description of physical systems neglects the existence of quantum fluctuations, the total variance (2) can be chosen as a certain measure of the {\it remoteness} of quantum reality from the classical picture. Thus, the coherent states, with the minimal amount of quantum fluctuations, are the states closest to the classical picture, while CE states represent the most nonclassical states. In particular, Klyachko (Klyachko 2002) and Barnum {\it et al} (Barnum {\it et al} 2003) have associated generalized coherent states with the separable states of multipartite systems. From the physical point of view, the definition connecting CE with quantum fluctuations reveals a way of preparing robust entanglement (Sec. 4). The general approach to quantum entanglement discussed in Sec. 2 is based on the consideration of the symmetry properties of physical systems. In particular, it associates the definition of CE with an orthogonal basis of the Lie algebra corresponding to the Lie group of the dynamical symmetry of the system. As shown in Sec. 3, this causes a certain relativity of quantum entanglement with respect to the choice of the dynamic symmetry. As an example, single-qutrit entanglement was considered. In the case of a two-qutrit system, entanglement takes place in both the $\mathrm{SU}(3)$ and $\mathrm{SU}(2)$ sectors. Since the set of the $\frak{su}(3)$ basic observables (12) contains the spin-1 operators, CE of two qutrits in the $\mathrm{SU}(3)$ sector involves CE in the $\mathrm{SU}(2)$ sector but not vice versa. The CE states of two spin-1 objects can be examined through the use of the following symmetry relation \begin{eqnarray} \mathrm{SU}(2) \times \mathrm{SU}(2) \simeq SO(4). \nonumber \end{eqnarray} The symmetry-based approach to quantum entanglement leads to a certain ``stratification" of the possible states of quantum systems (Klyachko 2002). Namely, if $G$ is the dynamic symmetry group, the SLOCC are defined by the action of the complexified group $G^c$. Then, the different classes of states are given by the {\it orbits} of the action of $g^c \in G^c$ in the Hilbert space $\mathcal{H}_S$. For example, in the case of three qubits, the dynamic symmetry of the system is described by the Lie group \begin{eqnarray} \mathrm{SU}(2) \times \mathrm{SU}(2) \times \mathrm{SU}(2). \nonumber \end{eqnarray} Thus, the SLOCC transformations belong to the group $SL(2,\mathbb{C}) \times SL(2,\mathbb{C}) \times SL(2,\mathbb{C})$. The orthogonal basis of each $s \ell (2, \mathbb{C})$ factor of the corresponding Lie algebra is given by the Pauli operators. It was shown by Miyake (Miyake 2003), through the use of the mathematical analysis of multidimensional matrices and determinants by Gelfand {\it et al} (Gelfand {\it et al} 1994), that there are only four SLOCC-nonequivalent classes of states, shown in the Table below. \begin{center} Table 1.
\begin{tabular}{|l|l|} \hline $\frac{1}{\sqrt{2}} (|000 \rangle +|111 \rangle)$ & GHZ state \\ \hline $\frac{1}{\sqrt{3}} (|001 \rangle +|010 \rangle +|100 \rangle)$ & W-state \\ \hline $\frac{1}{\sqrt{2}} \times \left\{ \begin{array}{l} (|001 \rangle +|010 \rangle) \\ (|001 \rangle +|100 \rangle) \\ (|010 \rangle +|100 \rangle) \end{array} \right.$ & biseparable states \\ \hline $|000 \rangle$ & completely separable states \\ \hline \end{tabular} \end{center} A similar classification was proposed by A\'{c}in {\it et al} (A\'{c}in {\it et al} 2001) through the use of tripartite witnesses. Besides the classification, the notion of complex orbits allows one to introduce a proper measure $\mu$ of entanglement as the length of the minimal vector in the complex orbit (Klyachko 2002, Klyachko and Shumovsky 2004). Note that all natural measures of entanglement should be represented by {\it entanglement monotones}, i.e. by functions decreasing under SLOCC (Vidal 2000, Eisert {\it et al} 2003, Verstraete {\it et al} 2003), and that the above measure obeys this condition. In the case of an arbitrary pure two-qubit state \begin{eqnarray} |\psi_{2,2} \rangle = \sum_{\ell , \ell' =0}^1 \psi_{\ell \ell'} |\ell \rangle \otimes |\ell' \rangle, \quad \sum_{\ell , \ell'} |\psi_{\ell \ell'}|^2=1, \nonumber \end{eqnarray} the measure of entanglement is $\mathcal{C} = \det [\psi]$, where $[\psi]$ is the $(2 \times 2)$ matrix of the coefficients of the state $|\psi_{2,2}\rangle$. This determinant represents the only entanglement monotone in this case. To within a factor, this measure coincides with the {\it concurrence} $\mathcal{C}(\psi)$ (Wootters 1998), which is usually used to quantify entanglement in two-qubit systems: \begin{eqnarray} \mathcal{C}(\psi)=2|\det [\psi]|. \nonumber \end{eqnarray} In the case of three qubits, the measure is given by the absolute value of the Cayley hyperdeterminant multiplied by four (Miyake 2003), also known as the {\it 3-tangle} (Coffman {\it et al} 2000): \begin{eqnarray} \tau = 4|\psi_{000}^2 \psi_{111}^2+ \psi_{001}^2 \psi_{110}^2 + \psi_{010}^2 \psi_{101}^2 + \psi_{100}^2 \psi_{011}^2 \nonumber \\ -2(\psi_{000} \psi_{001} \psi_{110} \psi_{111} +\psi_{000} \psi_{010} \psi_{101} \psi_{111} \nonumber \\ + \psi_{000} \psi_{100} \psi_{011} \psi_{111} + \psi_{001} \psi_{010} \psi_{101} \psi_{110} \nonumber \\ + \psi_{001} \psi_{100} \psi_{011} \psi_{110} + \psi_{010} \psi_{100} \psi_{011} \psi_{101} ) \nonumber \\ +4( \psi_{000} \psi_{011} \psi_{101} \psi_{110} + \psi_{001} \psi_{010} \psi_{100} \psi_{111})|, \label{16} \end{eqnarray} where $\psi_{ijk}$ are the coefficients of the normalized state \begin{eqnarray} |\psi_{2,3} \rangle = \sum_{i,j,k=0}^1 \psi_{ijk} |i \rangle \otimes |j \rangle \otimes |k \rangle . \label{17} \end{eqnarray} This is again the only entanglement monotone for the states (17). In the case of the GHZ (Greenberger-Horne-Zeilinger) state (the first row in Table 1), the 3-tangle (16) has the maximal value $\tau (GHZ)=1$. For all other states in Table 1, it has zero value, so that these states are unentangled. This fact allows us to separate the essential from the accidental in the definition of quantum entanglement. For example, violation of Bell's inequalities is often considered as a definition of entanglement. The so-called W-states (the second row in Table 1) violate Bell's inequalities (Cabello 2002). But as we have seen, these states do not manifest entanglement (at least in the tripartite sector).
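As a concrete check of these two measures, the short sketch below (an editorial illustration, not part of the original text; it assumes Python with NumPy) evaluates the concurrence $2|\det[\psi]|$ and the 3-tangle of Eq. (16) for the states of Table 1, reproducing $\tau(GHZ)=1$ and $\tau(W)=0$.
\begin{verbatim}
import numpy as np

def concurrence(psi):
    """C(psi) = 2|det[psi]| for a pure two-qubit state with coefficients psi[l,l']."""
    return 2 * abs(np.linalg.det(np.asarray(psi).reshape(2, 2)))

def three_tangle(psi):
    """3-tangle, i.e. four times |Cayley hyperdeterminant|, Eq. (16)."""
    p = np.asarray(psi).reshape(2, 2, 2)
    d = (p[0,0,0]**2*p[1,1,1]**2 + p[0,0,1]**2*p[1,1,0]**2
         + p[0,1,0]**2*p[1,0,1]**2 + p[1,0,0]**2*p[0,1,1]**2
         - 2*(p[0,0,0]*p[0,0,1]*p[1,1,0]*p[1,1,1] + p[0,0,0]*p[0,1,0]*p[1,0,1]*p[1,1,1]
              + p[0,0,0]*p[1,0,0]*p[0,1,1]*p[1,1,1] + p[0,0,1]*p[0,1,0]*p[1,0,1]*p[1,1,0]
              + p[0,0,1]*p[1,0,0]*p[0,1,1]*p[1,1,0] + p[0,1,0]*p[1,0,0]*p[0,1,1]*p[1,0,1])
         + 4*(p[0,0,0]*p[0,1,1]*p[1,0,1]*p[1,1,0] + p[0,0,1]*p[0,1,0]*p[1,0,0]*p[1,1,1]))
    return 4 * abs(d)

def ket(bits):
    v = np.zeros(2**len(bits)); v[int(bits, 2)] = 1.0
    return v

ghz = (ket('000') + ket('111')) / np.sqrt(2)
w   = (ket('001') + ket('010') + ket('100')) / np.sqrt(3)
print(three_tangle(ghz))   # 1.0 (up to rounding): maximal for the GHZ class
print(three_tangle(w))     # 0.0: the W class carries no 3-tangle

bell = (ket('00') + ket('11')) / np.sqrt(2)
print(concurrence(bell), concurrence(ket('00')))   # 1.0  0.0
\end{verbatim}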
In fact, violation of Bell's inequalities means the absence of hidden variables (Bell 1966) and can be observed even in the case of generalized coherent states (Klyachko 2002), which are unentangled by definition. Other definitions based on the nonseparability and nonlocality of states also have a limited application. In particular, they are meaningless in the case of the single spin-1 particle entanglement considered in Sec. 3. In this paper, we have considered entanglement of pure states. The generalization of the approach to the case of mixed states meets certain complications. The point is that the density matrix contains classical fluctuations, caused by the statistical nature of the state, together with quantum fluctuations. Their separation represents a hard problem of extremely high importance. One of the possible approaches consists in the use of the methods of thermo-field dynamics (Takahashi and Umezawa 1996), which allow one to represent a mixed state in terms of a pure state in a space of doubled dimension. \end{document}
\begin{document} \begin{abstract} Let $F$ be a Siegel cusp form of weight $k$ and genus $n>1$ with Fourier-Jacobi coefficients $f_m$. In this article, we estimate the growth of the Petersson norms of $f_m$, where $m$ runs over an arithmetic progression. This result sharpens a recent result of Kohnen in~\cite{WK1}. \end{abstract} \subjclass[2010]{Primary 11F46,11F50; Secondary 11F30} \keywords{Siegel cusp forms, Fourier-Jacobi coefficients, Petersson norms} \maketitle \section{Introduction} Let $\mcal{H}_n$ be the Siegel upper half-plane of genus $n \ge 1$ and $\Gamma_n := \textrm{Sp}_n({\mathbb Z})$ be the full Siegel modular group. Also let $S_k(\Gamma_n)$ be the space of Siegel cusp forms of weight $k$ on $\Gamma_n$. For $Z \in \mcal{H}_n$, write $Z= \psmat{\tau}{z^t}{z}{\tau^{\prime}}$, where $\tau \in \mcal{H}_{n-1}$, $z \in {\mathbb C}^{n-1}$ and $\tau^{\prime} \in \mcal{H}_1$. If $F \in S_k(\Gamma_n)$ with $n>1$, the Fourier-Jacobi expansion of $F$ relative to the maximal parabolic group of type $(n-1, 1)$ is of the form $$ F(Z) = \sum_{m \geq 1} f_m(\tau,z) e^{2\pi i m \tau^{\prime}}. $$ The functions $f_m$ belong to the space $J_{k,m}^{\mrm{cusp}}$ of Jacobi cusp forms of weight $k$, index $m$ and of genus $n-1$, i.e., invariant under the Jacobi group $\Gamma_{n-1}^J := \Gamma_{n-1} \ltimes ({\mathbb Z}^{n-1} \times {\mathbb Z}^{n-1})$. For $f,g \in J_{k,m}^{\mrm{cusp}}$, the inner product of $f$ and $g$ is defined by $$ \langle f,g \rangle = \int_{\Gamma_{n-1}^J \backslash \mcal{H}_{n-1} \times {\mathbb C}^{n-1}} f(\tau,z) \overline{g(\tau,z)}(\mrm{det}\,v)^{k-n-1} e^{-4 \pi mv^{-1}[y^{t}]} dudvdxdy, $$ where $\tau=u+iv, z=x+iy$. Let $a, q \geq 2$ be natural numbers with $(a,q)=1$. In~\cite[Thm. 1]{BBK}, B\"ocherer, Bruinier and Kohnen showed that for any non-zero function $F$ in $S_k(\Gamma_n)$ $(n>1)$ with Fourier-Jacobi coefficients $f_m$, there exist infinitely many $m \in {\mathbb N}$ with $m \equiv a \pmod q$ such that $\langle f_m,f_m \rangle \not = 0$. In this article, we prove the existence of infinitely many $m \in {\mathbb N}$ with $m \equiv a \pmod q$ such that $\langle f_m,f_m \rangle> c_{F,q} m^{k-1}$ (see Theorem~\ref{main-thm}). This also improves a recent result of Kohnen \cite{WK1} about the existence of infinitely many $m \ge 1$ such that $\langle f_m, f_m \rangle > c_F m^{k-1}$. In order to prove our result, we combine the techniques of \cite{BBK} and \cite{WK1}. \section{Preliminaries} Let $F$ be a non-zero cusp form in $S_k(\Gamma_n)$ $(n>1)$ with Fourier-Jacobi coefficients $\{ f_m\}_{m \in {\mathbb N}}$. By the works of Kohnen and Skoruppa \cite{KS} and of Krieg \cite{AK}, we know that $\langle f_m, f_m \rangle \ll_F m^k$ (the constant in $\ll$ depends only on $F$). Hence for natural numbers $a, q$ with $(a, q)=1$, the Dirichlet series \begin{equation*} D(s ; a, q, F) := \underset{\underset{m \equiv a {\, \rm mod \, } q} {m \ge 1}}{\sum} \frac{\langle f_m, f_m \rangle}{m^s} \end{equation*} converges for $s \in {\mathbb C}$ with $\mathrm{Re}(s) > k+1$. \begin{prop}\label{Main-Prop} Let $a, q >1$ be natural numbers with $(a,q) =1$ and $F \in S_k(\Gamma_n)$, where $n > 1$. Then the Dirichlet series $D(s;a,q, F)$ converges for $\mathrm{Re}(s)> k$, admits a meromorphic continuation to ${\mathbb C}$, and has a simple pole at $s=k$. Moreover, it vanishes at $s = 0, -1, -2, \cdots$. \end{prop} \begin{proof} Let $\chi$ be a Dirichlet character modulo $q$.
Then the Dirichlet series $$ D(s, \chi, F) := \sum_{m \ge 1} \frac{\chi(m)\langle f_m, f_m \rangle}{m^s} $$ converges for $s \in {\mathbb C}$ with ${\mathbb R}e(s)\gg 0$. Let $\chi_0$ be the principal Dirichlet character modulo $q$. For $\chi \not = \chi_0$, we know that the completed Dirichlet series \begin{equation*} D^*(s, \chi , F) := \left(\frac{2\pi}{q}\right)^{-2s} \Gamma(s) \Gamma(s-k+n) L(2s -2k + 2n, \chi^2)~D(s, \chi, F) \end{equation*} extends to a holomorphic function on ${\mathbb C}$ (see \cite{KKS} and \cite{WK1} for details). But when $\chi = \chi_0$, the completed Dirichlet series $D^*(s, \chi_0, F)$ has a meromorphic continuation to ${\mathbb C}$ with a simple real pole at $s=k$ (see~\cite[page 495]{KKS} and the remark in page $7$ of~\cite{BBK}). We know if $\chi^2 \ne \chi_0$, then the real zeros of $L(s, \chi^2)$ are at $s = 0, -2, -4, \cdots$ since $\chi^2$ is an even character. Also the poles of $\Gamma(s)$ are at $s = 0 ,-1, -2, \cdots$. Further, all these zeros and poles are simple. Hence $D(s, \chi, F)$ for $\chi \ne \chi_0$ extends to a holomorphic function on ${\mathbb C}$ and vanishes at $s =0 , -1 , -2, \cdots$. If $\chi = \chi_0$, then $D(s, \chi_0, F)$ has a meromorphic continuation to ${\mathbb C}$ possibly with a simple pole at $s=k$. Indeed, the function $D(s, \chi_0, F)$ has a simple real pole at $s=k$, since $D^*(s, \chi_0, F)$ has a simple real pole at $s=k$ and none of the functions $L(2s -2k + 2n, \chi^2)$, $\Gamma(s-k+n)$ and $\Gamma(s)$ have a zero or a pole at $s=k$ and they are holomorphic there. Furthermore, the series $D(s, \chi_0, F)$ vanishes at $s = 0, -1, -2, \cdots$. Hence using orthogonality of characters, we get \begin{eqnarray}\label{one} D(s; a,q, F) &=& \frac{1}{\varphi(q)} \sum_{m \ge 1} \sum_{\chi {\, \rm mod \, } q} \chi(a^{-1}m) \langle f_m, f_m \rangle m^{-s} \nonumber \\ \label{test} &=& \frac{1}{\varphi(q)} \sum_{\chi {\, \rm mod \, } q} \chi(a^{-1})~ D(s, \chi, F) \phantom{m} \text{for } {\mathbb R}e(s) > k. \end{eqnarray} This implies that the Dirichlet series $D(s ; a,q, F)$ has a meromorphic continuation to ${\mathbb C}$ with a simple real pole at $s=k$ and vanishes at $s = 0, -1, -2, \cdots$. \end{proof} \begin{rmk}\label{lem-1} {\rm It is clear from equation (\ref{one}) that the residue of $D(s;a,q,F)$ at $s=k$ depends only on $q$ and the residue of $D(s,\chi_0,F)$ at $s=k$, but not on~$a$. In fact, the residue of $D(s,\chi_0,F)$ can be expressed in terms of the Petersson scalar product of $F$ with the Trace of ``$\chi_0$-twist of $F$'' (see \cite[Thm. 1]{KKS} for further details).} \end{rmk} \begin{rmk}\label{imp} {\rm The residue of $D(s;a,q,F)$ at $s=k$ is real and positive. This is true as $D(s;a,q,F)$ is holomorphic on ${\mathbb R}e(s) > k$ with a simple pole at $s=k$ and is positive on the real half axis ${\mathbb R}e(s) > k$.} \end{rmk} We end this section by recalling a recent result of Pribitkin on Dirichlet series with oscillating coefficients. \noindent {\bf Definition.} We call a sequence $\{ a_n \}_{n=1}^{\infty}$ with $a_n\in{\mathbb R}$ oscillatory if there exist infinitely many $n$ such that $a_n >0$ and infinitely many $n$ such that $a_n<0$. \begin{thm}[Pribitkin \cite{WP},\cite{WP1}] \label{Pribitkin-result} Let $a_n$ be a sequence of real numbers such that the associated Dirichlet series $$ F(s) =\sum_{n=1}^{\infty} \frac{a_n}{n^s} $$ be non-trivial and it converges on some half-plane. 
If $F(s)$ is holomorphic on the whole real line and has infinitely many real zeros, then the sequence $\{a_n\}_{n=1}^{\infty}$ is oscillatory. \end{thm} \section{Statement and proof of the Main Result} Let $a, q > 1$ be natural numbers with $(a,q)=1$. Also let $c_{F,q}$ be the residue of the function $D(s;a,q,F)$ at $s=k$. Recall that $c_{F,q}$ is independent of $a$ by Remark~\ref{lem-1}. Moreover, $c_{F,q}$ is real and positive by Remark \ref{imp}. \begin{thm}\label{main-thm} Let $a, q > 1$ be natural numbers with $(a,q)=1$ and $F$ be a non-zero Siegel cusp form in $S_k(\Gamma_n)$, $n>1$ with Fourier-Jacobi coefficients $\{f_m\}_{m \in {\mathbb N}}$. Then there exist infinitely many $m$ with $m \equiv a {\, \rm mod \, } q$ such that $\langle f_m, f_m\rangle > c_{F,q} m^{k-1}$. \end{thm} \begin{proof} Consider the Dirichlet series \begin{equation} \label{key-equation} \overline{D}(s; a,q, F) = D(s ; a, q,F) - c_{F,q} \zeta(s-k+1) \phantom{m} \text{for } {\mathbb R}e(s) > k. \end{equation} By Proposition~\ref{Main-Prop}, the series $\overline{D}(s; a,q, F)$ has a meromorphic continuation to ${\mathbb C}$ with no poles on the real line and vanishes at $s= k-1 - 2t$, where $t \in {\mathbb N}, ~ t > (k-1)/2 $. For $m \ge 1$, let \begin{eqnarray} \label{key-expression} \beta(m) &:=& \left\{ \begin{array}{ll} \langle f_m, f_m\rangle - c_{F,q} m^{k-1} & \mbox{if $m \equiv a \!\!\!\pmod{q}$} \\ - c_{F,q} m^{k-1} & \mbox{otherwise} \end{array} \right. \end{eqnarray} be the general coefficient of $\overline{D}(s; a, q,F)$. We know that $\overline{D}(s;a,q,F)$ cannot be identically zero as $c_{F,q} > 0$ by Remark \ref{imp}. Then by using Theorem~\ref{Pribitkin-result}, there exist infinitely many $m$ with $m \equiv a \pmod{q}$ such that $\langle f_m, f_m\rangle > c_{F,q} m^{k-1}$. \end{proof} Using the above method, we are unable to prove that there exist infinitely many $m$ with $m \equiv a \pmod{q}$ such that $\langle f_m, f_m\rangle < c_{F,q} m^{k-1}$. But we can prove the following weaker theorem. \begin{thm} Let $F$ be a non-zero cusp form in $S_k(\Gamma_n), n>1$ with Fourier-Jacobi coefficients $\{f_m\}_{m \in {\mathbb N}}$. Let $q$ be a natural number and also let $c_{F,q}$ be the residue of $D(s;a,q,F)$ for some $a \in {\mathbb N}$ with $(a,q)=1$ {\rm(}hence for all $a${\rm)}. Then there exist natural numbers $b,c$ with $(bc,q)=1$ such that the following hold: \begin{itemize} \item there exist infinitely many $m \in {\mathbb N}$ with $m \equiv b \pmod q$ such that $\langle f_m, f_m \rangle > q c_{F,q}m^{k-1}$ and \item there exist infinitely many $m \in {\mathbb N}$ with $m \equiv c \pmod q$ such that $\langle f_m, f_m \rangle < qc_{F,q} m^{k-1}$. \end{itemize} \end{thm} \begin{proof} Consider the Dirichlet series \begin{eqnarray*} \overline{D}(s; a,q, F) := \sum_{a=1 \atop (a,q)=1}^{q-1}D(s; a, q, F) ~-~ \alpha_{F,q}M \zeta(s-k+1), \end{eqnarray*} where $$ M:= \prod_{p|q \atop p \text{ prime}}(1 - p^{-(s-k+1)}) \phantom{m} \text{ and } \phantom{m} \alpha_{F, q}:= qc_{F,q}. $$ Since $$ M \zeta(s-k+1) = q^{-(s-k+1)} \sum_{a=1 \atop (a,q)=1}^{q-1}\zeta(s-k+1, a/q), $$ we have \begin{eqnarray*} \overline{D}(s; a,q, F) &=& \sum_{a=1 \atop (a,q)=1}^{q-1} \left[ D(s; a, q, F) ~-~ \alpha_{F,q}q^{-(s-k+1)} \zeta(s-k+1, a/q) \right]\\ &=& \sum_{a=1 \atop (a,q)=1}^{q-1} ~~\underset{\underset{m \equiv a {\, \rm mod \, } q} {m \ge 1}}{\sum} \frac{\langle f_m, f_m \rangle - \alpha_{F,q}m^{k-1}}{m^s}. 
\end{eqnarray*} By Proposition~\ref{Main-Prop}, the series $\overline{D}(s; a, q, F)$ has a meromorphic continuation to ${\mathbb C}$ with no poles on the real line and vanishes at $s= k-1 - 2t$, where $t \in {\mathbb N}, ~ t > (k-1)/2 $. Note that the function $\overline{D}(s; a, q, F)$ cannot be identically zero. Otherwise, we would have $$ \sum_{a=1 \atop (a,q)=1}^{q-1} D(s; a, q, F) = \alpha_{F,q} M \zeta(s-k+1). $$ This is a contradiction, as the zeros of the Riemann zeta function on the negative real axis are located at the negative even integers, whereas each $D(s;a,q, F)$ has zeros at all negative integers. Now using Theorem~\ref{Pribitkin-result}, we get the desired result. \end{proof} As an immediate corollary, we get \begin{cor} For $n>1$, let $F$ be a non-zero cusp form in $S_k(\Gamma_n)$ with Fourier-Jacobi coefficients $\{f_m\}_{m \in {\mathbb N}}$. Let $c_{F,2}$ be the residue of the series $D(s; 1,2, F)$ at $s=k$. Then the following hold: \begin{itemize} \item there exist infinitely many odd $m \in {\mathbb N}$ such that $\langle f_m, f_m \rangle > 2 c_{F,2} m^{k-1}$ and \item there exist infinitely many odd $m \in {\mathbb N}$ such that $\langle f_m, f_m \rangle < 2 c_{F,2} m^{k-1}$. \end{itemize} \end{cor} \noindent {\bf Acknowledgments.} The second author would like to thank the Hausdorff Research Institute for Mathematics, where most of this work was carried out during the Trimester program ``Arithmetic and Geometry'', Jan-Feb 2013. We would like to thank the referee for several relevant suggestions which improved the presentation of the paper. Further, we would like to thank W. Kohnen for making this collaboration possible. \end{document}
\begin{document} \title{Fault-tolerant quantum error correction using error weight parities} \author{Theerapat Tansuwannont} \epsilonmail{[email protected]} \affiliation{ Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada } \author{Debbie Leung} \epsilonmail{[email protected]} \affiliation{ Institute for Quantum Computing and Department of Combinatorics and Optimization, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada } \affiliation{ Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada } \begin{abstract} In quantum error correction using imperfect primitives, errors of high weight arising from a few faults are major concerns since they might not be correctable by the quantum error correcting code. Fortunately, some errors of different weights are logically equivalent and the same correction procedure is applicable to all equivalent errors, thus correcting high-weight errors is sometimes possible. In this work, we introduce a technique called weight parity error correction (WPEC) which can correct Pauli error of any weight in some stabilizer codes provided that the parity of the weight of the error is known. We show that the technique is applicable to concatenated codes constructed from the \codepar{7,1,3} Steane code or the \codepar{23,1,7} Golay code. We also provide a fault-tolerant error correction protocol using WPEC for the \codepar{49,1,9} concatenated Steane code which can correct up to 3 faults and requires only 2 ancillas. \epsilonnd{abstract} \pacs{03.67.Pp} \maketitle \section{Introduction} \label{sec:Intro} One crucial component for large-scale quantum computers is fault-tolerant error correction (FTEC), which suppresses error propagation throughout the circuits. An arbitrarily small logical error rate can be achieved through code concatenation, given that the physical error rate is below some constant threshold value \cite{Shor96,AB08,Preskill98,KLZ96,AGP06}. However, increasing overheads are needed for decreasing logical error rate \cite{Steane03,PR12,CJL16b,TYC17}. Conventional FTEC schemes require many ancillas during error syndrome measurements. For example, the Shor-style \cite{Shor96,DA07} and the Knill-style \cite{Knill05a} error corrections, which apply to any stabilizer code, require as many ancillas as the maximum weight of the stabilizer generators and twice the blocklength, respectively. Steane-style error correction \cite{Steane97,Steane02} which applies to any CSS code requires one code block of ancillas. Recently, several FTEC schemes that use only a few ancillas and are applicable to the \codepar{7,1,3} Steane code \cite{Steane96b} have been proposed. The scheme due to Yoder and Kim for the \codepar{7,1,3} code uses 2 ancillas (9 qubits in total) \cite{YK17}. Their ideas are further developed to a ``flag FTEC'' scheme which, for the \codepar{7,1,3} code, also uses 2 ancillas \cite{CR17a}. A flag FTEC scheme for any \codepar{n,k,d} stabilizer code requires $d+1$ ancillas \cite{CR20}, where the schemes for some specific code families may require fewer \cite{CR17a,CB18,TCL20,CKYZ20,CZYHZ20}. The flag technique that uses a few ancillas to detect high-weight errors can also be applied to various fault-tolerant schemes \cite{CR17b,CC19,SCC19,BCC19,BXG19,Vui18,GMB19,LA19,CN20,DB20,RBMS21}. 
Another FTEC scheme applicable to the \codepar{7,1,3} code was proposed by Reichardt; the scheme extracts several syndrome bits at once and requires no ancillas, provided that there are at least two code blocks (so at least 14 qubits are required in total) \cite{Reichardt18}. In order to achieve an arbitrarily low error rate through code concatenation, the FTEC scheme used with the code must be modified accordingly. One way to do this is replacing all physical qubits with code blocks and replacing all physical gates with corresponding logical gates \cite{AGP06}. For the \codepar{7,1,3} code, each qubit (including each ancilla qubit) required in an FTEC scheme will become a block of 7 physical qubits in the modified scheme. Following this modification, the schemes in \cite{YK17,CR17a} applied to the \codepar{49,1,9} concatenated Steane code will require 63 qubits in total. Meanwhile, the scheme in \cite{Reichardt18} requires 98 qubits in total, encoding 2 logical qubits. Note that the maximum weight for the stabilizer generators increases quickly with concatenation. These difficulties motivate our main question: how to reduce the number of ancillas required for an FTEC scheme for a concatenated code? In this paper, we introduce a technique called weight parity error correction (WPEC) and construct an FTEC scheme for the \codepar{49,1,9} concatenated Steane code using only \epsilonmph{two} ancilla qubits. The scheme relies on the fact that, for the \codepar{7,1,3} code, errors with the same syndrome and weight parity differ by the multiplication of some stabilizer; these errors are thus logically equivalent and need not be distinguished from one another. Therefore, error correction on each subblock of 7 qubits in the \codepar{49,1,9} code can be accomplished using only two ingredients: the error syndrome and the weight parity of error in each subblock. Most importantly, the weight parity for each subblock of 7 qubits in the \codepar{49,1,9} code can be obtained from the full syndrome measurement. Using this idea in conjunction with 2 ancilla qubits, our FTEC protocol for the \codepar{49,1,9} code can correct up to 3 faults. As a result, our protocol can suppress the error rate from $p$ to $O(p^4)$ using 51 qubits in total. The paper is organized as follows: In \cref{sec:WPEC}, we observe the aforementioned equivalence between errors of any weight with the same syndrome and weight parity, and describe WPEC. In \cref{sec:Protocol}, we provide sufficient conditions for WPEC, then we provide syndrome extraction circuits and an FTEC protocol for the \codepar{49,1,9} concatenated Steane code using only two ancilla qubits. In \cref{sec:WPEC_Golay}, WPEC is extended to the \codepar{23,1,7} Golay code and concatenated Steane codes with more than 2 levels of concatenation. Last, we discuss our results and directions for future works in \cref{sec:Discussion}. \section{Weight parity error correction for the Steane code} \label{sec:WPEC} The Steane code \cite{Steane96b}, also known as the \codepar{7,1,3} code, is a quantum error correcting code that encodes 1 logical qubit into 7 physical qubits and can correct any error on up to 1 qubit. It has several desirable properties for fault-tolerant quantum computation, e.g., logical Clifford operations are transversal \cite{Shor96}. The Steane code is a code in the Calderbank-Shor-Steane (CSS) code family \cite{CS96,Steane96b} where $X$-type and $Z$-type errors can be detected and corrected separately. 
The Steane code in the stabilizer formalism can be constructed from the parity check matrix of the classical $[7,4,3]$ Hamming code through the CSS construction \cite{Gottesman97}. In addition, it is known that any classical Hamming code can be rearranged into a cyclic code, a binary linear code in which any cyclic shift of a codeword is also a codeword \cite{MS77}. We can describe the Steane code in cyclic form with the following stabilizer generators: \begingroup \setlength\arraycolsep{1pt} \begin{equation} \begin{matrix} g^x_1: &X &I &X &X &X &I &I, & \quad & g^z_1: &Z &I &Z &Z &Z &I &I,\\ g^x_2: &I &X &I &X &X &X &I, & \quad & g^z_2: &I &Z &I &Z &Z &Z &I,\\ g^x_3: &I &I &X &I &X &X &X, & \quad & g^z_3: &I &I &Z &I &Z &Z &Z. \epsilonnd{matrix} \epsilonnd{equation} \epsilonndgroup The generators of a stabilizer code define not only the codespace, but also the measurements that give rise to the error syndrome. When these measurements are imperfect, different sets of generators for the same code can have different fault-tolerant properties. The use of the Steane code in cyclic form gives some advantages in distinguishing high-weight errors in consecutive form \cite{TCL20} (see \cref{sec:Discussion} for more details). We can choose the logical $X$ and logical $Z$ operators to be $X^{\otimes 7} S$ and $Z^{\otimes 7} T$ for any stabilizer operators $S,T$. With this convention, we state the following crucial property of the Steane code that goes into our construction: \begin{fact} Let $M$ be any $Z$-type operator (a tensor product of $I$s and $Z$s) defined on 7 qubits. Suppose $M$ commutes with all X-type generators of the \codepar{7,1,3} code. If $M$ has even weight, then it is a logical $I$; otherwise, if $M$ has odd weight, then it is a logical $Z$. \label{fact:fact1} \epsilonnd{fact} \begin{proof} Because $M$ is a $Z$-type operator that commutes with all $X$-type generators, $M$ is either a stabilizer of $Z$ type or a logical $Z$ operator. Let $E_1$ and $E_2$ be $Z$-type operators with weights $w_1 $ and $w_2$. Then $E_1E_2$ is an operator of weight $w_1+w_2-2c$, where $c$ is the number of qubits supported by both $E_1$ and $E_2$. Observe that all stabilizer generators of the Steane code have even weight, and a multiplication of two operators with even weight always gives an operator with even weight. Thus, all stabilizers of $Z$ type (which are logical $I$ operators) have even weight. Moreover, a $Z$-type operator which is a logical $Z$ operator is of the form $Z^{\otimes 7} T$ where $T$ is a stabilizer of $Z$ type. Therefore, all logical $Z$ operators of $Z$ type have odd weight. \epsilonnd{proof} For a Pauli error $E$ on a block of 7 qubits, the syndrome is a 6-bit string denoted by $s(E) = (s_x|s_z)$ where $s_x,s_z \in \mathbb{Z}_2^3$. The $i$-th bit of $s_x$ (or $s_z$) is $0$ if $E$ commutes with $g_i^x$ (or $g_i^z$), and $1$ if $E$ anticommutes with $g_i^x$ ($g_i^z$). If $E$ occurs to a codeword of the Steane code, $s(E)$ corresponds to the outcomes of measuring the six generators (0 and 1 correspond to $+1$ and $-1$ outcomes, respectively). 
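To make the syndrome map concrete, the following minimal sketch (an editorial illustration, not part of the original text; it assumes Python with NumPy and uses 0-based qubit labels) encodes the supports of the cyclic $X$-type generators above and computes $s_x$ for $Z$-type errors. It also verifies that every nontrivial $s_x$ is produced by exactly one weight-1 $Z$ error, and exhibits a weight-3 error sharing its syndrome with a weight-1 error; their product has even weight and trivial syndrome, hence is a stabilizer by \cref{fact:fact1}.
\begin{verbatim}
import numpy as np

# Supports of g^x_1, g^x_2, g^x_3 of the [[7,1,3]] code in cyclic form
GX = np.array([[1, 0, 1, 1, 1, 0, 0],
               [0, 1, 0, 1, 1, 1, 0],
               [0, 0, 1, 0, 1, 1, 1]], dtype=int)

def syndrome_x(z_error):
    """s_x of a Z-type error given by its length-7 binary support vector."""
    return tuple(int(b) for b in (GX @ np.asarray(z_error)) % 2)

# Every nontrivial s_x comes from exactly one weight-1 Z error (perfect CSS code)
table = {}
for q in range(7):
    e = np.zeros(7, dtype=int); e[q] = 1
    table[syndrome_x(e)] = q
print(len(table))                              # 7 distinct nontrivial syndromes

# A weight-3 error with the same syndrome as a weight-1 error
e3 = np.array([1, 1, 1, 0, 0, 0, 0])           # Z on qubits 0, 1, 2
print(syndrome_x(e3), table[syndrome_x(e3)])   # (0, 1, 1) 5
# Their product Z_0 Z_1 Z_2 Z_5 has even weight and trivial syndrome: a stabilizer.
\end{verbatim}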
The Steane code is a \epsilonmph{perfect CSS code of distance $3$} meaning that for each $s_x$, $(s_x|000)$ is the syndrome of a \epsilonmph{unique} weight-1 $Z$-type error, which we denote as $E^z_{wt\mhyphen 1}(s_x)$, and similarly each $(000|s_z)$ is the syndrome of a unique $X$-type error \footnote{This is from the fact that the \codepar{7,1,3} Steane code can be constructed from the classical $[7,4,3]$ Hamming code which is a \epsilonmph{classical perfect code}, a code which saturates the classical Hamming bound \cite{MS77}.}. For CSS codes, the $X$-type and $Z$-type error corrections are independent of one another. Furthermore, we focus on CSS codes in which $X$-type and $Z$-type generators have the same form, and the same method applies to both types of error correction. So we focus on $Z$ errors for simplicity. Since $Z$-type errors have trivial $s_z$, we focus on $s_x$ from now on. With the above notations, consider the following simple error correction procedure on the Steane code: if the syndrome is $(s_x|000)$, do nothing if $s_x$ is trivial, apply $E^z_{wt\mhyphen 1}(s_x)$ otherwise. We observe that if the syndrome is caused by a $Z$-type error, then the procedure outputs the encoded data transformed by a logical $I$ or logical $Z$. This is because the actual $Z$-type error combined with the correction remains $Z$-type and commutes with all of $g^x_{1,2,3}$, so the conclusion follows from \cref{fact:fact1}. If a codeword is corrupted by an arbitrary $Z$-type error $E$, the above procedure always recovers the codeword, but sometimes with an undesirable logical $Z$ error. The technique of weight parity error correction, to be developed next, is a revised procedure that will \epsilonmph{always} correct the error $E$, but it requires knowing whether $E$ has odd or even weight. Measuring the error weight parity should not be done on a single layer of Steane code since it measures a logical operator on the Steane code. Fortunately, the parity information can be safely learnt for the constituent blocks when we concatenate the Steane code with itself. We will describe these ideas in detail in the rest of this section, and apply them for fault-tolerant error correction in the next section. First, we use \cref{fact:fact1} to show that $Z$-type errors with the same syndrome \epsilonmph{and} the same weight parity (whether odd or even) differ by the multiplication of some stabilizer. \begin{claim}{Logical equivalence of errors with the same syndrome and weight parity for the \codepar{7,1,3} code} Suppose $E_1,E_2$ are arbitrary $Z$-type errors (of any weights) on the \codepar{7,1,3} code with the same syndrome. Then, $E_1$ and $E_2$ have the same weight parity iff $E_1 = E_2S$ for some stabilizer $S$. \label{claim:equiv_Steane} \epsilonnd{claim} \begin{proof} Let $w_1,w_2$ be the weights of $E_1, E_2$, respectively. Let $N = E_1 E_2$ (so $E_2 = E_1 N$ as $E_1 = E_1^\dagger$). The weight of $N$ is equal to $w_1+w_2-2c$ where $c$ is the number of qubits supported by both $E_1$ and $E_2$. As $N$ commutes with all of $g^x_{1,2,3}$, from \cref{fact:fact1}, $N$ is a logical $I$ if and only if $w_1+w_2-2c$ is even (when $E_1$ and $E_2$ have the same weight parity). 
\epsilonnd{proof} Second, we use \cref{claim:equiv_Steane} to provide a method for error correction of $Z$-type error of arbitrary weight on the Steane code, \epsilonmph{if the weight parity of the error is known}: \begin{definition}{Weight parity error correction (WPEC) for the \codepar{7,1,3} code} Suppose a $Z$-type error $E$ occurs to a codeword of the \codepar{7,1,3} code. Let $s_x$ and $w$ be the syndrome and the weight of $E$, $E^z_{wt\mhyphen 1}(s_x)$ be the weight-1 $Z$-type operator with syndrome $s_x$, and $E^z_{wt\mhyphen 2}(s_x)$ be any weight-2 $Z$-type operator with syndrome $s_x$, respectively. The following procedure is called \epsilonmph{weight parity error correction (WPEC): \begin{enumerate} \item If $s_{x}$ is trivial, do nothing if $w$ is even, or apply any logical $Z$ if $w$ is odd. \item If $s_{x}$ is nontrivial, apply $E^z_{wt\mhyphen 1}(s_{x})$ if $w$ is odd, or apply $E^z_{wt\mhyphen 2}(s_{x})$ if $w$ is even. \epsilonnd{enumerate}} \label{Def:WPEC} \epsilonnd{definition} WPEC always returns the original codewords because in each case, the error $E$ and the correction operation have the same syndrome and weight parity, so by \cref{claim:equiv_Steane}, the correction is logically equivalent to $E$. WPEC allows us to correct high-weight errors in the Steane code, but we need to know the weight parity of the error. The weight parity of a $Z$-type error is the outcome of measuring $X^{\otimes 7}$, so learning the weight parity is equivalent to a logical $X$ measurement, which can destroy the superposition of the logical state. Fortunately, there is a setting in which the weight parity can be obtained without affecting the encoded data. Consider code concatenation in which each qubit of an error correcting code ${\cal C}_2$ is encoded into another quantum error correcting code ${\cal C}_1$. If ${\cal C}_1$ is chosen to be the Steane code, the weight parity of each codeblock can potentially be learnt from the syndrome of ${\cal C}_2$. We will develop WPEC for the concatenated Steane code in the rest of this section and show the advantage in the context of fault tolerance in the next section. Consider code concatenation using two Steane codes in cyclic form. The resulting code which is a \codepar{49,1,9} code can be described by 48 stabilizer generators. The first group of 42 generators, called 1st-level generators, have the form $g^x_i\otimes I^{\otimes 42},g^z_i\otimes I^{\otimes42},I^{\otimes 7} \otimes g^x_i\otimes I^{\otimes 35},I^{\otimes 7} \otimes g^z_i\otimes I^{\otimes 35},\dots,I^{\otimes 42}\otimes g^x_i,I^{\otimes 42}\otimes g^z_i$ for $i=1,2,3$. 
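The WPEC rule can be verified exhaustively over all $2^7$ supports of $Z$-type errors. The sketch below (an editorial illustration, not part of the original text; it assumes Python with NumPy and 0-based qubit labels) applies the correction prescribed by \cref{Def:WPEC} and checks that the error times the correction always has trivial syndrome and even weight, hence is a stabilizer by \cref{fact:fact1}.
\begin{verbatim}
import numpy as np
from itertools import combinations

GX = np.array([[1, 0, 1, 1, 1, 0, 0],      # X-type generators of the cyclic [[7,1,3]] code
               [0, 1, 0, 1, 1, 1, 0],
               [0, 0, 1, 0, 1, 1, 1]], dtype=int)

def syndrome_x(z_err):
    return tuple(int(b) for b in (GX @ np.asarray(z_err)) % 2)

def support(qubits):
    v = np.zeros(7, dtype=int); v[list(qubits)] = 1
    return v

def wpec_correction(s_x, odd_parity):
    """Support of the Z correction prescribed by the WPEC definition."""
    if s_x == (0, 0, 0):
        # trivial syndrome: do nothing (even) or apply a logical Z, e.g. Z^{x7} (odd)
        return np.ones(7, dtype=int) if odd_parity else np.zeros(7, dtype=int)
    weight = 1 if odd_parity else 2
    for qs in combinations(range(7), weight):
        if syndrome_x(support(qs)) == s_x:
            return support(qs)

# Exhaustive check: error * correction is always a Z-type stabilizer
ok = True
for mask in range(2**7):
    e = np.array([(mask >> q) & 1 for q in range(7)])
    c = wpec_correction(syndrome_x(e), bool(e.sum() % 2))
    residual = (e + c) % 2
    ok &= syndrome_x(residual) == (0, 0, 0) and int(residual.sum()) % 2 == 0
print(ok)    # True
\end{verbatim}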
The remaining 6 of these generators, called 2nd-level generators, have the form \begingroup \setlength\arraycolsep{1pt} \begin{equation} \begin{matrix} \tilde{g}^x_1: &\mathbf{X} &\mathbf{I} &\mathbf{X} &\mathbf{X} &\mathbf{X} &\mathbf{I} &\mathbf{I}, & \quad & \tilde{g}^z_1: &\mathbf{Z} &\mathbf{I} &\mathbf{Z} &\mathbf{Z} &\mathbf{Z} &\mathbf{I} &\mathbf{I},\\ \tilde{g}^x_2: &\mathbf{I} &\mathbf{X} &\mathbf{I} &\mathbf{X} &\mathbf{X} &\mathbf{X} &\mathbf{I}, & \quad & \tilde{g}^z_2: &\mathbf{I} &\mathbf{Z} &\mathbf{I} &\mathbf{Z} &\mathbf{Z} &\mathbf{Z} &\mathbf{I},\\ \tilde{g}^x_3: &\mathbf{I} &\mathbf{I} &\mathbf{X} &\mathbf{I} &\mathbf{X} &\mathbf{X} &\mathbf{X}, & \quad & \tilde{g}^z_3: &\mathbf{I} &\mathbf{I} &\mathbf{Z} &\mathbf{I} &\mathbf{Z} &\mathbf{Z} &\mathbf{Z}, \epsilonnd{matrix} \epsilonnd{equation} \epsilonndgroup where $\mathbf{I}=I^{\otimes 7},\mathbf{X}=X^{\otimes 7},$ and $\mathbf{Z}=Z^{\otimes 7}$. The logical $X$ and logical $Z$ operators can be chosen to be $\bar{X}=X^{\otimes 49} S$ and $\bar{Z}=Z^{\otimes 49} T$ for any stabilizer operators $S, T$. Relevant parts of the error syndrome corresponding to the 1st-level and the 2nd-level generators will be called 1st-level and 2nd-level syndromes, respectively. Let us consider error correction on the \codepar{49,1,9} code and assume for now that error syndromes are reliable (which can be obtained from repetitive measurements). First, consider a simple motivating example, and suppose that a $Z$-type error $E$ acts nontrivially on at most one subblock of 7-qubit code. In order to perform WPEC, the weight parity of $E$ and the subblock in which $E$ occurs must be known. Suppose that $E$ has nontrivial 1st-level syndrome. The subblock in which $E$ occurs is actually the subblock whose corresponding 1st-level syndrome is nontrivial, while the weight parity of $E$ is a measurement result from a 2nd-level generator which acts nontrivially on that subblock (note that the 2nd-level generator must \epsilonmph{act nontrivially on all qubits} in such a subblock, thus a choice of 2nd-level generators is important). Now, suppose that $E$ has trivial 1st-level syndrome. The subblock in which $E$ occurs can no longer be identified by the 1st-level syndrome. Fortunately, since the 2nd-level Steane code (${\cal C}_2$) is a distance-3 code, it can identify if any of the $7$ subblocks of \codepar{7,1,3} code (the ${\cal C}_1$ subblocks) has a $Z$-type error logically equivalent to $Z^{\otimes 7}$, thus providing the weight parity for each subblock of \codepar{7,1,3} code. That is, if $E$ has trivial 1st-level syndrome and its weight is odd, the weight parity of $E$ and the subblock in which $E$ occurs can be determined using only the 2nd-level syndrome. (If $E$ has trivial 1st-level syndrome and has even weight, it is a stabilizer and no error correction is needed.) If the $Z$-type error is more general and may act on multiple subblocks, the 2nd-level syndrome may not provide the weight parities of the subblocks. Instead, we consider only $Z$-type errors that arise from a small number of faults in specially designed generator measurements. We will show that for these errors, the weight parity for each subblock can be determined by the 2nd-level syndrome along with the information whether each subblock has trivial 1st-level syndrome or not. 
\begin{figure}[htbp] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{circuit_lv2_normal} \captionsetup{justification=centering} \caption{} \label{fig:circuit_lv2_normal} \epsilonnd{subfigure} \begin{subfigure}{0.25\textwidth} \includegraphics[width=\textwidth]{XMeas} \captionsetup{justification=centering} \caption{} \label{fig:XMeas} \epsilonnd{subfigure} \caption{(a) An example of circuit for measuring generator $\tilde{g}^z_1 = \mathbf{ZIZZZII}$. Here we display only the subblocks in which the operator acts nontrivially (the 1st, 2nd, 3rd, and 4th subblocks in the figure correspond to the 1st, 3rd, 4th, and 5th subblocks of $\tilde{g}^z_1$). A circuit for measuring $X$-type operator such as $\tilde{g}^x_1 = \mathbf{XIXXXII}$ can be obtained by replacing each CNOT gate in (a) with the gate illustrated in (b).} \label{fig:circuit_normal} \epsilonnd{figure} In particular, let \epsilonmph{block parity} $p_x \in \mathbb{Z}_2^7$ be a bitstring, where each bit is the weight parity of the $Z$ error in one subblock, and $0$ and $1$ represent even and odd weights, respectively. Also, define the \epsilonmph{triviality} of a subblock to be $0$ or $1$ if the subblock has trivial or nontrivial 1st-level syndrome, and let \epsilonmph{block triviality} $\tau_x \in \mathbb{Z}_2^7$ be a $7$-bit string in which the $i$-th bit represents the triviality of the $i$-th subblock. If the block parity can be accurately determined using the 2nd-level syndrome together with the block triviality (we will elaborate how this can be done later), then we can blockwisely perform WPEC as described in \cref{Def:WPEC} by using the 1st-level syndrome and the weight parity of each subblock. In this work, we develop an FTEC protocol that uses WPEC to correct high-weight errors arising from up to 3 faults. As an example, consider the measurement of $\tilde{g}^z_1$ using the circuit depicted in \cref{fig:circuit_lv2_normal}. Here we assume that a fault from any two-qubit gate can cause any two-qubit Pauli errors on the qubits where the gate acts nontrivially, and $X$-type and $Z$-type errors can be detected separately. Thus, we may assume that high-weight errors arising from a single CNOT fault is of the form $\mathbf{PIZZZII}$, $\mathbf{IIPZZII}$, $\mathbf{IIIPZII}$, or $\mathbf{IIIIPII}$, where $\mathbf{Z} = Z^{\otimes 7}$, $\mathbf{P}=I^{\otimes 7-m}\otimes Z^{\otimes m}$, and $m\in\{1,\dots,7\}$ (see the analysis of possible errors in \cite{TCL20} for more details). It is not hard to find 2nd-level syndrome, block triviality, and block parity corresponding to each possible error. For example, error $\mathbf{PIZZZII}$ with $m=6$ anticommutes with $g^x_1$ and $\tilde{g}^x_1$, and commutes with the other generators. Thus, its corresponding 2nd-level syndrome, block triviality, and block parity are $(1,0,0), (1,0,0,0,0,0,0)$, and $(0,0,1,1,1,0,0)$, respectively. \cref{tab:ex_1fault} displays all possible high-weight errors arising from a single fault during $\tilde{g}^z_1$ measurement and their corresponding 2nd-level syndrome, block triviality, and block parity. Note that except for the first and the last row (with errors differing by multiplication of a stabilizer), each row has a unique combination of 2nd-level syndrome and block triviality, so the block parity can be determined from the table. 
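The entries of \cref{tab:ex_1fault} can be reproduced directly from the definitions of block parity, block triviality, and the 2nd-level syndrome. The sketch below (an editorial illustration, not part of the original text; it assumes Python with NumPy) analyzes the error $\mathbf{PIZZZII}$ with $m=6$ and recovers the corresponding row of the table.
\begin{verbatim}
import numpy as np

GX = np.array([[1, 0, 1, 1, 1, 0, 0],   # cyclic [[7,1,3]] pattern, used both for the
               [0, 1, 0, 1, 1, 1, 0],   # 1st-level generators (on qubits) and for the
               [0, 0, 1, 0, 1, 1, 1]],  # 2nd-level generators (on whole subblocks)
              dtype=int)

def analyze(z_error_49):
    """2nd-level syndrome, block triviality and block parity of a Z-type error."""
    blocks = np.asarray(z_error_49).reshape(7, 7)
    block_parity = blocks.sum(axis=1) % 2                     # weight parity per subblock
    block_triviality = ((GX @ blocks.T) % 2).any(axis=0).astype(int)
    second_level = (GX @ block_parity) % 2                    # outcomes of the 2nd-level g^x
    return second_level.tolist(), block_triviality.tolist(), block_parity.tolist()

Z7 = np.ones(7, dtype=int)
I7 = np.zeros(7, dtype=int)
P6 = np.array([0, 1, 1, 1, 1, 1, 1])               # P = I Z Z Z Z Z Z, i.e. m = 6

error = np.concatenate([P6, I7, Z7, Z7, Z7, I7, I7])   # the error P I Z Z Z I I
print(analyze(error))
# ([1, 0, 0], [1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 1, 1, 0, 0])  -- as in the table
\end{verbatim}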
Since the 2nd-level syndrome and the block triviality can in turn be obtained from the generator measurements, all possible errors arising from a single CNOT fault during the measurement of $\tilde{g}^z_1$ can be corrected using WPEC. In addition, observe that $\mathbf{ZIZZZII}$ and $\mathbf{I}^{\otimes 7}$ are equivalent up to a multiplication of $\tilde{g}^z_1$ but their block parities are different. Here we can see that multiplying an error with some 2nd-level generators may change its block parity, but its 2nd-level syndrome and block triviality (which is deduced from its 1st-level syndrome) remain the same. In this case, WPEC is still applicable. We say that block parities are equivalent whenever they can be transformed to one another by multiplying the corresponding errors with some stabilizer. \begin{table}[tbp] \begin{center} \begin{tabular}{| c | c | c | c | c |} \hline Form of & \multirow{2}{*}{$m$} & 2nd-level & Block & \multirow{2}{*}{Block parity} \\ error & & syndrome & triviality & \\ \hhline{|=|=|=|=|=|} \multirow{3}{*}{$\mathbf{PIZZZII}$} & 7 & (0,0,0) & (0,0,0,0,0,0,0) & (1,0,1,1,1,0,0) \\ \cline{2-5} & 2,4,6 & (1,0,0) & (1,0,0,0,0,0,0) & (0,0,1,1,1,0,0) \\ \cline{2-5} & 1,3,5 & (0,0,0) & (1,0,0,0,0,0,0) & (1,0,1,1,1,0,0) \\ \hline \multirow{3}{*}{$\mathbf{IIPZZII}$} & 7 & (1,0,0) & (0,0,0,0,0,0,0) & (0,0,1,1,1,0,0) \\ \cline{2-5} & 2,4,6 & (0,0,1) & (0,0,1,0,0,0,0) & (0,0,0,1,1,0,0) \\ \cline{2-5} & 1,3,5 & (1,0,0) & (0,0,1,0,0,0,0) & (0,0,1,1,1,0,0) \\ \hline \multirow{3}{*}{$\mathbf{IIIPZII}$} & 7 & (0,0,1) & (0,0,0,0,0,0,0) & (0,0,0,1,1,0,0) \\ \cline{2-5} & 2,4,6 & (1,1,1) & (0,0,0,1,0,0,0) & (0,0,0,0,1,0,0) \\ \cline{2-5} & 1,3,5 & (0,0,1) & (0,0,0,1,0,0,0) & (0,0,0,1,1,0,0) \\ \hline \multirow{3}{*}{$\mathbf{IIIIPII}$} & 7 & (1,1,1) & (0,0,0,0,0,0,0) & (0,0,0,0,1,0,0) \\ \cline{2-5} & 2,4,6 & (0,0,0) & (0,0,0,0,1,0,0) & (0,0,0,0,0,0,0) \\ \cline{2-5} & 1,3,5 & (1,1,1) & (0,0,0,0,1,0,0) & (0,0,0,0,1,0,0) \\ \hline $\mathbf{I}^{\otimes 7}$ & - & (0,0,0) & (0,0,0,0,0,0,0) & (0,0,0,0,0,0,0) \\ \hline \epsilonnd{tabular} \caption{All possible forms of data errors arising from a single fault occurred during syndrome measurement using a circuit in \cref{fig:circuit_lv2_normal} (where $\mathbf{P}=I^{\otimes 7-m}\otimes Z^{\otimes m}$). The block parity corresponding to each form of errors can be determined by the 2nd-level syndrome and the block triviality obtained from a full syndrome measurement. By knowing the block parity, high-weight errors can be corrected using WPEC.} \label{tab:ex_1fault} \epsilonnd{center} \epsilonnd{table} In an actual fault-tolerant protocol, we want to distinguish all possible high-weight errors arising from various types of faults up to 3 faults, including any gate faults, faults during the preparation and measurement of ancilla qubits, and faults during wait time. The circuit construction in \cref{fig:circuit_lv2_normal}, however, might not cause errors that can be distinguished. Note that possible errors arising from CNOT faults heavily depend on the ordering of CNOT gates being used in the measurement circuit. In \cref{sec:Protocol}, we will discuss conditions in which WPEC can be applied. We will also provide a family of circuits with specific CNOT ordering and an FTEC protocol for the \codepar{49,1,9} code which can correct high-weight errors arising from up to 3 faults. 
\section{Fault-tolerant error correction protocol for the \codepar{49,1,9} code} \label{sec:Protocol} Fault-tolerant error correction is one of the most essential gadgets for constructing large-scale quantum computers. Every FTEC protocol must satisfy the following conditions: \begin{definition}{Fault-tolerant error correction \cite{AGP06}} For $t = \lfloor (d-1)/2\rfloor$, an error correction protocol using a distance-$d$ stabilizer code is \epsilonmph{$t$-fault tolerant} if the following two conditions are satisfied: \begin{enumerate} \item For any input codeword with error of weight $v_{1}$, if $v_{2}$ faults occur during the protocol with $v_{1} + v_{2} \le t$, ideally decoding the output state gives the same codeword as ideally decoding the input state. \item If $v$ faults happen during the protocol with $v \le t$, no matter how many errors are present in the input state, the output state differs from any valid codeword by an error of at most weight $v$. \epsilonnd{enumerate} \label{Def:FaultTolerantDef} \epsilonnd{definition} In this work, we develop an FTEC protocol for the \codepar{49,1,9} code that can correct up to 3 faults. The circuits for measuring 1st-level and 2nd-level generators are shown in \cref{fig:circuit_WPEC}. The types of faults being considered include faults that happen to the physical gates, faults during the preparation and measurement of ancilla qubits in the circuits, and faults during wait time. \begin{figure}[htbp] \centering \begin{subfigure}{0.49\textwidth} \includegraphics[width=\textwidth]{circuit_lv2_permuted} \captionsetup{justification=centering} \caption{} \label{fig:circuit_lv2_permuted} \epsilonnd{subfigure} \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{circuit_lv1_flag} \captionsetup{justification=centering} \caption{} \label{fig:circuit_lv1_flag} \epsilonnd{subfigure} \caption{Circuits for measuring 2nd-level and 1st-level generators being used in this work are shown in (a) and (b), respectively. With this gate permutation, the block parity corresponding to every possible high-weight error arising from up to 3 faults can be accurately determined. As such, our protocol can correct up to 3 faults.} \label{fig:circuit_WPEC} \epsilonnd{figure} Let \epsilonmph{fault combination} be a collection of faults up to 3 faults (which may be of different types and can cause errors of weight much higher than 3 on the data qubits). Our goal is to distinguish all fault combinations that can be confusing and may cause WPEC to fail. Similar to an example of WPEC in \cref{sec:WPEC}, we can categorize all possible fault combinations into subsets by their 2nd-level syndrome and block triviality. The following sufficient condition can determine when the WPEC technique can be applied: \begin{claim}{Sufficient condition for WPEC} Let $\mathcal{F}$ be the set of all possible fault combinations during an FTEC protocol for the \codepar{49,1,9} code and let $\mathcal{F}_k \subseteq \mathcal{F}$ be a subset of fault combinations with the same 2nd-level syndrome and the same block triviality (where $\bigcup_k \mathcal{F}_k = \mathcal{F}$). WPEC is applicable in the FTEC protocol if each $\mathcal{F}_k$ satisfies one of the following conditions: \begin{enumerate} \item Data errors from all fault combinations in $\mathcal{F}_k$ give equivalent block parities. 
\pagebreak \item Not every data error from a fault combination in $\mathcal{F}_k$ gives the same block parity (or its equivalence), but for each pair of fault combinations in $\mathcal{F}_k$ whose block parities of their data errors are not equivalent, their 1st-level syndromes or flag measurement results (or both) are different. \epsilonnd{enumerate} \label{claim:suff_cond} \epsilonnd{claim} \begin{proof} Whenever subset $\mathcal{F}_k$ satisfies the first condition in \cref{claim:suff_cond}, we can find a block parity that works for all fault combinations in $\mathcal{F}_k$ using only the 2nd-level syndrome and the block triviality. A correction operator for each fault combination can be found following the definition of WPEC (\cref{Def:WPEC}) using the 1st-level syndrome and the block parity. On the other hand, if $\mathcal{F}_k$ satisfies the second condition in \cref{claim:suff_cond}, a block parity cannot be accurately determined using only the 2nd-level syndrome and the block triviality. Fortunately, with the assistance of the 1st-level syndrome and the flag measurement result, fault combinations that correspond to non-equivalent block parities can be distinguished and the block parity of each fault combination can be found. Similarly, a correction operator for each fault combination can be determined following \cref{Def:WPEC}. \epsilonnd{proof} Whether possible fault combinations satisfy \cref{claim:suff_cond} or not depends heavily on the ordering of the CNOT gates and the use of flag qubits in the circuits for syndrome measurements. In our FTEC protocol for the \codepar{49,1,9} code, the CNOT gates being used in the circuits for measuring 2nd-level generator are applied in the following ordering: \begin{equation} (1,8,15,22,2,9,16,23,\dots,7,14,21,28), \label{eq:CNOTperm} \epsilonnd{equation} where the numbers 1 to 28 represent the qubits in which $\tilde{g}^z_i$ acts nontrivially. That is, CNOT gates are applied on the first qubit in each subblock for all subblocks, then on the second qubit in each subblock for all subblocks, and so on. The circuit for measuring $\tilde{g}^z_1$ is shown in \cref{fig:circuit_lv2_permuted}. In addition, CNOT gates being used in the circuits for measuring 1st-level generator are in the normal ordering as shown in \cref{fig:circuit_lv1_flag}. Note that there is no flag qubit involved in the measurement of a 2nd-level generator, and there is one flag qubit in the circuit for measuring a 1st-level generator. Consider the case that there are some faults during $Z$-type generator measurements. Faulty circuits can produce nontrivial flag measurement results and cause error of any weight on the data qubits. Our goal is to detect and correct such an error using the flag measurement results from the faulty circuits, together with 1st-level and 2nd-level syndromes obtained from subsequent syndrome measurements. In particular, let the \epsilonmph{flag vector} $\in \mathbb{Z}^{21}_2$ be a bitstring wherein each bit is the flag measurement result from each circuit for measuring $g^z_i$ on each of the 7 subblocks. We define the \epsilonmph{cumulative flag vector} $f_x \in \mathbb{Z}^{21}_2$ to be the entry-wise sum of flag vectors (modulo 2) obtained from $g^z_i$ measurements accumulated from the first round up until the current round of measurements (see the protocol described below for the definition of a round of measurements). 
Cumulative flag vector $f_x$ together with 1st-level syndrome $s_x \in \mathbb{Z}^{21}_2$, 2nd-level syndrome $\tilde{s}_x \in \mathbb{Z}^{3}_2$, and block triviality $\tau_x \in \mathbb{Z}^{7}_2$ from the latest round of measurements will be used for distinguishing all possible fault combinations that can occur during the syndrome measurements as described in \cref{claim:suff_cond}. Using the computer simulation described in \cref{app:sim1}, we can verify that \cref{claim:suff_cond} is satisfied when the number of input errors $v_1$ and the number of faults $v_2$ satisfy $v_1+v_2 \leq 3$. A table of possible data errors and their corresponding $s_x, \tilde{s}_x, \tau_x, f_x$, and block parity $p_x$ (similar to \cref{tab:ex_1fault}) can also be obtained from the simulation. Moreover, the subsets $\mathcal{F}_k$ can be deduced from this table (see \cref{app:sim1} for more details). Let the \epsilonmph{outcome bundle} be the collection of 1st-level syndrome $s = (s_x|s_z)$, 2nd-level syndrome $\tilde{s} = (\tilde{s}_x|\tilde{s}_z)$, block triviality $\tau = (\tau_x|\tau_z)$, and cumulative flag vector $f = (f_x|f_z)$ obtained during a single round of full syndrome measurement, where subscripts $x$ and $z$ denote results corresponding to $X$-type and $Z$-type generator measurements. Using the simulation result together with the fact that $X$-type and $Z$-type errors can be corrected separately, an FTEC protocol for the \codepar{49,1,9} code can be constructed as follows. \pagebreak \textbf{FTEC protocol for the \codepar{49,1,9} code} A full syndrome measurement, or a round of measurements, measure the generators in the following order: measure all $\tilde{g}^z_i$'s, then all $\tilde{g}^x_i$'s, then all $g^z_i$'s, then all $g^x_i$'s. Perform full syndrome measurements until the outcome bundles are repeated 4 times in a row. Afterwards, perform the following error correction procedure: \begin{enumerate} \item Find the subset $\mathcal{F}_k$ corresponding to $\tilde{s}_x$ and $\tau_x$ from the table of possible errors (obtained from the simulation in \cref{app:sim1}). \begin{enumerate} \item If $\mathcal{F}_k$ satisfies Condition 1 in \cref{claim:suff_cond}, use a block parity of any fault combination in $\mathcal{F}_k$. \item If $\mathcal{F}_k$ satisfies Condition 2 in \cref{claim:suff_cond}, use a block parity of any combination in $\mathcal{F}_k$ that corresponds to $s_x$ and $f_x$. \item If there is no $\mathcal{F}_k$ from the table of possible errors which corresponds to $\tilde{s}_x$ and $\tau_x$, use the block parity with all 1's. \epsilonnd{enumerate} \item Let $s_{x,i}$ be the 1st-level syndrome and $w_i$ be the weight parity of the $i$-th subblock. Apply $Z$-type error correction on each subblock as given by \cref{Def:WPEC}. In particular: \begin{enumerate} \item If $s_{x,i}$ is trivial, apply $ZZIZIII$ (logically equivalent to $Z^{\otimes 7}$) to the $i$-th subblock when $w_i$ is odd, or do nothing when $w_i$ is even. \item If $s_{x,i}$ is nontrivial, apply $E^z_{wt\mhyphen 1}(s_{x,i})$ to the $i$-th subblock when $w_i$ is odd, or apply $E^z_{wt\mhyphen 2}(s_{x,i})$ when $w_i$ is even. 
\epsilonnd{enumerate} \item If there is no $\mathcal{F}_k$ from the table of possible errors which corresponds to $\tilde{s}_x$ and $\tau_x$, further apply the following error correction procedure: find a Pauli operator from $\{\mathbf{ZIIIIII},\mathbf{IZIIIII},\dots,\mathbf{IIIIIIZ}\}$ which corresponds to $\tilde{s}_x$, then apply such an operator (or its logically equivalent operator) to the data qubits. \item Repeat steps 1--3 but use $\tilde{s}_z$, $s_z$, $\tau_z$, and $f_z$ to deduce the $X$-type error correction ($E^x_{wt\mhyphen 1}(s_{z,i})$, $E^x_{wt\mhyphen 2}(s_{z,i})$, or $XXIXIII$) for each subblock. \epsilonnd{enumerate} Here we will assume that there are at most 3 faults during the protocol and the error is of $Z$ type. The assumption on the number of faults guarantees that the outcome bundles must be repeated 4 times in a row within 16 rounds (the outcome bundle cannot keep changing forever since the number of faults is limited). To verify that the protocol above is 3-fault tolerant, i.e., it satisfies the FTEC conditions in \cref{Def:FaultTolerantDef} with $t=3$ (the \codepar{49,1,9} code acts as a distance-7 code), first let us consider the case that there are no faults during the last round of full syndrome measurement. In this case, the outcome bundle corresponds to the actual data error. From the simulation in \cref{app:sim1}, we know that whenever $v_1+v_2 \leq 3$, one of the conditions in \cref{claim:suff_cond} is satisfied and the block parity can be accurately determined. The operation in Step 2 will give the correct output state, thus both of the FTEC conditions are satisfied. On the other hand, if $v_1+v_2 > 3$ but $v_2 \leq 3$, $\tilde{s}_x$ and $\tau_x$ may not correspond to any error in the table of possible errors. By using the block parity with all 1's, the operation in Step 2 will project the state in each subblock back to the subspace of the \codepar{7,1,3} code, where each subblock has an error equivalent to either $\mathbf{I}$ or $\mathbf{Z}$ after the operation. Afterwards, the operation in Step 3 will project the output state back to the subspace of the \codepar{49,1,9} code. Thus, the second condition in \cref{Def:FaultTolerantDef} is satisfied. Now, let us consider the case that there are some faults during the last round of full syndrome measurement. The outcome bundle we obtained from the last round may not correspond to the data error since some errors arising during the last round may be undetectable. Since we perform full syndrome measurements until the outcome bundles are repeated 4 times in a row and there are at most 3 faults during the whole protocol, at least one of the last 4 rounds of full syndrome measurement must be correct. From the simulation result in \cref{app:sim1}, the outcome bundle obtained from the last round (which is equal to that obtained from any correct round in the last 4 rounds) can definitely correct the error occurred before the last correct round. Here we want to verify that whenever the last 4 rounds have $v$ faults (where $v \leq 3$), after the last round, the weight of the data error is increased by no more than $v$. This can be verified using the computer simulation described in \cref{app:sim2}. By applying operation in Step 2 (and possibly Step 3) as previously discussed, the output state differs from a valid codeword by an error of weight at most $v$, regardless of the number of input errors. Thus, the second condition in \cref{Def:FaultTolerantDef} is satisfied. 
Furthermore, whenever $v_1+v_2 \leq 3$, we will obtain an output state which differs from a correct output state by an error of weight at most 3. Therefore, the first condition in \cref{Def:FaultTolerantDef} is also satisfied. The analysis for $X$-type errors is similar to that of $Z$-type errors. Note that during the measurement of $Z$-type generators, a single gate fault can cause an $X$-type error of weight 1 on the data qubits. This error can be considered as an input error for the measurement of $X$-type generators, thus the same analysis is applicable. \section{Weight parity error correction for other codes} \label{sec:WPEC_Golay} Besides the Steane code, we find that the WPEC technique is applicable to the \codepar{23,1,7} Golay code \cite{Steane03}, which is a perfect CSS code of distance 7. The \codepar{23,1,7} Golay code can correct up to 3 errors and can be constructed from the parity check matrix of the classical $[23,12,7]$ Golay code \cite{MS77}. In cyclic form, the \codepar{23,1,7} Golay code can be constructed from the parity check matrix, \begingroup \setlength\arraycolsep{2pt} \begin{equation} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \epsilonnd{pmatrix}, \nonumber \epsilonnd{equation} \epsilonndgroup which can be generated from the check polynomial $h(x) = x^{12}+x^{10}+x^7+x^4+x^3+x^2+x+1$ \cite{MS77}. The $i$-th $Z$-type (or $X$-type) generator of this code will be denoted as $g^z_i$ (or $g^x_i$) where $i=1,\dots,11$. The logical $X$ and logical $Z$ operators of this code can be chosen to be $\bar{X}=X^{\otimes 23} S$ and $\bar{Z}=Z^{\otimes 23} T$ for any stabilizer operators $S, T$. Similar to the \codepar{7,1,3} code, we can prove the equivalence of errors with the same syndrome and the same weight parity as follows: \begin{claim}{Logical equivalence of errors with the same syndrome for the \codepar{23,1,7} Golay code} Suppose $E_1,E_2$ are arbitrary $Z$-type errors (of any weights) on the \codepar{23,1,7} code with the same syndrome. Then, $E_1$ and $E_2$ have the same weight parity iff $E_1 = E_2S$ for some stabilizer $S$. \label{claim:equiv_Golay} \epsilonnd{claim} \begin{proof} We can verify that every $Z$-type stabilizer in the stabilizer group of the \codepar{23,1,7} code has even weight, and every logical $Z$ operator has odd weight. The rest of the proof follows the proof of \cref{claim:equiv_Steane}. \epsilonnd{proof} Let us consider $Z$-type error correction for the \codepar{23,1,7} code. 
Since the code is a perfect CSS code of distance 7, for each $s_x \in \mathbb{Z}_2^{11}$, $(s_x|0...0)$ is the syndrome of a unique $Z$-type error of weight $\leq 3$. Suppose that a codeword is corrupted by a $Z$-type error with syndrome $s_x$. If we apply the minimal weight error correction corresponding to $s_x$, we sometimes obtain the codeword with undesirable logical $Z$ operator. Fortunately, by knowing the weight parity of the error, the WPEC technique can be applied. The error correction procedure for the \codepar{23,1,7} code is defined as follows: \begin{definition}{Weight parity error correction for the \codepar{23,1,7} Golay code} Suppose a $Z$-type error $E$ occurs to a codeword of the \codepar{23,1,7} code. Let $s_x$ and $w$ be the syndrome and the weight of $E$, and let $E^z_{min}(s_x)$ be the unique minimal weight error correction corresponding to the syndrome $s_x$. The following procedure is called \epsilonmph{weight parity error correction (WPEC)}: \begin{enumerate} \item If $E^z_{min}(s_x)$ has even weight (0 or 2), apply $E^z_{min}(s_{x})$ to the data qubits whenever $w$ is even, or apply any $Z$-type operator $P$ that has odd weight and corresponds to $s_x$ to the data qubits whenever $w$ is odd. \item If $E^z_{min}(s_x)$ has odd weight (1 or 3), apply $E^z_{min}(s_{x})$ to the data qubits whenever $w$ is odd, or apply any $Z$-type operator $P$ that has even weight and corresponds to $s_x$ to the data qubits whenever $w$ is even. \epsilonnd{enumerate} \label{Def:WPEC_Golay} \epsilonnd{definition} Note that the \codepar{23,1,7} Golay code can be made cyclic, thus it can distinguish high-weight errors in consecutive form \cite{TCL20}. \cref{claim:equiv_Golay} together with the cyclic property give us some possibilities to construct an FTEC protocol for the \codepar{529,1,49} concatenated Golay code in the same way as what we have done for the \codepar{49,1,9} code. We expect that our technique can lead to a protocol which can correct a large number of faults and will compare well with other FTEC schemes. To reach this goal, syndrome extraction circuits with appropriate permutation of gates (and possibly with flag qubits) must be found so that conditions similar to those in \cref{claim:suff_cond} are satisfied. The search for such circuits with careful numerical verification of fault tolerance is a challenging and interesting future research direction. The WPEC technique may also apply to the code obtained from concatenating the Steane code to the $k$-th level, e.g., the \codepar{7^k,1,3^k} code. Since the $k$th-level Steane code is a distance-3 code, we expect that a block of errors in the $(k-1)$-th level can be determined and corrected using the syndrome and the block parity defined at the $k$-th level. Again, however, appropriate syndrome extraction circuits must be found, which is beyond the scope of this work. \section{Discussions and conclusions} \label{sec:Discussion} In this work, we prove the logical equivalence between errors of any weight on 7 qubits which have the same weight parity and correspond to the same error syndrome when error detection is performed by the \codepar{7,1,3} code in \cref{claim:equiv_Steane}. From this result, we introduce the WPEC technique in \cref{Def:WPEC}, which can correct errors of any weight on 7 qubits whenever their weight parity is known. We show that the WPEC technique can be extended to error correction in subblocks of the \codepar{49,1,9} code, and we prove the sufficient condition for WPEC in \cref{claim:suff_cond}. 
Afterwards, we provide a family of circuits and an FTEC protocol for the \codepar{49,1,9} code which can correct up to 3 faults. We also point out that the WPEC technique seems applicable to FTEC schemes for other codes, such as the concatenated Golay code and the concatenated Steane code with more than 2 levels of concatenation. Since the FTEC protocol provided in this work satisfies the definition of FTEC in \cref{Def:FaultTolerantDef} with $t=3$, we can guarantee that the logical error rate is suppressed to $O(p^4)$ whenever the physical error rate is $p$ under the random Pauli noise model. Note that we did not use the full ability of a distance-9 code which, in principle, can correct up to 4 errors. In terms of error suppression, our FTEC protocol is as good as typical FTEC protocols for a concatenated code, which are constructed by replacing each physical qubit with a code block and replacing each physical gate with the corresponding logical gate \cite{AGP06}. One major advantage of our FTEC protocol is that only 2 ancillas are needed: one ancilla for a syndrome measurement result and another ancilla for a flag measurement result (assuming that the qubit preparation and measurement are fast compared to the gate operation time). As a result, our protocol requires 51 qubits in total. The number of required qubits is very low compared to other FTEC protocols for the \codepar{49,1,9} code; the FTEC schemes in \cite{YK17,CR17a} extended to the \codepar{49,1,9} code require 63 qubits in total (the minimum number of required ancillas is 14, assuming that they are recyclable). Meanwhile, the FTEC protocol in \cite{Reichardt18}, which extracts multiple syndromes at once, encodes 2 logical qubits and requires no ancilla, but needs to work on two code blocks of 98 qubits in total. Our protocol might not have the fewest total number of qubits compared with protocols for a different code which can correct up to 3 faults; for example, the flag FTEC protocol in \cite{CR20} applied to the \codepar{37,1,7} 2D color code requires 45 qubits in total. Nevertheless, our work provides a substantial improvement over other FTEC protocols for a \emph{concatenated} code, an approach that is still advantageous in some circumstances. Furthermore, the use of weight parities in error correction may be extended to other families of codes as well \cite{TL21}. We believe that if the protocol requires fewer ancillas, the number of total locations will decrease, which can result in a higher accuracy threshold. However, a simulation with careful analysis is required for the accuracy threshold calculation, thus we leave this for future work. The protocol in \cref{sec:Protocol} which can correct up to 3 faults exploits two techniques: the flag technique, which partitions the set of possible errors using flag measurement results, and the WPEC technique, which corrects errors of any weight using their syndromes and weight parities. It should be emphasized that flag ancillas are not strictly required for a protocol exploiting the WPEC technique; we find that a protocol which uses circuits similar to the circuit in \cref{fig:circuit_lv2_permuted} for 2nd-level syndrome measurements and uses non-flag circuits for 1st-level measurements can correct up to 2 faults. We point out that the permutation of CNOT gates in the syndrome extraction circuits that makes the protocol satisfy \cref{claim:suff_cond} is not unique.
We choose the permutation in \cref{eq:CNOTperm} by using the fact that a CSS code constructed from classical cyclic codes can distinguish high-weight errors in consecutive form \cite{TCL20}. In particular, the circuit is designed in such a way that high-weight errors arising in each subblock can be determined by the underlying \codepar{7,1,3} code in cyclic form. We did not prove the optimality of the choice of gate permutation in our protocol, so an FTEC protocol for the \codepar{49,1,9} code with only one ancilla, or a protocol that can correct up to 4 faults, might be possible. Lastly, we note that the WPEC technique introduced in this work is not limited to the \codepar{49,1,9} code. In \cref{sec:WPEC_Golay}, we prove the logical equivalence of errors with the same syndrome and weight parity for the \codepar{23,1,7} Golay code in \cref{claim:equiv_Golay} and provide a WPEC scheme in \cref{Def:WPEC_Golay}, which shows that WPEC can correct some high-weight errors in a subblock of the \codepar{529,1,49} concatenated Golay code. In addition, we expect that WPEC can be applied to any concatenated Steane code with more than 2 levels of concatenation in a similar fashion. However, circuits and a protocol must be carefully designed so that the full error correction ability of the code can be achieved. Another interesting future direction would be extending the WPEC technique to other families of quantum codes. \appendix \section{Simulation of possible faults during the FTEC protocol assuming that the last round of full syndrome measurement has no faults} \label{app:sim1} As discussed in \cref{sec:Protocol}, in order to verify that the FTEC protocol for the $\codepar{49,1,9}$ code satisfies the FTEC conditions in \cref{Def:FaultTolerantDef}, we consider two separate cases: the case that there are some faults during the last round of full syndrome measurement, and the case that there are not. In this section, we provide details of a simulation showing that whenever the number of faults is at most 3 and none of the faults occurs during the last round, all possible fault combinations satisfy \cref{claim:suff_cond} and our protocol can correct errors on the data qubits. In our protocol, we perform full syndrome measurements until the outcome bundles are repeated 4 times in a row. Since there are at most 3 faults, the repetition condition will be satisfied within 16 rounds of full syndrome measurement. In this simulation, we assume that the last round of measurement has no faults; thus the high-weight error on the data qubits arising from at most 3 faults is accumulated from up to 15 rounds. We use the outcome bundle (syndromes and flag vector) obtained from the last round to determine the fault combination that causes the error, so that the corresponding weight parity can be found and WPEC can be performed. We first define the mathematical objects used in our simulation. A \emph{fault} is an object with two associated variables: a \emph{Pauli error} defined on the code block of 49 qubits arising from the fault, and a \emph{flag vector} $\in \mathbb{Z}^{21}_2$ which indicates the flag measurement results associated with the fault. There are 4 types of possible faults: faults during wait time (denoted by $W$), faults arising from the measurement of 1st-level and 2nd-level generators (denoted by $G_1$ and $G_2$, respectively), and flag measurement faults (denoted by $F$).
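For concreteness, the bookkeeping behind a fault and a fault combination (the latter is defined in the next paragraph) can be sketched as follows. This Python fragment is ours and purely illustrative; it only fixes the representation used in the simulation, restricted to $Z$-type data errors as in the discussion above.
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class Fault:
    """A single fault: the Z-type data error it leaves on the 49 qubits and
    the flag-measurement bits it triggers (one bit per 1st-level circuit)."""
    pauli_z: np.ndarray   # length-49 vector over GF(2)
    flags:   np.ndarray   # length-21 vector over GF(2)

def combine(faults):
    """Fault combination (up to 3 faults): multiply the Pauli errors
    (addition of Z supports mod 2) and add the flag vectors mod 2."""
    err, flag = np.zeros(49, dtype=int), np.zeros(21, dtype=int)
    for f in faults:
        err = (err + f.pauli_z) % 2
        flag = (flag + f.flags) % 2
    return err, flag
\end{verbatim}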
A \epsilonmph{fault combination} can be constructed by combining faults of same or different types up to 3 faults, i.e., multiplying their Pauli errors and adding their flag vectors. The errors on the input codeword can be considered as wait time faults in which associated Pauli errors do not propagate to other data qubits during the FTEC protocol. In addition, the $X$-type errors on the data qubits arising from the faults during the measurement of $Z$-type generators can be considered as wait time faults during the measurement of subsequent $X$-type generators, in which our simulation is also applicable. (Since the last round of measurement has no faults, we can assume that the syndromes obtained from the last round are correct and the syndrome measurement faults can be neglected.) Next, we define \epsilonmph{fault set} as follows: for faults of type $G_1$ (or type $G_2$), we denote $F^{G_1}_{i,j}$ (or $F^{G_2}_{i,j'}$) to be sets of possible $G_1$ (or $G_2$) faults arising from a circuit for measuring $g^z_j$, $j=1,\dots,21$ (or $\tilde{g}^z_{j'}$, $j'=1,2,3$) where the number of faults is $i \in \{0,1,2,3\}$ ($g^z_j$ refers to the generator $g^z_{(j-1)\text{ mod }3 +1}$ on the $\lceil j/3 \rceil$-th subblock). Also, we denote $F^{W}_{i}$ and $F^{F}_{i}$ to be sets of possible faults of type $W$ and $F$, respectively, where the number of faults is $i\in \{0,1,2,3\}$. In addition, we define \epsilonmph{fault set combination} to be a set of fault sets up to 3 sets. Last, let $v_{G_1},v_{G_2},v_{W},v_{F}$ be the number of faults of type $G_1,G_2,W,$ and $F$, respectively. $(v_{G_1},v_{G_2},v_{W},v_{F})$ that satisfies $v_{G_1}+v_{G_2}+v_{W}+v_{F} \leq 3$ is called \epsilonmph{fault number combination}. With the definitions of fault, fault combination, fault set, fault set combination, and fault number combination, now we are ready to describe the simulation. \textbf{Pseudocode for a simulation of possible faults assuming that the last round of full syndrome measurement has no faults} \begin{enumerate} \item Construct fault sets $F^{G_1}_{i,j}, F^{G_2}_{i,j'}, F^{W}_{i},$ and $F^{F}_{i}$ for all $i=0,1,2,3$, $j=1,\dots,21$, $j'=1,2,3$. \item Construct all possible fault number combinations that satisfy $v_{G_1}+v_{G_2}+v_{W}+v_{F} \leq 3$. \item For each $(v_{G_1},v_{G_2},v_{W},v_{F})$, find all possible fault set combinations from $v_{G_1},v_{G_2},v_{W},v_{F}$. Note that if $v_{G_1}$ is 2, the fault set combination can have $F^{G_1}_{i,j}$ and $F^{G_1}_{i',j'}$ with $i=i'=1$, or have $F^{G_1}_{i,j}$ with $i=2$. Also, if $v_{G_1}$ is 3, the fault set combination can have $F^{G_1}_{i,j}$, $F^{G_1}_{i',j'}$, and $F^{G_1}_{i'',j''}$ with $i=i'=i''=1$, or have $F^{G_1}_{i,j}$ and $F^{G_1}_{i',j'}$ with $i=2,i'=1$, or have $F^{G_1}_{i,j}$ with $i=3$. The same goes for $v_{G_2}$. \begin{enumerate} \item For each fault set combination, find all possible fault combinations. Each fault combination can be found by picking one fault from each fault set (up to 3 sets) in the fault set combination, then combine the faults to get the Pauli error $E$ and the cumulative flag vector $f_x$ associated with the fault combination. \item For each fault combination, find 1st-level syndrome $s_x$, 2nd-level syndrome $\tilde{s}_x$, block triviality $\tau_x$, and block parity $p_x$ from the associated Pauli error $E$. Store $(s_x,\tilde{s}_x,\tau_x,f_x,p_x)$ for each fault combination in a lookup table. 
\label{step:table} \epsilonnd{enumerate} \item After the lookup table is complete, categorize fault combinations by their 2nd-level syndromes and block trivialities in order to get $\mathcal{F}_k$'s as in \cref{claim:suff_cond}. \item For each $\mathcal{F}_k$, verify whether Condition 1 or 2 in \cref{claim:suff_cond} is satisfied. \epsilonnd{enumerate} From the simulation above, we find that all possible fault combinations satisfy \cref{claim:suff_cond}. That is, for each fault combination, we can determine the weight parity from the outcome bundles obtained from the last round of full syndrome measurement by looking at the table constructed in Step \ref{step:table}. The weight parity can be later used to perform WPEC on the code block. With this simulation result, we can verify our FTEC protocol for the \codepar{49,1,9} code satisfies FTEC conditions as previously discussed in \cref{sec:Protocol}. \section{Simulation of possible faults during the FTEC protocol assuming that the last round of full syndrome measurement has some faults} \label{app:sim2} In \cref{app:sim1}, we describe the simulation of possible faults during the FTEC protocol for the \codepar{49,1,9} code which is applicable to the case that there are no faults during the last round of full syndrome measurement. In this section, we will extend the ideas and construct a simulation of possible faults for the case that some faults occur during the last round. As previously described, we will perform full syndrome measurements in the protocol until the outcome bundles are repeated 4 times in a row. Now, suppose that the last round of full syndrome measurement has some faults. In this case, we cannot be sure whether the outcome bundle from the last round exactly corresponds to the error in the data qubits. Fortunately, since there are at most 3 faults during the whole protocol, at least one outcome bundle obtained from the last 4 rounds must be correct. Note that the outcome bundles from the last 4 rounds are identical. From the simulation result discussed in \cref{app:sim1}, the outcome bundle from the last round can be used to correct the data error occurred before \epsilonmph{any} correct round using the WPEC technique (see \cref{fig:sim2_p1} for more details). The goal of the simulation in this section is to verify that all possible fault combinations which can happen after the last correct round give data error of weight no more than 3. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{sim2_p1} \caption{At least one of the last 4 rounds of full syndrome measurement is correct since there are at most 3 faults. Because the outcome bundles from the last 4 rounds are identical, the outcome bundle from the last round can be used in WPEC to correct both errors $E_1$ and $E_2$ (even though $E_1$ and $E_2$ may not be equal).} \label{fig:sim2_p1} \epsilonnd{figure} A straightforward way to verify the claim above is to find all possible fault combinations and check the weight of their associated Pauli errors. Unfortunately, this process requires many computational resources. Thus, we will use ``relaxed conditions'' for the verification instead; for each fault combination, if the associated Pauli error and flag vector satisfy all relaxed conditions, the fault combination will be marked (indicating that the fault combination might cause the protocol to fail). 
We want to make sure that every fault combination that can cause the protocol to fail (i.e., whose associated error has weight more than 3) will be marked. Note that some fault combinations may be marked by the relaxed conditions but will not cause the protocol to fail. For this reason, all of the marked fault combinations must be examined after the simulation is done. We should note that the order of generator measurements is important for the fault tolerance of our FTEC protocol. Consider the protocol description in \cref{sec:Protocol}, in which the generators are measured in the following order during a single round of full syndrome measurement: all $\tilde{g}^z_i$'s, then all $\tilde{g}^x_i$'s, then all $g^z_i$'s, then all $g^x_i$'s. Let us first consider the errors that can be caught by the $g^x_i$ measurements of the last round. Observe that all $Z$-type data errors that arise before the $g^x_i$ measurements of the last round will be evaluated by the 1st-level syndrome $s_x$. However, some faults during the $g^x_i$ measurements of the last round may cause $X$-type or $Z$-type errors that will not be caught by any syndrome. Without loss of generality, we construct a simulation under the assumption that faults before the $g^x_i$ measurements of the last round can cause only $Z$-type errors, and faults during or after the $g^x_i$ measurements can cause $X$-type or $Z$-type errors. The simulation is also applicable to the case of $g^z_i$ measurements. Let $E$, $\tilde{E}_a$, and $\tilde{E}_b$ be the data errors arising from faults that occur before the last correct round among the last 4 rounds, faults that occur after the last correct round but before the $g^x_i$ measurements of the last round, and faults that occur during or after the $g^x_i$ measurements of the last round, respectively. The errors can be illustrated as follows: \begin{equation} \includegraphics[width=0.48\textwidth]{sim2_p2} \nonumber \end{equation} \noindent The outcome bundle obtained from the last round is equal to the outcome bundle obtained from the correct round and can be used to correct $E$. Thus, we would like to mark every fault combination that can occur after the correct round, corresponds to the trivial outcome bundle (since the outcome bundle obtained from the last round is the same as that obtained from the correct round), and corresponds to a Pauli error of weight more than 3. In particular, our relaxed conditions will examine 3 objects for each fault combination: the 1st-level syndrome, the cumulative flag vector, and the weight of the Pauli error. The mathematical objects used in this simulation are similar to those defined in \cref{app:sim1}. In addition, we consider syndrome measurement faults (denoted by $S$) as another type of fault in this simulation, since we assume that the syndrome measurements during the last 4 rounds can be faulty. Also, let $v_{G_{1a}}$ be the number of $G_1$ faults that occur before the $g^x_i$ measurements of the last round, and let $v_{G_{1b}}$ be the number of $G_1$ faults that occur during or after the $g^x_i$ measurements of the last round. A \emph{fault number combination} is a tuple $(v_{G_{1a}},v_{G_{1b}},v_{G_2},v_W,v_F,v_S)$ that satisfies $v_{G_{1a}}+v_{G_{1b}}+v_{G_2}+v_W+v_F+v_S\leq 3$. For the first relaxed condition, let us first assume that none of the faults of type $W$ occurs before or during the $g^x_i$ measurements of the last round.
For each $(v_{G_{1a}},v_{G_{1b}},v_{G_2},v_W,v_F,v_S)$, the error $\tilde{E}_a$ will be constructed from possible fault combinations that correspond to $v_{G_{1a}}$ and $v_{G_2}$. We will mark every fault combination whose associated $\tilde{E}_a$ gives a 1st-level syndrome that has Hamming weight no more than $v_S$ (where the Hamming weight is the number of 1's in a bitstring). This is because each fault of type $S$ can alter at most 1 syndrome bit. Now let us consider the case that some faults of type $W$ occur before or during the $g^x_i$ measurements. Each $W$ fault (which corresponds to an error of weight 1) can change at most 3 bits of $s_x$, but the change will affect only the subblock in which the fault acts nontrivially. We define the function $\sigma(\tilde{E}_a,v_W)$ by the following calculation: \begin{enumerate} \item Find the 1st-level syndrome of $\tilde{E}_a$ and calculate the Hamming weight of the syndrome for each subblock. \item Sort the Hamming weights from all subblocks. The function value is the sum of the $7-v_W$ smallest Hamming weights. \end{enumerate} The value of $\sigma(\tilde{E}_a,v_W)$ is the minimum Hamming weight of the 1st-level syndrome when $v_W$ faults of type $W$ occur. Taking all fault types into account, our first relaxed condition becomes \begin{equation} \sigma(\tilde{E}_a,v_W) \leq v_S. \label{eq:relaxed_1} \end{equation} For the second relaxed condition, we consider the cumulative flag vector associated with each fault combination. Note that a flag measurement result will be obtained during any $g^x_i$ or $g^z_i$ measurement. Let $f = (f_x|f_z)$ denote the cumulative flag vector associated with each fault combination, and let $h(f)$ denote the Hamming weight of $f$. Since each fault of type $F$ can alter at most 1 bit of $f$, our second relaxed condition becomes \begin{equation} h(f) \leq v_F. \label{eq:relaxed_2} \end{equation} For the third relaxed condition, we consider the weight of the Pauli error associated with each fault combination. The weight is evaluated at the end of the protocol, where the resulting error is caused by all faults of type $G_1$, $G_2$, and $W$ (errors arising during or after the $g^x_i$ measurements of the last round can be of $X$ type or $Z$ type). If $W$ faults do not occur before or during the $g^x_i$ measurements of the last round, the weight of the resulting error is the weight of $\tilde{E}_a\cdot\tilde{E}_b$. If they do, each $W$ fault can increase the total weight by at most 1. Hence, our third relaxed condition becomes \begin{equation} \mathrm{wt}(\tilde{E}_a\cdot\tilde{E}_b)+v_W > 3. \label{eq:relaxed_3} \end{equation} Note that the weight of $\tilde{E}_a\cdot\tilde{E}_b$ can be reduced by multiplication of some stabilizer, and the fault combination will not be marked unless \cref{eq:relaxed_3} is satisfied for every choice of stabilizer. Using the relaxed conditions in \cref{eq:relaxed_1,eq:relaxed_2,eq:relaxed_3}, our simulation to verify that all possible data errors arising after the correct round have weight no more than 3 can be constructed as follows: \textbf{Pseudocode for a simulation of possible faults assuming that the last round of full syndrome measurement has some faults} \begin{enumerate} \item Construct fault sets $F^{G_1}_{i,j}, F^{G_2}_{i,j'}$ for all $i=0,1,2,3$, $j=1,\dots,21$, $j'=1,2,3$. \item Construct all possible fault number combinations that satisfy $v_{G_{1a}}+v_{G_{1b}}+v_{G_2}+v_W+v_F+v_S\leq 3$.
\item For each $(v_{G_{1a}},v_{G_{1b}},v_{G_2},v_W,v_F,v_S)$, construct all possible fault set combinations from only $v_{G_{1a}}$, $v_{G_{1b}}$, and $v_{G_2}$. During the construction of each fault set combination, label fault sets that come from $v_{G_{1a}}$ or $v_{G_2}$ with letter $a$, and label fault sets that come from $v_{G_{1b}}$ with letter $b$. Note that if $v_{G_{1a}}$ is 2, the fault set combination can have $F^{G_1}_{i,j}$ and $F^{G_1}_{i',j'}$ with $i=i'=1$, or have $F^{G_1}_{i,j}$ with $i=2$. Also, if $v_{G_{1a}}$ is 3, the fault set combination can have $F^{G_1}_{i,j}$, $F^{G_1}_{i',j'}$, and $F^{G_1}_{i'',j''}$ with $i=i'=i''=1$, or have $F^{G_1}_{i,j}$ and $F^{G_1}_{i',j'}$ with $i=2,i'=1$, or have $F^{G_1}_{i,j}$ with $i=3$. The same goes for $v_{G_{1b}}$ and $v_{G_2}$. \begin{enumerate} \item For each fault set combination, find all possible fault combinations. Each fault combination can be found by picking one fault from each fault set (up to 3 sets) in the fault set combination. $\tilde{E}_a$ associated with each fault combination can be found by combining only faults from fault sets with label $a$, while $f$ and $\tilde{E}_a \cdot \tilde{E}_b$ can be found by combining faults from all fault sets. \begin{enumerate} \item For each fault combination, if \cref{eq:relaxed_1,eq:relaxed_2,eq:relaxed_3} are all satisfied, the fault combination will be marked. Note that for \cref{eq:relaxed_3}, the weight of $\tilde{E}_a \cdot \tilde{E}_b$ must be minimized by stabilizer multiplication. \epsilonnd{enumerate} \epsilonnd{enumerate} \epsilonnd{enumerate} From the simulation above, we find that there are 6 fault combinations which are marked by the relaxed conditions in \cref{eq:relaxed_1,eq:relaxed_2,eq:relaxed_3}. All of them correspond to the case that $v_{G_2} = 1$, $v_W = 2$, $v_{G_{1a}},v_{G_{1b}},v_F,v_S=0$, and their associated Pauli errors are trivial on 5 subblocks and have either $IIIIIIZ$ or $ZIIIIII$ on 2 subblocks. We find that $IIIIIIZ$ and $ZIIIIII$ correspond to 1st-level syndrome $(001)$ and $(100)$, respectively. Since $v_S = 0$, the associated 1st-level syndrome must be trivial whenever errors from $W$ faults are taken into account. This can happen only when errors from $W$ faults cancel with the aforementioned Pauli error, which means that the resulting error has weight 0. As a result, we find that all of the marked fault combinations cannot cause data error of weight higher than 3. Similar simulations can be done to show that whenever $v$ faults occur where $v=0,1,2$, the weight of the output error is at most $v$. This result verifies that the FTEC protocol for the \codepar{49,1,9} code satisfies FTEC conditions as previously discussed in \cref{sec:Protocol}. \epsilonnd{document}
\begin{document} \numberwithin{equation}{section} \newtheorem{thm}[equation]{Theorem} \newtheorem{pro}[equation]{Proposition} \newtheorem{prob}[equation]{Problem} \newtheorem{qu}[equation]{Question} \newtheorem{cor}[equation]{Corollary} \newtheorem{con}[equation]{Conjecture} \newtheorem{lem}[equation]{Lemma} \theoremstyle{definition} \newtheorem{ex}[equation]{Example} \newtheorem{defn}[equation]{Definition} \newtheorem{ob}[equation]{Observation} \newtheorem{rem}[equation]{Remark} \hyphenation{homeo-morphism} \title{Poincar\'e polynomials of a map and \\ a relative Hilali conjecture} \author{ Toshihiro Yamaguchi \ \ and \ \ Shoji Yokura} \thanks{2010 MSC: 06A06,18B35, 18B99, 54B99, 55P62, 55P99, 55N99.\\ Keywords: Hilali conjecture, Poincar\'e polynomial, rational homotopy theory.} \date{} \address{Faculty of Education, Kochi University, 2-5-1,Kochi, 780-8520, Japan} \email{[email protected]} \address{Department of Mathematics and Computer Science, Graduate School of Science and Engineering, Kagoshima University, 1-21-35 Korimoto, Kagoshima, 890-0065, Japan } \email{[email protected]} \maketitle \begin{abstract} In this paper we introduce homological and homotopical Poincar\'e polynomials $P_f(t)$ and $P^{\pi}_f(t)$ of a continuous map $f:X \to Y$ such that if $f:X \to Y$ is a constant map, or more generally, if $Y$ is contractible, then these Poincar\'e polynomials are respectively equal to the usual homological and homotopical Poincar\'e polynomials $P_X(t)$ and $P^{\pi}_X(t)$ of the source space $X$. Our relative Hilali conjecture $P^{\pi}_f(1) \leqq P_f(1)$ is a map version of the well-known Hilali conjecture $P^{\pi}_X(1) \leqq P_X(1)$ for a rationally elliptic space $X$. In this paper we show that, under the condition that $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ is not injective for some $i>0$, the relative Hilali conjecture holds for products of maps, namely, there exists a positive integer $n_0$ such that for $\forall n \geqq n_0$ the \emph{strict inequality $P^{\pi}_{f^n}(1) < P_{f^n}(1)$} holds, where $f^n:X^n \to Y^n$. In the final section we pose the question of whether a ``Hilali"-type inequality $HP^{\pi}_X(r_X) \leqq P_X(r_X)$ holds for a rationally hyperbolic space $X$, provided the homotopical Hilbert--Poincar\'e series $HP^{\pi}_X(t)$ converges at its radius of convergence $r_X$. \end{abstract} \section{Introduction} The most important and fundamental topological invariant in geometry and topology is the Euler--Poincar\'e characteristic $\chi(X)$, which is the alternating sum of the Betti numbers $\dim H_i(X;{\mathbb Q})$: $$\chi(X):= \sum_{i \geqq 0} (-1)^i\dim H_i(X;{\mathbb Q}) ,$$ provided that each $\dim H_i(X;{\mathbb Q})$ and $\chi(X)$ are both finite.
Similarly, for a topological space whose fundamental group is an Abelian group one can define the \emph{homotopical Betti number} $\dim (\pi_i(X)\otimes {\mathbb Q})$ where $i\geqq 1$ and the \emph{homotopical Euler--Poincar\'e characteristic}: $$\chi^{\pi}(X):= \sum_{i \geqq 1} (-1)^i\dim (\pi_i(X)\otimes {\mathbb Q}),$$ provided that each $\dim (\pi_i(X)\otimes {\mathbb Q})$ and $\chi^{\pi}(X)$ are both finite. The Euler--Poincar\'e characteristic is the special value of the Poincar\'e polynomial $P_X(t)$ at $t=-1$ and the homotopical Euler--Poincar\'e characteristic is the special value of the homotopical Poincar\'e polynomial $ P^{\pi}_X(t)$ at $t=-1$: $$P_X(t):= \sum_{i \geqq 0} t^i \dim H_i(X;{\mathbb Q}), \quad \chi(X) = P_X(-1),$$ $$ P^{\pi}_X(t):= \sum_{i \geqq 1} t^i \dim (\pi_i(X)\otimes {\mathbb Q}), \quad \chi^{\pi}(X) = P^{\pi}_X(-1).$$ Since we consider polynomials, besides the requirement that $\dim H_i(X;{\mathbb Q}) $ and $\dim \left (\pi_i(X) \otimes {\mathbb Q} \right) $ are each finite, we assume that there exist integers $n_0$ and $m_0$ such that $H_i(X;{\mathbb Q}) =0$ for $\forall i >n_0$ and $\pi_j(X)\otimes {\mathbb Q}=0$ for $\forall j >m_0$, which are equivalent to requiring that $$\dim H_*(X;{\mathbb Q} ) := \sum_{i \geqq 0} \dim H_i(X;{\mathbb Q}) < \infty, \quad \dim (\pi_*(X)\otimes {\mathbb Q} ) : = \sum_{i \geqq 1} \dim (\pi_i(X) \otimes {\mathbb Q}) < \infty.$$ Such a space $X$ is called \emph{rationally elliptic}. If we have $$\dim H_*(X;{\mathbb Q} ) < \infty, \quad \dim (\pi_*(X)\otimes {\mathbb Q} ) =\infty,$$ then such a space $X$ is called \emph{rationally hyperbolic}, because it follows (see \cite[Theorem 33.2]{FHT}) that there exist some $C >1$ and some positive integer $K$ such that $$\sum_{i\geqq 2}^k \dim (\pi_i(X) \otimes {\mathbb Q}) \geqq C^k, \quad k \geqq K.$$ From now on, unless otherwise stated, any topological space is assumed to be simply connected and of finite type (over $\mathbb Q$), i.e., the rational homology group is finitely generated for every dimension, $\dim H_i(X; \mathbb Q) < \infty$, which implies that $\dim \left (\pi_i(X) \otimes \mathbb Q) \right ) < \infty$ because it is well-known that a simply connected space has finitely generated homology groups in every dimension \emph{if and only if} it has finitely generated homotopy groups in every dimension (e.g., see \cite[16 Corollary, p.509]{Sp}). A very simple example of a non-simply connected space for which this statement does not hold is $S^2 \vee S^1$. The well-known Hilali conjecture \cite{Hil} claims that if $X$ is a simply connected rationally elliptic space, then $$\dim (\pi_*(X)\otimes {\mathbb Q} ) \leqq \dim H_*(X;{\mathbb Q} ), \quad \text{namely}, \quad P^{\pi}_X(1) \leqq P_X(1).$$ No counterexample to the Hilali conjecture has been so far found yet. In \cite{Yo} the second named author proved that for a simply connected rationally elliptic space $X$ the Hilali conjecture always holds ``modulo product", i.e., there exists a positive integer $n_{0}$ such that for $\forall \, \, n \geqq n_{0}$ \begin{equation}\label{hil-pro} \dim (\pi_*(X^n)\otimes {\mathbb Q} ) < \dim H_*(X^{n};{\mathbb Q} ), \, \text{i.e.,} \, P^{\pi}_{X^n}(1) < P_{X^n}(1). \end{equation} Here $X^n$ is the product $X^n =\underbrace{X \times \cdots \times X}_{n}$. 
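As a simple illustration of these notions and of (\ref{hil-pro}) (a standard computation which we recall for the reader's convenience), take $X = S^2$: then $H_*(S^2;{\mathbb Q})={\mathbb Q}$ in degrees $0$ and $2$ and $\pi_*(S^2)\otimes {\mathbb Q}={\mathbb Q}$ in degrees $2$ and $3$, so that $$P_{S^2}(t)=1+t^2, \qquad P^{\pi}_{S^2}(t)=t^2+t^3,$$ hence $\chi(S^2)=2$, $\chi^{\pi}(S^2)=0$, and the Hilali inequality holds with equality: $P^{\pi}_{S^2}(1)=2=P_{S^2}(1)$. Moreover, $\dim (\pi_*((S^2)^n)\otimes {\mathbb Q})=2n$ and $\dim H_*((S^2)^n;{\mathbb Q})=2^n$, so the strict inequality in (\ref{hil-pro}) holds for $\forall n \geqq 3$, while equality holds for $n=1,2$.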
In this paper we introduce the homological and homotopical Poincar\'e polynomials $P_f(t)$ and $P^{\pi}_f(t)$ of a continuous map $f:X \to Y$ and show that if $P_f(1) > 1$, i.e., there exists some integer $i >1$ such that $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ is not injective, then there exists a positive integer $n_{0}$ such that for $\forall \, \, n \geqq n_{0}$ the strict inequality $P^{\pi}_{f^n}(1) < P_{f^n}(1) $ holds, where $f^n:X^n \to Y^n$ is defined component-wise by $(f^n)(x_1, \cdots, x_n):=(f(x_1), \cdots, f(x_n))$. This result is a map version of the above result (\ref{hil-pro}). Motivated by the proof in \cite{Yo} of $P^{\pi}_{X^n}(1) < P_{X^n}(1)$, we propose a conjecture claiming that if $P_f(1)=1$, then $P_f^{\pi}(1)=0$; in other words, that if each homological homomorphism $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ is \emph{injective} for $\forall i >1$, then each homotopical homomorphism $\pi_i(f)\otimes \mathbb Q:\pi_i(X)\otimes \mathbb Q \to \pi_i(Y) \otimes \mathbb Q$ is \emph{injective} for $\forall i >1$. We remark that for this conjecture we assume that $X$ and $Y$ are rationally elliptic spaces, and that the conjecture is false if the homology rank of the target $Y$ is not finite, as shown by a counterexample later. In fact, as seen in Conjecture \ref{injective}, for the above conjecture we assume that the map $f:X \to Y$ is \emph{a rationally elliptic map} (see Definition \ref{k-cok-el} below). Ellipticity of a map $f:X \to Y$ is a weaker condition than requiring $X$ and $Y$ to be rationally elliptic, in which case $f$ is certainly a rationally elliptic map. In passing, we recall that the well-known Whitehead--Serre Theorem (e.g., see \cite[Theorem 8.6]{FHT}) claims that for simply connected spaces $X$ and $Y$, $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ is an \emph{isomorphism} for $\forall i >0$ if and only if $\pi_i(f) \otimes \mathbb Q:\pi_i(X)\otimes \mathbb Q \to \pi_i(Y) \otimes \mathbb Q$ is an \emph{isomorphism} for $\forall i >1$. In \cite{Yo}, to show the above (\ref{hil-pro}), we need to show that if $P_X(1)=1$, then $P^{\pi}_X(1)=0$, for which we use this Whitehead--Serre Theorem. In the final section we briefly discuss the case of hyperbolic spaces. For a hyperbolic space $X$ we have the homotopical Hilbert--Poincar\'e series $HP^{\pi}_X(t)$ instead of the polynomial $P^{\pi}_X(t)$. It is known (see \cite{Fe}) that the radius $r_X$ of convergence of $HP^{\pi}_X(t)$ is \emph{less than $1$}. It is well known that if $r$ denotes the radius of convergence of a power series $P(t)$, then $P(r)$ may or may not converge, depending on the series. So, when $HP^{\pi}_X(r_X)$ does converge, it seems to be an interesting question whether the following holds: $$HP^{\pi}_X(r_X) \leqq P_X(r_X),$$ which could be called \emph{``a Hilali conjecture in the hyperbolic case"}. \section{Homological and homotopical Poincar\'e polynomials of a map} Let $f:X \to Y$ be a continuous map of simply connected spaces $X$ and $Y$ of finite type.
For the homomorphisms $$H_i(f; \mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q),\quad \pi_i(f)\otimes \mathbb Q:\pi_i(X) \otimes \mathbb Q \to \pi_i(Y) \otimes \mathbb Q,$$ we have the following exact sequences of finite-dimensional $\mathbb Q$-vector spaces: \begin{equation}\label{hkk} 0 \to \operatorname{Ker} H_i(f; \mathbb Q) \to H_i(X; \mathbb Q) \to H_i(Y; \mathbb Q) \to \operatorname{Coker} H_i(f;\mathbb Q) \to 0 \quad \forall i \geqq 0, \end{equation} \begin{equation}\label{pikk} 0 \to \operatorname{Ker}(\pi_i(f)\otimes \mathbb Q) \to \pi_i(X)\otimes \mathbb Q \to \pi_i(Y)\otimes \mathbb Q \to \operatorname{Coker}(\pi_i(f)\otimes \mathbb Q) \to 0 \quad \forall i \geqq 2. \end{equation} Here we recall that $\operatorname{Coker}(T):= B/\operatorname{Im}(T)$ for a linear map $T: A \to B$ of vector spaces. Since $X$ and $Y$ are simply connected, they are path-connected as well (by definition), thus we have $$\xymatrix{ \mathbb Q \cong H_0(X;\mathbb Q) \ar[r]^{f_*}_{\cong} & H_0(Y;\mathbb Q) \cong \mathbb Q, } $$ so $\operatorname{Ker} H_0(f; \mathbb Q) = \operatorname{Coker}H_0(f; \mathbb Q)=0$. From (\ref{hkk}) and (\ref{pikk}) we get the following equalities: \begin{equation}\label{d-hkk} \operatorname{dim}(\operatorname{Ker} H_i(f; \mathbb Q)) - \operatorname{dim} H_i(X; \mathbb Q) + \operatorname{dim} H_i(Y; \mathbb Q) - \operatorname{dim}(\operatorname{Coker}H_i(f; \mathbb Q)) = 0 \quad \forall i \geqq 2, \end{equation} \begin{equation}\label{d-pikk} \operatorname{dim}(\operatorname{Ker}(\pi_i(f)\otimes \mathbb Q)) - \operatorname{dim}(\pi_i(X) \otimes \mathbb Q) + \operatorname{dim}(\pi_i(Y) \otimes \mathbb Q) - \operatorname{dim}(\operatorname{Coker}(\pi_i(f)\otimes \mathbb Q)) = 0 \quad \forall i \geqq 2. \end{equation} \begin{defn}\label{k-cok-el} Let $f:X \to Y$ be a continuous map of simply connected spaces $X$ and $Y$. \begin{enumerate} \item If $\operatorname{dim} \left (\operatorname{Ker} H_*(f; \mathbb Q) \right):= \sum_i \operatorname{dim} \left (\operatorname{Ker} H_i(f; \mathbb Q) \right) < \infty$ and $\operatorname{dim} \left (\operatorname{Ker} (\pi_*(f) \otimes \mathbb Q) \right) := \sum_i \operatorname{dim} \left (\operatorname{Ker} (\pi_i(f) \otimes \mathbb Q) \right ) < \infty$, then $f$ is called \emph{rationally elliptic with respect to kernel}. \item If $\operatorname{dim} \left (\operatorname{Coker} H_*(f; \mathbb Q) \right) :=\sum_i \operatorname{dim} \left ( \operatorname{Coker} H_i(f; \mathbb Q)\right )< \infty,$ and $\operatorname{dim} \left (\operatorname{Coker} (\pi_*(f) \otimes \mathbb Q) \right ):=\sum_i \operatorname{dim} \left (\operatorname{Coker} (\pi_i(f) \otimes \mathbb Q) \right )< \infty$, then $f$ is called \emph{rationally elliptic with respect to cokernel}. \item If the map $f$ is rationally elliptic with respect to both kernel and cokernel, $f$ is called \emph{rationally elliptic}. \end{enumerate} \end{defn} \begin{rem} Let $f:X \to Y$ be a continuous map of simply connected spaces $X$ and $Y$. \begin{enumerate} \item If $X$ is rationally elliptic, then $f$ is rationally elliptic with respect to kernel. \item If $Y$ is rationally elliptic, then $f$ is rationally elliptic with respect to cokernel. \item If both $X$ and $Y$ are rationally elliptic, then $f$ is rationally elliptic. \end{enumerate} \end{rem} In this connection we also give the corresponding ``hyperbolic" definitions for each of the above. \begin{defn} Let $f:X \to Y$ be a continuous map of simply connected spaces $X$ and $Y$.
\begin{enumerate} \item If $\operatorname{dim} \left(\operatorname{Ker} H_*(f; \mathbb Q) \right) < \infty$ and $\operatorname{dim} \left (\operatorname{Ker} (\pi_*(f) \otimes \mathbb Q) \right) = \infty$, then $f$ is called \emph{rationally hyperbolic with respect to kernel}. \item If $\operatorname{dim} \left (\operatorname{Coker} H_*(f; \mathbb Q) \right) < \infty$ and $\operatorname{dim} \left (\operatorname{Coker} (\pi_*(f) \otimes \mathbb Q) \right) = \infty$, then $f$ is called \emph{rationally hyperbolic with respect to cokernel}. \item If the map $f$ is rationally hyperbolic with respect to both kernel and cokernel, $f$ is called \emph{rationally hyperbolic}. \end{enumerate} \end{defn} \begin{rem}\label{hyperbol-rem} Let $f:X \to Y$ be a continuous map of simply connected spaces $X$ and $Y$. \begin{enumerate} \item If $f:X \to Y$ is rationally hyperbolic with respect to kernel, then the homotopy rank of $X$ is infinite. \item If $f:X \to Y$ is rationally hyperbolic with respect to cokernel, then the homotopy rank of $Y$ is infinite. \item If $f:X \to Y$ is rationally hyperbolic, then the homotopy rank of $X$ and that of $Y$ are both infinite. \end{enumerate} \end{rem} Motivated by the definition of Poincar\'e polynomials of topological spaces, it is reasonable to make the following definitions: \begin{defn} Let $f:X \to Y$ be a rationally elliptic map of simply connected spaces $X$ and $Y$. \begin{enumerate} \item (the homological ``Kernel" Poincar\'e polynomial of a map $f$) $$\operatorname{Ker} P_f(t):= \sum_{i\geqq 2} \operatorname{dim}(\operatorname{Ker} H_i(f; \mathbb Q) ) t^i.$$ \item (the homotopical ``Kernel" Poincar\'e polynomial of a map $f$) $$\operatorname{Ker}P^{\pi}_f(t):= \sum_{i\geqq 2} \operatorname{dim}(\operatorname{Ker}(\pi_i(f)\otimes \mathbb Q)) t^i.$$ \item (the homological ``Cokernel" Poincar\'e polynomial of a map $f$) $$\operatorname{Cok}P_f(t):= \sum_{i\geqq 2} \operatorname{dim}(\operatorname{Coker} H_i(f; \mathbb Q) ) t^i.$$ \item (the homotopical ``Cokernel" Poincar\'e polynomial of a map $f$) $$\operatorname{Cok}P^{\pi}_f(t):= \sum_{i\geqq 2} \operatorname{dim}(\operatorname{Coker}(\pi_i(f)\otimes \mathbb Q)) t^i.$$ \end{enumerate} \end{defn} With these definitions, if $X$ and $Y$ are both rationally elliptic, then it follows from (\ref{d-hkk}) and (\ref{d-pikk}) that the following equalities hold: \begin{equation}\label{kppc} \operatorname{Ker}P_f(t) - P_X(t) + P_Y(t) - \operatorname{Cok}P_f(t) =0, \end{equation} \begin{equation}\label{pi-kppc} \operatorname{Ker}P^{\pi}_f(t) - P^{\pi}_X(t) + P^{\pi}_Y(t) - \operatorname{Cok}P^{\pi}_f(t) =0. \end{equation} If $H_i(f; \mathbb Q)$ and $\pi_i(f) \otimes \mathbb Q$ are surjective for all $i \geqq 2$, then $\operatorname{Coker} H_i(f; \mathbb Q) = \operatorname{Coker} (\pi_i(f)\otimes \mathbb Q)=0$, thus we have \begin{equation} \operatorname{Ker}P_f(t) - P_X(t) + P_Y(t)=0, \end{equation} \begin{equation} \operatorname{Ker}P^{\pi}_f(t) - P^{\pi}_X(t) + P^{\pi}_Y(t)=0. \end{equation} In particular, when $Y$ is contractible, since $P_Y(t)=1$ and $P^{\pi}_Y(t)=0$, we have \begin{equation}\label{cont-h} P_X(t) = 1 + \operatorname{Ker}P_f(t), \end{equation} \begin{equation}\label{cont-pi} P^{\pi}_X(t) = \operatorname{Ker} P^{\pi}_f(t). \end{equation} In this paper we focus mainly on continuous rationally elliptic maps with respect to kernel.
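To illustrate these notions with a concrete case (this example is added here for the reader's convenience and is not taken from the original text; it is a standard computation with the rational homotopy and homology of spheres): let $f=\mathrm{pr}: S^2 \times S^4 \to S^4$ be the projection onto the second factor. Then $$\operatorname{Ker}P_f(t)=t^2+t^6, \quad \operatorname{Cok}P_f(t)=0, \quad \operatorname{Ker}P^{\pi}_f(t)=t^2+t^3, \quad \operatorname{Cok}P^{\pi}_f(t)=0,$$ since $H_*(f;\mathbb Q)$ kills the classes in degrees $2$ and $6$ and $\pi_*(f)\otimes \mathbb Q$ kills $\pi_*(S^2)\otimes \mathbb Q$, which is $\mathbb Q$ in degrees $2$ and $3$. Both $S^2\times S^4$ and $S^4$ are rationally elliptic, so $f$ is a rationally elliptic map, and one checks directly that (\ref{kppc}) and (\ref{pi-kppc}) hold: for instance, $(t^2+t^6)-(1+t^2+t^4+t^6)+(1+t^4)-0=0$.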
Let $f:X \to Y$ be a continuous rationally elliptic map with respect to kernel of simply connected spaces $X$ and $Y$. We define the following: \begin{defn}[Homological Poincar\'e polynomial of a map] \begin{equation} P_f(t):= 1 + \operatorname{Ker} P_f(t) = 1 + \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} H_i(f; \mathbb Q) \right). \end{equation} \end{defn} \begin{defn}[Homotopical Poincar\'e polynomial of a map] \begin{equation} P^{\pi}_f(t):= \operatorname{Ker} P^{\pi}_f(t) = \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} (\pi_i(f) \otimes \mathbb Q) \right). \end{equation} \end{defn} From (\ref{cont-h}) and (\ref{cont-pi}), if $Y$ is contractible, then we have \begin{equation} P_f(t)= P_X(t), \quad P^{\pi}_f(t)= P^{\pi}_X(t). \end{equation} \section{The relative Hilali conjecture on products of maps} In our previous paper \cite{YaYo} we made the following conjecture, called \emph{the relative Hilali conjecture}: \begin{con} For a continuous map $f:X \to Y$ of simply connected elliptic spaces $X$ and $Y$, $P^{\pi}_f(1) \leqq P_f(1)$ holds. Namely, the following inequality holds: $$\sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} (\pi_i(f) \otimes \mathbb Q) \right) \leqq 1 + \sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} H_i(f; \mathbb Q) \right ).$$ \end{con} When $Y$ is a point or contractible, the above relative Hilali conjecture is nothing but the following well-known Hilali conjecture \cite{Hil}: \begin{con} For a simply connected elliptic space $X$, $P^{\pi}_X(1) \leqq P_X(1)$ holds. Namely, the following inequality holds: $$\sum_{i\geqq 2} \operatorname{dim}(\pi_i(X) \otimes \mathbb Q) \leqq 1 + \sum_{i\geqq 2} \operatorname{dim} H_i(X; \mathbb Q).$$ \end{con} \begin{rem} We note that in the Hilali conjecture the inequality $\leqq$ cannot be replaced by the strict inequality $<$. Indeed, for example, if $X=S^{2k}$ is an even-dimensional sphere, we have $$\pi_i(S^{2k}) \otimes {\mathbb Q} =\begin{cases} {\mathbb Q} & \, i=2k\\ {\mathbb Q} & \, i=4k-1\\ \, 0 & \, i\not =2k, 4k-1. \end{cases} $$ Thus we have $P^{\pi}_{S^{2k}}(t) = t^{4k-1} + t^{2k} \, \text{and} \, P_{S^{2k}}(t) = t^{2k} +1.$ Hence $P^{\pi}_{S^{2k}}(1) = P_{S^{2k}}(1) = 2.$ \end{rem} In \cite{ZCH} (cf. \cite{CHHZ}) A. Zaim, S. Chouingou and M. A. Hilali have proved the above relative Hilali conjecture in some cases. Since we defined the notion of a rationally elliptic map with respect to kernel in the previous section, in the original version of this paper we speculated that the above relative Hilali conjecture could be further generalized as follows: \emph{``(A generalized relative Hilali conjecture): Let $f:X \to Y$ be a continuous rationally elliptic map with respect to kernel of simply connected spaces $X$ and $Y$. Then $P^{\pi}_f(1) \leqq P_f(1)$ holds." } It turns out that this conjecture is false due to the following counterexample, which was given by the referee: \begin{ex}\label{c-ex} Consider the following map $$f: S^4 \times S^6 \to K(\mathbb Q, 4) \times K(\mathbb Q, 6)$$ which is defined by $f:= a \times b$. Here $a:S^4 \to K(\mathbb Q, 4)$ is such that $[a] \in [S^4, K(\mathbb Q, 4)]=H^4(S^4; \mathbb Q)= \mathbb Q$ is a generator, and similarly for $b: S^6 \to K(\mathbb Q, 6)$. Then we have $$ P^{\pi}_f(1) = \dim (\operatorname{Ker} (\pi_*(f) \otimes \mathbb Q)) = 2, \quad P_f(1)=1, \, \, \text{i.e.,} \, \, \dim \left (\operatorname{Ker} (H_*(f; \mathbb Q)) \right)= 0.$$ Thus $P^{\pi}_f(1) \not \leqq P_f(1)$.
\end{ex} Here we note that in this counterexample $\dim H_*(K(\mathbb Q, 4) \times K(\mathbb Q, 6); \mathbb Q) =\infty$ although we have that $\dim \left (\pi_*(K(\mathbb Q, 4) \times K(\mathbb Q, 6)) \otimes \mathbb Q \right) <\infty$. So, if in the above generalized relative Hilali conjecture we add the requirement that the homology rank of the target $Y$ is finite, then it follows from (\ref{kppc}) with $t=1$ that the homology rank of the source $X$ is automatically finite. If we furthermore require that the target $Y$ is rationally elliptic, then it follows from (\ref{kppc}) and (\ref{pi-kppc}) with $t=1$ that the source $X$ is automatically rationally elliptic as well, and the conjecture becomes the original relative Hilali conjecture. So, we would like to pose the following slightly modified conjecture: \begin{con} (A generalized relative Hilali conjecture) \label{grhc} Let $f:X \to Y$ be a continuous rationally elliptic map with respect to kernel of simply connected spaces $X$ and $Y$. If the homology rank of the target $Y$ is finite, then $P^{\pi}_f(1) \leqq P_f(1)$ holds. \end{con} In \cite{Yo} (cf. \cite{Yo2}) the second named author has proved the following: \begin{thm}[\emph{Hilali conjecture ``modulo product"}]\label{hilali-pro} Let $X$ be a rationally elliptic space such that its fundamental group is an Abelian group. Then there exists some integer $n_0$ such that for all $n\geqq n_0$ the strict inequality $ P^{\pi}_{X^n}(1) < P_{X^n}(1)$ holds, i.e., \begin{equation}\label{pro-hilali} \operatorname{dim} \left (\pi_*(X^n)\otimes {\mathbb Q} \right ) < \operatorname{dim} H_*(X^{n};{\mathbb Q} ). \end{equation} \end{thm} In this section, as a ``map version" of the above theorem, we show the following theorem, in which we do not require that the homology rank of the target $Y$ is finite (which, as explained above, would automatically imply that the homology rank of the source $X$ is finite); instead we require directly that the homology rank of the source $X$ is finite: \begin{thm}[A generalized relative Hilali conjecture ``modulo product"]\label{main} Let $f:X \to Y$ be a continuous rationally elliptic map with respect to kernel of simply connected spaces $X$ and $Y$ such that the homology rank of the source $X$ is finite. If $P_f(1) > 1$, i.e., there exists some integer $i$ such that $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ is not injective, then there exists some integer $n_0$ such that for all $n\geqq n_0$ the strict inequality $P^{\pi}_{f^n}(1) < P_{f^n}(1)$ holds, i.e., \begin{equation}\label{pro-rel-hilali} \sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} (\pi_i(f^n) \otimes \mathbb Q) \right )< 1 + \sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} H_i(f^n; \mathbb Q) \right ). \end{equation} \end{thm} \begin{rem} Note that if $Y$ is contractible, then the formula (\ref{pro-rel-hilali}) becomes the formula (\ref{pro-hilali}). In this case, the above requirement $P_f(1) > 1$ becomes $P_X(1) >1$, which can be dropped, namely, $P_X(1)=1$ is allowed. As explained in the introduction, by using the Whitehead--Serre Theorem we can show that if $P_X(1)=1$, then $P_X^{\pi}(1)=0$. Thus for all $n \geqq n_0=1$ we have $0=n(P^{\pi}_X(1))= P^{\pi}_{X^n}(1) < P_{X^n}(1) = (P_X(1))^n=1$. \end{rem} \begin{rem} In the above Theorem \ref{main} we pose the condition that the homology rank of the source $X$ is finite.
This is needed so that any product $f^n:X^n \to Y^n$ is also rationally elliptic with respect to kernel, so that we can consider the Poincar\'e polynomial $P_{f^n}(t)$ and the finite integer $P_{f^n}(1)$. The crucial condition is that $P_f(1) > 1$, i.e., $\dim \left (\operatorname{Ker} (H_*(f; \mathbb Q)) \right) \not = 0$, unlike in the above counterexample Example \ref{c-ex}. If in the theorem we drop the condition that the homology rank of the source $X$ is finite, then $\sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} (H_i(f^n; \mathbb Q) ) \right ) =\infty$ can happen and in this case $P_{f^n}(t)$ becomes \emph{a Hilbert--Poincar\'e power series $HP_{f^n}(t)$, not a polynomial}. In this case the above strict inequality (\ref{pro-rel-hilali}) automatically holds because the left-hand side is always finite and the right-hand side is $\infty$. In this sense, we could drop the condition that the homology rank of the source $X$ is finite, if we are allowed to understand $P_{f^n}(t)$ as the Hilbert--Poincar\'e series $HP_{f^n}(t)$ for the obvious strict inequality $P^{\pi}_{f^n}(1) < P_{f^n}(1) =\infty$. \end{rem} A key ingredient for the proof of the above Theorem \ref{hilali-pro} is the following \emph{multiplicativity} of the homological Poincar\'e polynomial and \emph{additivity} of the homotopical Poincar\'e polynomial: \begin{equation}\label{x+} P_{X \times Y}(t) = P_X(t) \times P_Y(t), \quad P^{\pi}_{X \times Y}(t) = P^{\pi}_X(t) + P^{\pi}_Y(t). \end{equation} In order to prove the above Theorem \ref{main}, we first show the following ``map version" of the above multiplicativity and additivity (\ref{x+}): \begin{pro}\label{prop-multi} For two rationally elliptic maps with respect to kernel $f_1:X_1 \to Y_1, f_2:X_2 \to Y_2$, where $X_i, Y_i (i=1,2)$ are simply connected spaces such that both $X_1$ and $X_2$ have finite homology rank, we have the following formulas: \begin{enumerate} \item $P^{\pi}_{f_1 \times f_2}(t) = P^{\pi}_{f_1}(t) + P^{\pi}_{f_2}(t)$ for all $t$, \item $P_{f_1}(t) \times P_{f_2}(t) \leqq P_{f_1 \times f_2}(t)$ for all $t \geqq 0$. \end{enumerate} \end{pro} \begin{proof} The proof is straightforward, but we give it for the sake of completeness. First we observe that $\pi_i(f_1 \times f_2) \otimes \mathbb Q: \pi_i(X_1 \times X_2) \otimes \mathbb Q \to \pi_i(Y_1 \times Y_2) \otimes \mathbb Q$ is the same as $$(\pi_i(f_1) \otimes \mathbb Q) \oplus (\pi_i(f_2) \otimes \mathbb Q): (\pi_i(X_1) \otimes \mathbb Q) \oplus (\pi_i(X_2) \otimes \mathbb Q) \to (\pi_i(Y_1) \otimes \mathbb Q) \oplus (\pi_i(Y_2) \otimes \mathbb Q). $$ Hence $$\operatorname{Ker} \bigl(\pi_i(f_1 \times f_2) \otimes \mathbb Q \bigr ) = \operatorname{Ker} \bigl(\pi_i(f_1) \otimes \mathbb Q \bigr) \oplus \operatorname{Ker} \bigl(\pi_i(f_2) \otimes \mathbb Q \bigr),$$ which implies \begin{equation}\label{eq-pi} \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f_1 \times f_2) \otimes \mathbb Q \bigr ) \right )= \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f_1) \otimes \mathbb Q \bigr) \right) + \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f_2) \otimes \mathbb Q \bigr) \right ).
\end{equation} Thus $\operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_*(f_1) \otimes \mathbb Q \bigr) \right) <\infty $ and $\operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_*(f_2) \otimes \mathbb Q \bigr) \right ) <\infty$ imply that $$\operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_*(f_1 \times f_2) \otimes \mathbb Q \bigr ) \right ) < \infty.$$ Since the homology rank of $X_i \, (i=1,2)$ is finite, i.e., $\operatorname{dim} H_*(X_i;\mathbb Q) < \infty \, (i=1,2)$, we have that $\operatorname{dim} \left (\operatorname{Ker} H_*(f_1 \times f_2;\mathbb Q ) \right )<\infty$, because $H_*(X_1 \times X_2; \mathbb Q) \cong H_*(X_1; \mathbb Q) \otimes H_*(X_2; \mathbb Q)$, thus $\operatorname{dim} H_*(X_1 \times X_2;\mathbb Q)<\infty$. Therefore the product $f_1 \times f_2: X_1 \times X_2 \to Y_1 \times Y_2$ is also a rationally elliptic map with respect to kernel. (1) From (\ref{eq-pi}) above we get \begin{align*} P^{\pi}_{f_1 \times f_2}(t) & = \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f_1 \times f_2) \otimes \mathbb Q \bigr ) \right )\\ & = \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f_1) \otimes \mathbb Q \bigr) \right) + \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f_2) \otimes \mathbb Q \bigr) \right) \\ & = P^{\pi}_{f_1}(t) + P^{\pi}_{f_2}(t). \end{align*} (2) $H_i(f_1 \times f_2;\mathbb Q): H_i(X_1 \times X_2; \mathbb Q) \to H_i(Y_1 \times Y_2; \mathbb Q)$ can be expressed as follows by the K\"unneth theorem: $$H_i(f_1 \times f_2;\mathbb Q): \sum_{i=j+k} H_j(X_1;\mathbb Q) \otimes H_k(X_2;\mathbb Q) \to \sum_{i=j+k}H_j(Y_1;\mathbb Q) \otimes H_k(Y_2;\mathbb Q).$$ Since $X_i$ and $Y_i$ ($i=1, 2$) are simply connected, the products $X_1 \times X_2$ and $Y_1 \times Y_2$ are also simply connected. Hence $ \operatorname{Ker} H_0(f_1 \times f_2; \mathbb Q) = \operatorname{Ker} H_1(f_1 \times f_2; \mathbb Q) =0.$ For $i \geqq 2$, we have the following inclusion (*): \begin{align*} \label{sum} \Bigl ( \operatorname{Ker} H_i(f_1;\mathbb Q) & \otimes H_0(X_2;\mathbb Q) \Bigr ) \oplus \Bigl (H_0(X_1;\mathbb Q) \otimes \operatorname{Ker} H_i(f_2;\mathbb Q) \Bigr) \oplus \\ & \hspace{2cm} \sum_{i=j+k, j\geqq 2, k \geqq 2} \operatorname{Ker} H_j(f_1;\mathbb Q) \otimes \operatorname{Ker} H_k(f_2;\mathbb Q) \\ & \hspace{2cm} \subset \operatorname{Ker} H_i(f_1 \times f_2; \mathbb Q). \end{align*} Clearly $$\sum_{i=j+k, j\geqq 2, k \geqq 2} \operatorname{Ker} H_j(f_1;\mathbb Q) \otimes H_k(X_2; \mathbb Q) \, \, + \sum_{i=j+k, j\geqq 2, k \geqq 2} H_j(X_1; \mathbb Q) \otimes \operatorname{Ker} H_k(f_2;\mathbb Q) $$ is also contained in $\operatorname{Ker} H_i(f_1 \times f_2; \mathbb Q)$, and one could probably obtain a complete description of $\operatorname{Ker} H_i(f_1 \times f_2; \mathbb Q)$, but for our purpose we do not need to do so and the above inclusion (*) is sufficient. Taking dimensions in (*), we get, for $i \geqq 2$, \begin{align*} \operatorname{dim} \left ( \operatorname{Ker} H_i(f_1;\mathbb Q) \right ) & + \operatorname{dim} \left (\operatorname{Ker} H_i(f_2;\mathbb Q) \right ) \\ & + \sum_{i=j+k, j\geqq 2, k \geqq 2} \operatorname{dim} \left (\operatorname{Ker} H_j(f_1;\mathbb Q) \right ) \times \operatorname{dim}\left (\operatorname{Ker} H_k(f_2;\mathbb Q) \right )\\ & \leqq \operatorname{dim} \left (\operatorname{Ker} H_i(f_1 \times f_2; \mathbb Q) \right ).
\end{align*} Therefore we have that for each $i \geqq 2$ and $t \geqq 0$: \begin{align*} t^i\operatorname{dim} \left ( \operatorname{Ker} H_i(f_1;\mathbb Q) \right ) &+ t^i\operatorname{dim} \left ( \operatorname{Ker} H_i(f_2;\mathbb Q) \right ) \\ & + \sum_{i=j+k, j\geqq 2, k \geqq 2} t^j\operatorname{dim} \left (\operatorname{Ker} H_j(f_1;\mathbb Q) \right ) \times t^k \operatorname{dim} \left (\operatorname{Ker} H_k(f_2;\mathbb Q) \right )\\ & \leqq t^i\operatorname{dim} \left ( \operatorname{Ker} H_i(f_1 \times f_2; \mathbb Q) \right ). \end{align*} Therefore we have \begin{align*} P_{f_1 \times f_2}(t) &= 1 + \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} H_i(f_1 \times f_2; \mathbb Q) \right )\\ & \geqq 1 + \sum_{i\geqq 2} t^i \operatorname{dim} \left (\operatorname{Ker} H_i(f_1; \mathbb Q) \right )+ \sum_{i\geqq 2} t^i \operatorname{dim} \left ( \operatorname{Ker} H_i(f_2; \mathbb Q) \right )\\ & \hspace{1cm} + \sum_{i\geqq 4} \Bigl ( \sum_{i=j+k, j\geqq 2, k\geqq 2} t^j\operatorname{dim} \left (\operatorname{Ker} H_j(f_1; \mathbb Q) \right )\times t^k\operatorname{dim} \left (\operatorname{Ker} H_k(f_2; \mathbb Q)\right ) \Bigr ) \\ & = \Bigl (1 + \sum_{j\geqq 2} t^j \operatorname{dim} \left (\operatorname{Ker} H_j(f_1; \mathbb Q) \right ) \Bigr ) \times \Bigl (1 + \sum_{k\geqq 2} t^k \operatorname{dim} \left (\operatorname{Ker} H_k(f_2; \mathbb Q) \right )\Bigr)\\ & = P_{f_1}(t) \times P_{f_2}(t). \end{align*} Hence we have $ P_{f_1}(t) \times P_{f_2}(t) \leqq P_{f_1 \times f_2}(t)$ for $\forall t \geqq 0$. \end{proof} \begin{rem} The equality $ P_{f_1}(t) \times P_{f_2}(t) = P_{f_1 \times f_2}(t)$ does not hold in general. However, in order to prove Theorem \ref{main} the above inequality (2) of Proposition \ref{prop-multi} is sufficient . \end{rem} \begin{cor}\label{cor} Let $f:X \to Y$ be a continuous rationally elliptic map with respect to kernel of simply connected spaces $X$ and $Y$ such that the homology rank of $X$ is finite. Then we have \begin{enumerate} \item $P^{\pi}_{f^n}(t) = n \bigl (P^{\pi}_{f}(t) \bigr)$ for $\forall t$ \item $ \bigl (P_{f}(t) \bigr)^n \leqq P_{f^n}(t)$ for $\forall t \geqq 0$. \end{enumerate} \end{cor} \begin{rem} Note that in (2) of Corollary \ref{cor} we do need $\forall t \geqq 0$. \end{rem} \begin{cor}\label{cor-1} Let the setup be as in Proposition \ref{prop-multi}. Suppose that $P^{\pi}_{f_i}(1) \leqq P_{f_i}(1) \, (i=1,2)$. Then $P^{\pi}_{f_1 \times f_2}(1) \leqq P_{f_1 \times f_2}(1)$ in the following cases: \begin{enumerate} \item $P_{f_i}(1) \geqq 2$ for $i=1,2$, \item $P^{\pi}_{f_1}(1) = 0$ or $P^{\pi}_{f_2}(1) = 0$. \end{enumerate} In particular, if the relative Hilali conjecture holds for $f_1$ and $f_2$, then it also holds for the product $f_1 \times f_2$ in the above two cases. \end{cor} \begin{proof} First we note that $P_{f_i}(1) \geqq 1 \, (i=1,2)$ by the definition. \begin{enumerate} \item \begin{align*} P^{\pi}_{f_1 \times f_2}(1) & = P^{\pi}_{f_1}(1) + P^{\pi}_{f_2}(1) \\ & \leqq P_{f_1}(1) + P_{f_2}(1) \\ & \leqq P_{f_1}(1) \times P_{f_2}(1) \quad \text{(since $P_{f_i}(1) \geqq 2$ \, $(i=1,2)$ and (*) below)} \\ & \leqq P_{f_1 \times f_2}(1) \end{align*} (*) If $a, b \geqq 2$, then $ab -a -b = (a-1)(b-1)-1 \geqq 0$ because $a-1\geqq 1, b-1\geqq 1$. \item For example, we let $P^{\pi}_{f_1}(1) = 0$. 
Then we have \begin{align*} P^{\pi}_{f_1 \times f_2}(1) & = P^{\pi}_{f_1}(1) + P^{\pi}_{f_2}(1) \\ & = P^{\pi}_{f_2}(1) \leqq P_{f_2}(1) \\ & \leqq P_{f_1}(1) \times P_{f_2}(1) \quad \text{(since $P_{f_1}(1) \geqq 1$)} \\ & \leqq P_{f_1 \times f_2}(1) \end{align*} \end{enumerate} \end{proof} \begin{rem} The cases not treated in Corollary \ref{cor-1} are those where $P_{f_i}(1)=1$ and $P^{\pi}_{f_i}(1) \not =0$ for at least one $i \in \{1,2\}$. For example, let $P_{f_1}(1)=1$. Then since $0 \not = P^{\pi}_{f_1}(1) \leqq P_{f_1}(1) =1$, we have $P_{f_1}(1)= P^{\pi}_{f_1}(1)=1.$ In this case, at the moment, we do not know whether $P^{\pi}_{f_1 \times f_2}(1) \leqq P_{f_1 \times f_2}(1)$ or not. $P_{f}(1) =1$ means that $\operatorname{Ker} H_*(f;\mathbb Q)=0$, i.e., $H_*(f;\mathbb Q): H_*(X;\mathbb Q) \to H_*(Y;\mathbb Q)$ is injective, and $P^{\pi}_{f}(1) =1$ means that $\pi_*(f) \otimes \mathbb Q: \pi_*(X) \otimes \mathbb Q \to \pi_*(Y) \otimes \mathbb Q$ is \emph{not} injective; thus for this map $f_1$ the homological injectivity does not imply the homotopical injectivity. If we could show that the homological injectivity implies the homotopical injectivity, i.e., that $P_{f_1}(1) =1$ implies $P^{\pi}_{f_1}(1) =0$, then we would be in the above second case (2). We will discuss this injectivity problem later. \end{rem} Now we give a proof of Theorem \ref{main}. \begin{proof} If $P_{f}(1) >1$, i.e., there exists some integer $i \geqq 2$ such that the homomorphism $H_i(f;\mathbb Q): H_i(X; \mathbb Q) \to H_i(Y; \mathbb Q)$ is not injective, then, whatever the value $P^{\pi}_{f}(1)$ is, there exists some integer $n_0$ such that for all $n \geqq n_0$ $$n(P^{\pi}_{f}(1)) < (P_{f}(1))^n.$$ Indeed, since $P_{f}(1) >1$, we have that $\frac{1}{P_{f}(1)}<1$. It follows from the elementary fact in calculus that ``$|r| <1 \Rightarrow \displaystyle \lim_{n \to \infty} nr^n=0$" that we have $$\lim_{n \to \infty} n \Bigl (\frac{1}{P_f(1)} \Bigr)^n = 0.$$ Therefore, whatever the value $P^{\pi}_f(1)$ is, we obtain $$\lim_{n \to \infty} n P^{\pi}_f(1)\Bigl (\frac{1}{P_f(1)} \Bigr)^n = \lim_{n \to \infty} \frac{nP^{\pi}_f(1)}{(P_f(1))^n} = 0.$$ Hence there exists some integer $n_0$ such that for all $n \geqq n_0$ $$\frac{nP^{\pi}_f(1)}{(P_f(1))^n} < 1,$$ which implies, using (2) of Corollary \ref{cor}, that $$P^{\pi}_{f^n}(1) = n(P^{\pi}_{f}(1)) < (P_{f}(1))^n \leqq P_{f^n}(1).$$ Therefore we can conclude that there exists some integer $n_0$ such that for all $n \geqq n_0$ $$P^{\pi}_{f^n}(1) < P_{f^n}(1).$$ \end{proof} As one can see, in the above proof, the requirement $P_f(1) > 1$, i.e., the non-injectivity of $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ for some $i$, is crucial. If we could show that the injectivity of each \emph{homological} homomorphism $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ implies the injectivity of each \emph{homotopical} homomorphism $\pi_i(f) \otimes \mathbb Q :\pi_i(X)\otimes \mathbb Q \to \pi_i(Y) \otimes \mathbb Q$, then in the remaining case $P_f(1)=1$ we would have $0=P^{\pi}_f(1) < P_f(1)= 1$, so the strict inequality would hold with $n_0=1$, that is, for all $n \geqq 1$. But, as seen in the counterexample Example \ref{c-ex}, in the set-up of Theorem \ref{main}, the injectivity of each $H_i(f;\mathbb Q)$ \emph{does not} necessarily imply the injectivity of each $\pi_i(f) \otimes \mathbb Q$.
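For the reader's convenience, here is a sketch of the computation behind Example \ref{c-ex} (a standard check, added here and not carried out explicitly in the original text). On rational homotopy groups, $\pi_*(S^4\times S^6)\otimes \mathbb Q$ is $\mathbb Q$ in degrees $4,6,7,11$, while $\pi_*(K(\mathbb Q,4)\times K(\mathbb Q,6))\otimes \mathbb Q$ is $\mathbb Q$ in degrees $4$ and $6$; the map $\pi_i(f)\otimes \mathbb Q$ is an isomorphism in degrees $4$ and $6$ and zero in degrees $7$ and $11$, so $\dim(\operatorname{Ker}(\pi_*(f)\otimes \mathbb Q))=2$. On rational cohomology, $H^*(K(\mathbb Q,4)\times K(\mathbb Q,6);\mathbb Q)\cong \mathbb Q[x_4,x_6]$ and $f^*$ sends $x_4$ and $x_6$ to the generators of $H^4(S^4\times S^6;\mathbb Q)$ and $H^6(S^4\times S^6;\mathbb Q)$, so $f^*$ is surjective in every degree; dually, $H_i(f;\mathbb Q)$ is injective for every $i$, hence $\operatorname{Ker}H_*(f;\mathbb Q)=0$ and $P_f(1)=1$.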
In fact, the map $f: S^4 \times S^6 \to K(\mathbb Q, 4) \times K(\mathbb Q, 6)$ of Example (\ref{c-ex}) is \emph{not a continuous rationally elliptic map with respect to cokernel}. Furthermore, we have another counterexample: \begin{ex} Consider the following canonical inclusion map $$g: S^3 \vee S^3 \hookrightarrow S^3 \times S^3.$$ Then $\operatorname{Ker} H_*(g;\mathbb Q)=0$, but $\dim \left (\operatorname{Ker}(\pi_*(g)\otimes \mathbb Q) \right)=\infty$, thus the homological injectivity does not imply the homotopical injectivity. In this case we emphasize that $g$ is \emph{not a continuous rationally elliptic map with respect to kernel}. \end{ex} If $H_i(Y;\mathbb Q) = 0$ for all $i>0$, e.g., if $Y$ is contractible, then the injectivity of each homological homomorphism $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ means that $H_i(X;\mathbb Q)=0$. Furthermore, $\dim H_{*}(X; {\mathbb Q}) =1$ (for a path-connected space $X$) is equivalent to $H_*(a_X; \mathbb Q) :H_*(X;{\mathbb Q}) \to H_*(pt)={\mathbb Q}$ being an isomorphism, where $a_X:X \to pt$ is the map to a point. Thus it follows from the Whitehead--Serre Theorem \cite[Theorem 8.6]{FHT} that $(a_X)_*\otimes {\mathbb Q} :\pi_*(X)\otimes {\mathbb Q} \to \pi_*(pt)\otimes {\mathbb Q} =0$ is an isomorphism, hence $\pi_*(X)\otimes {\mathbb Q} =0$. Thus we get the injectivity of the homotopical homomorphism $\pi_i(f) \otimes \mathbb Q:\pi_i(X)\otimes \mathbb Q \to \pi_i(Y) \otimes \mathbb Q$. So, we would like to make the following conjecture, which we have been unable to resolve: \begin{con}[``Injectivity conjecture"]\label{injective} Let $f:X \to Y$ be a continuous rationally elliptic map of simply connected spaces $X$ and $Y$. The injectivity of each homological homomorphism $H_i(f;\mathbb Q):H_i(X;\mathbb Q) \to H_i(Y;\mathbb Q)$ for all $i >1$ implies the injectivity of each homotopical homomorphism $\pi_i(f) \otimes \mathbb Q:\pi_i(X)\otimes \mathbb Q \to \pi_i(Y) \otimes \mathbb Q$ for all $i >1$. \end{con} \begin{rem} In the original version of this paper we made such a conjecture for \emph{a continuous map $f:X \to Y$ of simply connected elliptic spaces $X$ and $Y$}, and such a map is surely a rationally elliptic map. Thus the above ``injectivity conjecture" is an extended version of the original conjecture. \end{rem} As a corollary of the above proof of Theorem \ref{main}, we can show that if $P_f(1) >1$, then for any $s >0$ there exists a positive integer $n(s)$ such that for all $n \geqq n(s)$ \begin{equation}\label{general one} P^{\pi}_{f^n}(s) < P_{f^n}(s) \end{equation} because $P_f(1) >1$ implies $P_f(s)=1 + \sum_{i\geqq 2}\operatorname{dim} \left (\operatorname{Ker} H_i(f; \mathbb Q) \right) s^i >1$. By the definition of $P_f(t)$ and $P^{\pi}_f(t)$ we have that $P_f(0)=1$ and $P^{\pi}_f(0)=0$. Hence for any integer $n \geqq 1$ we have that $0 = n(P^{\pi}_f(0))=P^{\pi}_{f^n}(0) < 1^n=P_f(0)^n = P_{f^n}(0)=1$ (whether $P_f(1) >1$ or not). Therefore we get the following: \begin{cor} If $P_f(1) >1$, then for any $s \geqq 0$ there exists a positive integer $n(s)$ such that for all $n \geqq n(s)$ $$P^{\pi}_{f^n}(s) < P_{f^n}(s).$$ \end{cor} \section{A remark on the case of rationally hyperbolic maps} Before finishing we give a remark about the case when $f:X \to Y$ is a rationally hyperbolic map with respect to kernel. Since $f:X \to Y$ is a rationally hyperbolic map with respect to kernel, as observed in Remark \ref{hyperbol-rem}, $X$ is rationally hyperbolic.
Hence we have the homotopical Hilbert--Poincar\'e series and the homological Poincar\'e polynomial of $X$ and also those of $f:X \to Y$: $$HP^{\pi}_X(t):= \sum_{i\geqq 2} \operatorname{dim} \left (\pi_i(X) \otimes \mathbb Q \right ) t^i, \quad P_X(t)= 1 + \sum_{i\geqq 2} \operatorname{dim} H_i(X; \mathbb Q) t^i,$$ $$HP^{\pi}_f(t):= \sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} \bigl(\pi_i(f) \otimes \mathbb Q \bigr) \right ) t^i, \quad P_f(t)= 1+ \sum_{i\geqq 2} \operatorname{dim} \left (\operatorname{Ker} H_i(f; \mathbb Q) \right ) t^i. $$ In \cite[Th\'eor\`eme 6.2.1]{Fe} Y. F\'elix showed that if $r_X$ denotes the radius of convergence of the above Hilbert--Poincar\'e series $HP^{\pi}_X(t)$, then $r_X <1$. For $t=0$ we have $HP^{\pi}_X(0)=0, P_X(0)=1$ and also $HP^{\pi}_f(0)=0, P_f(0)=1$, so we consider $r$ such that $0<r<r_f$, where $r_f:=r(HP^{\pi}_f(t))$ is the radius of convergence of the series $HP^{\pi}_f(t)$. Since we have $$\operatorname{dim} \left (\operatorname{Ker} \left (\pi_i(f) \otimes \mathbb Q \right ) \right ) \leqq \operatorname{dim} \bigl(\pi_i(X) \otimes \mathbb Q \bigr), $$ the convergence of $HP^{\pi}_X(r)$ implies the convergence of $HP^{\pi}_f(r)$, thus $r_X \leqq r_f$. Therefore, as a corollary of the proof of Theorem \ref{main}, we get the following: \begin{cor} Let $f:X \to Y$ be a rationally hyperbolic map with respect to kernel of simply connected spaces $X$ and $Y$. Let $P_f(1)>1$. Then for any $r$ such that $0<r<r_X$ there exists a positive integer $n(r)$ such that for all $n \geqq n(r)$ $$HP^{\pi}_{f^n}(r) <P_{f^n}(r).$$ \end{cor} \begin{rem} Let $\alpha = \sum_{n \geqq 0} a_n t^n$ and $\beta =\sum_{n \geqq 0} b_n t^n$ be power series such that $0 \leqq a_n \leqq b_n$. Let $r(\alpha)$ and $r(\beta)$ be the radii of convergence of the power series $\alpha$ and $\beta$, respectively. Then they are not necessarily the same; in general $r(\beta) \leqq r(\alpha)$. Hence in the above corollary instead of $r_X$ we could take the radius $r_f$. \end{rem} Finally, let us consider the case when $Y$ is a point, i.e., we consider a rationally hyperbolic space $X$. We pose the following question: \begin{qu}(a ``Hilali conjecture" in the hyperbolic case) \label{quest} Let $X$ be a rationally hyperbolic space. Let $r_X:= r(HP^{\pi}_X(t))$ be the radius of convergence as above. Suppose that $HP^{\pi}_X(t)$ converges at $r_X$, i.e., $HP^{\pi}_X(r_X) < \infty$. Does the following inequality hold? $$HP^{\pi}_X(r_X) \leqq P_X(r_X).$$ \end{qu} \begin{rem} We point out that some power series $p(x)$ converge at $x=r$ where $r=r(p(x))$ is the radius of convergence, but some do not. Here are some examples: \begin{enumerate} \item $p_1(x) = \sum_{n=1}^{\infty} \frac{x^n}{n^2} = x + \frac{x^2}{2^2} + \frac{x^3}{3^2} + \cdots$, $r(p_1(x) ) =1$ and $p_1(1) = \sum_{n=1}^{\infty} \frac{1}{n^2}= \frac{\pi^2}{6}$ (this is nothing but the Basel problem). \item A modified version of $p_1(x)$ is the following: Let $d>0$. \noindent $p_2(x) = \sum_{n=1}^{\infty} \frac{(dx)^n}{n^2} = dx + \frac{(dx)^2}{2^2} + \frac{(dx)^3}{3^2} + \cdots$, $r(p_2(x)) =\frac{1}{d}$ and $p_2(\frac{1}{d}) = \sum_{n=1}^{\infty} \frac{1}{n^2}= \frac{\pi^2}{6}.$ \item $p_3(x) = \sum_{n=0}^{\infty} x^n = 1 + x + x^2 + \cdots$, $r(p_3(x)) =1$, but $p_3(x)$ does not converge at $x=1$. \item A modified version of $p_3(x)$ is the following: Let $d>0$. \noindent $p_4(x) = \sum_{n=0}^{\infty} (dx)^n = 1 + dx + (dx)^2 + \cdots$, $r(p_4(x)) = \frac{1}{d}$, but $p_4(x)$ does not converge at $x=\frac{1}{d}$.
\end{enumerate} \end{rem} \begin{rem} Motivated by Question \ref{quest} for the hyperbolic space, it seems to be natural to consider the following other cases: \begin{enumerate} \item $\dim H_*(X;\mathbb Q) = \infty$ and $\dim (\pi_*(X) \otimes \mathbb Q) < \infty$: In this case we have the homological Hilbert--Poincar\'e series $HP_X(t)$ and the homotopical Poincar\'e polynomial $P^{\pi}_X(t)$ and $P^{\pi}_X(1) < HP_X(1) =\infty$. A real problem would be the following. Let $r_X$ be the radius of convergence of the power series $HP_X(t)$. When $HP_X(r_X)$ does converge, does the following ``Hilali"-type inequality hold? $$P^{\pi}_X(r_X) \leqq HP_X(r_X).$$ \item $\dim H_*(X;\mathbb Q) = \infty$ and $\dim (\pi_*(X) \otimes \mathbb Q) = \infty$: In this case we have the homological Hilbert--Poincar\'e series $HP_X(t)$ and the homotopical Hilbert--Poincar\'e series $HP^{\pi}_X(t)$ and $HP^{\pi}_X(1) = HP_X(1) =\infty$. Let $r^H_X$ be the radius of convergence of the power series $HP_X(t)$ and $r^{\pi}_X$ be the radius of convergence of the power series $HP^{\pi}_X(t)$. Let $r_X:= \operatorname{min} \{r^H_X, r^{\pi}_X \}$. When both $HP^{\pi}_X(r_X)$ and $HP_X(r_X)$ do converge (note that if $r^{\pi}_X < r^H_X$, say, then $HP_X(r^{\pi}_X)$ does converge by the definition of radius of convergence), does the following ``Hilali"-type inequality hold? $$HP^{\pi}_X(r_X) \leqq HP_X(r_X).$$ \\ \end{enumerate} \end{rem} {\bf Acknowledgements:} We would like to thank the referee for his/her thorough reading and useful comments and suggestions. T.Y. is supported by JSPS KAKENHI Grant Number JP20K03591 and S.Y. is supported by JSPS KAKENHI Grant Number JP19K03468. \\ \end{document}
\begin{document} \begin{abstract} We show that the dual character of the flagged Weyl module of any diagram is a positively weighted integer point transform of a generalized permutahedron. In particular, Schubert and key polynomials are positively weighted integer point transforms of generalized permutahedra. This implies several recent conjectures of Monical, Tokcan and Yong. \end{abstract} \maketitle \section{Introduction} Schubert polynomials and key polynomials are classical objects in mathematics. Schubert polynomials, introduced by Lascoux and Sch\"utzenberger in 1982 \cite{LS}, represent cohomology classes of Schubert cycles in flag varieties. Key polynomials, also known as Demazure characters, are polynomials associated to compositions. Key polynomials were first introduced by Demazure for Weyl groups \cite{demazure}, and studied in the context of the symmetric group by Lascoux and Sch\"utzenberger in \cite{LS1,LS2}. Beyond algebraic geometry, Schubert and key polynomials play an important role in algebraic combinatorics \cite{laddermoves,BJS,flaggedLRrule,FK1993, KM}. The second author and Escobar \cite{pipe1} showed that for permutations $1\pi'$ where $\pi'$ is dominant, Schubert polynomials are specializations of reduced forms in the subdivision algebra of flow and root polytopes. On the other hand, intimate connections of flow and root polytopes with generalized permutahedra have been exhibited by Postnikov \cite{beyond}, and more recently by the last two authors \cite{AK}. These works imply that for permutations $1\pi'$ where $\pi'$ is dominant, the Schubert polynomial $\mathfrak{S}_{1\pi'}({\bf x})$ is equal to the integer point transform of a generalized permutahedron \cite{AK}. The main result of this paper proves that the dual character of the flagged Weyl module of any diagram is a positively weighted integer point transform of a generalized permutahedron. Since Schubert and key polynomials are dual characters of certain flagged Weyl modules, it follows that the Newton polytope of any Schubert polynomial $\mathfrak{S}_\pi$ or key polynomial $\kappa_\alpha$ is a generalized permutahedron, and each of these polynomials is a sum over the lattice points in its Newton polytope with positive integral coefficients. After reviewing the necessary background, we prove our main theorem and draw corollaries about Schubert and key polynomials, confirming several recent conjectures of Monical, Tokcan and Yong \cite{MTY}. \section{Background} \label{sec:bg} This section contains a collection of definitions of classical mathematical objects. Our basic notions are Schubert and key polynomials, Newton polytopes, generalized permutahedra, (Schubert) matroids, and flagged Weyl modules. \subsection{Schubert polynomials} The Schubert polynomial of the longest permutation $w_0=n \hspace{.1cm} n\!-\!1 \hspace{.1cm} \cdots \hspace{.1cm} 2 \hspace{.1cm} 1 \in S_n$ is \[\mathfrak{S}_{w_0}:=x_1^{n-1}x_2^{n-2}\cdots x_{n-1}.\] For $w\in S_n$, $w\neq w_0$, there exists $i\in [n-1]$ such that $w(i)<w(i+1)$. For any such~$i$, the \newword{Schubert polynomial} $\mathfrak{S}_{w}$ is defined as \[\mathfrak{S}_{w}(x_1, \ldots, x_n):=\partial_i \mathfrak{S}_{ws_i}(x_1, \ldots, x_n),\] where $\partial_i$ is the $i$th divided difference operator \[\partial_i (f):=\frac{f-s_if}{x_i-x_{i+1}} \mbox{ and }s_i=(i,i+1).\] Since the $\partial_i$ satisfy the braid relations, the Schubert polynomials $\mathfrak{S}_{w}$ are well-defined. 
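As a quick illustration of this recursion (a standard computation, included here for the reader's convenience; it is easily checked from the definition above): starting from $\mathfrak{S}_{321}=x_1^2x_2$ one obtains all Schubert polynomials for $S_3$, namely \[\mathfrak{S}_{231}=\partial_1\mathfrak{S}_{321}=x_1x_2,\quad \mathfrak{S}_{312}=\partial_2\mathfrak{S}_{321}=x_1^2,\quad \mathfrak{S}_{213}=\partial_2\mathfrak{S}_{231}=x_1,\] \[\mathfrak{S}_{132}=\partial_1\mathfrak{S}_{312}=x_1+x_2,\quad \mathfrak{S}_{123}=\partial_1\mathfrak{S}_{213}=\partial_2\mathfrak{S}_{132}=1.\]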
\subsection{Key polynomials} A \newword{composition} $\alpha$ is a sequence of nonnegative integers $(\alpha_1,\alpha_2,\ldots)$ with $\sum_{k=1}^{\infty}\alpha_k<\infty$. If $\alpha$ is weakly decreasing, define the \newword{key polynomial} $\kappa_\alpha$ to be \[\kappa_\alpha=x_1^{\alpha_1}x_2^{\alpha_2}\cdots. \] Otherwise, set \[\kappa_\alpha = \partial_i\left( x_i\kappa_{\hat{\alpha}}\right)\mbox{ where } \hat{\alpha}=(\alpha_1,\ldots,\alpha_{i+1},\alpha_{i},\ldots) \mbox{ and }\alpha_i<\alpha_{i+1}. \] It is an important fact due to Lascoux and Sch\"utzenberger \cite{LS1} that every Schubert polynomial is a sum of key polynomials. \subsection{Diagrams} View $[n]^2$ as an $n$ by $n$ grid of boxes labeled $(i,j)$ in the same way as entries of an $n\times n$ matrix, with labels increasing as you move top to bottom along columns and left to right across rows from the upper-left corner. By a \textbf{diagram}, we mean a subset $D\subseteq [n]^2$, a collection of boxes in the $n\times n$ grid. Throughout this paper, we view $D$ as an ordered list of subsets $D=(D_1,D_2,\ldots,D_n)$ where for each $j$, $D_j=\{i:\, (i,j)\in D \}$ is the set of row indices of boxes of $D$ in column $j$. Two important classes of diagrams are Rothe diagrams and skyline diagrams. \begin{definition} The \newword{Rothe diagram} of a permutation $\pi\in S_n$ is the collection of boxes $D(\pi)=\{(i,j):\, 1\leq i,j\leq n,\, \pi(i)>j,\, \pi^{-1}(j)>i \}$. $D(\pi)$ can be visualized as the set of boxes left in the $n\times n$ grid after you cross out all boxes weakly below or right of $(i,\pi(i))$ for each $i\in [n]$. Let $D(\pi)_j=\{i:\, (i,j)\in D(\pi) \}$ for each $j$, so $D(\pi)=(D(\pi)_1,\ldots, D(\pi)_n)$. \end{definition} See Figure \ref{fig:rothe} for an example of a Rothe diagram. \begin{figure} \caption{The Rothe diagram of $\pi=41532$ is $(\{1\},\{1,3,4\},\{1,3\},\emptyset,\emptyset)$.} \label{fig:rothe} \end{figure} \begin{definition} If $\alpha=(\alpha_1,\alpha_2,\ldots)$ is a composition, let $l=\max\{i:\,\alpha_i\neq 0 \}$ and $n=\max\{l,\alpha_1,\ldots, \alpha_l \}$. The \newword{skyline diagram} of $\alpha$ is the diagram $D(\alpha)\subseteq[n]^2$ containing the first $\alpha_i$ boxes in row $i$ for each $i\in[n]$. More specifically, $D(\alpha)=(D(\alpha)_1,\ldots, D(\alpha)_n)$ with $D(\alpha)_j=\{i\leq n:\, \alpha_i\geq j \}$ for each $j$. \end{definition} See Figure \ref{fig:sky} for an example of a skyline diagram. \begin{figure} \caption{The skyline diagram of $\alpha=(3,2,1,0,1)$ is $(\{1,2,3,5\},\{1,2\},\{1\},\emptyset,\emptyset)$.} \label{fig:sky} \end{figure} \subsection{Newton polytopes and generalized permutahedra} If $f$ is a polynomial in a polynomial ring whose variables are indexed by some set $I$, the \newword{support} of $f$ is the lattice point set in $\mathbb R^I$ consisting of the exponent vectors of monomials with nonzero coefficient in~$f$. The \newword{Newton polytope} $\mathrm{Newton}(f)\subseteq\mathbb R^I$ is the convex hull of the support of~$f$. Following the definition of \cite{MTY}, we say that a polynomial $f$ has \newword{saturated Newton polytope (SNP)} if every lattice point in $\mathrm{Newton}(f)$ is a vector in the support of $f$. In other words, SNP means that the polynomial is equal to a positively weighted integer point transform of its Newton polytope. Our main objects of study are the supports of Schubert and key polynomials. We prove that they have SNP and that their Newton polytopes are generalized permutahedra, which we define next.
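For instance (a small example added here for illustration; it is immediate from the definitions above): for the composition $\alpha=(0,2)$ we have $\hat{\alpha}=(2,0)$ and \[\kappa_{(0,2)}=\partial_1\left(x_1\kappa_{(2,0)}\right)=\partial_1\left(x_1^3\right)=x_1^2+x_1x_2+x_2^2,\] whose support is $\{(2,0),(1,1),(0,2)\}$. Its Newton polytope is the segment with vertices $(2,0)$ and $(0,2)$, and every lattice point of this segment lies in the support, so $\kappa_{(0,2)}$ has SNP; this segment is also an instance of the generalized permutahedra defined next.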
The standard permutahedron is the polytope in $\mathbb{R}^n$ whose vertices consist of all permutations of the entries of the vector $(0,1,\ldots,n-1)$. A \textbf{generalized permutahedron} is a deformation of the standard permutahedron obtained by translating the vertices in such a way that all edge directions and orientations are preserved (edges are allowed to degenerate to points). Generalized permutahedra are parametrized by certain collections of real numbers $\{z_I\}$ indexed by nonempty subsets $I\subseteq [n]$, and have the form \[P_n^z(\{z_I \})=\left \{ {\bf t}\in\mathbb{R}^n:\, \sum_{i\in I}{t_i}\geq z_I \mbox{ for } I\neq [n], \mbox{ and } \sum_{i=1}^{n}{t_i}=z_{[n]} \right \}. \] Postnikov initiated the study of these fascinating polytopes in \cite{beyond}, and they have since been studied extensively. The class of generalized permutahedra is closed under Minkowski sums. This follows from \cite[Lemma 2.2]{matroidpolytopes}: \[P_n^z(\{z_I \})+P_n^z(\{z'_I \})=P_n^z(\{z_I+z'_I \}). \] \subsection{Schubert matroids} A \newword{matroid} $M$ is a pair $(E, \mathcal{B})$ consisting of a finite set $E$ and a nonempty collection of subsets $\mathcal{B}$ of $E$, called the \newword{bases} of $M$. $\mathcal{B}$ is required to satisfy the \newword{basis exchange axiom}: If $B_1, B_2 \in \mathcal{B}$ and $b_1 \in B_1- B_2$, then there exists $b_2 \in B_2 - B_1$ such that $B_1 - b_1 \cup b_2 \in \mathcal{B}$. By choosing a labeling of the elements of $E$, we can assume $E=[n]$ for some $n$. Fix positive integers $1 \leq s_1 < \ldots < s_r \leq n$. The sets $\{a_1, \ldots, a_r\}$ with $a_1<\cdots<a_r$ such that $a_1 \leq s_1, \ldots, a_r \leq s_r$ are the bases of a matroid, called the \newword{Schubert matroid} $SM_n(s_1, \ldots, s_r)$ \cite[Section 2.4]{OMbook}. \subsection{Matroid polytopes} Given a matroid $M=(E,\mathcal{B})$ with $E=[n]$, the \newword{rank function} of $M$ is the function \[r_M:2^{E}\to \mathbb{Z}_{\geq 0}\] defined by $r_M(S)=\max\{\#(S\cap B):\, B\in \mathcal{B} \}$. The sets $S\cap B$ where $S\subseteq [n]$ and $B\in\mathcal{B}$ are called the \newword{independent sets} of $M$. The \newword{matroid polytope} of $M$ is the generalized permutahedron $P(M)$ defined by \begin{align*} P(M)&=P^z_n\left(\{ r_M(E)-r_M(E\backslash I) \}_{I\subseteq E}\right)\\ &=\left\{ {\bf t}\in\mathbb{R}^n:\, \sum_{i\in I}{t_i}\leq r_M(I) \mbox{ for } I\neq E, \mbox{ and } \sum_{i\in E}{t_i}=r_M(E) \right\}. \end{align*} The vertices of $P(M)$ are exactly the indicator vectors of the bases of $M$: if $B\in \mathcal{B}$ is a basis of $M$ and $\zeta^B=(\zeta_1^B,\ldots, \zeta_n^B)\in\mathbb{R}^n$ is the vector with $\zeta_i^B=1$ if $i\in B$ and $\zeta_i^B=0$ if $i\notin B$ for each $i$, then \[P(M)=\mathrm{Conv}\{\zeta^B:\,B\in\mathcal{B} \}. \] \subsection{Flagged Weyl modules} Let $G=\mathrm{GL}(n,\mathbb{C})$ be the group of $n\times n$ invertible matrices over $\mathbb{C}$ and $B$ be the subgroup of $G$ consisting of the $n\times n$ upper-triangular matrices. The flagged Weyl module is a representation $M_D$ of $B$ associated to a diagram $D$, whose dual character has been shown in certain cases to be a Schubert polynomial or a key polynomial. We use the construction of $M_D$ in terms of determinants given in \cite{Magyar}. Denote by $Y$ the $n\times n$ matrix with indeterminates $y_{ij}$ in the upper-triangular positions $i\leq j$ and zeros elsewhere. Let $\mathbb{C}[Y]$ be the polynomial ring in the indeterminates $\{y_{ij}\}_{i\leq j}$.
Note that $G$ acts on $\mathbb{C}[Y]$ on the right via left translation: if $f({\bf y})\in \mathbb{C}[Y]$, then a matrix $g\in G$ acts on $f$ by $f({\bf y})\cdot g=f(g^{-1}{\bf y})$. For any $R,S\subseteq [n]$, let $Y_R^S$ be the submatrix of $Y$ obtained by restricting to rows $S$ and columns $R$. For $R,S\subseteq [n]$, we say $R\leq S$ if $\#R=\#S$ and the $k$\/th least element of $R$ does not exceed the $k$\/th least element of $S$ for each $k$. For any diagrams $C=(C_1,\ldots, C_n)$ and $D=(D_1,\ldots, D_n)$, we say $C\leq D$ if $C_j\leq D_j$ for all $j\in[n]$. \begin{definition} For a diagram $D=(D_1,\ldots, D_n)$, the \newword{flagged Weyl module} $M_D$ is defined by \[M_D=\mathrm{Span}_\mathbb{C}\left\{\prod_{j=1}^{n}\det\left(Y_{D_j}^{C_j}\right):\, C\leq D \right\}. \] $M_D$ is a $B$-module with the action inherited from the action of $B$ on $\mathbb{C}[Y]$. \end{definition} Note that since $Y$ is upper-triangular, the condition $C\leq D$ is technically unnecessary since $\det\left(Y_{D_j}^{C_j}\right)=0$ unless $C_j\leq D_j$. \section{Newton Polytopes of Dual Characters of Flagged Weyl Modules} \label{sec:schub} For any $B$-module $N$, the \newword{character} of $N$ is given by \[\mathrm{char}(N)(x_1,\ldots,x_n)=\mathrm{tr}\left(X:N\to N\right) \] where $X$ is the diagonal matrix $\mathrm{diag}(x_1,x_2,\ldots,x_n)$ with diagonal entries $x_1,\ldots,x_n$, and $X$ is viewed as a linear map from $N$ to $N$ via the $B$-action. Define the \newword{dual character} of $N$ to be the character of the dual module $N^*$: \begin{align*} \mathrm{char}^*(N)(x_1,\ldots,x_n)&=\mathrm{tr}\left(X:N^*\to N^*\right) \\ &=\mathrm{char}(N)(x_1^{-1},\ldots,x_n^{-1}). \end{align*} \begin{theorem}[\cite{KP}] Let $w\in S_n$ be a permutation, $D(w)$ be the Rothe diagram of $w$, and $M_{D(w)}$ be the associated flagged Weyl module. Then, \[\mathfrak{S}_w(x_1,\ldots,x_n) = \mathrm{char}^*M_{D(w)}. \] \end{theorem} \begin{theorem}[\cite{keypolynomials}] Let $\alpha$ be a composition, $D(\alpha)$ be the skyline diagram of $\alpha$, and $M_{D(\alpha)}$ be the associated flagged Weyl module. If $l=\max\{i:\,\alpha_i\neq 0 \}$ and $n=\max\{\alpha_1,\ldots,\alpha_l,l \}$, then \[\kappa_\alpha(x_1,\ldots,x_n) = \mathrm{char}^*M_{D(\alpha)}. \] \end{theorem} \begin{definition} For a diagram $D\subseteq [n]^2$, let $\chi_D=\chi_D(x_1,\ldots,x_n)$ be the dual character \[\chi_D=\mathrm{char}^*M_D. \] \end{definition} \begin{theorem} \label{thm:Newtonofdualcharacter} Let $D=(D_1,\ldots, D_n)$ be a diagram. Then $\chi_D$ has SNP, and the Newton polytope of $\chi_D$ is the Minkowski sum of matroid polytopes \[\mathrm{Newton}(\chi_D)=\sum_{j=1}^{n}P(SM_n(D_j)). \] In particular, $\mathrm{Newton}(\chi_D)$ is a generalized permutahedron. \end{theorem} \begin{proof} Denote by $X$ the diagonal matrix $\mathrm{diag}(x_1,x_2,\ldots,x_n)$. First, note that $y_{ij}$ is an eigenvector of $X$ with eigenvalue $x_i^{-1}$. Take a diagram $C=(C_1,\ldots,C_n)$ with $C\leq D$. Then, the element $\prod_{j=1}^{n}\det\left(Y_{D_j}^{C_j}\right)$ is an eigenvector of $X$ with eigenvalue \[\prod_{j=1}^{n}\prod_{i\in C_j}x_i^{-1}.\] Since $M_D$ is spanned by elements $\prod_{j=1}^{n}\det\left(Y_{D_j}^{C_j}\right)$ and each is an eigenvector of $X$, the monomials appearing in the dual character $\chi_D$ are exactly \[\left\{\prod_{j=1}^{n}\prod_{i\in C_j}x_i:\, C\leq D \right\}. \] For a diagram $C=(C_1,\ldots, C_n)$, define a vector $\xi^C=(\xi_1^C,\ldots, \xi_n^C)$ by setting $\xi_i^C=\#\{j:\,i\in C_j\}$ for each $i$.
The exponent vector of $\prod_{j=1}^{n}\prod_{i\in C_j}x_i$ is exactly $\xi^C$, so the support of $\chi_D$ is precisely the set $\left\{\xi^C:\, C\leq D \right\}$. However, for each $j\in[n]$, the sets $S\subseteq [n]$ with $S\leq D_j$ are exactly the bases of the Schubert matroid $SM_n(D_j)$. In particular, choosing a diagram $C\leq D$ is equivalent to picking a basis $C_j$ of $SM_n(D_j)$ for each $j\in[n]$. If $\zeta^{C_j}$ is the indicator vector of $C_j$, then comparing components shows \[\xi^C=\sum_{j=1}^{n}\zeta^{C_j}. \] This shows that each vector $\xi^C$ is a sum consisting of a vertex from each matroid polytope $P(SM_n(D_j))$ for $j\in [n]$. Conversely, given any sum $\sum_{j=1}^{n}\zeta^{B_j}$ of a vertex $\zeta^{B_j}$ from each $P(SM_n(D_j))$, let $C=(B_1,\ldots,B_n)$. Since each $B_j$ is a basis of $SM_n(D_j)$, $C\leq D$. Thus, $\xi^C=\sum_{j=1}^{n}\zeta^{C_j}$ is in the support of $\chi_D$.\\ \noindent Consequently, \[\mathrm{Newton}(\chi_D)= \sum_{j=1}^{n}P(SM_n(D_j)). \] In particular, we have shown that the support of $\chi_D$ consists exactly of the sums of one vertex from each $P(SM_n(D_j))$. It follows from \cite[Corollary 46.2c]{Schrijver} that each lattice point in this Minkowski sum is the sum of a lattice point in each summand $P(SM_n(D_j))$. However, the only lattice points in a matroid polytope are its vertices. Hence, $\chi_D$ has SNP. \end{proof} \begin{corollary}\label{thm:Schubert SNP} The support of any Schubert polynomial $\mathfrak{S}_{w}$ or key polynomial $\kappa_\alpha$ equals the set of lattice points of a generalized permutahedron. \end{corollary} This confirms Conjectures 3.10 and 5.1 of \cite{MTY}, namely that key polynomials and Schubert polynomials have SNP. We now confirm Conjectures 3.9 and 5.13 of \cite{MTY}, which give a conjectural inequality description for the Newton polytopes of Schubert and key polynomials. We state this description and match it to the Minkowski sum description proven in Theorem \ref{thm:Newtonofdualcharacter}. Let $D\subseteq [n]^2$ be any diagram with columns $D_j=\{i:\, (i,j)\in D \}$ for $j\in[n]$. Let $I\subseteq [n]$ be a set of row indices and $j\in[n]$ a column index. Construct a string $\mathrm{word}_{j,I}(D)$ by reading column $j$ of the $n$ by $n$ grid from top to bottom and recording \begin{itemize} \item $($ if $(i,j)\notin D$ and $i\in I$; \item $)$ if $(i,j)\in D$ and $i\notin I$; \item $\star$ if $(i,j)\in D$ and $i\in I$. \end{itemize} Let $\theta_D^j(I)=\#\mbox{paired }()\mbox{'s in } \mathrm{word}_{j,I}(D) + \#\star\mbox{'s in }\mathrm{word}_{j,I}(D)$, and set \[\theta_D(I)=\sum_{j=1}^{n}\theta_D^j(I) .\] \begin{definition}[\cite{MTY}] For any diagram $D\subseteq [n]^2$, define the Schubitope $\mathcal{S}_D$ by \[\mathcal{S}_D=\left\{(\alpha_1,\,\ldots,\,\alpha_n)\in \mathbb{R}_{\geq 0}^n:\,\sum_{i=1}^{n}\alpha_i=\#D \mbox{ and } \sum_{i\in I}\alpha_i\leq \theta_D(I) \mbox{ for all } I\subseteq [n] \right\}. \] \end{definition} \begin{theorem} \label{thm:schubi} Let $D\subseteq [n]^2$ be a diagram with columns $D_j=\left\{i:\,(i,j)\in D \right\}$ for each $j\in [n]$. The Schubitope $\mathcal{S}_D$ equals the Minkowski sum of matroid polytopes \[ \mathcal{S}_D=\sum_{j=1}^{n}{P\left(SM_n\left(D_j\right)\right)}. \] \end{theorem} \begin{proof} Let $r_j$ be the rank function of the matroid $SM_n(D_j)$.
By \cite[Lemma 2.2]{matroidpolytopes}, the Minkowski sum $\sum_{j=1}^{n}{P\left(SM_n\left(D_j\right)\right)}$ equals \[ \left\{(\alpha_1,\,\ldots,\,\alpha_n)\in \mathbb{R}_{\geq 0}^n:\,\sum_{i\in[n]}\alpha_i=\sum_{j=1}^{n}r_j([n]) \mbox{ and } \sum_{i\in I}\alpha_i\leq \sum_{j=1}^{n} r_j(I) \mbox{ for all } I\subseteq [n] \right\}. \] Thus, it is sufficient to prove that $\theta_D^j(I)=r_j(I)$ for each $j\in[n]$ and $I\subseteq[n]$. Fix $I$ and $j$, and let $\mathrm{word}_{j,I}(D)$ have $p$ paired $()$'s and $q$ $\star$'s. First, note that $D_j$ is a basis of $SM_n(D_j)$. Let $B$ be any basis of $SM_n(D_j)$ and pick elements $r_1$ and $r_2$ with $r_1\notin B$, $r_2\in B$, and $r_1<r_2$. Consider the set $B'=B\backslash\{r_2\}\cup\{r_1\}$. Then $B'\leq B\leq D_j$, so $B'$ is also a basis of $SM_n(D_j)$. Using this observation, we build a decreasing sequence of bases $D_j\geq B_1 \geq \cdots \geq B_p$. Order the set of paired $()$'s in $\mathrm{word}_{j,I}(D)$ from 1 to $p$. For the first pair, we get two grid squares $(r_1,\,j)$ and $(r_2,\,j)$ with $r_1<r_2$, $r_1\in I\backslash D_j$, and $r_2\in D_j\backslash I$. Define $B_1$ to be the basis $D_j\backslash \{r_2\}\cup\{r_1\}$. Inductively, the $i$th set of paired $()$'s in $\mathrm{word}_{j,I}(D)$ gives two grid squares $(r_1,\,j)$ and $(r_2,\,j)$ with $r_1<r_2$, $r_1\in I\backslash B_{i-1}$, and $r_2\in (B_{i-1}\cap D_j)\backslash I$. Define $B_i$ to be the basis $B_{i-1}\backslash \{r_2\}\cup\{r_1\}$. By construction, $\#(I\cap B_p)=p+\#(I\cap D_j)=p+q$. The proof will be complete if we can show that $I\cap B_p$ is a maximal independent subset of $I$. If not, there is some $k\in I\backslash B_p$ and $l\in (B_p\cap D_j)\backslash I$ such that $B_p\backslash\{l\}\cup\{k\}$ is a basis. If $k<l$, then $k$ and $l$ correspond to a $()$ in $\mathrm{word}_{j,I}(D)$, so $k\in B_p$ already, a contradiction. If $k>l$, then in $\mathrm{word}_{j,I}(D)$, $k$ and $l$ correspond to a subword $)($ where neither parenthesis was paired. Then, the position of $l$ in $B_p$ is the same as the original position of $l$ in $D_j$, since it cannot have been changed by any of the swaps. In this case, $k>l$ implies $B_p\backslash \{l\}\cup\{k\}$ is not a basis. \end{proof} Theorem \ref{thm:schubi} confirms Conjectures 3.9 and 5.13 of \cite{MTY}. \end{document}
\begin{document} \author{{Benjin Xuan}\thanks{Supported by Grant 10101024 and 10371116 from the National Natural Science Foundation of China. {\it e-mail:[email protected]}\,(B. Xuan)}\ \ \ \ Jiangchao Wang\\ {\it Department of Mathematics}\\ {\it University of Science and Technology of China}\\ {\it Universidad Nacional de Colombia}\\ } \date{} \title{Asymptotic behavior of extremal functions to an inequality involving Hardy potential and critical Sobolev exponent} \begin{abstract} In this paper, we study the asymptotic behavior of radial extremal functions to an inequality involving Hardy potential and critical Sobolev exponent. Based on the asymptotic behavior at the origin and at infinity, we shall deduce a strict inequality between two best constants. Finally, as an application of this strict inequality, we consider the existence of a nontrivial solution of a quasilinear Brezis-Nirenberg type problem with Hardy potential and critical Sobolev exponent. \noindent\textbf{Key Words:} asymptotic behavior, extremal functions, Hardy potential, critical Sobolev exponent, Brezis-Nirenberg type problem \noindent\textbf{Mathematics Subject Classifications:} 35J60. \end{abstract} \section{Introduction.}\label{intro} In this paper, we study the asymptotic behavior of extremal functions to the following inequality involving Hardy potential and critical Sobolev exponent: \begin{equation}\label{eq1.1} C \left(\int_{\mathbb{R}^N} \frac{|u|^{p_*}}{|x|^{bp_*}} \,{\rm d}x \right)^{p/p_*} \leqslant \int_{\mathbb{R}^N} \left( \frac{|{\rm D}u|^{p}}{|x|^{ap}} -\mu \frac{|u|^{p}}{|x|^{(a+1)p}}\right)\,{\rm d}x, \end{equation} where $1<p<N,\ 0\leqslant a <\frac{N-p}{p},\ a\leqslant b<a+1,\ p_*=\frac{Np}{N-(a+1-b)p},\ \mu< \overline{\mu}$, and $\overline{\mu}$ is the best constant in the Hardy inequality. We shall show that for $\mu<\overline{\mu}$ the best constant of inequality (\ref{eq1.1}) is achievable. Furthermore, the extremal functions of inequality (\ref{eq1.1}) are radially symmetric. Then we study the asymptotic behavior of the radial extremal functions of inequality (\ref{eq1.1}) at the origin and at infinity. Finally, for any smooth bounded open domain $\Omega \subset \mathbb{R}^N$ containing $0$ in its interior, we shall deduce a strict inequality between two best constants $S_{\lambda,\,\mu}(p,a,b;\Omega)$ and $S_{0,\,\mu}(p,a,b;\Omega)=S_{0,\,\mu}$: \begin{equation} \label{eq1.4} S_{\lambda,\,\mu}(p,a,b;\Omega)< S_{0,\,\mu}, \end{equation} if $\lambda>0$, where $S_{0,\,\mu}$ and $S_{\lambda,\,\mu}(p,a,b;\Omega)$ will be defined in Sections 2 and 4, respectively. We believe that the strict inequality (\ref{eq1.4}) will be useful in studying the existence of solutions to quasilinear elliptic problems involving Hardy potential and critical Sobolev exponent. As an application of this strict inequality, we consider the existence of a nontrivial solution of a quasilinear Brezis-Nirenberg problem with Hardy potential and critical Sobolev exponent. In their famous paper \cite{BN}, Brezis and Nirenberg studied the problem: \begin{equation}\label{eq1.5} \left\{ \begin{array}{rl} -\Delta u &=\lambda u+u^{2^\ast-1}, \mbox{ in }\ \Omega,\\ u& >0,\mbox{ in }\ \Omega,\\ u& =0, \mbox{ on }\ \partial\,\Omega. \end{array}\right. \end{equation} Since the embedding $H_0^1(\Omega) \hookrightarrow L^{2^*}(\Omega)$ is not compact, where $2^*=2N/(N-2)$, the associated energy functional does not satisfy the (PS) condition globally, which causes a serious difficulty when one tries to apply standard variational methods.
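For context (this is the classical picture behind the lack of compactness, recalled here for the reader's convenience and not taken from the present paper): the loss of compactness can be seen on the Aubin--Talenti family $$U_\varepsilon(x)=\bigl(N(N-2)\bigr)^{\frac{N-2}{4}}\left(\frac{\varepsilon}{\varepsilon^2+|x|^2}\right)^{\frac{N-2}{2}},\qquad \varepsilon>0,$$ which consists of extremal functions for the Sobolev inequality in $\mathbb{R}^N$; as $\varepsilon\to 0$ these functions concentrate at the origin while both $\|{\rm D}U_\varepsilon\|_{L^2(\mathbb{R}^N)}$ and $\|U_\varepsilon\|_{L^{2^*}(\mathbb{R}^N)}$ stay fixed, so suitably truncated copies of them converge weakly to $0$ without converging strongly in $L^{2^*}$, and the (PS) condition fails at certain energy levels.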
Brezis and Nirenberg successfully reduced the existence of solutions of problem (\ref{eq1.5}) into the verification of a special version of the strict inequality (\ref{eq1.4}) with $p=2,\ a=b=\mu=0$. To verify (\ref{eq1.4}) in their case, they applied the explicit expression of the extremal functions to the Sobolev inequality, especially the asymptotic behavior of the extremal functions at the origin and the infinity. Brezis-Nirenberg type problems have been generalized to many other situations (see \cite{CG, EH1, EH2, GV, JS, NL, PS, XC, XSY,ZXP} and references therein). Recently, Jannelli \cite{JE} introduced the term $\mu\frac{u}{|x|^2}$ in the equation, that is, \begin{equation}\label{eq1.6} \left\{ \begin{array}{rl} -\Delta u - \mu\frac{u}{|x|^2} & =\lambda u+u^{2^\ast-1}, \mbox{ in }\ \Omega,\\ u& >0,\mbox{ in }\ \Omega,\\ u& =0, \mbox{ on }\ \partial\,\Omega. \end{array}\right. \end{equation} He studied the relation between critical dimensions for $\lambda\in (0,\ \lambda_1)$ and $L_{\rm loc}^2$ integrability of the associated Green function, where $\lambda_1$ is the first eigenvalue of operator $-\Delta - \mu\frac{1}{|x|^2}$ on $\Omega$ with zero-Dirichlet condition. Ruiz and Willem \cite{RW} also studied problem (\ref{eq1.6}) under various assumption on the domain $\Omega$, and even for $\mu\leqslant 0$. Those proofs in \cite{JE} and \cite{RW} were reduced to verify the strict inequality (\ref{eq1.4}) with $p=2, a=b=0$. In 2001, Ferrero and Gazzola \cite{FG} considered the existence of sign-changed solution to problem (\ref{eq1.6}) for larger $\lambda$. They distinguished two distinct cases: resonant case and non-resonant cases of the Brezis-Nirenberg type problem (\ref{eq1.6}). For the resonant case, they only studied a special case: $\Omega$ is the unit ball and $\lambda=\lambda_1$. The general case was left as an open problem. In 2004, Cao and Han \cite{CH} complished the general case. In all the references cited above, the asymptotic behavior of the extremal functions at the origin and the infinity was applied to derived the local (PS) condition for the associated energy functional. The rest of this paper is organized as follows. In section 2, we shall show that the best constant of (\ref{eq1.1}) is achieved by some radial extremal functions. Section 3 is concerning with the asymptotic behavior of the radial extremal functions. In Section 4, we first derive various estimates on the approximation extremal functions, and then establish the strict inequality (\ref{eq1.4}). In section 5, based on this strict inequality, we obtain the existence results of nontrivial solution of a quasilinear Brezis-Nirenberg problem. \section{Radial extremal functions}\label{radial} In order to obtain the extremal functions of (\ref{eq1.1}). We consider the following extremal problem: \begin{equation}\label{eq2.2} S_{0,\,\mu}=\inf\left\{Q_\mu(u)\ :\ u\in \mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N),\ \|u;L_b^{p_\ast}(\mathbb{R}^N)\|=1\right\}, \end{equation} where $$ Q_\mu(u)=\int_{\mathbb{R}^N}\frac{|{\rm D}u|^p}{|x|^{ap}}\,{\rm d}x-\mu\int_{\mathbb{R}^N}\frac{|u|^p}{|x|^{(a+1)p}}\,{\rm d}x, $$ and $$ \mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)=\{u\in L_b^{p_\ast}(\mathbb{R}^N):|{\rm D}u|\in L_a^p(\mathbb{R}^N)\} $$ is the closure of $C_0^\infty (\mathbb{R}^N)$ under the norm $\| u \|_{\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)}=\| |{\rm D}u|; L_a^p (\mathbb{R}^N) \|$. 
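Since the dilations $\sigma^{-\frac{N-(a+1)p}{p}}u(\frac{\cdot}{\sigma})$, $\sigma>0$, appear repeatedly in the following sections, we record the elementary verification, added here for convenience, that they leave every term of (\ref{eq1.1}) unchanged. Writing $u_\sigma(x)=\sigma^{-\frac{N-(a+1)p}{p}}u(\frac{x}{\sigma})$ and changing variables $y=\frac{x}{\sigma}$,
$$
\int_{\mathbb{R}^N}\frac{|{\rm D}u_\sigma|^{p}}{|x|^{ap}}\,{\rm d}x
=\sigma^{-(N-ap)-ap+N}\int_{\mathbb{R}^N}\frac{|{\rm D}u|^{p}}{|y|^{ap}}\,{\rm d}y
=\int_{\mathbb{R}^N}\frac{|{\rm D}u|^{p}}{|y|^{ap}}\,{\rm d}y,
$$
and similarly $\int_{\mathbb{R}^N}\frac{|u_\sigma|^{p}}{|x|^{(a+1)p}}\,{\rm d}x=\int_{\mathbb{R}^N}\frac{|u|^{p}}{|y|^{(a+1)p}}\,{\rm d}y$, while
$$
\int_{\mathbb{R}^N}\frac{|u_\sigma|^{p_\ast}}{|x|^{bp_\ast}}\,{\rm d}x
=\sigma^{-\frac{N-(a+1)p}{p}p_\ast+N-bp_\ast}\int_{\mathbb{R}^N}\frac{|u|^{p_\ast}}{|y|^{bp_\ast}}\,{\rm d}y
=\int_{\mathbb{R}^N}\frac{|u|^{p_\ast}}{|y|^{bp_\ast}}\,{\rm d}y,
$$
since $-\frac{N-(a+1)p}{p}\,p_\ast+N-bp_\ast=0$ by the very definition of $p_\ast=\frac{Np}{N-(a+1-b)p}$.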
For any $\alpha$ and $q$, the norm of the weighted space $L_\alpha^{q}(\mathbb{R}^N)$ is defined as $$ \|u;L_\alpha^{q}(\mathbb{R}^N)\|=\Big(\int_{\mathbb{R}^N}\frac{|u|^{q}}{|x|^{\alpha{q}}}\,{\rm d}x\Big)^{\frac{1}{q}}. $$ Similarly to Lemma 2.1 in \cite{GP}, one easily obtains the following Hardy inequality with best constant $\overline{\mu}=(\frac{N-(a+1)p}{p})^p$: \begin{equation}\label{eq2.02} \overline{\mu} \int_{\mathbb{R}^N}\frac{|u|^p}{|x|^{(a+1)p}}\,{\rm d}x \leqslant \int_{\mathbb{R}^N}\frac{|{\rm D}u|^p}{|x|^{ap}}\,{\rm d}x. \end{equation} Thus, for $\mu<\overline{\mu}$, $Q_\mu(u)\geqslant 0$ for all $u\in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)$, and equality holds if and only if $u\equiv 0$. From the so-called Caffarelli-Kohn-Nirenberg inequality \cite{CKN}, $S_{0,\,\mu}<\infty$. \begin{lemma}\label{lem2.3} If $\mu\in (0,\ \overline{\mu}),\ b\in [a,\ a+1)$, then $S_{0,\,\mu}$ is achieved at some nonnegative function $u_0\in \mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)$. In particular, there exists a solution to the following ``limiting equation'': \begin{equation}\label{eq2.1} -\mbox{div}(\frac{{|{\rm D}u|}^{p-2}{\rm D}u}{|x|^{ap}})-\mu\frac{|u|^{p-2}u}{|x|^{(a+1)p}}=\frac{|u|^{p_\ast-2}u}{|x|^{bp_\ast}}. \end{equation} \end{lemma} \proof The achievability of $S_{0,\,\mu}$ at some $u_0\in \mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)$ with $\|u_0;L_b^{p_\ast}(\mathbb{R}^N)\|=1$ is due to \cite{WW} for $p=2$ and to \cite{TY} for general $p$. Without loss of generality, we may suppose that $u_0\geqslant 0$; otherwise, replace it by $|u_0|$. It is easy to see that $u_0$ satisfies the following Euler-Lagrange equation: $$ -\mbox{div}(\frac{|{\rm D}u|^{p-2}{\rm D}u}{|x|^{ap}})-\mu\frac{|u|^{p-2}u}{|x|^{(a+1)p}} =\delta\frac{|u|^{p_\ast-2}u}{|x|^{b{p_\ast}}}, $$ where $\delta=Q_\mu(u_0)/ \|u_0;L_b^{p_\ast}(\mathbb{R}^N)\|^{p_\ast}= Q_\mu(u_0)=S_{0,\,\mu}>0$ is the Lagrange multiplier. Set $\overline{u}=c_0u_0$ with $c_0=S_{0,\,\mu}^{\frac{1}{p_\ast-p}}$; then $\overline{u}$ is a solution to equation (\ref{eq2.1}). \epf In fact, all the dilations of $u_0$ of the form $\sigma^{-\frac{N-(a+1)p}{p}}u_0(\frac{\cdot}{\sigma})$ are also minimizers of $S_{0,\,\mu}$. In order to obtain further properties of the minimizers of $S_{0,\,\mu}$, let us recall the definition of the Schwarz symmetrization (see \cite{HT}). Suppose that $\Omega\subset{\mathbb{R}^N}$ and that $f\in C_0(\Omega)$ is a nonnegative continuous function with compact support. The Schwarz symmetrization $S(f)$ of $f$ is defined as $$ S(f)(x)=\sup \left\{t:\mu(t)>\omega_N|x|^N\right\}, \quad \mu(t)=\left|\,\{x\ :\ f(x)>t\}\right|,$$ where $\omega_N$ denotes the volume of the unit ball in $\mathbb{R}^N$. Applying the properties of the Schwarz symmetrization collected in \cite{HT}, we have the following lemma: \begin{lemma}\label{lem2.2} For $v\in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)\setminus \{0\}$ and $k>0$, define $$ R(v)=\dfrac{\int_{S^{N-1}}\int_0^{+\infty}\{k^{1-p-\frac{p}{p_\ast}}(|\partial_\rho v|^2+\frac{|\Lambda v|^2}{\rho^2})^{p/2}\rho^{N-1} -k^{1-\frac{p}{p_\ast}}\mu|v|^p\rho^{N-1-p}\}\,{\rm d}\rho \,{\rm d}S}{\int_{S^{N-1}}\int_0^{+\infty}|v|^{p_\ast}\rho^{\frac{(N-p){p_\ast}}{p}-1}\,{\rm d}\rho \,{\rm d}S}, $$ where $\partial_\rho$ is the directional derivative along the radial direction $\rho$ and $\Lambda$ is the tangential differential operator on $S^{N-1}$. Then $$ \inf \{R(v)\ :\ v\in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N) \mbox{ is radial}\} = \inf \{R(v)\ :\ v \in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)\}.
$$ \end{lemma} \proof By the density argument, it suffices to prove the lemma for $v\in C_0^\infty (\mathbb{R}^N)$. Let $v^*$ be the Schwarz symmetrization of $v$. Noting that $\Lambda v^\ast=0, p_*\leqslant \frac{Np}{N-p}$, and applying those properties of Schwarz symmetrization in \cite{HT}, we have $$ \int_{S^{N-1}}\int_0^{+\infty}|v^\ast|^{p_\ast}\rho^{\frac{(N-p){p_\ast}}{p}-1} \,{\rm d}\rho \,{\rm d}S\geqslant \int_{S^{N-1}}\int_0^{+\infty}|v|^{p_\ast}\rho^{\frac{(N-p){p_\ast}}{p}-1} \,{\rm d}\rho \,{\rm d}S=1, $$ \begin{equation*} \begin{split} k^{1-p-\frac{p}{p_\ast}}\int_{S^{N-1}}\int_0^{+\infty}&(|\partial_\rho v^\ast|^2+\frac{|\Lambda v^\ast|^2}{\rho^2})^{p/2}\rho^{N-1}\,{\rm d}\rho \,{\rm d}S\\ &\leqslant k^{1-p-\frac{p}{p_\ast}}\int_{S^{N-1}}\int_0^{+\infty}(|\partial_\rho v|^2+\frac{|\Lambda v|^2}{\rho^2})^{p/2}\rho^{N-1}\,{\rm d}\rho \,{\rm d}S \end{split} \end{equation*} and $$k^{1-\frac{p}{p_\ast}}\mu\int_{S^{N-1}}\int_0^{+\infty}|v^\ast|^p\rho^{N-1-p}\,{\rm d}\rho \,{\rm d}S\geqslant k^{1-\frac{p}{p_\ast}}\mu\int_{S^{N-1}}\int_0^{+\infty}|v|^p\rho^{N-1-p}\,{\rm d}\rho \,{\rm d}S. $$ Thus, we have \begin{equation*} \begin{split} &\int_{S^{N-1}}\int_0^{+\infty}\{k^{1-p-\frac{p}{p_\ast}}(|\partial_\rho v^\ast|^2+\frac{|\Lambda v^\ast|^2}{\rho^2})^{p/2}\rho^{N-1} -k^{1-\frac{p}{p_\ast}}\mu|v^\ast|^p\rho^{N-1-p}\}\,{\rm d}\rho \,{\rm d}S\\&\leqslant \int_{S^{N-1}}\int_0^{+\infty}\{k^{1-p-\frac{p}{p_\ast}}(|\partial_\rho v|^2+\frac{|\Lambda v|^2}{\rho^2})^{p/2}\rho^{N-1} -k^{1-\frac{p}{p_\ast}}\mu |v|^p\rho^{N-1-p}\}\,{\rm d}\rho \,{\rm d}S. \end{split} \end{equation*} That is, $$ R(v^*)\leqslant R(v), $$ thus, $$ \inf \{R(v)\ :\ v\in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N) \mbox{ is radial}\} \leqslant \inf \{R(v)\ :\ v \in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)\}. $$ On the other hand, it is trivial that $$ \inf \{R(v)\ :\ v\in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N) \mbox{ is radial}\} \geqslant \inf \{R(v)\ :\ v \in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N)\}. $$ \epf \begin{lemma}\label{lem2.4} If $\mu\in (0,\ \overline{\mu}),\ b\in [a,\ a+1)$, then all the minimizers of $S_{0,\,\mu}$ is radial. In particular, there exists a family of radial solutions to equation (\ref{eq2.1}). \end{lemma} \proof We rewrite those integrals in $S_{0,\,\mu}$ in polar coordinates. Noting that $|{\rm D}u|^2=|\partial_r u|^2+\frac{1}{r^2}|\Lambda u|^2$, we have \begin{equation}\label{eq2.6} \begin{split} Q_\mu(u)&=\int_{S^{N-1}}\int_0^{+\infty}(|\partial_r u|^2+\frac{1}{r^2}|\Lambda u|^2)^{p/2} r^{N-1-ap}\,{\rm d}r\,{\rm d}S\\ &\ \ \ \ -\mu\int_{S^{N-1}}\int_0^{+\infty}|u|^p r^{N-1-(a+1)p}\,{\rm d}r\,{\rm d}S. \end{split} \end{equation} Making the change of variables $r=\rho^k,\quad k=\frac{N-p}{N-(a+1)p}\geqslant 1$, from (\ref{eq2.6}), we have \begin{equation}\label{eq2.7} \begin{split} Q_\mu(u)&=k^{1-p}\int_{S^{N-1}}\int_0^{+\infty}(|\partial_\rho u|^2+k^2\frac{|\Lambda u|^2}{\rho^2})^{p/2}\rho^{N-1}\,{\rm d}\rho \,{\rm d}S \\ &\ \ \ \ -k\mu\int_{S^{N-1}}\int_0^{+\infty}|u|^p \rho^{N-p-1}\,{\rm d}\rho \,{\rm d}S. \end{split} \end{equation} On the other hand, the restriction condition $\|u;L_b^{p_\ast}(\mathbb{R}^N)\|=1$ becomes \begin{equation}\label{eq2.8} k\int_{S^{N-1}}\int_0^{+\infty}|u|^{p_\ast}\rho^{\frac{(N-p){p_\ast}}{p}-1}\,{\rm d}\rho \,{\rm d}S=1. \end{equation} To cancel the coefficient $k$ in (\ref{eq2.8}), let $v=k^{\frac{1}{p_\ast}}u$, then we have the following equivalent form of $S_{0,\,\mu}$: \begin{equation}\label{eq2.9} \begin{split} S_{0,\,\mu}= \inf\left \{ \right. & \left. 
\int_{S^{N-1}}\int_0^{+\infty}\{k^{1-p-\frac{p}{p_\ast}}(|\partial_\rho v|^2+k^2\frac{|\Lambda v|^2}{\rho^2})^{p/2}\rho^{N-1} \right.\\ & \ \ \ -k^{1-\frac{p}{p_\ast}}\mu|v|^p\rho^{N-1-p}\}\,{\rm d}\rho \,{\rm d}S \ :\ v \in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N),\\ & \left. \ \ \ \ \ \ \ \ \ \ \ \ \int_{S^{N-1}}\int_0^{+\infty}|v|^{p_\ast}\rho^{\frac{(N-p){p_\ast}}{p}-1}\,{\rm d}\rho \,{\rm d}S=1\right\}. \end{split} \end{equation} Since $k\geqslant 1$, we have \begin{equation}\label{eq2.10} \begin{split} S_{0,\,\mu} \geqslant \inf\left \{ \right. & \left. \int_{S^{N-1}}\int_0^{+\infty}\{k^{1-p-\frac{p}{p_\ast}}(|\partial_\rho v|^2+\frac{|\Lambda v|^2}{\rho^2})^{p/2}\rho^{N-1} \right.\\ & \ \ \ -k^{1-\frac{p}{p_\ast}}\mu|v|^p\rho^{N-1-p}\}\,{\rm d}\rho \,{\rm d}S\ :\ v \in\mathfrak{D}_{a,b}^{1,p}(\mathbb{R}^N),\\ & \left. \ \ \ \ \ \ \ \ \ \ \ \ \int_{S^{N-1}}\int_0^{+\infty}|v|^{p_\ast}\rho^{\frac{(N-p){p_\ast}}{p}-1}\,{\rm d}\rho \,{\rm d}S=1\right\}. \end{split} \end{equation} From Lemma \ref{lem2.2}, we know that the left side hand is achieved at some radial function, and the inequality in (\ref{eq2.10}) becomes equality if and only if $v$ is radial. Thus, all the minimizers of $S_{0,\,\mu}$ is radial. \epf \section{Asymptotic behavior of extremal functions}\label{Behavior} In this section, we describe the asymptotic behavior of radial extremal functions of $S_{0,\,\mu}$. Our argument here is similar to that in \S3.2 of \cite{AFP}. Let $u(r)$ be a nonnegative radial solution to (\ref{eq2.1}). Rewriting in polar coordinates, we have \begin{equation}\label{eq3.1} (r^{N-1-ap}|u'|^{p-2}u')'+r^{N-1}(\mu\frac{|u|^{p-2}u}{r^{(a+1)p}}+\frac{|u|^{p_\ast-2}u}{r^{bp_\ast}})=0. \end{equation} Set \begin{equation}\label{eq3.2} t=\log r,\quad y(t)=r^\delta u(r),\quad z(t)=r^{(1+\delta)(p-1)}|u'(r)|^{p-2}u'(r), \end{equation} where $\delta=\frac{N-(a+1)p}{p}$. A simple calculation shows that \begin{equation}\label{eq3.3} \left\{{\begin{array}{l} \dfrac{{\rm d}y}{{\rm d}t}=\delta y+|z|^{\frac{2-p}{p-1}}z;\\[3mm] \dfrac{{\rm d}z}{{\rm d}t}=-\delta z-|y|^{p_\ast-2}y-\mu|y|^{p-2}y. \end{array}}\right. \end{equation} It follows from (\ref{eq3.3}) that $y$ satisfies the following equation: \begin{equation}\label{eq3.4} (p-1)|\delta y-y'|^{p-2}(\delta y'-y'' )+\delta |\delta y-y'|^{p-2}(\delta y-y')-\mu y^{p-1}-y^{p_\ast-1}=0. \end{equation} It is easy to see that the complete integral of the autonomous system (\ref{eq3.3}) is \begin{equation}\label{eq3.5} V(y,z)=\frac{1}{p_\ast}|y|^{p_\ast}+\frac{\mu}{p}|y|^p+\frac{p-1}{p}|z|^{\frac{p}{p-1}}+\delta yz. \end{equation} Similar to Lemma 3.6-3.9 in \cite{AFP}, we have the following four lemmas. We will omit proofs of the first three lemmas because one only needs to replace $\delta=\frac{N-p}p$ there by $\delta=\frac{N-(a+1)p}{p}$ in our case. The interested reader can refer to \cite{AFP}. The idea of the fourth Lemma is also similar to that of Lemma 3.9 in \cite{AFP}, with different choice of function $\xi$. We shall write down its complete proof for completeness. \begin{lemma}\label{lem3.1} $y$ and $z$ are bounded. \end{lemma} \begin{lemma}\label{lem3.2} For any $t\in\mathbb{R}^N$, $(y(t),z(t))\in \{(y,z)\in\mathbb{R}^2:V(y,z)=0\}.$ \end{lemma} \begin{lemma}\label{lem3.3} There exists $t_0\in\mathbb{R}$, such that $y(t)$ is strictly increasing for $t<t_0$; and strictly decreasing for $t>t_0$. 
Furthermore, we have \begin{equation}\label{eq3.6} \underset{t\in\mathbb{R}}{\max}\ y(t)=y(t_0)=[\frac{N}{N-(a+1-b)p}(\delta^p-\mu)]^{\frac{1}{p_\ast-p}}. \end{equation} \end{lemma} \begin{lemma}\label{lem3.4} Suppose that $y$ is a positive solution to (\ref{eq3.4}) which is increasing on $(-\infty,0)$ and decreasing on $(0,+\infty)$. Then there exist $c_1,c_2>0$ such that \begin{equation}\label{eq3.7} \underset{t\rightarrow-\infty}{\lim}e^{(l_1-\delta)t}y(t)=y(0)c_1>0; \end{equation} \begin{equation}\label{eq3.8} \underset{t\rightarrow+\infty}{\lim}e^{(l_2-\delta)t}y(t)=y(0)c_2>0, \end{equation} where $l_1,l_2$ are the zeros of the function $\xi(s)=(p-1)s^p-(N-(a+1)p)s^{p-1}+\mu$ such that $0<l_1<l_2.$ \end{lemma} \proof First, it is easy to see that $l_1<\delta<l_2$. Next, we prove (\ref{eq3.7}) step by step and omit the proof of (\ref{eq3.8}). \noindent\textbf{1.} Note that, by Lemma \ref{lem3.2}, $V(y,z)=0$; together with $y>0$, this forces $z<0$, so that $|z|^{\frac{2-p}{p-1}}z=-|z|^{\frac{1}{p-1}}$. It then follows from (\ref{eq3.3}) that \begin{equation}\label{eq3.9} \begin{array}{rl} \frac{{\rm d}}{{\rm d}t}(e^{-(\delta-l_1)t}y(t)) &=-(\delta-l_1)e^{-(\delta-l_1)t}y(t)+e^{-(\delta-l_1)t}(\delta y(t)-|z(t)|^{\frac{1}{p-1}})\\ &=e^{-(\delta-l_1)t}y(t)(l_1-\frac{|z(t)|^{\frac{1}{p-1}}}{y(t)}).\\ \end{array} \end{equation} Rewriting the above equation in integral form, we have \begin{equation}\label{eq3.10} e^{-(\delta-l_1)t}y(t)=y(0)e^{-\int_{t}^{0}(l_1-y(s)^{-1}|z(s)|^{\frac{1}{p-1}}){\rm d}s}. \end{equation} \noindent\textbf{2.} Let $H(s)=\frac{|z(s)|^{\frac{1}{p-1}}}{y(s)}$. \textbf{Claim: } $H(s)$ is an increasing function from $(-\infty,0]$ into $(l_1,\delta]$. In fact, we shall prove that $H'(s)>0$ for $s<0$. We argue by contradiction: suppose that there exists $s_0<0$ such that $H'(s_0)\leqslant0$. A direct computation shows that $$ H'(s)=\frac{-\frac{1}{p-1}y(s)z'(s)|z(s)|^{\frac{2-p}{p-1}}-|z(s)|^{\frac{1}{p-1}}y'(s)}{y^2(s)}. $$ Substituting the expressions for $y'(s_0)$ and $z'(s_0)$ from (\ref{eq3.3}), and using (\ref{eq3.5}) and Lemma \ref{lem3.2}, it follows that $$ H'(s_0)=(\frac{1}{p}-\frac{1}{p_\ast})y^{p_\ast}(s_0), $$ which is strictly positive since $y>0$ and $p<p_\ast$; this contradicts the assumption $H'(s_0)\leqslant0$. Thus $H'(s)>0$, and hence $H$ is strictly increasing on $(-\infty,0]$. On the other hand, from (\ref{eq3.3}) and $y'(0)=0$, we have $H(0)=\delta$; from (\ref{eq3.5}), it follows that $\underset{s\rightarrow-\infty}{\lim}H(s)=l_1$, which proves the claim. \noindent\textbf{3.} (\ref{eq3.7}) holds. From the above claim and (\ref{eq3.10}), it follows that $e^{-(\delta-l_1)t}y(t)>0$ is decreasing on $(-\infty,0]$, and hence the limit $\underset{t\rightarrow-\infty}{\lim}e^{-(\delta-l_1)t}y(t)$ exists. Set $$ \alpha\equiv\underset{t\rightarrow-\infty}{\lim}e^{-(\delta-l_1)t}y(t)= y(0)e^{\int_{-\infty}^{0}(H(s)-l_1){\rm d}s}. $$ To prove (\ref{eq3.7}), it suffices to show that $\alpha<+\infty$. From (\ref{eq3.3}) and (\ref{eq3.5}), a direct computation shows that $$H'(s)=-\frac{(a+1-b)p}{(p-1)(N-(a+1-b)p)}H(s)^{2-p}\xi(H(s)),$$ where $$ \xi(s)=(p-1)s^p-(N-(a+1)p)s^{p-1}+\mu. $$ From the definitions of $l_1,l_2$, we may write $$ H'(s)=(H(s)-l_1)(H(s)-l_2)g(H(s)), $$ where $g$ is a continuous negative function on the interval $[l_1,\delta]$, and hence satisfies $|g(H(s))|\geqslant c_1>0$. From (\ref{eq3.10}), it follows that $$ \alpha=\underset{t\rightarrow-\infty}{\lim}e^{-(\delta-l_1)t}y(t)=y(0)e^{\int_{-\infty}^{0}(H(s)-l_1){\rm d}s} =y(0)e^{\int_{l_1}^{\delta}[(H(s)-l_2)g(H(s))]^{-1}{\rm d}H(s)}.
$$ Since $l_2>\delta$ and $|g(H(s))|\geqslant c_1$ on $[l_1,\delta]$, we know that $$ \int_{l_1}^{\delta}[(H(s)-l_2)g(H(s))]^{-1}{\rm d}H(s)<+\infty, $$ that is, $\alpha<+\infty$, thus (\ref{eq3.7}) follows. \epf In the following corollary, we rewrite these conclusions on $y$ into those on the positive solution $u\in \mathfrak{D}^{1,p}(\mathbb{R}^N)$ of equation (\ref{eq3.1}). \begin{corollary}\label{coro3.1} Let $u\in \mathfrak{D}^{1,p}(\mathbb{R}^N)$ be a positive solution of equation (\ref{eq3.1}). Then there exists two positive constants $C_1, C_2>0$ such that \begin{equation}\label{eq3.11} \lim_{r\rightarrow 0}\, r^{l_1}u(r)=C_1>0, \quad\quad \lim_{r\rightarrow+\infty}\, r^{l_2}u(r)=C_2>0. \end{equation} and \begin{equation}\label{eq3.13} \lim_{r\rightarrow0}\, r^{l_1+1}|u'(r)|=C_1 l_1>0, \quad\quad \lim_{r\rightarrow+\infty}\, r^{l_2+1}|u'(r)|=C_2 l_2>0. \end{equation} \end{corollary} \proof From (\ref{eq3.2}), we know $u(r)=r^{-\delta}y(t)$. Applying Lemma \ref{lem3.4} directly, we have $$ \lim_{r\rightarrow0}\, r^{l_1}u(r)=\lim_{t\rightarrow-\infty}\, e^{(l_1-\delta)t}y(t)=y(0)c_1=C_1>0, $$ $$ \lim_{r\rightarrow+\infty}\, r^{l_2} u(r)=\lim_{t\rightarrow+\infty}\, e^{(l_2-\delta)t}y(t)=y(0)c_2=C_2>0. $$ Noting that $\lim\limits_{t\rightarrow-\infty}H(t)=l_1$ and $\lim\limits_{t\rightarrow+\infty}\,H(t)=l_2$, it follows that \begin{equation}\label{eq3.14} \begin{split} \lim_{r\rightarrow0}\,r^{l_1}u(r)\cdot H(t) &=\lim_{r\rightarrow0}\,r^{l_1}u(r)\cdot \frac{|z(t)|^{\frac{1}{p-1}}}{y(t)} =\lim_{r\rightarrow0}\,r^{l_1}u(r)\cdot \frac{r^{1+\delta}|u'(r)|}{r^{\delta}u(r)}\\ &=\lim_{r\rightarrow0}\,r^{l_1+1}|u'(r)| =C_1l_1>0 \end{split} \end{equation} and \begin{equation}\label{eq3.15} \begin{split}\lim_{r\rightarrow+\infty}\,r^{l_2}u(r)\cdot H(t)& =\lim_{r\rightarrow+\infty}\,r^{l_2}u(r)\cdot \frac{|z(t)|^{\frac{1}{p-1}}}{y(t)} =\lim_{r\rightarrow+\infty}\,r^{l_2}u(r)\cdot \frac{r^{1+\delta}|u'(r)|}{r^{\delta}u(r)}\\ &=\lim_{r\rightarrow+\infty}\,r^{l_2+1}|u'(r)|=C_2l_2>0.\ \end{split} \end{equation} \epf Next, we shall give a uniqueness result of positive solution of equation (\ref{eq3.1}). \begin{theorem}\label{thm3.1} Suppose that $u_1(r)$ and $u_2(r)$ are two positive solutions of equation (\ref{eq3.1}). Let $(y_1(t),z_1(t))$ and $(y_2(t),z_2(t))$ be two solutions to ODE system (\ref{eq3.5}) corresponding to $u_1(r)$ and $u_2(r)$ respectively. If \begin{equation}\label{eq3.16} \underset{t\in(\infty,+\infty)}{\max}y_1(t)=y_1(0)=[\frac{N}{N-(a+1-b)p}(\delta^p-\mu)]^{\frac{1}{p_\ast-p}}, \end{equation} and $y_2(0)=y_1(0)$. Then $(y_1(t),z_1(t))=(y_2(t),z_2(t))$, hence $u_1=u_2$. \end{theorem} \proof The proof is similar to that of Theorem 3.11 in \cite{AFP}. \epf Similar to Theorem 3.13 in \cite{AFP}, we resume the above results together and obtain the following theorem which describes the asymptotic behavior of all the radial solutions to equation (\ref{eq3.1}). \begin{theorem}\label{thm3.2} All positive radial solutions to equation (\ref{eq2.1}) have the form: \begin{equation}\label{eq3.17} u(\cdot)=\varepsilon^{-\frac{N-(a+1)p}{p}}u_0(\frac{\cdot}{\varepsilon}), \end{equation} where $u_0$ is a solution to equation (\ref{eq2.1}) satisfying $u_0(1)=y(0)=[\frac{N}{N-(a+1-b)p}(\delta^p-\mu)]^{\frac{1}{p_\ast-p}}$. 
Furthermore, there exist constants $C_1,C_2>0$ such that \begin{equation}\label{eq3.18} 0<C_1\leqslant\frac{u_0(x)}{(|x|^{l_1/\delta}+|x|^{l_2/\delta})^{-\delta}}\leqslant C_2, \end{equation} where $l_1,l_2$ are the two zeros of function $\xi(s)=(p-1)s^p-(N-(a+1)p)s^{p-1}+\mu$ satisfying $0<l_1<l_2.$ \end{theorem} \section{Strict inequality (\ref{eq1.4})} In this section, applying the asymptotic behavior of the solutions to equation (\ref{eq2.1}) obtained in the previous section, we give some estimates on the extremal function of $S_{0,\,\mu}$. Let $u_0$ be an extremal function of $S_{0,\,\mu}$ with $\|u_0;L_b^{p_\ast}(\mathbb{R}^N)\|=1$. From the discussion in Section 2 and 3, we know that $u_0$ is radial, and for all $\varepsilon>0$, $$U_\varepsilon(r)=\varepsilon^{-\frac{N-(a+1)p}{p}}u_0(\frac{r}{\varepsilon})$$ is also an extremal function of $S_{0,\,\mu}$, and there exists a positive constant $C_\varepsilon$ such that $C_\varepsilon U_\varepsilon$ is a solution to equation (\ref{eq2.1}). In fact, from the proof of Lemma \ref{lem2.3}, we know that $C_\varepsilon=S_{0,\,\mu}^{\frac{1}{p_\ast-p}}$, which is independent of $\varepsilon$, denoted by $C_0$. Set $u_{\varepsilon}^{\ast}=C_0 U_\varepsilon$, then from equation (\ref{eq2.1}) we have \begin{equation}\label{eq4.1} Q_\mu(u_{\varepsilon}^\ast)=\|u_{\varepsilon}^{\ast}; {L_b^{p_\ast}} \|^{p_\ast}=S_{0,\,\mu}^ {\frac{p_\ast}{p_\ast-p}}=S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}. \end{equation} For any $\varepsilon>0$, and $m\in\mathbb{N}$ large enough such that $B_{\frac{1}{m}}\subseteq \Omega$, define \begin{equation}\label{eq4.2} u_{\varepsilon}^m(x)= \Big\{{\begin{array}{l} u_{\varepsilon}^\ast(x)-u_{\varepsilon}^\ast(\frac{1}{m}),\quad x\in B_{\frac{1}{m}}\backslash\{0\};\\ 0,\quad\quad\quad\quad\quad\quad\quad\ \ x\in\Omega\backslash B_{\frac{1}{m}}. \end{array}} \end{equation} \begin{lemma}\label{lem4.1} Set $\varepsilon=m^{-h},\ h>1$. Then as $m\to \infty$, we have \begin{equation}\label{eq4.3} Q_\mu(u_{\varepsilon}^m)\leqslant S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}+{\cal O}(m^{-(h-1)[(a+1+l_2)p-N]}), \end{equation} and \begin{equation}\label{eq4.4} \|u_{\varepsilon}^{\ast}; {L_b^{p_\ast}} \|^{p_\ast}\geqslant S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}- {\cal O}(m^{-(h-1)[(b+l_2)p_*-N]}), \end{equation} where and afterward ${\cal O}(m^{-\alpha})$ denotes a positive quality which is $O(m^{-\alpha})$, but is not $o(m^{-\alpha})$, as $m\to \infty$. \end{lemma} \proof We shall only prove (\ref{eq4.3}), and omit the prove of (\ref{eq4.4}). 
Since $Q_\mu(u_{\varepsilon}^m)= \int_{\mathbb{R}^N}\frac{|Du_{\varepsilon}^m|^p}{|x|^{ap}}\,{\rm d}x-\mu\int_{\mathbb{R}^N}\frac{|u_{\varepsilon}^m|^p}{|x|^{(a+1)p}}\,{\rm d}x$, we estimate each term in $Q_\mu(u_{\varepsilon}^m)$ as follows: \begin{equation}\label{eq4.5}\begin{array}{rl} \displaystyle \int_\Omega\frac{|{\rm D}u_{\varepsilon}^m|^p}{|x|^{ap}}\,{\rm d}x &=\displaystyle \int_{B_{\frac{1}{m}}}\frac{|{\rm D}u_{\varepsilon}^\ast|^p}{|x|^{ap}}\,{\rm d}x\\[5mm] &=\displaystyle \int_{\mathbb{R}^N}\frac{|{\rm D}u_{\varepsilon}^\ast|^p}{|x|^{ap}}\,{\rm d}x -\int_{\mathbb{R}^N\backslash B_{\frac{1}{m}}}\frac{|{\rm D}u_{\varepsilon}^\ast|^p}{|x|^{ap}}\,{\rm d}x\\ &\leqslant \displaystyle \int_{\mathbb{R}^N}\frac{|{\rm D}u_{\varepsilon}^\ast|^p}{|x|^{ap}}\,{\rm d}x \end{array}\end{equation} and \begin{equation}\label{eq4.6}\begin{array}{rl} \displaystyle &\displaystyle\int_\Omega\frac{|u_{\varepsilon}^m|^p}{|x|^{(a+1)p}}\,{\rm d}x =\displaystyle \int_{B_{\frac{1}{m}}} \frac{(u_{\varepsilon}^\ast(x)- u_{\varepsilon}^\ast(\frac1m) )^p}{|x|^{(a+1)p}} \,{\rm d}x\\[5mm] &\ \geqslant\displaystyle \int_{B_{\frac{1}{m}}} \frac{u_{\varepsilon}^\ast(x)^p -p u_{\varepsilon}^\ast(\frac1m)u_{\varepsilon}^\ast(x)^{p-1}}{|x|^{(a+1)p}} \,{\rm d}x\\[5mm] &\ =\displaystyle \int_{\mathbb{R}^N} \frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p}} \,{\rm d}x -\int_{\mathbb{R}^N\backslash B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p}} \,{\rm d}x -p u_{\varepsilon}^\ast(\frac1m) \displaystyle \int_{B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^{p-1}}{|x|^{(a+1)p}} \,{\rm d}x. \end{array}\end{equation} On the other hand, from the definition of $u_{\varepsilon}^\ast$, we have \begin{equation}\label{eq4.7}\begin{array}{rl} \displaystyle \int_{\mathbb{R}^N\backslash B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p}} \,{\rm d}x &=C_0^p\omega_N \displaystyle \int_{\frac1m}^{+\infty} \frac{\varepsilon ^{-[N-(a+1)p]} u_0(\frac r\varepsilon)^p}{r^{(a+1)p}} r^{N-1}\,{\rm d}r\\[3mm] & =C_0^p\omega_N \displaystyle \int_{m^{h-1}}^{+\infty} u_0(t)^p t^{N-1-(a+1)p} \,{\rm d}t \\[3mm] & ={\cal O}(m^{-(h-1)[(a+1+l_2)p-N]}), \end{array}\end{equation} where in the second equality, we make the change of variable $t=\frac r\varepsilon$, and in the last equality, we use the asymptotic behavior of $u_0$ at the infinity, since $h>1$, hence $m^{h-1}\to \infty$ as $m\to \infty$. Note that $\xi^\prime(l_2)=p(p-1)l_2^{p-1}-(p-1)(N-(a+1)p)l_2^{p-2}>0$, that is $(a+1+l_2)p-N>0$. Similarly, we can estimate the last integration in (\ref{eq4.6}) as follows: \begin{equation}\label{eq4.8}\begin{array}{rl} u_{\varepsilon}^\ast(\frac1m) \displaystyle \int_{B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^{p-1}}{|x|^{(a+1)p}} \,{\rm d}x &=C_0^p\omega_N u_0(\frac1{m\varepsilon}) \displaystyle \int_0^{\frac1m} \frac{\varepsilon ^{-[N-(a+1)p]} u_0(\frac r\varepsilon)^{p-1}}{r^{(a+1)p}} r^{N-1}\,{\rm d}r\\[3mm] &=C_0^p\omega_N u_0(m^{h-1}) \displaystyle \int_0^{m^{h-1}} u_0(t)^{p-1} t^{N-1-(a+1)p} \,{\rm d}t \\[3mm] & \leqslant C_0^p\omega_N C_2 m^{-(h-1)l_2p} [C+m^{(h-1)[N-(a+1)p-(p-1)l_2]} ] \\[3mm] &={\cal O}(m^{-(h-1)[(a+1+l_2)p-N]}), \end{array}\end{equation} where the last equality is from $\xi(l_2)=0$ and so $N-(a+1)p-(p-1)l_2=\mu/l_2^{p-1}>0$. Thus, (\ref{eq4.3}) follows from (\ref{eq4.5})-(\ref{eq4.8}). \epf \begin{lemma}\label{lem4.2} Set $\varepsilon=m^{-h},\ h>1$. 
If $c<(a+1+l_2)p-N$, then \begin{equation}\label{eq4.9} \int_{\mathbb{R}^N}\frac{|u_{\varepsilon}^m(x)|^p}{|x|^{(a+1)p-c}}\,{\rm d}x\geqslant {\cal O}(m^{-ch}). \end{equation} \end{lemma} \proof A direct computation shows that $$ \begin{array}{rl} & \displaystyle \int_{\mathbb{R}^N}\frac{|u_{\varepsilon}^m(x)|^p}{|x|^{(a+1)p-c}}\,{\rm d}x =\displaystyle \int_{B_{\frac{1}{m}}} \frac{(u_{\varepsilon}^\ast(x)- u_{\varepsilon}^\ast(\frac1m) )^p}{|x|^{(a+1)p-c}} \,{\rm d}x\\[5mm] &\ \ \ \geqslant\displaystyle \int_{B_{\frac{1}{m}}} \frac{u_{\varepsilon}^\ast(x)^p -p u_{\varepsilon}^\ast(\frac1m)u_{\varepsilon}^\ast(x)^{p-1}}{|x|^{(a+1)p-c}} \,{\rm d}x\\[5mm] & \ \ \ =\displaystyle \int_{\mathbb{R}^N} \frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p-c}} \,{\rm d}x -\int_{\mathbb{R}^N\backslash B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p-c}} \,{\rm d}x -p u_{\varepsilon}^\ast(\frac1m) \displaystyle \int_{B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^{p-1}}{|x|^{(a+1)p-c}} \,{\rm d}x. \end{array}$$ We estimate each of the above integrations as follows: \begin{equation}\label{eq4.10} \displaystyle \int_{\mathbb{R}^N} \frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p-c}} \,{\rm d}x =C_0^p\omega_N\varepsilon^c \int_0^\infty u_0(t)^pt^{N-1-(a+1)p+c}\, {\rm d}x = {\cal O}(m^{-ch}), \end{equation} \begin{equation}\label{eq4.11} \begin{array}{rl} \displaystyle \int_{\mathbb{R}^N\backslash B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^p}{|x|^{(a+1)p-c}} \,{\rm d}x&=C_0^p\omega_N \varepsilon^c\displaystyle\int_{m^{h-1}}^\infty u_0(t)^pt^{N-1-(a+1)p+c}\, {\rm d}x\\ &={\cal O}(m^{-(h-1)[(a+1+l_2)p-N]-c}) \end{array}\end{equation} and \begin{equation}\label{eq4.12} \begin{array}{rl} u_{\varepsilon}^\ast(\frac1m) \displaystyle \int_{B_{\frac{1}{m}}}\frac{u_{\varepsilon}^\ast(x)^{p-1}}{|x|^{(a+1)p-c}} \,{\rm d}x &=C_0^p\omega_N u_0(\frac1{m\varepsilon}) \displaystyle \int_0^{\frac1m} \frac{\varepsilon ^{-[N-(a+1)p]} u_0(\frac r\varepsilon)^{p-1}}{r^{(a+1)p-c}} r^{N-1}\,{\rm d}r\\[3mm] & =C_0^p\omega_N u_0(m^{h-1}) \varepsilon^c\displaystyle \int_0^{m^{h-1}} u_0(t)^{p-1} t^{N-1-(a+1)p+c} \,{\rm d}t \\[3mm] &\leqslant C_0^p\omega_N C_2 m^{-(h-1)l_2p-ch} [C+m^{(h-1)[N-(a+1)p-(p-1)l_2]} ] \\[3mm] & ={\cal O}(m^{-(h-1)[(a+1+l_2)p-N]-c}). \end{array}\end{equation} Note that since $c<(a+1+l_2)p-N$, we have $-ch> -(h-1)[(a+1+l_2)p-N]-c$, that is, we prove the lemma. \epf Let $\Omega$ be a smooth bounded open domain in $\mathbb{R}^N$ with $0\in \Omega$, define $\mathfrak{D}_{a,b}^{1,p}(\Omega)$ as the closure of $C_0^\infty (\Omega)$ under the norm $\| u \|_{\mathfrak{D}_{a,b}^{1,p}(\Omega)}=\| |{\rm D}u|; L_a^p (\Omega) \|$ and \begin{equation}\label{eq4.13} S_{\lambda,\,\mu}(p,a,b;\Omega)=\inf\left\{Q_{\lambda,\,\mu}(u)\ :\ u\in \mathfrak{D}_{a,b}^{1,p}(\Omega),\ \|u;L_b^{p_\ast}(\Omega)\|=1\right\}, \end{equation} where $$ Q_{\lambda,\,\mu}(u)=\int_{\Omega}\frac{|{\rm D}u|^p}{|x|^{ap}}\,{\rm d}x-\mu\int_{\Omega}\frac{|u|^p}{|x|^{(a+1)p}}\,{\rm d}x - \lambda \int_{\Omega}\frac{|u|^p}{|x|^{(a+1)p-c}}\,{\rm d}x. $$ If $\lambda=0$, by rescaling argument, it is easy to show that $S_{0,\,\mu}(p,a,b;\Omega)=S_{0,\,\mu}$. But for $\lambda>0$, we shall have a strict inequality between $S_{\lambda,\,\mu}(p,a,b;\Omega)$ and $S_{0,\,\mu}$. \begin{theorem}\label{thm4.1} If $\mu\in (0,\ \overline{\mu}),\ \lambda>0,\ b\in [a,\ a+1),\ c\in (0,\ (a+1+l_2)p-N)$, then the strict inequality (\ref{eq1.4}) holds. 
\end{theorem} \proof We shall estimate the quotient $$ \frac{Q_{\lambda,\,\mu}(u_\varepsilon^m)}{\|u_\varepsilon^m; L_b^{p_*}(\Omega)\|^p}. $$ It follows from Lemmas \ref{lem4.1} and \ref{lem4.2} that \begin{equation}\label{eq4.14} \begin{array}{rl} Q_{\lambda,\,\mu}(u_\varepsilon^m) &=Q_\mu(u_\varepsilon^m) -\lambda\displaystyle \int_\Omega \frac{|u_{\varepsilon}^m(x)|^p}{|x|^{(a+1)p-c}}\,{\rm d}x \\ &\leqslant S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}+{\cal O}(m^{-(h-1)[(a+1+l_2)p-N]})-{\cal O}(m^{-ch}) \end{array}\end{equation} and \begin{equation}\label{eq4.15} \begin{array}{rl} \|u_\varepsilon^m; L_b^{p_*}(\Omega)\|^p &\geqslant S_{0,\,\mu}^{\frac{N}{(a+1-b)p_*}} -{\cal O}(m^{-(h-1)[(b+l_2)p_*-N]p/p_*})\\ &= S_{0,\,\mu}^{\frac{N}{(a+1-b)p_*}}-{\cal O}(m^{-(h-1)[(a+1+l_2)p-N]}). \end{array}\end{equation} Thus, we have \begin{equation}\label{eq4.16} \begin{array}{rl} \dfrac{Q_{\lambda,\,\mu}(u_\varepsilon^m)}{\|u_\varepsilon^m; L_b^{p_*}(\Omega)\|^p} &\leqslant \dfrac{S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}+{\cal O}(m^{-(h-1)[(a+1+l_2)p-N]})-{\cal O}(m^{-ch})}{S_{0,\,\mu}^{\frac{N}{(a+1-b)p_*}} -{\cal O}(m^{-(h-1)[(b+l_2)p_*-N]p/p_*})}\\ &= S_{0,\,\mu} +{\cal O}(m^{-(h-1)[(a+1+l_2)p-N]}) - {\cal O}(m^{-ch}). \end{array}\end{equation} If $c\in (0,\ (a+1+l_2)p-N)$, we can choose $h$ large enough such that $c<(h-1)[(a+1+l_2)p-N]/h$, and so $-ch > -(h-1)[(a+1+l_2)p-N]$; thus, for $m$ large enough, (\ref{eq1.4}) holds. \epf \section{Application} In this section, as an application of the strict inequality (\ref{eq1.4}), we consider the existence of nontrivial solutions to the following quasilinear Brezis-Nirenberg type problem involving a Hardy potential and the critical Sobolev exponent: \begin{equation}\label{eq5.1} \left\{ {\begin{array}{rl} -\mbox{div}(\dfrac{|{\rm D}u|^{p-2}{\rm D}u}{|x|^{ap}})-\mu\dfrac{|u|^{p-2}u}{|x|^{(a+1)p}} &=\dfrac{|u|^{{p_\ast}-2}u}{|x|^{bp_\ast}}+\lambda\dfrac{|u|^{p-2}u}{|x|^{(a+1)p-c}}, \mbox{ in}\ \Omega,\\[3mm] u&=0, \mbox{ on}\ \partial\,\Omega, \end{array}} \right. \end{equation} where $\Omega\subset \mathbb{R}^N$ is an open bounded domain with $C^1$ boundary and $0\in \Omega$, $1<p<N,\ p_\ast=\frac{Np}{N-(a+1-b)p}$, $0\leqslant a<\frac{N-p}{p},\ a\leqslant b<(a+1),\ c>0$, and $\lambda,\ \mu$ are two positive real parameters. To obtain the existence result, let us define the energy functional $E_{\lambda,\,\mu}$ on $\mathfrak{D}_{a,b}^{1,p}(\Omega)$ as $$ E_{\lambda,\,\mu}(u)=\frac1p\int_\Omega \left[ \frac{|{\rm D}u|^p}{|x|^{ap}}-\mu \frac{|u|^p}{|x|^{(a+1)p}} -\lambda\frac{|u|^p}{|x|^{(a+1)p-c}}\right]\,{\rm d}x -\frac1 {p_*}\int_\Omega \frac{|u|^{p_*}}{|x|^{bp_*}}\,{\rm d}x. $$ It is easy to see that $E_{\lambda,\,\mu}$ is well defined on $\mathfrak{D}_{a,b}^{1,p}(\Omega)$ and that $E_{\lambda,\,\mu} \in C^1(\mathfrak{D}_{a,b}^{1,p}(\Omega), \mathbb{R})$. Furthermore, all the critical points of $E_{\lambda,\,\mu}$ are weak solutions to (\ref{eq5.1}). We shall apply the Mountain Pass Lemma without the (PS) condition, due to Ambrosetti and Rabinowitz \cite{AR}, to ensure the existence of a (PS)$_\beta$ sequence of $E_{\lambda,\,\mu}$ at a mountain pass type minimax level $\beta$. Then the strict inequality (\ref{eq1.4}) implies that $\beta< \frac {a+1-b}N S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}$. Finally, combining the generalized concentration compactness principle and a compactness property, the so-called singular Palais-Smale condition, due to Boccardo and Murat \cite{BM} (cf. also \cite{GP}), we shall obtain the existence of nontrivial solutions to (\ref{eq5.1}).
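Since the exponents $\frac{a+1-b}{N}$ and $\frac{N}{(a+1-b)p}$ occur repeatedly in the estimates of this and the previous section, we record, for the reader's convenience, the elementary identities behind them; both follow directly from the definition of $p_\ast$:
$$
\frac1p-\frac1{p_\ast}=\frac1p-\frac{N-(a+1-b)p}{Np}=\frac{a+1-b}{N},
\qquad
\frac{p_\ast}{p_\ast-p}=\frac{N}{(a+1-b)p}.
$$
In particular, $(\frac1p-\frac1{p_\ast})\,Q^{\frac{p_\ast}{p_\ast-p}}=\frac{a+1-b}{N}\,Q^{\frac{N}{(a+1-b)p}}$ for any $Q>0$, which is the form in which these identities are used below.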
Let's define two more functionals on $\mathfrak{D}_{a,b}^{1,p}(\Omega)$ as follows: $$ I_\mu(u)=\frac1p\int_\Omega \frac{|{\rm D}u|^p}{|x|^{ap}}\,\,{\rm d}x -\frac\mu p\int_\Omega \frac{|u|^p}{|x|^{(a+1)p}}\,\,{\rm d}x,\ J(u)=\int_\Omega \frac{|u|^p}{|x|^{(a+1)p-c}}\,\,{\rm d}x, $$ and denote ${\cal M}=\{u\in \mathfrak{D}_{a,b}^{1,p}(\Omega)\ : \ J(u)=1 \}$. For $\mu\in (0, \overline{\mu})$, the Hardy inequality shows that $\frac1p \frac{|{\rm D}u|^p}{|x|^{ap}}\,{\rm d}x -\frac\mu p \frac{|u|^p}{|x|^{(a+1)p}}\,{\rm d}x$ is nonnegative measure on $\Omega$. The classical results in the Calculus of Variations(cf. \cite{SM}) show that $I_\mu$ is lower semicontinuity on ${\cal M}$. On the other hand the compact imbedding theorem in \cite{XBJ} implies that ${\cal M}$ is weakly closed. Thus the direct method ensure that $I_\mu$ attains its minimum on ${\cal M}$, denote $\lambda_1=\min \{I_\mu(u) \ :\ u\in {\cal M} \}>0$. From the homogeneity of $I_\mu$ and $J$, $\lambda_1$ is the first nonlinear eigenvalue of problem: \begin{equation}\label{eq5.3} \left\{ {\begin{array}{rl} -\mbox{div}(\dfrac{|{\rm D}u|^{p-2}{\rm D}u}{|x|^{ap}})-\mu\dfrac{|u|^{p-2}u}{|x|^{(a+1)p}} &=\lambda\dfrac{|u|^{p-2}u}{|x|^{(a+1)p-c}}, \mbox{in}\ \Omega,\\ u&=0, \mbox{on}\ \partial\,\Omega. \end{array}} \right. \end{equation} The following lemma indicates that $E_{\lambda,\,\mu}$ satisfies the geometric condition of Mountain Pass Lemma without (PS) condition due to Ambrosetti and Rabinowitz \cite{AR}, the proof is direct and omitted. \begin{lemma}\label{lem5.1} If $\mu\in (0,\ \mu), \lambda\in (0,\ \lambda_1)$, then \begin{enumerate} \item[(i)]$E_{\lambda,\,\mu}(0)=0$; \item[(ii)] $\exists \,\alpha, r>0$, s.t. $E_{\lambda,\,\mu}(u)\geqslant \alpha$, if $\|u\|=r$; \item[(iii)] For any $v\in \mathfrak{D}_{a,b}^{1,p}(\Omega),\ v\neq 0$, there exists $T>0$ such that $E_{\lambda,\,\mu}(tv)\leqslant 0$ if $t>T$. \end{enumerate} \end{lemma} For $v\in \mathfrak{D}_{a,b}^{1,p}(\Omega)$ with $\|v\|>r$ and $E_{\lambda,\,\mu}(v)\leqslant 0$, set $$\beta:=\inf_{\gamma\in\Gamma} \max_{t\in [0, 1]} E_{\lambda,\,\mu}(\gamma(t)),$$ where $$ \Gamma:=\{\gamma\in C([0, 1], \mathfrak{D}_{a,b}^{1,p}(\Omega)) \ | \ \gamma(0)=0,\ \gamma(1)=v \}. $$ It is easy to see that $\beta$ is independent of the choice of $v$ such that $E_{\lambda,\,\mu}(v)\leqslant 0$, and furthermore $\beta\geqslant \alpha$. If $\beta$ is finite, from Lemma \ref{lem5.1} and Mountain Pass Lemma, there exists a (PS)$_\beta$ sequence $\{u_m\}_{m=1}^\infty$ of $E_{\lambda,\,\mu}$ at level $\beta$, that is, $E_{\lambda,\,\mu} (u_m) \to \beta$ and $E_{\lambda,\,\mu}^\prime (u_m) \to 0$ in the dual space $(\mathfrak{D}_{a,b}^{1,p}(\Omega))^\prime$ of $\mathfrak{D}_{a,b}^{1,p}(\Omega)$ as $m\to \infty$. \begin{lemma}\label{lem5.2} If $\mu\in (0,\ \overline{\mu}), \lambda\in (0,\ \lambda_1)$, then the strict inequality (\ref{eq1.4}) is equivalent to \begin{equation}\label{eq5.4}\beta< \frac{a+1-b}{N}S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}.\end{equation} \end{lemma} \proof \textbf{1.} (\ref{eq1.4}) $\implies$ (\ref{eq5.4}). Let $v_1$ be a function such that $\|v_1; L_b^{p_*}(\Omega)\|=1$, and $Q_{\lambda,\,\mu}(v_1)<S_{0,\,\mu}$. 
We have \begin{equation}\label{eq5.5} \begin{array}{rl}\beta & \leqslant\sup\limits _{0<t<\infty} E_{\lambda,\,\mu}(tv_1)=\sup\limits _{0<t<\infty} (\dfrac{t^p}p Q_{\lambda,\,\mu}(v_1)-\dfrac{t^{p_*}}{p_*})\\ & = (\dfrac 1p-\dfrac1{p_*})Q_{\lambda,\,\mu}(v_1)^{\frac{p_*}{p_*-p}}=\dfrac{a+1-b}N Q_{\lambda,\,\mu}(v_1)^{\frac{N}{(a+1-b)p}}\\ & <\dfrac{a+1-b}N S_{0,\,\mu}^{\frac{N}{(a+1-b)p}}. \end{array} \end{equation} \textbf{2.} (\ref{eq5.4}) $\implies$ (\ref{eq1.4}). Since $\lambda<\lambda_1$, for $u=g(t)=tv$ with $t$ close to $0$ we have $({\rm D} E_{\lambda,\,\mu}(u), u)>0$, while for $u=g(1)=v$ we have $$ ({\rm D} E_{\lambda,\,\mu}(v), v)< p E_{\lambda,\,\mu}(v)\leqslant 0. $$ Consider the function $f(t)=E_{\lambda,\,\mu}(tv)\in C^1([0,1],\, \mathbb{R})$; then $f^\prime (t)>0$ for $t$ close to $0$, and $f_-^\prime(1)\leqslant 0$. By the intermediate value theorem applied to $f^\prime$, there exists $t_0\in (0, 1)$ such that $f^\prime(t_0)=0$, that is, for $u=t_0v$ we have $$ ({\rm D} E_{\lambda,\,\mu}(u), u)=Q_{\lambda,\,\mu}(u)-\|u; L_b^{p_*}(\Omega)\|^{p_*}=0. $$ Thus a direct computation shows that $$ \frac{Q_{\lambda,\,\mu}(u)}{\|u; L_b^{p_*}(\Omega)\|^{p}} =Q_{\lambda,\,\mu}(u)^{1-p/p_*} =(\frac{N}{a+1-b} E_{\lambda,\,\mu}(u) )^\frac{(a+1-b)p}{N}. $$ Since $Q_{\lambda,\,\mu}(u)/\|u; L_b^{p_*}(\Omega)\|^{p}\geqslant S_{\lambda,\,\mu}(p,a,b;\Omega)$, and since the same argument applies to every path $\gamma\in\Gamma$, whose image must also meet the set where $({\rm D} E_{\lambda,\,\mu}(u), u)=0$ at some $u\neq 0$, it follows that $$ \beta=\inf_{\gamma\in\Gamma} \max_{t\in [0, 1]} E_{\lambda,\,\mu}(\gamma(t))\geqslant \frac{a+1-b}N S_{\lambda,\,\mu}(p,a,b;\Omega)^\frac{N}{(a+1-b)p}. $$ Hence (\ref{eq5.4}) $\implies$ (\ref{eq1.4}). \epf \begin{lemma}\label{lem5.3} If $\mu\in (0,\ \overline{\mu}), \lambda\in (0,\ \lambda_1)$, then any (PS)$_\beta$ sequence of $E_{\lambda,\,\mu}$ is bounded in $\mathfrak{D}_{a,b}^{1,p}(\Omega)$. \end{lemma} \proof Suppose that $\{u_m\}_{m=1}^\infty$ is a (PS)$_\beta$ sequence of $E_{\lambda,\,\mu}$. As $m\to \infty$, we have \begin{equation}\label{eq5.6} \begin{array}{ll} & \beta+o(1) =E_{\lambda,\,\mu}(u_m)\\[2mm] &\ \ \ \ =\dfrac1p\displaystyle \int_\Omega \left[ \frac{|{\rm D}u_m|^p}{|x|^{ap}}-\mu \frac{|u_m|^p}{|x|^{(a+1)p}} -\lambda\frac{|u_m|^p}{|x|^{(a+1)p-c}}\right]\,{\rm d}x -\dfrac1 {p_*}\displaystyle\int_\Omega \frac{|u_m|^{p_*}}{|x|^{bp_*}}\,{\rm d}x \end{array}\end{equation} and \begin{equation}\label{eq5.7} \begin{array}{rl} & o(1)\|\varphi\| =({\rm D} E_{\lambda,\,\mu}(u_m), \varphi)\\[2mm] &\ \ \ \ \ =\displaystyle\int_\Omega \left[ \frac{|{\rm D}u_m|^{p-2} {\rm D}u_m \cdot {\rm D}\varphi}{|x|^{ap}} -\mu \frac{|u_m|^{p-2} u_m \varphi}{|x|^{(a+1)p}} -\lambda\frac{|u_m|^{p-2} u_m \varphi}{|x|^{(a+1)p-c}}\right]\,{\rm d}x \\[4mm] &\ \ \ \ \ \ \ \ \ -\displaystyle\int_\Omega \frac{|u_m|^{p_*-2}u_m \varphi}{|x|^{bp_*}}\,{\rm d}x, \end{array}\end{equation} for any $\varphi \in \mathfrak{D}_{a,b}^{1,p}(\Omega)$. From (\ref{eq5.6}) and (\ref{eq5.7}), as $m\to \infty$, it follows that $$ \begin{array}{rl} p_*\beta+o(1)-o(1)\|u_m\|& =p_*E_{\lambda,\,\mu}(u_m) - ({\rm D} E_{\lambda,\,\mu}(u_m), u_m)\\[2mm] &=(\dfrac{p_*}p-1) \displaystyle \int_\Omega \left[ \frac{|{\rm D}u_m|^p}{|x|^{ap}}-\mu \frac{|u_m|^p}{|x|^{(a+1)p}} -\lambda\frac{|u_m|^p}{|x|^{(a+1)p-c}}\right]\,{\rm d}x \\[4mm] & \geqslant (\dfrac{p_*}p-1)(1-\dfrac{\lambda}{\lambda_1})\displaystyle \int_\Omega \left[ \frac{|{\rm D}u_m|^p}{|x|^{ap}}-\mu \frac{|u_m|^p}{|x|^{(a+1)p}} \right]\,{\rm d}x\\[4mm] & \geqslant (\dfrac{p_*}p-1)(1-\dfrac{\lambda}{\lambda_1})(1-\dfrac{\mu}{\overline{\mu}})\|u_m\|^p. \end{array} $$ Thus, $\{u_m\}_{m=1}^\infty$ is bounded in $\mathfrak{D}_{a,b}^{1,p}(\Omega)$ if $\mu\in (0,\ \overline{\mu})$ and $\lambda\in (0,\ \lambda_1)$.
\epf From the boundedness of $\{u_m\}_{m=1}^\infty$ in $\mathfrak{D}_{a,b}^{1,p}(\Omega)$, we have the following medium convergence: $$ u_m \rightharpoonup u \mbox{ in } \mathfrak{D}_{a,b}^{1,p}(\Omega), \ L_1^{p}(\Omega) \mbox{ and } L_b^{p_*}(\Omega), $$ $$ u_m \to u \mbox{ in } L_\alpha^{r}(\Omega)\ \mbox{ if } 1\leq r<\frac{Np}{N-p},\ \frac\alpha r< (a+1)+N(\frac1r-\frac1p), $$ $$ u_m \to u\ \ \mbox{a.e. in } \Omega. $$ In order to obtain the strong convergence of $\{u_m\}_{m=1}^\infty$ in $L_b^{p_*}(\Omega)$, we need the following generalized concentration compactness principle(cf. also \cite{TY}) and \cite{WW} and references therein), the proof is similar to that in \cite{LPL2} and we omit it. \begin{lemma}[Concentration Compactness Principle] \label{lem5.4} Suppose that ${\cal M}(\mathbb{R}^N)$ is the space of bounded measures on $\mathbb{R}^N$, and $\{u_m\}\subset \mathfrak{D}_{a,b}^{1,p}(\Omega)$ is a sequence such that: $$ \begin{array}{ll} u_m \rightharpoonup u & \mbox{ in } \mathfrak{D}_{a,b}^{1,p}(\Omega),\\[2mm] \xi_m:=\left(|x|^{-ap}|{\rm D}u_m|^p-\mu |x|^{-(a+1)p}|u_m|^p \right) \,{\rm d}x \rightharpoonup \xi & \mbox{ in } {\cal M}(\mathbb{R}^N),\\[2mm] \nu_m:=|x|^{-bp_*}|u_m|^{p_*} \,{\rm d}x\rightharpoonup \nu & \mbox{ in } {\cal M}(\mathbb{R}^N),\\[2mm] u_m\to u & \mbox{ a.e. on } \mathbb{R}^N. \end{array} $$ Then there are the following statements: \begin{enumerate} \item[(1)] There exists some at most countable set $J$, a family $\{x^{(j)}\ :\ j\in J\}$ of distinct points in $\mathbb{R}^N$, and a family $\{\nu^{(j)}\ :\ j\in J \}$ of positive numbers such that \begin{equation} \label{eq5.8} \nu=|x|^{-bp_*}|u|^{p_*} \,{\rm d}x+\sum_{j\in J} \nu^{(j)}\delta_{x^{(j)}}, \end{equation} where $\delta_x$ is the Dirac-mass of mass $1$ concentrated at $x\in \mathbb{R}^N$. \item[(2)] The following inequality holds \begin{equation} \label{eq5.9} \xi \geq (|x|^{-ap}|{\rm D}u|^p-\mu |x|^{-(a+1)p}|u|^p ) \,{\rm d}x+\sum_{j\in J} \xi^{(j)}\delta_{x^{(j)}}, \end{equation} for some family $\{\xi^{(j)}>0\ :\ j\in J \}$ satisfying \begin{equation} \label{eq5.10} S_{0,\, \mu}\big( \nu^{(j)}\big)^{p/p_*}\leqslant \xi^{(j)},\ \ \mbox{for all }j\in J. \end{equation} In particular, $\sum\limits_{j\in J}\big( \nu^{(j)}\big)^{p/p_*}<\infty$. \end{enumerate} \end{lemma} \begin{lemma}\label{lem5.5} If $\mu\in (0,\ \mu), \lambda\in (0,\ \lambda_1)$, let $\{u_m\}_{m=1}^\infty$ be a (PS)$_\beta$ sequence of $E_{\lambda,\,\mu}$ at level $\beta$ defined above. (\ref{eq5.4}) implies that $\nu^{(j)}=0$ for all $j\in J$, that is, up to a subsequence, $u_m\to u$ in $L_b^{p_*}(\Omega)$ as $m\to 0$. \end{lemma} \proof From Lemma \ref{lem5.3}, $\{u_m\}_{m=1}^\infty$ is bounded in $\mathfrak{D}_{a,b}^{1,p}(\Omega)$, then we have that $|{\rm D}u_m|^{p-2} {\rm D}u_m$ is bounded in $\left(L^{p\prime}(\Omega; |x|^{-ap}) \right)^N$, where $p^\prime$ is the conjugate exponent of $p$, i.e. $\frac1p+\frac1{p^\prime}=1$. Without loss of generality, we suppose that $T\in \left(L^{p\prime}(\Omega; |x|^{-ap}) \right)^N$ such that $$ |{\rm D}u_m|^{p-2} {\rm D}u_m \rightharpoonup T \mbox{ in }\left(L^{p\prime}(\Omega; |x|^{-ap}) \right)^N. 
$$ Also, $|u_m|^{p-2} u_m$ is bounded in $L^{p\prime}(\Omega; |x|^{-(a+1)p})$, $|u_m|^{p_*-2} u_m$ is bounded in $L^{p_*\prime}(\Omega; |x|^{-bp_*})$, and $u_m\to u$ almost everywhere in $\Omega$, thus it follows that $$ |u_m|^{p-2} u_m \rightharpoonup |u|^{p-2} u \mbox{ in } L^{p\prime}(\Omega; |x|^{-(a+1)p}) $$ and $$ |u_m|^{p_*-2} u_m \rightharpoonup |u|^{p_*-2} u \mbox{ in } L^{p_*\prime}(\Omega; |x|^{-bp_*}). $$ From the compactness imbedding theorem in \cite{XBJ}, it follows that $$ |u_m|^{p-2} u_m \to |u|^{p-2} u \mbox{ in } L^{p\prime}(\Omega; |x|^{-(a+1)p+c}). $$ Taking $m\to \infty$ in (\ref{eq5.7}), we have \begin{equation} \label{eq5.14} \displaystyle\int_\Omega \frac{T\cdot {\rm D}\varphi}{|x|^{ap}}\,{\rm d}x= \mu \displaystyle\int_\Omega \frac{|u|^{p-2} u \varphi}{|x|^{(a+1)p}}\,{\rm d}x +\lambda \displaystyle\int_\Omega \frac{|u|^{p-2} u \varphi}{|x|^{(a+1)p-c}}\,{\rm d}x + \displaystyle\int_\Omega \frac{|u|^{p_*-2}u \varphi}{|x|^{bp_*}}\,{\rm d}x, \end{equation} for any $\varphi \in \mathfrak{D}_{a,b}^{1,p}(\Omega)$. Let $\varphi=\psi u_m$ in (\ref{eq5.7}), where $\psi \in C(\bar\Omega)$, and take $m\to \infty$, it follows that \begin{equation} \label{eq5.15} \displaystyle\int_\Omega \psi \,{\rm d}\xi+ \displaystyle\int_\Omega \frac{uT\cdot {\rm D}\psi}{|x|^{ap}}\,{\rm d}x= \displaystyle\int_\Omega \psi \,{\rm d}\nu+ \lambda \displaystyle\int_\Omega \frac{|u|^{p} \psi}{|x|^{(a+1)p-c}}\,{\rm d}x. \end{equation} Let $\varphi=\psi u$ in (\ref{eq5.14}), it follows that \begin{equation} \label{eq5.16}\begin{array}{ll} \displaystyle\int_\Omega \frac{uT\cdot {\rm D}\psi}{|x|^{ap}}\,{\rm d}x& + \displaystyle\int_\Omega \frac{\psi T\cdot {\rm D}u}{|x|^{ap}}\,{\rm d}x = \mu \displaystyle\int_\Omega \frac{|u|^{p} \psi}{|x|^{(a+1)p}}\,{\rm d}x\\[3mm] & \ \ \ +\lambda \displaystyle\int_\Omega \frac{|u|^{p} \psi}{|x|^{(a+1)p-c}}\,{\rm d}x+ \displaystyle\int_\Omega \frac{|u|^{p_*} \psi}{|x|^{bp_*}}\,{\rm d}x, \end{array}\end{equation} Thus, form (\ref{eq5.8}) and Lemma \ref{lem5.4}, (\ref{eq5.15})$-$(\ref{eq5.16}) implies that \begin{equation} \label{eq5.17}\begin{array}{ll} \displaystyle\int_\Omega \psi \,{\rm d}\xi &=\displaystyle\int_\Omega \frac{\psi T\cdot {\rm D}u}{|x|^{ap}}\,{\rm d}x-\mu \displaystyle\int_\Omega \frac{|u|^{p} \psi}{|x|^{(a+1)p}}\,{\rm d}x+\displaystyle\int_\Omega \psi \,{\rm d}\nu-\displaystyle\int_\Omega \frac{|u|^{p_*} \psi}{|x|^{bp_*}}\,{\rm d}x\\[3mm] & =\displaystyle\int_\Omega \frac{\psi T\cdot {\rm D}u}{|x|^{ap}}\,{\rm d}x- \mu \displaystyle\int_\Omega \frac{|u|^{p} \psi}{|x|^{(a+1)p}}\,{\rm d}x+\sum_{j\in J} \nu^{(j)}\psi(x^{(j)}). \end{array}\end{equation} Letting $\psi\to \delta_{x^{(j)}}$, we have $$ \xi^{(j)}= \nu^{(j)}. $$ Combining with (\ref{eq5.10}), it follows that $\nu^{(j)} \geqslant S_{0,\, \mu}\big( \nu^{(j)}\big)^{p/q}$, which means that \begin{equation} \label{eq5.18}\nu^{(j)} \geqslant S_{0,\, \mu}^\frac{N}{(a+1-b)p}, \end{equation} if $\nu^{(j)}\neq 0$. 
On the other hand, taking $m\to\infty$ in (\ref{eq5.6}), and using (\ref{eq5.17}) with $\psi\equiv 1$, (\ref{eq5.8}) and (\ref{eq5.14}), it follows that \begin{equation} \label{eq5.19}\begin{array}{ll} \beta & = \dfrac1p \displaystyle\int_\Omega \,{\rm d}\xi -\dfrac{1}{p_*}\displaystyle\int_\Omega \,{\rm d}\nu - \dfrac\lambda p \displaystyle\int_\Omega \frac{|u|^{p} }{|x|^{(a+1)p-c}}\,{\rm d}x \\[3mm] & =\dfrac1p\left( \sum\limits_{j\in J} \nu^{(j)} +\displaystyle\int_\Omega \frac{T\cdot {\rm D}u}{|x|^{ap}}\,{\rm d}x- \mu \displaystyle\int_\Omega \frac{|u|^{p} }{|x|^{(a+1)p}}\,{\rm d}x\right)\\[3mm] &\ \ \ -\dfrac1{p_*}\left( \sum\limits_{j\in J} \nu^{(j)} + \displaystyle\int_\Omega \frac{|u|^{p_*} }{|x|^{bp_*}}\,{\rm d}x\right)- \dfrac\lambda p \displaystyle\int_\Omega \frac{|u|^{p} }{|x|^{(a+1)p-c}}\,{\rm d}x \\[3mm] &=(\dfrac1p- \dfrac1{p_*}) \sum\limits_{j\in J} \nu^{(j)} + (\dfrac1p- \dfrac1{p_*}) \displaystyle\int_\Omega \frac{|u|^{p_*}}{|x|^{bp_*}}\,{\rm d}x\\[3mm] & \geqslant (\dfrac1p- \dfrac1{p_*}) \sum\limits_{j\in J} \nu^{(j)} =\dfrac {a+1-b}N \sum\limits_{j\in J} \nu^{(j)}. \end{array}\end{equation} Combining (\ref{eq5.18}) and (\ref{eq5.19}) with (\ref{eq5.4}), it follows that $\nu^{(j)}=0$ for all $j\in J$. Hence we have $$ \int_\Omega \frac{|u_m|^{p_*} }{|x|^{bp_*}}\,{\rm d}x \to \int_\Omega \frac{|u|^{p_*} }{|x|^{bp_*}}\,{\rm d}x, $$ as $m\to \infty$. Thus, the Brezis-Lieb Lemma \cite{BL} implies that, up to a subsequence, $u_m\to u$ in $L_b^{p_*}(\Omega)$ as $m\to \infty$. \epf In order to deduce the almost everywhere convergence of ${\rm D}u_m$ in $\Omega$ and to obtain the existence of a nontrivial solution to (\ref{eq5.1}), we shall apply the variational approach proposed in \cite{GP} and a convergence theorem due to Boccardo and Murat (cf. Theorem 2.1 in \cite{BM}); for this reason we assume that $a=0$, in which case $\mathfrak{D}_{a,b}^{1,p}(\Omega) =W_0^{1,p}(\Omega)$. \begin{theorem}\label{thm5.1} If $a=0, \mu\in (0,\ \overline{\mu}),\ \lambda\in (0,\ \lambda_1),\ b\in [0,\ 1),\ c\in (0,\ (1+l_2)p-N)$, then there exists a nontrivial solution to (\ref{eq5.1}). \end{theorem} \proof By the variational approach proposed in \cite{GP} and the convergence theorem in \cite{BM}, there exists a subsequence of $\{u_m\}_{m=1}^\infty$, still denoted by $\{u_m\}_{m=1}^\infty$, such that $$ u_m\to u \mbox{ in } W_0^{1,\,q}(\Omega),\ q<p, $$ which implies that $u$ is a solution to (\ref{eq5.1}) in the sense of distributions. Since $u\in W_0^{1,p}(\Omega)$, by a density argument, $u$ is a weak solution to (\ref{eq5.1}). Next, we shall show that $u\not\equiv 0$. In fact, from the homogeneity and Lemma \ref{lem5.5}, we have $$ \begin{array}{ll} 0<\alpha\leqslant \beta & =\lim\limits_{m\to\infty}E_{\lambda,\,\mu}(u_m) =\lim\limits_{m\to\infty}\left[E_{\lambda,\,\mu}(u_m) -\dfrac1p({\rm D} E_{\lambda,\,\mu}(u_m), u_m)\right]\\ &=\lim\limits_{m\to\infty}(\dfrac1p- \dfrac1{p_*}) \displaystyle\int_\Omega \frac{|u_m|^{p_*}}{|x|^{bp_*}}\,{\rm d}x\\[3mm] &=(\dfrac1p- \dfrac1{p_*}) \displaystyle\int_\Omega \frac{|u|^{p_*}}{|x|^{bp_*}}\,{\rm d}x. \end{array} $$ Thus, $u\not\equiv 0$. \epf In light of Theorem \ref{thm5.1}, we conjecture that the conclusion is also true for every $0\leqslant a <\frac{N-p}{p}$. \begin{conjecture} If $0\leqslant a <\frac{N-p}{p}, \mu\in (0,\ \overline{\mu}),\ \lambda\in (0,\ \lambda_1),\ b\in [a,\ a+1),\ c\in (0,\ (a+1+l_2)p-N)$, then there exists a nontrivial solution to (\ref{eq5.1}). \end{conjecture} \end{document}
\begin{document} \title[Minimal Non-Nilpotent Semigroups]{Finite semigroups that are minimal for not being Malcev nilpotent} \author{E. Jespers and M.H. Shahzamanian} \address{Eric Jespers and M.H. Shahzamanian\\ Department of Mathematics, Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussel, Belgium} \email{[email protected], [email protected]} \address{Current address M.H. Shahzamanian\\ Centro de Matematica Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto, Portugal} \email{ [email protected]} \thanks{ 2010 Mathematics Subject Classification. Primary 20F19, 20M07, Secondary: 20F18. Keywords and phrases: semigroup, nilpotent. \\The research of the first author is partially supported by Onderzoeksraad of Vrije Universiteit Brussel, Fonds voor Wetenschappelijk Onderzoek (Belgium). A large part of this work was done while the second author was working at Vrije Universiteit Brussel. The second author also gratefully acknowledges support by FCT through the Centro de Matematica da Universidade do Porto (Portugal).} \begin{abstract} We give a description of finite semigroups $S$ that are minimal for not being Malcev nilpotent, i.e. every proper subsemigroup and every proper Rees factor semigroup is Malcev nilpotent but $S$ is not. For groups this question was considered by Schmidt. \end{abstract} \maketitle \section{Introduction}\label{pre} Finite groups $G$ that are minimal for not being nilpotent, i.e. $G$ is not nilpotent but every proper subgroup is nilpotent, have been characterized by Schmidt in \cite{Schmidt} (see also \cite[Theorem 6.5.7]{Scott} or \cite[Theorem Schmidt-Redei-Iwasawa]{Lausch}). For simplicity we call such a group $G$ a \textit{Schmidt group}. It has the following properties: \begin{enumerate} \item $\abs{G} =p^aq^b$ for some distinct primes $p$ and $q$ and some $a, b > 0$. \item $G$ has a normal Sylow $p$-subgroup and the Sylow $q$-subgroups are cyclic. \item The Frattini subgroups of Sylow subgroups of $G$ are central in $G$. \item $G$ is two-generated, i.e. $G=\langle g_{1},g_{2}\rangle$ for some $g_{1},g_{2}\in G$. \end{enumerate} It is well known that nilpotent groups can be defined by using semigroup identities (that is without using inverses) and hence there is a natural notion of nilpotent semigroup. This was introduced by Malcev (\cite{malcev}), and independently by Neuman and Taylor (\cite{neu-tay}). For completeness' sake we recall the definition. For elements $x,y,z_{1},z_{2},\ldots $ in a semigroup $S$ one recursively defines two sequences $$\lambda_n=\lambda_{n}(x,y,z_{1},\ldots, z_{n})\quad{\rm and} \quad \rho_n=\rho_{n}(x,y,z_{1},\ldots, z_{n})$$ by $$\lambda_{0}=x, \quad \rho_{0}=y$$ and $$\lambda_{n+1}=\lambda_{n} z_{n+1} \rho_{n}, \quad \rho_{n+1} =\rho_{n} z_{n+1} \lambda_{n}.$$ A semigroup $S$ is said to be \textit{nilpotent} (in the sense of Malcev \cite{malcev}, denoted (MN) in \cite{Riley}) if there exists a positive integer $n$ such that $$\lambda_{n}(a,b,c_{1},\ldots, c_{n}) = \rho_{n}(a,b,c_{1}, \ldots, c_{n})$$ for all $a, b$ in $S$ and $ c_{1}, \ldots, c_{n}$ in $S^{1}$ (by $S^{1}$ we denote the smallest monoid containing $S$). The smallest such $n$ is called the nilpotency class of $S$. Note that, as in \cite{lal}, the defining condition to be nilpotent is a bit stronger than the one required by Malcev in \cite{malcev}, who requires elements $w_{i}$ in $S$ only. However the definitions agree on the class of cancellative semigroups. 
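To fix the notation, we spell out the first terms of this recursion (a small illustration added here; it is immediate from the definitions):
$$
\lambda_{1}=x z_{1} y,\quad \rho_{1}=y z_{1} x,\qquad
\lambda_{2}=(x z_{1} y)\, z_{2}\, (y z_{1} x),\quad \rho_{2}=(y z_{1} x)\, z_{2}\, (x z_{1} y).
$$
For instance, in a two-element semigroup of right zeros (where every product equals its right-most factor) one has $\lambda_{2}(x,y,z_{1},z_{2})=x$ and $\rho_{2}(x,y,z_{1},z_{2})=y$ for all choices of $z_{1},z_{2}$, so for $x\neq y$ the identity $\lambda_{n}=\rho_{n}$ fails for every $n$; such two-element semigroups of left or right zeros reappear below as $U_{1}$ and $U_{2}$. For cancellative semigroups, by contrast, the notion behaves as recalled next.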
Furthermore, it is shown that a cancellative semigroup $S$ is nilpotent of class $n$ if and only if $S$ has a two-sided group of quotients which is nilpotent of class $n$ (see also \cite{Okninski}). Obviously, other examples of nilpotent semigroups are the power nilpotent semigroups, that is, semigroups $S$ with zero $\theta$ such that $S^{m}=\{ \theta \}$ for some $m\geq 1$. In \cite{jespers-okninski99} it is shown that a completely $0$-simple semigroup $S$ over a maximal group $G$ is nilpotent if and only if $G$ is nilpotent and $S$ is an inverse semigroup. Of course subsemigroups and Rees factor semigroups of nilpotent semigroups are again nilpotent. The class of $2$-nilpotent semigroups has been described in \cite{jespers-okninski99}; as for commutative semigroups they have a semilattice decomposition into Archimedean semigroups. For more information on this topic we refer the reader to \cite{jespers-okninski99,jespers-okninski-book,Riley,Jes-shah2}. In particular, in \cite{Jes-shah2}, we describe a class of finite semigroups that are near to being nilpotent, called pseudo nilpotent semigroups. Roughly said, in these semigroups being nilpotent lifts through ideal chains. In this paper we continue investigating finite semigroups that are close to being nilpotent. Recall that a proper Rees factor semigroup of a semigroup $S$ is a Rees factor semigroup $S/I$ with $I$ an ideal of $S$ of cardinality greater than $1$. Obviously, every finite semigroup that is not nilpotent has a subsemigroup that is minimal for not being nilpotent, in the sense that every proper subsemigroup and every Rees factor semigroup is nilpotent. We simply call such a semigroup a \textit{minimal non-nilpotent} semigroup. The aim of this paper is to describe such semigroups and thus extend Schmidt's investigations to the class of finite semigroups. The main result (Theorem~\ref{main-theorem}) is a classification (of sorts) of minimal non-nilpotent finite semigroups. More specifically, it is shown that such a semigroup is either a Schmidt group or one of four types of semigroups which are not groups. These four types of semigroup are each the union of a completely $0$-simple inverse ideal and a $2$-generated subsemigroup or a cyclic group. It is also shown that not every semigroup of these four types is minimal non-nilpotent. The proof of the main theorem utilizes the fact that a minimal non-nilpotent semigroup $S$ which is not a group or a semigroup of left or right zeros has a completely $0$-simple inverse ideal $M$ and $S$ acts on the ${\mathcal R}$-classes of $M$. The different types of orbits of this action are analyzed to provide the classification in Theorem~\ref{main-theorem}. For standard notations and terminology we refer to \cite{cliford}. A completely $0$-simple finite semigroup $S$ is isomorphic with a regular Rees matrix semigroup $\mathcal{M}^{0}(G, n,m;P)$, where $G$ is a maximal subgroup of $S$, $P$ is the $m\times n$ sandwich matrix with entries in $G^{\theta}$ and $n$ and $m$ are positive integers. The nonzero elements of $S$ we denote by $(g;i,j)$, where $g\in G$, $1\leq i \leq n$ and $1\leq j\leq m$; the zero element is simply denoted $\theta$. The element of $P$ on the $(i,j)$-position we denote by $p_{ij}$. The set of nonzero elements we denote by $\mathcal{M} (G,n,m;P)$. If all elements of $P$ are nonzero then this is a semigroup and every completely simple finite semigroup is of this form. If $P=I_{n}$, the identity matrix, then $S$ is an inverse semigroup. 
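With these conventions, the multiplication of $\mathcal{M}^{0}(G,n,m;P)$, which is used implicitly in several computations below, reads (we recall it here; it is the standard Rees matrix product written with the index convention just fixed)
$$
(g;i,j)(h;k,l)=\left\{\begin{array}{ll} (g\,p_{jk}\,h;\,i,l) & \mbox{if } p_{jk}\neq\theta,\\ \theta & \mbox{if } p_{jk}=\theta, \end{array}\right.
\qquad (g;i,j)\,\theta=\theta\,(g;i,j)=\theta .
$$
In particular, for $P=I_{n}$ one gets $(g;i,j)(h;k,l)=(gh;i,l)$ if $j=k$ and $\theta$ otherwise, and $(g^{-1};j,i)$ is then an inverse of $(g;i,j)$, consistently with the fact, recalled above, that completely $0$-simple semigroups with $P=I_{n}$ are inverse semigroups.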
By what is mentioned earlier, a completely $0$-simple semigroup $\mathcal{M}^{0}(G,n,m;P)$ is nilpotent if and only if $n=m$, $P=I_{n}$ and $G$ is a nilpotent group [\cite{jespers-okninski99}, Lemma 2.1]. The outline of the paper is as follows. In Section 2 we show that a finite minimal non-nilpotent semigroup is either a Schmidt group, or a semigroup with $2$ elements of left or right zeros, or $S$ has an ideal $M$ that is a completely $0$-simple inverse semigroup with nilpotent maximal subgroups. In the latter case we prove that $S$ acts on the ${\mathcal R}$-classes of $M$. Next the different types of orbits of this action are analyzed; three cases show up. In Section 3 we deal with each of these cases separately. As a consequence, we obtain in Section 4 a description of finite minimal non-nilpotent semigroups. \section{Properties of minimal non-nilpotent\ Semigroups}\label{non-nil} We begin by showing that a finite minimal non-nilpotent semigroup is either a Schmidt group, a non-commutative semigroup with two elements or it has an ideal that is completely $0$-simple inverse semigroup with nilpotent maximal subgroups. The starting point of our investigations is the following necessary and sufficient condition for a finite semigroup not to be nilpotent \cite{Jes-shah}. \begin{lem} \label{finite-nilpotent} A finite semigroup $S$ is not nilpotent if and only if there exists a positive integer $m$, distinct elements $x, y\in S$ and elements $ w_{1}, w_{2}, \ldots, w_{m}\in S^{1}$ such that $x = \lambda_{m}(x, y, w_{1}, w_{2}, \ldots, w_{m})$, $y = \rho_{m}(x,y, w_{1}, w_{2}, \ldots, w_{m})$. \end{lem} Recall that if $S$ is a semigroup with an ideal $I$ such that both $I$ and $S/I$ are nilpotent semigroups then it does not follow in general that $S$ is nilpotent. For counter examples we refer the reader to \cite{jespers-okninski99}. However, if $I^{n}=\{ \theta\}$ (with $\theta$ the zero element of $S$) and $S/I$ is nilpotent then $S$ is a nilpotent semigroup. This easily follows from the previous lemma. It is easily verified that a finite semigroup of minimal cardinality that is a minimal non-nilpotent\ semigroup but is not a group is the semigroup of right or left zeros with $2$ elements. Obviously, these are bands. For convenience and in order to have a uniform notation for our main result (Theorem~\ref{main-theorem}) we will denote such bands respectively as $U_1 =\{ e,f\}$, with $ef=f$, $fe=e$, and $U_2 =\{ e',f'\}$ with $e'f'=e'$, $f'e'=f'$. They also can be described as the completely simple semigroups $\mathcal{M}(\{e\}, 1, 2; \left(\begin{matrix} 1 \\ 1 \end{matrix} \right))$ and $\mathcal{M}(\{e\}, 2, 1; (1,1))$. The following result is a first step towards our classification result. It turns out that these are precisely the minimal non-nilpotent\ finite semigroups that are completely simple. \begin{lem} \label{starter} Let $S$ be a finite semigroup. If $S$ is minimal non-nilpotent\ then one of the following properties hold: \begin{enumerate} \item $S$ is a minimal non-nilpotent\ group; \item $S$ is a semigroup of left or right zeros with 2 elements; \item $S$ has a proper ideal which is a completely $0$-simple inverse semigroup with nilpotent maximal subgroups, i.e. $S$ has an ideal isomorphic to $\mathcal{M}^{0}(G, n,n;I_{n})$ where $G$ is a nilpotent group and $n\geq 2$. 
\end{enumerate} \end{lem} \begin{proof} Since $S$ is finite, it has a principal series $$S= S_1 \supset S_2 \supset \cdots \supset S_{h'} \supset S_{h'+1} = \emptyset .$$ That is, each $S_i$ is an ideal of $S$ and there is no ideal of $S$ strictly between $S_i$ and $S_{i+1}$ (for convenience we call the empty set an ideal of $S$). Each principal factor $S_i / S_{i+1}$ ($1 \leq i \leq h'$) of $S$ is either completely $0$-simple, completely simple or null. Assume $S$ is minimal non-nilpotent. So, by Lemma~\ref{finite-nilpotent}, there exist a positive integer $h$, distinct elements $s_1,\, s_2 \in S$ and elements $w_1, w_2, \ldots, w_h\in S^1$ such that \begin{eqnarray} \label{simple-identity} s_1 = \lambda_{h}(s_1, s_2, w_{1}, w_{2}, \ldots, w_{h}) &\mbox{and} & s_2 = \rho_{h}(s_1, s_2, w_{1}, w_{2}, \ldots, w_{h}). \end{eqnarray} Suppose that $s_1 \in S_i \backslash S_{i+1}$. Because $S_i$ and $S_{i+1}$ are ideals of $S$, the equalities (\ref{simple-identity}) imply that $s_2 \in S_i \backslash S_{i+1}$ and $w_{1}, w_{2}, \ldots, w_{h} \in S^1 \backslash S_{i+1}$. Furthermore, one obtains that $S_i / S_{i+1}$ is a completely $0$-simple semigroup, say $\mathcal{M}^{0}(G, n,m;P)$, or a completely simple semigroup, say $\mathcal{M}(G, n,m;P)$. Also, since $S$ is minimal non-nilpotent, $S_{i+1}=\emptyset$ or $S_{i+1}=\{ \theta \}$. If $n+m=2$ then $S_i \backslash S_{i+1}$ is a group. We denote by $e$ its identity element. Since $s_1,s_2 \in S_i \backslash S_{i+1}$, the equalities $s_1 = \lambda_{h}(s_1, s_2, w_{1}, w_{2}, \ldots, w_{h})$, $s_2 = \rho_{h}(s_1, s_2, w_{1}, w_{2}, \ldots, w_{h})$ imply that $\lambda_j, \rho_j \in S_{i}\backslash S_{i+1}$ for $1 \leq j \leq h$. Now, as $e$ is the identity element of $S_{i}\backslash S_{i+1}$, $\lambda_j w_{j+1} \rho_j=\lambda_j e w_{j+1} \rho_j$ and $\rho_j w_{j+1} \lambda_j=\rho_j e w_{j+1} \lambda_j$ for $0 \leq j \leq h-1$. If $S_{i+1} = \{\theta\}$ and $ew_{j+1}= \theta$ for some $0 \leq j \leq h-1$, then $s_1=s_2=\theta$, in contradiction with $s_1 \neq s_2$. So, $ew_{j+1} \in S_i \backslash S_{i+1}$ for $0 \leq j \leq h-1$. Consequently $$s_1 =\lambda_{h}(s_1, s_2, w_{1}, w_{2}, \ldots, w_{h})=\lambda_{h}(s_1, s_2, ew_{1}, ew_{2}, \ldots, ew_{h}) $$ $$\neq s_2 = \rho_{h}(s_1,s_2, w_{1}, w_{2}, \ldots, w_{h})=\rho_{h}(s_1,s_2, ew_{1}, ew_{2}, \ldots, ew_{h}).$$ From Lemma~\ref{finite-nilpotent} it follows that $S_i \backslash S_{i+1}$ is a group that is not nilpotent. Hence, since $S$ is minimal non-nilpotent, $S=S_{i}\backslash S_{i+1}$ and so $S$ is a minimal non-nilpotent\ group. Suppose $n+m>2$. If $S_{i}/S_{i+1}$ is not nilpotent then (by the results mentioned in the introduction, [\cite{jespers-okninski99}, Lemma 2.1]) a column or row of $P$ has two nonzero elements. Without loss of generality, we may suppose this is either the first column or the first row. If $p_{i1}$ and $p_{j1}$ are nonzero with $i \neq j$ then it is easily verified that the subsemigroup $\langle (p_{i1}^{-1};1,i),\, (p_{j1}^{-1};1,j)\rangle$ is isomorphic with the minimal non-nilpotent\ semigroup $U_{1}$. If, on the other hand, the first row of $P$ contains two nonzero elements, then one obtains in the same way a subsemigroup isomorphic with the minimal non-nilpotent\ semigroup $U_{2}$. So, since $S$ is minimal non-nilpotent, $S$ is a semigroup of right or left zeros with two elements. The remaining case is $n+m>2$ and $S_{i}/S_{i+1}$ is a nilpotent semigroup. Again by [\cite{jespers-okninski99}, Lemma 2.1], in this case, $S_{i}/S_{i+1}=\mathcal{M}^0(G, n,n;I_{n})$ with $G$ a nilpotent group.
Since $\abs{S_{i+1}} \leq 1$, the result follows. \end{proof} In order to obtain a classification, we thus assume throughout the remainder of this section that $S$ is a finite minimal non-nilpotent\ semigroup that has a proper ideal $M=\mathcal{M}^{0}(G,n,n;I_{n})$ with $G$ a nilpotent group and $n>1$. To further refine our way towards a classification, we introduce an action of $S$ on the ${\mathcal R}$-classes of $M$, i.e. we define a representation (a semigroup homomorphism) $$\Gamma : S\longrightarrow \mathcal{T}_{\{1, \ldots, n\} \cup \{\theta\}} ,$$ where $\mathcal{T}_{\{1, \ldots, n\} \cup \{\theta\}}$ denotes the full transformation semigroup on the set $\{1, \ldots, n\} \cup \{\theta\}$. The definition is as follows: for $1\leq i\leq n$ and $s\in S$, $$\Gamma(s)(i) = \left\{ \begin{array}{ll} i' & \mbox{if} ~s(g;i,j)=(g';i',j) ~ \mbox{for some} ~ g, g' \in G, \, 1\leq j \leq n\\ \theta & \mbox{otherwise}\end{array} \right.$$ and $$\Gamma (s)(\theta ) =\theta .$$ We call $\Gamma$ a minimal non-nilpotent\ representation of $S$ and $\Gamma(S)$ a minimal non-nilpotent\ image of $S$. It is easy to check that $\Gamma$ is well-defined and that it is a semigroup homomorphism. Also, for every $s \in S$, we define a map \begin{eqnarray} \label{map-psi} \Psi(s): \{1, \ldots, n\}\cup \{\theta\} & \longrightarrow &G^{\theta} \end{eqnarray} as follows: $$\Psi(s)(i)=g \;\; \;\; \mbox{ if } \Gamma(s)(i) \neq \theta \mbox{ and } s(1_G;i,j)=(g;\Gamma(s)(i),j)$$ for some $1\leq j \leq n,$ otherwise $\Psi(s)(i)=\theta$. It is straightforward to verify that $\Psi$ is well-defined. Note that if $\Psi(s)(i)=g$ and $g \in G$ then $s(h;i,j)=(gh;\Gamma(s)(i),j)$ for every $h \in G$. Also if $\Psi(st)(i)=g$, $\Psi(t)(i)=g'$ and $\Psi(s)(\Gamma(t)(i))=g''$, then $g=g''g'$. Hence it follows that $$\Psi (st) =(\Psi (s) \circ \Gamma (t) ) \; \Psi (t).$$ We claim that for $s\in S$ the map $\Gamma (s)$ restricted to the domain $\{1, \ldots, n\}\backslash \Gamma(s)^{-1}(\theta )$ is injective. Indeed, suppose $\Gamma(s)(m_1) = \Gamma(s)(m_2)= m$ for $1 \leq m_1, m_2, m \leq n$. Then there exist $g,g',h, h' \in G$, $1 \leq l, l' \leq n$ such that $s(g;m_1,l)=(g';m,l)$ and $s(h;m_2,l')=(h';m,l')$. Hence $$(1_G;m,m)s(g;m_1,l)=(g';m,l),\;\; (1_G;m,m)s(h;m_2,l')=(h';m,l')$$ and thus $$(1_G;m,m)s= (x;m,m_1)= (x';m,m_2)$$ for some $x, x' \in G$. This implies that $m_1= m_2$, as required. It follows that if $\theta \not\in \Gamma(s) (\{ 1, \ldots, n\})$ then $\Gamma (s)$ induces a permutation on $\{ 1, \ldots, n\}$ and we may write $\Gamma (s)$ in the disjoint cycle notation (we also write cycles of length one). In the other case, we may write $\Gamma (s)$ as a product of disjoint cycles of the form $(i_{1}, i_{2}, \ldots, i_{k})$ or of the form $(i_{1}, i_{2}, \ldots, i_{k}, \theta )$, where $1\leq i_{1}, \ldots, i_{k}\leq n$. The notation for the latter cycle means that $\Gamma (s)(i_{j})=i_{j+1}$ for $1\leq j\leq k-1$, $\Gamma (s)(i_{k})=\theta$, $\Gamma (s)(\theta ) =\theta$ and there does not exist $1\leq r \leq n$ such that $\Gamma (s)(r)=i_{1}$. We also agree that letters $i,j,k$ represent elements of $\{ 1, \ldots, n\}$, in other words we write explicitly $\theta$ if the zero appears in a cycle. Another agreement we make is that we do not write cycles of the form $(i, \theta )$ in the decomposition of $\Gamma (s)$ if $\Gamma(s)(i) =\theta$ and $\Gamma(s)(j) \neq i$ for every $1\leq j \leq n$ (this is the reason for writing cycles of length one).
If $\Gamma(s)(i) =\theta$ for every $1\leq i \leq n$, then we simply denote $\Gamma (s)$ as $\theta$. For convenience, we also introduce the following notation. If the cycle $\varepsilon$ appears in the expression of $\Gamma(s)$ as product of disjoint cycles then we denote this by $\varepsilon \subseteq \Gamma(s)$. If $\Gamma(s)(i_1)=i_1', \ldots, \Gamma(s)(i_m)=i_m'$ then we write $$[\ldots, i_1, i_1', \ldots, i_2, i_2', \cdots, \ldots, i_m, i_m', \ldots] \sqsubseteq \Gamma(s).$$ It can be easily verified that if $g\in G$ and $1 \leq n_1, n_2 \leq n$ with $n_1 \neq n_2$ then \begin{eqnarray} \label{elements-of-ideal} \Gamma((g;n_1,n_2)) = (n_2,n_1,\theta) &\mbox{ and }& \Gamma((g;n_1,n_1)) = (n_1) . \end{eqnarray} Further, for $s,t\in S$, $g\in G$ and $1\leq i \leq n$, if $(\ldots, o, m, k, \ldots) \subseteq \Gamma(s)$ and $(m_1, \ldots, m_2,$ $ \theta) \subseteq \Gamma(t)$ then $$s(g;m,i)=(g';k,i), \;\; t(g;m_2,i)=\theta ,$$ for some $g' \in G$. Since $s(g; o, i) = (g''; m, i)$ for some $g'' \in G$ we obtain that $$(g; i,m)s(g; o, i) = (g; i, m)(g''; m, i)= (gg''; i, i)$$ and thus $(g; i,m)s(g; o, i)\neq \theta$. Hence, there exists $k \in G$ such that $$(g; i,m)s = (k; i, o).$$ We claim that $$(g;i,m_1)t=\theta .$$ Indeed, suppose this is not the case. Then $(g;i,m_1)t=(g';i,m_3)$ for some $g'\in G$ and some $m_{3}$. Hence, $(g;i,m_1)t(1_G;m_3,m_3) \neq \theta$ and $t(1_G;m_3,m_3) \neq \theta$. So $\Gamma(t)(m_3)=m_1$, a contradiction. In the following lemma we analyze the orbits of this action. Three cases show up. \begin{lem}\label{nilpotency} Let $S$ be a finite minimal non-nilpotent\ semigroup. Suppose $S$ has proper ideal $M = \mathcal{M}^0(G,n,n;I_{n})$ with $G$ a nilpotent group and $n\geq 2$. Then, there exist elements $w_1$ and $w_2$ of $S \backslash M$ such that one of the following properties holds: \begin{enumerate} \item[(i)] $(m,l) \subseteq \Gamma(w_1), (m)(l) \subseteq \Gamma(w_2), $ \item[(ii)] $(\ldots, m,l,m', \ldots) \subseteq \Gamma(w_1), (l)(\ldots, m, m',\ldots)\subseteq \Gamma(w_2),$ \item[(iii)] $[\ldots, k,m, \ldots, l,k', \ldots]\sqsubseteq \Gamma(w_1), [\ldots, l, m, \ldots, k, k', \ldots] \sqsubseteq \Gamma(w_2)$, \end{enumerate} for some pairwise distinct numbers $l, m, m',k$ and $k'$ between $1$ and $n$. \end{lem} \begin{proof} Because of Lemma~\ref{finite-nilpotent} there exists a positive integer $h$, distinct elements $s_1 , s_2\in S$ and elements $w_{1}, w_{2}, \ldots, w_{h}\in S^{1}$ such that $s_1 = \lambda_{h}(s_1, s_2 , w_{1},$ $ w_{2}, \ldots, w_{h})$, $s_2 = \rho_{h}(s_1, s_2, w_{1}, w_{2}, \ldots, w_{h})$. Note that both $s_{1}$ and $s_{2}$ are nonzero. Because the semigroups $M$ and $S /M$ are nilpotent, $\{ s_{1} , s_{2} , w_1, $ $\ldots, w_h \} \cap M \neq \emptyset$ and $\{ s_{1} , s_{2} , w_1, \ldots, w_h \} \cap (S \backslash M) \neq \emptyset$. It follows that $s_{1},s_{2}\in M$ and that there exist $1 \leq n_1,n_2,n_3,n_4 \leq n$ and $g, g' \in G$ such that $s_1 =(g; n_1, n_2), \, s_2 = (g'; n_3, n_4)$. Hence $$[\ldots, n_3,n_2, \ldots, n_1,n_4, \ldots] \sqsubseteq \Gamma(w_1), [\ldots, n_1, n_2, \ldots, n_3, n_4, \ldots] \sqsubseteq \Gamma(w_2).$$ Here we agree that we take $w_2=w_1$ in case $h=1$. 
If $(n_1,n_2)=(n_3,n_4)$ (for example in the case that $h=1$) then, there exist $k_{i} \in G$ such that $(k;\alpha,n_2)w_i= (kk_{i};\alpha,n_1) \text{ for every } k \in G \text{ and } \alpha \in \{n_1,n_2\}.$ Since $\lambda_{i-1}= (g_{i-1};n_1,n_2)$, $\rho_{i-1}= (g'_{i-1};n_1,n_2)$, for some $g_{i-1},g_{i-1}'\in G$, we get that $$\lambda_{i}= (g_{i-1}k_ig'_{i-1};n_1,n_2), \rho_{i}= (g'_{i-1}k_ig_{i-1};n_1,n_2)$$ and thus $$g = \lambda_{h}(g, g', k_{1}, \ldots, k_{h}), g' = \rho_{h}(g, g',k_{1}, \ldots, k_{h}).$$ Because of Lemma~\ref{finite-nilpotent}, this yields a contradiction with $G$ being nilpotent. So we have shown that $(n_{1},n_{2})\neq (n_{3},n_{4})$. In particular, we obtain that $h>1$. We deal with two mutually exclusive cases. (Case 1) $n_1=n_2=l$. Since $[\ldots, n_1, n_2, \ldots, n_3, n_4, \ldots]\sqsubseteq \Gamma(w_2)$, it is impossible that $n_3=l, n_4 \neq l$ or $n_3 \neq l, n_4 = l$. As $(n_1,n_2)\neq (n_3,n_4)$ we thus obtain that $n_3 \neq l$ and $n_4 \neq l$. Consequently, $$[\ldots, m,l, \ldots, l,m, \ldots] \sqsubseteq \Gamma(w_1),~ [\ldots, l, l, \ldots, m, m, \ldots] \sqsubseteq \Gamma(w_2)$$ $$\mbox{or~}[\ldots, m,l, \ldots, l,m', \ldots] \sqsubseteq \Gamma(w_1),~ [\ldots, l, l, \ldots, m, m', \ldots]\sqsubseteq \Gamma(w_2)$$ and thus $$(m,l) \subseteq \Gamma(w_1), (l)(m)\subseteq \Gamma(w_2)$$ $$\mbox{or~} (\ldots, m,l,m', \ldots)\subseteq \Gamma(w_1), (l)(\ldots, m, m', \ldots)\subseteq \Gamma(w_2)$$ for the pairwise distinct numbers $l, m$ and $m'$. (Case 2) $n_1 \neq n_2$ and $n_3\neq n_4$ (the latter because otherwise, by symmetry reasons, we are as in Case 1). We obtain five possible cases:\\ \begin{flushleft} $(1)[n_2=n_3=m]:$\end{flushleft} $$[\ldots, m,m, \ldots, n_1,n_4, \ldots] \sqsubseteq \Gamma(w_1),~ [\ldots, n_1, m, \ldots, m, n_4, \ldots]\sqsubseteq \Gamma(w_2),$$ $(2)[n_2=n_4=m]:$ $$[\ldots, n_3,m, \ldots, n_1,m, \ldots]\sqsubseteq \Gamma(w_1),~ [\ldots, n_1, m, \ldots, n_3, m, \ldots]\sqsubseteq \Gamma(w_2),$$ $(3)[n_1=n_3=m]:$ $$[\ldots, m,n_2, \ldots, m,n_4, \ldots]\sqsubseteq \Gamma(w_1),~ [\ldots, m, n_2, \ldots, m, n_4, \ldots]\sqsubseteq \Gamma(w_2),$$ $(4)[n_1=n_4=m]:$ $$[\ldots, n_3,n_2, \ldots, m,m, \ldots]\sqsubseteq \Gamma(w_1),~ [\ldots, m, n_2, \ldots, n_3, m, \ldots]\sqsubseteq \Gamma(w_2),$$ $(5):$ $$[\ldots, k,m, \ldots, l,k', \ldots]\sqsubseteq \Gamma(w_1),~ [\ldots, l, m, \ldots, k, k', \ldots]\sqsubseteq \Gamma(w_2)$$ for some pairwise distinct positive integers $l, m,k,k'\leq n$. Cases one and four are as in (ii) of the statement of the lemma. Note that cases two and three are not possible, since $(n_1, n_2) \neq (n_3, n_4)$ and the restriction of $\Gamma(w_1)$ to $\{ 1, \ldots, n \} \backslash \Gamma(w_1)^{-1}(\theta )$ is an injective map. Case five is one of the desired options. Finally, because of (\ref{elements-of-ideal}) we know how the elements of $M$ are written as products of disjoint cycles. Hence it is easily seen that $w_1,\, w_2\in (S\backslash M)$. \end{proof} \section{Three types of semigroups} In this section we deal with each of the cases listed in Lemma~\ref{nilpotency}. For the first case we obtain the following description. \begin{lem}\label{nilpotency-case1} Let $S$ be a finite minimal non-nilpotent\ semigroup. Suppose $M = \mathcal{M}^0(G,n,n;I_{n})$ is a proper ideal, with $G$ a nilpotent group and $n\geq 2$. 
If there exists $u\in S \backslash M$ such that $(m,l) \subseteq \Gamma(u)$, then $$S=\mathcal{M}^0(G,2,2;I_{2}) \cup \langle u \rangle,$$ a disjoint union, and \begin{enumerate} \item $\langle u\rangle$ a cyclic group of order $2^{k}$, \item $u^{2^{k}}=1$ is the identity of $S$, \item $\Gamma(u)=(1,2)$ and $\Gamma(1)=(1)(2)$, \item $G=\langle \Psi(u)(1), \; \Psi(u)(2) \rangle$, \item $(\Psi(u)(1)\; \Psi(u)(2))^{2^{k-1}}=1$. \end{enumerate} A semigroup $S=\mathcal{M}^0(G,2,2;I_{2}) \cup \langle u \rangle$ that satisfies these five properties is said to be of type $U_{3}$. Furthermore, $S= \langle (g;i,j), u\rangle$ for any (nonzero) $(g;i,j) \in \mathcal{M}^0(G,2,2;I_{2})$. \end{lem} \begin{proof} Let $1_G$ denote the identity of $G$. Obviously $(m,l) \subseteq \Gamma(u)$ implies that $ (m)(l) \subseteq \Gamma(u^2)$. Then because of (\ref{elements-of-ideal}), it is easily seen that \begin{eqnarray} \Gamma((1_G;m,l))&=&\lambda_2(\Gamma((1_G;m,l)),\Gamma((1_G;l,m)),\Gamma(u^2),\Gamma(u)), \label{not-nilp1}\\ \Gamma((1_G;l,m))&=&\rho_2(\Gamma((1_G;m,l)),\Gamma((1_G;l,m)),\Gamma(u^2),\Gamma(u)). \label{not-nilp2} \end{eqnarray} Hence the semigroup $\langle u, (g;m,l),(g;l,m) \mid g\in G\rangle$ is not nilpotent by Lemma~\ref{finite-nilpotent}. Since $S$ is minimal non-nilpotent, this implies that $$S=\langle u, (g;m,l),(g;l,m) \mid g\in G\rangle.$$ Let $g\in G$. Since $u(1_G;m,l)=(x;l,l)$ for some $x\in G$, we obtain that $(g;m,l)u(1_G;m,l)=(gx;m,l) \neq \theta$. Hence, \begin{eqnarray} \label{left-right} (g;m,l)u&=&(gx;m,m). \end{eqnarray} In particular, $(g;m,l)u,\, u(g;m,l) \in I=\langle (g;m,l),\, (g;l,m) \mid g\in G\rangle$. Note that $I=\mathcal{M}^0(G,2,2;I_{2})$. Hence $I$ is an ideal in the semigroup $T=\langle u,I\rangle$. Because of (\ref{not-nilp1}) and (\ref{not-nilp2}) the semigroup $T$ is not nilpotent. Furthermore, for any $u'\in \langle u\rangle$ one easily sees that $\Gamma (u')$ has at least two fixed points or contains a transposition in its disjoint cycle decomposition. Hence, because of (\ref{elements-of-ideal}), $\Gamma (u') \not\in \Gamma (M)$. Therefore, $\langle u\rangle \cap M =\emptyset$. Consequently, we obtain that $S=\langle u\rangle \cup M$, a disjoint union, $n=2$ and $\Gamma (u)=(m,l)$. It is then clear that in (\ref{not-nilp1}) and (\ref{not-nilp2}) one may replace $u$ by $u^{k_1}$, with $k_1$ an odd positive integer. It follows that the subsemigroup $\langle u^{k_1},M\rangle$ is not nilpotent. Since $S$ is minimal non-nilpotent\ this implies that $S=\langle u,M\rangle =\langle u^{k_1},M\rangle$. So $u=u^{r}$ for some positive integer $r\geq 3$. Let $r$ be the smallest such positive integer. Then $u^{r-1}$ is an idempotent and $\langle u\rangle$ is a cyclic group of even order. As $\langle u\rangle =\langle u^{k_1}\rangle$ for any odd positive integer $k_1$, we get that $\langle u\rangle$ has order $2^{k}$ for some positive integer $k$. Without loss of generality we may assume that $m=1$, $l=2$. As $(1_G;1,1)u^2$ $=(\Psi(u)(2)\; \Psi(u)(1);1,1)$ we have $$(1_G;1,1)u^{2^k+1}=((\Psi(u)(2)\; \Psi(u)(1))^{2^{k-1}}\; \Psi(u)(2);1,2).$$ Since $\langle u\rangle$ has order $2^{k}$, $u^{2^k+1}=u$ and thus $(\Psi(u)(2)\Psi(u)(1))^{2^{k-1}}= 1_G$ and $(1_G;1,1)u^{2^k}=(1_G;1,1)$. It follows then easily that $(x;1,1)u^{2^k}=(x;1,1)$ for any $x\in G$. Similarly one obtains that $u^{2^k}(x;1,1)=(x;1,1)$, $u^{2^k}(x;2,2)=(x;2,2)u^{2^k}=(x;2,2)$ for any $x\in G$. Hence, $u^{2^{k}}$ is the identity of the semigroup $S$. Let $H=\langle \Psi(u)(1), \Psi(u)(2) \rangle$. 
From (\ref{not-nilp1}) and (\ref{not-nilp2}) it easily follows that the subsemigroup $$\mathcal{M}^0(H,2,2;I_{2})\cup \langle u\rangle$$ is not nilpotent. Since $S$ is minimal non-nilpotent, we obtain that $$\mathcal{M}^0( H, 2,2;I_2)\cup \langle u\rangle= \mathcal{M}^0( G, 2,2;I_2)\cup \langle u\rangle.$$ We now show that $G=H$. Suppose the contrary; then there exists $g \in G\backslash H$. Let $\alpha=(g;1,1)$. Clearly, $\alpha \not \in \mathcal{M}^0(H, 2,2;I_2)$ and thus $\alpha \in \langle u\rangle$. Since, for every positive integer $t$, $\Gamma(u^{t})$ equals $(1,2)$ or $(1)(2)$, while $\Gamma((g;1,1))=(1)$, this yields a contradiction. Thus $G=H$ and $S$ is a semigroup of type $U_{3}$. Now suppose that $(g;i,j)$ is a nonzero element of $\mathcal{M}^0(G,2,2;I_{2})$. If $i=j$ then \begin{eqnarray} \Gamma((g;i,i))&=&\lambda_2(\Gamma((g;i,i)),\Gamma(u(g;i,i)u),\Gamma(u),\Gamma(u^2)), \\ \Gamma(u(g;i,i)u)&=&\rho_2(\Gamma((g;i,i)),\Gamma(u(g;i,i)u),\Gamma(u),\Gamma(u^2)). \end{eqnarray} Hence the semigroup $\langle (g;i,i), u\rangle$ is not nilpotent by Lemma~\ref{finite-nilpotent}. Since $S$ is minimal non-nilpotent\ this implies that $S=\langle (g;i,j), u\rangle$. Otherwise if $i \neq j$ we have \begin{eqnarray} \Gamma((g;i,j)u)&=&\lambda_2(\Gamma((g;i,j)u),\Gamma(u(g;i,j)),\Gamma(u),\Gamma(u^2)), \\ \Gamma(u(g;i,j))&=&\rho_2(\Gamma((g;i,j)u),\Gamma(u(g;i,j)),\Gamma(u),\Gamma(u^2)). \end{eqnarray} Hence the semigroup $\langle (g;i,j), u\rangle$ is not nilpotent by Lemma~\ref{finite-nilpotent}. Again since $S$ is minimal non-nilpotent\ this implies that $S=\langle (g;i,j), u\rangle$. \end{proof} Note that not every semigroup of type $U_{3}$ is minimal non-nilpotent. Indeed, let $S=\mathcal{M}^0(G,2,2;I_{2}) \cup \langle u \rangle$, with $\langle u\rangle$ a cyclic group of order $2$, $u^{2}=1$ is the identity of $S$, $\Gamma(u)=(1,2)$, $\Gamma(1)=(1)(2)$, $\Psi(u)(1)=g$, $\Psi(u)(2)=g$ and $G$ a cyclic group $\{1,g\}$. The subsemigroup $$\{(1_G;1,1), (1_G;2,2),(g;1,2),(g;2,1), u,1,\theta\}$$ is isomorphic with $\mathcal{M}^0(\{e\},2,2;I_{2}) \cup \langle u \rangle$. As this is a proper subsemigroup and it is not nilpotent, the semigroup $S$ is of type $U_{3}$ but it is not minimal non-nilpotent. Semigroups of type $U_{3}$ show up as obstructions in the description, given in \cite{Riley}, of the structure of linear semigroups satisfying certain global and local nilpotence conditions. In particular, it is described when finite semigroups are positively Engel. Recall that a semigroup $S$ is said to be \textit{positively Engel}, denoted (PE), if for some positive integer $n\geq 2$, $\lambda_{n}(a,b,1,1,c,c^{2},\ldots, c^{n-2}) =\rho_{n} (a,b,1,1,c,c^{2},\ldots, c^{n-2})$ for all $a, b$ in $S$ and $c\in S^{1}$. From Corollary 8 in \cite{Riley} it follows that one of the obstructions for a finite semigroup $S$ to be (PE) is that $S$ has an epimorphic image that has the semigroup $\mathcal{F}_{7}$ of type $U_{3}$ as a subsemigroup, where $\mathcal{F}_{7}=\mathcal{M}^0(\{e\},2,2;I_{2}) \cup \langle u \mid u^{2}=1 \rangle$. In order to deal with the second and third cases listed in Lemma~\ref{nilpotency} we prove the following lemma. \begin{lem}\label{nilpotency-lemma} Let $S= \mathcal{M}^0(G,3,3;I_{3}) \cup \langle w_1, w_2\rangle$ be a semigroup that is the union of the ideal $ M=\mathcal{M}^0(G,3,3;I_{3})$ and the subsemigroup $T=\langle w_1,w_2\rangle$. Suppose $\Gamma(w_1)=(2,1,3,\theta)$ and $\Gamma (w_2)= (2,3,\theta ) (1)$.
Assume $G$ is a nilpotent group, $\theta$ is the zero element of both $M$ and $S$, and suppose $w_2w_{1}^{2}=w_{1}^{2}w_2=w_{1}^{3}=w_2w_{1}w_2= \theta$. Then the following properties hold. \begin{enumerate} \item $S$ is not nilpotent. \item $T$ is nilpotent. \item If a subsemigroup $S'$ of $S$ is not nilpotent, then $\langle w_1, w_2\rangle \subseteq S'$. \item Every proper Rees factor semigroup of $S$ is nilpotent. \end{enumerate} \end{lem} \begin{proof} (1) As $$\Gamma((1_G;1,1))=\lambda_2(\Gamma((1_G;1,1)),\Gamma((1_G;2,3)),\Gamma(w_1),\Gamma(w_2))$$ and $$ \Gamma((1_G;2,3))=\rho_2(\Gamma((1_G;1,1)),\Gamma((1_G;2,3)),\Gamma(w_1),\Gamma(w_2))$$ we get from Lemma~\ref{finite-nilpotent} that $S$ is not nilpotent. (2) Clearly $I=T\backslash \langle w_{2}\rangle$ is an ideal of $T$ and $I^{3}=\{ \theta\}$. Obviously $T/I$ is commutative and thus nilpotent. Hence, $T$ is nilpotent. (3) Assume $S'$ is a subsemigroup of $S$ that is not nilpotent. Again by Lemma~\ref{finite-nilpotent}, there exists a positive integer $p$, distinct elements $t, t'\in S'$ and $t_{1}, t_{2}, \ldots, t_{p}\in S'^1$ such that $t = \lambda_{p}(t, t', t_{1}, t_{2}, \ldots, t_{p})$, $t' = \rho_{p}(t, t', t_{1}, t_{2}, \ldots, t_{p})$. Since $T$ is nilpotent, $$\{t, t', t_{1}, t_{2}, \ldots, t_{p}\} \cap \mathcal{M}^0(G,3,3;I_{3}) \neq \emptyset $$ and since $\mathcal{M}^0(G,3,3;I_{3})$ is an ideal of $S$, $t$ and $t'$ are in $\mathcal{M}^0(G,3,3;I_{3})$. Since $S'$ is not nilpotent and $\mathcal{M}^0(G,3,3;I_{3})$ is nilpotent, we obtain that at least one of the elements $t_1, \ldots, t_{p}$ is in $T$. Now, if necessary, replacing $t$ by $\lambda_{i-1}(t, t', t_{1}, t_{2}, \ldots, t_{i-1})$ and $t'$ by $\rho_{i-1}(t, t', t_{1}, t_{2}, \ldots, t_{i-1})$, where $i$ is minimal such that $t_{i}\in T$, we may assume that $t_1\in T$. Write $t=(g_1 ;n_1,n_2)$ and $t'=(g_2 ;n_3,n_4)$, for some $1\leq n_1,n_2,n_3,n_4 \leq 3$ and $g_1 , g_2 \in G$. Consider the following subsets of $T$: $A=\{w_1\}$, $B=\{w_{2}\}$, $C=\{w_{2}^{n} \mid n \in \mathbb{N}, n > 1\}$, $D=\{w_1 w_{2}^{n} w_1 \mid n \in \mathbb{N}\}$, $E=\{w_1 w_{2}^n \mid n \in \mathbb{N}, n > 0\}$, $F=\{w_{2}^{n} w_1 \mid n \in \mathbb{N}, n > 0\}$ and $Z=\{T^1w_{1}^{2}T^1, T^1w_{2} w_{1} w_{2} T^1 \} \backslash \{w_{1}^{2}\}.$ By determining the images of these sets under the mapping $\Gamma$ one sees that these sets form a partition of $T$. Since $w_{2}w_{1}^{2}=w_{1}^{2}w_{2}=w_{1}^{3}=w_{2}w_{1}w_{2}= \theta$ we have that $Z=\{\theta\}$. Hence $t_1 \not \in Z$. If $t_1 \in C$ then $n_1=n_2=n_3=n_4=1$ (because, for every $a \in C$ we have $ \Gamma(a)=(1)$) and thus $g_{1} = \lambda_{p}(g_{1},g_{2}, x_{1}, \ldots, x_{p})$ and $g_{2}=\rho_{p}(g_{1},g_{2},x_{1}, \ldots, x_{p})$ for some $x_{1},\ldots, x_{p}\in G$, in contradiction with $G$ being nilpotent. If $t_1 \in D$ then $n_1=n_3=2$ and $n_2=n_4=3$ (because for every $a \in D$ we have $\Gamma(a)=(2,3,\theta)$); this again yields a contradiction with $G$ being nilpotent. Similarly $t_1 \not \in E, F$. Now suppose that $t_1\in A$, i.e. $t_{1}=w_1$. Since $\Gamma(w_1)=(2,1,3,\theta)$, $t=\lambda_{p}(t,t',t_{1},t_{2},\ldots, t_{p})\neq \theta$ and $t'=\rho_{p}(t,t',t_{1},\ldots, t_{p})\neq \theta$ we get that $\{n_1,n_3\} \subseteq \{1,2\}$. As $tw_1t'\neq \theta$ and $t'w_1t\neq \theta$ we obtain that $n_2=\Gamma (w_{1})(n_3)$ and $n_4=\Gamma (w_{1})(n_1)$. Hence, if $n_1=n_3$, then $n_2=n_4$, again yielding a contradiction with $G$ being nilpotent. So, $n_1\neq n_3$.
If $n_1=1$ then $n_3=2$, $n_4=3$, $n_2=1$ and thus $\{(n_1,n_2),(n_3,n_4)\}=\{(1,1),(2,3)\}$. Similarly, we also get the latter if $n_1=2$. Note that in this case $p>1$. It then can be easily verified that $t_2=w_2$ and thus $T \subseteq S'$, as desired. Similarly if $t_1=w_2$, then $T \subseteq S'$. (4) Since $M$ is a completely $0$-simple semigroup and because $T$ is nilpotent, it is clear that every proper Rees factor of $S$ is nilpotent. \end{proof} We are now in a position to obtain a description of finite minimal non-nilpotent\ semigroups that are not of type $U_3$ and that have a proper ideal that is a completely $0$-simple inverse semigroup. \begin{lem}\label{nilpotency-case2} Let $S$ be a finite minimal non-nilpotent\ semigroup with a proper ideal $M = \mathcal{M}^0(G,n,n;I_{n})$, $G$ a nilpotent group and $n\geq 2$. Suppose $S$ is not of type $U_{3}$, i.e. for $x \in S\backslash M$ there do not exist distinct numbers $l_1$ and $l_2$ between $1$ and $n$ such that $(l_1,l_2)\subseteq \Gamma(x)$. Then $S$ is a semigroup of one of the following two types. \begin{enumerate} \item $S= \mathcal{M}^0(G,3,3;I_{3}) \cup \langle x_1, x_2\rangle$, with $ \mathcal{M}^0(G,3,3;I_{3})$ a proper ideal of $S$, $\Gamma(x_1)=(2,1,3,\theta)$, $\Gamma(x_2)=(2,3,\theta)(1)$, $x_{2}x_{1}^{2}=x_{1}^{2}x_{2}=x_{1}^{3}=x_{2}x_{1}x_{2}= \theta$ (the zero element of $S$), $$G=\langle \Psi(x_1)(1), \, \Psi(x_1)(2),\, \Psi(x_2)(1),\, \Psi(x_2)(2) \rangle .$$ Such a semigroup is said to be of type $U_{4}$. \item $S= \mathcal{M}^0(G,n,n;I_{n}) \cup \langle v_1, v_2\rangle$, with $ \mathcal{M}^0(G,n,n;I_{n})$ a proper ideal of $S$, $$[\ldots, k_1, k_2, \ldots, k_3,k_4, \ldots]\sqsubseteq \Gamma(v_1), [\ldots, k_1, k_4, \ldots, k_3, k_2, \ldots] \sqsubseteq \Gamma(v_2)$$ for pairwise distinct numbers $k_1, k_2,k_3$ and $k_4$ between $1$ and $n$, $G=\langle \Psi(v_1)(1), \ldots, \Psi(v_1)(n),\, \Psi(v_2)(1), \ldots, \Psi(v_2)(n), \theta \rangle \backslash \{\theta\}$ and there do not exist pairwise distinct numbers $o_1, o_2$ and $o_3$ between $1$ and $n$ such that $(o_2,o_1,o_3,\theta)\subseteq \Gamma(y)$, $(o_2,o_3,\theta)(o_1)\subseteq \Gamma(z) $ for some $y,z\in \langle v_1, v_2 \rangle$. Such a semigroup is said to be of type $U_{5}$. \end{enumerate} Furthermore, if $S$ is of type $U_{4}$ then $S=\langle (g;1,1), (g';2,3), x_1,\, x_2 \rangle$ and if $S$ is of type $U_{5}$ then $S=\langle (g;k_1, k_4), (g';k_3, k_2), v_1, v_2 \rangle $, for every $g, g' \in G$. \end{lem} \begin{proof} For clarity we give a brief outline of the structure of the proof. By assumption $S$ is not of type $U_{3}$ and hence it follows from Lemma~\ref{nilpotency} that we have two cases to deal with; this is done in three parts. In part (1) we deal with a special case of part (ii) of Lemma~\ref{nilpotency}. In the remainder of the proof we then assume that we are not in this special case. In part (2) we deal with all cases occurring in part (iii) of Lemma~\ref{nilpotency} as well as in part (ii) of Lemma~\ref{nilpotency}, the latter provided some extra condition is satisfied. Finally in part (3), we show that if this extra assumption is not satisfied then $S$ has to be of type $U_5$. Part (1). We begin the proof by handling a special case stated in part (ii) of Lemma~\ref{nilpotency}. Suppose that there exist elements $x_1, x_2\in S$ such that $$(o_2,o_1,o_3,\theta) \subseteq \Gamma(x_{1}) \mbox{ and } (o_2,o_3,\theta)(o_1)\subseteq \Gamma(x_{2}),$$ with $o_1,\, o_2,\, o_3$ positive integers between $1$ and $n$.
It is then readily verified that $\mathcal{M}^0(G,\{o_1,o_2,o_3\},\{o_1,o_2,o_3\};I_{\{o_1,o_2,o_3\}}$ $)$ is an ideal in the subsemigroup $\mathcal{M}^0(G,\{o_1,o_2,o_3\},\{o_1,o_2,o_3\};I_{\{o_1,o_2,o_3\}}$ $) \cup \langle x_1, x_2 \rangle$ of $S$. Because \begin{eqnarray} ~\Gamma((1_G;o_1,o_1))&=& \lambda_2(\Gamma((1_G;o_1,o_1)),\Gamma((1_G;o_2,o_3)),\Gamma(x_1),\Gamma(x_2)) \label{eq3u3} , \\ ~\Gamma((1_G;o_2,o_3))&=& \rho_2(\Gamma((1_G;o_1,o_1)),\Gamma((1_G;o_2,o_3)),\Gamma(x_1),\Gamma(x_2)) \label{eq4u3} \end{eqnarray} we get that the semigroup $\mathcal{M}^0(G,\{o_1,o_2,o_3\},\{o_1,o_2,o_3\};I_{\{o_1,o_2,o_3\}}$ $) \cup \langle x_1, x_2 \rangle$ is not nilpotent and thus, as $S$ is minimal non-nilpotent, $S = \mathcal{M}^0(G,3,3;I_3) \cup \langle x_1, x_2 \rangle$. We now prove that $S$ is a semigroup of type $U_4$. We do so by showing that $x_{1}$ and $x_{2}$ satisfy conditions listed in part (1) of the statement of the lemma. Let $T=\langle x_{1},x_{2}\rangle$. Consider the following subsets of $T$: $A= \{x_1\}$, $B=\{x_{2}\}$, $C=\{x_{2}^{n} \mid n \in \mathbb{N}, n > 1\}$, $D=\{x_1 x_{2}^{n} x_1 \mid n \in \mathbb{N}\}$, $E=\{x_1 x_{2}^n \mid n \in \mathbb{N}, n > 0\}$, $F=\{x_{2}^{n} x_1 \mid n \in \mathbb{N}, n > 0\}$ and $Z=\{T^1x_{1}^{2}T^1, T^1x_{2} x_{1} x_{2} T^1 \} \backslash \{x_{1}^{2}\}$. By determining the images of these sets under the mapping $\Gamma$ one sees that these sets form a partition of $T$. Since $S'=\{T^1x_{1}^2T^1, T^1x_{2} x_{1}x_{2} T^1 \} \backslash \{x_{1}^2\} \cup \{\theta\}$ is an ideal of $S$, it easily follows from (\ref{eq3u3}) and (\ref{eq4u3}) that $S / S'$ is non-nilpotent. Hence, $S'=\{\theta\}$ and thus $Z=\{\theta\}$. So, $x_{1}^3= x_{1}^{2}x_{2}=x_{2}x_{1}^{2}=x_{2}x_{1}x_{2}=\theta$, as desired. From Lemma~\ref{nilpotency-lemma}(2) we know that the subsemigroup $T=\langle x_{1},x_{2}\rangle $ is nilpotent. Let $H=\langle \Psi(x_1)(o_1), \Psi(x_1)(o_2), \Psi(x_2)(o_1), \Psi(x_2)(o_2) \rangle$. For simplicity, and without loss of generality, we may assume that $o_1=1$, $o_2=2$ and $o_3=3$. From (\ref{eq3u3}) and (\ref{eq4u3}) it can be easily verified that the subsemigroup $$\mathcal{M}^0(\langle \Psi(x_1)(1), \Psi(x_1)(2), \Psi(x_2)(1), \Psi(x_2)(2) \rangle,3,3;I_{3})\cup \langle x_{1},x_{2}\rangle$$ is not nilpotent. Hence $$\mathcal{M}^0( H, 3,3;I_{3})\cup \langle x_{1},x_{2}\rangle= \mathcal{M}^0( G, 3,3;I_{3})\cup \langle x_{1},x_{2}\rangle.$$ We now show that $G=H$. Suppose the contrary and let $g \in G\backslash H$. Let $\alpha=(g;1,1)$. Clearly, $\alpha \not \in \mathcal{M}^0(H, 3,3;I_{3})$ and thus $\alpha \in \langle x_{1},x_{2}\rangle$. Since $(g;1,1)(1_G;1,1) \neq \theta$, we get that $\Gamma(\alpha)(1), \Psi(\alpha)(1) \neq \theta$ and $(g;1,1)=(g;1,1)(1_G;1,1)=\alpha(1_G;1,1)=(\Psi(\alpha)(1);\Gamma(\alpha)(1),1)$ and thus $g = \Psi(\alpha)(1)$. This contradicts with $g \not \in H$. So, indeed, $G=H$. Hence we have shown that indeed $S$ is a semigroup of type $U_{4}$. To prove the last part of the statement of the lemma for this semigroup, let $g,g'\in G$. Since $$\Gamma((g;1,1))=\lambda_2(\Gamma((g;1,1)),\Gamma((g';2,3)),\Gamma(x_1),\Gamma(x_2))$$ and $$ \Gamma((g';2,3))=\rho_2(\Gamma((g;1,1)),\Gamma((g';2,3)),\Gamma(x_1),\Gamma(x_2)),$$ we get that the subsemigroup $\langle (g;1,1), (g';2,3), x_1, x_2 \rangle$ is not nilpotent. Since $S$ is minimal non-nilpotent\ it follows that $S= \langle (g;1,1), (g';2, 3), x_1,$ $ x_2 \rangle$. Part (2). 
In the remainder of the proof we assume that there do not exist pairwise distinct numbers $o_1, o_2$ and $o_3$ between $1$ and $n$ such that $(o_2,o_1,o_3,\theta)\subseteq \Gamma(y)$, $(o_2,o_3,\theta)(o_1)\subseteq \Gamma(z) \mbox{ for some } y,z\in S$. Because, by assumption $S$ is not of type $U_{3}$, it follows from Lemma~\ref{nilpotency} that $S\backslash M$ contains elements $w_1$ and $w_2$ such that $$(\ldots, m,l,m', \ldots) \subseteq \Gamma(w_1), (l)(\ldots, m, m',\ldots)\subseteq \Gamma(w_2),$$ or it contains elements $v_1$ and $v_2$ such that $$ [\ldots, k,m, \ldots, l,k', \ldots]\sqsubseteq \Gamma(v_1), [\ldots, l, m, \ldots, k, k', \ldots] \sqsubseteq \Gamma(v_2),$$ for some pairwise distinct numbers $l, m, m',k$ and $k'$ between $1$ and $n$. In the former case, without loss of generality, we may assume that $m=1$, $l=3$ and $m'=2$. Assume the former case holds, i.e. $$(\ldots, 1,3,2, \ldots ) \subseteq \Gamma (w_{1}) \mbox{ and } (3) \, (\ldots, 1,2, \ldots ) \subseteq \Gamma (w_{2}) ,$$ and also suppose that $$\Gamma(w_1)(2)=r \neq \theta .$$ Then, $[\ldots, 1,2, \ldots, 3,r, \ldots]\sqsubseteq \Gamma(w_{1}^2), [\ldots, 1, r, \ldots, 3, 2, \ldots] \sqsubseteq \Gamma(w_1w_2)$. Hence, $v_{1}=w_{1}^{2}$ and $v_{2}=w_{1}w_{2}$ are elements as in the latter case. For such elements $\Gamma(v_1)=[\ldots, 1,2, \ldots, 3,r, \ldots]$ and $\Gamma(v_2)=[\ldots, 1, r, \ldots, 3, 2,$ $ \ldots]$ we get that $$\Gamma((1_G;1,r))=\lambda_2(\Gamma ((1_G;1,r)),\Gamma((1_G;3,2)),\Gamma(v_1),\Gamma(v_2))$$ and $$ \Gamma((1_G;3,2))=\rho_2(\Gamma((1_G;1,r)),\Gamma((1_G;3,2)),\Gamma(v_1),\Gamma(v_2)).$$ It follows that the subsemigroup $\mathcal{M}^0(G,n,$ $n;I_n) \cup \langle v_{1}, v_{2} \rangle$ is not nilpotent. So, because $S$ is minimal non-nilpotent, $S = \mathcal{M}^0(G,n,n;I_n) \cup \langle v_1, v_2 \rangle$. Furthermore, it is easily verified that $$\Gamma((g;1, r))=\lambda_2(\Gamma((g;1, r)),\Gamma((g';3, 2)),\Gamma(v_1),\Gamma(v_2))$$ and $$ \Gamma((g';3,2))=\rho_2(\Gamma((g;1, r)),\Gamma((g';3,2)),\Gamma(v_1),\Gamma(v_2)).$$ Hence, for any $g,g'\in G$, the subsemigroup $\langle (g;1, r), (g';3,2), v_1, v_2\rangle$ is not nilpotent. Therefore, $S=\langle (g;1, r), (g';3,2), v_1, v_2 \rangle$ for every $g, g' \in G$. We now show that such a semigroup $S$ is of type $U_5$. Let $$H_{1}=\langle \Psi(v_1)(1), \ldots, \Psi(v_1)(n), \Psi(v_2)(1), \ldots, \Psi(v_2)(n), {\theta} \rangle.$$ Note that $H=H_{1}\backslash \{ \theta \}$ is a subgroup of the maximal subgroup $G$ defining $M$. Since \begin{eqnarray*} \Gamma((1_G;3,2))&=&\lambda_2(\Gamma((1_G;3,2)),\Gamma((1_G;1,r)),\Gamma(v_1),\Gamma(v_2)), \label{v1}\\ \Gamma((1_G;1,r))&=&\rho_2(\Gamma((1_G;3,2)),\Gamma((1_G;1,r)),\Gamma(v_1),\Gamma(v_2)) \label{v2} \end{eqnarray*} we get that $\mathcal{M}^0( H, n,n;I_{n})\cup \langle v_{1},v_{2}\rangle$ is not nilpotent. Because, by assumption, $S$ is minimal non-nilpotent, we obtain that $$\mathcal{M}^0( H, n,n;I_{n})\cup \langle v_{1},v_{2}\rangle= \mathcal{M}^0( G, n,n;I_{n})\cup \langle v_{1},v_{2}\rangle.$$ We now show that $G=H$. Suppose the contrary and let $g \in G\backslash H$. Let $\alpha=(g;1,1)$. Clearly, $\alpha \not \in \mathcal{M}^0(H, n,n;I_{n})$ and thus $\alpha \in \langle v_{1},v_{2}\rangle$. Since $(g;1,1)(1_G;1,1) \neq \theta$ we get that $\Gamma(\alpha)(1) \neq \theta$, $\Psi(\alpha)(1) \neq \theta$ and $$(g;1,1)=(g;1,1)(1_G;1,1)=\alpha(1_G;1,1)=(\Psi(\alpha)(1);\Gamma(\alpha)(1),1)$$ Thus $g = \Psi(\alpha)(1)$. This contradicts with $g \not \in H$. So, indeed, $G=H$. Part (3). 
We are left to deal with the case that $S\backslash M$ contains elements $w_1$ and $w_2$ such that $$(\ldots, 1,3,2, \ldots) \subseteq \Gamma(w_1), (3)(\ldots, 1, 2,\ldots)\subseteq \Gamma(w_2),$$ and $$\Gamma(w_1)(2)=\theta .$$ We show that if $S$ is not of type $U_5$ then this can not occur. First notice that we also may assume that there does not exist a positive integer $r'$, with $1\leq r' \leq n$, such that $\Gamma (w_{1})(r')=1$. Indeed, suppose the contrary and let $r'$ be such a positive integer. Then $[\ldots, 1,2, \ldots, r',3, \ldots]\sqsubseteq \Gamma(w_{1}^2), [\ldots, 1, 3, \ldots, r', 2, \ldots] \sqsubseteq \Gamma(w_2w_1)$ and thus $S$ is a semigroup of type $U_{5}$. This proves the claim. Thus $$(1,3,2,\theta) \subseteq \Gamma(w_1) .$$ Next we claim that the cycle $(\ldots, 1, 2, \ldots)$ in $\Gamma(w_2)$ ends in $\theta$. Indeed, assume the contrary. That is, this cycle ends in a positive integer. Let $n_{2}$ denote the length of this cycle. Then, $(1,3) \subseteq \Gamma(w_2^{n_2-1}w_1), (1)(3)\subseteq \Gamma(w_2^{n_2})$. However, this is excluded by assumption. This proves the claim and thus there exist positive integers $k,k'$ and $n'$ such that $\Gamma(w_2)(k)=k'\neq \theta$, $\Gamma(w_2)(k')= \theta$ and $\Gamma(w_2^{n'})(1)=k'$. So $$(3)\, ( \ldots, 1,2, \ldots, k,k',\theta ) \subseteq \Gamma (w_{2}) .$$ If $\Gamma(w_2^{n'-1}w_1)(k')=k''\neq \theta$ then $[\ldots, 1,k', \ldots, 3,k'', \ldots]\sqsubseteq \Gamma((w_2^{n'-1}w_1)^2)$, $[\ldots, 1, k'', \ldots, 3, k', \ldots] \sqsubseteq \Gamma(w_2^{n'-1}w_1w_2^{n'})$. Since $\Gamma(w_2)(3)=3, \Gamma(w_2)(1)=2$ and $\Gamma(w_2)(k')=\theta$, it is clear that $1,k'$ and $3$ are pairwise distinct. As $\Gamma(w_2^{n'-1}w_1)(k')=k''\neq \theta$, we get that $\Gamma(w_1)(k')=\alpha \neq \theta$. As $(1,3,2,\theta) \subseteq \Gamma(w_1)$ and because $1,k',3$ are pairwise distinct positive integers, we get that $\alpha \not \in \{1,2,3\}$. We claim that $1,k',3$ and $k''$ are pairwise distinct and thus that $S$ is a semigroup of type $U_{5}$, yielding a contradiction. Indeed, for otherwise, $k''=3$ or $k''=k'$ or $k''=1$. The former is excluded as it implies that $\alpha= 3$. If $k''=k'$, then $\alpha=2$, a contradiction. If $k''=1$ then $(\alpha,3)\subseteq \Gamma(w_1w_2^{n'-1}w_1w_2^{2n'-1})$ and thus $S$ is of type $U_3$, again a contradiction. This proves the claim. Finally, if $\Gamma(w_2^{n'-1}w_1)(k')=\theta$, we get that $(1,3,k', \theta) \subseteq \Gamma(w_2^{n'-1}w_1)$. Clearly, $(3)(1, k',\theta) \subseteq \Gamma(w_2^{n'})$. However, this contradicts with our assumptions. This final contradiction shows that indeed this considered case does not occur. \end{proof} \section{Main result and examples}\label{main Theorem} We are now in a position to state and prove the main result. \begin{thm} \label{main-theorem} Let $S$ be a finite minimal non-nilpotent\ semigroup. Then $S$ is either a Schmidt group or a semigroup of type $U_{1}$, $U_{2}$, $U_{3}$, $U_4$ or $U_{5}$. 
In particular, the semigroups $S$ of type $U_m$, with $3\leq m \leq 5$, are generated by four elements, they have a two-generated subsemigroup $T$ and an ideal $M=\mathcal{M}^{0}(G,n,n;I_{n})$ (with $G$ a nilpotent group) such that $$S=M\cup T,$$ and there exists a representation $$\Gamma : S\longrightarrow \mathcal{T}_{\{1, \ldots, n\} \cup \{\theta\}} ,$$ such that, for all $s\in S$, \begin{enumerate} \item $\Gamma (s) (\theta )=\theta$, \item $\Gamma (s)$ is injective when restricted to $\{ 1 ,\ldots, n\} \backslash \Gamma (s)^{-1}(\theta )$, \item $\abs{\Gamma^{-1}(\theta )}=1$, \item if $T$ has a zero element, say $\theta_{T}$, then $\theta_{T}=\theta$ (the zero of $S$). \end{enumerate} Furthermore, $\Gamma (S)$ also is a minimal non-nilpotent\ semigroup. \end{thm} \begin{proof} All parts, except the items (3) and (4), follow at once from Lemma~\ref{starter}, Lemma~\ref{nilpotency}, Lemma~\ref{nilpotency-case1} and Lemma~\ref{nilpotency-case2}. Part (3) follows from the fact that $\Gamma^{-1}(\theta )$ is an ideal of $S$ and $S/\Gamma^{-1}(\theta )$ is not nilpotent. To prove part (4), assume $T$ has a zero element, say $\theta_{T}$. We prove by contradiction that $\theta_{T}=\theta$. So suppose that $\theta_T\neq \theta$. Then, by part (3), $\Gamma(\theta_T)\neq \theta$. Hence, there exists $i$ between $1$ and $n$ such that $\Gamma(\theta_T)(i)\neq \theta$. Now let $t \in T$. We have $$\theta_T(1_G;i,i)=(\Psi(\theta_T)(i) ; \Gamma(\theta_T)(i),i)$$ and \begin{eqnarray*} \theta_Tt(1_G;i,i)&=&\theta_T(\Psi(t)(i);\Gamma(t)(i),i)\\ &=&(\Psi(\theta_T)(\Gamma(t)(i))\Psi(t)(i);\Gamma(\theta_T)(\Gamma(t)(i)),i). \end{eqnarray*} Because $\theta_T t= \theta_T$ we obtain that $\Gamma(\theta_T)(\Gamma(t)(i))=\Gamma(\theta_T)(i)$. Now as $\Gamma(\theta_T)(i) \neq \theta$, $\Gamma(t)(i)=i$. Therefore, for every $t\in T$, we have $(i) \subseteq \Gamma(t)$. Because $\Gamma((g;\alpha,\beta))=(\beta,\alpha,\theta)$ for every $(g;\alpha,\beta) \in M$, it follows that $M \cap T = \emptyset$. Let $M'=\mathcal{M}^{0}(G,\{1,\ldots,i-1, i+1, \ldots,n\},\{1,\ldots,i-1, i+1, \ldots,n\};I_{n-1})$. Since $(i) \subseteq \Gamma(t)$ and $\Gamma(t)$ restricted to $\{ 1, \ldots, n\} \backslash \Gamma(t)^{-1}(\theta)$ is injective for every $t \in T$, we get that $M'T, TM' \subseteq M'$ and thus $M' \cup T$ is a subsemigroup of $S$. Lemma~\ref{nilpotency} implies that there exist elements $w_1$ and $w_2$ of $T$ such that $$(m,l) \subseteq \Gamma(w_1), (m)(l) \subseteq \Gamma(w_2) $$ $$\mbox{or~} (\ldots, m,l,m', \ldots) \subseteq \Gamma(w_1), (l)(\ldots, m, m',\ldots)\subseteq \Gamma(w_2)$$ $$\mbox{or~} [\ldots, k,m, \ldots, l,k', \ldots]\sqsubseteq \Gamma(w_1), [\ldots, l, m, \ldots, k, k', \ldots] \sqsubseteq \Gamma(w_2)$$ for pairwise distinct numbers $l, m, m',k$ and $k'$ between $1$ and $n$. As $\Gamma(w_1)(o)$ $\neq o$ for $o \in \{l, m, m',k,k'\}$, $i \not \in \{l, m, m',k,k'\}$. Hence $M' \cup T$ is not nilpotent. As $T \cap M = \emptyset$ and $M \neq M'$, $S \neq M' \cup T$ and this is in contradiction with $S$ being minimal non-nilpotent. \end{proof} The theorem shows that finite minimal non-nilpotent\ semigroups that are not a group belong to five classes. In order to get a complete classification, a remaining problem is to determine which semigroups in these classes are actually minimal non-nilpotent. 
In particular, one has to determine when precisely a union $M\cup T$ of an inverse semigroup $M=\mathcal{M}^{0}(G,n,n;I_{n})$ (with $G$ a nilpotent group) and a two-generated semigroup $T$ is minimal non-nilpotent. One might expect that the easiest case to deal with is when $M$ and $T$ are $\theta$-disjoint, i.e. the only possible common element is the zero element $\theta$. In Corollary~\ref{main-cor} we show that every finite minimal non-nilpotent\ semigroup which is of type $U_3$, $U_4$ or $U_5$ is an epimorphic image of such a semigroup. However, not every semigroup of type $U_4$ or $U_5$ that is a $\theta$-disjoint union of $M$ and $T$ is minimal non-nilpotent. Next we give examples of minimal non-nilpotent\ semigroups of type $U_5$ for which the maximal subgroups of $M$ are not trivial. We finish by constructing an infinite class of minimal non-nilpotent\ semigroups of type $U_5$ with $n\geq 5$ and $G$ the trivial group. Note that in general the subsemigroups $T$ and $M$ of a minimal non-nilpotent\ semigroup $U_{m}$ (listed in Theorem~\ref{main-theorem}) are not $\theta$-disjoint ($\theta$-disjoint means that if there is a common element then it is $\theta$). We now show that $U_{m}$ (with $3\leq m \leq 5$) is an epimorphic image of a semigroup built on $\theta$-disjoint semigroups. Let $T$ be a semigroup with a zero $\theta_T$ and let $M$ be a nilpotent regular Rees matrix semigroup $\mathcal{M}^0(G,n,n;I_n)$. Let $\Gamma$ be a representation of $T$ into the full transformation semigroup $\mathcal{T}_{\{ 1, \ldots, n\} \cup \{\theta\}}$ such that for every $t\in T$, $\Gamma (t) (\theta ) =\theta$, $\abs{\Gamma^{-1}(\theta )}\leq 1$ (as agreed before, by $\theta$ we also denote the constant map onto $\theta$), $\Gamma(t)$ restricted to $\{ 1, \ldots, n\} \backslash \Gamma(t)^{-1}(\theta)$ is injective and $\Gamma (\theta_T) =\theta$. Further, for every $t\in T$, let $$\Psi(t): \{ 1, \ldots, n\} \cup \{\theta\}\rightarrow G^{\theta}$$ be a map (as considered in (\ref{map-psi})) such that $\Psi(t)(i)\neq \theta$ if and only if $\Gamma(t)(i) \neq \theta$ and $\Psi (t_{1}t_{2}) =(\Psi (t_{1}) \circ \Gamma (t_{2})) \, \Psi (t_{2})$ for every $t_{1},t_{2}\in T$. We define a semigroup denoted by $$S=\mathcal{M}^{0}(G,n,n;I_{n}) \cup_{\Psi}^{\Gamma} T.$$ As sets this is the $\theta$-disjoint union of $\mathcal{M}^{0}(G,n,n;I_{n})$ and $T$ (i.e. the disjoint union with the zeros identified). The multiplication is such that $T$ and $M$ are subsemigroups, $$t \, (g;i,j) = \left\{ \begin{array}{ll} (\Psi (t)(i) g; \Gamma (t)(i),j) & \mbox{ if } \Gamma (t)(i) \neq \theta\\ \theta & \mbox{ otherwise} \end{array} \right. $$ and $$ (g;i,j) t = \left\{ \begin{array}{ll} (g\Psi(t)(j');i,j') & \mbox{ if } \Gamma (t)(j')=j\\ \theta &\mbox{ otherwise } \end{array} \right. $$ It can be easily verified that the multiplication so defined is associative. Note that if $G=\{e\}$, then $\Psi(t)(i)=e$ if and only if $\Gamma(t)(i) \neq \theta$. In this case we denote $\Psi$ simply as $id$. It follows from Theorem~\ref{main-theorem} (and its proof) that the minimal non-nilpotent\ semigroup $S$ of type $U_{m}$ (with $3\leq m\leq 5$) is an epimorphic image of a semigroup of the type $\mathcal{M}^{0}(G,n,n;I_{n}) \cup_{\Psi}^{\Gamma} T$, with $G$ a nilpotent group and $T$ a two-generated nilpotent semigroup with a zero.
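To illustrate this construction, the multiplication above is easily implemented on a computer. The following Python sketch (the element encodings and helper names are chosen here purely for illustration) realizes the semigroup $\mathcal{F}_{7}=\mathcal{M}^0(\{e\},2,2;I_{2}) \cup \langle u \mid u^{2}=1 \rangle$ of type $U_{3}$ as $\mathcal{M}^{0}(\{e\},2,2;I_{2}) \cup_{id}^{\Gamma} T$ with $T=\{u,1,\theta\}$ and $\Gamma(u)=(1,2)$, and checks, via the words $\lambda_{m}$ and $\rho_{m}$ of Lemma~\ref{finite-nilpotent}, that it is not nilpotent.
\begin{verbatim}
# Minimal sketch of M^0(G,n,n;I_n) cup_Psi^Gamma T for F_7 (illustration only).
THETA = 'theta'                                  # common zero of M and T

# T = {u, 1, theta}: cyclic group of order 2 together with the zero.
T_MUL = {('u', 'u'): '1', ('u', '1'): 'u', ('1', 'u'): 'u', ('1', '1'): '1'}

# Gamma(t) as a partial map on {1, 2} (missing keys mean theta); Psi = id.
GAMMA = {'u': {1: 2, 2: 1}, '1': {1: 1, 2: 2}}
PSI = {'u': {1: 'e', 2: 'e'}, '1': {1: 'e', 2: 'e'}}

def gmul(g, h):                                  # trivial group G = {e}
    return 'e'

def mul(a, b):
    """Product in S = M^0({e},2,2;I_2) cup_id^Gamma T."""
    if a == THETA or b == THETA:
        return THETA
    if a[0] == 'M' and b[0] == 'M':              # Rees product, sandwich I_2
        _, g, i, j = a
        _, h, k, l = b
        return ('M', gmul(g, h), i, l) if j == k else THETA
    if a[0] == 'T' and b[0] == 'T':
        return ('T', T_MUL[(a[1], b[1])])
    if a[0] == 'T':                              # t (g;i,j)
        t, (_, g, i, j) = a[1], b
        if i not in GAMMA[t]:
            return THETA
        return ('M', gmul(PSI[t][i], g), GAMMA[t][i], j)
    _, g, i, j = a                               # (g;i,j) t
    t = b[1]
    for jp, image in GAMMA[t].items():
        if image == j:
            return ('M', gmul(g, PSI[t][jp]), i, jp)
    return THETA

def lam_rho(x, y, ws):
    """The words lambda_m and rho_m of Lemma finite-nilpotent."""
    lam, rho = x, y
    for w in ws:
        lam, rho = mul(mul(lam, w), rho), mul(mul(rho, w), lam)
    return lam, rho

# x = (e;1,2), y = (e;2,1), w_1 = u^2 = 1, w_2 = u.
x, y = ('M', 'e', 1, 2), ('M', 'e', 2, 1)
print(lam_rho(x, y, [('T', '1'), ('T', 'u')]) == (x, y))   # prints True
\end{verbatim}
The final line verifies that $\lambda_{2}(x,y,u^{2},u)=x$ and $\rho_{2}(x,y,u^{2},u)=y$ for the distinct nonzero elements $x=(e;1,2)$ and $y=(e;2,1)$, so that Lemma~\ref{finite-nilpotent} applies and $\mathcal{F}_{7}$ is indeed not nilpotent.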
\begin{cor} \label{main-cor} Every finite minimal non-nilpotent\ semigroup $S$ is an epimorphic image of one of the following semigroups: \begin{enumerate} \item a Schmidt group, \item $U_{1}=\{ e,f\}$ with $e^{2}=e$, $f^{2}=f$, $ef=f$ and $fe=e$, \item $U_{2}=\{ e,f\}$ with $e^{2}=e$, $f^{2}=f$, $ef=e$ and $fe=f$, \item $\mathcal{M}^{0}(G,2,2;I_{2}) \cup_{\Psi}^{\Gamma} T$ such that $T=\langle u \rangle \cup \{\theta\}$ with $\theta$ the zero of $S$, $u^{2^{k}}=1$ the identity of $T \backslash \{\theta\}$ (and of $S$) and $\Gamma (u)= (1,2)$. \item $\mathcal{M}^0(G,3,3;I_{3}) \cup_{\Psi}^{\Gamma} \langle w_1, w_2\rangle$, with $\Gamma(w_1)=(2,1,3,\theta)$ and $\Gamma(w_2)=(2,3,\theta)(1)$, $w_{2}w_{1}^{2}=w_{1}^{2}w_{2}=w_{1}^{3}=w_{2}w_{1}w_{2}= \theta$. \item $\mathcal{M}^0(G,n,n;I_{n}) \cup_{\Psi}^{\Gamma} \langle v_1, v_2\rangle$, with $$[\ldots, k, m, \ldots, k',m', \ldots]\sqsubseteq \Gamma(v_1), [\ldots, k, m', \ldots, k', m, \ldots] \sqsubseteq \Gamma(v_2)$$ for pairwise distinct numbers $k, k', m$ and $m'$ between $1$ and $n$, there do not exist distinct numbers $l_1$ and $l_2$ between $1$ and $n$ such that $(l_1,l_2)\subseteq \Gamma(x)$ for some $x\in \langle v_1, v_2 \rangle$ and there do not exist pairwise distinct numbers $o_1, o_2$ and $o_3$ between $1$ and $n$ such that $(o_2,o_1,o_3,\theta)\subseteq \Gamma(y_1)$, $(o_2,o_3,\theta)(o_1)\subseteq \Gamma(y_2)$ for some $y_1,y_2\in \langle v_1, v_2 \rangle$. \end{enumerate} \end{cor} {\it An example of a semigroup of type $U_4$ that is not minimal non-nilpotent.}\\ Consider the following semigroup $$S= \mathcal{M}^0(\{1_G,g \},3,3;I_{3})\cup^{\Gamma}_{id} \langle w,v \rangle$$ with $v^2=v^3$, $wv^2=wv$, $vw=v^2w$, $w^2=wvw=wv^2w$, $vw^{2}=w^{2}v=w^{3}=vwv= \theta$, $\Gamma(w) = (2,1,3,\theta)$ and $\Gamma(v) = (2,3,\theta)(1)$. Clearly, $\mathcal{M}^0(\{1_G\},3,3;I_{3})\cup^{\Gamma}_{id} \langle w,v \rangle$ is a proper subsemigroup of $S$. The latter is minimal non-nilpotent\ and thus $S$ is not minimal non-nilpotent. {\it An example of a finite minimal non-nilpotent\ semigroup of type $U_5$ with non-trivial maximal subgroups in $M$.}\\ Let $$S=\mathcal{M}^0(G,4,4;I_{4})\cup^{\Gamma}_{\Psi} \langle w,v \rangle$$ with $G=\{1_G,g\}$ a cyclic group of order 2, $w^{2}=v^{2}=wv=vw=\theta,$ $$\Gamma(w) = (4,1,\theta)(3,2,\theta) \mbox{ and } \Gamma(v) = (4,2,\theta)(3,1,\theta),$$ $$\Psi(w)(4)=\Psi(w)(3)=\Psi(v)(4)=1_G,\;\; \Psi(v)(3)=g \mbox{ and } \langle w,v\rangle=\{w,v,\theta\}.$$ Since $$\Gamma((1_{G};3,1))=\lambda_2(\Gamma((1_{G};3,1)),\Gamma((1_{G};4,2)),\Gamma(w),\Gamma(v))$$ and $$\Gamma((1_{G};4,2))=\rho_2(\Gamma((1_{G};3,1)),\Gamma((1_{G};4,2)),\Gamma(w),\Gamma(v)),$$ we obtain that $S$ is not nilpotent. Suppose that a subsemigroup $S'$ of $S$ is not nilpotent. By Lemma~\ref{finite-nilpotent}, there exists a positive integer $m$, distinct elements $x, y\in S'$ and elements $w_{1}, w_{2}, \ldots, w_{m}\in S'^1$ such that $x = \lambda_{m}(x, y, w_{1}, w_{2}, \ldots, w_{m})$ and $y = \rho_{m}(x, y, w_{1}, w_{2}, \ldots, w_{m})$. As $\langle w, v\rangle$ is nilpotent, $$\{x, y, w_{1}, w_{2}, \ldots, w_{m}\} \cap \mathcal{M}^0(G,4,4;I_{4}) \neq \emptyset $$ and because $\mathcal{M}^0(G,4,4;I_{4})$ is an ideal of $S$, $x$ and $y$ are nonzero elements in $\mathcal{M}^0(G,4,4;I_{4})$. Since $S'$ is not nilpotent and $\mathcal{M}^0(G,4,4;I_{4})$ is nilpotent, at least one element of the set $\{w_1, \ldots, w_m\}$ is not in $\mathcal{M}^0(G,4,4;I_{4})$.
As before, without loss of generality, we may suppose that $w_1 \not \in \mathcal{M}^0(G,4,4;I_{4})$. Write $x=(g_1 ;n_1,n_2)$ and $y=(g_2;n_3,n_4)$, for some $1\leq n_1,n_2,n_3,n_4 \leq 4$ and $g_1,g_2 \in G$. If $w_1 =w$ then $\{(n_1,n_2),(n_3,n_4)\}=\{(3,1),(4,2)\}$ and it can be easily verified that $w_2=v$. It also is easily verified that $\mathcal{M}^0(\{1_G\},4,4;I_{4})$ is a subsemigroup of $$\langle \Gamma((g_1 ;3,1)),\, \Gamma((g_2 ;4,2)),\, \Gamma(w),\, \Gamma(v)\rangle .$$ So, $\mathcal{M}^0(\{1_G\},4,4;I_{4}) \subseteq \Gamma(S')$. Therefore, for every pair $1 \leq \alpha, \beta \leq 4$, there exists an element $p \in S'$ such that $\Gamma(p)=(\beta,\alpha,\theta)$. As $\Gamma(w) = (4,1,\theta)(3,2,\theta)$ and $\Gamma(v) = (4,2,\theta)(3,1,\theta)$ and $\langle w, v\rangle= \{w,v,\theta\}$, we obtain that there exists an element $h\in\{1_G,g\}$ such that $p=(h;\alpha,\beta)$. If $S \neq S'$ then there exists an element $(k;i,j) \in \mathcal{M}^0(G,4,4;I_{4})$ such that $(k;i,j)\not\in S'$. Now suppose that $(n_1,n_2)=(3,1)$ and $(n_3,n_4)=(4,2)$. Then $$v(g_1 ;3,1)v=(gg_1g ;1,3), \; \; w(g_2 ;4,2)w=(g_2 ;1,3), $$ $$v(g_1 ;3,1)w=(gg_1 ;1,4), \; \; w(g_2 ;4,2)v=(g_2 ;1,4). $$ Since $g_1,g_2 \in\{1_G,g\}$ we get that both $(1_G;1,3)$ and $(g;1,3)$, or both $(1_G;1,4)$ and $(g;1,4)$ are in $S'$. Suppose that both $(1_G;1,3)$ and $(g;1,3)$ are in $S'$. As proved above, there exist elements $k_{1},k_{2}\in G$ such that $(k_1;i,1),\, (k_2;3,j)\in S'$. Then $(k_1;i,1)(1_G;1,3)(k_2;3,j),\; (k_1;i,1)(g;1,3)(k_2;$ $3,j)\in S'$ and thus $(k_1k_2;i,j),\; (k_1gk_2;i,j)\in S'$. Since $k,k_1, k_2 \in \{1_G,g\}$ we get that $(k;i,j)$ is in $S'$, a contradiction. So, $S=S'$ in this case. Similarly, $(1_G;1,4),\; (g;1,4)\in S'$ leads to $S=S'$. Hence, we have proved that $S=S'$ if $w_{1}=w$. If $w_1=v$, then one proves in an analogous manner that $S=S'$. So, it follows that $S$ is a minimal non-nilpotent\ semigroup of type $U_{5}$. {\it An infinite class of finite minimal non-nilpotent\ semigroups of type $U_5$.}\\ Let $n\geq 5$ and consider $\mathcal{M}^{0}(\{ e \},n,n;I_{n})$ as a subsemigroup of the full transformation semigroup (see (\ref{elements-of-ideal})) on $\{ 1,\ldots, n\} \cup \{ \theta \}$, i.e. we identify $(e;i,j)$ with the cycle $(j,i,\theta )$ if $i\neq j$ and $(e;i,i)$ with the permutation $(i)$. Let $$Y_{n} = \mathcal{M}^0(\{e\},n,n;I_{n})\cup^{\Gamma}_{id} \langle w,v \rangle$$ and $$\Gamma(w)=(2,3,\theta)(4,1,\theta), \; \Gamma(v)=(2,1,\theta)(n,n-1, \ldots,5,4,3,\theta).$$ It can be easily verified that $$\Gamma(v^pw^q) = \Gamma(w^k) =\Gamma(v^l) = \theta \mbox{ for } p,q \geq 1,\; k \geq 2, \; l\geq n-2,$$ $$\Gamma(w^qv^p) = \theta \mbox{ for } q \geq 2, p \geq 1,$$ $$\Gamma(wv^p) = (p+4,1,\theta) \mbox{ for } n-4 \geq p \geq 1, \Gamma(wv^p) = \theta \mbox{ for } p > n-4 \mbox{ and}$$ $$\Gamma(awv^p) = \theta \mbox{ for } p \geq 1, a \in \langle w, v\rangle.$$ Because $\abs{\Gamma^{-1}(\theta )}=1$ we thus obtain that $v^pw^q = w^k =v^l = \theta$ for $p,q \geq 1, k \geq 2,\; l\geq n-2,\; w^qv^p = \theta$ for $q \geq 2, p \geq 1$ and $awv^p = \theta $ for $p \geq 1, a \in \langle w, v\rangle$. So, $$\langle w,v \rangle =\{ w,v,\ldots,v^{n-3},wv, \ldots, wv^{n-4}, \theta\}$$ and clearly $\langle w, v\rangle^n=\{\theta\}$. Therefore $\langle w, v\rangle$ is nilpotent. We claim that $Y_n$ is minimal non-nilpotent. To prove this, suppose that $Y$ is a subsemigroup of $Y_{n}$ that is not nilpotent. We need to prove that $Y=Y_{n}$. 
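Note first that $Y_{n}$ itself is indeed not nilpotent; for completeness we record one explicit verification. Taking $a=(e;2,1)$ and $b=(e;4,3)$, one computes under the above identification that $$awb=(e;2,3), \;\;\;\; bwa=(e;4,1),$$ $$(awb)\,v\,(bwa)=(e;2,1)=a, \;\;\;\; (bwa)\,v\,(awb)=(e;4,3)=b,$$ that is, $a=\lambda_{2}(a,b,w,v)$ and $b=\rho_{2}(a,b,w,v)$, so that $Y_{n}$ is not nilpotent by Lemma~\ref{finite-nilpotent}.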
As before, there exists a positive integer $m$, distinct elements $x, y\in \mathcal{M}^0(\{e\},n,n;I_{n})$ and elements $w_{1}, w_{2}, \ldots, w_{m}\in Y^1$ with $w_{1}\not\in \mathcal{M}^0(\{e\},n,n;I_{n})$ such that $x = \lambda_{m}(x, y, w_{1}, w_{2}, \ldots, w_{m})$, $y = \rho_{m}(x, y, w_{1}, w_{2}$ $, \ldots, w_{m})$. Write $x=(e;n_1,n_2)$, $y=(e;n_3,n_4)$ for some $1\leq n_1,n_2,n_3,n_4 \leq n$. Since $x \neq y$, $(n_1,n_2) \neq (n_3,n_4)$. Since $\Gamma(xw_1y)$ and $\Gamma(yw_1x)$ are nonzero, $\Gamma(w_1)(n_3)=n_2$ and $\Gamma(w_1)(n_1)=n_4$. Now as $\Gamma(wv^p) = (p+4,1,\theta) \mbox{ for } n-4 \geq p \geq 1$ and $(n_1,n_2)\neq (n_3,n_4)$, $w_1 \notin \{wv, \ldots, wv^{n-4}\}$. Similarly $w_2 \notin \{wv, \ldots, wv^{n-4}\}$. So, $w_{1}=w$ or $w_{1}=v^{k}$ for some $n-3\geq k\geq 1$. Suppose that $w_1 = v^k$ for some $1\leq k \leq n-3$. Since $\Gamma(xw_1y)$ and $\Gamma(yw_1x)$ are nonzero, $\Gamma(v^k)(n_3)$ $=n_2$ and $\Gamma(v^k)(n_1)=n_4$. Hence $\lambda_1=(e;n_1,n_4), \rho_1=(e;n_3,n_2)$. If $w_2=w$ then $\Gamma(w)(n_1)=n_2$ and $\Gamma(w)(n_3)=n_4$. Since $(n_1,n_2) \neq (n_3,n_4)$ and $\Gamma(w)=(2,3,\theta)(4,1,\theta)$, $n_1=4, n_2=1, n_3=2, n_4=3$ or $n_1=2, n_2=3, n_3=4, n_4=1$. Now as $\Gamma(v^k)(n_3)=n_2$, $\Gamma(v^k)(n_1)=n_4$ and $\Gamma(v^k)(2)=\theta$ for $k > 1$, $v^k=v$ and thus $Y=Y_n$. So we may assume that there exists $1\leq l\leq n$ such that $w_2=v^l$. Similarly we have $\Gamma(v^l)(n_1)=n_2$ and $\Gamma(v^l)(n_3)=n_4$. It can be easily verified that $n_4 = n_1 - k= n_3 -l , n_2= n_1 - l =n_3 - k$. Then $k-l=l-k$ and thus $k=l$. Hence $n_4=n_2, n_1=n_3$, a contradiction. Finally suppose that $w_1=w$. As $\Gamma(w_1)(n_3)=n_2$, $\Gamma(w_1)(n_1)=n_4$, $\Gamma(w)=(2,3,\theta)\, (4,1,\theta)$ and $(n_1,n_2) \neq (n_3,n_4)$, $\{x, y\} = \{(e;4,3),(e;2,1)\}$ and thus $\rho_1, \lambda_1 \in \{(e;2,3),(e;4,1)\}$. Now as $\Gamma(\rho_1w_2\lambda_1)$ and $\Gamma(\lambda_1w_2\rho_1)$ are nonzero, it follows that $(2,1,\theta)\subseteq \Gamma(w_2),(4,3,\theta)\subseteq \Gamma(w_2)$. Since $\Gamma(v^k)(2)=\theta$ for $k > 1$ and $\Gamma(w)(2)=3$, one then obtains that $w_2=v$. Therefore $Y=Y_{n}$. It follows that indeed $Y_n$ is minimal non-nilpotent. \end{document}
\begin{document} \title[Doubly-dressed states for near-field trapping and subwavelength lattice structuration]{Doubly-dressed states for near-field trapping and subwavelength lattice structuration} \author{Maxime Bellouvet$^{1}$, Caroline Busquet$^{1,2}$, Jinyi Zhang$^{1}$, Philippe Lalanne$^{1}$, Philippe Bouyer$^{1}$ and Simon Bernon$^{1}$\footnote{http://www.coldatomsbordeaux.org/aufrons}} \address{$^1$ LP2N, IOGS, CNRS and Universit\'e de Bordeaux, F-33400 Talence, France} \address{$^2$ Muquans, Institut d'Optique, F-33400 Talence, France} \ead{[email protected]} \begin{indented} \item[]January 2018 \end{indented} \begin{abstract} We propose a scheme to tailor nanostructured trapping potentials for ultracold atoms. Our trapping scheme combines an engineered extension of repulsive optical dipole forces at short distances and attractive Casimir-Polder forces at long distances between an atom and a nanostructured surface. This extended dipole force takes advantage of excited-state dressing by plasmonically-enhanced intensity to doubly dress the ground state and create a strongly repulsive potential with spatially tunable characteristics. In this work, we show that, under realistic experimental conditions, this method can be used to trap Rubidium atoms close to surfaces (tens of nanometers) or to realize nanostructured lattices with subwavelength periods. The influence of the various loss and heating mechanisms in such traps is characterized. As an example we present a near-field optical lattice with a 100~nm period and study the tunability of lattice and trapping depths. Such lattices can enhance energy scales with interesting perspectives for the simulation of strongly-correlated physics. Our method can be extended to other atomic species and to other near-field hybrid systems where a strong atom-light interaction can be expected. \end{abstract} \section{Introduction} Various hybrid quantum systems have been developed in recent years with the idea that the combination could exploit the best properties of the individual systems and overcome their individual limitations \cite{Kurizki15}. In this spirit, intensive research has been carried out on various combinations, such as NV centers or cold atoms coupled to superconducting circuit processors for quantum memory applications \cite{Amsuss11,Kubo10,Diniz11,Bernon13}, magnetic sensing \cite{Weiss15,Bienfait16} or hybrid combinations involving macroscopic quantum systems such as mechanical oscillators \cite{Arcizet11,Kolkowitz12,Fang16,Vochezer18}. In this family of coupled quantum systems, ultracold atoms are special because of their very well-controlled intrinsic properties due to their strong decoupling from the environment and because they exist in gaseous form while the other systems are predominantly solid-state devices. Transporting atoms close to a quantum counterpart is crucial for applications in quantum information and quantum simulation \cite{bloch12}. By increasing the coupling strength with, for instance, microwave superconducting photons \cite{Henschel17,Hattermann17}, optical photons \cite{Pappa15,Goban14,Sayrin15,Thompson13}, slow light \cite{Douglas15,Zang16} or plasmons \cite{stehle14}, we can devise efficient quantum memories or engineer surface-mediated atom-atom interactions \cite{gullans12,hood16,gonzalez15}.
Tailoring nanostructured optical lattice quantum simulators with tunable properties (energy scale, geometry, surface-mediated long-range interaction) will allow one to investigate new regimes in many-body physics. In this quest for exotic quantum phases (e.g., quantum antiferromagnetism \cite{RevModPhys.80.885}), the reduction of thermal entropy is a crucial challenge \cite{Bernier09,Hulet15,cheuk15,greif16,Mazurenko17}. The price to pay for such low temperature and entropy is a long thermalization time that will ultimately limit the experimental realization. Miniaturization of the lattice spacing is a promising solution to speed up this dynamics. There is therefore a wide effort in the community to push the optical trapping technology to the nanometer scale \cite{stehle14,gullans12,gonzalez15,stehle11,Lukin13,chang14} or to generate subwavelength lattice spacing using multi-photon optical transitions \cite{PhysRevA.66.045402,PhysRevA.74.063622}, time-periodic modulation \cite{PhysRevLett.115.140401} or superconducting vortices \cite{PhysRevLett.111.145304}. Engineering cold atom hybrids offers promising perspectives but requires interfacing quantum systems in different states of matter at very short distances, which still remains an experimental challenge. In this paper, we present a novel trapping method that makes it possible to trap and manipulate cold atoms below 100 nm from surfaces, where the electromagnetic environment of the atoms can be engineered. We show that our method can also be used to create subwavelength potentials with controllable parameters. Our method, which we refer to as the Doubly-Dressed State (DDS) scheme, is inspired by \cite{gonzalez15,chang14} where the authors take advantage of vacuum forces and material engineering to create near-field traps. Here, two off-resonant laser light fields interacting with the structured surface overcome the Casimir-Polder (CP) attraction at short distances \cite{casimir48}. The combination of the DDS repulsion and CP attraction forms a trap that follows the shape of the surface. A modulation of the CP potential can be generated with a periodically-structured surface, thus resulting in a lattice potential for cold atoms. In particular we study here the generation of a 1D lattice by periodically-spaced dielectric ridges in the spirit of \cite{gonzalez15}. Another scheme for the realization of surface traps with subwavelength optical lattices is published back-to-back \cite{Mildner18}. In section \ref{sec:level1} we present the general arguments of the trapping method which combines optical and vacuum forces. In section \ref{sec:level2} we apply the method to a stratified surface and characterize the trapping properties. We then extend it to a one-dimensional nanostructured surface where longitudinal and transverse trapping in a subwavelength lattice is demonstrated. In section \ref{sec:level3} we discuss the advantages and limitations of this method, and we finally conclude in section \ref{sec:level4} with perspectives. \section{\label{sec:level1}Trapping method} In the absence of an external field, an atom in vacuum close to a surface interacts with it because of quantum charge and field fluctuations, also known as the Casimir-Polder (CP) interaction \cite{casimir48}. It can be calculated by treating the atom as a fluctuating induced dipole with moment $\mathbf{d}$, which creates an electromagnetic field that is reflected by the surface before interacting back with the dipole.
This CP interaction is therefore characterized by an energy $U$ proportional to the atomic polarizability $\alpha$ and to the scattering Green tensor $\mathbf{G}^{(1)}(\mathbf{r},\mathbf{r},\omega)$. The latter describes the round-trip propagation (between the atom and the surface) of an electric field at frequency $\omega$ initially generated at the position of the atom $\mathbf{r}$. It contains all the optical properties of the surface. The CP force derived from the energy between two electric or paramagnetic objects is always attractive for an atom in the ground state. \begin{figure} \caption{\textbf{Trapping method.} \label{fig1} \end{figure} We consider here a semi-infinite dielectric surface with a dielectric-vacuum interface at $z=0$. Fig.\ref{fig1}\textbf{(a)} sketches the energies of three internal states of a $^{87}$Rb atom as modulated by the vacuum forces only. In that case, the potentials are attractive and drag the atoms to the surface. We now consider a laser field with an intensity profile decaying from the surface as shown in Fig.\ref{fig1}\textbf{(b)}, which dresses the $5P_{3/2}$ state. This laser is tuned a few hundred GHz to the blue of the $5P_{3/2}$ to $4D_{5/2}$ resonance. The resulting AC Stark shift induces a strong repulsive potential for the $5P$ state while barely affecting the $5S_{1/2}$ state. The $5S$-$5P$ transition frequency $\omega_{0}(z)$ therefore depends on the distance $z$ from the surface. Double dressing is achieved by using a second laser at frequency $\omega_{L,780}$ tuned close to the free-space $5S-5P$ transition $\omega_{0,vac}$. The resulting AC Stark shift on the $5S$ state is governed by the spatially-dependent detuning $\Delta(z)=\omega_{L,780}-\omega_{0}(z)$ (see Fig.\ref{fig1}\textbf{(c)}), which cancels at the position $z_b$. This position directly depends on the laser frequency $\omega_{L,780}$. For positions $z>z_b$, the detuning $\Delta(z)$ is positive (blue) and the doubly-dressed potential expels the atoms away from the surface. As depicted in Fig.\ref{fig1}\textbf{(c)}, a trap is then formed by the contribution of the attractive CP and repulsive doubly-dressed state potentials. The trapping potential is calculated by integrating the time derivative of the atomic momentum $\hat{p}_A$ over the trajectory of the atom \cite{Dalibard85}: \begin{equation} U(z)=\int_\infty^z dz' \left\langle\frac{d\hat{p}_A(z')}{dt}\right\rangle =-\int_\infty^z dz' \left[\rho_{ee}(z')\frac{dU_{5P}}{dz'}+\rho_{gg}(z')\frac{dU_{5S}}{dz'}\right] \end{equation} where $\rho_{ee}$ (resp. $\rho_{gg}$) is the population of the $5P$ (resp. $5S$) state obtained by solving the optical Bloch equations. This expression for the total potential implies that the excited- and ground-state contributions (state population $\times$ energy gradient) are equal and opposite at the trap position $z_t$: $\rho_{ee}(z_t)\left.\frac{dU_{5P}}{dz'}\right|_{z_t}=-\rho_{gg}(z_t)\left.\frac{dU_{5S}}{dz'}\right|_{z_t}$. \section{\label{sec:level2}Results} \subsection{Case of a planar and stratified surface} \label{stratPart} As shown later, a key ingredient of the DDS trapping method is the amplitude of the excited-state energy shift. As explained in section \ref{sec:level1}, the decaying profile of the 1529nm-laser intensity directly imprints the energy shift of the 5P state. The spatial variation (gradient) of the intensity must be as strong as possible to create a barrier sufficient to trap the atoms. 
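As a rough, purely illustrative numerical sketch of how such a trap emerges from the balance of the population-weighted potential gradients, the following Python snippet (not the code used for the figures of this paper; the $-C_3/z^3$ ground-state potential, the single-exponential $5P$ shift and the two-level steady-state population are all simplified placeholders) integrates the weighted gradients over $z$ and locates the resulting minimum. The sign convention is chosen such that, for a pure ground-state atom, the total potential reduces to $U_{5S}$.
\begin{verbatim}
# Toy sketch (not the authors' code): shape of the doubly-dressed trap obtained by
# integrating the population-weighted potential gradients.  All input profiles are
# simplified placeholders.
import numpy as np

h = 6.626e-34
hbar = h / (2 * np.pi)
z = np.linspace(20e-9, 400e-9, 8000)          # atom-surface distance (m)

C3 = 6.6e-49                                  # toy CP coefficient (J m^3)
U_5S = -C3 / z**3                             # attractive ground-state potential
U_5P = h * 31e9 * np.exp(-z / 55e-9)          # repulsive, plasmon-dressed 5P shift

Delta0 = 2 * np.pi * 20e9                     # toy free-space detuning of the 780 nm laser
Omega = 2 * np.pi * 132e6                     # Rabi frequency (rad/s)
Gamma = 2 * np.pi * 6.07e6                    # 5P_3/2 linewidth (rad/s)
Delta = Delta0 - U_5P / hbar                  # spatially dependent detuning Delta(z)
rho_ee = (Omega**2 / 4) / (Delta**2 + Gamma**2 / 4 + Omega**2 / 2)
rho_gg = 1.0 - rho_ee

# total potential: cumulative (trapezoidal) integral of the weighted gradients,
# referenced so that U(z -> far end of the grid) = 0
integrand = rho_ee * np.gradient(U_5P, z) + rho_gg * np.gradient(U_5S, z)
U = np.concatenate(([0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))))
U -= U[-1]

i = np.argmin(U)
print("toy trap position: %.1f nm, toy depth: %.1f MHz" % (z[i] * 1e9, -U[i] / h / 1e6))
\end{verbatim}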
To create an enhanced exponentially-decaying field in the vacuum, we take advantage of the Surface Plasmon Resonance (SPR) that occurs at a metal-dielectric interface of a stratified surface when the angle of incidence is $\theta_{SPR}=\arcsin \left(n_{eff}/n_i\right)$, where $n_i$ is the refractive index in the incident medium and $n_{eff}$ is the effective index of the Surface Plasmon Polariton (SPP) mode. \begin{figure} \caption{\textbf{Trapping atoms in the near field.} \label{fig2} \end{figure} The surface geometry studied in this section is given in Fig.\ref{fig2}\textbf{(a)}. The geometrical parameters of the stratified surface are chosen by maximizing the 1529nm-intensity decay from 50~nm to 100~nm as shown in Fig.\ref{fig2}\textbf{(b)}. Given the optimization, we take a stratified structure composed of a dielectric layer ($SiO_2$) with a thickness of 158~nm and a 41~nm layer of gold deposited on a $Si$ substrate. The laser intensities are determined using the transfer-matrix method \cite{yeh98} implemented in the Reticolo software \cite{reticolo}. The CP potentials are calculated using the multipolar coupling approach \cite{Buhmann04}. Thermal effects are neglected considering the distances at which the atoms are trapped ($z_A \ll \lambda_T=\hbar c/k_BT \simeq 7.6~\mu m$, $z_A$ being the atom position and $\lambda_T$ the thermal wavelength). For an atom in a state $n$ ($n\equiv 5S_{1/2}$ or $n\equiv 5P_{3/2}$), the interaction with the stratified surface is described by a potential that can be split into two terms: \begin{equation} U_{n,ss}^{CP}(z_A)=U_{n,ss}^{nres}(z_A)+U_{n,ss}^{res}(z_A), \label{atom_stratifiedSurface_interaction} \end{equation} where \begin{equation} U_{n,ss}^{nres}(z_A)=\frac{\hbar\mu_0}{2\pi}\int_0^\infty d\xi \xi^2 \Tr\left[\boldsymbol{\alpha}^{(n)}(i\xi)\cdot \mathbf{G}^{(1)}(z_{A},z_A,i\xi)\right] \label{CPPotentialsNResPart} \end{equation} is the non-resonant term. The second term, named the resonant term, is non-zero only if the atom is excited ($5P$) and is given by \begin{equation} U_{n,ss}^{res}(z_A)=-\mu_0\sum_{m<n}\omega_{nm}^2 \mathbf{d}_{mn} \cdot \mathrm{Re}\,\mathbf{G}^{(1)}(z_{A},z_{A},\omega_{nm})\cdot\mathbf{d}_{nm}. \label{CPPotentialsResPart} \end{equation} By considering a $^{87}$Rb atom with an isotropic polarizability $\boldsymbol{\alpha}^{(n)}(i\xi)=\alpha^{(n)}(i\xi)\mathds{1}$, these two expressions can be written as \begin{equation} \fl U_{n,ss}^{nres}(z_A)=\frac{\hbar\mu_0}{8\pi^2}\int_0^\infty \xi^2d\xi\alpha_{Rb}^{(n)}(i\xi)\int_0^\infty dk^{\parallel}\frac{k^{\parallel}}{\kappa^\perp}e^{-2\kappa^\perp z_A}\left\{r_{s}(k^{\parallel},i\xi)+\left(1-\frac{2\kappa^\perp{}^2c^2}{\xi^2}\right)r_{p}(k^{\parallel},i\xi)\right\} \label{atom_stratifiedSurface_interaction_nresTermV2} \end{equation} and \begin{eqnarray} \fl U_{n,ss}^{res}(z_A)=&\frac{\mu_0}{12\pi}\omega_{0,vac}^{2}|\mathbf{d}_{0}|^2\int_0^{\omega_{0}/c} dk^\perp\left\{\mathrm{Im}\left(e^{2ik^\perp z_A}r_{s}\right)+\left(1-\frac{2k^\perp{}^2c^2}{\omega_{0,vac}^{2}}\right)\mathrm{Im}\left(e^{2ik^\perp z_A}r_{p}\right)\right\}\nonumber\\ &-\frac{\mu_0}{12\pi}\omega_{0,vac}^{2}|\mathbf{d}_{0}|^2\int_0^{\infty} d\kappa^\perp e^{-2\kappa^\perp z_A}\left\{\mathrm{Re}\left(r_{s}\right)+\left(1+\frac{2\kappa^\perp{}^2c^2}{\omega_{0,vac}^{2}}\right)\mathrm{Re}\left(r_{p}\right)\right\}. 
\label{atom_stratifiedSurface_interaction_resTermV2} \end{eqnarray} For numerical reasons, the integrations are performed over imaginary frequencies $\xi$ ($\omega=i\xi$) for the non-resonant term (\ref{atom_stratifiedSurface_interaction_nresTermV2}) and over imaginary wavevectors $\kappa$ ($k=i\kappa$) for the resonant term (\ref{atom_stratifiedSurface_interaction_resTermV2}). We define the parallel (in-plane) and perpendicular parts of the wavevector as follows: $\mathbf{k}= \mathbf{k}^\parallel+\mathbf{k}^\perp$ where $\mathbf{k}^\parallel=k_x\mathbf{\hat{x}}+k_y\mathbf{\hat{y}}$ and $\mathbf{k}^\perp=k^\perp\mathbf{\hat{z}}$. $r_{s}$ (resp. $r_{p}$) is the amplitude reflection coefficient of the stratified surface for $s$ (resp. $p$) polarization, defined by the electric (resp. magnetic) field being transverse to the plane of incidence. $|\mathbf{d}_{0}|$ and $\omega_{0,vac}$ are respectively the electric dipole moment and the frequency of the $5S-5P$ transition ($|\mathbf{d}_{0}|=5.977ea_0$ and $\lambda_{0}=2\pi c/\omega_{0,vac}=780.241$~nm for $^{87}$Rb). In this planar and stratified example, the coefficients $r_s$ and $r_p$ are calculated with the Fresnel equations. The dynamic polarizability for an isotropic atom in a state $n$ at imaginary frequencies is given by \cite{Arora07} \begin{equation} \alpha^{(n)}(i\xi;J_n)=\frac{2}{3\hbar \left(2J_n+1\right)}\sum_{J_m} \frac{\omega_{mn}\left|\langle J_n ||\mathbf{d}||J_m\rangle\right|^2}{\left(\omega_{mn}^2+\xi^2\right)}, \label{polarizability} \end{equation} where the sum is carried out over all possible $n\rightarrow m$ transitions with a frequency $\omega_{mn}$ and a reduced dipole moment $\left|\langle J_n ||\mathbf{d}||J_m\rangle\right|$. $J$ denotes the total electronic angular momentum of the state. The calculated CP potentials are plotted in Fig.\ref{fig2}\textbf{(c)} and present an oscillation of the excited-state potential due to the possible spontaneous emission of a real photon. Both potentials are attractive for atoms at positions $z_A< 300$~nm from the surface. We now illuminate the rear side of the surface at the plasmonic angle $\theta_{SPR}\sim 18.6^\circ$ with laser light at a wavelength $\lambda_{L,1529}=1529.34$~nm, blue-detuned by 26 pm from the $5P-4D$ transition. As shown in Fig.\ref{fig2}\textbf{(d)}, using an incident power of 400 mW and a waist of 200 $\mu$m, we obtain a maximum SPP intensity at the dielectric-vacuum interface ($z=0$) of $610~\mu W/\mu m^2$ (an SPP enhancement factor of 100), corresponding to a maximal AC Stark shift of the $5P$ state of $\sim$ 31 GHz. This AC Stark shift is calculated using the strong-field approximation (fine-structure basis), which is justified by the SPP enhancement. The laser at $\lambda_{L,780}=780.201$~nm, \textit{i.e.} $\Delta_0=\omega_{L,780}-\omega_{0,vac}\simeq 2\pi\cdot30$~GHz, with a power of 200~mW in a 200~$\mu m$ waist, is retroreflected on the surface (see inset in Fig.\ref{fig2}\textbf{(e)}). The detuning $\Delta(z)$ is zero at the distance $z_b=24$~nm (resonant position). In this simulation we use a spatially homogeneous Rabi frequency $\Omega_R /2\pi \sim 132$~MHz and we obtain the total potential represented in Fig.\ref{fig2}\textbf{(e)}, with a trapping depth $U_0\simeq 13.5$~MHz and a trapping position $z_t\simeq 31$~nm. The geometrical properties of the trap, such as the position and the depth, can be adjusted with the powers of both lasers (1529 and 780 nm) and the frequency of the 780nm laser (Fig.\ref{fig3}). 
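As an aside, the polarizability expression above is easily evaluated numerically. The following Python sketch (ours, for illustration only) is truncated to the two dominant $5S\rightarrow 5P$ transitions of $^{87}$Rb; the wavelengths and reduced dipole moments are approximate tabulated values, not the full set of matrix elements used for the figures. It reproduces the well-known order of magnitude of the ground-state static polarizability.
\begin{verbatim}
# Minimal sketch: dynamic polarizability of the Rb 5S_1/2 state at imaginary
# frequencies, truncated to the D1 and D2 lines (approximate input values).
import numpy as np

hbar = 1.0545718e-34
e, a0, c = 1.602176634e-19, 5.29177e-11, 2.99792458e8

J_n = 0.5                                   # total angular momentum of 5S_1/2
# (wavelength in m, reduced dipole moment in units of e*a0) -- approximate values
transitions = [(780.241e-9, 5.977),         # 5S_1/2 -> 5P_3/2 (D2)
               (794.979e-9, 4.231)]         # 5S_1/2 -> 5P_1/2 (D1)

def alpha_5S(xi):
    """Scalar polarizability alpha(i*xi) in SI units (C^2 m^2 / J)."""
    total = 0.0
    for lam, d_red in transitions:
        w_mn = 2 * np.pi * c / lam          # transition frequency (rad/s)
        d2 = (d_red * e * a0)**2            # squared reduced dipole moment
        total += w_mn * d2 / (w_mn**2 + xi**2)
    return 2.0 / (3.0 * hbar * (2 * J_n + 1)) * total

au = 1.64878e-41                            # atomic unit of polarizability (SI)
print("alpha(0) ~ %.0f a.u. (dominant D-line contribution)" % (alpha_5S(0.0) / au))
\end{verbatim}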
\begin{figure} \caption{\textbf{Tuning the geometrical parameters.} \label{fig3} \end{figure} As in \cite{chang14}, we evaluate the performance of our configuration by considering the following four characteristic times: the exit time $\tau_{out}$, the tunneling time $\tau_t$, the anti-damping time $\tau_{ad}$ and the trap oscillation time $\tau_a$. We neglect the effect of transient heating \cite{chang14}. The exit time is due to scattering heating. It is defined as $\tau_{out}=|E_g|\left(\frac{dE}{dt}\right)^{-1}$ where $dE/dt = \hbar^2k_{eff}^2/(2m)\Gamma_{sc}$ is the rate of energy increase due to the absorption by the atom (of mass $m$) of photons with an effective momentum $\hbar k_{eff}$, at a scattering rate $\Gamma_{sc}=\Gamma_0\rho_{ee}=\tau_{sc}^{-1}$. $E_g$ is the ground-state binding energy, determined along with the wavefunction by using the imaginary-time propagation method. The tunneling time is due to the possibility for the atom to escape the trap by tunneling towards the surface. This characteristic time is calculated within the WKB approximation \cite{griffiths}. The anti-damping time corresponds to the time for the energy increase due to blue-transition heating to equal the energy $|E_g|$. In our case it is defined by $\tau_{ad}=\log\left[\left(|E_g|+\Delta p^2/(2m)\right)/\left(\Delta p^2/(2m)\right)\right]/(2\beta)$ where $\beta=-4\omega_r \frac{\Delta(z_t)\Gamma_{sc}}{\hbar k_0^2 \left|\Delta_c(z_t)\right|^4}\left.\frac{dU_{5P}}{dz}\right|_{z_t}\left.\frac{d\Delta}{dz}\right|_{z_t}$ is the anti-damping rate, with $\Delta_c(z)=\Delta(z)+i\Gamma_0/2$ and the recoil frequency $\omega_r$ \cite{chang14}. We denote by $\Delta z$ and $\Delta p$ the position and momentum standard deviations of the trap ground state, respectively. The trap oscillation time $\tau_a = m\Delta z/\Delta p$ represents a characteristic adiabatic time. \begin{figure} \caption{\textbf{Trap characteristics.} \label{fig4} \end{figure} In Fig.\ref{fig4}, we plot the trapping depth and barrier height as well as the characteristic evolution timescales for two trapping positions (corresponding to the dashed and full lines in Fig.\ref{fig3}\textbf{(a)}) as a function of the 1529nm-laser power. The trapping depth, which is mostly controlled by the ground-state ($5S$) CP potential, increases strongly as the trap-to-surface distance is reduced (Fig.\ref{fig4}\textbf{(b)}). As shown in Fig.\ref{fig4}\textbf{(c)},\textbf{(e)},\textbf{(f)}, all contributions to the trap lifetime $\tau=\left(\tau_{out}^{-1}+\tau_{ad}^{-1}+\tau_{t}^{-1}\right)^{-1}$ increase with the 1529nm-laser power. The 1529nm power ($P_{1529}$) is therefore a key parameter that tunes the energy gradient of the $5P$ state and optimizes the trap characteristics. Experimentally, the optimal (maximal) power will be set by the heat capacity and absorption of the surface at 1529nm. In the range of experimental parameters studied here and for atom-to-surface distances around $z_t = 50$nm, the exit and anti-damping times are the limiting trapping times, and they increase together with $P_{1529}$. At higher intensities and for both trapping positions presented here, anti-damping heating becomes the limiting lifetime effect. Within the WKB approximation, the tunneling time is exponentially dependent on the barrier height and width. 
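As a rough order-of-magnitude cross-check of the first and last of these timescales, the short Python estimate below evaluates $\tau_{out}$ and $\tau_a$ for an illustrative $\sim$1~MHz-deep trap with a weakly admixed excited state (all input values are placeholders chosen by us, not the parameters behind Fig.\ref{fig4}); the exponential sensitivity of the tunneling time to the barrier shape, discussed next, is not captured by such a zero-dimensional estimate.
\begin{verbatim}
# Rough estimate (illustrative inputs): exit time tau_out = |E_g| / (dE/dt) due to
# photon-scattering heating, and adiabatic timescale tau_a = m*dz/dp.
import numpy as np

hbar = 1.0545718e-34
h = 2 * np.pi * hbar
m = 1.443e-25                         # 87Rb mass (kg)
Gamma0 = 2 * np.pi * 6.07e6           # 5P_3/2 linewidth (rad/s)

rho_ee = 1e-4                         # excited-state population at the trap (toy)
E_g = -h * 1.0e6                      # ground-state binding energy (~1 MHz, toy)
k_eff = 2 * np.pi / 780e-9            # effective photon wavevector (1/m)
dz = 10e-9                            # position spread of the trap ground state (toy)
dp = hbar / (2 * dz)                  # minimum-uncertainty momentum spread

Gamma_sc = Gamma0 * rho_ee                          # photon scattering rate
dEdt = (hbar * k_eff)**2 / (2 * m) * Gamma_sc       # recoil heating power
tau_out = abs(E_g) / dEdt                           # time to heat out of the trap
tau_a = m * dz / dp                                 # characteristic adiabatic time

print("tau_out ~ %.0f ms, tau_a ~ %.1f us" % (tau_out * 1e3, tau_a * 1e6))
\end{verbatim}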
For similar barrier heights (Fig.\ref{fig4}\textbf{(a)}), the reduction of the barrier width at shorter distances has a strong impact on the tunneling time (a 20-order-of-magnitude decrease), as can be seen in Fig.\ref{fig4}\textbf{(e)}. At short atom-to-surface distances, tunneling through the barrier will therefore become the limiting lifetime effect. \subsection{Case of a 1D grating} In this section, we extend the previously described method to a nanostructured surface which modulates the CP potentials in the transverse direction (\textit{i.e.} the ($xOy$) plane), thereby tailoring the trapping potential. Hereafter, we restrict our example to a 1D periodic nanostructuration, sketched in Fig.\ref{fig5}\textbf{(a)}, to create a lattice potential with a spacing fixed at $\ell=100$~nm. Using the same optimization procedure as in the case of the planar stratified structure, we optimize the geometry by maximizing the intensity decay of the 1529nm laser in front of a ridge (Fig.\ref{fig5}\textbf{(b)}). This led to a dielectric thickness of 500~nm, a ridge width of 25~nm and a metal thickness of 10~nm (optimization not shown). \begin{figure} \caption{\textbf{1D lattice structuration.} \label{fig5} \end{figure} The CP potentials for a structured surface are calculated using equations (\ref{CPPotentialsNResPart}) and (\ref{CPPotentialsResPart}), in which the scattering Green tensor for a one-dimensional grating is given by \begin{equation} \fl \mathbf{G}^{(1)}(\mathbf{r}_A,\mathbf{r}_A,\omega)=\frac{i}{8\pi^2}\int_{-\pi/\ell}^{\pi/\ell} dk_x \sum_{m,n}e^{i\frac{2\pi}{\ell}(m-n)x_A}\int_{-\infty}^{\infty} dk_y \sum_{\sigma,\sigma'=s,p} \frac{e^{i(k_z^m+k_z^n)z_A}}{k_z^n}\mathbf{e}_{m+}^{\sigma}r_{mn}^{\sigma \sigma'}\mathbf{e}_{n-}^{\sigma'} \end{equation} where $\mathbf{r}_A=(x_A,0,z_A)$ is the atom position. $\mathbf{e}_{n-}^{\sigma'}$ is the unit vector describing the field that propagates along $-z$ in the $n^{th}$ order with a polarization $\sigma'$, and scatters out of the surface in the $m^{th}$ order with polarization $\sigma$, $r_{mn}^{\sigma \sigma'}$ being the reflection coefficient. The reflected field is then described by $\mathbf{e}_{m+}^{\sigma}$ and propagates along $+z$. The integration over $k_x$ is performed in the first Brillouin zone $[-\pi/\ell,+\pi/\ell]$, where $\ell$ is defined as the period of the grating. The amplitude reflection coefficients $r_{mn}^{\sigma \sigma'}$ at imaginary frequencies for the ground-state CP potential are calculated by implementing the Fourier modal method \cite{guerout13,contrerasreyes10,Buhmann:2015rka}, and those at real frequencies for the excited-state CP potential are calculated using the Reticolo software. The optical scheme here uses two laser beams at $1529$~nm: one shone from the back at frequency $\omega_{1529}^{Back}$, which excites SPPs (denoted '$Back$'), and one shone from the front at frequency $\omega_{1529}^{Back}+\delta$ (denoted '$Front$'). As shown in Fig.\ref{fig5}\textbf{(c)} top, the \textit{Back} laser does not create a decaying intensity profile in front of a groove. This is detrimental to trapping atoms at this position. The \textit{Front} laser produces the inverted profiles (Fig.\ref{fig5}\textbf{(c)} middle). Thanks to the slight frequency mismatch $\delta$, the interference term is averaged out in time and the total contribution to the potential is given by the sum of the two intensities. As shown in Fig.\ref{fig5}\textbf{(c)} bottom, the total intensity has a rapidly-decaying profile at any transverse position. 
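The time-averaging argument can be checked in a few lines: for a frequency mismatch $\delta$ (the value below is arbitrary), the beating cross term between the two fields averages to zero over times long compared with $1/\delta$, which are still very short compared with the atomic motion, so that only the sum of the two intensities enters the potential.
\begin{verbatim}
# Two-field time averaging: with a small frequency mismatch delta, the interference
# (cross) term averages out and the effective intensity is the sum of both intensities.
import numpy as np

delta = 2 * np.pi * 10e6                       # toy frequency mismatch (rad/s)
t = np.linspace(0.0, 100 * 2 * np.pi / delta, 20001)   # average over 100 beat periods

E_back, E_front = 1.0, np.sqrt(7.0)            # field amplitudes, alpha_1529 = 7 in power
I_inst = np.abs(E_back + E_front * np.exp(1j * delta * t))**2

print("time-averaged intensity :", np.mean(I_inst))
print("sum of both intensities :", E_back**2 + E_front**2)
\end{verbatim}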
We define $\alpha_{1529}=P_{1529}^{Front}/P_{1529}^{Back}$ as the power ratio between the \textit{Front} and \textit{Back} lasers. \begin{figure} \caption{\textbf{1D lattice geometry.} \label{fig6} \end{figure} The profile of the total trapping potential is plotted in Fig.\ref{fig6}\textbf{(a)} with longitudinal and transverse cross-sections in Fig.\ref{fig6}\textbf{(b)} and \textbf{(c)} respectively. Here, the parameters are $P_{1529}^{Back}=500$mW, $\alpha_{1529}=7$, $\Delta_0/2\pi = 16.5$GHz and $\Omega_R/2\pi \sim 42$MHz. In this configuration, the atoms are trapped in front of a ridge at $z_t\sim 29$~nm and the modulation of the ground-state CP potential creates lattice and trap depths of $\left\{U_{\ell},U_{0}\right\}=\left\{6.8,8.6\right\}$~MHz ($U_{\ell}\sim 118E_R$ with $E_R$ the lattice recoil energy) and a period of $100$~nm. The trapping frequencies are $\left\{\omega_x,\omega_z\right\}=2\pi\cdot\left\{6,32\right\}$~MHz. In Fig.\ref{fig6}\textbf{(d)}-\textbf{(f)}, the $\alpha_{1529}$ parameter is scanned to find the optimal intensity profiles that create a trap both above a ridge and a groove (Fig.\ref{fig6}\textbf{(d)},\textbf{(e)}).\footnote{We note that the existence of a lattice potential requires the formation of a trap in front of a ridge and the presence of a barrier in front of a groove.} The final lattice depth, which corresponds to the energy difference between the groove and ridge energy minima, is plotted in Fig.\ref{fig6}\textbf{(f)}. As shown in Fig.\ref{fig6}\textbf{(d)}-\textbf{(f)}, for $\alpha_{1529}>7$ the lattice depth is only weakly sensitive to an increase of the 1529nm-forward-laser power ($P_{1529}^{Front}$). To minimize the heating effect due to laser power, we choose $\alpha_{1529}=7$, which fixes the spatial profile of the light intensity while keeping its amplitude as a free parameter \textit{via} the power of the 1529nm-backward-laser beam ($P_{1529}^{Back}$). \begin{figure} \caption{ \textbf{Scanning the laser intensities.} \label{fig7} \end{figure} As demonstrated before, the $P_{1529}^{Back}$ parameter is key to increasing the atom lifetime in the DDS trap (Fig.\ref{fig4}). In Fig.\ref{fig7}\textbf{(a)} we show the dependence of the lattice depth on $P_{1529}^{Back}$. The 780nm-laser beam can also be used to change the trap parameters by controlling the coupling between the ground ($5S$) and excited ($5P$) states. As shown in Fig.\ref{fig7}\textbf{(b)}, by tuning both the frequency and the power of the 780nm laser, the one-dimensional lattice depth can be tuned from 0 to $\sim 35$ MHz (610 $E_R$). Although the contribution of these parameters to the lattice depth seems equivalent (symmetry), their influence on the trap-to-surface distance is very different (Fig.\ref{fig7}\textbf{(c)}). Indeed, a scan of $P_{780}$ at constant $\Delta_0$ has little effect on $z_t$ (dotted line) while a scan of $\Delta_0$ at constant $P_{780}$ shifts the trap from 17 to 60~nm (dashed line). \section{\label{sec:level3}Discussion} We have described and analyzed an experimental method to trap atoms in the near field of planar and nanostructured surfaces. The properties of the trap formed with this method can be tuned in lattice and trapping depths over relevant scales to simulate Hubbard Hamiltonians (0 - 600 $E_R$). The lattice parameters are experimentally adjusted by tuning the amplitude and frequency of laser beams, which are very well-controlled variables in the laboratory. The surface geometry studied here led to a one-dimensional lattice. 
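As a quick consistency check of these energy scales, the lattice recoil energy for a 100~nm period can be recomputed in a few lines of Python (only the period and the $^{87}$Rb mass enter); the quoted depths of 6.8~MHz and $\sim$35~MHz indeed correspond, to within rounding, to the $\sim$118 and $\sim$610 recoil energies mentioned above.
\begin{verbatim}
# Consistency check of the recoil-energy scaling: E_R = hbar^2 (pi/l)^2 / (2 m_Rb)
# for a lattice period l = 100 nm.
import numpy as np

hbar = 1.0545718e-34
h = 2 * np.pi * hbar
m = 1.443e-25                    # 87Rb mass (kg)
ell = 100e-9                     # lattice period (m)

E_R = hbar**2 * (np.pi / ell)**2 / (2 * m)
print("E_R = %.1f kHz" % (E_R / h / 1e3))
for U_MHz in (6.8, 35.0):
    print("U = %.1f MHz  ->  %.0f E_R" % (U_MHz, U_MHz * 1e6 * h / E_R))
\end{verbatim}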
The idea can be straightforwardly extended to generate 2D subwavelength lattices, at the cost of an increased numerical complexity for the computation of the optical response of the 2D structured surface. The algorithm for the 2D calculations based on Rigorous Coupled-Wave Analysis (RCWA) has already been developed \cite{moharam,Lalanne98} and is available in software such as Reticolo \cite{reticolo}. In \cite{chang14}, a similar trapping method has been proposed in which the repulsive potential of the 6P$_{3/2}$ state of Cesium was induced by a Drude material with a plasmon resonance frequency corresponding to the atomic transition and a Q-factor of $10^7$. This condition places extremely stringent requirements on material engineering, and a practical implementation seems hardly feasible. Our method is an interesting compromise where the excited-state properties are tuned by simple optical means, which are little influenced by the material properties. Regardless of experimental implementation issues, we point out that \cite{chang14} benefits from a $z^{-3}$ divergence of the potential close to the surface, which gives access to a diverging gradient of the excited-state potential close to the surface. On the contrary, our method relies on an exponentially-decaying intensity profile, whose slope is therefore upper-bounded by its value at the dielectric-vacuum interface. To trap atoms at distances below 15 nm, our method would require unrealistic optical powers or detunings so close to the atomic transition that it would become experimentally impractical. In our method, the choice of materials is guided by the following constraints: a high refractive index in order to generate strong surface plasmon waves, a very low absorption at 1529 nm so as to handle high optical powers, and the existence of nanofabrication protocols to generate the desired nanostructures. In this work we have shown the predicted potentials for $Si$ and $SiO_2$ materials, which have negligible absorption at 1529 nm\footnote{We could not find any measured absorption value at this wavelength in the literature, where the absorption is estimated to be negligible.} as well as a reasonably high refractive index ($n_{Si}=3.48$). In addition, the geometry of nanostructures that we have detailed in this work can be realized by Metal Assisted Etching (MAE). This technique allows for the creation of nanopillar arrays with dimensions down to a few tens of nanometers in diameter and up to a micron in height \cite{kara16}. MAE can be extended to other types of material. Further improvements can be made on the material choice to optimize the trapping geometry. We have seen in section \ref{stratPart} that the atomic lifetime in the trap was of the order of 100 ms for the chosen range of parameters. This time was influenced by the exponentially-decaying optical profile from the dielectric-vacuum interface. In the 1D grating case, this exponentially-decaying profile was found to be absent in front of a groove and was compensated by the forward laser. The latter reduces the slope in front of a ridge, which in turn limits the anti-damping time to 1 ms at best for our parameter range. Improvement of the lifetime in the lattice trap requires further investigation. In particular, the optimization of the geometry was carried out by maximizing the intensity gradient in front of a ridge, which is crucial to increase the lattice depth. 
In future work, it would be interesting to also consider the intensity gradient in front of a groove and to look at different angles of incidence for the laser beams, in order to improve the overall slope and consequently the lifetime of the atoms in this subwavelength lattice. Another possibility to improve the lifetime would be to add some laser tunability to the long-range CP attraction. For example, this can be achieved with a red-detuned evanescent field \cite{Mildner18}, which would help to increase the lattice depth at large distances and thus maintain long lifetimes. \section{\label{sec:level4}Conclusion and perspectives} In this work, we have presented the doubly-dressed state trapping method and applied it to trap atoms in the near field of various surfaces. We focused specifically on a planar stratified structure and on a 1D periodically-structured surface to generate periodic subwavelength potentials. For both cases, we have shown that the trap characteristics (trap-to-surface distance, depth and trapping frequencies) can be adjusted over a large range by simply changing the frequency and/or power of the two dressing lasers. In the case of the stratified medium, we simulated the different contributions to the trap loss and showed that a trap lifetime of $\sim$100 ms could be expected. For the case of 1D nanostructures, we calculated the modulation of the Casimir-Polder potential and took advantage of this modulation to generate a subwavelength lattice. We showed that the lattice and trap depths could be adjusted by controlling the dressing-laser parameters. We anticipate that our demonstration of a 100 nm lattice spacing can be extended to smaller sizes, as low as a few tens of nanometers, below which the optical power required to form a trap becomes impractical for applications. Our study was developed and adapted to $^{87}$Rb, but it can be straightforwardly applied to most alkali atoms, for which the excited-state transition lies in the range 1-1.6 $\mu$m where the absorption of $Si$ is low. Other materials such as $GaP$ could also be exploited, as GaP has a high refractive index and a very low absorption above 550 nm. In addition, materials such as $GaP$ could be used to include other optical functionalities on the surface, such as optical waveguides for readout and addressing or surface guided modes for optically-mediated long-range interactions \cite{hood16,gonzalez15}. The reduction of the lattice spacing is one key possibility to increase energy scales and to speed up dynamics in future cold-atom lattice experiments. In the present work, the numerical tools that have been used are well adapted to periodic structures, but the use of Casimir-Polder potential structuration goes beyond this periodic frame. By nanoshaping the surface, we can access a large variety of lattice geometries. The possibilities include the design of pillar defects to simulate the role of impurities in solid-state physics, the introduction of a controlled disorder, or confined optical modes \cite{douglas15}. The Doubly-Dressed State method can be directly applied to tapered nanofibers, hollow-core fibers and slow-light waveguides, for which near-field trapping will strongly increase the atom-photon coupling strength. \section*{Competing financial interests} The authors declare no competing financial interests. \section*{References} \end{document}
\begin{document} \title{\textbf{\titleOfPaper}} \vspace{1em} \hrule \begingroup \renewcommand{\thefootnote}{} \footnotetext[1]{ \textcopyright\ 2021. This manuscript version is made available under the \href{http://creativecommons.org/licenses/by-nc-nd/4.0/}{CC-BY-NC-ND 4.0 license}.\\ Published journal article: \textit{Journal of Sound and Vibration}, vol. 501, pp. 116040, 2021. \textsc{doi}: \href{https://doi.org/10.1016/j.jsv.2021.116040}{10.1016/j.jsv.2021.116040}.\\ The article can be accessed at \url{https://www.sciencedirect.com/science/article/pii/S0022460X21001127}. } \setcounter{footnote}{0} \renewcommand{\thefootnote}{\arabic{footnote}} \endgroup \FloatBarrier\Oldsection*{\small \centering Abstract} \label{sec*:abstract} \begingroup \small \leftskip1cm \rightskip\leftskip Semi-active vibration reduction techniques are defined as techniques in which controlled actions do not operate directly on the system's degrees of freedom (as in the case of active vibration control) but on the system's parameters, i.e., mass, damping, or stiffness. Cyclic variations in the stiffness of a structural system have been addressed in several previous studies as an effective semi-active vibration reduction method. The proposed applications of this idea, denoted here as \textit{stiffness modulation}, range from stepwise stiffness variations on a simple spring-mass system to continuous stiffness changes on rotor blades under aerodynamic loads. Semi-active systems are generally claimed to be energetically passive. However, changes in stiffness directly affect the elastic potential energy of the system and require external work under given conditions. In most cases, such injection or extraction of energy (performed by the device in charge of the stiffness variation and denoted as the \textit{pseudo-active} effect) usually coexists with the \textit{semi-active} effect, which operates by redistributing the potential energy within the system in such a way that it can be dissipated more efficiently. This work focuses on the discrimination between these two effects, which is absent in previous literature. A first study on their dependence on the process parameters of stiffness modulation is presented here, with emphasis on the spatial distribution of the stiffness changes. It is shown that localized changes tend to result in a larger semi-active share of vibration attenuation, whereas a spatially homogeneous stiffness modulation only generates a pseudo-active effect. \begin{enumerate}[label=\textbf{Keywords:},leftmargin=*] \leftskip1cm \rightskip\leftskip \item { Stiffness Modulation, Variable Stiffness, Switched Stiffness, Semi-Active Vibration Reduction, Vibration Control. } \end{enumerate} \endgroup \hrule \FloatBarrier\Oldsection{Introduction} \label{sec:1} \FloatBarrier\Oldsubsection{State of the art} \label{subsec:1.1} Structures and machines are, in general, capable of experiencing vibrations owing to the mutual interaction between elastic and inertial forces. In cases where vibrations are unwanted and the inherent damping of a given system is not sufficient to keep the decay time and resonance amplitude below acceptable limits, vibration suppression techniques come into play to modify the system's dynamics accordingly. A multiplicity of approaches for vibration suppression are known from the literature, which can be classified in different ways. 
The most commonly used classification subdivides vibration suppression approaches into the categories \textit{passive}, \textit{active}, and \textit{semi-active}. Passive methods are those that do not require actuators or other controlled devices. Active methods interact with the vibrating system by controlled actuators which directly operate on the system's degrees of freedom (DoF). Semi-active methods involve controlled devices that act on the parameters of the system \cite{Nitzsche2012}. This paper focuses on semi-active methods that modify the system's stiffness. Semi-active methods that employ stiffness changes to influence vibrations can be further classified according to two distinct principles. The first principle (\textit{adaptive stiffness}) uses variable stiffness to modify, in a quasi-static manner, the system's transfer function, thus improving its response to external disturbances. The second principle uses cyclical stiffness changes related to the vibration signal. Following the wording of Anusonti-Inthra and Gandhi \cite{Anusonti-Inthra2000}, it involves \textit{modulating} the stiffness of the system in real time and is therefore referred to here as \textit{stiffness modulation}. Adaptive stiffness systems can modify their resonance frequencies \cite{Greiner-Petter2014} which can thus be moved to values with a low spectral amplitude of the excitation. Stiffness adaptation can also be used to change the resonance frequency of tuned absorbers or dampers to match the excitation frequency of the main system \cite{Franchek1996}. Electromechanical stiffness adjustment of absorbers by means of shunted piezoceramic elements is covered in \cite{Davis2000}. In \cite{Lin2015}, a tuned mass damper is supplemented with a resettable variable stiffness device which adapts the system's stiffness and damping. Other concepts for adaptive devices include the use of magnets with adjustable mutual distance \cite{Sayyad2014}, conical springs \cite{Suzuki2013}, actuator-controlled variation of the angle between springs \cite{Nagarajaiah2005}, magnetorheological elastomers \cite{Komatsuzaki2015}, and variable-length pendula \cite{Pasala2014,Wang2020}. If the stiffness is changed in an actively controlled and cyclical manner in periods comparable to the period of the vibration under consideration (stiffness modulation), a significant reduction in the vibration amplitude can be achieved. Main contributions on this topic focus on building seismic isolation \cite{Xinghua2000,Nagarajaiah2006,Tan2004,Liu2006} and helicopter vibration suppression \cite{Anusonti-Inthra2000,Yong2004}. Studies on shock isolation are presented in \cite{Ledezma-Ramirez2011,Ledezma-Ramirez2012,Ledezma-Ramirez2014}. General purpose and basic studies can be found in \cite{Liu2008,Corr2001,Onoda1992} and \cite{Ramaratnam2006}. In the context of stiffness modulation, the logic that rules the stiffness changes as a function of vibration signals is called the \textit{control logic}. A first control logic concept (switched stiffness), adopted among others in \cite{Leitmann1994,Corr2001,Ramaratnam2006}, uses two possible values for the stiffness. At the extremal points of the vibration (maximum absolute value of the displacement and zero velocity), the stiffness is switched from the higher to the lower value, whereas at zero crossings the higher stiffness value is restored. 
In \cite{Corr2001}, the stiffness change is realized by piezoelectric actuators, which can be switched from an open-circuit state (high stiffness) to a short-circuit state (low stiffness). In \cite{Ramaratnam2006}, the stiffness of a spring-mass oscillator is varied using a motor-controlled arm, which changes the effective spring length (see \autoref{fig:01}a). Depending on the position of the arm, some coils can be blocked and rendered inactive, which results in increased stiffness. A similar control logic is adopted in \cite{Liu2006} and \cite{Onoda1992}. The first contribution deals with magnetorheological dampers, in which the damping is varied along with the stiffness. The second study investigates a tension-controlled string. The aforementioned studies on shock isolation \cite{Ledezma-Ramirez2011,Ledezma-Ramirez2012,Ledezma-Ramirez2014} also adopt the switched-stiffness control logic. \begin{figure} \caption{(a) Spring with adaptable active length according to \cite{Ramaratnam2006} \label{fig:01} \end{figure} In \cite{Xinghua2000}, the primary structure is connected by a hydraulic cylinder to an elastic brace. When the cylinder control valve is closed, the stiffness of the brace acts on the primary structure; by opening the valve, the value of the stiffness connected to the primary system drops to zero. In contrast to the switched-stiffness case, in which the low-stiffness state is maintained until the zero crossings, in \cite{Xinghua2000} the stiffness is reduced only for a short time around the extremal points of the vibration. The hardware concept presented in \cite{Tan2004} is virtually identical, except for the presence of additional passive dampers. The system is driven by an on-off controller. The ``Smart Spring'' presented in \cite{Yong2004} (see \autoref{fig:01}b) works according to a principle similar to that in \cite{Xinghua2000} and \cite{Tan2004}, since it adds a spring in a parallel configuration to the primary system. The coupling is realized by friction forces controlled by a piezoceramic actuator. Strictly speaking, this solution is also limited to two possible stiffness values (primary structure with or without the additional spring). Intermediate values of the normal force produced by the actuator generate stick-slip effects with piecewise linear force-displacement curves. The authors assign to these states intermediate stiffness values, which are linearly interpolated with respect to the electric voltage applied to the piezoceramic actuator. The voltage is varied according to an adaptive control law. The device presented in \cite{Ledezma-Ramirez2011,Ledezma-Ramirez2012,Ledezma-Ramirez2014} is similar to the Smart Spring, but without a friction interface. In \cite{Ledezma-Ramirez2014}, the main dissipation mechanism is explained as the result of the inelastic impact between the masses associated with the two degrees of freedom at the time of coupling (switching from low to high stiffness). In \cite{Ledezma-Ramirez2011}, the presence of a viscous damper is considered instead. A practical realization of the stiffness variation by means of magnets is addressed in \cite{Ledezma-Ramirez2012}. A similar setup is used in \cite{Tran2017} to study the effect of time delay on variable stiffness control. In \cite{Liu2008}, the serial arrangement of a spring and a Voigt element with a controllable damper is used. In the experiment, a magnetorheological damper is used for this purpose. 
For low values of the damping coefficient, a low-stiffness condition is reached, in which the stiffnesses of the spring and of the Voigt element act in series. If the controllable damper is set to high damping, then the Voigt element tends to be rigid and the stiffness of the system approaches the value of the spring (high-stiffness condition). An on-off control scheme is used to select between the two configurations. In \cite{Anusonti-Inthra2000}, an extensive study on the effect of cyclic variations of the root stiffness on the vibration of rotor blades is presented. No concept is provided for the hardware interface expected to produce stiffness variations, which are considered as given and processed numerically. The bending stiffness along the two axes and the torsional stiffness are varied independently and in a sinusoidal fashion. Different frequencies (as multiples of the rotational frequency) and phase lags are investigated. In \cite{Nagarajaiah2006}, a device similar to that of \cite{Nagarajaiah2005} is used for base isolation employing stiffness modulation. The device, with four springs in a rhombus configuration, is used to influence the stiffness of the system by changing the shape of the rhombus. The springs are stabilized against buckling by telescopic tubes, which also add friction forces to the system. Different time signals (sinusoidal, square, triangle, and random) are used to drive the stiffness-variable system. A final mention concerns a concept in which the stiffness variation device is not actively controlled but mechanically coupled with the vibrating system. In \cite{Anubi2013}, a secondary vibrating system consisting of a ``control mass,'' a restoring spring, and a damper is connected to the stiffness variation device of the main system. In a later work, the possibility of influencing the motion of the control mass by a magnetorheological damper is investigated \cite{Anubi2015}. In this last option, the parameters of a secondary vibrating system are modified, which in turn influences, through its motion, the parameters of the primary system. \FloatBarrier\Oldsubsection{Semi-active vs. pseudo-active} \label{subsec:1.2} As previously mentioned, vibration control techniques are defined as semi-active if they act on the system's parameters. Although a large number of contributions dealing with semi-active techniques provide neither a definition of the ``semi-active'' attribute nor a corresponding citation, the proposed solutions meet the classification criterion mentioned above. However, there is a relatively large number of contributions in which the ``semi-active'' attribute is related to energy considerations, which can lead to ambiguities. In some cases, semi-active measures are said to require less energy to operate \cite{Tan2004,Liu2005,Liu2006,Liu2008,Clark2000}. According to Xinghua \cite{Xinghua2000}, semi-active devices have extremely low power requirements. Suzuki and Abe \cite{Suzuki2013} went even further and asserted that a semi-active control system does not need a power source to suppress vibration, while Fisco and Adeli \cite{Fisco2011} claimed that it ``is normally operated by battery.'' More substantial statements deal with the capability to perform work on the system. According to Anubi and Crane \cite{Anubi2015}, semi-active devices are either dissipative or conservative. 
Similarly, Liu \cite{Liu2004} asserted that they ``can only dissipate energy'' and ``cannot put energy into the system.'' The second assertion can also be found in other studies \cite{Leitmann1993,Spencer1997,Spencer2003,Pasala2013}. Following this more restrictive definition, a vibration control device acting on the system's parameters but capable of performing work on the system cannot be classified as semi-active, and would have to be labeled as an active control device. A variable-stiffness device, however, mostly requires the ability to perform work on the system, since a change in the stiffness of a vibrating system modifies its elastic potential energy. Indeed, while several authors agree that semi-active measures, owing to the above-mentioned property of being unable to perform work on the system, cannot destabilize it \cite{Spencer2003,Pasala2013,Preumont2011}, in the special case of variable-stiffness devices it is recognized that instabilities can occur \cite{Winthrop2004,Corless1997}. Within this more restrictive definition, it remains unclear whether the vibration control device must not perform positive work on the system to be classified as semi-active, or if any kind of energy exchange (positive or negative) disqualifies it from this category. As vibration attenuation implies a reduction in the energy level, the case of negative work should be the rule and needs to be considered. Performing negative work implies loading the system with controlled forces, which requires, in turn, a proper actuator and energy supply. Therefore, it can be expected that the resulting cost, effort, and complexity primarily depend on the managed energy flow, and only secondarily on its sign. Based on these considerations, we refer to the most restrictive choice and define as (purely) \textit{semi-active} a vibration suppression effect that acts on the system's parameters (especially on the stiffness), and does not add or subtract energy through the stiffness variation device. If the stiffness variation device performs (positive or negative) work on the system, this effect is referred to here as \textit{pseudo-active}. Because no energy can flow through the stiffness variation device with the semi-active effect, this can only involve rearranging the vibration energy in the system so that it is better dissipated by internal damping. The semi-active effect is therefore quantified after subtracting the energy reduction due to internal damping, which would be present without stiffness modulation (\textit{passive} effect). \FloatBarrier\Oldsubsection{Motivation of this study} \label{subsec:1.3} Analysis of published work leads to the following considerations: \begin{itemize} \setlength{\itemsep}{-1pt} \item A large part of the considered contributions on stiffness modulation deals with isolators \cite{Nagarajaiah2006,Liu2006,Ledezma-Ramirez2011,Ledezma-Ramirez2012,Ledezma-Ramirez2014,Liu2005,Liu2004} or interfaces acting on the boundary conditions of the vibrating system \cite{Anusonti-Inthra2000,Yong2004}. Hence, none of these approaches considers the variation of the internal stiffness of the vibrating system. A second large group of studies involves investigations of principle on single-mass oscillators; practical applications or the extension to more complex structures are not discussed \cite{Leitmann1994,Corr2001,Ramaratnam2006,Tran2017}. Solutions involving stiffness changes of a structural system of practical relevance and complexity are treated in \cite{Xinghua2000,Tan2004}. 
\item All investigated practical implementations of variable stiffness involve lumped elements or forces (springs, dampers, hydraulic cylinders, piezoceramic stack actuators, magnets), which are primarily devised to influence a single degree of freedom of the system. \item Many solutions \cite{Greiner-Petter2014,Nagarajaiah2006,Tan2004,Liu2006,Yong2004,Corr2001,Anubi2013,Anubi2015} employ additional dissipative elements, such as dampers, friction interfaces, or resistors. This makes it difficult to judge the potential of mere stiffness modulation. \end{itemize} A need for dedicated investigations can be identified with a focus on the following points: \begin{itemize} \setlength{\itemsep}{-1pt} \item Stiffness modulation involving stiffness changes of the main load-carrying, vibration-prone system (and not of an external interface or isolator); \item Stiffness modulation involving distributed stiffness changes; \item Stiffness modulation that relies on the inherent damping properties of the vibrating system as dissipation mechanism and does not require additional damping elements. \end{itemize} With this long-term scenario in mind, the present work focuses on the distinction between the above defined semi-active and pseudo-active effects, and on identifying criteria to maximize the semi-active part of the vibration reduction achieved by stiffness modulation. A large semi-active part leads to a low power flow through the stiffness variation devices. These can be built, in turn, in a more compact way, and therefore be more easily integrated and/or distributed in the main load-carrying system. This paper presents a theoretical study on stiffness modulation using switching stiffness control logic. The stiffness changes are numerically imposed on the considered lumped-parameter systems without modeling a physical stiffness variation device. The analysis focuses on the above introduced energy discrimination, distinguishing between pseudo-active, (purely) semi-active, and passive effects. As an essential parameter of this analysis, the effect of the localization of stiffness changes is investigated. After an illustrative part based on spatial coordinates, the analysis proceeds in the modal space, and free as well as forced oscillations are studied. \FloatBarrier\Oldsection{Investigation on spring-mass oscillators} \label{sec:2} \FloatBarrier\Oldsubsection{Switching stiffness control logic} \label{subsec:2.1} In \cite{Leitmann1994}, the already introduced switching stiffness control logic was identified, for a single-DoF system, as the control logic that maximizes the energy extracted from the system. It can be formulated as follows: \begin{equation} k = \begin{cases} k_h, & \text{if}~~y(t)\dot{y}(t) \geq 0 \\ k_l, & \text{if}~~y(t)\dot{y}(t) < 0 \end{cases} \label{eq:01} \end{equation} where $ k_h $ and $ k_l $ are the high and low values of the system's stiffness, respectively, and $ y(t) $ is the vibration signal. To include systems with more than one degree of freedom, we specify the stiffness for any single stiffness parameter~$ j $ via a corresponding factor~$ \gamma_j $ \begin{equation} k_j = \begin{cases} \gamma_j k_{0j}, & \text{if}~~c(t)\dot{c}(t) \geq 0 \\ k_{0j}, & \text{if}~~c(t)\dot{c}(t) < 0 \end{cases} \label{eq:02} \end{equation} which scales the stiffness between low and high stiffness. The value $ c(t) $ is a properly chosen \textit{observation variable} of the vibrating system. 
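A direct transcription of this rule into code can be useful for later reference. The following Python sketch (function name and demonstration values are ours, not taken from any of the cited studies) implements the stiffness selection of the condition above for a set of base stiffness parameters $k_{0j}$ and scaling factors $\gamma_j$; the discrete-time criterion actually usable for real-time control is derived next.
\begin{verbatim}
# Sketch of the switching stiffness control logic: each stiffness parameter k_j is
# scaled by gamma_j while the observation variable moves away from zero
# (c * c_dot >= 0) and reset to its base value k_0j otherwise.
import numpy as np

def switched_stiffness(k0, gamma, c, c_dot):
    """Return the array of active stiffness values for observation state (c, c_dot)."""
    k0 = np.asarray(k0, dtype=float)
    gamma = np.asarray(gamma, dtype=float)
    return gamma * k0 if c * c_dot >= 0.0 else k0

# illustrative values: two stiffness parameters, only the first one is modulated
k0, gamma = [100.0, 40.0], [1.4, 1.0]
print(switched_stiffness(k0, gamma, c=0.01, c_dot=0.5))    # |c| growing   -> high stiffness
print(switched_stiffness(k0, gamma, c=0.01, c_dot=-0.5))   # |c| shrinking -> base stiffness
\end{verbatim}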
To obtain a criterion for real-time control, it can be observed that the upper part of condition~\autoref{eq:02} is verified when the absolute value of the observation variable increases, while the lower part is verified when it decreases. It can then be written in terms of two consecutive points in time $ t_i $ and $ t_{i+1} $ as \begin{equation} k_j = \begin{cases} \gamma_j k_{0j}, & \text{if}~~|c(t_{i+1})| \geq |c(t_{i})| \\ k_{0j}, & \text{if}~~|c(t_{i+1})| < |c(t_{i})| \end{cases} \label{eq:03} \end{equation} In the case in which the observation variable oscillates around zero with only one extremal point between two zeroes, the stiffness is increased directly after each zero crossing and is decreased directly after each extremal point. In the following, the stiffness decrease and increase points are labeled as $ e_i $ and $ z_i $, respectively. All following considerations are based on the switched-stiffness control logic presented here. \FloatBarrier\Oldsubsection{Coupleable two-DoF system} \label{subsec:2.2} In many systems studied in the literature (e.g., \cite{Yong2004} -- see \autoref{fig:01}b -- and \cite{Xinghua2000,Tan2004,Ledezma-Ramirez2011,Ledezma-Ramirez2012,Ledezma-Ramirez2014}) the stiffness change is achieved by coupling and decoupling the vibrating system with an additional stiffness in parallel. As a first illustrative example (see \autoref{fig:02}), such an option is studied here, where the secondary system also possesses its own mass. However, the mass of the secondary system is relatively small; therefore its effect on the dynamics of the primary system in the coupled state is of limited relevance. The two oscillators have the same damping coefficient ($d_1=d_2$), and the coupling connection is assumed to be massless and ideally stiff. A case is considered in which the masses $m_1$ and $m_2$ initially oscillate in a coupled configuration (see \autoref{fig:02}, Point~0 as the starting point for the combined oscillation, defined by $u_1=u_2$). After the first zero-crossing, the switching control logic \autoref{eq:01} is applied. At the point of maximum deflection of the two coupled springs, the system is decoupled (see \autoref{fig:02}, Point~1). As a result, the small spring begins to oscillate at a higher frequency with a rapidly decreasing amplitude (see $u_2$ curve in decoupled configuration) such that this oscillation can be assumed to be extinguished before the springs are coupled again at the zero crossing (see \autoref{fig:02}, Point~2). Through the coupling, the remaining kinetic energy of the system, stored in the first mass, is redistributed between the two masses. In the subsequent cycles, the process is repeated, resulting in the progressive decay of the amplitude of oscillator~1. \begin{figure} \caption{Switched stiffness approach on a system of two coupleable oscillators} \label{fig:02} \end{figure} Recalling \autoref{eq:01}, $ k $ corresponds in the present case to the stiffness acting on the first mass, i.e., $ k_l $ is the stiffness of the first spring and $ k_h $ is the stiffness of the parallel connection $ k_1+k_2 $. The observation variable $ c $ is given by the first mass displacement $ u_1 $. The example in \autoref{fig:02} shows that the vibration reduction of the overall system is significantly faster with cyclic coupling and decoupling of the springs (see graph in \autoref{fig:02}, \textit{switched coupling}) than with permanently coupled springs (see graph in \autoref{fig:02}, \textit{constant coupling}). 
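The behaviour just described can be reproduced with a few lines of code. The following Python sketch (all parameter values are illustrative placeholders, not those of \autoref{fig:02}) integrates the coupleable two-DoF system with the switching logic of \autoref{eq:01} applied to $u_1$: the oscillators are rigidly coupled while $|u_1|$ grows and decoupled otherwise; re-coupling at the zero crossing conserves linear momentum, and any residual displacement of the small mass is discarded when the rigid link closes (the discussion above assumes oscillator~2 has essentially rung down by then). The residual amplitude is compared with the analytical decay of the permanently coupled configuration.
\begin{verbatim}
# Illustrative time integration of the coupleable two-DoF system with switched coupling
# (toy parameter values; semi-implicit Euler scheme).
import numpy as np

m1, m2 = 1.0, 0.05            # primary and secondary masses (kg)      -- toy values
k1, k2 = 100.0, 40.0          # spring stiffnesses (N/m)               -- toy values
d1 = d2 = 0.2                 # equal viscous damping coefficients (Ns/m)
dt, T = 1e-4, 10.0

u1, v1 = 1.0, 0.0             # coupled start: displaced and at rest
u2, v2 = u1, v1
coupled = True
n_steps = int(T / dt)
u1_hist = np.empty(n_steps)

for i in range(n_steps):
    want_coupled = u1 * v1 >= 0.0            # switching logic: high stiffness (coupled)
                                             # while |u_1| grows, low stiffness otherwise
    if want_coupled and not coupled:         # re-coupling at the zero crossing of u_1:
        v1 = v2 = (m1 * v1 + m2 * v2) / (m1 + m2)   # momentum-conserving rigid junction
        u2 = u1                              # residual deflection of m_2 is discarded
    coupled = want_coupled

    if coupled:                              # rigid link: one shared degree of freedom
        a = -((k1 + k2) * u1 + (d1 + d2) * v1) / (m1 + m2)
        v1 += a * dt; u1 += v1 * dt
        u2, v2 = u1, v1
    else:                                    # two independent damped oscillators
        a1 = -(k1 * u1 + d1 * v1) / m1
        a2 = -(k2 * u2 + d2 * v2) / m2
        v1 += a1 * dt; u1 += v1 * dt
        v2 += a2 * dt; u2 += v2 * dt
    u1_hist[i] = u1

residual = np.abs(u1_hist[-int(1.0 / dt):]).max()   # |u_1| envelope over the last second
print("switched coupling : residual |u1| = %.4f" % residual)
print("constant coupling : residual |u1| = %.4f"
      % np.exp(-(d1 + d2) / (2 * (m1 + m2)) * T))
\end{verbatim}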
Damping has a much stronger effect on the high-frequency oscillator 2 than on the coupled state. The switched stiffness approach exploits this effect by repeatedly shifting energy to oscillator~2 to dissipate it more effectively. \FloatBarrier\Oldsubsection{Single-DoF system} \label{subsec:2.3} Let us now consider the case of an undamped single-DoF system with a constant mass $ m = m_1 $ and variable stiffness according to \autoref{eq:01} with \begin{equation} k_l = k_1, \quad k_h = k_1 + k_2 \label{eq:04} \end{equation} By changing the stiffness along a square wave as shown in \autoref{fig:03} (dashed curve), nearly the same decay curve (see \autoref{fig:03}, green curve) as in the first example (see \autoref{fig:03}, black curve) can be obtained. There is only a slight difference owing to the invariable mass $ m $ and the absence of damping. Methodologically, however, the two examples differ essentially from each other: the energy that was transferred to another passive subsystem (and then dissipated) in the first case is now extracted from the system at the points of stiffness reduction. The stiffness variation device performs negative work on the system. This negative work (called \textit{extracted energy} in the following) is equal to the variation in the potential energy \begin{equation} \Delta U = \dfrac{1}{2} \Delta k u^2 \label{eq:05} \end{equation} which results from the stiffness change $ \Delta k $ at displacement $ u $. Hence, in the first case, the vibration-reducing effect is purely semi-active, as defined in \autoref{subsec:1.2}; in the second case, the pseudo-active effect is at work. Generally, in a single-DoF system, semi-active vibration reduction is not possible because energy transfer to another degree of freedom cannot occur; hence, only the pseudo-active effect is present. \begin{figure} \caption{Single-DoF oscillator compared with the coupleable two-DoF system in \autoref{fig:02} \label{fig:03} \end{figure} \FloatBarrier\Oldsubsection{Serial two-DoF system} \label{subsec:2.4} Now the example of a serial two-DoF spring-mass oscillator with variable stiffness is considered. The system (\autoref{fig:04}) is appropriate for modeling the device in \cite{Ramaratnam2006}, which, as explained above, realizes two different stiffness values by changing the effective length of a spring. The first spring (with stiffness $ k_1 $) in \autoref{fig:04} represents the portion of the spring used in \cite{Ramaratnam2006}, which can be rendered inactive; the second spring (with stiffness $ k_2 $) represents the free portion. The mass $ m_1 $, which is significantly smaller than $ m_2 $, represents the mass of the physical spring. Mass- and stiffness-proportional damping is applied here with the same coefficients for both subsystems. To directly reproduce the experiment presented in \cite{Ramaratnam2006}, $ k_1 $ must change from a given finite value to infinity, while $ k_2 $ remains at a constant value. An alternative option, which reproduces the stiffness variation effect on the motion of mass $ m_2 $ by scaling both stiffness values, is presented later. 
We define a low-stiffness state where \begin{equation} k_1 = k_{01}, \quad k_2 = k_{02} \label{eq:06} \end{equation} In the high-stiffness state, the spring constants are scaled as follows: \begin{equation} k_1 = \gamma_1 k_{01}, \quad k_2 = \gamma_2 k_{02} \label{eq:07} \end{equation} As will be shown later, owing to the relatively low value of mass $ m_1 $, the dynamics of the mass $ m_2 $ is essentially affected by the stiffness $ k $ of the serial arrangement of the two springs \begin{equation} k = \dfrac{k_1 k_2}{k_1 + k_2} \label{eq:08} \end{equation} The low- and high-stiffness states are, respectively, \begin{equation} k_l = \dfrac{k_{01} k_{02}} {k_{01} + k_{02}}, \quad k_h = \dfrac{\gamma_1 \gamma_2 k_{01} k_{02}} {\gamma_1 k_{01} + \gamma_2 k_{02}} \label{eq:09} \end{equation} The aforementioned option to reproduce the experiment in \cite{Ramaratnam2006} (hereafter referred to as the \textit{local} case) requires \begin{equation} \gamma_1 \rightarrow \infty, \quad \gamma_2 = 1 \label{eq:10} \end{equation} which, with \autoref{eq:09} leads to \begin{equation} k_l = \dfrac{k_{01} k_{02}} {k_{01} + k_{02}}, \quad k_h = k_{02} \label{eq:11} \end{equation} By equating with the values of $ k_l $ and $ k_h$ reported in \cite{Ramaratnam2006} \begin{equation} k_l = 220\,\mathrm{N/m}, \quad k_h = 300\,\mathrm{N/m} \label{eq:12} \end{equation} the basic stiffness values $ k_{01} $ and $ k_{02} $ specified in \autoref{fig:04} are obtained. \begin{figure} \caption{Serial two-DoF-system with variable stiffness} \label{fig:04} \end{figure} As mentioned, the effect on the motion of mass $ m_2 $ can also be reproduced using the same scaling factor for both springs (\textit{global} case). With \begin{equation} \gamma_1 = \gamma_2 = \gamma \label{eq:13} \end{equation} equation~\autoref{eq:09} writes \begin{equation} k_l = \dfrac{k_{01} k_{02}} {k_{01} + k_{02}}, \quad k_h = \gamma \dfrac{k_{01} k_{02}} {k_{01} + k_{02}} \label{eq:14} \end{equation} By using the already determined basic stiffness values and \autoref{eq:14}, the value $ \gamma = 1.3636 $ is obtained. The initial conditions are given by imposing a static displacement of the second degree of freedom $u_2$ (no velocity) and leaving the first degree of freedom $u_1$ unloaded. The control logic is defined by \autoref{eq:02} with the observation variable $c=u_2$, and the stiffness scaling factors in the high-stiffness phase given by \autoref{eq:10} and \autoref{eq:13}, respectively. The resulting displacement curves of the two variants are shown in \autoref{fig:05}, each compared with the non-modulated structure (dotted lines). \begin{figure} \caption{Representation of the experimental setup in \cite{Ramaratnam2006} \label{fig:05} \end{figure} In the global case (a), both degrees of freedom resemble -- like the green curve in \autoref{fig:03} -- a damped sinusoidal curve. The real law of the curves is not a damped sine but a piecewise sinusoidal curve (\autoref{fig:03}) or a piecewise damped sinusoidal curve (\autoref{fig:05}a). We now consider the green curve in \autoref{fig:03}. Because the system is undamped and the stiffness is piecewise constant, the curve will be piecewise harmonic. 
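The following toy integration (illustrative parameters, not the system of \autoref{fig:03}) makes this piecewise-harmonic behaviour and the associated energy bookkeeping explicit: the stiffness follows the switching logic, the work extracted by the stiffness variation device at each stiffness reduction is accumulated according to \autoref{eq:05} and matches the drop in vibration energy, while the stiffness increases at the zero crossings require essentially no work. The lengths of the two phases are quantified analytically next.
\begin{verbatim}
# Toy single-DoF oscillator with switched stiffness (k_l = k1, k_h = k1 + k2):
# the energy extracted at the stiffness reduction points accounts for the entire
# decay of the vibration energy (pseudo-active effect only).
import numpy as np

m = 1.0                         # mass (kg)                 -- toy value
k1, k2 = 100.0, 40.0            # k_l = k1, k_h = k1 + k2   -- toy values
dt, T = 1e-5, 2.0

u, v = 1.0, 0.0                 # start at an extremal point, low-stiffness phase
k = k1
E0 = 0.5 * k * u**2 + 0.5 * m * v**2
extracted = added = 0.0

for _ in range(int(T / dt)):
    k_new = (k1 + k2) if u * v > 0.0 else k1     # switching logic
    if k_new < k:                                # reduction at an extremal point
        extracted += 0.5 * (k - k_new) * u**2    # energy removed by the device
    elif k_new > k:                              # increase at a zero crossing
        added += 0.5 * (k_new - k) * u**2        # ~0, since u ~ 0 there
    k = k_new
    v += -k * u / m * dt
    u += v * dt

E_final = 0.5 * k * u**2 + 0.5 * m * v**2
print("extracted energy        : %.3f J" % extracted)
print("added energy (increases): %.2e J" % added)
print("drop in vibration energy: %.3f J" % (E0 - E_final))
\end{verbatim}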
In the theoretical case in which the switched-stiffness control logic is applied exactly at the extremal and zero-crossing points, the low- and high-stiffness phases will respectively have the following lengths: \begin{equation} P_l = \frac{\pi}{2} \cdot \sqrt{\frac{m_1}{k_1}}, \quad P_h = \frac{\pi}{2} \cdot \sqrt{\frac{m_1}{k_1+k_2}} \label{eq:15} \end{equation} Each of these phases is referred to as a \textit{quarter cycle} in the following. In the first quarter cycle in \autoref{fig:03}, there is no difference with respect to the free oscillation of an undamped system with a finite displacement and zero velocity as the initial condition. At the first zero-crossing, the stiffness is increased without affecting the vibration energy (as shown later). In the second quarter cycle, the spring, loaded with the same energy present in the system at the initial time, reaches a lower amplitude owing to the higher stiffness. In addition, we observe a shortening of the quarter cycle owing to the higher natural frequency. At the end of the second quarter cycle, the stiffness is decreased, and energy is subtracted from the system. The quarter cycle resumes its original length, and the velocity at the next zero-crossing decreases (owing to the loss of vibration energy), which can be seen in the decrease in the curve slope. The fourth quarter cycle behaves essentially like the second one. In a similar way, it could be shown that the curves in \autoref{fig:05}a are -- due to the presence of damping -- piecewise damped sinusoidal curves. In the local case (\autoref{fig:05}b), the first degree of freedom remains blocked starting from a zero-crossing point, whereas the absolute displacement value of the second degree of freedom increases. At each extremal point of the $ u_2 $ curve, the first spring is released, which leads to clearly visible high-frequency oscillations of the first degree of freedom. Although the course of the displacement of the first degree of freedom differs significantly between the global and local cases, the amplitude reduction for the second degree of freedom shows no appreciable differences between the two cases. The undamped equations of motion, valid for both variants, are as follows \begin{align} \begin{split} m_1 \ddot{u}_1 + (k_1 + k_2) u_1 - k_2 u_2 &= 0\\ m_2 \ddot{u}_2 - k_2 u_1 + k_2 u_2 &= 0 \label{eq:16} \end{split} \end{align} Owing to the low value of $ m_1 $, the inertial term can be neglected in the first equation, unless the accelerations are particularly high. The following relationship is then obtained: \begin{equation} u_1 = \dfrac{k_2}{k_1 + k_2} u_2 \label{eq:17} \end{equation} The two displacements are proportional to each other. The mass $ m_1 $ just ``follows'' $ m_2 $ in a quasi-static equilibrium state. The system can be reduced to a single DoF, with the equation (obtained from the second equation of \autoref{eq:16} and \autoref{eq:17}) \begin{equation} m_2 \ddot{u}_2 + \dfrac{k_1 k_2}{k_1 + k_2} u_2 = 0 \label{eq:18} \end{equation} Because the stiffness term of this equation yields, for both the local and global cases, the same value in the high- and low-stiffness phase (see \autoref{eq:11} and \autoref{eq:14}), the curve for $ u_2 $ is approximately the same in \autoref{fig:05}a and b (due to the assumption $m_1 \ll m_2$). In the global case, the curves differ only by a scaling factor, which is in accordance with \autoref{eq:17} and confirms the hypothesis that $ m_1 $ is negligible.
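The quasi-static reduction can also be checked numerically. The short sketch below (not the simulation used for the figures) integrates \autoref{eq:16} directly with assumed masses $ m_2 = 1.5\,\mathrm{kg} $ -- chosen so that the reduced model \autoref{eq:18} reproduces the first natural frequency of $1.927\,\mathrm{Hz}$ reported in \autoref{subsec:2.6} -- and $ m_1 = 0.01\,\mathrm{kg} $, and monitors how closely $ u_1 $ follows \autoref{eq:17}:
\begin{verbatim}
# Sketch: numerical check of the quasi-static relation u1 ~ k2/(k1+k2)*u2,
# Eq. (17), for the undamped serial system of Eq. (16). Masses are assumed values.
import numpy as np

m1, m2 = 0.01, 1.5              # assumed masses, m1 << m2
k1, k2 = 825.0, 300.0           # basic stiffness values, cf. Eqs. (11)-(12)
c = k2 / (k1 + k2)              # proportionality factor of Eq. (17)

dt, n = 1.0e-5, 100_000         # 1 s of simulated time
u1, u2, v1, v2 = c * 0.01, 0.01, 0.0, 0.0   # quasi-static initial configuration
dev = 0.0
for _ in range(n):
    a1 = (-(k1 + k2) * u1 + k2 * u2) / m1   # first line of Eq. (16)
    a2 = (k2 * u1 - k2 * u2) / m2           # second line of Eq. (16)
    v1, v2 = v1 + a1 * dt, v2 + a2 * dt     # semi-implicit Euler step
    u1, u2 = u1 + v1 * dt, u2 + v2 * dt
    dev = max(dev, abs(u1 - c * u2))

f_red = np.sqrt(k1 * k2 / (k1 + k2) / m2) / (2.0 * np.pi)   # frequency of Eq. (18)
print(f"max |u1 - c*u2| relative to the initial amplitude: {dev / 0.01:.2%}")
print(f"frequency of the reduced single-DoF model: {f_red:.3f} Hz")
\end{verbatim}
With these values, the deviation from \autoref{eq:17} remains small (well below one percent), and the reduced model reproduces the first natural frequency, which supports the single-DoF approximation used in the following.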
Because the spring stiffness values are scaled by the same factor, the proportionality coefficient in \autoref{eq:17} does not change. According to \autoref{eq:14}, at the beginning of the observation, the system equation~\autoref{eq:18} can be written as \begin{equation} m_2 \ddot{u}_2 + k_l u_2 = 0 \label{eq:19} \end{equation} After one quarter cycle, the stiffness increases and the equation becomes \begin{equation} m_2 \ddot{u}_2 + k_h u_2 = 0 \label{eq:20} \end{equation} with the values for $ k_l $ and $ k_h $ given in \autoref{eq:14}. The local case does not deviate from the global case in the first quarter cycle. At the zero crossing, $ k_1 $ is set to an infinite value by imposing $ u_1 = 0 $. The system is again reduced to a single-DoF system, with the equation (obtained from \autoref{eq:16}, second equation, for $ u_1 = 0 $) \begin{equation} m_2 \ddot{u}_2 + k_2 u_2 = 0 \label{eq:21} \end{equation} which, according to \autoref{eq:11} with $ k_2 = \mathrm{const.} = k_{02} $, yields equation~\autoref{eq:20}. At the extremal point, spring~1 is released, and its stiffness is instantaneously set back to $ k_{01} $. The force acting on $ m_1 $ suddenly increases from zero to a finite value, and the resulting acceleration renders the inertial term in \autoref{eq:16} no longer negligible. This is clearly shown by the $ u_1 $ curve, which no longer follows the proportionality law \autoref{eq:17}. The mass oscillates about the quasi-static equilibrium position given by the right-hand side of \autoref{eq:17}. Toward the end of the quarter cycle, however, this effect dies away, and the curve again fulfills \autoref{eq:18}. Because the inertial forces generated by $ m_1 $ cannot be neglected in this quarter cycle, equation~\autoref{eq:18} -- strictly speaking -- is no longer valid. The $ u_2 $ curve, however, does not show any significant difference with respect to the global case. Evidently, the fast oscillating inertial forces of $ m_1 $ do not affect the motion of the much larger mass $ m_2 $ to an appreciable extent. The (total) vibration energy $ E $ is the sum of the elastic potential energy $ U $ and the kinetic energy $ T $. The potential energy is given in both cases by the sum of the contributions of the two springs \begin{equation} U = \frac{1}{2} k_1 u_1^2 + \frac{1}{2} k_2 (u_1 - u_2)^2 \label{eq:22} \end{equation} As shown for the single-DoF system, the stepwise stiffness reductions, which occur at finite displacements, imply stepwise changes in the potential energy. These energy changes are achieved by the stiffness variation device (extracted energy). At the stiffness increase points, which occur at zero displacement and therefore do not alter the potential energy, no external work is required.
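This energy bookkeeping can be made concrete with a few lines of code. The sketch below (again with the assumed mass $ m_2 = 1.5\,\mathrm{kg} $; the switching law is the one described above, i.e., high stiffness while $|u_2|$ grows and low stiffness while the mass returns to equilibrium, used here as a stand-in for \autoref{eq:02}) applies the switched stiffness to the reduced model and records the work exchanged at the switching points:
\begin{verbatim}
# Sketch: reduced single-DoF model, Eqs. (19)-(20), with switched stiffness.
# Work is extracted at the stiffness reductions (Eq. (5)); the stiffness
# increases occur at (numerically almost) zero displacement and cost no work.
m2 = 1.5                         # assumed mass
k_l, k_h = 220.0, 300.0          # Eq. (12)
dt, n = 1.0e-5, 200_000          # 2 s of simulated time
u, v, k = 0.01, 0.0, k_l
E0 = 0.5 * k_l * u**2
extracted = injected = 0.0       # work done by the stiffness variation device
for _ in range(n):
    k_new = k_h if u * v > 0.0 else k_l     # assumed switching law
    dU = 0.5 * (k_new - k) * u**2           # potential-energy jump, Eq. (5)
    if dU < 0.0:
        extracted += -dU                    # reduction at an extremal point
    else:
        injected += dU                      # increase near a zero crossing
    k = k_new
    v += -(k / m2) * u * dt                 # semi-implicit Euler step
    u += v * dt
E = 0.5 * m2 * v**2 + 0.5 * k * u**2
print(f"E(0) = {E0:.4e} J,  E(2 s) = {E:.4e} J")
print(f"extracted: {extracted:.4e} J,  injected at zero crossings: {injected:.1e} J")
\end{verbatim}
Up to the integration error, the drop $ E(0) - E(2\,\mathrm{s}) $ equals the extracted energy, while the work done at the stiffness increase points is negligible, in line with the statement above.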
From \autoref{eq:22}, the expression for the extracted energy at the $i$th stiffness reduction point $ e_i $ can be written as \begin{equation} \Delta E_{ai} = \frac{1}{2} \Delta k_1 u_1^2(e_i) + \frac{1}{2} \Delta k_2(u_1(e_i) - u_2(e_i))^2 \label{eq:23} \end{equation} where the negative factors $ \Delta k_j $ express the drop of the spring constants \begin{equation} \Delta k_j = k_{0j} (1-\gamma_j) \label{eq:24} \end{equation} The vibration energy loss over the half cycle between $ e_i $ and the next extremal point $ e_{i+1} $, computed as the difference of the left limits of the potential energies \begin{equation} \Delta E_i = U(e_{i+1}^-) - U(e_i^-) \label{eq:25} \end{equation} corresponds, since the system is conservative, to the extracted energy. In the general case in which the system's internal damping is considered, the vibration energy loss is expressed as the sum of the extracted energy and the energy dissipated by damping \begin{equation} \Delta E_i = \Delta E_{ai} + \Delta E_{pi} \label{eq:26} \end{equation} Dividing by the vibration energy of the system at time $ e_i $ \begin{equation} \dfrac{\Delta E_i}{E(e_i)} = \dfrac{\Delta E_{ai}}{E(e_i)} + \dfrac{\Delta E_{pi}}{E(e_i)} \label{eq:27} \end{equation} renders the energy balance independent of the choice of the half cycle because the damping losses are proportional to the available vibration energy. The same holds true for the extracted energy because it depends, according to \autoref{eq:23}, on the square of the displacement amplitudes. The numerator and denominator of the fractions in \autoref{eq:27} change with time, but the quotients remain constant. To avoid overloading the notation, we omit the dependence on $ i $ and $ e_i $ in the symbols of the energy loss rates in the following equations. As long as relationship \autoref{eq:17} holds true, the system's potential energy \autoref{eq:22} can be expressed as a function of $ u_2 $. This applies to the global case in general, as well as to the high-stiffness phase of the local case. In the high-stiffness phase, we have \begin{equation} U_h = \dfrac{1}{2} k_h u_2^2 \label{eq:28} \end{equation} and \autoref{eq:25} can then be written as \begin{equation} \Delta E_i = \dfrac{1}{2} k_h (u_2^2 (e_{i+1}^-) - u_2^2 (e_i^-)) \label{eq:29} \end{equation} Assuming that $ u_2 $ takes the same values in both cases, it can be concluded that the local stiffness modulation leads to the same energy loss rate $ \Delta E / E $ as the global stiffness modulation. In the local case, however, the extracted energy is zero, according to \autoref{eq:23}, because $ u_1 $ and $ \Delta k_2 $ are zero. The damping energy rate in \autoref{eq:27} is defined to be equal to the energy loss rate of the system without stiffness variations, which is the same for the two cases. Hence, a new term must replace the extracted energy in the half-cycle balance: \begin{equation} \dfrac{\Delta E}{E} = \dfrac{\Delta E_s}{E} + \dfrac{\Delta E_p}{E} \label{eq:30} \end{equation} The new term $ \Delta E_s/E $ represents the semi-active effect. In both cases, the same share of energy is periodically extracted from the motion of the second degree of freedom \begin{equation} \dfrac{\Delta E_s}{E} = \dfrac{\Delta E_a}{E} \label{eq:31} \end{equation} with the difference that, in the local case, instead of being extracted by the actuator, this energy is transferred to the high-frequency motion of the first degree of freedom and dissipated, similarly to the case in \autoref{fig:02}.
In the global case, with \autoref{eq:28} and the following expression of the potential energy for the low-stiffness phase \begin{equation} U_l = \dfrac{1}{2} k_l u_2^2 \label{eq:32} \end{equation} the extracted energy \autoref{eq:23} can be written as \begin{equation} \Delta E_{ai} = U_l - U_h = \dfrac{1}{2} (k_l - k_h) u_2^2 (e_i) \label{eq:33} \end{equation} Finally, the energy rates from \autoref{eq:31} can be expressed as \begin{equation} \dfrac{\Delta E_{a}}{E} = \dfrac{\Delta E_s}{E} = \dfrac{k_l - k_h}{k_h} = \dfrac{k_l}{k_h} - 1 \label{eq:34} \end{equation} Note that in this energy analysis, we neglected the small amount of external, negative work required to bring the velocity of mass $ m_1 $ to zero at the stiffness increase points (kinks in the $ u_1 $ displacement curves). The presented example helped us to define the different contributions to vibration reduction by separating the pseudo-active and semi-active components. In the general case, all the considered contributions coexist: \begin{equation} \dfrac{\Delta E}{E} = \dfrac{\Delta E_a}{E} + \dfrac{\Delta E_s}{E} + \dfrac{\Delta E_p}{E} \label{eq:35} \end{equation} The lost energy as a function of time \begin{equation} L(t) = E(t) - E(0) \label{eq:36} \end{equation} contains the three components resulting from the analyzed effects \begin{equation} L(t) = L_a(t) + L_s(t) + L_p(t) \label{eq:37} \end{equation} The extracted energy $ L_a(t) $ is a piecewise constant function given by \begin{equation} L_a(t) = \sum_i \Delta E_{ai} H(t - e_i) \label{eq:38} \end{equation} with $ H $ as the Heaviside function and $\Delta E_{ai}$ resulting from \autoref{eq:23}. The difference between the lost energy and the extracted energy is due to internal dissipation. It consists of the above introduced terms $ L_s(t) $ and $ L_p(t) $, which are continuous functions of time, and can be analytically expressed as \begin{equation} \delimitershortfall -0.1pt L_s(t) + L_p(t) = -\int_0^t \dot{u}_1^2(\tau) \left(\alpha m_1 + \beta (k_1+k_2)\right) + \dot{u}_2^2(\tau) (\alpha m_2 + \beta k_2) - 2 \dot{u}_1(\tau) \dot{u}_2(\tau) \beta k_2 \,d\tau \label{eq:39} \end{equation} with $ k_1 $ and $ k_2 $ varying with $ \gamma $ according to \autoref{eq:02}. The diagram in \autoref{fig:06} shows the dissipative energy loss $ L_s(t) + L_p(t) $ for different local and global stiffness variations. In contrast to the previous example, in the local case the stiffness variation now affects spring~2, which implies that $ L_a $ is also present. In the global case (dashed lines) the curves only represent the contribution of the passive term $ L_p(t) $, while the curves of the local case (dotted lines) additionally include the semi-active contribution $ L_s(t) $. This is shown later in more detail. The stiffness scaling factors of the three local examples are chosen in such a way that the total energy loss of the system per cycle is pairwise identical to that of the global examples. Curves of the same color refer to example pairs with the same energy loss. Because the passive term is assumed to be equal for both cases, the difference between the two curves of the same color shows the semi-active component $ L_s(t) $. \begin{figure} \caption{Dissipative energy loss $ L_s(t) + L_p(t) $;\\ dashed lines: global stiffness scaling; dotted lines: local stiffness scaling of spring 2} \label{fig:06} \end{figure} With higher values of the factor $ \gamma $, the dissipative energy loss clearly increases in the local case, mainly because of the semi-active effect.
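For orientation, the pseudo-active share defined by \autoref{eq:34} can be evaluated directly. With the stiffness values of \autoref{eq:12}, i.e., $ \gamma = k_h/k_l = 1.3636 $,
\[
\dfrac{\Delta E_a}{E} = \dfrac{k_l}{k_h} - 1 = \dfrac{220}{300} - 1 \approx -0.27,
\]
so roughly $27\,\%$ of the instantaneous vibration energy is extracted at each stiffness reduction. For a larger global scaling factor such as $ \gamma = 2.421 $ (used later in \autoref{subsec:2.5}), the same formula gives $ 1/\gamma - 1 \approx -0.59 $, i.e., an even larger pseudo-active share per half cycle.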
In the global case, the passive component shows a reverse trend with respect to the stiffness scaling factor. This can be attributed to the fact that the energy dissipated by damping is proportional to the vibration energy itself. If another vibration reduction mechanism is at work (pseudo-active in the present case), it reduces the vibration amplitude and therefore indirectly diminishes the \textit{absolute} energy dissipated by inherent damping. \FloatBarrier\Oldsubsection{Modal analysis} \label{subsec:2.5} In the analysis of the system with coupleable oscillators (\autoref{fig:02}), the energy transfer between the two oscillators is evident. In a serial system, such as the one examined in the previous section, the two oscillators are always coupled; hence, the vibration energy cannot be directly assigned to the individual physical degrees of freedom. We were nevertheless able to study the stiffness modulation effect energetically because of the relatively small mass of one degree of freedom; this cannot, however, be expected to represent the general case. In the modal space, decoupled oscillators are present (which recalls the first example), and energy can be split over the modal degrees of freedom. In this section, we therefore analyze the stiffness modulation effect by modal analysis. The analysis is specifically performed on the serial two-DoF system, but can easily be extended to the general case of a discrete system with a larger number of degrees of freedom or a continuous structure. The stiffness matrix $ \mathbf{K} $ and mass matrix $ \mathbf{M} $ of the system can be calculated as follows: \begin{equation} \mathbf{K} = \begin{bmatrix} k_{01} + k_{02} & -k_{02}\\ -k_{02} & k_{02} \end{bmatrix}, \quad \mathbf{M} = \begin{bmatrix} m_1 & 0\\ 0 & m_2 \end{bmatrix} \label{eq:40} \end{equation} Damping is assumed to be proportional to mass and stiffness, and the damping matrix $ \mathbf{C} $ thus results from the linear combination of $ \mathbf{K} $ and $ \mathbf{M} $ \begin{equation} \mathbf{C} = \alpha \mathbf{M} + \beta \mathbf{K} \label{eq:41} \end{equation} The solution of the eigenvalue problem \begin{equation} \mathbf{K} \boldsymbol{\unslant\varphi} = \lambda \mathbf{M} \boldsymbol{\unslant\varphi} \label{eq:42} \end{equation} provides the system's eigenvalues $ \lambda_1, \lambda_2 $ and the eigenvectors $ \boldsymbol{\unslant\varphi}_1, \boldsymbol{\unslant\varphi}_2 $. The eigenvalues satisfy \begin{equation} \lambda_i = \omega_i^2,\quad i = 1,2 \label{eq:43} \end{equation} with $ \omega_i $ as the system's eigenfrequencies. The transformation into the modal space follows the relationship \begin{equation} \mathbf{u = \Phi}^\mathrm{T} \mathbf{q} \label{eq:44} \end{equation} with $ \mathbf{u} $ as the spatial displacements, $ \mathbf{q} $ as the modal displacements, and \begin{equation} \mathbf{\Phi} = [\,\boldsymbol{\unslant\varphi}_1 \;|\; \boldsymbol{\unslant\varphi}_2\,] \label{eq:45} \end{equation} as the modal matrix.
The modal displacement fulfills the modal system equation \begin{equation} \tilde{\mathbf{M}} \ddot{\mathbf{q}} + \tilde{\mathbf{C}} \dot{\mathbf{q}} + \tilde{\mathbf{K}} \mathbf{q} = \tilde{\mathbf{f}} \label{eq:46} \end{equation} with the modal system matrices: \begin{align} \begin{split} \tilde{\mathbf{M}} &= \mathbf{\Phi}^\mathrm{T} \mathbf{M} \mathbf{\Phi}\\ \tilde{\mathbf{C}} &= \mathbf{\Phi}^\mathrm{T} \mathbf{C} \mathbf{\Phi}\\ \tilde{\mathbf{K}} &= \mathbf{\Phi}^\mathrm{T} \mathbf{K} \mathbf{\Phi} \label{eq:47} \end{split} \end{align} The diagonal entries $ \tilde{m}_i $, $ \tilde{c}_i $, and $ \tilde{k}_i $ of the modal matrices supply the system parameters of the decoupled oscillators. Owing to the mass and stiffness proportionality, not only $ \tilde{\mathbf{M}} $ and $ \tilde{\mathbf{K}} $ but also $ \tilde{\mathbf{C}} $ is a diagonal matrix. First, we consider how a global stiffness change affects the modal quantities. In this case, all the elements of the stiffness matrix $ \mathbf{K} $ are scaled by the same factor $ \gamma_1 = \gamma_2 = \gamma $: \begin{equation} \mathbf{K}_h = \gamma \mathbf{K} \label{eq:48} \end{equation} The eigenvalue problem of the new system \begin{equation} \mathbf{K}_h \boldsymbol{\unslant\varphi} = \bar{\lambda} \mathbf{M} \boldsymbol{\unslant\varphi} \Rightarrow \gamma \mathbf{K} \boldsymbol{\unslant\varphi} = \bar{\lambda} \mathbf{M} \boldsymbol{\unslant\varphi} \label{eq:49} \end{equation} is fulfilled by the same eigenvectors as \autoref{eq:42} and by the eigenvalues \begin{equation} \bar{\lambda}_i = \gamma \lambda_i \label{eq:50} \end{equation} The modal transformation does not change. The modal spring-mass systems experience the same proportional change in stiffness as the physical system, while the modal masses remain unchanged. Owing to the change in the modal stiffness, the energy also changes in each modal oscillator separately (by the same factor $ \gamma $), but no energy transfer occurs between the oscillators. As previously observed in the single-DoF case, a global stiffness variation can therefore only lead to a pseudo-active and not a semi-active vibration suppression. Now, we consider the case of a local change in stiffness, realized by scaling the second spring stiffness by the factor $ \gamma_2 $ and leaving $ k_1 $ unchanged. In this case, the stiffness matrix changes as follows: \begin{equation} \mathbf{K}_h = \begin{bmatrix} k_{01}+ \gamma_2 \cdot k_{02} & - \gamma_2 \cdot k_{02}\\ - \gamma_2 \cdot k_{02} & \gamma_2 \cdot k_{02} \end{bmatrix} \label{eq:51} \end{equation} As a result of the nonproportional modification of individual entries in the stiffness matrix, the modal basis $ \mathbf{\Phi} $ changes. Consequently, at the moment of stiffness change, the vibration energy is redistributed among the modal oscillators, which in turn change their modal parameters. This provides the prerequisite for vibration reduction through internal energy transfer. The described difference will now be illustrated by means of an exemplary calculation with the data given in \autoref{tab:1}. The stiffness scaling factors correspond to the orange curve pair in \autoref{fig:06}, and result in the same total energy loss over the observed time period. Analogously to the previous examples, the stiffness is switched according to \autoref{eq:02} with the first modal amplitude $ q_1 $ as observation variable $ c $ (see \autoref{fig:07}c and d).
The previously used formulation with $c=u_2$ yields similar switching points (see \autoref{fig:07}a and b) but should be omitted here for the purpose of explanation. \begin{table}[ht] \centering \footnotesize \begin{tabularx}{16cm}{p{3.2cm}| X| X| X} & \centering \textbf{Unchanged} & \centering \textbf{Global change} & \centering \textbf{Local change} \tabularnewline \hline \rule{0pt}{3ex} \textbf{Stiffness scaling factors} & \centering -- & \centering $ \gamma_1 = \gamma_2 = \gamma = 2.421 $ & \centering $ \gamma_2=5 $ \tabularnewline[5pt] \rule{0pt}{3ex} \textbf{Modal mass matrix} & $\tilde{\mathbf{M}} = $ \scriptsize $\begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix}$ & $\tilde{\mathbf{M}} = $ \scriptsize $\begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix}$ & $\tilde{\mathbf{M}} = $ \scriptsize $\begin{bmatrix} 1 & 0\\0 & 1\end{bmatrix}$\\ \rule{0pt}{5ex} \textbf{Modal damping matrix} & $\tilde{\mathbf{C}} = $ \scriptsize $\begin{bmatrix} 0.247 & 0\\0 & 112.653\end{bmatrix}$ & $\tilde{\mathbf{C}} = $ \scriptsize $\begin{bmatrix} 0.455 & 0\\0 & 272.592\end{bmatrix}$ & $\tilde{\mathbf{C}} = $ \scriptsize $\begin{bmatrix} 0.454 & 0\\0 & 233.246\end{bmatrix}$ \\ \rule{0pt}{5ex} \textbf{Modal stiffness matrix} & $\tilde{\mathbf{K}} = $ \scriptsize $\begin{bmatrix} 146.597 & 0\\0 & 1.126\cdot10^5\end{bmatrix}$ & $\tilde{\mathbf{K}} = $ \scriptsize $\begin{bmatrix} 354.912 & 0\\0 & 2.725\cdot10^5\end{bmatrix}$ & $\tilde{\mathbf{K}} = $ \scriptsize $\begin{bmatrix} 353.855 & 0\\0 & 2.332\cdot10^5\end{bmatrix}$ \\ \rule{0pt}{5ex} \textbf{Modal matrix} & $\mathbf{\Phi} = $ \scriptsize $\begin{bmatrix} -6.893 & -316.153\\-25.814 & 0.563\end{bmatrix}$ & $\mathbf{\Phi} = $ \scriptsize $\begin{bmatrix} -6.893 & -316.153\\-25.814 & 0.563\end{bmatrix}$ & $\mathbf{\Phi} = $ \scriptsize $\begin{bmatrix} -16.66 & -315.789\\-25.784 & 1.36\end{bmatrix}$ \\ \end{tabularx} \caption{Stiffness scaling factors and modal matrices} \label{tab:1} \end{table} At $ t = t_0 = 0 $ only the first mode is active and with zero velocity \begin{equation} \mathbf{q}(t_0) = \begin{bmatrix} q_1(t_0) \\ 0 \end{bmatrix}, \quad \dot{\mathbf{q}}(t_0) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad q_1(t_0) = -0.0775 \label{eq:52} \end{equation} First, the \textit{global} case is considered. At $ t = z_i $, both modal oscillators are at their static equilibrium position (see \autoref{fig:07}c, Point~1): \begin{equation} \mathbf{q}(z_i) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \dot{\mathbf{q}}(z_i) = \begin{bmatrix} \dot{q}_1(z_i) \\ 0 \end{bmatrix} \label{eq:53} \end{equation} At this point, the stiffness matrix is scaled according to \autoref{eq:48}. As mentioned before, this occurs without external work because the modal displacements are zero; therefore, the potential energy does not change: \begin{equation} U(z_i) = \dfrac{1}{2} \mathbf{q}^\mathrm{T} (z_i) \tilde{\mathbf{K}} \mathbf{q}(z_i) = \dfrac{1}{2} \mathbf{q}^\mathrm{T} (z_i) \gamma \tilde{\mathbf{K}} \mathbf{q}(z_i) = 0 \label{eq:54} \end{equation} In the following quarter cycle, the structure oscillates at a slightly higher frequency and lower amplitude than the initial configuration owing to the higher stiffness value. This is only apparently a vibration reduction because the vibration energy in the system remains unchanged. As already mentioned, no change occurs in the modal transformation and the second modal oscillator remains at rest (see \autoref{fig:07}c, green curve). 
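The invariance of the modal basis under a global stiffness scaling -- and its loss under a local one -- can be reproduced with a short numerical check. The sketch below uses the stiffness scaling factors of \autoref{tab:1} but assumed masses, so the numerical values differ from \autoref{tab:1}; only the qualitative behavior is of interest here:
\begin{verbatim}
# Sketch: eigenvectors of the serial two-DoF system, Eq. (40), under a global
# (Eq. (48)) and a local (Eq. (51)) stiffness increase.  Masses are assumed.
import numpy as np
from scipy.linalg import eigh

m1, m2 = 0.01, 1.5
k01, k02 = 825.0, 300.0
M = np.diag([m1, m2])
K = np.array([[k01 + k02, -k02], [-k02, k02]])
K_glob = 2.421 * K                                    # global scaling, gamma = 2.421
g2 = 5.0                                              # local scaling, gamma_2 = 5
K_loc = np.array([[k01 + g2 * k02, -g2 * k02], [-g2 * k02, g2 * k02]])

def modes(Kmat):
    lam, Phi = eigh(Kmat, M)     # solves K*phi = lambda*M*phi, mass-normalized
    return lam, np.abs(Phi)      # |.| removes the sign ambiguity of eigenvectors

lam0, Phi0 = modes(K)
lam_g, Phi_g = modes(K_glob)
lam_l, Phi_l = modes(K_loc)

print("eigenvalue ratios (global)    :", lam_g / lam0)   # both equal gamma, Eq. (50)
print("change of modal basis (global):", np.linalg.norm(Phi_g - Phi0))
print("change of modal basis (local) :", np.linalg.norm(Phi_l - Phi0))
\end{verbatim}
The global scaling leaves the mass-normalized eigenvectors unchanged and simply multiplies both eigenvalues by $ \gamma $, whereas the local scaling produces a visibly different modal matrix -- the prerequisite for the internal energy transfer discussed above.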
Thus, the energetic considerations made for the global case in \autoref{subsec:2.4} are strictly valid, and we now have (without the need for a simplifying hypothesis) a single-DoF system. The energy loss in the half cycle starting at $ t = e_i^- $ (Point~2 in the graph) is given by \autoref{eq:26}, with the pseudo-active component expressed by (see \autoref{eq:33}) \begin{equation} \Delta E_a = \dfrac{1}{2}(1-\gamma) \mathbf{q}^\mathrm{T} (e_i) \tilde{\mathbf{K}} \mathbf{q}(e_i) \label{eq:55} \end{equation} and the passive term $ \Delta E_p $ owing to damping of the first mode. In the \textit{local} case, as seen in the spatial analysis, the semi-active effect joins the pseudo-active effect. At the stiffness variation points, the modal basis changes between $ \mathbf{\Phi}_l $ (low stiffness) and $ \mathbf{\Phi}_h $ (high stiffness) such that the modal displacements of the low-stiffness basis $ \mathbf{q}_l $ must be converted to the new space with modal displacements $ \mathbf{q}_h $, and vice versa: \begin{equation} \mathbf{q}_h = \mathbf{\Phi}_h^{-1} \mathbf{\Phi}_l \mathbf{q}_l, \quad \mathbf{q}_l = \mathbf{\Phi}_l^{-1} \mathbf{\Phi}_h \mathbf{q}_h \label{eq:56} \end{equation} This change in the modal basis has two consequences: it changes the modal matrices (see \autoref{tab:1}), and therefore the parameters of the modal oscillators; and it rearranges the contributions of the individual modes to the motion of the system. The latter effect leads to the excitation of the second (modified) modal oscillator, as shown in \autoref{fig:07}d. At the points where the stiffness is changed to its lower value, the second modal oscillator begins to oscillate at a high frequency, which recalls \autoref{fig:02} (the corresponding curves in row~2 are plotted on a separate axis with different scaling for better resolution). \begin{figure} \caption{Results for global stiffness variation (left column) and local stiffness variation (right column)} \label{fig:07} \end{figure} As shown by previous examples and the energy analysis in \autoref{fig:07}, the energy transferred to the higher mode is dissipated much faster than in the first mode, which leads to a significant semi-active contribution to vibration reduction. Before discussing the energy curves, it should be recalled that the stiffness factor $ \gamma $ is adjusted such that the total energy loss per cycle is identical for both methods. The plots in \autoref{fig:07}e and f (3\textsuperscript{rd} row) show the potential, kinetic, and total energy as a function of time for both cases. In the global case, between the extremal points, the total energy loss owing to system damping is minor. The main part of the energy loss is provided by the steps at the extremal points owing to the extracted energy $ L_a $ reported in \autoref{fig:07}g, together with the passive component $ L_p $. As mentioned, there is no excitation of the second modal oscillator (see \autoref{fig:07}c); therefore, $ L_s $ is equal to 0. When examining the local case (\autoref{fig:07}, right-hand side), a clear difference with respect to the global case appears in the behavior of the curve between the extremal points. The step in the energy loss is much smaller and is followed by a steep (but continuous) energy loss that extends over the time interval in which the high-frequency oscillations of the second mode occur. The extracted energy (\autoref{fig:07}h) now represents a much smaller portion of the total energy loss.
The missing part of the loss is replaced by the additional energy loss occurring between the extremal points, which corresponds to $ \Delta E_s $. As shown in \autoref{fig:07}h, the total internal energy dissipation $ L_s + L_p $ exceeds, for the case under consideration, its pseudo-active counterpart $ L_a $. In \autoref{fig:08}, the dissipative energy loss is shown separately for both modes and for the local and global cases. The curves represent the energy $ \tilde{L}_i $ dissipated by damping in mode $ i $: \begin{equation} \tilde{L}_i (t,\gamma) = -\int_0^t \tilde{c}_i (\gamma) \dot{q}_i^2(\tau) \,d\tau = -\int_0^t \left(\alpha \tilde{m}_i + \beta \tilde{k}_i (\gamma) \right) \dot{q}_i^2(\tau) \,d\tau \label{eq:57} \end{equation} The dissipation related to mode~1 is equal for the two cases (dashed black and dotted green curve) and represents the passive component: \begin{equation} \tilde{L}_{1,\mathrm{local}} = \tilde{L}_{1,\mathrm{global}} = L_p \label{eq:58} \end{equation} As expected, no dissipation occurs in the second mode in the global case (solid black curve) because the mode is not excited at all. The dissipation associated with the second mode in the local case (solid green curve) constitutes the semi-active part $ L_s $: \begin{equation} \tilde{L}_{2,\mathrm{local}} = L_s, \quad \tilde{L}_{2,\mathrm{global}} = 0 \label{eq:59} \end{equation} \begin{figure} \caption{Dissipative energy loss, separated into modal components} \label{fig:08} \end{figure} \FloatBarrier\Oldsubsection{Forced oscillation} \label{subsec:2.6} Now, the case of a harmonic external force is considered. The feasibility of the presented method is demonstrated on the serial two-DoF system shown in \autoref{fig:04} with the stiffness amplification factors $ \gamma $ used in the previous section (see \autoref{tab:1}). The analysis of the system without stiffness modification yields the natural frequencies 1.927\,Hz and 53.395\,Hz. To determine the response behavior around the first natural frequency, an input force $ F(t) $ with an amplitude of 1\,N and a linearly increasing frequency between 1 and 3\,Hz is applied to mass $ m_2 $: \begin{equation} \begin{gathered} \delimitershortfall -0.1pt F(t) = F_{\mathrm{max}} \cdot \sin \left(2 \pi t \left(f_0 + t \dfrac{f_1-f_0}{2 t_1}\right) \right), \\ t_1 = 100\,\mathrm{s}, \quad f_0 = 1\,\mathrm{Hz}, \quad f_1 = 3\,\mathrm{Hz}, \quad F_{\mathrm{max}} = 1\,\mathrm{N} \label{eq:60} \end{gathered} \end{equation} The resulting displacement time graph of DoF $ u_2 $ is shown in \autoref{fig:09}. \begin{figure} \caption{Displacement of $ u_2 $ due to a linear sweeping input force} \label{fig:09} \end{figure} The time response of $ u_2 $ without a change in stiffness (\autoref{fig:09}, gray curve) shows the typical resonance behavior in the neighborhood of the first eigenfrequency. By modulating the stiffness in the same way as shown in \autoref{fig:07}, 1\textsuperscript{st} row (switch at zero-crossings / extremal points), this resonance amplification can be completely avoided (\autoref{fig:09}, black curve). The stiffness modulation here operates only in the region indicated by the gray box. Note that the black curve shows the time response of $ u_2 $ for both global and local stiffness variations. As shown for the free oscillation, there are no deviations between the two cases for this degree of freedom with the selected parameter $ \gamma $ (for comparison see \autoref{fig:05} and \autoref{fig:07}).
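The resonance suppression visible in \autoref{fig:09} can be reproduced qualitatively with the reduced single-DoF model driven by the sweep force \autoref{eq:60}. The following sketch (with the assumed mass $ m_2 = 1.5\,\mathrm{kg} $, an assumed damping ratio of $1\,\%$, and the same switching law as before; it is not the simulation used for the figures) compares the peak response with and without stiffness modulation:
\begin{verbatim}
# Sketch: reduced single-DoF model under the linear sweep of Eq. (60),
# with and without switched stiffness.  Mass and damping are assumed values.
import numpy as np

m2, k_l, k_h = 1.5, 220.0, 300.0
c = 2.0 * 0.01 * np.sqrt(k_l * m2)            # 1 % of critical damping (assumed)
t1, f0, f1, Fmax = 100.0, 1.0, 3.0, 1.0       # sweep parameters, Eq. (60)

def force(t):
    return Fmax * np.sin(2.0 * np.pi * t * (f0 + t * (f1 - f0) / (2.0 * t1)))

def peak_response(switched):
    dt = 1.0e-4
    u = v = peak = 0.0
    for i in range(int(t1 / dt)):
        k = k_h if (switched and u * v > 0.0) else k_l
        a = (force(i * dt) - c * v - k * u) / m2
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

print(f"peak |u2| without modulation: {peak_response(False):.4f} m")
print(f"peak |u2| with modulation   : {peak_response(True):.4f} m")
\end{verbatim}
With these parameters, the run without modulation shows the familiar resonance amplification near the first natural frequency, whereas the modulated run stays at a substantially lower amplitude, in line with \autoref{fig:09}.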
In the free-vibration study it was shown that, while the total lost energy per cycle (due to the adjustment of the amplification factors) is the same in both cases, large differences between the local and global cases exist in terms of the energy dissipated by damping. In the local case, the amount of dissipated energy is substantially higher than that in the global case. The energy loss due to direct extraction by the stiffness variation device is consequently higher in the global case. The difference in dissipated energy between the two cases can be entirely attributed to the semi-active effect, since this is absent in the global case. To extend this kind of analysis to the forced oscillation case, the dissipative energy loss in the two cases is analyzed for the individual modes according to \autoref{eq:57} (see \autoref{fig:10}). The curve $ \tilde{L}_1 $ associated with the first mode is -- as previously stated in \autoref{eq:58} -- identical in both cases (dashed black and dotted green curve). The applied force does not directly excite the second mode because the excitation frequency lies far away from the second resonance. In the global case, the dissipated energy of the second mode $ \tilde{L}_2 $ (solid black curve) remains negligible (as in the free vibration case; see \autoref{fig:08}); it is evident that no excitation of the second mode by energy transfer occurs. Thus, it can be concluded that there is no semi-active effect in the global case (see also \autoref{eq:59}). In the case of local stiffness modulation, a large amount of energy is dissipated by the second mode (solid green curve) during the stiffness variation phase, which shows the semi-active component. \begin{figure} \caption{Dissipative energy loss, separated into modal components (sweep)} \label{fig:10} \end{figure} With this modal consideration, the individual contributions to the energy loss ($ L_a $, $ L_s $, and $ L_p $, see \autoref{subsec:2.4}) can now be accounted for. The dissipative energy loss $ L_s + L_p $ analyzed above \autoref{eq:39} is now considered for the whole system (mode~1 and mode~2), along with the extracted energy $ L_a $ \autoref{eq:38} for the local and global stiffness variation (see \autoref{fig:11}). As mentioned, the total energy loss $ L $ \autoref{eq:37} (gray curve) is identical for both cases. In the global case, the energy loss consists mainly of extracted energy (pseudo-active effect, dashed black curve). The small remaining portion corresponds to the passive part $ L_p $ (passive effect, solid black curve). This portion results exclusively from the dissipated energy of mode~1 (see \autoref{fig:10}, dashed black curve). Thus, no semi-active component $ L_s $ is present, and the dissipated energy $ L_s + L_p $ is reduced to $ L_p $. In the local case, the energy transfer to the second mode results in a substantial semi-active contribution, which, together with the passive one (both combined in the solid green curve), exceeds the pseudo-active component (dashed green curve) for the selected parameter setting. \begin{figure} \caption{Energy loss, separated into pseudo-active and semi-active + passive component (sweep)} \label{fig:11} \end{figure} In summary, it can be stated that for harmonic excitation, resonances can be effectively prevented using the presented method of stiffness modulation.
If stiffness adaptation is possible, however, vibrations can also be effectively reduced by a constant stiffness change which moves the system's eigenfrequency away from the excitation frequency (see \autoref{subsec:1.1}), but this would only work under ideal conditions, such as a known narrowband excitation. With the cyclic variations of the structure's stiffness based on the amplitude of the considered mode, we are not restricted to such simplified situations. Furthermore, if local stiffness modulation is applied, this leads to an internal energy transfer and thus to a higher ratio of energy loss due to inherent damping to energy extraction by the actuator. \FloatBarrier\Oldsection{Conclusions and outlook} \label{sec:3} In this paper, a study on semi-active vibration reduction on the basis of cyclic stiffness variations (modulation) was presented, with a focus on the so-called switched-stiffness approach. Compared with previous work on this topic, the novelty of this contribution consists in separating the physical mechanism of vibration reduction into a purely semi-active effect, which enhances dissipation within the structure by redistributing energy among modes, and a pseudo-active effect, which extracts vibration energy from the system by negative work of the stiffness variation device. Both contributions join the passive effect, i.e., the energy dissipation that would be present without stiffness modulation. The semi-active effect is expected to be more advantageous because it does not require mechanical work to be performed by the stiffness variation device, which reduces the hardware size and cost. This energetically passive nature also fulfills the common assumption formulated for semi-active measures. To exploit it to the fullest, it is crucial to understand in more depth how the semi-active share of vibration attenuation can be influenced by design. The analysis of spring-mass oscillators showed that the spatial distribution of the stiffness variation plays a central role in this sense. The semi-active effect is only present when the stiffness is changed locally and disappears when the stiffness change is homogeneous over the system. The exploitation of the semi-active effect allows for the more effective use of the inherent damping capacity of structures, and reduces the need for cumbersome external devices. This will be of particular interest for lightweight systems. In future numerical and experimental studies, concrete options for stiffness variation devices will have to be investigated. Variable-stiffness beam structures can be realized, for instance, by shape adaptation of the cross section, which has a strong effect on the bending and torsion stiffness. A thin-walled construction with several independently controllable shape-adaptable ribs offers the possibility of differentiated actuation. In this way, the positive effect of local stiffness variation on the semi-active effect can be verified for continuous systems. Another option for the physical realization of stiffness variation consists in the use of bending elements with variable prestress. Other topics for future studies are provided by alternative choices of the control logic. In the presence of a physical stiffness variation device, the steps included in the stiffness-switching strategy involve impulsive excitation of the system, which can lead to unwanted vibrations. For this reason, it can be appropriate to move from the theoretical optimum control logic to a smoother time law.
Further, continuous systems offer a theoretically infinite range of choices for the observation function; thus, dedicated studies are needed to identify the best options. Finally, the above-mentioned extensions will be considered for additional types of excitation of the vibrating structure. \FloatBarrier\Oldsection*{Acknowledgements} Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- SPP~1897 `Calm, Smooth and Smart' (project numbers HA 7893/1-1 and WI 1181/10-1). \printbibliography \end{document}
\begin{document} \begin{abstract} We define and study twisted Alexander-type invariants of complex hypersurface complements. We investigate torsion properties for the twisted Alexander modules and extend local-to-global divisibility results of \cite{M06,DL} to the twisted setting. In the process, we also study the splitting fields containing the roots of the corresponding twisted Alexander polynomials. \end{abstract} \maketitle \tableofcontents \section{Introduction} The classical Alexander polynomial from knot theory has proved to be a powerful and versatile tool in the study of complements of plane algebraic curves. As shown by Libgober in \cite{L82}, the Alexander polynomial of a plane curve complement is sensitive to the local type and position of singularities of the curve, and it can be used to detect Zariski pairs (i.e., pairs of plane curves which have homeomorphic tubular neighborhoods, but non-homeomorphic complements). The study of Alexander polynomials of complements of higher-dimensional complex hypersurfaces was initiated by Libgober in \cite{L94}, and was pursued in greater generality (for arbitrary singularities) in \cite{M06,DL,Liu}. A twisted version of the Alexander polynomial (based on the extra datum of a representation of the fundamental group) was introduced by Lin \cite{L}, Wada \cite{W}, Kirk-Livingston \cite{KL} in the $1990$s, and has well proved its worth, for instance, in the works of Friedl and Vidussi (e.g., see \cite{FV} and the references therein). The twisted Alexander polynomial was ported to the study of plane algebraic curves by Cogolludo and Florens \cite{CF}, who used it to refine Libgober's divisibility results from \cite{L82}, and showed that these twisted Alexander polynomials can detect Zariski pairs which were indistinguishable by the classical Alexander polynomial. In this paper, we extend the Cogolludo-Florens construction to high dimensions and arbitrary singularities, and establish some of the basic properties of the twisted Alexander invariants in this algebro-geometric setting. More concretely, we investigate torsion properties for the twisted Alexander modules, and extend local-to-global divisibility results of \cite{M06,DL} to the twisted setting. In the process, we also study the splitting fields containing the roots of the corresponding twisted Alexander polynomials. \subsection*{Main results} In what follows, we give a brief overview of our results. Let $V \subset \mathbb{C}P$ be a projective complex hypersurface, and fix a hyperplane $H$ in $\mathbb{C}P$, which we call the hyperplane at infinity. Let $${\mathcal U}:=\mathbb{C}P\setminus (V \cup H)$$ denote the (affine) hypersurface complement. Fix a field $\mathbb F$ which is a subfield of $\mathbb{C}$ closed under conjugation, and let ${\mathbb V}$ be a finite dimensional $\mathbb F$-vector space. To a pair $(\varepsilon,\rho)$ of an epimorphism $\varepsilon:\pi_1({\mathcal U}) \to \mathbb{Z}$ and a representation $\rho:\pi_1({\mathcal U}) \to GL({\mathbb V})$, we associate {\it (co)homological (global) twisted Alexander modules} $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ and resp. $H^i_{\varepsilon,\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}])$, which are $\mathbb{F}[t^{\pm 1}]$-modules of finite type (and which are related by the universal coefficient theorem). These are homotopy invariants of the complement ${\mathcal U}$.
\medskip We say that the hypersurface $V$ is {\it in general position at infinity} if the reduced variety $V_{red}$ underlying $V$ is transversal to $H$ in the stratified sense. One of our first results describes torsion properties of the (global) twisted Alexander modules (see Theorems \ref{t1} and \ref{t3} and Corollary \ref{imc}): \begin{thm}\label{t1i} Let $V \subset \mathbb{C}P$ be a hypersurface in general position at infinity. The twisted Alexander modules $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules for any $0\leq i \leq n$, they are trivial for $i>n+1$, and $H^{\varepsilon,\rho}_{n+1}({\mathcal U},\mathbb{F}[t^{\pm 1}])$ is a free $\mathbb{F}[t^{\pm 1}]$-module of rank $(-1)^{n+1}\cdot \dim_{\mathbb F}({\mathbb V})\cdot \chi({\mathcal U}).$ \end{thm} This is a far-reaching generalization of results from \cite{M06,DL,Liu}, which only dealt with the case of the linking number homomorphism and the trivial representation defined on complements of {\it reduced} hypersurfaces. \medskip For any point $x \in V$, let ${\mathcal U}_x={\mathcal U} \cap B_x$ denote the local complement at $x$, for $B_x$ a small ball about $x$ in $\mathbb{C}P$. Then $(\varepsilon,\rho)$ induces via the inclusion map $i_x:{\mathcal U}_x \hookrightarrow {\mathcal U}$ a pair $(\varepsilon_x,\rho_x)$ on ${\mathcal U}_x$, so that {\it local twisted Alexander modules} of $({\mathcal U}_x, \varepsilon_x,\rho_x)$ can be defined. Proposition \ref{ploc} asserts that for any pair $(\varepsilon,\rho)$ as above, we have the following local torsion property: \begin{prop}\label{p1i} If $V$ is in general position at infinity, then the local twisted Alexander modules $H_i^{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules for any $x \in V$. \end{prop} This local torsion property removes a technical assumption used by Cogolludo-Florens \cite{CF} in the proof of their main divisibility result for twisted Alexander polynomials of plane curve complements. \medskip Since $\mathbb{F}[t^{\pm 1}]$ is a PID, torsion $\mathbb{F}[t^{\pm 1}]$-modules of finite type have orders (called Alexander polynomials) associated to them. Let $\Delta_{i,{\mathcal U}}$ (resp. $\Delta^i_{{\mathcal U}}$) and $\Delta_{i,x}$ (resp. $\Delta^i_x$) be the corresponding {\it global} and {\it local} twisted Alexander polynomials associated to the above (co)homological twisted Alexander modules. In Theorem \ref{t5} we indicate how to compute the global twisted Alexander polynomials from the local topological information at points on the hypersurface. This relationship can be roughly formulated as follows (see Theorem \ref{t5} for the precise formulation): \begin{thm}\label{t2i} For a projective hypersurface $V$ in general position at infinity, the zeros of the global twisted Alexander polynomials of the complement ${\mathcal U}$ are among those of the local ones at points in the affine part of some irreducible component of $V$. \end{thm} This result is a generalization to the twisted setting of the local-to-global analysis for the classical Alexander polynomials initiated in the first author's work \cite{M06}, and continued in \cite{DL, Liu}. As a consequence of the proof of Theorem \ref{t5}, we remark that the local torsion property for the twisted Alexander modules at points in $V \cap H$ is enough to conclude that the global twisted Alexander modules are torsion $\mathbb{F}[t^{\pm 1}]$-modules in the desired range.
For hypersurfaces in general position at infinity, the local torsion property at points in $V \cap H$ is a consequence of transversality and the K\"unneth formula, see the proof of Proposition \ref{ploc}, but there may be other instances (e.g., for various choices of $(\varepsilon,\rho)$) when it is satisfied. \medskip We also single out the contribution of the meridian at infinity (i.e., a meridian loop about $H$) to the global twisted Alexander polynomials, see Theorem \ref{t4} for the precise formulation. For the case of the linking number homomorphism and trivial representation, Theorem \ref{t4} reduces to the fact that the zeros of the classical Alexander polynomials of ${\mathcal U}$ are roots of unity of order $d=\deg(V)$, a fact shown in \cite{M06,DL} for reduced hypersurfaces. In the case of reduced plane curves and for $\varepsilon$ the linking number homomorphism, we identify explicitly the splitting fields containing the roots of the corresponding global twisted Alexander polynomials. Similar results were obtained by Libgober \cite{L09} by Hodge-theoretic methods. More precisely, in Theorem \ref{t2} we prove the following: \begin{thm}\label{t13} Let $C$ be a reduced curve of degree $d$ and in general position at infinity. Denote by $x_0$ the (homotopy class of the) meridian about the line $H$ at infinity. Suppose $\mathbb{F} = \mathbb{C}$, and denote the eigenvalues of $\rho(x_0)^{-1}$ by $\lambda_1,\cdots,\lambda_{\ell}$. Then the roots of $\Delta_{1,{\mathcal U}}(t)$ lie in the splitting field $\mathbb{S}$ of $\prod_{i=1}^{\ell} (t^d-\lambda_i)$ over $\mathbb{Q}$, which is cyclotomic over $\mathbb{K}=\mathbb{Q}(\lambda_1,\cdots,\lambda_{\ell})$.\end{thm} This result is based on our calculation of the twisted Alexander polynomial for the Hopf link on $d$ components (see Proposition \ref{p1}), which in our geometric situation can be identified with the link of $C$ ``at infinity''. \subsection*{Acknowledgement.} This paper was written while the first author visited the Max-Planck-Institut f\"ur Mathematik in Bonn, and the Institute of Mathematical Sciences at the Chinese University of Hong Kong. He thanks these institutes for their hospitality and for providing him with excellent working conditions. L. Maxim was partially supported by grants from NSF, NSA, by a fellowship from the Max-Planck-Institut f\"ur Mathematik, Bonn, and by the Romanian Ministry of National Education, CNCS-UEFISCDI, grant PN-II-ID-PCE-2012-4-0156. K. Wong gratefully acknowledges the support provided by the NSF-RTG grant \#1502553 at the University of Wisconsin-Madison. \section{Twisted chain complexes. Twisted Alexander invariants}\label{secex} \subsection{Definitions}\label{def} In this section, we recall the definitions of twisted chain complexes, twisted Alexander modules, and twisted Alexander polynomials of path-connected finite CW-complexes. For more details, see \cite{KL,CF}. Let $X$ be a path-connected finite CW-complex, with $\pi=\pi_1(X)$, and fix a field $\mathbb{F}$ which is a subfield of $\mathbb{C}$ closed under conjugation.
Fix a group homomorphism $$\varepsilon: \pi_1(X) \rightarrow \mathbb{Z},$$ and note that $\varepsilon$ extends to an algebra homomorphism $$\varepsilon: \mathbb{F}[\pi] \rightarrow \mathbb{F}[\mathbb{Z}] \cong \mathbb{F}[t^{\pm 1}].$$ Consider a finite dimensional $\mathbb{F}$-vector space $\mathbb{V}$ and a linear representation $$\rho: \pi \rightarrow GL(\mathbb{V}).$$ For simplicity, this representation will also be denoted by ${\mathbb V}_{\rho}$. Let $\widetilde{X}$ be the universal cover of $X$. The cellular chain complex $C_*(\widetilde{X},\mathbb{F})$ of $\widetilde{X}$ is a complex of free left $\mathbb{F}[\pi]$-modules, generated by lifts of the cells of $X$. For notational convenience, we follow \cite{KL} and regard $\mathbb{V}$ as a right $\mathbb{F}[\pi]$-module, i.e., with the right $\pi$-action for $v\in \mathbb{V}$ and $\alpha\in \pi$ given by: $$v\cdot \alpha = \rho(\alpha)(v).$$ Also consider the right $\mathbb{F}[\pi]$-module $\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} \mathbb{V}$, with $\mathbb{F}[\pi]$-multiplication induced by $\varepsilon\otimes \rho$ as: $$(p\otimes v)\cdot \alpha = pt^{\varepsilon(\alpha)} \otimes v\cdot\alpha= pt^{\varepsilon(\alpha)} \otimes \rho(\alpha)v, \ \alpha \in \pi.$$ Let the chain complex of $(X,\varepsilon,\rho)$ be defined as the complex of $\mathbb{F}[t^{\pm 1}]$-modules: $$C^{\varepsilon,\rho}_*(X,\mathbb{F}[t^{\pm 1}]) := (\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} \mathbb{V}) \otimes_{\mathbb{F}[\pi]}C_*(\widetilde{X},\mathbb{F}),$$ where the $\mathbb{F}[t^{\pm 1}]$-action is given by $$t^n ((p\otimes v)\cdot c) = (t^n \cdot p \otimes v)\cdot c.$$ It is a complex of free $\mathbb{F}[t^{\pm 1}]$-modules. \begin{df} The {\it $i$-th homological twisted Alexander module} $H^{\varepsilon,\rho}_i(X,\mathbb{F}[t^{\pm 1}])$ of the triple $(X, \varepsilon, \rho)$ is the $\mathbb{F}[t^{\pm 1}]$-module defined by: $$H^{\varepsilon,\rho}_i(X,\mathbb{F}[t^{\pm 1}]):=H_i\big(C^{\varepsilon,\rho}_*(X,\mathbb{F}[t^{\pm 1}])\big).$$ Similarly, the {\it $i$-th cohomological twisted Alexander module} $H^i_{\varepsilon,\rho}(X,\mathbb{F}[t^{\pm 1}])$ of $(X, \varepsilon, \rho)$ is the $\mathbb{F}[t^{\pm 1}]$-module given by: $$H^i_{\varepsilon,\rho}(X,\mathbb{F}[t^{\pm 1}]):=H^i\big({\rm{Hom}}_{\mathbb{F}[t^{\pm 1}]}(C_*^{\varepsilon,\rho}(X,\mathbb{F}[t^{\pm 1}]),\mathbb{F}[t^{\pm 1}])\big).$$ \end{df} \begin{rem} The classical Alexander modules correspond to the case of the trivial representation $\rho=triv$, i.e., ${\mathbb V}=\mathbb F=\mathbb{Q}$ and $\rho(x)=1$ for all $x \in \pi$. \end{rem} The twisted Alexander modules are homotopy invariants. The universal coefficient theorem (UCT) applied to the principal ideal domain $\mathbb{F}[t^{\pm 1}]$ yields that: \begin{equation}\label{uct} H^i_{\varepsilon,\rho}(X,\mathbb{F}[t^{\pm 1}]) \cong {\rm{Hom}}_{\mathbb{F}[t^{\pm 1}]}\big(H^{\varepsilon,\rho}_i(X,\mathbb{F}[t^{\pm 1}]),\mathbb{F}[t^{\pm 1}]\big) \oplus {\rm{Ext}}_{\mathbb{F}[t^{\pm 1}]}\big(H^{\varepsilon,\rho}_{i-1}(X,\mathbb{F}[t^{\pm 1}]),\mathbb{F}[t^{\pm 1}]\big). \end{equation} So, if $H^{\varepsilon,\rho}_i(X,\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules for all $i \leq n$, then $H^i_{\varepsilon,\rho}(X,\mathbb{F}[t^{\pm 1}])$ are also torsion in the same range. \medskip An equivalent definition of the twisted chain complex of $(X,\varepsilon,\rho)$ was given in \cite{KL}.
Let $X_{\infty}$ be the infinite cyclic cover of $X$ associated to $\pi'=\ker\varepsilon$. The chain complex $$C_*(X_{\infty},\mathbb{V}_{\rho}):=\mathbb{V}\otimes_{\mathbb{F}[\pi']} C_*(\widetilde{X}),$$ defined via the restricted actions to $\pi'$, can be regarded as a complex of $\mathbb{F}[t^{\pm 1}]$-modules via the action $t^n \cdot (v\otimes c) = v\cdot\gamma^{-n} \otimes \gamma^n c$, where $\gamma$ is an element in $\pi$ such that $\varepsilon(\gamma)=1$. Then \cite[Theorem 2.1]{KL} states that $C_*(X_{\infty},\mathbb{V}_{\rho})$ and $C^{\varepsilon,\rho}_*(X,\mathbb{F}[t^{\pm 1}])$ are isomorphic as $\mathbb{F}[t^{\pm 1}]$-modules. \begin{df}\label{ac} Denote by $\mathbb{F}(t)$ the field of fractions of $\mathbb{F}[t^{\pm 1}]$, and define $$C^{\varepsilon,\rho}_*(X,\mathbb{F}(t)) = C^{\varepsilon,\rho}_*(X,\mathbb{F}[t^{\pm 1}]) \otimes \mathbb{F}(t).$$ We say that $(X,\varepsilon,\rho)$ is {\it acyclic} if the chain complex $C^{\varepsilon,\rho}_*(X,\mathbb{F}(t))$ is acyclic over $\mathbb{F}(t)$.\end{df} \begin{rem} Since $\mathbb{F}[t^{\pm 1}]$ is a principal ideal domain, $\mathbb{F}(t)$ is flat over $\mathbb{F}[t^{\pm 1}]$. So, $(X,\varepsilon,\rho)$ is acyclic if and only if $H^{\varepsilon,\rho}_*(X,\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules. \end{rem} Since $\mathbb{F}[t^{\pm 1}]$ is a principal ideal domain and ${\mathbb V}$ is finite dimensional over $\mathbb F$, the twisted Alexander modules $H^{\varepsilon,\rho}_*(X,\mathbb{F}[t^{\pm 1}])$ are finitely generated modules over $\mathbb{F}[t^{\pm 1}]$. Thus they have a direct sum decomposition into cyclic modules. Similar considerations apply for the cohomological invariants. \begin{df}\label{tap} The order of the torsion part of $H^{\varepsilon,\rho}_i(X,\mathbb{F}[t^{\pm 1}])$ is called the {\it $i$-th homological twisted Alexander polynomial} of $(X,\varepsilon,\rho)$, and is denoted by $\Delta_{i,X}^{\varepsilon,\rho}(t)$. Similarly, we define the {\it $i$-th cohomological twisted Alexander polynomial} of $(X,\varepsilon,\rho)$ to be the order $\Delta^{i}_{\varepsilon,\rho,X}(t)$ of the torsion part of the $\mathbb{F}[t^{\pm 1}]$-module $H^i_{\varepsilon,\rho}(X,\mathbb{F}[t^{\pm 1}])$. \end{df} The twisted Alexander polynomials are well-defined up to units in $\mathbb{F}[t^{\pm 1}]$. Moreover, it follows from (\ref{uct}) that $$\Delta^{i}_{\varepsilon,\rho,X}(t) = \Delta_{i-1,X}^{\varepsilon,\rho}(t).$$ For further use, we also recall here the following fact: \begin{prop}\label{p0}\cite{KL} If $\varepsilon$ is non-trivial, then $H^{\varepsilon,\rho}_0(X,\mathbb{F}[t^{\pm 1}])$ is a torsion $\mathbb{F}[t^{\pm 1}]$-module. \end{prop} \subsection{Examples}\label{ex} In this section, we compute the twisted Alexander invariants on several examples with geometric significance. \subsubsection{Hopf link with $d$ components.}\label{Hopf} This example has important consequences in the study of twisted Alexander invariants of plane curve complements. More precisely, for a degree $d$ plane curve $C$ with regular behavior at infinity, the Hopf link with $d$ components is what we call ``the link of $C$ at infinity''. Recall that a link in $S^3$ is an embedding of a disjoint union of circles (link components) into $S^3$. Throughout this section, let $K$ be the Hopf link with $d$ components in $S^3$, that is, the link with $d\geq 2$ components with the property that the linking number of any two of its components is $1$.
\begin{lem}\label{l1} If $K \subset S^3$ is the Hopf link with $d$ components, then \begin{equation}\label{eq1} \pi_1(S^3\setminus K)\cong \mathbb{Z} \times F_{d-1} \cong \langle x_0,x_1,\cdots,x_{d-1} \ \vert \ x_0x_ix_0^{-1}x_i^{-1}, i=1,\cdots,d-1 \rangle,\end{equation} with $F_{d-1}$ the free group on $d-1$ generators. \end{lem} \begin{proof} First note that $S^3\setminus K$ is homotopy equivalent to the link exterior associated to the singularity $\{x^d=y^d\} \subset \mathbb{C}^2$. Equivalently, if $\mathcal{A} = \{x^d=y^d\}$ is the central line arrangement of $d$ lines in $\mathbb{C}^2$, then $S^3\setminus K \simeq \mathbb{C}^2 \setminus \mathcal{A}$. On the other hand, it can be easily seen that $$\mathbb{C}^2 \setminus \mathcal{A} \simeq \mathbb{C}^*\times (\mathbb{CP}^1\setminus \{\text{$d$ points}\}).$$ Indeed, the Hopf fibration $\mathbb{C}^2 \setminus \{0\} \to \mathbb{CP}^1$ restricts to a $\mathbb{C}^*$-locally trivial fibration $\mathbb{C}^2 \setminus \mathcal{A} \to \mathbb{CP}^1\setminus \{\text{$d$ points}\}$. Moreover, the latter fibration is trivial, since it can be seen as a restriction of the trivial fibration $\mathbb{C}^2 \setminus H \to \mathbb{CP}^1\setminus \{\text{$1$ point}\}=\mathbb{C}$ obtained from the Hopf fibration by first restricting to the complement of only one line $H$ of $\mathcal{A}$. Altogether, $$S^3\setminus K \simeq \mathbb{C}^2 \setminus \mathcal{A} \simeq S^1 \times (\bigvee_{d-1} S^1),$$ which yields the desired presentation for $\pi_1(S^3\setminus K)$. \end{proof} \begin{rem} An equivalent presentation of $\pi_1(S^3\setminus K)$ can be obtained by using the van Kampen theorem (e.g., see \cite[Theorem 4.2.17, Proposition 4.2.21]{Di1} and the references therein). More precisely, $\pi_1(S^3\setminus K)$ is called $G(d,d)$ in loc.cit., and has the presentation: $$\pi_1(S^3\setminus K) \cong \langle x_0,x_1,\cdots,x_d \ \vert \ x_dx_{d-1}\cdots x_1 x_0^{-1}, x_0x_ix_0^{-1}x_i^{-1}, i=1,\cdots,d \rangle,$$ where the generators $x_1,\cdots,x_d$ correspond to meridian loops about the $d$ lines of $\mathcal{A}$. \end{rem} We can now compute the twisted Alexander invariants of $S^3 \setminus K$: \begin{prop}\label{p1} Let $K \subset S^3$ be the Hopf link with $d$ components. Let $$\varepsilon: \pi_1(S^3\setminus K) \longrightarrow \mathbb{Z}$$ be an epimorphism with $$\varepsilon(x_0) \neq 0,$$ and $$\rho:\pi_1(S^3\setminus K) \longrightarrow GL(\mathbb{V}) = GL_{\ell}(\mathbb{F})$$ be a linear representation of rank $\ell$. Then the following hold: \begin{itemize} \item[(a)] $H^{\varepsilon,\rho}_i(S^3\setminus K,\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules, for $i=0,1$. \item[(b)] $H^{\varepsilon,\rho}_i(S^3\setminus K,\mathbb{F}[t^{\pm 1}])=0$ for $i\geq 2$. \item[(c)] $\Delta_0^{\varepsilon,\rho}$ is the greatest common divisor of the $\ell \times \ell$ minors of the column matrix $$\left( \rho(x_i)t^{\varepsilon(x_i)} -Id\right)_{i=0,\cdots,d-1}.$$ \item[(d)] $\Delta_1^{\varepsilon,\rho}/\Delta_0^{\varepsilon,\rho} = \big(\det(\rho(x_0)t^{\varepsilon(x_0)}-Id)\big)^{d-2}$. \end{itemize} \end{prop} \begin{proof} Recall from Lemma \ref{l1} that the link complement $S^3\setminus K$ has the homotopy type of a (central) line arrangement complement, namely $S^3\setminus K \simeq \mathbb{C}^2 \setminus \mathcal{A}$.
As such, it has a minimal cell structure (i.e., so that the number of $i$-cells equals its $i$-th Betti number $b_i$, for all $i\geq 0$). Moreover, since $\mathbb{C}^2 \setminus \mathcal{A}$ has the homotopy type of a finite real $2$-dimensional CW-complex, it follows that $H^{\varepsilon,\rho}_i(S^3\setminus K,\mathbb{F}[t^{\pm 1}])=0$ for $i\geq 3$. We next note that $S^3\setminus K$ is a $K(\pi,1)$-space, since $\mathbb{C}^2 \setminus \mathcal{A}$ is so, with $\pi=\pi_1(S^3\setminus K)$. Indeed, since $\mathcal{A}$ is defined by a (weighted) homogeneous polynomial, there is a global Milnor fibration $$F \hookrightarrow \mathbb{C}^2 \setminus \mathcal{A} \longrightarrow \mathbb{C}^*$$ whose fiber $F$ has the homotopy type of a wedge of circles. The long exact sequence of homotopy groups for this fibration then yields that $\pi_i( \mathbb{C}^2 \setminus \mathcal{A})=0$ for all $i \geq 2$. Since $S^3\setminus K$ is a $K(\pi,1)$-space, its (twisted) homology can be computed from its (twisted) group homology using Fox calculus (this was the starting point for Wada's construction of twisted Alexander invariants \cite{W}). So the twisted chain complex of $S^3 \setminus K$ can be identified with the complex of Fox derivatives for the presentation $$\pi_1(S^3 \setminus K) \cong \langle x_0,x_1,\cdots,x_{d-1} \ \vert \ x_0x_ix_0^{-1}x_i^{-1}, i=1,\cdots,d-1 \rangle$$ of Lemma \ref{l1}, and it has the form: $$0\longrightarrow \mathbb{F}[t^{\pm 1}]^{\ell(d-1)} \overset{\partial_2}{\longrightarrow} \mathbb{F}[t^{\pm 1}]^{\ell d} \overset{\partial_1}{\longrightarrow} \mathbb{F}[t^{\pm 1}]^\ell \longrightarrow 0.$$ In particular, as in \cite[Section 4]{KL}, we have that $\partial_1$ is the column matrix with $i$-th entry given by $$\rho(x_i)t^{\varepsilon(x_i)} - Id,$$ which yields the desired description of $\Delta_0^{\varepsilon,\rho}$. Similarly, $\partial_2$ is a $(d-1)\times d$ matrix with entries in $M_{\ell}(\mathbb{F}[t^{\pm 1}])$ given by the matrix of Fox derivatives of the relations, tensored with $\mathbb{F}[t^{\pm 1}]^{\ell}$. Therefore, $\partial_2$ equals
{\footnotesize
\[ \left( \begin{array}{ccccc} Id-\rho(x_1)t^{\varepsilon(x_1)} & \rho(x_0)t^{\varepsilon(x_0)}-Id & 0 & \cdots & 0 \\ Id-\rho(x_2)t^{\varepsilon(x_2)} & 0 & \rho(x_0)t^{\varepsilon(x_0)}-Id & \cdots & 0 \\ \vdots & \vdots& \vdots & \vdots & \vdots \\ Id-\rho(x_{d-2})t^{\varepsilon(x_{d-2})} & 0 & \cdots & \rho(x_0)t^{\varepsilon(x_0)}-Id & 0 \\ Id-\rho(x_{d-1})t^{\varepsilon(x_{d-1})} & 0 & \cdots & 0 & \rho(x_0)t^{\varepsilon(x_0)}-Id \end{array} \right)\]
}
Since, by our assumption, $\varepsilon(x_0)\neq 0$, this yields that $\ker(\partial_2)=0$. Therefore, $$H^{\varepsilon,\rho}_2(S^3\setminus K,\mathbb{F}[t^{\pm 1}])=0.$$ Also, since $\varepsilon$ is non-trivial, we get by Proposition \ref{p0} that $H^{\varepsilon,\rho}_0(S^3\setminus K,\mathbb{F}[t^{\pm 1}])$ is a torsion $\mathbb{F}[t^{\pm 1}]$-module. So, by using the fact that $$\chi(S^3\setminus K)= b_0-b_1+b_2 = 1-d+(d-1)= 0,$$ we obtain that $$\mathrm{rank}_{\mathbb{F}[t^{\pm 1}]}(H^{\varepsilon,\rho}_1(S^3\setminus K,\mathbb{F}[t^{\pm 1}])) = -\chi(S^3\setminus K) =0.$$ Hence the first twisted Alexander module $H^{\varepsilon,\rho}_1(S^3\setminus K,\mathbb{F}[t^{\pm 1}])$ is also torsion over $\mathbb{F}[t^{\pm 1}]$.
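For orientation, we record a minimal sanity check of these formulas (an illustration under the extra assumptions $d=2$, $\ell=1$, $\rho$ trivial and $\varepsilon(x_0)=\varepsilon(x_1)=1$; it is not needed for the argument). In this case the only relation is the commutator $x_0x_1x_0^{-1}x_1^{-1}$, and the boundary maps above become
\[ \partial_2=\left( \begin{array}{cc} 1-t & t-1 \end{array} \right), \qquad \partial_1=\left( \begin{array}{c} t-1 \\ t-1 \end{array} \right), \]
so that $\Delta_0^{\varepsilon,\rho}=\gcd(t-1,t-1)=t-1$ and $H^{\varepsilon,\rho}_1(S^3\setminus K,\mathbb{F}[t^{\pm 1}])\cong \mathbb{F}[t^{\pm 1}]/(t-1)$, i.e. $\Delta_1^{\varepsilon,\rho}=t-1$; this is consistent with item (d) of the statement, whose exponent $d-2$ vanishes here.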
Finally, by \cite[Theorem 4.1]{KL}, we get that $$\Delta_1^{\varepsilon,\rho}/\Delta_0^{\varepsilon,\rho} = \big(\det(\rho(x_0)t^{\varepsilon(x_0)}-Id)\big)^{d-2}.$$ \end{proof}
\subsubsection{Links of $A_{odd}$-singularities} Let $C=\{ x^2-y^{2n}=0\} \subset \mathbb{C}^2$, and fix $(\varepsilon,\rho)$ as before, with $\varepsilon$ non-trivial. The germ $(C,0)$ of $C$ at the origin of $\mathbb{C}^2$ is known as the ${A}_{2n-1}$-singularity. The curve $C$ is the union of two smooth curves which intersect non-transversely at the origin. Let $K \subset S^3$ be the link of $(C,0)$. Since the defining polynomial of $(C,0)$ is weighted homogeneous, it follows that $S^3 \setminus K \simeq \mathbb{C}^2 \setminus C$ fibers over $S^1 \simeq \mathbb{C}^*$, with fiber homotopy equivalent to a wedge of circles. In particular, $S^3 \setminus K$ is aspherical, so its twisted Alexander invariants can be computed by Fox calculus from a presentation of the fundamental group. By \cite{O1}, we have that $$\pi_1(S^3 \setminus K)\cong \pi_1(\mathbb{C}^2\setminus C) \cong G(2,2n) = \langle a_i,\beta \ \vert \ \beta = a_1a_0, R_1, R_2 \rangle,$$ where $$R_1: a_{i+2n} =a_i, \ i=0,...,2n-1, \ \ {\rm and} \ \ \ R_2: a_{i+2} = \beta^{-1}a_i\beta , \ i=0,...,2n-1.$$ So, explicitly, $$\pi_1(S^3 \setminus K)\cong \begin{array}{c} \langle a_0,a_1,...,a_{2n-1},\beta \ \vert \ a_1a_0\beta^{-1},\\ \beta a_2 \beta^{-1}a_0^{-1}, \beta a_4 \beta^{-1}a_2^{-1},...,\beta a_0 \beta^{-1} a_{2n-2}^{-1},\\ \beta a_3 \beta^{-1}a_1^{-1}, \beta a_5 \beta^{-1}a_3^{-1},...,\beta a_1 \beta^{-1} a_{2n-1}^{-1}\rangle \end{array}$$ By direct computation, it can be seen that in the corresponding twisted chain complex one has $\ker(\partial_2)=0$, so $H^{\varepsilon,\rho}_2(S^3\setminus K,\mathbb{F}[t^{\pm 1}])=0.$ Also, since $\varepsilon$ is non-trivial, we get by Proposition \ref{p0} that $H^{\varepsilon,\rho}_0(S^3\setminus K,\mathbb{F}[t^{\pm 1}])$ is a torsion $\mathbb{F}[t^{\pm 1}]$-module. An Euler characteristic argument similar to that of the previous example then yields that $H^{\varepsilon,\rho}_1(S^3\setminus K,\mathbb{F}[t^{\pm 1}])$ is a torsion $\mathbb{F}[t^{\pm 1}]$-module.
\section{Twisted Alexander invariants of plane curve complements}\label{tc}
Twisted Alexander invariants were ported to the study of plane algebraic curves by Cogolludo and Florens \cite{CF}, who showed that these twisted invariants can detect Zariski pairs which share the same (classical) Alexander polynomial. In this section, we study torsion properties of the twisted Alexander modules of plane curve complements and investigate the splitting fields containing the roots of the corresponding twisted Alexander polynomials. We focus here on homological invariants, while similar statements about their cohomological counterparts can be obtained via the universal coefficient theorem (\ref{uct}). Let $C$ be a reduced curve in $\mathbb{CP}^2$ of degree $d$ with $r$ irreducible components, and let $L$ be a line in $\mathbb{CP}^2$.
Set $${\mathcal U}:=\mathbb{CP}^2 \setminus (C\cup L) = \mathbb{C}^2 \setminus (C\setminus (C\cap L)),$$ where we use the natural identification of $\mathbb{C}^2$ with $\mathbb{CP}^2 \setminus L$. The line $L$ will usually be referred to as the {\it line at infinity}. Alternatively, let $f(x,y): \mathbb{C}^2 \rightarrow \mathbb{C}$ be a square-free polynomial of degree $d$ defining an affine plane curve $C^a:=\{f=0\}$. Let $C$ be the zero locus in $\mathbb{CP}^2$ (with homogeneous coordinates $x,y,z$) of the projectivization $\bar f$ of $f$, and let $L$ be given by $z=0$. Then ${\mathcal U}=\mathbb{C}^2 \setminus C^a$. Recall that $H_1({\mathcal U},\mathbb{Z})\cong \mathbb{Z}^r$, generated by homology classes $\nu_i$ of meridian loops $\gamma_i$ bounding transversal disks at a smooth point in each irreducible component of $C^a$. Let $n_1,\cdots,n_r$ be positive integers with $\gcd(n_1,\cdots,n_r)=1$. Let $ab: \pi_1 ({\mathcal U}) \to H_1({\mathcal U},\mathbb{Z})$ denote the abelianization map, sending $[\gamma_i]$ to $\nu_i$. Then the composition $$\varepsilon: \pi_1 ({\mathcal U}) \xrightarrow{ab} H_1({\mathcal U},\mathbb{Z}) \xrightarrow{\psi:\, \nu_i \mapsto n_i} \mathbb{Z}$$ defines an epimorphism. If all $n_i =1$, then $\varepsilon$ can be identified with the total linking number homomorphism $$lk: \pi_1 ({\mathcal U}) \xrightarrow{[\alpha] \mapsto lk(\alpha,C \cup -dL)} \mathbb{Z},$$ which is just the homomorphism $f_\#:\pi_1({\mathcal U}) \to \pi_1(\mathbb{C}^*)\cong \mathbb{Z}$ induced by the restriction of $f$ to ${\mathcal U}$ (e.g., see \cite[p.~77]{Di1}). Fix a finite dimensional $\mathbb{F}$-vector space $\mathbb{V}$ and a linear representation $\rho: \pi_1({\mathcal U}) \rightarrow GL(\mathbb{V}).$ As in Section \ref{def}, the $\mathbb{F}[t^{\pm 1}]$-module $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ is defined for any $i\geq 0$, and is called the {\it $i$-th (homological) twisted Alexander module of $C$ with respect to $L$}. The twisted Alexander modules associated to the total linking number homomorphism $lk$ will be denoted by $$H_i^{\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}]) := H^{lk,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}]).$$ In the case of the trivial representation, these further reduce to the classical Alexander modules, as originally studied in \cite{L82}. Note that since ${\mathcal U}$ is the complement of a plane affine curve, it is a complex $2$-dimensional Stein manifold. Therefore ${\mathcal U}$ has the homotopy type of a real $2$-dimensional finite CW-complex. Hence, $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}]) =0$ for $i \geq 3$, and $H^{\varepsilon,\rho}_2({\mathcal U},\mathbb{F}[t^{\pm 1}])$ is a free $\mathbb{F}[t^{\pm 1}]$-module. For $i=0,1$, the $\mathbb{F}[t^{\pm 1}]$-modules $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are of finite type, and in the next section we investigate their torsion properties.
\subsection{Torsion properties}\label{torc}
In this section, we use the above notations to prove the following result:
\begin{thm}\label{t1} Let $C$ be a reduced complex projective plane curve. If $C$ is irreducible and $\rho$ is abelian (i.e., the image of $\rho$ is abelian), or if $C$ is in general position at infinity (i.e., $C$ is transversal to the line at infinity $L$), then the twisted Alexander modules $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules, for $i=0,1$.
\end{thm}
\begin{proof} The claim about $H^{\varepsilon,\rho}_0({\mathcal U},\mathbb{F}[t^{\pm 1}])$ follows from Proposition \ref{p0} since $\varepsilon$ is non-trivial. If $C$ is irreducible and $\rho$ is abelian, it follows from \cite{L09} that the classical Alexander modules of an irreducible curve complement determine the twisted ones. So the claim follows in this case from \cite{L82}. Assume now that the line at infinity $L$ is transversal to the curve $C$, and let $d=\deg(C)$. Let $S^3_{\infty} \subset \mathbb{C}^2$ be a sphere of sufficiently large radius. Then the link of $C$ at infinity, $K_{\infty} = S^3_{\infty} \cap C$, is the Hopf link on $d$ components, as described in Section \ref{Hopf}. Let $i:S^3_{\infty} \setminus K_{\infty} \hookrightarrow {\mathcal U}$ denote the inclusion map. Then by \cite[Lemma 5.2]{L82}, the induced homomorphism $$\pi_1(S^3_{\infty} \setminus K_{\infty})\cong \langle x_0,x_1,\cdots ,x_d \ \vert \ x_dx_{d-1}\cdots x_1 x_0^{-1}, \ x_0x_ix_0^{-1}x_i^{-1}, i=1,\cdots,d\rangle \overset{i_\#}{\longrightarrow} \pi_1({\mathcal U})$$ is surjective. Moreover, as in \cite[Section 7]{L82}, the groups $\pi_1({\mathcal U})$ and $\pi_1(S^3_{\infty} \setminus K_{\infty})$ have the same generators, while the relations in $\pi_1({\mathcal U})$ are those of $\pi_1(S^3_{\infty} \setminus K_{\infty})$ together with relations describing the monodromy about exceptional lines by using the Zariski--van Kampen method. Therefore, $\varepsilon \circ i_\#= \varepsilon$ and $\rho\circ i_\# = \rho$ (as this can be checked on generators). Up to homotopy, ${\mathcal U}$ is obtained from $S^3_{\infty} \setminus K_{\infty}$ by attaching cells of dimension $\geq 2$. So the homomorphism $$H^{\varepsilon,\rho}_k(S^3_{\infty} \setminus K_{\infty},\mathbb{F}[t^{\pm 1}]) \longrightarrow H^{\varepsilon,\rho}_k({\mathcal U},\mathbb{F}[t^{\pm 1}])$$ induced by the inclusion map $i$ is an isomorphism for $k=0$, and an epimorphism for $k=1$. Here, $H^{\varepsilon,\rho}_k(S^3_{\infty} \setminus K_{\infty},\mathbb{F}[t^{\pm 1}])$ is defined with respect to the pair $(\varepsilon \circ i_\#= \varepsilon, \rho\circ i_\# = \rho)$ induced by the inclusion map $i$. As a consequence, in order to conclude that $H^{\varepsilon,\rho}_1({\mathcal U},\mathbb{F}[t^{\pm 1}])$ is an $\mathbb{F}[t^{\pm 1}]$-torsion module, it suffices by Proposition \ref{p1} to show that $\varepsilon\circ i_\# (x_0) =\varepsilon(x_0)\neq 0$. We have a commutative diagram: $$\xymatrix{ \pi_1(S^3_{\infty} \setminus K_{\infty})\ar[d]^{ab} \ar[r]^{i_\#} & \pi_1({\mathcal U})\ar[r]^{\varepsilon} \ar[d]^{ab} & \mathbb{Z} \\ H_1(S^3_{\infty} \setminus K_{\infty},\mathbb{Z}) \ar[r]^{i_*} & H_1({\mathcal U},\mathbb{Z}) \ar[ru]^{\psi} }$$ So, $\varepsilon\circ i_\# = \psi\circ i_*\circ ab$, hence it is enough to understand the maps $ab$ and $i_*$. Recall that the Hopf link complement $S^3_{\infty} \setminus K_{\infty}$ is homotopy equivalent to the complement $\mathbb{C}^2 \setminus \mathcal{A}$ of a central line arrangement $\mathcal{A}$ of $d$ lines in $\mathbb{C}^2$. So $$H_1(S^3_{\infty} \setminus K_{\infty},\mathbb{Z}) \cong \mathbb{Z}^d = \langle \mu_1,...,\mu_d \rangle,$$ where $\mu_k$ is the homology class of the meridian about the line $l_k\subset \mathcal{A}$.
Moreover, $ab(x_k) = \mu_k$ for $k=1,\cdots,d$, hence $$ab(x_0) = \mu_1+\cdots+\mu_d.$$ On the other hand, $H_1({\mathcal U},\mathbb{Z}) = \mathbb{Z}^r$, generated by the homology classes $\nu_l$ of the meridians about each irreducible component of $C^a$. Since $\mathcal{A}$ is defined by the homogeneous part of the defining equation of $C^a$, it is clear that $i_*$ takes each $\mu_k$ to one of the $\nu_l$'s. In fact, exactly $d_l$ of the $\mu_k$'s are being mapped by $i_*$ to $\nu_l$, where $d_l$ is the degree of the component $C_l$ of $C$. Finally, since $\psi(\nu_l) =n_l$, for all $k\geq 1$ we have that $\varepsilon\circ i_\#(x_k) =n_{l_j}$ for some $l_j$, and $$\varepsilon\circ i_\#(x_0) = \psi\circ i_* (\mu_1+\cdots+\mu_d) =\sum_{l=1}^r d_ln_l> 0.$$ This concludes the proof of the fact that $H^{\varepsilon,\rho}_1({\mathcal U},\mathbb{F}[t^{\pm 1}])$ is a finitely generated $\mathbb{F}[t^{\pm 1}]$-torsion module. \end{proof}
\begin{rem} The above result will be generalized in Theorem \ref{t3} to arbitrary (possibly non-reduced) hypersurfaces. The reason for stating it in this section is our study of splitting fields containing the roots of the associated twisted Alexander polynomials, see Theorem \ref{t2}. \end{rem}
As a consequence of Theorem \ref{t1} and Proposition \ref{p1}, we obtain the following:
\begin{cor}\label{c1} If $C$ is a reduced curve of degree $d$ in general position at infinity, then the first twisted Alexander polynomial $\Delta_{1,{\mathcal U}}^{\varepsilon,\rho}(t)$ of ${\mathcal U}$ divides
{\footnotesize
$$\gcd(\det(\rho(x_0)t^{\sum_{l=1}^r d_ln_l}-Id), \det(\rho(x_1)t^{n_{l_1}}-Id),\cdots,\det(\rho(x_{d-1})t^{n_{l_{d-1}}}-Id))\cdot(\det(\rho(x_0)t^{\sum_{l=1}^r d_ln_l}-Id))^{d-2}.$$
}
In particular, if $\varepsilon = lk$, then $\Delta_{1,{\mathcal U}}^{\rho}(t)$ divides $$\gcd(\det(\rho(x_0)t^d-Id),\det(\rho(x_{1})t-Id),\cdots,\det(\rho(x_{d-1})t-Id))\cdot(\det(\rho(x_0)t^d-Id))^{d-2}.$$ \end{cor}
\begin{rem} For curves in general position at infinity, Corollary \ref{c1} generalizes Libgober's divisibility result \cite[Theorem 2]{L82}, which states that the Alexander polynomial $\Delta_{1,{\mathcal U}}(t):=\Delta_{1,{\mathcal U}}^{lk,triv}(t)$ of $C$ divides the Alexander polynomial of the link at infinity, which is given by $(t-1)(t^d-1)^{d-2}$. \end{rem}
\subsection{Roots of twisted Alexander polynomials}\label{rc}
In \cite[Theorem 5.4]{L09}, Libgober used Hodge theory to show that for an irreducible curve $C$, and for $\rho$ a unitary representation, the roots of the first twisted Alexander polynomial of $C$ are in a cyclotomic extension of the field generated by the rationals and the eigenvalues of $\rho(\gamma)$, where $\gamma$ is a meridian about $C$ at a non-singular point. Libgober's result does not touch upon the extension degree. In this section, we give a topological proof of Libgober's result, and identify this cyclotomic extension in an explicit way.
\begin{thm}\label{t2} Let $C$ be a reduced curve of degree $d$ and assume that $C$ is in general position at infinity. Denote by $x_0$ the (homotopy class of the) meridian about the line $L$ at infinity. Suppose $\mathbb{F} = \mathbb{C}$, and denote the eigenvalues of $\rho(x_0)^{-1}$ by $\lambda_1,\cdots,\lambda_{\ell}$.
Then the roots of $\Delta_{1,{\mathcal U}}^{\rho}(t)$ lie in the splitting field $\mathbb{S}$ of $\prod_{i=1}^{\ell} (t^d-\lambda_i)$ over $\mathbb{Q}$, which is cyclotomic over $\mathbb{K}=\mathbb{Q}(\lambda_1,\cdots,\lambda_{\ell})$. \end{thm}
\begin{proof} Let us denote as before by $x_1,\cdots,x_d$ the (homotopy classes of) meridians about the irreducible components of $C$. If there is no common eigenvalue for all of $\rho(x_1),\cdots,\rho(x_d)$, then Corollary \ref{c1} yields that $\Delta_{1,{\mathcal U}}^{\rho}(t)$ divides $(\det(\rho(x_0)t^d-Id))^{d-2}.$ In particular, the prime factors of $\Delta_{1,{\mathcal U}}^{\rho}(t)$ are among the prime factors of $\det(\rho(x_0)t^d-Id)$. Let $p(t)$ be the characteristic polynomial of $\rho(x_0)^{-1}$. Then: $$\det(\rho(x_0)t^d-Id) = (-1)^r\det(\rho(x_0))\cdot p(t^d)=(-1)^r\det(\rho(x_0))\cdot (t^d-\lambda_1)\cdots (t^d-\lambda_{\ell}).$$ Therefore, the roots of $\Delta_{1,{\mathcal U}}^{\rho}(t)$ are contained in the splitting field $\mathbb{S}$ of $\prod_{i=1}^{\ell} (t^d-\lambda_i)$ over $\mathbb{Q}$. If $\alpha$ is a common eigenvalue of all matrices $\rho(x_1),\cdots,\rho(x_d)$, then one of the eigenvalues of $\rho(x_0)=\rho(x_d)\rho(x_{d-1})\cdots\rho(x_1)$ is $\alpha^d$. Without loss of generality, assume that $\alpha^d=\lambda_1^{-1}$. Then $\alpha\in \mathbb{S}$. \end{proof}
\section{Twisted Alexander invariants of complex hypersurface complements}\label{th}
In this section, we generalize the above results to the context of complex hypersurfaces with arbitrary singularities. We study the torsion properties of the associated twisted Alexander modules, and compute their corresponding twisted Alexander polynomials in terms of local topological data encoded by the singularities.
\subsection{Definitions}
Let $V$ be a (not necessarily reduced) degree $d$ hypersurface in $\mathbb{CP}^{n+1}$ ($n \geq 1$) and let $H$ be a hyperplane in $\mathbb{CP}^{n+1}$, called the ``hyperplane at infinity''. Let $${\mathcal U}:=\mathbb{CP}^{n+1} \setminus (V\cup H) = \mathbb{C}^{n+1} \setminus V^a,$$ where $V^a \subset \mathbb{C}^{n+1} =\mathbb{CP}^{n+1} \setminus H$ denotes the affine part of $V$. Alternatively, if $f(z_1,\cdots,z_{n+1}): \mathbb{C}^{n+1} \rightarrow \mathbb{C}$ is a polynomial of degree $d$, then $V^a=\{f=0\}$ and $V \subset \mathbb{CP}^{n+1}$ is the projectivization of $V^a$, with $H$ given by $z_0=0$. Assume that the underlying reduced hypersurface $V_{red}$ of $V$ has $r$ irreducible components $V_1, \cdots, V_r$, with $d_i=\deg(V_i)$ for $i=1,\cdots,r$. Then $$H_1({\mathcal U},\mathbb{Z}) \cong \mathbb{Z}^r,$$ generated by the homology classes $\nu_i$ of meridians $\gamma_i$ about the irreducible components $V_i$ of $V_{red}$ (e.g., see \cite{Di1}, (4.1.3), (4.1.4)).
Moreover, if $\gamma_{\infty}$ denotes the meridian loop in ${\mathcal U}$ about the hyperplane $H$ at infinity, with homology class $\nu_{\infty}$, then the following relation holds in $H_1({\mathcal U},\mathbb{Z})$: \begin{equation}\label{relinf} \nu_{\infty}+\sum_{i=1}^r d_i \nu_i=0.\end{equation} Let $n_1,\cdots,n_r$ be positive integers with $\gcd(n_1,\cdots,n_r)=1$, and define the epimorphism $\varepsilon:\pi_1 ({\mathcal U}) \to \mathbb{Z}$ by the composition $$\varepsilon: \pi_1 ({\mathcal U}) \xrightarrow{ab} H_1({\mathcal U},\mathbb{Z}) \xrightarrow{\nu_i \mapsto n_i} \mathbb{Z}.$$ Note that if the defining equation $f$ of the affine hypersurface $V^a$ has an irreducible decomposition given by $f=f_1^{n_1}\cdots f_r^{n_r}$, then $\varepsilon$ coincides with the homomorphism $f_\#:\pi_1({\mathcal U}) \to \pi_1(\mathbb{C}^*)\cong \mathbb{Z}$ induced by the restriction of $f$ to ${\mathcal U}$, or equivalently, with the {\it total linking number homomorphism} (cf. \cite[pp.~76-77]{Di1}): $$lk: \pi_1 ({\mathcal U}) \xrightarrow{[\alpha] \mapsto lk(\alpha,V \cup -dH)} \mathbb{Z}.$$ Fix a finite dimensional $\mathbb{F}$-vector space $\mathbb{V}$ and a linear representation $\rho: \pi_1({\mathcal U}) \rightarrow GL(\mathbb{V}).$ As in Section \ref{def}, the $\mathbb{F}[t^{\pm 1}]$-modules $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ and $H^i_{\varepsilon,\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are defined for any $i\geq 0$, and are called the {\it $i$-th (co)homological twisted Alexander modules of $V$ with respect to the hyperplane at infinity $H$}. The twisted Alexander modules associated to the total linking number homomorphism $lk$ will be denoted by $$H_i^{\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}]) := H^{lk,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}]),$$ and similarly for their cohomology counterparts $H^i_{\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}])$. In the case of the trivial representation, these further reduce to the classical Alexander modules, as studied e.g., in \cite{M06}, \cite{DL} and \cite{Liu}. Note that, since ${\mathcal U}$ is the complement of a complex $n$-dimensional affine hypersurface, it is an $(n+1)$-dimensional affine variety, hence it has the homotopy type of a finite CW-complex of real dimension $n+1$ (e.g., see \cite{Di1}, (1.6.7), (1.6.8)). Therefore, $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}]) =0$ for $i\geq n+2$, $H^{\varepsilon,\rho}_{n+1}({\mathcal U},\mathbb{F}[t^{\pm 1}])$ is a free $\mathbb{F}[t^{\pm 1}]$-module, and the $\mathbb{F}[t^{\pm 1}]$-modules $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are of finite type for $0\leq i \leq n$. In the next sections, we investigate torsion properties of the latter.
\subsection{Torsion properties}\label{torh}
In the notations of the previous section, we say that the hypersurface $V \subset \mathbb{CP}^{n+1}$ is {\it in general position (with respect to the hyperplane $H$) at infinity} if the reduced underlying variety $V_{red}$ is transversal to $H$ in the stratified sense. The main result of this section is the following high-dimensional generalization of Theorem \ref{t1}:
\begin{thm}\label{t3} If the hypersurface $V \subset \mathbb{CP}^{n+1}$ is in general position at infinity, then for any $0\leq i \leq n$ the twisted Alexander modules $H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules.
\end{thm}
In order to prove Theorem \ref{t3}, we need to introduce some notations and develop some prerequisites.
\medskip
Let $S^{2n+1}_{\infty}$ be a $(2n+1)$-sphere in $\mathbb{C}^{n+1}$ of a sufficiently large radius (that is, the boundary of a small tubular neighborhood in $\mathbb{CP}^{n+1}$ of the hyperplane $H$ at infinity). Denote by $$K_{\infty}=S^{2n+1}_{\infty} \cap V^a$$ the {\it link of $V^a$ at infinity}, and by $${\mathcal U}^{\infty}=S^{2n+1}_{\infty} \setminus K_{\infty}$$ its complement in $S^{2n+1}_{\infty}$. Note that ${\mathcal U}^{\infty}$ is homotopy equivalent to $T(H) \setminus (V \cup H)$, where $T(H)$ is the tubular neighborhood of $H$ in $\mathbb{CP}^{n+1}$ for which $S^{2n+1}_{\infty}$ is the boundary. Then a classical argument based on the Lefschetz hyperplane theorem yields that the homomorphism $$\pi_i({\mathcal U}^{\infty}) \longrightarrow \pi_i({\mathcal U})$$ induced by inclusion is an isomorphism for $i < n$ and it is surjective for $i=n$; see \cite[Section 4.1]{DL} for more details. It follows that \begin{equation}\label{eq} \pi_i({\mathcal U},{\mathcal U}^{\infty})=0 \ \ \ {\rm for \ all} \ \ i \leq n,\end{equation} hence ${\mathcal U}$ has the homotopy type of a CW complex obtained from ${\mathcal U}^{\infty}$ by adding cells of dimension $\geq n+1$. We denote by $(\varepsilon_{\infty},\rho_{\infty})$ the epimorphism and resp. representation on $\pi_1({\mathcal U}^{\infty})$ induced by composing $(\varepsilon,\rho)$ with the homomorphism $\pi_1({\mathcal U}^{\infty}) \to \pi_1({\mathcal U})$. Hence the {\it twisted Alexander modules of $V$ at infinity}, $H^{\varepsilon_{\infty},\rho_{\infty}}_i({\mathcal U}^{\infty},\mathbb{F}[t^{\pm 1}])$, can be defined (and similarly for the corresponding cohomology modules). Then the fact that twisted Alexander modules are homotopy invariants yields the following:
\begin{prop}\label{p3} The inclusion map ${\mathcal U}^{\infty} \hookrightarrow {\mathcal U}$ induces $\mathbb{F}[t^{\pm 1}]$-module isomorphisms $$H^{\varepsilon_{\infty},\rho_{\infty}}_i({\mathcal U}^{\infty},\mathbb{F}[t^{\pm 1}]) \overset{\cong}{\longrightarrow} H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$$ for any $i<n$, and an epimorphism of $\mathbb{F}[t^{\pm 1}]$-modules $$H^{\varepsilon_{\infty},\rho_{\infty}}_n({\mathcal U}^{\infty},\mathbb{F}[t^{\pm 1}]) \twoheadrightarrow H^{\varepsilon,\rho}_n({\mathcal U},\mathbb{F}[t^{\pm 1}]).$$ \end{prop}
\begin{cor}\label{c2} For any $0 \leq i\leq n$, if $H^{\varepsilon_{\infty},\rho_{\infty}}_i({\mathcal U}^{\infty},\mathbb{F}[t^{\pm 1}])$ is a torsion $\mathbb{F}[t^{\pm 1}]$-module, then so is $ H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}])$. \end{cor}
\medskip
Let us now assume that the projective hypersurface $V$ is in general position at infinity, i.e., $V_{red}$ is transversal in the stratified sense to the hyperplane at infinity $H$. Then the complement of the link at infinity ${\mathcal U}^{\infty}$ is a circle fibration over $H \setminus (V \cap H)$, which is homotopy equivalent to the complement in $\mathbb{C}^{n+1}$ of the affine cone over the projective hypersurface $V\cap H \subset H=\mathbb{CP}^{n}$ (for a similar argument see \cite[Section~4.1]{DL}). Hence, by the Milnor fibration theorem (e.g., see \cite[(3.1.9), (3.1.11)]{Di1}), ${\mathcal U}^{\infty}$ fibers over $\mathbb{C}^* \simeq S^1$, with fiber homotopy equivalent to a finite $n$-dimensional CW-complex.
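(For orientation, note that in the plane curve case $n=1$, with $V$ reduced and transversal to $H$, this specializes to the situation of Section \ref{Hopf}: the link at infinity is the Hopf link on $d$ components, ${\mathcal U}^{\infty}\simeq \mathbb{C}^2\setminus\mathcal{A}$ for the central arrangement $\mathcal{A}=\{x^d=y^d\}$, and the fibration in question is the global Milnor fibration
$$F \hookrightarrow \mathbb{C}^2\setminus\mathcal{A} \longrightarrow \mathbb{C}^*, \qquad F\simeq \bigvee S^1,$$
with fiber a wedge of circles, as recalled in the proof of Proposition \ref{p1}.)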
Moreover, it is known that this fiber is also homotopy equivalent to the infinite cyclic cover of ${\mathcal U}^{\infty}$ defined by the kernel of the total linking number homomorphism defined with respect to $V^a$. We can now complete the proof of Theorem \ref{t3}.
\begin{proof} By the above Corollary \ref{c2}, it suffices to prove that for any $0\leq i \leq n$, the $\mathbb{F}[t^{\pm 1}]$-module $H^{\varepsilon_{\infty},\rho_{\infty}}_i({\mathcal U}^{\infty},\mathbb{F}[t^{\pm 1}])$ is torsion. The idea is to replace $V^a$ by another affine hypersurface $X$ with the same underlying reduced structure, hence also the same complement ${\mathcal U}$, so that $\varepsilon$ becomes the homomorphism defined by the total linking number with $X$. Let $f_1\cdots f_r=0$ be a square-free polynomial equation defining $V^a_{red}$, the reduced affine hypersurface underlying $V^a=V \setminus H$. Recall that if $\gamma_i$ is the meridian about the irreducible component $f_i=0$, then by definition we have that $\varepsilon([\gamma_i])=n_i$. Let us now consider the polynomial $g={f_1}^{n_1}\cdots {f_r}^{n_r}$ on $\mathbb{C}^{n+1}$ defining an affine hypersurface $$X=\{g=0\},$$ and replace $V$ by the projective hypersurface $\overline{X}$ defined by the projectivization of $g$. Clearly, the underlying reduced hypersurface $X_{red}$ coincides with $V^a_{red}$, so $X$ and $V^a$ have the same complement $${\mathcal U}:=\mathbb{C}^{n+1}\setminus V^a=\mathbb{C}^{n+1} \setminus X.$$ Moreover, the given homomorphism $\varepsilon:\pi_1({\mathcal U})\to \mathbb{Z}$ (hence also $\varepsilon_{\infty}:\pi_1({\mathcal U}^{\infty}) \to \mathbb{Z}$) coincides with the total linking number homomorphism defined with respect to $X$ (cf. \cite[pp.~76-77]{Di1}). Finally, since $V$ is in general position at infinity, so is $\overline{X}$, and the corresponding complements of the links at infinity coincide. Therefore, the complement of the link at infinity ${\mathcal U}^{\infty}$ admits a locally trivial topological fibration $$F \hookrightarrow {\mathcal U}^{\infty} \longrightarrow \mathbb{C}^*$$ whose fiber $F$ has the homotopy type of a finite $n$-dimensional CW-complex, and which is also homotopy equivalent to the infinite cyclic cover of ${\mathcal U}^{\infty}$ defined by the kernel of the linking number with respect to $X$ (i.e., by $\ker(\varepsilon_{\infty})$). Altogether, for any $0 \leq i\leq n$, we have: $$H^{\varepsilon_{\infty},\rho_{\infty}}_i({\mathcal U}^{\infty},\mathbb{F}[t^{\pm 1}]) \cong H_i(F, {\mathbb V}_{\rho_{\infty}}),$$ which is a finite dimensional $\mathbb{F}$-vector space, hence a torsion $\mathbb{F}[t^{\pm 1}]$-module. \end{proof}
As an immediate consequence of Theorem \ref{t3}, we have the following:
\begin{cor}\label{imc} If the hypersurface $V \subset \mathbb{CP}^{n+1}$ is in general position at infinity, then $$\mathrm{rank}_{\mathbb{F}[t^{\pm 1}]} H^{\varepsilon,\rho}_{n+1}({\mathcal U},\mathbb{F}[t^{\pm 1}])=(-1)^{n+1}\cdot \ell \cdot \chi({\mathcal U}),$$ with $\ell$ the rank of the representation $\rho$. \end{cor}
By (\ref{uct}) and Theorem \ref{t3}, we also deduce the following:
\begin{cor}\label{cst} If $V \subset \mathbb{CP}^{n+1}$ is a hypersurface in general position at infinity, then the cohomological twisted Alexander modules $H^i_{\varepsilon,\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules for any $0\leq i \leq n$.
\end{cor}
\begin{rem} If $V$ is in general position at infinity, and $\dim_{\mathbb{C}} Sing(V) \leq n-2$ (in which case $V$ is already irreducible), then $\pi_1({\mathcal U}) \cong \mathbb{Z}$ (e.g., see \cite[Lemma 1.5]{L94}). So in this case, the representation $\rho$ is abelian, and the twisted Alexander invariants of ${\mathcal U}$ are determined by the classical ones (already studied in \cite{M06,DL,Liu}). Results of this paper are particularly interesting for hypersurfaces with singularities in codimension one (e.g., hyperplane arrangements) and non-abelian representations. \end{rem}
\subsection{Local twisted Alexander invariants}\label{loca}
For each point $x \in V$, consider the local complement $${\mathcal U}_x:={\mathcal U} \cap B_x,$$ for $B_x$ a small open ball about $x$ in $\mathbb{CP}^{n+1}$ chosen so that $(V,x)$ has a conic structure in $\overline{B}_x$. Let $$\varepsilon_x:\pi_1({\mathcal U}_x) \overset{(i_x)_{\#}}{\longrightarrow} \pi_1({\mathcal U}) \overset{\varepsilon}{\longrightarrow}\mathbb{Z}$$ and $$\rho_x:\pi_1({\mathcal U}_x) \overset{(i_x)_{\#}}{\longrightarrow} \pi_1({\mathcal U}) \overset{\rho}{\longrightarrow}GL({\mathbb V})=GL_{\ell}(\mathbb F)$$ be induced by the inclusion $i_x:{\mathcal U}_x \hookrightarrow {\mathcal U}$. The corresponding {\it local} (co)homological twisted Alexander modules $H_k^{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])$ and $H^k_{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])$ inherit $\mathbb{F}[t^{\pm 1}]$-module structures.
\begin{rem} Note that $\varepsilon_x$ is not necessarily onto, so the infinite cyclic cover of ${\mathcal U}_x$ defined by $\ker(\varepsilon_x)$ may be disconnected. \end{rem}
\begin{df}\label{ac2} We say that $(\varepsilon,\rho)$ is {\it acyclic at $x \in V$} if $(\varepsilon_x,\rho_x)$ is acyclic in the sense of Definition \ref{ac}, i.e., if $H_k^{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])$ are torsion $\mathbb{F}[t^{\pm 1}]$-modules for all $k \in \mathbb{Z}$. (Note that by the UCT (\ref{uct}), this condition implies that the local cohomological twisted Alexander modules are torsion as well.) We say that $(\varepsilon,\rho)$ is {\it locally acyclic} along a subset $Y \subseteq V$ if $(\varepsilon,\rho)$ is acyclic at any point $x \in Y$. \end{df}
The next result provides one important geometric example of local acyclicity.
\begin{prop}\label{ploc} Let $V \subset \mathbb{CP}^{n+1}$ be a degree $d$ projective hypersurface in general position at infinity. Then $(\varepsilon,\rho)$ is locally acyclic along $V$, for any nontrivial epimorphism $\varepsilon:\pi_1({\mathcal U}) \to \mathbb{Z}$ and any representation $\rho:\pi_1({\mathcal U})\to GL({\mathbb V})$.\end{prop}
\begin{proof} As in the proof of Theorem \ref{t3}, after replacing $V^a$ (resp. $V$) with an affine hypersurface $X$ (resp., with its projectivization $\overline{X}$) with the same underlying reduced structure, hence also preserving the (local) complements, we can assume without loss of generality (and without changing the notations) that $\varepsilon$ is the total linking number homomorphism $lk$. Therefore, for any $x \in V$, the local homomorphism $\varepsilon_x$ becomes $lk_x:=lk \circ (i_x)_{\#}.$ Denote by ${\mathcal U}_{x,\infty}$ the infinite cyclic cover of ${\mathcal U}_x$ defined by $\ker(lk_x)$.
Let ${\mathcal U}'=\mathbb{CP}^{n+1} \setminus V$, and for any point $x\in V$ let ${\mathcal U}'_x:={\mathcal U}' \cap B_x$, for $B_x$ denoting as before a small open ball about $x$ in $\mathbb{CP}^{n+1}$ for which $(V,x)$ has a conic structure in $\overline{B}_x$. Let $S_x:=\partial \overline{B}_x$, with $K_x:=V \cap S_x$ denoting the corresponding link of $(V,x)$. Note that ${\mathcal U}'_x$ is homotopy equivalent to the link complement $S_x \setminus K_x$. Moreover, since $K_x$ is an algebraic link, the Milnor fibration theorem implies that the complement $S_x \setminus K_x$ fibers over a circle, with (Milnor) fiber $F_x$ homotopy equivalent to a finite CW-complex. It is also known that $F_x$ is homotopy equivalent to the infinite cyclic cover of $S_x \setminus K_x$ defined by the linking number with respect to $K_x$. For future reference, let us denote by $lk'_x$ the epimorphism on $\pi_1(S_x \setminus K_x)\cong \pi_1({\mathcal U}'_x)$ defined by the total linking number with $K_x$. If $x \in V \setminus H$, then ${\mathcal U}_x={\mathcal U}'_x \simeq S_x \setminus K_x$, so in this case $$H_k^{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])=H_k^{lk_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}]) \cong H_k({\mathcal U}_{x,\infty},{\mathbb V}_{\rho_x}) \cong H_k(F_x,{\mathbb V}_{\rho_x})$$ is a finite dimensional $\mathbb F$-vector space, hence a torsion $\mathbb{F}[t^{\pm 1}]$-module for any $k \in \mathbb{Z}$. If $x \in V \cap H$, then by the transversality assumption we have that ${\mathcal U}_x \simeq {\mathcal U}'_x \times S^1$, with the restrictions of $lk_x$ to the factors of this product described as follows: on $\pi_1({\mathcal U}'_x)$, $lk_x$ restricts to the homomorphism $lk'_x$ defined by the linking number with $K_x$ (this is, of course, the same as $lk_{x'}$ at a nearby point $x'\in V \setminus H$ in the same stratum as $x$), while on $\pi_1(S^1)$, it can be seen from (\ref{relinf}) that $lk_x$ acts by sending the generator (which coincides with the homotopy class of the meridian loop $\gamma_{\infty}$ about $H$) to ${-d}$. The acyclicity at $x \in V \cap H$ then follows by the K\"unneth formula, since the homotopy factors of ${\mathcal U}_x$, endowed with the corresponding homomorphisms and representations induced from the pair $(lk_x,\rho_x)$, are acyclic. \end{proof}
\subsection{Sheaf (co)homology interpretation of twisted Alexander modules}\label{sh}
For the remainder of the paper, we will employ the language of perverse sheaves for relating local and global properties of twisted Alexander invariants. For this purpose, we first rephrase the definition of twisted Alexander modules as the (co)homology of a certain local system defined on the complement ${\mathcal U}$. Let ${\mathcal L}$ be the local system of $\mathbb{F}[t^{\pm 1}]$-modules on ${\mathcal U}$, with stalk $\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V}$, and action of the fundamental group $$\pi_1({\mathcal U}) \longrightarrow Aut(\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V})\cong GL_{\ell}(\mathbb{F}[t^{\pm 1}])$$ given by $$[\alpha] \mapsto t^{\varepsilon(\alpha)} \otimes \rho(\alpha).$$ (Here $\ell$ denotes as before the rank of the representation $\rho$.)
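To fix ideas, we record the simplest instance of this construction (an illustration only): if $\ell=1$ and $\rho$ is the trivial representation, then ${\mathcal L}$ is the rank-one local system of $\mathbb{F}[t^{\pm 1}]$-modules with monodromy
$$\pi_1({\mathcal U}) \longrightarrow GL_1(\mathbb{F}[t^{\pm 1}]), \qquad [\alpha] \longmapsto t^{\varepsilon(\alpha)},$$
so that $H_i({\mathcal U},{\mathcal L})$ recovers the classical (untwisted) Alexander modules of ${\mathcal U}$ associated to $\varepsilon$, as in the preceding sections.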
Then it is clear from the definition of the homological twisted Alexander modules that we have the following isomorphism of $\mathbb{F}[t^{\pm 1}]$-modules: \begin{equation}\label{hsh} H^{\varepsilon,\rho}_i({\mathcal U},\mathbb{F}[t^{\pm 1}]) \cong H_i({\mathcal U},{\mathcal L}). \end{equation} Note that $\mathbb{F}[t^{\pm 1}]$ has a natural involution, denoted by $\bar{\ \cdot \ }$ and defined by $t \mapsto t^{-1}$. Let ${\mathcal L}^{\vee}$ be the local system on ${\mathcal U}$ dual to ${\mathcal L}$. Then ${\mathcal L}^{\vee}\cong \bar{{\mathcal L}}$, where $\bar{{\mathcal L}}$ is the local system of $\mathbb{F}[t^{\pm 1}]$-modules on ${\mathcal U}$, with stalk $\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V}^{\vee}$, but with the $\mathbb{F}[t^{\pm 1}]$-module structure composed with the above involution. Then it follows as in \cite[p.~638]{KL} that the cohomological twisted Alexander modules of $({\mathcal U},\varepsilon,\rho)$ can be realized as: \begin{equation}\label{csh} H^i_{\varepsilon,\rho}({\mathcal U},\mathbb{F}[t^{\pm 1}]) \cong H^i({\mathcal U},{\mathcal L}^{\vee}). \end{equation} If $x \in V$, let $i_x:{\mathcal U}_x:={\mathcal U} \cap B_x \hookrightarrow {\mathcal U}$ denote the inclusion of the local complement at $x$, with corresponding induced local pair $(\varepsilon_x,\rho_x)$ as in Section \ref{loca}. Let ${\mathcal L}_x:=i_x^*{\mathcal L}$ be the restriction of the local system ${\mathcal L}$ to ${\mathcal U}_x$, i.e., ${\mathcal L}_x$ is defined via the action of $(\varepsilon_x,\rho_x)$. Then, for any $k \in \mathbb{Z}$, the local $k$-th (co)homological twisted Alexander modules at $x$ can be described as: $$H_k^{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])\cong H_k({\mathcal U}_x,{\mathcal L}_x) \ \text{ and } \ H^k_{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])\cong H^k({\mathcal U}_x,{\mathcal L}_x^{\vee}).$$
\subsection{Local-to-global analysis. Divisibility results}\label{lg}
In this section, we assume that the projective hypersurface $V$ is in general position at infinity. By Theorem \ref{t3} and Corollary \ref{cst}, the (co)homological twisted Alexander modules $H^{\varepsilon,\rho}_{i}({\mathcal U},\mathbb{F}[t^{\pm 1}])$, resp. $H_{\varepsilon,\rho}^{i}({\mathcal U},\mathbb{F}[t^{\pm 1}])$, are torsion $\mathbb{F}[t^{\pm 1}]$-modules for any $0 \leq i \leq n$. Following Definition \ref{tap}, we denote by $\Delta^{\varepsilon,\rho}_{i,{\mathcal U}}(t)$, resp., $\Delta_{\varepsilon,\rho,{\mathcal U}}^{i}(t)$, with $0\leq i \leq n$, the corresponding twisted Alexander polynomials. The sheaf theoretic realization of twisted Alexander modules in Section \ref{sh} allows us to use perverse sheaves (or intersection homology) which, when coupled with homological algebra techniques, provide a concise relationship between the global twisted Alexander invariants of complex hypersurface complements and the corresponding local ones at singular points (respectively, at infinity). For simplicity of exposition, we choose to formulate our results in this section in cohomological terms, but see also Remark \ref{finrem} below. Our approach is similar to \cite[Section 3]{DM}.
\medskip
We work with sheaves of $\mathbb{F}[t^{\pm 1}]$-modules.
For a topological space $Y$, we denote by $D^b_c(Y;\mathbb{F}[t^{\pm 1}])$ the derived category of complexes of sheaves of $\mathbb{F}[t^{\pm 1}]$-modules on $Y$ with constructible cohomology, and we let ${\it Perv}(Y)$ be the abelian category of perverse sheaves of $\mathbb{F}[t^{\pm 1}]$-modules on $Y$.
\medskip
The first result of this section singles out the contribution of the loop ``at infinity'' $\gamma_{\infty}$ to the global twisted Alexander invariants, and it can be regarded as a high-dimensional generalization (and for arbitrary singularities) of Corollary \ref{c1}, where $\gamma_{\infty}$ plays the role of $x_0^{-1}$ in loc.~cit.:
\begin{thm}\label{t4} Let $V \subset \mathbb{CP}^{n+1}$ be a projective hypersurface in general position (with respect to the hyperplane $H$) at infinity, with complement ${\mathcal U}=\mathbb{CP}^{n+1}\setminus (V \cup H)$. Fix a non-trivial epimorphism $\varepsilon:\pi_1({\mathcal U}) \to \mathbb{Z}$ and a rank $\ell$ representation $\rho:\pi_1({\mathcal U}) \to GL({\mathbb V})$. Then, for any $0 \leq i \leq n$, the zeros of the global cohomological Alexander polynomial $\Delta_{\varepsilon,\rho,{\mathcal U}}^{i}(t)$ are among those of the order of the cokernel of the endomorphism $t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id \in {\it End}(\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V})$. \end{thm}
\begin{proof} Let ${\mathbb C}^{n+1}=\mathbb{CP}^{n+1} \setminus H$, and denote by $u:{\mathcal U} \hookrightarrow {\mathbb C}^{n+1}$ and $v: {\mathbb C}^{n+1} \hookrightarrow \mathbb{CP}^{n+1}$ the two inclusions. Since ${\mathcal U}$ is smooth and $(n+1)$-dimensional, and ${\mathcal L}^{\vee}$ is a local system on ${\mathcal U}$, it follows that ${\mathcal L}^{\vee}[n+1] \in {\it Perv}({\mathcal U})$. Moreover, since $u$ is a quasi-finite affine morphism, we also have that $${\mathcal F}^{\bullet}:=Ru_{\ast}(\mathcal{L}^{\vee}[n+1]) \in {\it Perv}({\mathbb C}^{n+1}),$$ e.g., see \cite[Theorem 6.0.4]{Sch}. But $\mathbb{C}^{n+1}$ is an affine $(n+1)$-dimensional variety, so by Artin's vanishing theorem for perverse sheaves (e.g., see \cite[Corollary 6.0.4]{Sch}), we obtain that: \begin{equation}\label{van1} \mathbb{H}^k({\mathbb C}^{n+1}, {\mathcal F}^{\bullet})=0, \ \text{for all} \ k>0,\end{equation} and \begin{equation}\label{van2} \mathbb{H}_c^k({\mathbb C}^{n+1}, {\mathcal F}^{\bullet})=0, \ \text{for all} \ k<0.\end{equation} Let $a:\mathbb{CP}^{n+1} \to point$ be the constant map. Then: \begin{equation}\label{i1} \mathbb{H}^k({\mathbb C}^{n+1}, {\mathcal F}^{\bullet})\cong H^{k+n+1}(\mathcal{U}, \mathcal{L}^{\vee})\cong H^k(Ra_{\ast}Rv_{\ast}{\mathcal F}^{\bullet}).\end{equation} Similarly, \begin{equation}\label{i2} \mathbb{H}_c^k({\mathbb C}^{n+1}, {\mathcal F}^{\bullet})\cong H^k(Ra_{!}Rv_{!}{\mathcal F}^{\bullet}),\end{equation} where the last equality follows since $a$ is a proper map, hence $Ra_!=Ra_{\ast}$. Consider the canonical morphism $Rv_{!}{\mathcal F}^{\bullet} \to Rv_{\ast}{\mathcal F}^{\bullet}$, and extend it to the distinguished triangle: \begin{equation}\label{tri1} Rv_{!}{\mathcal F}^{\bullet} \to Rv_{\ast}{\mathcal F}^{\bullet} \to \mathcal{G}^{\bullet} \overset{[1]}{\to}\end{equation} in $D_c^b(\mathbb{CP}^{n+1};\mathbb{F}[t^{\pm 1}])$.
Since $v^{\ast}Rv_{!} \cong id \cong v^{\ast}Rv_{\ast}$, after applying $v^*$ to the above triangle we get that $v^*{\mathcal G}^{\bullet}\cong 0$, or equivalently, ${\mathcal G}^{\bullet}$ is supported on $H$. Next, we apply $Ra_!=Ra_{\ast}$ to the distinguished triangle (\ref{tri1}) to obtain a new triangle in $D^b_c(point;\mathbb{F}[t^{\pm 1}])$: \begin{equation}\label{tri2} Ra_{!}Rv_{!}\mathcal{F}^{\bullet} \to Ra_{\ast}Rv_{\ast}\mathcal{F}^{\bullet} \to Ra_{\ast}\mathcal{G}^{\bullet} \overset{[1]}{\to} \end{equation} Upon applying the cohomology functor to the distinguished triangle (\ref{tri2}), and using the vanishing from (\ref{van1}) and (\ref{van2}) together with the identifications (\ref{i1}) and (\ref{i2}), we obtain that: $$H^{k+n+1}(\mathcal{U},\mathcal{L}^{\vee})\cong \mathbb{H}^k(\mathbb{CP}^{n+1}, \mathcal{G}^{\bullet})\cong \mathbb{H}^k(H, \mathcal{G}^{\bullet}) \ \ \text{for} \ \ k < -1,$$ and $H^{n}(\mathcal{U}, \mathcal{L}^{\vee})$ is a submodule of the $\mathbb{F}[t^{\pm 1}]$-module $\mathbb{H}^{-1}(H, \mathcal{G}^{\bullet})$. So in order to prove the theorem, it remains to show that the $\mathbb{F}[t^{\pm 1}]$-modules $\mathbb{H}^k(H, \mathcal{G}^{\bullet})$ are torsion for $k \leq -1$, and the zeros of their corresponding orders are among those of the order of the cokernel of $t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id \in {\it End}(\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V})\cong M_{\ell}(\mathbb{F}[t^{\pm 1}])$. Note that $\mathbb{H}^{k}(H,\mathcal{G}^{\bullet})$ is the abutment of a hypercohomology spectral sequence with the $E_2$-term defined by \begin{equation}\label{ssg} E_2^{p,q}=H^p(H,\mathcal{H}^q(\mathcal{G}^{\bullet})).\end{equation} This prompts us to investigate the stalk cohomology of ${\mathcal G}^{\bullet}$ at points along $H$. For $x \in H$, let us denote as before by ${\mathcal U}_x={\mathcal U} \cap B_x$ the local complement at $x$, for $B_x$ a small ball in $\mathbb{CP}^{n+1}$ centered at $x$. Then we have the following identification: \begin{equation}\label{loc1} \mathcal{H}^q(\mathcal{G}^{\bullet})_x \cong H^{q+n+1}(\mathcal{U}_x,\mathcal{L}^{\vee}_x),\end{equation} where ${\mathcal L}_x$ is the restriction of ${\mathcal L}$ to ${\mathcal U}_x$. Indeed, the following isomorphisms of $\mathbb{F}[t^{\pm 1}]$-modules hold: \[ \begin{aligned} \mathcal{H}^q(\mathcal{G}^{\bullet})_x & \cong {\mathcal H}^q(Rv_*{\mathcal F}^{\bullet})_x \\ & \cong {\mathcal H}^{q+n+1}(Rv_*Ru_*{\mathcal L}^{\vee})_x \\ & \cong {\mathbb H}^{q+n+1}(B_x,R(v \circ u)_*{\mathcal L}^{\vee}) \\ & \cong H^{q+n+1}(\mathcal{U}_x,\mathcal{L}^{\vee}_x). \end{aligned} \] If $x \in H \setminus V$, then ${\mathcal U}_x$ is homotopy equivalent to $S^1$, and the corresponding local system ${\mathcal L}_x$ is defined by the action of $\gamma_{\infty}$, i.e., by multiplication by $t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty})$ on $\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V}$.
In particular, $H_*({\mathcal U}_x,{\mathcal L}_x)$ is the homology of the complex of $\mathbb{F}[t^{\pm 1}]$-modules: $$0 \longrightarrow \mathbb{F}[t^{\pm 1}]^{\ell} \xrightarrow{t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id} \mathbb{F}[t^{\pm 1}]^{\ell} \longrightarrow 0,$$ i.e., \begin{equation}\label{loc2} H_k({\mathcal U}_x,{\mathcal L}_x)= \begin{cases} \mathrm{coker}(t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id), & k=0, \\ 0, & k>0. \end{cases} \end{equation} If $x \in H \cap V$, then we know by Proposition \ref{ploc} that the local twisted Alexander modules $H^{k}(\mathcal{U}_x,\mathcal{L}^{\vee}_x)$ are $\mathbb{F}[t^{\pm 1}]$-torsion, for all $k \in \mathbb{Z}$. Moreover, in the notations of Proposition \ref{ploc}, ${\mathcal U}_x \simeq {\mathcal U}'_x \times S^1$, and the local system ${\mathcal L}_x$ is an external tensor product, the second factor being defined by the action of $\gamma_{\infty}$ as in the previous case. So it follows from the K\"unneth formula that the zeros of the homological (hence also cohomological, by the UCT) local twisted Alexander polynomials at points in $H \cap V$ are among those of the order of the cokernel of $t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id \in {\it End}(\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V})$. By (\ref{loc1}) and the above calculations, it then follows that the $\mathbb{F}[t^{\pm 1}]$-modules $\mathcal{H}^q(\mathcal{G}^{\bullet})_{x\in H}$ are torsion, and the zeros of their associated orders are among those of the order of the cokernel of $t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id \in {\it End}(\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V})$. Hence, by using the spectral sequence (\ref{ssg}), each hypercohomology group $\mathbb{H}^{k}(H,\mathcal{G}^{\bullet})$ is a torsion $\mathbb{F}[t^{\pm 1}]$-module, and the zeros of its associated order are among those of the order of the cokernel of $t^{\varepsilon(\gamma_{\infty})} \otimes \rho(\gamma_{\infty}) - Id \in {\it End}(\mathbb{F}[t^{\pm 1}] \otimes_{\mathbb F} {\mathbb V})$. This ends the proof of our theorem. \end{proof}
\begin{rem} If $\mathbb F={\mathbb C}$ and $\varepsilon=lk$ is the total linking number homomorphism, Theorem \ref{t4} implies that any root $\lambda$ of $\Delta^i_{\rho,{\mathcal U}}(t)$, $i \leq n$, must satisfy the condition that $\lambda^d$ is an eigenvalue of $\rho(\gamma_{\infty})$, where $d=\sum_{i=1}^r n_i d_i$ is the degree of $V$. If, in addition, $\rho=triv$ is the trivial representation, the statement of Theorem \ref{t4} reduces to the fact that the zeros of the classical cohomological Alexander polynomials $\Delta^i_{\mathcal U}(t)$, $i \leq n$, are $d$-th roots of unity, with $d=\deg(V)$, a fact also shown in \cite{M06,DL,Liu} in the reduced case. \end{rem}
In the next theorem, we assume for simplicity of exposition that $V$ is a reduced hypersurface. Recall from Sections \ref{loca} and \ref{sh} that for any point $x$ in $V$, with local complement ${\mathcal U}_x={\mathcal U} \cap B_x$, we get from $(\varepsilon,\rho)$ an induced pair $(\varepsilon_x,\rho_x)$ via the inclusion map $i_x:{\mathcal U}_x \hookrightarrow {\mathcal U}$.
Moreover, the local twisted Alexander modules have a sheaf description in terms of the local system ${\mathcal L}_x:=i_x^*{\mathcal L}$ and its dual, namely, $H_k^{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])\cong H_k({\mathcal U}_x,{\mathcal L}_x)$ and $H^k_{\varepsilon_x,\rho_x}({\mathcal U}_x,\mathbb{F}[t^{\pm 1}])\cong H^k({\mathcal U}_x,{\mathcal L}_x^{\vee})$, for all $k \in \mathbb{Z}$. We denote by $\Delta_{k,x}(t):=\Delta_{k,{\mathcal U}_x}^{\varepsilon_x, \rho_x}(t)$ and $\Delta^k_x(t):=\Delta^{k}_{\varepsilon_x, \rho_x,{\mathcal U}_x}(t)$ the {\it local} (co)homological twisted Alexander polynomials at $x$. Let us now assume also that $V$ is in general position at infinity. Then if $x \in V \cap H$, in the notations of Proposition \ref{ploc} there is a homotopy equivalence ${\mathcal U}_x \simeq {\mathcal U}'_x \times S^1$, where ${\mathcal U}'_x=B_x \setminus V$ and with the $S^1$-factor corresponding to the meridian loop about the hyperplane at infinity $H$. On the other hand, ${\mathcal U}'_x$ is homeomorphic to any local complement ${\mathcal U}_{x'}$ at a point $x'\in V \setminus H$ in the same stratum as $x$. So by the K\"unneth formula, the zeros of the local twisted Alexander polynomials $\Delta^k_x(t)$ of $({\mathcal U}_x,\varepsilon_x,\rho_x)$ are among those associated to $({\mathcal U}_{x'},\varepsilon_{x'},\rho_{x'})$, for $x' \in V \setminus H$ a nearby point in the same stratum of $V$ as $x$. For brevity, points of $V^a=V \setminus H$ will be referred to as {\it affine points of $V$}. The next result shows that the zeros of the global twisted Alexander polynomials are completely determined by those of the local twisted Alexander polynomials at (affine) points along some irreducible component of $V$:
\begin{thm}\label{t5} Let $V \subset \mathbb{CP}^{n+1}$ be a reduced hypersurface in general position at infinity, with complement ${\mathcal U}=\mathbb{CP}^{n+1}\setminus (V \cup H)$, and let $V_1$ be a fixed irreducible component of $V$. Fix a non-trivial epimorphism $\varepsilon:\pi_1({\mathcal U}) \to \mathbb{Z}$, a rank $\ell$ representation $\rho:\pi_1({\mathcal U}) \to GL({\mathbb V})$, and a non-negative integer $\sigma$. If $\lambda \in \mathbb F$ is not a root of the $i$-th local twisted Alexander polynomial $\Delta^i_x(t)$ for any $i<n+1-\sigma$ and any (affine) point $x \in V_1 \setminus H$, then $\lambda$ is not a root of the global twisted Alexander polynomial $\Delta^i_{\varepsilon,\rho,{\mathcal U}}(t)$ for any $i<n+1-\sigma$. \end{thm}
\begin{proof} First note that, by the transversality assumption and the K\"unneth formula, it follows from the above considerations that the hypothesis on local Alexander polynomials implies that $\lambda$ is not a root of the $i$-th local twisted Alexander polynomial $\Delta^i_x(t)$ for any $i<n+1-\sigma$ and any point $x \in V_1$ (including points in $V_1 \cap H$). As in the proof of Theorem \ref{t4}, after replacing $\mathbb{C}^{n+1}$ by $\mathcal{U}_1=\mathbb{CP}^{n+1} \setminus V_1$, it follows that for $k \leq -1$, $H^{k+n+1}({\mathcal U},{\mathcal L}^{\vee})$ is a submodule of ${\mathbb H}^k(\mathbb{CP}^{n+1},{\mathcal G}^{\bullet})$, where ${\mathcal G}^{\bullet}$ is now a complex of sheaves of $\mathbb{F}[t^{\pm 1}]$-modules supported on $V_1$. It thus suffices to show that ${\mathbb H}^k(\mathbb{CP}^{n+1},{\mathcal G}^{\bullet})$, $k < -\sigma$, is a torsion $\mathbb{F}[t^{\pm 1}]$-module whose order does not vanish at $\lambda$.
As in (\ref{loc1}), the cohomology stalks of ${\mathcal G}^{\bullet}$ at any $x \in V_1$ are given by $$\mathcal{H}^q(\mathcal{G}^{\bullet})_x \cong H^{q+n+1}(\mathcal{U}_x,\mathcal{L}^{\vee}_x),$$ and these are all torsion $\mathbb{F}[t^{\pm 1}]$-modules by Proposition \ref{ploc}. Therefore, for a fixed $x \in V_1$, the fact that $\lambda$ is not a root of $\Delta^i_x(t)$ for any $i<n+1-\sigma$ is equivalent to the assertion that the order of $\mathcal{H}^q(\mathcal{G}^{\bullet})_x$ does not vanish at $\lambda$ for all $q < -\sigma$. The desired claim follows now by using the hypercohomology spectral sequence with $E_2$-term defined by $E_2^{p,q}=H^p(V_1,\mathcal{H}^q(\mathcal{G}^{\bullet}))$, which computes the groups $\mathbb{H}^{k}(V_1,\mathcal{G}^{\bullet})\cong {\mathbb H}^k(\mathbb{CP}^{n+1},{\mathcal G}^{\bullet})$. \end{proof}
\begin{rem} Note that the proofs of Theorems \ref{t4} and \ref{t5} indicate that we can give a more general condition than transversality with respect to $H$ in order to conclude that the global cohomological twisted Alexander modules $H^i_{\varepsilon,\rho}({\mathcal U};\mathbb{F}[t^{\pm 1}])$ are torsion for all $i \leq n$. Indeed, it suffices to assume that the pair $(\varepsilon,\rho)$ is acyclic along $V\cap H$ (or even $V_1 \cap H$, in the context of Theorem \ref{t5}). Of course this assumption is satisfied if $V$ is in general position at infinity, as Proposition \ref{ploc} shows. But there are other instances when it is satisfied, like in the examples discussed in Section \ref{ex}. \end{rem}
\begin{rem}\label{finrem} Let us conclude with a few observations about other possible approaches for studying twisted Alexander-type invariants of hypersurface complements. If $\mathbb F={\mathbb C}$, one can argue as in \cite{DL} if similar divisibility results are desired for the homological twisted Alexander polynomials. In more detail, the study of such twisted homological invariants is reduced, via a twisted version of the Milnor sequence, to studying the vanishing (except in the middle degree) of the homology groups $H_k({\mathcal U},{\mathcal L}_{\lambda} \otimes {\mathbb V}_{\rho})$ (or equivalently, of the cohomology groups $H^k({\mathcal U},{\mathcal L}_{\lambda} \otimes {\mathbb V}_{\rho})$), where ${\mathcal L}_{\lambda}$ is the rank-one ${\mathbb C}$-local system on ${\mathcal U}$ defined by the character $$\pi_1({\mathcal U}) \xrightarrow{\varepsilon} \mathbb{Z} \xrightarrow{1\mapsto \lambda} {\mathbb C}^*.$$ The language of $\mathbb{C}$-perverse sheaves can then be employed as in the proofs of Theorems \ref{t4} and \ref{t5} to get the desired vanishing, thus providing a twisted generalization of results from \cite{M06,DL}. Alternatively, one can use the approach from \cite{M06,LM} to study the (co)homological twisted Alexander invariants by using the associated {\it residue complex} ${\mathcal R}^{\bullet}$ of ${\mathcal U}$, which is defined as the cone of the natural morphism $Rj_!{\mathcal L} \longrightarrow Rj_*{\mathcal L}$, for $j:{\mathcal U} \hookrightarrow \mathbb{CP}^{n+1}$ the inclusion map.
Lastly, such results can also be derived by using more elementary techniques as follows: first, by transversality and a Lefschetz-type argument one can reduce, as in \cite{L94}, the study of the twisted Alexander modules of ${\mathcal U}$ to those of a regular neighborhood ${\mathcal N}$ in ${\mathbb C}^{n+1}$ of the affine part $V^a$ of $V$; secondly, Alexander-type invariants of ${\mathcal N}$ can be computed via the Mayer-Vietoris spectral sequence for the induced stratification of such a neighborhood. We leave the details and precise formulations as an exercise for the interested reader. \end{rem} \begin{thebibliography}{ADMS} \bibitem{CF} J.I. Cogolludo Agustin, V. Florens, {\it Twisted Alexander polynomials of plane algebraic curves}, J. Lond. Math. Soc. (2) {\bf 76} (2007), no. 1, 105--121. \bibitem{Di1} A. Dimca, {\it Singularities and topology of hypersurfaces}, Universitext. Springer-Verlag, New York, 1992. \bibitem{DL} A. Dimca, A. Libgober, {\it Regular functions transversal at infinity}, Tohoku Math. J. (2) {\bf 58} (2006), no. 4, 549--564. \bibitem{DM} A. Dimca, L. Maxim, {\it Multivariable Alexander invariants of hypersurface complements}, Trans. Amer. Math. Soc. {\bf 359} (2007), no. 7, 3505--3528. \bibitem{FV} S. Friedl, S. Vidussi, {\it A survey of twisted Alexander polynomials}, in: The mathematics of knots, Contrib. Math. Comput. Sci. {\bf 1}, Springer, Heidelberg, 2011, pp. 45--94. \bibitem{KL} P. Kirk, C. Livingston, {\it Twisted Alexander invariants, Reidemeister torsion, and Casson-Gordon invariants}, Topology {\bf 38} (1999), 635--661. \bibitem{L} X. S. Lin, {\it Representations of knot groups and twisted Alexander polynomials}, Acta Math. Sin. (Engl. Ser.) {\bf 17} (2001), no. 3, 361--380. \bibitem{L82} A. Libgober, {\it Alexander polynomial of plane algebraic curves and cyclic multiple planes}, Duke Math. J. {\bf 49} (1982), no. 4, 833--851. \bibitem{L94} A. Libgober, {\it Homotopy groups of the complements to singular hypersurfaces, II}, Ann. of Math. (2) {\bf 139} (1994), 117--144. \bibitem{L09} A. Libgober, {\it Non vanishing loci of Hodge numbers of local systems}, Manuscripta Math. {\bf 128} (2009), no. 1, 1--31. \bibitem{Liu} Y. Liu, {\it Nearby cycles and Alexander modules of hypersurface complements}, Adv. Math. {\bf 291} (2016), 330--361. \bibitem{LM} Y. Liu, L. Maxim, {\it Characteristic varieties of hypersurface complements}, arXiv:1411.7360. \bibitem{M06} L. Maxim, {\it Intersection homology and Alexander modules of hypersurface complements}, Comment. Math. Helv. {\bf 81} (2006), no. 1, 123--155. \bibitem{O1} M. Oka, {\it On the fundamental group of the complement of certain plane curves}, J. Math. Soc. Japan {\bf 30} (1978), no. 4, 579--597. \bibitem{Sch} J. Sch\"urmann, {\it Topology of Singular Spaces and Constructible Sheaves}, Birkh\"auser, Monografie Matematyczne {\bf 63}, 2003. \bibitem{W} M. Wada, {\it Twisted Alexander polynomial for finitely presentable groups}, Topology {\bf 33} (1994), no. 2, 241--256. \end{thebibliography} \end{document}
\begin{document} \title{Photon Sorting, Efficient Bell Measurements and a Deterministic CZ Gate using a Passive Two-level Nonlinearity} \author{T.C.Ralph$^1$, I.~S\"{o}llner$^2$, S.~Mahmoodian$^2$, A.G.White$^1$ and P.~Lodahl$^2$}\affiliation{$^1$Centre for Quantum Computation and Communication Technology, \\ School of Mathematics and Physics, University of Queensland, Brisbane, Queensland 4072, Australia \\ $^2$ Niels Bohr Institute, University of Copenhagen, Blegdamsvej 17, DK-2100 Copenhagen, Denmark} \date{\today} \begin{abstract} {Although the strengths of optical non-linearities available experimentally have been rapidly increasing in recent years, significant challenges remain in using such non-linearities to produce useful quantum devices such as efficient optical Bell state analysers or universal quantum optical gates. Here we describe a new approach that avoids the current limitations by combining strong non-linearities with active Gaussian operations in efficient protocols for Bell state analysers and Controlled-Sign gates. } \end{abstract} \maketitle {\it Introduction}: It has long been the dream of the quantum optics community to use non-linear optical interactions to produce deterministic quantum logic operations, such as a Controlled-Sign (CZ) gate, between individual photons \cite{MIL89}. In combination with easily implemented single qubit operations, the CZ gate produces a universal set of quantum logic operations that would enable applications from quantum repeaters to full scale quantum computation with photons. Unfortunately, not only is it very difficult to achieve the strengths of non-linearity required for such gates, but it has been predicted for several candidate systems that strong non-linearities inevitably add noise and/or distort the optical modes of the single photons sufficiently that successful operation, even under ideal conditions, is impossible \cite{SHA06, SHE07, XU13}. One such candidate system with various possible physical implementations is a single two-level emitter deterministically coupled to a one-dimensional photonic waveguide. We will refer to such a system here as a Two-Level Scatterer (TLS). One interesting capability of such systems is to separate the single and two photon components of an optical mode into two separate modes. This has been referred to as photon sorting \cite{WIT12}. In principle photon sorting, if efficient and mode preserving, could be used to perform full Bell measurements on, and to implement deterministic quantum logic gates between, photonic qubits. However, it has been shown that the TLS introduces mode distortion between the single and two photon components in the form of spectral entanglement of the two photon component \cite{SHE07}, which has been argued to be unavoidable \cite{XU13}. As a result photon sorting is inefficient. Whilst it has been shown that, by combining multiple interactions with a TLS with linear optics, it is possible to perform near-deterministic Bell measurements, the proposed scheme requires $80$ separate scattering events to obtain a probability of success of about 95\% \cite{WIT12}. Here we show that by adding active Gaussian optics to our tool-box of scatterer plus passive linear optics we are able to perform a deterministic Bell measurement using only 4 interactions with a TLS. Similarly, in principle it becomes possible to implement deterministic quantum logic gates in this way. Ironically, it is by exploiting the inherent mode distortion of the scatterer that these operations become possible.
{\it Action of the Scatterer}: We consider a TLS formed by placing a two-level emitter in a nanophotonic cavity or waveguide that is designed for unidirectional interaction \cite{LUK14, RAUSCH14, SOL14}, cf. Fig.\ref{figcalcs}(a). The monochromatic creation operator for the input mode, with wave number $k$, is scattered such that the corresponding creation operator for the output mode is given by \cite{SHE05} \begin{eqnarray} \hat a_{k, in}^{\dagger} \to t_k \hat a_{k, out}^{\dagger} \label{e1} \end{eqnarray} where \begin{eqnarray} t_k = {{c k - \omega_0+i(\gamma-\Gamma)/2}\over{c k - \omega_0+i(\gamma+\Gamma)/2}} \label{e2} \end{eqnarray} with $c$ the speed of light, $\omega_0$ the resonant frequency of the two-level emitter, $\Gamma$ the coupling strength between the emitter and a uni-directional waveguide mode and $\gamma$ the coupling strength of the emitter to modes other than the directional waveguide mode of interest. Eq.\ref{e1} describes a linear transformation similar to that produced by reflection from a single-ended optical cavity. In general the output state will be mixed due to losses into other modes and the relevant figure of merit is the directional $\beta_{dir}$-factor defined as $\beta_{dir} = \Gamma/(\gamma + \Gamma)$ \cite{SOL14}. In the ideal case for which losses are negligible, i.e. $\gamma = 0$, the scattering is unitary and we can write the input-output relation for a single photon state with an arbitrary pulse shape, $f(k)$, as \begin{eqnarray} |1_f \rangle &=& \int dk f(k) \hat a_{k, in}^{\dagger} |0 \rangle \nonumber \\ &\to& |1_{f'} \rangle = \int dk f(k) t_k \hat a_{k, out}^{\dagger} |0 \rangle \label{e3} \end{eqnarray} We now consider two photon inputs. The equivalent of Eq.(\ref{e1}) for a pair of monochromatic creation operators with wave numbers $k_1$ and $k_2$ is \cite{SHE07, SHE07a} \begin{eqnarray} \hat a_{k_1, in}^{\dagger} \hat a_{k_2, in}^{\dagger} &\to & t_{k_1} \; \hat a_{k_1, out}^{\dagger} \; t_{k_2} \; \hat a_{k_2, out}^{\dagger} \nonumber \\ &+& T_{k_1,k_2,p_1,p_2} \; \hat a_{p_1, out}^{\dagger} \; \hat a_{p_2, out}^{\dagger} \label{e4} \end{eqnarray} where \begin{eqnarray} T_{k_1,k_2,p_1,p_2} = {{i \sqrt{\Gamma}}\over{2 \pi}} \delta(k_1 + k_2 - p_1 - p_2) s_{p_1} s_{p_2} (s_{k_1} + s_{k_2}) \nonumber \\ \label{e5} \end{eqnarray} with $s_{k} = {{1}\over{i \sqrt{\Gamma}}}(1-t_k)$. Eq.\ref{e4} describes a highly non-linear interaction which produces entanglement between the spectral components of the two input photons. Again considering the ideal case for which $\gamma = 0$ we can write \begin{eqnarray} |2_f \rangle \to |2_{f'} \rangle + |2_{fb} \rangle \label{e7} \end{eqnarray} where \begin{eqnarray} |2_{f'} \rangle = \int dk_1 \; dk_2 f(k_1) t_{k_1} \hat a_{k_1, out}^{\dagger} f(k_2) t_{k_2} \hat a_{k_2, out}^{\dagger} |0 \rangle \nonumber \\ \label{e8} \end{eqnarray} and \begin{eqnarray} |2_{fb} \rangle &=& \int dk_1 \; dk_2 \; dp_1 \; dp_2 \; T_{k_1,k_2,p_1,p_2} \nonumber \\ &\times& f(k_1) \hat a_{p_1, out}^{\dagger} f(k_2) \hat a_{p_2, out}^{\dagger} |0 \rangle \label{e9} \end{eqnarray} The solution in the form of Eq.\ref{e7} was presented in \cite{WIT12}. However, it is clear from the normalization of Eq.\ref{e7} that the states of Eqs.~\ref{e8} and \ref{e9} are not orthogonal. Improved physical insight into the process can be obtained by rewriting Eq.\ref{e7} in terms of orthogonal states.
We obtain \begin{eqnarray} |2_f \rangle \to (1-2 \eta) |2_{f'} \rangle + 2\sqrt{\eta(1-\eta)} |\bar 2_{f'} \rangle \label{e10} \end{eqnarray} where \begin{eqnarray} \eta = {{1}\over{2}} |\langle 2_{f'} |2_{fb} \rangle| \label{e11} \end{eqnarray} and $|\bar 2_{f'} \rangle$ is a normalized state satisfying $\langle 2_{f'}|\bar 2_{f'} \rangle = 0$. The value of $\eta$ depends on the specific pulse shape chosen for the input state. It can be calculated analytically for pulses with a Lorentzian spectral shape and is found to be \begin{eqnarray} \eta =\frac{4 \Gamma^2 \sigma \left(3 \Gamma^2+38 \Gamma \sigma +96 \sigma^2\right)}{(\Gamma +2\sigma )^3 (3 \Gamma +2\sigma) (\Gamma +6\sigma)} \label{e12} \end{eqnarray} where $\sigma$ is the width of the Lorentzian. A plot of the behaviour of Eq.\ref{e12} as a function of $\sigma$ is shown in Fig.\ref{figcalcs}(b). If it were possible to achieve $\eta = 1$ then one could directly use two TLSs to build a deterministic CZ gate as the transformation would essentially be a Non-linear Sign shift (NS) gate - imposing a phase flip on the two photon component but not the single photon component of the state \cite{KNI01}. It would also be possible to achieve deterministic photon sorting via the scheme in Ref. \cite{WIT12}. Unfortunately, numerically it appears that $\eta$ is bounded by $0 \le \eta < 0.75$. \begin{figure} \caption{(a) Possible physical implementations of an efficient TLS exploiting unidirectional coupling obtained by implementing a two-level emitter in a chiral photonic-crystal waveguide (left) \cite{SOL14}. (b) The overlap $\eta$ as a function of the pulse width $\sigma$.} \label{figcalcs} \end{figure} \begin{figure} \caption{Components of the photon sorter and Bell measurement device: (a) the photon sorter is constructed from a TLS followed by Sum Frequency Generation (SFG), where the classical pump is in the mode $f'$, followed by a dichroic beamsplitter which separates the single photon component at the sum frequency from the two photon component at the original frequency; (b) a Bell measurement can be implemented with linear optics and 4 photon sorters as shown. The four Bell states are unambiguously determined by the measurement of photons at particular detector combinations.} \label{fig1} \end{figure} {\it Efficient Photon Sorting}: One solution to this problem is to operate instead with $\eta = 0.5$, which is easily achieved with either a Lorentzian (see Fig.\ref{figcalcs}(b)) or Gaussian \cite{WIT12} mode function. An arbitrary superposition of single and two photon components is then transformed by the TLS as \begin{eqnarray} \alpha |1_f \rangle + \xi |2_f \rangle \to \alpha |1_{f'} \rangle + \xi |\bar 2_{f'} \rangle \label{e13} \end{eqnarray} With this choice of parameters the one and two photon components are completely mapped into co-propagating, but orthogonal, spatio-temporal modes which can in principle be perfectly separated with Gaussian transformations. This conclusion continues to be true for $\beta_{dir} < 1$ as shown in Fig.~\ref{figcalcs}(b). Now the matching condition is $\eta = {{\epsilon_1^2}\over{2}}$ where $\epsilon_1$ is the probability that a single photon is scattered into the output mode by the TLS (see Supplementary Material for details). Because the modes have overlapping spectral and temporal domains, passive filtering will not be sufficient to perfectly separate them - instead active filtering is required. In particular, consider sum frequency generation (SFG). It was shown in Ref.
\cite{ECK11} that by using suitably engineered SFG a quantum pulse gate can be produced which can efficiently extract a particular spatio-temporal mode from a multi-mode field. This works by choosing the pump field to perfectly match the spatio-temporal mode to be extracted. After interaction with a $\chi^{(2)}$ non-linear crystal it is this, and only this, mode that is converted to the sum frequency. A passive frequency filter will then suffice to split the field into separate beams. The procedure is shown diagrammatically in Fig.\ref{fig1}(a) and can be represented mathematically as \begin{eqnarray} (\alpha |1_f \rangle + \xi |2_f \rangle) |0 \rangle_a &_{\to}^{TLS}& (\alpha |1_{f'} \rangle + \xi |\bar 2_{f'} \rangle) |0 \rangle_a \nonumber\\ &_{\to}^{SFG}& \alpha |0 \rangle |1_{f'} \rangle_a + \xi |\bar 2_{f'} \rangle |0 \rangle_a \label{e14} \end{eqnarray} where the mode function of the classical pump beam for the SFG is $t_k f(k)$ and the ket (initially in the vacuum state) labelled with the subscript $a$ is an ancilla mode at the sum frequency. Hence, using a single TLS plus sum frequency generation and passive filtering it is possible to produce a deterministic photon sorter. One should compare this with Ref.\cite{WIT12} where (assisted by only linear optics) 10 TLSs (or perhaps 10 interactions with a single TLS) are required to achieve, in principle, 95\% separation of the one and two photon components. {\it Bell Measurement}: Equipped with a deterministic photon sorter it is straightforward to construct a circuit from passive linear optics that can implement deterministic Bell measurements on dual rail single photon qubits. A dual rail qubit is one where the logical value of the qubit is determined by which of two orthogonal modes is occupied, i.e. $|0 \rangle_L = |1 \rangle_u |0 \rangle_l$ and $|1 \rangle_L = |0 \rangle_u |1 \rangle_l$, where number kets for the two modes are labeled $u$ (upper) and $l$ (lower). The circuit is shown in Fig.\ref{fig1}(b), where the qubits are labelled as $Q1$ and $Q2$ at the inputs and the orthogonal modes making up the qubits are shown as separate spatial rails. We note that it does not matter that the single photon and two photon components of the state end up in different spatio-temporal modes (at different average frequencies) because: (i) the coherent interactions that occur after the photon sorters only superpose modes containing two photon components, hence these interactions occur between matched modes; and (ii) in the end destructive measurements are made on all the modes that have been through the photon sorters. If $\beta_{dir} < 1$ then there will be loss in the TLS and there will be heralded failure events when the photons do not make it through the circuit (see Supplementary Material for discussion and graph of probability of success). \begin{figure} \caption{Components of the non-linear sign (NS) gate and deterministic Controlled-Sign (CZ) gate: (a) the NS gate is constructed from a two-level scatterer (TLS) followed by Sum Frequency Generation (SFG), where the classical pump is in the mode $f'$. This is followed by a $\pi$ phase shift which is imposed only on the two photon term.} \label{fig2} \end{figure} {\it Deterministic CZ Gate}: Given deterministic Bell measurements, it is possible to construct a deterministic CZ gate using the techniques of gate teleportation \cite{GOT99} and linear optical quantum computing \cite{KNI01}.
The necessary circuit is, however, quite complex, requiring significant off-line optical resources for state preparation and hence either quantum memory or sophisticated real-time optical switching. In addition such a gate necessarily includes electro-optic feedforward. It is interesting to ask if, alternatively, the TLS non-linearity plus Gaussian optics is sufficient to directly implement a deterministic CZ gate in an all-optical arrangement. In the following we show that this is possible in principle with TLS, SFG, gradient echo memory (GEM) and linear optics. The set-up is shown schematically in Fig.\ref{fig2}. We start with an input light field containing zero, one and two photon terms that can be written as: \begin{equation} \alpha |0 \rangle +\xi |1_f \rangle + \gamma |2_f \rangle \label{start} \end{equation} As described in the previous section, the combination of TLS and SFG with a suitable classical pump leads to a state of the form: \begin{equation} \alpha |0 \rangle |0 \rangle_a +\xi |0 \rangle |1_{f'} \rangle_a + \gamma |\bar 2_{f'} \rangle |0 \rangle_a \end{equation} Because of the different frequencies of the one and two photon components, they can be addressed individually, hence we impose a $\pi$ phase shift only on the two photon term -- in Fig.\ref{fig2} we represent this by spatially separating the beams, imposing the phase shift and then recombining them, but in practice easier techniques, such as using a wave-shaper, may be available. SFG is a reversible process, thus, by choosing a suitable phase relationship between the classical pump and the beams, the one photon component can be converted back to its original centre frequency. The output state after this manipulation is: \begin{equation} \alpha |0 \rangle +\xi |1_{f'} \rangle - \gamma |\bar 2_{f'} \rangle \label{pregem} \end{equation} We now wish to undo the initial separation of the one and two photon terms into orthogonal modes by interacting with the TLS a second time. However, the mode distortion is not time-symmetric so we must first invert the pulse shape of the modes. This can be achieved using a gradient echo memory (GEM) \cite{ALE06, HED10}. The GEM can be thought of as a material containing an ensemble of two-level atoms that can absorb and store an incident light pulse as it passes through it. During the storage or writing process a field gradient is applied to the material, producing a spatially selective storage of the different frequency components of the input signal. To release or read out the pulse, the gradient is reversed and the light emerges from the other end of the material. However, the reversal of the gradient results in the shape of the pulse being inverted between input and output. In particular, in the limit that the storage bandwidth of the memory is much larger than the bandwidth of the pulse and the storage time of the memory is much longer than the pulse length, the action of the memory on an optical mode operator can be expressed as \cite{HUS13}: \begin{eqnarray} \hat a_{in}(t) &=& \int d k F(k) e^{-i k t} \hat a_{k} \nonumber \\ & ^{GEM}_{\to}& \int d k F(-k) e^{-i k (t-T)} \hat a_{k} \label{gem} \end{eqnarray} The pulse is delayed by a time $T$ and the pulse shape is inverted.
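To see the inversion explicitly at the level of the pulse envelope (a brief aside spelling out a consequence of Eq.~\ref{gem}, using only the definitions above), note that an input envelope $f(t)=\int dk \, F(k)\, e^{-ikt}$ is mapped to $$\int dk \, F(-k)\, e^{-ik(t-T)} = \int dk \, F(k)\, e^{ik(t-T)} = f(T-t),$$ i.e.\ the released pulse is the time reverse of the stored pulse, delayed by $T$.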
An explicit calculation confirms that if the state of Eq.\ref{pregem} is transformed according to Eq.\ref{gem} and then interacts a second time with a TLS, the final output is: \begin{equation} \alpha |0 \rangle +\xi |1_{f} \rangle - \gamma |2_{f} \rangle \label{end} \end{equation} where we have assumed the initial mode shape was time symmetric. The total transformation from Eq.\ref{start} to Eq.\ref{end} is characteristic of a Nonlinear-Sign (NS) gate, as introduced in Ref.\cite{KNI01}. Two NS gates can be combined with linear optics to make a CZ gate as shown in Fig.\ref{fig2}(b). Consider first the case for which $\beta_{dir} =1$ and hence $\epsilon_1 = 1$. All but one of the possible two qubit logical input states lead to only zero or one photon occupation of the interferometer containing the NS gates in the central region of the circuit. The exception is the logical state with the lower rail of $Q1$ occupied and the upper rail of $Q2$ occupied. In this case, because of the Hong-Ou-Mandel effect \cite{HOM}, the {\it only} allowed photon arrangements in the central interferometer are a pair of photons through the upper NS gate or a pair of photons through the lower NS gate. Hence, only in this case is a phase imposed on the output state, as required for a CZ gate. Notice that all the mode distortions are undone, hence a network of such gates may be used to implement universal quantum computation using single photon inputs. In Ref.\cite{KNI01} the NS gate was implemented with linear optics and had a probability of success of 25\%, hence leading to a CZ gate with probability of success 6.25\%. Here the NS gate and hence the CZ gate are in principle deterministic. In the case for which $\beta_{dir} < 1$ the gate will no longer be deterministic because photons can be lost in the TLS. In this case additional loss elements need to be introduced into the gate to ensure the qubit states do not become skewed (see Fig.\ref{fig2}(b) and Supplementary Material for discussion and plot of probability of success). {\it Discussion}: We have shown that a deterministic Bell measurement and CZ gate can be implemented by combining a non-linear element with active and passive Gaussian optics. This is possible in spite of (or perhaps because of) the mode distortion produced by the non-linear element. We now discuss the challenges involved in implementing our schemes by briefly reviewing the state of the art for the various components. Different platforms have been experimentally shown to be suitable for constructing a TLS \cite{LOD13} (see Fig.\ref{figcalcs}(a)). Single atoms and single quantum dots that are coupled to either photonic nanostructures \cite{LUK14, SEN14, KIMB14, ARC14} or whispering gallery mode resonators \cite{RAUSCH14, PAINT07} are the most promising at optical frequencies. Furthermore, transmon qubits in 1D transmission lines can be employed in the microwave regime \cite{HOI12}. In the case of quantum dots in photonic-crystal waveguides, coupling efficiencies of 98.4\% have been demonstrated in experiments on emission dynamics \cite{ARC14}. For the coherent scattering applications considered here, any pure dephasing, spectral diffusion, and Raman scattering into the phonon-sideband will limit the performance. Excitingly, $97 \%$ indistinguishability of single photons \cite{HE13} and about $95 \%$ of the emission in the zero-phonon line \cite{KONT12} have been experimentally reported.
Mode selectivity of 80\%, with bandwidths compatible with quantum dot TLSs, has been experimentally demonstrated for SFG \cite{BRE14}, with excellent prospects for improvement. Hence, the technology required for implementing the Bell measurement protocol currently exists. The bottleneck in our CZ gate protocol is likely to be the GEM memories which, whilst showing good storage times and efficiency, currently operate with bandwidths around a MHz - and hence are incompatible with quantum dot bandwidths of around 320~MHz. Nevertheless, there does not seem to be any in-principle reason why GEM of the necessary bandwidth could not be realized. This research was supported by the Australian Research Council (ARC) under the Centre of Excellence for Quantum Computation and Communication Technology (CE110001027). I.S., S.M. and P.L. gratefully acknowledge financial support from the Danish Council for Independent Research (Natural Sciences and Technology and Production Sciences) and the European Research Council (ERC consolidator grant ALLQUANTUM). \begin{thebibliography}{99} \bibitem{MIL89} G.~J. Milburn, Phys. Rev. Lett. {\bf 62}, 2124 (1989). \bibitem{SHA06} J.~H. Shapiro, Phys. Rev. A, {\bf 73}, 062305 (2006). \bibitem{SHE07} J.-T. Shen and S. Fan, Phys. Rev. Lett. {\bf 98}, 153003 (2007). \bibitem{XU13} S. Xu, E. Rephaeli, and S. Fan, Phys. Rev. Lett. {\bf 111}, 223602 (2013). \bibitem{WIT12} D. Witthaut, M.~D. Lukin and A.~S. S{\o}rensen, EPL, {\bf 97}, 50007 (2012). \bibitem{LUK14} T.G.Tiecke, J.~D. Thompson, N.~P. deLeon, L.~R. Liu, V. Vuleti\'c, and M.~D. Lukin, Nature {\bf 508}, 241 (2014). \bibitem{RAUSCH14} J. Volz, M. Scheucher, C. Junge, and A. Rauschenbeutel, Nature Phot. {\bf 8}, 965 (2014). \bibitem{SOL14} I. S\"ollner, S. Mahmoodian, S. Lindskov~Hansen, L. Midolo, A. Javadi, G. Kirsanske, T. Pregnolato, H. El-Ella, E.~H. Lee, J.~D. Song, S. Stobbe, and P. Lodahl, (Submitted), (2014). \bibitem{SHE05} J.-T. Shen and S. Fan, Opt. Lett. {\bf 30}, 2001 (2005). \bibitem{SHE07a} J.-T. Shen and S. Fan, Phys. Rev. A {\bf 76}, 062709 (2007). \bibitem{KNI01} E. Knill, R. Laflamme, G. J. Milburn, Nature {\bf 409}, 46 (2001). \bibitem{ECK11} A. Eckstein, B. Brecht, C. Silberhorn, Opt. Express, {\bf 19}, 13770 (2011). \bibitem{GOT99} D. Gottesman, I.~L. Chuang, Nature {\bf 402}, 390 (1999). \bibitem{ALE06} A.~L. Alexander, J.~J. Longdell, M.~J. Sellars and N.~B. Manson, Phys. Rev. Lett. {\bf 96}, 043602 (2006); G. H\'etet, J.~J. Longdell, A.~L. Alexander, P.~K. Lam and M.~J. Sellars, Phys. Rev. Lett. {\bf 100}, 023601 (2008). \bibitem{HED10} M.~P. Hedges, J.~J. Longdell, Y. Li and M.~J. Sellars, Nature {\bf 465}, 1052 (2010); M. Hosseini, B.~M. Sparkes, G. Campbell, P.~K. Lam and B.~C. Buchler, Nature Commun. {\bf 2}, 174 (2011). \bibitem{HUS13} M.~R. Hush, A.~R.~R. Carvalho, M. Hedges and M.~R. James, New Journal of Physics, {\bf 15}, 085020 (2013). \bibitem{HOM} C.~K. Hong, Z.~Y. Ou and L. Mandel, Phys. Rev. Lett. {\bf 59}, 2044 (1987). \bibitem{LOD13} P. Lodahl, S. Mahmoodian, S. Stobbe, to appear in Rev. Mod. Phys., arXiv:1312.1079 (2013). \bibitem{SEN14} A.~K. Nowak, S.~L. Portalupi, V. Giesz, O. Gazzano, C. DalSavio, P.-F. Braun, K. Karrai, C. Arnold, L. Lanco, I. Sagnes, A. Lemaître, and P. Senellart, Nature Commun. {\bf 5}, 3240 (2014). \bibitem{KIMB14} A. Goban, C.-L. Hung, S.-P. Yu, J.~D. Hood, J.~A. Muniz, J.~H. Lee, M.~J. Martin, A.~C. McClung, K.~S. Choi, D.~E. Chang, O. Painter, and H.~J. Kimble, Nature Commun. {\bf 5}, 3808 (2014). \bibitem{ARC14} M. Arcari, I.
S\"ollner, A. Javadi, S. Lindskov Hansen, S. Mahmoodian, J. Liu, H. Thyrrestrup, E.H. Lee, J.~D. Song, S. Stobbe, and P. Lodahl, Phys. Rev. Lett. {\bf 113}, 0993603 (2014). \bibitem{PAINT07} K. Srinivasan and O. Painter, Nature {\bf 450}, 863 (2007). \bibitem{HOI12} I.-C. Hoi, T. Palomaki, J. Lindkvist, G. Johansson, P. Delsing, and C.M. Wilson, Phys. Rev. Lett. {\bf 108}, 263601 (2012). \bibitem{HE13} Y.-M. He, Y. He, Y.-J. Wei, D. Wu, M. Atataure, C. Schneider, S. Hofling, M. Kamp, C.-Y. Lu, and J.-W. Pan, Nat. Nanotechnol. {\bf 8}, 213 (2013) \bibitem{KONT12} K. Konthasinghe, J. Walker, M. Peiris, C.K. Shih, Y. Yu, M.F. Li, J.F. He, L.J. Wang, H.Q. Ni, Z.C. Niu, and A. Muller, Phys. Rev. B {\bf 85}, 235315 (2012). \bibitem{BRE14} B. Brecht, A. Eckstein, R. Ricken, V. Quiring, H. Suche, L. Sansoni, C. Silberhorn, Phys. Rev. A {\bf 90}, 030302(R) (2014). \end{thebibliography} \section{Supplementary Material} In the main text we considered for the most part the case where $\gamma = 0$ ($\beta_{dir} = 1$). When $\gamma \ne 0$ ($\beta_{dir} < 1$) photons can be lost from the mode of interest resulting in a mixed state for the output mode. However, we can continue to write the output state of the TLS as a pure state by formally including the lost modes in the output ket. Thus the output when a single photon state is incident on the TLS can be written \begin{equation} |1_{f} \rangle \to |1_{f'} \rangle + |not 1 \rangle \label{loss1} \end{equation} where $|not 1 \rangle$ is an unnormalized composite ket containing all the state components for which a single photon has not been coupled into the output mode. It is useful to write the output in terms of normalized kets such that \begin{equation} |1_{f} \rangle \to \sqrt{\epsilon_1}|1_{f'} \rangle_N + \sqrt{1-\epsilon_1}|not 1 \rangle_N \label{loss1n} \end{equation} where $\epsilon_1 = \langle 1_{f'} |1_{f'} \rangle$ and in general $|\theta \rangle_N = |\theta \rangle/\sqrt{|\langle \theta|\theta \rangle|}$. For a two photon input we now find \begin{eqnarray} |2_{f} \rangle &\to& \epsilon_1(1-{{2 \eta}\over{\epsilon_1^2}})\; |2_{f'} \rangle_N + \sqrt{\epsilon_b - {{4 \eta^2}\over{\epsilon_1^2}}} \;\; |\bar 2_{f'} \rangle_N \nonumber \\ &+& \sqrt{1-\epsilon_1^2+4 \eta -\epsilon_b}\;\; |not 2 \rangle_N \label{loss2} \end{eqnarray} where $\epsilon_b = \langle 2_{fb} |2_{fb} \rangle$ and now $|not 2 \rangle_N$ is a (normalized) ket containing all state components for which two photons are not coupled into the output mode. Analytic solutions can be found for the different probability amplitudes $$\epsilon_1=1-\frac{4 \gamma \Gamma (\gamma +\Gamma +4 \sigma )}{(\gamma +\Gamma ) (\gamma +\Gamma +2 \sigma )^2} $$ and $$\epsilon_b =\frac{16 \Gamma ^4 \sigma \left(38 \sigma (\gamma +\Gamma )+3 (\gamma +\Gamma )^2+96 \sigma^2\right)}{(\gamma +\Gamma )^2 (\gamma +\Gamma +2 \sigma )^3 (3 (\gamma +\Gamma )+2 \sigma) (\gamma +\Gamma +6 \sigma)},$$ where the incoming spectral amplitude of the photons is taken to be a Lorentzian of the form $$f_L(k)= \frac{\sqrt{\frac{2\sigma^3}{\pi}}}{\sigma ^2+(k-\text{k0})^2}.$$ The matching condition such that the two-photon component mode is orthogonal to the single photon component mode is now \begin{equation} \eta = {{\epsilon_1^2}\over{2}} \label{loss2e} \end{equation} which tends to the lossless case of $0.5$ as $\epsilon_1$ tends to unity. 
Inserting this condition into Eq.\ref{loss2} gives \begin{eqnarray} |2_{f} \rangle \to \sqrt{\epsilon_b - \epsilon_1^2} \;\; |\bar 2_{f'} \rangle_N + \sqrt{1+ \epsilon_1^2 -\epsilon_b}\;\; |not 2 \rangle_N \label{loss3} \end{eqnarray} In the limit of $\gamma \to 0$ we have $\epsilon_1 \to 1$ and (when the condition of Eq.\ref{loss2e} is fulfilled) $\epsilon_b \to 2$, and Eqs.~\ref{loss1n} and \ref{loss3} reduce to the lossless cases in the main text. The behaviour of the losses as a function of the spectral width is shown in Fig.\ref{figloss}. \begin{figure} \caption{Probability of loss in the TLS for two photon components (solid lines) and two single photons (dotted lines) as a function of $\beta_{dir}$.} \label{figloss} \end{figure} If the TLSs are used to make a Bell measurement, the effect of the loss is to lead to a non-unity probability of success. In order for $|\psi^{\pm} \rangle$ to be identified, two single photon scattering events must succeed, thus the probability of success will be $\epsilon_1^2$. On the other hand, to identify $|\phi^{\pm} \rangle$ requires a single two-photon scattering event to succeed, thus the probability of success will be $(\epsilon_b - \epsilon_1^2)$, leading to an average probability of success of $\epsilon_b/2$. Unsuccessful events are heralded by the failure to detect two photons. In Fig.\ref{fig3} we plot the probability of success of the Bell measurement versus the $\beta_{dir}$ factor (where we assume unit efficiency for the SFG and detectors). We now consider the CZ gate. The loss introduced by the four TLSs affects the various state components differently, producing a skewing of the output state. This can be compensated to some extent by inserting additional linear loss of transmission $\epsilon_1$ on the outer arms of the CZ gate (see Fig.3(b) main text). The state transformation of the entire gate then becomes \begin{eqnarray} && \alpha \; |0 , 1_{f} \rangle |0 , 1_{f} \rangle + \xi \; |1_{f} , 0 \rangle |1_{f} , 0 \rangle + \gamma \;|1_{f} , 0 \rangle |0 , 1_{f} \rangle + \nonumber \\ && \;\;\;\;\; \delta \;|0 , 1_{f} \rangle |1_{f} , 0 \rangle \;\;\;\;\; \to \nonumber \\ && \epsilon_1^2 (\alpha \;|0 , 1_{f} \rangle |0 , 1_{f} \rangle + \xi \;|1_{f} , 0 \rangle |1_{f} , 0 \rangle + \gamma \;|1_{f} , 0 \rangle |0 , 1_{f} \rangle) - \nonumber \\ && \;\;\;\;\; (\epsilon_b - \epsilon_1^2)\; \delta \;|0 , 1_{f} \rangle |1_{f} , 0 \rangle + |not 2 \rangle \label{loss4} \end{eqnarray} \begin{figure} \caption{Probability of success of the Bell Measurement (upper curve) and the CZ gate (lower curve) as a function of $\beta_{dir}$.} \label{fig3} \end{figure} where we have used the short hand $|0 , 1_{f} \rangle = |0 \rangle |1_{f} \rangle$ to represent the logical zero state component of a dual rail qubit with no photon in the top mode and one in the lower mode, and similarly for the logical one state component, $|1_{f}, 0 \rangle = |1_{f} \rangle |0 \rangle$. The first ket in Eq.\ref{loss4} represents the first qubit ($Q1$) and the second ket, the second qubit ($Q2$). The state component $|not 2 \rangle$ is an unnormalized composite ket representing all state components for which the total photon number coupled into the four output modes is less than two photons. Again, in the limit of $\gamma \to 0$ we have $\epsilon_1 \to 1$ and $\epsilon_b \to 2$, and the gate becomes an ideal CZ gate. Two different kinds of errors can occur when $\gamma \ne 0$. The first are locatable errors that arise due to the non-zero amplitude of the $ |not 2 \rangle$ component.
These are locatable because they can be identified by an incorrect number of photons being detected at the end of a calculation or alternatively via non-destructive number measurement during a calculation. In contrast, the fact that in general $(\epsilon_b - \epsilon_1^2) \ne \epsilon_1^2$ means that there is a skewing of the output state that can lead to non-locatable or logical errors in any computation, which would require a full error correction code to correct. Fortunately the skewing can be easily corrected by introducing some additional loss, $\eta_2$, along with the phase shift on the two photon component in the middle of the NS gate (Fig.3(b) main text). If the value of this loss is chosen such that $\eta_2 = {{\epsilon_1^2}\over{\epsilon_b - \epsilon_1^2}}$ then Eq.\ref{loss4} becomes \begin{eqnarray} && \alpha \; |0 , 1_{f} \rangle |0 , 1_{f} \rangle + \xi \; |1_{f} , 0 \rangle |1_{f} , 0 \rangle + \gamma \;|1_{f} , 0 \rangle |0 , 1_{f} \rangle + \nonumber \\ && \;\;\;\;\; \delta \;|0 , 1_{f} \rangle |1_{f} , 0 \rangle \;\;\;\;\; \to \nonumber \\ && \epsilon_1^2 (\alpha \;|0 , 1_{f} \rangle |0 , 1_{f} \rangle + \xi \;|1_{f} , 0 \rangle |1_{f} , 0 \rangle + \gamma \;|1_{f} , 0 \rangle |0 , 1_{f} \rangle - \nonumber \\ && \;\;\;\;\; \; \delta \;|0 , 1_{f} \rangle |1_{f} , 0 \rangle) + |not 2 \rangle \label{loss5} \end{eqnarray} hence the skewing is removed and the overall probability of success of the CZ gate becomes $\epsilon_1^4$. \end{document}
\begin{document} \tikzstyle{every node}=[circle, draw, fill=black!10, inner sep=0pt, minimum width=4pt] \title{Effect of predomination and vertex removal on the game total domination number of a graph} \author{ Vesna Ir\v si\v c $^{a, b}$ } \date{\today} \maketitle \begin{center} $^a$ Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia\\ {\tt [email protected] } $^b$ Faculty of Mathematics and Physics, University of Ljubljana, Slovenia\\ \end{center} \begin{abstract} The game total domination number, ${\gamma_{g}^{t}}$, was introduced by Henning et al.\ in 2015. In this paper we study the effect of vertex predomination on the game total domination number. We prove that ${\gamma_{g}^{t}}(G|v) \geq {\gamma_{g}^{t}}(G) - 2$ holds for all vertices $v$ of a graph $G$ and present infinite families attaining the equality. To achieve this, some new variations of the total domination game are introduced. The effect of vertex removal is also studied. We show that ${\gamma_{g}^{t}}(G) \leq {\gamma_{g}^{t}}(G-v) + 4$ and ${\gamma_{g}^{t}}'(G) \leq {\gamma_{g}^{t}}'(G-v) + 4$. \end{abstract} \noindent{\bf Keywords:} total domination game; game total domination number; critical graphs \noindent{\bf AMS Subj.\ Class.:} 05C57, 05C69 \section{Introduction} \label{sec:intro} The domination game was introduced in 2010 by Bre\v{s}ar et al.~\cite{dom} as a game played by two players, Dominator and Staller, on a graph $G$. They alternate taking turns for as long as possible, and on each turn a player chooses a vertex of $G$ that dominates at least one not yet dominated vertex. Recall that a vertex dominates itself and its neighbors. Dominator tries to minimize and Staller tries to maximize the number of moves. The total number of selected vertices is called \emph{the game domination number}, $\gamma_{g}(G)$, if Dominator starts the game (\emph{D-game}) or \emph{the Staller-start game domination number}, $\gamma_{g}'(G)$, if Staller makes the first move (\emph{S-game}). If Staller is allowed to pass one move, the game is called the \emph{Staller-pass game} and the number of moves made is $\gamma_{g}^{sp} (G)$ or $\gamma_{g}'^{sp} (G)$. Analogously we define the \emph{Dominator-pass game}. Graphs on which an optimal domination game yields a minimum dominating set of the graph have been introduced and studied in~\cite{D-trivial}. See~\cite{XLK-2018} for results about graphs with maximal possible game domination number, i.e.\ $2 \gamma (G) - 1$. One can also consider a domination game played on the graph $G$ where some vertices are already considered dominated~\cite{dom}. If $S \subseteq V(G)$ are already dominated, then the resulting game domination number is denoted by $\gamma_{g}(G|S)$ or $\gamma_{g}'(G|S)$. If $S = \{v\}$, then we write $\gamma_{g}(G|v)$ or $\gamma_{g}'(G|v)$. An important property arising from this definition is the Continuation Principle~\cite{extremal}, which states the following. For a graph $G$ and the sets $A, B \subseteq V(G)$, $B \subseteq A$, it holds that $\gamma_{g}(G|A) \leq \gamma_{g}(G|B)$ and $\gamma_{g}'(G|A) \leq \gamma_{g}'(G|B)$. The total version of the domination game was introduced in~\cite{totDom} and is defined analogously, except that on each turn only a vertex which totally dominates at least one not yet totally dominated vertex can be played. Recall that a vertex totally dominates only its neighbors and not itself. \emph{The game total domination number}, ${\gamma_{g}^{t}}(G)$, is the number of vertices chosen if Dominator starts the game on $G$.
If Staller plays first, then \emph{the Staller-start game total domination number} is denoted by ${\gamma_{g}^{t}}'(G)$. As in the domination game, \emph{Staller- and Dominator-pass games}, as well as games on partially dominated graphs, can also be considered. See~\cite{trees} for results about the game total domination number on trees. An important property of the game total domination number is the Total Continuation Principle~\cite{totDom}, which states that for a graph $G$ and the sets $A, B \subseteq V(G)$, $B \subseteq A$, it holds that ${\gamma_{g}^{t}}(G|A) \leq {\gamma_{g}^{t}}(G|B)$ and ${\gamma_{g}^{t}}'(G|A) \leq {\gamma_{g}^{t}}'(G|B)$. Another fundamental property, which is also parallel to the ordinary domination game, is the following~\cite{totDom}. For any graph $G$, we have $|{\gamma_{g}^{t}}(G) - {\gamma_{g}^{t}}'(G)| \leq 1$. The game total domination number is non-trivial even on paths and trees~\cite{totDomCP} and is log-complete in PSPACE~\cite{pspace}. The $\frac{3}{4}$-conjecture, stating that ${\gamma_{g}^{t}}(G) \leq \frac{3}{4} |V(G)|$ holds for all graphs $G$, was posed in~\cite{3/4_1} and further studied in~\cite{3/4_2, 3/4_3}. \emph{Game domination critical} (or \emph{$\gamma_{g}$-critical}) graphs have been introduced in~\cite{DomCritical} as graphs $G$ for which $\gamma_{g}(G) > \gamma_{g}(G|v)$ for all $v \in V(G)$. Among other results from~\cite{DomCritical}, we recall that for any vertex $u \in V(G)$ it holds that $\gamma_{g}(G|u) \geq \gamma_{g}(G) - 2$ and that there exists a graph attaining the equality. Analogously, \emph{total domination game critical} (or \emph{${\gamma_{g}^{t}}$-critical}) graphs were introduced in~\cite{totDomCritical} as graphs $G$ for which ${\gamma_{g}^{t}}(G) > {\gamma_{g}^{t}}(G|v)$ for all $v \in V(G)$. Infinite families of ${\gamma_{g}^{t}}$-critical circular and M\"{o}bius ladders were presented in~\cite{ladders}. In Section~\ref{sec:sth} we state that ${\gamma_{g}^{t}}(G|v) \geq {\gamma_{g}^{t}}(G) - 2$ for every $v \in V(G)$ and observe that none of the total domination game critical graphs studied in~\cite{totDomCritical, ladders} attains the equality. Hence, in Sections~\ref{sec:sth} and~\ref{sec:generalisation} we present infinite families of graphs attaining it. We apply the introduced techniques to another family of graphs as well. A similar concept, the effect of edge or vertex removal on the game domination number, has been studied in~\cite{EdgeVertexRemoval, G-e, book}. It holds that $\gamma_{g}(G-v)$ cannot be bounded from above by $\gamma_{g}(G)$, but that $\gamma_{g}(G) - \gamma_{g}(G-v) \leq 2$. Examples of graphs attaining $\gamma_{g}(G) - \gamma_{g}(G-v) \in \{0,1,2\}$ have also been presented. Analogous results hold for the Staller-start game. In Section~\ref{sec:vertex} we present similar results for the game total domination number. \section{Graphs with the property ${\gamma_{g}^{t}}(G|v) = {\gamma_{g}^{t}}(G) - 2$} \label{sec:sth} As mentioned, the game total domination critical graphs were introduced in~\cite{totDomCritical}, where also critical cycles and paths were characterized, as well as $2$- and $3$-${\gamma_{g}^{t}}$-critical graphs. Domination game critical graphs were introduced and studied in~\cite{DomCritical}, where we find the following property: $\gamma_g(G | u) \geq \gamma_g (G) - 2$ for every vertex $u$. With the same reasoning, we derive an analogous result for the game total domination number.
To be self-contained, we rephrase the proof here, omitting some details. \begin{lemma} \label{lem:-2} For every $v \in V(G)$, it holds that ${\gamma_{g}^{t}}(G|v) \geq {\gamma_{g}^{t}}(G) - 2$. \end{lemma} \noindent{\bf Proof.\ } The players play the real game on $G$ while Dominator imagines a Staller-pass game on $G|v$. He ensures that all the vertices that are totally dominated in the real game are also totally dominated in the imagined game. Dominator plays optimally in the imagined game and copies each of his moves into the real game. Every move of Staller in the real game is copied into the imagined game. If this move is not legal in the imagined game, the only new totally dominated vertex in the real game is $v$. In this case, Staller simply skips her move in the imagined game. Let $p$ denote the number of moves played in the real game and $q$ denote the number of moves played in the imagined game. As Staller plays optimally in the real game, it holds ${\gamma_{g}^{t}}(G) \leq p$. As in the imagined game one move might be skipped, we have $p \leq q + 1$. As Dominator plays optimally on $G | v$, it holds $q \leq {\gamma_{g}^{t}}^{sp} (G|v)$. Similarly as in~\cite{union}, we can also derive ${\gamma_{g}^{t}}^{sp}(G|v) \leq {\gamma_{g}^{t}}(G|v) + 1$. Combining these inequalities yields ${\gamma_{g}^{t}}(G) \leq {\gamma_{g}^{t}}(G|v) + 2$. $\square$ For all game total domination critical graphs in~\cite{totDomCritical, ladders} it holds that for each vertex $v$ we have ${\gamma_{g}^{t}}(G|v) = {\gamma_{g}^{t}}(G) - 1$. A natural question is whether there exist graphs $G$ with ${\gamma_{g}^{t}}(G|v) = {\gamma_{g}^{t}}(G) - 2$ for some vertex $v \in V(G)$, i.e.\ graphs attaining the equality in Lemma~\ref{lem:-2} for some vertex $v$. It turns out that the answer is positive. To show this, we consider the following family of graphs. Let $G_{n, m}$, $n \geq 8, m \geq 4$, $n \equiv 2 \mod 6$, be a graph consisting of a cycle $C_n$ with vertices $\mathcal U = \{u_1, \ldots, u_n\}$ (and naturally defined edges), a complete graph $K_m$ with vertices $w_1, w_2$ and $\mathcal V = \{ v_1, \ldots, v_{m-2}\}$ (and naturally defined edges), and additional edges $\{u_1, w_1\}, \{u_1, w_2\}, \{u_5, w_1\}, \{u_5, w_2\}$ (cf.\ Fig.~\ref{fig:primer}). Denote by $H_m$ the subgraph induced on the vertices $\{u_1, u_5, w_1, w_2\} \cup \mathcal V$. Observe that $G_{n,m}$ is $2$-connected. We shall prove the following. \begin{theorem} \label{thm:minus2} For $n \geq 8, m \geq 4$, $n \equiv 2 \mod 6$, it holds that ${\gamma_{g}^{t}}(G_{n, m}) = \frac{2n-1}{3} + 2$ and ${\gamma_{g}^{t}}(G_{n, m}|w_1) = \frac{2n-1}{3}$. \end{theorem} \begin{figure} \caption{A graph $G_{14, 4}$.} \label{fig:primer} \end{figure} During the proof we will consider several different variations of the game played on the cycle. We will study them separately. \subsection{Preliminaries} \label{sec:preliminaries} Recall from~\cite{totDomCP, totDomCritical} that for $n \equiv 2 \mod 6$ it holds $${\gamma_{g}^{t}}(C_n) = {\gamma_{g}^{t}}(C_n|v) = \frac{2n-1}{3} \quad \text{and} \quad {\gamma_{g}^{t}}'(C_n) = {\gamma_{g}^{t}}'(C_n|v) = \frac{2n-1}{3} - 1$$ for every vertex $v \in V(C_n)$. In some of the following proofs we will use the strategy for Staller from~\cite{totDomCP}, thus we rephrase it here. First recall that a \emph{run} on a partially dominated cycle is a maximal sequence of (at least two) consecutive totally dominated vertices. An \emph{anti-run} is a maximal sequence of (at least two) consecutive not totally dominated vertices.
Let $A$ denote the set of already totally dominated vertices on a cycle $C_n$. Suppose $A$ is neither empty nor $V(C_n)$. If $A$ contains a run or an anti-run, Staller can play on its extremity and totally dominate only one new vertex. If $A$ contains no runs and no anti-runs, then $(A, A^C)$ must be a bipartition of the cycle. In this case, Staller cannot totally dominate only one new vertex in the next move. We call this strategy $S_1$. Recall also the strategy for Dominator from~\cite{totDomCP}. Let $U$ denote the set of unplayable vertices. Clearly, Staller adds at least one vertex to $U$ on each move. Dominator's strategy is to ensure that after her move and his reply, at least three vertices are added to $U$. Say Staller plays on $v_1$. She totally dominates at least one new vertex, say $v_2$. Now label the other vertices cyclically as $v_3, \ldots, v_n$. Since $v_2$ was not yet totally dominated before Staller played $v_1$, vertex $v_3$ was not played yet. If $v_5$ was also not played yet, then $v_4$ is not yet totally dominated and Dominator can play on $v_5$. Thus, he adds $v_1, v_3, v_5$ to $U$. But if $v_5$ was played before, then Staller's move makes both $v_1$ and $v_3$ unplayable. Thus, Dominator can reply anywhere to add another vertex to $U$. We call this strategy $D_1$. Notice also that $${\gamma_{g}^{t}}(H_m) = {\gamma_{g}^{t}}'(H_m) = {\gamma_{g}^{t}}'(H_m|v) = 2$$ for every $v \in V(H_m)$ and $${\gamma_{g}^{t}}(H_m|v) = \begin{cases} 1; & v \in \{w_1, w_2\},\\ 2; & \text{otherwise}. \end{cases} $$ Now we prove a series of lemmas. \begin{lemma} \label{lem:Cycle15} If $n \equiv 2 \mod 6$, then ${\gamma_{g}^{t}}(C_n|\{u_1, u_5\}) = \frac{2n-1}{3}$. \end{lemma} \noindent{\bf Proof.\ } It follows from the Total Continuation Principle that ${\gamma_{g}^{t}}(C_n|\{u_1, u_5\}) \leq {\gamma_{g}^{t}}(C_n) = \frac{2n-1}{3}$. Thus we only need to prove ${\gamma_{g}^{t}}(C_n|\{u_1, u_5\}) \geq \frac{2n-1}{3}$, which can be done by finding a suitable strategy for Staller. Let $n = 6 q + 2$ for some positive integer $q$. Denote $x_k = u_{2k-1}$, $y_k = u_{2k}$ for $k \in [\frac{n}{2}]$ and $X = \{x_1, \ldots, x_{n/2}\}$, $Y = \{y_1, \ldots, y_{n/2}\}$. Clearly, $(X, Y)$ is a bipartition of the cycle $C_n$. So by playing in $X$, a player can totally dominate only vertices in $Y$, and vice versa. As $u_1, u_5 \in X$ are already predominated, the players need to dominate only $3q-1$ vertices in $X$, and $3q+1$ vertices in $Y$. Observe that if Dominator plays on $X$ and after his move $Y$ is not yet totally dominated, then there exist $y_i, y_{i+1}$ such that one is already totally dominated and the other is not. Then Staller can play their common neighbor $x_{i+1} \in X$ and hence she totally dominates only one new vertex in such a move. A similar observation holds if we switch $X$ and $Y$. We now distinguish two cases. \begin{enumerate} \item Dominator is the first player to play on $X$.\\ Staller's strategy is to reply on $X$ whenever Dominator plays on $X$, and on $Y$ whenever he plays on $Y$. Suppose Dominator makes in total $\ell$ moves on $X$. So Staller makes $\ell$ or $\ell-1$ moves on $X$ and it follows from the above observation that she can totally dominate only one new vertex on each move. Suppose Dominator totally dominates two new vertices on each move on $X$. Then together both players dominate $2\ell + \ell = 3\ell$ or $2\ell + (\ell-1) = 3\ell-1$ vertices in $Y$. But as $|Y| \equiv 1 \mod 3$, this is not possible.
Hence, Dominator has to dominate only one new vertex on at least one move on $X$. If Staller plays first on $Y$, she plays $y_1$ (i.e.\ $u_2$), so she only totally dominates one new vertex. In all other moves, or if Dominator starts playing on $Y$, it is clear that Staller can totally dominate only one new vertex on each move. Let $m$ denote the number of moves played. If $m$ is odd, we have $$n \leq 2 + \frac{m-1}{2} \cdot 1 + \frac{m+1}{2} \cdot 2 - 1 = \frac{3m+3}{2} \,,$$ so $$m \geq \left\lceil \frac{2n-3}{3} \right\rceil = \frac{2n-1}{3}\,.$$ If $m$ is even, we have $$n \leq 2 + \frac{m}{2} \cdot 1 + \frac{m}{2} \cdot 2 - 1 = \frac{3m+2}{2}\,,$$ thus $$m \geq \left\lceil \frac{2n-2}{3} \right\rceil = \frac{2n-1}{3}\,.$$ \item Staller is the first player to play in $X$.\\ Thus Dominator started on $Y$ and then Staller replies on $X$ (resp.\ $Y$) if Dominator plays in $X$ (resp.\ $Y$). But she follows an additional strategy to avoid playing on $\{y_1, y_2\}$ (i.e.\ $\{u_2, u_4\}$) for as long as possible. Notice that she can always play elsewhere unless all other vertices in $X-\{x_2\}$ are already totally dominated, as $x_2$ is the common neighbor of $y_1, y_2$, and $x_1, x_3$ are already totally dominated. But as Staller should play first on $X$, Dominator has to play last on $Y$ and as $x_2$ will not be dominated by Staller (due to her strategy), it is clear that Dominator will totally dominate only one new vertex on at least one of his moves on $Y$. As above it is clear that Staller can totally dominate only one new vertex on each of her moves, except when she plays first on $X$. Denote by $\ell$ the number of moves Dominator makes on $Y$. Suppose he totally dominates just one new vertex on only one of his moves on $Y$. Because he should also play last on $Y$, Staller makes $\ell-1$ moves on $Y$. So together they totally dominate $2\ell - 1 + \ell-1 = 3\ell-2$ vertices in $X$, which contradicts the fact that $3q-1$ vertices in $X$ need to be totally dominated. Therefore, Dominator totally dominates just one new vertex on at least two of his moves. Let again $m$ denote the number of moves played. If $m$ is odd, we have $$n \leq 2 + \frac{m-1}{2} \cdot 1 + 1 + \frac{m+1}{2} \cdot 2 - 2 = \frac{3m+3}{2}\,,$$ so $$m \geq \left\lceil \frac{2n-3}{3} \right\rceil = \frac{2n-1}{3}\,.$$ If $m$ is even, we have $$n \leq 2 + \frac{m}{2} \cdot 1 + 1 + \frac{m}{2} \cdot 2 - 2 = \frac{3m+2}{2}\,,$$ thus $$m \geq \left\lceil \frac{2n-2}{3} \right\rceil = \frac{2n-1}{3}\,.$$ \end{enumerate} Hence also ${\gamma_{g}^{t}}(C_n|\{u_1, u_5\}) \geq \frac{2n-1}{3}$. $\square$ Consider the following variation of the total domination game---Staller plays twice and only then the players start to alternate moves. So we have moves $s_1, s_2, d_1, s_3, \ldots$ The number of moves in such a game is denoted by ${\gamma_{g}^{t}}''(G)$. \begin{lemma} \label{lem:ss} If $n\geq 3$ is a positive integer, then ${\gamma_{g}^{t}}''(C_n) = {\gamma_{g}^{t}}(C_n)$. \end{lemma} \noindent{\bf Proof.\ } The first move $s_1$ of Staller totally dominates two new vertices. After that the players alternate moves and both play optimally. But as the cycle is vertex-transitive, the first move $s_1$ of Staller can be considered as an optimal first move of Dominator in the usual D-game. Hence, ${\gamma_{g}^{t}}''(C_n) = {\gamma_{g}^{t}}(C_n)$.
$\square$ Consider another variation of the total domination D-game---both players alternate moves normally, but after $m$, $0 \leq m \leq {\gamma_{g}^{t}}(G)$, moves of the game, vertices $x_1, \ldots, x_k$ become totally dominated (for free). The number of moves in such a game is denoted by ${\gamma_{g}^{t}}(G|^{m}\{x_1, \ldots, x_k\})$. Notice that ${\gamma_{g}^{t}}(G|^{0}\{x_1, \ldots, x_k\}) = {\gamma_{g}^{t}}(G|\{x_1, \ldots, x_k\})$ and ${\gamma_{g}^{t}}(G|^{{\gamma_{g}^{t}}(G)}\{x_1, \ldots, x_k\}) = {\gamma_{g}^{t}}(G)$. We point out that neither of the players is aware in advance of the parameter $m$ and the set $\{x_1, \ldots, x_k\}$. \begin{lemma} \label{lem:cycle15} If $n \equiv 2 \mod 6$, $n \geq 8$, then ${\gamma_{g}^{t}}(C_n|^{m}\{u_1, u_5\}) \geq \frac{2n-1}{3}$ for any $m$, $0 \leq m \leq {\gamma_{g}^{t}}(C_n)$. \end{lemma} \noindent{\bf Proof.\ } The real game is played simultaneously with a total domination D-game on $C_n|\{u_1, u_5\}$ imagined by Staller. We will describe Staller's strategy that will ensure that the set of totally dominated vertices in the real game is a subset of the totally dominated vertices in the imagined game. Each move of Dominator is copied to the imagined game. If it is not playable, any other legal move is played there. Staller replies optimally in the imagined game and copies her move to the real game. This is always legal, even after $u_1$ and $u_5$ become predominated in the real game. Hence by Lemma~\ref{lem:Cycle15}, it holds ${\gamma_{g}^{t}}(C_n|^{m}\{u_1, u_5\}) \geq {\gamma_{g}^{t}}(C_n|\{u_1, u_5\}) = \frac{2n-1}{3}$ for any $m$. $\square$ Another variation we shall introduce is the following. Consider a total domination D-game on a graph $G$, where Staller has to pass one move, and after that also Dominator has to pass one move. The number of moves in such a game is denoted by ${\gamma_{g}^{t}}^{sdp (k,\ell)}(G)$ if Staller passes exactly after the $k$-th move of the game and Dominator passes exactly after the $\ell$-th move, $k \leq \ell$. These two passes are not counted as moves. \begin{lemma} \label{lem:sdp} If $n \equiv 2 \mod 6$, $n \geq 8$, then ${\gamma_{g}^{t}}^{sdp (k,\ell)}(C_n) \geq \frac{2n-1}{3}$ for any $0 \leq k \leq \ell \leq {\gamma_{g}^{t}}(C_n)$. \end{lemma} \noindent{\bf Proof.\ } Following the same strategy as presented in~\cite{totDomCP}, Staller can totally dominate only one vertex on each of her moves, except maybe once. Let $m$ be the number of moves played. If $m$ is even, we have $n \leq \frac{m}{2} \cdot 1 + 1 + \frac{m}{2} \cdot 2 = \frac{3m+2}{2}$, and thus $m \geq \frac{2n-1}{3}$. If $m$ is odd, we have $n \leq \frac{m-1}{2} \cdot 1 + 1 + \frac{m+1}{2}\cdot 2 = \frac{3m+3}{2}$, and thus $m \geq \frac{2n-1}{3}$. $\square$ The combination of the above variations is such that Staller has to pass one move, and then Dominator has to pass one move, but when Dominator passes, some vertices $x_1, \ldots, x_k$ become predominated. The number of moves in such a game is denoted by ${\gamma_{g}^{t}}^{sdp (m,\ell)}(G|^{\ell}\{x_1, \ldots, x_k\})$ if Staller passes after the $m$-th move of the game and Dominator passes after the $\ell$-th move, $m \leq \ell$. Combining Lemma~\ref{lem:sdp} with the imagination strategy used in the proof of Lemma~\ref{lem:cycle15} yields the following lemma. \begin{lemma} \label{lem:sdp15} If $n \equiv 2 \mod 6$, $n \geq 8$, then ${\gamma_{g}^{t}}^{sdp(m,\ell)}(C_n|^{\ell}\{u_1, u_5\}) \geq \frac{2n-1}{3}$. \end{lemma} Consider now our final variation.
Consider a D-game on $G$ where Staller has to pass exactly twice. Each pass appears after Dominator plays the vertex $u'$ or $u''$ for some $u', u'' \in V(G)$. Additionally, we allow Dominator to play his first move on $\{u', u''\}$ even if this dominates no new vertices on $G$. But his second move on $\{u', u''\}$ has to totally dominate some new vertex in $G$. The number of moves in such a game is denoted by ${\gamma_{g}^{t}}^{ssp, u', u''}(G)$. In the case when $G$ is a cycle $C_n$, we can write ${\gamma_{g}^{t}}^{ssp, u', u''}(C_n) = {\gamma_{g}^{t}}^{ssp, \d(u', u'')}(C_n)$, where $\d(u', u'')$ denotes the distance between the special vertices, $u'$ and $u''$. In our case, $\d(u_1, u_5) = 4$. \begin{lemma} \label{lem:ssp} If $n \equiv 2 \mod 6$, $n \geq 8$, then ${\gamma_{g}^{t}}^{ssp,4}(C_n) \geq \frac{2n-1}{3}$. \end{lemma} \noindent{\bf Proof.\ } Staller again follows her strategy from~\cite{totDomCP}, which ensures that, except perhaps on one move, she can totally dominate just one new vertex. Let $m$ denote the total number of moves played. Distinguish two cases. \begin{enumerate} \item Staller can always totally dominate just one new vertex.\\ If $m$ is even, we have $n \leq \frac{m-2}{2} \cdot 1 + \frac{m+2}{2} \cdot 2 = \frac{3m+2}{2}$, and thus $m \geq \frac{2n-1}{3}$. If $m$ is odd, we have $n \leq \frac{m-3}{2} \cdot 1 + \frac{m+3}{2} \cdot 2 = \frac{3m+3}{2}$, and thus $m \geq \frac{2n-1}{3}$. \item Staller dominates two new vertices on some move.\\ We know this happens exactly when it is Staller's turn when $(A, A^C)$ forms a bipartition of the cycle, where $A$ denotes the set of already totally dominated vertices. If it is not a bipartition, Staller can totally dominate only one new vertex (due to the strategy $S_1$). If Dominator totally dominates just one (or zero) new vertex on some move, calculations similar to the above show $m \geq \frac{2n-1}{3}$. From now on suppose Dominator totally dominates exactly two new vertices on each move. If $(A, A^C)$ is a bipartition before $u_1$ and $u_5$ were played, it follows that $u_1 \in A$. Hence Staller can play $u_3$. But then Dominator cannot play $u_1$ or $u_5$ and still dominate two vertices on his move. If $(A, A^C)$ is a bipartition after $u_1$ and $u_5$ were played, it follows that $u_1 \in A^C$. So both Staller's passes appeared before this situation, and it also has to be Staller's turn now. Denote by $d$ the number of moves Dominator made before $(A, A^C)$ is a bipartition. Then Staller made only $d-3$ moves up to this point. So together they totally dominated $2d + (d-3) = 3 (d-1)$ vertices, which is a contradiction, as $|A| \equiv 1 \mod 3$. \end{enumerate} Thus in all cases we have $m \geq \frac{2n-1}{3}$. $\square$ \subsection{Proof of Theorem~\ref{thm:minus2}} \label{sec:proof} First we describe a strategy for Dominator to show that ${\gamma_{g}^{t}}(G_{n, m}) \leq \frac{2n-1}{3} + 2$. Dominator starts by playing $v_1$. If Staller replies on $K_m$, the whole $K_m$ is totally dominated after her move. So what remains is a normal game on $C_n$ or $C_n|\{u_1, u_5\}$, thus Dominator just plays according to his optimal strategy there. In this case the total number of moves is $\frac{2n-1}{3} + 2$. But if Staller replies on $C_n$, Dominator's second move is $v_2$. After his move, the whole $K_m$ is totally dominated and we get a ${\gamma_{g}^{t}}''$-game on $C_n$. It follows from Lemma~\ref{lem:ss} that in this case $\frac{2n-1}{3} + 2$ moves are made. Hence, ${\gamma_{g}^{t}}(G_{n, m}) \leq \frac{2n-1}{3} + 2$.
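(For the graph $G_{14, 4}$ of Fig.~\ref{fig:primer}, for example, this upper bound reads ${\gamma_{g}^{t}}(G_{14, 4}) \leq \frac{2 \cdot 14 - 1}{3} + 2 = 9 + 2 = 11$; Theorem~\ref{thm:minus2} asserts that this value is attained, and that it drops to $9$ once $w_1$ is predominated, so the equality in Lemma~\ref{lem:-2} is achieved at $w_1$.)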
Next we describe a strategy for Staller to show that ${\gamma_{g}^{t}}(G_{n, m}) \geq \frac{2n-1}{3} + 2$. If Dominator plays on $\{u_1, u_5\}$ or on $\{w_1, w_2\}$, Staller replies on $\mathcal V$. If Dominator plays on $\mathcal V$, Staller replies on $\{w_1, w_2\}$. If the prescribed reply is not legal, Staller plays any legal move. Note that this can only happen if the whole $K_m$ is already totally dominated. If Dominator plays on $C_n - \{u_1, u_5\}$, Staller replies with an optimal move on $C_n$ that does not lie on $\{u_1, u_5\}$. Why is this possible whenever $C_n$ is not yet totally dominated? Let $\mathcal S$ denote the set of all Staller's optimal moves on $C_n$. Due to strategy $S_1$, these are the vertices which have only one not yet totally dominated neighbor, or half of all vertices in the case when $(A, A^C)$ forms a bipartition. If $\mathcal S \not\subseteq \{u_1, u_5\}$, Staller can make an optimal move on $C_n - \{u_1, u_5\}$. Otherwise, $\mathcal S \subseteq \{u_1, u_5\}$ and only one new vertex will be totally dominated. Without loss of generality, $u_1 \in \mathcal S$. Suppose $u_2$ is already totally dominated and $u_n$ is not. But as $u_{n-1} \notin \mathcal S$, $u_{n-2}$ is also not totally dominated, otherwise Staller could play $u_{n-1}$. Repeating this reasoning shows that $u_2$ is not totally dominated, which is a contradiction. Supposing that $u_2$ is not totally dominated while $u_n$ is yields a similar contradiction. Hence, Staller can play on $C_n - \{u_1, u_5\}$, as long as $C_n$ is not yet totally dominated. If it is, she plays on $\mathcal V$. This strategy ensures that at least two moves are played on $K_m$. Thus we only need to prove that at least $\frac{2n-1}{3}$ moves are played on $C_n$. Consider now only moves played on the cycle. If Dominator does not start on it, after Staller's reply the whole $K_m$ is totally dominated and all remaining legal moves are on $C_n$. Hence, the game restricted to the cycle is always a D-game. We distinguish two cases. \begin{enumerate} \item Dominator is the first player to play on $K_m$.\\ After Staller's reply, the whole $K_m$ is totally dominated so exactly two moves are played on $K_m$. Thus we have a normal game on $C_n$, where possibly the vertices $\{u_1, u_5\}$ become predominated at some point of the game. The number of moves on $C_n$ is therefore at least ${\gamma_{g}^{t}}(C_n)$ or ${\gamma_{g}^{t}}(C_n|^{\ell}\{u_1, u_5\})$ for some $\ell$. It follows from Lemma~\ref{lem:cycle15} that in either case at least $\frac{2n-1}{3}$ moves are played on $C_n$. \item Staller is the first player to play on $K_m$. \begin{enumerate} \item If the whole $C_n$ is totally dominated before Staller first plays on $K_m$ (specifically, on $\mathcal V$), a normal D-game was played on $C_n$, thus at least $\frac{2n-1}{3}$ moves were made on the cycle. \item Otherwise Dominator played on $\{u_1, u_5\}$ right before Staller's move on $K_m$. Dominator can reply on: \begin{enumerate} \item $\mathcal V$: with this move he completes the total domination of $K_m$ without any vertex of $C_n$ becoming predominated. This results in a normal ${\gamma_{g}^{t}}$-game on the cycle. \item $\{w_1, w_2\}$: similar to the above, but the vertices $\{u_1, u_5\}$ become predominated, so at least ${\gamma_{g}^{t}}(C_n|^{\ell}\{u_1, u_5\})$ moves are played on the cycle (for some $\ell$). \item $C_n$: if Dominator plays on $\{u_1, u_5\}$ again, before playing on $K_m$, Staller makes another move on $K_m$. Thus a ${\gamma_{g}^{t}}^{ssp,4}$-game is played on the cycle.
But if Dominator plays on $K_m$ before playing on $\{u_1, u_5\}$ again, at least ${\gamma_{g}^{t}}^{sdp (k,\ell)} (C_n)$ or ${\gamma_{g}^{t}}^{sdp (k,\ell)} (C_n|^{\ell}\{u_1, u_5\})$ moves are made on the cycle (for some $k \leq \ell$). \end{enumerate} \end{enumerate} It follows from Lemmas~\ref{lem:cycle15}, \ref{lem:ssp}, \ref{lem:sdp} and~\ref{lem:sdp15} that at least $\frac{2n-1}{3}$ moves are played on $C_n$. \end{enumerate} Hence, ${\gamma_{g}^{t}}(G_{n, m}) = \frac{2n-1}{3} + 2$. Now we describe a strategy for Dominator that shows the inequality ${\gamma_{g}^{t}}(G_{n, m}|w_1) \leq \frac{2n-1}{3}$. Dominator starts on $w_1$. After this move the whole $K_m$ (and also vertices $\{u_1, u_5\}$) is totally dominated. Thus all other moves are played on $C_n$, so Dominator follows his optimal strategy on the cycle. Also, Staller plays first on $C_n$. Thus, using the total continuation principle, $${\gamma_{g}^{t}}(G_{n, m}|w_1) \leq 1 + {\gamma_{g}^{t}}'(C_n|\{u_1, u_5\}) \leq 1 + \frac{2n-1}{3} - 1 = \frac{2n-1}{3}\,.$$ As it follows from Lemma~\ref{lem:-2} that ${\gamma_{g}^{t}}(G_{n, m}|w_1) \geq {\gamma_{g}^{t}}(G_{n,m}) - 2 = \frac{2n-1}{3}$, the proof is complete. \subsection{Another example} \label{sec:another} Another family of graphs with similar properties is the following. Let $\widetilde{G}_{n,m}$, $n, m \geq 3$, be a graph consisting of a cycle $C_n$ with vertices $\mathcal U = \{u_1, \ldots, u_n\}$, a complete graph $K_m$ with vertices $w$ and $\mathcal V = \{ v_1, \ldots, v_{m-1}\}$ (both with natural edges), and an edge $\{u_1, w\}$ (cf.\ Fig.~\ref{fig:primer2}). \begin{figure} \caption{A graph $\widetilde{G}_{n,m}$.} \label{fig:primer2} \end{figure} A similar (but simpler) reasoning leads to the result $${\gamma_{g}^{t}}(\widetilde{G}_{n,m}) = \frac{2n-1}{3} + 2 \quad \text{ and } \quad {\gamma_{g}^{t}}(\widetilde{G}_{n,m}|w) = \frac{2n-1}{3}\,.$$ \section{A generalization} \label{sec:generalisation} In Section~\ref{sec:preliminaries} we introduced four new variations of the game total domination number; it would be interesting to find other applications of them. But they are tailored to configurations where two vertices, $u_1$ and $u_2$, of a graph $H$ are connected to two vertices of a complete graph $K_m$. Denote the resulting graph by $G_{H, u_1, u_2, m}$. Following the reasoning of the proof of Theorem~\ref{thm:minus2}, we can conclude $$\min_{p,k,l,m}\{{\gamma_{g}^{t}}(H|^p\{u_1, u_2\}), {\gamma_{g}^{t}}^{ssp, u_1, u_2}(H), {\gamma_{g}^{t}}^{sdp(k,l)}(H|^m\{u_1, u_2\})\} + 2 \leq$$ $$\leq {\gamma_{g}^{t}}(G_{H, u_1, u_2, m}) \leq \max\{ {\gamma_{g}^{t}}(H), {\gamma_{g}^{t}}''(H) \} + 2.$$ In the following we consider the case when $H = C_n$. Denote the graph $G_{H, u_1, u_2, m}$ where $\d(u_1, u_2) = d$ by $G_{n,d,m}$. Notice that $G_{n,m} = G_{n,4,m}$. Using the analogous results from the preliminaries above, we can state \begin{equation} \label{eqn:meja} {\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) + 2 \leq {\gamma_{g}^{t}}(G_{n, \d(u_1, u_2), m}) \leq {\gamma_{g}^{t}}(C_n) + 2. \end{equation} Therefore we must first determine the value of ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\})$ for different choices of vertices $u_1$ and $u_2$. \begin{theorem} \label{thm:cycles} If $n \equiv 2 \mod 6$ and $u_1, u_2 \in V(C_n)$, then $${\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) = \begin{cases} {\gamma_{g}^{t}}(C_n); & \d(u_1, u_2) \mod 6 = 4,\\ {\gamma_{g}^{t}}(C_n) - 1; & \d(u_1, u_2) \mod 6 \in \{1, 3, 5\},\\ {\gamma_{g}^{t}}(C_n) - 2; & \d(u_1, u_2) \mod 6 \in \{0, 2\}.
\end{cases}$$ \end{theorem} \noindent{\bf Proof.\ } We distinguish four cases. Note that $n = 6k + 2$ for some integer $k$. \begin{enumerate} \item $\d(u_1, u_2)$ is odd.\\ We first prove the lower bound. Let $A$ denote the set of already totally dominated vertices. By playing as in strategy $S_1$, Staller can always totally dominate only one new vertex. As $u_1, u_2 \in A$ and $\d(u_1, u_2)$ is odd, $(A, A^C)$ can never be a bipartition. Let $m$ denote the number of moves. If $m$ is odd, we have $n \leq 2 + \frac{m-1}{2} + 2 \cdot \frac{m+1}{2}$, thus $m \geq \lceil \frac{2n-5}{3} \rceil = {\gamma_{g}^{t}}(C_n) - 1$. If $m$ is even, we get $n \leq 2 + \frac{m}{2} + 2 \cdot \frac{m}{2}$, thus $m \geq \lceil \frac{2n-4}{3} \rceil = {\gamma_{g}^{t}}(C_n) - 1$. Consider now the upper bound. Let $v, v' \in V(C_n)$ be such that $\d(v, u_1) = \d(v', u_1) = 3$ and $\d(v, u_2) \geq \d(v', u_2)$ (i.e.\ $v$ lies on the longer arc between $u_1$ and $u_2$ and $v'$ on the shorter). Dominator's strategy is to start on $v$ and then reply to Staller's move such that at least three new vertices become unplayable after these two moves (as in strategy $D_1$). Observe that his reply can be on the same part of the bipartition that Staller played on, unless all vertices in it are unplayable. Notice that two vertices become unplayable after Dominator's first move and if Staller makes the last move, she also makes two vertices unplayable. Let $(X, Y)$ be a bipartition of $V(C_n)$ and $u_1 \in X$. Thus $u_2 \in Y$ and $v \in Y$. Notice that $|X| = |Y| = \frac{n}{2} = 3k+1$. Due to the above strategy, Dominator starts playing on $Y$. If Dominator is forced to play first on $X$, he plays the vertex $\widetilde{v} \in V(C_n)$ which is at distance $3$ from $u_2$ and lies on the longer arc between $u_1$ and $u_2$. Let $l$ denote the number of moves played on $Y$. If $l$ is odd, we have $|X| \geq 2 + 3 \cdot \frac{l-1}{2}$, thus $l \leq 2k-1$. In this case, Staller starts playing on $X$. Let $l'$ denote the number of moves played on $X$. If $l'$ is odd, then $|Y| \geq 3 \cdot \frac{l'-1}{2} + 2$, hence $l' \leq 2k-1$. If $l'$ is even, then $|Y| \geq 3 \cdot \frac{l'}{2}$, so $l' \leq 2k$. In either case, the total number of moves is at most $4k-1 \leq {\gamma_{g}^{t}}(C_n) - 1$. If $l$ is even, we have $|X| \geq 2 + 3 \cdot \frac{l-2}{2} + 2$, so $l \leq 2k$. In this case, Dominator starts on $X$ and repeating the analogous reasoning as for the moves on $Y$, we can conclude that the total number of moves is at most $4k \leq {\gamma_{g}^{t}}(C_n) - 1$. So for $\d(u_1, u_2)$ odd, we have ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) = {\gamma_{g}^{t}}(C_n) - 1$. \item $\d(u_1, u_2) \equiv 2 \mod 6$.\\ Due to strategy $S_1$, we have ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) \geq {\gamma_{g}^{t}}(C_n) - 2$. To show the opposite inequality, we present a suitable strategy for Dominator. Let $(X, Y)$ be the bipartition of $C_n$ and $u_1, u_2 \in X$. Denote $X = \{x_1, \ldots, x_{3k+1}\}, Y = \{y_1, \ldots, y_{3k+1}\}$, where $\d(x_i, x_{i+1}) = \d(y_i, y_{i+1}) = 2$, $x_1 = u_1$ and $y_1, y_2$ are the first two vertices totally dominated on $Y$. Sets $\{x_{3m-1}, x_{3m}, x_{3m+1}\}$ and $\{y_{3m-1}, y_{3m}, y_{3m+1}\}$ are called triples. Notice that $\{x_1\}$ and $\{y_1\}$ are the only singletons. Let $v \in V(C_n)$ be the vertex at distance $3$ from $u_2$ which lies on the longer arc between $u_1$ and $u_2$.
Dominator's strategy is to start on $v$ and then reply to Staller's moves such that after every two moves, at least one triple becomes totally dominated. Note that this is possible due to the strategy $D_1$. Now consider separately the moves played on $X$ and on $Y$. After the first move on $Y$, one triple and one singleton are totally dominated. Every next pair of Staller's and Dominator's moves totally dominates at least one new triple. Thus at most $1 + 2 (k-1) = 2k-1$ moves are played on $Y$. If Staller is the first player to play on $X$, Dominator's first answer should be such that he totally dominates $y_3$ and $y_4$. In this way, the first two moves totally dominate a singleton and a triple. Every next pair of moves totally dominates at least one triple. Hence in this case, the number of moves on $X$ is at most $2 + 2 (k-1) = 2k$. If Dominator is the first to play on $X$, his move totally dominates only one singleton, and each next pair of moves dominates at least one triple. Hence, the number of moves on $X$ is at most $2k+1$. But this situation only occurs if Dominator is forced to play first on $X$, so an even number of moves was played on $Y$, thus at most $2k-2$. In all cases, the total number of moves is at most $4k-1 = {\gamma_{g}^{t}}(C_n) - 2$. So for $\d(u_1, u_2) \equiv 2 \mod 6$, we have ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) = {\gamma_{g}^{t}}(C_n) - 2$. \item $\d(u_1, u_2) \equiv 0 \mod 6$.\\ Dominator starts on the vertex $v$ such that $\d(v, u_2) = 3$ and $v$ lies on the shorter $u_1, u_2$-arc. The reasoning is now similar to that in the previous case. \item $\d(u_1, u_2) \equiv 4 \mod 6$.\\ It follows from the total continuation principle that ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) \leq {\gamma_{g}^{t}}(C_n)$. Notice that here Dominator cannot choose a first move such that exactly one singleton and one triple are totally dominated after it. This is probably the reason for a different value of ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\})$. Let $(X, Y)$ be the bipartition of $C_n$ (as before) and $u_1, u_2 \in X$. Set $u_1 = x_1$ and $u_2 = x_i$ for some $i$. Staller's strategy is to reply on the same part of the bipartition that Dominator plays on. Thus she totally dominates only one new vertex on each move, except if she is the first player to play on $X$. Suppose first that Staller is not the first player to play on $X$.\\ Then Dominator plays first on $X$ and Staller can totally dominate only one new vertex on each of her moves on $X$. Let $l$ be the number of moves Dominator makes on $X$. If he always totally dominates two new vertices, we get $|Y| = 2l + l = 3l$ or $|Y| = 2l + (l-1) = 3l-1$, but neither of those values is congruent to $|Y|$ modulo $3$. Hence, Dominator totally dominates only one new vertex on at least one of his moves. Let $m$ denote the total number of moves. If $m$ is odd, then $n \leq 2 + \frac{m-1}{2} + 2 \cdot \frac{m+1}{2} - 1$, so $m \geq \frac{2n-1}{3}$. If $m$ is even, then $n \leq 2 + \frac{m}{2} + 2 \cdot \frac{m}{2} - 1$, so $m \geq \frac{2n-1}{3}$. Suppose now that Staller is the first player to play on $X$.\\ This means Dominator started the game on $Y$ and kept playing on $Y$ for as long as possible. Also, he was the last player to play on $Y$. Suppose Dominator totally dominates two new vertices on every move on $Y$. Staller can reply to Dominator's move by totally dominating the remaining vertex in the triple Dominator has just partially dominated. In the end, exactly one singleton remains undominated on the shorter and one on the longer arc between $u_1$ and $u_2$.
But Dominator cannot totally dominate them in one move. Hence, he totally dominates only one new vertex on at least one move. Let $l$ be the number of moves Dominator plays on $Y$. If he totally dominates only one new vertex on just one of his moves, then $|X| - 2 = 2l - 1 + (l-1) = 3l-2$, which does not match the size of $X$ modulo $3$. Thus, Dominator dominates just one new vertex on at least two of his moves. Let $m$ denote the total number of moves. If $m$ is odd, then $n \leq 2 + \frac{m-1}{2} + 1 + 2 \frac{m+1}{2} - 2$, so $m \geq \frac{2n-1}{3}$. If $m$ is even, then $n \leq 2 + \frac{m}{2} + 1 + 2 \frac{m}{2} - 2$, so $m \geq \frac{2n-1}{3}$. So for $\d(u_1, u_2) \equiv 4 \mod 6$, we have ${\gamma_{g}^{t}}(C_n|\{u_1, u_2\}) = {\gamma_{g}^{t}}(C_n)$. $\square$ \end{enumerate} To complete the study of cycles with two vertices predominated, we also state the following. \begin{proposition} \label{thm:cycles'} If $n \equiv 2 \mod 6$ and $u_1, u_2 \in V(C_n)$, then $${\gamma_{g}^{t}}'(C_n|\{u_1, u_2\}) = {\gamma_{g}^{t}}(C_n) - 1.$$ \end{proposition} \noindent{\bf Proof.\ } From the total continuation principle it follows that ${\gamma_{g}^{t}}'(C_n|\{u_1, u_2\}) \leq {\gamma_{g}^{t}}'(C_n) = {\gamma_{g}^{t}}(C_n) - 1$. To prove the lower bound on ${\gamma_{g}^{t}}'(C_n|\{u_1, u_2\})$, consider the strategy $S_1$ for Staller. Using observations similar to those in the previous proofs, we can see that ${\gamma_{g}^{t}}'(C_n|\{u_1, u_2\}) \geq {\gamma_{g}^{t}}(C_n) - 1$. $\square$ Combining~\eqref{eqn:meja} with Theorem~\ref{thm:cycles} yields that for $\d(u_1, u_2) \equiv 4 \mod 6$, it holds ${\gamma_{g}^{t}}(G_{n, \d(u_1, u_2), m}) = {\gamma_{g}^{t}}(C_n) + 2$. For $\d(u_1, u_2)$ odd, we get ${\gamma_{g}^{t}}(C_n) + 1 \leq {\gamma_{g}^{t}}(G_{n, \d(u_1, u_2), m}) \leq {\gamma_{g}^{t}}(C_n) + 2$, and for $\d(u_1, u_2) \equiv 0 \text{ or } 2 \mod 6$, we have ${\gamma_{g}^{t}}(C_n) \leq {\gamma_{g}^{t}}(G_{n, \d(u_1, u_2), m}) \leq {\gamma_{g}^{t}}(C_n) + 2$. However, computer calculations for $n \in \{8, 14, 20, 26, 32\}$ and $m = 4$ suggest a positive answer to the following question. \begin{problem} Is it true that if $n \equiv 2 \mod 6$ and $u_1, u_2 \in V(C_n)$ such that $\d(u_1, u_2) \not \equiv 4 \mod 6$, then $${\gamma_{g}^{t}}(G_{n, \d(u_1, u_2), m}) = {\gamma_{g}^{t}}(C_n) + 1 \, ?$$ \end{problem} The above results give rise to another family of graphs with the property ${\gamma_{g}^{t}}(G|v) = {\gamma_{g}^{t}}(G) - 2$ for some vertex $v$. If Dominator's first move on the graph $G_{n, \d(u_1, u_2), m}|w_1$ is on $w_1$, then using Proposition~\ref{thm:cycles'} we get: $${\gamma_{g}^{t}}(G_{n, \d(u_1, u_2), m}|w_1) \leq 1 + {\gamma_{g}^{t}}'(C_n|\{u_1, u_2\}) = {\gamma_{g}^{t}}(C_n).$$ Hence, the family $G_{n, \d(u_1, u_2), m}$ for $\d(u_1, u_2) \equiv 4 \mod 6$ has the desired property (and is in fact a generalization of the family $G_{n,m}$). \section{Effect of vertex removal on game total domination number} \label{sec:vertex} As already mentioned, the effect of vertex removal on the game domination number has been studied in~\cite{EdgeVertexRemoval, book}. Here we present some analogous results for the game total domination number. Let $G$ be a graph and $v$ one of its vertices. The game total domination number of the graph ${G-v}$ cannot be bounded from above by ${\gamma_{g}^{t}}(G)$; moreover, the difference ${{\gamma_{g}^{t}}(G-v) - {\gamma_{g}^{t}}(G)}$ can be arbitrarily large. Let $H$ be a graph with ${\gamma_{g}^{t}}(H) = k$ and $v \notin V(H)$ a vertex.
By connecting $v$ to all vertices of the graph $H$, we obtain the graph $G$. Clearly, ${\gamma_{g}^{t}}(G) = 2$ and ${\gamma_{g}^{t}}(G-v) = {\gamma_{g}^{t}}(H) = k$. Furthermore, if $H$ is $p$-connected, then $G$ is also $p$-connected. But we can bound ${\gamma_{g}^{t}}(G-v)$ from below in terms of ${\gamma_{g}^{t}}(G)$. Notice that the bound is weaker than the analogous bound for the game domination number. \begin{proposition} \label{prop:-v} If $G$ is a graph and $v \in V(G)$, then $${\gamma_{g}^{t}}(G) \leq {\gamma_{g}^{t}}(G-v) + 4.$$ \end{proposition} \noindent{\bf Proof.\ } As Dominator can start on $v$, it holds ${\gamma_{g}^{t}}(G) \leq 1 + {\gamma_{g}^{t}}'(G|N(v))$. Moreover, it follows from~\cite[Theorem 2.2]{totDom} that ${\gamma_{g}^{t}}'(G-v) \leq {\gamma_{g}^{t}}(G-v) + 1$, and the total continuation principle gives ${\gamma_{g}^{t}}'((G-v)|N(v)) \leq {\gamma_{g}^{t}}'(G-v)$. Combining these results, it only remains to prove that ${\gamma_{g}^{t}}'(G|N(v)) \leq {\gamma_{g}^{t}}'((G-v)|N(v)) + 2$, which we do using the imagination strategy. A total domination game is played on $G|N(v)$, while simultaneously Dominator imagines a Staller-pass game on $(G-v)|N(v)$ (and plays optimally on it). Each move of Staller is copied from the real to the imagined game. This is impossible at most once, namely when Staller's move totally dominates only $v$. In this case, Staller passes a move in the imagined game. Dominator replies optimally and copies his move to the real game (this is always legal). Hence, there are at most ${\gamma_{g}^{t}}'^{sp}((G-v)|N(v))$ moves made in the imagined game. When it is finished, $v$ might still be undominated in the real game. Thus ${\gamma_{g}^{t}}'(G|N(v)) \leq 1 + {\gamma_{g}^{t}}'^{sp}((G-v)|N(v))$. By reasoning similar to that in~\cite{union} we can derive ${\gamma_{g}^{t}}'^{sp}(H) \leq 1 + {\gamma_{g}^{t}}'(H)$ for any graph $H$. Thus we have ${\gamma_{g}^{t}}'(G|N(v)) \leq 2 + {\gamma_{g}^{t}}'((G-v)|N(v))$. $\square$ A natural question arising from here is whether the bound in Proposition~\ref{prop:-v} is sharp and which differences ${\gamma_{g}^{t}}(G) - {\gamma_{g}^{t}}(G-v)$ can be realized. We present some partial results on this problem. \begin{enumerate} \item ${\gamma_{g}^{t}}(G) - {\gamma_{g}^{t}}(G-v) = 0$. Let $k \in \mathbb N$ and $G$ be a graph obtained from $K_{k+2}$, $V(K_{k+2}) = \{ u, v, x_1, \ldots, x_k \}$, by attaching a leaf $y_i$ to $x_i$ for all $i \in [k]$. Notice that in both $G$ and $G-v$, the vertices $x_1, \ldots, x_k$ must be played in order to totally dominate all leaves. Suppose Dominator starts on $x_1$. If Staller replies on some $x_i$, then the only playable vertices are $\{x_2, \ldots, x_k\} - \{x_i\}$, hence at most $k$ moves are made altogether. If Staller replies on $\{ u, v, y_1 \}$, then the only still playable vertices are $\{x_2, \ldots, x_k\}$, thus at most $k+1$ moves are made in total. If Staller replies on some $y_i$, $i \neq 1$, then Dominator replies on $x_i$ and leaves only the vertices $\{x_2, \ldots, x_k\} - \{x_i\}$ playable. Thus again, at most $k+1$ moves are played on the graph. This strategy for Dominator yields both ${\gamma_{g}^{t}}(G) \leq k+1$ and ${\gamma_{g}^{t}}(G-v) \leq k+1$. Similarly, we observe that ${\gamma_{g}^{t}}(G) \geq k+1$ and ${\gamma_{g}^{t}}(G-v) \geq k+1$. Hence, ${\gamma_{g}^{t}}(G) = {\gamma_{g}^{t}}(G-v)$. \item ${\gamma_{g}^{t}}(G) - {\gamma_{g}^{t}}(G-v) = 1$.
It follows from~\cite{totDomCP} that ${\gamma_{g}^{t}}(P_n) - {\gamma_{g}^{t}}(P_n-v) = 1$ for $n \equiv 0, 1, 2, 4 \mod 6 $ where $v$ is an end-vertex of the path $P_n$. \end{enumerate} Consider now the Staller-start game. Similarly as in the $D$-game, the value of ${\gamma_{g}^{t}}'(G-v)$ cannot be bounded from above by ${\gamma_{g}^{t}}'(G)$. But we can determine a lower bound (which is again weaker than for the ordinary domination game). \begin{proposition} \label{prop:-v'} Let $G$ be a graph and $v \in V(G)$. Then $${\gamma_{g}^{t}}'(G) \leq {\gamma_{g}^{t}}'(G-v) + 4.$$ \end{proposition} \noindent{\bf Proof.\ } Using the imagination strategy as in the proof of Proposition~\ref{prop:-v} we can show that ${\gamma_{g}^{t}}(G|S) \leq 2 + {\gamma_{g}^{t}}((G-v)|S)$. If Staller starts on $v$, then we can conclude $${\gamma_{g}^{t}}'(G) = 1 + {\gamma_{g}^{t}}(G|N(v)) \leq 3 + {\gamma_{g}^{t}}(G-v | N(v)) \leq 4 + {\gamma_{g}^{t}}'(G-v).$$ If Staller starts on a vertex $x$, $x \neq v$, then Dominator's strategy is to reply on $v$. If this move is not legal, we have $N(x) = N(v)$ and without loss of generality, Staller could start on $v$ instead of $x$, thus we have the above situation. But if Dominator can reply on $v$, we have \begin{eqnarray*} {\gamma_{g}^{t}}'(G) & = & 1 + {\gamma_{g}^{t}}(G | N(x)) \leq 2 + {\gamma_{g}^{t}}'(G | N(x) \cup N(v)) \leq \\ & \leq & 4 + {\gamma_{g}^{t}}'(G-v | N(x) \cup N(v)) \leq 4 + {\gamma_{g}^{t}}'(G-v), \end{eqnarray*} which concludes the proof. $\square$ The question whether the above bound is sharp remains unanswered, but we present some examples of realizable values of ${\gamma_{g}^{t}}'(G) - {\gamma_{g}^{t}}'(G-v)$. \begin{enumerate} \item ${\gamma_{g}^{t}}'(G) - {\gamma_{g}^{t}}'(G-v) = 0$. Let $k \in \mathbb N$ and $G$ be a graph obtained from $K_{k+2}$, $V(K_{k+2}) = \{ u, v, x_1, \ldots, x_k \}$, by attaching a leaf $y_i$ to $x_i$ for all $i \in [k]$. As above, we can prove that ${\gamma_{g}^{t}}'(G) = {\gamma_{g}^{t}}'(G-v) = k+1$. \item ${\gamma_{g}^{t}}'(G) - {\gamma_{g}^{t}}'(G-v) = 1$. It follows from~\cite{totDomCP} that ${\gamma_{g}^{t}}'(P_n) - {\gamma_{g}^{t}}'(P_n-v) = 1$ for $n \equiv 1, 2, 4, 5 \mod 6 $ where $v$ is an end-vertex of the path $P_n$. \item ${\gamma_{g}^{t}}'(G) - {\gamma_{g}^{t}}'(G-v) = 2$. Recall the family of graphs $Z_k$ from~\cite{EdgeVertexRemoval}. Let $Z_0$ be as in Figure~\ref{fig:grafZ}. The graph $Z_k$, $k \geq 1$, is obtained from $Z_0$ by identifying end-vertices of $k$ copies of $P_6$ with the vertex $x$ (cf.~Figure~\ref{fig:grafZk} for $Z_3$). Denote the graph induced by $x$, $u$, $v$, and the two leaves attached to $x$, by $S$. Denote the graph $Z_0 - S$ by $Z$. Observe that $\gamma_t(Z) = {\gamma_{g}^{t}}(Z) = {\gamma_{g}^{t}}'(Z) = {\gamma_{g}^{t}}(Z|z) = {\gamma_{g}^{t}}'(Z|z) = 4$ and $\gamma_t(P_6) = 3$. \begin{figure} \caption{A graph $Z_0$.} \label{fig:grafZ} \caption{A graph $Z_3$.} \label{fig:grafZk} \end{figure} We prove that ${\gamma_{g}^{t}}'(Z_k) = 3 k + 8$ and ${\gamma_{g}^{t}}'(Z_k - v) = 3 k + 6$. Notice that at least four moves are played on $Z$ and at least three moves are played on each path. Consider the following strategy for Staller. She starts the game on $Z_k$ by playing the vertex $v$. If Dominator replies on $u$, then she plays on a neighbor of $x$ on one of the paths. Then she can ensure at least three moves on $S$ and at least four moves on this path. Hence, the total number of moves is at least $3k + 8$. 
If Dominator replies anywhere else, then Staller replies optimally on the same subgraph and thus ensures at least four moves on $S$. Hence, ${\gamma_{g}^{t}}'(Z_k) \geq 3 k + 8$. We now describe a strategy for Dominator to show that ${\gamma_{g}^{t}}'(Z_k) \leq 3 k + 8$. If Staller plays on $Z$, then Dominator replies optimally on $Z$. If Staller plays on $S$ or one of the paths, then Dominator replies optimally on the same subgraph, except immediately after Staller's first move outside $Z$, when he replies by playing $x$. In this way he ensures that at most $3 k + 8$ moves are played altogether. By applying similar reasoning to the graph $Z_k-v$ we can prove that ${\gamma_{g}^{t}}'(Z_k-v) = 3 k + 6$. \end{enumerate} An interesting question arising from here is whether there exist graphs $G$ and vertices $v \in V(G)$ such that ${\gamma_{g}^{t}}(G) - {\gamma_{g}^{t}}(G-v) \in \{2,3,4\}$ and ${\gamma_{g}^{t}}'(G) - {\gamma_{g}^{t}}'(G-v) \in \{3,4\}$. And if not, can it be proven in general that, for example, ${\gamma_{g}^{t}}(G) - {\gamma_{g}^{t}}(G-v) \leq 2$? \end{document}
\begin{document} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \title{Forbidden Patterns and the Alternating Derangement Sequence} \author{ Enrique Navarrete$^{\ast}$}\footnotetext[1]{Grupo ANFI, Universidad de Antioquia, Medell\'in, Colombia.} \date{} \makeatletter \def\maketitle{ \bgroup \par\centering{\LARGE\@title}\\[3em] \ \par {\@author}\\[4em] \egroup } \maketitle \begin{abstract} In this note we count linear arrangements that avoid certain\linebreak patterns and show their connection to the derangement numbers. We introduce the sequence $\langle D_n\rangle$, which counts the linear arrangements that avoid the patterns $12, 23,\ldots, (n-1)n, n1$, and show that this sequence almost follows the derangement sequence itself, since its odd-indexed terms are one more than the corresponding derangement numbers while its even-indexed terms are one less. We also express the derangement numbers in terms of these and other linear arrangements. Finally, we relate these arrangements to permutations and show that both the arrangements\linebreak described above and deranged permutations are equidistributed, in the sense defined in Section 5.\\[1em] \textit{Keywords}: Derangements, permutations, linear arrangements, forbidden patterns, fixed points, bijections. \end{abstract} \section{Introduction} In this note we will be counting the number of ways in which the integers $1, 2, \ldots, n$ can be arranged in a line so that some patterns are forbidden or blocked. We will be mainly concerned with the arrangements that block the patterns $12, 23, \ldots , (n-1)n, n1$. This problem can be naturally phrased in terms of permutations of the integer set $S = \{1, 2, \ldots, n\}$ that avoid the pairs just mentioned. Hence in Section 5 we will be identifying linear arrangements with some permutations in the symmetric group. Since we will be discussing several forbidden patterns, the following definitions are in order: \begin{description} \item[$S_n$] := the set of all linear arrangements on the set of integers $\{1,2, \ldots, n\}$. \item[$\{d_n\}$] := the set of linear arrangements of $S_n$ that avoid the patterns\linebreak $12, 23, \ldots, (n-1)n$. \item[$\{d_{n1}\}$] := the subset of linear arrangements of $\{d_n\}$ that allow the pattern $n1$. \item[$\{D_n\}$] := the set of linear arrangements of $S_n$ that avoid the patterns\linebreak $12, 23, \ldots, (n-1)n, n1$. \item[$\{Der_n\}$] := the set of deranged arrangements of the integers $\{1,2, \ldots, n\}$. \item[$d_n$] := cardinality of the set $\{d_n\}$. \item[$d_{n1}$] := cardinality of the set $\{d_{n1}\}$. \item[$D_n$] := cardinality of the set $\{D_n\}$. \item[$\langle D_n\rangle$] := the sequence of numbers $D_n$, $n = 0,1,2,\ldots$ \item[$Der_n$] := the $n$th derangement number, \textit{ie}. \begin{equation}\label{eq1} Der_{n}=n!\sum_{k=0}^{n}\frac{(-1)^{k}}{k!}. \end{equation} \end{description} Note that in all cases we will be discussing linear arrangements (as opposed to circular ones). \section{Main Lemmas and Proposition} We first see that there are $n!$ possible linear arrangements in $S_n$, but not all of them will be valid for our counting purposes. The largest subset of arrangements we will be considering is $\{d_n\}$, the set of linear arrangements of $S_n$ that avoid the patterns $12, 23, \ldots, (n-1)n$. Hence the set of forbidden arrangements is $S_n - \{d_n\}$. This set will be used at the end of the proof of the following main result.
\begin{lem}\label{lemm21} For $n \geq 1$, $d_n = D_{n-1} + D_n$. \end{lem} \begin{proof} The Lemma is true for the case $n = 1$ if we define $D_0 = 0$, $D_1 = 1$, $d_1 = 1$. The cases $n = 2, 3$ are easily checked, since $d_2 = 1$, $D_2 = 0$, $d_3 = 3$, $D_3 = 3$, (see Table \ref{tabA1} in the Appendix for arrangements in $\{D_n\}$ and $\{d_n\}$ up to $n = 5$). We will show in general that $\{d_n\} = \{D_{n-1}\} \cup \{D_n\}$ is a disjoint union by actually showing that $\{d_n\} = \{d_{n1}\} \cup \{D_n\}$ is a disjoint union and that\linebreak $\{d_{n1}\}\leftrightarrow\{D_{n-1}\}$ is a bijection. To show the bijection between the sets $\{d_{n1}\}$ and $\{D_{n-1}\}$ we will construct arrangements in $\{d_{n1}\}$ starting from arrangements in $\{D_{n-1}\}$ and the map $\varphi:\{D_{n-1}\}\rightarrow\{d_{n1}\}$ that inserts the digit $n$ in front of the digit 1 in linear arrangements in $\{D_{n-1}\}$. Note that arrangements in $\{D_{n-1}\}$ only contain digits $1,2, \ldots, n-1$, but this map produces arrangements of length $n$ in $\{d_{n1}\}$. We claim this map creates all the arrangements in $\{d_{n1}\}$ \textit{ie}. arrangements in $\{d_n\}$ that allow the pattern $n1$. We have to show that the arrangements obtained this way are distinct, that they contain no forbidden patterns $12, 23, \ldots, (n-1)n$, that $\{d_n\}$ is the disjoint union $\{d_{n1}\} \cup \{D_n\}$, and that all the arrangements in $\{d_{n1}\}$ are formed by the map described above. We show each of these properties below. \begin{enumerate} \item The sets $\{d_{n1}\}$ and $\{D_n\}$ contain arrangements which are $n$ digits long and are disjoint since, by construction, arrangements in $\{d_{n1}\}$ allow the pattern $n1$ while those in $\{D_n\}$ avoid it. \item To show that the linear arrangements in $\{d_{n1}\}$ produced by the map are valid, $ie$.\hspace{-0.8mm} contain no forbidden patterns $12, 23, \ldots, (n-1)n$, we see that the only problematic arrangements would be those for which, when inserting an $n$ before a 1, we create the pattern $(n-1)n1$, which is forbidden due to the pattern $(n-1)n$. However, this will never happen since all the arrangements in $\{D_{n-1}\}$, from which $\{d_{n1}\}$ is constructed, avoid the pattern $(n-1)1$. \item To show the map is 1-1 is trivial, since by removing the digit $n$ in\linebreak arrangements in $\{d_{n1}\}$ we get back arrangements in $\{D_{n-1}\}$, which are distinct and valid, by the previous step. \item To show that the map $\{D_{n-1}\}\rightarrow\{d_{n1}\}$ is onto, we need to show that all the arrangements in $\{d_{n1}\}$ are produced from $\{D_{n-1}\}$ by the map described above and that there is no other way to produce them. \end{enumerate} To show this, suppose we want to create arrangements in $\{d_{n1}\}$ from the larger set $\{d_{n-1}\}$ (as opposed to $\{D_{n-1}\}$). If we start from arrangements in $\{d_{n-1}\}$, there are members of this set that allow the pattern $(n-1)1$, hence inserting the digit $n$ before 1 will produce the pattern $(n-1)n$, which is forbidden, so this case is settled. Finally, since $S_{n-1} = \{d_{n-1}\} \cup \{S_{n-1} - d_{n-1}\}$ is a disjoint union, the last possibility would be that if we insert the digit $n$ before 1 in the forbidden arrangements in $\{S_{n-1} - d_{n-1}\}$ we might break a forbidden pattern and create a valid arrangement in $\{d_{n1}\}$ with the pattern $n1$ in it. 
However, this cannot happen since when we insert $n$ before 1, the forbidden patterns\linebreak $12, 23, \ldots, (n-1)n$ will remain intact in the set $\{S_{n-1} - d_{n-1}\}$ of forbidden arrangements. Therefore the only way to produce valid arrangements in $\{d_n\}$ that allow the pattern $n1$ is to start with the set $\{D_{n-1}\}$ and form the\linebreak arrangements in $\{d_{n1}\}$ by the map described above. Since the previous steps showed that $\{d_n\} = \{d_{n1}\} \cup \{D_n\}$ is a disjoint union and that $\{d_{n1}\}\leftrightarrow\{D_{n-1}\}$ is a bijection, this implies the Lemma for the set cardinalities. \end{proof} We record explicitly the result of the bijection below. \begin{cor}\label{coro22} There are exactly $D_{n-1}$ linear arrangements in $\{d_n\}$ that contain the pattern $n1$. \end{cor} \begin{proof} This follows from the previous Lemma by the bijection\linebreak $\{d_{n1}\}\leftrightarrow\{D_{n-1}\}$. \end{proof} Note that the counting relation proved above is very similar to a well-known formula for $d_n$, which we state below as a lemma: \begin{lem}\label{lemm23} For all $n \geq 1$, $d_n = Der_n + Der_{n-1}$. \end{lem} \begin{proof} Since $d_n$ counts the linear arrangements that avoid the patterns $12, 23, \ldots, (n-1)n$, it is easy to show that there are a total of \[\binom{n-1}{k}(n-k)!\] ways to choose $k$ of the forbidden patterns and form an arrangement containing all of them, since the combinatorial term counts the number of ways to choose the patterns while the term $(n-k)!$ counts the permutations of the chosen patterns and the remaining elements (note that patterns are not necessarily disjoint but may overlap into longer pattern strings). Then using inclusion-exclusion we have that \[d_n=\sum_{k=0}^{n-1}(-1)^k\binom{n-1}{k}(n-k)!\text{,}\] which is equal to the right-hand side of the equation in the Lemma by direct computation. \end{proof} Note also that the expression for $d_n$ above is similar to the derangement formula, \[Der_n=\sum_{k=0}^{n}(-1)^k\binom{n}{k}(n-k)!.\] Now we get to the main proposition: \begin{prop}\label{propo24} For all $n \geq 1$, $D_n = Der_n + (-1)^{n-1}$. \end{prop} \begin{proof} Combining the two previous lemmas we have that\linebreak $D_n = Der_n + Der_{n-1} - D_{n-1}$. Iterating this expression repeatedly yields the Proposition. Note that we have $n-1$, not $n$, in the exponent of $-1$ since the iteration stops after $n-1$ steps at $Der_2 = 1$. \end{proof} Note from the Proposition that by adding $(-1)^n$ to both sides we get\linebreak $Der_n = D_n + (-1)^n$, so that $Der_n - D_n = (-1)^n$. This means that the numbers $D_n$ can be obtained by adding in the usual derangement formula, Equation \ref{eq1} above, up to the term $n-1$, leaving out the last term, $(-1)^n$. That is, for the number of linear arrangements that avoid the patterns $12, 23, \ldots, (n-1)n, n1$, we have that \begin{equation}\label{eq2} D_n=n!\sum_{k=0}^{n-1}\frac{(-1)^k}{k!}. \end{equation} This creates the alternating behavior referred to above, in the sense that the number of these arrangements is one less than the derangement number when $n$ is even, and one more than the derangement number when $n$ is odd. Hence we have the sequence $\langle D_n\rangle$ starting at $n = 0$ that runs like\linebreak $\{0, 1, 0, 3, 8, 45, 264, 1855, 14832,\ldots\}$ (see Table \ref{tabA2} in the Appendix).
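These values are easy to confirm by direct enumeration. The following minimal sketch (included for illustration only; the function names are our own) compares a brute-force count of the arrangements in $\{D_n\}$ with Equation \ref{eq2} for small $n$.
\begin{verbatim}
from itertools import permutations
from math import factorial

def avoids_patterns(arr):
    # True if the arrangement contains none of the adjacent pairs
    # 12, 23, ..., (n-1)n, n1.
    n = len(arr)
    return all(not (b == a + 1 or (a == n and b == 1))
               for a, b in zip(arr, arr[1:]))

def D_brute(n):
    return sum(avoids_patterns(p) for p in permutations(range(1, n + 1)))

def D_formula(n):
    # D_n = n! * sum_{k=0}^{n-1} (-1)^k / k!
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n))

for n in range(1, 8):
    assert D_brute(n) == D_formula(n)
print([D_formula(n) for n in range(1, 9)])   # 1, 0, 3, 8, 45, 264, 1855, 14832
\end{verbatim}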
Notice that when we combine Lemmas \ref{lemm21} and \ref{lemm23} we get the equality \[d_n=D_n+D_{n-1}=Der_n +Der_{n-1},\] which produces the following sequence of equations starting at $n = 1$: \begin{align*} &d_1 = 1 + 0 = 0 + 1 = 1 & &(n = 1);\\ &d_2 = 0 + 1 = 1 + 0 = 1 & &(n = 2);\\ &d_3 = 3 + 0 = 2 + 1 = 3 & &(n = 3);\\ &d_4 = 8 + 3 = 9 + 2 = 11 & &(n = 4);\\ &d_5 = 45 + 8 = 44 + 9 = 53 & &(n = 5)\ldots \end{align*} \section{Recursions followed by linear arrangements in $\boldsymbol{\{D_n\}}$} Notice from Table \ref{tabA2} that the numbers $D_n$ are all divisible by $n$. This is not a coincidence but a result of the following lemmas and recursions for linear arrangements in $\{D_n\}$. \begin{lem}\label{lemm31} For all $n \geq 1$, $D_n = n Der_{n-1}$. \end{lem} \begin{proof} From Proposition \ref{propo24} we have that $D_n = Der_n + (-1)^{n-1}$,\linebreak $n \geq 1$. If we add $(-1)^n$ to both sides we have that $Der_n = D_n + (-1)^n$. Then using a well-known recursion from the derangement numbers, namely $Der_n = n Der_{n-1} + (-1)^n$, $n \geq 1$, we equate both sides to get the equation in the Lemma. \end{proof} Now we get a recursion similar to the one used in the proof of the Lemma for arrangements in $\{D_n\}$. \begin{cor}\label{coro32} The recursion $D_n = n[D_{n-1} + (-1)^{n-1}]$, $n \geq 1$, holds for linear arrangements in $\{D_n\}$. \end{cor} \begin{proof} From the proof of the Lemma above we have that $Der_n = D_n + (-1)^n$, hence $Der_{n-1} = D_{n-1} + (-1)^{n-1}$. The result is immediate upon substitution in the right-hand side of Lemma \ref{lemm31}, $D_n = n Der_{n-1}$. \end{proof} Combining most of the lemmas we have so far, we can also get the following recursion for $D_n$, which is similar to the well-known recursion\linebreak $Der_n = (n-1)[Der_{n-1} + Der_{n-2}]$, $n \geq 2$, for the derangement numbers. \begin{cor}\label{coro33} The recursion $D_n = (n-1)[D_{n-1} + D_{n-2}] + (-1)^{n-1}$, $n \geq 2$, holds for linear arrangements in $\{D_n\}$. \end{cor} \begin{proof} \hspace{-9mm}Since \mbox{$D_n=Der_n+(-1)^{n-1}$} by Proposition \ref{propo24}, and\linebreak $Der_n = (n-1)[Der_{n-1} + Der_{n-2}]$, $n \geq 2$, for the derangement numbers, we have that $D_n = (n-1)[Der_{n-1} + Der_{n-2}] + (-1)^{n-1}$. By Lemma \ref{lemm23} we have that $d_{n-1} = Der_{n-1} + Der_{n-2}$, and by Lemma \ref{lemm21}, that $d_{n-1} = D_{n-1} + D_{n-2}$. \end{proof} We summarize below the recursions followed by arrangements in $\{D_n\}$ along with the corresponding recursions for derangements $\{Der_n\}$. \begin{table}[h!] \centering { \hspace*{-1cm} \begin{tabular}{|c|c|c|} \hline {\small Linear arrangements $\{D_n\}$} & {\small Derangements $\{Der_n\}$} & {\small Reference}\\ \hline $D_n=n\left[D_{n-1}+(-1)^{n-1}\right]$ & $Der_n=nDer_{n-1}+(-1)^{n}$ & {\small Coroll. \ref{coro32}}\\ $D_n=(n-1)\left[D_{n-1}+D_{n-2}\right]+(-1)^{n-1}$ & $Der_n=(n-1)\left[Der_{n-1}+Der_{n-2}\right]$ & {\small Coroll. \ref{coro33}}\\ \hline \end{tabular}} \caption{ Recursions valid for linear arrangements $\{D_n\}$ and derangements $\{Der_n\}$.}\label{tab1} \end{table} The recursions and lemmas for $\{D_n\}$ have some interesting consequences that we state below as the following corollaries. \begin{cor}\label{coro34} The number $D_n$ is divisible by $n$. \end{cor} \begin{proof} This is just Lemma \ref{lemm31}, $D_n = n Der_{n-1}$. \end{proof} We will see that even more can be said from this equation in the next section. 
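As a quick numerical cross-check (ours, purely for illustration), the recursions of Table~\ref{tab1} can be iterated and compared with the relation $D_n = n\,Der_{n-1}$ of Lemma~\ref{lemm31}:
\begin{verbatim}
def der(n):
    # derangement numbers via Der_n = n*Der_{n-1} + (-1)^n
    return 1 if n == 0 else n * der(n - 1) + (-1) ** n

D = {0: 0, 1: 1}
for n in range(2, 12):
    D[n] = n * (D[n - 1] + (-1) ** (n - 1))                           # first recursion
    assert D[n] == (n - 1) * (D[n - 1] + D[n - 2]) + (-1) ** (n - 1)  # second recursion
    assert D[n] == n * der(n - 1)                                     # D_n = n*Der_{n-1}
\end{verbatim}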
Corollary \ref{coro34} in turn implies the following: \begin{prop} $Der_n - 1$ is divisible by $n$ for $n$ even $(n > 0)$, whereas $Der_n + 1$ is divisible by $n$ for $n$ odd. \end{prop} \begin{proof} Immediate from Proposition \ref{propo24} and Corollary \ref{coro34}. By adding $(-1)^n$ to both sides of Proposition \ref{propo24} we have that $Der_n = D_n + (-1)^n$, and the fact that $n$ divides $D_n$ (Corollary \ref{coro34}) gives the result. \end{proof} We will get more divisibility properties from the following lemma: \begin{lem}\label{lemm36} The derangement numbers $Der_n$ can be expressed in terms of the number of linear arrangements of the set of integers $\{1,2, \ldots, n\}$ that avoid the patterns $12, 23, \ldots, (n-1)n$, as $Der_n = (n-1)d_{n-1}$, $n \geq 2$. \end{lem} \begin{proof} By substituting the counting relation $d_{n-1} = Der_{n-1} + Der_{n-2}$ \mbox{(Lemma \ref{lemm23})} into the recursion $Der_n = (n-1)[Der_{n-1} + Der_{n-2}]$. \end{proof} From this we get the following. \begin{cor}\label{coro37} The number $Der_n$ is divisible by $n-1$, $n \geq 2$. \end{cor} \begin{proof} This is just a rephrasing of Lemma \ref{lemm36}. \end{proof} We will see that more can be said from these divisibility properties in the next section. \section{Equidistribution properties for linear arrangements and derangements} Now that we have found recursions for linear arrangements in $\{D_n\}$, we move on to discuss distribution properties of these arrangements. We first state two brief definitions. \begin{defn} A \textit{class} of linear arrangements is a subset of arrangements of $\{D_n\}$, $\{d_n\}$ or $\{d_{n1}\}$ that start with the same digit. \end{defn} \begin{defn} We say that a set of linear arrangements is \textit{equidistributed} if the set can be partitioned into nonempty classes having the same cardinality. \end{defn} Now we have the following proposition and corollaries: \begin{prop}\label{propo43} The linear arrangements that avoid the patterns\linebreak $12, 23, \ldots, (n-1)n, n1$, \textit{ie}. $\{D_n\}$ are equidistributed for all $n \geq 3$. \end{prop} \begin{proof} From Lemma \ref{lemm31} we have that $D_n = n Der_{n-1}$. In fact, this equation may be read as $\{D_n\}$ consisting of $n$ nonempty classes starting with digits $1, 2, \ldots, n$, each class consisting of $Der_{n-1}$ members. The classes in $\{D_n\}$ are not empty since we can start each of the $n$ classes with the patterns\linebreak $1n, 21, 32, \ldots, n(n-1)$, which are valid in $\{D_n\}$. \end{proof} The fact that $\{D_n\}$ can be partitioned into $n$ classes of size $Der_{n-1}$ is important enough to record it as a corollary: \begin{cor} The number of members in each class of $\{D_n\}$ is exactly $Der_{n-1}$. \end{cor} Hence we see that there is yet another object being counted by the derangement numbers, namely the cardinality or number of members in each class of $\{D_n\}$. As an example of the Proposition and Corollary, we have $D_4 = 8 = 4\cdot 2$, hence $\{D_4\}$ consists of 4 classes of linear arrangements, each class consisting of 2 members. Similarly, $D_5 = 45 = 5\cdot 9$, hence $\{D_5\}$ consists of 5 classes of arrangements with 9 members in each class. Now we turn our attention to deranged arrangements, or derangements $\{Der_n\}$. We have the following proposition: \begin{prop}\label{propo45} The deranged linear arrangements $\{Der_n\}$ are equidistributed for all $n \geq 2$. 
\end{prop} \begin{proof} We see that for derangements, the first class (which starts with 1) is empty, so we only have $n-1$ nonempty classes. Then from Corollary \ref{coro37} we have that $n-1$ divides $Der_n$ and the result follows. \end{proof} The fact that $\{Der_n\}$ can be partitioned into $n-1$ classes of size $d_{n-1}$ is important enough to record as a corollary: \begin{cor} The number of members in each nonempty class of $\{Der_n\}$ is exactly $d_{n-1}$. \end{cor} \begin{proof} Lemma \ref{lemm36} and Corollary \ref{coro37} give these counting relations. \end{proof} As an example of the Corollary, we have $Der_4 = 9 = 3\cdot 3$, hence $\{Der_4\}$ consists of 3 classes of arrangements (starting with 2, 3 and 4), each class consisting of 3 members. Similarly, $Der_5 = 44 = 4\cdot 11$, hence $\{Der_5\}$ consists of 4 classes of arrangements with 11 members in each class. We also have the following: \begin{lem}\label{lemm47} For $n \geq 2$, the class in $\{d_{n1}\}$ that starts with 1 (\textit{ie}. the first class) is empty. \end{lem} \begin{proof} When we form the arrangements in $\{d_{n1}\}$ from $\{D_{n-1}\}$ by the\linebreak $n1$-mapping from Lemma \ref{lemm21}, we know that the class in $\{D_{n-1}\}$ that starts with 1 will be mapped into the class that starts with $n1$ in $\{d_{n1}\}$, while each of the other classes $i$, $i = 2, 3, \ldots, n-1$, gets mapped to the class with the same starting digit. This means that no class in $\{D_{n-1}\}$ gets mapped to the first class in $\{d_{n1}\}$, so this class will be empty. \end{proof} \begin{lem}\label{lemm48} For $n \geq 2$, the number of members in each nonempty class of $\{d_{n1}\}$ is exactly $Der_{n-2}$. \end{lem} \begin{proof} By Lemma \ref{lemm31}, we have that $D_n = n Der_{n-1}$. By Corollary \ref{coro22}, since $d_{n1} = D_{n-1}$, we have that $d_{n1} = D_{n-1} = (n-1) Der_{n-2}$. \end{proof} \begin{prop} The linear arrangements in $\{d_n\}$ that contain the pattern $n1$, \textit{ie}. $\{d_{n1}\}$, are equidistributed for $n \geq 2$. \end{prop} \begin{proof} Lemma \ref{lemm47} shows that there are exactly $n-1$ nonempty classes in $\{d_{n1}\}$. By Lemma \ref{lemm48}, we have that $d_{n1} = (n-1) Der_{n-2}$. Hence $\{d_{n1}\}$ consists of $n-1$ nonempty classes, each class consisting of $Der_{n-2}$ arrangements. \end{proof} We record explicitly this divisibility property: \begin{cor} For $n \geq 2$, $n-1$ divides $d_{n1}$. \end{cor} An example of this relationship is given by the arrangements in $\{d_{51}\}$, \textit{ie}. the arrangements in $\{d_5\}$ that contain the pattern 51. The class that starts with 1 is empty, and the remaining $n-1 = 4$ classes have $Der_{n-2} = Der_3 = 2$ members each. Note also that for $n = 2$, we have that $\{d_2\} = \{d_{21}\}$ since the only arrangement in this case is 21, whereas for $n = 3$, we have that $\{D_3\} = \{d_3\}$, hence $\{d_{31}\}$ is empty and there are no arrangements that contain the pattern 31 (see \mbox{Table \ref{tabA2}} in the Appendix). So far we have seen that the linear arrangements $\{D_n\}$, the deranged\linebreak arrangements $\{Der_n\}$, and the arrangements in $\{d_n\}$ that contain the pattern $n1$ are equidistributed, with the last two of them containing only $n-1$ nonempty classes. We summarize the different properties of these arrangements in the following table: \begin{table}[h!]
\centering { \begin{tabular}{|c|c|c|c|c|} \hline \parbox{1.9cm}{\centering {\small Arrangement Type}}& \parbox[c][1.3cm]{1.6cm}{\centering {\small Number of Nonempty Classes}} & \parbox{1.2cm}{\centering {\small Size of Class}} & \parbox{3.5cm}{\centering {\small Relevant Equation}} & \parbox{1.7cm}{\centering {\small Numbered Reference}}\\ \hline $D_n$ & $n$ & $Der_{n-1}$ &$D_n=nDer_{n-1}$ & {\small Lemma \ref{lemm31}}\\ $Der_n$ & $n-1$ & $d_{n-1}$ &$Der_n=(n-1)d_{n-1}$ & {\small Lemma \ref{lemm36}}\\ $d_{n1}$ & $n-1$ & $Der_{n-2}$ & $d_{n1}=(n-1)Der_{n-2}$& {\small Lemma \ref{lemm48}}\\ \hline \end{tabular}} \caption{Properties satisfied by $\{D_n\}$, $\{d_{n1}\}$, and the derangements $\{Der_n\}$.}\label{tab2} \end{table} We will now see, however, that the linear arrangements in $\{d_n\}$ are not\linebreak equidistributed. \begin{prop} The linear arrangements that avoid the patterns\linebreak $12, 23, \ldots, (n-1)n$, \textit{ie}. $\{d_n\}$, are not equidistributed for $n > 3$. \end{prop} \begin{proof} We know from Lemma \ref{lemm21} that $\{d_n\} = \{D_n\}\cup \{d_{n1}\}$ and from Proposition \ref{propo43} that $\{D_n\}$ is equidistributed, with no class empty. When we form arrangements in $\{d_{n1}\}$ from $\{D_{n-1}\}$ by the $n1$-mapping, Lemma \ref{lemm47} shows that no class in $\{D_{n-1}\}$ gets mapped to the first class in $\{d_{n1}\}$, while the other classes $i$, $i = 2, 3,\ldots, n-1$ in $\{D_{n-1}\}$ get mapped to the same class in $\{d_{n1}\}$, hence these classes get increased by at least one arrangement. This proves the Proposition. \end{proof} \section{Results for Permutations} Now we turn our attention to permutations themselves. Note that since there is a natural correspondence between linear arrangements and permutations in one-line notation, we want to examine how Propositions \ref{propo43} and \ref{propo45} behave in terms of permutations. To do so we need a preliminary result. \begin{prop}\label{propo51} For $n \geq 2$, there is a bijection between linear arrangements in $\{d_n\}$ and permutations in one-line notation. \end{prop} \begin{proof} Let us consider any linear arrangement $\lambda$ in $\{d_n\}$ as representing a permutation in one-line notation and the mapping $\sigma$ of the arrangement $12\ldots n$ by the permutation. Then the digits in $\lambda$ represent the second line in the permutation, and since such digits avoid the patterns $j(j+1)$, $j = 1, \ldots, n-1$, the digits $i$, $i+1$ in the line above them, \textit{ie}. the digits in $12\ldots n$, get mapped to an arrangement $\lambda'$ that avoids successive patterns, hence $\lambda'$ belongs either to $\{d_n\}$ or $\{D_n\}$. However, the map $\sigma(1) = k+1$, $\sigma(n) = k$ is not ruled out, so the pattern $n1$ may be formed, which is not allowed in $\{D_n\}$. The fact that the map is bijective follows from the existence of inverse mappings of permutations. \end{proof} As an example of the Proposition, consider the linear arrangement 42153 in $\{d_5\}$. If we consider this arrangement as a permutation in one-line notation, then the arrangement maps 12345 to 32514, which contains the pattern 51, not allowed in $\{D_5\}$. Now we define permutations that avoid the patterns $12, 23, \ldots, (n-1)n$, $n1$ in terms of the linear arrangements they produce. \begin{defn}\label{defi52} A permutation (in one-line notation) \textit{avoids the patterns}\linebreak $12, 23, \ldots, (n-1)n$, $n1$ if it maps the arrangement $12\ldots n$ to an arrangement in $\{D_n\}$.
\end{defn} \begin{prop} The permutations (in one-line notation) that avoid the\linebreak patterns $12, 23, \ldots, (n-1)n$, $n1$ are not equidistributed for $n \geq 3$. \end{prop} \begin{proof} From the proof of Proposition \ref{propo51}, we see that arrangements in $\{D_n\}$ may map to arrangements in $\{d_{n1}\}$ under the one-line permutation map, so $\{D_n\}$, which is equidistributed, does not map to itself. In particular,\linebreak arrangements in $\{D_n\}$ that start with 1 will never map $12\ldots n$ to arrangements in $\{d_{n1}\}$ under the one-line permutation map, unlike arrangements from other classes. \end{proof} Next we define deranged permutations in similar way as in Definition \ref{defi52}. \begin{defn} A permutation (in one-line notation) is \textit{deranged} if it maps the arrangement $12\ldots n$ to an arrangement in $\{Der_n\}$. \end{defn} Using this definition, we see that Proposition \ref{propo45} does hold in terms of both deranged linear arrangements and deranged permutations. \begin{prop}\label{propo55} For $n \geq 2$, there is a bijection between deranged linear arrangements $\{Der_n\}$ \textup{(}ie. derangements\textup{)} and permutations in one-line notation. \end{prop} \begin{proof} If we consider deranged arrangements in $\{Der_n\}$ as representing permutations in one-line notation and the mapping of $12\ldots n$ by the\linebreak permutations, then, as in Proposition \ref{propo51}, the digits in the arrangements\linebreak represent the second line in the permutations. Since these digits have no fixed points, the digits $i$, $i+1$ in $12\ldots n$ get mapped to derangements. The fact that the map is bijective follows by the existence of inverse mappings for permutations. \end{proof} As an example of the Proposition, consider the deranged arrangement 3142\linebreak in $\{Der_4\}$. If we consider this arrangement as a permutation in one-line notation, then the arrangement 1234 gets mapped to 2413, which is in $\{Der_4\}$. \begin{prop} The deranged permutations are equidistributed for all $n \geq 2$. \end{prop} \begin{proof} From the bijection in Proposition \ref{propo55}, and the fact that deranged arrangements are equidistributed (Proposition \ref{propo45}). \end{proof} We end this note with an easy application of one of the counting relationships derived previously. \section{Another object counted by $\boldsymbol{\langle D_n\rangle }$} We see that the sequence $\langle D_n\rangle = \{0, 1, 0, 3, 8, 45, 264, 1855, 14832,\ldots\}$ not only counts linear arrangements that avoid the patterns $12, 23, \ldots, (n-1)n, n1$, but also counts the permutations that have exactly one fixed point. This can be seen directly from inclusion-exclusion, since the formula that holds for exactly $m$ conditions is given by: \[E_m=S_m-\binom{m+1}{1}S_{m+1}+\binom{m+2}{2}S_{m+2}+\cdots+(-1)^{n-m}\binom{n}{n-m}S_n.\] For $m = 1$, this gives: \[E_1=S_1-\binom{2}{1}S_2+\binom{3}{2}S_3+\cdots+(-1)^{n-1}\binom{n}{n-1}S_n.\] Upon substitution of \[S_j=\binom{n}{j}(n-j)!, \quad j= 1,\ldots,n,\] this reduces to \[E_1=n!\sum_{k=0}^{n-1}\frac{(-1)^k}{k!},\] which is just Equation \ref{eq2} for $D_n$ right after Proposition \ref{propo24}. Alternatively, this can be seen from the following final proposition. \begin{prop} The permutations that have exactly one fixed point are\linebreak counted by the sequence $\langle D_n\rangle$. 
\end{prop} \begin{proof} If we let $h(n,k)$ be the number of permutations of $n$ objects with exactly $k$ fixed points, we see that $h(n,1) = n h(n-1,0)$ holds since the left hand side counts the number of permutations with exactly one fixed point, while the right-hand side counts the $n$ ways to choose a fixed point and the number of ways to derange $n-1$ other elements. But the right-hand side is just $n Der_{n-1}$, which we have seen is equal to $D_n$ by Lemma \ref{lemm31}. Hence $h(n,1) = D_n$ and the proof follows. \end{proof} \section{Conclusions} We have counted the number of linear arrangements that avoid certain patterns for the sets $\{D_n\}$, $\{d_n\}$, and $\{d_{n1}\}$, and have also obtained equations for the derangement numbers $Der_n$ in terms of them. We see that both $\{D_n\}$ and $\{d_{n1}\}$ share the equidistribution property of the deranged arrangements, $\{Der_n\}$, and that if we define deranged permutations using one-line notation, these permutations are equidistributed as well. We also see that the sequence $\langle D_n\rangle$ not only counts linear arrangements that avoid the patterns $12, 23, \ldots, (n-1)n, n1$, but also the permutations that have exactly one fixed point. Even though this result can be obtained by\linebreak inclusion-exclusion, it is an easy consequence of the counting relations developed. An explicit bijection between $\{D_n\}$ and the permutations that have exactly one fixed point would be interesting. Finally, we see that the sequence $\langle D_n\rangle$ is very conveniently described since it nearly follows the derangement numbers, as shown in Proposition \ref{propo24}. \section*{References} [1] R.A. Brualdi, Introductory Combinatorics (1992), 2nd edition. [2] R.P. Stanley, Enumerative Combinatorics, Vol. 1 (2011), 2nd edition. \begin{center} {\Large \textbf{APPENDIX}} \end{center} \appendix \begin{table}[h!] 
\centering { \begin{tabular}{lc} \begin{tabular}{|c|c|} \hline $\ \ \ \{D_2\}=\emptyset$\ \ \ & $\{d_2\}=\{21\}$ \\ \hline \end{tabular} & \\ \begin{tabular}{|c|} \hline $\{D_3\}=\{d_3\}$ \\ \hline 132 \\ 213 \\ 321 \\ \\ \\ \\ \\ \\ \hline \end{tabular} & \hspace{-0.4cm}\begin{tabular}{cc|c|c|} \cline{3-4} & & \multicolumn{2}{|c|}{$\{d_4\}=\{d_{41}\} \cup \{D_4\}$} \\ \hline \hspace{-0.21cm}\vline$\hspace{0.5cm}\{D_3\}$ & \vline\hspace{0.4cm}\ $\{D_4\}$\hspace{0.4cm} & \hspace{0.4cm}$\{d_{41}\}$\hspace{0.4cm} & \hspace{0.4cm}$\{D_4\}$\hspace{0.4cm} \\ \hline \hspace{-0.47cm}\vline\hspace{0.4cm} 132 & \vline\hspace{0.42cm} 1324\ \ \ \ \ & 4132 & 1324 \\ \hspace{-0.47cm}\vline\hspace{0.4cm} 213 & \vline\hspace{0.42cm} 1432\ \ \ \ \ & 2413 & 1432 \\ \hspace{-0.47cm}\vline\hspace{0.4cm} 321 &\vline\hspace{0.42cm} 2143\ \ \ \ \ & 3241 & 2143 \\ \hspace{-1.51cm}\vline\hspace{0cm} & \vline\hspace{0.54cm}2431\ \ \ \ \ & & 2431 \\ \hspace{-1.51cm}\vline\hspace{0cm} & \vline\hspace{0.54cm}3142\ \ \ \ \ & & 3142 \\ \hspace{-1.51cm}\vline\hspace{0cm} & \vline\hspace{0.54cm}3214\ \ \ \ \ & & 3214 \\ \hspace{-1.51cm}\vline\hspace{0cm} & \vline\hspace{0.54cm}4213\ \ \ \ \ & & 4213 \\ \hspace{-1.51cm}\vline\hspace{0cm} & \vline\hspace{0.54cm}4321\ \ \ \ \ & & 4321 \\ \hline \end{tabular}\\ \end{tabular}} \\[1em] { \hspace*{0.2cm}\begin{tabular}{|c|c|c|c|c|c|} \hline $\{D_4\}$ & \multicolumn{5}{|c|}{$\{D_5\}$} \\ \hline \hspace{0.67cm}1324\hspace{0.67cm} & \hspace{0.45cm}13254\hspace{0.45cm} & \hspace{0.3cm}21354\hspace{0.3cm} &\hspace{0.25cm}31425\hspace{0.25cm} & \hspace{0.4cm}41325\hspace{0.4cm} &\hspace{0.35cm}52143\hspace{0.35cm} \\ 1432 & 13524 & 21435 & 31524 & 41352 & 52413 \\ 2143 & 13542 & 21543 & 31542 & 41532 & 52431 \\ 2431 & 14253 & 24135 & 32154 & 42135 & 53142 \\ 3142 & 14325 & 24153 & 32415 & 42153 & 53214 \\ 3214 & 14352 & 24315 & 32541 & 42531 & 53241 \\ 4213 & 15243 & 25314 & 35214 & 43152 & 54132 \\ 4321 & 15324 & 25413 & 35241 & 43215 & 54213 \\ & 15432 & 25431 & 35421 & 43521 & 54321 \\ \hline \end{tabular} \\[1em] \hspace*{0.2cm}\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{$\{d_5\}=\{d_{51}\}\cup \{D_5\}$} \\ \hline $\{d_{51}\}$ & \multicolumn{5}{|c|}{$\{D_5\}$} \\ \hline \hspace{0.6cm}51324\hspace{0.6cm} & \hspace{0.4cm}13254\hspace{0.4cm} &\hspace{0.3cm}21354\hspace{0.3cm} &\hspace{0.25cm}31425\hspace{0.25cm} & \hspace{0.45cm}41325\hspace{0.45cm} &\hspace{0.35cm}52143\hspace{0.35cm} \\ 51432 & 13524 & 21435 & 31524 & 41352 & 52413 \\ 25143 & 13542 & 21543 & 31542 & 41532 & 52431 \\ 24351 & 14253 & 24135 & 32154 & 42135 & 53142 \\ 35142 & 14325 & 24153 & 32415 & 42153 & 53214 \\ 32514 & 14352 & 24315 & 32541 & 42531 & 53241 \\ 42513 & 15243 & 25314 & 35214 & 43152 & 54132 \\ 43251 & 15324 & 25413 & 35241 & 43215 & 54213 \\ & 15432 & 25431 & 35421 & 43521 & 54321 \\ \hline \end{tabular}} \caption{Arrangements in $\{D_n\}$ and $\{d_n\}$ up to $n = 5$.}\label{tabA1} \end{table} \begin{table} \centering { \begin{tabular}{|r|r|r|r|c|} \hline $n$\quad & $Der_n\qquad $ & $D_n$\qquad\ & $d_n$\qquad\ & $Der_n - D_n$ \\ \hline 0 & 1 & 0 & & $+1$ \\ 1 & 0 & 1 & 1 & $-1$ \\ 2 & 1 & 0 & 1 & $+1$ \\ 3 & 2 & 3 & 3 & $-1$ \\ 4 & 9 & 8 & 11 & $+1$ \\ 5 & 44 & 45 & 53 & $-1$ \\ 6 & 265 & 264 & 309 & $+1$ \\ 7 & 1.854 & 1.855 & 2.119 & $-1$ \\ 8 & 14.833 & 14.832 & 16.687 & $+1$ \\ 9 & 133.496 & 133.497 & 148.329 & $-1$ \\ 10 & 1.334.961 & 1.334.960 & 1.468.457 & $+1$ \\ 11 & 14.684.570 & 14.684.571 & 16.019.531 & $-1$ \\ 12 & 176.214.841 & 176.214.840 & 190.899.411 & $+1$ \\ 13 & 
2.290.792.932 & 2.290.792.933 & 2.467.007.773 & $-1$ \\ 14 & 32.071.101.049 & 32.071.101.048 & 34.361.893.981 & $+1$ \\ 15 & 481.066.515.734 & 481.066.515.735 & 513.137.616.783 & $-1$ \\ \hline \end{tabular}} \caption{Values for $Der_n$, $D_n$ and $d_n$ up to $n = 15$.}\label{tabA2} \end{table} \ \end{document}
\begin{document} \begin{abstract} We study the spectral types of the families of discrete one-dimensional Schrödinger operators $\{H_\omega\}_{\omega\in\Omega}$, where the potential of each $H_\omega$ is given by $V_\omega(n)=f(T^n\omega)$ for $n\in\mathbb{Z}$, $T$ is an ergodic homeomorphism on a compact space $\Omega$ and $f:\Omega\rightarrow\mathbb{R}$ is a continuous function. We show that a generic operator $H_\omega\in \{H_\omega\}_{\omega\in\Omega}$ has purely continuous spectrum if $\{T^n\alpha\}_{n\geq0}$ is dense in $\Omega$ for a certain $\alpha\in\Omega$. We also establish the same result assuming only that $\{\Omega, T\}$ satisfies the topological repetition property (\textit{TRP}), a concept introduced by Boshernitzan and Damanik \cite{damanikBase}. The theorems presented in this paper weaken the hypotheses of the cited work while reaching the same conclusion. We also provide a proof of Gordon's lemma, which is the main tool used in this work. \end{abstract} \keywords{} \subjclass[]{} \date{\today} \maketitle \section{Introduction} A Schrödinger operator is the Hamiltonian that describes the dynamics of a conservative system of particles at the quantum scale in the absence of relativistic forces. Despite its phenomenological origin, the theory of Schrödinger operators stands as an autonomous branch of mathematics. Its study links notions from differential equations, geometric analysis, and measure theory, among other fields. In this paper we study the spectral types of discrete one-dimensional Schrödinger operators. This question is of interest because of its physical interpretation: in the system described by a Schrödinger operator, bound states are associated with the presence of point spectrum of the operator, while scattering states are associated with its continuous spectrum. Discrete one-dimensional Schrödinger operators are defined as: \begin{equation}\label{Sec1Eq0} \begin{aligned} H: \mathcal{D}(H)\subseteq\ell^2(\mathbb{Z})&\rightarrow \ell^2(\mathbb{Z})\\ \psi&\mapsto (\Delta_d+V)\psi \end{aligned} \end{equation} where $\Delta_d$ is the discrete Laplacian operator and $V$ is the potential function (the formal definition of these operators is presented in Section \ref{premilinaries}). The spectral theory of Schrödinger operators studies the spectrum of $H$ according to the properties of the function $V$.\\ Given a dynamical system $\{\Omega, T\}$ it is possible to define for each $\omega\in\Omega$ a potential function $V_\omega(n)=f(T^n\omega)$, where $f:\Omega \rightarrow \mathbb{R}$. This approach relates the theory of dynamical systems to the spectral theory of Schrödinger operators. In this context, the analysis focuses on studying the properties of the family of operators $\{H_\omega\}_{\omega\in\Omega}$, where each $H_\omega$ is given by: \begin{equation}\label{Sec1Eq1} \begin{aligned} H_\omega: \mathcal{D}(H_\omega)\subseteq\ell^2(\mathbb{Z})&\rightarrow\ell^2(\mathbb{Z})\\ \psi&\mapsto (\Delta_d+V_\omega)\psi. \end{aligned} \end{equation} If the transformation $T:\Omega\rightarrow \Omega$ is ergodic, then $\{H_\omega\}_{\omega\in\Omega}$ is a family of ergodic Schrödinger operators. The central question in this domain consists of two objectives: on the one hand, to determine the spectral types (point, absolutely continuous, and singular continuous) and the shape of the spectrum of the operator $H_\omega$ (equation \ref{Sec1Eq1}), and on the other hand, to describe the dynamics of physical systems associated with $H_\omega$.
A particular interest in the theory of ergodic Schrödinger operators lies in the study of properties that are satisfied for a generic element of $\{H_\omega\}_{\omega\in\Omega}$. The latter implies, from the topological point of view, studying the properties of an element in a residual subset of $\{H_\omega\}_{\omega\in\Omega}$, and from the measure-theoretical approach, studying the properties of an element in a subset of full measure of $\{H_\omega\}_{\omega\in\Omega}$. In this paper we study the spectral properties of a generic $H_\omega\in\{H_\omega\}_{\omega\in\Omega}$ from the topological point of view. \\ The main background of this work is the research of Boshernitzan and Damanik \cite{damanikBase}. They introduce the definitions of the topological and metric repetition properties (\textit{TRP} and \textit{MRP}, respectively) for the system $\{\Omega, T\}$, and study their implications for the absence of point spectrum of $\{H_\omega\}_{\omega\in\Omega}$, where the potential $V_\omega(n)=f(T^n\omega)$ is given by an ergodic homeomorphism $T$ on a compact space $\Omega$. Another related work is that of Avila and Damanik \cite{avilaDamanik}. These authors show, using tools from harmonic analysis and Kotani theory, that the absence of absolutely continuous spectrum is a generic property of $\{H_\omega\}_{\omega\in\Omega}$, where $T:\Omega \rightarrow \Omega$ is a nonperiodic homeomorphism.\\ The aim of this paper is to determine sufficient conditions for the purely continuous spectrum of a family of ergodic Schrödinger operators $\{H_\omega\}_{\omega\in\Omega}$ to be a generic property. Our strategy is to show that a generic operator $H_\omega\in\{H_\omega\}_{\omega\in\Omega}$ has no eigenvalues (and therefore its point spectrum is empty). This objective is similar to that of \cite{damanikBase} and \cite{avilaDamanik}, since we seek to rule out the presence of a certain spectral type in a generic Schrödinger operator. Our work is based on two theoretical tools: \begin{itemize} \item[\textit{(i)}] \textit{Gordon's Lemma}: this is a classical result of spectral theory that gives conditions for the absence of point spectrum of a Schrödinger operator. \item[\textit{(ii)}] \textit{TRP}: this property provides the system $\{\Omega, T\}$ with a structure that allows us to show, by means of Gordon's lemma, that a generic operator $H_\omega$ of the family $\{H_\omega\}_{\omega\in\Omega}$ has purely continuous spectrum (see Definition \ref{repeticionTopyMet}). \end{itemize} Our results extend the main theorem of Boshernitzan and Damanik (\cite{damanikBase}, p.650). These authors demonstrate that purely continuous spectrum is a generic property of $\{H_\omega\}_{\omega\in\Omega}$ using two hypotheses: that the system $\{\Omega, T\}$ is minimal and satisfies \textit{TRP}. We weaken the hypotheses of this result from two perspectives: first, we prove the previous result using only the hypothesis that $\{T^n\alpha\}_{n\geq0}$ is dense in $\Omega$ for a certain element $\alpha\in \Omega$ (specifically, for $\alpha\in PRP(T)$; see Definition \ref{PRP}). Then we demonstrate that \textit{TRP} is a sufficient condition for the purely continuous spectrum to be a generic property of the family of ergodic Schrödinger operators $\{H_\omega\}_{\omega\in\Omega}$. The following are the statements of the two main results of this paper. \begin{thA} \label{thA} Suppose that $\alpha\in PRP(T)$ and $\{T^n\alpha\}_{n\geq0}$ is dense in $\Omega$.
Then there exists a residual subset $\mathcal{F}$ of $C(\Omega)$ such that if $f\in\mathcal{F}$ then there exists a residual subset $\Omega_f$ of $\Omega$ with the property that $f(T^n\omega)$ is a Gordon potential, for every $\omega\in \Omega_f$. \end{thA} \begin{thB} \label{thB} Suppose that the dynamical system $\{\Omega, T\}$ satisfies \textit{TRP}. Then there exists a residual subset $\mathcal{F}$ of $C(\Omega)$ such that if $f\in\mathcal{F}$ then there exists a residual subset $\Omega_f$ of $\Omega$ with the property that $f(T^n\omega)$ is a Gordon potential, for every $\omega\in \Omega_f$. \end{thB} In addition to this introduction, the paper is divided into five further sections. In the second section we introduce the main definitions in which the work is framed. In the third, we state and prove Gordon's lemma. In the fourth and fifth sections, we prove Theorems A and B, respectively. Finally, in the sixth section, we discuss some applications. \section{Setting and main results} \label{premilinaries} In this section we introduce the formal definition of discrete Schrödinger operators. We also present the definition of the \textit{TRP} property and discuss some of its implications on the structure of a dynamical system $\{\Omega, T\}$. \subsection{Schrödinger operators.} Let $ \mathscr{L}^{2}(\mathbb{R}^n)$ be the space of square integrable functions, which is defined as \begin{equation*} \mathscr{L}^2(\mathbb{R}^n)=\{\Psi:\mathbb{R}^n\rightarrow \mathbb{C} \mid \int_{\mathbb{R}^n}| \Psi(q)|^2\, dq<\infty\} \end{equation*} where the inner product is given by: \begin{equation*} \langle \Psi,\Phi\rangle=\int_{\mathbb{R}^n}\overline{\Psi(q)}\Phi(q)\, dq. \end{equation*} The discrete version of $ \mathscr{L}^{2}(\mathbb{R}^n)$ is defined as: \begin{equation*} \ell^2(\mathbb{Z}^n)=\{\psi:\mathbb{Z}^n\rightarrow\mathbb{C}\mid \sum_{m\in\mathbb{Z}^n}|\psi(m)|^2<\infty\} \end{equation*} endowed with the inner product: \begin{equation*} \langle \psi,\varphi\rangle=\sum_{m\in\mathbb{Z}^n}\overline{\psi(m)}\varphi(m). \end{equation*} \begin{definition} \label{operdor de schrodinger} A Schrödinger operator is defined as a linear operator \begin{equation}\label{Kap2Eq5} \begin{aligned} \mathcal{H}: \mathcal{D}(\mathcal{H})\subseteq \mathscr{L}^2(\mathbb{R}^n)&\rightarrow \mathscr{L}^2(\mathbb{R}^n)\\ \Psi&\mapsto (\Delta+V)\Psi \end{aligned} \end{equation} where $\Delta$ is the Laplacian in $ \mathscr{L}^2(\mathbb{R}^n)$ and $V:\mathbb{R}^n\rightarrow\mathbb{R}$ is the potential function. \end{definition} Similarly, discrete Schrödinger operators are defined as: \begin{equation}\label{Kap2Eq8} \begin{aligned} H: \mathcal{D}(H)\subseteq\ell^2(\mathbb{Z}^n)&\rightarrow \ell^2(\mathbb{Z}^n)\\ \psi&\mapsto (\Delta_d+V)\psi \end{aligned} \end{equation} with $\Delta_d$ the discrete Laplacian in $\ell^2(\mathbb{Z}^n)$ and $V:\mathbb{Z}^n\rightarrow\mathbb{R}$ the potential function. \quad In this paper we focus on the one-dimensional case, which leads to the following definition, considering as discrete Laplacian in $\ell^2(\mathbb{Z})$ the operator $(\Delta_d\psi)(n)=\psi(n+1)+\psi(n-1)$.
\begin{definition} \label{Operador de Schrödinger unidimensional discreto} The discrete one-dimensional Schrödinger operator: \begin{equation}\label{Kap2Eq9} \begin{aligned} H: \mathcal{D}(H)\subseteq\ell^2(\mathbb{Z})&\rightarrow \ell^2(\mathbb{Z})\\ \psi&\mapsto (\Delta_d+V)\psi,\\ (H\psi)(n)&=\psi(n+1)+\psi(n-1)+V(n)\psi(n) \end{aligned} \end{equation} \end{definition} \begin{proposition} \label{proposition1} The operator $H$ (equation \ref{Kap2Eq9}) is self-adjoint. \end{proposition} \begin{proof} Let $\psi(n)$ and $\varphi(n)\in\ell^2(\mathbb{Z})$. Then, on the one hand: \begin{equation}\label{Kap2Eq91} \begin{aligned} \langle H\psi(n),\varphi(n)\rangle&=\langle (\Delta_d+V)\psi(n),\varphi(n)\rangle &&\text{by definition of $H$}\\ &=\langle \Delta_d\psi(n),\varphi(n)\rangle+\langle V(n)\cdot\psi(n),\varphi(n)\rangle&&\text{by definition of $\langle,\rangle$}\\ &=\langle \psi(n),\Delta_d\varphi(n)\rangle+\langle V(n)\cdot\psi(n),\varphi(n)\rangle&&\text{as $\Delta_d$ is self-adjoint} \end{aligned} \end{equation} On the other hand, note that: \begin{equation}\label{Kap2Eq10} \begin{aligned} \langle V(n)\cdot\psi(n),\varphi(n)\rangle&=\sum_{n\in\mathbb{Z}}\overline{V(n)\psi(n)}\varphi(n)&&\text{by definition of $\langle,\rangle$}\\ &=\sum_{n\in\mathbb{Z}}\overline{\psi(n)V(n)}\varphi(n)\\ &=\sum_{n\in\mathbb{Z}}\overline{\psi(n)}V(n)\varphi(n)&&\text{since $V(n)\in\mathbb{R}$}\\ &=\langle \psi(n),V(n)\cdot\varphi(n)\rangle&&\text{by definition of $\langle,\rangle$} \end{aligned} \end{equation} Replacing \eqref{Kap2Eq10} in \eqref{Kap2Eq91} we obtain: \begin{equation*} \begin{aligned} \langle H\psi(n),\varphi(n)\rangle&=\langle \psi(n),\Delta_d\varphi(n)\rangle+\langle \psi(n),V(n)\varphi(n)\rangle\\&=\langle \psi(n),(\Delta_d+V)\varphi(n)\rangle&&\text{by definition of $\langle,\rangle$}\\ &=\langle\psi(n),H\varphi(n)\rangle&&\text{by definition of $H$} \end{aligned} \end{equation*} from which we conclude that $H$ is self-adjoint. \end{proof} As a consequence of the previous proposition we have the following corollary. \begin{corollary} \label{corollary1} Let $H=\Delta_d + V$ be a discrete one-dimensional Schrödinger operator. If $V$ is bounded, then $H$ is a bounded self-adjoint operator. \end{corollary} \subsection{Discrete one-dimensional ergodic Schrödinger operators.} Let $\{\Omega, T\}$ be a dynamical system and consider $f:\Omega\rightarrow\mathbb{R}$. Each $\omega\in \Omega$ and its orbit under $T$ (\textit{i.e.} the set $\{T^n\omega\}_{n\in\mathbb{Z}}$) induces the definition of a potential function: \begin{equation}\label{Kap2Eq11} \begin{aligned} V_\omega:\mathbb{Z}&\rightarrow\mathbb{R}\\ n&\mapsto f(T^n\omega). \end{aligned} \end{equation} Consequently, each $\omega\in\Omega$ is associated with a Schrödinger operator \begin{equation}\label{Kap2Eq12} \begin{aligned} H_\omega: \mathcal{D}(H_\omega)\subseteq\ell^2(\mathbb{Z})&\rightarrow\ell^2(\mathbb{Z})\\ \psi&\mapsto (\Delta_d+V_\omega)\psi. \end{aligned} \end{equation} and $\{H_\omega\}_{\omega\in\Omega}$ is called a \textit{family of Schrödinger operators with dynamically defined potential}.\\ We denote by $\mathcal{S}(\mathcal{H})$ the space of self-adjoint operators in a Hilbert space, that is: \begin{equation*} \mathcal{S}(\mathcal{H})=\{H:\mathcal{D}(H)\subseteq\mathcal{H}\rightarrow\mathcal{H}\mid \text{$H$ is self-adjoint}\} \end{equation*} The following definition relates ergodic theory to Schrödinger operators.
\begin{definition}\label{ergodic schrodiger operator} Let $(\Omega,\mathcal{B},\mu)$ be a probability space and consider: \begin{equation*} \begin{aligned} A:\Omega&\rightarrow\mathcal{S}(\mathcal{H})\\ \omega&\mapsto H_\omega \end{aligned} \end{equation*} The operator $H_\omega$ is an \textit{ergodic operator} if there exists a family of ergodic transformations $\{T_i\}_{i\in\mathbb{Z}}$ in $\Omega$ and a family of unitary operators $\{U_i\}_{i\in\mathbb{Z}}$ in $\mathcal{H}$ such that: \begin{equation*} H_{T_i(\omega)}=U_i^* H_{\omega}U_i \end{equation*} where $U_i^*$ is the adjoint operator of $U_i$. \end{definition} \begin{lemma} \label{lema22-1} Let $(\Omega, \mathcal{B},\mu)$ be a probability space, $\{\Omega, T\}$ a dynamical system with $T$ a $\mu$-ergodic transformation and $f:\Omega\rightarrow\mathbb{R}$ a given function. Then, the operator $H_\omega$ (equation \ref{Kap2Eq12}) is ergodic. \end{lemma} \begin{proof} Consider the transformation: \begin{equation*} \begin{aligned} A: \Omega&\rightarrow\mathcal{S}(\mathcal{H})\\ \omega&\mapsto H_\omega=\Delta_d+V_\omega \end{aligned} \end{equation*} By hypothesis $T$ is $\mu$-ergodic, so $\{T^t\}_{t\in\mathbb{Z}}$ is a family of ergodic transformations on $\Omega$. To prove that $H_\omega$ is an ergodic operator, we need to find a family of unitary operators $\{U_t\}_{t\in\mathbb{Z}}$ such that: \begin{equation}\label{Kap2Eq13} H_{T^t(\omega)}=U_t^* H_{\omega}U_t \end{equation} For each $t\in\mathbb{Z}$, consider the translation operator on $\ell^2(\mathbb{Z})$: \begin{equation*} \begin{aligned} U_t: \ell^2(\mathbb{Z})&\rightarrow \ell^2(\mathbb{Z})\\ \psi(n)&\mapsto\psi(n+t) \end{aligned} \end{equation*} Notice that each operator $U_t$ is unitary: it is clearly surjective and, for every $\psi\in\ell^2(\mathbb{Z})$, \begin{equation*} \lVert U_t \psi\rVert^2=\sum_{n\in\mathbb{Z}}|\psi(n+t)|^2=\sum_{n\in\mathbb{Z}}|\psi(n)|^2=\lVert\psi\rVert^2. \end{equation*} Therefore, $\{U_t\}_{t\in\mathbb{Z}}$ is a family of unitary operators on $\ell^2(\mathbb{Z})$. Additionally, we have that: \begin{equation*} U_t^*=U_{-t},\quad\forall t\in\mathbb{Z} \end{equation*} Let $\psi\in\ell^2(\mathbb{Z})$ and $n\in\mathbb{Z}$. Then: \begin{equation}\label{Kap2Eq14} \begin{aligned} H_{T^t(\omega)}\psi(n)=(\Delta_d+V_{T^t\omega})\psi(n)&=\Delta_d \psi(n)+V_{T^t\omega}\psi(n)\\ &=\psi(n+1)+\psi(n-1)+f(T^{n}[T^t\omega])\psi(n)\\ &=\psi(n+1)+\psi(n-1)+f(T^{n+t}\omega)\psi(n) \end{aligned} \end{equation} Furthermore: \begin{equation}\label{Kap2Eq15} \begin{aligned} U_{-t}H_\omega U_t \psi(n)=U_{-t}H_\omega \psi(n+t)&=U_{-t}[(\Delta_d+V_\omega)\psi(n+t)]\\ &=U_{-t}[\Delta_d \psi(n+t)+V_\omega\psi(n+t)]\\ &=U_{-t}(\psi(n+t+1)+\psi(n+t-1)+f(T^{n+t}\omega)\psi(n+t))\\ &=\psi(n+1)+\psi(n-1)+f(T^{n+t}\omega)\psi(n) \end{aligned} \end{equation} From equations \eqref{Kap2Eq14} and \eqref{Kap2Eq15} we obtain: \begin{equation*} H_{T^t(\omega)}\psi(n)=U_t^* H_{\omega}U_t\psi(n),\quad\forall\psi\in\ell^2(\mathbb{Z}) \end{equation*} \end{proof} Lemma \ref{lema22-1} allows us to conclude that $H_\omega$ (equation \ref{Kap2Eq12}) is an ergodic operator. Therefore, if $T$ is an ergodic homeomorphism on $\Omega$, then $\{H_\omega\}_{\omega\in\Omega}$ is a family of discrete one-dimensional ergodic Schrödinger operators.
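To make the previous construction concrete, the following minimal numerical sketch (not part of the original argument; the rotation number, the sampling function and the base point are illustrative choices) builds the dynamically defined potential $V_\omega(n)=f(T^n\omega)$ for an irrational rotation of the circle and checks the identity $V_{T\omega}(n)=V_\omega(n+1)$, which is the potential-level counterpart of the covariance relation \eqref{Kap2Eq13} established in Lemma \ref{lema22-1}.
\begin{verbatim}
import numpy as np

# Illustrative choices (not from the paper): Omega = [0,1) with the rotation
# T(w) = w + alpha (mod 1) and the continuous sampling function f(w) = cos(2*pi*w).
alpha = np.sqrt(2.0) - 1.0
T = lambda w: (w + alpha) % 1.0
f = lambda w: np.cos(2.0 * np.pi * w)

omega = 0.3                    # an arbitrary base point
ns = np.arange(-50, 51)        # a finite window of the lattice Z

V_omega  = f((omega + ns * alpha) % 1.0)        # V_omega(n)    = f(T^n omega)
V_Tomega = f((T(omega) + ns * alpha) % 1.0)     # V_{T omega}(n)
V_shift  = f((omega + (ns + 1) * alpha) % 1.0)  # V_omega(n+1)

# Potential-level covariance: V_{T omega}(n) = V_omega(n + 1).
assert np.allclose(V_Tomega, V_shift)

# Finite (Dirichlet) truncation of H_omega = Delta_d + V_omega on the window,
# included only to fix ideas about the operator acting on ell^2(Z).
H = np.diag(V_omega) + np.diag(np.ones(len(ns) - 1), 1) + np.diag(np.ones(len(ns) - 1), -1)
\end{verbatim}
The check is numerical only; the exact identity $V_{T\omega}(n)=f(T^{n+1}\omega)=V_\omega(n+1)$ is of course immediate from the definitions.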
\subsection{Repetition property and the dynamical system \texorpdfstring{$\{\Omega, T\}$}{(O,T)} } \label{sec: espectro continuo topología} \begin{definition} \label{repetición} Let $(\Omega, d)$ be a compact metric space. The sequence $\{\omega_n\}_{n\geq0}\subseteq\Omega$ satisfies \textit{the repetition property} (\textit{RP}) if for all $\varepsilon>0$ and $r\in\mathbb{Z}_+$ there exists $q\in\mathbb{Z}_+$ such that $d(\omega_n,\omega_{n+q})<\varepsilon$ for all $n\in\{0,1,\ldots,rq\}$. \end{definition} Figure \ref{figure1:PR} illustrates \textit{RP} for $\varepsilon>0$ with parameter $r=4$. \begin{figure}[h] \centering \caption{\small Graphic representation of property \textit{RP}.} \label{figure1:PR} \label{fig:PR} \end{figure} Property \textit{RP} is independent of the metric defined on $\Omega$ if the latter is compact. This is demonstrated in the following lemma. \begin{lemma} \label{métricas equivalentes} Let $(\Omega, d_1)$ and $(\Omega, d_2)$ be compact metric spaces and suppose that $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_1)$. Then, $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_2)$. \end{lemma} \begin{proof} Suppose that $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_1)$. Let $\varepsilon=\frac{1}{k}$ and $r=k$ for $k\in\mathbb{Z}_+$ in Definition \ref{repetición}. Consider $B_{d_1}(\omega_i, \frac{1}{k})$ the open ball with center at $\omega_i$ and radius $\frac{1}{k}$ in the space $(\Omega, d_1)$. Observe that: \begin{equation*} \bigcup_{i=0}^{kq}B_{d_1}(\omega_i, \frac{1}{k}) \cup A \end{equation*} is a finite covering of $\Omega$ in the space $(\Omega, d_1)$, where $A$ is an open set in $(\Omega, d_1)$ that covers $\Omega\setminus\{\omega_0,\ldots, \omega_{kq}\}$. In the same way: \begin{equation*} \bigcup_{i=0}^{\overline{k}\overline{q}}B_{d_2}(\omega_i, \frac{1}{\overline{k}}) \cup B \end{equation*} is a finite covering of $\Omega$ in the space $(\Omega, d_2)$, where $B_{d_2}(\omega_i, \frac{1}{\overline{k}})$ is the open ball with center at $\omega_i$ and radius $\frac{1}{\overline{k}}$, and $B$ is an open set that covers $\Omega\setminus\{\omega_0,\ldots, \omega_{\overline{k}\overline{q}}\}$ in the space $(\Omega, d_2)$. \\ Given $\overline{k}\in\mathbb{Z}_+$, there exists $k\in\mathbb{Z}_+$ such that $\frac{1}{k}<\frac{1}{\overline{k}}$. Since $B_{d_1}$ and $B_{d_2}$ are open balls on $(\Omega, d_1)$ and $(\Omega, d_2)$ respectively, for every $x\in\Omega$ we have that $B_{d_1}(x,\frac{1}{k})\subset B_{d_2}(x,\frac{1}{\overline{k}})$ or $B_{d_2}(x,\frac{1}{\overline{k}})\subset B_{d_1}(x,\frac{1}{k})$. We will examine each of these two cases.\\ First, we assume that $B_{d_1}(x,\frac{1}{k})\subset B_{d_2}(x,\frac{1}{\overline{k}})$ and consider $x=\omega_n$, for $n\in\{0,\ldots, kq\}$. By hypothesis $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_1)$, which means that $\omega_{n+q}\in B_{d_1}(\omega_n,\frac{1}{k})$ for all $n\in\{0,\ldots, kq\}$. As a result: \begin{equation*} \omega_{n+q}\in B_{d_1}(\omega_n,\frac{1}{k})\subset B_{d_2}(\omega_n,\frac{1}{\overline{k}}) \quad\Rightarrow\quad\omega_{n+q}\in B_{d_2}(\omega_n,\frac{1}{\overline{k}}), \quad \forall n\in\{0,\ldots, kq\}, \end{equation*} Taking $\overline{q}=q$, we conclude that $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_2)$.\\ Now we assume that $B_{d_2}(x,\frac{1}{\overline{k}})\subset B_{d_1}(x,\frac{1}{k})$.
This implies, for $x=\omega_n$ with $n\in\{0,\ldots, kq\}$, that if $d_1(\omega_n,y)<\frac{1}{k}$, then $d_2(\omega_n,y)<\frac{1}{k}$. By hypothesis $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_1)$, therefore $d_1(\omega_n,\omega_{n+q})<\frac{1}{k}$ for $n\in\{0,\ldots,kq\}$. Then: \begin{equation*} \begin{aligned} B_{d_2}(\omega_n,\frac{1}{\overline{k}})\subset B_{d_1}(\omega_n,\frac{1}{k})\quad&\Rightarrow\quad d_2(\omega_n,\omega_{n+q})<d_1(\omega_n,\omega_{n+q})<\frac{1}{k}\\ &\Rightarrow \quad d_2(\omega_n, \omega_{n+q})<\frac{1}{k}, \quad \forall n\in\{0,\ldots, kq\} \end{aligned} \end{equation*} Hence $\{\omega_n\}_{n\geq0}$ satisfies \textit{RP} on $(\Omega, d_2)$. \end{proof} \quad Now we define the set \textit{PRP(T)}, which relates property \textit{RP} to dynamical systems. \begin{definition} \label{PRP} Let $\{\Omega,T\}$ be a dynamical system. $PRP(T)$ is defined as the set of points in $\Omega$ such that $\{T^n\omega\}_{n\geq0}$ satisfies \textit{RP}. \begin{equation}\label{Kap4Eq2} PRP(T)=\{\omega\in\Omega \mid \forall \varepsilon>0, r\in\mathbb{Z}_+ \quad \exists \ q\in\mathbb{Z}_+: d(T^n\omega,T^{n+q}\omega)<\varepsilon \text{ for } 0\leq n\leq rq\} \end{equation} \end{definition} Considering $\varepsilon(k)=\frac{1}{k}$, $k\in\mathbb{Z}_+$, the set $PRP(T)$ can be expressed as: \begin{equation}\label{Kap4Eq3} PRP(T)=\{\omega\in\Omega \mid \forall k\in\mathbb{Z}_+ \quad\exists \ q\in\mathbb{Z}_+: d(T^n\omega,T^{n+q}\omega)<\frac{1}{k} \text{ for } 0\leq n\leq kq\} \end{equation} \begin{lemma}\label{q infinito} Consider $\omega\in PRP(T)$. Given $k$ and $r\in\mathbb{Z}_+$, let $q_k$ be such that $d(T^n\omega,T^{n+q_k}\omega)<\frac{1}{k}$, for $0\leq n\leq rq_k$. Then: \begin{equation*} \lim_{k\rightarrow\infty}q_k=\infty \end{equation*} \end{lemma} \begin{proof} Let $\omega\in PRP(T)$. Reasoning by contradiction, we assume that there exists $L\in\mathbb{Z}_+$ such that $q_k<L$ for every positive integer $k$. By Definition \ref{PRP}, there exists $K\in\mathbb{Z}_+$ such that for $n\in\{0,\ldots, KL\}$, we have \begin{equation*} d(T^n\omega,T^{n+L}\omega)\geq\frac{1}{K}, \end{equation*} otherwise $q_K=L$, which does not satisfy $q_k<L$ for every $k\in\mathbb{Z}_+$. On the other hand, as $\{T^n\omega\}_{n\geq0}$ satisfies \textit{RP}, then for every $k$ there exists $q_k\in\{0,1,\ldots, L-1\}$ with $d(T^n\omega,T^{n+q_k}\omega)<\frac{1}{k}$, where $0\leq n\leq kq_k$.\\ Let $\varepsilon_0=\min_{q_k}d(T^n\omega,T^{n+q_k}\omega)$ and $m\in\mathbb{Z}_+$ such that $0<\frac{1}{m}<\min\{\frac{1}{K},\varepsilon_0\}$. Therefore, as $\omega\in PRP(T)$, there must be $q_m\in\mathbb{Z}_+$ with $d(T^n\omega, T^{n+q_m}\omega)<\frac{1}{m}$ for every $0\leq n\leq mq_m$. However, since $\frac{1}{m}<\varepsilon_0$, then $q_m\notin\{0,1,\ldots, L-1\}$. It follows that $q_m\geq L$, but this contradicts $q_k<L$ for all $k$. We conclude that $\lim_{k\rightarrow\infty}q_k=\infty$. \end{proof} In the following two lemmas we study some properties of the set $PRP(T)$. \begin{lemma}\label{PRP es G_delta} $PRP(T)$ is a $G_\delta$ set (i.e., a countable intersection of open subsets) of $\Omega$. \end{lemma} \begin{proof} Let $k$ and $q$ be positive integers and consider: \begin{equation*} A_k(q)=\left\{\omega\in\Omega\mid \max_{0\leq n\leq kq} d(T^n\omega, T^{n+q}\omega)<\frac{1}{k}\right\} \end{equation*} We claim that $A_k(q)$ is an open subset of $\Omega$.
Indeed, for fixed $n$ and $q$, consider the function $f$: \begin{equation*} \begin{aligned} f:\Omega&\rightarrow\Omega\times\Omega\\ \omega&\mapsto (T^n\omega, T^{n+q}\omega) \end{aligned} \end{equation*} We observe that $f$ is continuous, since by hypothesis $T:\Omega\rightarrow\Omega$ is a homeomorphism. Furthermore, the metric $d:\Omega\times \Omega\rightarrow \mathbb{R}^+$ is a continuous function, so that $d\circ f:\Omega\rightarrow\mathbb{R}^+$ is continuous; thus the preimage \begin{equation*} (d\circ f)^{-1}\Big(\big[0,\tfrac{1}{k}\big)\Big) \end{equation*} is open in $\Omega$, since $[0,\frac{1}{k})$ is an open set in $\mathbb{R}^+$, and $A_k(q)$ is the finite intersection of these open sets over $0\leq n\leq kq$. We conclude that $A_k(q)$ is an open set in $\Omega$.\\ Now, we will show the following equality \begin{equation}\label{Kap4Eq4} \bigcap_{k\geq1}\bigcap_{m\geq1}\bigcup_{q\geq m} A_k(q)= PRP(T) \end{equation} Let $\omega\in \bigcap_{k\geq1}\bigcap_{m\geq1}\bigcup_{q\geq m} A_k(q)$. Then for every $k$ and $m$ in $\mathbb{Z}_+$ there exists $q\geq m$ such that $d(T^n\omega, T^{n+q}\omega)<\frac{1}{k}$, for all $n\in\{0,\ldots,kq\}$. Then, by Definition \ref{PRP}, we have that $\omega\in PRP(T)$, i.e., \begin{equation*} \bigcap_{k\geq1}\bigcap_{m\geq1}\bigcup_{q\geq m} A_k(q)\subseteq PRP(T) \end{equation*} In a similar way, if $\omega\in PRP(T)$ then for every $k\in\mathbb{Z}_+$ there exists $q$ such that $d(T^n\omega, T^{n+q}\omega)<\frac{1}{k}$ for $0\leq n\leq kq$, so that $\omega \in A_k(q)$; moreover, by Lemma \ref{q infinito} such $q$ can be taken arbitrarily large, hence: \begin{equation*} \omega\in \bigcap_{k\geq1}\bigcap_{m\geq1}\bigcup_{q\geq m} A_k(q) \end{equation*} Thus $PRP(T)$ is a countable intersection of open sets and therefore a $G_\delta$ set.\end{proof} \begin{lemma} \label{PRP(T) es invariante} $PRP(T)$ is $T$-invariant, that is, $T(PRP(T))\subseteq PRP(T)$. \end{lemma} \begin{proof} Given $\omega\in PRP(T)$, let us show $T\omega\in PRP(T)$, \textit{i.e.} for all $k\in\mathbb{Z}_+$ there exists $q$ such that $d(T^n(T\omega), T^{n+q}(T\omega))<\frac{1}{k}$, for $0\leq n\leq kq$.\\ If $\omega\in PRP(T)$, then there exists $q\in\mathbb{Z}_+$ such that $d(T^n\omega, T^{n+q}\omega)<\frac{1}{k+1}$, for $0\leq n\leq (k+1)q$. This implies in particular that for $0\leq n\leq kq$: \begin{equation*} d(T^{n+1}\omega, T^{n+1+q}\omega)<\frac{1}{k+1}\quad\Rightarrow\quad d(T^n(T\omega), T^{n+q}(T\omega))<\frac{1}{k} \end{equation*} Then $\{T^n(T\omega)\}_{n\geq0}$ satisfies \textit{RP}, so $T\omega\in PRP(T)$. \end{proof} Consequently, we have that $\{T^n\omega\}_{n\geq0}\subseteq PRP(T)$ for all $\omega\in PRP(T)$. \begin{definition} \label{repeticionTopyMet} (BD, \cite{damanikBase} \textit{p.650}). We say that the dynamical system $\{\Omega,T\}$ satisfies: \begin{itemize} \item[\textit{(i)}] \textit{Topological repetition property (TRP)}, if $PRP(T)$ is dense in $\Omega$. \item[\textit{(ii)}] \textit{Metric repetition property (MRP)}, if $\mu(PRP(T))>0$, where $(\Omega, \mathcal{B}, \mu)$ is a measure space. \item[\textit{(iii)}] \textit{Global repetition property (GRP)}, if $PRP(T)=\Omega$. \end{itemize} \end{definition} To conclude this section, in the next lemmas we will demonstrate that \textit{TRP} and \textit{MRP} are sufficient conditions to guarantee that the set $PRP(T)$ is generic in $\Omega$ from the topological and measure-theoretical points of view, respectively. \begin{lemma} \label{TRP implica residual} If $\{\Omega, T\}$ satisfies \textit{TRP}, then the set $PRP(T)$ is residual in $\Omega$.\end{lemma} \begin{proof} By Lemma \ref{PRP es G_delta}, the set $PRP(T)$ is $G_\delta$.
By hypothesis, $PRP(T)$ is dense; hence $PRP(T)$ is a dense $G_\delta$ set and therefore, by definition, residual in $\Omega$. \end{proof} \begin{lemma} \label{transitividad implica TRP} Let $\alpha\in PRP(T)$ be such that $\{T^n\alpha\}_{n\geq0}$ is dense in $\Omega$. Then: \begin{itemize} \item[\textit{(i)}] The system $\{\Omega, T\}$ satisfies \textit{TRP}. \item[\textit{(ii)}] Given $q\in\mathbb{Z}_+$, the set $\{T^{q+j}\alpha\}_{j\geq0}$ is dense in $\Omega$. \end{itemize} \end{lemma} \begin{proof} \textit{(i)}\quad Let $\alpha\in PRP(T)$. By Lemma \ref{PRP(T) es invariante}, $\{T^n\alpha\}_{n\geq0}\subseteq PRP(T)$. Therefore, if $\{T^n\alpha\}_{n\geq0}$ is dense in $\Omega$, then \textit{PRP(T)} contains a dense subset of $\Omega$, and consequently \textit{PRP(T)} is dense in $\Omega$.\\ \textit{(ii)}\quad Let $\omega\in\Omega$. Define the set: $A=\{\alpha, T\alpha,\ldots, T^{q-1}\alpha\}$. As $A$ is finite, there exists $\delta_0>0$ such that $B_{\delta_0}(\omega)$, i.e. the open ball centered at $\omega$ with radius $\delta_0$, is disjoint from $A$: \begin{equation*} B_{\delta_0}(\omega) \cap A=\emptyset \end{equation*} Since, by hypothesis, the set $\{T^n\alpha\}_{n\geq0}$ is dense in $\Omega$, there exists $j\geq0$ such that $T^j\alpha\in B_{\delta_0}(\omega)$ and therefore: \begin{equation*} \{T^n\alpha\}_{n\geq0}\cap B_{\delta_0}(\omega)\neq\emptyset \end{equation*} As $B_{\delta_0}(\omega) \cap A=\emptyset$ and $T^j\alpha\in B_{\delta_0}(\omega)$, then $j\geq q$. We conclude that $\{T^{q+j}\alpha\}_{j\geq0}$ is dense in $\Omega$.\end{proof} \begin{lemma} \label{MRP implica medida completa} If $\{\Omega, T\}$ satisfies \textit{MRP} and $T$ is $\mu$-ergodic, then $\mu(PRP(T))=\mu(\Omega)$. \end{lemma} \begin{proof} To simplify calculations, let us denote $PRP(T)=A$. Assume that the system $\{\Omega, T\}$ satisfies \textit{MRP} and $\mu(A)>0$. We will show that $\mu(A)=1$. Let \begin{equation*} B=\bigcup_{n\geq0}T^{-n}A \end{equation*} then $T^{-1}B=\bigcup_{n\geq1}T^{-n}A$, therefore $T^{-1}B\subseteq B$. This implies that $\mu(B)=0$ or $\mu(B)=1$, as $T$ is $\mu$-ergodic. As $A\subseteq B$ and $\mu(A)>0$ by hypothesis, then $\mu(B)>0$, and therefore $\mu(B)=1$.\\ Furthermore, since $\mu$ is $T$-invariant, $\mu(T^{-1}(B))=1$. We note that: \begin{equation*} \begin{aligned} 1=\mu(T^{-1}(B))&=\mu(A)+\mu(T^{-1}(B)\setminus A) \end{aligned} \end{equation*} Moreover \begin{equation*} \begin{aligned} 1=\mu(B)&=\mu(A)+\mu(B\setminus A) \end{aligned} \end{equation*} then we have that $\mu(B\setminus A)=\mu(T^{-1}(B)\setminus A)$.\\ Now suppose that $\mu(B\setminus A)>0$. Note that $T^{-1}(B\setminus A)\subseteq B\setminus A$, indeed: \begin{equation*} \begin{aligned} x\in T^{-1}(B\setminus A)\Rightarrow T(x)\in B\setminus A& \Rightarrow T(x) \in B \ \text{and} \ T(x)\notin A\\ &\Rightarrow x \notin A &&\text{by Lemma \ref{PRP(T) es invariante}}\\ &\Rightarrow x\in B\setminus A&&\text{because $T^{-1}B\subseteq B$} \end{aligned} \end{equation*} By ergodicity of $T$, we have that $\mu(B\setminus A)=1$, so $\mu(A)=0$, which contradicts the hypothesis $\mu(A)>0$. Therefore $\mu(B\setminus A)=0$, and we conclude that $\mu(A)=1$, \textit{i.e.} $\mu(PRP(T))=1$. \end{proof}
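Before turning to Gordon's lemma, we illustrate the repetition property in the simplest concrete situation. The following minimal numerical sketch (not part of the original text; the rotation number and the tolerances are illustrative choices) searches for a period $q$ witnessing the condition of Definition \ref{repetición} for an irrational rotation of the circle. Since the circle metric is translation invariant, $d(T^n\omega,T^{n+q}\omega)=d(\omega,T^q\omega)$ for every $n$, so it suffices to make $d(\omega,T^q\omega)$ small; this anticipates the isometry argument of Theorem \ref{thc} in Section \ref{section4}.
\begin{verbatim}
import numpy as np

def circle_dist(x, y):
    # distance on the circle R/Z
    t = abs((x - y) % 1.0)
    return min(t, 1.0 - t)

def repetition_period(omega, alpha, eps, r, q_max=100000):
    # Brute-force search for q such that d(T^n omega, T^{n+q} omega) < eps
    # for all 0 <= n <= r*q, where T(w) = w + alpha (mod 1).
    for q in range(1, q_max):
        if all(circle_dist(omega + n * alpha, omega + (n + q) * alpha) < eps
               for n in range(r * q + 1)):
            return q
    return None

alpha, omega = np.sqrt(2.0) - 1.0, 0.3   # illustrative choices
for k in (2, 5, 10, 50):
    q = repetition_period(omega, alpha, eps=1.0 / k, r=k)
    print(k, q)   # witnessing periods grow as eps shrinks (cf. the lemma on q_k)
\end{verbatim}
For such rotations the existence of these periods also follows from the pigeonhole principle; the sketch is only meant to make the quantifiers in Definition \ref{PRP} tangible.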
Boshernitzan and Damanik demonstrate that \textit{MRP} is a sufficient condition for the continuous spectrum to be a generic property, from the measure-theoretical point of view, of the family of ergodic Schrödinger operators $\{H_\omega\}_{\omega\in\Omega}$ (\cite{damanikBase}, \textit{Theorem 2}, p.650). However, this result is beyond the scope of this paper. In the next section we will study Gordon's lemma, a result that will allow us to show that the continuous spectrum is a generic property, from the topological point of view, of $\{H_\omega\}_{\omega\in\Omega}$. \section{Gordon's lemma} \label{Gordon} Gordon's lemma is the main analytical tool that supports Theorems \textbf{A} and \textbf{B}. In this section we present the definition of Gordon potential and study some properties of the matrix representation of Schrödinger operators. We also provide a detailed proof of Gordon's lemma. \subsection{Gordon potential} \begin{definition}\label{Potencial de Gordon} A bounded function $V:\mathbb{Z}\rightarrow\mathbb{R}$ is a \textit{Gordon potential} if there exists a sequence of periodic functions $\{V_m\}_{m\in\mathbb{Z}_+}: \mathbb{Z}\rightarrow\mathbb{R}$, where $V_m(n)=V_m(n+T_m)$ and $T_m\rightarrow\infty$, that satisfies the following two conditions: \begin{itemize} \item[\textit{(i)}] $\sup_{n,m}|V_m(n)|<\infty$. \item[\textit{(ii)}] $\sup_{|n|\leq 2T_m}|V_m(n)-V(n)|\leq Cm^{-T_m}$, for a $C>0$. \end{itemize} \end{definition} A Gordon potential is essentially a bounded function that, on the windows $|n|\leq 2T_m$, is extremely well approximated by periodic functions. In addition to any periodic function, a nontrivial example of a Gordon potential is $V(n)=\sin(an)+\sin(bn)$ for suitably chosen frequencies $a$ and $b$ with $\frac{a}{b}\notin\mathbb{Q}$ (the approximation required in Definition \ref{Potencial de Gordon} demands, in addition, that $a$ and $b$ be simultaneously very well approximated by rational multiples of $2\pi$). Gordon's lemma, which is stated below, describes the spectral type of the discrete Schrödinger operator in $\ell^2(\mathbb{Z})$: \begin{equation}\label{Kap3Eq100} (H\psi)(n)=\psi(n+1)+\psi(n-1)+V(n)\psi(n) \end{equation} whose potential function $V(n)$ satisfies Definition \ref{Potencial de Gordon}. \begin{theorem} (Gordon's lemma) Let $V:\mathbb{Z}\rightarrow\mathbb{R}$ be a Gordon potential. For $E\in\mathbb{C}$, if $\psi$ is a nontrivial solution to \begin{equation}\label{Kap3Eq9} \psi(n+1)+\psi(n-1)+V(n)\psi(n)=E\cdot \psi(n) \end{equation} then \begin{equation}\label{Kap3Eq10} \limsup_{|n|\rightarrow\infty}\frac{\psi(n+1)^2+\psi(n)^2}{\psi(1)^2+\psi(0)^2}\geq\frac{1}{4} \end{equation} \end{theorem} Since $\psi$ is nontrivial, $\psi(0)^2+\psi(1)^2>0$, and equation \eqref{Kap3Eq10} implies that $\psi(n+1)^2+\psi(n)^2$ does not tend to $0$ as $|n|\rightarrow\infty$. Therefore: \begin{equation}\label{Kap3Eq110} \lVert\psi\rVert^2=\sum_{n\in\mathbb{Z}}|\psi(n)|^2=\infty \end{equation} As a consequence of equation \eqref{Kap3Eq110} we conclude that $\psi\notin\ell^2(\mathbb{Z})$; therefore the operator $H$ in $\ell^2(\mathbb{Z})$ (equation \ref{Kap3Eq100}) has no eigenvalues (\textit{i.e.}, its point spectrum is empty). Before stating a characterization of Gordon potentials (Theorem \ref{Equivalencia potencial de Gordon}) that will be useful later in the proofs of Theorems A and B, we illustrate the quantities involved in Definition \ref{Potencial de Gordon} with a small numerical sketch.
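The sketch below (again, not part of the original argument; the sample potentials are illustrative choices) computes the quantity $\max_{1\leq n\leq q}|V(n)-V(n\pm q)|$ that appears in the characterization, so that candidate periods $q$ can be compared with the super-exponential threshold $Cm^{-q}$ of Definition \ref{Potencial de Gordon}.
\begin{verbatim}
import numpy as np

def gordon_defect(V, q):
    # max over 1 <= n <= q of |V(n) - V(n + q)| and |V(n) - V(n - q)|.
    fwd = max(abs(V(n) - V(n + q)) for n in range(1, q + 1))
    bwd = max(abs(V(n) - V(n - q)) for n in range(1, q + 1))
    return max(fwd, bwd)

# A p-periodic potential has vanishing defect (up to rounding) along q = m*p,
# so the bound "defect <= C * m^(-q)" holds trivially.
V_per = lambda n: np.cos(2.0 * np.pi * n / 7.0)
print([gordon_defect(V_per, 7 * m) for m in (1, 2, 3)])

# A quasi-periodic sample with a badly approximable frequency: along the
# continued-fraction denominators of alpha the defect decreases, but only
# slowly, far from the super-exponential smallness required in the definition.
alpha = np.sqrt(2.0) - 1.0
V_qp = lambda n: np.cos(2.0 * np.pi * n * alpha)
for q in (5, 12, 29, 70):
    print(q, gordon_defect(V_qp, q))
\end{verbatim}
These numbers are only meant to make the quantities in the theorem tangible; they do not by themselves decide whether a given potential is a Gordon potential.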
\begin{theorem}\label{Equivalencia potencial de Gordon} A bounded function $V:\mathbb{Z}\rightarrow\mathbb{R}$ is a Gordon potential if and only if there exists a sequence $\{q_m\}_{m\in\mathbb{Z}_+}$ such that $q_m\rightarrow\infty$ and for a $C>0$ and every $m\geq1$: \begin{equation*} \max_{1\leq n\leq q_m}|V(n)-V(n+q_m)|\leq Cm^{-q_m} \quad \text{and} \quad \max_{1\leq n\leq q_m}|V(n)-V(n-q_m)|\leq Cm^{-q_m} \end{equation*} \end{theorem} \begin{proof} ``$\Rightarrow$'' Suppose that $V(n)$ is a Gordon potential. Using the triangle inequality and the fact that $V_m$ is a periodic function with period $T_m$: \begin{equation*} \begin{aligned} \max_{1\leq n\leq T_m}|V(n)-V(n\pm T_m)|&=\max_{1\leq n\leq T_m}|V(n)-V_m(n)+V_m(n\pm T_m)-V(n\pm T_m)|\\ &\leq \sup_{1\leq n\leq T_m}|V_m(n\pm T_m)-V(n\pm T_m)|+\sup_{1\leq n\leq T_m}|V_m(n)-V(n)|\\ &\leq 2Cm^{-T_m} \end{aligned} \end{equation*} By taking $q_m=T_m$ and $\hat{C}=2C$, we conclude that: \begin{equation*} \max_{1\leq n\leq q_m}|V(n)-V(n\pm q_m)|\leq\hat{C}m^{-q_m} \end{equation*} where $\lim_{m\rightarrow\infty}q_m=\infty$.\\ ``$\Leftarrow$'' Suppose that there exists $C>0$ such that $\max_{1\leq n\leq q_m}|V(n)-V(n\pm q_m)|\leq Cm^{-q_m}$ for a sequence of positive integers $\{q_m\}_{m\in\mathbb{Z}_+}$ such that $q_m\rightarrow\infty$. We want to construct, for each $m\in\mathbb{Z}_+$, a periodic function $V_m:\mathbb{Z}\rightarrow\mathbb{R}$ with period $T_m$, such that $\sup_{n,m}|V_m(n)|<\infty$ and $\sup_{|n|\leq 2T_m}|V_m(n)-V(n)|\leq Cm^{-T_m}$. \quad Let $m\in\mathbb{Z}_+$. By hypothesis, for each $n\in\{1,\ldots, q_m\}$ there exist $r_1(m)$ and $r_2(m)$ such that: \begin{equation*} \begin{aligned} V(n-q_m)+r_1(m)&=V(n), \quad&&|r_1(m)|\leq Cm^{-q_m}\quad\text{and}\\ V(n)+r_2(m)&=V(n+q_m), \quad&& |r_2(m)|\leq Cm^{-q_m} \end{aligned} \end{equation*} Initially consider the function $V_m: \{1-q_m,\ldots,2q_m\}\rightarrow\mathbb{R}$ defined as: \begin{equation*} V_m(n)= \begin{cases} V(n)+r_1(m) \quad&\text{if $1-q_m\leq n\leq 0$}\\ V(n) \quad&\text{if $1\leq n\leq q_m$}\\ V(n)-r_2(m) \quad&\text{if $q_m+1\leq n\leq 2q_m$} \end{cases} \end{equation*} For all $n\in\{1,\ldots,q_m\}$ the function $V_m$ is periodic: \begin{equation*} \begin{aligned} V_m(n+q_m)&=V(n+q_m)-r_2(m) &&\quad\text{by definition of $V_m(n)$, because $q_m+1\leq n+q_m\leq 2q_m$}\\ &=V(n)&&\quad\text{by hypothesis}\\ &=V_m(n)&&\quad\text{by definition of $V_m(n)$} \end{aligned} \end{equation*} Similarly: \begin{equation*} \begin{aligned} V_m(n-q_m)&=V(n-q_m)+r_1(m) &&\quad\text{by definition of $V_m(n)$, because $1-q_m\leq n-q_m\leq 0$}\\ &=V(n)&&\quad\text{by hypothesis}\\ &=V_m(n)&&\quad\text{by definition of $V_m(n)$} \end{aligned} \end{equation*} Therefore $V_m$ is periodic with period $q_m$. \quad The function $V_m$ is extended to $\mathbb{Z}$ by setting, for each $j\in\mathbb{Z}$, $V_m(j)=V_m(n)$, where $n\in\{1,\ldots, q_m\}$ and $j\equiv_{q_m}n$. We conclude that $V_m: \mathbb{Z}\rightarrow\mathbb{R}$ is a periodic function with period $q_m$. \quad Additionally, since the function $V:\mathbb{Z}\rightarrow \mathbb{R}$ is bounded, there exists $K$ such that $|V(n)|\leq K$ for all $n\in\mathbb{Z}$. Let $r(m)=\max\{|r_1(m)|, |r_2(m)|\}$; note that for all $n\in\mathbb{Z}$ and $m\in\mathbb{Z}_+$: \begin{equation*} \begin{aligned} |V_m(n)|\leq |V(n)|+r(m)\leq K+Cm^{-q_m}<\infty \end{aligned} \end{equation*} Then $\sup_{n,m}|V_m(n)|<\infty$.
Finally: \begin{equation*} |V_m(n)-V(n)|\leq |r(m)|\leq Cm^{-q_m} \Rightarrow \sup_{|n|\leq 2q_m}|V_m(n)-V(n)|\leq Cm^{-q_m} \end{equation*} Taking $q_m=T_m$, it is concluded that the sequence of functions $\{V_m\}_{m\in\mathbb{Z}^+}$ satisfies definition \ref{Potencial de Gordon} and therefore $V(n)$ is a Gordon potential. \end{proof} \subsection{Matrix representation of Schrödinger operators} Let $V(n)$ be a Gordon potential and $\Psi(n)$ the column vector $(\psi(n), \psi(n+1))$ where $\psi(n)$ is a solution to the equation: \begin{equation}\label{Kap3Eq2} \psi(n+1)+\psi(n-1)+V(n)\psi(n)=E\psi(n) \end{equation} which means that $E\in\mathbb{C}$ is an eigenvalue of $H$ (equation \ref{Kap3Eq100}) with given initial condition $\Psi(0)$. For $n>0$, equation \eqref{Kap3Eq2} can be written in matrix form: \begin{equation}\label{Kap3Eq3} \Psi(n)=A(n)\cdots A(1)\Psi(0), \quad\text{where}\quad A(n)=\begin{pmatrix} 0&1\\-1&E-V(n) \end{pmatrix} \end{equation} Similarly, let $\Psi_m(n)=(\psi_m(n), \psi_m(n+1))$ and consider the equation: \begin{equation*} \psi_m(n-1)+\psi_m(n+1)+V_m(n)\psi_m(n)=E\psi_m(n) \end{equation*} with initial condition $\Psi_m(0)=\Psi(0)$. Therefore: \begin{equation}\label{Kap3Eq4} \Psi_m(n)=A_m(n)\cdots A_m(1)\Psi(0), \quad\text{where}\quad A_m(n)=\begin{pmatrix} 0&1\\-1&E-V_m(n) \end{pmatrix} \end{equation} In the following two lemmas we study some properties of $A_m(n)$ and $A(n)$. \begin{theorem}\label{Aux1} Let $A(n)$ and $A_m(n)$ be according to the equations \eqref{Kap3Eq3} and \eqref{Kap3Eq4}. Then: \begin{equation*} \lVert A_m(n)\cdots A_m(1)-A(n)\cdots A(1)\rVert\leq n\cdot[\sup_{m,j}\lVert A_m(j)\rVert]^{n-1}\cdot[\sup_{1\leq j\leq n}\lVert A_m(j)-A(j)\rVert] \end{equation*} \end{theorem} \begin{proof} Matrix $A_m(n)\cdots A_m(1)-A(n)\cdots A(1)$ can be written as a telescopic sum: \begin{equation}\label{Kap3Eq5} \begin{aligned} &A_m(n)\cdots A_m(1)-A(n)\cdots A(1)\\&=[A_m(n)-A(n)]\cdot[A_m(n-1)\cdots A_m(1)]\\ &+[A(n)]\cdot[A_m(n-1)-A(n-1)]\cdot[A_m(n-2)\cdots A_m(1)]+\cdots\\ &+[A(n)\cdots A(n-j+1)]\cdot[A_m(n-j)-A(n-j)]\cdot[A_m(n-j-1)\cdots A_m(1)]+\cdots\\ &+[A(n)\cdots A(2)]\cdot[A_m(1)-A(1)] \end{aligned} \end{equation} Since $\lVert AB\rVert\leq\lVert A\rVert\cdot\lVert B\rVert=\lVert B\rVert\cdot\lVert A\rVert$, the norm of each summand on the right-hand side of the equation \eqref{Kap3Eq5} can be bounded as follows, for all $0\leq j\leq n-1$: \begin{equation}\label{Kap3Eq6} \begin{aligned} &\lVert [A(n)\cdots A(n-j+1)]\cdot[A_m(n-j)-A(n-j)]\cdot[A_m(n-j-1)\cdots A_m(1)]\rVert\\ &\leq \lVert [A(n)\cdots A(n-j+1)]\rVert \cdot\lVert[A_m(n-j)-A(n-j)\rVert\cdot\lVert[A_m(n-j-1)\cdots A_m(1)]\rVert\\ &=\lVert [A(n)\cdots A(n-j+1)]\rVert\cdot\lVert[A_m(n-j-1)\cdots A_m(1)]\rVert\lVert[A_m(n-j)-A(n-j)]\rVert\\ &\leq [\sup_{m,j}\lVert A_m(j)\rVert]^{n-1}\cdot[\sup_{1\leq j\leq n}\lVert A_m(j)-A(j)\rVert] \end{aligned} \end{equation} In the above equation: \begin{equation*} \sup_{m,j}\lVert A_m(j)\rVert=\sup_{1\leq j\leq n}\{\lVert A_m(j)\rVert, \lVert A(j)\rVert\} \end{equation*} From the equations \eqref{Kap3Eq5} and \eqref{Kap3Eq6} we obtain: \begin{equation*} \lVert A_m(n)\cdots A_m(1)-A(n)\cdots A(1)\rVert\leq n[\sup_{m,j}\lVert A_m(j)\rVert]^{n-1}[\sup_{1\leq j\leq n}\lVert A_m(j)-A(j)\rVert] \end{equation*} \end{proof} \begin{theorem}\label{Aux2} Let $x$ be a vector such that $\lVert x\rVert=1$ and $B$ an invertible $2\times2$ matrix. 
Then: \begin{equation*} \max_{a=\pm1,\pm2}\lVert B^ax\rVert\geq\frac{1}{2} \end{equation*} In particular, for $\Psi_m(n)=A_m(n)\cdots A_m(1)\Psi(0)$, we have that: \begin{equation*} \max_{a=\pm1,\pm2}\lVert \Psi_m(aT_m)\rVert\geq\frac{1}{2}\lVert\Psi(0)\rVert \end{equation*} \end{theorem} \begin{proof} The Cayley--Hamilton theorem states that if $q(\lambda)=\sum_{k=0}^{n}a_k\lambda^k$ is the characteristic polynomial of a linear transformation $T$ in a vector space $V$ of dimension $n$, then $q(A)=0$, where $A$ is the square $n\times n$ matrix associated to the operator $T$ (Axler \cite{axler2}, \textit{Theorem 8.37}). Therefore, if $q(\lambda)=a_2\lambda^2+a_1\lambda+a_0$ is the characteristic polynomial associated with the matrix $B_{2\times2}$, then: \begin{equation}\label{Kap3Eq7} a_2B^2+a_1B+a_0I=0 \end{equation} To prove Theorem \ref{Aux2} we consider three cases. First, $|a_2|=\max_{0\leq i\leq 2}|a_i|$. Multiplying equation \eqref{Kap3Eq7} by $\frac{1}{a_2}B^{-2}$, applying it to $x$, and setting $c_1=\frac{a_1}{a_2}$ and $c_0=\frac{a_0}{a_2}$, we obtain: \begin{equation*} x+c_1B^{-1}x+c_0B^{-2}x=0 \end{equation*} where $|c_1|, |c_0|\leq1$. Taking the norm of the above expression: \begin{equation*} \begin{aligned} \lVert x\rVert=\lVert c_1B^{-1}x+c_0B^{-2}x\rVert \quad&\Rightarrow\quad 1\leq |c_1|\cdot\lVert B^{-1}x\rVert+|c_0|\cdot\lVert B^{-2}x\rVert\\ &\Rightarrow\quad\max\{\lVert B^{-1}x\rVert, \lVert B^{-2}x\rVert\}\geq\frac{1}{2} \end{aligned} \end{equation*} The second case is $|a_1|=\max_{0\leq i\leq 2}|a_i|$. Multiplying equation \eqref{Kap3Eq7} by $\frac{1}{a_1}B^{-1}$ and applying it to $x$: \begin{equation*} d_2Bx+x+d_0B^{-1}x=0 \end{equation*} where $|d_2|=\left|\frac{a_2}{a_1}\right|\leq1$ and $|d_0|=\left|\frac{a_0}{a_1}\right|\leq1$. Then: \begin{equation*} \begin{aligned} \lVert x\rVert=\lVert d_2Bx+d_0B^{-1}x\rVert \quad&\Rightarrow\quad 1\leq |d_2|\cdot\lVert Bx\rVert+|d_0|\cdot\lVert B^{-1}x\rVert\\ \quad&\Rightarrow\quad\max\{\lVert Bx\rVert, \lVert B^{-1}x\rVert\}\geq\frac{1}{2} \end{aligned} \end{equation*} The third case is $|a_0|=\max_{0\leq i\leq 2}|a_i|$. Reasoning analogously, now multiplying equation \eqref{Kap3Eq7} by $\frac{1}{a_0}$ and applying it to $x$, we obtain: \begin{equation*} \max\{\lVert B^2x\rVert, \lVert Bx\rVert\}\geq\frac{1}{2} \end{equation*} We conclude that every invertible $2\times 2$ matrix $B$ satisfies the inequality: \begin{equation}\label{Kap3Eq8} \max_{a=\pm1,\pm2}\lVert B^ax\rVert\geq\frac{1}{2} \end{equation} Let $B_m(n)=A_m(n)\cdots A_m(1)=\Pi_{j=1}^{n}A_m(j)$. Note that $A_m(n)=A_m(n+T_m)$, since $V_m(n)=V_m(n+T_m)$. Then: \begin{equation*} \begin{aligned} {B_m^a(T_m)}=(\Pi_{j=1}^{T_m}A_m(j))^a&=\Pi_{j=1}^{T_m}A_m(j)\cdots\Pi_{j=1}^{T_m}A_m(j)\\ &=\Pi_{j=1}^{T_m}A_m(j)\cdot \Pi_{j=T_m+1}^{2T_m}A_m(j)\cdots\Pi_{j=(a-1)T_m+1}^{aT_m}A_m(j)\\ &=\Pi_{j=1}^{aT_m}A_m(j)\\ &=B_m(aT_m) \end{aligned} \end{equation*} Replacing $x$ by $\frac{\Psi(0)}{\lVert\Psi(0)\rVert}$ and $B$ by $B_m(T_m)$ in the equation \eqref{Kap3Eq8}: \begin{equation*} \begin{aligned} \max_{a=\pm1,\pm2}\lVert B_m^a(T_m)\Psi(0)\rVert = \max_{a=\pm1,\pm2}\lVert B_m(aT_m)\Psi(0)\rVert\geq\frac{1}{2}\lVert\Psi(0)\rVert \end{aligned}\end{equation*} \end{proof} \subsection{Proof of Gordon's lemma} \label{demostración lema de Gordon} \begin{proof} This proof is based on Simon's suggestions (\cite{simon}, Theorem 7.1, p.476).
On the one hand, note that: \begin{equation}\label{Kap3Eq11} \begin{aligned} \lVert \Psi_m(n)-\Psi(n)\rVert&=\lVert [A_m(n)\cdots A_m(1)-A(n)\cdots A(1)]\cdot\Psi(0)\rVert\\ &\leq \lVert (A_m(n)\cdots A_m(1)-A(n)\cdots A(1))\rVert\cdot\lVert\Psi(0)\rVert\\ &\leq n\cdot[\sup_{m,j}\lVert A_m(j)\rVert]^{n-1}\cdot[\sup_{1\leq j\leq n}\lVert A_m(j)-A(j)\rVert]\cdot \lVert\Psi(0)\rVert \end{aligned} \end{equation} where the last inequality is justified by Theorem \ref{Aux1}. On the other hand: \begin{equation*} A_m(j)-A(j)=\begin{pmatrix} 0&0\\0&V(j)-V_m(j) \end{pmatrix}\quad\Rightarrow\quad\lVert A_m(j)-A(j)\rVert\leq|V(j)-V_m(j)| \end{equation*} By assumption $V(n)$ is a Gordon potential. Therefore: \begin{equation}\label{Kap3Eq12} \sup_{|n|\leq 2T_m}|V_m(n)-V(n)|\leq Cm^{-T_m}\quad\Rightarrow\quad\lim_{m\rightarrow\infty}\sup_{|j|\leq 2T_m}\lVert A_m(j)-A(j)\rVert=0 \end{equation} Moreover, since the potentials $V_m$ and $V$ are uniformly bounded, $K:=\sup_{m,j}\lVert A_m(j)\rVert<\infty$; hence, for $|n|\leq 2T_m$, the right-hand side of \eqref{Kap3Eq11} is at most $2T_m K^{2T_m-1}Cm^{-T_m}\lVert\Psi(0)\rVert$, which tends to $0$ as $m\rightarrow\infty$. From the equations \eqref{Kap3Eq11} and \eqref{Kap3Eq12} we obtain: \begin{equation*} \sup_{|n|\leq 2T_m} \lVert \Psi_m(n)-\Psi(n)\rVert\rightarrow0, \quad\text{as }m\rightarrow\infty \end{equation*} Then in particular: \begin{equation*} \max_{a=\pm1,\pm2}\lVert\Psi(aT_m)-\Psi_m(aT_m)\rVert\rightarrow0, \quad\text{as }m\rightarrow\infty \end{equation*} As $\max_{a=\pm1,\pm2}\lVert \Psi_m(aT_m)\rVert\geq\frac{1}{2}\lVert\Psi(0)\rVert$ (Theorem \ref{Aux2}), it follows from the above equation that: \begin{equation}\label{Kap3Eq13} \limsup_{m\rightarrow\infty}\max_{a=\pm1,\pm2}\lVert \Psi(aT_m)\rVert\geq\frac{1}{2}\lVert \Psi(0)\rVert \end{equation} Consequently: \begin{equation*} \begin{aligned} \limsup_{|n|\rightarrow\infty}\lVert \Psi(n)\rVert\geq \limsup_{m\rightarrow\infty}\max_{a=\pm1,\pm2}\lVert \Psi(aT_m)\rVert\quad&\Rightarrow\quad\limsup_{|n|\rightarrow\infty}\lVert \Psi(n)\rVert\geq\frac{1}{2}\lVert \Psi(0)\rVert &&\quad\text{by equation \eqref{Kap3Eq13}}\\&\Rightarrow\quad\limsup_{|n|\rightarrow\infty} \frac{\lVert \Psi(n)\rVert^2}{\lVert \Psi(0)\rVert^2}\geq \frac{1}{4} &&\quad\text{since $\lVert \Psi(0)\rVert>0$}\\ &\Rightarrow \quad\limsup_{|n|\rightarrow\infty}\frac{\psi(n+1)^2+\psi(n)^2}{\psi(1)^2+\psi(0)^2}\geq\frac{1}{4} &&\quad\text{by definition of $\lVert \Psi(n)\rVert$} \end{aligned} \end{equation*} Therefore: \begin{equation}\label{Kap3Eq14} \limsup_{|n|\rightarrow\infty}\frac{\psi(n+1)^2+\psi(n)^2}{\psi(1)^2+\psi(0)^2}\geq\frac{1}{4} \end{equation} \end{proof} \section{Proof of theorem A} \label{Proof of theorem A} \begin{proof} We will show that $f(T^n\omega)$ is a Gordon potential for all $f$ in a residual set $\mathcal{F}$ of $C(\Omega)$ and $\omega$ in a residual set $\Omega_f$ of $\Omega$. Recall that the real function $V(n)=f(T^n\omega)$ is a Gordon potential if there exists a sequence of positive integers $q_m\rightarrow\infty$ and a $C>0$ such that (Theorem \ref{Equivalencia potencial de Gordon}): \begin{equation}\label{Kap4Eq7} \begin{aligned} \max_{1\leq n\leq q_m}|f(T^n\omega)-f(T^{n+q_m}\omega)|&\leq Cm^{-q_m} \quad\text{and}\\ \max_{1\leq n\leq q_m}|f(T^n\omega)-f(T^{n-q_m}\omega)|&\leq Cm^{-q_m} \end{aligned} \end{equation} The proof is divided into three steps. First, we construct a residual set $\mathcal{F}\subseteq C(\Omega)$. Second, for each $f\in\mathcal{F}$ we construct a residual set $\Omega_f\subset\Omega$.
Finally, we show that for every $f\in\mathcal{F}$ and $\omega\in\Omega_f$ the function $f(T^n\omega)$ satisfies both inequalities in \eqref{Kap4Eq7} and therefore is a Gordon potential.\\ \textit{(1) Construction of a residual set $\mathcal{F}\subseteq C(\Omega)$:} Let $\alpha\in PRP(T)$, \textit{i.e.}, $\{T^j\alpha\}_{j\geq0}$ satisfies \textit{RP}; thus, for each $k\in\mathbb{Z}_+$ there exists $q_k$ such that: \begin{equation*} d(T^j\alpha, T^{j+q_k}\alpha)<\frac{1}{k},\quad\text{for $0\leq j\leq 3q_k$} \end{equation*} where $\lim_{k\rightarrow\infty}q_k=\infty$ (Lemma \ref{q infinito}).\\ For each $k\in\mathbb{Z}_+$ let $B_k=B_k(\alpha,r(k))$ be the open ball centered at $\alpha$ with radius $r(k)$. Since $T^j$ is a homeomorphism of $\Omega$, $\{T^j(B_k)\}_{j=1}^{4q_k}$ is a sequence of open sets in $\Omega$. We represent the union of elements of this sequence as: \begin{equation*} \begin{aligned} &\bigcup_{j=1}^{q_k}\left(T^j(B_k) \cup T^{q_k+j}(B_k) \cup T^{2q_k+j}(B_k) \cup T^{3q_k+j}(B_k)\right)=\bigcup_{j=1}^{q_k}\bigcup_{l=0}^{3}T^{j+lq_k}(B_k) \end{aligned} \end{equation*} The radius $r(k)$ of the open ball $B_k$ centered at $\alpha$ is taken sufficiently small for the following two conditions to be met: \begin{itemize} \item[\textbf{(a)}] $\overline{T^i(B_k)}\cap \overline{T^j(B_k)}=\emptyset,\quad\forall i,j\in \{1,\ldots,4q_k\}$\quad for $i\neq j$. \item[\textbf{(b)}] For each $1\leq j\leq q_k$ we have that $\bigcup_{l=0}^{3}T^{j+lq_k}(B_k)$ is contained in a ball of radius $\frac{4}{k}$. \end{itemize} Imposing the condition \textbf{(a)} is possible since by hypothesis $T$ is a homeomorphism of $\Omega$ and we are considering only a finite number of iterates of $T$. Regarding the condition \textbf{(b)}, note that as $\{T^j\alpha\}_{j\geq0}$ satisfies \textit{RP} then for each $k\in\mathbb{Z}_+$ we have that: \begin{equation*} \begin{aligned}[c] d(T^j\alpha,T^{j+q_k}\alpha)&<\frac{1}{k}\\ d(T^{j+q_k}\alpha,T^{j+2q_k}\alpha)&<\frac{1}{k}\\ d(T^{j+2q_k}\alpha,T^{j+3q_k}\alpha)&<\frac{1}{k} \end{aligned} \qquad\Rightarrow\qquad \begin{aligned}[c] d(T^j\alpha,T^{j+lq_k}\alpha)&<\frac{3}{k} \quad\text{for $0\leq l\leq 3$} \end{aligned} \end{equation*} \begin{figure}[h] \centering \caption{\small Illustration of conditions \textbf{(a)} and \textbf{(b)}.} \label{fig:DemoTeo1} \end{figure} The utility of conditions \textbf{(a)} and \textbf{(b)} is that they guarantee that the set $\mathcal{F}$ constructed below is residual in $C(\Omega)$. Initially, consider the set: \begin{equation*} \mathcal{C}_k=\{g\in C(\Omega)\mid \text{$g$ is constant on $\bigcup_{l=0}^3T^{j+lq_k}(B_k)$ for each $j: 1\leq j\leq q_k$}\} \end{equation*} We define the set $\mathcal{F}_k$ as the union of the open balls of radius $k^{-q_k}$ centered at the elements $g\in\mathcal{C}_k$: \begin{equation}\label{Kap4Eq8} \mathcal{F}_k=\{f\in C(\Omega)\mid \lVert f-g\rVert_\infty=\sup_{x\in \Omega}|f(x)-g(x)|<k^{-q_k} \quad\text{for some}\quad g\in \mathcal{C}_k\} \end{equation} For each $m\in\mathbb{Z}_+$ consider: \begin{equation*} \hat{\mathcal{F}_m}=\bigcup_{k\geq m}\mathcal{F}_k \end{equation*} The set $\hat{\mathcal{F}_m}$ is open, because it is a countable union of open sets in $C(\Omega)$. Additionally, $\hat{\mathcal{F}_m}$ is dense in $C(\Omega)$: given $h\in C(\Omega)$, we will construct a $g\in\hat{\mathcal{F}_m}$ arbitrarily close to $h$. As $\Omega$ is compact, $h$ is uniformly continuous. Let $x,y\in \bigcup_{l=0}^3T^{j+lq_k}(B_k)$; in accordance with condition \textbf{(b)}, $d(x,y)<\frac{8}{k}$.
This implies, due to the uniform continuity of $h$, that $|h(x)-h(y)|\rightarrow0$ as $k\rightarrow\infty$. For each $j\in\{1,\ldots,q_k\}$ fix a point $\hat{x}_j\in \bigcup_{l=0}^3T^{j+lq_k}(B_k)$, and define the function: \begin{equation}\label{Kap4Eq9} g(x)= \begin{cases} h(\hat{x}_j) & \text{ if}\quad x\in \bigcup_{l=0}^3T^{j+lq_k}(B_k) \ \text{for some $j\in\{1,\ldots,q_k\}$}\\ h(x) & \text{ otherwise} \end{cases} \end{equation} (these unions are pairwise disjoint by condition \textbf{(a)}, so $g$ is well defined). It follows that $g\in \mathcal{C}_k\subseteq\mathcal{F}_k\subseteq\hat{\mathcal{F}_m}$ for every $m\leq k$, and $\lVert g-h\rVert_\infty$ can be made arbitrarily small by taking $k$ large; therefore $\hat{\mathcal{F}_m}$ is dense in $C(\Omega)$. We conclude that for each $m\in\mathbb{Z}_+$, the set $\hat{\mathcal{F}_m}$ is open and dense in $C(\Omega)$. Therefore: \begin{equation*} \begin{aligned} \mathcal{F}&=\bigcap_{m\geq1}\hat{\mathcal{F}_m}=\bigcap_{m\geq1}\bigcup_{k\geq m}\mathcal{F}_k \end{aligned} \end{equation*} is a residual set in $C(\Omega)$ since it is a countable intersection of open and dense sets in $C(\Omega)$. \quad \textit{(2) Construction of a residual set $\Omega_f\subseteq\Omega$:} Let $f\in\mathcal{F}$. By definition of the set $\mathcal{F}$ in the previous step of this proof, there exists a sequence of positive integers $k_l$, with $\lim_{l\rightarrow\infty}k_l=\infty$, such that $f$ belongs to each $\mathcal{F}_{k_l}$ (equation \ref{Kap4Eq8}). Moreover, for each $k_l$ there exists a $q_{k_l}$ such that: \begin{equation*} d(T^j\alpha, T^{j+q_{k_l}}\alpha)<\frac{1}{k_l}, \quad\text{for}\quad1\leq j\leq k_l\cdot q_{k_l} \end{equation*} where $\lim_{l\rightarrow\infty}q_{k_l}=\infty$, by Lemma \ref{q infinito}. According to Lemma \ref{transitividad implica TRP}, the set $\{T^{q+j}\alpha\}_{j\geq1}$ is dense in $\Omega$ for every $q\in\mathbb{Z}_+$; in particular, $\{T^{q_{k_l}+j}\alpha\}_{j\geq1}$ is dense in $\Omega$ for each $q_{k_l}$. The union of elements of the latter sequence is represented, for each $m\geq1$, as: \begin{equation*} \bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{q_{k_l}+j}(\alpha), \quad\text{where:}\quad\bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{q_{k_l}+j}(\alpha)\subseteq \bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{q_{k_l}+j}(B_{k_l}) \end{equation*} The above inclusion is justified by the fact that $\alpha\in B_{k_l}$, the open ball introduced in step \textit{(1)} of this proof. Therefore, the set: \begin{equation*} \Omega_{f,m}=\bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{q_{k_l}+j}(B_{k_l}) \end{equation*} is dense in $\Omega$ as it contains $\{T^{q_{k_l}+j}\alpha\}_{j\geq1}$, which is dense in $\Omega$. We conclude that: \begin{equation*} \Omega_f=\bigcap_{m\geq1} \Omega_{f,m}=\bigcap_{m\geq1}\bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{j+q_{k_l}}(B_{k_l}) \end{equation*} is a countable intersection of open and dense sets, and therefore a residual subset of $\Omega$. \quad \textit{(3) We show that $f(T^n\omega)$ is a Gordon potential, for every $f\in\mathcal{F}$ and every $\omega\in\Omega_f$:} Let $f\in\mathcal{F}$ and $\omega\in\Omega_f$. Since $\omega\in\Omega_f$, there exists a sequence $k_l\rightarrow\infty$ such that: \begin{equation*} \omega\in\bigcup_{j=1}^{q_{k_l}}T^{j+q_{k_l}}(B_{k_l}) \end{equation*} Therefore, for each $k_l$ there exists $\hat{j}$ ($1\leq\hat{j}\leq q_{k_l}$) such that $\omega\in T^{\hat{j}+q_{k_l}}(B_{k_l})$.
This implies that for each $j$, with $1\leq j\leq q_{k_l}$: \begin{equation*} T^j\omega\in T^{\hat{j}+j+q_{k_l}}(B_{k_l}),\quad T^{j+q_{k_l}}\omega\in T^{\hat{j}+j+2q_{k_l}}(B_{k_l})\quad\text{and}\quad T^{j-q_{k_l}}\omega\in T^{\hat{j}+j}(B_{k_l}) \end{equation*} Let $\bar{j}=\hat{j}+j$. Therefore: \begin{equation*} T^j\omega, \quad T^{j+q_{k_l}}\omega \quad\text{and}\quad T^{j-q_{k_l}}\omega\quad\in\bigcup_{1\leq \bar{j}\leq q_{k_l}}\bigcup_{l=0}^{3}T^{\bar{j}+lq_{k_l}}(B_{k_l}), \quad \forall j:\quad1\leq j\leq q_{k_l} \end{equation*} Since $f\in\mathcal{F}_{k_l}$, there exists $g\in\mathcal{C}_{k_l}$ with $\lVert f-g\rVert_\infty<{k_l}^{-q_{k_l}}$. By the definition of the set $\mathcal{C}_{k_l}$ in the previous step of this proof, $g$ is constant on each of the sets $\bigcup_{l=0}^{3}T^{\bar{j}+lq_{k_l}}(B_{k_l})$, for $1\leq \bar{j}\leq q_{k_l}$. In particular: \begin{equation*} g(T^{j}\omega)=g(T^{j+q_{k_l}}\omega)\quad\text{and}\quad g(T^{j}\omega)=g(T^{j-q_{k_l}}\omega) \end{equation*} Therefore: \begin{equation*} \begin{aligned} |f(T^j\omega)-f(T^{j+q_{k_l}}\omega)|&=|f(T^j\omega)-g(T^j\omega)+g(T^{j+q_{k_l}}\omega)-f(T^{j+q_{k_l}}\omega)|&&\text{as $g(T^j\omega)=g(T^{j+q_{{k_l}}}\omega)$}\\ &\leq |f(T^j\omega)-g(T^j\omega)|+|f(T^{j+q_{k_l}}\omega)-g(T^{j+q_{k_l}}\omega)|&&\text{by the triangle inequality}\\ &< {k_l}^{-q_{k_l}}+{k_l}^{-q_{k_l}}=2{k_l}^{-q_{k_l}}&&\text{since $\lVert f-g\rVert_\infty<{k_l}^{-q_{k_l}}$} \end{aligned} \end{equation*} Similarly: \begin{equation*} \begin{aligned} |f(T^j\omega)-f(T^{j-q_{k_l}}\omega)|&=|f(T^j\omega)-g(T^j\omega)+g(T^{j-q_{k_l}}\omega)-f(T^{j-q_{k_l}}\omega)|&&\text{as $g(T^j\omega)=g(T^{j-q_{k_l}}\omega)$}\\ &\leq |f(T^j\omega)-g(T^j\omega)|+|f(T^{j-q_{k_l}}\omega)-g(T^{j-q_{k_l}}\omega)|&&\text{by the triangle inequality}\\ &< {k_l}^{-q_{k_l}}+{k_l}^{-q_{k_l}}=2{k_l}^{-q_{k_l}}&&\text{since $\lVert f-g\rVert_\infty<{k_l}^{-q_{k_l}}$} \end{aligned} \end{equation*} Consequently: \begin{equation*} |f(T^j\omega)-f(T^{j+q_{k_l}}\omega)|<2{k_l}^{-q_{k_l}} \quad\text{and}\quad|f(T^j\omega)-f(T^{j-q_{k_l}}\omega)|<2{k_l}^{-q_{k_l}} \end{equation*} The above inequality is satisfied for each $j\in\{1,\ldots,q_{k_l}\}$. This leads us to conclude that: \begin{equation*} \max_{1\leq j\leq q_{k_l}}|f(T^j\omega)-f(T^{j+q_{k_l}}\omega)|<2{k_l}^{-q_{k_l}}\quad\text{and}\quad\quad \max_{1\leq j\leq q_{k_l}}|f(T^j\omega)-f(T^{j-q_{k_l}}\omega)|<2{k_l}^{-q_{k_l}} \end{equation*} Taking $C=2$ in equation \eqref{Kap4Eq7} we conclude that $f(T^n\omega)$ is a Gordon potential. \end{proof} \section{Proof of theorem B}\label{Proof of theorem B} \begin{proof} We focus on the construction of the residual set $\Omega_f$ in $\Omega$, since the rest of the proof is analogous to that of \textbf{Theorem A}. Let $f\in\mathcal{F}$. This implies that for a subsequence of positive integers $k_l\rightarrow\infty$, $f$ belongs to each set $\mathcal{F}_{k_l}$ (equation \ref{Kap4Eq8}). Moreover, since by hypothesis the system $\{\Omega, T\}$ satisfies \textit{TRP}, we have that $\Omega$ can be covered as follows: \begin{equation*} \Omega\subseteq \bigcup_{\omega\in PRP(T)}B_{\varepsilon_\omega}(\omega) \end{equation*} where $B_{\varepsilon_\omega}(\omega)$ is the open ball centered at $\omega\in PRP(T)$ with radius $\varepsilon_\omega>0$. For each $m\in\mathbb{Z}_+$ we define the set: \begin{equation*} \Omega_{f,m}=\bigcup_{\omega\in PRP(T)}\bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{j+q_{k_l}}(B_{r(k_l)}(\omega)) \end{equation*} For each $m\in\mathbb{Z}_+$, the set $\Omega_{f,m}$ is open since it is a union of open sets.
Additionally, $\Omega_{f,m}$ is dense in $\Omega$, since the system $\{\Omega, T\}$ satisfies \textit{TRP} and $T$ is a homeomorphism. Consequently: \begin{equation*} \Omega_f=\bigcap_{m\geq 1}\Omega_{f,m}=\bigcap_{m\geq 1}\bigcup_{\omega\in PRP(T)} \bigcup_{l\geq m}\bigcup_{j=1}^{q_{k_l}}T^{j+q_{k_l}}(B_{r(k_l)}(\omega)) \end{equation*} is a residual subset of $\Omega$ because it is the countable intersection of dense open sets. Lastly, for each $f\in\mathcal{F}$ and $\omega\in\Omega_f$, the function $f(T^n\omega)$ is a Gordon potential. The argument is the same as presented in the third step of the proof of \textbf{Theorem A} (section \ref{Proof of theorem A}). \end{proof} \section{Applications} \label{section4} As an application of Theorems \textbf{A} and \textbf{B}, in this section we study the spectrum of quasi-periodic Schrödinger operators and of operators whose potential function is determined by the skew-shift on the torus. First, the following theorem allows us to determine the spectral type of a broader class of dynamical systems. \begin{theorem} \label{thc} Let $(\Omega,d)$ be a compact metric space. Suppose that $T:\Omega\rightarrow\Omega$ satisfies the following two conditions: \begin{itemize} \item[\textit{(i)}] $T$ is minimal: $\overline{\{T^n\omega\}}_{n\geq0}=\Omega, \quad \forall \omega\in\Omega$. \item[\textit{(ii)}] $T$ is an isometry: $d(\omega_1,\omega_2)=d(T\omega_1,T\omega_2), \quad \forall \omega_1,\omega_2\in\Omega$. \end{itemize} Then the system $\{\Omega, T\}$ satisfies \textit{TRP}. Therefore, by \textbf{Theorem B}, for all $\omega$ in a residual subset of $\Omega$ and $f$ in a residual subset of $C(\Omega)$, the operator: \begin{equation}\label{Kap5Eq3} \begin{aligned} H_\omega: \ell^2(\mathbb{Z})&\rightarrow \ell^2(\mathbb{Z})\\ \psi(n)&\mapsto \psi(n+1)+\psi(n-1)+f(T^n\omega)\psi(n) \end{aligned} \end{equation} has purely continuous spectrum. \end{theorem} \begin{proof} Consider $\omega\in \Omega$. Our objective is to show that $\omega$ belongs to $PRP(T)$, \textit{i.e.}, that for all $k\in\mathbb{Z}_+$ there exists a $q_k\in\mathbb{Z}_+$ such that $d(T^n\omega, T^{n+q_k}\omega)<\frac{1}{k}$ for $n\in\{0,\ldots, kq_k\}$. Since $T$ is minimal, for each $k\in\mathbb{Z}_+$ there exists $\hat{q}_k\in\mathbb{Z}_+$ such that $d(\omega, T^{\hat{q}_k}\omega)<\frac{1}{k}$. Since $T$ is an isometry, $d(\omega, T^{\hat{q}_k}\omega)=d(T^n\omega, T^{n+\hat{q}_k}\omega)$ for every $n$. Therefore: \begin{equation*} d(\omega, T^{\hat{q}_k}\omega)<\frac{1}{k} \quad\Rightarrow\quad d(T^n\omega, T^{n+\hat{q}_k}\omega)=d(\omega, T^{\hat{q}_k}\omega)<\frac{1}{k}, \quad\forall n\in\{0,\ldots, k\hat{q}_k\} \end{equation*} Taking $q_k=\hat{q}_k$, we conclude that $\omega\in PRP(T)$; therefore $\{\Omega, T\}$ satisfies \textit{TRP}.\end{proof} \begin{theorem} \label{theorem 3} (BD, \cite{damanikBase} theorem 3, p.651). \textbf{Spectral type of a generic quasi-periodic Schrödinger operator.} Consider the dynamical system $\{\mathbb{T}^d, T\}$, where $T:\mathbb{T}^d\rightarrow\mathbb{T}^d$ is the ergodic \textit{shift} on $\mathbb{T}^d$: $T\omega=\omega+\alpha$, where $\alpha=(\alpha_1,\cdots, \alpha_d)$ and the set $\{1,\alpha_1,\dots,\alpha_d\}$ is linearly independent over the rational numbers.
Then $\{\mathbb{T}^d, T\}$ satisfies \textit{TRP} and consequently, for a generic $\omega\in\mathbb{T}^d$ and $f\in C(\mathbb{T}^d)$, the operator: \begin{equation}\label{Kap5Eq4} \begin{aligned} H_\omega: \ell^2(\mathbb{Z})&\rightarrow \ell^2(\mathbb{Z})\\ \psi(n)&\mapsto \psi(n+1)+\psi(n-1)+f(T^n\omega)\psi(n)\\ &=\psi(n+1)+\psi(n-1)+f(\omega+n\alpha)\psi(n) \end{aligned} \end{equation} has purely continuous spectrum. \end{theorem} \begin{proof} Since the transformation $T:\mathbb{T}^d\rightarrow\mathbb{T}^d$ is ergodic, it is minimal on $\mathbb{T}^d$. Moreover, $T$ is an isometry, since for $\omega_1$ and $\omega_2$ in $\mathbb{T}^d$: \begin{equation*} \begin{aligned} d(T\omega_1,T\omega_2)&=d(\omega_1+\alpha, \omega_2+\alpha)=|\omega_1+\alpha-\omega_2-\alpha|=|\omega_1-\omega_2|=d(\omega_1,\omega_2) \end{aligned} \end{equation*} As $T$ is minimal and an isometry on $\mathbb{T}^d$, in view of theorem \ref{thc} we conclude that the spectrum of the operator \eqref{Kap5Eq4} is purely continuous for a generic $\omega\in \mathbb{T}^d$ and a generic $f\in C(\mathbb{T}^d)$. \end{proof} Now we study the system $\{\mathbb{T}^2, T\}$, where $T$ is the skew-shift: $T(\omega_1,\omega_2)=(\omega_1+2\alpha, \omega_1+\omega_2)$. For this purpose, we require the definition of the badly approximable subset of $\mathbb{T}$. \begin{definition} \label{mal aproximable} Let $\alpha\in\mathbb{T}=\mathbb{R}/\mathbb{Z}$. We define the constant $c(\alpha)$ as: \begin{equation*} c(\alpha)=\liminf_{q\rightarrow\infty}q\langle \alpha q\rangle\quad\text{where:}\quad \langle \alpha q\rangle=\text{dist$_\mathbb{T}(\alpha q, 0)$}=\min\{|\alpha q-p|: p\in\mathbb{Z}\} \end{equation*} We say that $\alpha\in\mathbb{T}$ is \textit{badly approximable} if $c(\alpha)>0$. Accordingly, $\alpha\in\mathbb{T}$ is not badly approximable if and only if there exists a sequence of positive integers $\{q_k\}_{k\in\mathbb{Z}_+}$ such that $\lim_{k\rightarrow\infty}q_k\langle \alpha q_k\rangle=0$. \end{definition} \begin{remark} \label{medida mal aproximables} The set of badly approximable numbers has Lebesgue measure equal to zero in $\mathbb{T}$ (Khinchin, \textit{\cite{khinchin} theorem 29, p.60}). Consequently, the set of numbers that are not badly approximable has full measure in $\mathbb{T}$. \end{remark} \begin{theorem} \label{Teorema 4} \textit{(BD, \cite{damanikBase} theorem 4, p.651)}. Consider the dynamical system $\{\mathbb{T}^2, T\}$, where $T:\mathbb{T}^2\rightarrow\mathbb{T}^2$ is the \textit{skew-shift} operator: $T(\omega_1,\omega_2)=(\omega_1+2\alpha, \omega_1+\omega_2)$, with $\alpha\in\mathbb{R}\setminus\mathbb{Q}$. The following statements are equivalent. \begin{itemize} \item[(i)] $\alpha$ is not badly approximable. \item[(ii)] $\{\mathbb{T}^2, T\}$ satisfies \textit{GRP}. \item[(iii)] $\{\mathbb{T}^2, T\}$ satisfies \textit{MRP}. \item[(iv)] $\{\mathbb{T}^2, T\}$ satisfies \textit{TRP}. \end{itemize} \end{theorem} \begin{proof} \underline{$(i)\Rightarrow (ii)$}: Suppose that $\alpha$ is not badly approximable. Let us show that $\{\mathbb{T}^2, T\}$ satisfies \textit{GRP}, i.e., $\mathbb{T}^2=PRP(T)$. Since $\alpha$ is not badly approximable, there exists a sequence $\{q_k\}_{k\in\mathbb{Z}_+}$, with $\lim_{k\rightarrow\infty}q_k=\infty$, such that: \begin{equation*} \lim_{k\rightarrow\infty}q_k\langle \alpha q_k\rangle=0 \end{equation*} Let $\omega=(\omega_1,\omega_2)\in\mathbb{T}^2$ and consider $k\in\mathbb{Z}_+$.
To show that $\omega\in PRP(T)$, we shall construct a sequence $\hat{q}_k$ such that: \begin{equation} \label{Kap5Eq6} d(T^{n+\hat{q}_k}\omega,T^n\omega)<\frac{1}{k},\quad\text{for all $n$ such that:}\quad 0\leq n\leq\hat{q}_k \end{equation} According to the definition of $T$ in $\mathbb{T}^2$: \begin{equation}\label{Kap5Eq7} T^{n+\hat{q}_k}\omega-T^n\omega=(2\hat{q}_k\alpha, 2\hat{q}_k\omega_1+\hat{q}_k^2\alpha+2n\hat{q}_k\alpha-\hat{q}_k\alpha) \end{equation} We want to show that $\langle 2\hat{q}_k\alpha\rangle\rightarrow0$ and $\langle2\hat{q}_k\omega_1+\hat{q}_k^2\alpha+2n\hat{q}_k\alpha-\hat{q}_k\alpha\rangle\rightarrow0$. For each $k\in\mathbb{Z}_+$, let $\hat{q}_k=m_k q_k$, where $m_k\in\{1,\ldots,k+1\}$ is to be chosen below. The above implies that: \begin{equation*} T^{n+\hat{q}_k}\omega-T^n\omega=(2m_k q_k\alpha, 2m_k q_k\omega_1+(m_k q_k)^2\alpha+2nm_k q_k\alpha-m_k q_k\alpha) \end{equation*} Since $\alpha$ is not badly approximable, $\langle 2m_k q_k\alpha\rangle\rightarrow0$; likewise, $\langle (m_k q_k)^2\alpha \rangle\rightarrow0$, $\langle m_k q_k\alpha\rangle\rightarrow0$ and $\langle 2nm_k q_k\alpha\rangle\rightarrow0$. It remains to show that $\langle 2m_k q_k\omega_1\rangle\rightarrow0$. For this purpose, we select $m_k$ from the set $\{1, \cdots, k+1\}$ such that $\langle m_k(2q_k\omega_1)\rangle<\frac{1}{k+1}$; such a choice is possible by the pigeonhole principle, since among the $k+2$ points $0,\,2q_k\omega_1,\,2(2q_k\omega_1),\ldots,(k+1)(2q_k\omega_1)$ of $\mathbb{T}$, two must lie in a common arc of length $\frac{1}{k+1}$, and their difference is of the form $m_k(2q_k\omega_1)$ with $m_k\in\{1,\ldots,k+1\}$. It follows that $T^{n+\hat{q}_k}\omega-T^n\omega\rightarrow0$ and consequently $d(T^{n+\hat{q}_k}\omega,T^n\omega)\rightarrow0$ for all $n$ such that $0\leq n\leq \hat{q}_k$. We conclude that $\omega\in PRP(T)$ and, since the selection of $\omega$ was arbitrary, that $\mathbb{T}^2 = PRP(T)$. \quad \underline{$(ii)\Rightarrow (iii)$}: Suppose that the dynamical system $\{\mathbb{T}^2,T\}$ satisfies \textit{GRP}, i.e. $PRP(T)=\mathbb{T}^2$. This implies that $\mu(PRP(T))=\mu(\mathbb{T}^2)=1>0$, and hence $\{\mathbb{T}^2,T\}$ satisfies \textit{MRP}. \quad \underline{$(iii)\Rightarrow (iv)$}: Suppose that $\{\mathbb{T}^2,T\}$ satisfies \textit{MRP}, that is, $PRP(T)$ has strictly positive Lebesgue measure. Since $T$ is $\mu$-ergodic on $\mathbb{T}^2$, the system $\{\mathbb{T}^2,T\}$ satisfies \textit{TRP}. \quad \underline{$(iv)\Rightarrow (i)$}: Suppose $\{\mathbb{T}^2,T\}$ satisfies \textit{TRP}. Then the set $PRP(T)$ is dense in $\mathbb{T}^2$. In particular, there exists $\omega\in\mathbb{T}^2$ such that $\{T^n\omega\}_{n\geq0}$ satisfies \textit{RP}. This means that for all $\varepsilon>0$ there exists $q_k$ such that $d(T^{n+q_k}\omega, T^n\omega)<\varepsilon$ for $0\leq n\leq q_k$. According to equation \eqref{Kap5Eq7}: \begin{equation}\label{Kap5Eq8} \langle 2q_k\omega_1+q_k^2\alpha+2nq_k\alpha-q_k\alpha\rangle<\varepsilon,\quad\text{for $n$ such that:}\quad 0\leq n\leq q_k \end{equation} For $n = 0$, equation \eqref{Kap5Eq8} implies that $\langle 2q_k\omega_1+q_k^2\alpha-q_k\alpha\rangle<\varepsilon$. That is, $2q_k\omega_1+q_k^2\alpha-q_k\alpha\in\mathbb{T}$ is at a distance less than $\varepsilon$ from $0$. Then, for $\varepsilon>0$ arbitrarily small, $\langle 2nq_k\alpha\rangle=n\langle 2q_k\alpha\rangle$ for all $n$: $0\leq n\leq q_k$. Taking $n=q_k$ we get $q_k\langle q_k\alpha\rangle<\varepsilon$. We conclude that $q_k\langle \alpha q_k\rangle\rightarrow0$, which by definition means that $\alpha$ is not badly approximable. \end{proof} \begin{corollary} \label{Teorema 4_1} \textbf{Spectral type of Schrödinger operators with potential defined by the skew-shift on $\mathbb{T}^2$}.
Let $T:\mathbb{T}^2\rightarrow\mathbb{T}^2$ be the operator defined by $T(\omega_1,\omega_2)=(\omega_1+2\alpha, \omega_1+\omega_2)$, where $\alpha$ is not badly approximable. Then, for a generic $\omega\in\mathbb{T}^2$ and a generic $f\in C(\mathbb{T}^2)$, the operator: \begin{equation}\label{Kap5Eq20} \begin{aligned} H_\omega: \ell^2(\mathbb{Z})&\rightarrow \ell^2(\mathbb{Z})\\ \psi(n)&\mapsto \psi(n+1)+\psi(n-1)+f(T^n\omega)\psi(n)\\ &=\psi(n+1)+\psi(n-1)+f((\omega_1+2n\alpha, \omega_2+2n\omega_1+n(n-1)\alpha))\psi(n) \end{aligned} \end{equation} has purely continuous spectrum. \end{corollary} \begin{proof} Suppose that $\alpha$ is not badly approximable. By theorem \ref{Teorema 4}, the system $\{\mathbb{T}^2, T\}$ satisfies \textit{TRP}. This in turn implies, as a consequence of \textbf{Theorem B}, that purely continuous spectrum is a generic property of $\{H_\omega\}_{\omega\in\mathbb{T}^2}$. \end{proof} In this paper we studied the conditions that allow us to conclude that purely continuous spectrum is a generic property of ergodic Schrödinger operators. The theorems studied, as pointed out by Boshernitzan and Damanik \cite{damanikBase}, support the notion that a repetition property is associated with the absence of point spectrum in Schrödinger operators. The topic covered in this paper can be expanded in multiple directions, for example: \textit{Schrödinger operators in higher dimensions}. This direction has been pursued by authors such as Fan and Han \cite{fan}, who study the continuous spectrum of Schrödinger operators in $\ell^2(\mathbb{Z}^d)$ for a measurable potential function $f: \mathbb{T}^d\rightarrow \mathbb{R}$. \textit{Repetition properties and entropy}. A question of interest is what implications the properties \textit{TRP} and \textit{MRP} have for the entropy of a dynamical system. This issue was investigated by Huang et al. \cite{huang}, who state that positive entropy of a dynamical system $\{\Omega, T\}$ implies the presence of point spectrum in a generic Schrödinger operator $\{H_\omega\}_{\omega\in\Omega}$. Finally, there remains the question of necessary conditions on the dynamical system $\{\Omega, T\}$ that guarantee generic continuous spectrum for the family of Schrödinger operators $\{H_\omega\}_{\omega\in\Omega}$, a topic that falls within the theory of inverse problems in spectral theory. \end{document}
\begin{document} \title{\textsf{Singular Behavior of Electric Field of\\ High Contrast Concentrated Composites}} \thispagestyle{empty} \begin{abstract} \noindent A heterogeneous medium of constituents with vastly different mechanical properties, whose inhomogeneities are in close proximity to each other, is considered. The gradient of the solution to the corresponding problem exhibits singular behavior (blow up) with respect to the distance between inhomogeneities. This paper introduces a concise procedure for capturing the leading term of the gradient's asymptotics precisely. This procedure is based on a thorough study of the system's energy. The developed methodology allows for straightforward generalization to heterogeneous media with a nonlinear constitutive description. \end{abstract} \section{Introduction} This paper studies the blow up phenomena that occur in heterogeneous media consisting of a finite-conductivity matrix and perfectly conducting inhomogeneities (particles or fibers) that are close to touching. This investigation is motivated by the issue of material failure initiation, where one has to assess the magnitude of local fields, including extreme electric or current fields, heat fluxes, and mechanical loads, in the zones of high field concentrations. Such zones are normally created by large gradient flows confined in very thin regions between particles of different potentials, see e.g. \cite{mark,bud-car,keller93,basl}. These media are described by elliptic or degenerate elliptic equations with discontinuous coefficients. The analytical study of solution regularity for such problems has been pursued actively since 1999 and has resulted in a series of papers \cite{bonnetier-vogelius,li-nirenbrg,li-vogelius,basl,akl,aklll,akllz,yun01,yun02,lim-yun,bly,bly2,gn12} investigating different cases depending on the dimension, the shape of the inclusions, the applied boundary conditions, etc. The main results to date can be summarized as follows: \textit{For two perfectly conducting particles of an arbitrary smooth shape located at distance $\delta$ from each other and away from the external boundary, typically there exists $C>0$ independent of $\delta$ such that} \begin{equation}\label{E:prev} \frac{1}{C\sqrt{\delta}}\leq \|\nabla u\|_{L^{\infty}} \leq \frac{C}{\sqrt{\delta}} \quad \mbox{for} \quad d=2,~\qquad\frac{1}{C\,\delta\log\delta^{-1}}\leq \|\nabla u\|_{L^{\infty}} \leq \frac{C}{\delta\log\delta^{-1}} \quad \mbox{for} \quad d=3, \end{equation} and corresponding bounds for the case of $N>2$ particles and $d>3$, see \cite{bly2}. It is important to note that even though some of the cited studies indicate which parameters the constant $C$ in \eqref{E:prev} depends on, the precise asymptotics has not been captured; only bounds have been established. Moreover, the methods in the aforementioned contributions have their limitations: some of them work only in $2$D, some deal with inhomogeneities of spherical shape only, and the developed techniques, except one \cite{gn12} by the author, were designed to treat \textit{linear} problems only, with no direct extension or generalization to the nonlinear case. In the current paper, an approach to gradient estimates for problems with particles of degenerate properties is presented; it works for any number of particles of arbitrary shape in any dimension.
The advantage and novelty are that the rate of blow up of the electric field is captured {\bf precisely}, as opposed to only up to constants by the existing methods, and that the approach allows for direct extensions to the nonlinear case (e.g. the $p$-Laplacian). In particular, it is shown that \begin{equation}\label{E:result} \|\nabla u\|_{L^{\infty}}=\frac{C}{\sqrt{\delta}} \quad \mbox{for} \quad d=2,~\qquad \|\nabla u\|_{L^{\infty}} = \frac{C}{\delta\log\delta^{-1}} \quad \mbox{for} \quad d=3, \end{equation} with an explicitly computable constant $C$ that depends on the dimension $d$, the particle array and shapes, and the applied boundary field. The rest of the paper is organized as follows. Section \ref{S:formulation} provides the problem setting and the formulation of the main results, whose proofs are presented in Section \ref{S:proof}. Possible extensions are discussed in Section \ref{S:extensions} and conclusions are given in Section \ref{S:conclusions}. Proofs of auxiliary facts are given in the Appendices. \noindent \textbf{Acknowledgements}. The author thanks A. Novikov for helpful discussions on the subject of the paper. \section{Problem Formulation and Main Results} \label{S:formulation} The current paper focuses only on the physically relevant dimensions $d=2,3$. To that end, let $\Omega \subset \mathbb{R}^d$, $d=2,3$, be an open bounded domain with $C^{1,\alpha}$ ($0<\alpha\leq 1$) boundary $\Gamma$. It contains two particles $\mathcal{B}_1$ and $\mathcal{B}_2$ with smooth boundaries at distance $0<\delta \ll 1$ from each other; see Figure \ref{F:domain}. We assume \begin{equation}\label{E:distance} \hbox{dist}(\partial \Omega, \mathcal{B}_1\cup \mathcal{B}_2) \geq K \end{equation} for some $K$ independent of $\delta$. Let $\Omega_\delta$ model the matrix (or the background medium) of the composite, that is, $\Omega_\delta=\Omega\setminus\overline{(\mathcal{B}_1\cup \mathcal{B}_2)}$, in which we consider \begin{equation} \label{E:pde-form-main} \left\{ \begin{array}{r l l } \triangle u(x) & = 0, & \displaystyle x\in \Omega_{\delta}\\[3pt] u(x) &=\mbox{const}, & \displaystyle x\in \partial \mathcal{B}_i,~ i=1,2\\[3pt] \displaystyle \int_{\partial \mathcal{B}_i}\frac{\partial u}{\partial n} ~ds&=0, & i=1,2\\[3pt] u(x) &= U(x), & \displaystyle x\in \Gamma \end{array} \right. \end{equation} where a bounded weak solution $u$ represents the electric potential in $\Omega_\delta$, and $U$ is the given applied potential on the external boundary $\Gamma$. Note that $u$ takes a constant value, which we denote by $\mathcal{T}_i$, on the boundary of the particle $\mathcal{B}_i$ ($i=1,2$). This is the unique constant for which the zero-flux condition, that is, the third equation of \eqref{E:pde-form-main}, is satisfied. The constants $\mathcal{T}_1$, $\mathcal{T}_2$ are unknown a priori and should be found in the course of solving the problem. \begin{figure}\caption{The domain $\Omega$ containing two particles $\mathcal{B}_1$ and $\mathcal{B}_2$ at distance $\delta$ from each other.}\label{F:domain2} \label{F:domain0} \label{F:domain} \end{figure} The goal is to derive the asymptotics of the solution gradient with respect to the small parameter $\delta\ll 1$ that defines the close proximity of the particles to each other. To formulate the main result of the paper, consider an auxiliary problem defined as follows. Construct a line connecting the centers of mass of $\mathcal{B}_1$ and $\mathcal{B}_2$ and ``move'' the particles toward each other along this line until they touch.
Denote the newly obtained domain outside the particles by $\Omega_o$, where we consider the following problem: \begin{equation} \label{E:pde-form-v0} \left\{ \begin{array}{r l l } \triangle v_o(x) & = 0, & x\in\Omega_o\\[3pt] v_o(x) & =\mbox{const}, & \displaystyle x\in\partial \mathcal{B}_1\cup \partial\mathcal{B}_2\\[3pt] \displaystyle \int_{\partial\mathcal{B}_1}\frac{\partial v_o}{\partial n}~ds+ \int_{\partial\mathcal{B}_2}\frac{\partial v_o}{\partial n}~ds & =0, \\[5pt] v_o(x) & = U(x) , & x\in\Gamma \end{array} \right. \end{equation} This problem differs from \eqref{E:pde-form-main} in that the potential takes the {\it same} constant value on the boundaries of {\it both} particles. We denote this potential by $\mathcal{T}_o$ and introduce a number that depends on the external potential $U$: \begin{equation} \label{E:R0} \mathcal{R}_o=\mathcal{R}_o[U]:= \int_{\partial\mathcal{B}_1}\frac{\partial v_o}{\partial n}~ds. \end{equation} Without loss of generality, we assume that the particle potentials in \eqref{E:pde-form-main} satisfy $\mathcal{T}_2>\mathcal{T}_1$, which means that $\mathcal{R}_o>0$ for sufficiently small $\delta$. The following theorem summarizes the main result of this study. \begin{theorem} \label{T:main} The asymptotics of the electric field for problem \eqref{E:pde-form-main} is given by \begin{equation} \label{E:main-res} \|\nabla u\|_{L^\infty(\Omega_\delta)}=\left[1+o(1)\right] \begin{cases} \displaystyle \frac{\mathcal{R}_o}{\mathcal{C}_{12}}\frac{1}{\delta^{1/2}}, & d=2\\[8pt] \displaystyle \frac{\mathcal{R}_o}{\mathcal{C}_{12}}\frac{1}{ \delta |\ln \delta|}, & d=3 \end{cases} \quad \quad \mbox{for }~\delta\ll 1, \end{equation} with $\mathcal{R}_o$ defined above in \eqref{E:R0} and an explicitly computable constant $\mathcal{C}_{12}$ that depends on the curvatures of the particle boundaries $\partial \mathcal{B}_1$ and $\partial \mathcal{B}_2$ at the point of closest distance and is defined below in \eqref{E:C0}. \end{theorem} \section{Proof of Main Results} \label{S:proof} The proof of Theorem \ref{T:main} consists of ingredients collected in the following facts. In \cite{gn12}, using the method of barriers, it was shown that the electric field of the system associated with the problem: \begin{equation} \label{E:phi-eq} \left\{ \begin{array}{r l l } \triangle \phi(x) & = 0, & x\in\Omega_ \delta\\[3pt] \phi(x) & =T_i, & \displaystyle x\in\partial \mathcal{B}_i,~i=1,2\\[3pt] \phi(x) & = U(x) , & x\in\Gamma \end{array} \right. \end{equation} stated on the same domain $\Omega_\delta$ with the same boundary potential $U$ as in the above problem \eqref{E:pde-form-main} is given by \[ \|\nabla \phi \|_{L^\infty(\Omega_\delta)}=\frac{|T_2-T_1|}{\delta}[1+o(1)], \quad \mbox{for }~ \delta\ll 1. \] In contrast to \eqref{E:pde-form-main}, the constants $T_1$ and $T_2$ in \eqref{E:phi-eq} are arbitrary, which implies that the solution of \eqref{E:phi-eq} may not satisfy the zero-flux integral identities on $\partial \mathcal{B}_i$ imposed in \eqref{E:pde-form-main}. In particular, one has \begin{lemma} \label{L:elfield} The asymptotics of the electric field of \eqref{E:pde-form-main} is as follows: \begin{equation} \label{E:u-grad} \|\nabla u \|_{L^\infty(\Omega_\delta)}=\frac{\mathcal{T}_2-\mathcal{T}_1}{\delta}[1+o(1)], \quad \mbox{for }~ \delta\ll 1, \end{equation} where $\mathcal{T}_1$ and $\mathcal{T}_2$ are the potentials on $\mathcal{B}_1$ and $\mathcal{B}_2$, respectively, for which the zero integral flux condition of \eqref{E:pde-form-main} is satisfied.
\end{lemma} With \eqref{E:u-grad}, the problem is reduced to finding the asymptotics of the potential difference $\mathcal{T}_2-\mathcal{T}_1$ in terms of the distance parameter $\delta$, which is given in the following proposition. \begin{proposition} \label{L:potdif} The asymptotics of the potential difference $\mathcal{T}_2-\mathcal{T}_1$ is given by: \begin{equation} \label{E:potdif} \mathcal{T}_2-\mathcal{T}_1=\frac{\mathcal{R}_o}{g_\delta}[1+o(1)], \quad \mbox{for }~ \delta\ll 1, \end{equation} where $\mathcal{R}_o$ is defined by \eqref{E:R0} and $g_\delta$ by: \begin{equation} \label{E:g} g_\delta=\begin{cases} \displaystyle \mathcal{C}_{12}\delta^{-1/2}, & d=2\\[3pt] \displaystyle \mathcal{C}_{12} |\ln \delta|, & d=3 \end{cases} \end{equation} with the constant $\mathcal{C}_{12}$, introduced below in \eqref{E:C0}, that depends on the curvatures of the particle boundaries at the point of their closest distance. \end{proposition} \noindent \textit{Proof of Proposition \ref{L:potdif}.} \\ The method of proof is based on the observation that the asymptotics \eqref{E:potdif} of $\mathcal{T}_2-\mathcal{T}_1$ can be derived by investigating the energy associated with the system \eqref{E:pde-form-main} and defined by: \begin{equation} \label{E:enE} \mathcal{E}=\int_{\Omega_\delta}|\nabla u|^2~dx, \end{equation} where $u$ solves \eqref{E:pde-form-main}. A remarkable feature of problem \eqref{E:pde-form-main} is that the potentials $\mathcal{T}_1$ and $\mathcal{T}_2$ are minimizers of the energy viewed as a quadratic form of the potentials: \begin{equation} \label{E:IML} \mathcal{E}=\min_{\{T_1,T_2\}} E(T_1,T_2), \quad E(T_1,T_2)= \int_{\Omega_\delta}|\nabla \phi|^2~dx, \quad \mbox{where }~ \phi~~ \mbox{solves }~ \eqref{E:phi-eq}. \end{equation} This observation is the essence of the so-called Iterative Minimization Lemma, first introduced in \cite{bgn2}. Therefore, if we find an approximation of $\mathcal{E}$ for sufficiently small $\delta$, we will then be able to derive the asymptotics of $\mathcal{T}_2-\mathcal{T}_1$. For the energy $\mathcal{E}$ the following holds true. \begin{lemma} \label{L:energy} The energy $\mathcal{E}$ of \eqref{E:pde-form-main} can be written as \begin{equation} \label{E:min-pr} \mathcal{E}=\min_{\{t_1,t_2\}} \left[a_1t_1^2+a_2t_2^2+2b_1t_1+2b_2t_2+2c_{12}t_1t_2+C\right], \end{equation} with the asymptotics of the coefficients of the quadratic form $E$: \begin{equation} \label{E:coef-as} a_1=a_2=g_\delta[1+o(1)], \quad b_1=-b_2=\mathcal{R}_o[1+o(1)], \quad c_{12}=-g_\delta[1+o(1)], \quad \mbox{for }~ \delta\ll 1, \end{equation} and $\mathcal{R}_o$ given by \eqref{E:R0}, and $g_\delta$ by \eqref{E:g}. \end{lemma} This lemma is proven in Appendix \ref{A:en-lem}. Now, substituting the asymptotics \eqref{E:coef-as} of the coefficients into \eqref{E:min-pr} and dropping the lower order terms, we define the quadratic form: \[ \hat{E}(t_1,t_2)=g_\delta(t_2-t_1)^2-2\mathcal{R}_o(t_2-t_1), \] whose minimizer $(\hat{t}_1,\hat{t}_2)$ provides the asymptotics of the sought potential difference: since $\hat{E}$ depends only on the difference $s=t_2-t_1$, minimizing $g_\delta s^2-2\mathcal{R}_o s$ over $s$ gives $\hat{t}_2-\hat{t}_1=\mathcal{R}_o/g_\delta$, and therefore \[ \mathcal{T}_2-\mathcal{T}_1=|\hat{t}_2-\hat{t}_1|[1+o(1)]=\frac{\mathcal{R}_o}{g_\delta}[1+o(1)], \quad \mbox{for }~ \delta\ll 1. \] This concludes the proof of Proposition \ref{L:potdif}. $\Box$ \noindent \textit{Proof of Theorem \ref{T:main}.}\\ Asymptotic relations \eqref{E:u-grad}-\eqref{E:potdif} and definition \eqref{E:g} yield the main result \eqref{E:main-res} for sufficiently small $\delta$. $\Box$ \section{Extensions} \label{S:extensions} \noindent {\bf 4.1. Extension to the case of $N>2$ particles}.
\hspace{2pt} The approach presented above allows for an extension to any number of particles $N>2$, where neighboring particles $\mathcal{B}_i$ and $\mathcal{B}_j$ are located at distance $\delta_{ij}=O(\delta)\ll 1$ from each other; see Figure \ref{F:domainN}. The notion of ``neighbors'' can be defined via the Voronoi tessellation with respect to the particles' centers of mass; namely, the neighbors are the nodes that share a common Voronoi face. In this case, similarly to above, one has to consider a ``limiting problem'' \eqref{E:pde-form-v0} in the domain $\Omega_o$ where the third condition is replaced by \[ \sum_{i=1}^N\int_{\partial\mathcal{B}_i}\frac{\partial v_o}{\partial n}~ds=0.\] To obtain $\Omega_o$, one can connect the centers of mass of neighboring pairs $\mathcal{B}_i$, $\mathcal{B}_j$ with lines and ``move'' all particles along those lines toward each other until each $\partial \mathcal{B}_i$ touches at least one of its neighbors, where $i\in \{1,\ldots,N\}$, $j\in \mathcal{N}_i$, and $\mathcal{N}_i$ is the set of indices of the neighbors of the particle $\mathcal{B}_i$. Now, similarly to \eqref{E:R0}, introduce the numbers \[ \mathcal{R}_i=\mathcal{R}_i[U]:= \int_{\partial\mathcal{B}_i}\frac{\partial v_o}{\partial n}~ds, \quad i\in \{1,\ldots,N\}. \] \begin{figure} \caption{Composite containing $N>2$ particles $\mathcal{B}_i$.} \label{F:domainN} \end{figure} Then one minimizes the energy quadratic form $E$ as in \eqref{E:min-pr} and derives the asymptotics of its coefficients in terms of $\mathcal{R}_i$ and $\delta_{ij}$, using $|\mathcal{T}_i-\mathcal{T}_\delta|\ll 1$, to obtain the potential difference asymptotics for the neighbors: \begin{equation} \label{E:potdif-N} |\mathcal{T}_i-\mathcal{T}_j|=\frac{|\mathcal{R}_i-\mathcal{R}_j|}{g_{ij}}[1+o(1)], \quad \mbox{for }~ \delta\ll 1, \quad \mbox{and }~ i\in \{1,\ldots,N\}, ~j\in \mathcal{N}_i. \end{equation} The asymptotics of the parameters $g_{ij}$ in \eqref{E:potdif-N} is similar to that of $g_\delta$ and is given by \[ g_{ij}=\mathcal{C}_{ij} \begin{cases} \delta^{-1/2}_{ij},& d=2\\[3pt] \left|\ln \delta_{ij}\right|,& d=3 \end{cases}, \] with $\mathcal{C}_{ij}$ given by formula \eqref{E:C0} in Appendix \ref{A:coeff}, with the indices $1$ and $2$ replaced by $i$ and $j$. Finally, use \[ \|\nabla u \|_{L^\infty(\Omega_\delta)}=\max_{i\in\{1,..,N\}, ~j\in \mathcal{N}_i}\frac{|\mathcal{T}_i-\mathcal{T}_j|}{\delta_{ij}}[1+o(1)], \quad \mbox{for }~ \delta\ll 1, \] with the asymptotics \eqref{E:potdif-N} to obtain the blow up of the electric field of the composite with more than two particles. \noindent {\bf 4.2. Extension to the nonlinear case}. \hspace{2pt} One can also generalize the proposed methodology to high-contrast materials with the matrix described by nonlinear constitutive laws such as the $p$-Laplacian. The system's energy in this case is given by $\displaystyle \mathcal{E}=\int_{\Omega_\delta}|\nabla u|^p~dx$, ($p>2$), where $u$ solves \eqref{E:pde-form-main} with the first and third equations replaced by $\nabla \cdot \left( |\nabla u|^{p-2}\nabla u\right)=0$ in $\Omega_\delta$ and $\displaystyle \int_{\partial \mathcal{B}_i}|\nabla u|^{p-2} (\nabla u\cdot n) ~ds=0$, respectively. Note that for a successful application of the described approach, one needs to show that the energy function $E(T_1,T_2)$, whose minimal value $ \mathcal{E}$ is attained at the solution $u$, is differentiable with respect to the potential $T_i$ on $\partial \mathcal{B}_i$.
The blow up of the electric field is then \[ \|\nabla u\|_{L^\infty(\Omega_\delta)}=\left( \frac{\mathcal{R}_o}{\mathcal{C}_{12}} \right)^{\frac{1}{p-1}} \delta^{-\frac{d-1}{2(p-1)}}\left[1+o(1)\right], \quad p>2, \quad d=2,3, \quad \mbox{for }~\delta\ll 1, \] see also \cite{gn12}. \noindent {\bf 4.3. Extension to dimensions $d>3$}. \hspace{2pt} The procedure described above remains the same if one needs to obtain the asymptotics of $|\nabla u|$ in dimensions greater than three. For this, one first has to derive the asymptotics of $g_\delta$ for $d>3$, following the method described in Appendices \ref{A:G-lem} and \ref{A:coeff}. For simplicity of presentation, we omit this case here. \section{Conclusions} \label{S:conclusions} As observed in \cite{mark,bud-car,keller93}, in a composite consisting of a matrix of finite conductivity with perfectly conducting particles close to touching, the electric field exhibits blow up. This blow up is, in fact, the main cause of material failure, which occurs in the thin gaps between neighboring particles of different potentials. The electric field of such composites is described by the gradient of the solution to the corresponding boundary value problem. The current paper provides a concise and elegant procedure for capturing the singular behavior of the solution gradient {\it precisely} that does not require the heavy analytical machinery developed in previous studies \cite{akl,aklll,akllz,yun01,yun02,lim-yun,bly,bly2,gn12}. This procedure relies on simple observations about the energy of the corresponding system and its minimizers, which suffice to obtain the sought asymptotics exactly. The techniques developed and adapted here are independent of the dimension $d$, the particle shapes and their total number $N$, whereas strict dependence on $d$ and the particle shapes was the main limitation of previous contributions on the subject \cite{akl,aklll,akllz,yun01,yun02,lim-yun,bly,bly2,gn12}. Furthermore, the procedure developed above allows for a straightforward generalization to the {\it nonlinear} case. \section{Appendices} \subsection{Proof of Lemma \ref{L:energy}} \label{A:en-lem} \textit{Proof.} Consider a family of auxiliary problems defined on the same domain $\Omega_\delta$ as \eqref{E:pde-form-main}: \begin{equation} \label{E:pde-form-v-delta} \left\{ \begin{array}{r l l } \triangle v(x) & = 0, & \displaystyle x\in\Omega_\delta\\[3pt] v(x) & =\mbox{const}, & \displaystyle x\in\partial \mathcal{B}_1\cup \partial\mathcal{B}_2\\[3pt] \displaystyle \int_{\partial\mathcal{B}_1}\frac{\partial v}{\partial n}~ds + \int_{\partial\mathcal{B}_2}\frac{\partial v}{\partial n}~ds& =0, \\[5pt] v(x) & = U(x), & \displaystyle x\in \Gamma \end{array} \right. \end{equation} As in \eqref{E:pde-form-v0}, the constant value of the potential is the same on both particles; we denote it by $\mathcal{T}_\delta$. However, in contrast to \eqref{E:pde-form-v0}, here the particles are located at distance $\delta$ from each other, while in \eqref{E:pde-form-v0} they touch at one point. With that, similarly to \eqref{E:R0}, we introduce the number \begin{equation} \label{E:R_delta} \mathcal{R}_\delta[U]:=\int_{\partial\mathcal{B}_1}\frac{\partial v}{\partial n}~ds. \end{equation} In \cite{gn12}, it was shown that the asymptotics of $\mathcal{R}_\delta[U]$ is given by \begin{equation} \label{E:Rd-asym} \mathcal{R}_\delta[U]=\mathcal{R}_o[1+o(1)], \qquad \mbox{for }~ \delta\ll 1.
\end{equation} Using the linearity of problem \eqref{E:pde-form-main}, we decompose its solution into \begin{equation} \label{E:u-decomp} u=v+(\mathcal{T}_1-\mathcal{T}_\delta)\psi_1+(\mathcal{T}_2-\mathcal{T}_\delta)\psi_2, \end{equation} with $\psi_i$ ($i=1,2$) solving \begin{equation} \label{E:psi-eq} \left\{ \begin{array}{r l l } \triangle \psi_i(x) & = 0, & x\in\Omega_ \delta\\[3pt] \psi_i(x) & =\delta_{ij}, & \displaystyle x\in\partial \mathcal{B}_j,~~i,j\in\{1,2\}\\[3pt] \psi_i(x) & = 0, & x\in\Gamma \end{array} \right. \end{equation} where $\delta_{ij}$ is the Kronecker delta. Invoking \eqref{E:u-decomp}, we compute the energy \eqref{E:enE} of the system and obtain: \begin{equation} \label{E:en-decomp} \mathcal{E}=\mathcal{E}_v+\mathcal{G}_1(\mathcal{T}_1-\mathcal{T}_\delta)^2+\mathcal{G}_2(\mathcal{T}_2-\mathcal{T}_\delta)^2+ 2\mathcal{R}_\delta[U](\mathcal{T}_1-\mathcal{T}_2)+ 2\mathcal{C}_{12}(\mathcal{T}_1-\mathcal{T}_\delta)(\mathcal{T}_2-\mathcal{T}_\delta), \end{equation} where \begin{equation} \label{E:not} \mathcal{E}_v:= \int_{\Omega_\delta}|\nabla v|^2~dx, \qquad \mathcal{G}_i:= \int_{\Omega_\delta}|\nabla \psi_i|^2~dx=\int_{\partial \mathcal{B}_i}\frac{\partial \psi_i}{\partial n}~ds, \quad i=1,2, \end{equation} are the energies of the systems given by \eqref{E:pde-form-v-delta} and \eqref{E:psi-eq}, respectively, and \[ \mathcal{C}_{12}:= \int_{\Omega_\delta}(\nabla \psi_1 \cdot\nabla \psi_2)~dx. \] A straightforward integration by parts yields \begin{equation} \label{E:C12} \mathcal{C}_{12}=-\mathcal{G}_1+C_1=-\mathcal{G}_2+C_2, \end{equation} where the constants $C_i$ depend on $d$, $K$ and the shape of the particles, but are independent of $\delta$. On the other hand, the problem \eqref{E:pde-form-v-delta} is regular in the sense that its electric field $|\nabla v|$ does not exhibit blow up, since there is no potential drop between the particles. Hence, \begin{equation} \label{E:Ev} \mathcal{E}_v=:C, \end{equation} which depends on the same parameters as the above constants. Finally, in Appendix \ref{A:G-lem} we show that for sufficiently small $\delta$: \begin{equation} \label{E:G-blowup} \mathcal{G}_i=g_\delta[1+o(1)]. \end{equation} With the notation introduced in \eqref{E:coef-as}, \eqref{E:Ev}, \eqref{E:not}, the Iterative Minimization Lemma \eqref{E:IML}, and the asymptotics \eqref{E:Rd-asym}, \eqref{E:C12}, \eqref{E:G-blowup}, we obtain from \eqref{E:en-decomp}: \[ \mathcal{E}=C+a_1(\mathcal{T}_1-\mathcal{T}_\delta)^2+a_2(\mathcal{T}_2-\mathcal{T}_\delta)^2+ 2b_1\mathcal{T}_1+2b_2\mathcal{T}_2+ 2c_{12}(\mathcal{T}_1-\mathcal{T}_\delta)(\mathcal{T}_2-\mathcal{T}_\delta), \] which with \eqref{E:IML} yields \eqref{E:min-pr}, where $t_i=T_i-\mathcal{T}_\delta$, $i=1,2$.\\ $\Box$ \subsection{Asymptotics of $\mathcal{G}_i$} \label{A:G-lem} Here we prove the asymptotic formula \eqref{E:G-blowup}, which is stated in the following lemma. \begin{lemma} \label{L:G-blowup} For $\delta\ll 1$, the asymptotics of the energy $\mathcal{G}_i$ defined in \eqref{E:not} is given by $\mathcal{G}_i=g_\delta[1+o(1)]$, $i=1,2$, with $g_\delta$ defined in \eqref{E:g}. \end{lemma} \noindent \textit{Proof.} To derive the asymptotics of $\mathcal{G}_i$, we adopt the method of {\it variational bounds}, which has become a classical tool for capturing the leading terms of the asymptotics of the energy of the corresponding system. This method is based on two equivalent variational formulations of the corresponding problem that provide upper and lower bounds for the energy which match to the leading order of the asymptotics.
Employing this method, we make use of a couple of observations made in \cite{bk,bn,bgn1} which are vital in capturing the sought asymptotics. But first, we need to introduce the coordinate system in which the construction will be made. First, we write each point $ x\in \mathbb{R}^d$ as $x=(\bar{x},x_d)$ where \[\left\{ \begin{array}{l l l} \bar{x}=x,&x_d=y, & \mbox{when }~d=2\\[5pt] \bar{x}=(x,y),&x_d=z, & \mbox{when }~d=3 \end{array} \right. \] Then, connect the centers of mass of the particles with a line and ``move'' $\mathcal{B}_1$ and $\mathcal{B}_2$ along this line toward each other until they touch, thus producing the domain $\Omega_o$ as above in \eqref{E:pde-form-v0}. The point of their touching defines the origin of our cylindrical coordinate system. The line connecting the centers will be the axis $Ox_d$, see Figure \ref{F:neck01}. When the particles are ``moved back'' to distance $\delta$ from each other along $Ox_d$, we construct a ``cylinder'' of radius $w\gg \delta$ that contains this line. This ``cylinder'', which we call a {\it neck} and denote by $\Pi$, is depicted as the red region in Figure \ref{F:neck02}. Also, introduce the distance $H=H(\bar{x})$ between the boundaries of $\mathcal{B}_1$ and $\mathcal{B}_2$, which in the selected coordinate system is a function of $\bar{x}\in \mathbb{R}^{d-1}$. \begin{figure}\caption{The touching configuration and the neck $\Pi$ between the particles at distance $\delta$.}\label{F:neck01} \label{F:neck02} \label{F:neck} \end{figure} The above-mentioned observations about energy estimates are as follows. First, the minimal value of the energy functional in the neck $\Pi$ is attained on the system with insulating lateral boundary $\partial \Pi$ of the cylinder, that is, \[ \mathcal{G}_i=\int_{\Omega_\delta}|\nabla \psi_i|^2~dx\geq \int_{\Pi}|\nabla \psi_i|^2~dx \geq \int_{\Pi}|\nabla \psi^i_{\Pi}|^2~dx, \] where the function $ \psi^i_{\Pi}$ solves the problem \begin{equation} \label{E:psi-Pi} \left\{ \begin{array}{r l l } \triangle \psi^i_{\Pi}(x) & = 0, & x\in\Pi\\[3pt] \psi^i_{\Pi}(x) & =\delta_{ij}, & \displaystyle x\in\partial \mathcal{B}_j,~~i,j\in\{1,2\}\\[5pt] \displaystyle \frac{\partial \psi^i_{\Pi}}{\partial n}(x) & = 0, & x\in \partial\Pi \end{array} \right. \end{equation} On the other hand, since the energy $\mathcal{G}_i$ is the minimal value of the energy functional, attained at the minimizer $\psi_i$, an upper bound for it is given by any test function $\phi_i$ from the set \[ V_i=\left\{\phi_i\in H^1(\Omega):~ \phi_i =\delta_{ij}~~\mbox{on } \partial \mathcal{B}_j,~\phi_i =0~\mbox{on }\Gamma\right\}, \] via \[ \mathcal{G}_i=\int_{\Omega_\delta}|\nabla \psi_i|^2~dx \leq \int_{\Omega_\delta}|\nabla \phi_i|^2~dx. \] Hence, the variational bounds for $\mathcal{G}_i$ are \begin{equation} \label{E:en-bounds} \int_{\Pi}|\nabla \psi^i_{\Pi}|^2~dx\leq \mathcal{G}_i \leq \int_{\Omega_\delta}|\nabla \phi_i|^2~dx, \quad \mbox{where } \phi_i \in V_i, ~\mbox{ and } \psi^i_{\Pi}\mbox{ solves } \eqref{E:psi-Pi}. \end{equation} Therefore, the problem is now reduced to constructing an approximation of $\nabla \psi^i_{\Pi}$ and finding a function $\phi_i\in V_i$ such that the integrals in \eqref{E:en-bounds} match to the leading order for $\delta\ll 1$. For this purpose, one can use {\it Keller's} functions \cite{keller63}, defined in $\Pi$ by \begin{equation} \label{E:keller} \phi_\Pi^i(x)= \frac{x_d}{H(\bar{x})}, \quad x\in \Pi.
\end{equation} With this $\phi_\Pi^i$, we define a test function $\phi_i\in V_i$ by \[ \phi_i(x)=\begin{cases} \phi_\Pi^i(x), & x\in \Pi\\[3pt] \phi_o^i(x) & x\in \Omega_\delta\setminus \Pi \end{cases}, \] where $\phi_o^i$ solves \[ \left\{ \begin{array}{r l l } \triangle\phi_o^i(x) & = 0, & x\in \Omega_\delta\setminus \Pi\\[3pt] \phi_o^i(x) & =\delta_{ij}, & \displaystyle x\in\partial \mathcal{B}_j,~~i,j\in\{1,2\}\\[5pt] \phi_o^i(x) & = \phi_\Pi^i, & x\in \partial\Pi\\[3pt] \phi_o^i(x) & = 0, & x\in \Gamma \end{array} \right. \] Applying the method of barriers to this problem, one can show that $|\nabla \phi_o^i|\leq C$ with a constant $C$ depending on $d$ and $K$ but independent of $\delta$. Thus, \[ \mathcal{G}_i \leq \int_{\Pi}|\nabla \phi_\Pi^i|^2~dx+C,\quad \mbox{for } \delta\ll 1. \] The {\it dual variational principle} will help to estimate the integral $\displaystyle \int_{\Pi}|\nabla \psi^i_{\Pi}|^2~dx$, namely, \[ \begin{array}{r l l } \nabla \psi^i_{\Pi}& \displaystyle =\mbox{argmax}_{W_\Pi}\left[- \int_{\Pi}j_i^2~dx+2\int_{\partial \mathcal{B}_i}(j_i\cdot n)~ds\right],\\[12pt] W_{\Pi}& \displaystyle= \left\{j\in L^2(\Pi;\mathbb{R}^d):~\nabla\cdot j=0~\mbox{in }\Pi,~j\cdot n=0~\mbox{on } \partial\Pi\right\}. \end{array} \] The test flux $j_i\in \mathbb{R}^d$ is chosen as \begin{equation} \label{E:test-flux} j_i=\begin{cases} \displaystyle \left(0,\frac{1}{H(x)}\right), & d=2\\[10pt] \displaystyle \left(0,0,\frac{1}{H(x,y)}\right), & d=3 \end{cases} \end{equation} Therefore, \begin{equation} \label{E:en-test-flux} \int_{\Pi}j_i^2~dx=\int_{\Pi}\frac{d\bar{x}}{H^2(\bar{x})}. \end{equation} Hence, we have two-sided bounds for $ \mathcal{G}_i$: \[ - \int_{\Pi}j_i^2~dx+2\int_{\partial \mathcal{B}_i}(j_i\cdot n)~ds\leq \mathcal{G}_i\leq \int_{\Pi}|\nabla \phi_\Pi^i|^2~dx +C,\quad \mbox{for } \delta\ll 1. \] With the test functions $ \phi_\Pi^i$ and $j_i$ selected by \eqref{E:keller} and \eqref{E:test-flux}, respectively, it is straightforward to show that the difference between the upper and lower bounds is simply \[ \left| \int_{\Pi}|\nabla \phi_\Pi^i|^2~dx+ \int_{\Pi}j_i^2~dx-2\int_{\partial \mathcal{B}_i}(j_i\cdot n)~ds\right|=\int_{\Pi}|\nabla \phi_\Pi^i-j_i|^2~dx. \] This quantity is bounded; hence, the asymptotics of $\mathcal{G}_i$ is given by \eqref{E:en-test-flux}, whose asymptotics, in its turn, is derived in Appendix \ref{A:coeff} (see also \cite{bk,bn,bgn1}) and is given by: \[ \int_{\Pi}\frac{d\bar{x}}{H^2(\bar{x})}=g_\delta[1+o(1)],\quad \mbox{for } \delta\ll 1. \] $\Box$ \subsection{The constant $\mathcal{C}_{12}$ in the definition of $g_\delta$} \label{A:coeff} Here we identify the constant $\mathcal{C}_{12}$ in the asymptotics of $g_\delta$, which, as claimed, depends on the curvatures of the particle boundaries at the point of their closest distance.
In the cylindrical coordinate system introduced above, that is, the one with the axis $Ox_d$ coinciding with the line of the closest distance between $\mathcal{B}_1$ and $\mathcal{B}_2$, and with the origin at the mid-point of this line, the boundaries $\partial\mathcal{B}_1$ and $\partial\mathcal{B}_2$ are approximated by parabolas ($d=2$) and paraboloids ($d=3$): \begin{equation} \label{E:oscul} \begin{array}{r l r l l} \partial\mathcal{B}_1: & \displaystyle y=\frac{\delta}{2}+\frac{x^2}{2\alpha_1}, & \partial\mathcal{B}_2: & \displaystyle y=-\frac{\delta}{2}-\frac{x^2}{2 \alpha_2}, & d=2\\[8pt] \partial\mathcal{B}_1: & \displaystyle z=\frac{\delta}{2}+\frac{x^2}{2a_1}+\frac{y^2}{2b_1}, & \partial\mathcal{B}_2: & \displaystyle z=-\frac{\delta}{2}-\frac{x^2}{2a_2}-\frac{y^2}{2b_2}, & d=3\\[5pt] \end{array} \end{equation} The distance between these paraboloids is \begin{equation} \label{E:dist} h(\bar{x})=\begin{cases} \displaystyle \delta+\frac{x^2}{\alpha}, \quad \alpha:=\frac{2\alpha_1\alpha_2}{\alpha_1+\alpha_2}, & d=2\\[8pt] \displaystyle \delta+\frac{x^2}{a}+\frac{y^2}{b}, \quad a:=\frac{2a_1a_2}{a_1+a_2},~ b:=\frac{2b_1b_2}{b_1+b_2}, & d=3 \end{cases} \end{equation} For a sufficiently small neck width $w\ll 1$, the distance $h(\bar{x})$ given by \eqref{E:dist} is a ``good'' approximation of the actual distance $H(\bar{x})$ between the boundaries $\partial\mathcal{B}_1$ and $\partial\mathcal{B}_2$ in the sense that \[ \int_{\Pi}\frac{d\bar{x}}{H^2(\bar{x})}=\int_{\Pi}\frac{d\bar{x}}{h^2(\bar{x})}[1+o(1)], \] that is, it provides the leading asymptotics of $\mathcal{G}_i$ from Appendix \ref{A:G-lem}. Going back to \eqref{E:dist}, we note that in 2D the parameter $\alpha$ is the harmonic mean of the radii of curvature of the parabolas approximating $\partial\mathcal{B}_1$ and $\partial\mathcal{B}_2$. Similarly, in 3D the quantities $a$ and $b$ are related to the Gaussian curvature $\mathcal{K}_i$ and the mean curvature $\mathcal{H}_i$ of the corresponding paraboloids at the point of their closest distance via: \[ \mathcal{K}_i=\frac{4}{a_ib_i}, \qquad \mathcal{H}_i=\frac{a_i+b_i}{a_ib_i}, \quad i=1,2. \] Finally, directly evaluating the integral $\displaystyle \int_{\Pi}\frac{d\bar{x}}{h^2(\bar{x})}$ yields the main asymptotic term of $\mathcal{G}_i$ for $\delta \ll 1$ and defines $g_\delta$ in \eqref{E:g}: \[ \int_{\Pi}\frac{d\bar{x}}{h^2(\bar{x})}=[1+o(1)] \begin{cases} \pi \alpha \,\delta^{-1/2}, & d=2\\[3pt] \pi a b |\ln \delta |, & d=3 \end{cases} \quad \mbox{for }~ \delta\ll 1, \] where $\alpha$, $a$, $b$ are defined in \eqref{E:dist} in terms of the coefficients of the osculating paraboloids \eqref{E:oscul} at the point of closest distance between the particle surfaces. Thus, \begin{equation} \label{E:C0} \mathcal{C}_{12}= \begin{cases} \pi \alpha , & d=2\\[3pt] \pi a b , & d=3 \end{cases} \end{equation} $\Box$ \end{document}
\begin{document} \setcounter{page}{1} \title[Quasiconformal close-to-convex harmonic mappings] {On quasiconformal close-to-convex harmonic\\ \vskip.05in mappings involving starlike functions} \author[Zhi-Gang Wang, Xin-Zhong Huang, Zhi-Hong Liu and Rahim Kargar]{Zhi-Gang Wang, Xin-Zhong Huang, Zhi-Hong Liu and Rahim Kargar} \vskip.10in \address{\noindent Zhi-Gang Wang\vskip.05in School of Mathematics and Computing Science, Hunan First Normal University, Changsha 410205, Hunan, P. R. China.} \vskip.05in \email{\textcolor[rgb]{0.00,0.00,0.84}{wangmath$@$163.com}} \address{\noindent Xin-Zhong Huang\vskip.05in School of Mathematical Sciences, Huaqiao University, Quanzhou 362021, Fujian, P. R. China.} \email{\textcolor[rgb]{0.00,0.00,0.84}{huangxz$@$hqu.edu.cn}} \address{\noindent Zhi-Hong Liu\vskip.05in College of Science, Guilin University of Technology, Guilin 541004, Guangxi, P. R. China.} \email{\textcolor[rgb]{0.00,0.00,0.84}{liuzhihongmath$@$163.com}} \address{\noindent Rahim Kargar\vskip.05in Department of Mathematics and Statistics, University of Turku, Turku, Finland.} \email{\textcolor[rgb]{0.00,0.00,0.84}{rakarg$@$utu.fi}} \subjclass[2010]{Primary 58E20; Secondary 30C55.} \keywords{Analytic function; univalent function; starlike function; close-to-convex harmonic mapping; quasiconformal harmonic mapping.} \begin{abstract} In the present paper, we discuss several basic properties of a class of quasiconformal close-to-convex harmonic mappings with starlike analytic part, such results as coefficient inequalities, an integral representation, a growth theorem, an area theorem, and radii of close-to-convexity of partial sums of the class, are derived. \end{abstract} \vskip.20in \maketitle \tableofcontents \section{Introduction} A planar harmonic mapping $f$ in the open unit disk $\mbox{$\mathbb{D}$}$ can be represented as $f=h+\overline{g}$, where $h$ and $g$ are analytic functions in $\mbox{$\mathbb{D}$}$. We call $h$ and $g$ the analytic part and co-analytic part of $f$, respectively. Since the Jacobian of $f$ is given by $\abs{h'}^2-\abs{g'}^2$, by Lewy's theorem (see \cite{Lewy}), it is locally univalent and sense-preserving if and only if $|g^{\prime}|<|h^{\prime}|$, or equivalently, if $h^{\prime}(z)\neq 0$ and the dilatation $\omega={g^{\prime}}/{h^{\prime}}$ has the property $|\omega|<1$ in $\mbox{$\mathbb{D}$}$. Let $\mathcal{H}$ denote the class of harmonic functions $f=h+\overline{g}$ normalized by the conditions $f(0)=f_{z}(0)-1=0$, which have the form \begin{equation}\label{111}f (z)=z+\sum_{k=2}^{\infty}a_kz^k+\overline{\sum_{k=1}^{\infty}b_kz^k} \quad (z\in\mbox{$\mathbb{D}$}).\end{equation} Denote by $\mathcal{S_{\mathcal{H}}}$ the class of harmonic functions $f\in\mathcal{H}$ that are univalent and sense-preserving in $\mbox{$\mathbb{D}$}$. Also denote by $\mathcal{S}^{0}_{\mathcal{H}}$ the subclass of $\mathcal{S_{\mathcal{H}}}$ with the additional condition $f_{\overline{z}}(0)=0$. We observe that Clunie and Sheil-Small \cite{cs} have proved several fundamental characteristics for the class $\mathcal{S_{\mathcal{H}}}$, but other basic problems such as Riemann mapping theorem for planar harmonic mappings, harmonic analogue of Bieberbach conjecture, sharp coefficient inequalities and radius of covering theorem for the class $\mathcal{S}^{0}_{\mathcal{H}}$ are still \textit{open} (see \cite{d}). The classical family $\mathcal{S}$ of analytic univalent and normalized functions in $\mbox{$\mathbb{D}$}$ is a subclass of $\mathcal{S}_\mathcal{H}^0$ with $g(z)\equiv 0$. 
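For instance (a basic illustration of the preceding local-univalence criterion, not tied to the particular classes studied in this paper), the affine harmonic mapping
$$f(z)=z+\frac{1}{2}\,\overline{z}\quad (z\in\mbox{$\mathbb{D}$})$$
has analytic part $h(z)=z$ and co-analytic part $g(z)=z/2$, so that its dilatation is the constant $\omega\equiv 1/2$ and its Jacobian equals $\abs{h'}^2-\abs{g'}^2=3/4>0$ throughout $\mbox{$\mathbb{D}$}$; hence $f$ is sense-preserving and locally univalent in $\mbox{$\mathbb{D}$}$.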
If a univalent harmonic mapping $f=h+\overline{g}$ satisfies the condition \begin{equation*}\label{13} \abs{\omega(z)}=\abs{\frac{g^{\prime}(z)}{h^{\prime}(z)}}\leq k\quad (0\leq k<1;\, z\in\mbox{$\mathbb{D}$}), \end{equation*} then $f$ is said to be a $K$-quasiconformal harmonic mapping, where $$K=\frac{1+k}{1-k}\quad (0\leq k<1).$$ A domain $\Omega$ is said to be close-to-convex if $\mbox{$\mathbb{C}$}\backslash\Omega$ can be represented as a union of non-crossing half-lines. Following the result due to Kaplan (see \cite{k}), an analytic function $f$ is called close-to-convex if there exists a univalent convex function $\phi$ defined in $\mbox{$\mathbb{D}$}$ such that $${\rm Re}\left(\frac{f'(z)}{\phi'(z)}\right)>0\quad (z\in\mbox{$\mathbb{D}$}).$$ Furthermore, a planar harmonic mapping $f:\mbox{$\mathbb{D}$}\rightarrow\mbox{$\mathbb{C}$}$ is close-to-convex if it is injective and $f(\mbox{$\mathbb{D}$})$ is a close-to-convex domain. We denote by $\mathcal{C}_\mathcal{H}^0$ the class of close-to-convex harmonic mappings. The theory and applications of planar harmonic mappings are presented in the recent monograph by Duren \cite{d}. Furthermore, Bshouty {\it et al.} \cite{bjj,bl,bls}, Chen \textit{et al.} \cite{cprw}, Chuaqui and Hern\'{a}ndez \cite{ch}, Kalaj \cite{k1}, Mocanu \cite{m4,m5}, Nagpal and Ravichandran \cite{nr,nr1}, Partyka {\it et al.} \cite{psz}, Ponnusamy and Sairam Kaliraj \cite{pk,ps2,ps1}, Sun {\it et al.} \cite{sjr,srj}, Wang {\it et al.} \cite{wll,wsj} derived several criteria for univalency, or quasiconformality, involving planar harmonic mappings. Let $\mathcal{A}$ denote the class of functions $h$ of the form $$h(z)=z+\sum_{k=2}^{\infty}a_kz^k,$$ which are analytic in $\mbox{$\mathbb{D}$}$. Also let $\mathcal{G}(\alpha)$ be the subclass of $\mathcal{A}$ whose members satisfy the inequality \begin{equation}\label{101}{\rm Re}\left(1+\frac{zh''(z)}{h'(z)}\right)<\alpha\quad\left(\alpha>1;\, z\in\mbox{$\mathbb{D}$}\right).\end{equation} For convenience, we write $\mathcal{G}(3/2)=:\mathcal{G}$. The class $\mathcal{G}$ plays an important role in analytic function theory. We observe that the function class $\mathcal{G}(\alpha)$ was studied extensively by Kargar \textit{et al.} \cite{kpe}, Kanas \textit{et al.} \cite{kmp}, Maharana \textit{et al.} \cite{mps}, Obradovi\'{c} \textit{et al.} \cite{opw}, Ponnusamy and Sahoo \cite{ps} and Ponnusamy \textit{et al.} \cite{psw} for different purposes. It is known that the functions in $\mathcal{G}(\alpha)$ are starlike in $\mbox{$\mathbb{D}$}$ for $\alpha\in(1,3/2]$ (see Ponnusamy and Rajasekaran \cite{pr}, Singh and Singh \cite{ss}), whereas they need not be univalent in $\mbox{$\mathbb{D}$}$ for $\alpha\in(3/2,+\infty)$ (see \cite{opw}). Recently, Mocanu \cite{m5} posed the following conjecture. \begin{con}\label{c111} Let $$\mathcal{M}=\left\{f=h+\overline{g}\in \mathcal{H}:\ g'=zh'\ {\it and}\ {\rm Re}\left(1+\frac{zh''(z)}{h'(z)}\right)>-\frac{1}{2}\quad(z\in\mbox{$\mathbb{D}$})\right\}.$$ Then $\mathcal{M}\subset\mathcal{S}_\mathcal{H}^0$. \end{con} By making use of the classical results of close-to-convexity (see Kaplan \cite{k}) and harmonic close-to-convexity (see Clunie and Sheil-Small \cite{cs}), Bshouty and Lyzzaik \cite{bl} have proved Conjecture \ref{c111} by establishing the following stronger result. \vskip.10in \noindent{\bf Theorem A.}\ $\mathcal{M}\subset\mathcal{C}_\mathcal{H}^0$.
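For instance (a simple illustration of Theorem A, not taken from \cite{bl}), the choice $h(z)=z$ gives ${\rm Re}\left(1+\frac{zh''(z)}{h'(z)}\right)=1>-\frac{1}{2}$ in $\mbox{$\mathbb{D}$}$, while the condition $g'=zh'$ yields $g(z)=z^2/2$; hence the harmonic mapping
$$f(z)=z+\overline{\frac{z^{2}}{2}}$$
belongs to $\mathcal{M}$ and, by Theorem A, is a univalent close-to-convex harmonic mapping of $\mbox{$\mathbb{D}$}$.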
\vskip.10in For more recent general results on the convexity, starlikeness and close-to-convexity of harmonic mappings, we refer the readers to \cite{mp,bjj,gv1,gv,kpv,koh1,koh2,lh,m5,ps1,wlrs}. Recall the following criterion for harmonic close-to-convexity due to Abu Muhanna and Ponnusamy \cite[Corollary 3]{mp}. \vskip.10in \noindent{\bf Theorem B.}\ \textit{Let $h$ and $g$ be normalized analytic functions in $\mbox{$\mathbb{D}$}$ such that $${\rm Re}\left(1+\frac{zh''(z)}{h'(z)}\right)<\frac{3}{2},$$ and $$ g'(z)=\lambda z^nh'(z) \quad \left(0<\abs{\lambda}\leq \frac{1}{n+1};\, n\in\mbox{$\mathbb{N}$}:=\{1,2,3,\ldots\}\right).$$ Then the harmonic mapping $f=h+\overline{g}$ is univalent and close-to-convex in $\mbox{$\mathbb{D}$}$.} \vskip.10in Motivated essentially by Theorem B and the definition of quasiconformal harmonic mappings, we introduce and investigate the following subclass $\mathcal{F}(\alpha,\lambda,n)$ of quasiconformal close-to-convex harmonic mappings. \begin{dfn} {\rm A harmonic mapping $f=h+\overline{g}\in\mathcal{H}$ is said to be in the class $\mathcal{F}(\alpha,\lambda,n)$ if $h$ and $g$ satisfy the conditions \begin{equation}\label{113}{\rm Re}\left(1+\frac{zh''(z)}{h'(z)}\right)<\alpha \quad\left(1<\alpha\leq\frac{3}{2}\right),\end{equation} and \begin{equation}\label{114} g'(z)=\lambda z^nh'(z) \quad \left(\lambda\in\mbox{$\mathbb{C}$}\ {\rm with}\ \abs{\lambda}\leq \frac{1}{n+1};\, n\in\mbox{$\mathbb{N}$}\right).\end{equation}} \end{dfn} For simplicity, we denote the class $\mathcal{F}(\alpha,\lambda,1)$ by $\mathcal{F}(\alpha,\lambda)$. The image of $\mbox{$\mathbb{D}$}$ under the mapping $$f(z)=z-\frac{1}{2}z^2+\overline{\frac{1}{4}z^2-\frac{1}{6}z^3}\in\mathcal{F}\left({3}/{2}, {1}/{2}\right)$$ is presented in Figure \ref{fig11}. \begin{figure} \caption{The image of $\mbox{$\mathbb{D}$}$ under the mapping $f(z)=z-\frac{1}{2}z^2+\overline{\frac{1}{4}z^2-\frac{1}{6}z^3}$.} \label{fig11} \end{figure} This paper is organized as follows. In Section 2, we provide a counterexample to illustrate the non-univalency of the class $\mathcal{G}(\alpha)$ for $\alpha\in(3/2,2)$. In Section 3, we prove several basic properties of the class $\mathcal{F}(\alpha,\lambda,n)$ of quasiconformal close-to-convex harmonic mappings with starlike analytic part; such results as coefficient inequalities, an integral representation, a growth theorem, an area theorem, and radii of close-to-convexity of partial sums of the class are derived. \vskip.20in \section{Non-univalency of the class $\mathcal{G}(\alpha)$ for $\alpha\in(3/2,+\infty)$} Obradovi\'{c} \textit{et al.} \cite{opw} stated that the class $\mathcal{G}(\alpha)$ is not univalent in $\mbox{$\mathbb{D}$}$ for $\alpha\in(3/2,+\infty)$, but they did not give a detailed proof of the non-univalency. We note that Kargar \textit{et al.} \cite{kpe} gave a counterexample to prove that the class $\mathcal{G}(\alpha)$ is not univalent in $\mbox{$\mathbb{D}$}$ for $\alpha\in[2,+\infty)$. In this section, we shall give a counterexample to illustrate the non-univalency of the class $\mathcal{G}(\alpha)$ for $\alpha\in(3/2,2)$. \begin{theorem}\label{t001} $\mathcal{G}(\alpha)\not\subset \mathcal{S}$ for $\alpha\in(3/2,+\infty)$.
\end{theorem} \begin{proof} We consider the analytic function $h_{\beta}\in \mathcal{A}$ given by $$h_{\beta}(z)=\frac{1}{\beta}\left[1-(1-z)^{\beta}\right]\quad\left(2<\beta<3;\ z\in\mbox{$\mathbb{D}$}\right).$$ It follows that $$1+\frac{zh_{\beta}''(z)}{h_{\beta}'(z)}=\frac{1-\beta z}{1-z},$$ and therefore, $${\rm Re}\left(1+\frac{zh_{\beta}''(z)}{h_{\beta}'(z)}\right)<\frac{1+\beta}{2}\quad\left(\frac{3}{2}<\frac{1+\beta}{2}<2\right),$$ which implies that $$h_{\beta}\in\mathcal{G}\left(\frac{1+\beta}{2}\right)=\mathcal{G}(\alpha)\quad\left(\alpha=\frac{1+\beta}{2}\in\left(\frac{3}{2},2\right)\right).$$ In what follows, we shall prove that the function $h_{\beta}$ is not univalent in $\mbox{$\mathbb{D}$}$. It is easy to verify that $h_{\beta}$ has real coefficients, and thus $h_{\beta}(z)=\overline{h_{\beta}(\overline{z})}$ for all $z\in\mbox{$\mathbb{D}$}$. In particular, we see that $${\rm Re}\left(h_{\beta}\left(r e^{i \theta}\right)\right)={\rm Re}\left(h_{\beta}\left(r e^{-i \theta}\right)\right) $$ for all $r\in (0,1)$ and $\theta\in (-\pi,0)\cup(0,\pi)$. It is sufficient to show that there exist $r_{0}\in (0,1)$ and $\theta_{0}\in (-\pi,0)\cup(0,\pi)$ such that $${\rm Im}\left(h_{\beta}\left(r_{0} e^{i \theta_{0}}\right)\right)={\rm Im}\left(h_{\beta}\left(r_{0} e^{-i \theta_{0}}\right)\right)=0. $$ In view of \begin{equation*} {\rm Im}\left(h_{\beta}(z)\right) ={\rm Im}\left(\frac{1-(1-z)^{\beta}}{\beta}\right) =-{\rm Im}\left(\frac{e^{\beta\log(1-z)}}{\beta}\right), \end{equation*} we see that \begin{equation*} \begin{split} {\rm Im}\left(h_{\beta}\left(r e^{i\theta}\right)\right)&=-{\rm Im}\left(\frac{e^{\beta\log\left(1-r e^{i\theta}\right)}}{\beta}\right)\\ &=-\frac{e^{\beta\log|1-r e^{i\theta}|}}{\beta}\sin \left[\beta\arg\left(1-r e^{i\theta}\right)\right], \end{split} \end{equation*} and \begin{equation*} \begin{split} -{\rm Im}\left(h_{\beta}\left(r e^{-i\theta}\right)\right)=\frac{e^{\beta\log|1-r e^{-i\theta}|}}{\beta}\sin \left[\beta\arg\left(1-r e^{-i\theta}\right)\right]={\rm Im}\left(h_{\beta}\left(r e^{i\theta}\right)\right). \end{split} \end{equation*} By noting that $$\arg\left(1-r e^{i\theta}\right)\in\left(-\frac{\pi}{2},0\right)\cup \left(0,\frac{\pi}{2}\right), $$ we deduce that for each $\beta\in(2, 3)$, there exist $r_{0}\in (0,1)$ and $\theta_{0}\in (-\pi,0)\cup(0,\pi)$ such that $$\sin\left[\beta\arg\left(1-r_{0} e^{i\theta_{0}}\right)\right]=0. $$ It follows that $${\rm Im}\left(h_{\beta}\left(r_{0} e^{i \theta_{0}}\right)\right)={\rm Im}\left(h_{\beta}\left(r_{0} e^{-i \theta_{0}}\right)\right)=0.$$ Therefore, we see that there exist two distinct points $z_1 = r_{0} e^{i \theta_{0}}$ and $z_2 = r_{0} e^{-i \theta_{0}}$ in $\mbox{$\mathbb{D}$}$ such that $h_{\beta}(z_1) = h_{\beta}(z_2)$, which shows that the function $h_{\beta}(z)$ is not univalent in $\mbox{$\mathbb{D}$}$. Thus, we deduce that the class $\mathcal{G}(\alpha)$ always contains a non-univalent function for each $\alpha\in(3/2,2)$. Moreover, by noting that the class $\mathcal{G}(\alpha)$ is not univalent in $\mbox{$\mathbb{D}$}$ for $\alpha\in[2,+\infty)$ (see \cite[Example 2.1]{kpe}), we deduce that the assertion of Theorem \ref{t001} holds. \end{proof} To illustrate our counterexample, we present the image domain of $\mbox{$\mathbb{D}$}$ under the function $h_{5/2}(z)={2}/{5}\left[1-(1-z)^{5/2}\right]$ (see Figure \ref{fig1}).
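We remark (as a concrete way of locating such points for $\beta=5/2$, not needed for the proof above) that, for $\theta\in(0,\pi)$, the point $1-re^{i\theta}$ lies on the circle of radius $r$ centered at $1$, so its argument ranges over $[-\arcsin r,0)$. Hence, for any $r_0\in(\sin(2\pi/5),1)$ there exists $\theta_0\in(0,\pi)$ with
$$\arg\left(1-r_0e^{i\theta_0}\right)=-\frac{2\pi}{5},\qquad\text{so that}\qquad \sin\left[\tfrac{5}{2}\arg\left(1-r_0e^{i\theta_0}\right)\right]=\sin(-\pi)=0,$$
and the two distinct points $r_0e^{\pm i\theta_0}$ are identified by $h_{5/2}$.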
\begin{figure} \caption{The image of $\mbox{$\mathbb{D}$}$ under the function $h_{5/2}(z)=\frac{2}{5}\left[1-(1-z)^{5/2}\right]$.} \label{fig1} \end{figure} \vskip.20in \section{Properties and characteristics of the class $\mathcal{F}(\alpha,\lambda,n)$} Let us recall the following lemma, due to Obradovi\'{c} \textit{et al.} \cite{opw}, in a slightly modified form, which will be required in the proof of Theorem \ref{t1}. \begin{lemma}\label{lem1} If $h(z)=z+\sum_{k=2}^{\infty}a_kz^k$ satisfies the condition \eqref{101} with $1<\alpha\leq 3/2$, then \begin{equation}\label{41}\abs{a_k}\leq\frac{2(\alpha-1)}{(k-1)k}\quad(k\geq 2),\end{equation} with the extremal function given by $$h(z)=\int_0^z\left(1-t^{k-1}\right)^{\frac{2(\alpha-1)}{k-1}}dt\quad(k\geq 2).$$ \end{lemma} \begin{theorem}\label{t1} Let $f=h+\overline{g}\in\mathcal{F}(\alpha,\lambda,n)$ be of the form \eqref{111}. Then the coefficients $a_k\ (k\geq 2)$ of $h$ satisfy \eqref{41}; furthermore, the coefficients $b_k\ (k=n+1, n+2, \ldots;\, n\in\mbox{$\mathbb{N}$})$ of $g$ satisfy \begin{equation}\label{42}\abs{b_{n+1}}\leq\frac{\abs{\lambda}}{n+1}\quad(n\in\mbox{$\mathbb{N}$})\ \ {\it and} \ \ \abs{b_{k+n}}\leq\frac{2\abs{\lambda}(\alpha-1)}{(k-1)(k+n)}\quad\left(k\in\mbox{$\mathbb{N}$}\setminus\{1\};\, n\in\mbox{$\mathbb{N}$}\right).\end{equation} The bounds are sharp for the extremal function given by $$f(z)=\int_0^z\left(1-t^{k-1}\right)^{\frac{2(\alpha-1)}{k-1}}dt+\overline{\int_0^z\lambda t^n\left(1-t^{k-1}\right)^{\frac{2(\alpha-1)}{k-1}}dt}\quad\left(n\in\mbox{$\mathbb{N}$}\right).$$ \end{theorem} \begin{proof} Comparing the coefficients of $z^{k+n-1}$ on both sides of \eqref{114}, we obtain \begin{equation}\label{6}(k+n)b_{k+n}=\lambda k a_k\quad (k,\,n\in\mbox{$\mathbb{N}$};\, a_1=1).\end{equation} Taking $k=1$ in \eqref{6} gives $\abs{b_{n+1}}=\abs{\lambda}/(n+1)$, while combining \eqref{6} for $k\geq 2$ with Lemma \ref{lem1} yields $$\abs{b_{k+n}}\leq\frac{k\abs{\lambda}}{k+n}\cdot\frac{2(\alpha-1)}{(k-1)k}=\frac{2\abs{\lambda}(\alpha-1)}{(k-1)(k+n)},$$ which gives the desired coefficient inequalities \eqref{42} of Theorem \ref{t1}. \end{proof} The Fekete-Szeg\"{o} functional $\abs{a_{3}-\delta a_{2}^{2}}$ for the class $\mathcal{G}(\alpha)$ with $\alpha\in(1,3/2]$ was estimated by Obradovi\'{c} \textit{et al.} \cite{opw}; this estimate will be useful in the proof of the upper bound for $\abs{b_{3}-\delta b_{2}^{2}}$ for functions in the class $\mathcal{F}(\alpha,\lambda)$. We present it here in a slightly modified form. \begin{lemma}\label{lem2} Let $f\in\mathcal{G}(\alpha)$ with $\alpha\in(1,3/2]$. Then \begin{equation}\label{31} \abs{a_{3}-\delta a_{2}^{2}}\leq\left\{\begin{array}{cc} \frac{\alpha-1}{3}\abs{3+\delta-(2+\delta)\alpha} &\left(\abs{\delta-\frac{3-2\alpha}{3(\alpha-1)}}\geq\frac{1}{3(\alpha-1)}\right), \\\\ \frac{\alpha-1}{3}\ \ &\left(\abs{\delta-\frac{3-2\alpha}{3(\alpha-1)}}<\frac{1}{3(\alpha-1)}\right).\end{array}\right. \end{equation} Equality in the Fekete-Szeg\"{o} functional is attained in each case. \end{lemma} \begin{theorem}\label{t2} Let $f\in\mathcal{F}(\alpha,\lambda)$ be of the form \eqref{111}. Then \begin{equation}\label{32} \big|b_{3}-\delta b_{2}^{2}\big|\leq \frac{2(\alpha-1)|\lambda|}{3}+\frac{|\delta||\lambda|^{2}}{4}. \end{equation} The inequality is sharp. \end{theorem} \begin{proof} By noting that $g'(z)=\lambda zh'(z)$ for $f\in\mathcal{F}(\alpha,\lambda)$, we have \begin{equation*} \sum_{k=2}^{\infty}kb_{k}z^{k-1}=\lambda\sum_{k=1}^{\infty}ka_{k}z^{k}\quad(a_1=1).
\end{equation*} Clearly, we see that \begin{equation}\label{302}b_{2}=\frac{1}{2}\lambda a_{1}=\frac{1}{2}\lambda\ \ \textrm{and}\ \ b_{3}=\frac{2}{3}\lambda a_{2}.\end{equation} Therefore, by virtue of \eqref{31} and \eqref{302}, we obtain \begin{equation*} \big|b_{3}-\delta b_{2}^{2}\big| =\left|\frac{2}{3}\lambda a_{2}-\frac{1}{4}\delta\lambda^{2}\right| \leq \frac{2|\lambda||a_{2}|}{3}+\frac{|\delta||\lambda|^{2}}{4} \leq \frac{2(\alpha-1)|\lambda|}{3}+\frac{|\delta||\lambda|^{2}}{4}. \end{equation*}The proof of Theorem \ref{t2} is thus completed. \end{proof} By setting $\delta=1$ in Lemma \ref{lem2} and Theorem \ref{t2}, respectively, we get the Zalcman type coefficient inequalities of the class $\mathcal{F}(\alpha,\lambda)$ for the case $k=2$. For recent developments on this topic, see Li and Ponnusamy \cite{lp} and the references therein. \begin{cor} Let $f\in\mathcal{F}(\alpha,\lambda)$ be of the form \eqref{111}. Then $$ \abs{a_{3}-a_{2}^{2}}\leq\left\{\begin{array}{cc} \frac{\alpha-1}{3}\abs{4-3\alpha} &\left(\abs{1-\frac{3-2\alpha}{3(\alpha-1)}}\geq\frac{1}{3(\alpha-1)}\right), \\\\ \frac{\alpha-1}{3}\ \ &\left(\abs{1-\frac{3-2\alpha}{3(\alpha-1)}}<\frac{1}{3(\alpha-1)}\right),\end{array}\right.$$ and $$\big|b_{3}-b_{2}^{2}\big|\leq \frac{2(\alpha-1)|\lambda|}{3}+\frac{|\lambda|^{2}}{4}\leq\frac{11}{48}. $$ The inequalities are sharp. \end{cor} Now, we give an integral representation of the mappings $f\in\mathcal{F}(\alpha,\lambda,n)$. \begin{theorem}\label{t3} Let $f\in\mathcal{F}(\alpha,\lambda,n)$. Then \begin{equation*}\begin{split}f(z)=&\int_0^z\exp\left(2(1-\alpha)\int_0^{\zeta} \frac{\varpi(t)}{t(1-\varpi(t))}dt\right)d\zeta\\&\quad\quad+\overline{\lambda\int_0^z{{\zeta}^{n}}\cdot\exp\left(2(1-\alpha)\int_0^{\zeta} \frac{\varpi(t)}{t(1-\varpi(t))}dt\right)d\zeta},\end{split} \end{equation*}where $\varpi$ is a Schwarz function with $\varpi(0)=0$ and $\abs{\varpi(z)}<1\ (z\in\mbox{$\mathbb{D}$})$. \end{theorem} \begin{proof} Suppose that $f\in\mathcal{F}(\alpha,\lambda,n)$. It follows from \eqref{113} that \begin{equation}\label{37}1+\frac{zh''(z)}{h'(z)}\prec\frac{1-(2\alpha-1)z}{1-z}\quad(z\in\mbox{$\mathbb{D}$}), \end{equation} where $``\prec"$ denotes the familiar subordination of analytic functions. By virtue of \eqref{37}, we see that \begin{equation}\label{38}1+\frac{zh''(z)}{h'(z)}=\frac{1-(2\alpha-1)\varpi(z)}{1-\varpi(z)}\quad(z\in\mbox{$\mathbb{D}$}), \end{equation} where $\varpi$ is a Schwarz function with $\varpi(0)=0$ and $\abs{\varpi(z)}<1\ (z\in\mbox{$\mathbb{D}$})$. From \eqref{38}, we have $$\frac{(zh'(z))'}{zh'(z)}-\frac{1}{z}=\frac{2(1-\alpha)\varpi(z)}{z(1-\varpi(z))},$$ which, upon integration, yields \begin{equation}\label{39}\log(h'(z))=2(1-\alpha)\int_0^z\frac{\varpi(t)}{t(1-\varpi(t))}dt.\end{equation} We thus find from \eqref{39} that \begin{equation}\label{310}h(z)=\int_0^z\exp\left(2(1-\alpha)\int_0^{\zeta} \frac{\varpi(t)}{t(1-\varpi(t))}dt\right)d\zeta.\end{equation} Combining \eqref{114} with \eqref{310}, we obtain \begin{equation}\label{311}g(z)=\lambda\int_0^z{{\zeta}^{n}}\cdot\exp\left(2(1-\alpha)\int_0^{\zeta} \frac{\varpi(t)}{t(1-\varpi(t))}dt\right)d\zeta.\end{equation} Thus, the assertion of Theorem \ref{t3} follows from \eqref{310} and \eqref{311}.
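For example, the simplest admissible choice $\varpi(z)\equiv z$ in \eqref{310} and \eqref{311} gives $h'(z)=(1-z)^{2(\alpha-1)}$, so that $$h(z)=\frac{1-(1-z)^{2\alpha-1}}{2\alpha-1}\quad{\rm and}\quad g(z)=\lambda\int_0^z{\zeta}^{n}(1-\zeta)^{2(\alpha-1)}d\zeta;$$ for $\alpha=3/2$, $\lambda=1/2$ and $n=1$ this is precisely the mapping $f(z)=z-\frac{1}{2}z^2+\overline{\frac{1}{4}z^2-\frac{1}{6}z^3}\in\mathcal{F}\left({3}/{2},{1}/{2}\right)$ displayed in Figure \ref{fig11}.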
\end{proof} \begin{rem} {\rm Theorem \ref{t3} provides a direct integration method for constructing quasiconformal close-to-convex harmonic mappings by choosing suitable Schwarz functions $\varpi$.} \end{rem} The following lemma, due to Maharana \textit{et al.} \cite{mps}, will play a crucial role in the proof of our last three results. \begin{lemma}\label{lem3} If $h\in\mathcal{G}$, then for $\abs z=r<1$, the following statements are true. \begin{enumerate} \item $$\abs{\frac{zh''(z)}{h'(z)}}\leq\frac{r}{1-r}.$$ The inequality is sharp and equality is attained for the function \begin{equation}\label{4001}h(z)=z-\frac{z^2}{2}.\end{equation} \item \begin{equation}\label{4002}1-r\leq\abs{h'(z)}\leq 1+r.\end{equation} The inequalities are sharp and equalities are attained for the function given by \eqref{4001}. \item If $h(z)=\mathcal{S}_n(z)+\Sigma_n(z)$, with $\Sigma_n(z)=\sum_{k=n+1}^{\infty}a_kz^k$, then $$\abs{\Sigma'_n(z)}\leq r^n\phi(r,1,n)\ {\it and}\ \abs{z\Sigma''_n(z)}\leq\frac{r^n}{1-r},$$ where $\phi(r,1,n)$ is the Hurwitz--Lerch zeta function (Lerch transcendent) defined by the series $$\phi(z,s,a)=\sum_{k=0}^{\infty}\frac{z^k}{(k+a)^s}\quad(\abs z<1;\, {\rm Re}(s)>1;\, a\neq 0,-1,-2,\ldots).$$ \end{enumerate} \end{lemma} We now give the growth theorem for the class $\mathcal{F}(\alpha,\lambda,n)$. \begin{theorem}\label{t401} Let $f\in\mathcal{F}(\alpha,\lambda,n)$. Then \begin{equation}\label{411} \begin{split} &r \left[|\lambda| \left(\frac{r}{n+2}-\frac{1}{n+1}\right) r^n-\frac{r}{2}+1\right] \leq \abs{f(z)}\\ &\qquad\qquad\qquad\leq r \left[|\lambda| \left(\frac{r}{n+2}+\frac{1}{n+1}\right) r^n+\frac{r}{2}+1\right]. \end{split} \end{equation} The inequalities are sharp. \end{theorem} \begin{proof} Assume that $f=h+\overline{g}\in\mathcal{F}(\alpha,\lambda,n)$. By observing that $h\in\mathcal{G}$, we know that \eqref{4002} holds. Also, let $\Gamma$ be the line segment joining $0$ and $z$. Then \begin{equation}\label{413} \begin{split} \abs{f(z)}&=\abs{\int_{\Gamma}\frac{\partial f}{\partial \xi}d\xi +\frac{\partial f}{\partial \overline{\xi}}d\overline{\xi}}\\ &\leq \int_{\Gamma}\left(\abs{h'(\xi)}+\abs{g'(\xi)}\right)\abs{d\xi}\\ &= \int_{\Gamma}\left(1+\abs{\lambda}\abs{\xi}^n\right)\abs{h'(\xi)}\abs{d\xi}\\ &\leq \int_{0}^{r}{(1+\xi)(1+|\lambda|\xi^n)}d\xi\\ &=\frac{1}{2} r \left[2 |\lambda| \left(\frac{r}{n+2}+\frac{1}{n+1}\right) r^n+r+2\right]. \end{split} \end{equation} Moreover, let $\widetilde{\Gamma}$ be the preimage under $f$ of the line segment joining $0$ and $f(z)$. Then we obtain \begin{equation}\label{414} \begin{split} \abs{f(z)}&=\int_{\widetilde{\Gamma}}\abs{\frac{\partial f}{\partial \xi}d\xi +\frac{\partial f}{\partial \overline{\xi}}d\overline{\xi}}\\ &\geq \int_{\widetilde{\Gamma}}\left(\abs{h'(\xi)}-\abs{g'(\xi)}\right)\abs{d\xi}\\ &= \int_{\widetilde{\Gamma}}\left(1-|\lambda||\xi|^n\right)\abs{h'(\xi)}\abs{d\xi}\\ &\geq \int_{0}^{r}{(1-\xi)(1-|\lambda|\xi^n)}d\xi\\ &=\frac{1}{2} r \left[2 |\lambda| \left(\frac{r}{n+2}-\frac{1}{n+1}\right) r^n-r+2\right]. \end{split} \end{equation} It follows from \eqref{413} and \eqref{414} that the assertion \eqref{411} of Theorem \ref{t401} holds. \end{proof} Denote by $\mathcal{A}\left(f(\mbox{$\mathbb{D}$}_{r})\right)$ the area of $f(\mbox{$\mathbb{D}$}_{r})$, where $\mbox{$\mathbb{D}$}_{r}:=r\mbox{$\mathbb{D}$}$ for $0<r<1$. We now establish an area theorem for mappings $f$ belonging to the class $\mathcal{F}(\alpha,\lambda,n)$. \begin{theorem}\label{t43} Let $f\in\mathcal{F}(\alpha,\lambda,n)$.
Then for $0<r<1$, we have \begin{equation}\label{421} 2\pi\int_{0}^{r}{\left(1-|\lambda|^{2}\xi^{2n}\right)(1-\xi)^2\xi}d\xi \leq \mathcal{A}\left(f(\mbox{$\mathbb{D}$}_{r})\right)\leq 2\pi\int_{0}^{r}{\left(1-|\lambda|^{2}\xi^{2n}\right)(1+\xi)^2\xi}d\xi. \end{equation} \end{theorem} \begin{proof} Suppose that $f=h+\overline{g}\in\mathcal{F}(\alpha,\lambda,n)$. Then for $0<r<1$, we get \begin{equation}\label{4012} \mathcal{A}\left(f(\mbox{$\mathbb{D}$}_{r})\right)=\iint_{\mbox{$\mathbb{D}$}_{r}}\left(|h'(z)|^{2}-|g'(z)|^{2}\right)\,dx\,dy =\iint_{\mbox{$\mathbb{D}$}_{r}}\left(1-|\lambda|^{2}|z|^{2n}\right)|h'(z)|^{2}\,dx\,dy. \end{equation} In view of \eqref{4002} and \eqref{4012}, we obtain the result of Theorem \ref{t43}. \end{proof} Finally, we shall discuss a radius problem for mappings $f\in\mathcal{F}(\alpha,\lambda)$: we consider the largest value of $r$ such that the partial sums of $f\in\mathcal{F}(\alpha,\lambda)$ are close-to-convex in $|z|<r$. For recent results on partial sums of univalent harmonic mappings, see, e.g., Chen \textit{et al.} \cite{crw}, Ghosh and Vasudevarao \cite{gv}, Li and Ponnusamy \cite{lp1,lp2,lp3}, Ponnusamy \textit{et al.} \cite{pss2}, and Sun \textit{et al.} \cite{sjr}. \begin{theorem}\label{t5} Let $f\in\mathcal{F}(\alpha,\lambda)$ be of the form \eqref{111}. Then for each $m\geq 1,\ l\geq 2$, $$\mathcal{S}_{m,l}(f)(z)=\sum_{k=1}^{m}a_{k}z^{k}+\overline{\sum_{k=2}^{l} b_{k}z^{k}}\quad(a_1=1)$$ is close-to-convex in $|z|<r_{c}\approx 0.503$, where $r_{c}$ is the least positive real root in the interval $(0, 1)$ of the equation: \begin{equation}\label{50} 2+2\ln(1-r)+r\ln(1-r)-r+r^2=0. \end{equation} The bound $r_{c}$ is sharp. \end{theorem} \begin{proof} Let $f=h+\overline{g}\in\mathcal{F}(\alpha,\lambda)$ and $\phi=h+\varepsilon\overline{g}$ with $|\varepsilon|=1$. We observe that ${\rm Re}\left(\varphi'(z)\right)>0$ for $\varphi\in\mathcal{A}$ implies that $\varphi$ is a close-to-convex analytic function. Therefore, it is sufficient to show that each partial sum \begin{equation*} \mathcal{S}_{m,l}(\phi)(z)=\sum_{k=1}^{m}a_{k}z^{k}+\varepsilon\overline{\sum_{k=2}^{l} b_{k}z^{k}} \end{equation*} satisfies the condition \begin{equation*} {\rm Re}\left(\Gamma'_{m,l}(\phi)(z)\right)>0 \end{equation*} in the disk $|z|<r_{c}$ for all $|\varepsilon|=1$ and $m\geq 1,\ l\geq 2$, where \begin{equation*} \Gamma_{m,l}(\phi)(z)=\sum_{k=1}^{m}a_{k}z^{k}+\varepsilon\sum_{k=2}^{l} b_{k}z^{k}. \end{equation*} In order to determine the radius of close-to-convexity of the partial sums $\mathcal{S}_{m,l}(f)(z)$, we split the proof into four cases. \begin{enumerate} \item For $m=1,2$, $l=2$, we have $$ \Gamma_{1,2}(\phi)(z)=z+\varepsilon b_{2}z^{2},$$ and $$\Gamma_{2,2}(\phi)(z)=z+a_{2}z^{2}+\varepsilon b_{2}z^{2}. $$ It follows that $$ \Gamma'_{1,2}(\phi)(z)=1+\varepsilon\lambda z, $$ and $$ \Gamma'_{2,2}(\phi)(z)=1+2a_{2}z+\varepsilon\lambda z. $$ Clearly, ${\rm Re}\big(\Gamma'_{1,2}(\phi)(z)\big)>0$ in $|z|<r_{1}={2}/{3}$.
By Lemma \ref{lem1}, we know that $|a_{2}|\leq \alpha-1$, and thus, \begin{equation*}\begin{split} {\rm Re}\left(\Gamma'_{2,2}(\phi)(z)\right)&\geq 1-2|a_{2}||z|-|\lambda||z| \\&\geq 1-[2(\alpha-1)+|\lambda|]|z|\\&\geq 1-\frac{3}{2}|z|>0 \quad (|z|<r_{1}).\end{split} \end{equation*} \item For $m,l\geq 3$, we find from \eqref{113} and \eqref{114} that \begin{equation}\label{51} \begin{split} &{\rm Re}\left(\Gamma'_{m,l}(\phi)(z)\right) \\&\quad={\rm Re}\left(\mathcal{S}'_{m}(h)(z)+\varepsilon\lambda z\mathcal{S}'_{l-1}(h)(z)\right)\\ &\quad={\rm Re}\left(\left(h'(z)-\Sigma'_{m}(h)(z)\right)+\varepsilon\lambda z\left(h'(z)-\Sigma'_{l-1}(h)(z)\right)\right)\\ &\quad\geq {\rm Re}\left(h'(z)\right)-|\Sigma'_{m}(h)(z)|-|\lambda||z||h'(z)|-|\lambda||z||\Sigma'_{l-1}(h)(z)|\\ &\quad\geq {\rm Re}\left(h'(z)\right)-|\Sigma'_{m}(h)(z)|-\frac{1}{2}|z||h'(z)|-\frac{1}{2}|z||\Sigma'_{l-1}(h)(z)|. \end{split} \end{equation} In view of \eqref{4002}, we obtain \begin{equation}\label{52} \min_{|z|=r<1}\{{\rm Re}\left(h'(z)\right)\} \geq \min_{|z|=r<1}\{{\rm Re}\left(1-z\right)\} \geq 1-r. \end{equation} From Lemma \ref{lem3}(3), for $|z|=r<1$, we know that \begin{equation*} \abs{\Sigma'_n(z)} \leq\sum_{k=0}^{\infty}\frac{r^{k+n}}{k+n}=-\ln(1-r)-\sum_{k=1}^{n-1}\frac{r^k}{k}=:\Delta(n), \end{equation*} and \begin{equation*} \Delta(n+1)-\Delta(n)=-\frac{r^{n}}{n}<0\quad(n\geq 2). \end{equation*} Therefore, $\Delta(n)$ is a decreasing function of $n$. For all $m,l\geq 3$, we see that \begin{equation}\label{53} \Delta(m)\leq \Delta(3)=-\ln(1-r)-r-\frac{r^2}{2}, \end{equation} and \begin{equation}\label{54} \Delta(l-1)\leq \Delta(2)=-\ln(1-r)-r. \end{equation} Moreover, it follows from Lemma \ref{lem3}(2) that \begin{equation}\label{55} |z||h'(z)|\leq |z|(1+|z|)=r(1+r)\quad(|z|=r<1). \end{equation} From the relationships \eqref{51}, \eqref{52}, \eqref{53}, \eqref{54} and \eqref{55}, it follows that \begin{equation*} \begin{split} {\rm Re}\left(\Gamma'_{m,l}(\phi)(z)\right) \geq 1+\ln(1-r)-\frac{r}{2}+\frac{1}{2}r\ln(1-r)+\frac{1}{2}r^2>0 \end{split} \end{equation*} for all $m,l\geq 3$ and $|z|=r<r_{2}\approx 0.503$, where $r_{2}$ is the least positive root in the interval $(0, 1)$ of the equation: \begin{equation*} 2+2\ln(1-r)+r\ln(1-r)-r+r^2=0. \end{equation*} \item For $m=1,2$, $l\geq 3$, we see that \begin{equation*} \begin{split} {\rm Re}\left(\Gamma'_{2,l}(\phi)(z)\right) &={\rm Re}\left(\mathcal{S}'_{2}(h)(z)+\varepsilon \mathcal{S}'_{l}(g)(z)\right)\\ &={\rm Re}\left(1+2a_{2}z+\varepsilon\lambda z \mathcal{S}'_{l-1}(h)(z)\right)\\ &\geq 1-2|a_{2}||z|-|\lambda||z||h'(z)|-|\lambda||z||\Sigma'_{l-1}(h)(z)|\\ &\geq 1-\frac{1}{2}|z|-\frac{1}{2}|z||h'(z)|-\frac{1}{2}|z||\Sigma'_{l-1}(h)(z)|. \end{split} \end{equation*} From \eqref{54} and \eqref{55}, we know that \begin{equation*} {\rm Re}\left(\Gamma'_{2,l}(\phi)(z)\right) \geq 1-\frac{1}{2}r-\frac{r(1+r)}{2}+\frac{1}{2}r[\ln(1-r)+r]>0 \end{equation*} for all $l\geq 3$ and $|z|=r<r_{3}\approx 0.653575$, where $r_{3}$ is the least positive root in the interval $(0, 1)$ of the equation: \begin{equation*} 2-2r+r\ln(1-r)=0. \end{equation*} Similarly, for all $l\geq 3$ and $|z|=r<r_{3}$, we have \begin{equation*} \begin{split} {\rm Re}\left(\Gamma'_{1,l}(\phi)(z)\right) \geq 1-\frac{1}{2}|z||h'(z)|-\frac{1}{2}|z||\Sigma'_{l-1}(h)(z)|\geq 1-\frac{r}{2}+\frac{r}{2}\ln(1-r) >0.
\end{split} \end{equation*} \item For $m\geq 3$, $l=2$, we deduce from \eqref{52} and \eqref{53} that \begin{equation*} \begin{split} {\rm Re}\left(\Gamma'_{m,2}(\phi)(z)\right) &={\rm Re}\left(\mathcal{S}'_{m}(h)(z)+\varepsilon \mathcal{S}'_{2}(g)(z)\right)\\ &\geq {\rm Re}\left(h'(z)\right)-|\Sigma'_{m}(h)(z)|-|\lambda||z|\\ &\geq {\rm Re}\left(h'(z)\right)-|\Sigma'_{m}(h)(z)|-\frac{1}{2}|z|\\ &\geq 1-\frac{1}{2}r+\ln(1-r)+\frac{r^2}{2} >0 \end{split} \end{equation*} for all $m\geq 3$ and $|z|=r<r_{4}\approx 0.584628$, where $r_{4}$ is the least positive root in the interval $(0, 1)$ of the equation: \begin{equation*} 2-r+2\ln(1-r)+{r^2}=0. \end{equation*} \end{enumerate} By setting \begin{equation*} r_{c}:=\min\{r_{1},r_{2},r_{3},r_{4}\}=r_{2}, \end{equation*} we see that ${\rm Re}\left(\Gamma'_{m,l}(\phi)(z)\right)>0$ for all $|z|<r_{c}$ and $m\geq 1,\ l\geq 2$. The proof of Theorem \ref{t5} is thus completed. \end{proof} \vskip .20in \begin{center}{\sc Acknowledgments} \end{center} \vskip .05in The present investigation was supported by the \textit{Key Project of Education Department of Hunan Province} under Grant no. 19A097, and the \textit{National Natural Science Foundation of China} under Grant no. 11961013. \vskip .20in \end{document}
\begin{document} \setlength{\textwidth}{8in} \setlength{\textheight}{9in} \def{\mathfrak c}{{\mathfrak c}} \def{\mathfrak i}{{\mathfrak i}} \def{\mathfrak U}{{\mathfrak U}} \def{\Bbb C}{{\Bbb C}} \def{\Bbb H}{{\Bbb H}} \def{\Bbb N}{{\Bbb N}} \def{\Bbb P}{{\Bbb P}} \def{\Bbb Q}{{\Bbb Q}} \def{\Bbb R}{{\Bbb R}} \def{\Bbb T}{{\Bbb T}} \def{\Bbb Z}{{\Bbb Z}} \def{\mathcal A}{{\mathcal A}} \def{\mathcal B}{{\mathcal B}} \def{\mathcal C}{{\mathcal C}} \def{\mathcal D}{{\mathcal D}} \def{\mathcal E}{{\mathcal E}} \def{\mathcal F}{{\mathcal F}} \def{\mathcal G}{{\mathcal G}} \def{\mathcal H}{{\mathcal H}} \def{\mathcal I}{{\mathcal I}} \def{\mathcal J}{{\mathcal J}} \def{\mathcal K}{{\mathcal K}} \def{\mathcal L}{{\mathcal L}} \def{\mathcal M}{{\mathcal M}} \def{\mathcal N}{{\mathcal N}} \def{\mathcal O}{{\mathcal O}} \def{\mathcal P}{{\mathcal P}} \def{\mathcal Q}{{\mathcal Q}} \def{\mathcal R}{{\mathcal R}} \def{\mathcal S}{{\mathcal S}} \def{\mathcal T}{{\mathcal T}} \def{\mathcal U}{{\mathcal U}} \def{\mathcal V}{{\mathcal V}} \def{\mathcal W}{{\mathcal W}} \def{\mathcal X}{{\mathcal X}} \def{\mathcal Y}{{\mathcal Y}} \def{\mathcal Z}{{\mathcal Z}} \def\mathrm{od}{\mathrm{od}} \def\mathrm{cf}{\mathrm{cf}} \def\mathrm{dom}{\mathrm{dom}} \def\mathrm{id}{\mathrm{id}} \def\mathrm{int}{\mathrm{int}} \def\mathrm{cl}{\mathrm{cl}} \def\mathrm{Hom}{\mathrm{Hom}} \def\mathrm{ker}{\mathrm{ker}} \def\mathrm{log}{\mathrm{log}} \def\mathrm{nwd}{\mathrm{nwd}} \def{\Bbb T}{{\Bbb T}} \def{\Bbb Z}{{\Bbb Z}} \def{\Bbb Z}Z{{\Bbb Z}} \newcommand{\mathcal }{\mathcal } \newcommand{\mathbf{A}}{\mathbf{A}} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\mathbf{C}}{\mathbf{C}} \newcommand{\mathbf{D}}{\mathbf{D}} \newcommand{\mathbf{I}}{\mathbf{I}} \newcommand{\mathbf{E}}{\mathbf{E}} \newcommand{\mathbf{K}}{\mathbf{K}} \newcommand{\mathbf{T}}{\mathbf{T}} \newtheorem{thm}{Theorem}[section] \newtheorem{theorem}[thm]{Theorem} \newtheorem{corollary}[thm]{Corollary} \newtheorem{lemma}[thm]{Lemma} \newtheorem{claim}[thm]{Claim} \newtheorem{axiom}[thm]{Axiom} \newtheorem{conjecture}[thm]{Conjecture} \newtheorem{fact}[thm]{Fact} \newtheorem{hypothesis}[thm]{Hypothesis} \newtheorem{assumption}[thm]{Assumption} \newtheorem{proposition}[thm]{Proposition} \newtheorem{criterion}[thm]{Criterion} \newtheorem{definition}[thm]{Definition} \newtheorem{definitions}[thm]{Definitions} \newtheorem{discussion}[thm]{Discussion} \newtheorem{example}[thm]{Example} \newtheorem{notation}[thm]{Notation} \newtheorem{remark}[thm]{Remark} \newtheorem{remarks}[thm]{Remarks} \newtheorem{problem}[thm]{Problem} \newtheorem{terminology}[thm]{Terminology} \newtheorem{question}[thm]{Question} \newtheorem{questions}[thm]{Questions} \newtheorem{notation-definition}[thm]{Notation and Definitions} \newtheorem{acknowledgement}[thm]{Acknowledgement} \title{Exactly $n$-resolvable Topological Expansions \footnote{1991 Mathematics Subject Classification. Primary 05A18, 03E05, 54A10; Secondary 03E35, 54A25, 05D05}} \author{W.W. 
Comfort\footnote{Department of Mathematics and Computer Science, Wesleyan University, Wesleyan Station, Middletown, CT 06459; phone: 860-685-2632; FAX 860-685-2571; Email: [email protected]}} \author{Wanjun Hu\footnote{Department of Mathematics and Computer Science, Albany State University, Albany, GA 31705; phone: 229-886-4751;Email: [email protected]}} \maketitle \begin{abstract} For $\kappa$ a cardinal, a space $X=(X,\sT)$ is $\kappa$-{\it resolvable} if $X$ admits $\kappa$-many pairwise disjoint $\sT$-dense subsets; $(X,\sT)$ is {\it exactly} $\kappa$-{\it resolvable} if it is $\kappa$-resolvable but not $\kappa^+$-resolvable. The present paper complements and supplements the authors' earlier work, which showed for suitably restricted spaces $(X,\sT)$ and cardinals $\kappa\geq\lambda\geq\omega$ that $(X,\sT)$, if $\kappa$-resolvable, admits an expansion $\sU\supseteq\sT$, with $(X,\sU)$ Tychonoff if $(X,\sT)$ is Tychonoff, such that $(X,\sU)$ is $\mu$-resolvable for all $\mu<\lambda$ but is not $\lambda$-resolvable (cf. Theorem~3.3 of \cite{comfhu10}). Here the ``finite case" is addressed. The authors show in ZFC for $1<n<\omega$: (a)~every $n$-resolvable space $(X,\sT)$ admits an exactly $n$-resolvable expansion $\sU\supseteq\sT$; (b)~in some cases, even with $(X,\sT)$ Tychonoff, no choice of $\sU$ is available such that $(X,\sU)$ is quasi-regular; (c)~if $n$-resolvable, $(X,\sT)$ admits an exactly $n$-resolvable quasi-regular expansion $\sU$ if and only if either $(X,\sT)$ is itself exactly $n$-resolvable and quasi-regular or $(X,\sT)$ has a subspace which is either $n$-resolvable and nowhere dense or is $(2n)$-resolvable. In particular, every $\omega$-resolvable quasi-regular space admits an exactly $n$-resolvable quasi-regular expansion. Further, for many familiar topological properties $\PP$, one may choose $\sU$ so that $(X,\sU)\in\PP$ if $(X,\sT)\in\PP$. \\ \\ {\sl Keywords: resolvable space, $n$-resolvable space, exactly $n$-resolvable space, quasi-regular space,expansion of topology} \end{abstract} \maketitle \section{Introduction} Let $\kappa>1$ be a (possibly finite) cardinal. Generalizing a concept introduced by Hewitt~\cite{hewa}, Ceder~\cite{ceder64} defined a space $(X,\sT)$ to be $\kappa$-{\it resolvable} if there is a family of $\kappa$-many pairwise disjoint nonempty subsets of $X$, each $\sT$-dense in $X$. Generalizations of this concept (for example: the dense sets are perhaps not pairwise disjoint, but have pairwise intersections which are ``small" in some sense; or, the dense sets are required to be Borel, or to be otherwise restricted), were introduced and studied in subsequent decades, for example in \cite{maly74a} \cite{maly97}, \cite{comfgarc98}, \cite{comfgarc01}, \cite{maly98}. We refer the reader to such works as \cite{juhss}, \cite{comfhu07}, \cite{juhss2}, \cite{comfhu10} for extensive bibliographic references relating to the existence of spaces, typically Tychonoff spaces, which satisfy certain prescribed resolvability properties but not others. The flavor of our work \cite{comfhu10} is quite different from that of other papers known to us. In those papers, broadly speaking, the objective is either (a)~to find conditions on a space sufficient to ensure some kind of resolvability or (b)~to construct by {\it ad hoc} means spaces which for certain infinite cardinals $\lambda$ are $\lambda$-resolvable (sometimes in a modified sense) but which are not $\kappa$-resolvable for specified $\kappa>\lambda$. 
In \cite{comfhu10}, in contrast, a broader spectrum of results is enunciated. We showed there that the tailor-made specific spaces constructed by those {\it ad hoc} arguments arise as instances of a widely available phenomenon, in this sense: {\it every} Tychonoff space satisfying mild necessary conditions admits larger Tychonoff topologies as in (b) above. The constructions of \cite{comfhu10} are based on the $\sK\sI\sD$ expansion technique introduced in \cite{huthesis} and developed further in \cite{hu03}, \cite{comfhu03}, \cite{comfhu04}, \cite{comfhu07}. Roughly speaking, the present work in the finite context parallels theorems (cf. \cite{comfhu10}(especially Theorem~3.3)) about $\kappa$-resolvability when $\kappa$ is infinite. Specifically we show for fixed $n<\omega$ that every $n$-resolvable space admits an exactly $n$-resolvable expansion. In some cases, even when the initial space is Tychonoff, the expansion cannot be chosen to be quasi-regular. Further, we characterize explicitly those $n$-resolvable quasi-regular spaces which do admit an exactly $n$-resolvable quasi-regular expansion. It is a pleasing feature of our arguments that for many familiar topological properties $\PP$, when the initial hypothesized space $(X,\sT)$ has $\PP$ and does admit an exactly $n$-resolvable quasi-regular expansion $\sU$, one may arrange also that $(X,\sU)$ has $\PP$. {\it Ad hoc} constructions of Tychonoff spaces which for fixed $n<\omega$ are exactly $n$-resolvable have been available for some time~\cite{vd93b}; see also \cite{cederpear}, \cite{elkin69c}, \cite{eck97}, \cite{fengmasa99}, and \cite{feng00} for other examples, not all Tychonoff. \begin{remark} {\rm In a preliminary version of this paper, submitted to this journal August 31, 2010, we purported to have proved the statements claimed in our abstract \cite{comfhu08abs} and \cite{comfhu10}(5.4(**)). We are grateful to the referee for indicating a simple counterexample (see \ref{example} below for a broad generalization of the suggested argument); that example helped us to recognize the unavoidable relevance of the quasi-regularity property which figures prominently in this work, and to find the more delicate correct condition captured in Theorem~\ref{lem46} below. } \end{remark} Following van Douwen~\cite{vd93b}, we call a space {\it crowded} if it has no isolated points. (Some authors prefer the term {\it dense-in-itself}.) Obviously every resolvable space is crowded. \begin{definition} {\rm Let $\kappa>1$ be a (possibly finite) cardinal and let $X=(X,\sT)$ be a space. Then (a) $X$ is {\it hereditarily $\kappa$-irresolvable} if no nonempty subspace of $X$ is $\kappa$-resolvable in the inherited topology; (b) $X$ is {\it hereditarily irresolvable} if $X$ is hereditarily $2$-irresolvable; and (c) $X$ is {\it open-hereditarily irresolvable} if no nonempty open subspace of $X$ is resolvable in the inherited topology. } \end{definition} \begin{notation} {\rm (a) Let $(X,\sT)$ be a space and let $Y\subseteq X$. The symbol $(Y,\sT)$ denotes the set $Y$ with the topology inherited from $(X,\sT)$. (b) Given a set $X$ and $\sA\subseteq\sP(X)$, the smallest topology $\sT$ on $X$ such that $\sT\supseteq\sA$ is denoted $\sT:=\langle\sA\rangle$. } \end{notation} \begin{definition}\label{CCC} {\rm Let $\PP$ be a topological property. (a) $\PP$ is {\it chain-closed} if for each set $X$ and each chain $\sC$ of $\PP$-topologies on $X$, necessarily $(X,\bigcup\sC)\in\PP$. 
(b) $\PP$ is {\it clopen-closed} if: $[(X,\sT)\in\PP$ and $A\subseteq X$ and $\sU:=\langle\sT\cup\{A,X\backslash A\}\rangle] \Rightarrow(X,\sU)\in\PP$. (c) $\PP$ is {\it $\oplus$-closed} if: $[(X_0,\sT_0)\in\PP, (X_1,\sT_1)\in\PP, X_0\cap X_1=\emptyset \Rightarrow (X_0,\sT_0)\oplus(X_1,\sT_1)\in\PP$],\\ \noindent where $(X_0,\sT_0)\oplus(X_1,\sT_1)$ denotes the ``disjoint union" or ``topological sum" of the spaces $(X_i,\sT_i)$). (d) If $\PP$ is a chain-closed and clopen-closed and $\oplus$-closed property, then $\PP$ is a {\it CC$\oplus$} property. } \end{definition} \begin{remark}\label{list} {\rm We make no attempt to compile a list of all CC$\oplus$ properties but we note that many familiar topological properties are of that type. Examples are: $T_0$; $T_1$; $T_2$; quasi-regular; regular; completely regular; normal; Tychonoff; has a clopen basis; every two points are separated by a clopen partition; any concatenation of CC$\oplus$ properties. For additional input the interested reader might consult \cite{engel}(1.5.8). } \end{remark} \begin{discussion} {\rm With the necessary preliminaries behind us, we now address the proper topic of this paper---the search for exactly $n$-resolvable expansions. This divides naturally and necessarily into two sections: When the hypothesized topological space is $\omega$-resolvable (``The Infinite Case"), and when it is not (``The Finite Case"). We treat these in Sections~2 and 3, respectively. } \end{discussion} \section{The Infinite Case} We will use frequently the following statement, given by Illanes~\cite{illanes}(Lemma~2). \begin{lemma}\label{lem401} Let $0<n<\omega$. A space which is the union of $n$-many open-hereditarily irresolvable subspaces is not $(n+1)$-resolvable. \end{lemma} \begin{lemma}\label{lem41} Let $1<n<\omega$ and let $(X, \sT)$ be an $\omega$-resolvable space. Then there exist an expansion $\sT'$ of $\sT$, a nonempty set $U\in\sT$, and a $\sT'$-dense partition $\{D_j:j<n\}\cup\{E_j:j<n\}$ of $X$ such that each $(U\cap D_j, \sT')$ is hereditarily irresolvable. If in addition $(X,\sT)\in\PP$ with $\PP$ a CC$\oplus$ property, then $\sT'$ may be chosen so that $(X,\sT')\in\PP$. \end{lemma} {\sl Proof: } By transfinite induction we will define an (eventually constant) family $\{\sT_\eta:\eta<(2^{|X|})^+\}$ of topologies on $X$. The initial sequence $\{\sT_k:k<\omega\}$ requires special attention. Recall that the set $\omega$ admits a sequence $\sI_j=\{I^0_j,I^1_j\}$ ($j<\omega$) of two-cell partitions with the property that for each $F\in[\omega]^{<\omega}$ and $f\in\{0,1\}^F$ one has $|\bigcap_{j\in F}\,I^{f(j)}_j|=\omega$. (A quick way to see that is to identify $\omega$ with a countable dense subset $D$ of the space $\{0,1\}^\omega$ and to set $I^i_j:=D\cap\pi_j^{-1}(\{i\})$ for $j<\omega$, $i\in\{0,1\}$.) Let $\{S(m):m<\omega\}$ witness the $\omega$-resolvability of $(X,\sT)$ and for $j<\omega$, $i\in\{0,1\}$ define $A^i_j:=\bigcup\{S(m):m\in I^i_j\}$.\\ \noindent Now define $\sT_0:=\sT$ and $\sT_k:=\langle\sT_0\cup\{A_0^0,A_0^1\}\cup\cdots \cup\{A^0_{k-1},A^1_{k-1}\}\rangle$ for $1\leq k<\omega$.\\ \noindent Each space $(X,\sT_k)$ is resolvable (in fact, $\omega$-resolvable) since for $F=\{0,1,\cdots k-1\}$ and $f\in\{0,1\}^F$ we have $|\bigcap_{j\in F}\,I^{f(j)}_j|=\omega$ so infinitely many $m$ satisfy $S(m)\subseteq\bigcap_{j\in F}\,A_j^{f(j)}$; each such $S(m)$ meets each nonempty $U\in\sT=\sT_0$. Continuing the construction, we define the topologies $\sT_\eta$ for $\omega\leq\eta<(2^{|X|})^+$. 
For limit ordinals $\eta$, we set $\sT_\eta:=\bigcup_{\xi<\eta}\,\sT_\xi$. For successor ordinals $\eta+1$ we have two cases: If $(X,\sT_\eta)$ is resolvable we choose a dense partition $\{A^0_\eta,A^1_\eta\}$ of $(X,\sT_\eta)$ and we set $\sT_{\eta+1}:=\langle\sT_\eta\cup\{A^0_\eta,A^1_\eta\}\rangle$, and if $(X,\sT_\eta)$ is irresolvable we set $\sT_{\eta+1}:=\sT_\eta$. The definitions of the topologies $\sT_\eta$ are complete. Routine arguments show that each space $(X,\sT_\eta)$ is crowded, and Definition~\ref{CCC}(b) and (c) (invoked recursively) shows for each CC$\oplus$ property $\PP$ that each space $(X,\sT_\eta)$ has $\PP$ if the initial space $(X,\sT)$ has $\PP$. Now for notational simplicity let $\lambda$ be the least ordinal such that $\sT_\lambda=\sT_{\lambda+1}$ (necessarily with $\lambda<(2^{|X|})^+$ since for $\eta<\lambda$ we have $A^0_\eta\in\sT_{\eta+1}\backslash\sT_\eta$). Then $\lambda\geq\omega$ according to our definition of $\{\sT_k: k<\omega\}$. We set $R:=\bigcup\{S\subseteq X:(S,\sT_\lambda)$ is resolvable$\}$, and $W:=X\backslash R$.\\ \noindent Then $W\in\sT_\lambda$, $(W,\sT_\lambda)$ is hereditarily irresolvable, and $W\neq\emptyset$ since $(X,\sT_\lambda)$ is irresolvable. We fix a nonempty $\sT_\lambda$-basic subset $U\cap H$ of $W$; here $U\in\sT=\sT_0$ and $H=\bigcap_{\eta\in F}\,A_\eta^{f(\eta)}$ for some $F\in[\lambda+1]^{<\omega}$, $f\in\{0,1\}^F$. Now let $1<n<\omega$ as hypothesized, choose $G=\{\eta_j:j<n\}\in[\lambda+1]^n$ such that $G\cap F=\emptyset$, and let $\{g_j:j<n\}$ be a set of $n$-many distinct functions from $G$ to $\{0,1\}$. For $j<n$ set $H_j:=\bigcap_{\eta\in G}\,A_\eta^{g_j(\eta)}$ and define $D_j:=H_j\cap H$ for $j<n$, $E_j:=H_j\backslash H$ for $1\leq j<n$, and $E_0:=[H_0\backslash H]\cup[X\backslash (\bigcup_{j<n}\,D_j\cup\bigcup_{1\leq j<n}\,E_j)]$.\\ \noindent The sets $H$ and $H_j$ ($j<n$) are $\sT_\lambda$-clopen, so each $D_j$ and $E_j$ ($j<n)$ is $\sT_\lambda$-clopen. We define $\sT':=\bigcup\{\sT_\eta:\eta\leq\lambda,\eta\notin F\cup G\}$. A typical basic open subset of $\sT'$ has the form $U'\cap H'$ with $U'\in\sT=\sT_0$ and $H'=\bigcap _{\eta\in F'}\,A_\eta^{f'(\eta)}$ with $F'\in[\lambda+1]^{<\omega}$, $F'\cap(F\cup G)=\emptyset$ and $f'\in\{0,1\}^{F'}$; hence the sets $D_j$, $E_j$ ($j<n$) are dense in $(X,\sT')$. It is clear further that $\{D_j:j<n\}\cup\{E_j:j<n\}$ is a partition of $X$. It remains then to show that each space $(U\cap D_j,\sT')$ ($j<n$) is hereditarily irresolvable. We note first a weaker statement: \begin{equation}\label{eq1} \mbox{Each space~}(U\cap D_j,\sT_\lambda)~(j<n) \mbox{~is hereditarily irresolvable.} \end{equation} \noindent Statement (\ref{eq1}) is clear, since $U\cap D_j\subseteq U\cap H\subseteq W$ and $(W,\sT_\lambda)$ is hereditarily irresolvable. Suppose now for some (fixed) $j<n$ that there is a nonempty set $A=A^0\cup A^1\subseteq U\cap D_j$ with $\{A^0,A^1\}$ a dense partition of $(A,\sT')$. From (\ref{eq1}), not both $A^0$ and $A^1$ are dense in $(A,\sT_\lambda)$, so we assume without loss of generality that $\int_{(A,\sT_\lambda)}\,A^0\neq\emptyset$, say \begin{equation}\label{eq2} \emptyset\neq V\cap H'\cap H''\cap A\subseteq A^0 \end{equation} \noindent with $V\in\sT$, $H'=\bigcap_{\eta\in F'}\,A_\eta^{f'(\eta)}$, $F'\in[(\lambda+1)\backslash(F\cup G)]^{<\omega}$, $f'\in\{0,1\}^{F'}$, and with in addition $H''=\bigcap_{\eta\in F''}\,A_\eta^{f''(\eta)}$, $F''\subseteq F\cup G$, $f''\in\{0,1\}^{F''}$. We assume $V\subseteq U$. 
We assume also, using $f''|(F\cap F'')=f|(F\cap F'')$ and $f''|(G\cap F'')=g_j|(F\cap F'')$ and replacing $f''$ by $f\cup g_i$, that $F''=F\cup G$. Then $H''=H_j\cap H=D_j$ and from (\ref{eq2}) we have $\emptyset\neq V\cap H'\cap H''\cap A=V\cap H'\cap D_j\cap A\subseteq A^0\subseteq D_j$,\\ \noindent and hence \begin{equation}\label{eq3} \emptyset\neq(V\cap H')\cap A\subseteq A^0. \end{equation} \noindent Since $V\cap H'\in\sT'$, (\ref{eq3}) shows $\int_{(A,\sT')}\,A^0\neq\emptyset$, a contradiction since $A^1=A\backslash A^0$ is dense in $(A,\sT')$. $\Box$ We use the following lemma only in the case $\kappa=n<\omega$, but we give the general statement and proof since these require no additional effort. \begin{lemma} \label{thm411} Let $\kappa>1$ be a (possibly finite) cardinal and let $(X, \sT')$ be a space with a dense partition $\{D_\eta:\eta<\kappa\}\cup\{E_\eta:\eta<\kappa\}$ in which there is a nonempty set $U\in\sT'$ such that each space $(U\cap D_\eta,\sT')$ is hereditarily irresolvable. Then there is an expansion $\sT''$ of $\sT'$ such that {\rm (a)} $U\cap(\bigcup\{D_\eta:\eta<\kappa\})\in\sT''$; and {\rm (b)} each set $D_\eta\cup E_\eta$ is dense in $(X,\sT'')$. If in addition $(X,\sT')\in\PP$ with $\PP$ a CC$\oplus$ property, then $\sT''$ may be chosen so that $(X,\sT'')\in\PP$. \end{lemma} {\sl Proof: } With notation as hypothesized, set $W:=U\cap(\bigcup\{D_\eta:\eta<\kappa\})$ and define $\sT'':=\langle\sT'\cup\{W,X\backslash W\}\rangle$. Clearly (a) holds, since $W=U\cap(\bigcup\{D_\eta:\eta<\kappa\})\in\sT''$. Definition~\ref{CCC} applies as before to guarantee that if $(X,\sT')\in\PP$ then also $(X,\sT'')\in\PP$. For (b), we fix a nonempty basic set $V''\in\sT''$ and $\overline{\eta}<\kappa$. We must show \begin{equation}\label{eq4} (D_{\overline{\eta}}\cup E_{\overline{\eta}})\cap V''\neq\emptyset. \end{equation} For some nonempty set $U'\in\sT''$ we have either $V''=U'\cap W$ or $V''=U'\backslash W$. In the former case since $\emptyset\neq U'\cap U\in\sT'$ and $D_{\overline{\eta}}$ is dense in $(X,\sT')$ we have $\emptyset\neq(D_{\overline{\eta}}\cap U)\cap U'=(D_{\overline{\eta}}\cap W) \cap U'=D_{\overline{\eta}}\cap V''$;\\ \noindent and in the latter case from $E_{\overline{\eta}}\cap W=\emptyset$ and the density of $E_{\overline{\eta}}$ in $(X,\sT')$ we have $V''\cap E_{\overline{\eta}}=U'\cap E_{\overline{\eta}}\neq\emptyset$. Thus (\ref{eq4}) is proved. $\Box$ \begin{theorem} \label{cor3-30} Let $1<n<\omega$ and let $(X, \sT)$ be an $\omega$-resolvable space. Then there is an expansion $\sU$ of $\sT$ such that $(X,\sU)$ is exactly $n$-resolvable. If in addition $(X,\sT)\in\PP$ with $\PP$ a CC$\oplus$ property, then $\sU$ may be chosen so that $(X,\sU)\in\PP$. \end{theorem} {\sl Proof: } Let $\sT'\supseteq\sT$, $U\in\sT$, and $\{D_j:j<n\}\cup\{E_j:j<n\}$ be as given by Lemma~\ref{lem41}. Then by Lemma~\ref{thm411} there is an expansion $\sT''$ of $\sT'$ such that $\emptyset\neq W:=U\cap(\bigcup_{j<n}\{D_j:j<n\})\in\sT''$\\ \noindent and each set $D_j\cup E_j$ is dense in $(X,\sT'')$. We define $\sU:=\sT''$. As indicated in the statements of Lemmas~\ref{lem41} and \ref{thm411}, we have $(X,\sU)\in\PP$ if $(X,\sT)\in\PP$. The family $\{D_j:j<n\}\cup\{E_j:j<n\}$ is a dense partition of $(X,\sT'')=(X,\sU)$, so $(X,\sU)$ is $n$-resolvable (indeed, $2n$-resolvable). We have $W\cap D_j=U\cap D_j\neq\emptyset$ and $(U\cap D_j,\sT')$ is hereditarily irresolvable, so (since $\sU\supseteq\sT'$) the space $(W\cap D_j,\sU)$ is hereditarily irresolvable. 
The relation $W=\bigcup_{j<n}\,(W\cap D_j)$ expresses $W$ as the union of $n$-many open-hereditarily irresolvable (even, hereditarily irresolvable) $\sU$-dense subspaces, so from Lemma~\ref{lem401} we have that $(W,\sU)$ is not $(n+1)$-resolvable. Then surely, since $\emptyset\neq W\in\sU$, the space $(X,\sU)$ is not $(n+1)$-resolvable. $\Box$ \section{The Finite Case} We have shown for $1<n<\omega$ that each $\omega$-resolvable space admits an exactly $n$-resolvable expansion. That result leaves unresolved the following two questions: (a)~Does every $n$-resolvable ($\omega$-irresolvable) space admit an exactly $n$-resolvable expansion? (b)~If not, which $n$-resolvable spaces do admit such an expansion? In this Section we respond fully to those questions, as follows. First, we show in Theorem~\ref{expansionsexist} that every $(n+1)$-resolvable, $\omega$-irresolvable space admits an exactly $n$-resolvable expansion which is not quasi-regular. Next, we give in Lemma~\ref{neg} a set of conditions sufficient to ensure that a given $n$-resolvable space admits no exactly $n$-resolvable quasi-regular expansion. Then, profiting from a referee's report and leaning heavily on an example given by Juh\'asz, Soukup and Szentmikl\'ossy~\cite{juhss}, we show in Theorem~\ref{example} that for every $n>1$ there do exist (many) Tychonoff spaces satisfying the hypotheses of Lemma~\ref{neg}; thus, not every $n$-resolvable Tychonoff space admits an exactly $n$-resolvable quasi-regular expansion. Finally in Theorem~\ref{lem46}, sharpening the results given, we characterize internally those $n$-resolvable quasi-regular spaces which do admit an exactly $n$-resolvable quasi-regular expansion; and we show, as in the $\omega$-resolvable case treated in Section~2, that for every CC$\oplus$ property $\PP$ the expansion may be chosen in $\PP$ if the initial space was in $\PP$. Twice in this section we will invoke the following useful result Theorem of Illanes~\cite{illanes}. We remark that the anticipated generalization of Theorem~\ref{illanes} to (arbitrary) infinite cardinals of countable cofinality, not needed here, was given by Bhaskara Rao~\cite{brao94}. \begin{theorem}\label{illanes} A space which is $n$-resolvable for each integer $n<\omega$ is $\omega$-resolvable. \end{theorem} We follow Oxtoby~\cite{oxtoby} in adopting the terminology of this next definition; alternatively, the spaces in question might be referred to as spaces with a {\it regular-closed $\pi$-base}. (We are grateful to Alan Dow and Jerry Vaughan for helpful correspondence concerning these terms.) \begin{definition}\label{quasireg} {\rm A space $(X,\sT)$ is {\it quasi-regular} if for every nonempty $U\in\sT$ there is a nonempty $V\in\sT$ such that $V\subseteq\overline{V}^{(X,\sT)}\subseteq U$. } \end{definition} The condition of quasi-regularity has the flavor of a (very weak) separation condition. Clearly every regular space is quasi-regular. We note that a quasi-regular space need not be a $T_1$-space. \begin{theorem}\label{expansionsexist} Let $1<n<\omega$. Every $(n+1)$-resolvable, $\omega$-irresolvable space admits an exactly $n$-resolvable expansion $(X,\sU)$ which is not quasi-regular. \end{theorem} {\sl Proof: } It suffices to prove that there is nonempty $U\in\sT$ such that $(X\backslash U,\sT)$ is $n$-resolvable and $(U,\sT)$ admits an exactly $n$-resolvable non-quasi-regular expansion. For if $\sU'$ is such a topology on $U$ then $\sU:=\langle\sT\cup\sU'\cup\{U,X\backslash U\}\rangle$ is as required for $X$. 
(Note for clarity: In the notation of Definition~\ref{CCC} we have $(X,\sU)=(U,\sU')\oplus(X\backslash U,\sT)$.) Since $(X,\sT)$ is not $\omega$-resolvable, by Theorem~\ref{illanes} there is $m$ such that $n<m<\omega$ and $(X,\sT)$ is exactly $m$-resolvable. We set $R:=\bigcup\{S\subseteq X:(S,\sT)$ is $(m+1)$-resolvable$\}$. Then $(R,\sT)$ is closed in $(X,\sT)$ and $n$-resolvable (even $(m+1)$-resolvable), and with $U:=X\backslash R$ we have $\emptyset\neq U\in\sT$. For clarity, we denote by $\sT'$ the trace of $\sT$ on $U$. By the previous paragraph it suffices to find an exactly $n$-resolvable, non-quasi-regular expansion $\sU'$ of $\sT'$ on $U$. Let $\{D_i:i<m\}$ be a dense partition of $(U,\sT')$, and for $i<m$ set $R_i:=\bigcup\{S:S\subseteq D_i, (S,\sT')$ is resolvable$\}$.\\ Then $R_i$ is closed in $(D_i,\sT')$, and $(R_i,\sT')$ is resolvable. We claim for $i<m$ that \begin{equation}\label{(1)} R_i \mbox{~is nowhere dense in~} (D_i,\sT'). \end{equation} Indeed if for some $\overline{i}<m$ there is nonempty $V\in\sT'$ such that $V\cap D_{\overline{i}}\subseteq R_{\overline{i}}$ then (since $(R_{\overline{i}},\sT')$ is resolvable and $U\cap R_{\overline{i}}$ is open in $(R_{\overline{i}},\sT')$) the set $V\cap R_{\overline{i}}=V\cap D_{\overline{i}}$ is resolvable. Then $V=\bigcup_{i<m}\,(V\cap D_i)$ would be $(m+1)$-resolvable, contrary to the fact that $(U,\sT)=(U,\sT')$ is hereditarily $(m+1)$-irresolvable. Thus (\ref{(1)}) is proved. It follows that the set $R(U):=\bigcup_{i<m}\,R_i$ is nowhere dense in $(U,\sT')$. For $i<m$ we write $E_i:=D_i\backslash\overline{R(U)}^{(U,\sT')}$.\\ \noindent Then each set $E_i$ is dense in $(U,\sT')$, each space $(E_i,\sT')$ is hereditarily irresolvable, and $U=(\bigcup_{i<m}\,E_i)\cup\overline{R(U)}^{(U,\sT')}$. Now set $E:=\bigcup\{E_i:i<n\}$ and $\sU':=\langle\{\sT'\cup\{E\}\rangle$. As the union of $n$-many hereditarily irresolvable subsets, the space $(E,\sT')$ is not $(n+1)$-resolvable (by Lemma~\ref{lem401}), hence is exactly $n$-resolvable. Then since $E$ is open in $(U,\sT')$, also $(U,\sT')$ is exactly $n$-resolvable. To see that $(U,\sU')$ is not quasi-regular, let $\emptyset\neq V\in\sU'$ with $V\subseteq E\in\sU'$. There is $W\in\sT'$ (with $W\subseteq U$) such that $V=W\cap E$, and using $(E,\sU')=(E,\sT')$ we have $\overline{V}^{(U,\sU')}=\overline{W\cap E}^{(U,\sU')}=\overline{W}^{(U,\sU')}=\overline{W\cap E}^{(U,\sT')} =\overline{W}^{(U,\sT')}\supseteq W\cap E_n\neq\emptyset$,\\ \noindent so $\overline{V}^{U,\sU')}\subseteq E$ fails. $\Box$ We continue with a lemma which is a routine strengthening of Theorem~\ref{cor3-30}. \begin{lemma}\label{subspace} Let $1<n<\omega$ and let $(X,\sT)$ be an $n$-resolvable space with a nonempty $\omega$-resolvable subspace. Then there is an expansion $\sU$ of $\sT$ such that $(X,\sU)$ is exactly $n$-resolvable. If in addition $(X,\sT)\in\PP$ with $\PP$ a CC$\oplus$ property, then $\sU$ may be chosen so that $(X,\sU)\in\PP$. \end{lemma} {\sl Proof: } Let $A\subseteq X$ have the property that $(A,\sT)$ is $\omega$-resolvable. Replacing $A$ by $\overline{A}^{(X,\sT)}$ if necessary, we assume that $A$ is $\sT$-closed in $X$. By Theorem~\ref{cor3-30} there is a topology $\sW$ on $A$, with $(X,\sW)\in\PP$ if $(X,\sT)\in\PP$, such that $\sW$ expands (the trace of) $\sT$ on $A$ and $(A,\sW)$ is exactly $n$-resolvable. Both $A$ and $X\backslash A$ are clopen in the topology $\sU:=\langle\sT\cup\sW\rangle$; here $(X,\sU)=(A,\sW)\oplus(X\backslash A,\sU)$. 
Since $(X\backslash A,\sU)=(X\backslash A,\sT)$ is $n$-resolvable and $(A,\sU)=(A,\sW)$ is exactly $n$-resolvable, the space $(X,\sU)$ is exactly $n$-resolvable. When $(X,\sT)\in\PP$ and $(A,\sU)\in\PP$, necessarily $(X,\sU)\in\PP$ since property $\PP$ is $\oplus$-closed (see Definition~\ref{CCC}(c). $\Box$ \begin{definition}\label{-maximal} {\rm For $\kappa$ a cardinal, a space $(X, \sT)$ is $\kappa$-{\it maximal} if no nonempty subspace of $X$ is both nowhere dense and $\kappa$-resolvable. } \end{definition} \begin{lemma}\label{neg} Let $1<n<m<2n<\omega$ and let $(X,\sT)$ be an $n$-maximal, $m$-resolvable space which is hereditarily ($m+1)$-irresolvable. Then $(X,\sT)$ admits no exactly $n$-resolvable quasi-regular expansion. \end{lemma} {\sl Proof:} Let $\{D_i:i<m\}$ be a dense partition of $(X,\sT)$, and for $i<m$ set $R_i:=\bigcup\{S:S\subseteq D_i, (S,\sT)$ is resolvable$\}$.\\ Then $R_i$ is closed in $(D_i,\sT)$, and $(R_i,\sT)$ is resolvable. Exactly as in the proof of (\ref{(1)}) we have for $i<m$ that $R_i$ is nowhere dense in $(D_i,\sT)$. \noindent It follows that the set $R:=\bigcup_{i<m}\,R_i$ is nowhere dense in $(X,\sT)$. In what follows for $i<m$ we write $E_i:=D_i\backslash\overline{R}^{(X,\sT)}$.\\ \noindent Then each set $E_i$ is dense in $(X,\sT)$, each space $(E_i,\sT)$ is hereditarily irresolvable, and $X=(\bigcup_{i<m}\,E_i)\cup\overline{R}^{(X,\sT)}$ with $(\bigcup_{i<m}\,E_i)\cap\overline{R}^{(X,\sT)}=\emptyset$. Suppose now that there is an expansion $\sU$ of $\sT$ such that $(X,\sU)$ is exactly $n$-resolvable and quasi-regular. We claim \begin{eqnarray}\label{card} \mbox{there are nonempty~}U''\in\sU\mbox{~and~}F\in[m]^{\leq n}\nonumber\\ \mbox{~such that~}U''\subseteq\bigcup_{i\in F}\,E_i. \end{eqnarray} \noindent To prove (\ref{card}) let $R_\sU:=\bigcup\{S\subseteq X:(S,\sU)$ is $(n+1)$-resolvable$\}$ and set $V:=X\backslash(R_\sU\cup\overline{R}^{(X,\sT)})$. Then $\emptyset\neq V\in\sU$ and $(V,\sU)$ is hereditarily $(n+1)$-irresolvable. Now for $\emptyset\neq U''\subseteq V$ with $U''\in\sU$ we set $\#(U''):=\{i<m:U\cap E_i\neq\emptyset\}$ and we choose such $U''$ with $|\#(U'')|$ minimal. Since $U''=\bigcup_{i\in\#(U'')}\,(U''\cap E_i)$ and $(U'',\sU)$ is $(n+1)$-irresolvable, if $|\#(U'')|>n$ there is $\overline{i}\in\#(U'')$ such that $(U''\cap E_{\overline{i}})$ is not dense in $(U'',\sU)$; then some nonempty $U'\subseteq U''$ with $U'\in\sU$ satisfies $U'\cap E_{\overline{i}}=\emptyset$, and then $|\#(U')|<|\#(U'')|$. That contradiction establishes (\ref{card}). Since $(X,\sU)$ is quasi-regular, there are by (\ref{card}) a set $F\in[m]^{\leq n}$ and nonempty $U'\in\sU$ such that $U'\subseteq\overline{U'}^{(X,\sU)}\subseteq\bigcup_{i\in F}\,E_i$.\\ \noindent We choose and fix such $U'$ and we assume further, reindexing the sets $E_i$ if necessary, that \begin{equation}\label{U'} U'\subseteq\overline{U'}^{(X,\sU)}\subseteq\bigcup_{i<n}\,E_i. \end{equation} For every nonempty $U\subseteq U'$ such that $U\in\sU$ the space $(U,\sU)$ is $n$-resolvable, so $(U,\sT)$ is $n$-resolvable; therefore, since $(X,\sT)$ is $n$-maximal, such $U$ is not nowhere dense in $(X,\sT)$. Given such $U$, let $V=V(U):=\int_{(X,\sT)}\,\overline{U}^{(X,\sT)}$; then $\emptyset\neq U\cap V$. Every nonempty $W\in\sU$ such that $W\subseteq U\cap V$ is $n$-resolvable in the topology $\sU$, hence in the topology $\sT$. 
More explicitly, we claim: \begin{eqnarray}\label{W} \noindent\hspace{.3in}U\subseteq U',U\in\sU,V=\int_{(X,\sT)}\,\overline{U}^{(X,\sT)}, W\in\sU,\nonumber\\ \noindent\hspace{.3in}W\subseteq U\cap V, i<n\Rightarrow W\subseteq\overline{W\cap E_i}^{(X,\sT)}. \end{eqnarray} If the claim fails for some such $W$ then for some $i_0<n$ we have, defining $W':=W\backslash\overline{W\cap E_{i_0}}^{(X,\sT)}$, that $\emptyset\neq W'\in\sU$ and $W'\cap E_{i_0}=\emptyset$. Now recursively for $i<n$ we will define a nonempty set $U_i\in\sT$ and a (possibly empty) set $F_i\subseteq E_i$ as follows. For $i=0$, if $W'\cap E_0$ is empty or crowded in the topology $\sT$ we take $U_0=X$ and $F_0=\emptyset$. If $W'\cap E_0$ is neither empty nor crowded in the topology $\sT$ we choose a point $x_0$ which is isolated in $(W'\cap E_0,\sT)$ and we choose $U_0\in\sT$ such that $U_0\cap W'\cap E_0=\{x_0\}$; then we define $F_0:=\{x_0\}$. We continue similarly, assuming $i<n$ and that $U_j\in\sT$ and $F_j\subseteq E_j$ have been defined for all $j<i$ in such a way that $U_{j'}\subseteq U_j$ when $0\leq j<j'<i$. If $U_{i-1}\cap W'\cap E_i$ is empty or crowded in the topology $\sT$ we take $U_i=U_{i-1}$ and $F_i=\emptyset$. If $U_{i-1}\cap W'\cap E_i$ is neither empty nor crowded in the topology $\sT$ we choose a point $x_i$ which is isolated in $U_{i-1}\cap W'\cap E_i$ and we choose $U_i\in\sT$ such that $U_i\cap U_{i-1}\cap W'\cap E_i=\{x_i\}$; we assume, replacing $U_i$ by $U_i\cap U_{i-1}$ if necessary, that $U_i\subseteq U_{i-1}$. Further in this case we define $F_i:=\{x_i\}$. The definitions are complete for (all) $i<n$. We have that $\emptyset\neq U_{n-1}\cap W'\in\sU$, so $(U_{n-1}\cap W',\sU)$ is $n$-resolvable. It follows then that $(U_{n-1}\cap W'\cap E_i,\sT)$ is crowded for at least one $i<n$ (for otherwise $|U_{n-1}\cap W'\cap E_i|\leq1$ for each $i<n$, with $U_{n-1}\cap W'\cap E_{i_0}=\emptyset$ and then $|U_{n-1}\cap W'|\leq n-1$, so $(U_{n-1}\cap W',\sU)$ cannot be $n$-resolvable). [We note in passing that no separation properties have been used or assumed here, in particular it is not assumed that a finite subset of $X$ is closed in $(X,\sU)$. We use only the fact that a space of cardinality $n-1$ or less cannot be $n$-resolvable.] The relation $U_{n-1}\cap W'=\bigcup_{i<n}\,(U_{n-1}\cap W'\cap E_i) $\\ \noindent expresses $U_{n-1}\cap W'$ as the union of $n$-many sets, each of them hereditarily irresolvable in the topology $\sT$. (Some of these sets may be singletons, and at least one of them, namely with $i=i_0$ is empty.) Then by Lemma~\ref{lem401} the space $(U_{n-1}\cap W',\sT)$ is not $n$-resolvable, so $(U_{n-1}\cap W',\sU)$ is not $n$-resolvable. That contradiction completes the proof of claim (\ref{W}). Now recursively for $i<n$ we will define nonempty sets $U_i$, $W_i\in\sU$ and $V_i$, $O_i\in\sT$. To begin, take $U_0:=U'$, $V_0:=\int_{(X,\sT)}\,\overline{U_0}^{(X,\sT)}$, and $W_0:=U_0\cap V_0$. From (\ref{W}) we have $\overline{W_0}^{(X,\sT)}=\overline{W_0\cap E_0}^{(X,\sT)}$. Since $(W_0,\sT)$ is $n$-resolvable, also $(\overline{W_0}^{(X,\sT)},\sT)=(\overline{W_0\cap E_0}^{(X,\sT)},\sT)$ is $n$-resolvable, so $\int_{(X,\sT)}\,(\overline{W_0\cap E_0}^{(X,\sT)})\neq\emptyset$ (since $(X,\sT)$ is $n$-maximal). Then since $(E_0,\sT)$ is hereditarily irresolvable, we have $\int_{(E_0,\sT)}\,(W_0\cap E_0)\neq\emptyset$; we choose nonempty $O_0\in\sT$ such that $\emptyset\neq O_0\cap E_0\subseteq W_0\cap E_0$. 
We continue similarly, assuming $i<n$ and that $U_j$, $W_j\in\sU$ and $V_j$, $O_j\in\sT$ have been defined for all $j<i$ so that $O_{j'}\subseteq O_j$ when $0\leq j<j'<i$. We set $U_i:=W_{i-1}\cap O_{i-1}$, $V_i:=\int_{(X,\sT)}\,\overline{U_i}^{(X,\sT)}$, and $W_i:=U_i\cap V_i$. By the preceding argument (applied now to $W_i\cap E_i$ in place of $W_0\cap E_0$) there is nonempty $O_i\in\sT$ such that $O_i\cap E_i\subseteq W_i\cap E_i$. Replacing $O_i$ by $O_i\cap O_{i-1}$ if necessary, we have $O_i\subseteq O_{i-1}$. The definitions are complete for (all) $i<n$. We set $M:=O_{n-1}\backslash(\overline{U'}^{(X,\sU)}\cup\overline{R}^{(X,\sT)})$.\\ \noindent To see that $M\neq\emptyset$, we argue as follows. First, $\emptyset\neq O_{n-1}\in\sT$ and $\overline{R}^{(X,\sT)}$ is nowhere dense in $(X,\sT)$, so $O_{n-1}\backslash\overline{R}^{(X,\sT)}\neq\emptyset$; then, $E_n$ is dense in $(X,\sT)$, so $(O_{n-1}\backslash\overline{R}^{(X,\sT)})\cap E_n\neq\emptyset$; finally, since $\overline{U'}^{(X,\sU)}\cap E_n=\emptyset$, we have $M\supseteq(M\cap E_n)=((O_{n-1}\backslash\overline{R}^{(X,\sT)}) \backslash\overline{U'}^{(X,\sU)})\cap E_n\neq\emptyset$. For $i<n$ we have $M\cap(O_{n-1}\cap E_i)=\emptyset$, so \begin{equation}\label{M} M=\bigcup_{n\leq i<m}\,(M\cap E_i). \end{equation} \noindent Since $m<2n$, relation (\ref{M}) expresses $M$ as the union of fewer than $n$-many subsets, each hereditarily irresolvable in the topology $\sT$, so again by Lemma~\ref{lem401} the space $(M,\sT)$ is not $n$-resolvable. Then also $(M,\sU)$ is not $n$-resolvable, which with the relation $\emptyset\neq M\in\sU$ contradicts the assumption that $(X,\sU)$ is (exactly) $n$-resolvable. $\Box$ The following argument shows, as promised, that, for every integer $n>1$, Tychonoff spaces satisfying the conditions of Lemma~\ref{neg} exist in profusion. \begin{theorem}\label{example} Let $1<n<\omega$ and let $\kappa\geq\omega$. Then there is an $n$-resolvable Tychonoff space $(X,\sT)$ such that $|X|=\kappa$ and $(X,\sT)$ admits no exactly $n$-resolvable quasi-regular expansion. \end{theorem} {\sl Proof: } According to \cite{juhss}(4.1) there is a dense subspace $Y$ of the space $\{0,1\}^{2^\kappa}$ such that $|Y|=\kappa$ and $Y$ is {\it submaximal} in the sense that every dense subspace of $Y$ is open in $Y$. Let $\{D_i:i<n+1\}$ be a family of pairwise disjoint dense subspaces of $\{0,1\}^{2^\kappa}$, each homeomorphic to $Y$, and set $X:=\bigcup_{i<n+1}\,D_i$. (To find such spaces $D_i$, let $H$ be the subgroup of $\{0,1\}^{2^\kappa}$ generated by $Y$, note that $|H|=|Y|=\kappa$, choose any $(n+1)$-many cosets of H and chose a dense homeomorph of $Y$ inside each of those.) Clearly $X=(X,\sT)$ is $n$-resolvable, indeed $(n+1)$-resolvable. Hence to see that $(X,\sT)$ admits no exactly $n$-resolvable quasi-regular expansion it suffices, according to Lemma~\ref{neg} (taking $m=n+1$ there), to show that $(X,\sT)$ is $n$-maximal and hereditarily $(n+2)$-irresolvable. In fact, $(X,\sT)$ is even $2$-maximal. To see that, let $A$ be nowhere dense in $(X,\sT)$. Then $A\cap D_i$ is nowhere dense in $(D_i,\sT)$ for each $i<n+1$, hence is hereditarily closed, hence is (hereditarily) discrete. The relation $A=\bigcup_{i<n+1}\,(A\cap D_i)$ expresses $A$ as the union of finitely many discrete sets, so (since $(X,\sT)$ is a $T_1$-space) the space $(A,\sT)$ is not crowded and hence is not $2$-resolvable. To see that $(X,\sT)$ is hereditarily $(n+2)$-irresolvable, we begin with a preliminary observation. 
\begin{equation}\label{submax} \mbox{Every submaximal space is hereditarily irresolvable}. \end{equation} For the proof, let $S\subseteq Z$ with $Z$ a submaximal space, and suppose that $\{S_0,S_1\}$ is a dense partition of $S$. Then $S':=Z\backslash S_0=S_1\cup(Z\backslash S)$ is dense in $Z$ and hence open in $Z$, so $S'\cap S=S_1$ is open in $S$; then $S_0=S\backslash S_1$ cannot be dense in $S$, a contradiction completing the proof of (\ref{submax}). Suppose now in the present case that $A\subseteq X$ and that $(A,\sT)$ is $(n+2)$-resolvable. Replacing $A$ by $\overline{A}^{(X,\sT)}$ if necessary, we assume that $A$ is closed in $(X,\sT)$. We consider two cases. \underline{Case 1}. $A$ is nowhere dense in $(X,\sT)$. Then $A\cap D_i$ is nowhere dense in $(D_i,\sT)$ for each $i<n+1$, hence is discrete and closed in $D_i$. Then $A=\bigcup_{i<n+1}\,(A\cap D_i)$ is not crowded, hence is not resolvable. \underline{Case 2}. Case 1 fails. Then there is nonempty $U\in\sT$ such that $U\subseteq A$. Each set $U\cap D_i$ with $i<n+1$ is dense in $(U,\sT)$, and each space $(U\cap D_i,\sT)$ is hereditarily irresolvable since (by (\ref{submax})) each space $(D_i,\sT)$ is hereditarily irresolvable. The relation $U=\bigcup_{i<n+1}\,(U\cap D_i)$ then expresses the $(n+2)$-resolvable space $(U,\sT)$ as the union of $(n+1)$-many dense, hereditarily irresolvable subsets. That contradicts Lemma~\ref{lem401}. $\Box$ Finally in Theorem~\ref{lem46} we give the promised internal characterization of those $n$-resolvable quasi-regular spaces which admit an exactly $n$-resolvable quasi-regular expansion. \begin{theorem}\label{lem46} Let $1<n<\omega$ and let $(X,\sT)$ be an $n$-resolvable quasi-regular space. Then these conditions are equivalent. {\rm (a)} $(X,\sT)$ admits an exactly $n$-resolvable quasi-regular expansion; {\rm (b)} either \begin{itemize} \item[(i)] $(X,\sT)$ is exactly $n$-resolvable; or \item[(ii)] $(X,\sT)$ is not $n$-maximal; or \item[(iii)] $(X,\sT)$ is not hereditarily $(2n)$-irresolvable. \end{itemize} If in addition conditions {\rm (a)} and {\rm (b)} are satisfied and $(X,\sT)\in\PP$ with $\PP$ a CC$\oplus$ property, then the exactly $n$-resolvable quasi-regular expansion $\sU$ of $\sT$ may be chosen so that $(X,\sU)\in\PP$. \end{theorem} {\sl Proof: } (b)$\Rightarrow$(a). If (i) holds there is nothing to prove. We\ assume that (i) fails and we show (ii)$\Rightarrow$(a) and (iii)$\Rightarrow$(a). Let $\{D_i:i<n\}$ be a dense partition of $(X,\sT)$ and let $(A,\sT)$ be a subspace of $(X,\sT)$ such that either $(A,\sT)$ is $n$-resolvable and nowhere dense in $(X,\sT)$, or $(A,\sT)$ is ($2n$)-resolvable. Replacing $A$ by $\overline{A}^{(X,\sT)}$ if necessary, we assume that $A$ is $\sT$-closed in $X$. If $(A,\sT)$ is $\omega$-resolvable the desired conclusion is given by Lemma~{\ref{subspace}} (using the fact that quasi-regularity is a CC$\oplus$ property), so we assume that $(A,\sT)$ is not $\omega$-resolvable; then by Lemma~\ref{illanes} there is $m$ such that $n< m<\omega$ and $(A,\sT)$ is exactly $m$-resolvable. Let $\{E_i:i<m\}$ witness that fact, and set $E:=\bigcup_{i<n}\,E_i$. 
We claim \begin{equation}\label{8} \mbox{If~} {\mathcal U}\mbox{~is an expansion of~} {\mathcal T}\mbox{~in which~} E \mbox{~is~} \sU\mbox{-clopen, then~} (X,\sU) \mbox{~is not~} (n+1)\mbox{-resolvable.} \end{equation} Indeed, if $(X,\sU)$ is $(n+1)$-resolvable then its clopen subset $(E,\sU)$ admits a dense partition of the form $\{F_j:j<n+1\}$; then $\{F_j:j<n+1\}\cup\{E_i:n\leq i<m\}$ would be a dense $(m+1)$-partition of $(A,\sT)$, contrary to the fact that $(A,\sT)$ is not $(m+1)$-resolvable. Thus (\ref{8}) is proved. Suppose now that $(A,\sT)$ is nowhere dense in $(X,\sT)$, as in (ii), and set $\sU:=\langle\sT\cup\{E,X\backslash E\}\rangle$. Each nonempty set $U\in\sU$ meets either $E$ or $X\backslash E$, hence (since $\int_{(X,\sT)}\,A=\emptyset$) meets either $E$ or $X\backslash A$; hence $U$ meets each set $D_i$ ($i<n$) or each set $E_i$ ($i<n$), so $\{D_i\cup E_i:i<n\}$ witnesses the fact that $(X,\sU)$ is $n$-resolvable. It then follows from (\ref{8}), since $E$ is $\sU$-clopen, that $(X,\sU)$ is exactly $n$-resolvable. Suppose that $(A,\sT)$ is ($2n$)-resolvable, as in (iii), set $\sU:=\{\sT\cup\{X\backslash A,E,A\backslash E\}\rangle$, and let $\emptyset\neq U\in\sU$. Clearly $U$ meets either $X\backslash A$ or $E$ or $A\backslash E$. It follows, since $(X\backslash A,\sU)=(X\backslash A,\sT)$ and $(E,\sU)=(E,\sT)$ and $(A\backslash E,\sU)=(A\backslash E,\sT)$, that $U$ meets either each set $D_i$ ($i<n$) or each set $E_i$ ($i<n$) or each set $E_{n+i}$ ($i<n$). Thus $\{D_i\cup E_i\cup E_{n+i}:i<n\}$ witnesses the fact that $(X,\sU)$ is $n$-resolvable. It then follows from (\ref{8}) that $(X,\sU)$ is exactly $n$-resolvable. Every CC$\oplus$ property $\PP$ is preserved under passage from $(X,\sT)$ to $(X,\sU)$ under the constructions given in (ii) and (iii), so in particular $(X,\sU)$ is quasi-regular since $(X,\sT)$ was assumed quasi-regular. (a)~$\Rightarrow$~(b). Assume that (b) fails. For $i<n$ let $R_i:=\bigcup\{S\subseteq X:(S,\sT)$ is $(n+i+1$)-resolvable$\}$,\\ \noindent and note that $R_0\supseteq R_1\supseteq\ldots\supseteq R_{i-1}\supseteq R_i \ldots\supseteq R_{n-1}$ \noindent with $R_0=X$ (since $(X,\sT)$ is $(n+1)$-resolvable) and $R_{n-1}=\emptyset$ (since $(X,\sT)$ is hereditarily $(2n)$-irresolvable). Since each set $R_i$ ($i<n-1$) is closed in $(X,\sT)$, each set $R_i\backslash R_{i+1}$ ($i<n-1$) is open in $(R_i,\sT)$; further, $R_i\backslash R_{i+1}$ is $(n+i+1)$-resolvable and hereditarily $(n+i+2)$-irresolvable. For $i<n-1$ let $U_i:=\int_{(X,\sT)}\,R_i$, and $B_i:=R_i\backslash U_i$. Let $\sU$ be a refinement of $\sT$ such that $(X,\sU)$ is exactly $n$-resolvable, and set $R:=\bigcup\{S\subseteq X:S$ is $(n+1)$-resolvable$\}$.\\ \noindent Then with $W:=X\backslash R$ we have $\emptyset\neq W\in\sU$, and $(W,\sU)$ is $n$-resolvable and hereditarily $(n+1)$-irresolvable. Since $X=\bigcup_{i<n-1}\,(R_i\backslash R_{i+1})$, we have $W=\bigcup_{i<n-1}\,(W\cap(R_i\backslash R_{i+1}))$. Each $B_i$ ($i<n-1$) is closed and nowhere dense in $(R_i,\sT)$, hence in $(X,\sT)$, so $B:=\bigcup_{i<n-1}\,B_i$ and each of its subsets is nowhere dense in $(X,\sT)$; since $(X,\sT)$ is $n$-maximal, $(B,\sT)$ and each of its subsets is $n$-irresolvable. Then since $(W,\sT)$ is $n$-resolvable the relation $W\subseteq B$ fails, so there is $i<n-1$ such that $V:=W\cap(U_i\backslash R_{i+1})$ is nonempty. 
From $i<n-1$ it follows with $m:=n+i+1$ that $1<n<m=n+i+1<n+n-1+1=2n$\\ \noindent and the proof concludes with the observation that the following two facts contradict (the instance $X=V$ of) Lemma~\ref{neg}. (A) the space $(V,\sT)$ is $n$-maximal, $(n+i+1)$-resolvable, and hereditarily $(n+i+2)$-irresolvable; and (B) $\emptyset\neq V\in\sU$, $(V,\sU)$ is $n$-resolvable, quasi-regular, and not $(n+1)$-resolvable; that is, the space $(V,\sU)$ is exactly $n$-resolvable and quasi-regular. $\Box$ \end{document}
\begin{document} \title{Eulerian Methods for Visualizing Continuous Dynamical Systems using Lyapunov Exponents} \author{ Guoqiao You\thanks{School of Science, Nanjing Audit University, Nanjing, 211815, China. Email: {\bf [email protected]}} \and Tony Wong\thanks{Department of Mathematics and Institute of Applied Mathematics, University of British Columbia, Vancouver V6T 1Z2, BC, Canada. Email: {\bf [email protected]}} \and Shingyu Leung\thanks{Department of Mathematics, the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong. Email: {\bf [email protected]}} } \maketitle \begin{abstract} \reminder{We propose a new Eulerian numerical approach for constructing the forward flow maps in continuous dynamical systems. The new algorithm improves the original formulation developed in \cite{leu11,leu13} so that the associated partial differential equations (PDEs) are solved forward in time and, therefore, the \textit{forward} flow map can now be determined \textit{on the fly}. Due to the simplicity in the implementations, we are now able to efficiently compute the unstable coherent structures in the flow based on quantities like the finite time Lyapunov exponent (FTLE), the finite size Lyapunov exponent (FSLE) and also a related infinitesimal size Lyapunov exponent (ISLE). When applied to the ISLE computations, the Eulerian method is particularly computationally efficient.} For each separation factor $r$ in the definition of the ISLE, typical Lagrangian methods require shooting and monitoring an individual set of ray trajectories. If this separation factor is changed, these methods have to restart the whole computation all over again. The proposed Eulerian method, however, requires extracting only an isosurface of volumetric data for an individual value of $r$, which can be easily done using any well-developed efficient interpolation method or simply an isosurface extraction algorithm. Moreover, we provide a theoretical link between the FTLE and the ISLE fields which explains the similarity in these solutions observed in various applications. \end{abstract} \section{Introduction} It is an important task to visualize, understand and then extract useful information from complex continuous dynamical systems in many science and engineering fields, including flight simulation \reminderre{\cite{carmoh08,tanchahal10}}, wave propagation \reminderre{\cite{tanpea10}}, bio-inspired fluid flow motions \reminderre{\cite{lipmoh09,grerowsmi10,lukyanfau10}}, ocean current models \reminderre{\cite{lekleo04,shalekmar05}}, and so on. There are, therefore, many existing tools for studying dynamical systems. For example, a lot of work in dynamical systems focuses on understanding different types of behaviors in a model, such as elliptical zones, hyperbolic trajectories, chaotic attractors, mixing regions, to name just a few. One interesting tool is to partition the space-time domain into subregions based on a certain quantity measured along passive tracers advected according to the associated dynamical system. Because of such a Lagrangian property in the definition, the corresponding partition is named the Lagrangian coherent structure (LCS). One commonly used quantity is the so-called finite time Lyapunov exponent (FTLE) \cite{hal01,hal01b,hal11,karhal13} which measures the rate of separation of a passive tracer with an infinitesimal perturbation in the initial condition, over a finite period of time.
Another popular diagnostic of trajectory separation in dynamical systems is the finite-size Lyapunov exponent (FSLE) \cite{abcpv97,hlht11,cenvul13} which is especially popular in applications from oceanography. Instead of introducing an infinitesimal perturbation in the initial condition of the fluid particles, FSLE considers a finite perturbation. It does not measure the rate of separation but computes the time required to separate two adjacent tracers up to a certain distance. Since all these quantities have long been treated as Lagrangian properties of a continuous dynamical system, most, if not all, numerical methods are developed based on the traditional Lagrangian ray tracing method, solving the ODE system using any well-developed numerical integrator. These approaches, however, require the velocity field to be defined at arbitrary locations in the whole space, depending on the location of each individual particle. This implies that one has to in general implement some interpolation routines in the numerical code. Unfortunately, it could be numerically challenging to develop an interpolation approach which is computationally cheap, \reminder{monotone, and high order accurate}. An Eulerian approach to the flow map computation for the FTLE was first proposed in \cite{leu11}, which incorporates the level set method \cite{oshset88}. Based on the phase flow method \cite{canyin06,leuqia09}, we have developed a backward phase flow method for the Eulerian FTLE computations in \cite{leu13}. Based on these partial differential equation (PDE) based algorithms, more recently, we have developed in \cite{youleu14} an efficient Eulerian numerical approach for extracting invariant sets in a continuous dynamical system in the extended phase space (the $\mathbf{x}-t$ space). In \cite{youleu14b}, we have proposed a simple algorithm called VIALS which determines the average growth in the surface area of a family of level surfaces. This Eulerian tool relates closely to the FTLE and provides an alternative for understanding complicated dynamical systems. In this paper, we are going to further develop these Eulerian tools for continuous dynamical systems. There are several main contributions of this paper. We will first improve the numerical algorithm for \textit{forward} flow map construction. The papers \cite{leu11,leu13} have proposed to construct the \textit{forward} flow map defined on a fixed Cartesian mesh by solving the level set equations or the Liouville equations in the \textit{backward} direction. In particular, to compute the forward flow map from the initial time $t=0$ to the final time $t=T$, one needs to solve the Liouville equations \textit{backward} in time from $t=T$ to $t=0$ with the initial condition given at $t=T$. This implementation is numerically inconvenient, especially when incorporated with some computational fluid dynamics (CFD) solvers, since the velocity field is loaded from the current time $t=T$ backward in time to the initial time. This implies that the whole field at all time steps has to be stored on the disk, which might not be practical at all. In this paper, we first propose an Eulerian approach to compute the \textit{forward} flow map \textit{on the fly} so that the PDE is solved \textit{forward} in time.
\reminder{Developed using this improved algorithm, we propose a simple Eulerian approach to compute the ISLE.} For each separation factor $r$ in the definition of the ISLE, a typical Lagrangian method requires shooting an individual set of rays and monitoring the variations in these trajectories. If the factor $r$ is changed, these methods will have to restart the whole computation all over again. In this work, we are going to develop an efficient algorithm which provides the ISLE field for any arbitrary separation factor $r$. For an individual value of $r$, the method requires extracting only an isosurface of volumetric data. This can be done easily by any well-developed interpolation algorithm or simply the function {\sf isosurface} in {\sf MATLAB}. Another main contribution of this work is to provide a theoretical link between the FTLE and the ISLE. Even though these two quantities are measuring different properties of the flow, it has been reported widely, e.g.\ in \cite{ppss14}, that they give visually very similar solutions in many examples. We are going to show in particular that their ridges can be identified through the ridge of the largest eigenvalue of the associated Cauchy-Green deformation tensor of the flow. \reminder{To the best of our knowledge, this theoretical result is the first one quantitatively revealing the relationship between the FTLE ridges and the ISLE ridges.} Indeed, comparisons between FTLE and FSLE have been made in various studies. The paper \cite{blrv01} has pointed out that the FTLE might be unable to recognize the boundaries between the chaotic and the large-scale mixing regimes. The article \cite{karhal13} has argued that the FSLE has several limitations in Lagrangian coherence detection, including local ill-posedness, spurious ridges and intrinsic jump-discontinuities. Nevertheless, the work \cite{ppss14} has stated that these two concepts might yield similar results, if properly calibrated, and could even be interchangeable in \reminder{many flow visualization applications such as oceanography and chemical kinetics.} In this paper, however, we would like to emphasize the computational aspect of using these Lyapunov exponents, rather than compare the advantages and the disadvantages of these different methods. The preference for one or the other could be based on the tradition in various fields, or the availability of the computational resources. This paper is organized as follows. In Section \ref{Sec:Background} we will give a summary of several important concepts including the FTLE, FSLE and ISLE, and also our original Eulerian formulations for computing the flow maps and the FTLE. Then, our proposed Eulerian algorithms will be given in Section \ref{Sec:Proposal}. We will then point out some theoretical properties and relationships between the FTLE and the ISLE in Section \ref{Sec:Relation}. Finally, some numerical examples will be given in Section \ref{Sec:Examples}. \section{Background} \label{Sec:Background} In this section, we will summarize several useful concepts and methods, which will be needed for the developments that we are proposing. We first introduce the definition of various closely related Lyapunov exponents including the finite time Lyapunov exponent (FTLE) \cite{halyua00,hal01,hal01b,shalekmar05,lekshamar07}, the finite size Lyapunov exponent (FSLE) \cite{abccv97,abcpv97,letkan00} and also the infinitesimal size Lyapunov exponent (ISLE) \cite{karhal13}.
Then we discuss typical Lagrangian approaches and summarize the original Eulerian approaches as in \cite{leu11,leu13,youleu14,youleu14b}. The discussions here, however, will definitely not be a complete survey. We refer any interested readers to the above references and the references therein. \subsection{FTLE, FSLE and ISLE} In this paper, we consider a continuous dynamical system governed by the following ordinary differential equation (ODE) \begin{equation} \mathbf{x}'(t;\mathbf{x}_0,t_0)=\mathbf{u}(\mathbf{x}(t;\mathbf{x}_0,t_0),t) \label{Eqn:ODE} \end{equation} with the initial condition $\mathbf{x}(t_0;\mathbf{x}_0,t_0)=\mathbf{x}_0$. The velocity field $\mathbf{u}:\Omega \times \mathbb{R} \rightarrow \mathbb{R}^d$ is a time-dependent Lipschitz function where $\Omega \subset \mathbb{R}^d$ is a bounded domain in the $d$-dimensional space. To simplify the notation in the later sections, we collect the solutions to this ODE for all initial conditions in $\Omega$ at all times $t\in\mathbb{R}$ and introduce the flow map $$ \Phi_a^b:\Omega \times \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}^d $$ such that $\Phi_a^b(\mathbf{x}_0)=\mathbf{x}(b;\mathbf{x}_0,a)$ represents the arrival location $\mathbf{x}(b;\mathbf{x}_0,a)$ of the particle trajectory satisfying the ODE (\ref{Eqn:ODE}) with the initial condition $\mathbf{x}(a;\mathbf{x}_0,a)=\mathbf{x}_0$ at the initial time $t=a$. This implies that the mapping will take a point from $\mathbf{x}(a;\mathbf{x}_0,a)$ at $t=a$ to another point $\mathbf{x}(b;\mathbf{x}_0,a)$ at $t=b$. The finite time Lyapunov exponent (FTLE) \cite{halyua00,hal01,hal01b,shalekmar05,lekshamar07} measures the rate of separation between adjacent particles over a finite time interval with an infinitesimal perturbation in the initial location. Mathematically, considering the initial time to be 0 and the final time to be $t$, we have the change in the initial infinitesimal perturbation given by \begin{eqnarray*} \delta \mathbf{x}(t) &=& \Phi_0^t(\mathbf{x}+\delta \mathbf{x}(0)) - \Phi_0^t(\mathbf{x}) \nonumber\\ &=& \nabla \Phi_0^t(\mathbf{x}) \delta \mathbf{x}(0) + \mbox{ higher order terms} \, . \end{eqnarray*} The leading order term of the magnitude of this perturbation is given by $$ \| \delta \mathbf{x}(t) \| = \sqrt{ \left< \delta \mathbf{x}(0), [\nabla \Phi_0^t(\mathbf{x})]^* \nabla \Phi_0^t(\mathbf{x}) \delta \mathbf{x}(0) \right> } \, . $$ With the \reminder{Cauchy-Green strain tensor} $\reminder{C_0^t(\mathbf{x})}=[\nabla \Phi_0^t(\mathbf{x})]^* \nabla \Phi_0^t(\mathbf{x})$, we obtain the \reminder{maximum deformation} $$ \max_{\delta \mathbf{x}(0)} \|\delta \mathbf{x}(t) \| = \sqrt{\lambda_{\max}[\reminder{C_0^t(\mathbf{x})}]} \| \mathbf{e}(0)\| = e^{ \sigma_0^t(\mathbf{x}) |t|} \|\mathbf{e}(0) \| \, , $$ where $\mathbf{e}(0)$ is the eigenvector associated with the largest eigenvalue of the deformation tensor. Using this quantity, the finite-time Lyapunov exponent (FTLE) $\sigma_0^t(\mathbf{x})$ is defined as \begin{equation} \sigma_0^t(\mathbf{x}) = \frac{1}{|t|} \ln \sqrt{\lambda_{\max}[\reminder{C_0^t(\mathbf{x})}]} = \frac{1}{|t|} \ln \sqrt{\lambda_0^t(\mathbf{x})} \, , \label{Eqn:FTLE} \end{equation} where $\lambda_0^t(\mathbf{x})=\lambda_{\max}(\reminder{C_0^{t}(\mathbf{x})})$ denotes the largest eigenvalue of the Cauchy-Green tensor. The absolute value of $t$ in the expression reflects the fact that we can trace the particles either \textit{forward} or \textit{backward} in time.
In the case when $t<0$, we are measuring the maximum stretch \textit{backward} in time and this corresponds to the maximum compression \textit{forward} in time. To distinguish different measures, we call $\sigma_0^t(\mathbf{x})$ the \textit{forward} FTLE if $t>0$ and the \textit{backward} FTLE if $t<0$. A related concept, widely used in oceanography, is the finite size Lyapunov exponent (FSLE), which measures the time it takes to separate two adjacent particles up to a certain distance \cite{abcpv97,hlht11,cenvul13}. There are two length scales in the original definition of the FSLE. One is the initial distance between two particles which is denoted by $\epsilon$. The other one is the so-called separation factor, denoted by $r>1$. For any trajectory with \reminder{initial location} $\mathbf{x}(0;\mathbf{x}_1,0)=\mathbf{x}_1$ with $\|\mathbf{x}_0-\mathbf{x}_1\|=\epsilon$, we first determine the shortest time $\tau_r(\mathbf{x}_1)>0$ so that $$ \|\mathbf{x}(\tau_r(\mathbf{x}_1);\mathbf{x}_0,0)-\mathbf{x}(\tau_r(\mathbf{x}_1);\mathbf{x}_1,0)\| = \| \Phi_0^{\tau_r(\mathbf{x}_1)}(\mathbf{x}_0)-\Phi_0^{\tau_r(\mathbf{x}_1)}(\mathbf{x}_1) \| = r\epsilon \, . $$ The FSLE at the point $\mathbf{x}_0$ is then defined as $$ \gamma_{\epsilon,r}(\mathbf{x}_0,0)=\max_{\|\mathbf{x}_0-\mathbf{x}_1\|=\epsilon} \frac{ \ln r}{\lvert \tau_r(\mathbf{x}_1)\rvert} \, . $$ Numerically, on the other hand, the maximization over the constraint $\|\mathbf{x}_0-\mathbf{x}_1\|=\epsilon$ might require special considerations and the corresponding optimization problem might not be easily solved by typical optimization algorithms. One simple approximation is to consider only $2^d$-neighbors of $\mathbf{x}_0$ with each point $\mathbf{x}_1$ obtained by perturbing one coordinate of $\mathbf{x}_0$ with distance $\epsilon$. To avoid introducing two \reminder{separate} length scales ($\epsilon$ and $r$) and the approximation by $2^d$ neighbors in the final optimization step, one theoretically takes the limit as the length scale $\epsilon$ tends to 0. The FSLE can then be reduced to the infinitesimal size Lyapunov exponent (ISLE) as defined in \cite{karhal13} given by \begin{equation} \gamma_r(\mathbf{x},0)=\frac{\ln r}{\lvert \tau_r(\mathbf{x}) \rvert} \label{Eqn:ISLE} \end{equation} where $\lvert \tau_r(\mathbf{x}) \rvert$ is the shortest time for which \reminder{$\sqrt{\lambda_0^{\tau_r(\mathbf{x})}(\mathbf{x})}=r$}. Even with the tremendous theoretical developments, there are not many discussions on the numerical implementation. Because the quantity is defined using Lagrangian particle trajectories, most numerical algorithms for computing the FTLE or the FSLE are developed based on the ray tracing method which tries to solve the flow system (\ref{Eqn:ODE}) using a numerical integrator. In the two dimensional case, for example, the flow map is first determined by solving the ODE system with the initial condition imposed on a uniform mesh $\mathbf{x}=\mathbf{x}_{i,j}$. Then one can apply simple finite differences to determine the deformation tensor and, therefore, its eigenvalues. The computation of the FTLE is a little easier. For a given final time, one simply solves the ODE system up to that particular time level. Then the FTLE at each grid location can be computed directly from these eigenvalues.
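To make this Lagrangian post-processing step concrete, the following minimal Python sketch (our illustration only; the function name and interface are ours and not taken from the references) computes the FTLE field (\ref{Eqn:FTLE}) from a flow map that has already been sampled on a uniform grid, for instance by ray tracing: the gradient $\nabla\Phi_0^t$ is approximated by centered differences and the largest eigenvalue of the Cauchy-Green tensor is extracted at every grid point.
\begin{verbatim}
import numpy as np

def ftle_from_flow_map(phi_x, phi_y, dx, dy, t):
    # phi_x, phi_y: (ny, nx) arrays with the two components of the
    # flow map Phi_0^t sampled at the grid points x_{i,j}
    dphix_dy, dphix_dx = np.gradient(phi_x, dy, dx)   # centered differences
    dphiy_dy, dphiy_dx = np.gradient(phi_y, dy, dx)
    ftle = np.zeros_like(phi_x)
    for j in range(phi_x.shape[0]):
        for i in range(phi_x.shape[1]):
            # Jacobian of the flow map at the grid point
            F = np.array([[dphix_dx[j, i], dphix_dy[j, i]],
                          [dphiy_dx[j, i], dphiy_dy[j, i]]])
            C = F.T @ F                          # Cauchy-Green tensor
            lam_max = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
            ftle[j, i] = np.log(np.sqrt(lam_max)) / abs(t)
    return ftle
\end{verbatim}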
The ISLE, on the other hand, is a little more involved since for a particular separation factor, $r$, one has to continuously monitor the change in these eigenvalues and record the precise moment when it reaches the threshold value. Computational complexity could be one major concern of these numerical tools. Let $M=O(\Delta x^{-1})$ and $N=O(\Delta t^{-1})$ be the number of grid points in each physical direction and in the temporal direction, respectively. The total number of operations required to compute the flow map $\Phi_{t_0}^{t_N}(\mathbf{x}_{i,j})$ is, therefore, $O(M^2N)$. Note however that such complexity is needed for determining the FTLE or the ISLE on one single time level. For all time levels, the computational complexity for the usual Lagrangian implementation is $O(M^2N^2)$. There are several nice ideas to improve the computational time. One natural idea is to concentrate the computations near the locations where the solutions have large derivatives by implementing an adaptive mesh strategy \cite{sadpei07,ggth07,lekros10}. Another interesting approach is to avoid the computation of the overall map $\Phi_{t_0}^{t_N}$ but to numerically decompose it into sub-maps $\Phi_{t_n}^{t_{n+1}}$ \cite{brurow10}. Then, the extra effort from $\Phi_{t_0}^{t_{N}}$ to $\Phi_{t_1}^{t_{N+1}}$, given by \begin{eqnarray*} \Phi_{t_0}^{t_{N}} &=& \Phi_{t_{N-1}}^{t_N} \circ \cdots \circ \Phi_{t_1}^{t_2} \circ \Phi_{t_0}^{t_1} \\ \Phi_{t_1}^{t_{N+1}} &=& \Phi_{t_{N}}^{t_{N+1}} \circ \cdots \circ \Phi_{t_2}^{t_3} \circ \Phi_{t_1}^{t_2} \, , \end{eqnarray*} is simply some interpolations. All these proposed improvements are only on the flow map computations, and therefore the FTLE. For the FSLE or the ISLE computations on one single time level and one particular $r$, say $\gamma_r(\mathbf{x}_{i,j},t_0)$, one has to go through all $N$ mesh points in the temporal direction. For a different time level or even a different separation factor $r$, we have to go through the same procedure all over again. We are not aware of many efficient algorithms for the FSLE or the ISLE implementation. \subsection{An Eulerian method for computing the flow map} \label{SubSec:oldEulerian} \begin{figure} \caption{Lagrangian and Eulerian interpretations of the function $\Psi$ \cite{leu11}} \label{Fig:ForwardBackward} \end{figure} We briefly summarize the Eulerian approach based on the level set method and the Liouville equation. We refer interested readers to \cite{leu11} and the references therein. We define a vector-valued function {$\Psi=(\Psi^1,\Psi^2,\cdots,\Psi^d): \Omega \times \mathbb{R} \rightarrow \mathbb{R}^d$.} At $t=0$, we initialize these functions by \begin{equation} \Psi(\mathbf{x},0) = \mathbf{x} =(x^1,x^2,\cdots,x^d) \, . \label{Eqn:LevelSetEquationIC} \end{equation} These functions provide a labeling for any particle in the phase space at $t=0$. In particular, any particle initially located at {$(\mathbf{x},t)=(\mathbf{x}_0,0)=(x_0^1,x_0^2,\cdots,x_0^d,0)$} in the extended phase space can be \textbf{implicitly} represented by the intersection of $d$ {codimension-1 surfaces represented by $\cap_{i=1}^d \{\Psi^i(\mathbf{x},0)=x_0^i\}$ in $\mathbb{R}^d$}. Following the particle trajectory with $\mathbf{x}=\mathbf{x}_0$ as the initial condition in a given velocity field, {any particle identity should be preserved in the Lagrangian framework and this implies that} the material derivative of these level set functions is zero, i.e. \begin{equation*} \frac{D \Psi(\mathbf{x},t)}{Dt} = \mathbf{0} \, .
\end{equation*} This implies the following level set equations, or the Liouville equations, \begin{equation} \frac{\partial \Psi(\mathbf{x},t)}{\partial t} + (\mathbf{u} \cdot \nabla) \Psi(\mathbf{x},t) = \mathbf{0} \label{Eqn:LevelSetEquation} \end{equation} with the initial condition (\ref{Eqn:LevelSetEquationIC}). {The above \textbf{implicit} representation} embeds all path lines in the extended phase space. For instance, the trajectory of a particle initially located at $(\mathbf{x}_0,0)$ can be found by determining the intersection of $d$ {codimension-1 surfaces represented by $\cap_{i=1}^d \{\Psi^i(\mathbf{x},t)=x_0^i\}$} in the extended phase space. Furthermore, the forward flow map at {a grid location} $\mathbf{x}=\mathbf{x}_0$ from $t=0$ to $t=T$ is given by $\Phi_0^T(\mathbf{x}_0) = \mathbf{y}$ {where} $\mathbf{y}$ satisfies $\Psi(\mathbf{y},0+T)=\Psi(\mathbf{x}_0,0) \equiv \mathbf{x}_0$. {Note that, in general, $\mathbf{y}$ is a non-mesh location. The typical two dimensional scenario is illustrated in Figure \ref{Fig:ForwardBackward} (a).} The solution to (\ref{Eqn:LevelSetEquation}) contains much more information than what {was referred to above}. Consider a given mesh location $\mathbf{y}$ in the phase space at {the} time $t=T$, as shown in Figure \ref{Fig:ForwardBackward} (b), i.e. $(\mathbf{y},T)$ in the extended phase space. As discussed in our previous work, these level set functions $\Psi(\mathbf{y},T)$ {defined on a uniform Cartesian mesh} in fact give the {backward} flow map from $t=T$ to $t=0$, i.e. $ \Phi_T^0(\mathbf{y})=\Psi(\mathbf{y},T) $. Moreover, {the} solution to the level set equations (\ref{Eqn:LevelSetEquation}) for $t\in (0,T)$ provides also backward flow maps for all intermediate times, i.e. $ \Phi_t^0(\mathbf{y})=\Psi(\mathbf{y},t) $. To compute the forward flow map, on the other hand, \cite{leu11} has proposed to simply reverse the above process by initializing the level set functions at $t=T$ by $ \Psi(\mathbf{x},T) = \mathbf{x} $ and {solving} the corresponding level set equations (\ref{Eqn:LevelSetEquation}) {backward} in time. A typical algorithm of this type is given in {\sf Algorithm 1}. Note that in the original Lagrangian formulation, the {attracting} LCS or the {unstable} manifold is obtained by {backward} time tracing, while the {repelling} LCS or the {stable} manifold is computed by {forward} time integration. In this current Eulerian formulation, on the other hand, {forward} time integration of the Liouville equations gives the {attracting} LCS and {backward} time marching provides the {repelling} LCS. \pic \\ \noindent {\sf Algorithm 1: Computing the {forward} flow map $\Phi_0^T(\mathbf{x})$: \begin{enumerate} \item Discretize the computational domain to get $x_i, y_j,t_k$. \item Initialize the level set functions on the last time level $t=t_N$ by \begin{eqnarray*} \Psi^1(x_i,y_j,t_k) &=& x_i \nonumber\\ \Psi^2(x_i,y_j,t_k) &=& y_j \, .
\end{eqnarray*} \item Solve the Liouville equations for each individual level set function $l=1,2$ $$ \frac{\partial \Psi^l}{\partial t} + (\mathbf{u} \cdot \nabla) {\Psi}^l = 0 $$ from $t=t_{k}$ down to $t=0$ using any well-developed high order numerical method like WENO5-TVDRK2 \cite{liuoshcha94,shu97,gotshu98} with the boundary conditions \begin{eqnarray} \Psi(\mathbf{x},t)|_{\mathbf{x} \in \partial \Omega} = \mathbf{x} \, && \mbox{ if $\mathbf{n} \cdot \mathbf{u} < 0$} \label{Eqn:Inflow} \\ \mathbf{n} \cdot \nabla \Psi^l(\mathbf{x},t)|_{\mathbf{x} \in \partial \Omega} = 0 \, && \mbox{ if $\mathbf{n} \cdot \mathbf{u} > 0$} \label{Eqn:Outflow} \end{eqnarray} where $\mathbf{n}$ is the outward normal of the boundary. \item Assign $\Phi_0^T(x_i,y_j)=\Psi(x_i,y_j,0)$. \end{enumerate} } \pic \section{Our proposed approaches} \label{Sec:Proposal} In this section, we introduce two new Eulerian algorithms for flow visualizations. The first algorithm improves the Eulerian numerical method for computing the forward flow map. Instead of solving the PDE \textit{backward} in time as in Section \ref{SubSec:oldEulerian} and \cite{leu11}, we develop an Eulerian PDE algorithm to construct the forward flow map from $t=0$ to the final time $t=T$ so that we do not need to load the time-dependent velocity field from the terminal time level backward to the initial time. In other words, we construct the forward flow map \textit{on the fly}. This is computationally more natural and convenient. \reminder{Building on this simple algorithm, we then develop an efficient and systematic PDE based method for computing the ISLE field.} A typical challenge in obtaining such a field is that the choice of the separation factor $r$ is in general flow dependent. One has to vary $r$ to extract the necessary information. Usual Lagrangian methods unfortunately require shooting different sets of initial rays for each individual separation factor $r$. This is computationally very inefficient. The proposed Eulerian formulation requires either some simple interpolations or only one single level surface extraction which can be easily implemented using any well-developed contour extraction function such as \textsf{isosurface} in \textsf{MATLAB}. \subsection{A forward time marching PDE approach for constructing the forward flow map} \label{SubSec:newEulerian} One disadvantage of the previous Eulerian approach for forward flow map computation as discussed in Section \ref{SubSec:oldEulerian} and \cite{leu11,leu13,youleu14} is that the level set equation \begin{equation} \frac{\partial \Psi(\mathbf{x},t)}{\partial t} + (\mathbf{u} \cdot \nabla) \Psi(\mathbf{x},t) = \mathbf{0} \label{Eqn:LevelSet} \end{equation} has to be first solved \textit{backward} in time from $t=T$ to $t=0$. Once we have the final solution at the time level $t=0$, one can then identify the flow map $\Phi_0^T(\mathbf{x})$ by $\Psi(\mathbf{x},0)$. This could be inconvenient especially when we need to access the intermediate forward flow maps. In this section, we propose a new algorithm to construct the forward flow map \textit{on the fly}. Consider the forward flow map from $t=0$ to $t=T$. Suppose the time domain $[0,T]$ is discretized by $N+1$ discrete points $t_n$, where $t_0=0$ and $t_N=T$. Then on each subinterval $[t_n,t_{n+1}]$ for $n=0,1,\cdots,N-1$, we use the same method as described in Section \ref{SubSec:oldEulerian} to construct the forward flow map $\Phi_{t_n}^{t_{n+1}}$.
In particular, we solve the level set equation (\ref{Eqn:LevelSet}) \textit{backward} in time from $t=t_{n+1}$ to $t=t_n$ with the terminal condition $\Psi(\mathbf{x},t_{n+1})=\mathbf{x}$ imposed on the time level $t=t_{n+1}$. Then the forward flow map is given by $\Phi_{t_n}^{t_{n+1}}(\mathbf{x})=\Psi(\mathbf{x},t_{n})$. Once we have obtained this one step \textit{forward} flow map $\Phi_{t_n}^{t_{n+1}}$, the \textit{forward} flow map from $t=0$ to $t=T$ can be obtained using the composition $\Phi_0^{t_{n+1}}=\Phi_{t_n}^{t_{n+1}}\circ \Phi_0^{t_{n}}$. This can be easily done by typical numerical interpolation methods. To prevent extrapolation, we could enforce that the values of $\Phi_{t_n}^{t_{n+1}}$ remain inside $[ x_{\min}, x_{\max} ] \times [y_{\min},y_{\max}]$. Here we emphasize that even though we solve equation (\ref{Eqn:LevelSet}) \textit{backward} in time for each subinterval, we only access the velocity field data on the current subinterval, so the data are loaded in the forward direction and the forward flow map is indeed obtained \textit{on the fly}. \reminder{As a simple example, we consider the first order Euler method in the temporal direction. The forward flow map $\Phi_{t_n}^{t_{n+1}}$ can be obtained by first solving $$ \frac{\Psi_{i,j}^{n}-\Psi_{i,j}^{n+1}}{\Delta t}-\mathbf{u}_{i,j}^{n+1}\cdot \nabla\Psi_{i,j}^{n+1} = \mathbf{0} $$ with the terminal condition $\Psi_{i,j}^{n+1}=\mathbf{x}_{i,j}$, and then assigning $\Phi_{t_n}^{t_{n+1}}(\mathbf{x}_{i,j})=\Psi_{i,j}^{n}$. Higher order generalization is straightforward.} For example, if we use the TVD-RK2 method in the temporal direction \cite{oshshu91}, we can compute the forward flow map $\Phi_{t_n}^{t_{n+1}}$ by \begin{eqnarray*} \frac{\reminder{\hat{\Psi}_{i,j}^{n}}-\Psi_{i,j}^{n+1}}{\Delta t}-\mathbf{u}_{i,j}^{n+1}\cdot \nabla\Psi_{i,j}^{n+1} &=& \mathbf{0} \\ \frac{\reminder{\hat{\Psi}_{i,j}^{n-1}}-\reminder{\hat{\Psi}_{i,j}^{n}}}{\Delta t}-\mathbf{u}_{i,j}^{n}\cdot \nabla \reminder{\hat{\Psi}_{i,j}^{n}} &=&\mathbf{0} \\ \Psi_{i,j}^n &=& \frac{1}{2} \left( \reminder{\hat{\Psi}_{i,j}^{n-1}}+\reminder{\Psi_{i,j}^{n+1}} \right) \end{eqnarray*} with the terminal condition $\Psi_{i,j}^{n+1}=\mathbf{x}_{i,j}$, \reminder{and the functions $\hat{\Psi}_{i,j}^{n-1}$ and $\hat{\Psi}_{i,j}^{n}$ are two intermediate predicted solutions at the time levels $t_{n-1}$ and $t_n$, respectively.} Then the flow map from the time level $t=t_n$ to $t=t_{n+1}$ is given by $\Phi_{t_n}^{t_{n+1}}(\mathbf{x})=\Psi^n(\mathbf{x})$. It is rather natural to decompose the flow map $\Phi_{0}^{T}$ into a composition of maps $\Phi_{0}^{T} = \Phi_{t_{N-1}}^{t_N} \circ \cdots \circ \Phi_{t_1}^{t_2} \circ \Phi_{0}^{t_1}$. For example, such an idea has been used recently in \cite{brurow10} to improve the computational efficiency of the Lagrangian FTLE construction between various time levels. \reminder{Define the interpolation operator by $\mathcal{I}$, which returns the interpolated mesh values from the non-mesh values. Then the flow map $\Phi_{t_0}^{t_N}$ can be decomposed into $ \Phi_{t_0}^{t_N} = \mathcal{I} \Phi_{t_{N-1}}^{t_N} \circ \cdots \circ \mathcal{I} \Phi_{t_1}^{t_2} \circ \mathcal{I} \Phi_{t_0}^{t_1}$. If the maps $\Phi_{t_k}^{t_{k+1}}$ for $k=0,\cdots,N-2$ are all stored once they are computed, one can form $\Phi_{t_0}^{t_N}$ by determining only $\Phi_{t_{N-1}}^{t_N}$. } The idea in that work shares some similarities with what we are implementing; a minimal sketch of the composition step in our setting is given below.
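In this sketch (our own illustration in Python; the function and variable names are not from the references), one composition step $\Phi_0^{t_{n+1}}=\Phi_{t_n}^{t_{n+1}}\circ\Phi_0^{t_n}$ is performed by bilinear interpolation of the one-step map at the arrival points of the accumulated map, with the query points clamped to the computational domain to prevent extrapolation.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def compose_flow_maps(phi_step, phi_acc, x, y):
    # phi_step, phi_acc: tuples (X, Y) of (ny, nx) arrays holding the two
    # components of the one-step map Phi_{t_n}^{t_{n+1}} and of the
    # accumulated map Phi_0^{t_n}; x, y: 1D grid coordinates
    xq = np.clip(phi_acc[0], x[0], x[-1])   # clamp to the domain
    yq = np.clip(phi_acc[1], y[0], y[-1])
    pts = np.column_stack([yq.ravel(), xq.ravel()])
    new_components = []
    for comp in phi_step:                    # interpolate each component
        interp = RegularGridInterpolator((y, x), comp, method="linear")
        new_components.append(interp(pts).reshape(comp.shape))
    return tuple(new_components)             # this is Phi_0^{t_{n+1}}
\end{verbatim}
In an actual computation, this step would be applied once per subinterval, right after the one-step map $\Phi_{t_n}^{t_{n+1}}$ has been obtained from the backward solve of (\ref{Eqn:LevelSet}).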
Like \cite{brurow10}, our approach can re-use all intermediate flow maps; that work, however, has concentrated only on improving the computational efficiency of the Lagrangian approach, whereas we use the idea of flow map decomposition to propose a forward computational strategy for the Eulerian formulation. To obtain a stable evolution in the flow map constructions, theoretically we require the interpolation scheme between two local flow maps to be monotone, i.e. the interpolation scheme should preserve the monotonicity of the given data points \cite{leu13}. This is especially important in the context of the phase flow maps for autonomous flows \cite{canyin06,leu13} since the overshooting due to the interpolation could be significantly amplified in the time doubling strategy $$ \Phi_0^{t_{2^k}}=\left(\Phi_0^{t_{2^{k-1}}}\right)^2=\left[\left(\Phi_0^{t_{2^{k-2}}}\right)^2\right]^2 \, . $$ \subsection{An application to ISLE computations} \label{SubSec:ISLEComp} In this section, we propose an efficient Eulerian approach to compute the ISLE function for an arbitrary separation factor $r$ based on the techniques developed in the previous section. To compute the ISLE at $\mathbf{x}$, one has to determine the minimum time $\tau_r(\mathbf{x})$ at each location $\mathbf{x}$ for which $\sqrt{\lambda_0^{\tau_r(\mathbf{x})}(\mathbf{x})}=r$. It is, therefore, necessary to keep track of all intermediate values $\lambda_0^{t_n}(\mathbf{x})$ for all $n$'s during the flow map construction. At each grid point $\mathbf{x}_{i,j}$ and each intermediate time step $t_n$, we construct a quantity $s_0^{t_n}(\mathbf{x}_{i,j})$ by $$ s_0^{t_n}(\mathbf{x}_{i,j}) = \max\left[\sqrt{\lambda_0^{t_n}(\mathbf{x}_{i,j})},s_0^{t_{n-1}}(\mathbf{x}_{i,j})\right] $$ with $s_0^0(\mathbf{x}_{i,j})=0$. Once we determine the flow map $\Phi_0^{t_N}$, we have also constructed a non-decreasing sequence in time $\{ s_0^{t_n}(\mathbf{x}_{i,j}) : n=0,1,\dots, N \}$ at each grid point $\mathbf{x}_{i,j}$, i.e. $$ 0=s_0^0(\mathbf{x}_{i,j}) \le s_0^{t_1}(\mathbf{x}_{i,j}) \le \cdots \le s_0^{t_N}(\mathbf{x}_{i,j}) $$ for each $\mathbf{x}_{i,j}$. Now, we interpret this quantity as spatial-temporal volumetric data in the $\mathbf{x}-t$ space. Due to this monotonicity property in the temporal direction, the isosurface $$ \left\{ \left(\mathbf{x},t^* \right) : s_0^{t^*}(\mathbf{x})=r \right\} $$ forms a graph of $\mathbf{x}$, i.e. for each $\mathbf{x}$ we have a unique $t^*=t^*(\mathbf{x})$ so that $s_0^{t^*(\mathbf{x})}(\mathbf{x})=r$. And more importantly, the value $t^*(\mathbf{x})$ gives the shortest time so that $\sqrt{\lambda_0^{t^*(\mathbf{x})}(\mathbf{x})}=r$. This implies that $\tau_r(\mathbf{x}_{i,j})=t^*(\mathbf{x}_{i,j})$ and so the level surface in the extended phase space, i.e. $$ \left\{ \left(\mathbf{x},\tau_r^*(\mathbf{x}) \right) : s_0^{\tau_r^*(\mathbf{x})}(\mathbf{x})=r \right\} \, , $$ can be used to define $\tau_r(\mathbf{x}_{i,j})$. Finally, the ISLE function can then be computed using $$ \gamma_r(\mathbf{x}_{i,j},0)=\frac{1}{\left| \tau_r(\mathbf{x}_{i,j}) \right|} \ln r \, . $$ The computational algorithm is summarized in {\sf Algorithm 2}.
\noindent \pic\\ {\sf Algorithm 2 (Our Proposed Eulerian Method for Computing the ISLE): \begin{enumerate} \item Discretize the computational domain \begin{eqnarray*} && x_i=x_{\min}+(i-1)\Delta x,\quad \Delta x=\frac{x_{\max}-x_{\min}}{I-1},\quad i=1,2,\cdots,I, \\ && y_j=y_{\min}+(j-1)\Delta y,\quad \Delta y=\frac{y_{\max}-y_{\min}}{J-1},\quad j=1,2,\cdots,J, \\ && t_n=n\Delta t,\quad \Delta t=\frac{T}{N},\quad n=0,1,...,N \, . \end{eqnarray*} \item Set $s_0^0(\mathbf{x})=0$. \item For $n=0,1,\cdots,N-1$, \begin{enumerate} \item Solve the level set equation (\ref{Eqn:LevelSet}) for $\Psi_{i,j}^n$ with the condition $\Psi_{i,j}^{n+1}=(x_i,y_j)$. \item Define $\Phi_{t_n}^{t_{n+1}}=\Psi_{i,j}^n$. \item Interpolate the solutions to obtain $\Phi_0^{t_{n+1}}=\Phi_{t_n}^{t_{n+1}}\circ \Phi_0^{t_n}$. \item Compute $\sqrt{\lambda_0^{t_{n+1}}(\mathbf{x}_{i,j})}$ and determine $$ s_0^{t_{n+1}}(\mathbf{x}_{i,j})=\max \left[ \sqrt{\lambda_0^{t_{n+1}}(\mathbf{x}_{i,j})},s_0^{t_n}(\mathbf{x}_{i,j})\right] \, . $$ \end{enumerate} \item Given an arbitrary separation factor $r$, for each $\mathbf{x}_{i,j}$, search for a time step $t_n$ such that $s_0^{t_n}(\mathbf{x}_{i,j}) \leq r \leq s_0^{t_{n+1}}(\mathbf{x}_{i,j})$. Then determine $\tau_r(\mathbf{x}_{i,j})$ by applying the piecewise linear interpolation using the values $s_0^{t_n}(\mathbf{x}_{i,j})$ and $s_0^{t_{n+1}}(\mathbf{x}_{i,j})$. The ISLE function is computed as $$ \gamma_r(\mathbf{x}_{i,j},0)=\frac{1}{\left| \tau_r(\mathbf{x}_{i,j}) \right|}\ln r. $$ \end{enumerate} } \pic \\ There are several major advantages of the proposed Eulerian approach. It provides an efficient yet simple numerical implementation of $\gamma_r(\mathbf{x},0)$ for multiple separation factors $r$. The construction of the quantity $s_0^t$ can be first done in the background. For each particular given value of $r$, one has to perform only one single isosurface extraction in the extended phase space, i.e. the $\mathbf{x}-t$ space. Unlike the typical Lagrangian implementation where one has to repeatedly keep track of {\it all} particle trajectories for the constraint $s_0^t=r$, the proposed Eulerian algorithm provides a systematic way based on a simple thresholding strategy. \begin{remark} Note that in our implementation, if it happens that $s_0^{t_n}(x_i,y_j)< r$ for all $t_n$, we simply set $\gamma_r(x_i,y_j,0)=0$. \end{remark} \subsection{Computational complexities} We conclude this section by discussing the computational complexity of our proposed algorithm for a two dimensional flow, i.e. $d=2$. Let $N$ and $M$ be the discretization size of one spatial dimension and time dimension respectively. Since the Liouville equation is a hyperbolic equation, we have $M=O(N)$ by the CFL condition. At each time step $t_n$, a short time flow map $\Phi_{t_n}^{t_{n+1}}$ is obtained by solving the Liouville equations from $t_{n+1}$ to $t_{n}$; the computational effort is $O(N^2)$. Then in computing the long time flow map $\Phi_0^{t_{n+1}}$, the interpolation $\Phi_0^{t_{n+1}} = \Phi_{t_n}^{t_{n+1}} \circ \Phi_0^{t_n}$ takes $O(N^2)$ operations. Therefore the construction of the flow map at each time step requires $O(N^2)+O(N^2)=O(N^2)$ operations. Computing $\sqrt{\lambda_0^{t_{n+1}}(\mathbf{x}_{i,j})}$ and $s_0^{t_{n+1}}(\mathbf{x}_{i,j})$ also needs $O(N^2)$ operations, so the complexity order per time step is kept at $O(N^2)$. Summing up this procedure over all time steps, the overall computational complexity is $M \cdot O(N^2)=O(N^3)$.
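To make step 4 of Algorithm 2 concrete, a minimal Python sketch of this post-processing is given below (our own illustration; array and function names are ours). It assumes that the history $s_0^{t_n}(\mathbf{x}_{i,j})$ has been stored for all time levels during the flow map construction; for a given $r$, the crossing time $\tau_r$ is found by a piecewise linear interpolation in time, and $\gamma_r$ is set to zero wherever the threshold is never reached, as in the remark above.
\begin{verbatim}
import numpy as np

def isle_from_history(s, t, r):
    # s: array of shape (N+1, ny, nx), s[n] = s_0^{t_n} at every grid point
    #    (non-decreasing in n by construction, with s[0] = 0)
    # t: array of the N+1 time levels t_n; r: separation factor (r > 1)
    nt, ny, nx = s.shape
    gamma = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            col = s[:, j, i]
            if col[-1] < r:      # threshold never reached: gamma_r = 0
                continue
            n = np.searchsorted(col, r)  # first n with s_0^{t_n} >= r (n >= 1)
            # piecewise linear interpolation between t_{n-1} and t_n
            w = (r - col[n - 1]) / (col[n] - col[n - 1])
            tau = t[n - 1] + w * (t[n] - t[n - 1])
            gamma[j, i] = np.log(r) / abs(tau)
    return gamma
\end{verbatim}
Different separation factors $r$ only require repeating this last step on the same stored array, which is the main computational advantage discussed above.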
\reminder{This new Eulerian method has significantly improved the overall computational complexity in the application to the ISLE computations. Since the calculations involve the forward flow map from $t=t_0$ to $t=t_i$ for all $i=1,2,\cdots,M$, the original Eulerian approach as discussed in Section \ref{SubSec:oldEulerian} requires solving the Liouville equation from each individual time level $t=t_i$ backward in time to $t=t_0$. The overall computational complexity in obtaining all flow maps is therefore given by $O(M^2N^2)=O(N^4)$ which is one order of magnitude larger than that of the newly developed Eulerian method.} \section{A relationship between the FTLE and the ISLE} \label{Sec:Relation} \reminder{With the algorithms proposed in Section \ref{Sec:Proposal}, we can finally develop an efficient numerical approach to compute the \textit{forward} FTLE field by solving the corresponding PDEs \textit{forward} in time, and the ISLE field for an arbitrary separation factor $r$.} The FTLE and the ISLE (or the FSLE) are indeed measuring different properties of a given flow, even though both quantities depend on the same function ${\lambda_0^t(\mathbf{x})}$ which describes the growth of an infinitesimal perturbation. In particular, FTLE measures how much ${\lambda_0^t(\mathbf{x})}$ grows over a finite time span $[0,t]$, while the ISLE measures the time required for the quantity ${\lambda_0^t(\mathbf{x})}$ to reach a threshold value prescribed by the separation factor. They are two different tools to study how rapidly ${\lambda_0^t(\mathbf{x})}$ grows. A careful comparison of these two quantities can be found in, for example, \cite{ppss14}. \reminder{In many numerical experiments, on the other hand, the FTLE fields can show striking visual resemblance with the ISLE fields of certain separation factors. However, there have not been many studies on the theoretical relationship between these two quantities.} In this section, we are going to demonstrate how these two quantities relate to each other by considering their corresponding {\it ridges}. Of course, the way to define a {\it ridge} is not unique and one can pick a convenient definition for a particular application \cite{spft11,allpea15}. In this work, we adopt the following simple definition. \begin{defn} For a given scalar function $f(\mathbf{x})$, we define an $f$-ridge to be a codimension-one compact and $C^1$ surface, denoted by $\mathcal{M}$, such that for every point $\mathbf{x} \in \mathcal{M}$, $f(\mathbf{x})$ is a local maximum along the normal direction to $\mathcal{M}$ at $\mathbf{x}$. \end{defn} At these locations, $f(\mathbf{x})$ is also called a generalized maximum in the literature. Next, we define a particular neighbourhood of a ridge for later discussion. \begin{defn} Suppose $\mathcal{S}$ is a codimension-one, compact and smooth surface. We define $\Gamma_\mathcal{S}(\rho)$, the tubular neighbourhood of $\mathcal{S}$ with radius $\rho>0$, as the collection of any point $\mathbf{y}$ that can be uniquely expressed as $\mathbf{x}+s \mathbf{n}_{\mathbf{x}}$, for some point $\mathbf{x} \in \mathcal{S}$ and a real number $s \in (-\rho,\rho)$, where $\mathbf{n}_{\mathbf{x}}$ is a unit normal vector of $\mathcal{S}$ at $\mathbf{x}$. \end{defn} \begin{remark} To ensure that any point $\mathbf{y}$ on $\Gamma_\mathcal{S}(\rho)$ can be uniquely expressed as such, one can equivalently require that the boundary of $\Gamma_\mathcal{S}(\rho)$ does not intersect itself.
For a smooth surface, this can always be guaranteed by choosing a small enough positive number $\rho$. \end{remark} The definitions of both the FTLE and the ISLE involve $\lambda_0^t(\mathbf{x})$. It is therefore natural to link them through this particular quantity. The following simple \reminder{theorem}, which identifies the $\sigma_0^t$-ridges with the $\lambda_0^t$-ridges, will be the very first step in building up our analysis. \begin{thm} \label{prop:FTLE&lambda} Let $\sigma_0^t$ and $\lambda_0^t$ be the FTLE function and the largest eigenvalue of the Cauchy-Green deformation tensor from time $0$ to time $t$, respectively. Then, a $\sigma_0^t$-ridge is also a $\lambda_0^t$-ridge. \end{thm} \begin{proof} Since $(2|t|)^{-1}$ is just a scaling factor and the natural logarithm function is strictly increasing, all generalized maxima of $\sigma_0^t$ as defined in (\ref{Eqn:FTLE}) and $\lambda_0^t$ coincide. \end{proof} For convenience, \reminder{in the time window $[0,t]$, we impose $\gamma_r(\mathbf{x},0)=0$ if $\sqrt{\lambda_0^s(\mathbf{x})}<r$ for all $s \in [0,t]$}. If an ISLE function $\gamma_r(\mathbf{x},0) \equiv 0$ on an open set $\Omega$ of the spatial domain, there is obviously no visible ISLE feature on $\Omega$ corresponding to this separation factor $r$. The following \reminder{theorem} establishes a fundamental result that for certain choices of $r$, the ISLE function $\gamma_r$ can be positive on a tubular neighbourhood of a $\lambda_0^t$-ridge. This explains the non-trivial ISLE features around the location of an FTLE ridge, when one chooses a suitable separation factor $r$. \begin{thm} \label{prop:funProp1} Let $\lambda_0^t$ be the largest eigenvalue of the Cauchy-Green deformation tensor from time $0$ to time $t$. Suppose $\lambda_0^t$ is continuous everywhere in the spatial domain, and $\mathcal{M}$ is a $\lambda_0^t$-ridge with $m=\min\limits_{\mathbf{x} \in \mathcal{M}}\sqrt{\lambda_0^t(\mathbf{x})} > 0$. Then for any positive number $r \in (0,m)$, there is a tubular neighbourhood $\Gamma_\mathcal{M}(\rho)$ such that the ISLE function is positive on $\Gamma_\mathcal{M}(\rho)$. \end{thm} \begin{proof} For each $\mathbf{x} \in \mathcal{M}$, using the definition of $m$, we have $\sqrt{\lambda_0^t(\mathbf{x})} \geq m > r$. Then by the continuity of $\lambda_0^t$ at $\mathbf{x}$, we can find an open ball $B(\mathbf{x},\rho_\mathbf{x})$ with center $\mathbf{x}$ and radius $\rho_\mathbf{x}$ such that $\sqrt{\lambda_0^t(\mathbf{y})} \geq r$ for all $\mathbf{y} \in B(\mathbf{x},\rho_\mathbf{x})$. Collect all these open balls at every point on $\mathcal{M}$ into $\mathcal{F}$, which is clearly an open cover of $\mathcal{M}$. By the compactness of $\mathcal{M}$, we can find a finite subcover $\mathcal{F}'=\{B(\mathbf{x}_1,\rho_{\mathbf{x}_1}),\dots,B(\mathbf{x}_n,\rho_{\mathbf{x}_n})\}$ from $\mathcal{F}$. Take $\rho=\min\limits_{1 \leq i \leq n}\rho_{\mathbf{x}_i}$. Without loss of generality, assume $\rho$ is small enough such that the tubular neighbourhood $\Gamma_\mathcal{M}(\rho)$ is contained in the union of sets in $\mathcal{F}'$. Now for any $\mathbf{y} \in \Gamma_\mathcal{M}(\rho)$, it is clear that $\sqrt{\lambda_0^{t}(\mathbf{y})} \geq r$. \reminder{Because the function $\sqrt{\lambda_0^{\tau}(\mathbf{y})}$ is a continuous function of $\tau$, we apply the Intermediate Value Theorem and conclude that it} must have reached $r$ at some $\tau=\tau_r(\mathbf{y}) \in (0,t]$. Therefore $\gamma_r(\mathbf{y},0)$ is positive.
\end{proof} The next \reminder{theorem} is to identify a $\lambda_0^t$-ridge with a $\gamma_r$-ridge of a certain ISLE function $\gamma_r$. To justify the condition, the points on the location of a $\lambda_0^t$-ridge, are expected to exhibit highly chaotic behavior. It is reasonable to assume that we can find a tubular neighbourhood of a $\lambda_0^t$-ridge, where the time monotonicity of $\lambda_0^{\tau}$ holds for $\tau \in [0,t]$, and that no other points reach the minimum value of $\lambda_0$, except those on $\mathcal{M}$. \begin{thm} \label{prop:funProp2} Let $\lambda_0^t$ be the largest eigenvalue of the Cauchy-Green deformation tensor from time $0$ to time $t$. Suppose $\mathcal{M}$ is a $\lambda_0^t$-ridge with $m=\min\limits_{\mathbf{x} \in \mathcal{M}}\sqrt{\lambda_0^t(\mathbf{x})}>0$. If there exists a tubular neighbourhood $\Gamma_\mathcal{M}(\rho)$ of $\mathcal{M}$ such that \begin{enumerate} \item for any fixed $\mathbf{y} \in \Gamma_\mathcal{M}(\rho)$, the quantity $\lambda_0^{\tau}(\mathbf{y})$ is increasing in $\tau \in [0,t]$, and \item $\{\mathbf{x} \in \Gamma_\mathcal{M}(\rho) : \lambda_0^t(\mathbf{x}) \geq m^2 \} = \mathcal{M}$, \end{enumerate} then $\mathcal{M}$ is also a $\gamma_m$-ridge for the ISLE function $\gamma_m$. \end{thm} \begin{proof} Let $\mathbf{x} \in \mathcal{M}$, and $\mathbf{y} \in \Gamma_\mathcal{M}(\rho)-\{\mathbf{x}\}$ be a point in the normal direction of $\mathcal{M}$ at $\mathbf{x}$. By the second condition, we have $\sqrt{\lambda_0^t(\mathbf{y})} < m$. And since $\lambda_0^{\tau}(\mathbf{y})$ is increasing with $\tau$, it follows that $\sqrt{\lambda_0^{\tau}(\mathbf{y})} <m$ for $\tau \in [0,t]$. Therefore $\gamma_m(\mathbf{y},0)=0$ by our convention. We can conclude that no other points on the normal direction of $\mathcal{M}$ at $\mathbf{x}$ has positive value of $\gamma_m$, except $\mathbf{x}$ itself. This makes $\mathbf{x}$ a generalized maximum of $\gamma_m$. \end{proof} \reminder{Theorem} \ref{prop:funProp2} is a restricted result, in the sense that the whole $\lambda_0^t$-ridge is preserved for a particular $\gamma_r$ function. But it is straightforward to extend it to the following corollary, which looks for a certain portion of a $\lambda_0^t$-ridge that can be conserved in a larger class of ISLE functions. \begin{cor} \label{prop:funCor} Let $\lambda_0^t$ be the largest eigenvalue of the Cauchy-Green deformation tensor from time $0$ to time $t$. Suppose $\mathcal{M}$ is a $\lambda_0^t$-ridge with $m=\min\limits_{\mathbf{x} \in \mathcal{M}}\sqrt{\lambda_0^t(\mathbf{x})}>0$, and $\mathcal{M}'$ is a connected subset of $\mathcal{M}$ with $m'=\min\limits_{\mathbf{x} \in \mathcal{M}'}\sqrt{\lambda_0^t(\mathbf{x})}>0$. For any positive number $r \in [m,m']$, if there exists a tubular neighbourhood $\Gamma_{\mathcal{M}'}(\rho)$ of $\mathcal{M}'$ such that \begin{enumerate} \item for any fixed $\mathbf{y} \in \Gamma_{\mathcal{M}'}(\rho)$, the quantity $\lambda_0^{\tau}(\mathbf{y})$ is increasing in $\tau \in [0,t]$, and \item $\{\mathbf{x} \in \Gamma_{\mathcal{M}'}(\rho) : \lambda_0^t(\mathbf{x}) \geq r^2 \} = \mathcal{M}'$, \end{enumerate} then $\mathcal{M}'$ is a $\gamma_r$-ridge for the ISLE function $\gamma_r$. \end{cor} \begin{proof} The proof is parallel to that of \reminder{Theorem} \ref{prop:funProp2}, with $\mathcal{M}$ and $m$ being substituted by $\mathcal{M}'$ and $r$, respectively. 
\end{proof} From \reminder{Theorem} \ref{prop:funProp1} through Corollary \ref{prop:funCor}, we have obtained a close relationship between the $\lambda_0^t$-ridges and the ISLE ridges. By the equivalence of the FTLE ridges and the $\lambda_0^t$-ridges as proved in \reminder{Theorem} \ref{prop:FTLE&lambda}, we can then link the FTLE ridges and the ISLE ridges. In practice, the estimated location of the ISLE ridges can be inferred from the location of the FTLE ridges, while $\lambda_0^t$ plays an important role in estimating a suitable separation factor for defining the ISLE ridge. To demonstrate this, suppose that there is an FTLE ridge $\mathcal{M}$ satisfying all conditions in \reminder{Theorem} \ref{prop:funProp2}. Then $\mathcal{M}$ is an ISLE ridge with the separation factor $m=\min\limits_{\mathbf{x} \in \mathcal{M}}\sqrt{\lambda_0^t(\mathbf{x})}$. Although the value of $m$ is actually unknown, we can determine a number $l$ that is slightly less than $\min\limits_{\mathbf{x} \in \mathcal{M}}\sigma_0^t(\mathbf{x})$ manually. From the definition of the FTLE, we have $$ m=\min\limits_{\mathbf{x} \in \mathcal{M}}\sqrt{\lambda_0^t(\mathbf{x})} = \exp\left[{t\min\limits_{\mathbf{x} \in \mathcal{M}}\sigma_0^t(\mathbf{x})}\right] > e^{lt} \, . $$ Now, with any separation factor $r\in[e^{lt},m)$, \reminder{Theorem} \ref{prop:funProp1} implies that there is a tubular neighborhood around $\mathcal{M}$ with positive ISLE values. We can then increase the separation factor from $e^{lt}$ toward $m$, so that the tubular neighborhood containing $\mathcal{M}$ becomes narrower, thus leading to a better approximation of $\mathcal{M}$. \reminder{In practice, the value of $l$ has to be chosen in a trial-and-error fashion and has to be determined case by case. Based on the Eulerian algorithm we developed in Section \ref{SubSec:newEulerian} and Section \ref{SubSec:ISLEComp}, we are now able to efficiently construct the ISLE field of each individual separation factor using one single isocontour extraction.} \section{Numerical examples} \label{Sec:Examples} \reminder{In this section, we present numerical results to show the feasibility of the proposed Eulerian approach. We will compare the solution from the new approach with the previous Eulerian approach discussed in Section \ref{SubSec:oldEulerian}. It is noteworthy that for a dynamical system in a fixed time window, we can now construct the ISLE function with an arbitrary choice of the separation factor in only one single numerical flow simulation. Therefore, we can easily verify the theoretical connection between FTLE and ISLE, developed in Section \ref{Sec:Relation}.} \subsection{The double gyre flow} \label{SubSec:Example_Double_Gyre} This first example is a simple flow taken from \cite{shalekmar05} to describe a periodically varying double-gyre. The flow is modeled by the following stream-function $$ \psi(x,y,t)=A \sin[ \pi g(x,t) ] \sin(\pi y) \, , $$ where \begin{eqnarray*} g(x,t) &=& a(t) x^2 + b(t) x \, , \nonumber\\ a(t) &=& \epsilon \sin(\omega t) \, , \nonumber\\ b(t) &=& 1- 2\epsilon \sin(\omega t) \, . \end{eqnarray*} In this example, we follow \cite{shalekmar05} and use $A=0.1$, $\omega=2\pi/10$. We discretize the domain $[0,2]\times[0,1]$ using 513 grid points in the $x$-direction and 257 grid points in the $y$-direction. This gives $\Delta x=\Delta y=1/256$.
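For reference, a minimal Python sketch of this velocity field is given below (our own illustration; we assume the common convention $u=-\partial\psi/\partial y$, $v=\partial\psi/\partial x$ for recovering the velocity from the stream function, and the value $\epsilon=0.1$ used in the FTLE computation reported below).
\begin{verbatim}
import numpy as np

# double-gyre parameters of this example (eps = 0.1 as in the FTLE test)
A, omega, eps = 0.1, 2.0 * np.pi / 10.0, 0.1

def double_gyre_velocity(x, y, t):
    # velocity from psi = A sin(pi g(x,t)) sin(pi y),
    # assuming u = -dpsi/dy and v = dpsi/dx
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    g = a * x**2 + b * x
    dg_dx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * g) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * g) * np.sin(np.pi * y) * dg_dx
    return u, v
\end{verbatim}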
\begin{figure} \caption{(Section \ref{SubSec:Example_Double_Gyre}) The FTLE field $\sigma_0^{10}(\mathbf{x})$ of the double-gyre flow computed using the Lagrangian approach and the proposed Eulerian approach.} \label{Fig:FTLE} \end{figure} \begin{figure} \caption{\reminder{(Section \ref{SubSec:Example_Double_Gyre}) $L_2$-errors of the forward flow map $\Phi_0^{10}$ for different $\Delta x$'s.}} \label{Fig:convergence} \end{figure} \begin{figure} \caption{(Section \ref{SubSec:Example_Double_Gyre}) ISLE fields $\gamma_r(\mathbf{x},0)$ for several separation factors $r$.} \label{Fig:ISLE1} \end{figure} \begin{figure} \caption{(Section \ref{SubSec:Example_Double_Gyre}) FTLE fields and isosurfaces of the time $\tau_r(\mathbf{x})$ in the extended phase space.} \label{Fig:Isosurface} \end{figure} In Figure \ref{Fig:FTLE}, we compare the FTLE field $\sigma_0^{10}(\mathbf{x})$ of the double-gyre flow with $\epsilon=0.1$ computed using the Lagrangian approach and our proposed Eulerian approach, respectively. These two solutions match extremely well. \reminder{We have also checked the $L_2$-error of the forward flow map $\Phi_0^{10}(x,y)$ using different $\Delta x$'s ranging from $1/32$ to $1/256$. Since we do not have the exact solution for this flow, the \textit{exact} solutions are computed using the Lagrangian ray tracing with a very small time step. Figure \ref{Fig:convergence} shows the errors in the flow map $\Phi_0^{10} : (x,y) \rightarrow \left(\phi(x,y),\psi(x,y) \right)$ with respect to different $\Delta x$'s. We find that the flow map computed using the proposed Eulerian approach is approximately second order accurate. } Figure \ref{Fig:ISLE1} shows the ISLE fields $\gamma_r(\mathbf{x},0)$ with several separation factors $r$. In the implementations, the flow map is computed from $t=0$ up to $t=10$. Recall that we impose a zero ISLE value at points whose local separation ratio never reaches the given separation factor. As the separation factor increases, the regions with non-zero ISLE values get smaller. Nevertheless, the regions with non-zero or high ISLE values always concentrate near the location of the FTLE ridges shown in Figure \ref{Fig:FTLE}. We denote by $\mathcal{M}$ the most prominent ridge originating from the central bottom of the domain. Note that a rough estimate of the minimum FTLE value on $\mathcal{M}$ is approximately 0.3, and so we have the estimate $\min\limits_{\mathbf{x} \in \mathcal{M}} \sqrt{\lambda_0^t(\mathbf{x})} \geq e^{0.3 \times 10}= e^3 > 20$. Using \reminder{Theorem} \ref{prop:funProp1}, we can always find a tubular neighborhood near $\mathcal{M}$ of non-zero ISLE using a separation factor $r$ smaller than approximately 20. In Figure \ref{Fig:ISLE1}, we have computed the ISLE using various separation factors from $r=3$ to $r=20$. As $r$ increases, we can see that there are indeed non-zero ISLE regions around the major ridge $\mathcal{M}$. Evidently, the ISLE ridges lie within these regions. To better compare the FTLE field and the ISLE field, we plot these two solutions together in Figure \ref{Fig:Isosurface}. The $x$- and $y$-axes represent the computational domain $\Omega$ while the $z$-axis denotes the temporal direction. In (a), we plot the FTLE field $\sigma_0^{10/3}(\mathbf{x})$ at $t=10/3$ and $\sigma_0^{20/3}(\mathbf{x})$ at $t=20/3$. In (b-d), we plot the isosurfaces of the ISLE field $\gamma_r(\mathbf{x},0)$ with $r=1.5,3,4$, respectively. The $z$-values denote the time required for the local separation ratio to achieve $r$ for the first time, i.e. the $z$-values are the $\tau_r(\mathbf{x})$ we defined before. Near the major FTLE ridge, we can see the values of $\tau_r(\mathbf{x})$ are significantly lower than in the remaining part of the domain. Therefore the corresponding ISLE values are generalized maxima. This shows that the ISLE ridges match the FTLE ridges very well.
\subsection{A simple analytic field}
\label{SubSec:Example_Analytic}

\begin{figure}
\caption{(Section \ref{SubSec:Example_Analytic}) The FTLE field $\sigma_0^5(\mathbf{x})$ computed using the original Eulerian approach and the proposed Eulerian approach.}
\label{Fig:FTLE2}
\end{figure}

\begin{figure}
\caption{(Section \ref{SubSec:Example_Analytic}) The ISLE fields for several separation factors near $e^4$.}
\label{Fig:ISLE2a}
\end{figure}

\begin{figure}
\caption{(Section \ref{SubSec:Example_Analytic}) The time $\tau_r(\mathbf{x})$ required to reach different separation factors $r$.}
\label{Fig:ISLE2}
\end{figure}

In this example, we consider a simple analytic velocity field \cite{tanchahal10} given by $u=x-y^2$ and $v=-y+x^2$. The computational domain is $[-6,6]^2$ and we discretize the domain with $\Delta x=\Delta y= 12/256$. We impose the fixed inflow boundary condition and the non-reflective outflow boundary condition as in \cite{leu11}. Figure \ref{Fig:FTLE2} shows the FTLE field $\sigma_0^5(\mathbf{x})$ computed using our original Eulerian approach and the proposed Eulerian approach. It is clear that the new approach can successfully handle the inflow and outflow of particles on the boundaries, just like the previous approach does.

Next, we apply the results of Section \ref{Sec:Relation} to analyze the ISLE ridges. Figure \ref{Fig:FTLE2} suggests that there is only one FTLE ridge, and the minimum value of $\sigma_0^5$ on the ridge is close to $0.8$. By the definition of the FTLE, we have
$$
\sqrt{\lambda_0^5(\mathbf{x})} = \exp\left[{\sigma_0^5(\mathbf{x})\times 5}\right] \, .
$$
Therefore $e^{0.8 \times 5} = e^{4}$ is a reasonable estimate of the minimum value of $\sqrt{\lambda_0^5(\mathbf{x})}$ on the ridge. We have tested several separation factors near $e^4$, and the results are plotted in Figure \ref{Fig:ISLE2a}. To better visualize the ISLE field, we plot in Figure \ref{Fig:ISLE2} the time $\tau_r(\mathbf{x})$ required for different separation factors $r$. The shorter the time, the larger the corresponding ISLE.

\subsection{The forced-damped Duffing van der Pol equation}
\label{SubSec: DuffingVanDerPol}

\begin{figure}
\caption{(Section \ref{SubSec: DuffingVanDerPol}) The FTLE field of the forced-damped Duffing van der Pol flow.}
\label{Fig:FTLE3}
\end{figure}

\begin{figure}
\caption{(Section \ref{SubSec: DuffingVanDerPol}) The ISLE ridges for several separation factors close to $e^4$.}
\label{Fig:ISLE3}
\end{figure}

We consider the dynamical system governed by a Duffing and a van der Pol oscillator, as in \cite{halsap11}. The system is given by
$$
u=y \, , \, v=x-x^3+0.5y(1-x^2)+0.1 \sin t \, .
$$
The computational domain is $[-2,2]\times [-1.5,1.5]$, with mesh size $\Delta x = \Delta y = 0.01$. We simulate the flow from $t=0$ to $t=10$. The discretization is rather coarse for such complicated dynamics, but our new approach can still capture the fine details of the transport barriers, as shown in Figure \ref{Fig:FTLE3}. There are several major FTLE ridges that are easily spotted. The minimum value of $\sigma_0^t(\mathbf{x})$ on those FTLE ridges is very close to $0.4$, so $\sqrt{\lambda_0^t(\mathbf{x})} \geq e^{0.4 \times 10} = e^4$ on the ridges. \reminder{Theorem} \ref{prop:funProp1} implies that the separation factor $r=e^4$ can approximately single out the ISLE ridges. We pick several separation factors close to $e^{4}$ and find that the ISLE ridges corresponding to $r=e^{4.1}$, as shown in Figure \ref{Fig:ISLE3}(c), have the greatest resemblance to the FTLE ridges in Figure \ref{Fig:FTLE3}.
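For comparison with the Eulerian results above, the forward flow map of this velocity field can also be approximated in the Lagrangian way by integrating a grid of particles with a standard ODE solver. The short Python sketch below is such a reference computation (for illustration only; it is not the Eulerian algorithm of this paper), and its output could be post-processed into an FTLE field as in the sketch at the end of Section \ref{SubSec:Example_Double_Gyre}.

\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def velocity(t, z):
    """Duffing-van der Pol field: u = y, v = x - x^3 + 0.5 y (1 - x^2) + 0.1 sin t."""
    x, y = z
    return [y, x - x**3 + 0.5 * y * (1.0 - x**2) + 0.1 * np.sin(t)]

def lagrangian_flow_map(x0, y0, t0=0.0, t1=10.0):
    """Integrate each grid point from t0 to t1 and return the flow map components."""
    phi_x = np.empty_like(x0)
    phi_y = np.empty_like(y0)
    for idx in np.ndindex(x0.shape):
        sol = solve_ivp(velocity, (t0, t1), [x0[idx], y0[idx]],
                        rtol=1e-8, atol=1e-10)
        phi_x[idx], phi_y[idx] = sol.y[:, -1]
    return phi_x, phi_y

# A coarse grid keeps this reference computation quick.
x0, y0 = np.meshgrid(np.linspace(-2.0, 2.0, 41), np.linspace(-1.5, 1.5, 31))
phi_x, phi_y = lagrangian_flow_map(x0, y0)
\end{verbatim}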
\subsection{\reminder{The Ocean Surface Current Analyses Real-time (OSCAR) dataset}}
\label{SubSec:OSCAR}

\begin{figure}
\caption{\reminder{(Section \ref{SubSec:OSCAR}) The forward FTLE field $\sigma_0^{50}(\mathbf{x})$ computed from the OSCAR ocean surface current data.}}
\label{Fig:OSCAR_FTLE}
\end{figure}

\begin{figure}
\caption{\reminder{(Section \ref{SubSec:OSCAR}) The ISLE fields $\gamma_r(\mathbf{x})$ for $r=12$, $r=20$ and $r=30$.}}
\label{Fig:OSCAR_ISLE}
\end{figure}

\reminder{To further demonstrate the effectiveness and robustness of our proposed approach, we test our algorithm on a real-life dataset obtained from the Ocean Surface Current Analyses Real-time (OSCAR) project}\footnote{\url{http://www.esr.org/oscar_index.html}}. \reminder{The OSCAR data were obtained from the JPL Physical Oceanography DAAC and developed by ESR. The dataset measures the ocean surface flow in the region from $-80^{\circ}$ to $80^{\circ}$ in latitude and from $0^{\circ}$ to $360^{\circ}$ in longitude. The time resolution is about 5 days and the spatial resolution is $1/3^{\circ}$ in each direction.}

\reminder{ In this numerical example, we consider the area near the Line Islands in the region from $-17^{\circ}$ to $8^{\circ}$ in latitude and from $180^{\circ}$ to $230^{\circ}$ in longitude. To obtain a better visualization of both the FTLE and the ISLE fields, we interpolate the velocity data to a finer resolution of $0.25$ days in the temporal direction and $1/12^{\circ}$ in each spatial direction. We then use the ocean surface current within the first 50 days of the year 2014 to compute the forward FTLE field $\sigma_0^{50}(\mathbf{x})$ and the ISLE field $\gamma_r(\mathbf{x})$. }

\reminder{ As demonstrated in Figure \ref{Fig:OSCAR_FTLE}, our new approach works well in extracting fine details of the transport barriers even for real data. There are a few major FTLE ridges spotted in the figure. According to Theorem \ref{prop:funProp1}, we find that separation factors $r$ roughly between $e^{2.5}$ and $e^{3.5}$ can single out the ISLE ridges. We have tested several separation factors close to $e^3 \approx 20$, as shown in Figure \ref{Fig:OSCAR_ISLE}. We find that the ISLE ridges corresponding to $r=20$, in Figure \ref{Fig:OSCAR_ISLE}(b), have the greatest resemblance to the FTLE ridges in Figure \ref{Fig:OSCAR_FTLE}. As a comparison, we have also plotted the ISLE fields $\gamma_r(\mathbf{x})$ for $r=12$ and $r=30$ in Figures \ref{Fig:OSCAR_ISLE}(a) and (c), respectively. These results verify the theory developed in Section \ref{Sec:Relation}.}

\reminder{
\section{Conclusion}
To summarize, we emphasize the novelty and importance of the two Eulerian algorithms developed in this paper. Based on the first algorithm, we are now able to determine the \textit{forward} flow map \textit{on the fly}, so that the PDE is solved \textit{forward} in time. Unlike the original Eulerian algorithm developed in \cite{leu11,leu13}, there is no need to store the whole velocity field at all time steps at the beginning of the computations. This makes the forward flow map computations practical. More importantly, because of the first algorithm, we are now able to access all intermediate forward flow maps, which is necessary for computing the ISLE field. This Eulerian approach is extremely efficient in computing the ISLE field. To the best of our knowledge, this is the first efficient numerical approach for the ISLE computations. Theoretically, this paper provides a rigorous mathematical analysis, based on \cite{ppss14}, that explains the similarity between the FTLE ridges and the ISLE ridges; this similarity has been verified by the proposed Eulerian approach.
} \section*{Acknowledgment} The work of You was supported by the Talents Introduction Project of Nanjing Audit University, and the Natural Science Foundation of Jiangsu Higher Education Institutions of China (No.16KJB110012). The work of Leung was supported in part by the Hong Kong RGC grants 16303114 and 16309316. \end{document}
\begin{document}
\title{On Matroids and Linearly Independent Set Families}
\author{Giuliano G. La Guardia, Luciane Grossi, Welington Santos
\thanks{The authors are with the Department of Mathematics and Statistics, State University of Ponta Grossa, 84030-900, Ponta Grossa - PR, Brazil. Corresponding author: Giuliano G. La Guardia ({\tt \small [email protected]}). }}
\maketitle

\begin{abstract}
New families of matroids are constructed in this note. These new families are derived from the concept of linearly independent set family (LISF) introduced by Eicker and Ewald [Linear Algebra and its Applications 388 (2004) 173--191]. The proposed construction generalizes in a natural way the well-known class of vectorial matroids over a field.
\end{abstract}

\section{Introduction}
In his seminal paper \cite{Whitney:1935} on matroid theory, Hassler Whitney dealt with the problem of characterizing matroids that are representable over a given field (see also the interesting papers \cite{Brualdi:1969,Brylawski:1973,Greene:1976}). In fact, as is well known, matroid theory is a powerful tool for studying several classes of algebraic structures such as affine spaces, vector spaces, algebraic independence, graph theory, and so on. Among these classes, one is of essential importance: the class of vectorial matroids. In this note we generalize the class of vectorial matroids by applying the concept of linearly independent set family (LISF), introduced by Eicker and Ewald \cite{Eicker:2004}, which extends in a straightforward way the definition of linearly independent vectors to independent sets in a vector space. More precisely, LISFs provide the essential ingredients for a natural generalization of the class of vectorial matroids over a given field.

Section~2 presents the basic concepts of matroid theory and linearly independent set families necessary for the development of this note. In Section 3, we present the contributions of this paper: a new class of matroids derived from linearly independent set families is constructed. In Section~4, the final remarks are drawn.

\section{Preliminaries}\label{sec2}
This section is concerned with a review of matroid theory \cite{Welsh:1976,Oxley:1992} as well as a review of the concept of linearly independent set family (LISF) \cite{Eicker:2004}.

\subsection{Matroid Theory}\label{sub2.1}
As said previously, we utilize the definition of a matroid based on independent sets (although the other definitions are equivalent). The following basic concepts can be found in \cite{Oxley:1992}.

\begin{definition}\label{matro}
A matroid $M$ is an ordered pair $(S, \mathcal{I})$ consisting of a finite set $S$ and a collection $\mathcal{I}$ of subsets of $S$ satisfying the following three conditions:\\
(I.1) $\emptyset \in \mathcal{I}$;\\
(I.2) If $I \in \mathcal{I}$ and ${I}' \subset I$, then ${I}' \in \mathcal{I}$;\\
(I.3) If ${I}_1$, ${I}_2$ $\in \mathcal{I}$ and $\mid {I}_1\mid < \mid {I}_2\mid$, then there exists an element $e \in I_{2} - I_1$ such that $I_1 \cup \{e\} \in \mathcal{I}$, where $\mid \cdot \mid$ denotes the cardinality of a set.
\end{definition}

If $M$ is the matroid $(S, \mathcal{I})$, then $M$ is called a \emph{matroid on} $S$. The members of $\mathcal{I}$ are the \emph{independent sets of} $M$, and $S$ is the \emph{ground set of} $M$. A subset of $S$ that is not in $\mathcal{I}$ is called \emph{dependent}. Minimal dependent sets are dependent sets all of whose proper subsets are independent.
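To illustrate Definition~\ref{matro} computationally, the following short Python sketch (our own illustration, not part of the formal development of this note) checks conditions (I.1)--(I.3) by brute force for a small family of subsets; the example family consists of the independent sets induced by the columns $(1,0)$, $(0,1)$, $(1,1)$ of a real matrix, anticipating the vectorial matroids recalled below.

\begin{verbatim}
from itertools import combinations

def is_matroid(ground_set, family):
    """Brute-force check of (I.1)-(I.3) for a family of independent sets."""
    fam = {frozenset(x) for x in family}
    ground = frozenset(ground_set)
    if any(not A <= ground for A in fam):
        return False
    if frozenset() not in fam:                       # (I.1)
        return False
    for A in fam:                                    # (I.2): closed under taking subsets
        for k in range(len(A)):
            if any(frozenset(B) not in fam for B in combinations(A, k)):
                return False
    for A in fam:                                    # (I.3): the exchange property
        for B in fam:
            if len(A) < len(B) and not any(A | {e} in fam for e in B - A):
                return False
    return True

# Independent sets induced by the columns (1,0), (0,1), (1,1) of a real matrix:
# every single column and every pair is linearly independent, the full triple is not.
S = {1, 2, 3}
I = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]
print(is_matroid(S, I))   # True
\end{verbatim}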
Minimal dependent sets are called \emph{circuits} of $M$. An independent set is called \emph{maximal} if the inclusion of any element of $S$ not already in the set results in a dependent set. Maximal independent sets are called \emph{bases} of the matroid. It is well known that a matroid can be defined in many different (but equivalent) ways, i.e., by means of independent sets, circuits, bases, and so on. In our case we consider the definition of a matroid based on independent sets, as given above. Let us recall the well-known concept of a vectorial matroid:

\begin{theorem}\label{connec}
Let $S$ be the set of column labels of a matrix $A_{m \times n}$ over a field ${\mathbb F}$, and let $\mathcal{I}$ be the set of subsets $X$ of $S$ for which the multiset of columns labelled by $X$ is linearly independent (LI) in $V(m, {\mathbb F})$, the $m$-dimensional vector space over ${\mathbb F}$. Then $(S, \mathcal{I})$ is a matroid.
\end{theorem}

\subsection{Linearly Independent Set Families}\label{sub2.2}
The concept of linearly independent set families was introduced by Eicker and Ewald in \cite{Eicker:2004}. This definition extends in a straightforward way the definition of linearly independent vectors to independent sets in a vector space. Although our definition is different from the original one, it captures essentially the same idea as \cite{Eicker:2004}.

\begin{definition}
Let ${\mathbb V}$ be an $l$-dimensional vector space over a field ${\mathbb F}$. A family $\mathcal{J}$ of non-empty subsets $C_{i}\subset {\mathbb V}$, where $i=1,\ldots, n$ ($n \leq l$), given by $\mathcal{J}:= \{ C_{i}, i=1,\ldots, n \}$ is called a \emph{linearly independent set family} (LISF) if and only if any selection of $n$ vectors $v_{i} \in C_{i}$ is linearly independent in ${\mathbb V}$.
\end{definition}

\begin{example}
The first and trivial example of a LISF is a set of $n\leq l$ linearly independent vectors in ${\mathbb{R}}^{l}$. As a second illustrative example, consider in ${\mathbb{R}}^{2}$ the open quadrants $C_{1}:= \{ (x_1, x_{2}): x_1 > 0, x_{2} > 0 \}$ and $C_{2}:= \{ (x_1,x_2): x_1 > 0, x_2 < 0 \}$. Then $\mathcal{F}:= \{ C_{1}, C_{2} \}$ is a LISF.
\end{example}

\section{The Results}
In this section we present the contributions of the paper. Theorems~\ref{main} and \ref{main-multdim} generalize the well-known class of vectorial matroids over a given field; consequently, new families of matroids are obtained.

\begin{theorem}\label{main}
Let $n\geq 1$ and $l\geq 1$ be integers. Let $E_1, E_{2}, \ldots , E_{n}$ be subsets of a finite dimensional vector space ${\mathbb V}$ over a field ${\mathbb F}$ such that $E_{i} \subset {\mathbb{W}}_{i}$ for all $i=1, \ldots , n$, where ${\mathbb{W}}_{i}$ are one-dimensional subspaces of ${\mathbb V}$. Consider the multiset of labels $S =\{ 1, \ldots , n\}$, and let $\mathcal{I}$ be the set of subsets $I = \{ i_1, \ldots , i_{j}\}$ of $S$ for which $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{j}}\}$ form a LISF. Then the ordered pair $(S, \mathcal{I})$ is a matroid.
\end{theorem}

\begin{proof}
We must prove that $(S, \mathcal{I})$ satisfies (I.1), (I.2) and (I.3) of Definition~\ref{matro}. Properties (I.1) and (I.2) are clearly satisfied. Let us now show that (I.3) holds. Seeking a contradiction, suppose that (I.3) does not hold. Consider $I_1 , I_{2} \in \mathcal{I}$ with $| I_1 | < | I_{2}|$, where $I_1 = \{ i_{a_{1}}, \ldots , i_{a_{j}}\}$ and $I_{2} = \{ i_{b_{1}}, \ldots , i_{b_{k}}\}$, $j < k$.
Then for each $e\in I_{2} - I_1$ it follows that $I_1 \cup \{e\} \notin \mathcal{I}$. We know that $\{E_{i_{a_{1}}}, E_{i_{a_{2}}}, \ldots , E_{i_{a_{j}}}\}$ and $\{E_{i_{b_{1}}}, E_{i_{b_{2}}}, \ldots , E_{i_{b_{k}}}\}$ are LISF's. Since $I_1 \cup \{e\} \notin \mathcal{I}$ for each $e\in I_{2} - I_1$, the sets $\{E_{i_{a_{1}}}, E_{i_{a_{2}}}, \ldots , E_{i_{a_{j}}}, E_{e}\}$ do not form a LISF for each $e\in I_{2} - I_1$. Fix $e\in I_{2} - I_1$. Then there exist vectors ${\bf v}_{l}\in E_{i_{a_{l}}}$, where $1\leq l \leq j$, and ${\bf x}\in E_{e}$ such that ${\bf v}_1, \ldots , {\bf v}_{j}, {\bf x}$ are linearly dependent in ${\mathbb V}$. Hence there exist ${\alpha}_{x}, {\alpha}_{l} \in \mathbb{F}$, $1\leq l \leq j$, not all zero, such that ${\alpha}_1 {\bf v}_{1} + \ldots + {\alpha}_{j} {\bf v}_{j} + {\alpha}_{x} {\bf x}=0$. Moreover, ${\alpha}_{x} \neq 0$: otherwise this would be a nontrivial linear relation among ${\bf v}_{1}, \ldots , {\bf v}_{j}$ alone, contradicting the fact that $\{E_{i_{a_{1}}}, \ldots , E_{i_{a_{j}}}\}$ is a LISF. This means that ${\bf x}= \left(-{\alpha}_1 {\alpha}_{x}^{-1}\right) {\bf v}_{1} + \ldots + \left(-{\alpha}_{j} {\alpha}_{x}^{-1}\right) {\bf v}_{j}$. For every vector ${\bf w} \in E_{e}$ we have ${\bf w}= \beta {\bf x}= \left(-{\alpha}_1 \beta {\alpha}_{x}^{-1}\right) {\bf v}_{1} + \ldots + \left(-{\alpha}_{j}\beta {\alpha}_{x}^{-1}\right) {\bf v}_{j}$ for some $\beta \in {\mathbb F}$, because $E_{e}\subset {\mathbb{W}}_{e}$ and ${\mathbb{W}}_{e}$ is a one-dimensional subspace of ${\mathbb V}$. Thus the subspace ${\mathbb{U}}_{e}$ spanned by the sets $E_{i_{a_{1}}}, E_{i_{a_{2}}}, \ldots$, $E_{i_{a_{j}}}, E_{e}$ is contained in the subspace $\mathbb{Y}$ spanned by $E_{i_{a_{1}}}, E_{i_{a_{2}}}, \ldots , E_{i_{a_{j}}}$, for each $e \in I_{2} - I_1$. Consequently, the subspace $\mathbb{X}$ spanned by $E_{i_{a_{1}}}, E_{i_{a_{2}}}, \ldots , E_{i_{a_{j}}}, E_{i_{b_{1}}}, E_{i_{b_{2}}}, \ldots ,$ $E_{i_{b_{k}}}$ is also contained in $\mathbb{Y}$. Since choosing one vector from each $E_{i_{b_{r}}}$ yields $|I_{2}|$ linearly independent vectors of $\mathbb{X}$, while $\dim(\mathbb{Y}) \leq |I_{1}|$ because each $E_{i_{a_{l}}}$ is contained in a one-dimensional subspace, it follows that $|I_{2}| \leq \dim(\mathbb{X}) \leq \dim(\mathbb{Y}) \leq |I_{1}| < |I_{2}|$, which is a contradiction. Therefore the ordered pair $(S, \mathcal{I})$ is a matroid.
\end{proof}

In the following corollaries of Theorem~\ref{main}, one can generate more families of matroids:

\begin{corollary}\label{cor1}
Suppose that $n\geq 1$ and $l\geq 1$ are integers. Let $E_1, E_{2}, \ldots , E_{n}$ be subsets of a finite dimensional vector space ${\mathbb V}$ such that $E_{i} \subset {\mathbb{W}}_{i}$ for all $i=1, \ldots , n$, where ${\mathbb{W}}_{i}$ are one-dimensional subspaces of ${\mathbb V}$. Consider the multiset of labels $S =\{ 1, \ldots , n\}$, and let $\mathcal{I}$ be the set of subsets $I = \{ i_1, \ldots , i_{j}\}$ of $S$ for which $\{{\lambda_{i_{1}}}E_{i_{1}}, {\lambda_{i_{2}}}E_{i_{2}}, \ldots , {\lambda_{i_{j}}}E_{i_{j}}\}$ form a LISF, where ${\lambda_{i_{r}}} \neq 0$ for all $r=1, \ldots, j$. Then the ordered pair $(S, \mathcal{I})$ is a matroid.
\end{corollary}

\begin{proof}
It follows from the fact that $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{j}}\}$ is a LISF if and only if $\{{\lambda_{i_{1}}}E_{i_{1}}, {\lambda_{i_{2}}}E_{i_{2}}, \ldots , {\lambda_{i_{j}}}E_{i_{j}}\}$ is a LISF, where ${\lambda_{i_{r}}} \neq 0$ for all $r=1, \ldots, j$.
\end{proof}

\begin{corollary}\label{cor2}
Suppose that $n\geq 1$ and $l\geq 1$ are integers and let $E_1, E_{2}, \ldots , E_{n}$ be subsets of a finite dimensional vector space ${\mathbb V}$ such that $E_{i} \subset {\mathbb{W}}_{i}$ for all $i=1, \ldots , n$, where ${\mathbb{W}}_{i}$ are one-dimensional subspaces of ${\mathbb V}$. Assume that $S =\{ 1, \ldots , n\}$ is the multiset of labels and that $\mathcal{I}$ is the set of subsets $I = \{ i_1, \ldots , i_{j}\}$ of $S$ such that $\{T(E_{i_{1}}), T(E_{i_{2}}), \ldots , T(E_{i_{j}})\}$ form a LISF, where $T$ is an arbitrary (fixed) isomorphism of ${\mathbb V}$. Then the ordered pair $(S, \mathcal{I})$ is a matroid.
\end{corollary}

\begin{proof}
This is true due to the fact that $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{j}}\}$ form a LISF if and only if $\{T(E_{i_{1}}), T(E_{i_{2}}), \ldots , T(E_{i_{j}})\}$ form a LISF.
\end{proof}

\begin{corollary}\label{cor3}
Suppose that $n\geq 1$ and $l\geq 1$ are integers. Let $E_1, E_{2}, \ldots , E_{n}$ be subsets of a finite dimensional vector space ${\mathbb V}$ such that $E_{i} \subset {\mathbb{W}}_{i}$ for all $i=1, \ldots , n$, where ${\mathbb{W}}_{i}$ are one-dimensional subspaces of ${\mathbb V}$. Consider the multiset of labels $S =\{ 1, \ldots , n\}$ and let $\mathcal{I}$ be the set of subsets $I = \{ i_1, \ldots , i_{j}\}$ of $S$ such that $\{E_{i_{1}}\cup (-E_{i_{1}}), E_{i_{2}}\cup (-E_{i_{2}}), \ldots , E_{i_{j}}\cup (-E_{i_{j}})\}$ form a LISF. Then the ordered pair $(S, \mathcal{I})$ is a matroid.
\end{corollary}

\begin{proof}
This follows from the fact that $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{j}}\}$ is a LISF if and only if $\{E_{i_{1}}\cup (-E_{i_{1}}), E_{i_{2}}\cup (-E_{i_{2}}), \ldots , E_{i_{j}}\cup (-E_{i_{j}})\}$ is a LISF.
\end{proof}

In the following examples, we present two families of sets for which the associated pair $(S, \mathcal{I})$ is not a matroid:

\begin{example}\label{exa1}
Consider the (real) vector space ${\mathbb R}^{2}$ and the following subsets $E_1 , E_{2}, E_3$ of ${\mathbb R}^{2}$:
$E_1 =\{ (x, y) \in {\mathbb R}^{2} | {(x-1)}^{2}+{(y-1)}^{2}\leq 1 \}$;
$E_{2} =\{ (x, y) \in {\mathbb R}^{2} | {(x-1)}^{2}+ y^{2}\leq 1 \} \setminus \{(0, 0)\}$;
$E_{3} =\{ (x, y) \in {\mathbb R}^{2} | {(x-1)}^{2}+ {(y+1)}^{2}\leq 1/9 \}$.
Assume that $S=\{ 1, 2, 3\}$ and consider that $I = \{ i_1, \ldots , i_{j}\} \in \mathcal{I}$ if and only if the sets $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{j}}\}$ form a LISF. By construction, and applying the same notation as in Theorem~\ref{main}, one has $\mathcal{I}=\{\emptyset, \{1\}, \{2\}, \{3\}, \{1, 3\} \}$. But since $|\{2\}|<|\{1, 3\}|$, if $(S, \mathcal{I})$ were a matroid, (I.3) would imply that $\{1, 2\} \in \mathcal{I}$ or $\{2, 3\} \in \mathcal{I}$, a contradiction.
\end{example}

\begin{example}\label{exa2}
Consider now the (real) vector space ${\mathbb R}^{3}$ and the following subsets $E_1 , E_{2}, E_3$ of ${\mathbb R}^{3}$:
$E_1 =\{ (x, y, z) \in {\mathbb R}^{3} | (x, y, z)= (0, 0, 0)+ a(1, 0, 0)+ b(0, 1, 0); a, b \in {\mathbb R} \} - \{ (0, y, 0)| y \in {\mathbb R} \}$;
$E_{2} =\{ (x, y, z) \in {\mathbb R}^{3} | (x, y, z)= (0, 0, 0)+ c(0, 0, 1)+ d(1, 1, 0); c, d \in {\mathbb R} \} - \{(0, 0, 0)\}$;
$E_{3} =\{ (x, y, z) \in {\mathbb R}^{3} | (x, y, z)= (0, 0, 0)+ e(0, 1, 0)+ f(0, 0, 1); e, f \in {\mathbb R} \} - \{ (0, y, 0)| y \in {\mathbb R} \}$.
As in the previous example, if $S=\{ 1, 2, 3\}$, by construction we have $\mathcal{I}=\{\emptyset, \{1\}, \{2\}, \{3\}, \{1, 3\} \}$.
However, since $|\{2\}|<|\{1, 3\}|$, if $(S, \mathcal{I})$ were a matroid, (I.3) would imply that $\{1, 2\} \in \mathcal{I}$ or $\{2, 3\} \in \mathcal{I}$, a contradiction.
\end{example}

However, under suitable hypotheses one can obtain the following:

\begin{theorem}\label{main-multdim}
Let ${\mathbb F}$ be a field of characteristic zero. Assume that ${\mathbb V}={\mathbb{W}}_{1}\oplus\ldots \oplus {\mathbb{W}}_{k}$ is a vector space over ${\mathbb F}$ that is the direct sum of $n$-dimensional subspaces ${\mathbb{W}}_{i}$, $i=1, \ldots , k$. Let $E_{1}, \ldots , E_{m}$ be subsets of ${\mathbb V}$ such that, for each $i=1, \ldots , m$, $E_{i}$ contains (with the exception of the zero vector) an $n_{E_{i}}$-dimensional subspace of ${\mathbb V}$, where $\lceil n/2 \rceil + 1 \leq n_{E_{i}} \leq n$, and $E_{i}\subset {\mathbb{W}}_{i^{*}}$ for some $1\leq i^{*}\leq k$. If $S=\{ 1, \ldots, m\}$ is the multiset of labels and $\mathcal{I}$ is the set of subsets $I = \{ i_1, \ldots , i_{r}\}$ of $S$ such that $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{r}}\}$ form a LISF, then the ordered pair $(S, \mathcal{I})$ is a matroid.
\end{theorem}

\begin{proof}
Obviously (I.1) and (I.2) are satisfied. We will prove (I.3). To this end, assume that $I_1 , I_{2} \in \mathcal{I}$ with $|I_1| < |I_{2}|$, where $I_1 = \{ i_1, \ldots, i_{s} \}$ and $I_2 = \{ j_1, \ldots, j_{t} \}$, $s < t$. Thus the sets $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{s}}\}$ form a LISF and, by the hypotheses, each of these sets is contained in a distinct ${\mathbb{W}}_{i}$ (two of them cannot lie in the same ${\mathbb{W}}_{i}$, since the subspaces they contain would have dimensions summing to more than $n$, hence would share a nonzero vector, violating the LISF property). These facts also hold for the sets $\{E_{j_{1}}, E_{j_{2}}, \ldots , E_{j_{t}}\}$ corresponding to $I_2$. Suppose without loss of generality (w.l.o.g.) that $E_{i_{1}}\subset {\mathbb{W}}_{1},$ $\ldots,$ $E_{i_{s}} \subset {\mathbb{W}}_{s}$. Since $|I_1| < |I_{2}|$, there exists ${\mathbb{W}}_{s^{*}}\neq {\mathbb{W}}_1, {\mathbb{W}}_{2}, \ldots, {\mathbb{W}}_{s}$ such that $E_{j_{s+1}} \subset {\mathbb{W}}_{s^{*}}$ (relabelling the elements of $I_2$ if necessary). This is possible because each of the sets $E_{j_{1}}, E_{j_{2}}, \ldots , E_{j_{t}}$ is contained in a distinct ${\mathbb{W}}_{i}$ and $s<t$. Consider the sets $E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{s}}, E_{j_{s+1}}$. For every choice of vectors ${\bf v}_{i_{1}}\in E_{i_{1}},$ \ $\ldots$ \ $,{\bf v}_{i_{s}}\in E_{i_{s}}$ and ${\bf v}_{j_{s+1}}\in E_{j_{s+1}}$, we claim that the vectors ${\bf v}_{i_{1}}, \ldots, {\bf v}_{i_{s}}, {\bf v}_{j_{s+1}}$ are linearly independent. In fact, seeking a contradiction, we assume that these vectors are linearly dependent. W.l.o.g., suppose that $a_{i_{1}} {\bf v}_{i_{1}}+ \ldots + a_{i_{s}} {\bf v}_{i_{s}}+ b_{j_{s+1}} {\bf v}_{j_{s+1}}=0$, where $a_{i_{1}}, \ldots, a_{i_{s}}, b_{j_{s+1}} \in {\mathbb F}$, and $b_{j_{s+1}}\neq 0$; then $\left(a_{i_{1}} b_{j_{s+1}}^{-1}\right) {\bf v}_{i_{1}}+ \ldots + \left(a_{i_{s}} b_{j_{s+1}}^{-1}\right) {\bf v}_{i_{s}}+ {\bf v}_{j_{s+1}}=0$. Since $\left(a_{i_{1}} b_{j_{s+1}}^{-1}\right) {\bf v}_{i_{1}}\in {\mathbb{W}}_{1}, \ldots , \left(a_{i_{s}} b_{j_{s+1}}^{-1}\right) {\bf v}_{i_{s}}\in {\mathbb{W}}_{s}$ and ${\bf v}_{j_{s+1}}\in {\mathbb{W}}_{s^{*}}$, and the sum of the ${\mathbb{W}}_{i}$ is direct, one has ${\bf v}_{j_{s+1}}=0$, which is a contradiction. The cases in which $a_{i_{1}} {\bf v}_{i_{1}}+ \ldots + a_{i_{s}} {\bf v}_{i_{s}}+ b_{j_{s+1}} {\bf v}_{j_{s+1}}=0$, where $a_{i_{1}}, a_{i_{2}}, \ldots, a_{i_{s}}, b_{j_{s+1}} \in {\mathbb F}$, and $a_{i_{l}}\neq 0$ for some $l=1, \ldots, s$, are analogous.
Thus the sets $\{E_{i_{1}}, E_{i_{2}}, \ldots , E_{i_{s}}, E_{j_{s+1}} \}$ form a LISF, so $I_1 \cup \{ j_{s+1}\} \in \mathcal{I} $, where $j_{s+1}\in I_{2}-I_1$. Therefore, the ordered pair $(S, \mathcal{I})$ is a matroid and the proof is complete. \end{proof} \section{Summary} We have constructed new families of matroids derived from linearly independent set families. The presented construction generalizes in a natural way the class of vectorial matroids over a field. \section*{Acknowledgment} This research was partially supported by the Brazilian Agencies CAPES and CNPq. \end{document}
\begin{document} \title[Quantizing Braids and Other Mathematical Objects]{Quantizing Braids and Other Mathematical Objects: \ The General Quantization Procedure} \author{Samuel J. Lomonaco} \address{University of Maryland Baltimore County (UMBC)\\ Baltimore, MD \ 21250 \ \ USA} \email{[email protected]} \urladdr{http://www.csee.umbc.edu/\symbol{126}lomonaco} \author{Louis H. Kauffman} \address{University of Illinois at Chicago\\ Chicago, IL \ 60607-7045 \ \ USA} \email{[email protected]} \urladdr{http://www.math.uic.edu/\symbol{126}kauffman} \date{April 3, 2010} \subjclass[2000]{Primary 81P68, 57M25, 81P15, 57M27; Secondary 20C35} \keywords{Quantum Braids, Braid Group, Quantization, Motif, Quantum Motif, Knot Theory, Quantum Computation, Quantum Algorithms} \begin{abstract} Extending the methods from our previous work on quantum knots and quantum graphs, we describe a general procedure for quantizing a large class of mathematical structures which includes, for example, knots, graphs, groups, algebraic varieties, categories, topological spaces, geometric spaces, and more. \ This procedure is different from that normally found in quantum topology. \ We then demonstrate the power of this method by using it to quantize braids. This general method produces a blueprint of a quantum system which is physically implementable in the same sense that Shor's quantum factoring algorithm is physically implementable. \ Mathematical invariants become objects that are physically observable. \end{abstract} \maketitle \tableofcontents \section{Introduction} Extending the methods found in previous work \cite{Lomonaco1, Lomonaco2} on quantum knots and quantum graphs, we describe a general procedure for quantizing a large class of mathematical structures which includes, for example, knots, graphs, groups, algebraic varieties, categories, topological spaces, geometric spaces, and more. \ This procedure is different from that normally found in quantum topology. \ We then demonstrate the power of this method by using it to quantize braids. We should also mention that this general method produces a blueprint of a quantum system which is physically implementable in the same sense that Shor's quantum factoring algorithm is physically implementable. \ Moreover, mathematical invariants become objects that are physically observable. The above mentioned general quantization procedure consists of two steps: \begin{itemize} \item[\textbf{Step 1.}] Mathematical construction of a motif system $\mathcal{S}$ , and \item[\textbf{Step 2.}] Mathematical construction of a quantum motif system $\mathcal{Q}$ from the system $\mathcal{S}$ . \end{itemize} \noindent\textbf{Caveat.} The term "motif" used in this paper should not be confused with the use of the term "motive" (a.k.a., "motif") found in algebraic geometry. \section{Part I. A General Procedure for Quantizing Mathematical Structures} We now outline a general procedure for quantizing mathematical structures. \ One useful advantage to this quantization procedure is that the resulting system is a multipartite quantum system, a property that is of central importance in quantum computation, particularly in regard to the design of quantum algorithms. \ In a later section of this paper, we illustrate this quantization procedure by using it to quantize braids. Examples of the application of this quantization procedure to knots, graphs, and algebraic structures can be found in \cite{Lomonaco1, Lomonaco2, Kauffman1}. \subsection{Stage 1. 
\ Construction of a motif system $\mathcal{S}_{n}$}

\qquad Let
\[
T=\left\{ t_{0},t_{1},\ldots,t_{\ell-1}\right\}
\]
be a finite set of symbols, with a distinguished element $t_{0}$, called the \textbf{trivial symbol}, and with a linear ordering denoted by `$<$'. Let
\[
T^{\times N}
\]
be the $N$-fold Cartesian product of $T$ with an induced LEX ordering also denoted by `$<$', and let $S\left( N\right) $ be the group of all permutations of $T^{\times N}$. \ \ For positive integers $N$ and $N^{\prime}$ ($N<N^{\prime}$), let
\[
\iota:T^{\times N}\longrightarrow T^{\times N^{\prime}}
\]
be the injection defined by
\[
\left( t_{j(0)},t_{j(1)},t_{j(2)},\ldots,t_{j(N-1)}\right) \longmapsto \left( t_{j(0)},t_{j(1)},t_{j(2)},\ldots,t_{j(N-1)},\overset{N^{\prime }-N}{\overbrace{t_{0},t_{0},t_{0},\ldots,t_{0}}}\right)
\]
Next, let
\[
N_{0}<N_{1}<N_{2}<\ldots
\]
be a monotone strictly increasing infinite sequence of positive integers. For each non-negative integer $n$, let $M^{(n)}$ be a subset of $T^{\times N_{n}}$ such that $\iota\left( M^{(n)}\right) $ lies in $M^{(n+1)}$, i.e., $\iota\left( M^{(n)}\right) \subset M^{(n+1)}$. \ Moreover, for each non-negative integer $n$, let $A(n)$ be a subgroup of the permutation group $S\left( N_{n}\right) $ having $M^{(n)}$ as an invariant subset, and such that the injection $\iota:T^{\times N_{n}}\longrightarrow T^{\times N_{n+1}}$ induces a monomorphism $\iota:A\left( n\right) \longrightarrow A\left( n+1\right) $, also denoted by $\iota$.

We define a \textbf{motif system} $\mathcal{S}_{n}=\mathcal{S}\left( M^{(n)},A(n)\right) $ of \textbf{order} $n$ as the pair $\left( M^{(n)},A(n)\right) $, where $M^{(n)}$ is called the \textbf{set of motifs}, and where $A(n)$ is called the \textbf{ambient group}. \ Finally, we define a \textbf{nested motif system} $\mathcal{S}_{\ast }=\mathcal{S}_{\ast}\left( M^{(\ast)},A(\ast)\right) $ as the following sequence of sets, groups, injections, and monomorphisms:
\[
\mathcal{S}_{1}\left( M^{(1)},A(1)\right) \overset{\iota}{\longrightarrow }\mathcal{S}_{2}\left( M^{(2)},A(2)\right) \overset{\iota}{\longrightarrow }\cdots\overset{\iota}{\longrightarrow}\mathcal{S}_{n}\left( M^{(n)} ,A(n)\right) \overset{\iota}{\longrightarrow}\cdots
\]

\begin{remark}
There is also one more symbolic motif system that is often of use, the \textbf{direct limit motif system} defined by
\[
\mathcal{S}_{\infty}\left( M^{(\infty)},A(\infty)\right) =\lim _{\longrightarrow}\mathcal{S}_{\ast}\left( M^{(\ast)},A(\ast)\right) \text{ ,}
\]
where $\lim\limits_{\longrightarrow}$ denotes the direct limit.
\end{remark}

\subsection{Stage 2. Motif equivalence and motif invariants}

\qquad Let $\mathcal{S}_{n}=\mathcal{S}\left( M^{(n)},A(n)\right) $ be a motif system of order $n$. Two motifs $m_{1}$ and $m_{2}$ of the set $M^{(n)}$ are said to be of the \textbf{same }$n$\textbf{-motif type}, written
\[
m_{1}\underset{n}{\sim}m_{2}\text{ ,}
\]
if there exists an element $g$ of the ambient group $A(n)$ which takes $m_{1}$ to $m_{2}$, i.e., such that
\[
gm_{1}=m_{2}\text{ .}
\]
The motifs $m_{1}$ and $m_{2}$ are said to be of the \textbf{same motif type}, written
\[
m_{1}\sim m_{2}\text{ ,}
\]
if there exists a non-negative integer $k$ such that
\[
\iota^{k}m_{1}\underset{n+k}{\sim}\iota^{k}m_{2}\text{ .}
\]
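To make these notions concrete, the following toy Python sketch (our own illustration; the symbol set, the length $N$, and the choice of ambient group are purely for demonstration and are not part of the formal construction) represents motifs as tuples over a finite symbol set, implements the injection $\iota$ by padding with the trivial symbol, and checks $n$-motif equivalence under a small ambient group of permutations of $T^{\times N}$ induced by cyclic shifts of positions.

\begin{verbatim}
from itertools import product

# Toy motif system (illustrative choices): symbols T = {0, 1} with trivial
# symbol t0 = 0, motifs of length N = 3, and, as ambient group, the
# permutations of T^N induced by cyclic shifts of the positions.
T = (0, 1)
N = 3
motifs = set(product(T, repeat=N))                  # the set T^{x N}

def cyclic_shift(m, k):
    """The permutation of T^N induced by cyclically shifting positions by k."""
    return tuple(m[(i + k) % N] for i in range(N))

ambient_group = [lambda m, k=k: cyclic_shift(m, k) for k in range(N)]

def same_n_type(m1, m2):
    """m1 ~_n m2 iff some g in the ambient group maps m1 to m2."""
    return any(g(m1) == m2 for g in ambient_group)

def iota(m, n_prime):
    """Injection T^{x N} -> T^{x N'}: pad on the right with the trivial symbol."""
    return m + (0,) * (n_prime - len(m))

print(same_n_type((1, 0, 0), (0, 0, 1)))            # True: related by a shift
print(same_n_type((1, 0, 0), (1, 1, 0)))            # False
print(iota((1, 0, 0), 5))                           # (1, 0, 0, 0, 0)
\end{verbatim}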
We now wish to answer the question:

\noindent\textbf{Question}. \ \textit{What is meant by a motif invariant?}

\begin{definition}
Let $\mathcal{S}_{n}=\mathcal{S}\left( M^{(n)},A(n)\right) $ be a motif system, and let $\mathbb{D}$ be some yet to be chosen mathematical domain. \ By an $n$-\textbf{motif invariant} $I^{(n)}$, we mean a map
\[
I^{(n)}:M^{(n)}\longrightarrow\mathbb{D}
\]
such that, when two motifs $m_{1}$ and $m_{2}$ are of the same $n$-type, i.e.,
\[
m_{1}\underset{n}{\sim}m_{2}\text{ , }
\]
then their respective invariants must be equal, i.e.,
\[
I^{(n)}\left( m_{1}\right) =I^{(n)}\left( m_{2}\right) \text{ .}
\]
In other words, $I^{(n)}:M^{(n)}\longrightarrow\mathbb{D}$ is a map that is invariant under the action of the ambient group $A(n)$, i.e.,
\[
I^{(n)}\left( m\right) =I^{(n)}\left( gm\right)
\]
for all elements $g$ of $A(n)$.
\end{definition}

\subsection{Stage 3. Construction of the corresponding quantum motif systems $\mathcal{Q}_{n}$}

\quad We now use the nested motif system $\mathcal{S}_{\ast}$ to construct a nested sequence of quantum motif systems $\mathcal{Q}_{\ast}$. For each non-negative integer $n,$ the corresponding $n$\textbf{-th order} \textbf{quantum motif system}
\[
\mathcal{Q}_{n}=\mathcal{Q}\left( \mathcal{M}^{(n)},\mathcal{A}(n)\right)
\]
consists of a Hilbert space $\mathcal{M}^{(n)}$, called the \textbf{quantum motif space}, and a group $\mathcal{A}(n)$, also called the \textbf{ambient group}.\ The quantum motif space $\mathcal{M}^{(n)}$ and the ambient group $\mathcal{A}(n)$ are defined as follows:

\begin{itemize}
\item The \textbf{quantum motif space} $\mathcal{M}^{(n)}$ is the Hilbert space with orthonormal basis
\[
\left\{ \ \left\vert m\right\rangle :m\in M^{(n)}\ \right\} \text{ .}
\]
The elements of $\mathcal{M}^{(n)}$ are called \textbf{quantum motifs}.

\item The \textbf{ambient group} $\mathcal{A}(n)$ is the group of unitary transformations of the Hilbert space $\mathcal{M}^{(n)}$ consisting of all linear transformations of the form
\[
\left\{ \ \widetilde{g}:\mathcal{M}^{(n)}\longrightarrow\mathcal{M} ^{(n)}:g\in A(n)\ \right\} \text{ ,}
\]
where $\widetilde{g}$ is the linear transformation defined by
\[
\begin{array} [c]{rcc} \widetilde{g}:\mathcal{M}^{(n)} & \longrightarrow & \mathcal{M}^{(n)}\\ \left\vert m\right\rangle \quad & \longmapsto & \left\vert gm\right\rangle \end{array}
\]
Since each element $g$ in $A(n)$ is a permutation, each $\widetilde{g}$ permutes the orthonormal basis $\left\{ \ \left\vert m\right\rangle :m\in M^{(n)}\ \right\} $ of $\mathcal{M}^{(n)}$. \ Hence, $\widetilde{g}$ is automatically a unitary transformation. \ It follows that $A(n)$ and $\mathcal{A}(n)$ are isomorphic as groups. \ We will often abuse notation by denoting $\widetilde{g}$ by $g$, and $\mathcal{A}(n)$ by $A(n)$.
\end{itemize} Next, for each non-negative integer $n$, let \[ \iota:\mathcal{M}^{(n)}\longrightarrow\mathcal{M}^{(n+1)} \] and \[ \iota:\mathcal{A}(n)\longrightarrow\mathcal{A}(n+1) \] respectively denote the Hilbert space monomorphism and the group monomorphism induced by the injection \[ \iota:M^{(n)}\longrightarrow M^{(n+1)} \] and the group monomorphism \[ \iota:A(n)\longrightarrow A(n+1)\text{ .} \] Finally, we define the\textbf{ nested quantum motif system} $\mathcal{Q} _{\ast}=\mathcal{Q}_{\ast}\left( \mathcal{M}^{(\ast)},\mathcal{A} (\ast)\right) $ as the following sequence of Hilbert spaces, groups, Hilbert space monomorphisms, and group monomorphisms: \[ \mathcal{Q}_{1}\left( \mathcal{M}^{(1)},\mathcal{A}(1)\right) \overset{\iota }{\longrightarrow}\mathcal{Q}_{2}\left( \mathcal{M}^{(2)},\mathcal{A} (2)\right) \overset{\iota}{\longrightarrow}\cdots\overset{\iota }{\longrightarrow}\mathcal{Q}_{n}\left( \mathcal{M}^{(n)},\mathcal{A} (n)\right) \overset{\iota}{\longrightarrow}\cdots \] \begin{remark} We should also mention one other quantum motif system that can be useful, namely, the\textbf{ quantum direct limit motif system} defined by \[ \mathcal{Q}_{\infty}=\mathcal{Q}_{\infty}\left( \mathcal{M}^{(\infty )},\mathcal{A}(\infty)\right) =\lim_{\longrightarrow}\mathcal{Q}_{\ast }\left( \mathcal{M}^{(\ast)},\mathcal{A}(\ast)\right) \text{ ,} \] where $\lim\limits_{\longrightarrow}$ denotes the direct limit. \ This quantum system is often also physically implementable. \end{remark} \subsection{Stage 4. Quantum motif equivalence} Let $\mathcal{Q}_{n}=\mathcal{Q}\left( \mathcal{M}^{(n)},\mathcal{A} (n)\right) $ be a quantum motif system of order $n$. Two quantum motifs $\left\vert \psi_{1}\right\rangle $ and $\left\vert \psi_{2}\right\rangle $ of the Hilbert space $\mathcal{M}^{(n)}$ are said to be of the \textbf{same }$n$\textbf{-motif type}, written \[ \left\vert \psi_{1}\right\rangle \underset{n}{\sim}\left\vert \psi _{2}\right\rangle \text{ ,} \] if there exists an element $g$ of the ambient group $\mathcal{A}(n)$ which takes $\left\vert \psi_{1}\right\rangle $ to $\left\vert \psi_{2}\right\rangle $, i.e., such that \[ g\left\vert \psi_{1}\right\rangle =\left\vert \psi_{2}\right\rangle \text{ .} \] The quantum motifs $\left\vert \psi_{1}\right\rangle $ and $\left\vert \psi_{2}\right\rangle $ are said to be of the \textbf{same motif type}, written \[ \left\vert \psi_{1}\right\rangle \sim\left\vert \psi_{2}\right\rangle \text{ ,} \] if there exists a non-negative integer $m$ such that \[ \iota^{m}\left\vert \psi_{1}\right\rangle \underset{n+m}{\sim}\iota ^{m}\left\vert \psi_{2}\right\rangle \text{ .} \] \subsection{Stage 5. Motif invariants as quantum observables} \qquad We consider the following question: \noindent\textbf{Question:} \ \textit{What do we mean by a physically observable quantum motif invariant?} We answer this question with a definition. \begin{definition} Let $\mathcal{Q}_{n}=\mathcal{Q}\left( \mathcal{M}^{(n)},\mathcal{A} (n)\right) $ be a quantum motif system of order $n$, and let $\Omega$ be an observable, i.e., a Hermitian operator on the Hilbert space $\mathcal{M} ^{(n)}$ of quantum motifs. \ Then $\Omega$ is a \textbf{quantum motif } $n$\textbf{-invariant} provided $\Omega$ is left invariant under the big adjoint action of the ambient group $\mathcal{A}(n)$, i.e., provided \[ U\Omega U^{-1}=\Omega \] for all $U$ in $\mathcal{A}(n)$. 
\end{definition} \begin{proposition} If \[ I^{(n)}:M^{(n)}\longrightarrow\mathbb{R} \] is a real valued $n$-motif invariant, then \[ \Omega= {\displaystyle\sum\limits_{m\in M^{(n)}}} I^{(n)}\left( m\right) \left\vert m\right\rangle \left\langle m\right\vert \] is a quantum motif observable which is a quantum motif $n$-invariant. \end{proposition} Much more can be said about this topic. \ \ For a more in-depth discussion of this issue, we refer the reader to \cite{Lomonaco1, Lomonaco2}. \section{Part II. Quantizing Braids} We now illustrate the quantization procedure defined above by using it to quantize braids. \subsection{Stage 1. The set of braid mosaics $\mathbb{B}^{(n,\ell)}$} \qquad For each integer $n\geq2$, let $\mathbb{T}^{(n)}$ denote the following set of the $2n-1$ symbols \[ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtmmn.jpg} } ,\cdots\text{,}\ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtm2n.jpg} } ,\ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtm1n.jpg} } ,\ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rt0n.jpg} } ,\ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtp1n.jpg} } ,\ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtp2n.jpg} } ,\cdots\text{,}\ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtpmn.jpg} } \] called $\mathbf{n}$-\textbf{stranded braid tiles}, or $n$-\textbf{tiles}, or simply \textbf{tiles}. \ We also denote these tiles respectively by the symbols \[ b_{-(n-1)},\ldots,b_{-2},b_{-1},b_{0}=1,b_{1},b_{2},\ldots,b_{n-1}\text{ ,} \] as indicated in the table given below: \[ \begin{tabular} [c]{c} \begin{tabular} [c]{||c||c||c||c||c||c||c||c||c||}\hline\hline $ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtmmn.jpg} } $ & $\cdots$ & $ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtm2n.jpg} } $ & $ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtm1n.jpg} } $ & $\overset{}{ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rt0n.jpg} } }$ & $ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtp1n.jpg} } $ & $ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtp2n.jpg} } $ & $\cdots$ & $ {\includegraphics[ natheight=6.000100in, natwidth=3.000000in, height=0.627in, width=0.3269in ] {rtpmn.jpg} } $\\\hline $b_{-(n-1)}$ & $\cdots$ & $b_{-2}$ & $b_{-1}$ & $b_{0}=1$ & $\overset{}{\underset{}{b_{1}}}$ & $b_{2}$ & $\cdots$ & $b_{n-1} $\\\hline\hline \end{tabular} \\ $n$\textbf{-stranded braid tiles} \end{tabular} \ \ \] \begin{definition} An $\left( \mathbf{n},\mathbf{\ell}\right) $\textbf{-braid mosaic} $\beta$ is defined as a sequence of $n$-stranded braid tiles \[ \beta=b_{j(1)}b_{j(2)}\ldots b_{j(\ell)} \] of length $\ell$. \ We let $\mathbb{B}^{(n,\ell)}$ denote the \textbf{set of all }$\left( n,\mathbf{\ell}\right) $\textbf{-braid mosaics}. 
\end{definition} An example of a $(3,8)$-braid mosaic is given below \[ \begin{tabular} [c]{c} $ \begin{array} [c]{cccccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \\ 1 & b_{-1} & b_{1} & b_{2} & 1 & 1 & b_{-1} & b_{2} \end{array} $\\ The $\left( 3,8\right) $-braid mosaic $\beta=1b_{-1}b_{1}b_{2}11b_{-1}b_{2}$ \end{tabular} \ \ \ \] \begin{remark} Please note that the set of all $\left( n,\ell\right) $-braid mosaics $\mathbb{B}^{(n,\ell)}$ is a finite set of cardinality $\left( 2n-1\right) ^{\ell}$. \end{remark} \subsection{Stage 1 (Cont.) Braid mosaic moves} \begin{definition} Let $\ell^{\prime}$ and $\ell$ be positive integers such that $\ell^{\prime }\leq\ell$. \ An $\left( n,\ell^{\prime}\right) $-braid mosaic $\gamma$ is is said to be an $\left( n,\ell^{\prime}\right) $\textbf{-braid submosaic} of an $\left( n,\ell\right) $-braid mosaic $\beta$ provided $\gamma$ is a subsequence of consecutive tiles of $\beta$. \ The $\left( n,\ell^{\prime }\right) $-braid submosaic $\gamma$ is said to be at \textbf{position} $p$ in $\beta$ if the first (leftmost) tile of $\gamma$ is the $p$-th tile of $\beta$ from the left. \ We denote the $\left( n,\ell^{\prime}\right) $-braid submosaic $\gamma$ of $\beta$ at location $p$ by $\gamma=\beta^{p:\ell ^{\prime}}$. \end{definition} \begin{remark} The number of $\left( n,\ell^{\prime}\right) $-braid submosaics of an $\left( n,\ell\right) $-braid mosaic $\beta$ is $\ell-\ell^{\prime}+1$. 
\end{remark} Two examples of braid submosaics of the $\left( 3,8\right) $-braid mosaic $\beta=1b_{-1}b_{1}b_{2}11b_{-1}b_{2}$ are given above are: \[ \begin{tabular} [c]{c} $ \begin{array} [c]{ccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \\ b_{-1} & b_{1} & b_{2} \end{array} $\\ \textbf{The }$\left( 3,3\right) $\textbf{-braid submosaic}\\ $\beta^{2:3}$ \textbf{of }$\beta$\textbf{ at position 2} \end{tabular} \ \ \ \ \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ } \begin{tabular} [c]{c} $ \begin{array} [c]{cccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \\ 1 & 1 & b_{-1} & b_{2} \end{array} $\\ \textbf{The }$\left( 3,4\right) $\textbf{-braid submosaic}\\ $\beta^{5:4}$ \textbf{of }$\beta$\textbf{ at position 5} \end{tabular} \ \ \ \ \] \begin{definition} Let $\ell^{\prime}$ and $\ell$ be positive integers such that $\ell^{\prime }\leq\ell$. \ For any two $\left( n,\ell^{\prime}\right) $-braid mosaics $\gamma$ and $\gamma^{\prime}$, we define the $\ell^{\prime}$-\textbf{braid mosaic move} \textbf{at location} $p$ on the set of all $\left( n,\ell\right) $-braid mosaics $\mathbb{B}^{\left( n,\ell\right) }$, denoted by \[ \gamma\overset{p}{\leftrightarrow}\gamma^{\prime}\text{ ,} \] as the map defined by \[ \left( \gamma\overset{p}{\leftrightarrow}\gamma^{\prime}\right) \left( \beta\right) =\left\{ \begin{array} [c]{ll} \beta\text{ with }\beta^{p:\ell^{\prime}}\text{ replaced by }\gamma^{\prime} & \text{if }\beta^{p:\ell^{\prime}}=\gamma\\ & \\ \beta\text{ with }\beta^{p:\ell^{\prime}}\text{ replaced by }\gamma & \text{if }\beta^{p:\ell^{\prime}}=\gamma^{\prime}\\ & \\ \beta & \text{otherwise} \end{array} \right\vert \] \end{definition} As an example, consider the $2$-braid mosaic move $\gamma \overset{3}{\leftrightarrow}\gamma^{\prime}$ at position $3$ defined by \[ \gamma\overset{3}{\leftrightarrow}\gamma^{\prime}\qquad=\qquad \begin{array} [c]{cc} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } \end{array} \overset{3}{\longleftrightarrow} \begin{array} [c]{cc} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } \end{array} \] Then \[ \hspace{-0.75in}\left( \gamma\overset{3}{\leftrightarrow}\gamma^{\prime }\right) \left( \begin{array} [c]{ccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, 
natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \end{array} \right) =\ \begin{array} [c]{ccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \end{array} \ \ \left( \begin{tabular} [c]{c} Braid\\ Submosics\\ Switched \end{tabular} \ \ \ \ \ \right) \] \[ \hspace{-0.75in}\left( \gamma\overset{3}{\leftrightarrow}\gamma^{\prime }\right) \left( \begin{array} [c]{ccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \end{array} \right) =\ \begin{array} [c]{ccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } \end{array} \ \ \left( \begin{tabular} [c]{c} Braid\\ Submosics\\ Switched \end{tabular} \ \ \ \ \ \right) \] \[ \hspace{-0.75in}\left( \gamma\overset{3}{\leftrightarrow}\gamma^{\prime }\right) \left( \begin{array} [c]{ccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } 
\end{array} \right) =\ \begin{array} [c]{ccccc} \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } & \hspace{-0.1in} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } \end{array} \ \ \left( \begin{tabular} [c]{c} Braid\\ Mosaic\\ Unchanged \end{tabular} \ \ \ \ \ \right) \] The following proposition is an almost immediate consequence of the definition of a braid move. \begin{proposition} Each braid move is a permutation on the set $\mathbb{B}^{(n,\ell)}$ of $(n,\ell)$-braid mosaics. \ In fact, it is a permutation which is a product of disjoint transpositions. \end{proposition} \subsection{Stage 1. (Cont.) Planar isotopy moves} \qquad Our next objective is to translate all the standard topological moves on braids into braid mosaic moves. \ To accomplish this, we must first note that there are two types of standard topological moves, i.e., those which do not change the topological type of the braid projection, called \textbf{planar isotopy moves}, and those which do change the typological type of the braid projection but not of the braid itself, called \textbf{Reidemeister moves}. We begin with the planar isotopy moves. \begin{definition} For braid mosaics, there are two types \textbf{planar isotopy moves}, i.e., types $P_{1}$ and $P_{2}$, which are defined below as: \[ \begin{tabular} [c]{|c|}\hline $ \begin{array} [c]{c} \ \\ \ \end{array} 1b_{i}\overset{\lambda}{\underset{P_{1}}{\longleftrightarrow}}b_{i}1\text{ \ for \ }0<\left\vert i\right\vert <n \begin{array} [c]{c} \ \\ \ \end{array} $\\\hline \textbf{Definition of a type }$P_{1}$\textbf{ planar isotopy move}\\\hline \end{tabular} \] and \[ \begin{tabular} [c]{|c|}\hline $ \begin{array} [c]{c} \ \\ \ \end{array} b_{i}b_{j}\overset{\lambda}{\underset{P_{2}}{\longleftrightarrow}}b_{i} b_{j}\ for0<\left\vert i\right\vert ,\left\vert j\right\vert <n\ and\ \left\vert \left\vert i\right\vert -\left\vert j\right\vert \right\vert >1 \begin{array} [c]{c} \ \\ \ \end{array} $\\\hline \textbf{Definition of a type }$P_{2}$\textbf{ planar isotopy move}\\\hline \end{tabular} \] \ \end{definition} \begin{example} Examples of $P_{1}$ and $P_{2}$ moves are respectively given below: \[ \begin{tabular} [c]{c} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } $\overset{\lambda}{\longleftrightarrow}$ {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } \\ $\text{An example of a }P_{1}\text{ move.}$ \end{tabular} \ \text{ \ \ and \ \ } \begin{tabular} [c]{c} {\includegraphics[ natheight=3.999800in, natwidth=3.000000in, height=0.4272in, width=0.3269in ] {rtp14.jpg} } {\includegraphics[ natheight=3.999800in, natwidth=3.000000in, height=0.4272in, width=0.3269in ] {rtp34.jpg} } $\overset{\lambda}{\longleftrightarrow}$ 
{\includegraphics[ natheight=3.999800in, natwidth=3.000000in, height=0.4272in, width=0.3269in ] {rtp34.jpg} } {\includegraphics[ natheight=3.999800in, natwidth=3.000000in, height=0.4272in, width=0.3269in ] {rtp14.jpg} } \\ $\text{An example of a }P_{2}\text{ move: }$ \end{tabular} \ \] \end{example} \begin{remark} The number of $P_{1}$ and $P_{2}$ moves are respectively $2\left( n-1\right) \left( \ell-1\right) $ \ and \ $\left( n-1\right) \left( 2n-6\right) \left( \ell-1\right) $ . \end{remark} \subsection{Stage 1. (Cont.) Reidemeister moves} \qquad There are two types of topological moves, i.e., $R_{2}$ and $R_{3}$. \begin{definition} The \textbf{Reidemeister} $R_{2}$ moves are defined as \[ \begin{tabular} [c]{c} $b_{i}b_{-i}\overset{\lambda}{\longleftrightarrow}1^{2}$\\ where$\text{ \ }0<\left\vert i\right\vert <n$ \end{tabular} \] \end{definition} \begin{example} An example of a Reidemeister 2 move is given below \[ {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm13.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } \overset{\lambda}{\longleftrightarrow} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } \] \end{example} \begin{remark} The number of $R_{2}$ moves is $2(n-1)\left( \ell-1\right) $ \end{remark} \begin{definition} The \textbf{Reidemeister} $R_{3}$ moves are defined for $0<\left\vert i\right\vert <n$ , and given below: \[ \begin{tabular} [c]{|c|}\hline $b_{i}b_{i+1}b_{i}b_{-(i+1)}b_{-i}b_{-(i+1)}\overset{\lambda }{\longleftrightarrow}1^{6}$\\\hline \\\hline $b_{i}b_{i+1}b_{i}b_{-(i+1)}b_{-i}\overset{\lambda}{\longleftrightarrow }b_{i+1}1^{4}$\\\hline \\\hline $b_{i}b_{i+1}b_{i}b_{-(i+1)}\overset{\lambda}{\longleftrightarrow}b_{i+1} b_{i}1^{2}$\\\hline \\\hline $b_{i}b_{i+1}b_{i}\overset{\lambda}{\longleftrightarrow}b_{i+1}b_{i}b_{i+1} $\\\hline \\\hline $b_{i}b_{i+1}1^{2}\overset{\lambda}{\longleftrightarrow}b_{i+1}b_{i} b_{i+1}b_{-i}$\\\hline \\\hline $b_{i}1^{4}\overset{\lambda}{\longleftrightarrow}b_{i+1}b_{i}b_{i+1} b_{-i}b_{-(i+1)}$\\\hline \\\hline $1^{6}\overset{\lambda}{\longleftrightarrow}b_{i+1}b_{i}b_{i+1}b_{-i} b_{-(i+1)}b_{-i}$\\\hline \end{tabular} \ \] \end{definition} \begin{example} Two examples of Reidemeister $R_{3}$ are given below: \[ {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm13.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } \overset{\lambda}{\longleftrightarrow} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm13.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm13.jpg} } \] \[ {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm13.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } {\includegraphics[ 
natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rt03.jpg} } \overset{\lambda}{\longleftrightarrow} {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm13.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtm23.jpg} } {\includegraphics[ natheight=3.000000in, natwidth=3.000000in, height=0.3269in, width=0.3269in ] {rtp13.jpg} } \] \end{example} \begin{remark} The number of Reidemeister 3 moves $R_{3}$ is given by \[ \text{\# }R_{3}\text{ Moves }=\left\{ \begin{array} [c]{ccc} n\left( n-2\right) \left( 6\ell-21\right) & \text{if} & \ell\geq6\\ & & \\ n\left( n-2\right) \left( 5\ell-16\right) & \text{if} & \ell=5\\ & & \\ n\left( n-2\right) \left( 3\ell-8\right) & \text{if} & \ell=4\\ & & \\ n\left( n-2\right) \left( \ell-2\right) & \text{if} & \ell=3\\ & & \\ 0 & \text{if} & \ell<3 \end{array} \right. \] \end{remark} \subsection{Stage 1. (Cont.) The ambient group $A\left( n,\ell\right) $ and the braid mosaic system $\mathcal{B}_{n,\ast}$} \qquad At this point, we can define what is meant by the ambient group and the resulting braid mosaic system. We begin reminding the reader of a fact noted earlier in this paper, namely the fact that each braid move is a permutation on the set $\mathbb{B} ^{(n,\ell)}$ of $(n,\ell)$-braid mosaics. \ Thus, since planar isotopy and Reidemeister moves are permutations, we can make the following definition: \begin{definition} We define the ( $(n,\ell)$\textbf{-braid mosaic}) \textbf{ambient group} $A(n,\ell)$ as the group of all permutations on the set $\mathbb{B}^{(n,\ell )}$ of $(n,\ell)$-braid mosaics generated by $(n,\ell)$-braid planar isotopy and Reidemeister moves. \end{definition} We need one more definition, before we can move to the objective of this section. \begin{definition} We define the \textbf{braid mosaic injection} \[ \iota:\mathbb{B}^{(n,\ell)}\longrightarrow\mathbb{B}^{(n,\ell+1)} \] as the map \[ \beta\longmapsto\beta1 \] for each $(n,\ell)$-braid mosaic in $\mathbb{B}^{(n,\ell)}$. \ It immediately follows that the braid mosaic injection induces a monomorphism \[ \iota:A(n,\ell)\longrightarrow A(n,\ell+1)\text{ } \] from the $(n,\ell)$-braid ambient group $A(n,\ell)$ to the $(n,\ell+1)$-braid ambient group $A(n,\ell+1)$. \ This monomorphism is called the \textbf{braid mosaic monomorphism}. \ \end{definition} \begin{definition} We define an \textbf{braid system} $\mathcal{B}_{n,\ell}=\mathcal{B}\left( \mathbb{B}^{(n,\ell)},A(n,\ell)\right) $ of\textbf{ order} $\left( n,\ell\right) $ as the pair $\left( \mathbb{B}^{(n,\ell)},A(n,\ell)\right) $, where $\mathbb{B}^{(n,\ell)}$ is called the \textbf{set of }$\left( n,\ell\right) $-\textbf{braid mosaics}, and where $\mathbb{A}(n,\ell)$ is called the \textbf{ambient group}. \ Finally, we define a\textbf{ nested motif system} $\mathcal{B}_{n,\ast}=\mathcal{B}\left( \mathbb{B}^{(n,\ast )},A(n,\ast)\right) $ as the following sequence of sets, groups, injections, and monomorphisms: \[ \mathcal{B}\left( \mathbb{B}^{(n,1)},A(n,1)\right) \overset{\iota }{\longrightarrow}\mathcal{B}\left( \mathbb{B}^{(n,2)},A(n,2)\right) \overset{\iota}{\longrightarrow}\cdots\overset{\iota}{\longrightarrow }\mathcal{B}\left( \mathbb{B}^{(n,2)},A(n,2)\right) \overset{\iota }{\longrightarrow}\cdots \] \end{definition} \subsection{Stage 2. 
\ Braid mosaic type and braid mosaic invariants} \qquad Our next objective is to define what it means for two braid mosaics to represent the same topological braid. Two braid mosaics $\beta_{1}$ and $\beta_{2}$ of the set $\mathbb{B} ^{(n,\ell)}$ are said to be of the \textbf{same }$n$\textbf{-braid mosaic type}, written \[ \beta_{1}\underset{n}{\sim}\beta_{2}\text{ ,} \] if there exists an element $g$ of the ambient group $A(n,\ell)$ which takes $\beta_{1}$ to $\beta_{2}$, i.e., such that \[ g\beta_{1}=\beta_{2}\text{ .} \] The braid mosaics $\beta_{1}$ and $\beta_{2}$ are said to be of the \textbf{same braid mosaic type}, written \[ \beta_{1}\sim\beta_{2}\text{ ,} \] if there exists a non-negative integer $k$ such that \[ \iota^{k}\beta_{1}\underset{n+k}{\sim}\iota^{k}\beta_{2}\text{ .} \] We now wish to answer the question: \noindent\textbf{Question}. \ \textit{What is meant by a braid mosaic invariant?} \begin{definition} Let $\mathcal{B}_{n,\ell}=\mathcal{B}\left( \mathbb{B}^{(n,\ell )},A(n)\right) $ be a braid system, and let $\mathbb{D}$ be some yet to be chosen mathematical domain. \ By an $n$-\textbf{braid mosaic invariant} $I^{(n)}$, we mean a map \[ I^{(n)}:\mathbb{B}^{(n,\ell)}\longrightarrow\mathbb{D} \] such that, when two braid mosaics $\beta_{1}$ and $\beta_{2}$ are of the same $n$-type, i.e., when \[ \beta_{1}\underset{n}{\sim}\beta_{2}\text{ , } \] then their respective invariants must be equal, i.e., \[ I^{(n)}\left( \beta_{1}\right) =I^{(n)}\left( \beta_{2}\right) \text{ .} \] In other words, $I^{(n)}:\mathbb{B}^{(n,\ell)}\longrightarrow\mathbb{D}$ is a map that is invariant under the action of the ambient group $A(n)$, i.e., \[ I^{(n)}\left( \beta\right) =I^{(n)}\left( g\beta\right) \] for all elements of $g$ in $A(n)$. \end{definition} \subsection{Stage 3. \ Construction of the corresponding quantum braid system} \qquad We now use the nested \ braid mosaic system $\mathcal{B}_{n,\ast}$ to construct a nested sequence of quantum braid mosaic systems $\mathcal{Q} _{n,\ast}$. 
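The interplay between the invariance condition above and the quantum construction of the next subsections can be previewed with a small toy computation. The listing below is only an illustration with made-up data (the ``mosaics'', the move, and the map $I$ are hypothetical stand-ins, not actual braid mosaics): a move acts as a permutation of mosaic labels, it lifts to a permutation matrix $\widetilde{g}$ on the spanned Hilbert space, exactly as in Stage 3 below, and a map $I$ that is constant on orbits yields a diagonal operator $\sum_{\beta}I(\beta)\left\vert \beta\right\rangle \left\langle \beta\right\vert$ left invariant by $\widetilde{g}$, as asserted in the proposition of Stage 5.
\begin{verbatim}
# Toy illustration (hypothetical data): a move permutes mosaic labels, lifts to
# a permutation matrix g_tilde (hence a unitary), and a map I constant on orbits
# gives a diagonal operator Omega with g_tilde Omega g_tilde^{-1} = Omega.
import numpy as np

mosaics = ["b1 b-1", "1 1", "b1 1", "1 b1"]        # made-up labels
move = {"b1 b-1": "1 1", "1 1": "b1 b-1",          # an R2-like transposition
        "b1 1": "1 b1", "1 b1": "b1 1"}            # a P1-like transposition

index = {m: k for k, m in enumerate(mosaics)}
g_tilde = np.zeros((4, 4))
for m in mosaics:                                  # g_tilde |m> = |move(m)>
    g_tilde[index[move[m]], index[m]] = 1.0

I_map = {"b1 b-1": 0.0, "1 1": 0.0, "b1 1": 2.0, "1 b1": 2.0}   # constant on orbits
Omega = np.diag([I_map[m] for m in mosaics])

print(np.allclose(g_tilde @ g_tilde.T, np.eye(4)))      # True: permutation => unitary
print(np.allclose(g_tilde @ Omega @ g_tilde.T, Omega))  # True: Omega is invariant
\end{verbatim}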
For pair of non-negative integers $n$ and $\ell$ the corresponding $\left( n,\ell\right) $\textbf{-th order} \textbf{quantum braid system} \[ \mathcal{Q}_{n,\ell}=\mathcal{Q}\left( \mathcal{B}^{(n,\ell)},\mathcal{A} (n,\ell)\right) \] consists of a Hilbert space $\mathcal{B}^{(n,\ell)}$, called the \textbf{quantum mosaic space}, and a group $\mathcal{A}(n,\ell)$, also called the \textbf{ambient group}.\ The quantum motif space $\mathcal{B}^{(n,\ell)}$ and the ambient group $\mathcal{A}(n,\ell)$ are defined as follows: \begin{itemize} \item The\textbf{ quantum motif space} $\mathcal{B}^{(n,\ell)}$ is the Hilbert space with orthonormal basis \[ \left\{ \ \left\vert \beta\right\rangle :\beta\in\mathbb{B}^{(n,\ell )}\ \right\} \text{ .} \] The elements of $\mathcal{B}^{(n,\ell)}$ are called \textbf{quantum braids} $.$ \item The \textbf{ambient group} $\mathcal{A}(n,\ell)$ is the unitary group acting on the Hilbert space $\mathcal{B}^{(n,\ell)}$ consisting of all linear transformations of the form \[ \left\{ \ \widetilde{g}:\mathcal{B}^{(n,\ell)}\longrightarrow\mathcal{B} ^{(n,\ell)}:g\in A(n,\ell)\ \right\} \text{ ,} \] where $\widetilde{g}$ is the linear transformation defined by \[ \begin{array} [c]{rcc} \widetilde{g}:\mathcal{B}^{(n,\ell)} & \longrightarrow & \mathcal{B} ^{(n,\ell)}\\ \left\vert \beta\right\rangle \quad & \longmapsto & \left\vert g\beta \right\rangle \end{array} \] Since each element $g$ in $A(n,\ell)$ is a permutation, each $\widetilde{g}$ permutes the orthonormal basis $\left\{ \ \left\vert \beta\right\rangle :\beta\in\mathbb{B}^{(n,\ell)}\ \right\} $ of $\mathcal{B}^{(n,\ell)}$. \ Hence, $\widetilde{g}$ is automatically a unitary transformation. \ It follows that $A(n,\ell)$ and $\mathcal{A}(n,\ell)$ are isomorphic as groups. \ We will often abuse notation by denoting $\widetilde{g}$ by $g$, and $\mathcal{A}(n,\ell)$ by $A(n,\ell)$. \end{itemize} Next, for each pair of non-negative integers $n$ and $\ell$, let \[ \iota:\mathcal{B}^{(n,\ell)}\longrightarrow\mathcal{B}^{(n+1,\ell)} \] and \[ \iota:\mathcal{A}(n,\ell)\longrightarrow\mathcal{A}(n+1,\ell) \] respectively denote the Hilbert space monomorphism and the group monomorhism induced by the injection \[ \iota:\mathbb{B}^{(n,\ell)}\longrightarrow\mathbb{B}^{(n+1,\ell)} \] and the group monomorphism \[ \iota:A(n,\ell)\longrightarrow A(n+1,\ell)\text{ .} \] Finally, we define the\textbf{ nested quantum braid system} $\mathcal{Q} _{n,\ast}=\mathcal{Q}_{n,\ast}\left( \mathcal{B}^{(n,\ast)},\mathcal{A} (n,\ast)\right) $ as the following sequence of Hilbert spaces, groups, Hilbert space monomorphisms, and group monomorphisms: \[ \mathcal{Q}_{1,\ell}\left( \mathcal{B}^{(1,\ell)},\mathcal{A}(1,\ell)\right) \overset{\iota}{\longrightarrow}\mathcal{Q}_{2,\ell}\left( \mathcal{B} ^{(2,\ell)},\mathcal{A}(2,\ell)\right) \overset{\iota}{\longrightarrow} \cdots\overset{\iota}{\longrightarrow}\mathcal{Q}_{n,\ell}\left( \mathcal{B}^{(n,\ell)},\mathcal{A}(n,\ell)\right) \overset{\iota }{\longrightarrow}\cdots \] \subsection{Stage 4. Quantum braid equivalence} Let $\mathcal{Q}_{n,\ell}=\mathcal{Q}\left( \mathcal{B}^{(n,\ell )},\mathcal{A}(n,\ell)\right) $ be a quantum motif system of order $\left( n,\ell\right) $. 
Two quantum braids $\left\vert \psi_{1}\right\rangle $ and $\left\vert \psi_{2}\right\rangle $ of the Hilbert space $\mathcal{B}^{(n,\ell)}$ are said to be of the \textbf{same }$\left( n,\ell\right) $\textbf{-braid type}, written \[ \left\vert \psi_{1}\right\rangle \underset{n}{\sim}\left\vert \psi _{2}\right\rangle \text{ ,} \] if there exists an element $g$ of the ambient group $\mathcal{A}(n,\ell)$ which takes $\left\vert \psi_{1}\right\rangle $ to $\left\vert \psi _{2}\right\rangle $, i.e., such that \[ g\left\vert \psi_{1}\right\rangle =\left\vert \psi_{2}\right\rangle \text{ .} \] The quantum motifs $\left\vert \psi_{1}\right\rangle $ and $\left\vert \psi_{2}\right\rangle $ are said to be of the \textbf{same braid type}, written \[ \left\vert \psi_{1}\right\rangle \sim\left\vert \psi_{2}\right\rangle \text{ ,} \] if there exists a non-negative integer $m$ such that \[ \iota^{m}\left\vert \psi_{1}\right\rangle \underset{n+m}{\sim}\iota ^{m}\left\vert \psi_{2}\right\rangle \text{ .} \] \subsection{Stage 5. Quantum braid invariants as quantum observables} \qquad We consider the following question: \noindent\textbf{Question:} \ \textit{What do we mean by a physically observable quantum braid invariant?} We answer this question with a definition. \begin{definition} Let $\mathcal{Q}_{n,\ell}=\mathcal{Q}\left( \mathcal{B}^{(n,\ell )},\mathcal{A}(n,\ell)\right) $ be a quantum braid system of order $\left( n,\ell\right) $, and let $\Omega$ be an observable, i.e., a Hermitian operator on the Hilbert space $\mathcal{B}^{(n,\ell)}$ of quantum braids. \ Then $\Omega$ is a \textbf{quantum braid }$\left( n,\ell\right) $\textbf{-invariant} provided $\Omega$ is left invariant under the big adjoint action of the ambient group $\mathcal{A}(n,\ell)$, i.e., provided \[ U\Omega U^{-1}=\Omega \] for all $U$ in $\mathcal{A}(n,\ell)$. \end{definition} \begin{proposition} If \[ I^{(n)}:\mathbb{B}^{(n,\ell)}\longrightarrow\mathbb{R} \] is a real valued $\left( n,\ell\right) $-braid invariant, then \[ \Omega= {\displaystyle\sum\limits_{\beta\in\mathbb{B}^{(n,\ell)}}} I^{(n,\ell)}\left( \beta\right) \left\vert \beta\right\rangle \left\langle \beta\right\vert \] is a quantum motif observable which is a quantum motif $\left( n,\ell\right) $-invariant. \end{proposition} \section{Conclusion} Much more can be said about this topic. \ \ For more examples of the application of the quantization procedure discussed in this paper, we refer the reader to \cite{Lomonaco1, Lomonaco2, Kauffman1, Farhi1}. \ For knot theory and the braid group, we refer the reader to \cite{Crowell1, Murasugi1, Kauffman2, Lickorish1, Birman1, Kassel1}; for topological quantum computation, \cite{Kauffman3, Kauffman4, Kauffman5, Kitaev1,Nayak1, Sarma1}; and for quantum computation and information, \cite{Nielsen1, Lomonaco4, Lomonaco5}. \end{document}
\begin{document} \title{ Isolating noise and amplifying signal with quantum Cheshire cat} \author{Ahana Ghoshal$^1$, Soham Sau$^{1,2,3}$, Debmalya Das$^{4,5}$, Ujjwal Sen$^1$} \affiliation {$^1$Harish-Chandra Research Institute, A CI of Homi Bhabha National Institute, Chhatnag Road, Jhunsi, Prayagraj 211 019, India\\ $^2$Department of Physics, School of Physical Sciences, Central University of Rajasthan, Bandarsindri, Rajasthan, 305 817, India\\ $^3$RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská Cesta 9, Bratislava 84511, Slovakia \\ $^4$Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627, USA\\ $^5$Center for Coherence and Quantum Optics, University of Rochester, Rochester, New York 14627, USA} \begin{abstract} The so-called quantum Cheshire cat is a phenomenon in which an object, identified with a ``cat'', is dissociated from a property of the object, identified with the ``grin'' of the cat. We propose a thought experiment, similar to this phenomenon, with an interferometric setup, where a property (a component of polarization) of an object (photon) can be separated from the object itself and can simultaneously be amplified when it is already decoupled from its object. We further show that this setup can be used to dissociate two complementary properties, e.g., two orthogonal components of polarization of a photon and identified with the grin and the snarl of a cat, from each other and one of them can be amplified while being detached from the other. Moreover, we extend the work to a noisy scenario, effected by a spin-orbit-coupling --like additional interaction term in the Hamiltonian for the measurement process, with the object in this scenario being identified with a so-called confused Cheshire cat. We devise a gedanken experiment in which such a ``confusion'' can be successfully dissociated from the system, and we find that the dissociation helps in the amplification of signals. \end{abstract} \maketitle \section{Introduction} In recent times, a technique known as weak-value amplification has been used to amplify weak signals~\cite{U}. The method relies on extracting information from a system while minimally disturbing it~\cite{Barchielli1982, Busch1984, Caves1986}. This weak measurement~\cite{AAV, Duck} of the system is performed using a weak coupling strength between the system and a meter. In the weak-value amplification method, a quantum system is initially prepared in a pure state, known as the pre-selected state, following which an observable is weakly measured. After the weak measurement of the observable, a strong measurement of a second observable is carried out on the principal system and a quantity called weak value~\cite{AAV,Duck,YA} is defined by post-selecting an outcome of the second measurement. The weak value of the observable is basically the average shift in the meter readings for the weak measurement corresponding to the post-selected state. Experimental observations of weak values have been reported in Refs.~\cite{Correa, Denkmayr, Sponar, Ashby, Pryde, Hosten, Lundeen, Cormann}. Although the present work discusses weak values in relation to signal amplification, it may be worthwhile to mention a few other applications of the idea. 
Weak values can be used in the direct measurement of a photon wave function~\cite{Lundeen, L2}, in measuring the spin Hall effect~\cite{Hosten}, in quantum state tomography~\cite{tomo1, tomo2}, in the geometric description of quantum states~\cite{geometry}, and in state visualization~\cite{sv}. They also find application in quantum thermometry~\cite{qt} and in measuring the expectation value of non-Hermitian operators~\cite{nonherm1, nonherm2}. Weak values have been shown to acquire complex values~\cite{Jozsa}, and weak values play important roles in the two-state vector formalism~\cite{twostate}, in the physical understanding of superoscillations~\cite{Super}, and in separating a quantum property from its system~\cite{Aharonov}. Weak measurements have been used to show a double violation of a Bell inequality by a single entangled pair~\cite{npj} and in quantum process tomography where sequential weak measurements are done on incompatible observables~\cite{m}. A property of weak values that is of special interest to us is that they can lie outside the eigenvalue spectrum of the observable being weakly measured, and can even be very large~\cite{AAV, Duck}. This aspect is exploited in weak-value amplification. A significant concern in experiments is the minimization of noise in the relevant signal and the design of measurement techniques that achieve this. Setting aside logistics and dependencies on other constraints imposed by the particular scenario, a measurement strategy with significantly reduced noise is typically favored by experimentalists~\cite{SNR1, SNR2, SNR3, SNR4, SNR5, SNR6}. We consider a situation in which noise may be introduced during the process of weak-value amplification if the Hamiltonian coupling the system and the meter has extra undesired terms due to the effect of an environmental element. It therefore becomes necessary to eliminate these terms or suppress them to obtain the amplification of the signal alone. An important ingredient of the strategy we discuss in this paper is a phenomenon known as the quantum Cheshire cat. In this gedanken experiment, based on a modified Mach-Zehnder interferometer, a photon can be detected in one arm while its circular polarization can be detected in the other arm, each being absent in the other arm, by measuring the respective weak values~\cite{Aharonov}. Thus, for a particular combination of pre-selected and post-selected states, the photon ``cat'' can be disembodied from its property ``grin'', leading to the name of the effect being chosen after a magical and enigmatic character in the celebrated literary work, \textit{Alice in Wonderland}~\cite{Alice}. This counterintuitive phenomenon has been experimentally observed using neutron interferometry in Ref.~\cite{Denkmayr} and using photon interferometry in Refs.~\cite{Correa, Sponar, Ashby, Kim2021}. The concept has been further expanded upon in Refs.~\cite{Bancal, At, Duprey, dynamic}, and different kinds of manipulations of the photon polarization, independent of the photon, have been achieved in Refs.~\cite{DasPati, Liu2020, DasSen, wavepar}. In this paper, we present a gedanken experiment to amplify a property of a photon at a location, independent of the photon being present at the same location. We also show that the same gedanken setup can amplify one property of a photon independently of another property of the same photon, where the two properties are complementary and correspond to non-commuting observables.
We believe this is a technologically and fundamentally useful amalgamation of the two areas, viz., the quantum Cheshire cat and weak-value amplification. As a demonstration, we consider a scenario where the polarization degree of the photon interacts with another degree of freedom and results in an additional term in the Hamiltonian that governs the measurement. We call the additional term the spin-orbit coupling of the photon (cat), borrowing the nomenclature from the spin-orbit interaction in the relativistic treatment of an electron's dynamics. This additional term behaves as a noise component that changes the shift of the meter. The meter now gives a deflection proportional to the weak value of an effective observable that is different from the weak value of a polarization component, which was to be measured. To avoid the unwanted disturbances caused by the effect of noise in some amplification techniques, we then formulate a gedanken experiment to separate the required observable from the noisy part. We propose that, using this experimental setup, it is possible to reduce the noise effect on average (as a weak value primarily is) and amplify some quantum property by a weak-value measurement with a certain accuracy. \par The rest of the paper is organized as follows. The ideas of weak measurement and quantum Cheshire cat are briefly recollected in Sec.~\ref{section:2}. In Sec.~\ref{section:3}, we present the thought experiment, based on the concept of the quantum Cheshire cat, to amplify a property of a photon ($z$-component of polarization) independent of the photon itself. We extend the idea to amplify one property ($z$-component of polarization) independently of the other ($x$-component of polarization) for a noiseless ideal situation. In Sec.~\ref{section:4}, we evaluate the weak value of the effective observable, resulting from a noisy scenario, where the weak value is obtained from the shift of the meter, weakly coupled with the system, and where the noise is incorporated as a spin-orbit coupling. We also propose an experimental setup useful to amplify the weak value of the effective observable in that section. We summarize in Sec.~\ref{section: conclusion}. \section{Review of weak measurement and quantum Cheshire cat} \label{section:2} The weak-value scheme~\cite{AAV,Duck} started with a thought experiment for measuring a spin component of a quantum spin-$\frac{1}{2}$ particle, obtaining a result which was far beyond the range of usual values. In this work, the interaction Hamiltonian is usually taken as \begin{equation} \label{eqn:ho} H_0=-g(t)\hat{A}\otimes \hat{q}, \end{equation} where $\hat{q}$ is a canonical variable of the meter that is conjugate to momentum $\hat{p}$, $g(t)$ is a time-dependent coupling function with a compact support near the time of measurement (normalized such that its time integral is unity), and $\hat{A}$ is the observable to be measured. In~\cite{AAV,Duck}, $\hat{q}$ has a continuous spectrum. However, we will consider in this paper, instances where $\hat{q}$ can also have a discrete spectrum. The weak value of $\hat{A}$ is defined as \begin{equation} A_w = \frac{\Braket{\Psi_f|\hat{A}|\Psi_{in}}}{{\Braket{\Psi_f|\Psi_{in}}}}. \label{eq:weak} \end{equation} The concept of the quantum Cheshire cat~\cite{Aharonov} is based on weak measurement of observables. 
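Before describing that setup, it may help to see the definition in Eq.~(\ref{eq:weak}) at work numerically. The following sketch is an illustration of ours (not part of the original proposals of Refs.~\cite{AAV,Duck}): it computes $A_w$ for $\hat{A}=\hat{\sigma}_z$ with a nearly orthogonal pair of pre- and post-selected spin-$\frac{1}{2}$ states, and shows a value far outside the eigenvalue range $\{+1,-1\}$.
\begin{verbatim}
# Minimal numerical check of the weak-value formula A_w = <f|A|i>/<f|i>,
# here for A = sigma_z and nearly orthogonal pre-/post-selected qubit states.
import numpy as np

sigma_z = np.diag([1.0, -1.0])

def weak_value(A, pre, post):
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

theta = 0.49 * np.pi                      # close to pi/2: <f|i> = cos(theta) is small
pre  = np.array([np.cos(theta / 2),  np.sin(theta / 2)])
post = np.array([np.cos(theta / 2), -np.sin(theta / 2)])

print(weak_value(sigma_z, pre, post))     # 1/cos(theta) ~ 31.8, outside {+1, -1}
\end{verbatim}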
Typically in a quantum Cheshire cat setup, as seen in Fig.~\ref{fig1}(a), a photon having horizontal polarization $\ket{H}$ is fed into a $50:50$ beam-splitter $BS_1$ of a Mach-Zehnder interferometer, creating the pre-selected state \begin{equation} \ket{\Psi_{in}}=\frac{1}{\sqrt{2}}(i\ket{L}+\ket{R})\ket{H}, \label{eq:psi_in} \end{equation} where $\ket{L}$ and $\ket{R}$ represent the left and right path degrees of freedom, respectively. We will consider the states of circular polarization of the photon, denoted by $\ket{+}$ and $\ket{-}$, and given by \begin{equation} \label{eq:6} \ket{+}=\frac{1}{\sqrt{2}}(\ket{H}+i\ket{V}),\quad\ket{-}=\frac{1}{\sqrt{2}}(\ket{H}-i\ket{V}), \end{equation} as the computational basis. In particular, therefore, $\hat{\sigma}_z=\ket{+}\bra{+}-\ket{-}\bra{-}$ and $\hat{\sigma}_x=\ket{+}\bra{-}+\ket{-}\bra{+}$. Here $\ket{V}$ denotes the state of vertical polarization. This convention is in accordance with that adopted in~\cite{Aharonov}. In the two arms of the interferometer, weak measurements of the location of the photon and that of the photon's circular polarization are carried out. The photon and its polarization interact weakly with appropriate meter states, resulting in deflections in the latter. This interaction is of the form defined in Eq.~(\ref{eqn:ho}). Next, an arrangement of a half waveplate $HWP$, phase shifter $PS$, beam-splitter $BS_2$, polarization beam-splitter $PBS$ and detectors $D_1$, $D_2$ and $D_3$, elaborated in Fig.~\ref{fig1}(a), is used to post-select the state \begin{equation} \ket{\Psi_f}=\frac{1}{\sqrt{2}}(\ket{L}\ket{H}+\ket{R}\ket{V}). \label{eq:psi_f} \end{equation} For this particular post-selected state, it can be seen that the distributions of the deflected meter states center around the weak values of the observables being measured in the arms. To trace the location of the photon, the meters, which are inserted in the left and right arms of the interferometer, measure the projectors $\hat{\Pi}_L=\ket{L}\bra{L}$ and $\hat{\Pi}_R=\ket{R}\bra{R}$, respectively, and similarly the polarization detectors measure the observables $\hat{\sigma}_z^{L}=\hat{\Pi}_L \otimes \hat{\sigma}_z$ and $\hat{\sigma}_z^{R}=\hat{\Pi}_R \otimes \hat{\sigma}_z$. The corresponding weak values are \begin{eqnarray} (\hat{\Pi}_L)_w=1, && \quad (\hat{\Pi}_R)_w=0, \nonumber\\ (\hat{\sigma}_z^{L})_w=0, && \quad (\hat{\sigma}_z^{R})_w=1. \end{eqnarray} This indicates that the photon passed through the left arm but its $z$-component of polarization passed through the right arm. \section{Amplification of polarization of a photon without the photon} \label{section:3} \begin{figure*} \caption{Quantum Cheshire cat without and with amplification. (a) Quantum Cheshire cat setup (without amplification). The areas shaded pink and blue carry out the pre-selection and post-selection, respectively. For the latter, only the clicks of detector $D_1$ are selected. Weak measurements of the photon position and the position of polarization are performed by interacting suitable meters weakly, in the two arms of the interferometer. (b) Setup for decoupling the $z$-component of polarization of a photon from the photon itself and amplifying it simultaneously. This configuration is also applicable for dissociating the $z$-component of polarization and the $x$-component of polarization of the photon, and amplifying the former simultaneously.} \label{fig1} \end{figure*} Weak values can be used as a tool for amplifying small signals. 
Further, we have seen how a property can be separated from a quantum system using the technique of the quantum Cheshire cat. Now we aim to achieve the two phenomena simultaneously, namely, the separation of a property from the object and amplification of the separated property independently of the object. The interferometric setup of this gedanken experiment is presented in Fig.~\ref{fig1}(b). Let us begin by considering a photon, propagating along a path degree of freedom state denoted by $\ket{L^\prime}$, in a polarization state $\cos\frac{\theta}{2}\ket{H}+ \sin\frac{\theta}{2}\ket{V}$. The photon is sent through a polarization beam-splitter $PBS_1$ that transmits horizontally polarized light and reflects the vertically polarized one, leading to the state \begin{equation} \ket{\Psi_1}=\cos\frac{\theta}{2}\ket{L}\ket{H}+ \sin\frac{\theta}{2}\ket{R}\ket{V}, \end{equation} where $\ket{L}$ and $\ket{R}$ are the two possible photon paths, viz., along the transmitted and reflected paths, respectively, forming the two arms of a Mach-Zehnder interferometer. Note that $\ket{L}$ and $\ket{L^\prime}$ are along the same path after and before the polarization beam-splitter $PBS_1$. In the right arm of the interferometer, we place a half waveplate $HWP_1$ that converts a vertical polarization into a horizontal one, and vice versa, followed by a $\pi$ phase shifter $P$, which introduces a phase $e^{i\pi}$ in the right path. We now have the state \begin{equation} \ket{\Psi^\prime_1}=(\cos\frac{\theta}{2}\ket{L}-i\sin\frac{\theta}{2}\ket{R})\ket{H} . \label{eq: preselect} \end{equation} In the parlance of weak values and the quantum Cheshire cat, this is the pre-selected state. The post-selection involves a half waveplate $HWP$, a phase shifter $PS$, a beam-splitter $BS$, a second polarization beam-splitter $PBS$, and three detectors $D_1, D_2$ and $D_3$. The working principle is the same as discussed for the quantum Cheshire cat scenario without the amplification requirement. Thus the clicking of the detector $D_1$ can once again be solely selected to obtain the post-selected state \begin{equation} \ket{\Psi_f}=\frac{1}{\sqrt{2}}(\ket{L}\ket{H}+\ket{R}\ket{V}). \label{eq:3} \end{equation} In the two arms of the interferometric setup, weak measurements of the position of the photon and its circular polarization are performed and the corresponding weak values are obtained. The weak values of the operators $\hat{\Pi}_L$ and $\hat{\Pi}_R$, denoting the positions of the photon in the left and right arms, and $\hat{\sigma}_z^{L}$ and $\hat{\sigma}_z^{R}$, denoting the positions of $z$-components of polarization in the two arms, are then measured to be \begin{eqnarray} (\hat{\Pi}_L)_w&=&\frac{\Braket{\Psi_f|\hat{\Pi}_L|\Psi^\prime_1}}{\Braket{\Psi_f|\Psi^\prime_1}}=1,\nonumber\\ (\hat{\Pi}_R)_w&=&\frac{\Braket{\Psi_f|\hat{\Pi}_R|\Psi^\prime_1}}{\Braket{\Psi_f|\Psi^\prime_1}}=0,\nonumber\\ (\hat{\sigma}_z^{L})_w&=&\frac{\Braket{\Psi_f|\hat{\sigma}_z^{L}|\Psi^\prime_1}}{{\Braket{\Psi_f|\Psi^\prime_1}}}=0,\nonumber\\ (\hat{\sigma}_z^{R})_w&=&\frac{\Braket{\Psi_f|\hat{\sigma}_z^{R}|\Psi^\prime_1}}{\Braket{\Psi_f|\Psi^\prime_1}}=\tan\frac{\theta}{2}. \label{eq:7} \end{eqnarray} Therefore, the photon is detected in the left arm and the $z$-component of polarization is detected in the right arm with a factor which could be amplified by varying the parameter $\theta$. 
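The weak values in Eq.~(\ref{eq:7}) can be reproduced with a short numerical sketch (ours), in which the operators are written in the $\{\ket{H},\ket{V}\}$ basis, where $\hat{\sigma}_z=\ket{+}\bra{+}-\ket{-}\bra{-}$ takes the matrix form $\bigl(\begin{smallmatrix}0 & -i\\ i & 0\end{smallmatrix}\bigr)$.
\begin{verbatim}
# Numerical check of the weak values for the amplified Cheshire-cat setup,
# ordering (path) x (polarization), basis {|L>,|R>} x {|H>,|V>};
# sigma_z = [[0,-i],[i,0]] in the H/V basis.
import numpy as np

theta = 2.0                                    # arbitrary tuning angle
L, R = np.eye(2)                               # path basis vectors
H, V = np.eye(2)                               # polarization basis vectors
sigma_z = np.array([[0, -1j], [1j, 0]])
Pi_L, Pi_R = np.outer(L, L), np.outer(R, R)

pre  = np.kron(np.cos(theta / 2) * L - 1j * np.sin(theta / 2) * R, H)  # pre-selection
post = (np.kron(L, H) + np.kron(R, V)) / np.sqrt(2)                    # post-selection

wv = lambda A: (post.conj() @ A @ pre) / (post.conj() @ pre)

print(wv(np.kron(Pi_L, np.eye(2))))            # 1
print(wv(np.kron(Pi_R, np.eye(2))))            # 0
print(wv(np.kron(Pi_L, sigma_z)))              # 0
print(wv(np.kron(Pi_R, sigma_z)), np.tan(theta / 2))   # both equal tan(theta/2)
\end{verbatim}
For $\theta=\pi/2$ the pre-selected state of Eq.~(\ref{eq: preselect}) coincides, up to a global phase, with Eq.~(\ref{eq:psi_in}), and the printed values reduce to the weak values quoted in Sec.~\ref{section:2}.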
Thus we have achieved the phenomenon of amplifying a property of an object independently of the object: The photon's polarization component is being amplified in the right arm of the interferometer and the photon is not there. This thought experiment can be further extended by separating two orthogonal components of polarization and then amplifying one component independently of the other. Let us consider the two orthogonal components to be the $x$- and $z$-components of polarization, viz. \(\hat{\sigma}_x\) and \(\hat{\sigma}_z\). The operators $\hat{\sigma}_x^{L}=\hat{\Pi}_L\otimes \hat{\sigma}_x$ and $\hat{\sigma}_x^{R}=\hat{\Pi}_R\otimes \hat{\sigma}_x$ are measured to detect the $x$-component of polarization in the left and right arms of the interferometer, respectively. The corresponding weak values turn out to be \begin{eqnarray} (\hat{\sigma}_x^{L})_w&=&\frac{\Braket{\Psi_f|\hat{\sigma}_x^{L}|\Psi_1^{\prime}}}{\Braket{\Psi_f|\Psi_1^{\prime}}}=1, \nonumber\\ (\hat{\sigma}_x^{R})_w&=&\frac{\Braket{\Psi_f|\hat{\sigma}_x^{R}|\Psi_1^{\prime}}}{\Braket{\Psi_f|\Psi_1^{\prime}}}=0. \end{eqnarray} \par When coupled with the weak values of $\hat{\sigma}_z^{L}$ and $\hat{\sigma}_z^{R}$, these indicate that the $z$-component of polarization can be amplified independently of the $x$-component of polarization. The weak value of $\hat{\sigma}_z^{R}$ is seen to acquire a value of $\tan\frac{\theta}{2}$, which means that the weak value will result in outcomes beyond the eigenvalue spectrum in the regions $\theta \in (\pi/2,\pi)$ and $\theta \in (-\pi,-\pi/2)$. To realize the weak-value measurement, the principal system is made to interact weakly with a meter. Let us assume that the meter is initially in a state $\ket{\Phi_{in}}$. Suppose we intend to measure $\hat{\sigma}_z^R$. The weak interaction between the system and the meter can be effected by a joint unitary $\hat{U}_{\hat{\sigma}_z^R}=e^{-\int iH_{\hat{\sigma}_z^R}t dt}$ where \begin{equation} H_{\hat{\sigma}_z^R}=\frac{1}{\sqrt{2}}g(t)[(\hat{I}-\hat{\Pi}_R)\otimes\hat{\sigma}_z\otimes \hat{I} + \hat{\Pi}_R\otimes \hat{\sigma}_z\otimes \hat{q}]. \end{equation} For the post-selection given by Eq.~(\ref{eq:3}), the meter goes to the state \begin{equation} \ket{\Phi_m}=\braket{\Psi_f|\Psi_{1}^{\prime}}\big[1-ig(\hat{\sigma}_z^R)_w \hat{q}\big]\ket{\Phi_{in}}, \end{equation} with $(\hat{\sigma}_z^R)_w=\tan\frac{\theta}{2}$, as obtained in Eq.~(\ref{eq:7}). Hence, the meter shows a deflection that is directly proportional to the weak value of the measured observable. \section{Amplification of the polarization of a photon in a noisy scenario} \label{section:4} In the preceding section, we laid out the procedure for amplifying a property of a quantum system in the absence of the system. We now consider a scenario in which the property we are looking to amplify is affected by noise, in a certain way. We first present the situation where the weak-value amplification of the noise-affected observable is carried out. As expected, the amplified quantity will contain a contribution from the unwanted noise. We then get rid of the noise by separating it from the signal, using a setup similar to that of the quantum Cheshire cat, and amplify the signal. As the observable to be weakly measured, we can conveniently take the $z$-component of polarization $\hat{\sigma}_z$.
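Before introducing the noise, we note that the complementary-property weak values quoted above can be checked in the same way: with $\hat{\sigma}_x=\ket{+}\bra{-}+\ket{-}\bra{+}$ written in the $\{\ket{H},\ket{V}\}$ basis, where it is diagonal, one finds $(\hat{\sigma}_x^{L})_w=1$ and $(\hat{\sigma}_x^{R})_w=0$ for every $\theta$. The short sketch below is again only an illustration of ours.
\begin{verbatim}
# Check of (sigma_x^L)_w = 1 and (sigma_x^R)_w = 0 for the same pre-/post-selection.
# In the {|H>,|V>} basis, sigma_x = |+><-| + |-><+| is the matrix diag(1, -1).
import numpy as np

theta = 2.0
L, R = np.eye(2)
H, V = np.eye(2)
sigma_x = np.diag([1.0, -1.0])
Pi_L, Pi_R = np.outer(L, L), np.outer(R, R)

pre  = np.kron(np.cos(theta / 2) * L - 1j * np.sin(theta / 2) * R, H)
post = (np.kron(L, H) + np.kron(R, V)) / np.sqrt(2)
wv = lambda A: (post.conj() @ A @ pre) / (post.conj() @ pre)

print(wv(np.kron(Pi_L, sigma_x)))   # 1, independently of theta
print(wv(np.kron(Pi_R, sigma_x)))   # 0
\end{verbatim}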
We recall that the measurement of the $z$-component of polarization ideally requires us to set up an interaction of the form \begin{equation} H_0=-g(t)\hat{\sigma}_z\otimes \hat{q}, \label{eqn:hoz} \end{equation} between the polarization degree of freedom and a convenient meter of our choosing, with meter variable $\hat{q}$. The variable $\hat{q}$ may be a discrete or a continuous meter variable. See~\cite{Pryde, npj,m, opticsguo} for experimental realizations of using discrete meter states. Let us now consider a scenario in which there is noise in the interaction Hamiltonian that is analogous to a spin-orbit-coupling term $\hat{L}_x \otimes {\hat\sigma}_x$. The total Hamiltonian is now \begin{equation} H=-g\delta(t)\hat{I} \otimes \hat{\sigma}_z \otimes \hat{q}+g^{\prime} \hat{L}_x \otimes \hat{\sigma}_x \otimes \hat{I}. \label{eq:Hnoisy} \end{equation} Here $g$ and $g^{\prime}$ are the coupling constants. The coupling between the system and the meter is an instantaneous coupling, while on the contrary, in the second term, there is no dependence of time on spin-orbit coupling. Our aim is to obtain the weak value of $\hat{\sigma}_z$. Due to the noise, the effective observable of the total system, resulting in the deflection in the meter, is different from that in the noiseless case obtained from Eq.~(\ref{eqn:hoz}) in~\cite{AAV,Duck}. Let us consider the initial state of the system as $\ket{\Psi_{in}}$ and the post-selected state as $\ket{\Psi_f}$. As mentioned before, the meter state could be chosen as a discrete or a continuous spectrum, depending on the type of the experimental setup. In our work, we consider both scenarios: One is with a meter variable $\hat{q}$ considered as discrete and the other is with a continuous state distribution of the same. As an example of a discrete meter state, we take a discrete Gaussian function with the standard deviation $\sqrt{2}\Delta$ as \begin{equation} \label{eq:dis} \ket{\Phi_{in}}_{dis}= \sum_{k=-N}^{N}\exp \Big ( -\frac{q_k^2}{4\Delta^2}\Big)\ket{q_k}, \end{equation} where we are assuming a discrete meter variable $q_k=k$ with $k$ having the values $0, \pm 1, \pm 2\ldots \pm N$. The corresponding $p$ representation of this meter state turns out to be \begin{equation} \ket{\Phi_{in}}_{dis}=\sum_{l=-N}^{N} \exp \left ( -\Delta^2 p_l^2\right)\xi(p_l)\ket{p_l}, \end{equation} where $\xi(p_l)=\sum_{k=-N}^{N} e^{-\frac{1}{4\Delta^2}\big(q_k+2i\Delta^2 p_l\big)^2}$ and $p_l=\frac{l}{(2N+1)}$ with $l$ having the values $0, \pm 1, \pm 2\ldots \pm N$. In the limit of $N \rightarrow \infty$, $\xi(p_l)$ will be independent of $p_l$. The parallel example in the continuous case may be taken as \begin{equation} \ket{\Phi_{in}}_{con}= \int_{-\infty}^{+\infty} dq\exp \Big ( -\frac{q^2}{4\Delta^2}\Big)\ket{q}, \label{eq:meter} \end{equation} with the $p$-representation, \begin{equation} \ket{\Phi_{in}}_{con}= \int_{-\infty}^{+\infty} dp\exp \Big ( -\Delta^2 p^2\Big)\ket{p}. \label{eq:meterp} \end{equation} To obtain the $p$ representations from the $q$ representation in both the discrete and continuous cases, we neglect multiplicative constants. After post-selection, the final state of the meter is given by \begin{equation} \ket{\Phi_f}=\bra{\Psi_f}\hat{U}(t)\ket{\Psi_{in}}\ket{\Phi_{in}}, \end{equation} where $t$ is the time of measurement. This is true for both the discrete and continuous meter states and hence we have omitted the subscripts. 
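Before expanding $\hat{U}(t)$, it is worth illustrating numerically the basic pointer-shift mechanism exploited below: multiplying the Gaussian $q$-profile of Eq.~(\ref{eq:meter}) by a phase $e^{igqA_w}$ recentres the $p$-representation at $gA_w$. The sketch below is a bare-bones illustration of ours, with arbitrary parameter values and a real $A_w$ taken for simplicity.
\begin{verbatim}
# Pointer shift: exp(i g q A_w) times the Gaussian q-profile exp(-q^2/(4 Delta^2))
# has a p-representation peaked at p = g*A_w (A_w taken real here).
import numpy as np

Delta, g, A_w = 1.0, 0.1, 5.0                    # illustrative values only
q = np.linspace(-40, 40, 4001)
dq = q[1] - q[0]
phi_q = np.exp(-q**2 / (4 * Delta**2)) * np.exp(1j * g * A_w * q)

p = np.linspace(-3, 3, 1201)
# Direct (Riemann-sum) Fourier transform to the p-representation.
phi_p = np.array([np.sum(phi_q * np.exp(-1j * pk * q)) * dq for pk in p])

p_peak = p[np.argmax(np.abs(phi_p))]
print(p_peak, g * A_w)                           # both approximately 0.5
\end{verbatim}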
As the Hamiltonians at different times do not commute, the expansion of $\hat{U}(t)$ requires the Dyson series, leading to \begin{eqnarray} \label{eq:uni} \hat{U}(t) &=& 1+ \sum_{n=1}^{\infty} \frac{1}{n!}\Big (\frac{-i}{\hbar}\Big)^n \int_{0}^{t}dt_1 \int_{0}^{t}dt_2 \cdots \int_{0}^{t}dt_{n}\nonumber\\ &&\phantom{ dure cholo jai} \times \mathcal{T}[H(t_1)H(t_2) \cdots H(t_n)]. \end{eqnarray} We re-define the coupling constants \(g\) and \(g'\) so that we can effectively set $\hbar=1$. Using time ordering in the expansion, we have \begin{eqnarray} 1&+& (-i) \int_{0}^{t}dt_1 H(t_1)\nonumber\\ &+&\frac{1}{2!}(-i)^2 \int_{0}^{t}dt_1 \int_{0}^{t_1}dt_2 H(t_1)H(t_2)\nonumber\\ &+&\frac{1}{2!}(-i)^2 \int_{0}^{t}dt_2 \int_{0}^{t_2}dt_1 H(t_2)H(t_1)+ \cdots. \end{eqnarray} Interchanging $t_1$ and $t_2$ in the last (displayed) term, we get \begin{eqnarray} 1&+& (-i) \int_{0}^{t}dt_1 H(t_1)\nonumber\\ &-& \int_{0}^{t}dt_1 \int_{0}^{t_1}dt_2 H(t_1)H(t_2)+ \cdots. \label{ramakantakamar} \end{eqnarray} Substituting the Hamiltonian $H$ from Eq.~(\ref{eq:Hnoisy}) in this expression, the form of the unitary turns out to be {\begin{eqnarray} \label{eq:approx} &&1-i(g\hat{I} \otimes \hat{\sigma}_z \otimes \hat{q}+g^\prime t \hat{L}_x\otimes \hat{\sigma}_x \otimes \hat{I}) -[g^2(\hat{I} \otimes \hat{\sigma}_z \otimes \hat{q})^2 \nonumber\\ &+& g g^\prime t (\hat{L}_x \otimes \hat{\sigma}_x\hat{\sigma}_z \otimes \hat{q})+g^{\prime^2} \frac{t^2}{2}(\hat{L}_x \otimes \hat{\sigma}_x \otimes \hat{I})^2]+ \cdots. \quad \quad \end{eqnarray}} The measurement time $t$ is chosen from the regime {$\frac{g}{g^{\prime}} \ll t \ll \frac{\sqrt{g}}{g^{\prime}}$}. Thus we can now assume $g$ and $g^{\prime}$ to be sufficiently small to neglect the $g^2$ term in~(\ref{eq:approx}) and the higher-order terms that are not present in~(\ref{eq:approx}). On the other hand, we retain the terms containing $gg^{\prime}t$ and $g^{\prime^2}t^2$. Using Eq. (\ref{eq:dis}), the final state of the meter after weak measurement and post-selection reads \begin{eqnarray} \label{eqn:zzz} \ket{\Phi_f}_{dis} &\approx& \braket{\Psi_f|\Psi_{in}}\sum_{k=-N}^{N} e^{-\frac{q_k^2}{4\Delta^2}}\Big[1+i g q_k(\hat{\sigma}_z)_w\nonumber\\ &-& i g^{\prime} t (\hat{L}_x\otimes \hat{\sigma}_x)_w-gg^\prime tq_k (\hat{L}_x\otimes \hat{\sigma}_x \hat{\sigma}_z)_w \nonumber\\ &-& g^{\prime^2}\frac{t^2}{2}(\hat{L}_x\otimes \hat{\sigma}_x)_w^2\Big]\ket{q_k}. \end{eqnarray} The weak values above are obtained using the definition in Eq. (\ref{eq:weak}). Now, in general, for an observable $\hat{A}$ and post-selection of $\ket{\Psi_f}$, with $A_w$ being the weak value of $\hat{A}$, the shifted meter state is given by \begin{equation} \label{eq:shift1} \ket{\Phi_f}_{dis} \approx \braket{\Psi_f|\Psi_{in}}\sum_{k=-N}^N e^{igq_kA_w}\exp\Big(-\frac{q_k^2}{4\Delta^2}\Big)\ket{q_k}. \end{equation} Comparing Eqs. (\ref{eqn:zzz}) and (\ref{eq:shift1}), we get \begin{eqnarray} e^{igq_kA_w}&=&1+igq_k(\hat{\sigma}_z)_w-ig^{\prime}t(\hat{L}_x\otimes\hat{\sigma}_x)_w \nonumber\\ &&-gg^\prime tq_k(\hat{L}_x\otimes\hat{\sigma}_x\hat{\sigma}_z)_w-g^{\prime^2}\frac{t^2}{2}(\hat{L}_x\otimes\hat{\sigma}_x)_w^2. \quad \label{shift2} \end{eqnarray} Let \begin{eqnarray} a_w&=&-ig^{\prime}t(\hat{L}_x\otimes\hat{\sigma}_x)_w-g^{\prime^2}\frac{t^2}{2}(\hat{L}_x\otimes\hat{\sigma}_x)_w^2\nonumber,\\ A^{\prime}_w&=&(\hat{\sigma}_z)_w+ig^\prime t (\hat{L}_x\otimes\hat{\sigma}_x\hat{\sigma}_z)_w.
\label{eq:weak_noise} \end{eqnarray} So, the final state of the meter can be rewritten as \begin{equation} \ket{\Phi_f}_{dis} \approx \braket{\Psi_f|\Psi_{in}}\sum_{k=-N}^N\; e^{a_w} e^{igq_kA^{\prime}_w}\exp\Big(-\frac{q_k^2}{4\Delta^2}\Big)\ket{q_k}, \end{equation} with the corresponding $p$-representation being \begin{eqnarray} &&\ket{\Phi_f}_{dis}\approx \braket{\Psi_f|\Psi_{in}}\sum_{l=-N}^N e^{a_w} \exp\Big[-\Delta^2(p_l-gA^{\prime}_w)^2\Big]\nonumber\\ &&\phantom{ami hridoyer bolite byakul sudha} \times \xi(p_l-gA_w^{\prime})\ket{p_l}\label{eq:30}\\ &&\approx \braket{\Psi_f|\Psi_{in}}\sum_{l=-N}^N e^{a_w^{\prime}} \exp\Big[-\Delta^2\Big(p_l-gA^{\prime\prime}_w\Big)^2\Big]\ket{p_l}.\label{eq:31} \end{eqnarray} Here $a^{\prime}_w$ and $A^{\prime\prime}_w$ are implicitly defined via the expression~(\ref{eq:30}), which is equal to the expression~(\ref{eq:31}) and $A^{\prime\prime}_w\rightarrow A^{\prime}_w$, $a^{\prime}_w\rightarrow a_w$ as $N\rightarrow \infty$. The factor $e^{a_w^{\prime}}$ does not contribute to the shift of the meter. So the deflection of the meter is proportional to the weak value $A_w^{\prime\prime}$. This $A_w^{\prime\prime}$ is difficult to be given in a closed (explicit) analytic form. In the continuous limit $N\rightarrow \infty$, when the meter state is taken as a continuous one, given in Eq.~(\ref{eq:meter}), the effective observable, measured in the weak measurement conjured by the noisy Hamiltonian in Eq. ({\ref{eq:Hnoisy}}), is given by \begin{equation} A^{\prime}=\hat{\sigma}_z+ig^\prime t \hat{L}_x\otimes\hat{\sigma}_x\hat{\sigma}_z. \label{eq:A_w_noisy} \end{equation} The steps of the calculation for the continuous meter case are given in Appendix~\ref{discrete}. In the further discussion of our paper we will use the effective observable obtained in the continuous limit. The results for the discrete case will however be close to those obtained using the effective observable \(A^{\prime}\) in Eq.~(\ref{eq:A_w_noisy}) for large \(N\). Moreover, there are instances below where the discrete and continuous cases match (in form). If instead of $\hat{\sigma}_x$ the noise term in the Hamiltonian of Eq.~(\ref{eq:Hnoisy}) contains $\hat{\sigma}_z$, as in the cases, \begin{eqnarray} H_1=-g\delta(t)\hat{I} \otimes \hat{\sigma}_z \otimes \hat{q}+g^{\prime} \hat{L}_x \otimes \hat{\sigma}_z \otimes \hat{I}, \nonumber \\ H_2=-g\delta(t)\hat{I} \otimes \hat{\sigma}_z \otimes \hat{q}+g^{\prime} \hat{L}_z \otimes \hat{\sigma}_z \otimes \hat{I}, \label{eq:noise:parallel} \end{eqnarray} then by calculating the effective observable in a similar fashion, we get, respectively, \begin{eqnarray} &&A_1^{\prime}=\hat{\sigma}_z+ig^\prime t \hat{L}_x\otimes\hat{I},\nonumber \\ &&A_2^{\prime}=\hat{\sigma}_z+ig^\prime t \hat{L}_z\otimes\hat{I}. \end{eqnarray} We can also consider a noisy interaction Hamiltonian of a different form. Precisely, we can take the noisy part of the interaction Hamiltonian to be a three-body term so that it is coupled to the meter with a coupling parameter $g(t)$ which has a compact support near the measurement time $t$: \begin{eqnarray} H^\prime=-g(t)\big(\hat{I}\otimes\hat{\sigma}_z \otimes \hat{q}-\hat{L}_x\otimes\hat{\sigma}_x \otimes \hat{q}\big). \label{eq:Hnoisysimple} \end{eqnarray} Using the same method as before, we see that the effective observable resulting in the shift of the meter is \begin{equation} A^\prime=\hat{\sigma}_z-\hat{L}_x\otimes\hat{\sigma}_x \label{eq:A_w_1_noisy}. 
\end{equation} \begin{figure} \caption{Setup for amplifying the weak value of the effective observable in the noisy scenario. The $L$-splitter adds an orbital angular momentum degree of freedom to the physical system. The $L^\prime$-splitter transmits the component of orbital angular momentum parallel to $\ket{v_a}$ and reflects any orthogonal component towards the detector $D_3$. See the text for further details.} \label{fig2} \end{figure} To demonstrate the working principle in either case [i.e., when the interaction Hamiltonian is given by either Eq.~(\ref{eq:Hnoisysimple}) or~(\ref{eq:Hnoisy})], let us consider a set of pre-selected and post-selected states as follows: \begin{eqnarray} \ket{\chi_{in}}&=&\frac{1}{\sqrt{2}}(\ket{v_a}+i\ket{v_b})\otimes \ket{H},\nonumber\\ \ket{\chi_f}&=&\ket{v_a} \otimes(\cos\alpha\ket{H}+\sin\alpha\ket{V}) \label{pre_post_noise_1}. \end{eqnarray} In these states, the first degree of freedom carries the orbital angular momentum, on which \(\hat{L}_x\) acts, while the second degree of freedom represents polarization. We assume that the orbital quantum number \(l\) is conserved at $1$ for the system under study. Correspondingly, the space spanned by the eigenvectors of \(\hat{L}_x\) is three dimensional. For simplicity, we work in a scenario where one of the dimensions in this three-dimensional space is naturally or artificially forbidden. The remaining two-dimensional Hilbert space is spanned by orthonormal vectors $\ket{v_a}$ and $\ket{v_b}$, with $\hat{L}_x$ represented as \begin{equation} \hat{L}_x=-i(\ket{v_a}\bra{v_b}-\ket{v_b}\bra{v_a}), \end{equation} where \begin{eqnarray} \ket{v_a}= \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} , \quad \ket{v_b}= \begin{pmatrix} 0 \\- i \\ 0 \end{pmatrix}, \end{eqnarray} expressed in the eigenbasis of \(\hat{L}_z\). To prepare the pre-selected state, we can send a photon with an initial polarization $\ket{H}$ through an arrangement, which we call the $L$-splitter in Fig.~\ref{fig2}, where it acquires an angular momentum of $\frac{1}{\sqrt{2}}(\ket{v_a}+i\ket{v_b})$. We intend to measure the weak value of $\hat{\sigma}_z$, and so we bring in a meter in the initial state given by Eq.~(\ref{eq:dis}) for the discrete pointer state, or Eq.~(\ref{eq:meter}) for the continuous pointer state, and set up an interaction of the form in Eq.~(\ref{eqn:hoz}). However, unknown to us, the form of interaction is actually as in Eq.~(\ref{eq:Hnoisy}). To get the post-selected state, the photon passes through another arrangement, the $L^\prime$-splitter, that transmits orbital angular momentum $\ket{v_a}$ and reflects any orthogonal component towards the detector $D_3$. The transmitted photon is then passed through a polarization beam-splitter $PBS$, which is chosen so that it transmits light of polarization $\cos\alpha\ket{H}+\sin\alpha\ket{V}$ towards a detector $D_1$ and reflects any orthogonal polarization en route to detector $D_2$. Thus by selecting the clicks of $D_1$ alone, we can post-select the state $\ket{\chi_f}$. However, due to the inherent noise in the Hamiltonian of the form~(\ref{eq:Hnoisy}), as a result of the extra spin-orbital interaction, the weak value of the effective observable $A^{\prime}=\hat{\sigma}_z+ig^\prime t \hat{L}_x\otimes\hat{\sigma}_x\hat{\sigma}_z$ is measured (instead of \(\hat{\sigma}_z\)) to be \begin{eqnarray} A^\prime_w={(g^\prime t+i)\tan\alpha}.
\label{A_wk} \end{eqnarray} In contrast, if the unknown noise is a three-body interaction as in Eq.~(\ref{eq:Hnoisysimple}), the weak value of the effective observable given in Eq.~(\ref{eq:A_w_1_noisy}) turns out to be \begin{equation} A_w^{\prime}=1+i \tan\alpha \label{A_wk1} \end{equation} using the setup schematically demonstrated in Fig.~\ref{fig2}. So, in both noisy situations [Eqs. (\ref{eq:Hnoisy}) and (\ref{eq:Hnoisysimple})] we can amplify the weak value of the respective effective observables [see Eqs. (\ref{eq:A_w_noisy}) and (\ref{eq:A_w_1_noisy})] by varying the parameter $\alpha$ using different $PBS$s. However, due to the presence of spin-orbit coupling in the system, we are unable to determine the weak value of $\hat{\sigma}_z$ as, instead of $\hat{\sigma}_z$, the deflection of the meter is proportional to the weak value of an observable which contains an unwanted noise along with $\hat{\sigma}_z$. Hence, while attempting to amplify the weak value of the $z$-component of polarization, we unintentionally amplify the weak value of a constituent noise. This, e.g., may cause disadvantages in applications of quantum technologies, which depend on the weak-value enhancement of the polarization degree of freedom of the system. It is plausible that the presence of the noise effects considered is more probable than the ideal noiseless situation, and therefore it will be beneficial to design an experimental setup that can ``disembody'' the noise from the required observable. \subsection{Disembodiment of noise from the ideal system using quantum Cheshire cats } \label{section:5} \begin{figure} \caption{ Interferometric setup for separating the spin-orbit-coupling --like noise and simultaneously amplifying the signal corresponding to the chosen observable. See the text for details. } \label{fig3} \end{figure} With the general working principle established earlier in Sec.~\ref{section:2}, we now proceed to get the amplified signal of the $z$-component of polarization by disassociating, with the help of the Cheshire cat mechanism. The noise originates from the interaction with the unintended degree of freedom (in this case, $\hat{L}_x$) during the weak measurement process. The intended pre-selected and post-selected states are $\ket{\Psi_1^\prime}$ and $\ket{\Psi_f}$, respectively [see Eqs.~(\ref{eq: preselect}) and~(\ref{eq:3})]. However, the photon passing through the $L$-splitter picks up a new degree of freedom, an angular momentum component given by $\frac{1}{\sqrt{2}}(\ket{v_a}+i\ket{v_b})$, as shown in Fig.~\ref{fig3}. So the effective pre-selected state is \begin{equation} \ket{\chi_{in}^{\prime}}=(\cos\frac{\theta}{2}\ket{L}- i\sin\frac{\theta}{2}\ket{R}\otimes\frac{1}{\sqrt{2}}(\ket{v_a}+i\ket{v_b})\otimes \ket{H}. \label{eq:pree} \end{equation} To achieve the desired outcome, we are required to carry out post-selection in \begin{equation} \ket{\chi_f^{\prime}}=(\cos\alpha\ket{L}\ket{H}+ \sin\alpha\ket{R}\ket{V})\otimes\ket{v_a}. \label{eq:post} \end{equation} To obtain this state, the beam-splitter $BS_2$ needs to be of a transmission coefficient $\cos^2 \alpha$ and reflection coefficient $\sin^2 \alpha$ rather than being the $50:50$ type. Also, we assume that the state passes through a device, the $L^\prime$-splitter, that permits only the orbital angular momentum component along $\ket{v_a}$ towards the $PBS$ and reflects any orthogonal component towards the detector $D_3$, as shown in Fig.~\ref{fig3}. 
The measured weak values are \begin{eqnarray} &&(\hat{\sigma}_z^L)_w = 0,\nonumber \\ &&(\hat{\sigma}_z^R)_w = \tan\frac{\theta}{2}\tan\alpha, \nonumber \\ &&(\hat{L}_x\otimes\hat{\sigma}_x)^L_w = 1, \nonumber \\ &&(\hat{L}_x\otimes\hat{\sigma}_x)^R_w = 0. \label{wk_final} \end{eqnarray} Therefore, we can conclude that one can amplify the weak value of the observable $\hat{\sigma}_z$ separating it from the noise part $\hat{L}_x \otimes \hat{\sigma}_x$. Moreover, the enhancement of the weak value of $\hat{\sigma}_z^R$ can be more than that of the effective observables obtained in the two previous cases [described in Eqs.~(\ref{A_wk}) and~(\ref{A_wk1}), using a linear setup, as depicted in Fig.~\ref{fig2}], because of the extra tuning parameter $\theta$. Therefore, even if we fix the parameter $\alpha$, we are still able to amplify the weak value of $\hat{\sigma}_z$ by changing the parameter $\theta$ with the use of different $PBS_1$s. So now we have successfully achieved our goal of amplifying the weak value of the required observable by splitting up the noise from the system. In addition, the enhancement of the weak value of the required observable is greater than that of the effective observable detectable in the noisy situation [compare Eqs.~(\ref{A_wk}) and~(\ref{A_wk1}) with Eq.~(\ref{wk_final})]. We need to choose an initial state of suitable polarization in the preparation of the pre-selected state. Also, to measure the weak values in an experimental procedure, we have to construct a unitary parallel to the one mentioned in the noiseless situation. The composite setup of the system and meter can be acted on by the Hamiltonian $H_{\hat{\sigma}_z^R}^{\prime}$ for measuring $\hat{\sigma}_z^R$, where \begin{eqnarray} H_{\hat{\sigma}_z^R}^{\prime} &=& g\delta(t-t^{\prime})[(\hat{I}-\hat{\Pi}_R) \otimes \hat{I} \otimes \hat{\sigma}_z \otimes \hat{I}\nonumber\\ &&\phantom{na go ei je dhula}+\hat{\Pi}_R \otimes \hat{I}\otimes\hat{\sigma}_z \otimes \hat{q}]. \label{eq:H_sigma} \end{eqnarray} So the unitary generated by the Hamiltonian $H_{\hat{\sigma}_z^R}^{\prime}$ will be $U_{\hat{\sigma}_z^R}^{\prime}=e^{-\int iH_{\hat{\sigma}_z^R}^{\prime}t dt}$. After the post selection, the meter state turns out to be \begin{equation} \ket{\Phi_m}=\braket{\chi_f^{\prime}|\chi_{in}^{\prime}}[1-ig(\hat{\sigma}_z^R)_w \hat{q}\ket{\Phi_{in}}]. \end{equation} Hence, the deflection of the meter state is proportional to the weak value of $\hat{\sigma}_z$ on the right arm, up to the first-order term in the expansion of the unitary. The constructed unitary is almost the same as in the noiseless scenario, the only difference being that we have to incorporate an identity in the degree of freedom of orbital angular momentum. Similarly, to measure $(\hat{L}_x\otimes \hat{\sigma}_x)^L$, the unitary is $\hat{U}_{(\hat{L}_x\otimes \hat{\sigma}_x)^L}^{\prime}$, where \begin{eqnarray} H_{(\hat{L}_x\otimes \hat{\sigma}_x)^L}^{\prime} &=& g^{\prime}[(\hat{I}-\hat{\Pi}_L) \otimes \hat{L}_x \otimes \hat{\sigma}_x \otimes \hat{I}\nonumber\\ &&\phantom{jodi tare}+ \hat{\Pi}_L \otimes \hat{L}_x\otimes\hat{\sigma}_x \otimes \hat{q}], \label{eq:H_last} \end{eqnarray} and here the final meter state ends up being \begin{equation} \ket{\Phi_m}=\braket{\chi_f^{\prime}|\chi_{in}^{\prime}}[1-ig^{\prime}t(\hat{L}_x\otimes\hat{\sigma}_x)^R)_w \hat{q}\ket{\Phi_{in}}]. \end{equation} Note that the subscript dis or con has been omitted here, as the same form of the equation is true in both cases. 
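The weak values in Eq.~(\ref{wk_final}) can also be verified numerically. The sketch below is an illustration of ours, using the ordering path $\otimes$ orbital angular momentum $\otimes$ polarization, the $l=1$ matrix of $\hat{L}_x$ in the $\hat{L}_z$ eigenbasis, and the polarization operators written in the $\{\ket{H},\ket{V}\}$ basis.
\begin{verbatim}
# Check of the disembodiment weak values, ordering (path) x (OAM, l=1) x (polarization).
import numpy as np

theta, alpha = 1.1, 0.7                           # arbitrary tuning angles
L, R = np.eye(2)                                  # path basis |L>, |R>
H, V = np.eye(2)                                  # polarization basis |H>, |V>
v_a = np.array([1, 0, 1]) / np.sqrt(2)            # OAM vectors used in the text
v_b = np.array([0, -1j, 0])
L_x = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)   # l = 1, hbar = 1

sigma_z = np.array([[0, -1j], [1j, 0]])           # |+><+| - |-><-| in the H/V basis
sigma_x = np.diag([1.0, -1.0])                    # |+><-| + |-><+| in the H/V basis
Pi_L, Pi_R, I3 = np.outer(L, L), np.outer(R, R), np.eye(3)

pre = np.kron(np.kron(np.cos(theta / 2) * L - 1j * np.sin(theta / 2) * R,
                      (v_a + 1j * v_b) / np.sqrt(2)), H)
post = (np.cos(alpha) * np.kron(np.kron(L, v_a), H)
        + np.sin(alpha) * np.kron(np.kron(R, v_a), V))
wv = lambda A: (post.conj() @ A @ pre) / (post.conj() @ pre)

print(wv(np.kron(np.kron(Pi_L, I3), sigma_z)))    # 0
print(wv(np.kron(np.kron(Pi_R, I3), sigma_z)),
      np.tan(theta / 2) * np.tan(alpha))          # both equal tan(theta/2) tan(alpha)
print(wv(np.kron(np.kron(Pi_L, L_x), sigma_x)))   # 1
print(wv(np.kron(np.kron(Pi_R, L_x), sigma_x)))   # 0
\end{verbatim}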
\par In practice, a more complex scenario can arise when the noise couples with the measured observable, as in Eq.~(\ref{eq:noise:parallel}). In this case, one may not be able to find a suitable pre- and post-selection to decouple the noise from $\hat{\sigma}_z$. It could be possible to separate $\hat{\sigma}_z$ and $\hat{L}_x \otimes \hat{\sigma}_z$ or $\hat{L}_z \otimes \hat{\sigma}_z$, but in both arms of the interferometer, there still remains a contribution of $\hat{\sigma}_z$. Hence the complete dissociation of the $z$-component of polarization of the photon from the noise part may not be achievable by this method. In the noise model proposed here, the angular momentum degree of freedom of the system is acting as a source of noise and it couples to another degree of freedom of the system, which we want to measure. So it is like a subsystem interacting with the original system. In a realistic noisy scenario, to introduce a generic noise in the system, we take an auxiliary system from outside, operate a global unitary on the composite system-auxiliary setup, and then trace out the auxiliary. The same method can be followed with a different degree of freedom of the system instead of using the auxiliary system. The mathematical modeling will be the same in both instances. There are previous works where the noise is generated from a degree of freedom of the system itself. See, e.g., \cite{Zukowski1999,Zukowski2023}. Also, in \cite{Flores} it was shown that decoherence effects can be generated due to the coupling of mesoscopic variables of the system and internal degrees of freedom of the same. The center of mass of a system can have different degrees of freedom, which may interfere, and such interference can get effectively decohered due to the coupling of the center of mass of the system with the internal vibrational degrees of freedom, as was studied in~\cite{Brun}. See also~\cite{Hillery,Nikolic} in this regard. An additional point to be noted here is that in practical scenarios, the noise source is usually unknown and we have to trace out the subsystem. In our case, for the measurement, we have to concentrate on the meter state, which is done by removing both degrees of freedom of the system from the total system including the meter. This tracing out is performed implicitly in the method of weak measurement. This implicit tracing out of the system has also been performed in previous papers in this direction, e.g., in~\cite{AAV,Duck}. Note that for this implicit tracing out of the system, the noise source is also being traced out. So no additional explicit tracing out, corresponding to the noise in the system, is required. \section{Conclusion} \label{section: conclusion} To summarize, we have proposed a thought experiment related to the so-called quantum Cheshire cat in which a component of polarization could be amplified independently of the photon, using interferometric arrangements. Furthermore, we extended the setup to a scenario in which, of two complementary polarization components of a photon, one can be amplified while being detached from the other. Moreover, we considered a noisy scenario in which the noise is generated by a spin-orbit-coupling --like interaction term in the Hamiltonian governing the measurement process. We analyzed the amplification of a chosen observable in the presence of noise, both with and without the noise term being dissociated, on average, from the object by using a quantum Cheshire cat--inspired setup. 
It has been pointed out in the literature that the phenomena related to the quantum Cheshire cat are ``average'' effects. In particular, the weak values indicate average shifts of the meter, conditioned on the pre-selected and the post-selected states, and the object and the property (or two properties of the same object) do not actually travel separately in each arm of the experimental setup; rather they do so only on an average. Nonetheless, just like for the original quantum Cheshire cat, in our setups also, it has been ensured that the weak values observed are not classical averages, but quantum-mechanical mean shifts, obtained by considering coupling between the photon or polarization component and a meter. Let us also note here that the amplifications reported do not imply that the amplified values could become arbitrarily large; nonlinear effects appear that limit the amplified values~\cite{Limits7}.\par \appendix \section{Effective observable when meter state is continuously distributed} \label{discrete} For the initial meter state given in Eq.~(\ref{eq:meter}), the final meter state will take the form \begin{eqnarray} \ket{\Phi_f}_{con} &\approx& \braket{\Psi_f|\Psi_{in}}\int dq \, e^{-\frac{q^2}{4\Delta^2}}\Big[1+i g q(\hat{\sigma}_z)_w\nonumber\\ &-& i g^{\prime} t (\hat{L}_x\otimes \hat{\sigma}_x)_w-gg^\prime tq (\hat{L}_x\otimes \hat{\sigma}_x \hat{\sigma}_z)_w \nonumber\\ &-& g^{\prime^2}\frac{t^2}{2}(\hat{L}_x\otimes \hat{\sigma}_x)_w^2\Big]\ket{q}. \end{eqnarray} This can be written as \begin{equation} \ket{\Phi_f}_{con} \approx \braket{\Psi_f|\Psi_{in}}\int \,dq\, e^{iqgA_w}\exp(-\frac{q^2}{4\Delta^2})\ket{q}, \end{equation} with the same $a_w$ and $A^{\prime}_w$ as in Eq.~(\ref{eq:weak_noise}) and hence, the effective observable $A^{\prime}$ is the same as in Eq.~(\ref{eq:A_w_noisy}). So, the final state of the meter turns out to be \begin{equation} \ket{\Phi_f}_{con} \approx \braket{\Psi_f|\Psi_{in}}\int \,dq\,e^{a_w} e^{iqgA^{\prime}_w}\exp(-\frac{q^2}{4\Delta^2})\ket{q}, \end{equation} with the corresponding $p$-representation being \begin{equation} \ket{\Phi_f}_{con} \approx \braket{\Psi_f|\Psi_{in}}\int \,dp\,e^{a_w} \exp[-\Delta^2(p-gA^{\prime}_w)^2]\ket{p}. \end{equation} \acknowledgements A.G. and U.S. acknowledge partial support from the Department of Science and Technology, Government of India through QuEST grant (grant number DST/ICPS/QUST/Theme-3/2019/120). \end{document}
\begin{document} \title{\textbf{\sc On Derivative Euler Phi Function Set-Graphs}} \hrule \begin{abstract} In this paper, we study some graph theoretical properties of two derivative Euler Phi function set-graphs. For the Euler Phi function $\phi(n)$, $n\in \mathbb{N}$, we consider the set $S_\phi(n) =\{i:\gcd(i,n)=1,\ 1\leq i \leq n\}$, and the vertex set is $\{v_i:i\in S_\phi(n)\}$. Two graphs $G_d(S_\phi(n))$ and $G_p(S_\phi(n))$, defined with respect to divisibility adjacency and relatively prime adjacency conditions, are studied. \end{abstract} \noindent\textbf{Keywords:} Euler Phi set-graph, least common multiple set-graph, relatively prime set-graph. \noindent \textbf{Mathematics Subject Classification 2010:} 05C15, 05C38, 05C75, 05C85. \hrule \section{Introduction} For all terms and definitions, not defined specifically in this paper, we refer to \cite{BM1,BLS,FH,DBW}. Unless mentioned otherwise, all graphs considered here are undirected, simple, finite and connected. The \textit{order} and \textit{size} of a graph $G$ are respectively the numbers of vertices and edges in it and are denoted respectively by $\nu(G)$ and $\varepsilon(G)$. In this paper, we write $\nu(G) = n \geq 1$ and size $\varepsilon(G)= p \geq 0$. The minimum and maximum degrees of $G$ are represented by $\delta(G)$ and $\Delta(G)$, respectively. When the context is clear, we write these notations simply as $\delta$ and $\Delta$, respectively. The degree of a vertex $v \in V(G)$ is denoted by $d_G(v)$ or, when the context is clear, simply by $d(v)$. The complement graph of a graph $G$ is denoted by $\overline{G}$, and for a graph with composite notation such as $G_{g(x)}(f(x))$ the complement graph will be denoted by $\overline{G}_{g(x)}(f(x))$. The sequences of integers $a_i\in \{1,2,3,\dots,n\}$, $1\leq i\leq k$, of greatest length such that either $a_j\mid a_{j+1}$ or $a_{j+1}\mid a_j$ for $1\leq j\leq k-1$ have been studied in \cite{EFH}, and finding the longest such sequences is equivalent to finding a longest path in a divisor graph as defined in \cite{CMSZ}. The \textit{divisor graph} of a finite, non-empty set $S$ of positive integers, denoted by $G_d(S)$, is the graph which has vertex set $V(G_d(S)) = \{v_i:i\in S\}$ and edge set $E(G_d(S)) = \{v_iv_j: i\mid j\ \text{or}\ j\mid i,\ \text{and}\ i\neq j\}$. Hence, if $S=\{1,2,3,\dots,n\}$, then finding a longest path in $G_d(S)$ is equivalent to the subject of study in [3]. The importance of divisor and relatively prime graphs is found in some of their applications in communication networks. Recall that the \textit{relatively prime graph} of a finite, non-empty set of positive integers $S$ is the graph $G_p(S)$ with vertex set $V(G_p(S)) = \{v_i:i\in S\}$ and edge set $E(G_p(S)) = \{v_iv_j: \gcd(i,j)=1\}$ (see \cite{CMSZ}). Note that $1$ is relatively prime to all positive integers and any integer is divisible by $1$. On the other hand, a positive integer $a\geq 2$ is not relatively prime to itself. Therefore, no loops can be found in $G_p(S)$. A comprehensive study on the classification of divisor graphs, which includes interesting results related to complements of graphs and induced neighbourhoods, has been done in \cite{CF1}. The interesting conjecture that, for $S=\{1,2,3,\ldots,n\}$, every tree of order $n$ is a subgraph of the relatively prime graph $G_p(S)$ was verified for $n\leq 15$ in \cite{FH1}. A specialised \textit{relatively prime graph} has been defined in \cite{SS1}.
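As an aside, the two adjacency rules above are easy to realise computationally. The following minimal sketch (ours, not from the cited sources; plain Python, with graphs stored as adjacency dictionaries) builds $G_d(S)$ and $G_p(S)$ for a finite set $S$ of positive integers, and also computes the set $S_\phi(n)$ used throughout this paper.
\begin{verbatim}
# Illustrative sketch: divisor graph G_d(S) and relatively prime graph G_p(S)
# of a finite set S of positive integers, stored as adjacency dictionaries.
from math import gcd
from itertools import combinations

def divisor_graph(S):
    adj = {i: set() for i in S}
    for a, b in combinations(sorted(S), 2):
        if a % b == 0 or b % a == 0:     # one divides the other
            adj[a].add(b)
            adj[b].add(a)
    return adj

def relatively_prime_graph(S):
    adj = {i: set() for i in S}
    for a, b in combinations(sorted(S), 2):
        if gcd(a, b) == 1:               # relatively prime
            adj[a].add(b)
            adj[b].add(a)
    return adj

def euler_phi_set(n):
    # S_phi(n) = { i : gcd(i, n) = 1, 1 <= i <= n }
    return {i for i in range(1, n + 1) if gcd(i, n) == 1}

S = euler_phi_set(24)              # {1, 5, 7, 11, 13, 17, 19, 23}
Gd = divisor_graph(S)              # a star centred at 1, hence acyclic
Gp = relatively_prime_graph(S)     # a complete graph on phi(24) = 8 vertices
print(sorted(S), len(Gd[1]), all(len(Gp[v]) == len(S) - 1 for v in S))
\end{verbatim}
The printed output confirms, for $n=24$, the behaviour discussed in the next section: $G_d(S_\phi(24))$ is a star (hence acyclic) and $G_p(S_\phi(24))$ is complete.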
For a given integer $n\in \mathbb{N}$, the \textit{Euler Phi graph}, denoted by $G_p(S_\phi(n))$, is a relatively prime graph on the set of positive integers $S_\phi(n) = \{i: \gcd(i, n)=1, 1\leq i\leq n\}$ with vertex set $V(G_p(S_\phi(n)))=\{v_i:i\in S_\phi(n)\}$. Note that the vertices are not necessarily consecutively indexed. The notation used in this paper differs slightly from that introduced in \cite{SS1} because it is possible to have $n_1\neq n_2$ and $\phi(n_1)=\phi(n_2)$. \section{New Directions} For the set $S_\phi(n)$, two graphs are possible within the context of this study -- a relatively prime graph as studied in \cite{SS1} and a divisor graph. Let $S'_\phi(n)=S_\phi(n)-\{v_1\}$. If for all pairs of distinct vertices $v_i,v_j \in S'_\phi(n)$ the integers $i,j$ are such that either one divides the other or they are relatively prime, then it follows that the divisor graph $G_d(S_\phi(n))= \overline{G}_p(S_\phi(n)-\{v_1\}) + v_1$ and the relatively prime graph $G_p(S_\phi(n))= \overline{G}_d(S_\phi(n)-\{v_1\}) + v_1$. It is clear that for both the aforesaid graphs, $\Delta = \phi(n)-1$. Let $\mathcal{P}$ be the set of prime numbers. Also, let $\mathcal{P}(k)$ denote the set of the first $k$ consecutive prime numbers. We know that if a positive integer $n$ is written as a product of powers of $k$ distinct prime numbers, $n= p_i^{a_1}\cdot p_j^{a_2}\cdots p_t^{a_k}$, then $\phi(n)=n(1-\frac{1}{p_i})(1-\frac{1}{p_j})\cdots (1-\frac{1}{p_t})$ (see \cite{TMA}). This result will be used throughout with little reference to other identities of the $\phi$-function. This result provides the cardinality of $S_\phi(n)$. \begin{theorem}\label{Thm-2.1} A divisor graph $G_d(S_\phi(n))$ is acyclic if and only if the corresponding relatively prime graph $G_p(S_\phi(n))$ is complete. \end{theorem} \begin{proof} Let a relatively prime graph $G_p(S_\phi(n))$ be complete. Hence, besides the vertex $v_1$, all other pairs of distinct vertices correspond to relatively prime integers, so neither integer divides the other. Therefore, in the corresponding divisor graph only $v_1$ is adjacent to all other vertices, which renders a star graph. So, $G_d(S_\phi(n))$ is acyclic. Now let a divisor graph $G_d(S_\phi(n))$ be acyclic. The converse result follows by similar reasoning. \end{proof} We recall the next well known lemma \cite{ES1}. \begin{lemma}{\rm \cite{ES1}}\label{Lem-2.2} \begin{enumerate} \item[(i)] Any positive non-prime integer $n\geq 4$ is a multiple of at least one prime number, and all of its prime factors are less than $n$.\\ \item[(ii)] $\prod\limits_{i=1}^{k}p_i < p^2_{k+1}$ for $p_i \in \mathcal{P}(k)$. \end{enumerate} \end{lemma} For a positive integer $n$ let the prime divisor (or factor) set of $n$ be $S_\vartheta(n) =\{p_i:p_i\mid n\}$. \begin{theorem}\label{Thm-2.3} For any positive integer $n\ge 5$, the divisor graph $G_d(S_\phi(n))$ is acyclic if and only if $S_\vartheta(n)= \mathcal{P}(k)$, for some $k\in \mathbb{N}$. \end{theorem} \begin{proof} If $S_\vartheta(n)=\mathcal{P}(k)$ for some $k\in \mathbb{N}$, then $n=t\cdot \prod\limits_{i=1}^{k}p_i$ for some $t\in \mathbb{N}$. Two cases must be considered. \textit{Case 1}: Let $t=1$. For all non-prime $m$ with $1< m\leq p_{k+1}$, we have $\gcd(m, n)\neq 1$ because from Lemma \ref{Lem-2.2}(i) it follows that $m$ is a product of prime numbers in $\mathcal{P}(k)$. Similarly, every non-prime $m$ with $p_{k+1}< m \leq n$ satisfies $m \notin S_\phi(n)$ because, from Lemma \ref{Lem-2.2}(ii), $p^2_{k+1}>n$ and hence $\gcd(m,n)\neq 1$. Therefore, the set $S_\phi(n) = \{1\}\cup \{p_j: (k+1)\leq j\leq \ell,~p_\ell <n\}$.
Since no prime number divides another distinct prime number, it follows that $G_d(S_\phi(n))$ is the star graph $S_{1,(\ell -k)}$. Now, if $G_d(S_\phi(n))$ is acyclic and since $1\mid z,\ \forall\, z\in \mathbb{N}$, it follows that $G_d(S_\phi(n))\cong S_{1,(\phi(n)-1)}$. Further, for all pairs of distinct $a,b \in S_\phi(n)$, $a\neq 1$, $b\neq 1$, it follows that $\gcd(a,b)= 1$. We prove two subcases. \textit{Subcase 1(a)}: Suppose $n$ is odd. For $n \geq 5$ it follows easily that $2,4 \in S_\phi(n)$ and hence $G_d(S_\phi(n))$ is cyclic. Also, $S_\vartheta(n)\neq \mathcal{P}(k)$ for any $k$ because $2\notin S_\vartheta(n)$. Hence, $n$ cannot be odd. \textit{Subcase 1(b)}: If $n\geq 4$ is even, no product of a combination of elements in $S_\vartheta(n)$, or a multiple thereof, can be an element in $S_\phi(n)$, else a triangle exists in $G_d(S_\phi(n))$. Therefore, all elements in $S_\phi(n)$ other than $1$ are prime numbers $p_j$ with $p_{k+1}\leq p_j < n$ for some $k\in \mathbb{N}$. If not, a triangle must exist, which is a contradiction. The aforesaid implies that $S_\vartheta(n)=\mathcal{P}(k)$ for some $k\in \mathbb{N}$. \textit{Case 2:} For $t\geq 2$ the result follows by induction, using reasoning similar to that found in Case 1. \end{proof} The lower bound on $n$ in Theorem \ref{Thm-2.3} follows from the fact that for $1\leq n\leq 4$ it is easy to verify that $G_d(S_\phi(n))$ is acyclic. Also, $n=3$ is the only exception to Subcase 1(a). \begin{example}{\rm Since, for $n=24$, the prime divisor set $S_\vartheta(24)=\{2,3\}=\mathcal{P}(2)$, the divisor graph $G_d(S_\phi(24))$ is acyclic. Clearly, $S_\phi(24)=\{1,5,7,11,13,17,19,23\}$. See Figure \ref{fig:fig-1}. \begin{figure}\caption{The divisor graph $G_d(S_\phi(24))$}\label{fig:fig-1} \end{figure} Since, for $n=15$, the prime divisor set $S_\vartheta(15)=\{3,5\}\neq \mathcal{P}(k)$ for any $k\in \mathbb{N}$, the divisor graph $G_d(S_\phi(15))$ is cyclic. Here, $S_\phi(15) = \{1,2,4,7,8,11,13,14\}$. See Figure \ref{fig:fig-2}. \begin{figure}\caption{The divisor graph $G_d(S_\phi(15))$}\label{fig:fig-2} \end{figure} }\end{example} The following corollary is a direct consequence of Theorem \ref{Thm-2.3}. \begin{corollary} For $n\geq 5$, if $G_d(S_\phi(n))$ is acyclic, then $n$ is even. \end{corollary} \begin{theorem} For $n\in \mathbb{N}$, let $x= \min\{y:y\in S_\phi(n)-\{1\}\}$. The divisor graph $G_d(S_\phi(n))$ is acyclic if and only if $n\leq x^2$. \end{theorem} \begin{proof} For $n\in \mathbb{N}$ let $x= \min\{y:y\in S_\phi(n)-\{1\}\}$. From the preceding corollary it follows that it suffices to consider only even $n$. Therefore, $x\geq 3$ is a prime number. Also, since all elements of $S_\phi(n)-\{1\}$ are either prime numbers or products of primes that are themselves coprime to $n$ (and hence at least $x$), it follows that if $n\leq x^2$, then all elements of $S_\phi(n)-\{1\}$ are prime. Therefore, $G_d(S_\phi(n))$ is acyclic. Next, if $G_d(S_\phi(n))$ is acyclic, the converse implication is obvious. \end{proof} The next corollary is a useful summary of equivalent results. \begin{corollary} For any $n \in \mathbb{N}$, $n \geq 5$ it follows that $S_\vartheta(n)= \mathcal{P}(k)$, for some $k\in \mathbb{N}$ if and only if: \begin{enumerate}\itemsep0mm \item[(i)] $G_d(S_\phi(n))$ is acyclic. \item[(ii)] $G_p(S_\phi(n))$ is complete. \item[(iii)] $n\leq x^2$ for $x= \min\{y:y\in S_\phi(n)-\{1\}\}$. \end{enumerate} \end{corollary} \begin{proposition} If $n\geq 5$ and $S_\vartheta(n) = \mathcal{P}(k)$ for some $k\in \mathbb{N}$, then \begin{enumerate}\itemsep0mm \item[(i)] $\varepsilon(G_d(S_\phi(n)))=|\{p_i:(k+1)\leq i\leq \ell, p_\ell<n\}|=\Delta$. \item[(ii)] $\varepsilon(G_p(S_\phi(n)))=\frac{1}{2}\Delta (\Delta +1)$.
\end{enumerate} \end{proposition} \begin{proof} (i) Since $G_d(S_\phi(n))$ is the star graph $S_{1,(\ell -k)}$, by the proof of Theorem \ref{Thm-2.3}, the result is obvious. (ii) Since $G_p(S_\phi(n))$ is a complete graph of order $\Delta +1$, the result is obvious. \end{proof} \begin{observation}{\rm A number of trivial observations on these invariants follow immediately. \begin{enumerate}\itemsep0mm \item[(i)] The respective domination numbers are $1$. \item[(ii)] The chromatic numbers are $\chi(G_d(S_\phi(n)))=2$ and $\chi(G_p(S_\phi(n)))=\Delta+1$. \item[(iii)] The clique numbers are $\omega(G_d(S_\phi(n)))=2$ and $\omega(G_p(S_\phi(n)))=\Delta+1$. \item[(iv)] These graphs $G_d(S_\phi(n))$ and $G_p(S_\phi(n))$ are perfect graphs. \end{enumerate} }\end{observation} \section{Derivative Set-graphs} The notion of a set-graph was introduced in \cite{KCSS} as explained below. \begin{definition}\label{Defn-3.1}{\rm \cite{KCSS} Let $A^{(n)} = \{a_1,a_2,a_3,\ldots,a_n\}$, $n\in \mathbb{N}$ be a non-empty set and the $i$-th $s$-element subset of $A^{(n)}$ be denoted by $A^{(n)}_{s,i}$. Now, consider $\mathcal S = \{A^{(n)}_{s,i}: A^{(n)}_{s,i} \subseteq A^{(n)}, A^{(n)}_{s,i} \neq \emptyset \}$. The \textit{set-graph} corresponding to the set $A^{(n)}$, denoted by $G_{A^{(n)}}$, is defined to be the graph with $V(G_{A^{(n)}}) = \{v_{s,i}: A^{(n)}_{s,i} \in \mathcal S\}$ and $E(G_{A^{(n)}}) = \{v_{s,i}v_{t,j}: A^{(n)}_{s,i} \cap A^{(n)}_{t,j} \neq \emptyset\}$, where $s\neq t$ or $i\neq j$. }\end{definition} The largest complete subgraph in a given set-graph $G_{A^{(n)}}$, $n \geq 2$ is $K_{2^{n-1}}$, and the number of such largest complete subgraphs in the given set-graph $G_{A^{(n)}}$, $n \geq 2$ is provided in the following proposition. \begin{proposition}\label{Prop-3.2}{\rm \cite{KCSS}} The set-graph $G_{A^{(n)}}, n\geq 1$ has exactly $2^{n-1}$ largest complete subgraphs $K_{2^{n-1}}$. \end{proposition} The graph in Figure \ref{fig:fig-3} is the set-graph $G_{A^{(3)}}$. \begin{figure} \caption{The set-graph $G_{A^{(3)}}$} \label{fig:fig-3} \end{figure} In what follows, for a set of positive integers, the \textit{$\iota$-weight} $\iota(v_{s,i})$ of a vertex $v_{s,i}$ is the least common multiple of the elements of the corresponding subset $A^{(n)}_{s,i}$. \begin{example}{\rm Let $A^{(4)} = \{1,3,5,7\}$. Following Definition \ref{Defn-3.1}, the set-graph has vertices $v_{1,1} =\{1\}$, $v_{1,2}=\{3\}$, $v_{1,3}=\{5\}$, $v_{1,4}=\{7\}$, $v_{2,1}=\{1,3\}$, $v_{2,2}=\{1,5\}$, $v_{2,3}=\{1,7\}$, $v_{2,4}=\{3,5\}$, $v_{2,5}=\{3,7\}$, $v_{2,6}=\{5,7\}$, $v_{3,1}=\{1,3,5\}$, $v_{3,2}=\{1,3,7\}$, $v_{3,3}=\{1,5,7\}$, $v_{3,4}=\{3,5,7\}$, $v_{4,1}=\{1,3,5,7\}$. The $\iota$-weights are given by the mapping $\iota(v_{1,1}) \mapsto 1$, $\iota(v_{1,2})\mapsto 3$, $\iota(v_{1,3})\mapsto 5$, $\iota(v_{1,4})\mapsto 7$, $\iota(v_{2,1})\mapsto 3$, $\iota(v_{2,2})\mapsto 5$, $\iota(v_{2,3})\mapsto 7$, $\iota(v_{2,4})\mapsto 15$, $\iota(v_{2,5})\mapsto 21$, $\iota(v_{2,6})\mapsto 35$, $\iota(v_{3,1})\mapsto 15$, $\iota(v_{3,2})\mapsto 21$, $\iota(v_{3,3})\mapsto 35$, $\iota(v_{3,4})\mapsto 105$, $\iota(v_{4,1})\mapsto 105$. Hence, $\iota(A^{(4)}) = \{v_{1,1}(1), v_{1,2}(3), v_{1,3}(5), v_{1,4}(7), v_{2,1}(3),v_{2,2}(5), v_{2,3}(7), v_{2,4}(15),\\ v_{2,5}(21), v_{2,6}(35),v_{3,1}(15), v_{3,2}(21), v_{3,3}(35),v_{3,4}(105), v_{4,1}(105)\}$. }\end{example} \begin{proposition}\label{Prop-3.3} For a set of positive integers $S =\{a_1,a_2,a_3,\ldots,a_n\}$ with $a_1 < a_2 < a_3 <\cdots < a_n$, we have \begin{enumerate}\itemsep0mm \item[(i)] If $a_1\neq 1$ and all pairs of distinct entries are relatively prime then the $\iota$-weights are distinct.
\item[(ii)] If $a_1=1$ and all pairs of distinct entries are relatively prime then the $\iota$-weights repeat twice except for that of $v_{1,1}$. \end{enumerate} \end{proposition} \begin{proof} (i) The result follows immediately from the facts that all subsets are distinct and that for any set of pairwise relatively prime integers $X=\{e_1,e_2,e_3,\ldots,e_t\}$ the least common multiple satisfies $\lcm(e_i\in X)=\prod\limits_{i=1}^{t}e_i$. (ii) The result follows immediately from the facts that $1\cdot x=x$ and that for each subset $X$ with $1\notin X$ there exists the subset $\{1\}\cup X$, except for the unique vertex $v_{1,1}=\{1\}$. Hence, $\iota(v_{1,1})=1$, uniquely. All other $\iota$-weights repeat twice. \end{proof} In the context of repetition we say that an $\iota$-weight which is unique repeats zero times. \begin{theorem}\label{Thm-3.4} For a set of positive integers $S =\{a_1,a_2,a_3,\ldots,a_n\}$ with $a_1 < a_2 < a_3 <\cdots < a_n$, all $\iota$-weights repeat an even number of times. \end{theorem} \begin{proof} We prove the result by mathematical induction on the cardinality of the set $S$. For $S=\{a_1\}$, the corresponding set-graph has the isolated vertex $v_{1,1}$. Therefore, $\iota(v_{1,1})= a_1$. Since the $\iota$-weight repeats zero times, the result holds true when $|S|=1$. For $S=\{a_1,a_2\}$, the corresponding set-graph has vertex set $\{v_{1,1}, v_{1,2}, v_{2,1}\}$ and hence $\iota(v_{1,1})= a_1$, $\iota(v_{1,2})=a_2$ and $\iota(v_{2,1}) = a_1\cdot a_2$. If $a_1=1$, then $\iota(v_{1,2})=\iota(v_{2,1})=a_2$. Hence, the $\iota$-weight $a_1$ repeats zero times and the $\iota$-weight $a_2$ repeats twice. If $a_1\neq 1$, then $\iota(v_{1,1})=a_1$, $\iota(v_{1,2})=a_2$ and $\iota(v_{2,1})= a_1\cdot a_2$. Clearly, each $\iota$-weight repeats zero times and the result is true for $|S|=2$. Now, assume that the result holds for any set $S$ of cardinality $k$. Consider a set of positive integers $S'= \{e_1,e_2,e_3,\ldots,e_k,e_{k+1}\}$. Then for $S= \{e_1,e_2,e_3,\ldots,e_k\}$, the result holds. The vertices of the set-graph $G_{S'^{(k+1)}}$ can be partitioned into three subsets, namely $V(G_{S^{(k)}})$, $\{\{e_{k+1}\}\cup v_{s,i}: v_{s,i}\in V(G_{S^{(k)}})\}$ and $\{v_{k+1,1}\}$. Since $k+1 >1$, the $\iota$-weights of the vertices in $\{\{e_{k+1}\}\cup v_{s,i}: v_{s,i}\in V(G_{S^{(k)}})\}$ repeat, as multiples of $e_{k+1}$, the same even number of times as those of the corresponding vertices in $V(G_{S^{(k)}})$. Finally, $\iota(v_{k+1,1})$ is unique and hence it repeats zero times. In conclusion the result holds for any set of positive integers of cardinality $k+1$. Thus, the result holds for all finite sets of positive integers of cardinality $n\in \mathbb{N}$ by mathematical induction. \end{proof} The \textit{first derivative set-graph} is a divisor graph with respect to the LCM values. Hence, vertices $v_{s,i}(j)$ and $v_{t,k}(\ell)$ are adjacent if and only if $j\mid \ell$ or $\ell \mid j$. The \textit{second derivative set-graph} is a relatively prime graph with respect to the LCM values. Hence, vertices $v_{s,i}(j)$ and $v_{t,k}(\ell)$ are adjacent if and only if $\gcd(j,\ell)=1$. These derivative set-graphs will be investigated for the Euler Phi function $\phi(n)$. \subsection{Euler Phi Set-graphs and derivative set-graphs} In order to link some aspects of graph theory with the notions of divisor graphs, relatively prime graphs, Euler Phi set-graphs and the like, we introduce the notion of a set which divides another set.
If for two distinct non-empty sets $X$ and $Y$ we have $X\cap Y\neq \emptyset$, we say that $X$ divides $Y$ and write $X\mid Y$. Clearly it is true that $X\mid Y\Leftrightarrow Y\mid X$. Also, it is not necessarily true that $(X\mid Y, Y\mid Z)\Rightarrow X\mid Z$. The \textit{Euler Phi set-graph}, denoted by $G_{S_\phi(n)}$, is obtained by applying Definition \ref{Defn-3.1} to the set $S_\phi(n)$. Clearly, $G_{S_\phi(n)}$ has all the properties of $G_{A^{(\phi(n))}}$ for $A^{(\phi(n))} = \{a_1,a_2,a_3,\ldots,a_{\phi(n)}\}$. Then, we have a straightforward corollary. \begin{corollary}\label{Cor-3.5} $G_{S_\phi(n)} \cong G_{S_\phi(m)}$ if and only if $\phi(n)=\phi(m)$. \end{corollary} We denote the Euler Phi $\lcm$-divisor set-graph, that is, the first derivative set-graph of $G_{S_\phi(n)}$, by $G_d(\iota(S_\phi(n)))$. Recall that $\phi(n)$ is even for all $n\in \mathbb{N}\setminus\{1,2\}$. For $n= 1,2$, $\phi(n) = 1$; for $n=3,4,6$, $\phi(n)=2$; and for $n=5,8,10,12$, $\phi(n) = 4$. Generally, the vertex notation $v_{s,i}(\iota)$ will mean the vertex corresponding to the $i^{\rm th}$ $s$-element subset together with its corresponding $\iota$-weight. \begin{lemma}\label{Lem-3.6} An Euler Phi $\lcm$-divisor set-graph $G_d(\iota(S_\phi(n)))$ has exactly three vertices with maximum degree $\Delta (G_d(\iota(S_\phi(n))))= 2^{\phi(n)}-2$. \end{lemma} \begin{proof} Since $\iota(v_{1,1})=1$, the vertex $v_{1,1}(1)$ is adjacent to all other vertices in the Euler Phi $\lcm$-divisor set-graph. Also, $\iota(v_{\phi(n)-1,\phi(n)}) =\iota(v_{\phi(n),1})=\lcm(S_\phi(n))$, which is the maximum $\iota$-weight over all vertices, and $\iota(v_{s,i})\mid \lcm(S_\phi(n))$ for all vertices in the Euler Phi $\lcm$-divisor set-graph. Hence, all vertices are adjacent to each of the vertices $v_{\phi(n)-1,\phi(n)}(\iota)$ and $v_{\phi(n),1}(\iota)$ as well. Therefore, the result follows. \end{proof} The following theorem describes the vertex degrees of an Euler Phi $\lcm$-divisor set-graph. \begin{theorem}\label{Thm-3.7} All vertex degrees of an Euler Phi $\lcm$-divisor set-graph $G_d(\iota(S_\phi(n)))$ are even. \end{theorem} \begin{proof} Consider an Euler Phi prime set $\{1,p_1,p_2,\ldots,p_t\}$. In the set-graph, which has an odd number of vertices, namely $2^{t+1}-1$, the vertex $v_{1,1}(1)$ is adjacent to all other vertices and thus has even degree. Without loss of generality, consider any 1-element subset $\{p_i\}$. The element $p_i$ is an element of exactly $2^t$ subsets, hence the vertex $v_{1,i+1}(p_i)$ will be adjacent to the remaining odd number of vertices corresponding to those subsets. Since the vertex $v_{1,i+1}(p_i)$ is also adjacent to the vertex $v_{1,1}(1)$, it has even degree in respect of the vertices considered thus far. Furthermore, if $\iota(v_{1,i+1})=p_i$ is a divisor of an $\iota$-weight $\iota(v_{s,k})=j$ which, from Theorem \ref{Thm-3.4}, repeats an even number of times, a further even number of adjacencies is added to the degree of $v_{1,i+1}(p_i)$. Clearly, after an exhaustive adjacency count, the degree of $v_{1,i+1}(p_i)$ is even. Similar reasoning holds for any subset, hence for any corresponding vertex in the graph $G_d(\iota(S_\phi(n)))$. Therefore the result follows. \end{proof} \noindent In view of the above results, we have some interesting corollaries as given below. \begin{corollary} An Euler Phi $\lcm$-divisor set-graph is an Eulerian graph. \end{corollary} \begin{proof} Since the vertex $v_{1,1}(1)$ is adjacent to all vertices in an Euler Phi $\lcm$-divisor set-graph, the graph is connected and obviously non-empty.
Since it is well known (see \cite{BM1}) that a non-empty connected graph is Eulerian if and only if it has no vertices of odd degree, Theorem \ref{Thm-3.7} implies that an Euler Phi $\lcm$-divisor set-graph is Eulerian. \end{proof} \begin{corollary} $\chi(G_{S_\phi(n)})\leq \chi(G_d(\iota(S_\phi(n))))$, where $\chi$ denotes the chromatic number of a graph. \end{corollary} \begin{proof} Since $G_d(\iota(S_\phi(n)))$ is a proper supergraph of the corresponding set-graph $G_{S_\phi(n)}$, the result follows immediately. \end{proof} \begin{corollary} $\chi(G_d(\iota(S_\phi(m)))) =\omega(G_d(\iota(S_\phi(m))))$ for $m\in \mathbb{N}$, where $\omega$ is the clique number of a graph. \end{corollary} \begin{proof} It is known that the relation between the clique number and the chromatic number of a graph is given by $\omega(G)\leq \chi(G)$. It is also known that if $H\subseteq G$, then $\chi(H)\leq \chi(G)$. Furthermore, if $p=n+1$ is a prime number, then $\phi(p)= n$ and $S_\phi(p) =\{1,2,3,\ldots,n\}$. From Theorem 2 in \cite{SS1}, it follows that $\chi(G_d(\iota(S_\phi(p)))) = \lfloor \log_2(2^n -1)\rfloor +1 =\omega(G_d(\iota(S_\phi(p))))$. In \cite{YT1}, it is shown that $G_d(\iota(S_\phi(p)))$ is a perfect graph. Therefore, for any $m$ for which $S_\phi(m)\subseteq S_\phi(p)$ the Euler Phi $\lcm$-divisor set-graph $G_d(\iota(S_\phi(m)))$ is also a perfect graph. Hence, the result follows. \end{proof} \begin{corollary} The best upper bound for the chromatic and clique numbers is given by $\chi(G_d(\iota(S_\phi(m))))=\omega(G_d(\iota(S_\phi(m)))) \leq \lfloor \log_2(2^n -1)\rfloor +1$. \end{corollary} \begin{proof} Since for any $m\in \mathbb{N}$ there exists a smallest prime number $p$ such that $S_\phi(m)\subseteq S_\phi(p)$, with $\phi(p)= n$, the result follows. \end{proof} In \cite{VG1}, we can find an easy algorithm to find the prime factors of any positive non-prime integer $n\geq 2$, which is explained below. \textbf{Vishwas Garg Algorithm} \cite{VG1}. In adapted form it is informally described as follows: \textit{Step 1:} Let the set $\mathcal P =\emptyset$. If $n\geq 2$ is non-prime and odd, let $o=n$, $\ell=0$ and go to Step 2. Else, divide by $2$ iteratively, say $\ell$ iterations, until an odd number, say $o$, is obtained, and let $\mathcal P \leftarrow \mathcal P \cup \{2\}$. If $o=1$, go to Step 3; else, go to Step 2. \textit{Step 2:} For $o$ and $i\in \{3,5,7,\ldots,\max\{p: p\leq \sqrt{o},\ p~\text{an odd number}\}\}$ and similarly as for $2$ in Step 1, sequentially divide iteratively by $i$ until $1$ is obtained. Add (i.e., $\cup$) to the set $\mathcal P$ all values of $i$ which were divisors. Go to Step 3. \textit{Step 3:} Let $\mathcal P' = \mathcal P \cup \{t\cdot p: p\in \mathcal P,~t\in \mathbb{N}~\text{and}~ t\cdot p\leq n\}$. Go to Step 4. \textit{Step 4:} Let $S_\phi(n) = \{i:1\leq i\leq n-1\} -\mathcal P'$ and $\phi(n)=|S_\phi(n)|$. Besides efficiency considerations, the above adapted algorithm provides the claimed results. Thus, the prime factors of $n$ are the elements of the set $\mathcal P$. This follows from the fact that the prime factorisation of an integer is unique. Repeated division of a finite integer terminates after finitely many iterations. Hence, convergence of the algorithm follows.
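For illustration, the following minimal sketch (ours, not from \cite{VG1}; it follows Steps 1--4 above, with the standard safeguard of adding any remaining prime factor larger than $\sqrt{o}$ to $\mathcal P$) computes $\mathcal P$, $S_\phi(n)$ and $\phi(n)$.
\begin{verbatim}
# Sketch of the adapted algorithm (Steps 1-4); the names are ours.
def prime_factor_set(n):
    P, o = set(), n
    while o % 2 == 0:            # Step 1: divide out the factor 2
        P.add(2)
        o //= 2
    i = 3
    while i * i <= o:            # Step 2: trial division by odd i <= sqrt(o)
        while o % i == 0:
            P.add(i)
            o //= i
        i += 2
    if o > 1:                    # safeguard: remaining prime factor, if any
        P.add(o)
    return P

def euler_phi_via_sieve(n):
    P = prime_factor_set(n)
    # Step 3: all multiples of the prime factors, up to n
    P_prime = {t * p for p in P for t in range(1, n // p + 1)}
    # Step 4: S_phi(n) and phi(n)
    S_phi = set(range(1, n)) - P_prime
    return S_phi, len(S_phi)

S, phi = euler_phi_via_sieve(24)
print(sorted(S), phi)            # [1, 5, 7, 11, 13, 17, 19, 23] 8
\end{verbatim}
From the output of such a sketch the $\iota$-weights and the corresponding divisor graph can then be constructed as in Definition \ref{Defn-3.1}.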
Also, since for any two consecutive prime numbers $p_1,p_2$ there exists a third prime number $p_3$ such that $2p_2<p_3<p_1p_2$, the iterative upper limit given by $\max\{p: p\leq \sqrt{o},\ \text{$p$ an odd number}\}$ suffices. Steps 3 and 4 are well-defined, exhaustive and present a unique result. The applicability of the adapted Vishwas Garg Algorithm is self-evident in that, following Step 4, it is easy to apply Definition \ref{Defn-3.1}, to obtain the $\iota$-weights and to then construct a divisor graph. Therefore, if $\mathcal{P} = \mathcal{P}(k)$ for some $k\in \mathbb{N}$, the result of Theorem \ref{Thm-2.3} applies to the corresponding divisor graph. In that sense the adapted Vishwas Garg Algorithm serves as a methodology to test, for $n\in \mathbb{N}$, whether or not $G_d(S_\phi(n))$ is acyclic without constructing the graph. We denote the Euler Phi $\lcm$-relatively prime set-graph, that is, the second derivative set-graph of $G_{S_\phi(n)}$, by $G_p(\iota(S_\phi(n)))$. Then we have the following. \begin{lemma} An Euler Phi $\lcm$-relatively prime set-graph $G_p(\iota(S_\phi(n)))$ has the maximum degree $\Delta (G_p(\iota(S_\phi(n))))= 2^{\phi(n)}-2$. \end{lemma} \begin{proof} This follows from the fact that $\iota(v_{1,1}) = 1$ and $1$ is relatively prime to all integers; hence, the vertex $v_{1,1}(1)$ is adjacent to all other vertices. \end{proof} We now present a relation between an Euler Phi $\lcm$-relatively prime set-graph and the corresponding Euler Phi $\lcm$-divisor set-graph. \begin{theorem} Let $\iota(S'_\phi(n))= \iota(S_\phi(n))-\{v_{1,1}(1)\}$. If for all pairs of distinct vertices $v_{s,i}(\iota),v_{t,j}(\iota) \in \iota(S'_\phi(n))$ the respective $\iota$-weights are such that either one divides the other or they are relatively prime, then $G_p(\iota(S_\phi(n)))= \overline{G}_d(\iota(S_\phi(n)-\{v_{1,1}(1)\})) + v_{1,1}(1)$. In other words, $G_d(\iota(S_\phi(n)))= \overline{G}_p(\iota(S_\phi(n)-\{v_{1,1}(1)\})) + v_{1,1}(1)$. \end{theorem} \begin{proof} The result follows from the fact that if for $v_{s,i}(\iota),v_{t,j}(\iota) \in \iota(S'_\phi(n))$ one of the respective $\iota$-weights divides the other, then $v_{s,i}(\iota),v_{t,j}(\iota)$ are not relatively prime and vice versa. Also, $\iota(v_{1,1}(1))$ both divides and is relatively prime to all $\iota(v_{s,i}(\iota))$. \end{proof} \begin{theorem} For $n\in \mathbb{N}$ and $\phi(n)\geq 4$, both $G_p(\iota(S_\phi(n)))$ and $G_d(\iota(S_\phi(n)))$ contain at least one triangle. \end{theorem} \begin{proof} For $n=1,2$ we have $\phi(n)=1$; hence $G_p(\iota(S_\phi(n)))$ and $G_d(\iota(S_\phi(n)))$ are an isolated vertex, so these cases are excluded. For $n=3$, $S_\phi(3)=\{1,2\}$ and hence $G_d(\iota(S_\phi(3)))$ is a triangle and $G_p(\iota(S_\phi(3)))$ is the path $P_3$, so these cases are excluded as well. Similarly, $G_d(\iota(S_\phi(4)))$, $G_p(\iota(S_\phi(4)))$, $G_d(\iota(S_\phi(6)))$ and $G_p(\iota(S_\phi(6)))$ are excluded. For $n\in \mathbb{N}$ and $\phi(n)\geq 4$, let $S_\phi(n)= \{1, a_1,a_2,a_3,\ldots,a_\ell\}$. The vertex $v_{1,1}(1)$, the vertex corresponding to the subset $\{a_i\}$ and the vertex corresponding to the subset $\{1,a_i\}$, the latter two both having $\iota$-weight $a_i$, induce a triangle in $G_d(\iota(S_\phi(n)))$. Hence, $G_d(\iota(S_\phi(n)))$ contains at least one triangle. If $n\geq 5$ is odd, then $2\in S_\phi(n)$. Furthermore, for such $n$ at least one prime number $p_1$ with $2<p_1\leq n$ and $p_1\in S_\phi(n)$ exists. Hence, the vertices $v_{1,1}(1)$, $v_{1,2}(2)$ and $v_{1,p_1}(p_1)$ induce a triangle in $G_p(\iota(S_\phi(n)))$. If $n\geq 8$ is even, then since $\phi(n)\geq 4$ at least two prime numbers $p_1,p_2 \in S_\phi(n)$ exist. Hence, the vertices $v_{1,1}(1)$, $v_{1,p_1}(p_1)$ and $v_{1,p_2}(p_2)$ induce a triangle.
Therefore, for $n\geq 5$ and $n\neq 6$, $G_p(\iota(S_\phi(n)))$ contains at least one triangle. \end{proof} \section{Conclusion} It was observed that the divisor graph $G_d(S_\phi(n))= \overline{G}_p(S_\phi(n)-\{v_1\}) + v_1$ and the relatively prime graph $G_p(S_\phi(n))= \overline{G}_d(S_\phi(n)-\{v_1\}) + v_1$. These observations permit the notion of a quasi-complement of any graph $G$. Let $G' =\langle V(G)-\{u\}\rangle$ for some $u\in V(G)$. A quasi-complement of $G$ is defined to be $\overline{G}_u = \overline{G'}+u$. Hence, a finite family of quasi-complements exists; to be exact, there are $\nu(G)$ such quasi-complements. It easily follows that if $G$ is not connected and has an isolated vertex $u$, then $\overline{G}=\overline{G}_u$. The notion of quasi-complements offers a new direction of research. The study of the number of edges as well as the chromatic number of the derivative Euler Phi set-graphs ($\lcm$-divisor and $\lcm$-relatively prime) remains open. It is known that two integers $a,b$ can be: (i) not divisors of each other and not relatively prime, or (ii) not divisors of each other and relatively prime, or (iii) divisors and not relatively prime, and finally, (iv) for, say, $a= 1$, both a divisor of and relatively prime to all positive integers. Therefore, a study of the relative divisor graph, that is, the graph in which vertices $v_a,v_b$ are adjacent if and only if $\gcd(a,b)\neq 1$, remains open. The authors are not aware of any studies related to this derivative graph. It will be advisable to exclude the element $1$ from the set under consideration. If this new graph is denoted by $G_{rd}(\mathcal{S})$, $1\notin \mathcal{S}$, then clearly $G_d(\mathcal{S})\subseteq G_{rd}(\mathcal{S})$. \end{document}
\begin{document} \maketitle \begin{abstract} Let $G$ be a graph with maximum degree $\Delta$, and let $G^{\sigma}$ be an oriented graph of $G$ with skew adjacency matrix $S(G^{\sigma})$. The skew spectral radius $\rho_s(G^{\sigma})$ of $G^\sigma$ is defined as the spectral radius of $S(G^\sigma)$. The skew spectral radius has been studied, but only few results about its lower bound are known. This paper determines some lower bounds of the skew spectral radius, and then studies the oriented graphs whose skew spectral radii attain the lower bound $\sqrt{\Delta}$. Moreover, we apply the skew spectral radius to the skew energy of oriented graphs, which is defined as the sum of the norms of all the eigenvalues of $S(G^\sigma)$, and denoted by $\mathcal{E}_s(G^\sigma)$. As results, we obtain some lower bounds of the skew energy, which improve the known lower bound obtained by Adiga et al.\\ \noindent\textbf{Keywords:} oriented graph, skew adjacency matrix, skew spectral radius, skew energy\\ \noindent\textbf{AMS Subject Classification 2010:} 05C20, 05C50, 15A18, 05C90 \end{abstract} \section{Introduction} The spectral radius of a graph is one of the fundamental subjects in spectral graph theory, which stems from the spectral radius of a matrix. Let $M$ be a square matrix. Then the spectral radius of $M$, denoted by $\rho(M)$, is defined as the maximum norm of its all eigenvalues. If $G$ is a simple undirected graph with adjacency matrix $A(G)$, then the spectral radius of $G$ is defined to be the spectral radius of $A(G)$, denoted by $\rho(G)$. It is well-known that $\rho(G)$ is the largest eigenvalue of $A(G)$. The spectral radius of undirected graphs has been studied extensively and deeply. For the bounds of the spectral radius, many results have been obtained. The spectral radius of a graph is related to the chromatic number, independence number and clique number of the graph; see \cite{BH} for details. Moreover, the spectral radius of a graph has applications in graph energy \cite{LSG}. Recently, the spectral radii of skew adjacency matrices of oriented graphs have been studied in \cite{CCF,CXZ,SS,Xu,XG}. Let $G^\sigma$ be an oriented graph of $G$ obtained by assigning to each edge of $G$ a direction such that the induced graph $G^\sigma$ becomes a directed graph. The graph $G$ is called the underlying graph of $G^\sigma$. The skew adjacency matrix of $G^\sigma$ is the $n\times n$ matrix $S(G^\sigma)=(s_{ij})$, where $s_{ij}=1$ and $s_{ji}=-1$ if $\langle v_i,v_j\rangle$ is an arc of $G^\sigma$, $s_{ij}=s_{ji}=0$ otherwise. It is easy to see that $S(G^\sigma)$ is a skew symmetric matrix. Thus all the eigenvalues of $S(G^\sigma)$ are pure imaginary numbers or $0's$, which are said to be the skew spectrum $Sp_s(G^\sigma)$ of $G^\sigma$. The skew spectral radius of $G^\sigma$, denoted by $\rho_s(G^\sigma)$, is defined as the spectral radius of $S(G^\sigma)$. There are only few results about the skew spectral radii of oriented graphs. Xu and Gong \cite{XG} and Chen et al. \cite{CXZ} studied the oriented graphs with skew spectral radii no more than $2$. Cavers et al. \cite{CCF} and Xu \cite{Xu} independently deduced an upper bound, that is, the skew spectral radius of $G^\sigma$ is dominated by the spectral radius of the underlying graph $G$. In Section 2, we further investigate the skew spectral radius of $G^\sigma$, and obtain some lower bounds for $\rho_s(G^\sigma)$. 
One of the lower bounds is based on the maximum degree of the underlying graph $G$, that is, $\rho_s(G^\sigma)\geq\sqrt{\Delta}$. We, in Section 3, study the oriented graphs whose skew spectral radii attain the lower bound $\sqrt{\Delta}$. In Section 4, we apply the skew spectral radius to the skew energy of oriented graphs. The skew energy $\mathcal{E}_s(G^\sigma)$ of $G^\sigma$, first introduced by Adiga et al. \cite{ABC}, is defined as the sum of the norms of all the eigenvalues of $S(G^\sigma)$. They obtained that for any oriented graph $G^\sigma$ with $n$ vertices, $m$ arcs and maximum degree $\Delta$, \begin{equation}\label{Skew} \sqrt{2m+n(n-1)\left(\det(S)\right)^{2/n}}\leq\mathcal{E}_s(G^\sigma)\leq n\sqrt{\Delta}, \end{equation} where $S$ is the skew adjacency matrix of $G^\sigma$ and $\det(S)$ is the determinant of $S$. The upper bound that $\mathcal{E}_s(G^\sigma)=n\sqrt{\Delta}$ is called the optimum skew energy. They further proved that an oriented graph $G^\sigma$ has the optimum skew energy if and only if its skew adjacency matrix $S(G^\sigma)$ satisfies that $S(G^\sigma)^TS(G^\sigma)=\Delta I_n$, or equivalently, all eigenvalues of $G^\sigma$ are equal to $\sqrt{\Delta}$, which implies that $G$ is a $\Delta$-regular graph. From the discussion of Section $3$, it is interesting to find that if the underlying graph is regular, the oriented graphs with $\rho_s(G^\sigma)=\sqrt{\Delta}$ have the optimal skew energy. Moreover, by applying the lower bounds of the skew spectral radius obtained in Section $2$, we derive some lower bounds of the skew energy of oriented graphs, which improve the lower bound in (\ref{Skew}). Throughout this paper, when we simply mention a graph, it means a simple undirected graph. When we say the maximum degree, average degree, neighborhood, etc. of an oriented graph, we mean the same as those in its underlying graph, unless otherwise stated. For terminology and notation not defined here, we refer to the book of Bondy and Murty \cite{BM}. \section{Lower bounds of the skew spectral radius} In this section, we deduce some lower bounds of the skew spectral radii of oriented graphs, which implies the relationships between skew spectral radius and some graph parameters. We begin with some definitions. Let $G^\sigma$ be an oriented graph of a graph $G$ with vertex set $V$. Denote by $S(G^\sigma)$ and $\rho_s(G^\sigma)$ the skew adjacency matrix and the skew spectral radius of $G^\sigma$, respectively. For any two disjoint subsets $A,B$ of $V$, we denote by $\gamma(A,B)$ the number of arcs whose tails are in $A$ and heads in $B$. Let $G^\sigma[A]$ be the subgraph of $G^\sigma$ induced by $A$, where $G^\sigma[A]$ has vertex set $A$ and contains all arcs of $G^\sigma$ which join vertices of $A$. Then we can deduce a lower bound of $\rho_s(G^\sigma)$ as follows. \begin{thm}\label{lower} Let $G$ be a graph with vertex set $V$, and let $G^\sigma$ be an oriented graph of $G$ with skew adjacency matrix $S(G^\sigma)$ and skew spectral radius $\rho_s(G^\sigma)$. Then for any two nonempty subsets $A,B\subseteq V$, \begin{equation*} \rho_s(G^\sigma)\geq\frac{|\gamma(A,B)-\gamma(B,A)|}{\sqrt{|A||B|\,}}\,, \end{equation*} where $|A|$ and $|B|$ are the number of elements of $A$ and $B$, respectively. \end{thm} \noindent {\it Proof.} Suppose that $|A|=k$ and $|B|=l$, and let $C=A\cap B$ with order $t$, $A_1=A-C$ with order $k-t$, $B_1=B-C$ with order $l-t$, and $D=V-A\cup B$ with order $n+t-k-l$. 
With suitable labeling of the vertices of $V$, the skew adjacency matrix $S$ can be formulated as follows: $$\left(\begin{array}{cccc} S_{11} & S_{12} & S_{13} & S_{14}\\ -S_{12}^T & S_{22} & S_{23} & S_{24}\\ -S_{13}^T & -S_{23}^T & S_{33} & S_{34}\\ -S_{14}^T & -S_{24}^T & -S_{34}^T & S_{44} \end{array}\right),$$ where $S_{11}=S(G^\sigma[A_1])$ is the skew adjacency matrix of the induced oriented graph $G^\sigma[A_1]$ with order $k-t$, $S_{22}=S(G^\sigma[C])$ is the skew adjacency matrix of $G^\sigma[C]$ with order $t$ and $S_{33}=S(G^\sigma[B_1])$ is the skew adjacency matrix of $G^\sigma[B_1]$ with order $l-t$, $S_{44}=S(G^\sigma[D])$ is the skew adjacency matrix of the induced oriented graph $G^\sigma[D]$ with order $n+t-k-l$. Let $H=(-i)S$. Since $S$ is skew symmetric, $H$ is an Hermitian matrix. By Rayleigh-Ritz theorem, we obtain that \begin{equation*} \rho_s(G^\sigma)=\rho(S)=\rho(H)=\max_{x\in \mathbb{C}^n}\frac{x^*Hx}{x^*x}, \end{equation*} where $\mathbb{C}^n$ is the complex vector space of dimension $n$ and $x^*$ is the conjugate transpose of $x$. We take $x=\Big(\underbrace{\frac{1}{\sqrt{k}},\ldots,\frac{1}{\sqrt{k}}}_{k-t}, \underbrace{\frac{1}{\sqrt{k}}+\frac{i}{\sqrt{l}},\ldots,\frac{1}{\sqrt{k}}+\frac{i}{\sqrt{l}}}_{t}, \underbrace{\frac{i}{\sqrt{l}},\ldots,\frac{i}{\sqrt{l}}}_{l-t}, 0,\ldots,0\Big)^T=\left(\frac{1}{\sqrt{k}}\mathbf{1}_{k-t}^T,(\frac{1}{\sqrt{k}}+\frac{i}{\sqrt{l}})\mathbf{1}_t^T, \frac{i}{\sqrt{l}}\mathbf{1}_{l-t}^T,\mathbf{0}^T\right)^T$ in the above equality and derive that \begin{eqnarray*} \rho_s(G^\sigma)&\geq &\frac{x^*Hx}{x^*x}=\frac{-i}{2}\left(\frac{1}{k}\mathbf{1}_{k-t}^TS_{11}\mathbf{1}_{k-t} +\left(\frac{1}{k}+\frac{1}{l}\right)\mathbf{1}_t^TS_{22}\mathbf{1}_t+\frac{1}{l}\mathbf{1}_{l-t}^TS_{33}\mathbf{1}_{l-t}\right.\\ &+&\left(\frac{1}{k}+\frac{i}{\sqrt{kl}}\right)\mathbf{1}_{k-t}^TS_{12}\mathbf{1}_{t}- \left(\frac{1}{k}-\frac{i}{\sqrt{kl}}\right)\mathbf{1}_{t}^TS_{12}^T\mathbf{1}_{k-t}\\ &+&\frac{i}{\sqrt{kl}}\mathbf{1}_{k-t}^TS_{13}\mathbf{1}_{l-t}-\frac{-i}{\sqrt{kl}}\mathbf{1}_{l-t}^TS_{13}^T\mathbf{1}_{k-t}\\ &+&\left.\left(\frac{i}{\sqrt{kl}}+\frac{1}{l}\right)\mathbf{1}_{t}^TS_{23}\mathbf{1}_{l-t}- \left(\frac{-i}{\sqrt{kl}}+\frac{1}{l}\right)\mathbf{1}_{l-t}^TS_{23}^T\mathbf{1}_{t}\right). \end{eqnarray*} Note that $\mathbf{1}_{k-t}^TS_{11}\mathbf{1}_{k-t}=\mathbf{1}_t^TS_{22}\mathbf{1}_t=\mathbf{1}_{l-t}^TS_{33}\mathbf{1}_{l-t}=0$, since $S_{11}$, $S_{22}$ and $S_{33}$ are both skew symmetric. Moreover, it can be verified that \begin{equation*} \mathbf{1}_{k-t}^TS_{12}\mathbf{1}_{t}=\mathbf{1}_{t}^TS_{12}^T\mathbf{1}_{k-t}=\gamma(A_1,C)-\gamma(C,A_1), \end{equation*} \begin{equation*} \mathbf{1}_{k-t}^TS_{13}\mathbf{1}_{l-t}=\mathbf{1}_{l-t}^TS_{13}^T\mathbf{1}_{k-t}=\gamma(A_1,B_1)-\gamma(B_1,A_1), \end{equation*} and \begin{equation*} \mathbf{1}_{t}^TS_{23}\mathbf{1}_{l-t}=\mathbf{1}_{l-t}^TS_{23}^T\mathbf{1}_{t}=\gamma(C,B_1)-\gamma(B_1,C). \end{equation*} Note that $\gamma(A,B)-\gamma(B,A)=\gamma(A_1,C)+\gamma(A_1,B_1)+\gamma(C,B_1)-\gamma(C,A_1)-\gamma(B_1,A_1)-\gamma(B_1,C)$. Combining the above equalities, it follows that \begin{equation*} \rho_s(G^\sigma)\geq\frac{\gamma(A,B)-\gamma(B,A)}{\sqrt{kl}}. \end{equation*} Similarly, we can derive that $\rho_s(G^\sigma)\geq\frac{\gamma(B,A)-\gamma(A,B)}{\sqrt{kl}}$. The proof is thus complete.\qed The above theorem implies a lower bound of $\rho_s(G^\sigma)$ in terms of the maximum degree of $G$. 
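As a quick illustration of Theorem \ref{lower} (this numerical sketch is ours and not part of the original argument), one can compute $\rho_s(G^\sigma)$ as the largest modulus of an eigenvalue of $S(G^\sigma)$ and compare it with the bound for a chosen pair of vertex subsets. For an oriented star $K_{1,3}$ with all arcs leaving the center, taking $A$ to be the center and $B$ its neighborhood gives $|\gamma(A,B)-\gamma(B,A)|/\sqrt{|A||B|}=\sqrt{3}$, and the bound is attained.
\begin{verbatim}
# Numerical check of the lower bound (names and the example are ours).
import numpy as np

def skew_adjacency(n, arcs):
    # arcs: ordered pairs (i, j) meaning an arc from v_i to v_j
    S = np.zeros((n, n))
    for i, j in arcs:
        S[i, j], S[j, i] = 1.0, -1.0
    return S

def skew_spectral_radius(S):
    return max(abs(np.linalg.eigvals(S)))

def gamma(arcs, A, B):
    return sum(1 for i, j in arcs if i in A and j in B)

arcs = [(0, 1), (0, 2), (0, 3)]          # oriented star K_{1,3}, center v_0
S = skew_adjacency(4, arcs)
A, B = {0}, {1, 2, 3}
bound = abs(gamma(arcs, A, B) - gamma(arcs, B, A)) / np.sqrt(len(A) * len(B))
print(skew_spectral_radius(S), bound)    # both equal sqrt(3)
\end{verbatim}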
Before proceeding, it is necessary to introduce the notion of switching-equivalence \cite{LL} of two oriented graphs. Let $G^\sigma$ be an oriented graph of $G$ and $W$ be a subset of its vertex set. Denote $\overline{W}=V(G^\sigma)\setminus W$. Another oriented graph $G^\tau$ of $G$, obtained from $G^\sigma$ by reversing the orientations of all arcs between $W$ and $\overline{W}$, is said to be obtained from $G^\sigma$ by switching with respect to $W$. Two oriented graphs $G^\sigma$ and $G^\tau$ are called switching-equivalent if $G^\tau$ can be obtained from $G^\sigma$ by a sequence of switchings. The following lemma shows that the switching operation keeps the skew spectrum unchanged. \begin{lem}\cite{LL}\label{switch} Let $G^\sigma$ and $G^\tau$ be two oriented graphs of a graph $G$. If $G^\sigma$ and $G^\tau$ are switching-equivalent, then $G^\sigma$ and $G^\tau$ have the same skew spectra. \end{lem} \begin{cor}\label{maximum} Let $G^\sigma$ be an oriented graph of $G$ with maximum degree $\Delta$. Then \begin{equation*} \rho_s(G^\sigma)\geq \sqrt{\Delta}. \end{equation*} \end{cor} \noindent {\it Proof.} Let $v$ be a vertex of $G$ with $d_G(v)=\Delta$, and let $G^\tau$ be an oriented graph of $G$ obtained from $G^\sigma$ by switching with respect to every neighbor of $v$, if necessary, such that all arcs incident with $v$ have the common tail $v$. Then $G^\sigma$ and $G^\tau$ are switching-equivalent. By Lemma \ref{switch}, $G^\sigma$ and $G^\tau$ have the same skew spectra. Consider the oriented graph $G^\tau$ and let $A=\{v\}$ and $B=N(v)$. Obviously, $\gamma(A,B)-\gamma(B,A)=\Delta$. By Theorem \ref{lower}, $\rho_s(G^\sigma)=\rho_s(G^\tau)\geq \frac{\Delta}{\sqrt{\Delta}}=\sqrt{\Delta}$.\qed It is known \cite{BH} that for any undirected tree $T$, $\rho(T)\geq \bar{d}$, where $\bar{d}$ is the average degree of $T$. Moreover, all oriented trees of $T$ have the same skew spectra, which are equal to $i$ times the spectrum of $T$; see \cite{SS}. Therefore, for any oriented tree $T^\sigma$ of $T$, $\rho_s(T^\sigma)=\rho(T)\geq \bar{d}$. We next consider a general graph. The following lemma \cite{AS} is necessary. \begin{lem}\cite{AS}\label{bipartite} Let $G=(V,E)$ be a graph with $n$ vertices and $m$ edges. Then $G$ contains a bipartite subgraph with at least $m/2$ edges. \end{lem} Then we obtain the following result for a general graph by applying Theorem \ref{lower}. \begin{cor} For any simple graph $G$ with average degree $\bar{d}$, there exists an oriented graph $G^\sigma$ of $G$ such that \begin{equation*} \rho_s(G^\sigma)\geq \frac{\,\bar{d}\,}{2}. \end{equation*} \end{cor} \noindent {\it Proof.} By Lemma \ref{bipartite}, $G$ contains a spanning bipartite subgraph $H=(A,B)$ with at least $m/2$ edges. We give an orientation of $G$ such that all arcs between $A$ and $B$ go from $A$ to $B$ and the directions of the other arcs are arbitrary. By Theorem \ref{lower} and Lemma \ref{bipartite}, \begin{equation*} \rho_s(G^\sigma)\geq\frac{|\gamma(A,B)-\gamma(B,A)|}{\sqrt{|A||B|}} \geq\frac{2}{n}\,|\gamma(B,A)-\gamma(A,B)| \geq\frac{m}{n}=\frac{\bar{d}}{2}. \end{equation*}\qed Hofmeister \cite{H} and Yu et al. \cite{YLT} deduced a lower bound of the spectral radius of a graph in terms of its degree sequence. Specifically, let $G$ be a connected graph with degree sequence $d_1,d_2,\ldots,d_n$. Then $\rho(G)\geq \sqrt{\frac{1}{n}\sum_{i=1}^nd_i^2}$. Similarly, for an oriented graph, we consider the relation between its skew spectral radius and vertex degrees, where the out-degree and the in-degree of every vertex should be taken into account.
Let $G^\sigma$ be an oriented graph with vertex set $\{v_1,v_2,\ldots,v_n\}$. Denote by $d_i^+$ and $d_i^-$ the out-degree and in-degree of $v_i$ in $G^\sigma$, respectively. Let $\tilde{d_i}=d_i^+-d_i^-$. Then we establish a lower bound of the skew spectral radius of $G^\sigma$ as follows. \begin{thm}\label{degree} Let $G^\sigma$ be an oriented graph with vertex set $\{v_1,v_2,\ldots,v_n\}$ and skew spectral radius $\rho_s(G^\sigma)$. Then \begin{equation*} \rho_s(G^\sigma)\geq\sqrt{\frac{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2}{n}}\,. \end{equation*} \end{thm} \noindent {\it Proof.} If $G^\sigma$ is an Eulerian digraph, then the right-hand side of the above inequality is $0$, and the inequality is obviously true since $\rho_s(G^\sigma)\geq 0$ always holds. So, we assume in the following that $G^\sigma$ is not Eulerian. Let $S=[s_{ij}]$ be the skew adjacency matrix of $G^\sigma$. Then $\rho_s(G^\sigma)=\rho(S)=\sqrt{\rho(S^TS)}$. We consider the spectral radius of $S^TS$. Since $S^TS$ is symmetric, by the Rayleigh-Ritz theorem, \begin{equation*} \rho(S^TS)=\max_{x\in \mathbb{R}^n}\frac{x^T(S^TS)x}{x^Tx}. \end{equation*} We take $x=\frac{1}{\sqrt{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2}}\left(\tilde{d_1}, \tilde{d_2},\ldots,\tilde{d_n}\right)^T$ in the above equality and obtain that \begin{equation*} \rho(S^TS)\geq x^T(S^TS)x=(Sx)^T(Sx). \end{equation*} It is easy to compute that \begin{equation*} Sx=\frac{1}{\sqrt{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2}}\left(\sum_{j=1}^ns_{1j}\tilde{d_j}, \sum_{j=1}^ns_{2j}\tilde{d_j},\ldots,\sum_{j=1}^ns_{nj}\tilde{d_j}\right)^T. \end{equation*} Applying the Cauchy-Schwarz inequality, we obtain that \begin{eqnarray*} (Sx)^TSx&=&\frac{1}{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2} \left[\left(\sum_{j=1}^ns_{1j}\tilde{d_j}\right)^2+ \left(\sum_{j=1}^ns_{2j}\tilde{d_j}\right)^2+\cdots+ \left(\sum_{j=1}^ns_{nj}\tilde{d_j}\right)^2\right]\\ &\geq& \frac{n}{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2} \left(\frac{\sum_{j=1}^ns_{1j}\tilde{d_j}+ \sum_{j=1}^ns_{2j}\tilde{d_j}+\cdots+\sum_{j=1}^ns_{nj}\tilde{d_j}}{n}\right)^2. \end{eqnarray*} Note that \begin{equation*} \sum_{j=1}^ns_{1j}\tilde{d_j}+ \sum_{j=1}^ns_{2j}\tilde{d_j}+\cdots+\sum_{j=1}^ns_{nj}\tilde{d_j}=(1,1,\ldots,1)\,S\,(\tilde{d_1},\tilde{d_2},\ldots,\tilde{d_n})^T=-\tilde{d_1}^2 -\tilde{d_2}^2-\cdots-\tilde{d_n}^2. \end{equation*} Therefore, \begin{equation*} (Sx)^T(Sx)\geq \frac{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2}{n}. \end{equation*} We thus conclude that \begin{equation*} \rho_s(G^\sigma)\geq\sqrt{(Sx)^T(Sx)}\geq \sqrt{\frac{\tilde{d_1}^2+\tilde{d_2}^2+\cdots+\tilde{d_n}^2}{n}}. \end{equation*} This completes the proof.\qed \noindent\textbf{Remark 2.1} Theorem \ref{degree} also implies Corollary \ref{maximum}. \noindent {\bf \emph{Another proof of Corollary \ref{maximum}}}. Suppose that $d_G(v_1)=\Delta$ and $N(v_1)=\{v_2,v_3,\ldots,v_{\Delta+1}\}$. Let $G^\tau$ be an oriented graph of $G$ obtained from $G^\sigma$ by a sequence of switchings such that all arcs incident with $v_1$ have the common tail $v_1$. Then by Lemma \ref{switch}, $\rho_s(G^\sigma)=\rho_s(G^\tau)$. Let $H^\tau$ be the subgraph of $G^\tau$ induced by $v_1$ and all of its adjacent vertices. Let $H_1=(-i)S(G^\tau)$ and $H_2=(-i)S(H^\tau)$. Then $\rho_s(G^\tau)=\rho(H_1)$ and $\rho_s(H^\tau)=\rho(H_2)$. Note that $H_1$ and $H_2$ are both Hermitian matrices. By interlacing of eigenvalues (\cite{HJ}, Corollary 4.3.16), $\rho(H_1)=\lambda_1(H_1)\geq\lambda_1(H_2)=\rho(H_2)$.
Suppose that for any $1\leq i\leq \Delta+1$, the vertex $v_i$ has out-degree $t_i^+$ and in-degree $t_i^-$ in $H^\tau$. Let $\tilde{t_i}=t_i^+-t_i^-$. It can be found that $\tilde{t_1}+\tilde{t_2}+\cdots+\tilde{t}_{\Delta+1}=0$ and $\tilde{t_1}=\Delta$. It follows that $\tilde{t_1}^2+\tilde{t_2}^2+\cdots+\tilde{t}_{\Delta+1}^2\geq \Delta^2+\Delta$. Then by Theorem \ref{degree}, \begin{equation*} \rho_s(H^\tau)\geq\sqrt{\frac{\tilde{t_1}^2+\tilde{t_2}^2+\cdots+\tilde{t}_{\Delta+1}^2}{\Delta+1}} \geq\sqrt{\Delta}. \end{equation*} Now we conclude that $\rho_s(G^\sigma)=\rho_s(G^\tau)\geq\rho_s(H^\tau)\geq\sqrt{\Delta}$, which implies Corollary \ref{maximum}.\qed Wilf \cite{Wilf} considered the relation between the spectral radius and the chromatic number of a graph. As for oriented graphs, Sopena \cite{Sopena} introduced the notion of the oriented chromatic number, which motivates us to consider the relation between the skew spectral radius and the oriented chromatic number of an oriented graph. Let $G^\sigma$ be an oriented graph with vertex set $V$. An oriented $k$-coloring of $G^\sigma$ is a partition of $V$ into $k$ color classes such that no two adjacent vertices belong to the same color class, and all the arcs between two color classes have the same direction. The oriented chromatic number of $G^\sigma$, denoted by $\chi_o(G^\sigma)$, is defined as the smallest number $k$ such that $G^\sigma$ admits an oriented $k$-coloring. The following theorem presents a lower bound of the skew spectral radius in terms of the average degree and the oriented chromatic number. \begin{thm} Let $G$ be a graph with average degree $\bar{d}$. Let $G^\sigma$ be an oriented graph of $G$ with skew spectral radius $\rho_s(G^\sigma)$ and oriented chromatic number $\chi_o(G^\sigma)$. Then \begin{equation*} \rho_s(G^\sigma)\geq \frac{\bar{d}}{\chi_o(G^\sigma)-1}\,. \end{equation*} \end{thm} \noindent {\it Proof.} Suppose that $G^\sigma$ contains $m$ arcs and $\chi_o(G^\sigma)=k$. Let $\{V_1,V_2,\ldots,V_k\}$ be an oriented $k$-coloring of $G^\sigma$. Denote $a_{ij}=|\gamma(V_i,V_j)-\gamma(V_j,V_i)|$. By the definition of an oriented $k$-coloring, we get that either $\gamma(V_i,V_j)=0$ or $\gamma(V_j,V_i)=0$. It follows that $\sum_{i<j}a_{ij}=m$. By Theorem \ref{lower}, we derive that \begin{eqnarray*} \rho_s(G^\sigma)&\geq&\max_{i<j}\frac{a_{ij}}{\sqrt{|V_i||V_j|}} \geq\max_{i<j}\frac{2a_{ij}}{|V_i|+|V_j|}\\ &\geq&\frac{2\sum_{i<j}a_{ij}}{\sum_{i<j}(|V_i|+|V_j|)} =\frac{2m}{(k-1)n}=\frac{\bar{d}}{\chi_o(G^\sigma)-1}\,. \end{eqnarray*} The proof is now complete. \qed \section{Oriented graphs with skew spectral radius $\sqrt{\Delta}$} From the previous section, we know that for any oriented graph $G^\sigma$, $\rho_s(G^\sigma)\geq\sqrt{\Delta}$. In this section, we investigate the oriented graphs with $\rho_s(G^\sigma)=\sqrt{\Delta}$. We first recall the following proposition on the skew spectra of oriented graphs. \begin{prop}\label{eigen2} Let $\{i\lambda_1,i\lambda_2,\ldots,i\lambda_n\}$ be the skew spectrum of $G^\sigma$, where $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$. Then (1) $\lambda_j=-\lambda_{n+1-j}$ for all $1\leq j\leq n$; (2) when $n$ is odd, $\lambda_{(n+1)/2}=0$ and when $n$ is even, $\lambda_{n/2}\geq 0$; and (3) $\sum_{j=1}^{n}\lambda_j^2=2m$. \end{prop} An oriented regular graph is an oriented graph of a regular graph. We consider the case of oriented regular graphs, which is associated with optimum skew energy oriented graphs.
\begin{thm}\label{regular} Let $G^\sigma$ be an oriented graph of a $\Delta$-regular graph $G$ with skew adjacency matrix $S$. Then $\rho_s(G^\sigma)=\sqrt{\Delta}$ if and only if $S^TS=\Delta I_n$, i.e., $G^\sigma$ has the optimum skew energy. \end{thm} \noindent {\it Proof.} Let $\{i\lambda_1,i\lambda_2,\ldots,i\lambda_n\}$ be the skew spectrum of $G^\sigma$ with $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$. By Proposition \ref{eigen2}, we get that $\lambda_1^2+\lambda_2^2+\cdots+\lambda_n^2=2m=n\Delta$ and $\lambda_1\geq|\lambda_i|$ for any $2\leq i\leq n$. It follows that $\lambda_1=\rho_s(G^\sigma)$ and $\lambda_1^2+\lambda_2^2+\cdots+\lambda_n^2\leq n(\rho_s(G^\sigma))^2$. If $\rho_s(G^\sigma)=\sqrt{\Delta}$, then we can conclude that $\lambda_1=|\lambda_2|=\cdots=|\lambda_n|=\sqrt{\Delta}$, that is to say, $S^TS=\Delta I_n$. The converse implication follows easily. \qed The above theorem gives a good characterization of an oriented $\Delta$-regular graph with $\rho_s(G^\sigma)=\sqrt{\Delta}$, which says that any two rows and any two columns of its skew adjacency matrix are orthogonal. For any oriented graph $G^\sigma$ with $\rho_s(G^\sigma)=\sqrt{\Delta}$, we also consider the orthogonality of its skew adjacency matrix and obtain an extended result as follows. \begin{thm}\label{general} Let $G$ be a graph with vertex set $V=\{v_1,v_2,\ldots,v_n\}$ and maximum degree $\Delta$. Let $G^\sigma$ be an oriented graph of $G$ with $\rho_s(G^\sigma)=\sqrt{\Delta}$ and skew adjacency matrix $S=(S_1,S_2,\ldots,S_n)$. If $d_G(v_i)=\Delta$, then $(S_i,S_j)=S_i^TS_j=0$ for any $j\neq i$. \end{thm} \noindent {\it Proof.} Without loss of generality, suppose that $d_G(v_1)=\Delta$. It is sufficient to consider the matrix $S^TS$ and prove that for any $j\neq 1$, $(S^TS)_{1j}=0$. Notice from $\rho_s(G^\sigma)=\sqrt{\Delta}$ that $\Delta$ is the maximum eigenvalue of $S^TS$. Suppose that $\Delta$ is an eigenvalue of $S^TS$ with multiplicity $l$. Since $S^TS$ is a real symmetric matrix, there exists an orthogonal matrix $P$ such that $S^TS=P^TDP$, where $D$ is the diagonal matrix with the form $D=\text{diag}(\Delta,\ldots,\Delta,u_{l+1},\ldots,u_n)$ with $0\leq u_i<\Delta$. Denote $P=(P_1,P_2,\ldots,P_n)=(p_{ij})$. Note that $(S^TS)_{11}=\Delta$ since $d(v_1)=\Delta$. It follows that $P_1^TDP_1=\Delta$, that is, \begin{equation*} \Delta p_{11}^2+\cdots+\Delta p_{l1}^2+u_{l+1}p_{l+1,1}^2+\cdots+u_n p_{n1}^2=\Delta. \end{equation*} Since $P_1^TP_1=1$, we derive that $p_{l+1,1}=p_{l+2,1}=\cdots=p_{n1}=0$. Then for any $j\neq1$, we compute the $(1,j)$-entry of $S^TS$ as follows. \begin{eqnarray*} (S^TS)_{1j}&=&P_1^TDP_j\\ &=&\Delta p_{11}p_{1j}+\cdots+\Delta p_{l1}p_{lj}+u_{l+1}p_{l+1,1}p_{l+1,j}+\cdots+u_n p_{n1}p_{nj}\\ &=&\Delta(p_{11}p_{1j}+\cdots+p_{l1}p_{lj})=\Delta P_1^TP_j=0. \end{eqnarray*} The last equality holds due to the orthogonality of $P$. The proof is now complete.\qed Comparing Theorem \ref{regular} with Theorem \ref{general}, it is natural to ask whether the converse of Theorem \ref{general} holds, that is, whether the condition that $d_G(v_i)=\Delta$ and $(S_i,S_j)=0$ for every vertex $v_i$ with maximum degree and $j\neq i$ implies that $\rho_s(G^\sigma)=\sqrt{\Delta}$. In what follows, we show that it is not always true by constructing a counterexample, but we can still obtain that $i\sqrt{\Delta}$ is an eigenvalue of $G^\sigma$. \begin{thm}\label{eigen} Let $G$ be a graph with vertex set $V=\{v_1,v_2,\ldots,v_n\}$ and maximum degree $\Delta$.
Let $G^\sigma$ be an oriented graph of $G$ with skew adjacency matrix $S=(S_1,S_2,\ldots,S_n)$. If there exists a vertex $v_i$ with maximum degree $\Delta$ such that for any $j\neq i$, $(S_i,S_j)=0$, then $i\sqrt{\Delta}$ is an eigenvalue of $G^\sigma$. \end{thm} \noindent {\it Proof.} Without loss of generality, suppose that $d_G(v_1)=\Delta$. Since $(S_1,S_j)=0$ for any $j\neq 1$, we obtain that $SS^T=\left(\begin{array}{cc}\Delta & \mathbf{0}\\ \mathbf{0}^T & \mathbf{*}\end{array}\right)$. Then $\Delta$ is an eigenvalue of $SS^T$, which follows that $i\sqrt{\Delta}$ is an eigenvalue of $G^\sigma$. The proof is thus complete.\qed \noindent\textbf{Example 3.1} Let $G^\sigma$ be the oriented graph depicted in Figure \ref{Fig1}, which has the maximum degree $6$. It can be verified that $G^\sigma$ satisfies the conditions of Theorem \ref{eigen} and $\sqrt{6}i$ is an eigenvalue of $G^\sigma$. But we can compute that $\rho_s(G^\sigma)\approx 3.1260 > \sqrt{6}$. \begin{figure} \caption{The counterexample $G^\sigma$} \label{Fig1} \end{figure} \section{Lower bounds of the skew energy of $G^\sigma$} Similar to the McClelland's lower bound for the energy of undirected graphs, Adiga et al. in \cite{ABC} got a lower bound for the skew energy of oriented graphs, that is, $\mathcal{E}_s(G^\sigma)\geq\sqrt{2m+n(n-1)\left(\det(S)\right)^{2/n}}$, where $S$ is the skew adjacency matrix of $G^\sigma$. This bound is also called the McClelland's lower bound of skew energy. In this section, we obtain some new lower bounds for the skew energy of oriented graphs. In view of Proposition \ref{eigen2}, we reconsider the McClelland's lower bound and establish a new lower bound of $\mathcal{E}_s(G^\sigma)$. \begin{thm}\label{Mc1} Let $G^\sigma$ be an oriented graph with $n$ vertices, $m$ arcs and skew adjacency matrix $S$. Then \begin{equation}\label{Mc} \mathcal{E}_s(G^\sigma)\geq\sqrt{4m+n(n-2)\left(\det(S)\right)^{2/n}}. \end{equation} \end{thm} \noindent {\it Proof.} By Proposition \ref{eigen2}, we have \begin{equation*} \left(\mathcal{E}_s(G^\sigma)\right)^2=\left(2\sum_{j=1}^{\lfloor n/2\rfloor}|\lambda_j|\right)^2 =4\sum_{j=1}^{\lfloor n/2\rfloor}\lambda_j^2+4\sum_{1\leq i\neq j\leq \lfloor n/2 \rfloor}|\lambda_i||\lambda_j|. \end{equation*} If $n$ is odd, $\det(S)=0$ and $\left(\mathcal{E}_s(G^\sigma)\right)^2 \geq4\sum_{j=1}^{\lfloor n/2\rfloor}\lambda_j^2=4m$. If $n$ is even, by the arithmetic-geometric mean inequality, we have that \begin{equation*} \left(\mathcal{E}_s(G^\sigma)\right)^2 =4\sum_{j=1}^{\lfloor n/2\rfloor}\lambda_j^2+4\sum_{1\leq i\neq j\leq \lfloor n/2 \rfloor}|\lambda_i||\lambda_j|\geq 4m+n(n-2)(\det(S))^{2/n}. \end{equation*} The proof is thus complete.\qed \noindent\textbf{Remark 4.1} The lower bound in the above theorem is better than the McClelland's bound. In fact, we find that \begin{equation}\label{inequ1} (\det(S))^\frac{1}{n}=\left(\prod_{j=1}^{n}|\lambda_j|\right)^\frac{1}{n}\leq \sqrt{\frac{\sum_{j=1}^{n}\lambda_j^2}{n}}= \sqrt{\frac{2m}{n}}. \end{equation} Therefore, we deduce that \begin{equation*} 4m+n(n-2)\left(\det(S)\right)^\frac{2}{n}\geq 2m+n(n-1)\left(\det(S)\right)^\frac{2}{n}. \end{equation*} An oriented graph $G^\sigma$ is said to be singular if $\det(S)=0$ and nonsingular otherwise. In what follows, we only consider the oriented graphs with $\det(S)\neq 0$. Note that if $G^\sigma$ is nonsingular, then $n$ must be even and $\det(S)$ is positive. 
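Before proceeding, we note that the bound of Theorem \ref{Mc1} and its comparison with the McClelland-type bound in Remark 4.1 are easy to test numerically. The following sketch (ours; the all-ascending orientation of $K_4$ used here is just one convenient example) computes the skew energy together with both bounds.
\begin{verbatim}
# Numerical comparison of the refined lower bound above with the
# McClelland-type bound; the orientation of K_4 is our own example.
import numpy as np

def skew_adjacency(n, arcs):
    S = np.zeros((n, n))
    for i, j in arcs:                    # arc from v_i to v_j
        S[i, j], S[j, i] = 1.0, -1.0
    return S

n = 4
arcs = [(i, j) for i in range(n) for j in range(i + 1, n)]   # oriented K_4
S = skew_adjacency(n, arcs)
m, detS = len(arcs), float(np.linalg.det(S))
energy = float(np.sum(np.abs(np.linalg.eigvals(S))))         # skew energy
new_bound = np.sqrt(4 * m + n * (n - 2) * detS ** (2 / n))
old_bound = np.sqrt(2 * m + n * (n - 1) * detS ** (2 / n))
print(energy, new_bound, old_bound)      # ~5.657, ~5.657, ~4.899
\end{verbatim}
For this orientation $\det(S)=1$, the skew energy equals $4\sqrt{2}$, the bound of Theorem \ref{Mc1} is attained, and the McClelland-type bound is strictly smaller.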
We next derive a lower bound of the skew energy for nonsingular oriented graphs in terms of the order $n$, the maximum degree $\Delta$ and $\det(S)$. \begin{thm}\label{ELB} Let $G^\sigma$ be a nonsingular oriented graph with order $n$, maximum degree $\Delta$ and skew adjacency matrix $S$. Then \begin{equation}\label{ELB1} \mathcal{E}_s(G^\sigma)\geq 2\sqrt{\Delta}+(n-2)\left(\frac{\det(S)}{\Delta}\right)^{\frac{1}{n-2}} \end{equation} and equality holds if and only if $\lambda_1=\sqrt{\Delta}$ and $\lambda_2=\cdots=\lambda_{n/2}$. \end{thm} \noindent {\it Proof.} Using the arithmetic-geometric mean inequality, we obtain that \begin{eqnarray*} \mathcal{E}_s(G^\sigma)&=&\sum_{j=1}^{n}|\lambda_j| =2\lambda_1+\sum_{j=2}^{n-1}|\lambda_j|\geq 2\lambda_1+(n-2)\left(\prod_{j=2}^{n-1}|\lambda_j|\right)^{\frac{1}{n-2}}\\ &=&2\lambda_1+(n-2)\left(\frac{\det(S)}{\lambda_1^2}\right)^{\frac{1}{n-2}} \end{eqnarray*} with equality if and only if $|\lambda_2|=\cdots=|\lambda_{n-1}|$. Let $f(x)=2x+(n-2)\left(\frac{\det(S)}{x^2}\right)^\frac{1}{n-2}$. Then $f'(x)=2-2\left(\det(S)\right)^\frac{1}{n-2}x^{-\frac{n}{n-2}}$. It is easy to see that the function $f(x)$ is increasing for $x\geq \left(\det(S)\right)^\frac{1}{n}$. By Inequality (\ref{inequ1}), we have $\left(\det(S)\right)^\frac{1}{n}\leq\sqrt{2m/n}\leq \sqrt{\Delta}$. Moreover, $\lambda_1\geq\sqrt{\Delta}$, since $\lambda_1^2$ is the largest eigenvalue of $S^TS$ and the diagonal entries of $S^TS$ are the degrees of $G$. Therefore, \begin{equation*} \mathcal{E}_s(G^\sigma)\geq f(\lambda_1)\geq f(\sqrt{\Delta})=2\sqrt{\Delta}+(n-2)\left(\frac{\det(S)}{\Delta}\right)^\frac{1}{n-2}, \end{equation*} and equality holds if and only if $\lambda_1=\sqrt{\Delta}$ and $\lambda_2=\cdots=\lambda_{n/2}$. Now the proof is complete.\qed By expanding the right-hand side of Inequality (\ref{ELB1}), we obtain a simplified lower bound, which is, however, a little weaker than the bound (\ref{ELB1}). \begin{cor}\label{ELBC} Let $G^\sigma$ be a nonsingular oriented graph with order $n$, maximum degree $\Delta$ and skew adjacency matrix $S$. Then \begin{equation}\label{ELB2} \mathcal{E}_s(G^\sigma)\geq 2\sqrt{\Delta}+n-2+\ln(\det(S))-\ln{\Delta}. \end{equation} Equality holds if and only if $G^\sigma$ is a union of $n/2$ disjoint arcs. \end{cor} \noindent {\it Proof.} Note that $e^{x}\geq1+x$ for all $x$, with equality if and only if $x=0$. Combining this inequality with Theorem \ref{ELB}, we obtain that \begin{eqnarray*} \mathcal{E}_s(G^\sigma)&\geq& 2\sqrt{\Delta}+(n-2)\left(\frac{\det(S)}{\Delta}\right)^{\frac{1}{n-2}}\\ &=&2\sqrt{\Delta}+(n-2)e^\frac{\ln{(\det(S)/\Delta)}}{n-2}\\ &\geq&2\sqrt{\Delta}+(n-2)\left(1+\frac{\ln{(\det(S)/\Delta)}}{n-2}\right)\\ &=&2\sqrt{\Delta}+n-2+\ln(\det(S))-\ln{\Delta}\,\,. \end{eqnarray*} Equality holds in (\ref{ELB2}) if and only if all the inequalities above are equalities, that is, $\lambda_1=\sqrt{\Delta}$, $\lambda_2=\cdots=\lambda_{n/2}$ and $\det(S)=\Delta$. It is easy to see that a union of $n/2$ disjoint arcs satisfies the equality in (\ref{ELB2}). Conversely, if $\lambda_1=\sqrt{\Delta}$, $\lambda_2=\cdots=\lambda_{n/2}$ and $\det(S)=\Delta$, then we find that $\lambda_1=\sqrt{\Delta}$ and $\lambda_2=\cdots=\lambda_{n/2}=1$. If $\Delta=1$, then $G^\sigma$ is a union of $n/2$ disjoint arcs. Suppose that $\Delta\geq 2$. Then we get that $n\geq 4$. Consider the matrix $M=S^TS=(m_{ij})$, where $S=(S_1,\ldots,S_n)$. By Theorem \ref{general}, we know that $M$ is either a diagonal matrix with diagonal entries \{$\Delta, \Delta, 1, \ldots, 1$\} or a matrix of the form $\left(\begin{array}{cc}\Delta & \mathbf{0}\\ \mathbf{0}^T & M_1\end{array}\right)$.
If $M$ is a diagonal matrix, then the graph $G$ must contain two vertices with maximum degree $\Delta$, while all the other vertices have degree 1. Since $\Delta\geq 2$, there must exist a path $v_i\,v_j\,v_k$ with $d_{G}(v_i)=1$. Then, no matter how the graph $G$ is oriented, we always get that $(S_i,S_k)\neq0$, i.e., $m_{ik}\neq 0$ (indeed, $S_i$ has a single nonzero entry, in position $j$, and $S_k$ also has a nonzero entry in position $j$), which is a contradiction. If $M$ is a matrix of the form $\left(\begin{array}{cc}\Delta & \mathbf{0}\\ \mathbf{0}^T & M_1\end{array}\right)$, then $M_1$ must be symmetric and have spectrum $\{\Delta,1,\ldots,1\}$. Denote by $A$ the vertex set of the connected component of $G$ which contains the vertex $v_1$. Suppose $A=\{v_1,v_2,\ldots,v_{t+1}\}$. Obviously, $t\geq \Delta$. By the spectral decomposition, we deduce that $M_1=I_{n-1}+(\Delta-1)pp^T$, where $I_{n-1}$ is the identity matrix of order $n-1$ and $p$ is a unit eigenvector of $M_1$ corresponding to the eigenvalue $\Delta$. Therefore, we get \begin{equation}\label{Form} S^TS=\left(\begin{array}{cc}\Delta & \mathbf{0}\\ \mathbf{0}^T & I_{n-1} +(\Delta-1)pp^T\end{array}\right). \end{equation} Suppose that $p=(p_1,p_2,\ldots,p_{n-1})^T$. Then we claim that $p$ satisfies the following properties: \begin{enumerate}[(1)] \item $p_1^2+\cdots+p_{n-1}^2=1$, \item the degree sequence of $G$ is \{$\Delta, 1+(\Delta-1)p_1^2,\ldots,1+(\Delta-1)p_{n-1}^2$\}, \item $(S_{i+1},S_{j+1})=(\Delta-1)p_{i}p_{j}$ for any two distinct integers $1\leq i,j\leq n-1$, \item $p_i\neq 0$ for $1\leq i\leq t$. \end{enumerate} The first property is trivial. The second and the third follow from a direct calculation based on Equality (\ref{Form}). It remains to prove the fourth property. If it fails, we may suppose, without loss of generality, that $p_1=0$. Then, by properties (2) and (3), we have $d_G(v_2)=1$ and $(S_2,S_j)=0$ for all $3\leq j\leq t+1$, which is a contradiction (the unique neighbor of $v_2$ lies in $A$ and, since $A$ is connected with $|A|\geq 3$, it has another neighbor in $A$, whose column is not orthogonal to $S_2$). From the above properties, and since the degrees of $G$ are integers, we observe that $(\Delta-1)p_{j}^2\geq 1$ for all $1\leq j\leq t$. Then $\Delta-1=(\Delta-1)\sum_{j=1}^{n-1}p_j^2\geq t\geq \Delta$, which is a contradiction. We conclude that $G^\sigma$ must be a union of $n/2$ disjoint arcs. The proof is thus complete. \qed In general, the bound (\ref{ELB2}) is not better than the bound (\ref{Mc}). In some cases, however, the bound (\ref{ELB2}) is better. For example, for the oriented graph in Figure \ref{Fig2} the bound (\ref{ELB2}) is superior to (\ref{Mc}). The oriented graph $G^\sigma$ has $n$ vertices, $3n/2-2$ arcs and maximum degree $n/2$. By calculation, we have $\det(S)=n^2/4$. \begin{figure} \caption{The oriented graph $G^\sigma$} \label{Fig2} \end{figure} Moreover, we find a class of oriented graphs which illustrates the superiority of the bound (\ref{ELB2}). Let $\Gamma$ be the class of connected oriented graphs of order $n\geq 600$ which satisfy the following conditions: \begin{equation}\label{class} \frac{n}{2}\leq\Delta\leq \det(S)\leq n^2,\,\, m\leq 10n. \end{equation} Obviously, the oriented graph in Figure \ref{Fig2} belongs to $\Gamma$. \begin{thm}\label{CLL} The bound (\ref{ELB2}) is better than (\ref{Mc}) for any oriented graph in $\Gamma$. \end{thm} \noindent {\it Proof.} For any oriented graph $G^\sigma$ in $\Gamma$, we get that \begin{equation*} \ln{(\det(S))}-\ln{\Delta}=\ln\left(\frac{\det(S)}{\Delta}\right)\geq 0. \end{equation*} To prove the theorem, it is sufficient to prove that \begin{equation}\label{equ} 2\sqrt{\Delta}+n-2\geq \sqrt{4m+n(n-2)(\det(S))^{\frac{2}{n}}}.
\end{equation} Notice that $\ln(\det(S))\leq 2\ln{n}\leq \frac{n}{2}$ for $n\geq 600$. By Taylor's formula, we get \begin{equation*} (\det(S))^{\frac{2}{n}}=e^{\frac{2\ln(\det(S))}{n}}=1+\frac{2\ln(\det(S))}{n}+2e^{t_0} \left(\frac{\ln(\det(S))}{n}\right)^2, \end{equation*} where the number $t_0$ satisfies $0\leq t_0\leq \frac{2\ln(\det(S))}{n}\leq \frac{4\ln{n}}{n}\leq1$. It follows that $e^{t_0}\leq e$. We immediately obtain the following inequality: \begin{equation}\label{taylor} (\det(S))^{\frac{2}{n}}\leq 1+\frac{2\ln(\det(S))}{n}+2e\left(\frac{\ln(\det(S))}{n}\right)^2. \end{equation} To prove Inequality (\ref{equ}), we estimate \begin{eqnarray*} &&4m+n(n-2)(\det(S))^{\frac{2}{n}}\\ &\leq& 4m+n(n-2)\left(1+\frac{2\ln(\det(S))} {n}+2e\left(\frac{\ln(\det(S))}{n}\right)^2\right)\\ &\leq&38n+n^2+2(n-2)\ln(\det(S))+2e\left(\frac{n-2}{n}\right) \left(\ln(\det(S))\right)^2\\ &\leq&38n+n^2+4(n-2)\ln(n)+8e\left(\frac{n-2}{n}\right) \left(\ln(n)\right)^2\\ &\leq&n^2-2n+4+2\sqrt{2n}(n-2). \end{eqnarray*} The last inequality follows for $n\geq 600$. Note that $(2\sqrt{\Delta}+n-2)^2=(n-2)^2+4\sqrt{\Delta}(n-2)+4\Delta\geq n^2-2n+4+2\sqrt{2n}(n-2)$, since $\Delta\geq n/2$. The proof is thus complete. \qed \noindent\textbf{Remark 4.2} By Theorem \ref{Mc1}, we know that the bound (\ref{Mc}) is always superior to the McClelland bound obtained by Adiga et al. By Theorem \ref{ELB} and Corollary \ref{ELBC}, the bound (\ref{ELB1}) is always superior to the bound (\ref{ELB2}). In some cases, we obtain from Theorem \ref{CLL} that the bound (\ref{ELB2}) is better than the bound (\ref{Mc}). \end{document}
\begin{document} \baselineskip 8mm \parindent 9mm \title[] {Bound state solutions for non-autonomous fractional Schr\"{o}dinger-Poisson equations with critical exponent} \author{Kexue Li} \address{Kexue Li\newline School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China} \email{[email protected]} \thanks{{\it 2010 Mathematics Subject Classification}: 35Q40, 58E30} \keywords{Fractional Schr\"{o}dinger-Poisson equation, bound state solution, critical exponent.} \begin{abstract} In this paper, we study the fractional Schr\"{o}dinger-Poisson equation \begin{equation*} \ \left\{\begin{aligned} &(-\Delta)^{s}u+V(x)u+K(x)\phi u=|u|^{2^{\ast}_{s}-2}u, &\mbox{in} \ \mathbb{R}^{3},\\ &(-\Delta)^{s}\phi=K(x)u^{2},&\mbox{in} \ \mathbb{R}^{3}, \end{aligned}\right. \end{equation*} where $s\in (\frac{3}{4},1]$, $2^{\ast}_{s}=\frac{6}{3-2s}$ is the fractional critical exponent, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$ are nonnegative functions. If $\|V\|_{\frac{3}{2s}}+\|K\|_{\frac{6}{6s-3}}$ is sufficiently small, we prove that the equation has at least one bound state solution. \end{abstract} \maketitle \section{\textbf{Introduction}} In this paper, we are concerned with the existence of bound state solutions for the following fractional Schr\"{o}dinger-Poisson equation \begin{equation}\label{fsp} \ \left\{\begin{aligned} &(-\Delta)^{s}u+V(x)u+K(x)\phi u=|u|^{2^{\ast}_{s}-2}u, &\mbox{in} \ \mathbb{R}^{3},\\ &(-\Delta)^{s}\phi=K(x)u^{2}, &\mbox{in} \ \mathbb{R}^{3}, \end{aligned}\right. \end{equation} where $s\in (\frac{3}{4},1]$, $2^{\ast}_{s}=\frac{6}{3-2s}$ is the fractional critical exponent, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$ are nonnegative functions, and $(-\Delta)^{s}$ is the fractional Laplacian defined by \begin{align}\label{fractionalLaplacian} (-\triangle)^{s}u(x)=c_{s}P.V.\int_{\mathbb{R}^{3}}\frac{u(x)-u(y)}{|x-y|^{3+2s}}dy, \end{align} where \begin{align*} c_{s}=2^{2s}\pi^{-\frac{3}{2}}\frac{\Gamma(\frac{3+2s}{2})}{|\Gamma(-s)|}. \end{align*} When $s=1$, (\ref{fsp}) becomes the classical Schr\"{o}dinger-Poisson equation, a particular case of the more general system \begin{equation}\label{SP} \ \left\{\begin{aligned} &-\Delta u+V(x)u+K(x)\phi u=f(x,u), &\mbox{in} \ \mathbb{R}^{3},\\ &-\Delta \phi=K(x)u^{2}, &\mbox{in} \ \mathbb{R}^{3}. \end{aligned}\right. \end{equation} The Schr\"{o}dinger-Poisson system was introduced in \cite{Benci} as a model describing a quantum particle interacting with an electromagnetic field. In recent years, (\ref{SP}) has attracted much attention; we refer to \cite{Aprile,Ambrosetti,Jiang,Sun,Cerami,Y,Z,He} and the references therein. When $V(x)=1$, $K(x)=\lambda$ and $f(x,u)=u^{p}$, Ruiz \cite{Ruiz} studied the problem \begin{equation*} \ \left\{\begin{aligned} &-\Delta u+u+\lambda\phi u=u^{p}, \\ &-\Delta \phi=u^{2}, \lim_{|x|\rightarrow +\infty}\phi(x)=0, \end{aligned}\right. \end{equation*} where $u,\phi:\mathbb{R}^{3}\rightarrow \mathbb{R}$ are positive radial functions, $\lambda>0$ and $1<p<5$. Existence and nonexistence results are established for different values of the parameters $p$ and $\lambda$. In the case $p\in (2,3)$, Ruiz developed a new approach consisting of minimizing the associated energy functional on a certain manifold, obtained by combining the Nehari manifold with the Pohozaev identity. In the case $p\in (1,2)$, existence results are obtained for $\lambda$ small enough.
Liu and Guo \cite{Liu} considered the following Schr\"{o}dinger-Poisson equation with critical growth \begin{equation*} \ \left\{\begin{aligned} &-\Delta u+V(x)u+\lambda\phi u=\mu|u|^{q-1}u+u^{5}, \\ &-\Delta \phi=u^{2}, \end{aligned}\right. \end{equation*} where $q\in (2,5)$, $\lambda>0$. Under some assumptions on the potential $V$, they used variational methods to prove the existence of positive ground state solutions. In \cite{ZMX}, the authors considered (\ref{SP}) with $f(u)=u^{5}$ and applied a linking theorem to prove the existence of bound state solutions when $V(x)$, $K(x)$ satisfy some conditions. If $V(x)=1$ and $f(x,u)=a(x)|u|^{p-2}u+u^{5}$ with $p\in (4,6)$, Zhang \cite{ZhangJian} studied the existence of ground state solutions and nodal solutions for (\ref{SP}) when $K(x)$ and $a(x)$ satisfy some assumptions. If $\phi=0$, (\ref{fsp}) reduces to a fractional Schr\"{o}dinger equation, which is a fundamental equation in fractional quantum mechanics \cite{Laskin1,Laskin2}. In fact, if one extends the Feynman path integral from Brownian-like to L\'{e}vy-like quantum mechanical paths, the classical Schr\"{o}dinger equation turns into the fractional Schr\"{o}dinger equation. In the last decade, the existence of solutions for fractional Schr\"{o}dinger equations has been investigated by many authors; we refer to \cite{Ionescu,Lemm,Ambrosio,Bieganowski,HZ,Ao,Secchi}. To the best of our knowledge, there are few papers which consider equation (\ref{fsp}). In \cite{Zhang}, the authors considered a fractional Schr\"{o}dinger-Poisson system \begin{equation*} \ \left\{\begin{aligned} &(-\Delta)^{s}u+\lambda\phi u=g(u), &\mbox{in} \ \mathbb{R}^{3},\\ &(-\Delta)^{t}\phi=\lambda u^{2},&\mbox{in} \ \mathbb{R}^{3}, \end{aligned}\right. \end{equation*} where $g$ satisfies the Berestycki-Lions conditions, and the existence of positive solutions is proved. Teng \cite{T} studied the existence of ground state solutions for the nonlinear fractional Schr\"{o}dinger-Poisson equation with critical Sobolev exponent \begin{equation*} \ \left\{\begin{aligned} &(-\Delta)^{s}u+V(x)u+\phi u=\mu|u|^{p-1}u+|u|^{2_{s}^{\ast}-2}u, &\mbox{in} \ \mathbb{R}^{3},\\ &(-\Delta)^{t}\phi=u^{2},&\mbox{in} \ \mathbb{R}^{3}, \end{aligned}\right. \end{equation*} where $\mu>0$ is a parameter, $1<p<2_{s}^{\ast}-1=\frac{3+2s}{3-2s}$, $s,t\in (0,1)$ and $2s+2t>3$; the author proves the existence of a nontrivial ground state solution by using the Nehari-Pohozaev manifold method and Brezis-Nirenberg type arguments, together with the monotonicity trick and a global compactness lemma. Shen and Yao \cite{Shen} applied a Nehari-Pohozaev type manifold to prove the existence of a nontrivial least energy solution for the nonlinear fractional Schr\"{o}dinger-Poisson system \begin{equation*} \ \left\{\begin{aligned} &(-\Delta)^{s}u+V(x)u+\phi u=\mu|u|^{p-1}u, &\mbox{in} \ \mathbb{R}^{3},\\ &(-\Delta)^{t}\phi=u^{2},&\mbox{in} \ \mathbb{R}^{3}, \end{aligned}\right. \end{equation*} where $s,t\in (0,1)$, $s<t$ and $2s+2t>3$, $2<p<\frac{3+2s}{3-2s}$. In \cite{Yu}, the authors studied the fractional Schr\"{o}dinger-Poisson system \begin{equation*} \ \left\{\begin{aligned} &\varepsilon^{2s}(-\Delta)^{s}u+V(x)u+\phi u=K(x)|u|^{p-2}u, &\mbox{in} \ \mathbb{R}^{3},\\ &\varepsilon^{2s}(-\Delta)^{s}\phi=u^{2},&\mbox{in} \ \mathbb{R}^{3}, \end{aligned}\right.
\end{equation*} where $\varepsilon>0$, $\frac{3}{4}<s<1$, $4<p<\frac{6}{3-2s}$, $V(x)\in C(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3})$ has a positive global minimum, and $K(x)\in C(\mathbb{R}^{3})\cap L^{\infty}(\mathbb{R}^{3})$ is positive and has a global maximum. They proved the existence of a positive ground state solution and determined a concrete set, related to the potentials $V$ and $K$, as the concentration set of these ground state solutions as $\varepsilon\rightarrow 0$. In this paper, we are devoted to establishing the existence of bound state solutions for the fractional Schr\"{o}dinger-Poisson equation (\ref{fsp}). We consider the nonlinear term $f(x,u)=|u|^{2_{s}^{\ast}-2}u$ without the subcritical part, and we deal with the problem (\ref{fsp}) in $D^{s,2}(\mathbb{R}^{3})$ instead of in $H^{s}(\mathbb{R}^{3})$. Under some assumptions on the potentials $V$ and $K$, we prove that (\ref{fsp}) cannot be solved by constrained minimization on the Nehari manifold. After proving some concentration-compactness results, the existence of a bound state solution is obtained by a linking theorem. Our main result is as follows: \begin{theorem}\label{boundstate} Assume that $V\geq 0$, $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $K\geq 0$, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and \begin{align}\label{svk} 0<S_{s}^{-1}\|V\|_{\frac{3}{2s}}+S_{s}^{\frac{3}{2s}-3}\|K\|_{\frac{6}{6s-3}}^{2}\leq 2^{\frac{2_{s}^{\ast}-4}{2_{s}^{\ast}}}-1, \end{align} where \begin{align*} S_{s}=\inf_{u\in D^{s,2}(\mathbb{R}^{3})\backslash\{0\}}\frac{\int_{\mathbb{R}^{3}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx}{\left(\int_{\mathbb{R}^{3}}|u|^{2_{s}^{\ast}}dx\right)^{\frac{2}{2_{s}^{\ast}}}} \end{align*} is the best Sobolev constant for the embedding $D^{s,2}(\mathbb{R}^{3})\hookrightarrow L^{2_{s}^{\ast}}(\mathbb{R}^{3})$. Then the problem (\ref{fsp}) has at least one bound state solution. \end{theorem} This paper is organized as follows. In Section 2, we first present the variational setting of the problem and then prove some lemmas. In Section 3, we study Palais-Smale sequences and obtain a compactness theorem. In Section 4, we prove the existence of bound state solutions by a linking theorem. \section{\textbf{Variational setting and preliminaries}} For $p\in [1,\infty)$, we denote by $L^{p}(\mathbb{R}^{3})$ the usual Lebesgue space with the norm $\|u\|_{p}=\left(\int_{\mathbb{R}^{3}}|u(x)|^{p}dx\right)^{\frac{1}{p}}$. For any $r>0$ and any $x\in \mathbb{R}^{3}$, $B_{r}(x)$ denotes the ball of radius $r$ centered at $x$. For any $s\in (0,1)$, we recall the definitions of the fractional Sobolev space $H^{s}(\mathbb{R}^{3})$ and of the fractional Laplacian $(-\Delta)^{s}$; for more details, we refer to \cite{DPV}. $H^{s}(\mathbb{R}^{3})$ is defined as follows \begin{align*} H^{s}(\mathbb{R}^{3})=\left\{u\in L^{2}(\mathbb{R}^{3}): \int_{\mathbb{R}^{3}}(1+|\xi|^{2s})|\mathcal{F}u(\xi)|^{2}d\xi<\infty\right\} \end{align*} with the norm \begin{align}\label{alternative} \|u\|_{H^{s}}=\left(\int_{\mathbb{R}^{3}}(|\mathcal{F}u(\xi)|^{2}+|\xi|^{2s}|\mathcal{F}u(\xi)|^{2})d\xi\right)^{\frac{1}{2}}, \end{align} where $\mathcal{F}u$ denotes the Fourier transform of $u$. As usual, $H^{-s}(\mathbb{R}^{3})$ denotes the dual of $H^{s}(\mathbb{R}^{3})$. By $\mathcal{S}(\mathbb{R}^{n})$, we denote the Schwartz space of rapidly decaying $C^{\infty}$ functions in $\mathbb{R}^{n}$.
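As a side illustration of the Fourier description of the norm in (\ref{alternative}), the short Python sketch below (it is not used anywhere in the arguments of this paper, and it assumes that NumPy is available) approximates $\|u\|_{H^{s}}$ for a Gaussian by discretizing the frequency integral with the FFT on a large periodic box; the box size, the grid size and the test function are arbitrary choices made only for this illustration. Dropping the factor $|\xi|^{2s}$ recovers the $L^{2}$ norm, which provides a quick consistency check of the normalization.
\begin{verbatim}
import numpy as np

# Illustration only: discrete approximation of the Fourier-side H^s norm
# for a Gaussian on a periodic box.  L, N and the test function are
# arbitrary choices for this sketch.
s, L, N = 0.8, 40.0, 64
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.exp(-(X ** 2 + Y ** 2 + Z ** 2) / 2.0)

xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)      # frequency variables
XI1, XI2, XI3 = np.meshgrid(xi, xi, xi, indexing="ij")
xi2 = XI1 ** 2 + XI2 ** 2 + XI3 ** 2

Fu = np.fft.fftn(u)                               # unnormalized DFT of u
dx = L / N
# By the discrete Parseval identity, sum |Fu|^2 * dx^3 / N^3 approximates
# the integral of |u|^2, so the weighted sum below approximates the
# frequency integral of (1 + |xi|^{2s}) |Fu(xi)|^2 defining the H^s norm.
Hs_norm = np.sqrt(np.sum((1.0 + xi2 ** s) * np.abs(Fu) ** 2) * dx ** 3 / N ** 3)
L2_norm = np.sqrt(np.sum(np.abs(u) ** 2) * dx ** 3)
print(Hs_norm, L2_norm)   # Hs_norm >= L2_norm, as expected
\end{verbatim}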
For $u\in \mathcal{S}(\mathbb{R}^{n})$ and $s\in (0,1)$, $(-\Delta)^{s}$ is defined by \begin{align}\label{Fourier} (-\Delta)^{s}f=\mathcal{F}^{-1}(|\xi|^{2s}(\mathcal{F}f)), \ \forall \xi\in \mathbb{R}^{n}. \end{align} In fact, (\ref{Fourier}) is equivalent to (\ref{fractionalLaplacian}), see \cite{Tzirakis}. By Plancherel's theorem, we have $\|\mathcal{F}u\|_{2}=\|u\|_{2}$, $\||\xi|^{s}\mathcal{F}u\|_{2}=\|(-\Delta)^{\frac{s}{2}}u\|$. Then by (\ref{alternative}), we get the equivalent norm \begin{align*} \|u\|_{H^{s}}=\left(\int_{\mathbb{R}^{3}}(|(-\Delta)^{\frac{s}{2}}u|^{2}+|u|^{2})dx\right)^{\frac{1}{2}}. \end{align*} From Theorem 6.5 and Corollary 7.2 in \cite{DPV}, it is known that the space $H^{s}(\mathbb{R}^{3})$ is continuously embedded in $L^{q}(\mathbb{R}^{3})$ for any $q\in [2,2^{\ast}_{s}]$ and the embedding $H^{s}(\mathbb{R}^{3})\hookrightarrow L^{q}(\mathbb{R}^{3})$ is locally compact for $q\in [1, 2^{\ast}_{s})$, where $2^{\ast}_{s}=\frac{6}{3-2s}$ is the fractional critical Sobolev exponent. \\ For $s\in (0,1)$, the fractional Sobolev space $D^{s,2}(\mathbb{R}^{3})$ is defined as follows \begin{align*} D^{s,2}(\mathbb{R}^{3})=\left\{u\in L^{2^{\ast}_{s}}(\mathbb{R}^{3}): |\xi|^{s}\mathcal{F}u(\xi)\in L^{2}(\mathbb{R}^{3})\right\}, \end{align*} which is the completion of $C^{\infty}_{0}(\mathbb{R}^{3})$ with respect to the norm \begin{align*} \|u\|_{D^{s,2}}=\left(\int_{\mathbb{R}^{3}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx\right)^{\frac{1}{2}}=\left(\int_{\mathbb{R}^{3}}|\xi|^{2s}|\mathcal{F}u(\xi)|^{2}d\xi\right)^{\frac{1}{2}}. \end{align*} The best Sobolev constant is defined by \begin{align}\label{minimizer} S_{s}=\inf_{u\in D^{s,2}(\mathbb{R}^{3})\backslash \{0\}}\frac{\int_{\mathbb{R}^{3}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx}{\left(\int_{\mathbb{R}^{3}}|u|^{2^{\ast}_{s}}dx\right)^{\frac{2}{2^{\ast}_{s}}}}. \end{align} By $C$, we denote the generic constants, which may change from line to line. We consider the variational setting of (\ref{fsp}). For $u\in D^{s,2}(\mathbb{R}^{3})$, define the linear operator $T_{u}: D^{s,2}(\mathbb{R}^{3})\rightarrow \mathbb{R}$ as \begin{align*} T_{u}(v)=\int_{\mathbb{R}^{3}}K(x)u^{2}vdx. \end{align*} For $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$, by the H\"{o}lder inequality, we have \begin{align}\label{dt2} |T_{u}(v)|\leq \|K\|_{\frac{6}{6s-3}}\|u\|^{2}_{\frac{6}{3-2s}}\|v\|_{2^{\ast}_{s}}\leq \|K\|_{\frac{6}{6s-3}}S^{-\frac{3}{2}}_{s}\|u\|_{D^{s,2}}^{2}\|v\|_{D^{s,2}}. \end{align} Set \begin{align*} \eta(a,b)=\int_{\mathbb{R}^{3}}(-\Delta)^{\frac{s}{2}}a\cdot (-\Delta)^{\frac{s}{2}}b\ dx, \ a,b\in D^{s,2}(\mathbb{R}^{3}). \end{align*} It is clear that $\eta(a,b)$ is bilinear, bounded and coercive. The Lax-Milgram theorem implies that for every $u\in D^{s,2}(\mathbb{R}^{3})$, there exists a unique $\phi^{s}_{u}\in D^{s,2}(\mathbb{R}^{3})$ such that $T_{u}(v)=\eta(\phi^{s}_{u},v)$ for any $v\in D^{s,2}(\mathbb{R}^{3})$, that is \begin{align}\label{dt3} \int_{\mathbb{R}^{3}}(-\Delta)^{\frac{s}{2}}\phi^{s}_{u}(-\Delta)^{\frac{s}{2}}vdx=\int_{\mathbb{R}^{3}}K(x)u^{2}vdx. \end{align} Therefore, $(-\Delta)^{s}\phi^{s}_{u}=K(x)u^{2}$ in a weak sense. For $x\in \mathbb{R}^{3}$, we have \begin{align}\label{Riesz} \phi^{s}_{u}(x)=c_{s}\int_{\mathbb{R}^{3}}\frac{K(y)u^{2}(y)}{|x-y|^{3-2s}}dy, \end{align} which is the Riesz potential \cite{Stein}, where \begin{align*} c_{s}=\frac{\Gamma(\frac{3-2s}{2})}{\pi^{3/2}2^{2s}\Gamma(s)}. \end{align*} Moreover, \begin{align}\label{phisu} \|\phi^{s}_{u}\|_{D^{s,2}}=\|T_{u}\|_{\mathcal{L}(D^{s,2},\mathbb{R})}. 
\end{align} Since $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$, by (\ref{dt3}), (\ref{dt2}), \begin{align}\label{phi2} \|\phi^{s}_{u}\|^{2}_{D^{s,2}}&=\int_{\mathbb{R}^{3}}K(x)\phi^{s}_{u}u^{2}dx\leq \|K\|_{\frac{6}{6s-3}}\|u\|^{2}_{\frac{6}{3-2s}}\|\phi^{s}_{u}\|_{2^{\ast}_{s}}\nonumber\\ &\leq S^{-\frac{1}{2}}_{s}\|K\|_{\frac{6}{6s-3}}\|u\|^{2}_{\frac{6}{3-2s}}\|\phi^{s}_{u}\|_{D^{s,2}}. \end{align} Then \begin{align}\label{ph} \|\phi^{s}_{u}\|_{D^{s,2}}\leq S^{-\frac{1}{2}}_{s}\|K\|_{\frac{6}{6s-3}}\|u\|^{2}_{\frac{6}{3-2s}}\leq CS^{-\frac{1}{2}}_{s}\|K\|_{\frac{6}{6s-3}}\|u\|^{2}_{H^{s}}. \end{align} Substituting $\phi^{s}_{u}$ in (\ref{fsp}), we obtain the fractional Schr\"{o}dinger equation \begin{align}\label{fractional Schrodinger} (-\Delta)^{s}u+V(x)u+K(x)\phi^{s}_{u}u=|u|^{2^{\ast}_{s}-2}u, \ x\in \mathbb{R}^{3}. \end{align} The energy functional $I: D^{s,2}(\mathbb{R}^{3})\rightarrow \mathbb{R}$ corresponding to problem (\ref{fractional Schrodinger}) is defined by \begin{align}\label{corresponding} I(u)=\frac{1}{2}\int_{\mathbb{R}^{3}}(|(-\Delta)^{\frac{s}{2}}u|^{2}+V(x)u^{2})dx+\frac{1}{4}\int_{\mathbb{R}^{3}}K(x)\phi^{s}_{u}u^{2}dx-\frac{1}{2^{\ast}_{s}}\int_{\mathbb{R}^{3}}|u|^{2^{\ast}_{s}}dx. \end{align} It is easy to see that $I$ is well defined in $D^{s,2}(\mathbb{R}^{3})$ and $I\in C^{1}(D^{s,2}(\mathbb{R}^{3}),\mathbb{R})$, with \begin{align*} \langle I'(u),v\rangle=\int_{\mathbb{R}^{3}}\left((-\Delta)^{\frac{s}{2}}u(-\Delta)^{\frac{s}{2}}v+V(x)uv+K(x)\phi^{s}_{u}uv-|u|^{2^{\ast}_{s}-2}uv\right)dx, \end{align*} for any $v\in D^{s,2}(\mathbb{R}^{3})$. \begin{definition} (1) We say that $(u,\phi)\in D^{s,2}(\mathbb{R}^{3})\times D^{s,2}(\mathbb{R}^{3})$ is a weak solution of (\ref{fsp}) if $u$ is a weak solution of (\ref{fractional Schrodinger}).\\ (2) We say that $u$ is a weak solution of (\ref{fractional Schrodinger}) if \begin{align*} \int_{\mathbb{R}^{3}}\left((-\Delta)^{\frac{s}{2}}u(-\Delta)^{\frac{s}{2}}v+V(x)uv+K(x)\phi^{s}_{u}uv-|u|^{2^{\ast}_{s}-2}uv\right)dx=0, \end{align*} for any $v\in D^{s,2}(\mathbb{R}^{3})$. \end{definition} Let $\Phi: D^{s,2}(\mathbb{R}^{3})\rightarrow D^{s,2}(\mathbb{R}^{3})$ be the operator \begin{align*} \Phi(u)=\phi^{s}_{u}, \end{align*} and set \begin{align*} N(u)=\int_{\mathbb{R}^{3}}K(x)\phi^{s}_{u}u^{2}dx. \end{align*} \begin{lemma}\label{Hardy} Let $u_{n}\in D^{s,2}(\mathbb{R}^{3})$ be such that $u_{n}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$. Then, up to subsequences, $u_{n}\rightarrow 0$ in $L_{loc}^{p}(\mathbb{R}^{3})$ for any $p\in [2,2_{s}^{\ast})$. \end{lemma} \begin{proof} By the fractional Hardy inequality (see (1.1) in \cite{Yafaev}), \begin{align*} \int_{\mathbb{R}^{3}}\frac{u_{n}^{2}}{(1+|x|)^{2s}}dx\leq c_{s}\int_{\mathbb{R}^{3}}|(-\Delta)^{\frac{s}{2}}u_{n}|^{2}dx\leq C. \end{align*} Let $r>0$ and let $A\subset \mathbb{R}^{3}$ be such that $A\subset B_{r}(0)$. We have \begin{align*} \int_{A}u_{n}^{2}dx\leq (1+r)^{2s}\int_{B_{r}(0)}\frac{u_{n}^{2}}{(1+|x|)^{2s}}dx\leq \widetilde{C}. \end{align*} This, together with the boundedness of $u_{n}$ in $D^{s,2}(\mathbb{R}^{3})$, yields that $u_{n}$ is bounded in $H^{s}(A)$. Since $u_{n}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$, up to a subsequence, $u_{n}\rightarrow 0$ in $L^{2}(A)$.
By $L^{p}$ interpolation inequality, for all $p\in [2,2_{s}^{\ast})$, \begin{align*} \left(\int_{A}|u_{n}|^{p}dx\right)^{\frac{1}{p}}\leq \left(\int_{A}|u_{n}|^{2}dx\right)^{\frac{\alpha}{2}}\left(\int_{A}|u_{n}|^{2_{s}^{\ast}}dx\right)^{\frac{1-\alpha}{2_{s}^{\ast}}}, \end{align*} where $\frac{\alpha}{2}+\frac{1-\alpha}{2_{s}^{\ast}}=\frac{1}{p}$. Then the conclusion follows. \end{proof} \begin{lemma}\label{high} Let $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ with $s>\frac{1}{2}$, then $\Phi$ and $N$ have following properties:\\ (1) $\Phi: D^{s,2}(\mathbb{R}^{3})\mapsto D^{s,2}(\mathbb{R}^{3})$ is continuous and maps bounded sets into bounded sets;\\ (2) If $u_{n}\rightharpoonup u$ in $D^{s,2}(\mathbb{R}^{3})$, then up to a subsequence $\Phi(u_{n})\rightarrow \Phi(u)$ in $D^{s,2}(\mathbb{R}^{3})$ and $N(u_{n})\rightarrow N(u)$. \end{lemma} \begin{proof} (1) By (\ref{dt3}), we see that \begin{align}\label{seq} \|T_{u}\|_{\mathcal{L}(D^{s,2},\mathbb{R})}=\|\Phi(u)\|_{D^{s,2}}, \ \forall \ u\in D^{s,2}(\mathbb{R}^{3}). \end{align} Assume that $u_{n}\in D^{s,2}(\mathbb{R}^{3})$, $u_{n}\rightarrow u$ in $D^{s,2}(\mathbb{R}^{3})$. Since $D^{s,2}(\mathbb{R}^{3})\hookrightarrow L^{\frac{6}{3-2s}}(\mathbb{R}^{3})$, we have $||u_{n}-u||\rightarrow 0$ in $L^{\frac{6}{3-2s}}(\mathbb{R}^{3})$, then $||u_{n}-u||^{\frac{3}{2s}}\rightarrow 0$ in $L^{\frac{4s}{3-2s}}(\mathbb{R}^{3})$. Since $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$, we have $\int_{\mathbb{R}^{3}}|K(x)|^{\frac{3}{2s}}|u_{n}-u|^{\frac{3}{2s}}dx\rightarrow 0$. For any $v\in D^{s,2}(\mathbb{R}^{3})$, by H\"{o}lder inequality, \begin{align*} |T_{u_{n}}(v)-T_{u}(v)|&\leq \int_{\mathbb{R}^{3}}K(x)|(u^{2}_{n}-u^{2})v|dx\\ &\leq\left(\int_{\mathbb{R}^{3}}|K(x)|^{\frac{3}{2s}}|u_{n}-u|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\|u_{n}+u\|_{\frac{6}{3-2s}}\|v\|_{2^{\ast}_{s}}\\ &\rightarrow 0. \end{align*} Since $v$ is arbitrary, we get $\lim_{n\rightarrow \infty}\|T_{u_{n}}-T_{u}\|_{\mathcal{L}(D^{s,2},\mathbb{R})}=0$. By (\ref{ph}), $\Phi$ maps bounded sets into bounded sets. \\ (2) By (\ref{seq}), we only need to show that \begin{align*} \|T_{u_{n}}-T_{u}\|_{\mathcal{L}(D^{s,2},\mathbb{R})}\rightarrow 0 \ \mbox{as} \ n\rightarrow \infty. \end{align*} Let $\varepsilon$ be any given positive number. There exists $r_{\varepsilon}>0$ large enough such that $\|K\|_{L^{\frac{6}{6s-3}}(\mathbb{R}^{3}\backslash B(0,r_{\varepsilon}))}<\varepsilon$. For any $v\in D^{s,2}(\mathbb{R}^{3})$, by H\"{o}lder inequality, \begin{align}\label{sequence} |T_{u_{n}}(v)-T_{u}(v)|&=\int_{\mathbb{R}^{3}}K(x)|(u^{2}_{n}-u^{2})v|dx\nonumber\\ &\leq \int_{\mathbb{R}^{3}\backslash B(0,r_{\varepsilon})}K(x)|u^{2}_{n}-u^{2}||v|dx+\int_{B(0,r_{\varepsilon})}K(x)|u^{2}_{n}-u^{2}||v|dx\nonumber\\ &\leq\|K\|_{L^{\frac{6}{6s-3}}(\mathbb{R}^{3}\backslash B(0,r_{\varepsilon}))}\|u^{2}_{n}-u^{2}\|_{\frac{3}{3-2s}}\|v\|_{\frac{6}{3-2s}}\nonumber\\ &\quad+\left(\int_{B(0,r_{\varepsilon})}(K(x))^{\frac{6}{3+2s}}|u^{2}_{n}-u^{2}|^{\frac{6}{3+2s}}dx\right)^{\frac{3+2s}{6}}\|v\|_{2^{\ast}_{s}}\nonumber\\ &\leq \left(\varepsilon C+\left(\int_{B(0,r_{\varepsilon})}(K(x))^{\frac{6}{3+2s}}|u^{2}_{n}-u^{2}|^{\frac{6}{3+2s}}dx\right)^{\frac{3+2s}{6}}\|v\|_{2^{\ast}_{s}}\right)\|v\|_{D^{s,2}}. \end{align} Set $B_{M}=\{x\in B(0,r_{\varepsilon}):K(x)>M\}$. Since $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$, then the volume $|B_{M}|\rightarrow 0$ as $M\rightarrow \infty$. By Lemma \ref{Hardy}, up to a subsequence, $u_{n}\rightarrow u$ in $L_{loc}^{\frac{12}{3+2s}}(\mathbb{R}^{3})$. 
Then for large $M$, we have \begin{align}\label{largeM} &\left(\int_{B(0,r_{\varepsilon})}(K(x))^{\frac{6}{3+2s}}|u^{2}_{n}-u^{2}|^{\frac{6}{3+2s}}dx\right)^{\frac{3+2s}{6}}\nonumber\\ &=\int_{B_{M}}(K(x))^{\frac{6}{3+2s}}|u_{n}+u|^{\frac{6}{3+2s}}|u_{n}-u|^{\frac{6}{3+2s}}dx\nonumber\\ &\quad+\int_{B(0,r_{\varepsilon})\backslash B_{M}}(K(x))^{\frac{6}{3+2s}}|u_{n}+u|^{\frac{6}{3+2s}}|u_{n}-u|^{\frac{6}{3+2s}}dx\nonumber\\ &\leq \left(\int_{B_{M}}(K(x))^{\frac{6}{6s-3}}dx\right)^{\frac{6s-3}{3+2s}}\left(\int_{\mathbb{R}^{3}}|u_{n}+u|^{\frac{6}{3-2s}}dx\right)^{\frac{3-2s}{3+2s}}\left(\int_{\mathbb{R}^{3}}|u_{n}-u|^{\frac{6}{3-2s}}dx\right)^{\frac{3-2s}{3+2s}}\nonumber\\ &\quad+M^{\frac{6}{3+2s}}\left(\int_{B(0,r_{\varepsilon})}|u_{n}+u|^{\frac{12}{3+2s}}dx\right)^{\frac{1}{2}}\left(\int_{B(0,r_{\varepsilon})}|u_{n}-u|^{\frac{12}{3+2s}}dx\right)^{\frac{1}{2}}\nonumber\\ &\leq \varepsilon c+co(1). \end{align} Therefore, \begin{align*} \Phi(u_{n})\rightarrow \Phi(u) \ \mbox{in} \ D^{s,2}(\mathbb{R}^{3}). \end{align*} Since $D^{s,2}(\mathbb{R}^{3})\hookrightarrow L^{2_{s}^{\ast}}(\mathbb{R}^{3})$ and $\phi_{u_{n}}^{s}\rightarrow \phi_{u}^{s}$ in $D^{s,2}(\mathbb{R}^{3})$, then by H\"{o}lder inequality, we have \begin{align}\label{embedding} \int_{\mathbb{R}^{3}}K(x)(\phi_{u_{n}}^{s}-\phi_{u}^{s})u^{2}dx\leq \|K\|_{\frac{6}{6s-3}}\|u\|^{2}_{2_{s}^{\ast}}\|\phi_{u_{n}}^{s}-\phi_{u}^{s}\|_{2_{s}^{\ast}}=o(1). \end{align} By the same argument as in (\ref{sequence}) and (\ref{largeM}), replacing $v$ by $\phi_{u_{n}}^{s}$, we get \begin{align}\label{N} \int_{\mathbb{R}^{3}}K(x)\phi_{u_{n}}^{s}(u_{n}^{2}-u^{2})dx \rightarrow 0 \ \mbox{as} \ n\rightarrow \infty. \end{align} From (\ref{embedding}), (\ref{N}) and \begin{align*} N(u_{n})-N(u)=\int_{\mathbb{R}^{3}}K(x)(\phi_{u_{n}}^{s}-\phi_{u}^{s})u^{2}dx+\int_{\mathbb{R}^{3}}K(x)\phi_{u_{n}}^{s}(u_{n}^{2}-u^{2})dx, \end{align*} it follows that $N(u_{n})\rightarrow N(u)$ as $n\rightarrow \infty$. \end{proof} We consider the limiting problem \begin{equation}\label{linear} \ \left\{\begin{aligned} &(-\Delta)^{s}u=|u|^{2^{\ast}_{s}-2}u \ \ \ \ \mbox{in} \ \mathbb{R}^{3},\\ &u\in D^{s,2}(\mathbb{R}^{3}). \end{aligned}\right. \end{equation} It is known that every positive solution of (\ref{linear}) assumes the form (see \cite {Chen}) \begin{equation}\label{form} \Psi_{\delta,x_{0},\kappa}(x)=\kappa\left(\frac{ \delta}{\delta^{2}+|x-x_{0}|^{2}}\right)^{\frac{3-2s}{2}}, \ \kappa>0,\ \delta>0, \ x_{0}\in \mathbb{R}^{3}. \end{equation} By \cite{C}, the best constant $S_{s}$ is only attained at $\Psi_{\delta,x_{0},\kappa}(x)$. It is known that \cite {Cora} if we choose \begin{equation*} \kappa=b_{s}=2^{\frac{3-2s}{2}}\left(\frac{\Gamma(\frac{3+2s}{2})}{\Gamma(\frac{3-2s}{2})}\right)^{\frac{3-2s}{4s}}, \end{equation*} then \begin{equation}\label{psi} \psi_{\delta,x_{0}}(x)=b_{s}\left(\frac{ \delta}{\delta^{2}+|x-x_{0}|^{2}}\right)^{\frac{3-2s}{2}} \end{equation} satisfies (\ref{linear}), and \begin{equation}\label{energy} \int_{\mathbb{R}^{3}}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,x_{0}}|^{2}dx=\int_{\mathbb{R}^{3}}|\psi_{\delta,x_{0}}|^{2_{s}^{\ast}}dx=S_{s}^{\frac{3}{2s}}. \end{equation} The energy functional related to (\ref{linear}) is defined on $D^{s,2}(\mathbb{R}^{3})$ by \begin{equation*} I_{\infty}(u)=\frac{1}{2}\int_{\mathbb{R}^{3}}|(-\Delta)^{\frac{s}{2}}u|^{2}dx-\frac{1}{2_{s}^{\ast}}\int_{\mathbb{R}^{3}}|u|^{2_{s}^{\ast}}dx. \end{equation*} Then \begin{equation*} I_{\infty}(\psi_{\delta,x_{0}})=\frac{s}{3}S_{s}^{\frac{3}{2s}}. 
\end{equation*} We define the Nehari manifold corresponding to (\ref{fsp}) and (\ref{linear}) by $\mathcal{N}$ and $\mathcal{N}_{\infty}$: \begin{align}\label{Nehari} \mathcal{N}&=\{u\in D^{s,2}(\mathbb{R}^{3})\backslash \{0\}: \langle I'(u), u\rangle=0\}, \nonumber\\ \mathcal{N}_{\infty}&=\{u\in D^{s,2}(\mathbb{R}^{3})\backslash \{0\}: \langle I_{\infty}'(u), u\rangle=0\}. \end{align} Set \begin{align}\label{inf} m=\inf_{u\in \mathcal{N}} I(u), \ m_{\infty}=\inf_{u\in \mathcal{N}_{\infty}} I_{\infty}(u). \end{align} \begin{lemma}\label{minimization} Assume that $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$ are nonnegative functions. Then $m=m_{\infty}$ and $m$ can't be attained when $\|V\|_{\frac{3}{2s}}+\|K\|_{\frac{6}{6s-3}}>0$. \end{lemma} \begin{proof} Let $u\in D^{s,2}(\mathbb{R}^{3})$ and define the function \begin{align} \gamma(t):&=\langle I'(tu),tu\rangle\nonumber\\ &=t^{2}\int_{\mathbb{R}^{3}}(|(-\Delta)^{\frac{s}{2}}u|^{2}+V(x)u^{2})dx+t^{4}\int_{\mathbb{R}^{3}}K(x)\phi_{u}^{s}u^{2}dx-t^{2_{s}^{\ast}}\int_{\mathbb{R}^{3}}|u|^{2_{s}^{\ast}}dx\nonumber\\ &=at^{2}+bt^{4}-ct^{2_{s}^{\ast}}, \end{align} where $a=\int_{\mathbb{R}^{3}}(|(-\Delta)^{\frac{s}{2}}u|^{2}+V(x)u^{2})dx$, $b=\int_{\mathbb{R}^{3}}K(x)\phi_{u}^{s}u^{2}dx$, $c=\int_{\mathbb{R}^{3}}|u|^{2_{s}^{\ast}}dx$. Since $s>\frac{3}{4}$, by a similar argument as the proof of Lemma 3.3 in \cite{Ruiz}, we know that there exist unique $t(u)>0$, $s(u)>0$ such that $t(u)u\in \mathcal{N},s(u)u\in \mathcal{N}_{\infty}$, and $I(t(u)u)=\max_{t>0}I(tu)$, $I_{\infty}(s(u)u)=\max_{t>0}I_{\infty}(tu)$. For any $u\in \mathcal{N}$, \begin{align}\label{inequality} m_{\infty}&\leq I_{\infty}(s(u)u)\nonumber\\ &\leq\frac{1}{2}\|s(u)u\|_{D^{s,2}}^{2}+\frac{s^{2}(u)}{2}\int_{\mathbb{R}^{3}}V(x)u^{2}dx+\frac{1}{4}N(s(u)u)-\frac{1}{2_{s}^{\ast}}\|s(u)u\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}\nonumber\\ &=I(s(u)u). \end{align} Since $u\in \mathcal{N}$, it is easy to see that $I(tu)$ attains its maximum at $t=1$, then for any $t>0$, \begin{align}\label{attain} I(tu)\leq I(u). \end{align} By (\ref{inequality}), (\ref{attain}), we have \begin{align}\label{first} m_{\infty}\leq m. \end{align} Assume that $w$ is a positive solution of (\ref{linear}) centered at zero, $(z_{n})_{n}\subset \mathbb{R}^{3}$ satisfying $|z_{n}|\rightarrow \infty$ as $n\rightarrow \infty$, $w_{n}(\cdot)=w(\cdot-z_{n})$ and $t_{n}=t(w_{n})$. It is clear that $\|w_{n}\|_{D^{s,2}}=\|w\|_{D^{s,2}}$, $\|w_{n}\|_{2_{s}^{\ast}}=\|w\|_{2_{s}^{\ast}}$ and $w_{n}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$ as $n\rightarrow \infty$. It follows from Lemma \ref{high} that \begin{align}\label{nw} N(w_{n})\rightarrow 0. \end{align} Since $D^{s,2}(\mathbb{R}^{3})\hookrightarrow L^{2_{s}^{\ast}}(\mathbb{R}^{3})$, then $w_{n}^{2}\rightharpoonup 0$ in $L^{\frac{2_{s}^{\ast}}{2}}(\mathbb{R}^{3})$, this together with $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$ yield that \begin{align}\label{vw} \int_{\mathbb{R}^{3}}V(x)w_{n}^{2}\rightarrow 0. \end{align} By (\ref{nw}), (\ref{vw}), we have \begin{align}\label{iun} I(u_{n})=\frac{t_{n}^{2}}{2}\|w\|_{D^{s,2}}+\frac{t_{n}^{2}}{2}o_{n}(1)+\frac{t_{n}^{4}}{4}o_{n}(1)-\frac{t_{n}^{2_{s}^{\ast}}}{2_{s}^{\ast}}\|w\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}. 
\end{align} Since $w_{n}\in \mathcal{N}_{\infty}$ and $t_{n}w_{n}\in \mathcal{N}$, then \begin{align}\label{w} \|w_{n}\|_{D^{s,2}}^{2}=\|w_{n}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}, \end{align} and \begin{align}\label{tn} t_{n}^{2}\|w_{n}\|_{D^{s,2}}^{2}+t_{n}^{2}\int_{\mathbb{R}^{3}}V(x)w_{n}^{2}dx+t_{n}^{4}N(w_{n})=t_{n}^{2_{s}^{\ast}}\|w_{n}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}. \end{align} From (\ref{w}) and (\ref{tn}), it follows that \begin{align}\label{imply} (1-t_{n}^{2_{s}^{\ast}-2})\|w_{n}\|_{D^{s,2}}^{2}+\int_{\mathbb{R}^{3}}V(x)w_{n}^{2}dx+t_{n}^{2}N(w_{n})=0. \end{align} By (\ref{nw}), (\ref{vw}) and (\ref{tn}), for sufficiently large $n$, we have \begin{align*} \|w\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}t_{n}^{2_{s}^{\ast}-2}\leq \|w\|_{D^{s,2}}^{2}+o_{n}(1)+t_{n}^{2}o_{n}(1). \end{align*} Then $(t_{n})_{n}$ is bounded. This together with $\|w_{n}\|_{D^{s,2}}=\|w\|_{D^{s,2}}$ and (\ref{imply}) yield that $\lim_{n\rightarrow \infty}t_{n}=1$. By (\ref{iun}), we see that $\lim_{n\rightarrow \infty}I(u_{n})=m_{\infty}$. Therefore by (\ref{inf}) and (\ref{first}), $m=m_{\infty}$. Now we will prove that $m$ can't be attained. Arguing by contradiction, we let $\bar{u}\in \mathcal{N}$ be a function such that $I(\bar{u})=m=m_{\infty}$. Then, since $s(\bar{u})\bar{u}\in \mathcal{N}_{\infty}$, we have \begin{align*} m_{\infty}&\leq I_{\infty}(s(\bar{u})\bar{u})\\ &\leq \frac{1}{2}\|s(\bar{u})\bar{u}\|_{D^{s,2}}^{2}+\frac{s^{2}(\bar{u})}{2}\int_{\mathbb{R}^{3}}V(x)\bar{u}^{2}dx+\frac{1}{4}N(s(\bar{u})\bar{u})-\frac{1}{2_{s}^{\ast}}\|s(\bar{u})\bar{u}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}\\ &\leq I(\bar{u})=m_{\infty}. \end{align*} We deduce that \begin{align*} \frac{s^{2}(\bar{u})}{2}\int_{\mathbb{R}^{3}}V(x)\bar{u}^{2}dx=0, \ N(s(\bar{u})\bar{u})=0, \ s(\bar{u})=1, \end{align*} then $\bar{u}\in \mathcal{N}_{\infty}$, $I_{\infty}(\bar{u})=m_{\infty}$. Therefore $\bar{u}$ admits the form (\ref{form}), contradicting $\int_{\mathbb{R}^{3}}V(x)\bar{u}^{2}dx=0$ and $N(\bar{u})=0$. \end{proof} \section{\textbf{A compactness theorem}} \begin{lemma}\label{yang}(see Lemma 2.5 in \cite{Yang}) Let $\{u_{n}\}\subset H_{loc}^{s}(\mathbb{R}^{N})$ be a bounded sequence of functions such that $u_{n}\rightharpoonup 0$ in $H^{s}(\mathbb{R}^{N})$. Suppose that there exist a bounded open set $Q\subset \mathbb{R}^{N}$ and a positive constant $\gamma>0$ such that \begin{align*} \int_{Q}|(-\triangle)^{s}u_{n}|dx\geq \gamma>0, \ \int_{Q}|u_{n}|^{2_{s}^{\ast}}dx\geq \gamma>0. \end{align*} Moreover suppose that \begin{align*} (-\Delta)^{s}u_{n}-|u_{n}|^{2_{s}^{\ast}-2}u_{n}=\chi_{n}, \end{align*} where $\chi_{n}\in H^{-s}(\mathbb{R}^{N})$ and \begin{align*} |\langle\chi_{n},\varphi\rangle|\leq \varepsilon_{n}\|\varphi\|_{H^{s}(\mathbb{R}^{N})}, \ \forall \ \varphi\in C_{0}^{\infty}(U), \end{align*} where $U$ is an open neighborhood of $Q$ and $\varepsilon_{n}$ is a sequence of positive numbers converging to 0. Then there exist a sequence of points $\{y_{n}\}\subset \mathbb{R}^{N}$ and a sequence of positive numbers $\{\sigma_{n}\}$ such that \begin{align*} v_{n}:=\sigma_{n}^{\frac{N-2s}{2}}u_{n}(\sigma_{n}x+y_{n}) \end{align*} converges weakly in $D^{s,2}(\mathbb{R}^{N})$ to a nontrivial solution of \begin{align*} (-\Delta)^{s}u=|u|^{2_{s}^{\ast}-2}, \ u\in D^{s,2}(\mathbb{R}^{N}), \end{align*} morover, \begin{align*} y_{n}\rightarrow \bar{y}\in \bar{Q} \ \mbox{and} \ \sigma_{n}\rightarrow 0. 
\end{align*} \end{lemma} \begin{lemma}\label{unps} Let $\{u_{n}\}$ be a Palais-Smale sequence for $I$, such that $u_{n}\in C_{0}^{\infty}(\mathbb{R}^{3})$ and \begin{align*} &u_{n}\rightharpoonup 0 \ weakly \ in \ D^{s,2}(\mathbb{R}^{3})\\ &u_{n}\nrightarrow 0 \ strongly \ in \ D^{s,2}(\mathbb{R}^{3}). \end{align*} Then there exist a sequence of points $\{y_{n}\}\subset \mathbb{R}^{3}$ and a sequence of positive numbers $\{\sigma_{n}\}$ such that \begin{align*} v_{n}(x)=\sigma_{n}^{\frac{3-2s}{2}}u_{n}(\sigma_{n}x+y_{n}) \end{align*} converges weakly in $D^{s,2}(\mathbb{R}^{3})$ to a nontrival solution of (\ref{linear}) and \begin{align}\label{iun1} I(u_{n})=I_{\infty}(v)+I_{\infty}(v_{n}-v)+o(1) \end{align} \begin{align}\label{iun2} \|u_{n}\|_{D^{s,2}}^{2}&=\|v\|_{D^{s,2}}^{2}+\|v_{n}-v\|_{D^{s,2}}^{2}+o(1). \end{align} \end{lemma} \begin{proof} Since $u_{n}\rightharpoonup 0 \ in \ D^{s,2}(\mathbb{R}^{3})$, by (\ref{vw}) and (2) of Lemma \ref{high}, we get \begin{align}\label{get} \int_{\mathbb{R}^{3}}V(x)u_{n}^{2}\rightarrow 0, \ \ N(u_{n})\rightarrow 0. \end{align} For any $h\in D^{s,2}(\mathbb{R}^{3})$, by the H\"{o}lder inequality, \begin{align*} \int_{\mathbb{R}^{3}}V(x)u_{n}hdx\leq \left(\int_{\mathbb{R}^{3}}(V(x)u_{n})^{\frac{6}{3+2s}} dx\right)^{\frac{3+2s}{6}}\left(\int_{\mathbb{R}^{3}}|h|^{2_{s}^{\ast}}dx\right)^{\frac{1}{2_{s}^{\ast}}}. \end{align*} From $u_{n}\rightharpoonup 0$ in $L^{2_{s}^{\ast}}(\mathbb{R}^{3})$, we have $u_{n}^{\frac{6}{3+2s}}\rightharpoonup 0$ in $L^{\frac{3+2s}{3-2s}}(\mathbb{R}^{3})$. This together with $V^{\frac{6}{3+2s}} \in L^{\frac{3+2s}{4s}}(\mathbb{R}^{3})$ yield that \begin{align*} \int_{\mathbb{R}^{3}}V^{\frac{6}{3+2s}}u_{n}^{\frac{6}{3+2s}}dx\rightarrow 0. \end{align*} Since $D^{s,2}(\mathbb{R}^{3})\hookrightarrow L^{2_{s}^{\ast}}(\mathbb{R}^{3})$, then $\|h\|_{2_{s}^{\ast}}\leq C\|h\|_{D^{s,2}}$. So \begin{align}\label{weak} \int_{\mathbb{R}^{3}}V(x)u_{n}hdx=o(1)\|h\|_{D^{s,2}}. \end{align} By the similar argument as that in the proof of (2) in Lemma \ref{high}, we have \begin{align}\label{strong} \int_{\mathbb{R}^{3}}K(x)\phi_{u_{n}}^{s}u_{n}hdx=o(1)\|h\|_{D^{s,2}}. \end{align} Therefore \begin{align}\label{therefore1} I(u_{n})=I_{\infty}(u_{n})+o(1), \end{align} \begin{align}\label{therefore2} I'(u_{n})&=I'_{\infty}(u_{n})+o(1)=o(1). \end{align} Put $\sigma_{n}=\|u_{n}\|_{2}^{1/s}$ and define \begin{align*} \tilde{u}_{n}(x)=\sigma_{n}^{\frac{3-2s}{2}}u_{n}(\sigma_{n}x). \end{align*} From Proposition 3.6 in \cite{DPV}, up to a positive constant, it follows that \begin{align}\label{equivalent} \|(-\Delta)^{\frac{s}{2}}u\|_{2}^{2}=\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{3+2s}}dxdy. \end{align} Then we can get \begin{align}\label{two norm} \|\tilde{u}_{n}\|_{D^{s,2}}=\|u_{n}\|_{D^{s,2}}, \ \|\tilde{u}_{n}\|_{2_{s}^{\ast}}=\|u_{n}\|_{2_{s}^{\ast}}. \end{align} By the invariance of these two norms with respect to rescaling $u\rightarrow r^{\frac{3-2s}{2}}u(r\cdot)$, note that (\ref{therefore2}), we have \begin{align}\label{recaling} I_{\infty}(\tilde{u}_{n})=I_{\infty}(u_{n}), \ I'_{\infty}(\tilde{u}_{n})=I'_{\infty}(u_{n})=o(1). \end{align} By the scale change, \begin{align}\label{one} \|\tilde{u}_{n}\|_{2}=1. \end{align} If $\tilde{u}_{n}\rightharpoonup \tilde{u}\neq 0$ in $D^{s,2}(\mathbb{R}^{3})$, we are done: the wanted functions $v_{n}$ and $v$ are precisely $\tilde{u}_{n}$ and $\tilde{u}$, and $y_{n}=0$. 
In fact, since $\tilde{u}_{n}\rightharpoonup \tilde{u}$ in $D^{s,2}(\mathbb{R}^{3})$, by the Br$\acute{\mbox{e}}$zis-Lieb Lemma \cite{Willem}, \begin{align}\label{Brezis} I_{\infty}(\tilde{u}_{n}-\tilde{u})=I_{\infty}(\tilde{u}_{n})-I_{\infty}(\tilde{u})+o(1). \end{align} From (\ref{therefore1}), (\ref{therefore2}), (\ref{recaling}) and (\ref{Brezis}), it follows that \begin{align}\label{that} I(u_{n})=I_{\infty}(\tilde{u})+I_{\infty}(\tilde{u}_{n}-\tilde{u})+o(1). \end{align} By (\ref{two norm}), \begin{align*} \|u_{n}\|_{D^{s,2}}^{2}=\|\tilde{u}_{n}\|_{D^{s,2}}^{2}=\|\tilde{u}\|_{D^{s,2}}^{2}+\|\tilde{u}_{n}-\tilde{u}\|_{D^{s,2}}^{2}+o(1). \end{align*} If $\tilde{u}_{n}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$. We cover $\mathbb{R}^{3}$ with balls of radius 1 in such a way that each point of $\mathbb{R}^{3}$ is contained in at most 4 balls. Set \begin{align*} d_{n}=\sup_{B_{1}}|\tilde{u}_{n}|_{L^{2_{s}^{\ast}}(B_{1})}=\sup_{x\in \mathbb{R}^{3}}\left(\int_{B_{1}(x)}|\tilde{u}_{n}|^{2_{s}^{\ast}}\right)^{\frac{1}{2_{s}^{\ast}}}. \end{align*} We claim that there exists $\gamma>0$ such that \begin{align}\label{dn} d_{n}\geq \gamma>0, \ \forall \ n\in \mathbb{N}. \end{align} Otherwise, by Lemma 2.2 in \cite{DSS}, we have \begin{align}\label{ast} \tilde{u}_{n}\rightarrow 0 \ \ \mbox{in} \ \ L^{2_{s}^{\ast}}(\mathbb{R}^{3}). \end{align} By (\ref{therefore2}), $\langle I'_{\infty}(\tilde{u}_{n}), \tilde{u}_{n}\rangle=o(1)$. This together with (\ref{ast}) yield that \begin{align*} \|\tilde{u}_{n}\|_{D^{s,2}}^{2}=\|\tilde{u}_{n}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}+o(1)=o(1). \end{align*} Then by (\ref{two norm}), we have $\|u_{n}\|_{D^{s,2}}=o(1)$. This contradicts $u_{n}\nrightarrow 0$ strongly in $D^{s,2}(\mathbb{R}^{3})$. We choose $\tilde{y}_{n}$ such that \begin{align}\label{yn} \int_{B_{1}(\tilde{y}_{n})}|\tilde{u}_{n}|^{2_{s}^{\ast}}dx\geq d_{n}-\frac{1}{n} \end{align} and let \begin{align}\label{wn} w_{n}=\tilde{u}_{n}(x+\tilde{y}_{n}). \end{align} By (\ref{dn}), (\ref{yn}) and (\ref{wn}), \begin{align}\label{wngamma} \int_{B_{1}(0)}|w_{n}|^{2_{s}^{\ast}}dx\geq \frac{\gamma}{2}. \end{align} If $w_{n}\rightharpoonup w\neq 0$ in $D^{s,2}(\mathbb{R}^{3})$, we are done: the wanted functions $v_{n}$ and $v$ are precisely $w_{n}$ and $w$, and $y_{n}=\sigma_{n}\tilde{y}_{n}$. Suppose that $w_{n}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$. By Lemma \ref{Hardy}, $w_{n}\rightarrow 0$ in $L_{loc}^{2}(\mathbb{R}^{3})$. Then by (\ref{wngamma}), we have \begin{align*} \int_{B_{1}(0)}|(-\Delta)^{\frac{s}{2}}w_{n}|^{2}dx&=\|w_{n}\|_{H^{s}(B_{1}(0))}^{2}-\|w_{n}\|_{L^{2}(B_{1}(0))}^{2}\geq C\|w_{n}\|_{L^{2_{s}^{\ast}}(B_{1}(0))}^{2}+o(1)\\ &\geq C\left(\frac{\gamma}{2}\right)^{\frac{2}{2_{s}^{\ast}}}+o(1)>0. \end{align*} By (\ref{recaling}), (\ref{wn}), the invariance of the $H^{s}(\mathbb{R}^{3})$ and $L^{2_{s}^{\ast}}(\mathbb{R}^{3})$ norms by translation, we apply Lemma \ref{yang} to $w_{n}$ and to claim that there exists a sequence $\eta_{n}\rightarrow 0$ of numbers and a sequence $y_{n}$ of points: $y_{n}\rightarrow \bar{y}\in \overline{B_{1}(0)}$ such that \begin{align*} v_{n}=\eta_{n}^{\frac{3-2s}{2}}w_{n}(\eta_{n}x+y_{n})\rightharpoonup v\neq 0. \end{align*} By (\ref{two norm}), it is easy to see that \begin{align}\label{vu} \|v_{n}\|_{D^{s,2}}=\|u_{n}\|_{D^{s,2}}, \ \|v_{n}\|_{2_{s}^{\ast}}=\|u_{n}\|_{2_{s}^{\ast}}. 
\end{align} Since $v_{n}\rightharpoonup v$ in $D^{s,2}(\mathbb{R}^{3})$, by the Brezis-Lieb Lemma \cite{Willem}, (\ref{therefore1}), (\ref{therefore2}) and (\ref{vu}), we have \begin{align*} I(u_{n})&=I_{\infty}(u_{n})+o(1)=I_{\infty}(v_{n})+o(1)=I_{\infty}(v)+I_{\infty}(v_{n}-v)+o(1),\\ \|u_{n}\|_{D^{s,2}}^{2}&=\|v_{n}\|_{D^{s,2}}^{2}=\|v\|_{D^{s,2}}^{2}+\|v_{n}-v\|_{D^{s,2}}^{2}+o(1). \end{align*} So $v_{n}$ and $v$ are the wanted functions. \end{proof} \begin{theorem}\label{compactness} Suppose that $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $V\geq 0$ and $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$, $K\geq 0$. Let $\{u_{n}\}$ be a Palais-Smale sequence of $I$ at level $c$. Then up to a subsequence, $u_{n}\rightharpoonup \bar{u}$. Moreover, there exist a number $k\in \mathbb{N}$, $k$ sequences of points $\{y_{n}^{i}\}_{n}\subset \mathbb{R}^{3}$, $1\leq i\leq k$, $k$ sequences of positive numbers $\{\sigma_{n}^{i}\}_{n}$, $1\leq i\leq k$, and $k$ functions $u^{i}\in D^{s,2}(\mathbb{R}^{3})$, $1\leq i\leq k$, such that \begin{align*} u_{n}(x)-\bar{u}(x)-\sum_{i=1}^{k}\frac{1}{(\sigma_{n}^{i})^{\frac{3-2s}{2}}}u^{i}\left(\frac{x-y_{n}^{i}}{\sigma_{n}^{i}}\right)\rightarrow 0 \ \ \mbox{in} \ D^{s,2}(\mathbb{R}^{3}), \end{align*} where $u^{i}$, $1\leq i \leq k$, are solutions of (\ref{linear}). Moreover, as $n\rightarrow \infty$, \begin{align*} \|u_{n}\|_{D^{s,2}}^{2}\rightarrow \|\bar{u}\|_{D^{s,2}}^{2}+\sum_{i=1}^{k}\|u^{i}\|_{D^{s,2}}^{2} \end{align*} and \begin{align*} I(u_{n})\rightarrow I(\bar{u})+\sum_{i=1}^{k}I_{\infty}(u^{i}). \end{align*} \end{theorem} \begin{proof} Since $u_{n}$ is a Palais-Smale sequence for $I$, it is easy to show that $u_{n}$ is bounded in $D^{s,2}(\mathbb{R}^{3})$ and in $L^{2_{s}^{\ast}}(\mathbb{R}^{3})$. Up to a subsequence, we assume that $u_{n}\rightharpoonup \bar{u}$ in $D^{s,2}(\mathbb{R}^{3})$ and in $L^{2_{s}^{\ast}}(\mathbb{R}^{3})$ as $n\rightarrow \infty$. By the weak convergence, we have $I'(\bar{u})=0$. Since $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $V\geq 0$, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and $K\geq 0$, by Proposition 5.1.1 in \cite{Dipierro} and Lemma 2.7 in \cite{Teng}, we have $\bar{u}\in L^{\infty}(\mathbb{R}^{3})$. Set $z_{n}^{1}=u_{n}-\bar{u}$; then $z_{n}^{1}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$. If $z_{n}^{1}\rightarrow 0$ in $D^{s,2}(\mathbb{R}^{3})$, we are done. Suppose that $z_{n}^{1}=u_{n}-\bar{u}\nrightarrow 0$ in $D^{s,2}(\mathbb{R}^{3})$. For any $v\in C_{0}^{\infty}(\mathbb{R}^{3})$, \begin{align}\label{zh} \langle I'(z_{n}^{1}),v\rangle&=\int_{\mathbb{R}^{3}}(-\Delta)^{\frac{s}{2}}z_{n}^{1}(-\Delta)^{\frac{s}{2}}vdx+\int_{\mathbb{R}^{3}}V(x)z_{n}^{1}vdx +\int_{\mathbb{R}^{3}}K(x)\phi_{z_{n}^{1}}^{s}z_{n}^{1}vdx\nonumber\\ &\quad-\int_{\mathbb{R}^{3}}|z_{n}^{1}|^{2_{s}^{\ast}-2}z_{n}^{1}vdx\nonumber\\ &=\langle I'(u_{n}), v\rangle-\langle I'(\bar{u}), v\rangle+\int_{\mathbb{R}^{3}}K(x)(\phi_{z_{n}^{1}}^{s}z_{n}^{1}+\phi_{\bar{u}}^{s}\bar{u}-\phi_{u_{n}}^{s}u_{n})vdx\nonumber\\ &\quad-\int_{\mathbb{R}^{3}}(|z_{n}^{1}|^{2_{s}^{\ast}-2}z_{n}^{1}+|\bar{u}|^{2_{s}^{\ast}-2}\bar{u}-|u_{n}|^{2_{s}^{\ast}-2}u_{n})vdx. \end{align} Since $\bar{u}\in L^{\infty}(\mathbb{R}^{3})$, by arguments similar to those of Lemma 8.9 in \cite{Willem}, we get \begin{align}\label{invariance} |u_{n}-\bar{u}|^{2_{s}^{\ast}-2}(u_{n}-\bar{u})+|\bar{u}|^{2_{s}^{\ast}-2}\bar{u}-|u_{n}|^{2_{s}^{\ast}-2}u_{n}\rightarrow 0 \ \mbox{in}\ (D^{s,2}(\mathbb{R}^{3}))'.
\end{align} Since $u_{n}\rightharpoonup \bar{u}$ in $D^{s,2}(\mathbb{R}^{3})$, it is easy to see that \begin{align*} &\int_{\mathbb{R}^{3}}K(x)\phi_{z_{n}^{1}}^{s}z_{n}^{1}vdx=o(1)\|v\|_{D^{s,2}},\\ &\int_{\mathbb{R}^{3}}K(x)(\phi_{\bar{u}}^{s}\bar{u}-\phi_{u_{n}}^{s}u_{n})vdx=o(1)\|v\|_{D^{s,2}}. \end{align*} Then \begin{align}\label{k} \int_{\mathbb{R}^{3}}K(x)(\phi_{z_{n}^{1}}^{s}z_{n}^{1}+\phi_{\bar{u}}^{s}\bar{u}-\phi_{u_{n}}^{s}u_{n})vdx=o(1)\|v\|_{D^{s,2}}. \end{align} From (\ref{zh}), (\ref{invariance}) and (\ref{k}), it follows that $\langle I'(z_{n}^{1}),v\rangle=o(1)\|v\|_{D^{s,2}}$, and hence $z_{n}^{1}$ is a Palais-Smale sequence for $I$.\\ By the definition of $D^{s,2}(\mathbb{R}^{3})$, for any $n$, there exists $\tilde{z}_{n}^{1}\in C_{0}^{\infty}(\mathbb{R}^{3})$ such that \begin{align*} \|z_{n}^{1}-\tilde{z}_{n}^{1}\|_{D^{s,2}}< \frac{1}{n} \ \ \mbox{and} \ \ \|I'(z_{n}^{1})-I'(\tilde{z}_{n}^{1})\|_{(D^{s,2}(\mathbb{R}^{3}))'}< \frac{1}{n}. \end{align*} Thus $\tilde{z}_{n}^{1}$ is also a Palais-Smale sequence, and \begin{align}\label{strongly} z_{n}^{1}-\tilde{z}_{n}^{1}\rightarrow 0 \ \ \mbox{in} \ D^{s,2}(\mathbb{R}^{3}), \end{align} \begin{align}\label{weakly} \tilde{z}_{n}^{1}\rightharpoonup 0 \ \ \mbox{in} \ D^{s,2}(\mathbb{R}^{3}). \end{align} By a similar argument to that in the proof of Lemma \ref{high}, we get that, as $n\rightarrow \infty$, \begin{align}\label{verify} \left\{\begin{aligned} &\int_{\mathbb{R}^{3}}K(x)\tilde{z}_{n}^{1}(x)(z_{n}^{1}-\tilde{z}_{n}^{1})(x)dx\rightarrow 0\\ &\int_{\mathbb{R}^{3}}K(x)\bar{u}(x)\tilde{z}_{n}^{1}(x)dx\rightarrow 0\\ &\int_{\mathbb{R}^{3}}K(x)\bar{u}(x)(z_{n}^{1}-\tilde{z}_{n}^{1})(x)dx\rightarrow 0. \end{aligned}\right. \end{align} By (\ref{strongly}), (\ref{weakly}), (\ref{verify}) and the Brezis-Lieb Lemma, we conclude that \begin{align}\label{conclude1} \|\tilde{z}_{n}^{1}\|_{D^{s,2}}^{2}&=\|u_{n}\|_{D^{s,2}}^{2}-\|\bar{u}\|_{D^{s,2}}^{2}+o(1), \end{align} \begin{align}\label{conclude2} I(\tilde{z}_{n}^{1})&=I(u_{n})-I(\bar{u})+o(1), \end{align} \begin{align}\label{conclude3} I'(\tilde{z}_{n}^{1})&=I'(u_{n})-I'(\bar{u})+o(1). \end{align} Then $\tilde{z}_{n}^{1}$ is a Palais-Smale sequence for $I$. Since $z_{n}^{1}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$ and $z_{n}^{1}\nrightarrow 0$, we have that $\tilde{z}_{n}^{1}\rightharpoonup 0$ in $D^{s,2}(\mathbb{R}^{3})$ and $\tilde{z}_{n}^{1}\nrightarrow 0$. Thus $\tilde{z}_{n}^{1}$ satisfies the assumptions of Lemma \ref{unps}, so there exist a sequence of points $\{x_{n}^{1}\}$ and a sequence of positive numbers $\{\eta_{n}^{1}\}$ such that \begin{align*} v_{n}^{1}(x)=(\eta_{n}^{1})^{\frac{3-2s}{2}}\tilde{z}_{n}^{1}(\eta_{n}^{1}x+x_{n}^{1}) \end{align*} converges weakly in $D^{s,2}(\mathbb{R}^{3})$ to a nontrivial solution $u^{1}$ of (\ref{linear}) and, by (\ref{iun1}), (\ref{iun2}), (\ref{conclude1}) and (\ref{conclude2}), we have \begin{align}\label{vnuz1} I_{\infty}(v_{n}^{1}-u^{1})=I(\tilde{z}_{n}^{1})-I_{\infty}(u^{1})+o(1)=I(u_{n})-I(\bar{u})-I_{\infty}(u^{1})+o(1) \end{align} \begin{align}\label{vnuz2} \|v_{n}^{1}-u^{1}\|_{D^{s,2}}^{2}=\|\tilde{z}_{n}^{1}\|_{D^{s,2}}^{2}-\|u^{1}\|_{D^{s,2}}^{2}+o(1)=\|u_{n}\|_{D^{s,2}}^{2}-\|\bar{u}\|_{D^{s,2}}^{2}-\|u^{1}\|_{D^{s,2}}^{2}+o(1).
\end{align} Iterating this procedure, we get Palais-Smale sequences of functions \begin{align*} z_{n}^{j}=v_{n}^{j-1}-u^{j-1}, \ z_{n}^{j}\rightharpoonup 0 \ \mbox{in} \ D^{s,2}(\mathbb{R}^{3}), \end{align*} and \begin{align*} \tilde{z}_{n}^{j}\in C_{0}^{\infty}(\mathbb{R}^{3}), \ \tilde{z}_{n}^{j}\rightharpoonup 0 \ \mbox{in} \ D^{s,2}(\mathbb{R}^{3}) \end{align*} such that $z_{n}^{j}=\tilde{z}_{n}^{j}+(z_{n}^{j}-\tilde{z}_{n}^{j})$ and \begin{align}\label{znj} z_{n}^{j}-\tilde{z}_{n}^{j}\rightarrow 0 \ \ \mbox{in} \ \ D^{s,2}(\mathbb{R}^{3}). \end{align} We can obtain sequences of points $\{x_{n}\}^{j}\subset \mathbb{R}^{3}$ and sequences of numbers $\eta_{n}^{j}$ such that \begin{align*} v_{n}^{j}(x)=(\eta_{n}^{j})^{\frac{3-2s}{2}}\tilde{z}_{n}^{j}(\eta_{n}^{j}x+x_{n}^{j}) \end{align*} converges weakly in $D^{s,2}(\mathbb{R}^{3})$ to a nontrivial solution $u^{j}$ of (\ref{linear}). Furthermore, by (\ref{vnuz1}), (\ref{vnuz2}) and (\ref{znj}), we have \begin{align*} I_{\infty}(v_{n}^{j})&=I_{\infty}(\tilde{z}_{n}^{j})=I_{\infty}(v_{n}^{j-1}-u^{j-1})+o(1)\\ &=I(u_{n})-I(\bar{u})-\sum_{i=1}^{j-1}I_{\infty}(u^{i})+o(1) \end{align*} and \begin{align}\label{vnjd} \|v_{n}^{j}\|_{D^{s,2}}^{2}&=\|\tilde{z}_{n}^{j}\|_{D^{s,2}}^{2}=\|v_{n}^{j-1}-u^{j-1}\|_{D^{s,2}}^{2}+o(1)=\|v_{n}\|_{D^{s,2}}^{2}-\|u^{j-1}\|_{D^{s,2}}^{2}+o(1)\nonumber\\ &=\|u_{n}\|_{D^{s,2}}^{2}-\|\bar{u}\|_{D^{s,2}}^{2}-\sum_{i=1}^{j-1}\|u^{i}\|_{D^{s,2}}^{2}+o(1). \end{align} By the definition of $S_{s}$, we have $\|u^{j}\|_{D^{s,2}}^{2}\geq S_{s}\|u^{j}\|_{2_{s}^{\ast}}^{2}$. Since $u^{j}$ is a solution of (\ref{linear}), then we have \begin{align}\label{ujd} \|u^{j}\|_{D^{s,2}}^{2}\geq S_{s}^{\frac{3}{2s}}. \end{align} This together with the boundedness of $u_{n}$ and (\ref{vnjd}) yield that the iteration must terminate at some $k>0$. The proof is complete. \end{proof} \begin{corollary}\label{und} Assume that $\{u_{n}\}\subset D^{s,2}(\mathbb{R}^{3})$ satisfies the assumption of the Theorem \ref{compactness} with $c\in (\frac{s}{3}S_{s}^{\frac{3}{2s}}, \frac{2s}{3}S_{s}^{\frac{3}{2s}})$, then $\{u_{n}\}$ contains a subsequence strongly convergent in $D^{s,2}(\mathbb{R}^{3})$. \end{corollary} \begin{proof} By Lemma \ref{minimization}, any nontrival solution $u$ have energy $I(u)>\frac{s}{3}S_{s}^{\frac{3}{2s}}$. From (3.15) in \cite{Cora}, any sign-changing solution $v$ of (\ref{linear}) satisfies $I_{\infty}(v)\geq \frac{2s}{3}S_{s}^{\frac{3}{2s}}$. Since any positive solution of (\ref{linear}) has energy $\frac{s}{3}S_{s}^{\frac{3}{2s}}$, by Theorem \ref{compactness}, we get the conclusion. \end{proof} \begin{corollary}\label{unm} If $\{u_{n}\}$ is a minimizing sequence for $I$ on $N$, then there exist a sequence of points $\{y_{n}\}\subset \mathbb{R}^{3}$, a sequence of positive numbers $\{\delta_{n}\}\subset \mathbb{R}^{+}$ and a sequence $\{w_{n}\}\subset D^{s,2}(\mathbb{R}^{3})$ such that \begin{align*} u_{n}(x)=w_{n}(x)+\psi_{\delta_{n},y_{n}}(x), \end{align*} where $\psi_{\delta_{n},y_{n}}(x)$ are functions defined in (\ref{psi}) and $w_{n}\rightarrow 0$ strongly in $D^{s,2}(\mathbb{R}^{3})$. \end{corollary} \begin{proof} The result follows from Theorem \ref{compactness} and Lemma \ref{minimization}. \end{proof} \section{\textbf{Proof of the main result}} Set \begin{align*} \sigma(x)=\left\{\begin{aligned} &0, && \mbox{if}\ |x|<1\\ &1, && \mbox{if}\ |x|\geq1 \end{aligned}\right. 
\end{align*} and define \begin{align*} \alpha: D^{s,2}(\mathbb{R}^{3})\rightarrow \mathbb{R}^{3}\times \mathbb{R^{+}} \end{align*} \begin{align*} \alpha(u)=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\left(\frac{x}{|x|},\sigma(x)\right)|(-\Delta)^{\frac{s}{2}}u|^{2}dx=(\beta(u),\gamma(u)), \end{align*} where \begin{align*} \beta(u)=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\frac{x}{|x|}|(-\Delta)^{\frac{s}{2}}u|^{2}dx, \ \ \gamma(u)=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\sigma(x)|(-\Delta)^{\frac{s}{2}}u|^{2}dx. \end{align*} \begin{lemma}\label{half} If $|y|\geq \frac{1}{2}$, then \begin{align*} \beta(\psi_{\delta,y})=\frac{y}{|y|}+o(1) \ \ \mbox{as} \ \delta\rightarrow 0. \end{align*} \end{lemma} \begin{proof} From (\ref{psi}), it follows that \begin{align*} |\nabla \psi_{\delta,y}(x)|=(3-2s)b_{s}\delta^{\frac{3-2s}{2}}\frac{|x-y|}{(\delta^{2}+|x-y|^{2})^{\frac{5-2s}{2}}}. \end{align*} By (\ref{equivalent}) and Proposition 2.2 in \cite{DPV}, we have \begin{align}\label{deltay} \int_{\mathbb{R}^{3}\backslash B_{\varepsilon}(y)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx&\leq C\int_{\mathbb{R}^{3}\backslash B_{\varepsilon}(y)}|\nabla\psi_{\delta,y}|^{2}dx\nonumber\\ &=(3-2s)b_{s}C\delta^{\frac{3-2s}{2}}\int_{\mathbb{R}^{3}\backslash B_{\varepsilon}(y)}\frac{|x-y|^{2}}{(\delta^{2}+|x-y|^{2})^{5-2s}}dx\nonumber\\ &=C_{1}\delta^{\frac{3-2s}{2}}\int_{\varepsilon}^{+\infty}\frac{\rho^{4}}{(\delta^{2}+\rho^{2})^{5-2s}}d\rho\nonumber\\ &\leq C_{1}\delta^{\frac{3-2s}{2}}\int_{\varepsilon}^{+\infty}\rho^{4s-6}d\rho. \end{align} By (\ref{deltay}), for every $\varepsilon>0$, there exists a $\hat{\delta}$ such that $\forall \delta\in (0,\hat{\delta}]$, \begin{align*} \frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{\varepsilon}(y)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx<\varepsilon. \end{align*} Then \begin{align}\label{minus1} \left|\beta(\psi_{\delta,y})-\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{B_{\varepsilon}(y)}\frac{x}{|x|}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx\right|<\varepsilon. \end{align} If $\varepsilon$ is small enough, for $|y|\geq \frac{1}{2}$ and $x\in B_{\varepsilon}(y)$, \begin{align*} \left|\frac{x}{|x|}-\frac{y}{|y|}\right|<2\varepsilon. \end{align*} Then by (\ref{energy}), \begin{align}\label{minus2} &\left|\frac{y}{|y|}-\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{B_{\varepsilon}(y)}\frac{x}{|x|}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}(x)|^{2}dx\right|\nonumber\\ &=\left|\frac{1}{S_{s}^{\frac{3}{2s}}}\left(\frac{y}{|y|}-\frac{x}{|x|}\right)|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}(x)|^{2}dx+\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{\varepsilon}(y)}\frac{y}{|y|}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}(x)|^{2}dx\right|\nonumber\\ &\leq \frac{2\varepsilon}{S_{s}^{\frac{3}{2s}}}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}(x)|^{2}dx+\varepsilon=3\varepsilon. \end{align} By (\ref{minus1}) and (\ref{minus2}), we have \begin{align*} \left|\beta(\psi_{\delta,y})-\frac{y}{|y|}\right|<4\varepsilon. \end{align*} \end{proof} Define \begin{align*} \mathcal{M}=\left\{u\in \mathcal{N}:\alpha(u)=(\beta(u),\gamma(u))=(0,\frac{1}{2})\right\}, \end{align*} and define \begin{align*} c_{\mathcal{M}}=\inf_{u\in \mathcal{M}}I(u). \end{align*} \begin{lemma}\label{vkc} Suppose that $V\geq 0$, $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $K\geq 0$, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and $\|V\|_{\frac{3}{2s}}+\|K\|_{\frac{6}{6s-3}}>0$. Then \begin{align*} c_{\mathcal{M}}>\frac{s}{3}S_{s}^{\frac{3}{2s}}. 
\end{align*} \end{lemma} \begin{proof} It is obvious that $c_{\mathcal{M}}\geq \frac{s}{3}S_{s}^{\frac{3}{2s}}$. To prove $c_{\mathcal{M}}>\frac{s}{3}S_{s}^{\frac{3}{2s}}$, we argue by contradiction. Suppose that there exists a sequence $\{u_{n}\}\subset \mathcal{N}$ such that \begin{align}\label{betagamma} \beta(u_{n})=0, \ \gamma(u_{n})=\frac{1}{2} \end{align} and \begin{align}\label{iunm} \lim_{n\rightarrow\infty}I(u_{n})=\frac{s}{3}S_{s}^{\frac{3}{2s}}. \end{align} By Corollary \ref{unm}, there exist a sequence of points $\{y_{n}\}\subset \mathbb{R}^{3}$, a sequence of positive numbers $\{\delta_{n}\}\subset \mathbb{R}^{+}$ and a sequence of functions $\{w_{n}\}\subset D^{s,2}(\mathbb{R}^{3})$ converging to 0 in $D^{s,2}(\mathbb{R}^{3})$ such that \begin{align*} u_{n}(x)=w_{n}(x)+\psi_{\delta_{n},y_{n}}(x). \end{align*} Since $w_{n}\rightarrow 0$ in $D^{s,2}(\mathbb{R}^{3})$, we have \begin{align}\label{wnpsi} \alpha(w_{n}+\psi_{\delta_{n},y_{n}})&=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\left(\frac{x}{|x|},\sigma(x)\right)|(-\Delta)^{\frac{s}{2}}(w_{n}+\psi_{\delta_{n},y_{n}})|^{2}dx\nonumber\\ &=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\left(\frac{x}{|x|},\sigma(x)\right)|(-\Delta)^{\frac{s}{2}}w_{n}|^{2}dx\nonumber\\ &\quad+\frac{2}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\left(\frac{x}{|x|},\sigma(x)\right)((-\Delta)^{\frac{s}{2}}w_{n},(-\Delta)^{\frac{s}{2}}\psi_{\delta_{n},y_{n}})dx\nonumber\\ &\quad+\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\left(\frac{x}{|x|},\sigma(x)\right)|(-\Delta)^{\frac{s}{2}}\psi_{\delta_{n},y_{n}}|^{2}dx\nonumber\\ &=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\left(\frac{x}{|x|},\sigma(x)\right)|(-\Delta)^{\frac{s}{2}}\psi_{\delta_{n},y_{n}}|^{2}dx+o(1)\nonumber\\ &=\alpha(\psi_{\delta_{n},y_{n}})+o(1). \end{align} From (\ref{betagamma}) and (\ref{wnpsi}), it follows that \begin{align*} \mbox{(i)} \ \ \beta(\psi_{\delta_{n},y_{n}})\rightarrow 0 \ \ \mbox{as} \ \ n\rightarrow \infty, \end{align*} \begin{align*} \mbox{(ii)} \ \ \gamma(\psi_{\delta_{n},y_{n}})\rightarrow \frac{1}{2} \ \ \mbox{as} \ n\rightarrow \infty. \end{align*} For $\delta_{n}$, up to a subsequence, one of the following cases occurs:\\ (a) $\delta_{n}\rightarrow \infty$ as $n\rightarrow \infty$;\\ (b) $\delta_{n}\rightarrow \bar{\delta}\neq 0$ as $n\rightarrow \infty$;\\ (c) $\delta_{n}\rightarrow 0$ and $y_{n}\rightarrow \bar{y}$, $|\bar{y}|<\frac{1}{2}$, as $n\rightarrow \infty$;\\ (d) $\delta_{n}\rightarrow 0$ as $n\rightarrow \infty$ and $|y_{n}|\geq \frac{1}{2}$ for $n$ large. \\ We now show that none of the possibilities (a)-(d) can occur. If (a) holds, \begin{align*} \gamma(\psi_{\delta_{n},y_{n}})&=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\sigma(x)|(-\Delta)^{\frac{s}{2}}\psi_{\delta_{n},y_{n}}|^{2}dx=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{1}(0)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta_{n},y_{n}}|^{2}dx\\ &=1-\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{B_{1}(0)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta_{n},y_{n}}|^{2}dx=1-o(1) \ \ \ \ \mbox{as} \ n\rightarrow \infty. \end{align*} This contradicts (ii). If (b) holds, then $|y_{n}|\rightarrow +\infty$; otherwise $\psi_{\delta_{n},y_{n}}$ would converge strongly in $D^{s,2}(\mathbb{R}^{3})$, so $u_{n}$ would converge strongly in $D^{s,2}(\mathbb{R}^{3})$, contradicting Lemma \ref{minimization}.
Then we have \begin{align*} \gamma(\psi_{\delta_{n},y_{n}})&=\gamma(\psi_{\bar{\delta},y_{n}})+o(1)=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\sigma(x)|(-\Delta)^{\frac{s}{2}}\psi_{\bar{\delta},y_{n}}|^{2}dx+o(1)\\ &=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\sigma(x-y_{n})|(-\Delta)^{\frac{s}{2}}\psi_{\bar{\delta},0}|^{2}dx+o(1)\\ &=1-\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{B_{1}(y_{n})}|(-\Delta)^{\frac{s}{2}}\psi_{\bar{\delta},0}|^{2}dx+o(1)\\ &=1+o(1) \ \ \ \ \mbox{as} \ n\rightarrow \infty. \end{align*} This contradicts (ii). If (c) holds, then \begin{align*} \gamma(\psi_{\delta_{n},y_{n}})&=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{1}(0)}|(-\Delta)^{\frac{s}{2}}\psi_{\bar{\delta},y_{n}}|^{2}dx\\ &=\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{1}(y_{n})}|(-\Delta)^{\frac{s}{2}}\psi_{\bar{\delta},0}|^{2}dx\\ &=o(1). \end{align*} This contradicts (ii). If (d) holds, by Lemma \ref{half}, we have \begin{align*} \beta(\psi_{\delta_{n},y_{n}})=\frac{y_{n}}{|y_{n}|}+o(1) \ \ \ \mbox{as} \ n\rightarrow \infty. \end{align*} This contradicts (i). \end{proof} Define $\theta:D^{s,2}(\mathbb{R}^{3})\backslash \{0\}\rightarrow \mathcal{N}$: \begin{align*} \theta(u)=t(u)u, \end{align*} where $t(u)$ is the unique positive number such that $t(u)u\in \mathcal{N}$. Let $T$ be the operator \begin{align*} T:\mathbb{R}^{3}\times (0,+\infty)\rightarrow D^{s,2}(\mathbb{R}^{3}) \end{align*} defined by \begin{align*} T(y,\delta)=\psi_{\delta,y}(x). \end{align*} \begin{lemma}\label{vkdelta} Assume that $V\geq 0$, $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $K\geq 0$, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$. Then for any $\varepsilon>0$, there exist $\delta_{1}=\delta_{1}(\varepsilon)$, $\delta_{2}=\delta_{2}(\varepsilon)$ such that \begin{align*} I(\theta\circ T(y,\delta))<\frac{s}{3}S_{s}^{\frac{3}{2s}}+\varepsilon \end{align*} for any $y\in \mathbb{R}^{3}$ and $\delta\in (0,\delta_{1}]\cup [\delta_{2},+\infty)$. \end{lemma} \begin{proof} Since $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, then for any $\varepsilon>0$, there exists $R>0$ such that \begin{align}\label{rbv} \left(\int_{\mathbb{R}^{3}\backslash B_{R}(0)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}<\frac{\varepsilon}{2S_{s}^{\frac{3-2s}{2s}}}. \end{align} From $\lim_{\delta\rightarrow +\infty}\sup_{y\in \mathbb{R}^{3}}|\psi_{\delta,y}|=0$, it follows that $\lim_{\delta\rightarrow +\infty}\int_{B_{R}(0)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx=0$. Therefore there exists $\delta_{2}=\delta_{2}(\varepsilon)$ such that \begin{align}\label{supdelta} \sup_{y\in \mathbb{R}^{3}}\left(\int_{B_{R}(0)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}<\frac{\varepsilon}{2\|V\|_{\frac{3}{2s}}}, \ \ \ \delta\geq \delta_{2}. 
\end{align} By (\ref{rbv}) and (\ref{supdelta}), we have \begin{align}\label{vpsi} \int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx&=\int_{B_{R}(0)}V(x)\psi_{\delta,y}^{2}dx+\int_{\mathbb{R}^{3}\backslash B_{R}(0)}V(x)\psi_{\delta,y}^{2}dx\nonumber\\ &\leq \left(\int_{B_{R}(0)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\left(\int_{B_{R}(0)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}\nonumber\\ &\quad+\left(\int_{\mathbb{R}^{3}\backslash B_{R}(0)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\left(\int_{\mathbb{R}^{3}\backslash B_{R}(0)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}\nonumber\\ &\leq \|V\|_{\frac{3}{2s}}\left(\int_{B_{R}(0)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}+\left(\int_{\mathbb{R}^{3}\backslash B_{R}(0)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2}\nonumber\\ &<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. \end{align} Since $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, for any $\varepsilon>0$, there exists $r>0$ small enough such that \begin{align}\label{supy} \sup_{y\in \mathbb{R}^{3}}\left(\int_{B_{r}(y)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}<\frac{\varepsilon}{2S_{s}^{\frac{3-2s}{2s}}}. \end{align} By (\ref{psi}), \begin{align*} \int_{\mathbb{R}^{3}\backslash B_{r}(y)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx&=C\delta^{3}\int_{\mathbb{R}^{3}\backslash B_{r}(y)}\frac{1}{(\delta^{2}+|x-y|^{2})^{3}}dx\\ &=\widetilde{C}\delta^{3}\int_{r}^{+\infty}\frac{\rho^{2}}{(\delta^{2}+\rho^{2})^{3}}d\rho, \end{align*} so there is a $\delta_{1}=\delta_{1}(\varepsilon)$ such that \begin{align}\label{rbry} \left(\int_{\mathbb{R}^{3}\backslash B_{r}(y)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}<\frac{\varepsilon}{2\|V\|_{\frac{3}{2s}}}, \ \ \ 0<\delta\leq \delta_{1}. \end{align} By (\ref{supy}) and (\ref{rbry}), we get \begin{align}\label{vpsidelta} \int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx&=\int_{B_{r}(y)}V(x)\psi_{\delta,y}^{2}dx+\int_{\mathbb{R}^{3}\backslash B_{r}(y)}V(x)\psi_{\delta,y}^{2}dx\nonumber\\ &\leq \left(\int_{B_{r}(y)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\left(\int_{B_{r}(y)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}\nonumber\\ &\quad+\left(\int_{\mathbb{R}^{3}\backslash B_{r}(y)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\left(\int_{\mathbb{R}^{3}\backslash B_{r}(y)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}\nonumber\\ &\leq \left(\int_{B_{r}(y)}|V(x)|^{\frac{3}{2s}}dx\right)^{\frac{2s}{3}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2}\nonumber\\ &\quad+\|V\|_{\frac{3}{2s}}\left(\int_{\mathbb{R}^{3}\backslash B_{r}(y)}|\psi_{\delta,y}|^{2_{s}^{\ast}}dx\right)^{\frac{3-2s}{3}}\nonumber\\ &<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon. \end{align} From (\ref{vpsi}) and (\ref{vpsidelta}), it follows that \begin{align}\label{rvdy} \int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx<\varepsilon, \ \mbox{for any} \ y\in \mathbb{R}^{3} \ \mbox{and} \ \ \delta\in (0,\delta_{1}]\cup [\delta_{2},+\infty). \end{align} Similarly, we can get that \begin{align}\label{npsidelta} N(\psi_{\delta,y})<\varepsilon, \ \mbox{for any} \ y\in \mathbb{R}^{3} \ \mbox{and} \ \ \delta\in (0,\delta_{1}]\cup [\delta_{2},+\infty). \end{align} Since $\psi_{\delta,y}\in \mathcal{N}_{\infty}$, it is easy to show that there exists $t_{\delta,y}=t(\psi_{\delta,y})$ such that $t_{\delta,y}\psi_{\delta,y}\in \mathcal{N}$.
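For completeness, we sketch why such a $t_{\delta,y}$ exists and is unique; this elementary verification is not part of the original argument, the shorthand $a$, $n_{0}$, $e_{0}$ is introduced only for this computation, and we use that $2_{s}^{\ast}>4$ (equivalently, $s>\frac{3}{4}$), which is also implicit in (\ref{upp}) below. Writing $a=\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx$, $n_{0}=N(\psi_{\delta,y})$ and $e_{0}=\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}$, we have \begin{align*} t\psi_{\delta,y}\in \mathcal{N} \ \Longleftrightarrow\ a+t^{2}n_{0}=t^{2_{s}^{\ast}-2}e_{0} \ \Longleftrightarrow\ \frac{a}{t^{2}}+n_{0}-t^{2_{s}^{\ast}-4}e_{0}=0, \end{align*} and, since $a,e_{0}>0$, $n_{0}\geq 0$ and $2_{s}^{\ast}-4>0$, the left-hand side of the last identity is strictly decreasing in $t>0$, tends to $+\infty$ as $t\rightarrow 0^{+}$ and to $-\infty$ as $t\rightarrow +\infty$, so it has exactly one zero $t_{\delta,y}>0$.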
By an argument analogous to (\ref{nw})-(\ref{imply}), we verify that for any $y\in \mathbb{R}^{3}$, \begin{align}\label{tdelta} t_{\delta,y}\rightarrow 1 \ \ \ \mbox{as} \ \ \delta\rightarrow 0 \ \ \mbox{or} \ \ \delta\rightarrow+\infty. \end{align} By (\ref{rvdy}), (\ref{npsidelta}) and (\ref{tdelta}), we obtain \begin{align*} I(\theta\circ T(y,\delta))&=\frac{1}{2}t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\frac{1}{2}t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx+\frac{1}{4}t_{\delta,y}^{4}N(\psi_{\delta,y})\\ &\quad-\frac{1}{2_{s}^{\ast}}t_{\delta,y}^{2_{s}^{\ast}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}\\ &=I_{\infty}(t_{\delta,y}\psi_{\delta,y})+\frac{1}{2}t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx+\frac{1}{4}t_{\delta,y}^{4}N(\psi_{\delta,y})\\ &<I_{\infty}(t_{\delta,y}\psi_{\delta,y})+\varepsilon\\ &=\frac{s}{3}S_{s}^{\frac{3}{2s}}+\varepsilon, \end{align*} for any $y\in \mathbb{R}^{3}$ and $\delta\in (0,\delta_{1}]\cup [\delta_{2},+\infty)$. \end{proof} \begin{lemma}\label{vky} Assume that $V\geq 0$, $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $K\geq 0$, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$. Then for any fixed $\delta>0$, \begin{align*} \lim_{|y|\rightarrow +\infty}I(\theta\circ T(y,\delta))=\frac{s}{3}S_{s}^{\frac{3}{2s}}. \end{align*} \end{lemma} \begin{proof} It is obvious that $\lim_{|y|\rightarrow +\infty}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx=0$, $\lim_{|y|\rightarrow +\infty}N(\psi_{\delta,y})=0$. By a similar argument to (\ref{iun})-(\ref{imply}), we have \begin{align*} t_{\delta,y}=t(\psi_{\delta,y})\rightarrow 1 \ \ \mbox{as} \ \ |y|\rightarrow +\infty, \end{align*} where $t_{\delta,y}$ is the unique positive number such that $t_{\delta,y}\psi_{\delta,y}\in \mathcal{N}$. Then \begin{align*} \frac{s}{3}S_{s}^{\frac{3}{2s}}&\leq I(\theta\circ T(y,\delta))\\ &=\frac{1}{2}t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\frac{1}{2}t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx+\frac{1}{4}t_{\delta,y}^{4}N(\psi_{\delta,y})\\ &\quad-\frac{1}{2_{s}^{\ast}}t_{\delta,y}^{2_{s}^{\ast}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}\\ &<I_{\infty}(\psi_{\delta,y})+o(1)\\ &=\frac{s}{3}S_{s}^{\frac{3}{2s}}+o(1) \ \ \ \mbox{as} \ \ \ |y|\rightarrow +\infty. \end{align*} \end{proof} By Lemma \ref{vkc}, there exist a positive number $\mu$ such that $\frac{s}{3}S_{s}^{\frac{3}{2s}}+\mu<c_{0}$. \begin{lemma}\label{deltatheta} There is a $\delta_{1}:0<\delta<\frac{1}{2}$ such that \\ $(a)$ $I(\theta\circ T(y,\delta))<\frac{s}{3}S_{s}^{\frac{3}{2s}}+\mu$, \ \ $\forall y\in \mathbb{R}^{3}$\\ $(b)$ $\gamma(\theta\circ T(y,\delta))<\frac{1}{2}$, \ \ $\forall y: |y|<\frac{1}{2}$\\ $(c)$ $\big|\beta(\theta\circ T(y,\delta))-\frac{y}{|y|}\big|<\frac{1}{4}$, \ \ $\forall y: |y|\geq \frac{1}{2}$. \end{lemma} \begin{proof} By Lemma \ref{vkdelta}, $(a)$ holds. 
By (\ref{equivalent}) and Proposition 2.2 in \cite{DPV}, \begin{align*} \gamma(\theta\circ T(y,\delta))&=\frac{t_{\delta,y}^{2}}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}}\sigma(x)|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx\\ &=\frac{t_{\delta,y}^{2}}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{1}(0)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx\\ &=\frac{t_{\delta,y}^{2}}{S_{s}^{\frac{3}{2s}}}\int_{\mathbb{R}^{3}\backslash B_{1}(y)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,0}|^{2}dx\\ &\leq Ct_{\delta,y}^{2}\int_{\mathbb{R}^{3}\backslash B_{1}(y)}|\nabla \psi_{\delta,0}|^{2}dx\\ &=\tilde{C}t_{\delta,y}^{2}\delta^{3-2s}\int_{\mathbb{R}^{3}\backslash B_{1}(y)}\frac{|x|^{2}}{(\delta^{2}+|x|^{2})^{5-2s}}dx, \end{align*} and this, together with (\ref{tdelta}), yields that $(b)$ holds provided $\delta_{1}$ is small enough. By Lemma \ref{half} and (\ref{tdelta}), we get $(c)$. \end{proof} \begin{lemma}\label{deltagamma} There exists a $\delta_{2}>\frac{1}{2}$ such that \\ $(a)$ $I(\theta\circ T(y,\delta_{2}))<\frac{s}{3}S_{s}^{\frac{3}{2s}}+\mu$, \ \ $\forall y\in \mathbb{R}^{3}$\\ $(b)$ $\gamma(\theta\circ T(y,\delta_{2}))>\frac{1}{2}$, \ \ $\forall y\in \mathbb{R}^{3}$. \end{lemma} \begin{proof} By Lemma \ref{vkdelta}, $(a)$ holds. Since \begin{align*} \lim_{\delta\rightarrow +\infty}\int_{B_{1}(0)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx=0, \end{align*} and \begin{align*} \lim_{\delta\rightarrow +\infty}t_{\delta,y}=1, \end{align*} we have \begin{align*} \gamma(\theta\circ T(y,\delta))=t_{\delta,y}^{2}\left(1-\frac{1}{S_{s}^{\frac{3}{2s}}}\int_{B_{1}(0)}|(-\Delta)^{\frac{s}{2}}\psi_{\delta,y}|^{2}dx\right)\rightarrow 1 \ \ \mbox{as} \ \ \delta\rightarrow +\infty, \end{align*} and hence $(b)$ holds for $\delta_{2}$ large enough. \end{proof} \begin{lemma}\label{rtheta} There exists $R>0$ such that \\ $(a)$ $I(\theta\circ T(y,\delta))<\frac{s}{3}S_{s}^{\frac{3}{2s}}+\mu$, \ \ $\forall y:|y|\geq R$ and $\delta\in [\delta_{1},\delta_{2}]$\\ $(b)$ $(\beta(\theta\circ T(y,\delta))\cdot y)_{\mathbb{R}^{3}}>0$, \ \ $\forall |y|\geq R$ and $\delta\in [\delta_{1},\delta_{2}]$. \end{lemma} \begin{proof} Since \begin{align}\label{ty} t_{\delta,y}\rightarrow 1 \ \mbox{as} \ |y|\rightarrow +\infty, \end{align} by Lemma \ref{vky} and the compactness of $[\delta_{1},\delta_{2}]$, we can find $R$ big enough such that \begin{align*} I(\theta\circ T(y,\delta))<\frac{s}{3}S_{s}^{\frac{3}{2s}}+\mu, \ \ \forall y:|y|\geq R \ \mbox{and} \ \delta\in [\delta_{1},\delta_{2}]. \end{align*} For $|y|$ large enough, by (\ref{ty}) and an argument similar to that in the proof of Lemma \ref{half}, $(b)$ follows. \end{proof} Let $\delta_{1}$, $\delta_{2}$ and $R$ be the constants in Lemma \ref{deltatheta}, Lemma \ref{deltagamma} and Lemma \ref{rtheta}, respectively. Define a bounded domain $D\subset \mathbb{R}^{3}\times \mathbb{R}$ by \begin{align*} D=\left\{(y,\delta)\in \mathbb{R}^{3}\times \mathbb{R}: |y|\leq R, \ \delta_{1}\leq \delta\leq \delta_{2}\right\}, \end{align*} and define the map $\vartheta:D\rightarrow \mathbb{R}^{3}\times \mathbb{R}^{+}$ by \begin{align*} \vartheta(y,\delta)=\left(\beta\circ\theta\circ T(y,\delta),\gamma\circ\theta\circ T(y,\delta)\right). \end{align*} \begin{lemma}\label{topologicaldegree} Assume that $V\geq 0$, $V\in L^{\frac{3}{2s}}(\mathbb{R}^{3})$, $K\geq 0$, $K\in L^{\frac{6}{6s-3}}(\mathbb{R}^{3})$ and $\|V\|_{\frac{3}{2s}}+\|K\|_{\frac{6}{6s-3}}>0$. Then \begin{align*} \mathrm{deg}\left(\vartheta,D,(0,\frac{1}{2})\right)=1.
\end{align*} \end{lemma} \begin{proof} We consider the homotopy \begin{align*} \zeta(y,\delta,\lambda)=(1-\lambda)(y,\delta)+\lambda\vartheta(y,\delta). \end{align*} By the homotopy invariance of the topological degree, and by the fact that \begin{align*} \mathrm{deg}\left(\mathrm{id},D,(0,\frac{1}{2})\right)=1, \end{align*} it suffices to prove that \begin{align*} \zeta(y,\delta,\lambda)\neq (0,\frac{1}{2}) \ \mbox{for any} \ \ (y,\delta)\in \partial D \ \mbox{and} \ \lambda\in [0,1]. \end{align*} We have \begin{align*} \partial D=\Gamma_{1}\cup\Gamma_{2}\cup \Gamma_{3}\cup \Gamma_{4}, \end{align*} where \begin{align*} \Gamma_{1}&=\{(y,\delta_{1}):|y|<\frac{1}{2}\}\\ \Gamma_{2}&=\{(y,\delta_{1}):\frac{1}{2}\leq|y|\leq R\}\\ \Gamma_{3}&=\{(y,\delta_{2}):|y|\leq R\}\\ \Gamma_{4}&=\{(y,\delta):|y|=R,\delta\in[\delta_{1},\delta_{2}]\}. \end{align*} If $(y,\delta)\in \Gamma_{1}$, by Lemma \ref{deltatheta} $(b)$ and $\delta_{1}<\frac{1}{2}$, \begin{align*} (1-\lambda)\delta_{1}+\lambda\gamma\circ\theta\circ T(y,\delta_{1})<\frac{1}{2}. \end{align*} If $(y,\delta)\in \Gamma_{2}$, by Lemma \ref{deltatheta} $(c)$, \begin{align*} \big|\beta(\theta\circ T(y,\delta_{1}))-\frac{y}{|y|}\big|<\frac{1}{4}, \end{align*} then $\forall \lambda\in [0,1]$, \begin{align*} |(1-\lambda)y+\lambda\beta(\theta\circ T(y,\delta_{1}))|&\geq \left|(1-\lambda)y+\lambda\frac{y}{|y|}\right|-\left|\lambda\beta(\theta\circ T(y,\delta_{1}))-\lambda\frac{y}{|y|}\right|\\ &\geq (1-\lambda)|y|+\lambda-\frac{\lambda}{4}\\ &\geq \frac{1}{2}. \end{align*} If $(y,\delta)\in \Gamma_{3}$, by Lemma \ref{deltagamma} $(b)$ and $\delta_{2}>\frac{1}{2}$, \begin{align*} (1-\lambda)\delta_{2}+\lambda\gamma\circ \theta \circ T(y,\delta_{2})>\frac{1}{2} \ \mbox{for any} \ \lambda\in [0,1]. \end{align*} If $(y,\delta)\in \Gamma_{4}$, by Lemma \ref{rtheta} $(b)$, \begin{align*} ([(1-\lambda)y+\lambda\beta\circ \theta \circ T(y,\delta)]\cdot y)>0 \ \mbox{for any} \ \lambda\in [0,1]. \end{align*} Therefore, \begin{align*} \mathrm{deg}(\vartheta,D,(0,\frac{1}{2}))=\mathrm{deg}(\mathrm{id},D,(0,\frac{1}{2}))=1. \end{align*} \end{proof} \textbf{Proof of Theorem \ref{boundstate}.} In order to apply the linking theorem in \cite{Struwe}, we define \begin{align*} Q=\theta\circ T(D), \ \mathcal{M}=\left\{u\in \mathcal{N}: \alpha(u)=(\beta(u),\gamma(u))=(0,\frac{1}{2})\right\}. \end{align*} We claim that $\mathcal{M}$ links $\partial Q$, that is, \\ $(a)$ $\partial Q\cap \mathcal{M}=\varnothing$;\\ $(b)$ $h(Q)\cap \mathcal{M}\neq \varnothing$ for any $h\in \Gamma=\{h\in C(Q,\mathcal{N}):h|_{\partial Q}=\mathrm{id}\}$. \\ In fact, if $u\in \theta\circ T(\partial D)$, by Lemma \ref{deltatheta} $(a)$, Lemma \ref{deltagamma} $(a)$ and Lemma \ref{rtheta} $(a)$, \begin{align*} I(u)<\frac{s}{3}S_{s}^{\frac{3}{2s}}+\mu<c_{\mathcal{M}}, \end{align*} and hence $u\notin \mathcal{M}$. To prove $(b)$, for any $h \in\Gamma$, we define $\eta: D\rightarrow \mathbb{R}^{3}\times \mathbb{R}^{+}$ by \begin{align*} \eta(y,\delta)=\left(\beta\circ h\circ \theta\circ T(y,\delta),\gamma\circ h\circ \theta\circ T(y,\delta)\right). \end{align*} Since $h$ is the identity on $\partial Q=\theta\circ T(\partial D)$, we have \begin{align*} \eta(y,\delta)=\left(\beta\circ \theta\circ T(y,\delta),\gamma\circ \theta\circ T(y,\delta)\right)=\vartheta(y,\delta) \ \mbox{for any} \ (y,\delta)\in \partial D. \end{align*} This together with Lemma \ref{topologicaldegree} yields that \begin{align*} \mathrm{deg}(\eta,D,(0,\frac{1}{2}))=\mathrm{deg}(\vartheta,D,(0,\frac{1}{2}))=1. \end{align*} Then there exists $(y',\delta')\in D$ such that $h\circ \theta \circ T(y',\delta')\in \mathcal{M}$. Define \begin{align}\label{level} d=\inf_{h\in \Gamma}\max_{u\in Q}I(h(u)). \end{align} By the linking theorem, $d\geq c_{\mathcal{M}}>\frac{s}{3}S_{s}^{\frac{3}{2s}}$.
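For the reader's convenience, we recall the linking principle that is being invoked, in the rough form in which it is used here (see \cite{Struwe} for the precise statement and hypotheses): if $\mathcal{M}$ links $\partial Q$ and \begin{align*} \sup_{u\in \partial Q}I(u)<c_{\mathcal{M}}=\inf_{u\in \mathcal{M}}I(u), \end{align*} then the minimax value $d$ defined in (\ref{level}) satisfies $d\geq c_{\mathcal{M}}$, and $d$ is a critical value of $I$ provided a suitable compactness condition holds at the level $d$; in our setting this compactness is supplied by Corollary \ref{und}, once we know that $d<\frac{2s}{3}S_{s}^{\frac{3}{2s}}$, which is verified next.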
From $Q=\theta\circ T(D)$ and (\ref{level}), it follows that \begin{align*} d\leq \max_{u\in Q}I(u)\leq \sup_{(y,\delta)\in D}I(t_{\delta,y}\psi_{\delta,y}). \end{align*} By $t_{\delta,y}\psi_{\delta,y}\in \mathcal{N}$, we have \begin{align}\label{tdeltaypsi} t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx+t_{\delta,y}^{4}N(\psi_{\delta,y}) =t_{\delta,y}^{2_{s}^{\ast}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}, \end{align} and this, together with $\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}=\|\psi_{\delta,y}\|_{D^{s,2}}^{2}$, yields that \begin{align}\label{deduce} (1-t_{\delta,y}^{2_{s}^{\ast}-2})\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx+t_{\delta,y}^{2}N(\psi_{\delta,y})=0. \end{align} Since $\psi_{\delta,y}>0$, $V(x)\geq 0$, $K(x)\geq 0$ and $\|V\|_{\frac{3}{2s}}+\|K\|_{\frac{6}{6s-3}}>0$, by (\ref{deduce}), we get that $t_{\delta,y}>1$. From (\ref{tdeltaypsi}) and the H\"{o}lder inequality, it follows that \begin{align}\label{holder} t_{\delta,y}^{2_{s}^{\ast}-2}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}&=\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx +t_{\delta,y}^{2}N(\psi_{\delta,y})\nonumber\\ &<t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx +t_{\delta,y}^{2}N(\psi_{\delta,y})\nonumber\\ &\leq t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+t_{\delta,y}^{2}\|V\|_{\frac{3}{2s}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2}\nonumber\\ &\quad+t_{\delta,y}^{2}S_{s}^{-\frac{1}{2}}\|K\|_{\frac{6}{6s-3}}\|\Phi(\psi_{\delta,y})\|_{D^{s,2}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2}. \end{align} By (\ref{dt2}) and (\ref{phisu}), \begin{align}\label{Phiu} \|\Phi(u)\|_{D^{s,2}}\leq S_{s}^{-\frac{3}{2}}\|K\|_{\frac{6}{6s-3}}\|u\|_{D^{s,2}}^{2}. \end{align} By (\ref{holder}), (\ref{Phiu}) and $\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}=\|\psi_{\delta,y}\|_{D^{s,2}}^{2}=S_{s}^{\frac{3}{2s}}$, we have \begin{align}\label{upp} t_{\delta,y}^{2_{s}^{\ast}-4}<1+S_{s}^{-1}\|V\|_{\frac{3}{2s}}+S_{s}^{\frac{3}{2s}-3}\|K\|_{\frac{6}{6s-3}}^{2}. \end{align} Then, by (\ref{corresponding}), (\ref{tdeltaypsi}), (\ref{upp}) and (\ref{svk}), \begin{align*} I(t_{\delta,y}\psi_{\delta,y})&=\frac{1}{2}t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\frac{1}{2}t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx +\frac{1}{4}t_{\delta,y}^{4}N(\psi_{\delta,y})-\frac{1}{2_{s}^{\ast}}t_{\delta,y}^{2_{s}^{\ast}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}\\ &=\frac{s}{3}t_{\delta,y}^{2}\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\frac{s}{3}t_{\delta,y}^{2}\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx +\frac{4s-3}{12}t_{\delta,y}^{4}N(\psi_{\delta,y})\\ &\leq \frac{s}{3}t_{\delta,y}^{2}(\|\psi_{\delta,y}\|_{D^{s,2}}^{2}+\int_{\mathbb{R}^{3}}V(x)\psi_{\delta,y}^{2}dx+t_{\delta,y}^{2}N(\psi_{\delta,y}))\\ &=\frac{s}{3}t_{\delta,y}^{2_{s}^{\ast}}\|\psi_{\delta,y}\|_{2_{s}^{\ast}}^{2_{s}^{\ast}}\\ &<\frac{s}{3}S_{s}^{\frac{3}{2s}}(1+S_{s}^{-1}\|V\|_{\frac{3}{2s}}+S_{s}^{\frac{3}{2s}-3}\|K\|_{\frac{6}{6s-3}}^{2})^{\frac{2_{s}^{\ast}}{2_{s}^{\ast}-4}}\\ &\leq \frac{2s}{3}S_{s}^{\frac{3}{2s}}. \end{align*} Therefore, $\frac{s}{3}S_{s}^{\frac{3}{2s}}<d<\frac{2s}{3}S_{s}^{\frac{3}{2s}}$. By Corollary \ref{und}, $d$ is a critical value of $I$. The proof is complete.
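We record, for the reader's convenience, the elementary exponent bookkeeping behind the use of (\ref{upp}) in the last chain of inequalities (this is only a remark and is not needed elsewhere): since $2_{s}^{\ast}=\frac{6}{3-2s}$, \begin{align*} 2_{s}^{\ast}-4=\frac{6}{3-2s}-4=\frac{8s-6}{3-2s}>0 \ \Longleftrightarrow \ s>\frac{3}{4}, \end{align*} so raising (\ref{upp}) to the power $\frac{2_{s}^{\ast}}{2_{s}^{\ast}-4}$, as done above, implicitly requires $s>\frac{3}{4}$ together with $t_{\delta,y}>1$.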
\section{\textbf{Acknowledgements}} This work was partially supported by the National Natural Science Foundation of China under contract No.\ 11571269, by the China Postdoctoral Science Foundation under contracts No.\ 2015M572539 and No.\ 2016T90899, and by the Shaanxi Province Postdoctoral Science Foundation. \end{document}
\begin{document} \title{An All-But-One Entropic Uncertainty Relation, and Application to Password-Based Identification} \begin{abstract} Entropic uncertainty relations are quantitative characterizations of Heisenberg's uncertainty principle, which make use of an entropy measure to quantify uncertainty. In quantum cryptography, they are often used as convenient tools in security proofs. We propose a new entropic uncertainty relation. It is the first such uncertainty relation that lower bounds the uncertainty in the measurement outcome for \emph{all but one} choice for the measurement from an {\em arbitrarily large} (but specifically chosen) set of possible measurements, and, at the same time, uses the {\em min-entropy} as entropy measure, rather than the Shannon entropy. This makes it especially suited for quantum cryptography. As application, we propose a new {\em quantum identification scheme} in the bounded-quantum-storage model. Because the scheme requires a perfectly single-qubit source to operate securely, it is currently mainly of theoretical interest. Our new uncertainty relation forms the core of the new scheme's security proof in the bounded-quantum-storage model. In contrast to the original quantum identification scheme proposed by Damg{\aa}rd {\emph{et al.}}\xspace, our new scheme also offers some security in case the bounded-quantum-storage assumption fails to hold. Specifically, our scheme remains secure against an adversary that has unbounded storage capabilities but is restricted to non-adaptive single-qubit operations. The scheme by Damg{\aa}rd {\emph{et al.}}\xspace, on the other hand, completely breaks down under such an attack. \\ \footnotesize NJB is supported by an NWO Open Competition grant. CGG is supported by Spanish Grants I-MATH, MTM2008-01366, QUITEMAD and QUEVADIS. CS is supported by an NWO VENI grant. \end{abstract} \tableofcontents* \chapter{Introduction } \section{A New Uncertainty Relation} \label{sec:newuncert} In this work, we propose and prove a new general entropic uncertainty relation. Uncertainty relations are quantitative characterizations of the uncertainty principle of quantum mechanics, which expresses that for certain pairs of measurements, there exists no state for which the measurement outcome is determined for {\em both} measurements: at least one of the outcomes must be somewhat uncertain. {\em Entropic} uncertainty relations express this uncertainty in at least one of the measurement outcomes by means of an entropy measure, usually the Shannon entropy. Our new entropic uncertainty relation distinguishes itself from previously known uncertainty relations by the following collection of features: \begin{enumerate} \item\label{it:minentropy} It uses the \emph{min-entropy} as entropy measure, rather than the Shannon entropy. Such an uncertainty relation is sometimes also called a \emph{high-order} entropic uncertainty relation. \footnote{This is because the min-entropy coincides with the R{\'e}nyi entropy $H_\alpha$ of high(est) order $\alpha = \infty$. In comparison, the Shannon entropy coincides with the R{\'e}nyi entropy of (relatively) low order $\alpha = 1$.} Since privacy amplification needs a lower bound on the min-entropy, high-order entropic uncertainty relations are useful tools in quantum cryptography. \item\label{it:allbutone} It lower bounds the uncertainty in the measurement outcome for \emph{all but one} measurement, chosen from an \emph{arbitrary} (and arbitrarily large) family of possible measurements. 
This is clearly \emph{stronger} than typical entropic uncertainty relations that lower bound the uncertainty on \emph{average} (over the choice of the measurement). \item\label{it:qubitwise} The measurements can be chosen to be qubit-wise measurements, in the computational or Hadamard basis, and thus the uncertainty relation is applicable to practical schemes (which can be implemented using current technology). \end{enumerate} To the best of our knowledge, no previous entropic uncertainty relation satisfies (\ref{it:minentropy}) and (\ref{it:allbutone}) simultaneously, let alone in combination with~(\ref{it:qubitwise}). Indeed, as pointed out in a recent overview article by Wehner and Winter~\cite{WW10}, little is known about entropic uncertainty relations for more than two measurement outcomes, and even less when additionally considering min-entropy. To explain our new uncertainty relation, we find it helpful to first discuss a simpler variant, which does not satisfy~(\ref{it:minentropy}), and which follows trivially from known results. Fix an arbitrary family $\set{{\mathcal{B}}_1,\ldots,{\mathcal{B}}_m}$ of bases for a given quantum system (i.e., Hilbert space). The \emph{maximum overlap} of such a family is defined as \[ c := \max\Set{|\braket{\phi}{\psi}|}{\ket{\phi} \in {\mathcal{B}}_j, \ket{\psi} \in {\mathcal{B}}_k, 1 \!\leq\! j \!<\! k \!\leq\! m}, \] and let $d := -\log(c^2)$. Let $\rho$ be an arbitrary quantum state of that system, and let $X$ denote the measurement outcome when $\rho$ is measured in one of the bases. We model the choice of the basis by a random variable $J$, so that $H(X|J\!=\!j)$ denotes the Shannon entropy of the measurement outcome when $\rho$ is measured in basis ${\mathcal{B}}_j$. It follows immediately from Maassen and Uffink's uncertainty relation~\cite{MU88} that \[ H(X|J =j)+H(X|J=k) \geq -\log(c^2) = d \quad \forall j \neq k. \] As a direct consequence, there exists a choice $j'$ for the measurement so that $H(X|J\!=\!j) \geq \frac{d}{2}$ for all $j \in \set{1,\ldots,m}$ with $j \neq j'$. In other words, for any state $\rho$ there exists $j'$ so that unless the choice for the measurement coincides with $j'$, which happens with probability at most $\max_j P_J(j)$, there is at least $d/2$ bits of entropy in the outcome~$X$. Our new high-order entropic uncertainty relation shows that this very statement essentially still holds when we replace Shannon by min-entropy, except that $j'$ becomes randomized: for any $\rho$, there exists a \emph{random variable} $J'$, independent of $J$, such that \footnote{The rigorous version of the approximate inequality $\gtrsim$ is stated in \refthm{UR}. } $$ \hmin(X|J\!=\!j, J' \!=\! j') \gtrsim \frac{d}{2} \quad \forall \; j \neq j' \in \set{1,\ldots,m} $$ no matter what the distribution of $J$ is. Thus, unless the measurement $J$ coincides with $J'$, there is roughly $d/2$ bits of min-entropy in the outcome~$X$. Furthermore, since $J'$ is \emph{independent} of $J$, the probability that $J$ coincides with $J'$ is at most $\max_j P_J(j)$, as is the case for a fixed $J'$. Note that we have no control over (the distribution of) $J'$. We can merely guarantee that it exists and is independent of $J$. It may be insightful to interpret $J'$ as a \emph{virtual guess} for $J$, guessed by the party that prepares~$\rho$, and whose goal is to have little uncertainty in the measurement outcome $X$. 
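To fix ideas, we mention the simplest concrete instance (an illustration only, not needed for the general statements): take $m=2$ and let the two bases be the $n$-qubit computational and Hadamard bases. Every vector of one basis has overlap exactly $2^{-n/2}$ with every vector of the other, so that $c = 2^{-n/2}$ and $d = -\log(c^2) = n$. The uncertainty relation then guarantees roughly $n/2$ bits of min-entropy in $X$ whenever the measured basis $J$ differs from the virtual guess $J'$, and for uniform $J$ this happens with probability at least $\tfrac12$, since $J'$ is independent of $J$.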
The reader may think of the following specific way of preparing $\rho$: sample $j'$ according to some arbitrary distribution $J'$, and then prepare the state as the, say, first basis vector of ${\mathcal{B}}_{j'}$. If the resulting mixture $\rho$ is then measured in some basis ${\mathcal{B}}_j$, sampled according to an arbitrary (independent) distribution~$J$, then unless $j = j'$ (i.e., our guess for $j$ was correct), there is obviously lower bounded uncertainty in the measurement outcome $X$ (assuming a non-trivial maximum overlap). Our uncertainty relation can be understood as saying that for \emph{any} state~$\rho$, no matter how it is prepared, there exists such a (virtual) guess $J'$, which exhibits this very behavior: if it differs from the actual choice for the measurement then there is lower bounded uncertainty in the measurement outcome $X$. As an immediate consequence, we can for instance say that $X$ has min-entropy at least $d/2$, except with a probability that is given by the probability of guessing $J$, e.g., except with probability $1/m$ if the measurement is chosen uniformly at random from the family. This is clearly the best we can hope for. We stress that because the min-entropy is more conservative than the Shannon entropy, our high-order entropic uncertainty relation does not follow from its simpler Shannon-entropy version. Neither can it be deduced in an analogous way; the main reason being that for fixed pairs $j \neq k$, there is no strong lower bound on $\hmin(X|J\!=\!j)+\hmin(X|J\!=\!k)$, in contrast to the case of Shannon entropy. More precisely and more generally, the \emph{average} uncertainty $\frac{1}{|J|}\sum_j\hmin(X|J\!=\!j)$ does not allow a lower bound higher than $\log|J|$. To see this, consider the following example for $|J|=2$ (the example can easily be extended to arbitrary $|J|$). Suppose that $\rho$ is the uniform mixture of two pure states, one giving no uncertainty when measured in basis $j$, and the other giving no uncertainty when measured in basis $k$. Then, $\tfrac12 \hmin(X|J\!=\!j) + \tfrac12 \hmin(X|J\!=\!k) = 1$. Because of a similar reason, we cannot hope to get a good bound for all but a {\em fixed} choice of $j'$; the probabilistic nature of $J'$ is necessary (in general). Hence, compared to bounding the average uncertainty, the all-but-one form of our uncertainty relation not only makes our uncertainty relation stronger in that uncertainty for all-but-one implies uncertainty on average (yet not vice versa), but it also allows for {\em more} uncertainty. By using asymptotically good error-correcting codes, one can construct families of bases that have a large value of $d$, and thus for which our uncertainty relation guarantees a large amount of min-entropy (we discuss this in more detail in \refsec{goodfam}). These families consist of qubit-wise measurements in the computational or the Hadamard basis, hence these measurements can be performed with current technology. The proof of our new uncertainty relation comprises a rather involved probability reasoning to prove the existence of the random variable $J'$ and builds on earlier work presented in \cite{Schaffner07}. \section{Quantum Identification with ``Hybrid'' Security} As an application of our entropic uncertainty relation, we propose a new \emph{quantum identification protocol}. Informally, the goal of (password-based) identification is to prove knowledge of a possibly low-entropy password $w$, without giving away any information on $w$ (beyond what is unavoidable). 
In~\cite{DFSS07}, Damg{\aa}rd {\emph{et al.}}\xspace\ showed the existence of such an identification protocol in the \emph{bounded-quantum-storage model} (BQSM). This means that the proposed protocol involves the communication of qubits, and security is proven against any dishonest participant that can store only a limited number of these qubits (whereas legitimate participants need no quantum storage at all to honestly execute the protocol). Our uncertainty relation gives us the right tool to prove security of the new quantum identification protocol in the BQSM. The distinguishing feature of our new protocol is that it also offers some security in case the assumption underlying the BQSM fails to hold. Indeed, we additionally prove security of our new protocol against a dishonest server that has unbounded quantum-storage capabilities and can reliably store all the qubits communicated during an execution of the protocol, but is restricted to non-adaptive single-qubit operations and measurements.\footnote{It is known that \emph{some} restriction is necessary (see \cite{DFSS07}).} This is in sharp contrast to protocol \textsf{QID} by Damg{\aa}rd {\emph{et al.}}\xspace, which completely breaks down against a dishonest server that can store all the communicated qubits in a quantum memory and postpone the measurements until the user announces the correct measurement bases. On the downside, our protocol only offers security in case of a perfectly single-qubit (e.g.\ single-photon) source, because multi-qubit emissions reveal information about $w$. Hence, given the immature state of single-qubit-source technology at the time of this writing, our protocol is currently mainly of theoretical interest. We want to stress that proving security of our protocol in this \emph{single-qubit-operations model} (SQOM) is non-trivial. Indeed, as we will see, standard tools like privacy amplification are not applicable. Our proof relies on a certain minimum-distance property of random binary matrices and makes use of Diaconis and Shahshahani's XOR inequality (\refthm{diaconis}, see also \cite{Diaconis88}). \section{Related Work} The study of \emph{entropic} uncertainty relations, whose origin dates back to 1957 with the work of Hirschman~\cite{Hirschman57}, has received a lot of attention over the last decade due to their various applications in quantum information theory. We refer the reader to~\cite{WW10} for a recent overview on entropic uncertainty relations. Most of the known entropic uncertainty relations are of the form $$ \frac{1}{|J|}\sum_j H_\alpha(X|J\!=\!j) \geq h \, , $$ where $H_\alpha$ is the R\'enyi entropy.\footnote{The R\'enyi entropy \cite{renyi1961} is defined as $H_\alpha(X) := \frac{1}{1-\alpha}\log\sum_x P_X(x)^\alpha$. Nevertheless, for most known uncertainty relations $\alpha=1$, i.e.\ the Shannon entropy.} I.e., most uncertainty relations only give a lower bound on the entropy of the measurement outcome $X$ \emph{on average} over the (random) choice of the measurement. As argued in \refsec{newuncert}, the bound $h$ on the \emph{min}-entropy can be at most $\log|J|$, no matter the range of~$X$. Furthermore, an uncertainty relation of this form only guarantees that there is uncertainty in $X$ for \emph{some} measurement(s), but does not specify precisely for how many, and certainly it does not guarantee uncertainty for \emph{all but one} measurements. 
The same holds for the high-order entropic uncertainty relation from~\cite{dfrss07}, which considers an exponential number of measurement settings and guarantees that except with negligible probability over the (random) choice of the measurement, there is lower-bounded min-entropy in the outcome. On the other hand, the high-order entropic uncertainty relation from~\cite{DFSS05} only considers \emph{two} measurement settings and guarantees lower-bounded min-entropy with probability (close to) $\frac{1}{2}$. The uncertainty relation we know of that comes closest to ours is Lemma~2.13 in~\cite{FHS11}. Using our notation, it shows that $X$ is $\epsilon$-close to having roughly $d/2$ bits of min-entropy (i.e., the same bound we get), but only for all but an $\epsilon$-fraction of all the $m$ possible choices for the measurement $j$, where $\epsilon$ is about~\smash{$\sqrt{2/m}$}. With respect to our application, backing up the security of the identification protocol by Damg{\aa}rd {\emph{et al.}}\xspace~\cite{DFSS07} against an adversary that can overcome the quantum-memory bound assumed by the BQSM was also the goal of~\cite{DFLSS09}. However, the solution proposed there relies on an unproven computational-hardness assumption, and as such, strictly speaking, can be broken by an adversary in the SQOM, i.e., by storing qubits and measuring them later qubit-wise and performing (possibly infeasible) classical computations. On the other hand, by \emph{assuming} a lower bound on the hardness of the underlying computational problem against quantum machines, the security of the protocol in~\cite{DFLSS09} holds against an adversary with much more quantum computing power than the one our protocol tolerates in the SQOM, which restricts the adversary to single-qubit operations. We hope that with future research on this topic, new quantum identification (or other cryptographic) protocols will be developed with security in the same spirit as our protocol, but with a more relaxed restriction on the adversary's quantum computation capabilities, for instance that he can only perform a limited number of quantum computation steps, and in every step he can only act on a limited number of qubits coherently. \chapter{Preliminaries} \section{Basic Notation} Sets as well as families are written using a calligraphic font, e.g.\ $\Tset{A}, \Tset{X}$, and we write $|\Tset{A}|$ etc.\ for the cardinality. We use $\setn$ as a shorthand for $\set{1,\ldots,n}$. For an $n$-bit vector $v = (v_1,\ldots,v_n)$ in $\set{0,1}^n$, we write $|v|$ for its Hamming weight, and, for any subset $\Tset{I} \subseteq \setn$, we write $v_{\Tset{I}}$ for the restricted vector $(v_i)_{i\in \Tset{I}} \in \set{0,1}^{|\Tset{I}|}$. For two vectors $v,w \in \{0,1\}^n$, the \emph{Schur product} is defined as the element-wise product $v \odot\xspace w := (v_1 w_1, v_2 w_2, \ldots, v_n w_n) \in \set{0,1}^n$, and the \emph{inner product} between $v$ and $w$ is given by $v\cdot w := v_1 w_1 \oplus \cdots \oplus v_n w_n \in \set{0,1}$, where the addition is modulo~$2$. We write $\ensuremath{\mathrm{span}}(F)$ for the \emph{row span} of a matrix $F$ with $\ell$ rows; the set of vectors obtained by making all possible linear combinations (modulo~$2$) of the rows of $F$, i.e.\ the set $\Set{sF}{s\in \{0,1\}^\ell}$, where $s$ should be interpreted as a row vector and $sF$ denotes a vector-matrix product.
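As a toy illustration of this notation (the particular vectors and matrix are of course arbitrary and are not used elsewhere): for $v=(1,1,0)$ and $w=(1,0,1)$ we have $|v|=2$, $v_{\set{1,3}}=(1,0)$, $v \odot\xspace w=(1,0,0)$ and $v\cdot w = 1$, while for \[ F=\begin{pmatrix}1 & 0 & 1\\ 0 & 1 & 1\end{pmatrix} \quad \text{we have} \quad \ensuremath{\mathrm{span}}(F)=\set{(0,0,0),(1,0,1),(0,1,1),(1,1,0)}. \]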
\section{Probability Theory} A finite probability space is a non-empty finite set $\Omega$ together with a function $\ensuremath{\mathrm{Pr}}: \Omega \rightarrow \mathbb{R}$ such that $\ensuremath{\mathrm{Pr}}(\omega)\geq 0 \quad \forall \omega \in \Omega$ and $\sum_{\omega\in \Omega}\ensuremath{\mathrm{Pr}}(\omega)=1$. An \emph{event} is a subset of $\Omega$. A {\em random variable} is a function $X: \Omega \rightarrow \mathcal{X}$ from a finite probability space $(\Omega,\ensuremath{\mathrm{Pr}})$ to a finite set $\mathcal{X}$. We denote random variables as capital letters, for example $X$, $Y$, $Z$. The {\em distribution} of $X$, which we denote as $P_X$, is given by $P_X(x) = \ensuremath{\mathrm{Pr}}[X\!=\!x] = \ensuremath{\mathrm{Pr}}[\Set{\omega \in \Omega}{X(\omega)\!=\!x}]$. The joint distribution of two (or more) random variables $X$ and $Y$ is denoted by $P_{XY}$, i.e., $P_{XY}(x,y) = \ensuremath{\mathrm{Pr}}[X\!=\!x \wedge Y\!=\!y]$. Specifically, we write $U_{\Tset{X}}$ for the uniform probability distribution over \Tset{X}. Usually, we leave the probability space $(\Omega,\ensuremath{\mathrm{Pr}})$ implicit, and understand random variables to be defined by their joint distribution, or by some ``experiment'' that uniquely determines their joint distribution. Random variables $X$ and $Y$ are {\em independent} if $P_{XY} = P_X P_Y$ (which should be understood as $P_{XY}(x,y) = P_X(x) P_Y(y) \;\forall\, x\in {\mathcal{X}},y\in \mathcal{Y}$). The random variables $X$, $Y$ and $Z$ form a (first-order) Markov chain, denoted by $X \leftrightarrow Y \leftrightarrow Z$, if $P_{XZ|Y} = P_{X|Y}P_{Z |Y}$. The \emph{statistical distance} (also known as variational distance) between distributions $P_X$ and $P_Y$ is written as $\mathrm{SD}(P_X,P_Y):=\tfrac12\|P_X-P_Y\|_1$. The \emph{bias} of a binary random variable $X$ is defined as $\mathrm{bias}(X) := \big|P_X(0) - P_X(1) \big|.$ This also naturally defines the bias of $X$ conditioned on an event $\mathcal{E}$ as $\mathrm{bias}(X|\mathcal{E}) := \big|P_{X|\mathcal{E}}(0) - P_{X|\mathcal{E}}(1) \big|$. The bias thus ranges between $0$ and $1$ and can be understood as a degree of predictability of a bit: if the bias is small then the bit is close to random, and if the bias is large (i.e.\ approaches $1$) then the bit has essentially no uncertainty. For a sum of two independent binary random variables $X_1$ and $X_2$, the bias of the sum is the product of the individual biases, i.e.\ $\ensuremath{\mathrm{bias}}(X_1 \oplus\xspace X_2) = \ensuremath{\mathrm{bias}}(X_1) \ensuremath{\mathrm{bias}}(X_2)$. \begin{thm}[Diaconis and Shahshahani's Information-Theoretic XOR Lemma] \label{thm:diaconis} Let $X$ be a random variable over $\Tset{X}:=\set{0,1}^n$ with distribution $P_{X}$. Then, the following holds: \[ \mathrm{SD}(P_{X}, U_\mathcal{X}) \leq \frac12 \Big[\sum_{f \in \set{0,1}^n \setminus \set{0^n}} \ensuremath{\mathrm{bias}}(f\cdot X)^2\Big]^\frac12. \] \end{thm} \noindent The original version of \refthm{diaconis} appeared in \cite{Diaconis88}, where it is expressed in the language of representation theory. The version above is due to \cite{naor93}. \begin{thm}[Hoeffding's Inequality] Let $X_1, X_2, \ldots, X_n$ be independent binary random variables, each distributed according to the Bernoulli distribution with parameter $\mu$, and let $\bar X := n^{-1} \sum_{i \in \setn} X_i$. Then, for $0 < t < 1-\mu$, \[ \ensuremath{\mathrm{Pr}}[\bar X - \mu \geq t] \leq \exp(-2nt^2).
\] \label{thm:hoeffding} \end{thm} \noindent For a proof, the reader is referred to \cite{hoeffding1963}. \section{Quantum Systems and States} We assume that the reader is familiar with the basic concepts of quantum information theory; the main purpose of this section is to fix some terminology and notation. A quantum system $A$ is associated with a complex Hilbert space, $\mathcal{H} = \mathbb{C}^d$, its {\em state space}. By default, we write $\mathcal{H}_A$ for the state space of system $A$, and $\rho_A$ (respectively $\ket{\varphi_A}$ in case of a pure state) for the state of $A$. We write $\mcal{D}(\mathcal{H})$ for the set of all density matrices on Hilbert space $\mathcal{H}$. The state space of a {\em bipartite} quantum system $AB$, consisting of two (or more) subsystems, is given by $\mathcal{H}_{AB} = \mathcal{H}_A \otimes \mathcal{H}_B$. If the state of $AB$ is given by $\rho_{AB}$ then the state of subsystem $A$, when treated as a stand-alone system, is given by the {\em partial trace} $\rho_A = \mathrm{tr}_B(\rho_{AB})$, and correspondingly for $B$. {\em Measuring} a system $A$ in basis $\set{\ket{i}}_{i \in I}$, where $\set{\ket{i}}_{i \in I}$ is an orthonormal basis of $\mathcal{H}_A$, means applying the measurement described by the projectors $\set{\proj{i}}_{i \in I}$, such that outcome $i \in I$ is observed with probability $p_i = \mathrm{tr}(\proj{i} \rho_A)$ (respectively $p_i = |\braket{i}{\varphi_A}|^2$ in case of a pure state). If $A$ is a subsystem of a bipartite system $AB$, then it means applying the measurement described by the projectors $\set{\proj{i} \otimes \mathbb{I}_B}_{i \in I}$, where $\mathbb{I}_B$ is the identity operator on $\mathcal{H}_B$. A {\em qubit} is a quantum system $A$ with state space~$\mathcal{H}_A = \mathbb{C}^2$. The {\em computational basis} $\set{\ket{0},\ket{1}}$ (for a qubit) is given by $\ket{0} = {1 \choose 0}$ and $\ket{1} = {0 \choose 1}$, and the {\em Hadamard basis} by $\set{H\ket{0},H\ket{1}}$, where $H$ denotes the 2-dimensional {\em Hadamard matrix} $H = \frac{1}{\sqrt2} \big(\begin{smallmatrix} 1 & \;\; 1 \\ 1 & -1 \end{smallmatrix}\big)$. We also call the computational basis the {\em plus} basis and associate it with the `$+$'-symbol, and we call the Hadamard basis the {\em times} basis and associate it with the `$\times$'-symbol. For bit vectors $x = (x_1,\ldots,x_n) \in \set{0,1}^n$ and $v = (v_1,\ldots,v_n) \in \set{+,\times}^n$ we then write $\ket{x}_v = \ket{x_1}_{v_i} \otimes\xspace \cdots \otimes\xspace \ket{x_n}_{v_n}$ where $\ket{x_i}_{+} := \ket{x_i}$ and $\ket{x_i}_{\times} := H\ket{x_i}$. Subsystem $X$ of a bipartite quantum system $XE$ is called {\em classical}, if the state of $XE$ is given by a density matrix of the form $$ \rho_{XE} = \sum_{x \in \mathcal X} P_X(x) \proj{x} \otimes \rho_{E}^x \, , $$ where $\mathcal X$ is a finite set of cardinality $|{\mathcal X}| = \dim(\mathcal{H}_X)$, $P_X:{\mathcal X} \rightarrow [0,1]$ is a probability distribution, $\set{\ket{x}}_{x \in \mathcal X}$ is some fixed orthonormal basis of $\mathcal{H}_X$, and $\rho_E^x$ is a density matrix on $\mathcal{H}_E$ for every \mbox{$x \in \mathcal X$}. Such a state, called {\em hybrid} or {\em cq-} (for {\em c}lassical-{\em q}uantum) state, can equivalently be understood as consisting of a {\em random variable} $X$ with distribution $P_X$, taking on values in $\mathcal X$, and a system $E$ that is in state $\rho_E^x$ exactly when $X$ takes on the value $x$. 
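As a small concrete example of this formalism (included only for illustration), consider a uniformly random bit $X$ together with a qubit $E$ that is prepared in the Hadamard basis according to the value of $X$: \[ \rho_{XE} = \tfrac12\, \proj{0} \otimes H\proj{0}H + \tfrac12\, \proj{1} \otimes H\proj{1}H , \] i.e., $P_X = U_{\set{0,1}}$ and $\rho_E^x = H\proj{x}H$. Measuring $E$ in the Hadamard basis recovers $X$ with certainty, whereas measuring it in the computational basis yields a uniformly random outcome that is independent of $X$.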
This formalism naturally extends to two (or more) classical systems $X$, $Y$ etc. For any event $\mathcal{E}$ (defined by $P_{\mathcal{E}|X}(x) = \ensuremath{\mathrm{Pr}}[\mathcal{E}|X=x]$ for all $x$), we may write \[ \rho_{XE|\mathcal{E}} := \sum_x P_{X|\mathcal{E}} \outs{x} \otimes\xspace \rho_E^x. \] If the state of $XE$ satisfies $\rho_{XE} = \rho_X \otimes \rho_E$, where $\rho_X = \mathrm{tr}_E(\rho_{XE}) = \sum_x P_X(x) \proj{x}$ and $\rho_E = \mathrm{tr}_X(\rho_{XE}) = \sum_x P_X(x) \rho_E^x$, then $X$ is {\em independent} of $E$, and thus no information on $X$ can be obtained from system~$E$. Moreover, if $\rho_{XE} = \frac{1}{|{\mathcal X}|} \mathbb{I}_X \otimes \rho_E$, where $\mathbb{I}_X$ denotes the identity on $\mathcal{H}_X$, then $X$ is {\em random-and-independent} of $E$. We also want to be able to express that a random variable $X$ is (close) to being independent of a quantum system $E$ \emph{when given a random variable $Y$}. Formally, this is expressed by saying that $\rho_{XYE}$ equals $\rho_{X \leftrightarrow Y \leftrightarrow E}$, where \[ \rho_{X \leftrightarrow Y \leftrightarrow E}:=\sum_{x,y} P_{XY}(x,y) \outs{x} \otimes\xspace \outs{y} \otimes\xspace \rho_E^y. \] This notion, called \emph{conditional independence}, for the quantum setting was introduced in \cite{DFSS07}. For a matrix $\rho$, the trace norm is defined as $\| \rho \|_1 := \mathrm{tr} \sqrt {\rho \rho^*}$, where $\rho^*$ denotes the Hermitian transpose of $\rho$. \begin{definition} \label{def:tracedist} The \emph{trace distance} between two density matrices $\rho,\sigma \in \mcal{D}(\mathcal{H})$ is defined as $\delta(\rho,\sigma) := \tfrac12 \| \rho - \sigma \|_1$. \end{definition} If two states $\rho$ and $\sigma$ are $\varepsilon$-close in trace distance, i.e. $\tfrac12 \| \rho - \sigma \|_1 \leq \varepsilon$, we use $\rho \approx_\varepsilon \sigma$ as shorthand. In case of classical states, the trace distance coincides with the statistical distance. Moreover, the trace distance between two states cannot increase when applying the same quantum operation (i.e., CPTP map) to both states. As a consequence, if $\rho \approx_\varepsilon \sigma$ then the states cannot be distinguished with statistical advantage better than $\varepsilon$. \begin{definition}\label{def:distuni} For a density matrix $\rho_{XE}\in \mcal{D}(\mathcal{H}_X \otimes\xspace \mathcal{H}_E)$ with classical $X$, the \emph{distance to uniform} of $X$ given $E$ is defined as \[ \deltauni(X|E) : = \tfrac12 \| \rho_{XE} - \rho_U \otimes\xspace \rho_E\|_1, \] where $\rho_U:= \frac{1}{\dim(\mathcal{H}_X)}\ensuremath{\mathbb{I}}\xspace_X$. \end{definition} \section{Min-Entropy and Privacy Amplification} We make use of Renner's notion of the {\em conditional min-entropy} $\hmin(\rho_{AB}|B)$ of a system $A$ conditioned on another system $B$~\cite{Renner05}. If the state $\rho_{AB}$ is clear from the context, we may write $\hmin(A|B)$ instead of $\hmin(\rho_{AB}|B)$. The formal definition is given by $\hmin(\rho_{AB}|B):= \sup_{\sigma_B}\max\Set{h \in \mathbb{R}}{2^{-h} \cdot \ensuremath{\mathbb{I}}\xspace_A \otimes\xspace \sigma_B - \rho_{AB} \geq 0}$ where the supremum is over all density matrices $\sigma_B$ on $\mathcal{H}_B$. If $\mathcal{H}_B$ is the trivial space $\mathbb{C}$, we obtain the unconditional min-entropy of $\rho_A$, denoted as $\hmin(\rho_A)$, which simplifies to $\hmin(\rho_A) = - \log \lambda_{\max}(\rho_A)$, where $\lambda_{\max}(\rho_A)$ is the largest eigenvalue of $\rho_A$. We will need the following chain rule. 
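For a purely classical special case of Definition~\ref{def:distuni} (an elementary observation, recorded only to connect the quantum notion with the classical quantities introduced earlier): if $X$ is a bit that is independent of $E$, i.e.\ $\rho_{XE}=\rho_X\otimes\xspace\rho_E$, then \[ \deltauni(X|E) = \mathrm{SD}(P_X,U_{\set{0,1}}) = \tfrac12\, \ensuremath{\mathrm{bias}}(X), \] so in this case the distance to uniform is simply the statistical distance of $P_X$ from the uniform distribution, which for a single bit equals half the bias.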
\begin{lemma} \label{lem:bqsmchain} For any density matrix $\rho$ on $\mathcal{H}_{XYE}$ with classical $X$ and $Y$ it holds that \[ \hmin(X|YE) \geq \hmin(X|Y) - \ensuremath{H_\mathrm{max}\hspace{-1pt}}(E). \] \end{lemma} \noindent The proof can be found in \refapp{bqsmchainproof}. For the special case of a hybrid state $\rho_{XE} \in \mcal{D}(\mathcal{H}_X \otimes\xspace \mathcal{H}_E)$ with classical $X$, it is shown in \cite{koenig09} that the conditional min-entropy of a quantum state coincides with the negative logarithm of the \emph{guessing probability conditional on quantum side information} \[ p_\mathrm{guess}(X|E):= \max_{\set{M_x}}\sum_x P_X(x)\, \mathrm{tr}(M_x \rho_E^x), \] where the latter is the probability that the party holding $\mathcal{H}_E$ guesses $X$ correctly using the POVM $\set{M_x}_x$ on $\mathcal{H}_E$ that maximizes $p_\mathrm{guess}$. Thus, \begin{equation} \label{eq:guessform} \hmin(X|E) = -\log p_\mathrm{guess}(X|E). \end{equation} \noindent For random variables $X$ and $Y$, we have that $p_\mathrm{guess}(X|Y)$ simplifies to \[ p_\mathrm{guess}(X|Y)=\sum_y P_Y(y) p_\mathrm{guess}(X|Y=y) = \sum_y P_Y(y)\max_x P_{X|Y}(x|y). \] Finally, we make use of Renner's privacy amplification theorem~\cite{RK05,Renner05}, as given below. Recall that a function $g:\mathcal{R} \times \mathcal{X} \rightarrow \set{0,1}^\ell$ is called a {\em universal} (hash) function, if for the random variable $R$, uniformly distributed over $\mathcal{R}$, and for any distinct $x,y \in \mathcal{X}$: $\ensuremath{\mathrm{Pr}}[g(R,x)\!=\!g(R,y)] \leq 2^{-\ell}$. \begin{thm}[Privacy amplification]\label{thm:PA} Let $\rho_{XE}$ be a hybrid state with classical $X$. Let $g:\mathcal{R} \times \mathcal{X} \to \set{0,1}^\ell$ be a universal hash function, and let $R$ be uniformly distributed over $\mathcal{R}$, independent of $X$ and~$E$. Then $K = g(R,X)$ satisfies $$ \deltauni(K|RE) \leq \frac12 \cdot 2^{-\frac12(\hmin(X|E) - \ell)} \, . $$ \end{thm} \noindent Informally, Theorem~\ref{thm:PA} states that if $X$ contains sufficiently more than $\ell$ bits of entropy when given $E$, then $\ell$ nearly random-and-independent bits can be extracted from $X$. \chapter{The All-But-One Entropic Uncertainty Relation} \label{sec:moreunbiasedbases} Throughout this section, $\set{{\mathcal{B}}_1,\ldots,{\mathcal{B}}_m}$ is an arbitrary but fixed family of bases for the state space $\mathcal{H}$ of a quantum system. For simplicity, we restrict our attention to an $n$-qubit system, such that $\mathcal{H} = (\mathbb{C}^2)^{\otimes n}$ for $n \in \mathbb{N}$, but our results immediately generalize to arbitrary quantum systems. We write the $2^n$ basis vectors of the $j$-th basis ${\mathcal{B}}_j$ as ${\mathcal{B}}_j = \Set{\ket{x}_j}{x \in \set{0,1}^n}$. Let $c$ be the maximum overlap of $\set{{\mathcal{B}}_1,\ldots,{\mathcal{B}}_m}$, i.e., \[ c:= \max\Set{|\bra{x}_j \ket{y}_k|}{x,y \in \set{0,1}^n, 1 \!\leq\! j \!<\! k \!\leq\! m}. \] In order to obtain our entropic uncertainty relation that lower bounds the min-entropy of the measurement outcome for all but one measurement, we first show an uncertainty relation that expresses uncertainty by means of the probability measure of given sets. \begin{thm}[Theorem 4.18 in \cite{Schaffner07}] \label{thm:morehadamard} Let $\rho$ be an arbitrary state of $n$ qubits.
For $j \in \setn[m]$, let $Q^j(\cdot)$ be the distribution of the outcome when $\rho$ is measured in the $\mathcal{B}_j$-basis, i.e., $Q^j(x) = \bra{x}_{j} \: \rho \: \ket{x}_{j}$ for any $x \in \set{0,1}^n$. Then, for any family $\set{\mathcal{L}^j}_{j \in \setn[m]}$ of subsets $\mathcal{L}^j \subset \set{0,1}^n$, it holds that \[ \sum_{j \in \setn[m]} Q^j(\mathcal{L}^j) \leq 1 + c \, (m-1) \cdot \max_{j \neq k \in \setn[m]} \sqrt{|\mathcal{L}^j| |\mathcal{L}^k|}. \] \end{thm} A special case of Theorem~\ref{thm:morehadamard}, obtained by restricting the family of bases to the specific choice $\set{{\mathcal{B}}_+,{\mathcal{B}}_\times}$ with ${\mathcal{B}}_+ = \Set{\ket{x}}{x \in \set{0,1}^n}$ and ${\mathcal{B}}_\times = \Set{H^{\otimes n}\ket{x}}{x \in \set{0,1}^n}$ (i.e. either the computational or Hadamard basis for all qubits), is an uncertainty relation that was proven and used in the original paper about the BQSM~\cite{DFSS05}. The proof of Theorem~\ref{thm:morehadamard} goes along similar lines as the proof in the journal version of~\cite{DFSS05} for the special case outlined above. It is based on the norm inequality $$ \big\| A_1+\ldots+A_m \big\| \leq 1 + (m-1) \cdot \max_{j \neq k \in \setn[m]} \big\|A_j A_k\big\| \, , $$ which holds for arbitrary orthogonal projectors $A_1,\ldots,A_m$. Recall that for a linear operator $A$ on the complex Hilbert space ${\mathcal{H}}$, the {\em operator norm} is defined as $\|A \| \ensuremath{:=} \sup \|A \ket{\psi}\|$, where the supremum is over all norm-$1$ $\ket{\psi} \in \mathcal{H}$; this is identical to $\| A \| \ensuremath{:=} \sup |\bra{\varphi}A\ket{\psi}|$, where the supremum is over all norm-$1$ $\ket{\varphi},\ket{\psi} \in \mathcal{H}$. Furthermore, $A$ is called an \emph{orthogonal projector} if $A^2 = A$ and $A^* = A$. The proof of this norm inequality can be found in Appendix~\ref{sec:proofinequality}. The proof of Theorem~\ref{thm:morehadamard} is given here. \begin{proof}[Proof of Theorem~\ref{thm:morehadamard}] For $j \in \setn[m]$, we define the orthogonal projectors $A^j \ensuremath{:=} \sum_{x \in \mathcal{L}^j} \ket{x}_{j} \bra{x}_{j}$. Using the spectral decomposition of $\rho = \sum_w \lambda_w \proj{\varphi_w}$ and the linearity of the trace, we have \begin{align*} \sum_{j \in \setn[m]} Q^j(\mathcal{L}^j) &= \sum_{j \in \setn[m]} \mathrm{tr}(A^j\rho) = \sum_{j \in \setn[m]} \sum_w \lambda_w \mathrm{tr}(A^j \proj{\varphi_w}) = \sum_w \lambda_w \bigg( \sum_{j \in \setn[m]} \bra{\varphi_w}A^j\ket{\varphi_w} \bigg)\\ &= \sum_w \lambda_w \bra{\varphi_w} \bigg( \sum_{j \in \setn[m]} A^j \bigg) \ket{\varphi_w} \leq \bigg\| \sum_{j \in \setn[m]} A^j \bigg\| \leq 1 + (m-1) \cdot \max_{j \neq k \in\setn[m]} \big\|A^j A^k\big\|, \end{align*} where the last inequality is the norm inequality (Proposition~\ref{prop:morebases} in Appendix~\ref{sec:proofinequality}). To conclude, we show that $\|A^j A^k\| \leq c \sqrt{|\mathcal{L}^j| |\mathcal{L}^k|}$. Let us fix $j \neq k \in \setn[m]$. Note that by the restriction on the overlap of the family of bases $\set{\mathcal{B}_j}_{j \in \setn[m]}$, we have that $| \bra{x}_{j} \ket{y}_k | \leq c$ holds for all $x,y \in \set{0,1}^n$. 
Then, for any norm-$1$ $\ket{\psi} \in \mathcal{H}$ and with the sums over $x$ and $y$ understood as over $x \in \mathcal{L}^j$ and $y \in \mathcal{L}^k$, respectively, \begin{align*} \Big\| A^j A^k \ket{\psi} \Big\|^2 &= \bigg\| \sum_x \ket{x}_{j} \bra{x}_{j} \sum_y \ket{y}_k \bra{y}_k \ket{\psi} \bigg\|^2 = \bigg\| \sum_x \ket{x}_{j} \sum_y \bra{x}_{j} \ket{y}_k \, \bra{y}_k \ket{\psi} \bigg\|^2 \\ &= \sum_x \bigg| \sum_y \bra{x}_{j} \ket{y}_k \, \bra{y}_k \ket{\psi} \bigg|^2 \leq \sum_x \bigg(\sum_y \big|\bra{x}_{j} \ket{y}_k \, \bra{y}_k \ket{\psi} \big| \bigg)^2 \\ &\leq c^2 \sum_x \bigg(\sum_y \big|\bra{y}_k \ket{\psi}\big|\bigg)^2 \leq c^2 \big|\mathcal{L}^j\big| \big|\mathcal{L}^k\big| . \end{align*} The third equality follows from Pythagoras, the first inequality holds by the triangle inequality, the second inequality by the bound on $| \bra{x}_{j} \ket{y}_k|$, and the last follows from Cauchy-Schwarz. This implies $\|A^j A^k\| \leq c \sqrt{|\mathcal{L}^j| |\mathcal{L}^k|}$ and finishes the proof. \end{proof}
In the same spirit as in (the journal version of)~\cite{DFSS05}, we reformulate the above uncertainty relation in terms of a ``good event'' $\mathcal{E}$ which occurs with reasonable probability and which, when it occurs, ensures that the measurement outcome has high min-entropy. The statement is obtained by choosing the sets $\mathcal{L}^j$ in Theorem~\ref{thm:morehadamard} appropriately. Because we now switch to entropy notation, it will be convenient to work with a measure of overlap between bases that is logarithmic in nature and \emph{relative} to the number $n$ of qubits. Hence, we define \[ \delta := - \frac{1}{n}\log c^2 \, . \] We will later see that for ``good'' choices of bases, $\delta$ stays constant for growing $n$.
\begin{corollary} \label{cor:morehadamard} Let $\rho$ be an arbitrary $n$-qubit state, let $J$ be a random variable over $\setn[m]$ (with arbitrary distribution~$P_J$), and let $X$ be the outcome when measuring $\rho$ in basis $\mathcal{B}_J$.\footnote{I.e., $P_{X\mid J}(x|j) = Q^j(x)$, using the notation from Theorem~\ref{thm:morehadamard}.} Then, for any $0< \epsilon< \delta/4 $, there exists an event $\mathcal{E}$ such that $$ \sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[{\mathcal{E}} | J\!=\!j ] \geq (m-1) - (2m-1) \cdot 2^{-\epsilon n} $$ and $$ \hmin(X | J\!=\!j,{\mathcal{E}}) \geq \Bigl(\frac{\delta}{2} - 2 \epsilon\Bigr) n $$ for $j \in \setn[m]$ with $P_{J\mid {\mathcal{E}}}(j) > 0$. \end{corollary}
\begin{proof} For $j \in \setn[m]$ define \begin{align*} \mathcal S^j \ensuremath{:=} \big\{ x \in \{ 0,1 \} ^n &: Q^j(x) \leq 2^{-(\delta/2-\epsilon )n} \big\} \end{align*} to be the sets of strings with small probabilities and denote by $\mathcal{L}^j \ensuremath{:=} \ol{\mathcal S}^j$ their complements\footnote{Here's the mnemonic: $\mathcal S$ for the strings with \emph{S}mall probabilities, $\mathcal{L}$ for \emph{L}arge.}. Note that for all $x \in \mathcal{L}^j$, we have that $Q^j(x) > 2^{-(\delta/2 - \epsilon )n}$ and therefore $|\mathcal{L}^j| < 2^{(\delta/2 - \epsilon)n}$. It follows from Theorem~\ref{thm:morehadamard} that \begin{align*} \sum_{j \in \setn[m]} Q^j(\mathcal S^j) &= \sum_{j \in \setn[m]} (1- Q^j(\mathcal{L}^j) ) \geq m - (1 + (m-1) \cdot 2^{-\epsilon n} ) =(m-1) - (m-1)2^{-\epsilon n}. \end{align*} We define ${\mathcal{E}} \ensuremath{:=} \set{X \in \mathcal S^J \, \wedge \, Q^J(\mathcal S^J) \geq 2^{-\epsilon n}}$ to be the event that $X \in \mathcal S^J$ and at the same time the probability that this happens is not too small.
Then $\ensuremath{\mathrm{Pr}}[{\mathcal{E}}|J\!=\!j] = \ensuremath{\mathrm{Pr}}[X\in \mathcal S^j \wedge Q^j(\mathcal S^j)\geq 2^{-\epsilon n} |J\!=\!j]$ either vanishes (if $Q^j(\mathcal S^j) < 2^{-\epsilon n}$) or else equals $Q^j(\mathcal S^j)$. In either case, $\ensuremath{\mathrm{Pr}}[{\mathcal{E}}|J\!=\!j] \geq Q^j(\mathcal S^j) - 2^{-\epsilon n}$ holds and thus the first claim follows by summing over $j \in\setn[m]$ and using the derivation above. Furthermore, with $p=\max_j P_J(j)$, we have $\ensuremath{\mathrm{Pr}}[ \bar{ \mathcal{E}}] = \sum_{j \in \setn[m]} P_J(j) \ensuremath{\mathrm{Pr}}[ \bar{\mathcal{E}}|J\!=\!j] \leq p \sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}}|J\!=\!j] \leq p (m-(\sum_{j\in \setn[m]} Q^j(\mathcal S^j) - 2^{-\epsilon n})) \leq p (1 +(2m-1) \cdot 2^{-\epsilon n})$, and hence $\ensuremath{\mathrm{Pr}}[{\mathcal{E}}] \geq (1-p) - p(2m-1) \cdot 2^{-\epsilon n}$. Regarding the second claim, in case $J=j$, we have \begin{align*} \hmin(X|J\!=\!j, {\mathcal{E}}) &= -\log\left(\max_{x \in \mathcal S^j} \frac{Q^j(x)}{Q^j(\mathcal S^j)}\right) \\& \geq -\log\left(\frac{2^{-(\delta/2 -\epsilon)n}}{Q^j(\mathcal S^j)}\right) = (\delta/2 -\epsilon) n + \log(Q^j(\mathcal S^j)). \end{align*} As $Q^j(\mathcal S^j) \geq 2^{-\epsilon n}$ by definition of ${\mathcal{E}}$, we have $\hmin(X|J\!=\!j, {\mathcal{E}}) \geq (\delta/2 - 2\epsilon) n$. \end{proof}
\section{Main Result and Its Proof}
We are now ready to state and prove our new all-but-one entropic uncertainty relation.
\begin{thm} \label{thm:UR} Let $\rho$ be an arbitrary $n$-qubit state, let $J$ be a random variable over $\setn[m]$ (with arbitrary distribution~$P_J$), and let $X$ be the outcome when measuring $\rho$ in basis $\mathcal{B}_J$. Then, for any $0< \epsilon < \delta /4 $, there exists a random variable $J'$ with joint distribution $P_{JJ'X}$ such that (1) $J$ and $J'$ are independent and (2) there exists an event $\Psi$ with $\ensuremath{\mathrm{Pr}}[\Psi] \geq 1-2\cdot2^{-\epsilon n} $ such that\footnote{Instead of introducing such an event $\Psi$, we could also express the min-entropy bound by means of the {\em smooth} min-entropy of $X$ given $J=j$ and $J'=j'$.} \[ \hmin(X|J=j,J'=j',\Psi) \geq \Bigl(\frac{\delta}{2} - 2 \epsilon\Bigr) n - 1 \] for all $j,j' \in \setn[m]$ with $j \neq j'$ and $P_{JJ'|\Psi}(j,j')>0$. \end{thm}
Note that, as phrased, Theorem~\ref{thm:UR} requires that $J$ is fixed and known, and only then the existence of $J'$ can be guaranteed. This is actually not necessary. By looking at the proof, we see that $J'$ can be defined simultaneously in all $m$ probability spaces $P_{X|J=j}$ with $j \in \setn[m]$, without having assigned a probability distribution to $J$ yet, so that the resulting random variable $J'$, obtained by assigning an {\em arbitrary} probability distribution $P_J$ to $J$, satisfies the claimed properties. This in particular implies that the (marginal) distribution of $J'$ is fully determined by $\rho$. The idea of the proof of Theorem~\ref{thm:UR} is to (try to) define the random variable $J'$ in such a way that the event $J \neq J'$ coincides with the ``good event'' $\mathcal{E}$ from Corollary~\ref{cor:morehadamard}. It then follows immediately from Corollary~\ref{cor:morehadamard} that $\hmin(X|J=j,J' \neq J) \geq (\delta/2-2\epsilon) n$, which is already close to the actual min-entropy bound we need to prove. This approach dictates that if the event $\mathcal{E}$ does not occur, then $J'$ needs to {\em coincide} with~$J$.
Vice versa, if $\mathcal{E}$ does occur, then $J'$ needs to be {\em different} to $J$. However, it is a priori unclear {\em how} to choose $J'$ different to $J$ in case $\mathcal{E}$ occurs. There is only one way to set $J'$ to be equal to $J$, but there are many ways to set $J'$ to be different to $J$ (unless $m = 2$). It needs to be done in such a way that, without conditioning on $\mathcal{E}$ or its complement, $J$ and $J'$ are independent. Somewhat surprisingly, it turns out that the following does the job. To simplify this informal discussion, we assume that the sum of the $m$ probabilities $\ensuremath{\mathrm{Pr}}[{\mathcal{E}} | J\!=\!j ]$ from Corollary~\ref{cor:morehadamard} equals $m-1$ exactly. It then follows that the corresponding complementary probabilities, $\ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J\!=\!j ]$ for the $m$ different choices of $j \in \setn[m]$, add up to $1$ and thus form a probability distribution. $J'$ is now chosen, in the above spirit depending on the event $\mathcal{E}$, so that its marginal distribution $P_{J'}$ coincides with this probability distribution: $P_{J'}(j') = \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J\!=\!j' ]$ for all $j'\in \setn[m]$. Thus, in case the event $\mathcal{E}$ occurs, $J'$ is chosen according to this distribution but conditioned on being different to the value $j$, taken on by $J$. The technical details, and how to massage the argument in case the sum of the $\ensuremath{\mathrm{Pr}}[{\mathcal{E}} | J\!=\!j ]$'s is not exactly $m-1$, are worked out in the proof below.
\begin{proof}[Proof of Theorem~\ref{thm:UR}] From \refcor{morehadamard} we know that for any $0<\epsilon<\delta/4$, there exists an event $\mathcal{E}$ such that $\sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\mathcal{E} | J=j] = m-1- \alpha$, and thus $\sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j] = 1 + \alpha$, for $-1 \leq \alpha \leq (2m-1) 2^{-\epsilon n}$. We make a case distinction between $\alpha=0$, $\alpha>0$ and $\alpha <0$; we start with the case $\alpha = 0$ and subsequently prove the other two cases by reducing them to the case $\alpha = 0$, by ``inflating'' and ``deflating'' the event $\mathcal{E}$ appropriately. The approach for the case $\alpha = 0$ is to define $J'$ in such a way that $\mathcal{E} \iff J \neq J'$, i.e., the event $ J \neq J'$ coincides with the event $\mathcal{E}$. The min-entropy bound from \refcor{morehadamard} then immediately translates to $\hmin(X | J=j, J' \neq J) \geq (\delta/2-2\epsilon) n$, and to $\hmin(X|J=j, J'=j') \geq (\delta/2-2\epsilon) n$ for $j' \neq j$ with $P_{JJ'}(j,j')>0$, as we will show. What is not obvious about the approach is how to define $J'$ when it is supposed to be different from $J$, i.e., when the event $\mathcal{E}$ occurs, so that in the end $J$ and $J'$ are independent. Formally, we define $J'$ by means of the following conditional probability distributions: \[ P_{J' | J X \bar{\mathcal{E}}}(j'|j,x) := \left\{ \begin{array}{cl} 1 & \text{if } j=j' \\ 0 & \text{if } j \neq j' \end{array}\right. \quad \text{and} \quad P_{J'|J X \mathcal{E}}(j'|j,x) := \left\{ \begin{array}{cl} 0 & \text{if } j=j' \\ \displaystyle \frac{\ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j']}{\ensuremath{\mathrm{Pr}}[\mathcal{E}|J=j]} & \text{if } j \neq j' \end{array}\right. \] We assume for the moment that the denominator in the latter expression does not vanish for any $j$; we take care of the case where it does later.
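As a quick numerical sanity check of this construction (a toy illustration only, not part of the formal argument; the values of $P_J$ and $\ensuremath{\mathrm{Pr}}[\mathcal{E}|J\!=\!j]$ below are made up and chosen so that $\alpha = 0$, and $X$ is suppressed since the conditional distribution of $J'$ above does not depend on it), the following Python sketch builds the joint distribution of $J$, the indicator of $\mathcal{E}$ and $J'$ from the two conditional distributions just defined and verifies that $J$ and $J'$ come out independent, that $P_{J'}(j') = \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}}|J\!=\!j']$, and that $\mathcal{E}$ occurs exactly when $J \neq J'$.
\begin{verbatim}
import numpy as np

m = 3
# Made-up ingredients for the alpha = 0 case: an arbitrary P_J and
# conditional probabilities Pr[E | J=j] whose complements sum to 1.
P_J = np.array([0.5, 0.3, 0.2])
P_E = np.array([0.9, 0.7, 0.4])        # sums to m - 1 = 2
P_notE = 1.0 - P_E                     # sums to 1

# Joint distribution of (J, "E occurs", J'): if E does not occur, J' = J;
# if E occurs, J' = j' != J with probability Pr[not E | J=j'] / Pr[E | J=j].
joint = np.zeros((m, 2, m))
for j in range(m):
    joint[j, 0, j] = P_J[j] * P_notE[j]
    for jp in range(m):
        if jp != j:
            joint[j, 1, jp] = P_J[j] * P_E[j] * P_notE[jp] / P_E[j]

P_JJp = joint.sum(axis=1)              # marginal of (J, J')
P_Jp = P_JJp.sum(axis=0)               # marginal of J'

assert np.isclose(joint.sum(), 1.0)                 # a proper distribution
assert np.allclose(P_JJp, np.outer(P_J, P_Jp))      # J and J' independent
assert np.allclose(P_Jp, P_notE)                    # P_J'(j') = Pr[not E|J=j']
assert all(joint[j, 1, j] == 0 for j in range(m))   # E  implies  J != J'
assert all(joint[j, 0, jp] == 0
           for j in range(m) for jp in range(m) if jp != j)  # not E => J = J'
\end{verbatim}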
Trivially, $P_{J'|JX\bar{\mathcal{E}}}$ is a proper distribution, with non-negative probabilities that add up to $1$, and the same holds for $P_{J'|JX\mathcal{E}}$: \begin{align*} \sum_{j' \in \setn[m]} P_{J'|JX \mathcal{E}} = \sum_{j' \in \setn[m] \setminus \set{j}} P_{J'|JX \mathcal{E}} &= \sum_{j' \in \setn[m] \setminus \set{j}} \frac{\ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j']}{\ensuremath{\mathrm{Pr}}[\mathcal{E} |J=j]} = 1 \end{align*} where we used that $\sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j] = 1$ (because $\alpha = 0$) in the last equality. Furthermore, it follows immediately from the definition of $J'$ that $\bar{\mathcal{E}} \implies J=J'$ and $\mathcal{E} \implies J \neq J'$. Hence, $\mathcal{E} \iff J \neq J'$, and thus the bound from \refcor{morehadamard} translates to $\hmin(X | J=j, J' \neq J) \geq (\delta/2-2\epsilon) n$. It remains to argue that $J'$ is independent of $J$, and that the bound also holds for $\hmin(X|J=j, J'=j')$ whenever $j \neq j'$. The latter follows immediately from the fact that conditioned on $J \neq J'$ (which is equivalent to $\mathcal{E}$), $X,J$ and $J'$ form a Markov chain $X \leftrightarrow J \leftrightarrow J'$, and thus, given $J=j$, additionally conditioning on $J'=j'$ does not change the distribution of $X$. For the independence of $J$ and $J'$, consider the joint probability distribution of $J$ and $J'$, given by \begin{align*} P_{JJ'} (j,j') &= P_{J'J\mathcal{E}}(j',j) + P_{J'J\bar{\mathcal{E}}}(j',j)\\ &= P_J(j) \ensuremath{\mathrm{Pr}}[ \mathcal{E} | J=j] P_{J' | J \mathcal{E}}(j'|j) + P_J(j) \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j] P_{J' | J \bar{\mathcal{E}}}(j' | j) \\&= P_J(j) \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}}|J=j'], \end{align*} where the last equality follows by separately analyzing the cases $j=j'$ and $j \neq j'$. It follows immediately that the marginal distribution of $J'$ is $P_{J'}(j') = \sum_{j } P_{JJ'}(j,j') = \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}}|J=j']$, and thus $P_{JJ'} = P_J \cdot P_{J'}$. What is left to do for the case $\alpha = 0$ is to deal with the case where there exists $j^*$ with $\ensuremath{\mathrm{Pr}}[\mathcal{E} | J=j^*]=0$. Since $\sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j] = 1$, it holds that $\ensuremath{\mathrm{Pr}}[\bar{\mathcal{E}} | J=j] =0 $ for $j\neq j^*$. This motivates defining $J'$ as $J' := j^*$ with probability $1$. Note that this definition directly implies that $J'$ is independent of $J$. Furthermore, by the above observations: $\mathcal{E} \iff J \neq J'$. This concludes the case $\alpha=0$. Next, we consider the case $\alpha > 0$. The idea is to ``inflate'' the event $\mathcal{E}$ so that $\alpha$ becomes $0$, i.e., to define an event $\mathcal{E}'$ that contains $\mathcal{E}$ (meaning that $\mathcal{E} \implies \mathcal{E}'$) so that $\sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\mathcal{E}'|J=j] = m-1$, and to define $J'$ as in the case $\alpha = 0$ (but now using $\mathcal{E}'$). Formally, we define $\mathcal{E}'$ as the disjoint union $\mathcal{E}' = \mathcal{E} \vee \mathcal{E}_\circ$ of $\mathcal{E}$ and an event $\mathcal{E}_\circ$.
The event $\mathcal{E}_\circ$ is defined by means of $\ensuremath{\mathrm{Pr}}[\mathcal{E}_\circ|\mathcal{E}, J=j,X=x] = 0$, so that $\mathcal{E}$ and $\mathcal{E}_\circ$ are indeed disjoint, and $\ensuremath{\mathrm{Pr}}[\mathcal{E}_\circ|J=j,X=x] = \alpha/m$, so that indeed $$ \sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\mathcal{E}'|J=j] = \sum_{j \in \setn[m]} (\ensuremath{\mathrm{Pr}}[\mathcal{E}|J=j]+\ensuremath{\mathrm{Pr}}[\mathcal{E}_\circ|J=j]) = (m-1-\alpha)+\alpha = m-1 \, . $$ We can now apply the analysis of the case $\alpha = 0$ to conclude the existence of $J'$, independent of $J$, such that $J \neq J' \iff \mathcal{E}'$ and thus $(J \neq J') \wedge \bar{\mathcal{E}}_\circ \iff \mathcal{E}'\wedge \bar{\mathcal{E}}_\circ \iff \mathcal{E}$. Setting $\Psi := \bar{\mathcal{E}}_\circ$, it follows that \[ \hmin(X|J=j,J \neq J',\Psi) = \hmin(X|J=j,\mathcal{E}) \geq (\delta/2-2\epsilon) n \, , \] where $\ensuremath{\mathrm{Pr}}[\Psi] = 1 - \ensuremath{\mathrm{Pr}}[\mathcal{E}_\circ] = 1 - \alpha/m \geq 1 - (2m-1) 2^{-\epsilon n}/m \geq 1 - 2 \cdot 2^{-\epsilon n}$. Finally, using similar reasoning as in the case $\alpha = 0$, it follows that the same bound holds for $\hmin(X|J=j, J'=j',\Psi)$ whenever $j \neq j'$. This concludes the case $\alpha > 0$. Finally, we consider the case $\alpha < 0$. The approach is the same as above, but now $\mathcal{E}'$ is obtained by ``deflating'' $\mathcal{E}$. Specifically, we define $\mathcal{E}'$ by means of $\ensuremath{\mathrm{Pr}}[\mathcal{E}'|\bar{\mathcal{E}},J=j,X=x ]=\ensuremath{\mathrm{Pr}}[\mathcal{E}'|\bar{\mathcal{E}} ] =0$, so that $\mathcal{E}'$ is contained in $\mathcal{E}$, and $\ensuremath{\mathrm{Pr}}[\mathcal{E}' | \mathcal{E}, J=j, X=x] = \ensuremath{\mathrm{Pr}}[\mathcal{E}' | \mathcal{E}] = \frac{m-1}{m-1-\alpha}$, so that $$ \sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\mathcal{E}'|J=j] = \sum_{j \in \setn[m]} \ensuremath{\mathrm{Pr}}[\mathcal{E}'| \mathcal{E}]\cdot \ensuremath{\mathrm{Pr}}[ \mathcal{E} |J=j] = m - 1 \, . $$ Again, from the $\alpha = 0$ case we obtain $J'$, independent of $J$, such that the event $J \neq J'$ is equivalent to the event $\mathcal{E}'$. It follows that \begin{align*} \hmin(&X|J=j,J \neq J') = \hmin(X|J=j,\mathcal{E}') = \hmin(X|J=j,\mathcal{E}',\mathcal{E}) \\ &\geq \hmin(X|J=j,\mathcal{E}) - \log\bigl(1/\ensuremath{\mathrm{Pr}}[\mathcal{E}'|\mathcal{E},J=j]\bigr) \geq (\delta/2-2\epsilon) n - 1\, , \end{align*} where the second equality holds because $\mathcal{E}' \implies \mathcal{E}$, the first inequality holds because additionally conditioning on $\mathcal{E}'$ increases the probabilities of $X$ conditioned on $J=j$ and $\mathcal{E}$ by at most a factor $1/\ensuremath{\mathrm{Pr}}[\mathcal{E}'|\mathcal{E},J=j]$, and the last inequality holds by \refcor{morehadamard} and because $\ensuremath{\mathrm{Pr}}[\mathcal{E}'|\mathcal{E},J=j] = \frac{m-1}{m-1-\alpha} \geq \frac12$, where the latter holds since $\alpha \geq -1$. Finally, using similar reasoning as in the previous cases, it follows that the same bound holds for $\hmin(X|J=j, J'=j')$ whenever $j \neq j'$; in this case $\Psi$ can simply be taken to be the event that occurs with certainty. This concludes the proof. \end{proof}
\section{Constructing Good Families of Bases} \label{sec:goodfam}
Here, we discuss some interesting choices for the family $\set {{\mathcal{B}}_1,\ldots,{\mathcal{B}}_m}$ of bases. We say that such a family is ``good'' if $\delta = - \frac{1}{n}\log(c^2)$ converges to a strictly positive constant as $n$ tends to infinity. There are various ways to construct such families.
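Before turning to concrete constructions, the following small Python sketch (illustrative only, and brute force, so feasible only for a handful of qubits) computes the maximum overlap $c$ and the resulting $\delta$ directly from their definitions, for bases that measure each qubit either in the computational or in the Hadamard basis; here $\log$ is read as base~$2$. For the all-computational and all-Hadamard pair of bases it recovers $c = 2^{-n/2}$ and hence $\delta = 1$.
\begin{verbatim}
import itertools
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard
I2 = np.eye(2)

def basis(pattern):
    """Columns are the vectors of the n-qubit basis that uses the Hadamard
    basis on positions marked 'x' and the computational basis on '+'."""
    U = np.array([[1.0]])
    for ch in pattern:
        U = np.kron(U, H if ch == 'x' else I2)
    return U

def overlap_and_delta(patterns):
    n = len(patterns[0])
    mats = [basis(p) for p in patterns]
    c = max(np.abs(Bj.conj().T @ Bk).max()
            for Bj, Bk in itertools.combinations(mats, 2))
    return c, -np.log2(c ** 2) / n             # log taken base 2

c, delta = overlap_and_delta(['+++', 'xxx'])   # n = 3 qubits
print(c, 2 ** (-3 / 2), delta)                 # c = 2^(-n/2), delta = 1
\end{verbatim}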
For example, a family obtained through sampling according to the Haar measure will be good with overwhelming probability (a precise statement, in which ``good'' means $\delta=0.9$, can be found at the very end of the proof of Theorem 2.5 of \cite{FHS11}). The best possible constant $\delta = 1$ is achieved for a family of \emph{mutually unbiased bases}. However, for arbitrary quantum systems (i.e., not necessarily multi-qubit systems) it is not well understood how large such a family may be, beyond that its size cannot exceed the dimension plus $1$. In the upcoming section, we will use the following simple and well-known construction. For an arbitrary binary code $\Tset{C} \subset \set{+,\times}^n$ of size $m$, minimum distance $d$ and encoding function $\mathfrak{c}\xspace:\setn[m]\rightarrow \mathcal{C}$, we can construct a family $\set{{\mathcal{B}}_1,\ldots,{\mathcal{B}}_m}$ of bases as follows. We identify the $j$th codeword, i.e. $\mathfrak{c}\xspace(j) = (c_1,\ldots,c_n)$ for $j\in \setn[m]$, with the basis ${\mathcal{B}}_j = \Set{\ket{x}_{\mathfrak{c}\xspace(j)}}{x \in \set{0,1}^n}= \Set{ (H^{c_1} \!\otimes\xspace \cdots \otimes\xspace\! H^{c_n}) \ket{x}}{x \in \set{0,1}^n}$. In other words, ${\mathcal{B}}_j$ measures qubit-wise in the computational or the Hadamard basis, depending on the corresponding coordinate of $\mathfrak{c}\xspace(j)$. It is easy to see that the maximum overlap $c$ of the family obtained this way is directly related to the minimum distance of $\mathcal{C}$, namely $\delta = -\frac 1 n \log(c^2)$ coincides with the relative minimal distance $d/n$ of $\mathcal{C}$. Hence, choosing an asymptotically good code immediately yields a good family of bases. \chapter{Application: A New Quantum Identification Scheme} \label{sec:identification} Our main application of the new uncertainty relation is in proving security of a new identification scheme in the quantum setting. The goal of (password-based) identification is to ``prove'' knowledge of a password $w$ (or some other low-entropy key, like a PIN) without giving $w$ away. More formally, given a user \ensuremath{\mathsf{U}}\xspace and a server \ensuremath{\mathsf{S}}\xspace that hold a pre-agreed password $w \in \Tset{W}$, \ensuremath{\mathsf{U}}\xspace wants to convince \ensuremath{\mathsf{S}}\xspace that he indeed knows $w$, but in such a way that he gives away as little information on $w$ as possible in case he is actually interacting with a dishonest server \ensuremath{\mathsf{S}^*}\xspace. In~\cite{DFSS07}, Damg{\aa}rd {\emph{et al.}}\xspace\ showed the existence of a secure identification scheme in the {\em bounded-quantum-storage} model. The scheme involves the communication of qubits, and is secure against an arbitrary dishonest server \ensuremath{\mathsf{S}}\xspace that has limited quantum storage capabilities and can only store a certain fraction of the communicated qubits, whereas the security against a dishonest user \ensuremath{\mathsf{U}^*}\xspace holds unconditionally. On the negative side, it is known that {\em without} any restriction on (one of) the dishonest participants, secure identification is impossible (even in the quantum setting). Indeed, if a quantum scheme is unconditionally secure against a dishonest user, then unavoidably it can be broken by a dishonest server with unbounded quantum-storage and unbounded quantum-computing power; this follows essentially from \cite{lo96} (see also~\cite{DFSS07}). 
Thus, the best one can hope for (for a scheme that is unconditionally secure against a dishonest user) is that in order to break it, unbounded quantum storage {\em and} unbounded quantum-computing power is {\em necessary} for the dishonest server. This is not the case for the scheme of~\cite{DFSS07}: storing all the communicated qubits as they are, and measuring them qubit-wise in one or the other basis at the end, completely breaks the scheme. Thus, no quantum computing power at all is necessary to break the scheme, only sufficient quantum storage. In this section, we propose a new identification scheme, which can be regarded as a first step towards closing the above gap. Like the scheme from \cite{DFSS07}, our new scheme is secure against an unbounded dishonest user and against a dishonest server with limited quantum storage capabilities. The new uncertainty relation forms the main ingredient in the user-security proof in the BQSM. Furthermore, and in contrast to~\cite{DFSS07}, a minimal amount of quantum computation power is {\em necessary} to break the scheme, beyond sufficient quantum storage. Indeed, next to the security against a dishonest server with bounded quantum storage, we also prove---in \refsec{usecsqom}---security against a dishonest server that can store all the communicated qubits, but is restricted to measure them qubit-wise (in arbitrary qubit bases) at the end of the protocol execution. Thus, beyond sufficient quantum storage, quantum computation that involves {\em pairs} of qubits is necessary (and in fact sufficient) to break the new scheme. Restricting the dishonest server to qubit-wise measurements may look restrictive; however, we stress that in order to break the scheme, the dishonest server needs to store many qubits {\em and} perform quantum operations on them that go beyond single-qubit operations; this may indeed be considerably more challenging than storing many qubits and measuring them qubit-wise. Furthermore, it turns out that proving security against such a dishonest server that is restricted to qubit-wise measurements is already challenging; indeed, standard techniques do not seem applicable here. Therefore, handling a dishonest server that can, say, act on {\em blocks} of qubits, must be left to future research. \section{Security Definitions} We first formalize the security properties we want to achieve. We borrow the definitions from \cite{DFSS07}, which are argued to be ``the right ones'' in \cite{FS09}. \begin{definition}[Correctness] \label{def:correctness} An identification protocol is said to be \emph{$\varepsilon$-correct} if, after an execution by honest \ensuremath{\mathsf{U}}\xspace and honest \ensuremath{\mathsf{S}}\xspace, \ensuremath{\mathsf{S}}\xspace accepts with probability $1-\varepsilon$. \end{definition} \begin{definition}[User security] \label{def:usec} An identification protocol for two parties \ensuremath{\mathsf{U}}\xspace, \ensuremath{\mathsf{S}}\xspace is $\varepsilon$-secure for the user \ensuremath{\mathsf{U}}\xspace against (dishonest) server \ensuremath{\mathsf{S}^*}\xspace if the following holds: If the initial state of \ensuremath{\mathsf{S}^*}\xspace is independent of $W$, then its state $E$ after execution of the protocol is such that there exists a random variable $W^\prime$ that is independent of $W$ and such that \[ \rho_{WW^\prime E|W\neq W^\prime} \approx_\varepsilon \rho_{W \leftrightarrow W^\prime \leftrightarrow E | W\neq W^\prime}. 
\] \end{definition}
\begin{definition}[Server security]\label{def:servsec} An identification protocol for two parties \ensuremath{\mathsf{U}}\xspace, \ensuremath{\mathsf{S}}\xspace is $\varepsilon$-secure for the server \ensuremath{\mathsf{S}}\xspace against (dishonest) user \ensuremath{\mathsf{U}^*}\xspace if the following holds: whenever the initial state of \ensuremath{\mathsf{U}^*}\xspace is independent of $W$, then there exists a random variable $W^\prime$ (possibly $\perp$) that is independent of $W$ such that if $W\neq W^\prime$ then \ensuremath{\mathsf{S}}\xspace accepts with probability at most $\varepsilon$. Furthermore, the common state $\rho_{W E}$ after execution of the protocol (including \ensuremath{\mathsf{S}}\xspace's announcement to accept or reject) satisfies \[ \rho_{WW^\prime E|W\neq W^\prime} \approx_\varepsilon \rho_{W \leftrightarrow W^\prime \leftrightarrow E | W\neq W^\prime}. \] \end{definition}
We will prove the user security of the protocol in two different models, in which different assumptions are made. Because these assumptions are in some sense ``orthogonal'', the hope is that if security breaks down in one model because its assumption fails, the protocol remains secure in the other model.
\section{Description of the New Quantum Identification Scheme} \label{sec:schemedescr}
Let $\Tset{C} \subset \set{+,\times}^n$ be a binary code with minimum distance $d$, and let $\mathfrak{c}\xspace: \Tset{W} \rightarrow \Tset{C}$ be its encoding function. Let $m:=|\Tset{W}|$, and typically, $m < 2^n$. Let $\mathcal{F}$ be the class of all linear functions from $\set{0,1}^n$ to $\set{0,1}^\ell$, where $\ell < n$, represented as $\ell \times n$ binary matrices. It is well-known that this class is two-universal. Furthermore, let $\mathcal{G}$ be a strongly two-universal class of hash functions from $\Tset{W} $ to $\{0,1\}^\ell$. Protocol {\bfseries\texttt{Q-ID}}\xspace is shown below.
\begin{protocol} \begin{enumerate*}
\item \ensuremath{\mathsf{U}}\xspace picks $x\in \{0,1\}^n$ independently and uniformly at random and sends $\ket{x}_{\mathfrak{c}\xspace(w)}$ to \ensuremath{\mathsf{S}}\xspace.
\item \ensuremath{\mathsf{S}}\xspace measures in basis $\mathfrak{c}\xspace(w)$. Let $x^\prime$ be the outcome.
\item \ensuremath{\mathsf{U}}\xspace picks $f \in \mathcal{F}$ independently and uniformly at random and sends it to \ensuremath{\mathsf{S}}\xspace.
\item \ensuremath{\mathsf{S}}\xspace picks $g\in \mathcal{G}$ independently and uniformly at random and sends it to \ensuremath{\mathsf{U}}\xspace.
\item \ensuremath{\mathsf{U}}\xspace computes and sends $z:=f(x)\oplus g(w) $ to \ensuremath{\mathsf{S}}\xspace.
\item \ensuremath{\mathsf{S}}\xspace accepts if and only if $z=z^\prime$, where $z^\prime :=f(x') \oplus g(w)$.
\end{enumerate*} \caption{{\bfseries\texttt{Q-ID}}\xspace} \end{protocol}
Our scheme is quite similar to the scheme in~\cite{DFSS07}. The difference is that in our scheme, both parties, \ensuremath{\mathsf{U}}\xspace and \ensuremath{\mathsf{S}}\xspace, use $\mathfrak{c}\xspace(w)$ as basis for preparing/measuring the qubits in steps (1) and (2), whereas in~\cite{DFSS07}, only \ensuremath{\mathsf{S}}\xspace uses $\mathfrak{c}\xspace(w)$ and \ensuremath{\mathsf{U}}\xspace uses a {\em random} basis $\theta \in \set{+,\times}^n$ instead, and then \ensuremath{\mathsf{U}}\xspace communicates $\theta$ to \ensuremath{\mathsf{S}}\xspace and all the positions where $\theta$ and $\mathfrak{c}\xspace(w)$ differ are dismissed.
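For concreteness, the following minimal Python sketch runs an honest, noiseless execution of {\bfseries\texttt{Q-ID}}\xspace at the classical level; the quantum transmission of steps (1)--(2) is idealized as $x' = x$, and the affine hash family standing in for $\mathcal{G}$ is merely an illustrative choice of a strongly two-universal class, not one mandated by the scheme.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, ell, m = 16, 4, 4                       # toy parameters, for illustration

def run_qid(w):
    # Steps (1)-(2): U sends |x> in basis c(w) and the honest S measures in
    # the same basis; in the ideal noiseless case this simply gives x' = x,
    # so the concrete code C plays no role in an honest run.
    x = rng.integers(0, 2, size=n)
    x_prime = x.copy()
    # Step (3): F uniform over the two-universal class of ell x n matrices.
    F = rng.integers(0, 2, size=(ell, n))
    # Step (4): g from the strongly two-universal affine family over GF(2),
    # g(w) = A . bin(w) XOR b.
    k = max(m.bit_length(), 1)
    A = rng.integers(0, 2, size=(ell, k))
    b = rng.integers(0, 2, size=ell)
    w_bits = np.array([(w >> i) & 1 for i in range(k)])
    g_w = (A @ w_bits + b) % 2
    # Steps (5)-(6): U sends z = F(x) XOR g(w); S accepts iff z = F(x') XOR g(w).
    z = (F @ x + g_w) % 2
    z_prime = (F @ x_prime + g_w) % 2
    return np.array_equal(z, z_prime)

assert all(run_qid(w) for w in range(m))   # perfect correctness, epsilon = 0
\end{verbatim}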
Thus, in some sense, our new scheme is more natural since why should \ensuremath{\mathsf{U}}\xspace use a random basis when he knows the right basis (i.e., the one that \ensuremath{\mathsf{S}}\xspace uses)? In~\cite{DFSS07}, using a random basis (for \ensuremath{\mathsf{U}}\xspace) was crucial for their proof technique, which is based on an entropic uncertainty relation of a certain form, which asks for a random basis. However, using a random basis, which then needs to be announced, renders the scheme insecure against a dishonest server \ensuremath{\mathsf{S}^*}\xspace that is capable of storing all the communicated qubits and then measure them in the right basis once it has been announced. Our new uncertainty relation applies to the case where an $n$-qubit state is measured in a basis that is sampled from a code $\Tset{C}$, and thus is applicable to the new scheme where \ensuremath{\mathsf{U}}\xspace uses basis $\mathfrak{c}\xspace(w) \in \Tset{C}$. Since this basis is common knowledge (to the honest participants), it does not have to be communicated, and as such a straightforward store-and-then-measure attack as above does not apply. A downside of our scheme is that security only holds in case of a perfect quantum source, which emits exactly one qubit when triggered. Indeed, a multi-photon emission enables a dishonest server \ensuremath{\mathsf{S}^*}\xspace to learn information on the basis used, and thus gives away information on the password $w$ in our scheme. As such, our scheme is currently mainly of theoretical interest. It is straightforward to verify that (in the ideal setting with perfect sources, no noise, etc.) {\bfseries\texttt{Q-ID}}\xspace satisfies the correctness property (Definition~\ref{def:correctness}) perfectly, i.e.\ $\varepsilon=0$. In the remaining sections, we prove (unconditional) security against a dishonest user, and we prove security against two kinds of restricted dishonest servers. First, against a dishonest server that has limited quantum storage capabilities, and then against a dishonest server that can store an unbounded number of qubits, but can only store and measure them qubit-wise. \section{(Unconditional) Server Security} First, we claim security of {\bfseries\texttt{Q-ID}}\xspace against an arbitrary dishonest user \ensuremath{\mathsf{U}^*}\xspace (that is merely restricted by the laws of quantum mechanics). \label{sec:serversec} \begin{thm} \label{thm:serversec} {\bfseries\texttt{Q-ID}}\xspace is $\varepsilon$-secure for the server with $\varepsilon=\binom{m}{2}2^{-\ell}$. \end{thm} \begin{proof} Clearly, from the steps (1) to (5) in the protocol {\bfseries\texttt{Q-ID}}\xspace, \ensuremath{\mathsf{U}^*}\xspace learns no information on $W$ at all. The only information he may learn is by observing whether \ensuremath{\mathsf{S}}\xspace accepts or not in step (6). Therefore, in order to prove server security, it suffices to show the existence of a random variable $W'$, independent of $W$, with the property that \ensuremath{\mathsf{S}}\xspace rejects whenever $W' \neq W$ (except with probability $\frac12 m(m-1) 2^{-\ell}$). We may assume that $\mathcal{W} = \set{1,\ldots,m}$. Let $\rho_{WX'FGZE}$ be the state describing the password $W$, the variables $X',F,G$ and $Z$ occurring in the protocol from the server's point of view, and \ensuremath{\mathsf{U}^*}\xspace's quantum state $E$ {\em before} observing \ensuremath{\mathsf{S}}\xspace's decision to accept or reject. 
For any $w \in \mathcal{W}$, consider the state $\rho^w_{X'FGZE} := \rho_{X'FGZE|W=w}$. Note that the reduced state $\rho^w_{FGZE}$ is the same for any $w \in \mathcal{W}$; this follows from the assumption that \ensuremath{\mathsf{U}^*}\xspace's initial state is independent of $W$ and because $F,G$ and $Z$ are produced independently of $W$. We may thus write $\rho^w_{X'FGZE}$ as $\rho_{X'_w F G Z E}$, and we can ``glue together'' the states $\rho_{X'_w F G Z E}$ for all choices of $w$. This means, there exists a state $\rho_{X'_1 \cdots X'_m F G Z E_1 \cdots E_m}$ that correctly reduces to $\rho_{X'_w F G Z E_w} = \rho_{X'_w F G Z E}$ for any $w \in \mathcal{W}$, and conditioned on $FGZ$, we have that $X'_i E_i$ is independent of $X'_j E_j$ for any $i \neq j \in \mathcal{W}$. It is easy to see that for any $i \neq j \in \mathcal{W}$, $G$ is independent of $X'_i,X'_j$ and $F$. Therefore, by the strong two-universality of $G$, for any $i \neq j$ it holds that $Z'_i \neq Z'_j$ except with probability $2^{-\ell}$, where $Z'_w = FX'_w+G(w)$ for any $w$. Therefore, by the union bound, $Z'_1,\ldots,Z'_m$ are pairwise distinct and thus $Z$ can coincide with at most one of the $Z'_w$'s, except with probability $\varepsilon = \frac12 m(m-1) 2^{-\ell}$. Let $W'$ be defined such that $Z = Z'_{W'}$; if there is no such $Z'_w$ then we let $W' = \: \perp$, and if there are more than one then we let it be the first. Recall, the latter can happen with probability at most $\varepsilon$. We now extend the state $\rho_{X'_1 \cdots X'_m F G Z W' E_1 \cdots E_m}$ by $W$, chosen independently according to $P_W$. Clearly $W'$ is independent of $W$. Furthermore, except with probability at most $\varepsilon$, if $W \neq W'$ then $Z \neq Z'_{W}$. Finally note that $\rho_{X'_W F G Z W' W E_W}$ is such that $\rho_{X'_W F G Z W E_W} = \sum_w P_W(w)\rho_{X'_w F G Z E_w} \otimes \ketbra{w}{w} = \sum_w P_W(w)\rho^w_{X' F G Z E} \otimes \ketbra{w}{w} = \rho_{X' F G Z W E}$. Thus, also with respect to the state $\rho_{X' F G Z W E}$ there exist $W'$, independent of $W$, such that if $W' \neq W$ then $Z \neq Z'$ except with probability at most $\varepsilon$. This was to be shown. \end{proof} \section{User Security in the Bounded-Quantum-Storage Model} \label{sec:bqsm} Next, we consider a dishonest server \ensuremath{\mathsf{S}^*}\xspace, and first prove security of {\bfseries\texttt{Q-ID}}\xspace in the {\em bounded-quantum-storage model}. In this model, as introduced in~\cite{DFSS05}, it is assumed that the adversary (here \ensuremath{\mathsf{S}^*}\xspace) cannot store more than a fixed number of qubits, say~$q$. The security proof of {\bfseries\texttt{Q-ID}}\xspace in the bounded-quantum-storage model is very similar to the corresponding proof in \cite{DFSS07} for their scheme, except that we use the new uncertainty relation from \refsec{moreunbiasedbases}. Furthermore, since our uncertainty relation (\refthm{UR}) already guarantees the existence of the random variable $W'$ as required by the security property, no {\em entropy-splitting} as in~\cite{DFSS07} is needed. In the following, let $\delta:=d/n$, i.e. the relative minimum distance of $\mathcal{C}$. \begin{thm}\label{thm:bqsm} Let \ensuremath{\mathsf{S}^*}\xspace be a dishonest server whose quantum memory is at most $q$ qubits at step 3 of {\bfseries\texttt{Q-ID}}\xspace. 
Then, for any $0<\kappa< \delta / 4$, {\bfseries\texttt{Q-ID}}\xspace is $\varepsilon$-secure for the user with \[ \varepsilon = 2^{-\frac 1 2 ((\delta/2-2\kappa) n -1 - q - \ell)}+ 4\cdot 2^{-\kappa n}. \] \end{thm}
\begin{proof} We consider and analyze a purified version of {\bfseries\texttt{Q-ID}}\xspace where in step (1), instead of sending $\ket{X}_{\mathfrak{c}\xspace(W)}$ to \ensuremath{\mathsf{S}^*}\xspace for a uniformly distributed $X$, \ensuremath{\mathsf{U}}\xspace prepares a fully entangled state $2^{-n/2}\sum_x \ket{x}\ket{x}$ and sends the second register to \ensuremath{\mathsf{S}^*}\xspace while keeping the first. Then, in step (3), once the memory bound has been applied, \ensuremath{\mathsf{U}}\xspace measures his register in the basis $\mathfrak{c}\xspace(W)$ in order to obtain $X$. Note that this procedure produces exactly the same common state as in the original (non-purified) version of {\bfseries\texttt{Q-ID}}\xspace. Thus, we may just as well analyze this purified version. The state of \ensuremath{\mathsf{S}^*}\xspace consists of his initial state and his part of the EPR pairs, and may include an additional ancilla register. Before the memory bound applies, \ensuremath{\mathsf{S}^*}\xspace may perform any unitary transformation on his composite system. When the memory bound is applied (just before step (3) is executed in {\bfseries\texttt{Q-ID}}\xspace), \ensuremath{\mathsf{S}^*}\xspace has to measure all but $q$ qubits of his system. Let the classical outcome of this measurement be denoted by $y$, and let $E'$ be the remaining quantum state of at most $q$ qubits. The common state has collapsed to an $(n+q)$-qubit state and depends on $y$; the analysis below holds for any $y$. Next, \ensuremath{\mathsf{U}}\xspace measures his $n$-qubit part of the common state in basis $\mathfrak{c}\xspace(W)$; let $X$ denote the classical outcome of this measurement. By our new uncertainty relation (\refthm{UR}), followed by the min-entropy chain rule of \reflem{bqsmchain} (to take the $q$ stored qubits into account), it follows that there exists $W'$, independent of $W$, and an event $\Psi$ that occurs with probability at least $1-2\cdot 2^{-\kappa n}$, such that \[ \hmin(X|E',W=w, W'=w', \Psi ) \geq (\delta/2-2\kappa) n -1-q \] for any $w,w'$ such that $w\neq w'$. Because \ensuremath{\mathsf{U}}\xspace chooses $F$ independently at random from a two-universal class, privacy amplification (Theorem~\ref{thm:PA}) guarantees that \[ \deltauni(F(X)|E'F,W=w,W'=w') \leq \varepsilon' := \frac 1 2 \cdot 2^{-\frac 1 2 ((\delta/2-2\kappa) n -1 -q-\ell)} + 2\cdot 2^{-\kappa n}, \] for any $w,w'$ such that $w\neq w'$. Recall that $Z=F(X) \oplus\xspace G(W)$. By security of the one-time pad it follows that \begin{equation} \deltauni(Z|E'FG,W=w,W'=w') \leq \varepsilon', \label{eq:zuni} \end{equation} for any $w,w'$ such that $w\neq w'$.
To prove the claim, we need to bound, \begin{align} \nonumber\delta& ( \rho_{WW'E |W\neq W'} , \rho_{W \leftrightarrow W' \leftrightarrow E|W\neq W'} ) \\ \nonumber&=\tfrac12 \|\rho_{WW'E'FGZ|W\neq W'} - \rho_{W \leftrightarrow W' \leftrightarrow E'FGZ|W\neq W'} \|_1 \\ \nonumber& \leq \tfrac12 \|\rho_{WW'E'FGZ|W\neq W'} - \rho_{WW'E'FG|W\neq W'} \otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace \|_1 \\ \label{eq:triang}&\hspace{2em} + \tfrac12 \| \rho_{WW'E'FG|W\neq W'}\otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace - \rho_{ W \leftrightarrow W' \leftrightarrow E'FGZ|W\neq W'} \|_1 \end{align} where the equality follows by definition of trace distance (\refdef{tracedist}) and the fact that the output state $E$ is obtained by applying a unitary transformation to the set of registers ($E'$, $F$, $G$, $W'$, $Z$). The inequality is the triangle inequality; in the remainder of the proof, we will show that both terms in \refeq{triang} are upper bounded by $\varepsilon'$. \begin{align*} \tfrac12 & \|\rho_{WW'E'FGZ|W\neq W'} - \rho_{WW'E'FG|W\neq W'} \otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace \|_1\\ & = \sum_{w \neq w'} P_{WW'|W\neq W'}(w,w')\, \deltauni(Z|E'FG,W=w,W'=w')\leq \varepsilon', \end{align*} where the latter inequality follows from \refeq{zuni}. For the other term, we reason as follows: \begin{align*} \tfrac12 & \| \rho_{WW'E'FG|W\neq W'}\otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace - \rho_{ W \leftrightarrow W' \leftrightarrow E'FGZ|W\neq W'} \|_1\\ & = \tfrac12 \sum_{w \neq w'} P_{WW'|W\neq W'}(w,w') \, \|\rho^{w,w'}_{E'FG|W\neq W'} \otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace - \rho^{w'}_{E'FGZ|W\neq W'} \|_1 \\ & =\tfrac12 \sum_{w \neq w'} P_{WW'|W\neq W'}(w,w') \, \|\rho^{w,w'}_{E'FG|W\neq W'} \otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace \\ &\hspace{2em} - \hspace{-1em}\sum_{\substack{w''\\\text{ s.t. }w''\neq w'}}\hspace{-1em}P_{W|W',W\neq W'}(w''|w') \rho^{w'',w'}_{E'FGZ|W\neq W'} \|_1 \\ & =\tfrac12 \sum_{w'} P_{W'|W\neq W'}(w') \, \| \sum_{\substack{w\\\text{ s.t. }w\neq w'}} P_{W|W',W\neq W'}(w|w') \rho^{w,w'}_{E'FG|W\neq W'} \otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace \\ &\hspace{2em} - \hspace{-1em}\sum_{\substack{w''\\\text{ s.t. }w''\neq w'}}\hspace{-1em}P_{W|W',W\neq W'}(w''|w') \rho^{w'',w'}_{E'FGZ|W\neq W'} \hspace{-1em}\sum_{\substack{w\\\text{ s.t. }w\neq w'}}\hspace{-1em}P_{W|W',W\neq W'}(w|w') \|_1 \\ & = \tfrac12 \sum_{w \neq w'} P_{WW'|W\neq W'}(w,w') \, \|\rho^{w,w'}_{E'FG|W\neq W'} \otimes\xspace 2^{-\ell} \ensuremath{\mathbb{I}}\xspace - \rho^{w,w'}_{E'FGZ|W\neq W'} \|_1 \\ & =\sum_{w \neq w'} P_{WW'|W\neq W'}(w,w') \, \deltauni(Z|E'FG,W=w,W'=w') \leq \varepsilon', \end{align*} where the first equality follows by definition of conditional independence and by a basic property of the trace distance; the third and fourth equality follow by linearity of the trace distance. The inequality on the last line follows from \refeq{zuni}. This proves the claim. \end{proof} \chapter{User Security in the Single-Qubit-Operations Model} \label{sec:usecsqom} We now consider a dishonest server \ensuremath{\mathsf{S}^*}\xspace that can store an unbounded number of qubits. Clearly, against such a \ensuremath{\mathsf{S}^*}\xspace, Theorem~\ref{thm:bqsm} provides no security guarantee anymore. We show here that there is still \emph{some} level of security left. 
Specifically, we show that {\bfseries\texttt{Q-ID}}\xspace is still secure against a dishonest server \ensuremath{\mathsf{S}^*}\xspace that can reliably store all the communicated qubits and measure them qubit-wise and non-adaptively at the end of the protocol. This feature distinguishes our identification protocol from the protocol from~\cite{DFSS07}, which completely breaks down against such an attack. \section{The Model} \label{sec:measmodel} Formally, a dishonest server \ensuremath{\mathsf{S}^*}\xspace in the SQOM is modeled as follows. \begin{enumerate} \item \ensuremath{\mathsf{S}^*}\xspace may reliably store the $n$-qubit state $\ket{x}_{\mathfrak{c}\xspace(w)} = \ket{x_1}_{\mathfrak{c}\xspace(w)_1} \otimes \cdots \otimes \ket{x_n}_{\mathfrak{c}\xspace(w)_n}$ received in step (1) of {\bfseries\texttt{Q-ID}}\xspace. \item At the end of the protocol, in step (5), \ensuremath{\mathsf{S}^*}\xspace chooses an arbitrary sequence $\theta = (\theta_1,\ldots,\theta_n)$, where each $\theta_i$ describes an arbitrary orthonormal basis of $\mathbb{C}^2$, and measures each qubit $\ket{x_i}_{\mathfrak{c}\xspace(w)_i}$ in basis $\theta_i$ to observe $Y_i \in \set{0,1}$. Hence, we assume that \emph{\ensuremath{\mathsf{S}^*}\xspace measures all qubits at the end of the protocol.} \item The choice of $\theta$ may depend on all the classical information gathered during the execution of the protocol, but we assume a \emph{non-adaptive} setting where $\theta_i$ does not depend on $Y_j$ for $i \neq j$, i.e., \ensuremath{\mathsf{S}^*}\xspace has to choose $\theta$ entirely before performing any measurement. \end{enumerate} Considering complete projective measurements acting on individual qubits, rather than general single-qubit POVMs, may be considered a restriction of our model. Nonetheless, general POVM measurements can always be described by projective measurements on a bigger system. In this sense, restricting to projective measurements is consistent with the requirement of single-qubit operations. It seems non-trivial to extend our security proof to general single-qubit POVMs. The restriction to non-adaptive measurements (item 3) is rather strong, even though the protocol from \cite{DFSS07} already breaks down in this non-adaptive setting. The restriction was introduced as a stepping stone towards proving the adaptive case. Up to now, we have unfortunately not yet succeeded in doing so, hence we leave the adaptive case for future research. We also leave for future research the case of a less restricted dishonest server \ensuremath{\mathsf{S}^*}\xspace that can do measurements on blocks that are less stringently bounded in size. Whereas the adaptive versus non-adaptive issue appears to be a proof-technical problem ({\bfseries\texttt{Q-ID}}\xspace looks secure also against an adaptive \ensuremath{\mathsf{S}^*}\xspace), allowing measurements on larger blocks will require a new protocol, since {\bfseries\texttt{Q-ID}}\xspace becomes insecure when \ensuremath{\mathsf{S}^*}\xspace can do measurements on blocks of size $2$, as we show in \refsec{attack}. \section{No Privacy Amplification} One might expect that proving security of {\bfseries\texttt{Q-ID}}\xspace in the SQOM, i.e., against a dishonest server \ensuremath{\mathsf{S}^*}\xspace that is restricted to single-qubit operations should be straightforward, but actually the opposite is true, for the following reason. 
Even though it is not hard to show that, after his measurements, \ensuremath{\mathsf{S}^*}\xspace has lower-bounded uncertainty about $x$ (except if he was able to guess $w$), it is not clear how to conclude that $f(x)$ is close to random so that $z$ does not reveal a significant amount of information about $w$. The reason is that standard privacy amplification fails to apply here. Indeed, the model allows \ensuremath{\mathsf{S}^*}\xspace to postpone the measurement of all qubits to step (5) of the protocol. The hash function $f$, however, is chosen and sent already in step (3). This means that \ensuremath{\mathsf{S}^*}\xspace can choose his measurements in step (5) depending on $f$. As a consequence, the distribution of $x$ from the point of view of \ensuremath{\mathsf{S}^*}\xspace may depend on the choice of the hash function $f$, in which case the privacy-amplification theorem does not give any guarantees.
\section{Single-Qubit Measurements}
Consider an arbitrary sequence $\theta = (\theta_1,\ldots,\theta_n)$ where each $\theta_i$ describes an orthonormal basis of $\mathbb{C}^2$. Let $\ket{\psi}$ be an $n$-qubit system of the form \[ \ket{\psi} = \ket{x}_b=H^{b_1}\ket{x_1} \otimes\xspace \cdots \otimes\xspace H^{b_n} \ket{x_n}, \] where $x$ and $b$ are arbitrary in $\set{0,1}^n$. Measuring $\ket{\psi}$ qubit-wise in basis $\theta$ results in a measurement outcome $Y = (Y_1,\ldots,Y_n) \in \set{0,1}^n$. Suppose that $x$, $b$ and $\theta$ are in fact realizations of the random variables $X$, $B$ and $\Theta$ respectively. It follows immediately from the product structure of the state $\ket{\psi}$ that \[ P_{Y|XB\Theta}(y|x,b,\theta) = \prod_{i=1}^n P_{Y_i|X_i B_i \Theta_i}(y_i|x_i,b_i,\theta_i), \] i.e. the random variables $Y_i$ are statistically independent conditioned on arbitrary fixed values for $X_i$, $B_i$ and $\Theta_i$ with $P_{X_iB_i\Theta_i}(x_i,b_i,\theta_i)>0$.
\begin{lemma} \label{lem:probsymmetry} The distribution $P_{Y_i|X_i B_i \Theta_i}(y_i | x_i , b_i, \theta_i)$ exhibits the following symmetries: \[ P_{Y_i|X_i B_i \Theta_i}(0 | 0 , b_i, \theta_i) = P_{Y_i|X_i B_i \Theta_i}(1 | 1 , b_i, \theta_i) \] and \[ P_{Y_i|X_i B_i \Theta_i}(0 | 1 , b_i, \theta_i) = P_{Y_i|X_i B_i \Theta_i}(1 | 0 , b_i, \theta_i) \] for all $i \in [n]$, for all $b_i$ and $\theta_i$ with $P_{X_iB_i\Theta_i}(\xi,b_i,\theta_i)>0$ for all $\xi \in \set{0,1}$. \end{lemma}
The proof can be found in \refapp{prfprobsym}.
The symmetry characterized in \reflem{probsymmetry} coincides with that of the \emph{binary symmetric channel}, i.e. we can view $Y$ as a ``noisy version'' of $X$, where this noise---produced by the measurement---is independent of $X$. Formally, we can write $Y$ as \begin{equation} \label{eq:mmodel} Y = X \oplus \Delta, \end{equation} where the random variable $\Delta=(\Delta_1,\ldots,\Delta_n) \in \set{0,1}^n$ thus represents the error between the random variable $X\in \set{0,1}^n$ that is ``encoded'' in the quantum state and the measurement outcome $Y\in \set{0,1}^n$. By substituting \refeq{mmodel} in \reflem{probsymmetry}, we get the following corollary.
\begin{corollary}[Independence Between $\Delta$ and $X$] For every $i\in [n]$ it holds that \[ P_{\Delta_i|X_i B_i \Theta_i}(\delta_i | x_i , b_i, \theta_i) = P_{\Delta_i| B_i \Theta_i}(\delta_i | b_i, \theta_i) \] for all $\delta_i \in \set{0,1}$ and for all $x_i$, $b_i$ and $\theta_i$ such that $P_{X_iB_i\Theta_i}(x_i,b_i,\theta_i)>0$. \label{cor:indepxd} \end{corollary}
Furthermore, since the random variables $Y_i$ are statistically independent conditioned on fixed values for $X_i$, $B_i$ and $\Theta_i$, it follows that the $\Delta_i$ are statistically independent conditioned on fixed values for $B_i$ and $\Theta_i$.
\begin{definition}[Quantized Basis] \label{def:quantb} For any orthonormal basis $\theta_i=\set{\ket{v_1},\ket{v_2}}$ on $\mathbb{C}^2$, we define the \emph{quantized basis} of $\theta_i$ as \[ \hat{\theta}_i := j^* \in \set{0,1},\quad \text{where } j^* \in \operatorname*{arg\,max}_{j\in \set{0,1}} \max_{k\in\set{1,2}} |\bra{v_k}H^j \ket{0}|. \] If both $j\in \set{0,1}$ attain the maximum, then $j^*$ is chosen arbitrarily. The quantized basis of the sequence $\theta = (\theta_1, \ldots, \theta_n)$ is naturally defined as the element-wise application of the above, resulting in $\hat{\theta} \in \{0,1\}^n$. \end{definition}
We will use the bias as a measure of the predictability of $\Delta_i$.
\begin{thm} When measuring the qubit $H^{b_i} \ket{x_i}$ for any $x_i,b_i \in \set{0,1}$ in any orthonormal basis $\theta_i$ on $\mathbb{C}^2$ for which the quantized basis $\hat{\theta}_i$ is the complement of $b_i$, i.e.\ $\hat{\theta}_i = b_i \oplus\xspace 1$, then the bias of $\Delta_i \in \set{0,1}$, where $\Delta_i = Y_i \oplus\xspace x_i$ and $Y_i \in \set{0,1}$ is the measurement outcome, is upper bounded by \[ \ensuremath{\mathrm{bias}}(\Delta_i) \leq \frac{1}{\sqrt2}. \] \label{thm:biasbound} \end{thm}
Since the theorem holds for any $x_i \in \set{0,1}$ and since \refcor{indepxd} guarantees that $\Delta_i$ is independent of an arbitrary random variable $X_i$, the theorem also applies when we replace $x_i$ by the random variable $X_i$. In order to prove \refthm{biasbound}, we need the following lemma.
\begin{lemma} \label{lem:biasrelation} If, for any orthonormal basis $\theta_i$ on $\mathbb{C}^2$, there exists a bit $b_i \in \set{0,1}$ so that when measuring the qubit $H^{b_i} \ket{x_i}$ for any $x_i \in \set{0,1}$ in the basis $\theta_i$ to obtain $Z_i\in \set{0,1}$ it holds that \[ \ensuremath{\mathrm{bias}}(Z_i) \geq 1/\sqrt2, \] then it holds that when measuring the qubit $H^{b_i \oplus\xspace 1} \ket{x_i}$ in the basis $\theta_i$ to obtain $Y_i~\in~\set{0,1}$, \[ \ensuremath{\mathrm{bias}}(Y_i) \leq 1/\sqrt2. \] \end{lemma}
\begin{proof} First note that for any $x_i,b_i \in \set{0,1}$ and any orthonormal basis $\theta_i$ on $\mathbb{C}^2$, measuring a state $H^{b_i}\ket{x_i}$ in $\theta_i=\set{\ket{v},\ket{w}}$ where $\ket{v} = \alpha \ket{0}+\beta\ket{1}$ and $\ket{w}=\beta\ket{0}-\alpha\ket{1}$ gives the same outcome distribution (up to permutations) as when measuring one of the basis states of $\theta_i$ (when viewed as a quantum state), say $\ket{w}$, using the basis $\set{H^{b_i}\ket{x_i},H^{b_i} \ket{x_i \oplus 1}}$. To see why this holds, note that it follows immediately that $|\bra{w}H^{b_i}\ket{x_i}|^2 = |\bra{x_i}H^{b_i}\ket{w}|^2$. Furthermore, we have already shown in the proof of \reflem{probsymmetry} that \[ |\bra{v}H^{b_i}\ket{x_i}|^2 = |\bra{w}H^{b_i}\ket{x_i\oplus 1}|^2 \] holds. Hence, we can apply \refthm{morehadamard} with $\rho = \proj{w}$ (so that $n=1$), $m~=~2$, and with $\mcal{B}_1$ and $\mcal{B}_2$ being the computational and Hadamard basis, respectively. The maximum overlap between those bases is $c=1/\sqrt2$. \refthm{morehadamard} gives us that \[ p^{\set{\ket{0},\ket{1}}}_{\max} + p^{\set{\ket{+},\ket{-}}}_{\max} \leq 1 + \frac{1}{\sqrt2}, \] where $p^{\set{\ket{0},\ket{1}}}_{\max}$ and $p^{\set{\ket{+},\ket{-}}}_{\max}$ respectively denote the maximum probability in the distribution obtained by measuring in the computational and Hadamard basis. By simple manipulations we can write this as a bound on the sum of the biases: \begin{align} \nonumber \frac{2}{\sqrt2} & \geq (2 p^{\set{\ket{0},\ket{1}}}_{\max} - 1) + (2 p^{\set{\ket{+},\ket{-}}}_{\max} -1) \\ \label{eq:biassum}&= \ensuremath{\mathrm{bias}}(Y_i ) + \ensuremath{\mathrm{bias}}(Z_i) . \end{align} From this relation, the claim follows immediately. \end{proof}
Following \cite{Schaffner07}, we want to remark that both biases in \refeq{biassum} are equal to $1/\sqrt2$ when $\theta_i$ is the \emph{Breidbart basis}, which is the basis that is precisely ``in between'' the computational and the Hadamard basis:\footnote{In \cite{Schaffner07}, the corresponding state is called the ``Hadamard-invariant state.''} \[ \ket{v} = \cos(\tfrac{\pi}{8}) \ket{0} + \sin (\tfrac{\pi}{8}) \ket{1} \qquad\text{and}\qquad \ket{w}= \sin (\tfrac{\pi}{8}) \ket{0} - \cos (\tfrac{\pi}{8}) \ket{1}. \]
\begin{proof}[Proof of \refthm{biasbound}] Let $\theta_i=\set{\ket{v_0},\ket{v_1}}$. We will make a case distinction based on the value of \begin{equation} \mu:= \max_{k\in \set{0,1}} |\bra{v_k}H^{\hat\theta_i} \ket{0}|. \label{eq:maximization} \end{equation} If $\mu \leq \cos(\pi/8)$, then we also have that $\max_{k\in \set{0,1}}|\bra{v_k}H^{b_i} \ket{x_i}| \leq \cos(\pi/8)$, where $b_i=\hat \theta_i \oplus\xspace 1$; this holds by definition of the quantized basis (\refdef{quantb}). Then, the probability of obtaining outcome $Y_i=k^*$, where $k^* \in \set{0,1}$ achieves the maximum in \refeq{maximization}, is bounded by \[ P_{Y_i}(k^*) = |\bra{v_{k^*}}H^{b_i} \ket{x_i}|^2 \leq \cos^2(\pi/8) = \tfrac12 + \tfrac{1}{2\sqrt2}.
\] Hence, \[ \ensuremath{\mathrm{bias}}(\Delta_i) = \ensuremath{\mathrm{bias}}(Y_i) = | P_{Y_i}(k^*) - (1-P_{Y_i}(k^*))| = |2 P_{Y_i}(k^*) - 1| \leq \tfrac{1}{\sqrt2}. \] If $\mu > \cos(\pi/8)$, then when measuring the state $H^{\hat\theta_i}\ket{x_i}$ in $\theta_i$ to obtain $Z_i~\in~\set{0,1}$, we have that $\ensuremath{\mathrm{bias}}(Z_i) > 1/\sqrt2$ (this follows from similar computations as performed above). We now invoke \reflem{biasrelation} to conclude that when measuring the state $H^{b_i}\ket{x_i}$ in $\theta_i$ to obtain $Y_i$, $\ensuremath{\mathrm{bias}}(\Delta_i) = \ensuremath{\mathrm{bias}}(Y_i) \leq \tfrac{1}{\sqrt2}$. \end{proof}
\section{User Security of {\bfseries\texttt{Q-ID}}\xspace} \label{sec:usecqw}
We are now ready to state and prove the security of {\bfseries\texttt{Q-ID}}\xspace against a dishonest user in the SQOM.
\begin{thm}[User Security]\label{thm:usec} Let \ensuremath{\mathsf{S}^*}\xspace be a dishonest server with unbounded quantum storage that is restricted to non-adaptive single-qubit operations, as specified in \refsec{measmodel}. Then, for any $0 < \beta < \tfrac14$, user security (as defined in \refdef{usec}) holds with \[ \textstyle\varepsilon \leq \tfrac12 \cdot 2^{\frac{1}{2} \ell- \frac14 (\frac14 -\beta)d} + {m\choose 2} 2^{2\ell}\exp(-2d \beta^2). \] \end{thm}
Note that $d$ is typically linear in $n$ whereas $\ell$ is chosen independently of $n$; hence the expression above is negligible in $d$. To prove \refthm{usec} we need the following technical lemma and corollary. Recall that $\mathcal{F}$ denotes the class of all linear functions from $\set{0,1}^n$ to $\set{0,1}^\ell$, where $\ell < n$, represented as binary $\ell \times n$ matrices.
\begin{lemma} \label{lem:schur} Let $n$, $k$ and $\ell$ be arbitrary positive integers, let $0<\beta<\tfrac14$, let $\mathcal I \subset \setn$ be such that $|\mathcal I|\geq k$, and let $F$ be uniform over $\mathcal{F}= \{0,1\}^{\ell \times n}$. Then, it holds except with probability $2^{2\ell}\exp(- 2 k \beta^2)$ (the probability is over the random matrix $F$) that \[ \big|(f \odot\xspace g )_\Tset{I}\big| > (\tfrac14-\beta) k \qquad \forall f,g \in \lswon. \] \end{lemma}
\begin{proof} Without loss of generality, we will assume that $|\Tset{I}| =k$. Now take arbitrary but non-zero vectors $r,s \in \set{0,1}^{\ell}$ and let $V:=rF$ and $W:=sF$. We will analyze the case $r \neq s$; the case $r=s$ is similar but simpler. Because each element of $F$ is an independent random bit, and $r$ and $s$ are non-zero and $r\neq s$, $V$ and $W$ are independent and uniformly distributed $n$-bit vectors with expected relative Hamming weight $1/2$. Hence, on average $|(V \odot\xspace W)_\Tset{I}|$ equals $k/4$. Furthermore, using Hoeffding's inequality (\refthm{hoeffding}), we may conclude that \[ \ensuremath{\mathrm{Pr}} \bigg[ \frac{k}{4} - |(V \odot\xspace W)_\Tset{I}| > \beta k \bigg] = \ensuremath{\mathrm{Pr}} \bigg[|(V \odot\xspace W)_\Tset{I}| < \big(\tfrac14 - \beta\big) k \bigg] \leq \exp(- 2 k \beta^2) \, . \] Finally, the claim follows by applying the union bound over the choice of $r$ and $s$ (each $2^\ell$ possibilities). \end{proof}
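The following Monte Carlo sketch in Python illustrates \reflem{schur} empirically (it is an illustration for small, arbitrarily chosen toy parameters, not part of the proof); here $\lswon$ is read as the set of all non-zero linear combinations of the rows of $F$, in line with the union bound over $r$ and $s$ above, which is an assumption of the sketch. It estimates how often some pair $f,g$ violates the bound on a fixed index set $\Tset{I}$ and compares the estimate with the stated probability bound.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, ell, k, beta = 256, 3, 200, 0.15    # toy parameters, chosen for illustration
I = np.arange(k)                       # w.l.o.g. the first k positions
trials, failures = 500, 0

for _ in range(trials):
    F = rng.integers(0, 2, size=(ell, n))
    # All non-zero linear combinations of the rows of F ("lswon").
    span = [(np.array(r) @ F) % 2
            for r in itertools.product([0, 1], repeat=ell) if any(r)]
    worst = min(int(np.sum(f[I] * g[I])) for f in span for g in span)
    if worst <= (0.25 - beta) * k:     # some pair violates the bound
        failures += 1

# Empirical failure rate versus the bound 2^{2 ell} * exp(-2 k beta^2).
print(failures / trials, 2 ** (2 * ell) * np.exp(-2 * k * beta ** 2))
\end{verbatim}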
\] \end{lemma} \noindent The proof is inspired by Venkatesan Guruswami's lecture notes on coding theory. \cite{guru} \begin{proof} For any nonzero row-vector $x \in \{0,1\}^k$, the vector $x G $ is a uniformly random element of $\{0,1\}^n$. (Indeed, take $x_k \neq 0$ and fix the first $k-1$ columns $\tilde G$ of $G$, then $xG = x\tilde G + x_k G_k $ is uniformly distributed, where $G_k$ denotes the $k$th row of $G$. Hence, \[ \ensuremath{\mathrm{Pr}}[ |Gx| \leq \delta n] \leq \frac{1}{2^n} \sum_{v=1}^{\delta n} {n \choose v} \leq 2^{-n(1-H(\delta))} \] where $H(\cdot)$ is the binary entropy function. Finally, a union bound over all non-zero $x$ gives that except with probability $\leq 2^{-n(1-H(\delta))+k}$, it holds that $|xG| > \delta n $ for all $x \in \set{0,1}^k \setminus \set{0}$. By symmetry, it holds that $|xG| < (1-\delta)n$ for all $x \in \set{0,1}^k \setminus \set{0}$ except with the same error probability. Finally, the claim follows by summing both error probabilities. \end{proof} \begin{lemma} \label{lem:setmagic} For any $\tfrac13 < \beta < \tfrac12$ and any $\mathcal I \subset \setn$ such that $|\mathcal I|\geq d$, let $F$ be uniform over $\mathcal{F}$. Then, it holds except with probability $\leq 2 \cdot 2^{\ell-d(1-H(\beta))}$ that \[ \big|(f \odot\xspace g )_\Tset{I}\big| > \tfrac12 d(3\beta-1) \qquad \forall f,g \in \lswon \] \end{lemma} \begin{proof} Note that the claim for the case $f_\Tset{I}=g_\Tset{I}$ (which includes the case $f=g$) follows directly from \reflem{randomcodes}. Below, we will prove the claim for $f_\Tset{I} \neq g_\Tset{I}$. For simplicity, but without loss of generality, let us assume that $|\Tset{I}|=d$. By \reflem{randomcodes} it holds that, except with probability $\leq 2 \cdot 2^{\ell- d \cdot(1-H(\beta))}$ the weights of all vectors in \lswon, restricted to the positions in \Tset{I}, lie in the interval $(\beta d, (1- \beta) d)$. From here, we assume the latter holds with certainty and we bookkeep the error. The intuition behind the remaining part of the proof is to exploit this lower and upper bound on the weights in the following way: for any pair of vectors from \lswon, when restricted to the positions in \Tset{I} (recall that we currently analyze the case in which those subvectors are distinct), the weight of both subvectors is strictly larger than $\tfrac13 d$ by the restriction on $\beta$. By linearity, the modulo-$2$ sum of those subvectors is again an element of \lswon restricted to \Tset{I}, whose weight must be strictly smaller than $\tfrac23 d$. Hence, the two subvectors must have some overlap in their support, which is expressed by the Schur product between these subvectors having strictly positive weight. We now prove this formally, where we will use the following elementary properties of the Schur product: for any vectors $a,b \in \set{0,1}^n$, it holds that $ |a | = |a \odot\xspace b| + |a \odot\xspace \bar b |$ and $|a \odot\xspace \bar b| + |\bar a \odot\xspace b| = |a \oplus\xspace b|$, where $\bar a$ denotes the element-wise inversion of $a$, i.e., $\bar a:= a \oplus\xspace 1^n$ (and similarly for $\bar b$). Take arbitrary $f,g \in \lswon$ such that $f_\Tset \neq g_\Tset{I}$. 
Then, \begin{align*} (1-\beta) d &> |f_\Tset{I} \oplus\xspace g_\Tset{I}| = |f_\Tset{I} \odot\xspace \bar g_\Tset{I}| + |\bar f_\Tset{I} \odot\xspace g_\Tset{I}| = |f_\Tset{I}| - |f_\Tset{I} \odot\xspace g_\Tset{I}| + |g_\Tset{I}| - |f_\Tset{I} \odot\xspace g_\Tset{I}| \geq 2 \beta d - 2 |f_\Tset{I} \odot\xspace g_\Tset{I}| \end{align*} Rearranging terms yields \[ | (f \odot\xspace g)_\Tset{I} | =| f_\Tset{I} \odot\xspace g_\Tset{I} | > \tfrac12 (3\beta - 1)d \] \end{proof} \end{comment} Recall that \Tset{C} is a binary code with minimum distance $d$, $\mathfrak{c}\xspace(\cdot)$ its encoding function, and that $m:=|\Tset{W}|$. \begin{corollary} \label{cor:atmost} Let $0 <\beta <\tfrac14$, and let $F$ be uniformly distributed over $\Tset{F}$. Then, $F$ has the following property except with probability $ \binom{m}{2} 2^{2\ell}\exp(-2d\beta^2)$: for any string $s \in \set{0,1}^n$ (possibly depending on the choice of~$F$), there exists at most one $\tilde c \in \Tset{C}$ such that for any code word $c \in \Tset{C}$ different from $ \tilde c$, it holds that \[ | f \odot\xspace (c \oplus\xspace s) | \geq \tfrac12(\tfrac14-\beta)d \qquad \forall f \in \lswon \] \end{corollary} We prove the statement by arguing for two $\tilde c$'s and showing that they must be identical. In the proof, we will make use of the two following propositions. \begin{prop} $|a| \geq |a \odot\xspace b| $ for all $a,b \in \set{0,1}^n$. \label{prop:pone} \end{prop} \begin{proof} Follows immediately.\end{proof} \begin{prop} $|a\odot\xspace b | + |a \odot\xspace c| \geq |a \odot\xspace (b \oplus\xspace c)|$ for all $a,b,c \in \set{0,1}^n$. \label{prop:ptwo} \end{prop} \begin{proof} $ |a \odot\xspace (b \oplus\xspace c)| = |a \odot\xspace b \oplus\xspace a\odot\xspace c| \leq |a \odot\xspace b | + | a\odot\xspace c|$, where the equality is the distributivity of the Schur product, and the inequality is the triangle inequality for the Hamming weight. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor:atmost}] By \reflem{schur} with $\Tset{I}:= \set{ i \in \setn: c_i \neq c'_i}$ for $c,c' \in \mathcal{C}$, and by applying the union bound over all possible pairs $(c,c')$, we obtain that except with probability $\binom{m}{2} 2^{2\ell} \exp(-2d\beta^2)$ (over the choice of~$F$), it holds that \begin{equation} |f \odot\xspace g \odot\xspace (c\oplus c')|>(\tfrac14-\beta)d \label{eq:contrad} \end{equation} for all $f,g \in \lswon$ and all $c,c' \in \mathcal{C}$ with $c\neq c'$. Now, for such an $F$, and for every choice of $s\in \{0,1\}^n$, consider $\tilde c_1, \tilde c_2 \in \mathcal{C}$ and $f_1, f_2 \in \lswon$ such that \[ |f_1 \odot\xspace (\tilde c_1 \oplus\xspace s)| <\tfrac12( \tfrac14 -\beta)d \quad \text{and}\quad |f_2 \odot\xspace (\tilde c_2 \oplus\xspace s)| < \tfrac12( \tfrac14 -\beta)d. \] We will show that this implies $\tilde c_1 = \tilde c_2$, which proves the claim. Indeed, we can write \begin{align*} (\tfrac14- \beta)d & > |f_1 \odot\xspace (\tilde c_1 \oplus\xspace s)| + |f_2 \odot\xspace (\tilde c_2 \oplus\xspace s)| \\ &\geq |f_1 \odot\xspace f_2 \odot\xspace (\tilde c_1 \oplus\xspace s)| + |f_1 \odot\xspace f_2 \odot\xspace (\tilde c_2 \oplus\xspace s)| \geq |f_1 \odot\xspace f_2 \odot\xspace (\tilde c_1\! \oplus\xspace \!\tilde c_2)| \end{align*} where the second inequality is \refprop{pone} applied twice and the third inequality is \refprop{ptwo}. This contradicts \refeq{contrad} unless $\tilde c_1 = \tilde c_2$. \end{proof} Now we are ready to prove \refthm{usec}. 
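To get a rough sense of the magnitudes involved in \refthm{usec}, it may help to evaluate the bound for an illustrative and purely hypothetical choice of parameters (none of these values is prescribed by the protocol): taking, say, $\ell = 20$, $d = 10^4$, $\beta = \tfrac18$ and $m = 100$, the first term equals $\tfrac12\, 2^{\,10-312.5} = 2^{-303.5}$, while the second term is at most \[ {100 \choose 2}\, 2^{40} \exp(-312.5) < 2^{13+40-450} = 2^{-397}, \] so that $\varepsilon < 2^{-300}$; with $\beta$ and $\ell$ fixed, both terms decay exponentially in $d$.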
In the proof, when $F \in \mcal{F}$ acts on an $n$-bit vector $x \in \{0,1\}^n$, we prefer the notation $F(x)$ over matrix-product notation $Fx$.\footnote{When using matrix-product notation ambiguities could arise, e.g.\ in subscripts of probability distributions like $P_{FX}$: then it is not clear whether this means the joint distribution of $F$ and $X$ or the distribution of $F$ acting on $X$?} \begin{proof}[Proof of Theorem \ref{thm:usec}] Consider an execution of {\bfseries\texttt{Q-ID}}\xspace, with a dishonest server \ensuremath{\mathsf{S}^*}\xspace as described in \refsec{measmodel}. We let $W, X$ and $Z$ be the random variables that describe the values $w, x$ and $z$ occurring in the protocol. From {\bfseries\texttt{Q-ID}}\xspace's description, we see that $F$ is uniform over $\mcal{F}$. Hence, by \refcor{atmost} it will be ``good'' (in the sense that the bound from \refcor{atmost} holds) except with probability ${m\choose 2} 2^{2\ell}\exp(-2d \beta^2)$. From here, we consider a fixed choice for $F$ and condition on the event that it is ``good,'' we thus book-keep the probability that $F$ is ``bad'' and take it into account at the end of the analysis. Although we have fixed $F$, we will keep using capital notation for it, to emphasize that $F$ is a matrix. We also fix $G=g$ for an arbitrary $g$; the analysis below holds for any such choice. Let $\Theta$ describe the qubit-wise measurement performed by \ensuremath{\mathsf{S}^*}\xspace at the end of the execution, and $Y$ the corresponding measurement outcome. By the non-adaptivity restriction and by the requirement in \refdef{usec} that \ensuremath{\mathsf{S}^*}\xspace is initially independent of $W$, we may conclude that, once $G$ and $F$ are fixed, $\Theta$ is a function of $Z$. (Recall that $Z = F(X)\oplus\xspace g(W)$.) We will define $W'$ with the help of \refcor{atmost}. Let $\q{\Theta}$ be the quantized basis of $\Theta$, as defined in \refdef{quantb}. Given a fixed value $\theta$ for $\Theta$, and thus a fixed value $\q{\theta}$ for $\q{\Theta}$, we set $s$, which is a variable that occurs in \refcor{atmost}, to $s = \q{\theta}$. \refcor{atmost} now guarantees that there exists \emph{at most one} $\tilde c$. If $\tilde c$ indeed exists, then we choose $w'$ such that $\mathfrak{c}\xspace(w') = \tilde c$. Otherwise, we pick $w'\in \mcal{W}$ arbitrarily (any choice will do). Note that this defines the random variable $W'$, and furthermore note that $Z\rightarrow \Theta \rightarrow \hat \Theta\rightarrow W'$ forms a Markov chain. Moreover, by the choice of $w'$ it immediately follows from \refcor{atmost} that for all $w \neq w'$ and for all $f \in \lswon$ it holds that \begin{equation} \big|f \odot (\mathfrak{c}\xspace(w) \oplus\xspace \q{\theta})\big|\geq \tfrac12(\tfrac14 -\beta)d. \label{eq:hbound} \end{equation} We will make use of this bound later in the proof. Since the model (\refsec{measmodel}) enforces the dishonest server to measure all qubits at the end of the protocol, the system $E=(Y,Z,\Theta)$ is classical and hence the trace-distance-based user-security definition (\refdef{usec}) simplifies to a bound on the statistical distance between distributions. I.e., it is sufficient to prove that \[ \mathrm{SD}(P_{E W |W'=w',W'\neq W}, P_{W|W'=w',W\neq W'} P_{E|W'=w',W\neq W'})\leq \varepsilon \] holds for any $w'$. Consider the distribution that appears above as the first argument to the statistical distance, i.e. $P_{E W |W'=w',W'\neq W}$. 
By substituting $E=(Y,Z,\Theta)$, it factors as follows\footnote{Note that we shorten notation here by omitting the parentheses containing the function arguments. The quantification is over all inputs for which all involved conditional probabilities are well-defined.} \begin{align} \nonumber P_{YZ\Theta W |W', W \neq W'} &= P_{W| W', W \neq W'}\ P_{Z \Theta |WW', W \neq W'}\ P_{Y | Z\Theta WW', W \neq W'} \\ &= P_{W| W', W \neq W'}\ P_{Z \Theta |W', W \neq W'}\ P_{Y | F(X)\Theta WW', W \neq W'}, \label{eq:factor} \end{align} where the equality $P_{Z\Theta |WW', W \neq W'} = P_{Z\Theta |W', W \neq W'}$ holds by the following argument: $Z$ is independent of $W$ (since $F(X)$ acts as one-time pad) and $Z \rightarrow \Theta \rightarrow W'$ is a Markov chain, and \ensuremath{\mathsf{S}^*}\xspace (who computes $\Theta$ from $Z$) is initially independent of $W$ by \refdef{usec}, hence $W$ is independent of $Z$, $\Theta$ and $W'$, which implies the above equality. The equality $P_{Y | Z \Theta WW', W \neq W'} = P_{Y | F(X)\Theta WW', W \neq W'}$ holds by the observation that given $W$, $Z$ is uniquely determined by $F(X)$ and vice versa. In the remainder of this proof we will show that \[ \deltauni (Y | F(X)=u,\Theta=v,W=w,W'=w') \leq \tfrac12 2^{\frac{\ell}{2} - \frac14 (\frac14 -\beta)d}, \] for all $u,v,w$ such that $w \neq w'$, where $w'$ is determined by $v$. This then implies that the rightmost factor in \refeq{factor} is essentially independent of $W$, and concludes the proof. To simplify notation, we define $\mathcal{E}$ to be the event \[ \mcal{E}:= \set{ F(X) = u, \Theta=v, W = w, W' = w'} \] for fixed but arbitrary choices $u$, $v$ and $w$ such that $w \neq w'$, where $w'$ is determined by $v$. We show closeness to the uniform distribution by using the XOR inequality from Diaconis {\emph{et al.}}\xspace (\refthm{diaconis}), i.e., we use the inequality $$ \deltauni(Y|\mcal{E}) \leq \tfrac12 {\mathcal{B}}ig[\sum_{\alpha} \ensuremath{\mathrm{bias}}(\alpha\cdot Y|\Tset{E})^2{\mathcal{B}}ig]^\frac12, $$ where the sum is over all $\alpha$ in $\set{0,1}^n \setminus \set{0^n}$. We split this sum into two parts, one for $\alpha \in \ensuremath{\mathrm{span}}(F)$ and one for $\alpha$ not in $\ensuremath{\mathrm{span}}(F)$, and analyze the two parts separately. Since $X$ is uniformly distributed, it follows that for any $\alpha \notin \ensuremath{\mathrm{span}}(F)$, it holds that $P_{\alpha \cdot X| F(X)}(\cdot|u) = \tfrac12$ (for any $u$). We conclude that \begin{align*} \tfrac12 & = P_{\alpha\cdot X| F(X)} = P_{\alpha\cdot X| F(X) W} = P_{\alpha\cdot X| F(X) \Theta WW' } \\ &= P_{\alpha\cdot Y| F(X) \Theta WW' } = P_{\alpha\cdot Y|\mcal{E}} \quad \forall \alpha \notin \ensuremath{\mathrm{span}}(F). \end{align*} The second equality follows since $W$ is independent of $X$. The third equality holds by the fact that $\Theta$ is computed from $F(X) \oplus g(W)$ and $W'$ is determined by $\Theta$. The fourth equality follows by the security of the one-time pad, i.e. recall that $Y = X \oplus \Delta$, where by \refcor{indepxd} it holds that $\Delta\in \{0,1\}^n$ is independent of $X$ when conditioned on fixed values for $B=\mathfrak{c}\xspace(W)$ and $\Theta$. Hence, it follows that $\ensuremath{\mathrm{bias}} (\alpha \cdot Y| \Tset{E} ) = 0$ for $\alpha \notin \ensuremath{\mathrm{span}}(F)$. 
For any non-zero $\alpha \in \ensuremath{\mathrm{span}}(F)$, we can write \begin{align*} \ensuremath{\mathrm{bias}} ( \alpha \cdot Y|\mcal{E}) &= \ensuremath{\mathrm{bias}} ( \alpha \cdot (X \oplus\xspace \Delta ) |\mcal{E}) \\ &= \ensuremath{\mathrm{bias}} ( \alpha \cdot X \oplus\xspace \alpha \cdot \Delta |\mcal{E}) &\text{(distributivity of dot product)}\\ &= \ensuremath{\mathrm{bias}} ( \alpha \cdot X |\mcal{E}) \ensuremath{\mathrm{bias}}( \alpha \cdot \Delta |\mcal{E}) &\text{(\refcor{indepxd})}\\ & \leq \ensuremath{\mathrm{bias}}( \alpha \cdot \Delta |\mcal{E}) &\text{($\ensuremath{\mathrm{bias}}(\alpha \cdot X) \leq 1$)} \\ & = \prod_{i \in [n]} \ensuremath{\mathrm{bias}}( \alpha_i \cdot \Delta_i |\mcal{E}) &\text{($\Delta_i$ independent)}\\ & = \prod_{i \in [n]:\alpha_i = 1} \ensuremath{\mathrm{bias}}( \Delta_i |\mcal{E}) \\ &\leq \prod_{\substack{i \in [n]:\alpha_i = 1 \\ \hat{\theta}_i = \mathfrak{c}\xspace(w)_i \oplus\xspace 1}} 2^{-\frac12}&\text{(\refthm{biasbound}) } \\&= 2^{-\frac12 |\alpha \odot\xspace (\mathfrak{c}\xspace(w) \oplus\xspace \hat\theta)|} \leq 2^{-\frac14(\frac14-\beta)d}&\text{(by \refeq{hbound})} \end{align*} Combining the two parts, we get \begin{align*} \deltauni(Y|\mcal{E}) & \leq \tfrac12 {\mathcal{B}}ig[\sum_{\alpha} \ensuremath{\mathrm{bias}}(\alpha\cdot Y|\Tset{E})^2{\mathcal{B}}ig]^\frac12 \\ &=\tfrac12 {\mathcal{B}}ig[\sum_{\alpha \in \lswon} \ensuremath{\mathrm{bias}}(\alpha \cdot Y|\Tset{E})^2 + 0\,{\mathcal{B}}ig]^\frac12 \leq \tfrac12 2^{\frac{\ell}{2} - \frac14 (\frac14 -\beta)d} \, . \end{align*} Incorporating the error probability of having a ``bad'' $F$ completes the proof. \end{proof} \section{Attack against {{\bfseries\texttt{Q-ID}}\xspace} with Operations on Pairs of Qubits} \label{sec:attack} We present an attack with which the dishonest server \ensuremath{\mathsf{S}^*}\xspace can discard two passwords in one execution of {\bfseries\texttt{Q-ID}}\xspace using coherent operations on pairs of qubits. Before discussing this attack, we first explain a straightforward strategy by which \ensuremath{\mathsf{S}^*}\xspace can discard one password per execution: \ensuremath{\mathsf{S}^*}\xspace chooses a candidate password $\hat w$ and measures the state $H^{\mathfrak{c}\xspace(W)}\ket{X}$ qubit-wise in the basis $\mathfrak{c}\xspace(\hat w)$ to obtain $Y$. \ensuremath{\mathsf{S}^*}\xspace then computes $F(Y)\oplus\xspace g(\hat w)$ and compares this to $Z=F(X) \oplus\xspace g(W)$, which he received from the user. If indeed $Z = F(Y)\oplus\xspace g(\hat w)$, then it is very likely that $W = \hat w$, i.e.\ that \ensuremath{\mathsf{S}^*}\xspace guessed the password correctly. Let us now explain the attack, which is obtained by modifying the above strategy. The attack is based on the following observation \cite{DFSS05}: if \ensuremath{\mathsf{S}^*}\xspace can perform Bell measurements on qubit pairs $\ket{x_1}_{a} \ket{x_2}_{a}$, for $a \in \set{0,1}$, then he can learn the parity of $x_1 \oplus\xspace x_2$ for both choices of $a$ simultaneously. This strategy can also be adapted to determine both parities of a pair in which the first qubit is encoded in a basis that is opposite to that of the second qubit, i.e.\ by appropriately applying a Hadamard gate prior to applying the Bell measurement. Let the first bit of $Z$ be equal to $f\cdot X \oplus\xspace g(W)_1$,\footnote{By $g(W)_1$ we mean the first bit of $g(W)$.} where $f \in \lswon$. Let $\hat w_1$ and $\hat w_2$ be two candidate passwords. 
With the trick from above, \ensuremath{\mathsf{S}^*}\xspace can measure the positions in the set \[ \mcal{P}:=\Set{i \in [n]}{f_i = 1,\mathfrak{c}\xspace(\hat w_1)_i = 1 \oplus\xspace \mathfrak{c}\xspace(\hat w_2)_i} \] \emph{pairwise} (assuming that $|\mcal{P}|$ is even) using Bell measurements, while measuring the positions where $\mathfrak{c}\xspace(\hat w_1)$ and $\mathfrak{c}\xspace(\hat w_2)$ coincide using ordinary single-qubit measurements. This allows him to compute the ``check bits'' corresponding to both passwords \emph{simultaneously}, i.e.\ these check bits coincide with $f \cdot Y_1 \oplus\xspace g(\hat w_1)_1$ and $f \cdot Y_2 \oplus\xspace g(\hat w_2)_1$, where $Y_1$ and $Y_2$ are the outcomes that \ensuremath{\mathsf{S}^*}\xspace would have obtained if he had measured all qubits qubit-wise in either $\mathfrak{c}\xspace(\hat w_1)$ or $\mathfrak{c}\xspace(\hat w_2)$, respectively. If both these check bits are different from the bit $Z_1$, then \ensuremath{\mathsf{S}^*}\xspace can discard both $\hat w_1$ and $\hat w_2$. We have seen that in the \emph{worst case}, the attack is capable of discarding two passwords in one execution, and hence clearly violates the security definition. On \emph{average}, however, the attack seems to discard just one password per execution, i.e.\ a candidate password cannot be discarded if its check bit is consistent with $Z_1$, which essentially happens with probability $1/2$. This raises the question of whether the security definition is unnecessarily strong: it would seem sufficient to require that no more than one password can be discarded on average. Apart from this, it might be possible to improve the attack, e.g.\ by selecting the positions to measure pairwise in a more clever way, so as to obtain multiple check bits (corresponding to multiple $f$'s in the span of $F$) per candidate password, thereby increasing the probability of discarding a wrong candidate password. \chapter{Conclusion} We view our work related to {\bfseries\texttt{Q-ID}}\xspace as a first step in a promising line of research, aimed at achieving security in multiple models simultaneously. The main open problem in the context of the SQOM is to reprove our results in a more general model in which the dishonest server \ensuremath{\mathsf{S}^*}\xspace can choose his bases adaptively. Also, it would be interesting to see whether similar results can be obtained in a model where the adversary is restricted to performing quantum operations on blocks of several qubits. \begin{appendix} \chapter{Proof of an Operator Norm Inequality (Proposition~\ref{prop:morebases})}\label{sec:proofinequality} We first recall some basic properties of the operator norm $\|A \| \ensuremath{:=} \sup \|A \ket{\psi}\|$, where the supremum is over all norm-$1$ vectors $\ket{\psi} \in \mathcal{H}$. First of all, it is easy to see that \[ \left\| \begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \right\| =\max\left\{\|A\|,\|B\|\right\}. \] Also, from the fact that $\| A \| = \sup |\bra{\psi}A\ket{\varphi}|$, where the supremum is over all norm-$1$ $\ket{\psi},\ket{\varphi} \in \mathcal{H}$, it follows that $\|A^*\|=\|A\|$, where $A^*$ is the Hermitian transpose of $A$, and thus that for Hermitian matrices $A$ and~$B$: $$ \| AB \| = \|(AB)^*\|=\|B^*A^*\|=\|BA\| \, . $$ Furthermore, if $A$ is Hermitian then $\| A \| = \lambda_{\max}(A) \ensuremath{:=} \max\{|\lambda_j| : \lambda_j \mbox{ an eigenvalue of } A \}$.
Finally, the operator norm is \emph{unitarily invariant}, i.e., $\|A\|=\|UAV\|$ for all $A$ and for all unitary $U,V$. \begin{lemma} \label{lem:ineq} Any two $n \times n$ matrices $X$ and $Y$ for which the products $XY$ and $YX$ are Hermitian satisfy \[ \|XY\| = \|YX\| \] \end{lemma} \begin{proof} For any two $n \times n$ matrices $X$ and $Y$, $XY$ and $YX$ have the same eigenvalues, see e.g. \cite[Exercise I.3.7]{Bhatia97}. Therefore, $\| XY \| = \lambda_{\max}(XY) = \lambda_{\max}(YX) = \|YX\|$. \end{proof} We are now ready to state and prove the norm inequality. We recall that an orthogonal projector $P$ satisfies $P^2 = P$ and $P^* = P$. \begin{prop} \label{prop:morebases} For orthogonal projectors $A_1, A_2, \ldots, A_m$, it holds that \begin{equation*} \label{eq:multiproj} \big\| A_1+\ldots+A_m \big\| \leq 1 + (m-1) \cdot \max_{1\leq j< k \leq m} \big\|A_j A_k\big\|. \end{equation*} \end{prop} The case $m = 2$ was proven in~\cite{DFSS05}, adapting a technique by Kittaneh \cite{Kittaneh97}. We extend the proof to an arbitrary $m$. \begin{proof} Defining \[ X \ensuremath{:=} \begin{pmatrix} A_1 & A_2 & \cdots & A_m \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots& & \vdots \\ 0 & 0 & \cdots & 0\end{pmatrix} \quad \mbox{ and } \quad Y \ensuremath{:=} \begin{pmatrix} A_1 & 0 & \cdots & 0 \\ A_2 & 0 & \cdots & 0 \\ \vdots & \vdots& & \vdots \\ A_m & 0 & \cdots & 0 \end{pmatrix} \] yields \begin{align*} XY &= \begin{pmatrix} A_1 +A_2 + \ldots + A_m & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots& & \vdots \\ 0 & 0 & \cdots & 0\end{pmatrix} \quad \mbox{ and}\quad YX = \begin{pmatrix} A_1 & A_1 A_2 & \cdots & A_1 A_m \\ A_2 A_1 & A_2 & \cdots & A_2 A_m \\ \vdots & \vdots& \ddots& \vdots \\ A_m A_1 & A_m A_2 & \cdots & A_m \end{pmatrix} \end{align*} The matrix $YX$ can be additively decomposed into $m$ matrices according to the following pattern \[ YX= \begin{pmatrix} * & & & & \\ & * & & & \\ & & \ddots & & \\ & & & * & \\ & & & & * \end{pmatrix} + \begin{pmatrix} 0 & * & & & \\ & 0 & & & \\ & & \ddots & \ddots & \\ & & & 0 & * \\ * & & & & 0 \end{pmatrix} +\;\ldots\; + \begin{pmatrix} 0 & & & & * \\ * & 0 & & & \\ &\ddots & \! \ddots & & \\ & & & 0 & \\ & & & * & 0 \end{pmatrix} \] where the $*$ stand for entries of $YX$ and for $i=1,\ldots,m$ the $i$th star-pattern after the diagonal pattern is obtained by $i$ cyclic shifts of the columns of the diagonal pattern. $XY$ and $YX$ are Hermitian and thus we can apply Lemma~\ref{lem:ineq}. Then, by applying the triangle inequality, the unitary invariance of the operator norm and the facts that for all $j \neq k : \|A_j\|=1$, $\|A_j A_k\|=\|A_k A_j\|$, we obtain the desired statement. \end{proof} \chapter{Proof of \reflem{bqsmchain}} \label{app:bqsmchainproof} To prove \reflem{bqsmchain}, we need to introduce some more tools. The following proposition guarantees that the ``averaging property'' of the guessing probability (which holds by definition in the classical case) still holds when additionally conditioning on a quantum system. \begin{prop} \label{prop:avgprop} For any state $\rho_{XYE}\in \mcal{D}(\mathcal{H}_X\otimes\xspace \mathcal{H}_Y \otimes\xspace \mathcal{H}_E)$ that is classical on $X$ and $Y$ it holds that \[ p_\mathrm{guess}(X|YE) = \sum_y P_Y(y)\, p_\mathrm{guess}(X|E,Y=y). 
\] \end{prop} \begin{proof} First, note that for any matrix $M_x$ acting on $\mathcal{H}_Y \otimes\xspace \mathcal{H}_E$, we can always write $M_x = \sum_{y,y'} \ketbra{y}{y'} \otimes\xspace M_x^{y,y'}$, where $M^{y,y'}_x$ acts on $\mathcal{H}_E$ for every $x,y,y'$. Now, we write \begin{align*} p_\mathrm{guess}(X|YE) & = \max_{\set{M_x}} \sum_x P_X(x) \mathrm{tr}ace(M_x \rho^x_{YE})\\ & = \max_{\set{M_x}} \sum_x P_X(x) \mathrm{tr}ace(M_x \sum_y P_{Y|X}(y|x) \, \proj{y}\otimes\xspace \rho^{x,y}_{E})\\ & = \max_{\set{M_x}} \sum_{x,y} P_{XY}(x,y) \mathrm{tr}ace( ( \sum_{v,w} \ketbra{v}{w} \otimes\xspace M_x^{v,w} )(\proj{y}\otimes\xspace \rho^{x,y}_{E}))\\ & = \max_{\set{M_x}} \sum_{x,y} P_{XY}(x,y) \sum_{v}\braket{v}{y}\mathrm{tr}ace( M_x^{v,y} \rho^{x,y}_{E})\\ & = \max_{\set{M_x}} \sum_{x,y} P_{XY}(x,y) \mathrm{tr}ace( M_x^{y,y} \rho^{x,y}_{E})\\ & = \sum_y P_Y(y) \max_{\set{M^{y,y}_x}} \sum_{x} P_{X|Y}(x|y) \mathrm{tr}ace( M_x^{y,y} \rho^{x,y}_{E})\\ &= \sum_y P_Y(y) \,p_\mathrm{guess}(X|E,Y=y). \end{align*} \end{proof} The following proposition is known as the chain rule for min-entropy. \begin{prop}[\cite{Renner05}] \label{prop:chain} The following holds for all $\rho_{ABC} \in \mcal{D}(\mathcal{H}_A \otimes\xspace \mathcal{H}_B \otimes\xspace \mathcal{H}_C)$, \[ \hmin(A|BC) \geq \hmin(AB|C) -\ensuremath{H_\mathrm{max}\hspace{-1pt}}(B). \] \end{prop} Finally, we need the following lemma. \begin{lemma} \label{lem:moreminent} For any state $\rho_{XYE}\in \mcal{D}(\mathcal{H}_X\otimes\xspace \mathcal{H}_Y \otimes\xspace \mathcal{H}_E)$ that is classical on $X$ and $Y$ it holds that \begin{equation} \hmin(XE|Y=y)\geq \hmin(X|Y=y) \label{eq:eigineq} \end{equation} for every $y \in \mcal{Y}$. \end{lemma} \begin{proof} Note that it suffices to show that $\lambda_{\max}(\rho^y_{XE}) \leq \lambda_{\max}(\rho^y_{X})$ holds for every $y\in \mcal{Y}$. Because $\rho_{XE}^y$ is classical on $X$, there exists a unitary $U$ acting on $\mathcal{H}_X$ such that $\tilde{\rho}_{XE}^y := (U\otimes\xspace\ensuremath{\mathbb{I}}\xspace_E) \rho_{XE}^y (U^\dagger \otimes\xspace \ensuremath{\mathbb{I}}\xspace_E)$ is classical with respect to the computational basis $\set{\ket{x}}_{x \in \mcal{X}}$ on $\mathcal{H}_X$ with $\mcal{X}:=[d]$. In particular, this means that $\tilde{\rho}_{XE}^y$ has block-diagonal structure: \[ \tilde{\rho}^y_{XE} = \sum_{x \in [d]} P_{X|Y}(x|y) \proj{x} \otimes\xspace \rho_{E}^{x,y}=\begin{bmatrix} P_{X|Y}(1|y)\,\rho_E^{1,y} &&\boldsymbol{0}\\ &\ddots & \\ \boldsymbol{0} && P_{X|Y}(d|y)\,\rho_E^{d,y}\end{bmatrix}. \] Note that because $U$ is unitary, $\tilde{\rho}^y_{XE}$ has the same eigenvalues as $\rho^y_{XE}$, where these eigenvalues are given by the union of the eigenvalues of the blocks on the diagonal of $\tilde{\rho}^y_{XE}$. From this we see that the largest eigenvalue of $\tilde{\rho}^y_{XE}$ (and thus of $\rho^y_{XE}$) cannot be larger than the largest eigenvalue of $\tilde{\rho}^y_{X}:=\mathrm{tr}ace_E (\tilde{\rho}^y_{XE})$ (and thus of $\rho^y_{X}$). \end{proof} \begin{proof}[Proof of \reflem{bqsmchain}] By \refeq{guessform} it is equivalent to show that \[ p_\mathrm{guess}(X|YE) \leq p_\mathrm{guess}(X|Y) \, 2^{\ensuremath{H_\mathrm{max}\hspace{-1pt}}(E)}. 
\] Using \refprop{avgprop}, we write \begin{align*} p_\mathrm{guess} & (X|EY) = \sum_y P_Y(y)\, p_\mathrm{guess}(X|E,Y=y) = \sum_y P_Y(y) \,2^{-\hmin(X|E,Y=y) } \\ &\leq \sum_y P_Y(y) \, 2^{- (\hmin(XE|Y=y)-\ensuremath{H_\mathrm{max}\hspace{-1pt}}(E))} \\ &\leq 2^{\ensuremath{H_\mathrm{max}\hspace{-1pt}}(E)} \, \sum_y P_Y(y) 2^{-\hmin(X|Y=y)} =2^{\ensuremath{H_\mathrm{max}\hspace{-1pt}}(E)} \, p_\mathrm{guess} (X|Y), \end{align*} where the first inequality is \refprop{chain}, and the second inequality follows by \reflem{moreminent}. Hence, the claim follows. \end{proof} \chapter{Proof of \reflem{probsymmetry}} \label{app:prfprobsym} \begin{proof} Let $\alpha,\beta \in \mathbb{C}$ be such that $\theta_i:=\set{\alpha \ket{0}+\beta \ket{1}, \beta \ket{0}-\alpha \ket{1} }$. (We can always find such $\alpha$ and $\beta$.) Writing out the measurement explicitly gives \begin{align*} P_{Y_i|X_i B_i \Theta_i}(0 | x_i , b_i, \theta_i) &= | (\alpha \bra{0}+\beta \bra{1})H^{b_i} \ket{x_i}|^2 \qquad\text{and}\\ P_{Y_i|X_i B_i \Theta_i}(1 | x_i , b_i, \theta_i) &= | (\beta \bra{0}-\alpha \bra{1})H^{b_i} \ket{x_i}|^2. \end{align*} Hence, it suffices to prove that \begin{equation} | (\alpha \bra{0}+\beta \bra{1})H^{b_i}\ket{x_i} |^2 = | (\beta \bra{0}-\alpha \bra{1})H^{b_i}\ket{x_i \oplus\xspace 1 } |^2 \label{eq:bsc} \end{equation} for every $ x_i,b_i \in \set{0,1}$. We first show \refeq{bsc} for $b_i=0$. Let $\sigma_1$ be the first Pauli matrix defined by $\sigma_1 \ket{a} = \ket{a\oplus 1}$ for every $a \in \set{0,1}$. It follows immediately from the definition that $\sigma_1$ is a unitary matrix and it is easy to see that $\sigma_1$ is Hermitian. Then, \begin{align*} |(\alpha \bra{0}+\beta\bra{1})\ket{x_i}|^2 &= |(\alpha \bra{0}+\beta\bra{1}) \sigma_1 \sigma_1 \ket{x_i }|^2 = |(\alpha \bra{1} + \beta \bra{0}) \ket{x_i\oplus\xspace 1}|^2 \\ &= |(\beta \bra{0}- \alpha \bra{1} ) \ket{x_i\oplus\xspace 1}|^2 \end{align*} The last equation follows because the expression equals either $|\alpha|^2$ or $|\beta|^2$ (depending on $x_i\in \set{0,1}$), hence we may freely change the sign of $\alpha$. For $b_i=1$, we have \[ |(\alpha \bra{0}+\beta\bra{1})H\ket{x_i}|^2 = |(\alpha \bra{0}+\beta\bra{1})(\ket{0} + (-1)^{x_i}\ket{1})|^2 = |\alpha +(-1)^{x_i}\beta |^2 \] and \[ |(\beta \bra{0}-\alpha \bra{1})H\ket{x_i\oplus\xspace 1}|^2 = |(\beta \bra{0}-\alpha \bra{1})(\ket{0} - (-1)^{x_i}\ket{1})|^2 = |\beta + (-1)^{x_i} \alpha|^2. \] We see that those expressions are equal for every $x_i \in \set{0,1}$. \end{proof} \end{appendix} \end{document}
\begin{document} \title{Solutions of DEs and PDEs as Potential Maps Using First Order Lagrangians} \newcommand{\deltalta}{\delta} \newcommand{\deltat}{\det} \newcommand{\sigmam}{\sim} \newcommand{\timesmes}{\times} \newcommand{\nablabla}{\nabla} \newcommand{\alphapha}{\alpha} \newcommand{\ldotsots}{\ldots} \newcommand{\interior}{\lrcorner} \begin{abstract} Using parametrized curves (Section 1) or parametrized sheets (Section 3), and suitable metrics, we treat the jet bundle of order one as a semi-Riemann manifold. This point of view allows the description of solutions of DEs as pregeodesics (Section 1) and the solutions of PDEs as potential maps (Section 3), via Lagrangians of order one or via generalized Lorentz world-force laws. Implicitly, we solve a problem raised first by Poincar\'e: find a suitable geometric structure that converts the trajectories of a given vector field into geodesics (see also [6]--[11]). Sections 2 and 4 realize the passage from the Lagrangian dynamics to the covariant Hamilton equations. \end{abstract} {\bf Mathematics Subject Classification}: 34C40, 31C12, 53C43, 58E20 {\bf Key words}: jet bundle of order one, DEs, pregeodesics, PDEs, potential maps, Lagrangians of order one, covariant Hamilton equations \section{Solutions of DEs as pregeodesics} Unless specifically stated otherwise, all manifolds, all objects on them, and all maps from one manifold into another will be $C^\infty$; however, we sometimes redundantly write ``a $C^\infty$ manifold'', and so on, for emphasis. Let $(T = R,h)$ and $(M,g)$ be semi-Riemann manifolds of dimensions 1 and $n$. Hereafter we shall assume that the manifold $T$ is oriented. Latin letters will be used for indexing the components of geometrical objects attached to the manifold $M$. Local coordinates will be written $$ t = t^1, \quad x = (x^i), \quad i = 1,\ldots, n, $$ and the components of the corresponding metric tensors and Christoffel symbols will be denoted by $h_{11}$, $g_{ij}$, $H^1_{11}$, $G^i_{jk}$. Indices of distinguished objects will be raised and lowered in the usual fashion. Let $C^\infty (T,M) = \{\varphi: T \to M\:|\: \varphi \; \hbox{of class}\; C^\infty\}$. For any $\varphi,\psi \in C^\infty (T,M)$, we define the equivalence relation $\varphi \sim \psi$ at $(t_0, x_0) \in T \times M$ by $$ x^i (t_0) = y^i (t_0) = x^i_0, \quad {dx^i \over dt} (t_0) = {dy^i \over dt} (t_0), $$ where $x^i$ and $y^i$ denote the local coordinate expressions of $\varphi$ and $\psi$, respectively. Using the factorization $$ J^1_{(t_0,x_0)} (T,M) = C^\infty (T,M)/\sim $$ we introduce the jet bundle of order one $$ J^1 (T,M) = \bigcup_{(t_0,x_0)\in T\times M} J^1_{(t_0,x_0)} (T,M). $$ Denoting by $[\varphi]_{(t_0,x_0)}$ the equivalence class of the map $\varphi$, we define the projection $$ \pi: J^1 (T,M) \to T \times M, \quad \pi[\varphi]_{(t_0,x_0)} = (t_0, \varphi (t_0)). $$ Suppose that the base $T \times M$ is covered by a system of coordinate neighborhoods $(U\times V, t^1, x^i)$. Then we can define the diffeomorphism $$ F_{U\times V}: \pi^{-1} (U \times V) \to U \times V \times R^{1\cdot n} $$ $$ F_{U\times V} [\varphi]_{(t_0,x_0)} = \left( t_0, x^i_0, {dx^i \over dt} (t_0) \right). $$ Consequently $J^1(T,M)$ is a differentiable manifold of dimension $1+n+ 1\cdot n = 2n+1$.
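For concreteness, it may be useful to keep in mind the simplest, purely illustrative special case $T = R$ with $h_{11} = 1$ and $M = R^n$ with the Euclidean metric $g_{ij} = \delta_{ij}$. In this case $J^1(T,M)$ is globally diffeomorphic to $R^{2n+1}$, a curve $\varphi: R \to R^n$ being represented by its first order jet $$ t \mapsto \left( t, \varphi^i (t), {d\varphi^i \over dt} (t) \right), $$ and the Christoffel symbols $H^1_{11}$ and $G^i_{jk}$ vanish, so that the distinguished objects introduced below reduce to the familiar flat ones.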
The coordinates on $\pi^{-1} (U \times V) \subset J^1 (T,M)$ will be $$ \left( t^1 = t, x^i, y^i = {dx^i \over dt}\right), $$ where $$ t^1 ([\varphi]_{(t_0,x_0)}) = t^1(t_0), x^i([\varphi]_{(t_0,x_0)}) = x^i(x_0), \; y^i([\varphi]_{(t_0,x_0)}) = {dx^i \over dt} (t_0). $$ A local changing of coordinates $(t, x^i, y^i) \to (\bar t, \bar x^i, \bar y^i)$ is given by $$ \bar t = \bar t(t), \; \bar x^i = \bar x^i(x^j), \; \bar y^i = {\partial \bar x^i \over \partial x^j}{dt \over d\bar t} \; y^j, \leqno (1) $$ where $$ {d\bar t \over dt} > 0, \quad \deltat\left( {\partial \bar x^i \over \partial x^j} \right) \ne 0. $$ The expression of the Jacobian matrix of the local diffeomorphism $(1)$ shows that the jet bundle of order one $J^1 (T,M)$ is always orientable. Let $H^1_{11} = \displaystyle{1 \over 2} h^{11} \displaystyle{dh_{11} \over dt^1} = \displaystyle{1 \over 2} h^{-1}_{11} \displaystyle{dh_{11} \over dt^1} = \displaystyle{1 \over 2} \displaystyle{d \over dt^1} \sqrt{|h_{11}|}$, $G^i_{jk}$ be the components of the connections induced by $h$ and $g$ respectively. If $\displaystyle\left( t=t^1, x^i, y^i = {dx^i \over dt}\right)$ are the coordinates of a point in $J^1(T,M)$, then $$ {\deltalta \over dt}{dx^i \over dt} = {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt} {dx^k \over dt} $$ are the components of a distinguished tensor on $T \timesmes M$. Also $$ \left( {\deltalta \over \deltalta t} = {d \over dt} + H^1_{11} y^i {\partial \over \partial y^i}, \quad {\deltalta \over \deltalta x^i} = {\partial \over \partial x^i} - G^h_{ik} y^k {\partial \over \partial y^h}, \; {\partial \over \partial y^i} \right) $$ $$ (dt, \; dx^j, \; \deltalta y^j = dy^j - H^1_{11} y^j dt + G^j_{hk} y^h dx^k) $$ are dual frames on $J^1(T,M)$, i.e., $$ dt\left( {\deltalta \over \deltalta t} \right) = 1, \; dt\left( {\deltalta \over \deltalta x^i} \right) = 0, \; dt\left( {\partial \over \partial y^i} \right) = 0 $$ $$ dx^j\left( {\deltalta \over \deltalta t} \right) = 0, \; dx^j\left( {\deltalta \over \deltalta x^i} \right) = \deltalta^j_i, \; dx^j\left( {\partial \over \partial y^i} \right) = 0 $$ $$ \deltalta y^j\left( {\deltalta \over \deltalta t} \right) = 0, \; \deltalta y^j\left( {\deltalta \over \deltalta x^i} \right) = 0,\; \deltalta y^j\left( {\partial \over \partial y^i} \right) = \deltalta^j_i. $$ Using these frames, we define on $J^1 (T,M)$ the induced Sasaki-like metric $$ S_1 = h_{11} dt \otimes dt + g_{ij} dx^i \otimes dx^j + h^{11} g_{ij} \deltalta y^i \otimes \deltalta y^j. $$ The geometry of the manifold $(J^1(T,M), S_1)$ was developed recently in [4]. Now we shall generalize the Lorentz world-force law which was initially stated [5] for particles in nonquantum relativity. {\bf Definition}. Let $F = (F_j{}^i)$ and $U = (U^i)$ be $C^\infty$ distinguished tensors on $T \timesmes M$, where $\omega_{ji} = g_{hi} F_j{}^h$ is skew-symmetric with respect to $j$ and $i$. Let $c(t,x)$ be a $C^\infty$ real function on $T \timesmes M$. A map $\varphi: T \to M$ obeys the {\it Lorentz-Udri\c ste World-Force Law} with respect to $F, U, c$ iff $$ h^{11} {\deltalta \over dt} {dx^i \over dt} = h^{11} \left( g^{ij} {\partial c \over \partial x^j} + F_j{}^i \; {dx^j \over dt} + U^i \right). $$ Now we remark that a $C^\infty$ distinguished tensor field $X^i(t,x)$, $i=1,\ldots, n$ on $T \times M$ defines a family of trajectories as solutions of DEs system of order one $$ {dx^i \over dt} = X^i (t, x(t)). 
\leqno (2) $$ The distinguished tensor field $X^i(t,x)$ and semi-Riemann metrics $h$ and $g$ determine the {\it potential energy} $$ f: T \times M \to R, \quad f = {1 \over 2} h^{11} g_{ij} X^i X^j. $$ The distinguished tensor field (family of trajectories) $X^i$ on $(T\times M, h_{11} + g)$ is called: 1) {\it timelike}, if $f < 0$; 2) {\it nonspacelike} or {\it causal}, if $f \le 0$; 3) {\it null} or {\it lightlike}, if $f=0$; 4) {\it spacelike}, if $f > 0$. Let $X^i$ be a distinguished tensor field of everywhere constant energy. If $X^i$ (the system (2)) has no critical point on $M$, then upon rescaling, it may be supposed that $f \in \{-1,0,1\}$. Generally, $\cal E = \{x_0 \in M| X^i(t,x_0) = 0, \forall t \in T\}$ is the set of critical points of the distinguished tensor field, and this rescaling is possible only on $T \times (M\setminus \cal E)$. Using the operator (derivative along a solution of (2)) $$ {\deltalta \over dt} {dx^i \over dt} = {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt}, $$ the Levy-Civita connection $D$ of $(R,h)$ and the Levy-Civita connection $\nablabla$ of $(M,g)$, we obtain the prolongation (system of DEs of order two) $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt} = DX^i + (\nablabla_j X^i){dx^j \over dt}, \leqno (3) $$ where $$ \nabla_j X^i = {\partial X^i \over\partial x^j} + G^i_{jk} X^k, \quad DX^i = {\partial X^i \over \partial t} - H^1_{11} X^i. $$ The distinguished tensor field $X^i$, the metric $g$, and the connection $\nabla$ determine the external distinguished tensor field $$ F_j{}^i = \nabla_j X^i - g^{ih} g_{kj} \nabla_h X^k, $$ which characterizes the {\it helicity} of the distinguished tensor field $X^i$. The DEs system (3) can be written in the equivalent form $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt} = g^{ih} g_{kj} (\nabla_h X^k) {dx^j \over dt} + F_j{}^i {dx^j \over dt} + DX^i. \leqno (4) $$ Now we modify this DEs system into $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt} = g^{ih} g_{kj} (\nabla_h X^k)X^j + F_j{}^i {dx^j \over dt} + DX^i \leqno (5) $$ or equivalently, $$ h^{11} \left( {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt}\right) = h^{11} (g^{ih} g_{kj} (\nabla_h X^k) X^j + F_j{}^i {dx^j \over dt} + DX^i). $$ The system (5) is still a prolongation of the DEs system (2). {\bf Theorem}. {\it The kinematic system (2) can be prolonged to the second order dynamical system (5).} {\bf Corollary}. {\it Choosing the metrics $h$ and $g$ such that $f \in \{-1,0,1\}$, then the kinematic system (2) can be prolonged to the second order dynamical system} $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt} = F_j{}^i {dx^j \over dt} + DX^i. $$ We shall show that the dynamical system (5) is in fact an Euler-Lagrange system. We identify $J^1(T\times M)$ with its dual via the semi-Riemann metrics $h$ and $g$. {\bf Theorem}. 1) {\it The solutions of the DEs system (5) are the extremals of the Lagrangian} $$ L = {1 \over 2} h^{11} g_{ij} \left( {dx^i \over dt} - X^i \right) \left( {dx^j \over dt} - X^j \right) \sqrt{|h_{11}|} = $$ $$ = \left( {1 \over 2} h^{11} g_{ij} {dx^i \over dt} {dx^j \over dt} - h^{11}g_{ij} {dx^i \over dt} X^j + f\right) \sqrt{|h_{11}|}. 
$$ 2) {\it If $F_j{}^i = 0$, then the solutions of the DEs system (5) are the extremals of the Lagrangian} $$ L = \left( {1 \over 2} h^{11} g_{ij} {dx^i \over dt} {dx^j \over dt} + f\right) \sqrt{|h_{11}|}. $$ 3) {\it Both Lagrangians produce the same Hamiltonian} $$ H = \left( {1 \over 2} h^{11} g_{ij} {dx^i \over dt} {dx^j \over dt} - f\right) \sqrt{|h_{11}|}. $$ {\bf Theorem (Lorentz-Udri\c ste World-Force Law)}. 1) {\it Every solution of DEs system $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt} {dx^k \over dt} = g^{ih}g_{kj} (\nabla_h X^k)X^j + DX^i $$ is a pregeodesic (potential map) on the semi-Riemann manifold} $(T\times M, h+g)$. 2) {\it Every solution of DEs system (5) is a horizontal pregeodesic (potential map) on the semi-Riemann-Lagrange manifold} $$ (T\times M, h+g, \; N(^i_1)_j = G^i_{jk}y^k - F_j{}^i, \quad M(^i_1)_1 = - H^1_{11} y^i). $$ {\bf Corollary}. {\it Every DE generates a Lagrangian of order one via the associated first order DEs system and suitable metrics on the manifold of independent variable and on the manifold of functions. In this sense the solutions of the initial DE are pregeodesics produced by a suitable Lagrangian}. {\bf Proof}. Let $t \in R$ denote a real variable, usually referred to as the time. It may be pointed out that the DE $$ {d^n x \over dt^n} = f\left( t, x, {dx \over dt}, \ldotsots, {d^{n-1} x \over dt^{n-1}} \right), \leqno (6) $$ where $x$ is the unknown function, is equivalent to a system (2). For if we set $x=x^1$, then (6) is equivalent to $$ {dx^1 \over dt} = x^2, \quad {dx^2 \over dt} = x^3, \ldots, {dx^{n-1} \over dt} = x^n $$ $$ {dx^{n} \over dt} = f(t, x^1, x^2, \ldots, x^n), $$ which is type (2). Therefore, the preceding theory applies. \section{Hamiltonian approach} Let $(Q, \Omega)$ be a symplectic manifold (of even dimension). The Hamiltonian vector field $X_H$ of the function $H \in \cal F(Q)$ is defined by $$ X_H \interior \Omega = dH. $$ We generalize this relation as $$ X^1_H \interior \Omega_1 = \sqrt{|h_{11}|} dH, $$ using the distinguished objects $$ X^1_H, \; \Omega_1, H $$ and the manifold $J^1(T,M)$. For another point of view, see also [11]. {\bf Theorem}. {\it The DEs system $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt} {dx^k \over dt} = g^{ih} g_{kj} (\nabla_h X^k)X^j $$ transfers in $J^1(T,M)$ as a Hamilton DEs system with respect to the Hamiltonian $$ H = {1\over 2} h^{11} g_{ij} y^i y^j - f $$ and the non-degenerate distinguished symplectic relative 2-form} $$ \Omega = \Omega_1 \otimes dt^1, \quad \Omega_1 = g_{ij} dx^i \wedge \deltalta y^j \sqrt{|h_{11}|} . $$ {\bf Proof}. Let $$ \theta = \theta_1 \otimes dt^1, \quad \theta_1 = g_{ij} y^i dx^j \sqrt{|h_{11}|} $$ be the distinguished Liouville relative 1-form on $J^1(T,M)$. We find $$ \Omega_1 = -d\theta_1. $$ We introduce $$ X_H = X^1_H {\deltalta \over \deltalta t}, \; X^1_H = u^{1l} {\delta \over \delta x^l} + {\delta u^{1l} \over dt} {\partial \over \partial y^l} $$ as the distinguished Hamiltonian object associated to the function $H$. The relation $$ X^1_H \interior \Omega_1 = \sqrt{|h_{11}|} dH, $$ where $$ dH = h^{11} g_{ij} y^j \delta y^i - h^{11} g_{ij} (DX^i) X^j dt - h^{11} g_{ij} X^j \nabla_k X^i dx^k, $$ implies $$ g_{ij} u^{1i} \delta y^j - g_{ij} {\delta u^{1j} \over dt} dx^i = dH. 
$$ Consequently, it appears the PDEs system of Hamilton type $$ \left\{ \begin{array}{l} u^{1i} = h^{11} y^i \\ \noalign{ } \displaystyle{\delta u^{1i} \over dt} = g^{hi} h^{11} g_{jk} X^j (\nabla_h X^k) \end{array} \right. $$ together the condition $$ h^{11} g_{ij} (DX^i)X^j = 0. $$ {\bf Theorem}. {\it The DEs system $$ {d^2x^i \over dt^2} - H^1_{11} {dx^i \over dt} + G^i_{jk} {dx^j \over dt}{dx^k \over dt} = g^{ih} g_{kj} (\nabla_h X^k)X^j + F_j{}^i {dx^j \over dt} + DX^i $$ transfers in $J^1(T,M)$ as a Hamilton DEs system with respect to the Hamiltonian $$ H = {1 \over 2} h^{11} g_{ij} y^i y^j - f $$ and the non-degenerate distinguished symplectic relative 2-form $$ \Omega = \Omega_1 \otimes dt, \; \Omega_1 = (g_{ij} dx^i \wedge \delta y^j + \omega_{ij} dx^i \wedge dx^j + g_{ij} (DX^i) dt \wedge dx^j) \sqrt{|h_{11}|}, $$ where} $$ \omega_{ji} = g_{hi} F_j{}^h. $$ {\bf Proof}. Let $$ \theta = \theta_1 \otimes dt^1, \; \theta_1 = (g_{ij} y^i dx^j - g_{ij} X^i dx^j) \sqrt{|h_{11}|} $$ be the distinguished Liouville relative 1-form on $J^1(R,M)$. We find $$ \Omega_1 = - d\theta_1. $$ We denote $$ X_H = X^1_H {\delta \over \delta t}, \; X^1_H = h^{11} {\delta \over \delta t} + u^{1l}{\delta \over \delta x^l} + {\delta u^{1l} \over dt}{\partial \over \partial y^l} $$ the distinguished Hamiltonian object of the function $H$. The relation $$ X^1_H \interior \Omega_1 = \sqrt{|h_{11}|} dH $$ can be written $$ g_{ij} u^{1i} \delta y^j - g_{ij} {\delta u^{1j} \over dt} dx^i + 2\omega_{ij} u^{1i} dx^j - g_{ij} (DX^i) u^{1j} dt + h^{11} g_{ij} (DX^i)dx^j= dH, $$ where $$ dH = -h^{11} g_{ij} (DX^i) X^j dt + h^{11} g_{ij} y^j \delta y^i - h^{11} g_{ij} X^j (\nablabla_k X^i)dx^k. $$ Via these relations we identify a PDEs system of Hamilton type, $$ \left\{ \begin{array}{l} u^{1i} = h^{11} y^i \\ \noalign{ } \displaystyle{\delta u^{1i} \over dt} = g^{hi} h^{11} g_{jk} X^j (\nabla_h X^k) + 2g^{hi} \omega_{jh} u^{1j} + h^{11} DX^i \end{array} \right. $$ together the condition $$ g_{ij} (DX^i) (u^{1j} - h^{11} X^j) = 0. $$ \section{Solutions of PDEs as Potential Maps} All manifolds and maps are $C^\infty$, unless otherwise stated. Let $(T,h)$ and $(M,g)$ be semi-Riemann manifolds of dimensions $p$ and $n$. Hereafter we shall assume that the manifold $T$ is oriented. Greek (Latin) letters will be used for indexing the components of geometrical objects attached to the manifold $T$ (manifold $M$). Local coordinates will be written $$ t = (t^\alpha), \quad \alpha = 1, \ldots, p $$ $$ x = (x^i), \quad i = 1, \ldots, n, $$ and the components of the corresponding metric tensor and Christoffel symbols will be denoted by $h_{\alpha \beta}, g_{ij}$, $H^\alpha_{\beta \gamma}$, $G^i_{jk}$. Indices of tensors or distinguished tensors will be rised and lowered in the usual fashion. Let $C^\infty (T,M) = \{ \varphi: T \to M |\: \varphi \; \hbox{of class } \; C^\infty\}$. For any $\varphi, \psi \in C^\infty (T,M)$ we define the equivalence relation $\varphi \sigmam \psi$ at $(t_0, x_0) \in T \times M$, by $$ x^i (t_0) = y^i (t_0) = x^i_0, \; {\partial x^i \over \partial t^\alpha} (t_0) = {\partial y^i \over \partial t^\alpha} (t_0). $$ Using the factorization $$ J^1_{(t_0,x_0)} (T,M) = C^\infty (T,M)/_\sigmam, $$ we introduce the jet bundle of order one $$ J^1 (T,M) = \bigcup_{(t_0, x_0) \in T\times M} J^1_{t_0,x_0} (T,M). $$ Denoting by $[\varphi]_{(t_0,x_0)}$ the equivalence class of the map $\varphi$, we define the projection $$ \pi: J^1 (T,M) \to T \times M, \; \pi [\varphi]_{(t_0,x_0)} = (t_0, \varphi (t_0)). 
$$ Suppose that the base $T\times M$ is covered by a systems of coordinate neighborhood $(U \times V, t^\alpha, x^i)$. Then we can define the diffeomorphism $$ F_{U\times V}: \pi^{-1} (U \times V) \to U \times V \times R^{pn} $$ $$ F_{UV} [\varphi]_{(t_0,x_0)} = \left( t^\alpha_0, x^i_0, {\partial x^i \over \partial t^\alpha} (t_0)\right). $$ Consequently $J^1(T,M)$ is a differentiable manifold of dimension $p+n+pn$. The coordinates on $\pi^{-1} (U\times V) \subset J^1 (T,M)$ will be $$ (t^\alpha, x^i, x^i_\alpha), $$ where $$ t^\alpha \left([\varphi]_{(t_0,x_0)} \right) = t^\alpha (t_0), x^i \left([\varphi]_{(t_0,x_0)} \right) = x^i(x_0), x^i_\alpha \left([\varphi]_{(t_0,x_0)} \right) = {\partial x^i \over \partial t^\alpha} (t_0). $$ A local changing of coordinates $(t^\alpha, x^i, x^i_\alpha) \to (\bar t^\alpha, \bar x^i, \bar x^i_\alpha)$ is given by $$ \bar t^\alpha = \bar t^\alpha (t^\beta), \; \bar x^i = \bar x^i(x^j), \; \bar x^i_\alpha = {\partial \bar x^i \over \partial x^j} {\partial t^\beta \over \partial \bar t^\alpha} x^j_\beta, \leqno (7) $$ where $$ \deltat \left( {\partial \bar t^\alpha \over \partial t^\beta}\right) > 0, \quad \deltat \left( {\partial \bar x^i \over \partial x^j}\right) \ne 0. $$ The expression of the Jacobian matrix of the local diffeomorphism $(7)$ shows that the jet bundle of order one $J^1 (T,M)$ is always orientable. Let $H^\alpha_{\beta\gamma}, G^i_{jk}$ be the components of the connections induced by $h$ and $g$ respectively. If $(t^\alpha, x^i, x^i_\alpha)$ are the coordinates of a point in $J^1(T,M)$, then $$ x^i_{\alpha\beta} = {\partial^2 x^i \over \partial t^\alpha \partial t^\beta} - H^\gamma_{\alpha\beta} x^i_{\gamma} + G^i_{jk} x^j_\alpha x^k_\beta $$ are the components of a distinguished tensor on $T \times M$. Also $$ \left( {\deltalta \over \deltalta t^\alpha} = {\partial \over \partial t^\alpha} + H^\gamma_{\alpha\beta} x^i_\gamma {\partial \over \partial x^i_\beta}, \; {\deltalta \over \deltalta x^i} = {\partial \over \partial x^i} - G^h_{ik} x^k_\alpha {\partial \over \partial x^h_\alpha}, \; {\partial \over \partial x^i_\alpha}\right), $$ $$ \left( dt^\beta, dx^j, \; \deltalta x^j_\beta = dx^j_\beta - H^\gamma_{\beta\lambda} x^j_\gamma dt^\lambda + G^j_{hk} x^h_\beta dx^k \right) $$ are dual frames on $J^1 (T,M)$, i.e., $$ dt^\beta\left({\deltalta \over \deltalta t^\alpha}\right) = \deltalta^\beta_\alpha, \quad dt^\beta \left({\deltalta \over \deltalta x^i}\right) = 0, \quad dt^\beta \left({\partial \over \partial x^i_\alpha}\right) = 0 $$ $$ dx^j\left({\deltalta \over \deltalta t^\alpha}\right) = 0, \quad dx^j \left({\deltalta \over \deltalta x^i}\right) = \deltalta^j_i, \quad dx^j \left({\partial \over \partial x^i_\alpha}\right) = 0 $$ $$ \deltalta x^j_\beta\left({\deltalta \over \deltalta t^\alpha}\right) = 0, \quad \deltalta x^j_\beta \left({\deltalta \over \deltalta x^i}\right) = 0, \quad \deltalta x^j_\beta \left({\partial \over \partial x^i_\alpha}\right) = \deltalta^j_i \deltalta^\alpha_\beta. $$ Using these frames, we define on $J^1(T,M)$ the induced Sasaki-like metric $$ S_1 = h_{\alpha\beta} dt^\alpha \otimes dt^\beta + g_{ij} dx^i \otimes dx^j + h^{\alpha\beta} g_{ij} \deltalta x^i_\alpha \otimes \deltalta x^j_\beta. $$ The geometry of the manifold $J^1(T,M)$ was developed recently in [4]. The Lorentz world-force law formulated usually for particles [5] can be generalized as follows: {\bf Definition}. 
Let $F_\alpha =(F_j{}^i{}_\alpha)$ and $U_{\alpha\beta} = (U^i_{\alpha\beta})$ be $C^\infty$ distinguished tensors on $T\timesmes M$, where $\omega_{ji\alpha} = g_{hi}F_j{}^h{}_\alpha$ is skew-symmetric with respect to $j$ and $i$. Let $c(t,x)$ be a $C^\infty$ real function on $T \timesmes M$. A $C^\infty$ map $\varphi: T \to M$ obeys the {\it Lorentz-Udri\c ste World-Force Law} with respect to $F_\alpha$, $U_{\alpha\beta}$, $c$ iff $$ h^{\alpha\beta}x^i_{\alpha\beta} = g^{ij} {\partial c \over \partial x^j} + h^{\alpha\beta} F_j{}^i{}_\alpha x^j_\beta + h^{\alpha\beta} U^i_{\alpha\beta}, $$ i.e., iff it is a potential map of a suitable geometrical structure. Let us show that the solutions of a system of PDEs of order one are potential maps in a suitable geometrical structure of the jet bundle of order one. For that we remark that any $C^\infty$ distinguished tensor field $X^i_\alpha (t,x)$ on $T \timesmes M$ defines a family of $p$-dimensional sheets as solutions of the PDEs system of order one $$ x^i_\alpha = X^i_\alpha (t, x(t)), \leqno (8) $$ if the complete integrability conditions $$ {\partial X^i_\alpha \over \partial t^\beta} + {\partial X^i_\alpha \over \partial x^j} X^j_\beta = {\partial X^i_\beta \over \partial t^\alpha} + {\partial X^i_\beta \over \partial x^j} X^j_\alpha $$ are satisfied. To any distinguished tensor field $X^i_\alpha(t,x)$ and semi-Riemann metrics $h$ and $g$ we associate the {\it potential energy} $$ f: T \timesmes M \to R, \quad f = {1 \over 2} h^{\alpha\beta} g_{ij} X^i_\alpha X^j_\beta. $$ The distinguished tensor field $X^i_\alpha$ (family of $p$-dimensional sheets) on $(T \timesmes M, h + g)$ is called: 1) {\it timelike}, if $f < 0$; 2) {\it nonspacelike or causal}, if $f \le 0$; 3) {\it null or lightlike}, if $f=0$; 4) {\it spacelike}, if $f>0$. Let $\cal E = \{x_0 \in M |\: X^i_{\alpha} (t,x_0) = 0$, $\forall t \in T\}$ be the set of critical points of the system (8). If $f =$ constant, upon rescaling on $T \times (M \setminus \cal E)$, it may be supposed that $f \in \{-1,0,1\}$. The derivative along a solution of (8), $$ {\deltalta \over \partial t^\beta} x^i_\alpha = x^i_{\alpha\beta} = {\partial^2 x^i \over \partial t^\alpha \partial t^\beta} - H^\gamma_{\alpha\beta} x^i_\gamma + G^i_{jk} x^j_\alpha x^k_\beta, $$ produce the prolongation (system of PDEs of order two) $$ x^i_{\alpha\beta} = D_\beta X^i_\alpha + (\nabla_j X^i_\alpha) x^j_\beta. \leqno (9) $$ which can be converted into the prolongation $$ h^{\alpha\beta} x^i_{\alpha\beta} = g^{ih} h^{\alpha\beta} g_{ij} (\nabla_h X^k_\alpha)X^j_\beta + h^{\alpha\beta} F_j{}^i{}_\alpha x^j_\beta + h^{\alpha\beta} D_\beta X^i_\alpha, \leqno (10) $$ where $$ F_j{}^i{}_\alpha = \nabla_j X^i_\alpha - g^{ih} g_{kj} \nabla_h X^k_\alpha $$ is the external distinguished tensor field which characterizes the {\it helicity} of the distinguished tensor field $X^i_\alpha$. {\bf Theorem}. {\it Any solution of PDEs system (8) is a solution of the PDEs system (10).} The first term in the second hand member of the PDEs system (10) is $(grad\: f)^i$. Therefore, choosing the metrics $h$ and $g$ such that $f \in \{-1,0,1\}$, the system (10) reduces to $$ h^{\alpha\beta} x^i_{\alpha\beta} = g^{ih} F_j{}^i{}_\alpha x^j_\beta + h^{\alpha\beta} D_\beta X^i_\alpha. \leqno (10') $$ {\bf Theorem}. 
{\it The solutions of PDEs system (10) are the extremals of the Lagrangian} $$ \begin{array}{lcl} L &=& \displaystyle{1 \over 2} h^{\alpha\beta} g_{ij} (x^i_\alpha - X^i_\alpha)(x^j_\beta - X^j_\beta) \sqrt{|h|} = \\ \noalign{ } &=& \left( \displaystyle{1 \over 2} h^{\alpha\beta} g_{ij} x^i_\alpha x^j_\beta - h^{\alpha\beta} g_{ij} x^i_\alpha X^j_\beta + f\right) \sqrt{|h|}. \end{array} $$ {\it If $F^i_{j\alpha} = 0$, then this Lagrangian can be replaced by} $$ L = \left( {1 \over 2} h^{\alpha\beta} g_{ij} x^i_\alpha x^j_\beta + f \right) \sqrt{|h|}. $$ 2) {\it Both Lagrangians produce the same Hamiltonian} $$ H = \left( {1 \over 2} h^{\alpha\beta} g_{ij} x^i_\alpha x^j_\beta - f \right) \sqrt{|h|}. $$ {\bf Theorem (Lorentz-Udri\c ste World-Force Law)}. {\it Every solution of the PDEs system (8) is a horizontal potential map of the semi-Riemann-Lagrange manifold} $$ (T\times M, h+g, \; N(^i_\alpha)_j = G^i_{jk} x^k_\alpha - F_j{}^i{}_\alpha, \; M(^i_\alpha)_\beta = - H^\gamma_{\alpha\beta} x^i_\gamma). $$ {\bf Corollary}. {\it Every PDE generates a Lagrangian of order one via the associated first order PDEs system and suitable metrics on the manifold of independent variables and on the manifold of functions. In this sense the solutions of the initial PDE are potential maps produced by a suitable Lagrangian.} {\bf Proof}. Let $$ {\partial^rx \over \partial (t^p)^r} = F(t^\alpha, x, \bar x^{(r)}) $$ be a PDE of order $r$, where $\bar x^{(r)}$ represent the partial derivatives of $x$ with respect to $t^\alpha$, till the order $r$ inclusively, excepting the partial derivative $\displaystyle{\partial^r x \over \partial (t^p)^r}$. This equation is equivalent to a system (8). For the sake of simplicity, we take $r=2$. We denote $\displaystyle{\partial x\over \partial t^\alpha} = x_\alpha = u^\alpha$ and we find the partial derivatives of the functions $(x, u^\alpha)$ using the system $$ \left\{ \begin{array}{l} x_\alpha = u^\alpha \\ \noalign{ } u^\alpha_\beta = u^\beta_\alpha, \; \alpha \ne \beta \\ \noalign{ } u^2_2 = f(t^\alpha, x, u^\lambda_\mu), \; \hbox{excepting}\; \lambda =\mu =2. \end{array} \right. $$ We shall find a PDEs system of order one with $p(1+p)$ equations, which is of type (8). Therefore, the preceding theory applies. \section{Covariant Hamilton Equations} Recall that on a symplectic manifold $(Q,\Omega)$ of even dimension $q$, the Hamiltonian vector field $X_H$ of a function $H \in \cal F(Q)$ is defined by $$ X_H \interior \Omega = dH. $$ This relation can be generalized as $$ X^\alpha_H \interior \Omega_\alpha = \sqrt{|h|}dH, $$ using the distinguished objects $X_H, \Omega, H$ on $J^1 (T,M)$. For another point of view, see also [11]. {\bf Theorem}. {\it The PDEs system $$ h^{\alpha\beta} x^i_{\alpha\beta} = g^{ih} h^{\alpha\beta} g_{jk} (\nabla_h X^j_\alpha) X^k_\beta $$ transfers in $J^1(T,M)$ as a covariant Hamilton PDEs system with respect to the Hamiltonian $$ H = {1 \over 2} h^{\alpha\beta} g_{ij} x^i_\alpha x^j_\beta - f $$ and the non-degenerate distinguished polysymplectic relative 2--form} $$ \Omega = \Omega_\alpha \otimes dt^\alpha, \quad \Omega_\alpha = g_{ij} dx^i \wedge \deltalta x^j_\alpha \sqrt{|h|}. $$ {\bf Proof}. Let $$ \theta = \theta_\alpha \otimes dt^\alpha, \quad \theta_\alpha = g_{ij} x^i_\alpha dx^j \sqrt{|h|} $$ be the distinguished Liouville relative 1--form on $J^1 (T,M)$. It follows $$ \Omega_\alpha = -d\theta_\alpha. 
$$
We denote by
$$ X_H = X^\beta_H {\delta \over \delta t^\beta}, \quad X^\beta_H = u^{\beta l} {\delta \over \delta x^l} + {\delta u^{\beta l} \over \partial t^\alpha}{\partial \over \partial x^l_\alpha} $$
the distinguished Hamiltonian object of the function $H$. Imposing
$$ X^\alpha_H \interior \Omega_\alpha = \sqrt{|h|} dH, $$
where
$$ dH = h^{\alpha\beta} g_{ij} x^j_\beta \delta x^i_\alpha - h^{\alpha\beta} g_{ij} (D_\gamma X^i_\alpha) X^j_\beta dt^\gamma - h^{\alpha\beta}g_{ij} X^j_\beta \nabla_k X^i_\alpha dx^k, $$
we find
$$ g_{ij} u^{\alpha i} \delta x^j_\alpha - g_{ij} {\delta u^{\alpha j}\over \partial t^\alpha} dx^i = dH. $$
Consequently, we obtain the Hamilton PDEs system
$$ \left\{ \begin{array}{l} u^{\alpha i} = h^{\alpha \beta} x^i_\beta \\ \noalign{ } \displaystyle{\delta u^{\alpha i} \over \partial t^\alpha} = g^{hi} h^{\alpha\beta} g_{jk} X^j_\beta (\nabla_h X^k_\alpha) \end{array} \right. $$
together with the condition
$$ h^{\alpha\beta}g_{ij} (D_\gamma X^i_\alpha) X^j_\beta = 0. $$
{\bf Theorem}. {\it The PDEs system
$$ h^{\alpha\beta} x^i_{\alpha\beta} = g^{ih} h^{\alpha\beta} g_{kj} (\nabla_h X^k_\alpha) X^j_\beta + h^{\alpha\beta} F_j{}^i{}_\alpha x^j_\beta + h^{\alpha\beta} D_\beta X^i_\alpha $$
transfers to $J^1(T,M)$ as a covariant Hamilton PDEs system with respect to the Hamiltonian
$$ H = {1 \over 2} h^{\alpha\beta} g_{ij} x^i_\alpha x^j_\beta - f $$
and the non-degenerate distinguished polysymplectic relative 2-form}
$$ \Omega = \Omega_\alpha \otimes dt^\alpha, \quad \Omega_\alpha = (g_{ij} dx^i \wedge \delta x^j_\alpha + \omega_{ij\alpha} dx^i \wedge dx^j + g_{ij} (D_\beta X^i_\alpha) dt^\beta \wedge dx^j) \sqrt{|h|}. $$
{\bf Proof}. Let
$$ \theta = \theta_\alpha \otimes dt^\alpha, \quad \theta_\alpha = (g_{ij} x^i_\alpha dx^j - g_{ij} X^i_\alpha dx^j) \sqrt{|h|} $$
be the distinguished Liouville relative 1-form on $J^1(T,M)$. It follows that
$$ \Omega_\alpha = -d\theta_\alpha. $$
We denote by
$$ X_H = X^\beta_H {\delta \over \delta t^\beta}, \quad X^\beta_H = h^{\beta\gamma} {\delta \over \delta t^\gamma} + u^{\beta l} {\delta \over \delta x^l} + {\delta u^{\beta l} \over \partial t^\alpha}{\partial \over \partial x^l_\alpha} $$
the distinguished Hamiltonian object of the function $H$. Imposing
$$ X^\alpha_H \interior \Omega_\alpha = \sqrt{|h|} dH, $$
where
$$ dH = -h^{\alpha\beta} g_{ij} (D_\gamma X^i_\alpha) X^j_\beta dt^\gamma + h^{\alpha\beta} g_{ij} x^j_\beta \delta x^i_\alpha - h^{\alpha\beta} g_{ij} X^j_\beta (\nabla_k X^i_\alpha) dx^k, $$
we find
$$ g_{ij} u^{\alpha i} \delta x^j_\alpha - g_{ij} {\delta u^{\alpha j}\over \partial t^\alpha} dx^i + 2\omega_{ij\alpha} u^{\alpha i} dx^j - g_{ij} (D_\beta X^i_\alpha) u^{\alpha j} dt^\beta + h^{\alpha\beta} g_{ij} (D_\beta X^i_\alpha) dx^j = dH. $$
Consequently, we obtain the Hamilton PDEs system
$$ \left\{ \begin{array}{l} u^{\alpha i} = h^{\alpha \beta} x^i_\beta \\ \noalign{ } \displaystyle{\delta u^{\alpha i} \over \partial t^\alpha} = g^{hi} h^{\alpha\beta} g_{jk} X^j_\beta (\nabla_h X^k_\alpha) + 2g^{hi} \omega_{j h \alpha} u^{\alpha j} + h^{\alpha\beta} D_\beta X^i_\alpha \end{array} \right. $$
together with the condition
$$ g_{ij} (D_\gamma X^i_\alpha) (u^{\alpha j} - h^{\alpha\beta} X^j_\beta) = 0. $$
\end{document}
\begin{document} \title[Andr\'e permutations, right-to-left and \\ left-to-right minima] {Andr\'e permutations, right-to-left and \\ left-to-right minima} \author{Filippo Disanto*} \date{} \thanks{\hbox{\hskip-10pt}*Email: {\tt [email protected]}} \begin{abstract} We provide enumerative results concerning right-to-left minima and left-to-right minima in Andr\'e permutations of the first and second kind. For both the two kinds, the distribution of right-to-left and left-to-right minima is the same. We provide generating functions and associated asymptotics results. Our approach is based on the tree-structure of Andr\'e permutations. \end{abstract} \maketitle \thispagestyle{myheadings} \font\rms=cmr8 \font\its=cmti8 \font\bfs=cmbx8 \markright{\its S\'eminaire Lotharingien de Combinatoire \bfs ?? \rms (201?), Article~???? } \def\thepage{} \section{Introduction} \emph{Andr\'e} permutations have been introduced in \cite{foata} and extensively studied in the literature especially because of their relations with other combinatorial structures~\cite{strehl2, strehl, hetiei, hetiei2, stanleyundici}. For instance, the \emph{cd}-index of the Boolean algebra may be computed by summing the \emph{cd}-variation monomials of Andr\'e permutations~\cite{stanleyundici}. It is possible to distinguish among two types of Andr\'e permutations: those of the \emph{first} kind $\mathcal{A}^{(1)}$ and those of the \emph{second} kind $\mathcal{A}^{(2)}$. The two classes are equinumerous. The $n$-th Euler number $e_n = [z^n] \sec(z) + [z^n] \tan(z)$ counts Andr\'e permutations of size $n$. The first terms are $e_0=1,e_1=1,e_2=1,e_3=2,e_4=5,e_5=16,\dots$. Classically, Euler numbers only refer to \emph{secant} numbers, the (even) coefficients of the Taylor expansion of $\sec(z)$. The (odd) coefficients of the Taylor expansion of $\tan(z)$ are called \emph{tangent} numbers. Here, with an abuse of terminology, we take the Euler numbers as the sum of these two coefficients. Besides Andr\'e permutations, Euler numbers give the enumeration of several other combinatorial structures. In particular, they also count rooted binary un-ordered \emph{increasing} trees. In \cite{foata} the authors describe two bijections - denoted here by $\phi_1$ and $\phi_2$ - which maps Andr\'e permutations of both kinds onto this class of trees and viceversa. Based on this correspondence, two classical permutation statistics, such as right-to-left minima (rlm) and left-to-right minima (lrm), have a natural interpretation in terms of paths of the associated trees. In this work we indeed focus on the enumeration of Andr\'e permutations according to the parameters number of right-to-left minima and number of left-to-right minima. To the best of our knowledge, these permutation statistics have not been investigated before in this context. In Section~\ref{inizio}, we show that the statistic number of right-to-left minima has the same distribution on each one of the two sets $\mathcal{A}^{(1)}_n$ and $\mathcal{A}^{(2)}_n$. The same holds for the number of left-to-right minima and, more generally, in the case of their joint distribution. Without loss of generality, we then focus on one type of Andr\'e permutations, those of the second kind $\mathcal{A} = \mathcal{A}^{(2)}$. For the joint enumeration according to right-to-left and left-to-right minima a functional equation for the associated trivariate generating function is provided. 
In Section~\ref{L}, we find the bivariate generating function which counts Andr\'e permutations $\mathcal{A}$ with respect to the size and the number of right-to-left minima. As a result, fixing the number of right-to-left minima, we provide a combinatorial formula which describes the desired enumeration in terms of Euler numbers. As a corollary to the results of this section we have a correspondance between number of right-to-left minima in Andr\'e permutations and number of cycles in the so-called \emph{cycle-up-down} permutations introduced in \cite{deutsch}. This will need to be further investigated. In Section~\ref{nelbosco}, we study the number of left-to-right minima. We give a functional equation for the associated bivariate generating function. We show how the number of permutations of size $n+1$ with $2$ left-to-right minima is related to the total number of right-to-left minima in permutations of size $n$. Finally, we study Andr\'e permutations with a generic - but fixed - number of left-to-right minima providing asymptotic estimates. \section{Preliminaries} \label{def} The set of permutations of size $n$ is denoted by $\mathcal{S}_n$. If $\pi = (\pi_1 \pi_2 \dots \pi_n ) \in \mathcal{S}_n$, the set of its \emph{left-to-right minima} is denoted by $\mathrm{lrm}(\pi)$ and its elements are those entries $\pi_i$ such that if $j<i$, then $\pi_i < \pi_j$. We denote by $\mathrm{rlm}(\pi)$ the set of \emph{right-to-left minima} and we remind to the reader that $\pi_i \in \mathrm{rlm}(\pi)$ if $j>i$ implies $\pi_i< \pi_j$. \pagenumbering{arabic} \addtocounter{page}{1} \markboth{\SMALL FILIPPO DISANTO}{\SMALL ANDR\'E PERMUTATIONS, RIGHT-TO-LEFT AND LEFT-TO-RIGHT MINIMA} A binary \emph{increasing} tree is a rooted, \emph{un-ordered} tree with nodes of outdegree $0,1$~or~$2$. Nodes of outdegree $0$ are also called the $leaves$ of the tree. Moreover, for such a tree, we require that each of the $n$ nodes is bijectively labelled by a number in $\{1,2,...,n \}$ in a way that, going from the root to any leaf, we always find an increasing sequence of numbers. If $x$ and $y$ are two nodes, we write $x \prec y$ when the label of $x$ is less than the label of $y$. The set of binary increasing trees is denoted by $\mathcal{B}$ while we use the symbol $\mathcal{B}_n$ to denote the subset of $\mathcal{B}$ made of those trees with $n$ nodes. Observe that each tree in $\mathcal{B}$ can be drawn in the plane in a \emph{unique} way respecting the following two conditions $(A_2)$ and $(B_2)$: \begin{itemize} \item[$(A_2)$] if a node has only one child, then this child is drawn on the \emph{right} of its direct ancestor; \item[$(B_2)$] if a node $x$ has two children $y$ and $z$, let $t_y$ (resp. $t_z$) be the set of nodes in the subtree generated by $y$ (resp. $z$). If $ y=\min(t_y) \prec z=\min(t_z)$, then $y$ is drawn on the \emph{right} of $x$ while $z$ on the \emph{left}. \end{itemize} In Fig.~\ref{alberetti} we show those trees belonging to $\mathcal{B}_4$ drawn respecting the previous two conditions. \begin{figure} \caption{The trees in $\mathcal{B} \label{alberetti} \end{figure} Conditions $(A_2,B_2)$ are not the only possible ones that allow a unique planar representation for each tree in $\mathcal{B}$. Another couple of conditions is for instance \begin{itemize} \item[$(A_1)$] if a node has only one child, then this child is drawn on the \emph{right} of its direct ancestor; \item[$(B_1)$] if a node $x$ has two children $y$ and $z$, let $t_y$ (resp. 
$t_z$) be the set of nodes in the subtree generated by $y$ (resp. $z$). If $ \max(t_y) \prec \max(t_z)$, then $z$ is drawn on the \emph{right} of $x$ while $y$ on the \emph{left}. \end{itemize} The sets of \emph{Andr\'e} permutations $\mathcal{A}^{(2)}$ and $\mathcal{A}^{(1)}$ can be defined in several equivalent ways, see for instance Section~2~of~\cite{hetiei}. Since they are both subsets of $\mathcal{S}_n$ equinumerous to $\mathcal{B}_n$, we choose to characterize their permutations according to two injective maps $\phi_2,\phi_1:\mathcal{B}_n \rightarrow \mathcal{S}_n$ (see \cite{foata}). For $\phi_2$ (resp. $\phi_1$) the procedure is: \begin{itemize} \item[(1)] given $t \in \mathcal{B}_n$, draw $t$ according to $(A_2,B_2)$ (resp. $(A_1,B_1)$); \item[(2)] each leaf collapses into its direct ancestor whose label is then modified receiving on the left the label of the left child (if any) and on the right the label of its right child. We obtain in this way a new tree whose nodes are labelled with sequences of numbers; \item[(3)] starting from the obtained tree go to step (2). \end{itemize} The algorithms $\phi_2, \phi_1$ end when the tree $t$ is reduced to a single node whose label is then a permutation $\phi_2(t), \phi_1(t)$ of size $n$. Note that, without considering step (1) but only (2) and (3), the procedures give a well-known \cite{stan} bijection $\psi$ between \emph{ordered} binary increasing trees $\tilde{\mathcal{B}}_n$ and the entire set of permutations of size $n$. The sets $\mathcal{A}_n^{(i)}$ can be defined as $\mathcal{A}_n^{(i)} = \{ \phi_i(t) \in \mathcal{S}_n : t \in \mathcal{B}_n \}$ (with $i = 2,1$). Looking at Fig.~\ref{alberetti}, the corresponding permutations in $\mathcal{A}_4^{(2)}$ are (from left to right) $(4123), (1234), (3412), (1423)$ and $(3124)$. For the same size $n=4$, the permutations in $\mathcal{A}_4^{(1)}$ are $(2314), (1234), (2134), (1324)$ and $(3124)$. An equivalent definition of Andr\'e permutations can be given in terms of the so-called $x$\emph{-factorizations} of permutations, see Definition~1 and Definition~2 of \cite{hetiei2} and the related references. The equivalence is easily recovered by observing that - following notations of \cite{hetiei2} - the $\lambda$-part of the $x$-factorization of a permutation $\pi$ corresponds to the left-subtree of the node $x$ in the \emph{ordered} binary increasing tree $\psi^{-1}(\pi)$. Similarly, the $\rho$-part of the $x$-factorization corresponds to the right sub-tree of $x$ in $\psi^{-1}(\pi)$. Andr\'e permutations, as binary increasing trees, are enumerated, with respect to the size, by the so called \emph{Euler} numbers $(e_n)_{n\geq 0}$ whose exponential generating function satisfies $$\int E^2 = 2E-z-2 \footnote{\text{We will often adopt the notation} $\int f(z) = \int_0^{z} f(a) da$.}$$ and therefore is equal to $$E(z)= \sec(z) + \tan(z).$$ The first terms of the sequence are: $1,1,1,2,5,16,61,272,1385,...$ and they correspond to entry $A000111$ in \cite{sloane}. Furthermore, expanding $E(z)$ near the dominant singularity $z=\pi/2$, we easily recover an asymptotic approximation for the coefficients \begin{equation}\label{brioscia} \frac{e_n}{n!} \sim \frac{4}{\pi} \left( \frac{2}{\pi} \right)^n. \end{equation} \section{Enumeration of right-to-left minima and left-to-right minima} In this section we study enumerative properties of right-to-left minima and left-to-right minima in Andr\'e permutations. In Section~\ref{inizio}, these statistics are jointly studied. 
In Section~\ref{L}, we focus on the number of right-to-left minima, while, in Section~\ref{nelbosco}, we investigate left-to-right minima. \subsection{Joint enumeration}\label{inizio} Through the bijection $\psi: \tilde{\mathcal{B}}_n \rightarrow \mathcal{S}_n$ described in Section~\ref{def}, one can see that, for any given pemutation $\pi$, the set $\mathrm{rlm}(\pi)$ corresponds to the nodes visited in the tree $\psi^{-1}(\pi)$ starting from the root and performing only right-steps. Similarly, the set $\mathrm{lrm}(\pi)$ corresponds to the nodes visited in the tree $\psi^{-1}(\pi)$ starting from the root and performing only left-steps. Let $\pi_2 \in \mathcal{A}_n^{(2)}$ and $\pi_1 \in \mathcal{A}_n^{(1)}$, consider $t_2 = \phi_2^{-1}(\pi_2)$ and $t_1 = \phi_1^{-1}(\pi_1)$. If $n>1$ then, for $i=2,1$, the tree $t_i$ consists of two trees, $t_{i,\text{left}}$ and $t_{i,\text{right}}$, appended to its root on the left and on the right respectively. Clearly, $\mathrm{rlm}(\phi_i^{-1}(t_i)) = 1 + \mathrm{rlm}(\phi_i^{-1}(t_{i,\text{right}})) $ and $\mathrm{lrm}(\phi_i^{-1}(t_i)) = 1 + \mathrm{lrm}(\phi_i^{-1}(t_{i,\text{left}})) $, see Fig.~\ref{alba}. \begin{figure} \caption{Recursive decomposition of $t_i = \phi_i^{-1} \label{alba} \end{figure} Furtermore, observe that, in both cases $i=2,1$, there are exactly $${{|t_{i,\text{left}}| + |t_{i,\text{right}}| - 1}\choose{|t_{i,\text{left}}|}}$$ ways of merging the ranking of $t_{i,\text{left}}$ with the ranking of $t_{i,\text{right}}$ that create a tree drawn according to conditions ($A_i, B_i$). When $i=2$, we have to put the root of $t_{i,\text{right}}$ \emph{above} the root of $t_{i,\text{left}}$ while, when $i=1$, we put the $\max$-node of $t_{i,\text{right}}$ \emph{below} the $\max$-node of $t_{i,\text{left}}$ (the $\max$-node is always a leaf). Also note that, when $|t_{i,\text{left}}|=0$, the previous binomial expression returns $1$. From this considerations, it follows that, from an enumerative point of view, the same recursive construction describes the distribution of right-to-left minima and left-to-right minima in Andr\'e permutations of the first and second kind. Without loss of generality, we decide to focus on Andr\'e permutations of the second kind. We thus set $\mathcal{A} = \mathcal{A}^{(2)}$, $\phi= \phi_2$ and, if not specified otherwise, we draw each tree $t \in \mathcal{B}$ according to $(A_2,B_2)$. The exponential generating function $$H = H(x,y,z) = \sum_{\pi \in \mathcal{A}} \frac{x^r y^l z^n}{n!},$$ where $r=|\mathrm{rlm}(\pi)|, l=|\mathrm{lrm}(\pi)|$ and $n=\text{size}(\pi)$, satisfies the functional equation $$H = 1 + xyz + \sum_{\pi_1=t_{\text{right}} \neq \emptyset} \sum_{\pi_2=t_{\text{left}}} x^{r_1 + 1} y^{l_2 + 1}\frac{z^{n_1+n_2+1}}{(n_1+n_2+1)!} \cdot {{n_1+n_2-1}\choose{n_2}}.$$ Taking twice the derivative respect to $z$ we obtain \begin{equation}\label{nonni} \frac{\partial^2 H }{\partial z^2} = x y \frac{\partial H(x,1,z)}{\partial z} \, H(1,y,z) \end{equation} which gives \begin{equation}\label{tagliaerba} H = 1 + xyz + xy \int \int \frac{\partial H(x,1,z)}{\partial z} \, H(1,y,z) dz dz. \end{equation} Equation (\ref{tagliaerba}) can be used recursively to compute the polynomials $H_i(x,y)= \sum_{\pi \in \mathcal{A}_i} x^r y^l$. 
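Equation~(\ref{tagliaerba}) can also be cross-checked by brute force. The short sketch below (plain Python; purely illustrative, with function names of our own choosing) generates the trees of $\mathcal{B}_n$ drawn according to $(A_2,B_2)$, applies $\phi_2$ by reading each drawn tree in symmetric (left, root, right) order, and tallies the joint distribution of right-to-left and left-to-right minima, reproducing for $1\le n\le 5$ the polynomials $H_n$ listed next.
\begin{verbatim}
from itertools import combinations
from collections import Counter

def trees(labels):
    # all binary increasing trees on `labels`, drawn following (A2, B2):
    # a single child goes to the right; with two children, the subtree
    # containing the smaller minimum goes to the right.
    labels = sorted(labels)
    root, rest = labels[0], labels[1:]
    if not rest:
        return [(root, None, None)]
    out, m, others = [], rest[0], rest[1:]
    for k in range(len(others) + 1):
        for extra in combinations(others, k):
            right_set = (m,) + extra
            left_set = tuple(x for x in others if x not in extra)
            for r in trees(right_set):
                if left_set:
                    out += [(root, l, r) for l in trees(left_set)]
                else:
                    out.append((root, None, r))
    return out

def word(t):
    # phi_2: symmetric-order (left, root, right) reading of the drawn tree
    if t is None:
        return []
    lab, left, right = t
    return word(left) + [lab] + word(right)

def rlm(p):
    return sum(all(v < w for w in p[i+1:]) for i, v in enumerate(p))

def lrm(p):
    return sum(all(v < w for w in p[:i]) for i, v in enumerate(p))

for n in range(1, 7):
    H = Counter((rlm(word(t)), lrm(word(t))) for t in trees(range(1, n + 1)))
    print(n, sorted(H.items()))   # coefficients of x^r y^l in H_n(x, y)
\end{verbatim}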
When $0\leq i \leq 5$ we have \begin{eqnarray}\nonumber H_0 &=& 1 ;\\\nonumber H_1 &=& xy ; \\\nonumber H_2 &=& x^2 y ; \\\nonumber H_3 &=& x^3 y + x^2 y^2 ; \\\nonumber H_4 &=& x^4 y + 2 x^3 y^2 + x^3 y + x^2 y^2 ; \\\nonumber H_5 &=& x^5 y + 3 x^4 y^2 + 3 x^4 y + 6 x^3 y^2 + x^3 y + x^2 y^3 + x^2 y^2. \nonumber \end{eqnarray} Furthermore, considering that $H(1,1,z) = E(z)$ and that $E'(z) = \frac{1}{1-\sin(z)}$, equation (\ref{nonni}) becomes \begin{equation}\label{rait} \frac{\partial^2 H(x,1,z) }{\partial z^2} = x \frac{\partial H(x,1,z)}{\partial z} \, E(z) \end{equation} when we consider $y=1$, while it gives \begin{equation}\label{lefttu} \frac{\partial^2 H(1,y,z) }{\partial z^2} = y E'(z) \, H(1,y,z) \end{equation} when we take $x=1$. In the following sections we will study (\ref{rait}) and (\ref{lefttu}) as they respectively provide the enumeration of Andr\'e permutations with respect to the number of right-to-left minima and left-to-right minima. \subsection{Right-to-left minima} \label{L} Here we focus on \emph{right-to-left minima} statistic using the symbol $\mathcal{A}^{R}_{n,r}$ to denote the subset of $\mathcal{A}_n$ made of those permutations $\pi$ with $|\mathrm{rlm}(\pi)|=r$. Defining \begin{equation} \nonumber F(x,z) = \left( \frac{1}{1-\sin(z)} \right)^x, \end{equation} it is easy to check that $$\frac{\partial F(x,z)}{\partial z} = x F(x,z) \cdot E(z).$$ Thus, setting $$F(x,z) = \frac{1}{x} \frac{\partial H(x,1,z)}{\partial z},$$ we have that $H(x,1,z)$ satisfies (\ref{rait}). Considering $F(1,z)$ provides the (shifted) exponential generating function for Euler numbers. In other words, we have the following result \begin{prop}\label{collara} The (shifted) exponential generating function counting Andr\'e permutations with respect to the size $n$ and number of righ-to-left minima $r$ is given by \begin{equation} \nonumber F(x,z) = \left( \frac{1}{1-\sin(z)} \right)^x = \sum_{\pi \in \mathcal{A}} \frac{ x^{r-1} z^{n-1} }{(n-1) \, !} . \nonumber \end{equation} \end{prop} The first terms of $|\mathcal{A}^{R}_{n,r}|$ are thus given by the following table. \begin{center} \begin{tabular}{|c|ccccccccc|} \hline n/r & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\hline 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 1 & 3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 5 & 2 & 7 & 6 & 1 & 0 & 0 & 0 & 0 & 0 \\ 6 & 5 & 20 & 25 & 10 & 1 & 0 & 0 & 0 & 0 \\ 7 & 16 & 70 & 105 & 65 & 15 & 1 & 0 & 0 & 0\\ 8 & 61 & 287 & 490 & 385 & 140 & 21 & 1 & 0 & 0\\ 9 & 272 & 1356 & 2548 & 2345 & 1120 & 266 & 28 & 1 & 0 \\ 10 & 1385 & 7248 & 14698 & 15204 & 8715 & 2772 & 462 & 36 & 1 \\\hline \end{tabular} \end{center} Note that Euler numbers are the entries of the first column. Furthermore observe that, looking at the table column by column, one has $$\left( \frac{\partial ^{r} \, F}{\partial x ^{r}} \right)_{x=0}=\bigg[-\ln \big( 1-\sin(z) \big)\bigg]^r$$ and then \begin{equation}\label{rota} \frac{1}{r!}\bigg[-\ln \big( 1-\sin(z) \big)\bigg]^r = \sum_{\pi, |\mathrm{rlm}(\pi)|=r+1} \frac{z^{n-1} }{(n-1) \, !}. \end{equation} Given that $\int E(z) = -\ln \big( 1-\sin(z) \big),$ as a corollary we have \begin{prop}\label{ele} For every fixed $r \geq 1$ \begin{equation}\label{balotelli} \frac{1}{r!}\left[ \sum_{n\geq 1} \frac{e_{n-1}}{n!}z^n \right]^{r}=\sum_{n\geq l}\frac{|\mathcal{A}^{R}_{n+1,r+1}|}{n!} z^n, \end{equation} where $e_0=1, e_1=1, e_2=1, e_3=2, e_4=5, e_5=16, e_6=61,...$ are Euler numbers. 
\end{prop} For a fixed $r\geq 2$, the asymptotic behaviour of the sequence $|\mathcal{A}^{R}_{n,r}|$ can also be examined at this point. Observe that near the dominant singularity $z=\pi/2$, from $$-\ln\big(1-\sin(z)\big) = \ln(2) - 2 \ln(z-\pi/2) + 1/12 (z-\pi/2)^2 + \mathcal{O}\left( (z-\pi/2)^4 \right),$$ we have the following approximation \begin{eqnarray}\nonumber \bigg[-\ln\big(1-\sin(z)\big) \bigg]^r &=& \bigg[ \ln(2) - 2 \ln(z-\pi/2) \bigg]^r + \mathcal{O}\left( z-\pi/2 \right) \\\label{paguro} &=& 2^r \bigg[-\ln(z-\pi/2) \bigg]^r + 2^{r-1} r \ln(2)\bigg[-\ln(z-\pi/2) \bigg]^{r-1} \\\nonumber && + \mathcal{O}\left( \bigg[-\ln(z-\pi/2) \bigg]^{r-2} \right). \end{eqnarray} Rewriting $\ln(z- \pi/2) = \ln(-\pi/2) + \ln\left(1-\frac{2z}{\pi} \right)$, by Th.VI.2 of \cite{ancomb} (see special cases formula (27)) we have $$[z^n]\bigg[-\ln(z-\pi/2) \bigg]^r \sim ( 2/\pi)^n n^{-1}\left[ C_1 \big(\ln(n)\big)^{r-1} + \mathcal{O}\left( \big(\ln(n)\big)^{r-2} \right) \right]$$ and similarly $$[z^n]\bigg[-\ln(z-\pi/2) \bigg]^{r-1} \sim ( 2/\pi)^n n^{-1}\left[ C_2 \big(\ln(n)\big)^{r-2} + \mathcal{O}\left( \big(\ln(n)\big)^{r-3} \right) \right],$$ where $C_1,C_2$ are positive constants. Furthermore, by using Th.VI.3 \cite{ancomb} for the $\mathcal{O}$-transfer, we have that $$[z^n]\bigg[ \mathcal{O}\left( \bigg[-\ln(z-\pi/2) \bigg]^{r-2} \right) \bigg] = \mathcal{O}\left( (2/\pi)^n n^{-1} \bigg( \ln(n) \bigg)^{r-2} \right).$$ Finally, by applying Th.VI.4 \cite{ancomb} to (\ref{paguro}) and recalling (\ref{rota}), we obtain the following result. \begin{prop} For a fixed $r \geq 1$ and $n \rightarrow \infty$, we have the asymptotic equivalence \begin{equation} \frac{|\mathcal{A}^{R}_{n+1,r+1}|}{n!} = [z^n]\bigg[-\ln\big(1-\sin(z)\big) \bigg]^r \sim k_r \cdot n^{-1} \left(\frac{2}{\pi}\right)^n \bigg( \ln(n) \bigg)^{r-1}, \end{equation} where $k_r$ is a positive constant depending on $r$. \end{prop} We conclude this section recalling that in Chapter 7 of \cite{johnson} the author studies a family of polynomials corresponding to the rows of the previous table. He also shows a criterion according to which each row defines a partition of the set of \emph{up-down} permutations of a given size. Furthermore, in \cite{deutsch} the authors prove that the rows of the previous table also provide the enumeration of the so called \emph{cycle-up-down} permutations with respect to the size and to the number of cycles. It is then natural to ask for a bijection between the permutations in $\mathcal{A}_{n+1}$ and the cycle-up-down ones of size $n$ enlightening the correspondence between right-to-left minima and cycles. \subsection{Left-to-right minima}\label{nelbosco} In the previous section we have enumerated the permutations in $\mathcal{A}$ with respect to the size and to the number of right-to-left minima. Here we study the cardinality of $\mathcal{A}^{L}_{n,l}$, that is the subset of $\mathcal{A}_n$ with $|\mathrm{lrm}(\pi)|=l$. Using the polynomials $H_i$ of Section~\ref{inizio}, we have computed the entries of the following table showing, for all $(n,l)\in \{1,...,10 \}^2$, the number of permutations in $\mathcal{A}^{L}_{n,l}$. 
\begin{center} \begin{tabular}{|c|cccccccccc|} \hline n/l & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\\hline 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 3 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4 & 2 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 5 & 5 & 10 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 6 & 16 & 38 & 7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 7 & 61 & 165 & 45 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 8 & 272 & 812 & 288 & 13 & 0 & 0 & 0 & 0 & 0 & 0 \\ 9 & 1385 & 4478 & 1936 & 136 & 1 & 0 & 0 & 0 & 0 & 0 \\ 10 & 7936 & 27408 & 13836 & 1320 & 21 & 0 & 0 & 0 & 0 & 0 \\\hline \end{tabular} \end{center} In the first column we find (shifted) Euler numbers. It is also interesting to observe that the entries in the second column $$ 1,3,10,38,165,812,4478,27408,184529,1356256,10809786,92892928... $$ belong to sequence A186367 of \cite{sloane}. This sequence counts the number of cycles in all \emph{cycle-up-down} permutations of size $n$ (see also \cite{deutsch}) and, furthermore, it is strongly related to the total number of right-to-left minima in the permutations of $\mathcal{A}$ having fixed size. Indeed, we will prove that the exponential generating function associated with the non-zero entries of the column $r=2$ of the table above is given by $$\left( \frac{\partial F}{\partial x} \right)_{x=1} =\frac{-\ln \big(1-\sin(z)\big)}{1-\sin(z)},$$ where $F$ is the same of Proposition~\ref{collara}. In order to prove the correspondence, we observe that each tree $\phi^{-1}(\pi)$, such that $|\mathrm{lrm}(\pi)=2|$, can be decomposed as shown in Fig.~\ref{jollo}. \begin{figure} \caption{Decomposition of a tree with two left-to-right minima.} \label{jollo} \end{figure} In particular, note that tree $t_1$ must contain at least one node (labelled with $2$) while tree $t_2$ could be empty. The class of trees consisting of a (possibly empty) tree appended to a node - denoted by $k$ in Fig.~\ref{jollo} - is counted by the exponential generating function \begin{eqnarray}\nonumber f_{t_2}(z) &=& \sum_{m>0} \frac{\tilde{e}_m z^m}{m!}=\int E(z)=-\ln(1-\sin(z)) \,\,\, (\mathrm{with}\,\, \tilde{e}_m=e_{m-1}) \\\nonumber \end{eqnarray} while \begin{eqnarray} \nonumber f_{t_1}(z) &=& \sum_{n>0} \frac{e_n z^n}{n!}=E(z)-1 \\\nonumber \end{eqnarray} counts those trees having at least one node. Appending $t_1$ of size $n$ and $t_2$ of size $m-1$ as shown in Fig.~\ref{jollo}, we can build exactly ${{n+m-1}\choose{m}}$ different trees. It follows that, in the previous table, the entries $n\geq 1$ of the column $r=2$ correspond to the coefficients of the following exponential generating function \begin{equation}\nonumber g_2(z)=\sum_{n>0} \sum_{m>0} \frac{e_n \tilde{e}_m z^{n+m+1}}{(n+m)(n+m+1)(m!)(n-1)!}. \end{equation} Finally observe that \begin{equation}\nonumber g_2'' = f_{t_2} \cdot f_{t_1}' = \ln \left( \frac{1}{1-\sin(z)} \right) \cdot \left(\frac{1}{1-\sin(z)} \right)=\left( \frac{\partial F}{\partial x} \right)_{x=1}. 
\end{equation} Given the above calculations, we obtain the following result \begin{prop} The following equality holds for all $n \geq 2$: \begin{eqnarray}\nonumber |\mathcal{A}^{L}_{n+1,2}|&=& \sum_{r\geq 2} (r-1)\cdot |\mathcal{A}^{R}_{n,r}|\\\nonumber &=&(n-1)!\cdot [z^{n-1}]\left(\frac{-\ln \big(1-\sin(z)\big)}{1-\sin(z)}\right) \\\nonumber \end{eqnarray} \end{prop} from which we have the next corollary \begin{coro}\label{rosario} For $n\geq 2$: \begin{equation}\label{media} |\mathcal{A}^{L}_{n+1,2}| +|\mathcal{A}_n| = \sum_{r\geq 2} r \cdot |\mathcal{A}^{R}_{n,r}| \end{equation} and therefore the expected number of right-to-left minima in a random permutation of $\mathcal{A}_n$ is given by $1 + | \mathcal{A}^{L}_{n+1,2}|/|\mathcal{A}_n|.$ \end{coro} \subsubsection{Fixing the number of left-to-right minima} It is interesting to investigate more in details what happens when we fix the number of left-to-right minima in $\mathcal{A}_n$. Let $$G_{l}(z)=\sum_{n} \frac{|\mathcal{A}^{L}_{n,l}|}{n!}\cdot z^n$$ and $$G(y,z) = \sum_{l \geq 0} y^l G_l(z).$$ Thus $G = H(1,y,z)$ and, from (\ref{lefttu}), we can write that \begin{equation}\label{checifaccio} \frac{\partial ^2 G}{\partial z^2} = y E'(z) \cdot G, \end{equation} where $E'(z)=\left( \frac{1}{1-\sin(z)}\right)$. From (\ref{checifaccio}) we have a recursion for $G_l$. \begin{prop} \label{ecci} The family of generating functions $(G_l)_l$ satisfies \begin{equation}\label{recg} G_l(z) = \int \int G_{l-1}(z) \cdot E'(z) \end{equation} being $E'(z)=\left( \frac{1}{1-\sin(z)} \right)$ and $G_1(z)=\int E(z) = -\ln\big(1-\sin(z)\big)$. \end{prop} Unfortunately equation~(\ref{checifaccio}) does not give an explicit solution for $G$. Still, as we will see later, it can be used to explore the structure of the solution in a neighbourhood of the singularity $z=\pi/2$. Let us now focus on the exact computation of $G_l$. To do so, one can apply the result of Proposition~\ref{ecci} together with the fact that $E=E(z)$ satisfies $\int E^2= 2E-z-2$. Here we compute explicitly the generating functions $G_l$ for the first values of $l$, say $l=1,2,3$, enlightening the correspondance with the generating function for Euler numbers. 
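Before carrying out this computation in closed form, the recursion of Proposition~\ref{ecci} is easy to check numerically. The sketch below (Python with SymPy; an illustration only, with variable names of our own choosing) iterates the recursion, prints $n!\,[z^n]G_l(z)$ for $l=1,2,3$, and reproduces the corresponding columns of the table above.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
N = 11                                   # series truncation order
Ep = 1 / (1 - sp.sin(z))                 # E'(z)
G = -sp.log(1 - sp.sin(z))               # G_1(z) = \int E

for l in (1, 2, 3):
    poly = sp.series(G, z, 0, N).removeO()
    print(l, [sp.factorial(n) * poly.coeff(z, n) for n in range(1, N)])
    # G_{l+1} = \int\int G_l E'  (antiderivatives vanishing at z = 0)
    prod = sp.series(G * Ep, z, 0, N).removeO()
    G = sp.integrate(sp.integrate(prod, z), z)
\end{verbatim}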
If we define $$\int^{(i)} f = \stackrel{i \mathrm{-times}}{\overbrace{\int\int\dots\int} f},$$ for $l=1,2$ we have \begin{align}\nonumber G_1 =& \int E \\\nonumber G_2 =& \left(\int^{(2)} \left(\int E\right) E'\right) = \left(\int \left(E \int E \right) \right) -\int^{(2)} E^2 \\\nonumber =& \frac{1}{2}\cdot \left( \int E \right)^2 - \int^{}\left( 2E-z-2\right) \\\nonumber =& \frac{1}{2}\cdot \left( \int E \right)^2 -2 \left(\int E\right) + \frac{z^2}{2} + 2z, \\\nonumber \end{align} while, for $l=3$, we obtain: \begin{align}\nonumber G_3 =& \left(\int^{(2)} \frac{(\int E)^2}{2} E'\right) -2 \left(\int^{(2)} \left( \int E \right) E' \right) + \left(\int^{(2)} \left(\frac{z^2}{2} + 2z\right)E' \right)\\\nonumber =& \left( \int E \frac{(\int E)^2}{2} \right) - \left( \int^{(2)} E^2 \left(\int E\right) \right)-2\left[ \left(\int E \left( \int E \right) \right) -\int^{(2)} E^2 \right]\\\nonumber & + \left(\frac{z^2}{2}+2z \right)\left(\int E \right)+ \left(-2z-4 \right)\left(\int^{(2)} E \right)+ 3\left(\int^{(3)} E \right)\\\nonumber =& \frac{1}{6}\cdot \left( \int E \right)^3 -\left[ \left(\int (2E-z-2)\left(\int E \right) \right)-\left( \int^{(2)} (2E-z-2)E\right) \right]\\\nonumber & -2\left[ \frac{1}{2}\cdot \left(\int E \right)^2 -\int (2E-z-2)\right]+ \left(\frac{z^2}{2}+2z \right)\left(\int E \right)\\\nonumber & + \left(-2z-4 \right)\left(\int^{(2)} E \right)+ 3\left(\int^{(3)} E \right)\\\nonumber =& \frac{1}{6} \left( \int E\right)^3 -2\left( \int E\right)^2 +\left( 8+2z +\frac{z^2}{2}\right)\left( \int E\right)-2z^2-8z\\\nonumber & + (-2z -4) \left( \int^{(2)} E \right) +4\left( \int^{(3)}E \right). \end{align} Recalling that $$[z^n]\left(\int^{(i)} E(z) \right) = \frac{e_{n-i}}{n!},$$ the previous calculations express $|\mathcal{A}^{L}_{n,l}|$ ($l=1,2,3$) in terms of Euler numbers $e_n$. For values of $l$ greater than $3$ the computation of $G_l$ becomes more difficult. In this cases, we can still use the results of Proposition~\ref{ecci} to obtain asymptotic estimates of the coefficients $[z^n]G_l(z)$. Using standard methods of analytic combinatorics (see \cite{ancomb}), it is sufficient to know an approximation of the function $G_l$ near its dominant singularity to describe the behaviour of $[z^n]G_l(z)$ for $n \rightarrow \infty$. In this case, the idea is to iteratively recover an approximation for $G_{l+1}$ by integration of an approximation for $(G_l \cdot E')$. Near the dominant singularity $z=\pi/2$ we have \begin{equation}\label{seno} E'(z)=\frac{1}{1-\sin(z)}= \frac{2}{\left(\frac{\pi}{2}-z \right)^2} + \mathcal{O}\left(1\right) \end{equation} and, for every $A>0$, \begin{equation}\label{castrellina} G_1= \int E(z) =\ln\left(\frac{1}{1-\sin(z)} \right)= -2\ln\left(\frac{\pi}{2}-z\right)+\mathcal{O}\left(1\right) = \mathcal{O}\left( \left( \frac{\pi}{2} - z \right)^{-A} \right). \end{equation} Then, as a first approximation, one has $$(G_1 \cdot E')(z) = \mathcal{O}\left(\left(\frac{\pi}{2}-z\right)^{-2-A}\right),$$ which gives by Proposition~\ref{ecci} and Th.~VI.9 \cite{ancomb} (see case (i)) $$G_2(z) = \mathcal{O}\left(\left(\frac{\pi}{2}-z \right)^{-A} \right).$$ We remark that, by the mentioned theorem, we can obtain a singular approximation of $G_2$ by integrating, according to classical rules, the singular expansion of $(G_1\cdot E')$. Iterating the procedure one has that, independently on $l$, for every $A>0$ \begin{equation}\label{titti} G_l(z) = \mathcal{O}\left(\left(\frac{\pi}{2}-z \right)^{-A} \right). 
\end{equation} Applying Th.~VI.3 of \cite{ancomb} to (\ref{titti}) gives the following bound. \begin{prop} When $n$ is large, for every $A>0$ and independently on $l$, we have \begin{equation}\label{proba} \frac{|\mathcal{A}^{L}_{n,l}|}{n!}=[z^n]G_l(z)= \, \mathcal{O}\left( \left(\frac{2}{\pi}\right)^n\cdot n^{A-1} \right). \end{equation} \end{prop} Recalling that $\frac{|\mathcal{A}_{n}|}{n!} \sim \frac{4}{\pi} \left( \frac{2}{\pi} \right)^n$ (see (\ref{brioscia})), equation (\ref{proba}) gives a measure of how strong is the effect of fixing the number of left-to-right minima in Andr\'e permutations. \textbf{Structural properties of $G$ near the singularity.} To conclude our asymptotic analysis we go back to equation~(\ref{checifaccio}) to describe a structural property of the solution $G$. Indeed, treating $y$ as a constant, we can apply Th.~VII.9 of \cite{ancomb} finding that near the \emph{regular} singular point $z=\pi/2$ the desired solution $G$ can be expressed as \begin{equation}\nonumber G = a_{y}\cdot \left(\frac{\pi}{2}-z \right)^{\frac{1+\sqrt{1+8y}}{2}} A_{y}\left(z-\frac{\pi}{2}\right) +{b}_{y}\cdot \left(\frac{\pi}{2}-z \right)^{\frac{1-\sqrt{1+8y}}{2}} B_{y}\left(z-\frac{\pi}{2}\right), \end{equation} where $y$ could in principle appear in $a_{y},A_{y}(z),b_{y},B_{y}(z)$ and the functions $A_{y}(z), B_{y}(z)$ are analytic at $z=0$. It is interesting to note that, taking $a_{y}=0$ and $b_{y}=B_{y}=1$, one obtains $$G_{\alpha}=\left(\frac{\pi}{2}-z \right)^{\frac{1-\sqrt{1+8y}}{2}}$$ whose expansion at $y=0$ looks as \begin{align}\nonumber G_{\alpha}=& 1-2y \ln(\pi/2 - z) + y^2 \left(4 \ln(\pi/2 - z) + 2 \left[\ln(\pi/2 - z)\right]^2 \right) \\\nonumber & + y^3\left(-16\ln(\pi/2 - z) -8\left[\ln(\pi/2 - z)\right]^2 -\frac{4}{3}\left[\ln(\pi/2 - z) \right]^3 \right) + \cdots .\nonumber \end{align} Based on the approximation for $\int E$ given in (\ref{castrellina}), this reflects the asymptotic behaviour of the expressions for $G_1, G_2$ and $G_3$ which have been previously computed. This can be justified observing that $G_{\alpha}$ satisfies \begin{equation}\nonumber \frac{\partial ^2 G_{\alpha}}{\partial z^2} = y \cdot \frac{2}{(\pi/2 -z)^2} \cdot G_{\alpha} \end{equation} which is obtained by substituting in (\ref{checifaccio}), i.e. the defining equation for $G$, the term $E'(z)$ by $2/(\pi/2-z)^2$, the latter being the main part of the singular approximation (\ref{seno}). \end{document}
\begin{document}
\newcommand{\ms}[1]{\mbox{\scriptsize #1}}
\newcommand{\msi}[1]{\mbox{\scriptsize\textit{#1}}}
\newcommand{\smallfrac}[2]{\mbox{$\frac{#1}{#2}$}}
\newcommand{\ket}[1]{| {#1} \ra}
\newcommand{\bra}[1]{\la {#1} |}
\newcommand{\pfpx}[2]{\frac{\partial #1}{\partial #2}}
\newcommand{\dfdx}[2]{\frac{d #1}{d #2}}
\newcommand{\half}{\smallfrac{1}{2}}
\title{The Energy Cost of Controlling Mesoscopic Quantum Systems}
\author{Jordan M.~Horowitz}
\affiliation{Department of Physics, University of Massachusetts at Boston, Boston, MA 02125, USA}
\author{Kurt Jacobs}
\affiliation{Department of Physics, University of Massachusetts at Boston, Boston, MA 02125, USA}
\affiliation{U.S. Army Research Laboratory, Computational and Information Sciences Directorate, ATTN: CIH-N, Adelphi, Maryland 20783, USA}
\affiliation{Hearne Institute for Theoretical Physics, Louisiana State University, Baton Rouge, LA 70803, USA}
\begin{abstract}
We determine the minimum energy required to control the evolution of any mesoscopic quantum system in the presence of arbitrary Markovian noise processes. This result provides the mesoscopic equivalent of the fundamental cost of refrigeration, sets the minimum power consumption of mesoscopic devices that operate out of equilibrium, and allows one to calculate the efficiency of any control protocol, whether it be open-loop or feedback control. As examples we calculate the energy cost of maintaining a qubit in the ground state and the efficiency of resolved-sideband cooling of nano-mechanical resonators, and we discuss the energy cost of quantum information processing.
\end{abstract}
\pacs{03.67.-a, 03.65.Yz, 05.70.Ln, 05.40.Ca}
\maketitle
Recent advances in the fabrication and control of mesoscopic quantum devices~\cite{Palomaki13, Mamin13, Safavi14, Silverstone14, Leghtas15} have made their potential application in future technologies ever more promising~\cite{Kelly15, Matthews13, Komar13}. In such applications mesoscopic systems must be controlled to reduce the effects of environmental noise~\cite{WM10, Jacobs14}. Since reducing noise necessarily involves reducing the entropy of the controlled system, Landauer's principle suggests that there is an energetic cost, meaning that work must be supplied that can never be recovered. This energy cost is a fundamental issue in quantum control and is technologically important, as it quantifies both the minimum power consumption and the minimum heat dissipation that must be handled by mesoscopic devices. Here we show that it is possible to fully characterize, in a relatively simple way, the minimum power required for continuous control of any mesoscopic quantum system subjected to arbitrary Markovian noise~\footnote{While we restrict ourselves here to Markovian noise processes, we suspect that our results can be extended to non-Markovian open systems, and this is an interesting question for further investigation.}. There is a natural division of controlled systems into \textit{weakly coupled} and \textit{strongly coupled}, depending on how strong their interaction with the controller is.
For weakly-coupled systems -- which include most present-day mesoscopic systems~\footnote{Weakly-coupled systems include superconducting nano-electromechanical circuits~\cite{Palomaki13, Kelly15}, NV-centers in diamond~\cite{Mamin13}, cavity-QED~\cite{Komar13}, and photonic circuits~\cite{Silverstone14}.} -- the coupling does not appreciably change the system's energy levels. As a result, the control does not affect the noise processes perturbing the system, but only adds Hamiltonian terms to the dynamics that facilitate control. For strongly-coupled systems the coupling does modify the system's energy levels and, with that, the environmental noise. This means that the controlled system and controller cannot be treated as thermodynamically separate.
\emph{Preliminaries.---} The evolution of a mesoscopic system ${\mathcal S}$ weakly coupled to its surroundings is given by a linear differential equation for its density matrix $\rho$. Denoting the Hamiltonian of the system by $H_{\mathcal S}$, and by ${\mathcal D}_{\mathcal S}^i$, $i=1,\dots,N$, the linear super-operators that model the irreversible dynamics induced by the environmental noise processes, the equation of motion for ${\mathcal S}$ in the absence of any control mechanism is the Lindblad master equation $\dot\rho = -(i/\hbar)[H_{\mathcal S},\rho]+\sum_i{\mathcal D}_{\mathcal S}^i(\rho)\equiv{\mathcal L}_{\mathcal S}(\rho)$~\cite{Breuer07, Jacobs14}. We further assume each noise process has an invariant state $\pi^i$, given as ${\mathcal D}_{\mathcal S}^i(\pi^i)=0$. For example, noise from a thermal reservoir at temperature $T$ would have as a fixed point the Boltzmann density $\pi^{\rm eq} \propto e^{-H_{\mathcal S}/T}$ (with $k_{\rm B}=1$, assumed throughout). Thus, we can view the overall dynamics as a competition between noise processes, each trying to impose its own steady state onto the system. The net effect is that in the absence of control, ${\mathcal S}$ will relax to a noise-induced steady-state density matrix $\rho^{\rm ss}$, given as the solution of ${\mathcal L}_{\mathcal S}(\rho^{\rm ss})=0$. The goal of control is to maintain ${\mathcal S}$ in an arbitrary state $\rho^*\neq \rho^{\rm ss}$.
\begin{figure}
\caption{Diagram of energy flow in a control process: A system ${\mathcal S}$.}
\label{fig:cartoon}
\end{figure}
{\it Weakly-coupled control.---} Control is implemented by weakly coupling ${\mathcal S}$ to an auxiliary quantum system ${\mathcal A}$ immersed in a thermal bath at temperature $T$, as in Fig.~(\ref{fig:cartoon}), in such a way that the reduced steady state of ${\mathcal S}$ is $\rho^*$. The assumption that ${\mathcal S}$ and ${\mathcal A}$ are weakly coupled guarantees that the dynamics induced in ${\mathcal S}$ by its surroundings, given by $\{{\mathcal D}_{\mathcal S}^i\}$, is not changed by the coupling; yet, the control system can still affect ${\mathcal S}$'s evolution. Thus, the evolution of the joint density matrix $\tau$ of ${\mathcal S}\oplus{\mathcal A}$ can be modeled as
\begin{equation}\label{eq:master}
{\dot \tau} = (-i/\hbar)[{\mathcal H}(t),\tau]+\sum_i{\mathcal D}_{\mathcal S}^i(\tau)+{\mathcal D}_{\mathcal A}(\tau),
\end{equation}
in terms of the time-dependent joint Hamiltonian ${\mathcal H}(t)=H_{\mathcal S}+H_{\mathcal A}+V(t)$ with auxiliary Hamiltonian $H_{\mathcal A}$ and weak interaction $V(t)\ll H_{\mathcal S}, H_{\mathcal A}$, and the thermal-noise operator affecting the auxiliary, ${\mathcal D}_{\mathcal A}$.
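To make the preliminaries above concrete, the following minimal sketch (for illustration only; we set $\hbar=k_{\rm B}=1$ and choose arbitrary parameter values) integrates the uncontrolled master equation for a single qubit damped by one thermal reservoir and confirms that the noise-induced steady state solving ${\mathcal L}_{\mathcal S}(\rho^{\rm ss})=0$ is the Boltzmann density $\pi^{\rm eq}\propto e^{-H_{\mathcal S}/T}$.
\begin{verbatim}
import numpy as np

# qubit H_S = (E/2) sigma_z damped by a single bath at temperature T
# (illustrative values; hbar = k_B = 1)
E, T, gamma = 1.0, 0.5, 0.1
nT = 1.0 / np.expm1(E / T)                      # bath thermal occupation
sm = np.array([[0, 0], [1, 0]], dtype=complex)  # lowering operator |g><e|
H = 0.5 * E * np.diag([1.0, -1.0])

def D(L, rho):
    # Lindblad dissipator D[L](rho) = L rho L^+ - (1/2){L^+L, rho}
    LdL = L.conj().T @ L
    return L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)

def L_S(rho):
    # -i[H_S, rho] + thermal emission and absorption
    return (-1j * (H @ rho - rho @ H)
            + gamma * (nT + 1) * D(sm, rho)
            + gamma * nT * D(sm.conj().T, rho))

rho = np.eye(2, dtype=complex) / 2              # maximally mixed initial state
for _ in range(20000):                          # crude Euler relaxation to rho_ss
    rho = rho + 1e-2 * L_S(rho)
print(np.round(np.diag(rho).real, 4))           # -> approximately [0.1192 0.8808]
w = np.exp(-np.diag(H) / T)
print(np.round(w / w.sum(), 4))                 # Boltzmann populations: the same
\end{verbatim}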
It is important to note that while we perform our analysis by coupling the system to a mesoscopic auxiliary system, the results apply to any control method, since this scenario has measurement-based control as a special case~\cite{Jacobs14b, Jacobs14}. Our main result is the minimum power ${\dot W}$ that the controller must supply to control ${\mathcal S}$, which crucially depends only on ${\mathcal S}$'s surroundings and the target state $\rho^*$. We prove this result rigorously -- the proof is outlined below, and is detailed in the Supplemental material~\footnote{See Supplemental Material at XXXXX for further details of the proofs of Eqs.~(\ref{eq:Wmin}) and (\ref{eq:strong}), the nonadiabatic entropy production, and the definition of weak-coupling.} -- but it can be understood in terms of an intuitive picture. It is due to the fact that for any isothermal process the work done on a system is bounded by the relation $W\ge \Delta F$, where $\Delta F$ is the change in the nonequilibrium free energy $F(\rho)=E(\rho)-TS(\rho)$, with average energy $E(\rho)={\rm Tr}[\rho H_{{\mathcal S}}]$ and von Neumann entropy $S=-{\rm Tr}[\rho\ln\rho]$~\cite{Esposito11}. With this in mind, each noise source ${\mathcal D}^i_{\mathcal S}$ continuously pushes the state of ${\mathcal S}$ away from $\rho^*$, and in doing so changes its free energy, implying that the controller must supply a commensurate amount of work to restore this free energy. Specifically, in the controlled steady state, the noise perturbations are changing ${\mathcal S}$'s entropy at a rate ${\dot S}^i_{\mathcal S}(\rho^*) = - {\rm Tr}[{\mathcal D}^i_{\mathcal S}(\rho^*)\ln\rho^*]$ while pumping energy in at a rate ${\dot E}_{\mathcal S}^i(\rho^*)= {\rm Tr}[{\mathcal D}^i_{\mathcal S}(\rho^*) H_{\mathcal S}]$. To undo these perturbations the controller must continuously transfer this entropy and energy through ${\mathcal A}$, eventually dumping it in ${\mathcal A}$'s thermal reservoir at temperature $T$. We show that this requires a minimum work rate
\begin{align}\label{eq:Wmin}
{\dot W}_{\rm min} &= -\sum_i\dot{F}^i_{\mathcal S}(\rho^*) = \sum_i \left[ T{\dot S}^i_{\mathcal S}(\rho^*)-{\dot E}_{\mathcal S}^i(\rho^*) \right] \nonumber \\
&=-\sum_i {\rm Tr}[{\mathcal D}_{\mathcal S}^i(\rho^*)(T\ln \rho^* + H_{\mathcal S})],
\end{align}
where ${\dot F}^i_{{\mathcal S}}$ is the rate of change of the free energy of ${\mathcal S}$ due to the noise, evaluated at the temperature of the {\it auxiliary's} thermal reservoir. To summarize, the noise affects the free energy of ${\mathcal S}$, and the controller must undo this change in free energy, requiring work; the reference temperature is that of ${\mathcal A}$'s thermal reservoir, since the energy is ultimately dissipated there. This bound is for the energetics of the joint system, and therefore not a manifestation of the nonadiabatic entropy production~\cite{Yukawa:2001tf,Sagawa:2012vd,Gardas2015} for quantum nonequilibrium steady states (see~\cite{Note3}). We now explore the consequences of Eq.~(\ref{eq:Wmin}). First, notice that ${\dot W}_{\rm min}$ may be negative, meaning that we can extract energy while controlling the system. For example, when the system is coupled to a hot bath at $T_H$ and a cold one at $T_C$, our target state $\rho^*$ may coincide with the regime where ${\mathcal S}$ operates as a heat engine. However, for isothermal control in which ${\mathcal S}$ sees a single bath at temperature $T$, ${\dot W}_{\rm min}$ must be positive as required by the second law.
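As an elementary numerical check of Eq.~(\ref{eq:Wmin}) (again only an illustrative sketch with $\hbar=k_{\rm B}=1$ and arbitrary parameter values, anticipating the single-qubit example below), consider one qubit damped by a bath at temperature $T$ with a colder Gibbs state at $T_{\ms{c}}\ll E$ as the target: the resulting ${\dot W}_{\rm min}$ is strictly positive and is close to $\gamma n_T E\,(T/T_{\ms{c}}-1)$ with $n_T = 1/(e^{E/T}-1)$, the expression obtained below for ground-state cooling.
\begin{verbatim}
import numpy as np

# Eq. (2) for cooling one qubit: the only noise source is a bath at
# temperature T, and the target rho* is a Gibbs state at Tc < T.
# (illustrative parameter values; hbar = k_B = 1)
E, T, Tc, gamma = 1.0, 0.5, 0.05, 0.1
nT = 1.0 / np.expm1(E / T)
sm = np.array([[0, 0], [1, 0]], dtype=complex)   # lowering operator
H = 0.5 * E * np.diag([1.0, -1.0])

def D_thermal(rho):
    def d(L, r):
        LdL = L.conj().T @ L
        return L @ r @ L.conj().T - 0.5 * (LdL @ r + r @ LdL)
    return gamma * (nT + 1) * d(sm, rho) + gamma * nT * d(sm.conj().T, rho)

pe = 1.0 / (1.0 + np.exp(E / Tc))                # target excited population
rho_star = np.diag([pe, 1.0 - pe]).astype(complex)
ln_rho = np.diag(np.log([pe, 1.0 - pe]))         # rho* is diagonal

W_min = -np.trace(D_thermal(rho_star) @ (T * ln_rho + H)).real
print(W_min)                                     # positive, roughly 0.1409
print(gamma * nT * E * (T / Tc - 1.0))           # approximation for Tc << E
\end{verbatim}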
Another important scenario is that of maintaining ${\mathcal S}$ in a pure (zero-entropy) state. Since the derivative of entropy at zero is infinity, such control requires an infinite rate of work as reflected by the term $\ln \rho$ in Eq.~(\ref{eq:Wmin}). It is for the same reason that the power required for macroscopic refrigeration tends to infinity as the cold temperature tends to zero. Finally, our result also supplies the minimum work to push the system through a specified sequence of states $\rho^*(t)$, from $t=0$ to $\theta$, since the energy cost at any particular time depends only the system's state at that time: ${\dot W}_{\rm min} = -\int_0^\theta {\mathcal S}um {\dot F}^i[\rho^*(t)]\,dt$. Via this bound one can quantify the energetic efficiency of finite-time protocols such as shortcuts to adiabaticity \cite{Deffner2014}. Our analysis further reveals that the minimum work, Eq.~(\ref{eq:Wmin}), can be achieved when the auxiliary operates reversibly. This requires a separation of time-scales, where the thermal relaxation of the auxiliary is very fast allowing it to remain essentially always in equilibrium. Additionally, the auxiliary's Hamiltonian dynamics must be fast compared to the system dynamics in order to rapidly extract the noise. An explicit nonautonomous protocol that implements this time-scale separation is described in Fig.~(\ref{fig:optimal}), where the rapid auxiliary dynamics are exploited to complete a reversible control cycle in every infinitesimal moment of time. \begin{figure} \caption{Nonautonomous optimal control protocol: In a small interval of time $dt$, noise perturbs the state of ${\mathcal S} \label{fig:optimal} \end{figure} {\it Strongly-coupled control.---} When the auxiliary is coupled to ${\mathcal S}$ so strongly that it changes the energy levels of ${\mathcal S}$, it also changes the effect of the environment on ${\mathcal S}$ by altering the $\{{\mathcal D}_{\mathcal S}^i\}$. Because of this we can no longer bound the minimum work to control ${\mathcal S}$ solely in terms of the properties of ${\mathcal S}$, because the result will depend on the choice of joint Hamiltonian ${\mathcal H}$ through the interaction. First we observe that if we have access to any joint Hamiltonian ${\mathcal H}$, no work is required to sustain ${\mathcal S}$ in an arbitrary constant state $\rho^*$: We can always choose a fixed ${\mathcal H}$ so that the energy levels and eigenstates of ${\mathcal S}$ set $\rho^{\rm ss} = \rho^*$. Since ${\mathcal H}$ is time-independent, no work is required. Therefore the problem is well-motivated only when the interaction $V$ is restricted to a subset $\mathcal{V}$ of all interactions. We include in $\mathcal{V}$ all weak-coupling Hamiltonians, defined as those that do not change appreciably the eigenstates or eigenvalues of the system. This allows any control of the system that is slow compared to its dynamics, and unlimited control of the auxiliary~\cite{Note3}. We show that the minimum work to control strongly coupled systems is \begin{equation}\label{eq:strong} {\dot W}_{\rm min}=\min_{{\mathcal H},\tau} \left[-{\mathcal S}um_i {\rm Tr}[{\mathcal D}^i_{\mathcal S}(\tau)(T\ln \tau+ {\mathcal H})]\right], \end{equation} where the minimum is taken over all ${\mathcal H}\in{\mathcal V}$ and $\tau$ such that ${\rm Tr}_{\mathcal A}[\tau]=\rho^*$. The proof is an extension of that for Eq.~(\ref{eq:Wmin})~\cite{Note3}. 
We show this bound is tight by demonstrating a protocol that saturates it, akin to the weak-coupling protocol in Fig.~(\ref{fig:optimal}). Suppose we know the Hamiltonian and density matrix that give the minimum in Eq.~(\ref{eq:strong}): call them ${\mathcal H}_{\rm m}$ and $\tau_{\rm m}$. Then in a small time interval $dt$ the system's noise perturbs the joint system, causing an evolution $\tau_{\rm m}\to\tau^\prime$. To undo these perturbations, we couple the joint system to ${\mathcal A}$'s thermal reservoir and rapidly and reversibly raise ${\mathcal A}$'s energy levels, so that the joint state becomes $\sigma\otimes|0\rangle\langle0|$, for some system state $\sigma$. The auxiliary is then uncoupled from the bath, and the state of the system is swapped into the auxiliary: $|0\rangle\langle0|\otimes\sigma$. This will usually require a weak interaction, which is allowed since it is only strong changes to $V$ that are constrained. Now that all nontrivial structure of the state is contained in the auxiliary, we can use our ability to arbitrarily manipulate the auxiliary Hamiltonian to isothermally and reversibly return the joint state to the initial state $\tau_{\rm m}$. Since the entire process that takes the joint state from $\tau^\prime$ to $\tau_{\rm m}$ is reversible, the work equals the free energy difference $\Delta F = F(\tau_{\rm m}) - F(\tau^\prime)$ predicted in Eq.~(\ref{eq:strong}). Note that here we have restricted ourselves to a constant or slowly varying $\rho^*$, and thus ${\mathcal H}_{\rm m}$, to ensure that the system evolution is Markovian.
\textit{Resolved-sideband cooling.---} Resolved-sideband cooling is the current state-of-the-art in cooling mechanical quantum resonators~\cite{Wilson-Rae07, Marquardt07, Schliesser09, Tian09, Wang11} and is an example of coherent feedback control~\cite{Hamerly12, Jacobs15}. The auxiliary system is an optical or superconducting oscillator with a frequency, $\Omega$, sufficiently high that it sits in its ground state at the ambient temperature $T$. Cooling is accomplished by coupling the oscillators linearly and modulating this coupling at the frequency difference. In the weak-coupling (rotating-wave) approximation the interaction Hamiltonian is $V = G + G^\dagger$ with $G = \hbar g a^\dag b e^{-i(\Omega-\omega)t}$; $\omega$ is the mechanical frequency, and $a$ and $b$ are the respective annihilation operators for the oscillators. This driven interaction mediates the exchange of quanta between the resonators, with the energy difference per quantum supplied as work $\Delta w=\hbar(\Omega-\omega)$. To achieve cooling the auxiliary must dump energy into the bath with sufficient speed. Due to the linear dynamics, the cooled steady state of the mechanical oscillator under sideband cooling is a Boltzmann-like equilibrium state at an effective temperature $T_{\rm eff}<T$. While $T_{\rm eff}$ is not a true temperature, we will see that it is useful. The rate at which the bath increases the oscillator entropy in the cooled state is ${\dot S}_{{\mathcal S}} =\dot Q_{{\mathcal S}} / T_{\rm eff}$, with ${\dot Q}_{{\mathcal S}}$ the heat flow from the bath into the oscillator and, equivalently, the energy flow to the auxiliary. Thus, rather strikingly, the mesoscopic oscillator has an entropy production rate identical to that of a thermal bath at temperature $T_{\rm eff}$. Because of this the cooling efficiency has precisely the form of that of a macroscopic refrigerator.
Defining the coefficient of performance in the usual way as $\eta^{\rm COP}={\dot Q}_{{\mathcal S}}/{\dot W}$, where ${\dot W}$ is the actual power consumed by the fridge~\cite{Callen}, the minimum power can be written as ${\dot W}_{\rm min} = \dot{Q}_{{\mathcal S}} \eta^{\rm COP}_{\rm ideal}$, with efficiency \begin{equation}\label{eq:sideEff} \epsilon = {\dot W}_{\rm min} /{\dot W} = \eta^{\rm COP} /\eta^{\rm COP}_{\rm ideal}. \end{equation} Here $\eta^{\rm COP}_{\rm ideal}=T/T_{\rm eff}-1$ is the ideal Carnot coefficient of performance. In Fig.~(\ref{fig:eff}), we plot the effective temperature and efficiency achieved by sideband cooling as a function of the interaction rate $g$, using parameters from the recent experiment in~\cite{Teufel11}. We see that stronger coupling gives increased efficiency and a colder temperature, and that high damping is only effective when the coupling transfers the entropy with sufficient speed. \begin{figure} \caption{Plot of the efficiency $\epsilon$ of resolved-sideband cooling of a mechanical oscillator in the weak-coupling regime, as a function of the interaction rate $g$ for three values of the auxiliary oscillator's damping rate, $\gamma^\prime/2\pi = 10^5~{\rm Hz} \label{fig:eff} \end{figure} \textit{Cooling a single qubit.---} The master equation describing a weakly-damped qubit with energy gap $E$ in contact with a bath at temperature $T$ can be found in~\cite{Breuer07, Jacobs14}. We wish to maintain the qubit at a temperature $T_{\ms{c}} < T$. If $T_{\ms{c}}/E \ll 1$, so that the qubit is close to its ground state, the required power from Eq.~(\ref{eq:Wmin}) is simply \begin{equation} {\dot W}_{\rm min} \approx \gamma n_T E \left( T/T_{\ms{c}} - 1 \right) , \label{qbcool} \end{equation} with $\gamma$ the qubit's damping rate and $n_T = 1/(e^{E/T} - 1)$. The power goes to infinity as $T_{\ms{c}}\rightarrow 0$ as expected. The full expression for arbitrary $T_{\ms{c}}$ is obtained by replacing $n_T$ by the $(z-w)/[(z-1)(w+1)]$ where $z=\exp(E/T)$ and $w=\exp(E/T_{\ms{c}})$. Coupling the qubit strongly to an auxiliary qubit with an energy gap $\mathcal{E} > E$ provides a simple example in which a strong interaction reduces the power requirements for ground-state cooling. The interaction allows us to effectively increase the energy gap of the first qubit, increasing the equilibrium population of the ground state and thus reducing the effort required to preserve that state. Let the auxiliary gap be $\mathcal{E} \gg kT$ so that it effectively sits in its ground state $|0\rangle$, and take the Hamiltonian of the two qubits as $H = E {\mathcal S}igma_z^{(1)}/2 + H_{\ms{I}} + \mathcal{E} {\mathcal S}igma_z^{(2)}/2$ with the interaction $H_{\ms{I}} = g {\mathcal S}igma_z^{(1)} {\mathcal S}igma_z^{(2)}$/2. Since the auxiliary is in state $|0\rangle$, if we set $g = -\varepsilon$, and assuming that $\mathcal{E} > \varepsilon > E$, then the two lowest energy states of the joint system are $|0\rangle |0\rangle$ and $|1\rangle |0\rangle$, where $|i\rangle|j\rangle$ denotes system state $|i\rangle$ and auxiliary state $|j\rangle$. These two states have an energy gap of $E + \varepsilon$, so the interaction effectively increases the energy gap of the first qubit by $\varepsilon$. The minimum power consumption is then given by replacing $E$ with $E+\varepsilon$ in Eq.~(\ref{qbcool}). \textit{Devices that operate out-of-equilibrium.---} A quantum computer is one such device. 
While quantum logic gates are unitary and thus require no energy, a quantum computer consumes power because the constituent qubits are subject to relaxation (errors) from environmental noise. The error-correction process continually introduces new qubits prepared in near-pure states to combat these errors~\cite{Gottesman09, Aliferis06, Knill05}. We can estimate the energy consumption per qubit for a quantum computer by using a simple error model, and averaging the minimum energy dissipation for a single qubit over all pure states. Since fault-tolerant computation requires that the qubits are refreshed while the errors are still small, the analysis we have performed above for continuous-time control is appropriate. However, we restrict ourselves to logic gates that are slow compared to the qubit frequency to ensure the damping is Markovian~\cite{Alicki06}. A typical error model involves thermal damping at (effectively) zero temperature at rate $\gamma$, and depolarizing noise at rate $\beta$, for which the master equation is $\dot{\rho} = -(\gamma/2) (\sigma^\dagger \sigma \rho + \rho \sigma^\dagger \sigma - 2 \sigma \rho \sigma^\dagger ) - (\beta/4)\sum_j [\sigma_j, [\sigma_j, \rho]]$ with $j=x,y,z$. The change in free energy averaged over all pure states is $\Delta F \approx (p_{\beta} \ln p_{\beta} + p_{\gamma} \ln p_{\gamma})kT - p_{\gamma} E$, where $p_{\beta} = \beta \tau \ll 1$ and $p_{\gamma} = \gamma \tau/2 \ll 1$ are the error probabilities due to the depolarizing and thermal damping, respectively, and $E$ is the energy gap of the qubit. The time $\tau$ is the duration of a single lowest-level fault-tolerant gate, which includes the injection of the auxiliary qubits used for error-correction and/or teleportation operations, both of which refresh the working qubits~\cite{Gottesman09}. The minimum energy consumption of a computation is therefore $M \Delta F$, where $M$ is the total number of qubits injected during the computation. Given the above form of $\Delta F$, we can conclude that if quantum computers run with $kT \ll E$ as presently envisaged, the minimum energy cost will be dominated by the loss of the qubits' internal energy to the bath.
{\it Outline of proofs of Eqs.~(\ref{eq:Wmin}) and (\ref{eq:strong}).---} The key ingredient is a second-law-like inequality for the entropy production of an open quantum system modeled with a Lindblad master equation, which follows from the monotonicity of the quantum relative entropy under Markovian noise~\cite{Spohn1978, Alicki04, Sagawa08}. If we define $\Sigma_{\mathcal S}^{i} = - {\rm Tr}[{\mathcal D}^i_{\mathcal S}(\tau)(\ln \tau-\ln \pi^i)]$ and $\Sigma_{\mathcal{A}} = - {\rm Tr}[{\mathcal D}_{\mathcal A}(\tau)(\ln \tau-\ln \pi^{\rm eq}_{\mathcal A})]$, then $\Sigma = \sum_i \Sigma_{\mathcal S}^i + \Sigma_{\mathcal{A}}$ gives the total entropy production of the joint system. Further, $\Sigma_{\mathcal{A}}$ and the $\Sigma_{\mathcal S}^i$ are, up to a sign, time derivatives of relative entropies under Markovian noise processes. Since such derivatives are non-positive, each of these terms is non-negative and thus $\Sigma \geq 0$. If we drop all the entropy production due to ${\mathcal A}$ we obtain $\Sigma \ge \sum_i \Sigma_{\mathcal S}^i \ge 0$. We now trace over ${\mathcal A}$ because we want a bound purely in terms of ${\mathcal S}$.
This operation decreases $\Sigma$ due to the monotonicity of the relative entropy under partial trace~\cite{Sagawa08}, giving us $\Sigma \ge - \sum_i {\rm Tr}[{\mathcal D}^i_{\mathcal S}(\rho^*)(\ln \rho^*-\ln \pi^i)] \ge 0$. We next note that in the steady state $\Sigma = -\sum_i {\rm Tr}[{\mathcal D}^i_{\mathcal S}(\rho^*)\ln\pi^i]-{\dot Q}_{\mathcal A}/T$, in terms of the heat flow ${\dot Q}_{\mathcal A}=-{\rm Tr}[{\mathcal D}_{\mathcal A}(\tau)\ln\pi^{\rm eq}_{\mathcal A}]$ out of ${\mathcal A}$'s reservoir. The relation in Eq.~(\ref{eq:Wmin}) then follows from energy conservation in the steady state, $-{\dot Q}_{\mathcal A}={\dot W}+\sum_i {\dot E}^i_{\mathcal S}$. To obtain Eq.~(\ref{eq:strong}) the steps are the same as those above, except that we minimize the right-hand side of $\Sigma \ge \sum_i \Sigma_{\mathcal S}^i$ over the interaction $V$, and then skip the step in which we trace over ${\mathcal A}$~\cite{Note3}. \textit{Acknowledgments:} JH and KJ were partially supported by the ARO MURI grant W911NF-11-1-0268, and KJ by the NSF project PHY-1212413.
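The inequality $\Sigma \ge 0$ used in the proof outline above ultimately rests on the monotonicity of the quantum relative entropy under Markovian (completely positive, trace-preserving) dynamics. The following Python sketch is a numerical sanity check of that monotonicity for a qubit relaxing under a generalized amplitude-damping channel; the channel parameters and initial state are arbitrary assumptions and the snippet is purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def rel_entropy(rho, sigma):
    # Quantum relative entropy S(rho||sigma) = Tr[rho (ln rho - ln sigma)].
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def thermal_channel(rho, p=0.8, g=0.3):
    # Generalized amplitude damping; its fixed point is diag(p, 1-p).
    K = [np.sqrt(p)     * np.array([[1, 0], [0, np.sqrt(1 - g)]]),
         np.sqrt(p)     * np.array([[0, np.sqrt(g)], [0, 0]]),
         np.sqrt(1 - p) * np.array([[np.sqrt(1 - g), 0], [0, 1]]),
         np.sqrt(1 - p) * np.array([[0, 0], [np.sqrt(g), 0]])]
    return sum(k @ rho @ k.conj().T for k in K)

pi  = np.diag([0.8, 0.2])                    # fixed point of the channel
rho = np.array([[0.3, 0.25], [0.25, 0.7]])   # arbitrary full-rank initial state

prev = np.inf
for step in range(20):
    d = rel_entropy(rho, pi)
    assert d <= prev + 1e-12, "monotonicity violated"
    prev, rho = d, thermal_channel(rho)
print("relative entropy decreased monotonically to %.3e" % prev)
\end{verbatim}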
\begin{thebibliography}{36}
\bibitem{Palomaki13} T.~A. Palomaki, J.~W. Harlow, J.~D. Teufel, R.~W. Simmonds, and K.~W. Lehnert, Nature \textbf{495}, 210 (2013).
\bibitem{Mamin13} H.~J. Mamin, M. Kim, M.~H. Sherwood, C.~T. Rettner, K. Ohno, D.~D. Awschalom, and D. Rugar, Science \textbf{339}, 557 (2013).
\bibitem{Safavi14} A.~H. Safavi-Naeini, J.~T. Hill, S. Meenehan, J. Chan, S. Gr\"oblacher, and O. Painter, Phys. Rev. Lett. \textbf{112}, 153603 (2014).
\bibitem{Silverstone14} J.~W. Silverstone, D. Bonneau, K. Ohira, N. Suzuki, H. Yoshida, N. Iizuka, M. Ezaki, C.~M. Natarajan, M.~G. Tanner, R.~H. Hadfield, V. Zwiller, G.~D. Marshall, J.~G. Rarity, J.~L. O'Brien, and M.~G. Thompson, Nature Photonics \textbf{8}, 104 (2014).
\bibitem{Leghtas15} Z. Leghtas, S. Touzard, I.~M. Pop, A. Kou, B. Vlastakis, A. Petrenko, K.~M. Sliwa, A. Narla, S. Shankar, M.~J. Hatridge, M. Reagor, L. Frunzio, R.~J. Schoelkopf, M. Mirrahimi, and M.~H. Devoret, Science \textbf{347}, 853 (2015).
\bibitem{Kelly15} J. Kelly \emph{et al.}, Nature \textbf{519}, 66 (2015).
\bibitem{Matthews13} J.~C.~F. Matthews, X.-Q. Zhou, H. Cable, P.~J. Shadbolt, D.~J. Saunders, G.~A. Durkin, G.~J. Pryde, and J.~L. O'Brien, ``Practical quantum metrology,'' arXiv:1307.4673 (2013).
\bibitem{Komar13} P. K\'{o}m\'{a}r, E.~M. Kessler, M. Bishof, L. Jiang, A.~S. S{\o}rensen, J. Ye, and M.~D. Lukin, ``A quantum network of clocks,'' arXiv:1310.6045 (2013).
\bibitem{WM10} H.~M. Wiseman and G.~J. Milburn, \emph{Quantum Measurement and Control} (Cambridge University Press, Cambridge, 2010).
\bibitem{Jacobs14} K. Jacobs, \emph{Quantum Measurement Theory and its Applications} (Cambridge University Press, Cambridge, 2014).
\bibitem{Note1} While we restrict ourselves here to Markovian noise processes, we suspect that our results can be extended to non-Markovian open systems, and this is an interesting question for further investigation.
\bibitem{Note2} Weakly-coupled systems include superconducting nano-electromechanical circuits~\cite{Palomaki13, Kelly15}, NV-centers in diamond~\cite{Mamin13}, cavity-QED~\cite{Komar13}, and photonic circuits~\cite{Silverstone14}.
\bibitem{Breuer07} H.-P. Breuer and F. Petruccione, \emph{The Theory of Open Quantum Systems} (Oxford University Press, Oxford, 2007).
\bibitem{Jacobs14b} K. Jacobs, X. Wang, and H.~M. Wiseman, New J. Phys. \textbf{16}, 073036 (2014).
\bibitem{Note3} See Supplemental Material at XXXXX for further details of the proofs of Eqs.~(\ref{eq:Wmin}) and (\ref{eq:strong}), the nonadiabatic entropy production, and the definition of weak-coupling.
\bibitem{Esposito11} M. Esposito and C. Van den Broeck, EPL (Europhysics Letters) \textbf{95}, 40004 (2011).
\bibitem{Yukawa:2001tf} S. Yukawa, ``The Second Law of Steady State Thermodynamics for Nonequilibrium Quantum Dynamics,'' arXiv:0108421v2 (2001).
\bibitem{Sagawa:2012vd} T. Sagawa, in \emph{Lectures on Quantum Computing, Thermodynamics and Statistical Physics}, Kinki University Series on Quantum Computing, Vol.~8, edited by M. Nakahara (World Scientific, New Jersey, 2013).
\bibitem{Gardas2015} B. Gardas and S. Deffner, ``Thermodynamic universality of quantum Carnot engines,'' arXiv:1503.03455 (2015).
\bibitem{Deffner2014} S. Deffner, C. Jarzynski, and A. del Campo, Phys. Rev. X \textbf{4}, 021013 (2014).
\bibitem{Wilson-Rae07} I. Wilson-Rae, N. Nooshi, W. Zwerger, and T.~J. Kippenberg, Phys. Rev. Lett. \textbf{99}, 093901 (2007).
\bibitem{Marquardt07} F. Marquardt, J.~P. Chen, A.~A. Clerk, and S.~M. Girvin, Phys. Rev. Lett. \textbf{99}, 093902 (2007).
\bibitem{Schliesser09} A. Schliesser, O. Arcizet, R. Rivi\`{e}re, G. Anetsberger, and T.~J. Kippenberg, Nature Phys. \textbf{5}, 509 (2009).
\bibitem{Tian09} L. Tian, Phys. Rev. B \textbf{79}, 193407 (2009).
\bibitem{Wang11} X. Wang, S. Vinjanampathy, F.~W. Strauch, and K. Jacobs, Phys. Rev. Lett. \textbf{107}, 177204 (2011).
\bibitem{Hamerly12} R. Hamerly and H. Mabuchi, Phys. Rev. Lett. \textbf{109}, 173602 (2012).
\bibitem{Jacobs15} K. Jacobs, H.~I. Nurdin, F.~W. Strauch, and M. James, Phys. Rev. A \textbf{91}, 043812 (2015).
\bibitem{Callen} H.~B. Callen, \emph{Thermodynamics and an Introduction to Thermostatistics}, 2nd ed. (John Wiley and Sons, New York, 1985).
\bibitem{Teufel11} J.~D. Teufel, D. Li, M.~S. Allman, K. Cicak, A.~J. Sirois, J.~D. Whittaker, and R.~W. Simmonds, Nature \textbf{471}, 204 (2011).
\bibitem{Gottesman09} D. Gottesman, ``An introduction to quantum error correction and fault-tolerant quantum computation,'' arXiv:0904.2557 (2009).
\bibitem{Aliferis06} P. Aliferis, D. Gottesman, and J. Preskill, Quant. Inf. Comput. \textbf{6}, 97 (2006).
\bibitem{Knill05} E. Knill, Nature \textbf{434}, 39 (2005).
\bibitem{Alicki06} R. Alicki, D.~A. Lidar, and P. Zanardi, Phys. Rev. A \textbf{73}, 052311 (2006).
\bibitem{Spohn1978} H. Spohn and J.~L. Lebowitz, in \emph{Advances in Chemical Physics: For Ilya Prigogine}, Vol.~38, edited by S.~A. Rice (John Wiley \& Sons, Hoboken, NJ, 1978).
\bibitem{Alicki04} R. Alicki, M. Horodecki, P. Horodecki, and R. Horodecki, Open Sys. \& Information Dyn. \textbf{11}, 205 (2004).
\bibitem{Sagawa08} T. Sagawa and M. Ueda, Phys. Rev. Lett. \textbf{100}, 080403 (2008).
\end{thebibliography} \end{document}
\begin{document} \title{Trading causal order for locality} \author{Ravi Kunjwal} \affiliation{Centre for Quantum Information and Communication (QuIC), Ecole polytechnique de Bruxelles, CP 165, Universit\'e libre de Bruxelles, 1050 Brussels, Belgium} \author{\"Amin Baumeler} \affiliation{Institute for Quantum Optics and Quantum Information (IQOQI-Vienna), Austrian Academy of Sciences, 1090 Vienna, Austria} \affiliation{Faculty of Physics, University of Vienna, 1090 Vienna, Austria} \affiliation{Facolt\`a indipendente di Gandria, 6978 Gandria, Switzerland} \date{\today} \begin{abstract} \noindent Quantum theory admits ensembles of quantum nonlocality without entanglement (QNLWE). These ensembles consist of seemingly classical states (they are perfectly distinguishable and non-entangled) that cannot be perfectly discriminated with local operations and classical communication (LOCC). Here, we analyze QNLWE from a causal perspective, and show how to perfectly discriminate some of these ensembles using local operations and {\em classical communication without definite causal order.} Specifically, three parties with access to an instance of indefinite causal order---the AF/BW process---can perfectly discriminate the states in a QNLWE ensemble---the SHIFT ensemble---with {\em local\/} operations. Hence, this type of quantum nonlocality disappears at the expense of definite causal order. Moreover, we show how multipartite generalizations of the AF/BW process are transformed into multiqubit ensembles that exhibit QNLWE. Such ensembles are of independent interest for cryptographic protocols and for the study of separable quantum operations unachievable with LOCC. \end{abstract} \maketitle \textit{Introduction.---}The famously counter-intuitive nature of quantum theory owes much to the phenomenon of entanglement which forces {\em ``its entire departure from classical lines of thought''\/} \cite{schrodinger1935c}. Perhaps the deepest consequence of entanglement is its role in revealing the tension between quantum theory and locality which is central to Bell's theorem~\cite{bell1964}. This tension, however, does not stop at entanglement and Bell's theorem: It persists in a different form even without entanglement, as captured by the phenomenon of quantum nonlocality without entanglement~(QNLWE)~\cite{bennett1999}. At the heart of quantum nonlocality---with or without entanglement---is the interplay of causation and correlation \cite{wood2015,WSS20}.\footnote{Note that the notion of locality at play in Bell's theorem is local causality \cite{wiseman2014} while in QNLWE, the notion of locality at play is the locality of operations \cite{bennett1999}.} To demonstrate quantum nonlocality without entanglement, Bennett~{\it et al.}~\cite{bennett1999} presented {\em locally\/} imperfectly discriminable ensembles of mutually orthogonal product quantum states,~{\it e.g.,\/}~the SHIFT ensemble,~$\{\ket{000},\ket{111},\allowbreak\ket{{+}01},\ket{{-}01},\ket{1{+}0},\ket{1{-}0},\ket{01{+}},\allowbreak\ket{01{-}}\}$. Although states in such an ensemble can be prepared locally, parties sharing an unknown state from the ensemble cannot perfectly identify the state with local operations and classical communication~(LOCC). The classical communication in LOCC is implicitly assumed to respect a {\em definite causal order:} In each round, the direction of communication is determined from all past data. A~necessary consequence of this constraint is that at least one party must initiate the communication. 
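As a quick check of the two features of the SHIFT ensemble stressed above, namely that its states are products of single-qubit states and are nevertheless mutually orthogonal, the short Python/NumPy sketch below builds the eight three-qubit vectors and verifies that they form an orthonormal basis; it is illustrative only.
\begin{verbatim}
import numpy as np

ket = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
ket['+'] = (ket['0'] + ket['1']) / np.sqrt(2)
ket['-'] = (ket['0'] - ket['1']) / np.sqrt(2)

def state(label):
    # Tensor product of single-qubit states, e.g. '+01' -> |+>|0>|1>.
    v = np.array([1.0])
    for c in label:
        v = np.kron(v, ket[c])
    return v

SHIFT = ['000', '111', '+01', '-01', '1+0', '1-0', '01+', '01-']
gram = np.array([[state(a) @ state(b) for b in SHIFT] for a in SHIFT])
assert np.allclose(gram, np.eye(8))
print("SHIFT: 8 mutually orthogonal three-qubit product states")
\end{verbatim}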
By contrast, in the case of Bell nonlocality, all communication is excluded by the requirement of spacelike separation. The background assumption of definite causal order, however, is common to both types of nonlocality. What if we drop the assumption of a definite causal order and regard it as a physical quantity sensitive to quantum indefiniteness~\cite{hardy2005}? This possibility has attracted much interest in recent years, {\it e.g.,\/}~as in the {\em quantum switch\/}~\cite{chiribella2013} achievable through indefinite wires connecting quantum gates~\cite{colnaghi2012} or through indefinite spacetime geometries formed from matter in a superposition of locations~\cite{zych2017}. Oreshkov, Costa, and Brukner~\cite{oreshkov2012} show that if, without further assumptions on causal connections, one insists that parties locally cannot detect any deviation from standard quantum theory, then indefinite causal order arises naturally: Their process-matrix framework encompasses the quantum switch~\cite{og16,araujo2015}, and also exhibits {\em noncausal correlations,\/}~{\it i.e.,\/}~correlations unattainable under a global causal order among the parties (see also Refs.~\cite{baumeler2014ieee,branciard2015,abbott2016}). Moreover, they show that the exotic causal possibilities that arise between two parties disappear in the classical limit. For three parties or more, however, {\em logically consistent classical processes\/} that create noncausal correlations exist~\cite{baumeler2014}. For example, the deterministic Araújo-Feix/Baumeler-Wolf (AF/BW) process~\cite{af,baumeler2016} exchanges bits among three parties, Alice, Bob, and Charlie, in the following way. Each party receives a~bit~\mbox{$a:=(y\oplus 1)z$},~\mbox{$b:=(z\oplus 1)x$},~\mbox{$c:=(x\oplus 1)y$} from the process and {\em thereafter\/} provides a bit of their choice~$x,y,z$ to the process. This resource allows every pair of parties to communicate to the third ({\it e.g.,\/}~Alice receives~$a$ which non-trivially depends on~$y,z$ of Bob and Charlie) in a single round: {\em Each party acts in the causal future of the other two.\/} The possibility of indefinite causal order---in particular, the AF/BW process---raises the following natural question: What happens to the tension between quantum theory and locality once the assumption of definite causal order is dropped? \textit{Results.---}In this Letter, we show how one can trade causal order for the locality of operations in perfectly discriminating QNLWE ensembles. The tension between quantum theory and locality suggested by QNLWE thus disappears in the absence of a definite causal order. Specifically, {\em local quantum\/} operations assisted with {\em classical processes\/} can allow the parties to {\em perfectly discriminate ensembles of quantum nonlocality without entanglement:\/} Three parties communicating through the classical AF/BW process can discriminate the SHIFT ensemble. In fact, this process allows the parties to {\em measure\/} quantum systems in the SHIFT basis. Conversely, we show that such a measurement simulates the classical channel underlying the AF/BW process. We use the insights from these protocols to show how any Boolean~$n$-party classical process without global past can be turned into an~$n$-qubit ensemble of states that exhibits quantum nonlocality without entanglement. These results establish an {\em operational link\/} between QNLWE and classical processes without causal order (see also Ref.~\cite{baumeler2022} for a suggested link with Bell nonlocality). 
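Because the AF/BW process is specified by three Boolean formulas, its two key properties, namely logical consistency (a unique fixed point for every choice of local strategies) and the fact that every party can receive a signal from the others, can be checked by brute force. The following Python sketch does exactly that; it is an illustrative check, not part of the original derivations.
\begin{verbatim}
from itertools import product

def afbw(x, y, z):
    # Bit received by Alice, Bob, Charlie given the bits x, y, z they supply.
    return ((y ^ 1) & z, (z ^ 1) & x, (x ^ 1) & y)

# The four local strategies {0,1} -> {0,1}: constants, identity, negation.
FUNCS = [lambda a: 0, lambda a: 1, lambda a: a, lambda a: a ^ 1]

# Logical consistency: for every triple of strategies x=f(a), y=g(b), z=h(c),
# the loop (x,y,z) -> (a,b,c) -> (x,y,z) has exactly one fixed point.
for f, g, h in product(FUNCS, repeat=3):
    fixed = [(x, y, z) for x, y, z in product((0, 1), repeat=3)
             if (f(afbw(x, y, z)[0]),
                 g(afbw(x, y, z)[1]),
                 h(afbw(x, y, z)[2])) == (x, y, z)]
    assert len(fixed) == 1

# Signalling: flipping y or z can change a (and cyclically for b and c),
# so each party acts in the causal future of the other two.
assert any(afbw(x, y, z)[0] != afbw(x, y ^ 1, z)[0] or
           afbw(x, y, z)[0] != afbw(x, y, z ^ 1)[0]
           for x, y, z in product((0, 1), repeat=3))
print("AF/BW: logically consistent, and every party can be signalled to")
\end{verbatim}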
\textit{SHIFT-basis measurement from AF/BW process.---}The parties Alice, Bob, and Charlie hold a quantum system in the three-qubit state~$\ket\psi$. The following protocol implements a measurement of~$\ket\psi$ in the SHIFT basis~(see~Fig.~\ref{fig:measurementprotocol}). \begin{figure} \caption{Schematic of protocol to implement the SHIFT-basis measurement on an arbitrary quantum state~$\ket\psi$ with local operations and classical ``acausal'' communication. Thick wires represent classical bits, normal wires qubits.} \label{fig:measurementprotocol} \end{figure} First, each party receives a~classical bit~$a,b,c$ from the process. Then, each party applies a Hadamard transformation on their share of~$\ket\psi$ if the received bit is~$1$, {\it i.e.,\/}~the parties apply~\mbox{$H^{(a,b,c)}:=H^a\otimes H^b\otimes H^c$}. Now, they measure the quantum system in the computational basis, obtain the post-measurement state~$\ket{xyz}$, and forward~$x,y,z$ to the AF/BW process. Finally, the parties apply~$H^{(a,b,c)}$ to the post-measurement state. By this, the final state of the quantum system is \begin{align} \sum_{x,y,z} \left|\bra{xyz}H^{(a,b,c)}\ket\psi\right|^2 H^{(a,b,c)} \ket{xyz} \bra{xyz} H^{(a,b,c)} \,. \label{eq:final} \end{align} Note that the AF/BW process determines the values of~$a,b,c$ as a function of~$x,y,z$. First, we show that if~$\ket\psi\in\text{SHIFT}$, then this protocol returns the state~$\ket\psi$ with certainty. If~$\ket\psi=\ket{000}$, then the probability \begin{align} |\bra{xyz}H^{((y\oplus 1)z,(z\oplus 1)x,(x\oplus 1)y)}\ket{000}|^2 \end{align} is one for~$x=y=z=0$, and zero otherwise: The final state is~$\ket{000}$. Instead, if~$\ket\psi=\ket{01+}$, then the only contribution arises for~$x=z=0$ and~$y=1$~$(|\bra{010}H^{(0,0,1)}\ket{01+}|^2 = 1)$, and the final state is~$H^{(0,0,1)}\ket{010}=\ket{01{+}}$. By symmetry, the same follows for all SHIFT-ensemble states. In other words, for each SHIFT state there exists a {\em unique and distinct\/} triple~$x,y,z$ that contributes to the sum; namely,~$x,y,z$ encode the qubits of the SHIFT state~($0$~if the qubit is in the state~$\ket 0$ or~$\ket +$, and~$1$ otherwise). By linearity, this analysis extends to {\em any\/} quantum state~$\ket\psi$: Measuring an arbitrary state~$\ket\psi=\sum_{\ket k\in\text{SHIFT}}\alpha_{\ket k} \ket k$ in the SHIFT basis yields~$\sum_{\ket k\in\text{SHIFT}}|\alpha_{\ket k}|^2\ket k\bra k$, which is identical to the returned state of the protocol \begin{align} &|\alpha_{\ket{000}}|^2 \ket{000}\bra{000}\\ &\qquad+ |\alpha_{\ket{{+}01}}|^2 H^{(1,0,0)}\ket{001}\bra{001}H^{(1,0,0)}\\ &\qquad+ |\alpha_{\ket{01{+}}}|^2 H^{(0,0,1)}\ket{010}\bra{010}H^{(0,0,1)}\\ &\qquad+\cdots \,. \end{align} Now it is clear that if the parties communicate through the AF/BW process, then they perfectly discriminate the SHIFT ensemble. The classical data collected in the above protocol uniquely specifies the SHIFT state they were given: The bits~$a,b,c$ they receive from the process specify the basis, and the bits~$x,y,z$ they receive from the measurement specify the state in the corresponding basis, {\it e.g.,\/}~$a=0,b=0,c=1,x=0,y=1,z=0$ encode the state~$\ket{01+}$.
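The discrimination protocol just described is straightforward to simulate once the outcome statistics are written out. The Python/NumPy sketch below enumerates, for every SHIFT state, the triples $(x,y,z)$ that occur with nonzero probability when $(a,b,c)$ is fixed by the AF/BW process, and confirms that exactly one triple occurs, with probability one. It is a numerical illustration of Eq.~\eqref{eq:final}, not a replacement for the argument above.
\begin{verbatim}
import numpy as np
from itertools import product

H, I2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2), np.eye(2)
ket = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
ket['+'] = (ket['0'] + ket['1']) / np.sqrt(2)
ket['-'] = (ket['0'] - ket['1']) / np.sqrt(2)

def kron_all(ops):
    # Tensor product of a list of vectors or of matrices.
    out = np.array([[1.0]]) if ops[0].ndim == 2 else np.array([1.0])
    for op in ops:
        out = np.kron(out, op)
    return out

def afbw(x, y, z):
    return ((y ^ 1) & z, (z ^ 1) & x, (x ^ 1) & y)

SHIFT = ['000', '111', '+01', '-01', '1+0', '1-0', '01+', '01-']

for label in SHIFT:
    psi = kron_all([ket[c] for c in label])
    outcomes = []
    for x, y, z in product((0, 1), repeat=3):
        a, b, c = afbw(x, y, z)
        Habc = kron_all([H if t else I2 for t in (a, b, c)])
        amp = kron_all([ket[s] for s in f"{x}{y}{z}"]) @ (Habc @ psi)
        if abs(amp) ** 2 > 1e-12:
            outcomes.append(((a, b, c), (x, y, z), abs(amp) ** 2))
    assert len(outcomes) == 1 and abs(outcomes[0][2] - 1.0) < 1e-12
    print(label, '->', outcomes[0][:2])
\end{verbatim}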
\textit{AF/BW channel from SHIFT-basis measurement.---} Conversely, suppose three parties, Alice, Bob, and Charlie, have access to a measurement device that measures a three-qubit system in the SHIFT basis and returns to each party the classical description of the post-measurement qubit state. For instance, if the three-qubit post-measurement state is~$\ket{{+}01}$, then Alice receives the label~$+$, Bob~$0$, and Charlie~$1$. The following protocol~(see~Fig.~\ref{fig:afbwprocessprotocol}) {\em simulates\/} the classical channel underlying the AF/BW process via such a SHIFT-basis measurement, {\it i.e.,\/}~the parties start with three bits~$x,y,z$ of their choice and end up with~$a=(y\oplus 1)z,\allowbreak b=(z\oplus 1)x,c=(x\oplus 1)y$.\footnote{If the variables~$a,b,c$ were in the respective {\em local pasts\/} of the variables~$x,y,z$---as they are in the complementary protocol of Fig.~\ref{fig:measurementprotocol}---this AF/BW {\em channel\/} would correspond to the noncausal AF/BW {\em process.}} \begin{figure} \caption{Schematic of protocol to simulate the AF/BW channel from a SHIFT-basis measurement.} \label{fig:afbwprocessprotocol} \end{figure} First, each party encodes the respective bit in the computational basis of a qubit, {\it i.e.,\/}~they locally generate a quantum system in the state~$\ket\psi=\ket{xyz}$. In the second step, they feed~$\ket\psi$ into the measurement device and record the outcome~$\ell_A,\ell_B,\ell_C\in\{0,1,{+},{-}\}$, where~$\ell_A$ is Alice's outcome and so forth. Finally, they apply the function~$f:0\mapsto 0,\allowbreak 1\mapsto 0,+\mapsto 1,-\mapsto 1$ to obtain the bits~$a,b,c$. Suppose the bits~$x,y,z$ are chosen such that~$x=y=z$. The prepared quantum state~$\ket{xyz}$ is a member of the SHIFT basis. The measurement device therefore returns the labels~$\ell_A=\ell_B=\ell_C\in\{0,1\}$, and, according to the protocol, the parties set~$a=b=c=0$, which is the correct value. If the bits are specified as $x=y=0$ and~$z=1$, then the prepared state~$\ket{xyz}$ is not a member of the SHIFT ensemble and the measurement device responds probabilistically:~$|\braket{{+}01}{001}|^2 = |\braket{{-}01}{001}|^2 = 1/2$. In either case, however, the parties correctly end up with~\mbox{$a=1$},~$b=c=0$. By symmetry, the parties compute~$a,b,c$ as desired for all inputs~$x,y,z$. The correspondence between the SHIFT ensemble and the AF/BW process that we have shown above can be understood as a consequence of the following mathematical fact: The global correlations between the local basis choices ($Z$ or $X$) and the local basis states ($\ket{0}$ or $\ket{+}$ \textit{vs.\/}~$\ket{1}$ or $\ket{-}$) in the SHIFT ensemble are exactly the correlations between local inputs ($a,b,c\in\{0,1\}$) and local outputs ($x,y,z\in\{0,1\}$) specified by the AF/BW process. This mathematical fact allows us to use the AF/BW process to implement the SHIFT measurement via local operations and, conversely, to use any implementation of the SHIFT measurement to simulate the classical channel underlying the AF/BW process. Indeed, this observation holds more generally for multiqubit instances of QNLWE, as we now demonstrate. \textit{Multipartite QNLWE.---}We show that {\em all\/} Boolean classical processes that violate causal order in a maximal sense---classical processes where {\em each\/} party can receive a signal from at least one other party---give rise to ensembles that exhibit quantum nonlocality without entanglement.
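Before turning to the multipartite construction, the converse protocol just described can be simulated in the same spirit: prepare $\ket{xyz}$, sample a SHIFT-basis measurement outcome, apply the relabelling $f$, and check that the parties always end up with $a=(y\oplus 1)z$, $b=(z\oplus 1)x$, $c=(x\oplus 1)y$. The Python sketch below is again only an illustrative check.
\begin{verbatim}
import numpy as np
from itertools import product

ket = {'0': np.array([1.0, 0.0]), '1': np.array([0.0, 1.0])}
ket['+'] = (ket['0'] + ket['1']) / np.sqrt(2)
ket['-'] = (ket['0'] - ket['1']) / np.sqrt(2)
SHIFT = ['000', '111', '+01', '-01', '1+0', '1-0', '01+', '01-']

def state(label):
    v = np.array([1.0])
    for c in label:
        v = np.kron(v, ket[c])
    return v

f = {'0': 0, '1': 0, '+': 1, '-': 1}   # local relabelling of the outcome
rng = np.random.default_rng(0)

for x, y, z in product((0, 1), repeat=3):
    psi = state(f"{x}{y}{z}")                          # step 1: encode x, y, z
    probs = np.array([abs(state(s) @ psi) ** 2 for s in SHIFT])
    label = SHIFT[rng.choice(8, p=probs)]              # step 2: SHIFT measurement
    a, b, c = (f[ch] for ch in label)                  # step 3: apply f locally
    assert (a, b, c) == ((y ^ 1) & z, (z ^ 1) & x, (x ^ 1) & y)
print("the SHIFT measurement reproduces the AF/BW channel on all 8 inputs")
\end{verbatim}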
Classical processes are characterized by a~{\em unique fixed-point\/} condition~\cite{baumeler2016FP,baumeler2021} as follows. Let~$\omega^n$ be a~Boolean function~$\{0,1\}^n\rightarrow\{0,1\}^n$, and~$\mathcal F$ the set of all functions~$\{0,1\}\rightarrow\{0,1\}$. The function~$\omega^n$ is a~{\em Boolean $n$-party classical process\/} if and only if \begin{align} \forall \mu\in\mathcal F^n, \,\exists ! \underline p\in\{0,1\}^n: \underline p= \omega^n(\mu(\underline p))\,, \end{align} \textit{i.e.,} if and only if for {\em each\/} choice of interventions~$\mu_i$ of each party there exists a {\em unique\/} fixed-point of~\mbox{$\omega^n\circ\mu$}. Here,~$\mu=(\mu_1,\mu_2,\dots,\mu_n)$ is an $n$-tuple of local Boolean functions. Moreover, we say that~$\omega^n$ {\em has no global past\/} if and only if \begin{align} \forall i\,\exists k,\underline x\in\{0,1\}^n: \omega^n_i(\underline x) \neq \omega^n_i(\underline x^{(k)}) \,, \label{eq:maxviol} \end{align} where~$\underline x^{(k)}=(x_1,\dots,x_{k-1},x_k\oplus 1,x_{k+1},\dots,x_n)$ is the same as~$\underline x$ but where the~$k$-th bit is flipped, and where~$\omega^n_i$ is the~$i$-th component of~$\omega^n$. This condition states that every party~$i$ can receive a signal through the process from at least one other party~$k$; no party lies in the global past of all other parties.\\ \textit{Theorem.---} If~$\omega^n$ is a Boolean~$n$-party classical process without global past, then \begin{align} \mathcal S_{\omega^n}:= \left\{ H^{(\omega^n(\underline x))}\ket{\underline x} \mid \underline x\in\{0,1\}^n \right\} \end{align} is a basis of orthonormal states that exhibits QNLWE.\\ \textit{Proof.---} The states in the set~$S_{\omega^n}$ with cardinality~$2^n$ are normalized. Now we show that they are orthogonal, {\it i.e.,\/} \begin{align} \forall \underline x\neq \underline y: \bra{\underline y}H^{(\omega^n(\underline y)\oplus\omega^n(\underline x))}\ket{\underline x} = 0\,, \label{eq:orthogonality} \end{align} where~$\oplus$ is bitwise addition modulo $2$. Pick two~\mbox{$n$-bit} strings~$\underline x\neq \underline y$ and suppose without loss of generality that they differ in the first~$k$ positions only. Orthogonality~(Eq.~\eqref{eq:orthogonality}) states that there exists some~\mbox{$i\leq k$} with~$\omega_i^n(\underline x) = \omega_i^n(\underline y)$. Towards a contradiction, however, assume~\mbox{$\forall i\leq k:\omega_i^n(\underline x) \neq \omega_i^n(\underline y)$}. Since~$\omega^n$ is a classical process, the {\em reduced function\/}~$\tilde\omega^n:\{0,1\}^k\rightarrow\{0,1\}^k$ with \begin{align} \underline z \mapsto (\omega^n_1(\underline z,x_{k+1},\dots, x_{n}), \dots, \omega^n_k(\underline z, x_{k+1},\dots,x_{n})) \end{align} is a classical process as well~\cite[Lemma~A.3]{baumeler2019}. To simplify notation, let~$\underline x'$ be the first~$k$ bits of~$\underline x$, and similarly for~$\underline y'$, and define~\mbox{$\underline a := \tilde\omega^n(\underline x')$},~\mbox{$\underline b:= \tilde\omega^n(\underline y')$}. 
Now,~$\underline a$ and~$\underline b$ are {\em fixed-points\/} under the following two~$k$-party interventions~$\alpha$ and~$\beta$, respectively, {\it i.e.,\/}~\mbox{$\underline a = \tilde\omega^n(\alpha(\underline a))$},~\mbox{$\underline b = \tilde\omega^n(\beta(\underline b))$} for \begin{align} \alpha,\beta:\{0,1\}^k&\rightarrow\{0,1\}^k\in\mathcal F^k\\ \alpha: \underline w &\mapsto \underline x'\oplus \underline a \oplus \underline w\\ \beta: \underline w &\mapsto \underline y'\oplus \underline b \oplus \underline w \,. \end{align} However, because~$\forall i \leq k:\underline x'_i\oplus\underline y'_i=\underline a_i\oplus\underline b_i = 1$, the function~$\tilde\omega^n\circ\alpha$ has {\em a second fixed-point\/}~$\underline b$ \begin{align} \tilde\omega^n(\alpha(\underline b)) &= \tilde\omega^n(\underline x'\oplus \underline a \oplus \underline b) = \tilde\omega^n(\underline y'\oplus \underline b \oplus \underline b)\\ &= \tilde\omega^n(\beta(\underline b)) = \underline b \,, \end{align} and therefore~$\omega^n$ is {\em not\/} a classical process. This proves that the set~$\mathcal S_{\omega^n}$ forms a basis of orthonormal states. What remains to show is that this set exhibits QNLWE. This follows from the assumption that~$\omega^n$ has no global past. From Eq.~\eqref{eq:maxviol} we have that for each party~$i$ there exist two bit-strings~$\underline x,\underline y$ such that the~\mbox{$i$-th} qubit of~$H^{(\omega^n(\underline x))}\ket{\underline x}$ is in a mutually unbiased basis compared to the~$i$-th qubit of~$H^{(\omega^n(\underline y))}\ket{\underline y}$: In an LOCC protocol for perfect discrimination, no party can initiate the communication. $\blacksquare$ \textit{Examples.---}The following is an ensemble exhibiting QNLWE for four parties. It is constructed from the classical process of Ref.~\cite{baumeler2022} inspired by the Ardehali-Svetlichny nonlocal game~\cite{svetlichny1987,ardehali1992}: \begin{align} \begin{split} \big\{ &\ket{0000}, \ket{0{+}01}, \ket{{+}01{+}}, \ket{001{-}},\\ &\ket{01{+}0}, \ket{{+}{-}01}, \ket{01{-}0}, \ket{0111},\\ &\ket{1{+}0{+}}, \ket{1{+}{+}{-}}, \ket{{-}01{+}}, \ket{1{+}{-}{-}},\\ &\ket{1{-}00}, \ket{{-}{-}01}, \ket{111{+}}, \ket{1{-}1{-}} \big\}\,. \end{split} \label{eq:example1} \end{align} Another example based on the generalizations of the AF/BW process proposed in Ref.~\cite{araujo2017} is the following: \begin{align} \begin{split} \big\{ &\ket{0000}, \ket{0101}, \ket{0111}, \ket{1010},\\ &\ket{1011}, \ket{1101}, \ket{1110}, \ket{1111},\\ &\ket{001{+}}, \ket{001{-}}, \ket{01{+}0}, \ket{01{-}0},\\ &\ket{1{+}00}, \ket{1{-}00}, \ket{+001}, \ket{{-}001} \big\}\,. \end{split} \label{eq:example2} \end{align} \textit{Conclusions.---}We have shown that Boolean \mbox{$n$-party} classical processes without global past can be mapped to a family of $n$-qubit ensembles exhibiting quantum nonlocality without entanglement and that, as such, they enable the discrimination of these ensembles via local quantum operations. We illustrated this connection explicitly for the tripartite case of the SHIFT ensemble~\cite{bennett1999} with respect to the AF/BW process \cite{af,baumeler2016}. Several open questions arise from our results. We have, in particular, not discussed bipartite instances of quantum nonlocality without entanglement, \textit{e.g.,} the two-qutrit domino states~\cite{bennett1999}. This is because in the bipartite case, logically consistent classical processes have a definite causal order, as shown by Oreshkov \textit{et al.}~\cite{oreshkov2012}.
To be sure, this instance of quantum nonlocality without entanglement {\em can\/} be interpreted as an instance of classical communication without causal order \cite{aok17,akibue22}, but this requires a relaxation of the constraint of logical consistency which is central to the process-matrix framework \cite{oreshkov2012}. Indeed, in the bipartite case, Akibue \textit{et al.}~\cite{aok17} show that the set of transformations achievable via local operations and classical communication without causal order~(their~`LOCC*') coincides with the set of separable operations. This means that the two-qutrit domino states can be perfectly discriminated by LOCC*, as shown explicitly in~Ref.~\cite{aok17}.\footnote{Akibue in his PhD thesis \cite{akibue22} also considers a tripartite example, namely, the tripartite classical-probabilistic process proposed in Ref.~\cite{baumeler2014ieee} and shows that it can realize a non-LOCC separable operation. However, unlike the bipartite case of domino states discussed in Ref.~\cite{aok17}, this example does not admit a straightforward interpretation in terms of quantum nonlocality without entanglement.} In the multipartite case, our results allow us to reinterpret the phenomenon of quantum nonlocality without entanglement as an operational witness of noncausality that has a qualitatively different character than the violation of causal inequalities. This opens up several potential connections with the wider literature on quantum nonlocality without entanglement and calls for a deeper understanding of its connection with noncausality. Indeed, as we have demonstrated, these results also offer a route to constructing new instances of quantum nonlocality without entanglement. These instances are of relevance for quantum cryptography, {\it e.g.,\/}~in quantum data hiding~\cite{divincenzo2002}. We also know that in standard quantum theory, multiqubit instances of quantum nonlocality without entanglement are incapable of witnessing a strong form of nonclassicality, \textit{i.e.,} logical proofs of the Kochen-Specker theorem \cite{wright2021contextuality}, and it would be interesting to investigate the implications of this fact for (non)causality in the process-matrix framework \cite{oreshkov2012}. Similarly, higher-dimensional generalizations of multipartite quantum nonlocality without entanglement \cite{niset2006} could also inspire new types of noncausal classical processes. The bipartite case, together with other generalizations of our results in the multipartite setting---in particular, the gap between separable and LOCC operations---will be taken up in forthcoming work. Let us also remark that whether the AF/BW process arises in general relativity would affect the interpretation of the noncausality witnessed via the perfect state discrimination task we have considered. In a Minkowski spacetime, three parties cannot discriminate the SHIFT ensemble with local operations and classical communication. However, if the parties are situated in a general-relativistic spacetime that realizes the AF/BW process, then this task becomes feasible. A successful discrimination of the SHIFT ensemble would then be an {\em operational signature\/} for the noncausal nature of such a~general-relativistic spacetime. 
On the other hand, if the AF/BW process turns out not to be realizable in a general-relativistic spacetime but instead requires an~intrinsically non-classical notion of spacetime (arising from, \textit{e.g.,} quantum gravity), then this discrimination task would serve as an operational signature of noncausality that is intrinsically non-classical. To be sure, in such a situation, the communication between the labs would still be classical but the physical conditions for achieving this communication would be outside the realm of possibilities afforded by general-relativistic spacetimes. The latter possibility could have interesting implications for how one might interpret time-delocalized realizations~\cite{oreshkov2019} of the AF/BW process~\cite{wechs2022existence}. \textit{Acknowledgments.--} We thank Nicolas Cerf, Stefano Pironio, Mio Murao, Ognyan Oreshkov, and Eleftherios Tselentis for helpful discussions and comments. RK also thanks ETH Z\"urich and IQOQI-Vienna, and \"AB also thanks QuIC for supporting the visits that made this work possible. RK is supported by the Chargé de Recherche fellowship of the Fonds de la Recherche Scientifique FNRS (F.R.S.-FNRS), Belgium. \"AB~is supported by the Austrian Science Fund~(FWF) through projects ZK3 (Zukunftskolleg) and BeyondC-F7103. \end{document}
\begin{document} \title{Gaussian-modulated coherent-state measurement-device-independent\\ quantum key distribution} \author{Xiang-Chun Ma}\affiliation{College of Science, National University of Defense Technology, Changsha 410073, People's Republic of China}\affiliation{Centre for Quantum Information and Quantum Control, Department of Physics \& Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada M5S 3G4} \author{Shi-Hai Sun}\affiliation{College of Science, National University of Defense Technology, Changsha 410073, People's Republic of China} \author{Mu-Sheng Jiang}\affiliation{College of Science, National University of Defense Technology, Changsha 410073, People's Republic of China} \author{Ming Gui}\affiliation{College of Science, National University of Defense Technology, Changsha 410073, People's Republic of China} \author{Lin-Mei Liang} \affiliation{College of Science, National University of Defense Technology, Changsha 410073, People's Republic of China} \affiliation{State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, People's Republic of China} \begin{abstract} Measurement-device-independent quantum key distribution (MDI-QKD), leaving the detection procedure to the third partner and thus being immune to all detector side-channel attacks, is very promising for the construction of high-security quantum information networks. We propose a scheme to implement MDI-QKD, but with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states, based on the principle of continuous-variable entanglement swapping. This protocol not only can be implemented with current telecom components but also has high key rates compared to its discrete counterpart; thus it will be highly compatible with quantum networks. \end{abstract} \pacs{03.67.Dd, 03.67.Hk, 89.70.Cf} \maketitle \section{Introduction} Quantum key distribution (QKD), allowing a secret key between two legitimate parties (Alice and Bob) to be established \cite{Sca09}, has been applied to quantum information networks based on the trusted node or relay \cite{Fro13,Hug13}. However, in order to construct high-security and -performance networks, measurement-device-independent (MDI) QKD would be a very promising alternative since it not only removes all detector side-channel attacks, the most important security loophole of QKD implementations, by leaving the detection procedure to the untrusted relay but also supplies excellent performance with current technology \cite{Lo12,Bra12,Rub13,Liu13,Fer13,Tan13,Xu13}. Measurement-device-independent QKD, which is a time-reversed Einstein-Podolsky-Rosen (EPR)-based QKD scheme \cite{Ina02}, consists of Alice and Bob respectively sending single-photon states to the third partner, Charlie, who makes a Bell-state measurement (BSM) and broadcasts his measurement results, and it has been a big step forward to bridge the gap between the theory and the real-world implementation of QKD \cite{Lo12}. Based on the idea of discrete-variable entanglement swapping and two-photon interference, the BSM can postselect the entanglement states between Alice and Bob and does not disclose the information about encodings, so this protocol allows the legitimate parties to establish the secure keys, which are independent of the measurement device, and all detection side-channel attacks are removed by leaving the detection procedure to the third partner. 
As a counterpart, motivated by continuous-variable entanglement swapping \cite{Fur98,Bra98}, here we propose a scheme to implement the MDI-QKD with continuous variables instead of discrete ones, i.e., with the source of Gaussian-modulated coherent states; thus we can confirm the security of this protocol by the optimality of Gaussian attacks \cite{Gar06,Nav06,Ren09L,Lev13L}. We show that with respect to this protocol two different reconciliation strategies, direct reconciliation (DR) and reverse reconciliation (RR), can be used to extract the secure keys even though this protocol seems symmetric for Alice and Bob compared to Charlie, and both achieve high performance. The detection side-channel attacks against continuous-variable QKD, such as the wavelength attack \cite{Ma13}, the calibration attack \cite{Jou13A}, the local oscillator (LO) intensity attack \cite{Ma13A}, and the saturation attack \cite{Qin13}, have also been excluded, and the expensive, low-detection-efficiency single-photon detectors used by discrete-variable MDI-QKD are replaced by lower-cost, higher-detection-efficiency balanced homodyne detectors (BHDs). Hence this protocol for continuous-variable MDI-QKD not only can be implemented with current technology but also has high key rates, like the conventional one-way continuous-variable (CV) QKD protocol [e.g., the Grosshans-Grangier protocol (GG02 protocol) \cite{Gro02,Gro03N}; see details in Ref.~\cite{Wee12R} and the references therein]. This paper is structured as follows. In Sec.~\ref{sec:Protocol}, we describe the protocol of continuous-variable MDI-QKD. In Sec.~\ref{sec:Estimation}, we give the security bounds of this protocol in DR and RR, respectively, against one-mode attacks. In Sec.~\ref{sec:Discussion}, we extend the results of Sec.~\ref{sec:Estimation} to the asymmetric channel case and discuss imperfect detections. Finally, Sec.~\ref{sec:Conclusion} concludes the paper. \section{Protocol description}\label{sec:Protocol} This continuous-variable MDI-QKD, whose schematic setup is shown in Fig.~\ref{fig:1}, consists of the following four steps. \begin{figure} \caption{Schematic setup of the continuous-variable MDI-QKD protocol.} \label{fig:1} \end{figure} 1. \textit{Preparation}. Alice and Bob each prepare coherent states in the phase space and send them to the third partner, Charlie, simultaneously, as shown in Fig.~\ref{fig:1}. Here the input modes can be described as $\hat{X}_{A/B}=X^S_{A/B}+\hat{X}^N_{A/B}$ for Alice and Bob, respectively, where $X^S_{A/B}$ are classical encoding variables drawn from a Gaussian distribution of zero mean and variance $V_S$, and $\hat{X}^N_{A/B}$ are vacuum modes. Here $\hat{X}$ denotes either quadrature of the coherent states, $\hat{X}\in\{\hat{Q},\hat{P}\}$. The overall variance $V:=V(\hat{X}_A)$ of Alice's initial mode is given by $V=V_S+1$ in shot-noise units, where $V_S$ is the modulation variance mentioned before, and we assume that the variance of Bob's mode is the same as Alice's, which they can agree on before key distribution without loss of generality \cite{Note}. 2. \textit{Measurement}. Charlie combines these two input modes with a balanced beam splitter (BS) and makes a continuous-variable BSM \cite{Fur98,Bra98} on the two modes with the two BHDs shown in Fig.~\ref{fig:1}, i.e., one port detecting the $\hat{Q}$ quadrature and the other the $\hat{P}$ quadrature.
Thus he will get the measurements, for example, $\hat{Q}_A-\hat{Q}_B$ and $\hat{P}_A+\hat{P}_B$ over the lossless and noiseless channels, up to a multiplier of $1/\sqrt{2}$ introduced by the BS \cite{Fur98,Bra98}. Then, he broadcasts these measurement results to Alice and Bob. Note that the LO used by Charlie is sent by either Alice or Bob, and before that, both Alice and Bob have defined the same signal-modulation reference frame by manipulating their respective LO beams (see details in Appendix \ref{sec:Define}). 3. \textit{Parameter estimation and security extraction}. Alice and Bob reveal part of their encodings, and based on Charlie's measurement results, they estimate the channel transmissions and excess noises. To establish the correlated data and secure keys, either Alice or Bob subtracts her or his encodings from Charlie's measurement results. For convenience, we assume that Bob implements this subtraction procedure since the protocol is symmetric; that is, Bob will take the data ($\hat{Q}+\sqrt{T_2}Q^S_B$) and $(\hat{P}-\sqrt{T_2}P^S_B)$, denoted as $\hat{Q}_{B^{'}}$ and $\hat{P}_{B^{'}}$, as estimations of Alice's encodings to establish the secure keys, where $T_2$ is the estimated channel transmission between Bob and Charlie. Since Eve does not know Bob's encodings, she does not know Alice's encodings accurately either from just the publication of quadratures $\hat{Q}$ and $\hat{P}$. Of course, she can learn part of the information from $\hat{Q}$ and $\hat{P}$. 4. \textit{Data postprocessing}. Alice and Bob extract the secret keys from their raw data using the current error correction and privacy amplification techniques \cite{Jou11} after they calculate the secret key rate between them. \section{Estimation of security bounds}\label{sec:Estimation} To estimate the security bounds of our protocol, we consider the entangling cloner shown in Fig.~\ref{fig:2} to bound Eve's information. In the security analysis of conventional one-way CVQKD protocols, the collective Gaussian attacks up to an appropriate symmetrization of the protocols are considered to be the optimal general attacks \cite{Ren09L,Lev13L}. The entangling cloner is the most powerful and practical example of a collective Gaussian attack \cite{Pir08,Wee10} and is shown to be optimal for a single or one-mode channel \cite{Ma13A}. But in two-way protocols, the optimal attack is not clear with respect to two interaction channels, and the entangling cloner attack has only been demonstrated to be optimal in the hybrid two-way protocol \cite{Pir08}. In this work, we restrict our analysis to two Markovian memoryless Gaussian channels, which do not interact with each other and thus can be reduced to a one-mode channel \cite{Pir13N}. Hence, in this sense, the two independent entangling cloner attacks, one in each of the untrusted channels of our protocol, are reduced to one-mode attacks and thus can be taken as the optimal one-mode collective Gaussian attack (the most powerful attack corresponding to two memory Gaussian channels that interact with each other is analyzed in another work and is not analyzed in this paper; see the Note). This attack consists of Eve interacting on Alice's and Bob's modes with her half of each EPR pair, respectively, and the quantum channels are replaced by two beam splitters with transmissions $T_1$ and $T_2$, respectively. 
Then, she collects all the modes to store them in her quantum memory and makes collective measurements on these modes to acquire information at any time during the classical data-postprocessing procedure implemented by Alice and Bob. \begin{figure} \caption{\label{fig:2} \label{fig:2} \end{figure} As mentioned before, Bob's recast data are obtained by subtracting his own encodings from Charlie's publications. Before describing them, we first give the expressions for the BSM results $\hat{Q}$ and $\hat{P}$ under the entangling cloner attack shown in Fig.~\ref{fig:2}. They can be written as \begin{equation}\label{eq:Q} \begin{split} \hat{Q}&\!=\!(\!\sqrt{T_1}\hat{Q}_A\!+\!\sqrt{1-T_1}\hat{Q}_{E_1}\!)\!-\!(\!\sqrt{T_2}\hat{Q}_B\!+\!\sqrt{1-T_2}\hat{Q}_{E_2}\!),\\ \hat{P}&\!=\!(\sqrt{T_1}\hat{P}_A\!+\!\sqrt{1-T_1}\hat{P}_{E_1})\!+\!(\sqrt{T_2}\hat{P}_B\!+\!\sqrt{1-T_2}\hat{P}_{E_2}), \end{split} \end{equation} up to the multiplier of $1/\sqrt{2}$ mentioned before, which can be incorporated with the left-hand sides of the above equations. Here, $E_1$ and $E_2$ are Eve's EPR modes, whose variances are $N_1$ and $N_2$, respectively. $N_1$ and $N_2$ are used to simulate the variances of the practical channel excess noises $\varepsilon_A$ and $\varepsilon_B$, respectively; that is, $\varepsilon_A=(1-T_1)(N_1-1)/T_1$ for the channel between Alice and Charlie, $\varepsilon_B=(1-T_2)(N_2-1)/T_2$ for the channel between Bob and Charlie, and both are referred to their respective channel inputs. $T_1$ and $T_2$ are the respective channel transmissions of the channel between Alice or Bob and Charlie, and both can be estimated in the parameter-estimation procedure. Then, we can recast Bob's data as \begin{equation}\label{eq:QB} \begin{split} \hat{Q}_{B^{'}}&\!=\!(\sqrt{T_1}\hat{Q}_A\!+\!\sqrt{1-T_1}\hat{Q}_{E_1})\!-\!(\sqrt{T_2}\hat{Q}^N_B\!+\!\sqrt{1-T_2}\hat{Q}_{E_2}),\\ \hat{P}_{B^{'}}&\!=\!(\sqrt{T_1}\hat{P}_A\!+\!\sqrt{1-T_1}\hat{P}_{E_1})\!+\!(\sqrt{T_2}\hat{P}^N_B\!+\!\sqrt{1-T_2}\hat{P}_{E_2}). \end{split} \end{equation} Since the set of data in Eqs.~(\ref{eq:QB}) is a noisy version of Alice's encodings, restricted to one-mode attack, this protocol is equivalent to the conventional one-way CVQKD protocol with heterodyne detection \cite{Wee04}. In this sense, we can use the conventional standard methods to analyze its security bounds. Like for one-way CVQKD, two different strategies of reconciliation, DR and RR, can be used to extract secure keys. In DR, Alice's encodings are taken as the referential raw keys; thus Bob tries to guess them and reconcile his data to be identical to them by virtue of the additional classical side information sent by Alice. In RR Bob's recast data are taken as the raw keys; therefore Alice tries to make her encodings identical to them, requiring Bob to send side information. Note that Eve's entangling cloner attacks for these two reconciliation procedures are different. In DR, Eve just guesses Alice's encodings, and Bob's encodings have no contributions for her, so restricted to a one-mode attack, the entangling cloner attack on Bob's mode is no use to her except for reducing the mutual information between Alice and Bob. However, in RR, Eve tries to guess Bob's recast data, including not only Alice's encoding component but also the noise component, so she can acquire information with the help of entangling cloner attacks on both channels, as shown in Fig.~\ref{fig:2}. 
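As a sanity check on the channel model, the relations above can be simulated directly with Gaussian random variables. The sketch below (Python with numpy; all parameter values are illustrative assumptions) samples the encodings, the vacuum modes, and Eve's EPR halves, forms Charlie's measurement $\hat{Q}$ of Eq.~(\ref{eq:Q}) and Bob's recast quadrature $\hat{Q}_{B^{'}}$ of Eq.~(\ref{eq:QB}), and checks that the empirical variance of $\hat{Q}_{B^{'}}$ reproduces $T_1V+(1-T_1)N_1+T_2+(1-T_2)N_2$, the value used in the next section.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
V_S, T1, T2, N1, N2 = 39.0, 0.5, 0.5, 1.1, 1.1   # illustrative values
V = V_S + 1.0                                    # variance of Alice's mode

# Classical encodings and quantum-noise quadratures (shot-noise units).
QS_A = rng.normal(0.0, np.sqrt(V_S), n)
QS_B = rng.normal(0.0, np.sqrt(V_S), n)
QN_A = rng.normal(0.0, 1.0, n)
QN_B = rng.normal(0.0, 1.0, n)
Q_E1 = rng.normal(0.0, np.sqrt(N1), n)           # Eve's EPR halves
Q_E2 = rng.normal(0.0, np.sqrt(N2), n)

Q_A, Q_B = QS_A + QN_A, QS_B + QN_B
# Charlie's BSM output (the 1/sqrt(2) factor is absorbed, as in the text).
Q = (np.sqrt(T1)*Q_A + np.sqrt(1 - T1)*Q_E1) \
    - (np.sqrt(T2)*Q_B + np.sqrt(1 - T2)*Q_E2)
# Bob's recast data: he adds back his own encoding.
Q_Bp = Q + np.sqrt(T2)*QS_B

b_v = T1*V + (1 - T1)*N1 + T2 + (1 - T2)*N2
print(np.var(Q_Bp), b_v)   # the two numbers should agree to about 1%
\end{verbatim}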
Before calculating the secret key rates of this protocol in DR and RR, we first compute the Shannon information between Alice and Bob, and then in the following sections we bound Eve's information using the standard method (see \cite{Gar07,Pir08}) since this protocol is equivalent to the conventional one-way CVQKD protocol with heterodyne detection \cite{Wee04}. Assuming the symmetry of both quadratures, the mutual information between Alice and Bob can be given by \begin{equation}\label{eq:IAB} I_{AB^{'}}=\log_2\frac{V_{B^{'}}}{V_{B^{'}|A}}. \end{equation} Note that they are identical in DR and RR and there is no multiplier of $1/2$ out the front since two quadratures are used to generate the secure keys, which is the same case as in conventional one-way CVQKD with heterodyne detection. The terms $V_{B^{'}}$ and $V_{B^{'}|A}$ are the variance and conditional variance of Bob's recast data $\hat{Q}_{B^{'}}$ and $\hat{P}_{B^{'}}$ in Eqs.~(\ref{eq:QB}). Since the terms on the right-hand sides of Eqs.~(\ref{eq:QB}) are mutually linearly independent, the variance $V_{B^{'}}:=\langle\hat{Q}_{B^{'}}^2\rangle=\langle\hat{P}_{B^{'}}^2\rangle$ ($\langle\hat{Q}_{B^{'}}\rangle=\langle\hat{P}_{B^{'}}\rangle=0$) is obtained by \begin{equation} V_{B^{'}}=T_1V+(1-T_1)N_1+T_2+(1-T_2)N_2:=b_v, \end{equation} and the conditional variance on Alice's encodings $X^S_A$ is given by \begin{equation} V_{B^{'}|A}=T_1+(1-T_1)N_1+T_2+(1-T_2)N_2:=b_0, \end{equation} using the formula of conditional variance defined as \cite{Poi94,Gra98} \begin{equation}\label{eq:CVar} V_{X|Y}=V(X)-\frac{|\left<XY\right>|^2}{V(Y)}. \end{equation} All the variances are in units of shot-noise level. Next, we calculate the secret key rates between Alice and Bob in DR and RR, respectively, by bounding Eve's information. \subsection{Direct reconciliation} In DR, Charlie's publication results will disclose some information, which is equal to giving Eve the virtual mode $\hat{E}_3$ shown in Fig.~\ref{fig:2}. Hence, Eve's information about Alice's encodings consists of the Shannon information $I_{AE_3}$ since $X_{E_3}$ ($\in\{Q,P\}$) is a classical variable and the Holevo information $\chi_{AE_A}$. The two kinds of information may partly contain each other, but we take the superposition as zero for simplicity. Therefore, the secret key rate can be given by \begin{align}\label{eq:KDR} K_{DR}=\beta I_{AB^{'}}-I_{AE_3}-\chi_{AE_A}, \end{align} where $\beta$ is the efficiency of reconciliation. The Shannon information $I_{AB^{'}}$ is given by Eq.~(\ref{eq:IAB}). $I_{AE_3}$ bounds Eve's knowledge about Alice's encodings directly learned from Charlie's publication results $Q$ and $P$, and it can be taken as the classical information since Charlie's measurement for each pulse is individual (e.g., Alice and Bob can wait to send the next signal pulse until they receive the measurement result of the last pulse). The Holevo bound $\chi_{AE_A}$ describes Eve's information obtained from the entangling cloner shown in Fig.~\ref{fig:2}. We first compute $I_{AE_3}$. 
Since Eve's modes $E^{''}_1$, $E^{''}_2$ can reduce the uncertainty of modes $E^{'}_1$ and $E^{'}_2$, respectively \cite{Gro03Q}, she can reduce the uncertainty of the published results $Q$ and $P$; i.e., the variance of mode $E_3\in\{Q,P\}$ conditioned on $E^{''}_1$, $E^{''}_2$ can be obtained by \begin{equation}\label{} V_{E_3|E^{''}_1,E^{''}_2}=T_1V+(1-T_1)/N_1+T_2V+(1-T_2)/N_2, \end{equation} using $V_{E_1|E^{''}_1}=1/N_1$ and $V_{E_2|E^{''}_2}=1/N_2$ \cite{Gro03Q}, and the conditional variance $V_{E_3|A,E^{''}_1,E^{''}_2}$ can also be given by \begin{equation}\label{} V_{E_3|A,E^{''}_1,E^{''}_2}=T_1+(1-T_1)/N_1+T_2V+(1-T_2)/N_2. \end{equation} So, assuming symmetry of both quadratures, the Shannon information $I_{AE_3}$ can be calculated as \begin{equation}\label{eq:IAE} I_{AE_3}=\log_2\frac{V_{E_3|E^{''}_1,E^{''}_2}}{V_{E_3|A,E^{''}_1,E^{''}_2}}. \end{equation} The Holevo information $\chi_{AE_A}$ can be written as \begin{equation}\label{eq:XAE} \chi_{AE_A}=S(E_A)-S(E_A|A), \end{equation} where $E_A$ denotes Eve's modes $E^{'}_1$, $E^{''}_1$, and $S(E_A)$ can be computed with the symplectic eigenvalues of the covariance matrix \begin{equation}\label{} \gamma_{E_A}(V,V)=\begin{pmatrix} e_{v1}\mathbb{I}& \varphi_1\sigma_z\\ \varphi_1\sigma_z& N_1\mathbb{I} \end{pmatrix}, \end{equation} where $\varphi_1=\sqrt{T_1(N_1^2-1)}$ and $e_{v1}=(1-T_1)V+T_1N_1$. Here $e_{v1}$ is the variance of mode $E^{'}_1$, and the conditional variance on Alice's encodings is given by $e_0=(1-T_1)+T_1N_1$. $\mathbb{I}$ is the $2\times2$ identity matrix and $\sigma_z=\mathrm{diag}(1,-1)$ is the Pauli $z$ matrix. The symplectic eigenvalues of this covariance matrix are given by \begin{equation} \lambda_{1,2}=\sqrt{\frac{\Delta\mp\sqrt{\Delta^2-4D}}{2}}, \end{equation} where $\Delta=e^2_{v1}+N_1^2-2\varphi_1^2$ and $D=(e_{v1}N_1-\varphi_1^2)^2$. Hence, the von Neumann entropy of Eve's state is given by \begin{equation}\label{eq:SE} S(E_A)=G\left(\frac{\lambda_1-1}{2}\right)+G\left(\frac{\lambda_2-1}{2}\right), \end{equation} where $G(x)=(x+1)\log_2(x+1)-x\log_2x$. $S(E_A|A)$ can be obtained from the conditional covariance matrix $\gamma_{E_A|A}=\gamma_{E_A}(1,1)$, and its symplectic eigenvalues are given by \begin{equation} \lambda_{3,4}=\sqrt{\frac{A\mp\sqrt{A^2-4B}}{2}}, \end{equation} where $A=e^2_0+N_1^2-2\varphi_1^2$ and $B=(e_0N_1-\varphi_1^2)^2$. Thus, the conditional entropy is \begin{equation}\label{eq:SEA} S(E_A|A)=G\left(\frac{\lambda_3-1}{2}\right)+G\left(\frac{\lambda_4-1}{2}\right). \end{equation} With Eqs.~(\ref{eq:IAE}) and (\ref{eq:XAE}), we can bound Eve's information for DR and then compute the secret key rate $K_{DR}$ in Eq.~(\ref{eq:KDR}). We plot it in Fig.~\ref{fig:3} for the symmetric channel case, where $T_1=T_2$ and the excess noises in the two channels are identical. \begin{figure} \caption{\label{fig:3} Secret key rate $K_{DR}$ in direct reconciliation as a function of transmission distance for the symmetric channel case.} \end{figure} From Fig.~\ref{fig:3}, we can see that this protocol is very sensitive to channel loss and excess noise in DR for the symmetric channel case, and the transmission distances are limited to 15 km (the 3-dB limit), the same as for the one-way CVQKD protocol with DR, or shorter, due to the fact that Bob's data contain some modulation vacuum noise, which is detrimental to him, and that Charlie's BSM discloses some information to Eve. 
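The DR bound can be assembled in a few lines from the expressions above. The sketch below (Python with numpy) evaluates $I_{AB^{'}}$, $I_{AE_3}$, the Holevo term $\chi_{AE_A}$, and finally $K_{DR}$ of Eq.~(\ref{eq:KDR}); the channel loss of 0.2 dB/km and the parameter values are illustrative assumptions rather than the exact settings of Fig.~\ref{fig:3}.
\begin{verbatim}
import numpy as np

def G(x):
    # Bosonic entropy function G(x) = (x+1)log2(x+1) - x log2 x.
    return (x + 1)*np.log2(x + 1) - (x*np.log2(x) if x > 0 else 0.0)

def symplectic_pair(a, n, phi):
    # Symplectic eigenvalues of a two-mode matrix with diagonal blocks
    # a*I, n*I and off-diagonal blocks phi*sigma_z.
    delta = a**2 + n**2 - 2*phi**2
    d = (a*n - phi**2)**2
    return (np.sqrt((delta - np.sqrt(delta**2 - 4*d))/2),
            np.sqrt((delta + np.sqrt(delta**2 - 4*d))/2))

def K_DR(V, T1, T2, N1, N2, beta=0.95):
    # Mutual information between Alice and Bob's recast data.
    b_v = T1*V + (1 - T1)*N1 + T2 + (1 - T2)*N2
    b_0 = T1   + (1 - T1)*N1 + T2 + (1 - T2)*N2
    I_AB = np.log2(b_v/b_0)
    # Information leaked to Eve through Charlie's announcement.
    v1 = T1*V + (1 - T1)/N1 + T2*V + (1 - T2)/N2
    v0 = T1   + (1 - T1)/N1 + T2*V + (1 - T2)/N2
    I_AE3 = np.log2(v1/v0)
    # Holevo term chi_{AE_A} from the entangling cloner on Alice's channel.
    phi1 = np.sqrt(T1*(N1**2 - 1))
    e_v1 = (1 - T1)*V + T1*N1
    e_0  = (1 - T1)   + T1*N1
    l1, l2 = symplectic_pair(e_v1, N1, phi1)
    l3, l4 = symplectic_pair(e_0,  N1, phi1)
    chi = G((l1-1)/2) + G((l2-1)/2) - G((l3-1)/2) - G((l4-1)/2)
    return beta*I_AB - I_AE3 - chi

# Example: symmetric channels, 0.2 dB/km loss (an assumption), 5 km each.
T = 10**(-0.2*5/10)
print(K_DR(V=40.0, T1=T, T2=T, N1=1.02, N2=1.02))
\end{verbatim}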
\subsection{Reverse reconciliation}\label{sec:Reverse} In RR, the secret key rate can be written as \begin{align}\label{eq:KRR} K_{RR}=\beta I_{AB^{'}}-I_{B^{'}E_3}-\chi_{B^{'}E}, \end{align} where $I_{AB^{'}}$ is also given by Eq.~(\ref{eq:IAB}), $I_{B^{'}E_3}$ describes the information disclosed by Charlie, and $\chi_{B^{'}E}$ quantifies Eve's Holevo information about Bob's recast data ($\hat{Q}_{B^{'}}$ or $\hat{P}_{B^{'}}$) by entangling cloner attack. The latter two quantities are calculated as follows. The information $I_{B^{'}E_3}$ about Bob's recast data $\hat{Q}_{B^{'}}$ and $\hat{P}_{B^{'}}$ disclosed by Charlie's BSM can be written as \begin{equation}\label{eq:IBE} I_{B^{'}E_3}=\log_2\frac{V_{E_3}}{V_{E_3|B^{'}}}. \end{equation} where there is no factor of $\frac{1}{2}$, as the previous section mentioned. To compute the variance of mode $E_3\in\{Q,P\}$ and the conditional variance $V_{E_3|B^{'}}$, we recast the measurement quadrature $\hat{Q}$ ($\hat{P}$) in Eqs.~(\ref{eq:Q}) as $\hat{Q}=\hat{Q}_{B^{'}}-\sqrt{T_2}Q_B$ ($\hat{P}=\hat{P}_{B^{'}}+\sqrt{T_2}P_B$), so these variances can be given, respectively, by \begin{equation} \begin{split} V_{E_3}&=\langle(\hat{Q})^2\rangle=\langle(\hat{P})^2\rangle\\ &=T_1V+(1-T_1)N_1+T_2V+(1-T_2)N_2, \end{split} \end{equation} \begin{equation} V_{E_3|B^{'}}=T_2V_S=T_2(V-1). \end{equation} Then, using the above equations, Eq.~(\ref{eq:IBE}) can be obtained. The Holevo information $\chi_{B^{'}E}$ can be obtained by \begin{equation}\label{eq:XBE} \chi_{B^{'}E}=S(E)-S(E|B^{'}), \end{equation} where $E$ denotes Eve's modes $E^{'}_1$, $E^{''}_1$, $E^{'}_2$, $E^{''}_2$. $S(E)$ can be computed with the symplectic eigenvalues of the covariance matrix, \begin{equation}\label{} \gamma_{E}=\begin{pmatrix} e_{v1}\mathbb{I}& \varphi_1\sigma_z& 0& 0\\ \varphi_1\sigma_z& N_1\mathbb{I}& 0& 0\\ 0& 0& e_{{v2}}\mathbb{I}& \varphi_2\sigma_z\\ 0& 0& \varphi_2\sigma_z& N_2\mathbb{I} \end{pmatrix}_{8\times8}, \end{equation} where $e_{v2}=(1-T_2)V+T_2N_2$ and $\varphi_2=\sqrt{T_2(N_2^2-1)}$. This covariance matrix can be written as $\gamma_{E_A}\bigoplus\gamma_{E_B}$, so $S(E)=S(E_A)+S(E_B)$, where $S(E_A)$ is given by Eq.~(\ref{eq:SE}) and $S(E_B)$ is obtained by replacing $T_1$ and $N_1$ with $T_2$ and $N_2$ in Eq.~(\ref{eq:SE}). 
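Since $S(E)$ and, below, $S(E|B^{'})$ are built from symplectic eigenvalues, it may help to note that for any $2N\times2N$ covariance matrix in the $(Q_1,P_1,\dots,Q_N,P_N)$ block ordering used here, the symplectic spectrum can be obtained numerically as the moduli of the eigenvalues of $i\Omega\gamma$. A minimal helper, assuming numpy, is sketched here as an alternative to the analytic route taken in the text.
\begin{verbatim}
import numpy as np

def symplectic_eigenvalues(gamma):
    # Symplectic spectrum of a 2N x 2N covariance matrix: the moduli of
    # the eigenvalues of i*Omega*gamma, each value appearing twice.
    n_modes = gamma.shape[0] // 2
    omega1 = np.array([[0.0, 1.0], [-1.0, 0.0]])
    Omega = np.kron(np.eye(n_modes), omega1)
    nu = np.sort(np.abs(np.linalg.eigvals(1j * Omega @ gamma)))
    return nu[::2]                      # keep one copy per mode

def G(x):
    x = max(x, 0.0)
    return (x + 1)*np.log2(x + 1) - (x*np.log2(x) if x > 0 else 0.0)

def von_neumann_entropy(gamma):
    return sum(G((nu - 1)/2) for nu in symplectic_eigenvalues(gamma))
\end{verbatim}
Applied to the block-diagonal $\gamma_E$ this reproduces $S(E)=S(E_A)+S(E_B)$, and applied to the conditional matrix written out below it gives $S(E|B^{'})$ without the large-modulation approximation.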
Likewise, $S(E|B^{'})$ can be calculated by symplectic eigenvalues of the conditional covariance matrix $\gamma^{Q_{B^{'}},P_{B^{'}}}_E$, which can be obtained by \cite{Gar07} \begin{equation} \gamma^{Q_{B^{'}},P_{B^{'}}}_E=\gamma_E-\sigma_{EB^{'}}(\textbf{X}\gamma_{B^{'}}\textbf{X})^{MP}\sigma^T_{EB^{'}}, \end{equation} where \begin{equation} \begin{split} \sigma_{EB^{'}}&=\begin{pmatrix} \langle\hat{Q}_{E^{'}_1}\hat{Q}_{B^{'}}\rangle& 0\\ 0& \langle\hat{P}_{E^{'}_1}\hat{P}_{B^{'}}\rangle\\ \langle\hat{Q}_{E^{''}_1}\hat{Q}_{B^{'}}\rangle& 0\\ 0& \langle\hat{P}_{E^{''}_1}\hat{P}_{B^{'}}\rangle \\ \langle\hat{Q}_{E^{'}_2}\hat{Q}_{B^{'}}\rangle& 0\\ 0& \langle\hat{P}_{E^{'}_2}\hat{P}_{B^{'}}\rangle\\ \langle\hat{Q}_{E^{''}_2}\hat{Q}_{B^{'}}\rangle& 0\\ 0& \langle\hat{P}_{E^{''}_2}\hat{P}_{B^{'}}\rangle \end{pmatrix}= \begin{pmatrix} \xi_1\mathbb{I}\\ \phi_1\sigma_z\\ -\xi_2\sigma_z\\ -\phi_2\mathbb{I} \end{pmatrix}, \end{split} \end{equation} with \begin{equation} \begin{split} \xi_1&=\sqrt{T_1(1-T_1)}(N_1-V),\\ \phi_1&=\sqrt{(1-T_1)(N_1^2-1)},\\ \xi_2&=\sqrt{T_2(1-T_2)}(N_2-1),\\ \phi_2&=\sqrt{(1-T_2)(N_2^2-1)}, \end{split} \end{equation} and $\gamma_{B^{'}}=\bigl(\begin{smallmatrix}bv & 0 \\ 0 & bv \end{smallmatrix}\bigr)$, $\textbf{X}=\bigl(\begin{smallmatrix}1 & 0 \\ 0& 1 \end{smallmatrix}\bigr)$. $MP$ stands for the Moore-Penrose inverse of a matrix. For a straightforward calculation, the conditional covariance matrix can be recast as \begin{equation}\label{} \begin{split} &\gamma^{Q_{B^{'}},P_{B^{'}}}_{E}=\\ &\begin{pmatrix} (e_{v1}-\frac{\xi^2_1}{b_v})\mathbb{I}& (\varphi_1\!-\!\frac{\xi_1\phi_1}{b_v})\sigma_z& \frac{\xi_1\xi_2}{b_v}\sigma_z& \frac{\xi_1\phi_2}{b_v}\mathbb{I}\\ (\varphi_1\!-\!\frac{\xi_1\phi_1}{b_v})\sigma_z& (N_1-\frac{\phi_1^2}{b_v})\mathbb{I}& \frac{\xi_2\phi_1}{b_v}\mathbb{I}& \frac{\phi_1\phi_2}{b_v}\sigma_z\\ \frac{\xi_1\xi_2}{b_v}\sigma_z& \frac{\xi_2\phi_1}{b_v}\mathbb{I}& (e_{v2}-\frac{\xi^2_2}{b_v})\mathbb{I}& (\varphi_2\!-\!\frac{\xi_2\phi_2}{b_v})\sigma_z \\ \frac{\xi_1\phi_2}{b_v}\mathbb{I}& \frac{\phi_1\phi_2}{b_v}\sigma_z& (\varphi_2\!-\!\frac{\xi_2\phi_2}{b_v})\sigma_z& (N_2-\frac{\phi_2^2}{b_v})\mathbb{I} \end{pmatrix}. \end{split} \end{equation} Calculating the symplectic eigenvalues of a four-mode covariance matrix is very challenging, and the standard method is as follows \cite{Gar07,Ser06}: first, we denote the four symplectic eigenvalues as $\nu_1$, $\nu_2$, $\nu_3$ and $\nu_4$, which satisfy \begin{equation}\label{eq:Delta} \begin{split} &\Delta^4_1=\nu^2_1+\nu^2_2+\nu^2_3+\nu^2_4,\\ &\Delta^4_2=\nu^2_1\nu^2_2+\nu^2_1\nu^2_3+\nu^2_1\nu^2_4+\nu^2_2\nu^2_3+\nu^2_2\nu^2_4+\nu^2_3\nu^2_4,\\ &\Delta^4_3=\nu^2_1\nu^2_2\nu^2_3+\nu^2_1\nu^2_2\nu^2_4+\nu^2_1\nu^2_3\nu^2_4+\nu^2_2\nu^2_3\nu^2_4,\\ &\Delta^4_4=\nu^2_1\nu^2_2\nu^2_3\nu^2_4, \end{split} \end{equation} where $\Delta^4_j$ ($j=1,2,3,4$) is the $jth$-order principal minor of $\gamma^{Q_{B^{'}},P_{B^{'}}}_E$, which is defined as the sum of the determinants of all the $2j\times2j$ submatrices of the $n\times n$ covariance matrix obtained by deleting $n-2j$ rows and the corresponding $n-2j$ columns \cite{Ser06}. Second, after calculating the principal minors of the conditional covariance matrix, we can solve Eqs.~(\ref{eq:Delta}) to get the symplectic eigenvalues of the four-mode conditional covariance matrix. However, it is very difficult. But, we can compute $S(E|B^{'})$ asymptotically. Note that $G(\frac{\nu-1}{2})\rightarrow\log_2\frac{e\nu}{2}+O(\nu^{-1})$ for $\nu\gg1$ \cite{Pir08}. 
This means that for a large variance $V$ of Alice's and Bob's modulated modes and $T\neq0,1$, we can use the above formula to compute the Holevo information. For a large modulation variance, the asymptotic eigenvalues of $\gamma^{Q_{B^{'}},P_{B^{'}}}_{E}$ can be given by \begin{equation}\label{eq:eigen} \nu_1=N_1, \nu_2=N_2, \nu^2_3\nu^2_4=\frac{\Delta^4_4}{\nu^2_1\nu^2_2}. \end{equation} Then, $S(E|B^{'})$ can be obtained by \begin{equation} S(E|B^{'})=G\left(\frac{N_1-1}{2}\right)+G\left(\frac{N_2-1}{2}\right)+\log_2\frac{e^2\nu_3\nu_4}{4}. \end{equation} Then, the Holevo information $\chi_{B^{'}E}$ in Eq.~(\ref{eq:XBE}) can be attained with above equations. Hence, we can obtain the secret key rate in RR in Eq.~(\ref{eq:KRR}) by substituting Eqs.~(\ref{eq:IAB}), (\ref{eq:IBE}), and (\ref{eq:XBE}) into it. We plot the secure key rate $K_{RR}$ as a function of transmission distances between Alice, Charlie, and Bob in Fig.~\ref{fig:4} for the symmetric channel case. The performance in RR is higher than that in DR, which is analogous to the case of conventional one-way CVQKD, but in conventional one-way CVQKD the RR protocol has no loss limit when the channel excess noise is zero. Since the vacuum noise of Bob's mode reduces the mutual information between Alice and Bob and Charlie's BSM discloses some information, the transmission distances are also very limited, with typical experimental parameters used in current CVQKD implementations \cite{Jou13N}, except for the modulation variance of Alice and Bob, which is set to be optimal. Note that the calculation of the Holevo information $\chi_{B^{'}E}$ in Eq.~(\ref{eq:XBE}) is for the case of large modulation variance of Alice and Bob. However, in Appendix \ref{sec:Asymptotic}, we show that even for infinitely strong modulation and perfect reconciliation efficiency the transmission distances are still short, and also the improvement is limited. \begin{figure} \caption{\label{fig:4} \label{fig:4} \end{figure} \section{Discussion}\label{sec:Discussion} As shown in the previous sections, we set Bob to recast his data by subtracting his encodings from Charlie's BSM results; however, if Alice recasts her data as Bob does and Bob keeps his encodings, the same results as above will be obtained. Although the performance is not very good for symmetric channels, we show that this protocol will exhibit high performance for asymmetric channels ($T_1\neq T_2$), as shown in Fig.~\ref{fig:5}. \begin{figure} \caption{\label{fig:5} \label{fig:5} \end{figure} In Fig.~\ref{fig:5}, we set Charlie's BSM relay close to Alice's station, for example, 10 m, and find that both DR and RR have excellent performances over long distances between Alice, Charlie, and Bob with experimental realistic conditions. Moreover, if we set Charlie's BSM relay close to Bob's station, the cases are a little more complicated due to the respective channel excess noises. However, in this setting, this protocol is very close to the conventional one-way CVQKD protocol except Charlie's BSM discloses part of the information to Eve. Therefore, in this sense, for this protocol DR has 3-dB limit, and RR has no loss limit if there is no channel excess noise and $T_2\rightarrow 1$. We do not show these results in the figure. Of course, we can reverse the above cases in Fig.~\ref{fig:5} to get high performance with Charlie's BSM relay close to Bob's station just by having Alice recast her data if channels have excess noise. 
Finally, we point out that the imperfections of Charlie's homodyne detections, such as finite detection efficiency and electronic noise, can be included in the channel transmission and excess noise, respectively; thus we need not consider the imperfections of Charlie's detections separately when computing the secure key rates. However, these imperfections will rapidly reduce the performance of this protocol if the BHDs have low detection efficiency and high electronic noise. Hence using highly efficient BHDs in the BSM is necessary to improve the performance of this continuous-variable MDI-QKD. \section{Conclusion}\label{sec:Conclusion} In conclusion, we proposed a scheme to realize the idea of MDI-QKD with a source of Gaussian-modulated coherent states. We showed that, against the optimal one-mode attack, this protocol has higher performance in RR than in DR for the symmetric channel case, but both are limited to short distances; for asymmetric channels, however, both have excellent performance and can be extended to the distances currently realized by conventional one-way CVQKD. Moreover, the protocol exploits almost every pulse to generate keys and thus has high key rates compared to discrete-variable MDI-QKD. Actually, this protocol has no basis choice or comparison, and each pulse, except the ones used for parameter estimation, contributes to the establishment of the secure keys. In addition, the source can be easily generated with coherent light, and the whole protocol can be implemented experimentally with current technology, although the LO interference will be a little complicated. We hope to find other methods to solve the problems of pulse synchronization and reference frame calibration in future research. \begin{acknowledgments} The authors thank H.-K. Lo and his group for enlightening discussions and comments. This work is supported by the National Natural Science Foundation of China, Grants No. 61072071 and No. 11304391. L.-M.L. is supported by the Program for New Century Excellent Talents. X.-C.M. is supported by the Hunan Provincial Innovation Foundation for Postgraduates. X.-C.M. and M.-S.J. acknowledge support from NUDT under Grant No. kxk130201. \end{acknowledgments} \appendix \section{Defining the reference frame}\label{sec:Define} In this appendix, we discuss how to synchronize the pulses and define the reference frame between Alice, Bob, and Charlie by manipulation of the LO. The basic idea is that, if we can measure the phase difference of the two LO beams sent by Alice and Bob, respectively, we can add this phase difference to one party's modulation of his or her signal beam, and thus the two signal modulations of Alice and Bob are implemented in the same reference frame. Since the LO is a strong classical beam, we can combine the two LO beams on a balanced beam splitter so that they interfere with each other; then we can measure one port's interference output to get the phase difference of the two LO beams. The schematic setup is shown in Fig.~\ref{fig:6}. \begin{figure} \caption{\label{fig:6} Schematic setup for measuring the phase difference between Alice's and Bob's LO beams, which defines a common signal-modulation reference frame.} \end{figure} In this protocol, we have Alice send the LO beam to Charlie, who then splits it into two beams with a balanced beam splitter for his two balanced homodyne detectors and uses them to measure the quadratures $\hat{Q}$ and $\hat{P}$. First, Alice splits her LO beam into two beams, one sent to Charlie and the other to Bob. 
Then, Bob splits the received LO beam from Alice and his own LO beam into two beams, respectively, and combines them with BS1 and BS2 so they interfere with each other, as shown in Fig.~\ref{fig:6}. Then, we use each photodetector on one port of both BS1 and BS2, to detect the intensity of interfered beams. Note that, we add a $\pi/2$ phase on Bob's one split LO beam in order to accurately measure the phase difference. We denote the amplitude of Alice's LO beam that interferes with Bob's as $\alpha e^{i\theta_A}$ and denote Bob's LO beam as $\alpha e^{i\theta_B}$, provided that both of them have identical intensities. Relative to the LO beams, Alice's and Bob's classical signal beams are phase modulated into $\alpha^A_S e^{i(\theta_A+\phi_A)}$ and $\alpha^B_S e^{i(\theta_B+\phi_B)}$, respectively, before attenuating the quantum level. $\alpha^A_S$ and $\alpha^B_S$ are their respective signal beam intensities, and $\phi_A$, $\phi_B$ are modulated phases. Then, when two LO beams interfere with BS1, the amplitude of one port can be written as \begin{equation} \beta_1=\frac{\alpha e^{i\theta_A}+\alpha e^{i\theta_B}}{\sqrt{2}}=\sqrt{2}\alpha e^{\frac{i(\theta_A+\theta_B)}{2}}\cos \left(\frac{\theta_A-\theta_B}{2}\right). \end{equation} The PD output of BS1 is obtained by \begin{equation}\label{eq:Beta1} |\beta_1|^2=2|\alpha|^2\cos^2\left(\frac{\theta_A-\theta_B}{2}\right)\!=\!|\alpha|^2[1+\cos(\theta_A-\theta_B)]. \end{equation} Likewise, the PD output of BS2 can be attained as \begin{equation}\label{eq:Beta2} |\beta_2|^2\!=\!|\alpha|^2\{1+\cos[\theta_A-(\theta_B+\pi/2)]\}=|\alpha|^2[1+\sin(\theta_A-\theta_B)]. \end{equation} With Eqs.~(\ref{eq:Beta1}) and (\ref{eq:Beta2}), we can accurately compute the phase difference $\Delta\theta:=\theta_A-\theta_B$ of Alice's and Bob's LO beams. When Bob modulates his signal beam, he adds the phase difference $\Delta\theta$ and the initial phase $\phi_B$ together as the modulated phase. Thus, the amplitude of Bob's signal beam can be written as $\alpha^B_S e^{i(\theta_B+\phi_B+\Delta\theta)}=\alpha^B_S e^{i(\theta_A+\phi_B)}$ , which has been defined in the same reference frame as Alice's. However, realizing the above strategy experimentally may be complicated, and we just give a simple theoretical method and demonstrate the possibility of implementing this whole protocol of continuous-variable MDI-QKD. Other strategies to solve the problem of pulse synchronization and the reference frame calibration might exist. We note that in Refs. \cite{Zha00,Zha02,Jia04}, homodyne detectors and LO beams are not needed to make the BSM in continuous-variable entanglement swapping; however, we are not sure whether their method of BSM is suitable for this protocol. Finally, we point out that, to relieve the emitters' burden and preserve the symmetry of Alice and Bob, this procedure of synchronization and the reference frame calibration can be implemented by Charlie without affecting the security of this protocol. That means Alice and Bob both send their LO beams to Charlie, who measures the phase difference of the two beams and then adds it to the signal beam of either Alice or Bob by modulation. \section{Asymptotic key rate for RR with infinitely strong modulation}\label{sec:Asymptotic} In Sec. \ref{sec:Reverse}, we obtained the Holevo information $\chi_{B^{'}E}$ in Eq.~(\ref{eq:XBE}) for RR for the case of large modulation variance of Alice and Bob. 
Using appropriate experimental parameters, e.g., a finite reconciliation efficiency $\beta=0.95$, and optimizing the modulation variance, we show that the transmission distances in the symmetric channel case for RR are also limited like in DR due to the vacuum noise of Bob's mode, which is different from the conventional one-way CVQKD as mentioned before. However, in this appendix, we point out that even for infinitely strong modulation and perfect reconciliation efficiency, the transmission distances are still limited, as shown in Fig.~\ref{fig:7}. \begin{figure} \caption{\label{fig:7} \label{fig:7} \end{figure} Figure \ref{fig:7} depicts the asymptotic key rate with infinitely large modulation variance for RR in the symmetric channel case, and we can see that the improvement in the achievable key rate is limited with respect to the modulation variance of Alice and Bob. In addition, we can easily check that the maximum transmission distance in the asymptotic case is extended by only about 2 km compared to the one in the case with $V=40$ and $\beta=1$. This means that the asymptotic calculation of the eigenvalues and Holevo information $\chi_{B^{'}E}$ in Eqs.~(\ref{eq:eigen}) and (\ref{eq:XBE}), respectively, is also applicable to the case of experimental realization with an appropriately large modulation variance. Finally, we point out that the asymptotic key rates for other cases in DR and RR with symmetric or asymmetric channels can also be easily obtained using the above method, i.e., by setting $V\rightarrow\infty$ and $\beta=1$ in the calculation for the key rates. For the purpose of experimental realization, i.e., using the experimentally realistic parameters, we do not give their results here. \end{document}
\begin{document} \title{Non-classical linear divisibility sequences\\ and cyclotomic polynomials} \author{Sergiy Koshkin\\ Department of Mathematics and Statistics\\ University of Houston-Downtown\\ One Main Street\\ Houston, TX 77002\\ e-mail: [email protected]} \begin{abstract} Divisibility sequences are defined by the property that their elements divide each other whenever their indices do. The divisibility sequences that also satisfy a linear recurrence, like the Fibonacci numbers, are generated by polynomials that divide their compositions with every positive integer power. We completely characterize such polynomials in terms of their factorizations into cyclotomic polynomials using labeled Hasse diagrams, and construct new integer divisibility sequences based on them. We also show that, unlike the Fibonacci numbers, these non-classical sequences do not have the property of strong divisibility. \textbf{Keywords}: Fibonacci numbers, Lucas numbers, divisibility sequence, linear recurrence, cyclotomic polynomial, Hasse diagram, strong divisibility \end{abstract} \section{What Mersenne and Fibonacci have in common}\label{S1} The Mersenne numbers are given explicitly as $M_n:=2^n-1$. The Fibonacci numbers, on the other hand, come from a recurrence relation, $F_{n+2}=F_{n+1}+F_n$, with $F_0=0$ and $F_1=1$. Of course, $M_0=0$ and $M_1=1$, and $M_n$ satisfy a similar recurrence, $M_{n+2}=3M_{n+1}-2M_n$. But both sequences also have a more interesting property in common: if $m$ divides $n$ then $a_m$ divides $a_n$. For example, $F_7=13$ and $F_{14}=377$, and we observe that $377=13\cdot29$. Such sequences are called {\it divisibility sequences} (the name seems to be due to Hall \cite{Hall}), and {\it linear divisibility sequences} if they also satisfy a linear recurrence $a_{n+k}=c_1a_{n+k-1}+\cdots+c_ka_n$, like the Mersenne or Fibonacci numbers. Lucas was the first to study in depth the second-order ($k=2$) linear divisibility sequences, in 1876-1880. He showed that, up to normalization, non-trivial such sequences are of the form $\frac{\alpha^n-\beta^n}{\alpha-\beta}$, where $\alpha$ and $\beta$ are the roots of a quadratic polynomial with integer coefficients. The polynomial is then $x^2-(\alpha+\beta)x+\alpha\beta$, and the recurrence is $a_{n+2}=(\alpha+\beta)a_{n+1}-\alpha\beta a_n$, so the sequence consists of integers even if $\alpha$ and $\beta$ are irrational. For the Fibonacci numbers, Lucas's form is given by the Binet formula, $F_n=\frac{(\frac{1+\sqrt{5}}2)^n-(\frac{1-\sqrt{5}}2)^{n}}{\sqrt{5}}$. Based on his work with divisibility sequences, Lucas devised primality tests that helped establish the primality of $M_{127}$. The search for large Mersenne primes is ongoing, now with the help of the Internet. But what is behind the peculiar divisibility property? This question leads from numerical sequences to sequences of polynomials. It still makes sense to talk about divisibility sequences of polynomials, e.g. $x^n$ is one. Slightly less obviously, so is $x^n-1$, because $x^{dm}-1$ has $x^m-1$ as a factor by the difference of powers formula. The Fibonacci polynomials are a more non-trivial example \cite{WP}. Up to normalization by $\frac1{\alpha-\beta}$, the Lucas sequences are products of $\beta^n$ and $(\frac{\alpha}{\beta})^n-1$, which are obtained by choosing a value for $x$ in $x^n$ and $x^n-1$, respectively. 
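Both divisibility claims are easy to spot-check by machine; here is a short sketch (Python, with small index ranges chosen purely for illustration):
\begin{verbatim}
# Check that the Fibonacci and Mersenne numbers form divisibility
# sequences: if m divides n, then a_m divides a_n.
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def mersenne(n):
    return 2**n - 1

for n in range(1, 200):
    for m in range(1, n):
        if n % m == 0:
            assert fib(n) % fib(m) == 0
            assert mersenne(n) % mersenne(m) == 0
print("divisibility checks passed")
\end{verbatim}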
More generally, the product of two linear divisibility sequences, of integers or polynomials, is again a linear divisibility sequence, so it is natural to consider products $a_n=A\gamma^n\prod_i(\gamma_i^n-1)$ that generalize Lucas sequences. One such example was studied by Lehmer \cite{Leh}, who refined and extended Lucas's primality tests, and more by Ward \cite{Wd38}, back in the 1930s. A general theory of such sequences was developed recently in \cite{RWG}, where one can also read more about their history and applications in algebra and number theory. Other old and new results on linear divisibility sequences and their applications can be found in the book \cite{EPS}. But $x^n$ and $x^n-1$ are just $x$ and $x-1$ composed with $x^n$. Classical authors thought that they generate ``almost all" linear divisibility sequences, with some special exceptions \cite{Wd38}, but they are themselves very special. One such exception, generated by composing $f(x)=(x+1)(x^m-1)$, $m$ odd, with $x^n$, is considered in \cite{Oos} (but was probably known already to Ward). What other $f(x^n)$, with a polynomial $f$, are divisibility sequences? \begin{definition} We call a polynomial $f(x)$ with complex coefficients a {\it divisibility polynomial} if $f(x^n)$ is a divisibility sequence, i.e., $f(x^m)|f(x^n)$ when $m|n$. It is enough that $f(x)|f(x^n)$ for all $n\in\mathbb{N}$, since then $f(x^m)|f(x^n)=f((x^m)^d)$ for $n=md$. \end{definition} \noindent The divisibility polynomials are called Lucas polynomials in \cite{RWG}, but that name is already in use for other purposes. In 1988 B\'ezivin, Peth\"o and van der Poorten proved more generally \cite{BPP}, see also \cite{Oos}, \cite{RWG}, that non-degenerate (in a precise sense) linear divisibility sequences of integers are always of the form \begin{equation}\label{BPPForm} a_n=An^k\prod_if_i(\gamma_i^n), \end{equation} where $A$, $\gamma_i$ are complex numbers, $k$ is a nonnegative integer, and $f_i$ are divisibility polynomials. Not all sequences of this form are integer-valued, of course, and the integrality conditions are still not known in general. But we will fully describe the divisibility polynomials (Theorem \ref{DivPolyChar}), and explain where the ``exceptions" come from, and why they are more of a rule than an exception. For example, if the positive integers $N_i$ are pairwise relatively prime then \begin{equation}\label{DisDiv} f(x)=\frac{(x^{N_1}-1)\cdots(x^{N_k}-1)}{(x-1)^{k-1}} \end{equation} is a divisibility polynomial. When $k=1$ we get the classical $f(x)=x-1$, and when $k=2$, $N_1=2$ and $N_2=m$ odd, we get the ``exception" $f(x)=(x+1)(x^m-1)$. The divisibility polynomials have an interesting factorization theory (Sections \ref{S2'}-\ref{S3}), and can be nicely described as certain products of cyclotomic polynomials. The cyclotomic (literally, circle dividing) polynomials are classical, and also have ancient roots. Gauss introduced them in 1796 to solve an old Greek problem of inscribing regular polygons into the circle. Our description will be visual: we will introduce Hasse diagrams with additional labels that show how to build divisibility polynomials from cyclotomic polynomials (Section \ref{S2}). It turns out that Lucas's integer divisibility sequences generalize accordingly. 
Namely, if $f(x,y):=y^{\textrm{deg}\,(f)}f(x/y)$ is a homogenized divisibility polynomial, and $\alpha+\beta,\alpha\beta$ are integers, then $\frac{f(\alpha^n,\beta^n)}{f(\alpha,\beta)}$ is a divisibility sequence of integers (Theorem \ref{IntSeq}). When $f(x)=x-1$, or, more generally, $x^N-1$, we get Lucas's sequences. But for other $f(x)$, such as in \eqref{DisDiv}, the resulting sequences are {\it non-classical} -- they are not even products of Lucas's sequences. Nonetheless, they are closely related to them (Section \ref{S3'}). For the Fibonacci values of $\alpha,\beta$, the simplest one is $\frac12L_nF_{3n}$, where $L_n$ are the Lucas numbers. More generally, from the template \eqref{DisDiv} for $f(x)$ we get the sequence $$ \frac{F_{N_{1}n}/F_{N_{1}}\cdots F_{N_{k}n}/F_{N_{k}}}{(F_n)^{k-1}}\,. $$ We will also characterize polynomials $f(x)$ with the property of {\it strong divisibility} (Theorem \ref{StrongDiv}), namely those satisfying $\text{gcd}(f_m,f_n)=f_{\text{gcd}(m,n)}$, where $f_n(x)=f(x^n)$. A new connection between strong divisibility and cyclotomic polynomials was recently discovered in \cite{BFLS}; other known connections are discussed in \cite{Kimb}. This is another property that the Mersenne and Fibonacci numbers have in common, and it is distinctive of the classical sequences. Under some mild conditions on $\alpha,\beta$, our generalized sequences are strong divisibility sequences if and only if they are classical (Theorem \ref{StDiv}). \section{Cyclotomic polynomials and Hasse diagrams}\label{S2} The divisibility polynomials are clearly very special, but there are more of them than one might think. Let us call a polynomial {\it normal} if it has leading coefficient $1$ and a non-zero constant term. Then any polynomial is of the form $Cx^sg(x)$ for some constant $C$, integer $s\geq0$, and normal $g(x)$. In this section we will develop diagrams that encode the structure of normal divisibility polynomials. We start with a couple of simple observations. Let $\zeta$ be a root of a divisibility polynomial $f$. Since $f(x)|f(x^n)$ and $f(\zeta)=0$ we have $f(\zeta^n)=0$ for all positive integers $n$. But a polynomial cannot have infinitely many roots, so $\zeta^k=\zeta^l$ for some $k\neq l$. This means that $\zeta$ is either $0$ or $\zeta^n=1$ for some $n\geq1$. Such $\zeta$ are called {\it $n$-th roots of unity}, and they are of the form $\zeta^{k}_n:=e^{\frac{2\pi i}{n}k}$. If $k\,|\,n$ then $\zeta^{k}_n=\zeta^1_{n/k}$ is also an $n/k$-th root of unity. Those roots that are not roots for smaller $n$ are called {\it primitive of order $n$}. By the elementary properties of cyclic groups, those are exactly the ones with $\text{gcd}(n,k)=1$. B\'ezivin, Peth\"o and van der Poorten \cite{BPP} found a necessary and sufficient condition on the set of roots that makes a polynomial a divisibility polynomial. \begin{theorem}\label{DivRoot} Consider a polynomial $f$ with roots of unity as roots. Given a pair of them, let $h,h'$ denote their orders as primitive roots of unity, and $m,m'$ their multiplicities in $f$. Then $f$ is a divisibility polynomial if and only if for any such pair with $h'|h$ we have $m'\geq m$. \end{theorem} \begin{proof}First suppose that the roots of $f(x)$ satisfy the conditions of the theorem. We want to show that $f(x)|f(x^n)$. If $\zeta$ is a root of $f(x)$ of multiplicity $m$ then $\zeta':=\zeta^n$ is also a root, of multiplicity $m'\geq m$. 
Since $(x-\zeta^n)^{m'}$ is a factor of $f(x)$ we have that $(x^n-\zeta^n)^{m'}$ is a factor of $f(x^n)$. But $(x-\zeta)|(x^n-\zeta^n)$, so $(x-\zeta)^m|(x^n-\zeta^n)^{m'}$, as $m'\geq m$. Since this is true for all roots, $f(x)|f(x^n)$. Conversely, let $f$ be a divisibility polynomial, and $\zeta,\zeta'$ be a pair of roots from the statement of the theorem. Since $h'|h$, by the standard properties of cyclic groups, there is $d\in{\mathbb N}$ such that $\zeta'=\zeta^d$. We have $(x-\zeta)^m$ dividing $f(x)$ and hence $f(x^d)$. But $f(x^d)$ splits into factors of the form $x^d-\xi$, where $\xi$ is a root of $f(x)$, and $(x-\zeta)$ must divide at least one of them. This is only possible if $\xi=\zeta^d=\zeta'$, and for $(x-\zeta)^m$ to divide $(x^d-\zeta^d)^{m'}$ we must have $m'\geq m$. \end{proof} \noindent Note that any primitive root of unity of some order is a power of any other primitive root of the same order. So if $\zeta$ is a root of $f(x)$ then so is every other primitive root of the same order, since $f(x)\,|\,f(x^n)$, and, by Theorem \ref{DivRoot}, they all have the same multiplicity. The product of the factors $(x-\zeta)$ over all primitive roots $\zeta$ of order $n$ is called the {\it $n$-th cyclotomic polynomial} $\Phi_n(x)=\prod_{\text{gcd}(n,k)=1}(x-\zeta_n^k)$. It follows that a normal divisibility polynomial $f(x)$ is a product of powers of cyclotomic polynomials. Gauss derived from the definition of cyclotomic polynomials that \begin{equation}\label{AllDiv} x^n-1=\prod_{d|n}\Phi_d(x)\,. \end{equation} This is because any $n$-th root of unity must be primitive of some order that divides $n$. Gauss's formula allows us to find $\Phi_n$ once $\Phi_d$ for $d<n$ are already known, without any recourse to complex numbers, by splitting off $\Phi_n$ from the product on the right. It also shows, by Gauss's lemma, that their coefficients are integers, and the leading coefficient is always $1$. It is easy to find them recursively from \eqref{AllDiv}: $\Phi_1(x)=x-1$, $\Phi_2(x)=x+1$, $\Phi_3(x)=x^2+x+1$, $\Phi_4(x)=x^2+1$, $\Phi_6(x)=x^2-x+1$, etc. What kinds of cyclotomic products are the divisibility polynomials? Suppose $\Phi_{h}(x)$ appears as a factor in a divisibility polynomial. Theorem \ref{DivRoot} tells us that if $d|h$ then $\Phi_{d}(x)$ must also be a factor. In particular, $\Phi_{1}(x)=x-1$ is always a factor. We are led to the following property, known in algebra. \begin{definition} A subset $\Lambda\subset\mathbb{N}$ is called {\it saturated} if $d|h$ and $h\in\Lambda$ imply that $d\in\Lambda$, i.e., if along with any of its elements $\Lambda$ also contains all of its divisors. \end{definition} Divisibility relations among positive integers in a subset can be conveniently pictured by {\it Hasse diagrams}. Given a subset $\Lambda$, at the lowest level of the diagram we place $1$ (if it is in $\Lambda$), at the next level all prime numbers in $\Lambda$ (if any), next up are their pairwise products, and so on, see Figure\,\ref{MultiHasse}a. The edges connect the numbers to the numbers one level up which they divide, so upward paths in the diagram reflect the ``increase" in divisibility. It is convenient to assign the empty diagram to the empty set $\emptyset$. 
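The recursion from \eqref{AllDiv} is easy to carry out with exact integer arithmetic. The following sketch (Python, with dense coefficient lists ordered constant term first, for illustration only) computes $\Phi_n$ by dividing $x^n-1$ by the product of the $\Phi_d$ with $d|n$, $d<n$.
\begin{verbatim}
from functools import lru_cache

def poly_mul(a, b):
    # coefficient lists, lowest degree first
    out = [0]*(len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai*bj
    return out

def poly_divexact(a, b):
    # exact long division by a monic polynomial b
    a = list(a)
    q = [0]*(len(a) - len(b) + 1)
    for k in range(len(q) - 1, -1, -1):
        q[k] = a[k + len(b) - 1]          # b is monic
        for j, bj in enumerate(b):
            a[k + j] -= q[k]*bj
    assert all(c == 0 for c in a[:len(b) - 1])   # zero remainder
    return q

@lru_cache(maxsize=None)
def cyclotomic(n):
    # Gauss's formula: x^n - 1 = prod_{d | n} Phi_d(x)
    num = [-1] + [0]*(n - 1) + [1]        # x^n - 1
    den = [1]
    for d in range(1, n):
        if n % d == 0:
            den = poly_mul(den, cyclotomic(d))
    return tuple(poly_divexact(num, den))

print(cyclotomic(6))    # (1, -1, 1),       i.e. x^2 - x + 1
print(cyclotomic(12))   # (1, 0, -1, 0, 1), i.e. x^4 - x^2 + 1
\end{verbatim}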
\begin{figure}[!ht] \begin{centering} (a)\ \ \ \includegraphics[scale=0.9]{Hasse} \hspace{0.7in} (b)\ \ \ \includegraphics[scale=0.9]{multihasse}\par \end{centering} \caption{\label{MultiHasse}(a) Hasse diagram of $\{1,2,3,4,5,6,10\}$; (b) Hasse diagram of $\Phi_1^3\Phi_2^2\Phi_3^2\Phi_4^2\Phi_5\Phi_6\Phi_{10}$. The circled numbers are the multiplicities.} \end{figure} The Hasse diagrams do not reflect the multiplicities of the factors $\Phi_h$, though. Those can be pictured as additional non-negative integer labels. Theorem \ref{DivRoot} imposes a restriction on these labels. For a product of cyclotomic polynomials $\prod_{h}\big(\Phi_h(x)\big)^{m_h}$ to be a divisibility polynomial, ``less divisible" primitive orders $h$ must have bigger multiplicities $m_h$. Since divisibility ``increases" along the upward paths in a Hasse diagram, the multiplicity labels must not increase when moving up along the edges, Figure\,\ref{MultiHasse}b. To make this more precise, let us restate it in terms of maps. \begin{definition} We call maps $\lambda:{\mathbb N}\to{\mathbb N}\cup\{0\}$ that are non-zero at only finitely many integers {\it multiplicity maps}. And we call them {\it order-reversing} if $h|h'$ implies $\lambda(h)\geq\lambda(h')$. \end{definition} \noindent Given a product of powers of cyclotomic polynomials as above, $\lambda:h\mapsto m_h$ is such a map. And conversely, any multiplicity map $\lambda$ defines a polynomial $$ \Phi_\lambda(x):=\prod_{h\in{\mathbb N}}\big(\Phi_h(x)\big)^{\lambda(h)}. $$ This makes sense since all but finitely many factors have the exponent $0$. If we assign the polynomial $1$ to the zero map (and the empty diagram) then Theorem \ref{DivRoot} can be restated as follows. \begin{corollary}\label{1-1Cor} There is a $1$-$1$ correspondence between order-reversing multiplicity maps, multiplicity-labeled Hasse diagrams of finite saturated sets, and the normal divisibility polynomials. \end{corollary} There is also a nice algebraic way to describe finite saturated sets that will come in handy later. If $\Lambda$ is finite there must be $n_1,\dots,n_k\in\Lambda$ that do not divide any of its other elements. They are maximal in terms of divisibility, and they are, obviously, uniquely determined by $\Lambda$. But for finite saturated sets the converse is also true, because upward paths in a finite Hasse diagram must terminate at a maximal element. \begin{corollary}\label{<n>} Finite saturated subsets of ${\mathbb N}$ are of the form: \begin{equation} \langle n_1,\dots,n_k\rangle:=\{d\in{\mathbb N}\,\big|\,d\,|\,n_i\text{ for some }i\}\,. \end{equation} \end{corollary} \noindent In particular, $\langle n\rangle$ is the set of all (positive) divisors of $n$. Figures \ref{MultiHasse} and \ref{MultDecomp} contain the saturated sets $\langle 4,6,10\rangle$, $\langle pq\rangle$ and $\langle p,q,r\rangle$, for distinct primes $p,q,r$. Finally, some notational conventions. Recall that the indicator function of $\Lambda$ is: $$ 1_\Lambda(h):=\begin{cases}1,& h\in\Lambda\\0,& h\not\in\Lambda\,.\end{cases} $$ It is a multiplicity map, and we abbreviate $\Phi_{1_\Lambda}$ as $\Phi_\Lambda$. 
This is simply the product $\prod_{d\in\Lambda}\Phi_d(x)$, and it means that $\Phi_{\langle n_1,\dots,n_k\rangle}(x)$ is the product of $\Phi_d(x)$, where $d$ runs over the set $\langle n_1,\dots,n_k\rangle$ of divisors of at least one of the positive integers $n_1,\dots,n_k$. They are natural generalizations of $\Phi_{\langle n\rangle}(x)=x^n-1$, and will play a major part in what follows. \section{Factorization of divisibility polynomials}\label{S2'} With the Hasse diagrams at hand, it is easier to take stock of the divisibility polynomials. We now realize that there are many more of them than contemplated by the classical authors \cite{Wd38}. A natural idea is to break them up into simpler pieces in some way. Note that a product of divisibility polynomials is again a divisibility polynomial; in other words, divisibility polynomials form a set closed under multiplication. In such sets a natural way to decompose is to factor. Integers factor into primes, (general) polynomials factor into linear factors (over the complex numbers), etc. And those do not factor any further. \begin{definition} A non-constant divisibility polynomial is called decomposable if it is the product of two non-constant divisibility polynomials. Otherwise, it is called indecomposable. \end{definition} We do not yet know which divisibility polynomials are indecomposable, but here again we can make use of the diagrams. The ones associated with finite saturated sets look simpler than those with additional multiplicity labels on them. The idea of decomposing into them is pictured in Figure \ref{MultDecomp}a. We represent multiplicities by nodes stacked over the diagram, and then slice the layers of nodes horizontally. Each slice projects to a finite saturated set that defines a factor. This idea turns out to work. Recall that $\Phi_\lambda$ denotes the divisibility polynomial corresponding to the multiplicity map $\lambda$. \begin{figure}[!ht] \begin{centering} (a)\ \ \ \includegraphics[width=0.21\textwidth]{Slice} \hspace{0.7in} (b)\ \ \ \includegraphics[width=0.22\textwidth]{MultDecomp}\par \end{centering} \caption{\label{MultDecomp} (a) Slicing decomposition of $\Phi_1^3\Phi_p^2\Phi_q^2\Phi_{pq}=\Phi_{\langle 1\rangle}\Phi_{\langle p,q\rangle}\Phi_{\langle pq\rangle}$; (b) Divisibility polynomial $\Phi_1^2\Phi_p\Phi_q^2\Phi_{r}$ with a non-unique decomposition ($p,q,r$ are distinct primes).} \end{figure} \begin{theorem}[{\bf Slicing factorization}]\label{Slice} A normal divisibility polynomial is indecomposable if and only if each of its cyclotomic factors has multiplicity $1$. It suffices that $\Phi_1(x)$ has multiplicity $1$. In other words, indecomposable normal divisibility polynomials are $\Phi_{\langle n_1,\dots,n_k\rangle}$ with $n_i\nmid n_j$ for $i\neq j$. Any normal divisibility polynomial $\Phi_\lambda$ factorizes into indecomposables: \begin{equation}\label{Lambda_j} \displaystyle{\Phi_\lambda=\prod_{j=1}^{\lambda(1)}\Phi_{\Lambda_j}}\textrm{, where }\Lambda_j:=\{m\in{\mathbb N}\,|\,\lambda(m)\geq j\}\,. 
\end{equation} \end{theorem} \begin{proof} Saying that all cyclotomic factors of $\Phi_\lambda$ have multiplicity $1$ is equivalent to saying that $\lambda=1_\Lambda$ for a finite saturated set $\Lambda$, or that $\lambda$ is an order-reversing multiplicity map with $\lambda(1)=1$. Indeed, since $1|m$ and $\lambda$ is order-reversing, $\lambda(m)\leq\lambda(1)$ for any $m\in{\mathbb N}$. If $\lambda(1)=1$ then $\lambda(m)$ is $0$ or $1$. If $\lambda(1)=0$ then $\lambda$ is the zero map and $\Phi_\lambda=1$, otherwise it is $1_\Lambda$ for some finite saturated set $\Lambda$. Moreover, if $\Phi_\lambda=\Phi_\mu\Phi_\nu$ were a non-trivial factorization then $\lambda(1)=\mu(1)+\nu(1)\geq2$, so none exists if $\lambda(1)=1$. Conversely, if $\lambda(1)\geq2$ we can factorize $\Phi_\lambda$ explicitly, see Figure \ref{MultDecomp}a. Let $\Lambda_j$ be the projection of the $j$-th layer of nodes above the diagram. Clearly, $\lambda=\sum_{j=1}^{\lambda(1)}1_{\Lambda_j}$, hence $\Phi_\lambda=\prod_{j=1}^{\lambda(1)}\Phi_{\Lambda_j}$. Moreover, if $m\in\Lambda_j$ and $d|m$ then $\lambda(d)\geq\lambda(m)\geq j$, so $d\in\Lambda_j$, and $\Lambda_j$ is a saturated set by definition. \end{proof} Although the slicing factorization has much to recommend it, it is not unique. Consider the multiplicity map depicted in Figure\,\ref{MultDecomp}b. Its slicing factorization is $\Phi_{\langle q\rangle}\Phi_{\langle p,q,r\rangle}$, but $\Phi_{\langle p,q\rangle}\Phi_{\langle q,r\rangle}$ also factorizes it into indecomposables. The situation is not unlike the simple example of multiplication of $3n+1$ numbers $1,4,7,10,13,16,19,22$..., whose set is also closed under multiplication. We have $4\cdot55=10\cdot22$, and $4,22,10,55$ do not decompose into other $3n+1$ numbers. However, it is a simple exercise to show that if a $3n+1$ number has a $3n+1$ factor, then the quotient is also a $3n+1$ number. So such a number decomposes into a product of $3n+1$ numbers. Even this weaker property fails for normal divisibility polynomials. Every non-trivial one has $\Phi_{\langle 1\rangle}(x)=x-1$ as a factor, but plenty of them are indecomposable. For example, $\Phi_{\langle 2\rangle}(x)=x^2-1=(x-1)(x+1)$ is divisible by $\Phi_{\langle 1\rangle}(x)$, but does not factor into divisibility polynomials. The set of divisibility polynomials shows just how far multiplicatively closed sets can be from the positive integers, where the prime factorization is unique. \section{Compression of indecomposable polynomials}\label{S3} Factorization of divisibility polynomials is a bit of a disappointment, even aside from non-uniqueness. Although $f_m(x)=x^m-1$ are all indecomposable, in a sense, they bring little new compared to $x-1$. Their divisibility sequences $f_m(x^n)=x^{mn}-1$ are rather boring subsequences of $x^{n}-1$, and if we pick a numerical value for $x$ the resulting numerical sequences are even more alike -- the picked value is simply replaced by its $m$-th power. We need to weed out this redundancy. \begin{definition}\label{PhixpDef} A divisibility polynomial $f$ is called compressible into a divisibility polynomial $g$ if $f(x)=g(x^m)$ for some integer $m\geq2$, and it is called incompressible if no such $g$ exists. 
\end{definition} To find out which $\Phi_M(x)$ can be compressed, i.e., written as $\Phi_\Lambda(x^n)$, we need to find out how $\Phi_\Lambda(x^n)$ {\it de}compresses. We can start by decompressing their cyclotomic factors. One special case is straightforward by inspection of the roots on both sides: \begin{equation}\label{Phixp} \Phi_m(x^p)=\begin{cases}\Phi_{mp}(x),\text{ if } p\,|\,m\\ \Phi_{m}(x)\Phi_{mp}(x),\text{ if } p\!\not|\,m\,,\end{cases} \end{equation} where $m$ is any positive integer, and $p$ is prime. For divisibility polynomials we can get a nicer formula, one that does not split into cases. Recall that $\langle n\rangle:=\{d\in{\mathbb N}\,\big|\,d\,|\,n\}$, and $ST$ denotes the set of products of elements from sets $S$ and $T$. So $\langle n\rangle\Lambda$ consists of products of divisors of $n$ and elements of $\Lambda$. \begin{theorem}[{\bf Decompression formula}]\label{TDecomp} Let $n\in{\mathbb N}$ and $\Lambda\subset{\mathbb N}$ be a finite saturated set. Then \begin{equation}\label{Decomp} \Phi_\Lambda(x^n)=\Phi_{{\langle n\rangle\Lambda}}(x)=\prod_{d\in{\langle n\rangle\Lambda}}\Phi_d(x). \end{equation} \end{theorem} \begin{proof} By inspection, $\Phi_\Lambda(x^n)$ is a normal divisibility polynomial. Moreover, $x-1$ has multiplicity $1$ as a factor of $\Phi_\Lambda(x)$. No term of the form $x^n-\zeta$ is divisible by $x-1$, except when $\zeta=1$. Moreover, $x-1$ has multiplicity $1$ in $x^n-1$, and, therefore, $x-1$ has multiplicity $1$ in $\Phi_\Lambda(x^n)$ as well. So, by Theorem \ref{Slice}, $\Phi_\Lambda(x^n)$ is indecomposable, and $\Phi_\Lambda(x^n)=\Phi_{\Lambda^{(n)}}(x)$ for some finite saturated set, which, for the time being, we will denote $\Lambda^{(n)}$. It remains to show that $\Lambda^{(n)}=\langle n\rangle\Lambda$. First suppose that $n=p$ is prime. If $d\in\Lambda^{(p)}$ then $\Phi_d(x)$ is a factor of $\Phi_\Lambda(x^p)$. We see from \eqref{Phixp} that those are of the form $\Phi_{m}(x)$ and $\Phi_{mp}(x)$ with $m\in\Lambda$, so $d\in\langle p\rangle\Lambda$. Conversely, if $m\in\Lambda$ we need to show that $m,mp\in\Lambda^{(p)}$. That $\Phi_{mp}(x)$ is a factor of $\Phi_m(x^p)$ is immediate from \eqref{Phixp}, as it is for $\Phi_{m}(x)$ when $p\nmid m$. But if $p\,|\,m$ then $\frac{m}{p}\in\Lambda$ by the saturation property, and $\Phi_{m}(x)$ is always a factor of $\Phi_{\frac{m}{p}}(x^p)$. Hence, $m\in\Lambda^{(p)}$ also in this case. Thus, $\Lambda^{(p)}=\langle p\rangle\Lambda$ in both cases. To finish the proof, we need a simple observation about $\langle mn\rangle$. If $d|mn$ then, factorizing all three into primes, we can find $d_1|m$ and $d_2|n$ such that $d=d_1d_2$. Conversely, if $d_1|m$ and $d_2|n$ then $d_1d_2|mn$. Summarizing, $\langle mn\rangle=\langle m\rangle\langle n\rangle$. The general case now follows by induction on the number of prime factors in $n=p_1\cdots p_N$ (some $p_i$ may repeat), since $\Phi_\Lambda(x^{(p_1\cdots p_{k})p_{k+1}})=\Phi_{\langle p_{k+1}\rangle\Lambda}(x^{p_1\cdots p_{k}})$ and $\langle p_{k+1}\rangle\langle p_1\cdots p_{k}\rangle=\langle p_1\cdots p_{k+1}\rangle$. 
\noindent A warning: even if $d$ can be represented as $d_1d_2$, with $d_1|n$ and $d_2\in\Lambda$, in several different ways, $\Phi_d(x)$ still enters the product in \eqref{Decomp} only once. \begin{figure}[!ht] \begin{centering} \includegraphics[scale=0.9]{DecomDiag} \par\end{centering} \caption{\label{DecomDiag}Decompression diagrams for $\Phi_{\langle1\rangle}(x^{pq})$, $\Phi_{\langle p,q\rangle}(x^r)$ and $\Phi_{\langle p,q\rangle}(x^{pq})$, where $p,q,r$ are distinct primes. Adjoined edges (possibly adjoined twice) are dashed; additional edges are dotted.} \end{figure} When $\Lambda=\langle1\rangle=\{1\}$ the decompression formula reduces to Gauss's formula $x^n-1=\prod_{d\,|\,n}\Phi_d(x)$. Geometrically, the Hasse diagram of $\langle n\rangle\Lambda$ is obtained by adjoining a copy of $\langle n\rangle$ to every node of $\Lambda$ (or, equivalently, a copy of $\Lambda$ to every node of $\langle n\rangle$), and possibly adding extra edges according to the diagram drawing rules, see Figure\,\ref{DecomDiag}. This suggests that to compress a diagram we need to look for a subdiagram in the shape of $\Lambda$, to which identical diagrams in the shape of $\langle n\rangle$ are attached. This is a non-trivial search for large diagrams. \begin{figure}[!ht] \begin{centering} \includegraphics[scale=0.8]{Incomp} \par\end{centering} \caption{\label{Incomp}Incompressible diagrams; $p,q,r$ are distinct primes.} \end{figure} Fortunately, there is a much simpler way to detect compressibility. The reader may try to guess it by reflecting on the difference between the diagrams in Figures \ref{DecomDiag} and \ref{Incomp}. This is one of those cases where tricky geometry is streamlined by simple algebra. Since one can easily show that $\langle d\rangle\langle n_1,\dots,n_k\rangle=\langle dn_1,\dots,dn_k\rangle$ (by the same argument as for $\langle m\rangle\langle n\rangle=\langle mn\rangle$), we can rewrite \eqref{Decomp} as $$ \Phi_{\langle m_1,\dots,m_k\rangle}(x^d)=\Phi_{\langle dm_1,\dots,dm_k\rangle}(x). $$ So $\Phi_{\langle n_1,\dots,n_k\rangle}$ is compressible if and only if the $n_i$ have a non-trivial common divisor. \begin{corollary} $\Phi_{\langle n_1,\dots,n_k\rangle}(x)$ is incompressible if and only if $\textup{gcd}(n_1,\dots,n_k)=1$. Any indecomposable normal divisibility polynomial is of the form $\Phi_{\langle n_1,\dots,n_k\rangle}(x^m)$, with $m\geq1$ and an incompressible $\Phi_{\langle n_1,\dots,n_k\rangle}(x)$. \end{corollary} Now it is easy to give explicit ``exceptional'' examples of divisibility polynomials that are distinctive. If $n_1, n_2$ are relatively prime then $x^{n_1}-1=\prod_{d\,|\,n_1}\Phi_d(x)$ and $x^{n_2}-1=\prod_{d\,|\,n_2}\Phi_d(x)$ have only one common cyclotomic factor, $\Phi_1(x)=x-1$, so $$ \Phi_{\langle n_1,n_2\rangle}(x)=\frac{(x^{n_1}-1)(x^{n_2}-1)}{x-1}\,. $$ Similarly, if $\textrm{gcd}(n_1,n_2,n_3)=1$ then \begin{equation}\label{Phi3Prime} \Phi_{\langle n_1,n_2,n_3\rangle}(x)=\frac{(x^{n_1}-1)(x^{n_2}-1)(x^{n_3}-1)(x-1)}{(x^{\textrm{gcd}(n_1,n_2)}-1)(x^{\textrm{gcd}(n_2,n_3)}-1)(x^{\textrm{gcd}(n_3,n_1)}-1)}\,.
\end{equation} In general, one can use the inclusion-exclusion formula from combinatorics to express $\Phi_{\langle n_1,\dots,n_k\rangle}(x)$ for any $k$. When $\textrm{gcd}(n_i,n_j)=1$ for $i\neq j$ it simplifies to \begin{equation}\label{PhikPrime} \Phi_{\langle n_1,\dots,n_k\rangle}(x)=\frac{(x^{n_1}-1)\cdots(x^{n_k}-1)}{(x-1)^{k-1}}\,. \end{equation} Let us summarize what we discovered about the structure of the divisibility polynomials. \begin{theorem}\label{DivPolyChar} Any divisibility polynomial $f(x)$ is of the form $f(x)=Cx^sg(x)$, where $C$ is a numerical constant, $s$ is a positive integer, and $g(x)$ is a product of factors of the form $\Phi_{\langle n_1,\dots,n_k\rangle}(x^d)$, with integers $d\geq0,n_i\geq1$, $n_i\nmid n_j$ for $i\neq j$, and $\textup{gcd}(n_1,\dots,n_k)=1$. \end{theorem} \section{Back to Mersenne and Fibonacci}\label{S3'} We now have a characterization of the divisibility polynomials, and it is time to apply the fruits of our labor. To fully characterize linear divisibility sequences of integers, it ``only'' remains to find conditions on $A$ and $\gamma_i$ in \eqref{BPPForm} that would produce integer sequences. Unfortunately, this is a big ``only'' -- necessary and sufficient integrality conditions are not known even when all divisibility polynomials are $x$ or $x-1$ \cite{RWG}. We will find sufficient integrality conditions when there is a single divisibility polynomial factor, and construct many non-classical relatives of the Fibonacci and Mersenne numbers. It will be convenient to represent our sequences in a different form than \eqref{BPPForm}, using a modification of the cyclotomic polynomials into polynomials in two variables. The modification is common in algebra, and is called {\it homogenization}: given a polynomial $f(x)$ we set $f(x,y):=y^{\textrm{deg}\,(f)}f(x/y)$. It leaves $x$ as is, but turns $x-1$ into $x-y$, and $x^n-1$ into $x^n-y^n$. If $f(x)$ is a product of polynomials then its homogenization is the product of their homogenizations. In particular, $\Phi_\Lambda(x,y):=\prod_{d\in\Lambda}\Phi_d(x,y)$, where $\Phi_d(x,y)$ are the homogenized cyclotomic polynomials: $\Phi_1(x,y)=x-y$, $\Phi_2(x,y)=x+y$, $\Phi_3(x,y)=x^2+xy+y^2$, $\Phi_4(x,y)=x^2+y^2$, $\Phi_6(x,y)=x^2-xy+y^2$, etc. A key observation is that $\Phi_d(x,y)$ are symmetric in $x$ and $y$ for $d>1$. This is because $\Phi_d(x)$ for $d>1$ are {\it palindromic}: their coefficients read the same forwards and backwards. To prove this, note that $\frac{x^n-1}{x-1}$ is palindromic (all coefficients are $1$), and that if $f(x)g(x)=h(x)$, where $f$ and $h$ are palindromic, then so is $g$. Dividing Gauss's formula \eqref{AllDiv} by $x-1$ we have \begin{equation}\label{AllDivx1} \frac{x^n-1}{x-1}=\Phi_n(x)\prod_{d|n,1<d<n}\Phi_d(x)\,. \end{equation} Assuming $\Phi_d(x)$ are palindromic for $d<n$, $\Phi_n(x)$ is a quotient of palindromic polynomials, hence itself palindromic. By induction, $\Phi_n(x)$ are palindromic for all $n>1$. Now it is easy to figure out the secret behind $F_n=\frac{\alpha^n-\beta^n}{\alpha-\beta}$ being integers, despite the irrational values $\alpha=\frac{1+\sqrt{5}}2$ and $\beta=\frac{1-\sqrt{5}}2$.
By the fundamental theorem on symmetric polynomials, any symmetric polynomial in $x$ and $y$ with integer coefficients is also a polynomial with integer coefficients in $x+y$ and $xy$; the quotient $\frac{x^n-y^n}{x-y}$ is such a polynomial, and $\alpha+\beta=1$, $\alpha\beta=-1$ are integers. We can generalize this integrality argument as follows. \begin{theorem}\label{IntSeq} Let $\alpha,\beta$ be complex numbers such that $\alpha+\beta$ and $\alpha\beta$ are integers, and $f(x)$ be a normal divisibility polynomial. Then $A_n:=\frac{f(\alpha^n,\beta^n)}{f(\alpha,\beta)}$ is a linear divisibility sequence of integers. \end{theorem} \begin{proof} Since normal divisibility polynomials are products of indecomposables, and products of integer sequences are integer sequences, it suffices to consider an indecomposable $f=\Phi_\Lambda$. By the decompression formula, $\frac{\Phi_\Lambda(x^n)}{\Phi_\Lambda(x)}=\Phi_{{\langle n\rangle\Lambda}}(x)/\Phi_\Lambda(x)$ is a product of cyclotomic polynomials. In particular, it is a polynomial with integer coefficients. Therefore, so is $$ \frac{\Phi_\Lambda(x^n,y^n)}{\Phi_\Lambda(x,y)}=\frac{\prod_{d\in\Lambda}\Phi_d(x^n,y^n)}{\prod_{d\in\Lambda}\Phi_d(x,y)}= \frac{x^n-y^n}{x-y}\prod_{d\in\Lambda,\,d>1}\frac{\Phi_d(x^n,y^n)}{\Phi_d(x,y)}\,. $$ Moreover, the first factor is symmetric in $x$ and $y$, because it is the homogenization of the palindromic polynomial $\frac{x^n-1}{x-1}$, and the second factor is also symmetric, because the $\Phi_d(x,y)$ are for $d>1$. By the fundamental theorem on symmetric polynomials, the product is a polynomial with integer coefficients in $x+y$ and $xy$, and its values are integers for $x=\alpha,y=\beta$. By the same reasoning, $\frac{\Phi_\Lambda(\alpha^n,\beta^n)}{\Phi_\Lambda(\alpha^m,\beta^m)}$ is an integer when $m|n$, so $A_m|A_n$ in ${\mathbb Z}$. \end{proof} As a first example, consider the simplest non-classical divisibility polynomial $$ f=\Phi_{\langle 2,3\rangle}=\Phi_1\Phi_2\Phi_3=(x-y)(x+y)(x^2+xy+y^2)=(x+y)(x^3-y^3)\,. $$ The corresponding divisibility sequence is $$ A_n=\frac{x^n+y^n}{x+y}\cdot\frac{x^{3n}-y^{3n}}{x^3-y^3}=B_n\cdot C_n\,. $$ Both factors satisfy linear recurrences of the second order, and can be generated easily:\\ \hspace*{1in} $B_0=\frac2{x+y},\ B_1=1,\ B_{n+2}=(x+y)B_{n+1}-xyB_n;$\\ \hspace*{1in} $C_0=0,\ C_1=1,\ C_{n+2}=(x^3+y^3)C_{n+1}-x^3y^3C_n.$\\ For the Fibonacci values $\alpha,\beta$, with $\alpha+\beta=1$ and $\alpha\beta=-1$, the $B_n$ are none other than the Lucas numbers $L_n$, and $C_n$ is a normalized subsequence of the Fibonacci numbers: $\frac{F_{3n}}{F_3}=\frac12F_{3n}$. So $A_n=\frac12L_nF_{3n}$, see Table \ref{LucFib}. This construction can be generalized by picking any odd number $N$ instead of $3$; the resulting sequence is $A_n=L_nF_{N\!n}/F_N$, which satisfies a linear recurrence of order four. Some of these sequences were entered into the On-Line Encyclopedia of Integer Sequences by Bala in 2014 \cite{Bala}.
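The values in Table \ref{LucFib} are easy to reproduce with a few lines of code. Here is a minimal sketch (plain Python; the cut-off $N=30$ is an arbitrary choice of ours) that generates $A_n=B_nC_n$ from the two recurrences above, specialized to the Fibonacci values $x+y=1$, $xy=-1$, and checks the divisibility property $A_m\,|\,A_n$ whenever $m\,|\,n$.
\begin{verbatim}
# For x + y = 1, xy = -1:  B_{n+2} = B_{n+1} + B_n   (Lucas numbers),
#                          C_{n+2} = 4*C_{n+1} + C_n (since x^3+y^3 = 4, x^3y^3 = -1).
N = 30
B = [2, 1]            # B_0 = 2/(x+y) = 2, B_1 = 1
C = [0, 1]            # C_0 = 0,           C_1 = 1
for _ in range(N - 1):
    B.append(B[-1] + B[-2])
    C.append(4 * C[-1] + C[-2])
A = [b * c for b, c in zip(B, C)]

print(A[:6])          # [0, 1, 12, 68, 504, 3355]
for n in range(1, N + 1):
    for m in range(1, n):
        if n % m == 0:
            assert A[n] % A[m] == 0      # A is a divisibility sequence
\end{verbatim}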
\begin{table}[!ht] \centering \begin{tabular}{|l|llllllllll|} \hline $n$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\ \hline $L_n$ & 2 & 1 & 3 & 4 & 7 & 11 & 18 & 29 & 47 & 76 \\ \hline $F_{3n}/F_3$ & 0 & 1 & 4 & 17 & 72 & 305 & 1,292 & 5,473 & 23,184 & 98,209\\ \hline $A_n$ & 0 & 1 & 12 & 68 & 504 & 3,355 & 23,256 & 158,717 & 1,089,648 & 7,463,884\\ \hline \end{tabular} \caption{\label{LucFib} Non-classical linear divisibility sequence constructed from the Fibonacci numbers.} \end{table} \noindent A recurrent sequence of order six comes from $\Phi_{\langle 3,4\rangle}(x)=(x+1)(x^2+1)(x^3-1)$. For the Fibonacci values, we find $A_n=L_n\frac{L_{2n}}{L_2}\frac{F_{3n}}{F_3}=\frac16L_nL_{2n}F_{3n}$. Note that the extra factor $\frac{L_{2n}}{L_2}=2/3,1,7/3,6,47/3,41,\dots$ not only is not a divisibility sequence, but is not even integer-valued. To generalize them further, let us define $$ S_n^\Lambda(\alpha,\beta):=\frac{\Phi_\Lambda(\alpha^n,\beta^n)}{\Phi_\Lambda(\alpha,\beta)}\,. $$ We will abbreviate $S_n^{\langle 1\rangle}$ as $S_n$, so that $M_n=S_n(2,1)$ and $F_n=S_n\big(\frac{1+\sqrt{5}}2,\frac{1-\sqrt{5}}2\big)$, and we suppress $\alpha,\beta$ from the notation when they are unimportant or understood. Any second order recurrent sequence with $S_0=0$ and $S_1=1$ is of this form, with $\Lambda=\langle 1\rangle$ and suitable $\alpha,\beta$. When $\Lambda=\langle N\rangle$ we get normalized subsequences of $S_n$, namely $S_n^{\langle N\rangle}=S_{N\!n}/S_{N}$. More generally, we can use the expression \eqref{PhikPrime} for $\Phi_\Lambda$. In the homogenized form, when $\textrm{gcd}(N_i,N_j)=1$ for $i\neq j$, we have: $$ \Phi_{\langle N_1,\dots,N_k\rangle}(x,y)=\frac{(x^{N_1}-y^{N_1})\cdots(x^{N_k}-y^{N_k})}{(x-y)^{k-1}}\,. $$ Therefore, \begin{equation}\label{SnNk} S_n^{\langle N_1,\dots,N_k\rangle}=\frac{S_{N_{\!1}n}/S_{N_{\!1}}\cdots S_{N_{\!k}n}/S_{N_{\!k}}}{(S_n)^{k-1}}\,. \end{equation} We recover the sequences $A_n=L_nF_{N\!n}/F_N$, when $S_n=F_n$, from the well-known identity $L_n=F_{2n}/F_n$ and $F_2=1$. We leave it to the reader to produce more general sequences based on expressions like \eqref{Phi3Prime}. When $\Lambda\neq\langle N\rangle$ these sequences are non-classical, and not covered by the extension of Lucas's theory developed in \cite{RWG}. Yet, all sequences from \eqref{SnNk} are divisors of a product of classical sequences. This is not accidental -- it is true for all non-degenerate linear divisibility sequences by a main result of \cite{BPP}. \section{Strong divisibility}\label{S4} The Fibonacci and Mersenne numbers have even more in common than being divisibility sequences. They are both {\it strong divisibility} sequences: $\textrm{gcd}(a_n,a_m)=a_{\textrm{gcd}(n,m)}$. This follows by a clever application of the Euclidean algorithm, for example. We will show that this property distinguishes them from their non-classical generalizations. Let us start with the simplest non-classical sequence $f_n=\Phi_{\langle2,3\rangle}(x^n)$.
By the decompression formula,\\ \hspace*{1in} $f_1=\Phi_1\Phi_2\Phi_3$\\ \hspace*{1in} $f_2=\Phi_1\Phi_2\Phi_3\Phi_4\Phi_6$\\ \hspace*{1in} $f_3=\Phi_1\Phi_2\Phi_3\Phi_6\Phi_9$\,.\\ \noindent Since $\Phi_d$ with different $d$ have no common roots, $\textrm{gcd}(f_2,f_3)=\Phi_1\Phi_2\Phi_3\Phi_6\neq\Phi_1\Phi_2\Phi_3=f_1=f_{\textrm{gcd}(2,3)}$, so this sequence is not a strong divisibility sequence. It is natural to ask when $\Phi_\Lambda(x^n)$ is a strong divisibility sequence. Recall that $\langle n\rangle$ denotes the set of all positive divisors of $n$, and $ST$ the set of products of elements from sets $S$ and $T$. Generalizing the above example, we see that the common cyclotomic factors of $\Phi_\Lambda(x^m)=\Phi_{\langle m\rangle\Lambda}(x)$ and $\Phi_\Lambda(x^n)=\Phi_{\langle n\rangle\Lambda}(x)$ are those with indices from $\langle m\rangle\Lambda\cap\langle n\rangle\Lambda$. Therefore, $\Phi_\Lambda(x^n)$ is a strong divisibility sequence if and only if \begin{equation}\label{StDivSet} \langle m\rangle\Lambda\cap\langle n\rangle\Lambda=\langle\textrm{gcd}(m,n)\rangle\Lambda\,. \end{equation} But we can give a much more explicit characterization. \begin{theorem}\label{StrongDiv} Let $\Lambda\subset{\mathbb N}$ be a finite saturated set. Then $\Phi_\Lambda(x^n)$ is a strong divisibility sequence of polynomials if and only if $\Lambda=\langle N\rangle$ for some $N\geq1$. \end{theorem} \begin{proof}First, suppose $\Lambda=\langle N\rangle$. Then we have $$ \langle m\rangle\langle N\rangle\cap\langle n\rangle\langle N\rangle=\langle mN\rangle\cap\langle nN\rangle =\langle\textrm{gcd}(mN,nN)\rangle, $$ because every common divisor of two numbers is a divisor of their $\textrm{gcd}$, and vice versa. Multiplying every line of the Euclidean algorithm applied to $m,n$ by $N$, we may conclude that $\textrm{gcd}(mN,nN)=\textrm{gcd}(m,n)N$. Therefore, $\langle\textrm{gcd}(mN,nN)\rangle=\langle\textrm{gcd}(m,n)\rangle\langle N\rangle$, and \eqref{StDivSet} holds, so $\Phi_\Lambda(x^n)$ is a strong divisibility sequence. To prove the converse, we will first show that if \eqref{StDivSet} holds, and $m,n\in\Lambda$, then $\textrm{lcm}(m,n)\in\Lambda$. Clearly, $mn\in\langle m\rangle\Lambda\cap\langle n\rangle\Lambda=\langle\textrm{gcd}(m,n)\rangle\Lambda$. Therefore, there are $d\,|\,\textrm{gcd}(m,n)$ and $h\in\Lambda$ such that $mn=dh$. But then, $$ h=\frac{mn}{\textrm{gcd}(m,n)}\cdot\frac{\textrm{gcd}(m,n)}{d}=\textrm{lcm}(m,n)\cdot\frac{\textrm{gcd}(m,n)}{d}, $$ so $\textrm{lcm}(m,n)\,|\,h$. Since $h\in\Lambda$, and $\Lambda$ is saturated, this implies $\textrm{lcm}(m,n)\in\Lambda$. As this is true for any $m,n\in\Lambda$, we have $\Lambda=\langle N\rangle$, where $N=\textrm{lcm}\{h\,|\,h\in\Lambda\}$. \end{proof} \begin{corollary} Every indecomposable divisibility polynomial that generates a strong divisibility sequence is compressible into $x-1$, i.e., it is of the form $x^{N}-1$.
\end{corollary} \noindent In other words, strong divisibility sequences of polynomials are the classical ones. Does this extend to the corresponding integer sequences? The question is subtle because we may get $\pm1$ when substituting values into $\Phi_d(x,y)$, even though the polynomial itself is non-trivial. Barring such degeneracy, we can characterize which of the linear divisibility sequences defined in Section \ref{S3'} are strong divisibility sequences. \begin{theorem}\label{StDiv} Let $\alpha,\beta$ be complex numbers such that $\alpha+\beta$ and $\alpha\beta$ are integers, $\alpha/\beta$ is not a root of unity, and $|\Phi_d(\alpha,\beta)|\neq1$ for all large enough $d$. Then $S_n^\Lambda(\alpha,\beta)$ is a strong divisibility sequence of integers if and only if $\Lambda=\langle N\rangle$ for some $N\geq1$. \end{theorem} \begin{proof}If $\Lambda=\langle N\rangle$ the strong divisibility follows directly from Theorem \ref{StrongDiv}, even for the polynomials $\Phi_\Lambda(x^n,y^n)$. If $\Lambda\neq\langle N\rangle$, by the same argument as in the proof of Theorem \ref{StrongDiv}, there are $m,n\in\Lambda$ such that for any positive integer $k$ we have $kmn\in\langle km\rangle\Lambda\cap\langle kn\rangle\Lambda$, but $kmn\not\in\langle\textrm{gcd}(km,kn)\rangle\Lambda$. Therefore, $\textrm{gcd}(S_{km}^\Lambda,S_{kn}^\Lambda)$ picks up at least the extra factor $\Phi_{kmn}(\alpha,\beta)$ on top of $S_{\textrm{gcd}(km,kn)}^\Lambda$. Note that $S_n^\Lambda(\alpha,\beta)$ can only be $0$ for $n>0$ if $\alpha/\beta$ is a root of unity, and, by assumption, $\Phi_{kmn}(\alpha,\beta)\neq\pm1$ for large enough $k$. Thus, $S_n^\Lambda$ is not a strong divisibility sequence. \end{proof} \noindent For the sequences with real $\alpha,\beta$, like the Mersenne and Fibonacci numbers, we can rule out the numerical degeneracy easily. \begin{corollary}\label{PhiReal} Let $\alpha,\beta$ be real numbers such that $\alpha+\beta$ and $\alpha\beta$ are integers, and $\alpha\beta\neq0$, $\alpha\neq\pm\beta$. Then $|\Phi_d(\alpha,\beta)|>1$ for $d>2$, and the only strong divisibility sequences among the $S_n^\Lambda$ are the classical ones $S_{N\!n}/S_{N}$. \end{corollary} \begin{proof} By definition, $\Phi_d(x,y)$ is the product of $x-\zeta y$, where $\zeta$ runs over all primitive roots of unity of order $d$. Consider two circles centered at the origin with the radii $|\alpha|$ and $|\beta|$. Then $\zeta\beta$ is on the second circle, since $|\zeta|=1$, and its distance $|\alpha-\zeta\beta|$ to $\alpha$ is no less than the distance between the circles, which is $|\alpha+\beta|$ or $|\alpha-\beta|$, depending on the signs of $\alpha,\beta$. Moreover, since $\alpha,\beta$ are real, the distance can equal $|\alpha\pm\beta|$ only if $\zeta=\pm1$, i.e., the order of $\zeta$ is $1$ or $2$. But $\alpha+\beta$ is an integer, so $|\alpha+\beta|\geq1$, unless $\alpha=-\beta$. Similarly, $(\alpha-\beta)^2=(\alpha+\beta)^2-4\alpha\beta$ is an integer, so $|\alpha-\beta|\geq1$, unless $\alpha=\beta$. Thus, $|\alpha-\zeta\beta|>1$ for $\zeta\neq\pm1$, and $|\Phi_d(\alpha,\beta)|>1$ for $d>2$. The second conclusion follows from Theorem \ref{StDiv}. \end{proof}
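As a numerical companion to these results (a small illustrative sketch in plain Python; the cut-off values are our own), one can watch strong divisibility hold for the Fibonacci numbers and fail for the non-classical sequence $A_n=L_nF_{3n}/F_3$ constructed in Section \ref{S3'}.
\begin{verbatim}
from math import gcd

N = 20
F, L = [0, 1], [2, 1]                    # Fibonacci and Lucas numbers
for _ in range(3 * N):
    F.append(F[-1] + F[-2])
    L.append(L[-1] + L[-2])
A = [L[n] * F[3 * n] // F[3] for n in range(N + 1)]   # A_n = L_n F_{3n}/F_3

print(gcd(F[6], F[9]), F[gcd(6, 9)])     # 2 2  -> strong divisibility holds
print(gcd(A[2], A[3]), A[gcd(2, 3)])     # 4 1  -> strong divisibility fails
\end{verbatim}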
\noindent This implies that, of the divisibility sequences we constructed in Section \ref{S3'}, the only strong divisibility sequences are the classical ones, $S_n$ and $S_{N\!n}/S_{N}$. One can show that the conditions of Theorem \ref{StDiv} also hold for complex conjugate $\alpha,\beta$, when $\alpha\beta\neq0$ and $\alpha/\beta$ is not a root of unity. But the proof relies on a non-elementary estimate of Stewart \cite{Stew} for $|\Phi_d(\alpha,\beta)|$, and we omit it. \noindent {\small {\bf Acknowledgements:} } The author is grateful to Brian Pasko and anonymous referees for their comments on a previous version of this paper, and to Lawrence Somer for valuable corrections to the final version. \begin{thebibliography}{99} \bibitem{Bala} P. Bala, A family of linear divisibility sequences of order four, 2014, available at \texttt{http://oeis.org/A238536/a238536.pdf} \bibitem{BPP} J.-P. B\'ezivin, A. Peth\"o, A. van der Poorten, A full characterisation of divisibility sequences. American Journal of Mathematics, 112 (1990), no. 6, 985--1001. \bibitem{BFLS} N. Bliss, B. Fulan, S. Lovett, J. Sommars, Strong divisibility, cyclotomic polynomials, and iterated polynomials. American Mathematical Monthly, 120 (2013), no. 6, 519--536. \bibitem{EPS} G. Everest, A. J. van der Poorten, I. Shparlinski, T. Ward, Recurrence Sequences, American Mathematical Society, Providence, RI, 2003. \bibitem{Hall} M. Hall, Divisibility sequences of third order. American Journal of Mathematics, 58 (1936), no. 3, 577--584. \bibitem{Kimb} C. Kimberling, Generalized cyclotomic polynomials, Fibonacci cyclotomic polynomials, and Lucas cyclotomic polynomials. Fibonacci Quarterly, 18 (1980), no. 2, 108--126. \bibitem{Leh} D. Lehmer, An extended theory of Lucas' functions. Annals of Mathematics, (2) 31 (1930), no. 3, 419--448. \bibitem{Oos} A. Oosterhout, Characterization of divisibility sequences. Master’s Thesis, Utrecht University, June 2011, available at \texttt{https://dspace.library.uu.nl/handle/1874/214007} \bibitem{RWG} E. Roettger, H. Williams, R. Guy, Some extensions of the Lucas functions. In {\it Number theory and related fields}, 271--311, Springer Proceedings in Mathematics \& Statistics, 43, Springer, New York, 2013. \bibitem{Stew} C. Stewart, On divisors of Lucas and Lehmer numbers. Acta Mathematica, 211 (2013), no. 2, 291--314. \bibitem{Wd38} M. Ward, The law of apparition of primes in a Lucasian sequence. Transactions of the American Mathematical Society, 44 (1938), 68--86. \bibitem{WP} W. Webb, E. Parberry, Divisibility properties of Fibonacci polynomials. Fibonacci Quarterly, 7 (1969), no. 5, 457--463. \end{thebibliography} \end{document}
\begin{document} \title{{\bf Multi-activity Influence and Intervention}\footnote{ We thank the editor, an advisory editor, two anonymous referees as well as Francis Bloch, Yann Bramoulle, Arthur Campbell, George Charlson, Andrea Galeotti, Sanjeev Goyal, Rongzhu Ke, Sudipta Sarangi, Fanqi Shi, Satoru Takahashi, Yiqing Xing, Yves Zenou, and seminar participants at 2021 Young Academics Networks Conference (Cambridge-INET), Zhejiang University, Monash University, and NUS. Zhou acknowledges support from Tsinghua Strategy for Heightening Arts, Humanities and Social Sciences: “Plateaus \& Peaks” (No. 2022TSG08102). The usual disclaimers apply. }} \author{Ryan Kor\thanks{Department of Economics, National University of Singapore, Singapore. E-mail: {[email protected]}}\and Junjie Zhou\thanks{Corresponding author. School of Economics and Management, Tsinghua University, China. E-mail: {[email protected]}}} \date{\today} \maketitle \begin{abstract} Using a general network model with multiple activities, we analyse a planner’s welfare-maximising interventions, taking into account within-activity network spillovers and cross-activity interdependence. We show that the direction of the optimal intervention, under sufficiently large budgets, critically depends on the spectral properties of two matrices: the first matrix depicts the social connections among agents, while the second one quantifies the strategic interdependence among different activities. In particular, the first principal component of the interdependence matrix determines budget resource allocation across different activities, while the first (last) principal component of the network matrix shapes the resource allocation across different agents when network effects are strategic complements (substitutes). We explore some comparative statics with respect to model primitives and discuss several applications and extensions. \strut \noindent \textbf{JEL classification:} D85; Z13; C72 \noindent \textbf{Keywords: } Networks, multiple activities, interventions, centralities. \end{abstract} \section{Introduction} Network models have made major contributions to the understanding of equilibrium activity in a variety of markets, such as criminal effort \citep{bcz,bcz2}, public goods \citep{bk,allouch2015private, elliott2019network}, and research and development \citep{goyal2001RD,klz}. These models take into account the significant spillover effects present in these activities to guide government intervention, see, for instance, the key player problem in \cite{bcz}, and the optimal targeting interventions in \cite{ggg}. Nevertheless, it is important to consider that an intervention affects not only the intended activity, but also creates spillovers into other markets. For example, \cite{gold} showed that there are important strategic interactions between drug consumption and criminal activity, and so the predicted effects of an intervention that reduces criminal activity will be imprecise if the effects on the drug market are not taken into consideration. Therefore, in this paper, we discuss the implementation of government interventions when the players simultaneously participate in multiple activities. We adopt the multiple activity network model in \cite{cyz}, and extend it to the situation where the interactions between activities are not homogeneous. That is, we allow for the strength and type of interaction between each pair of activities to differ.
In doing so, we define a matrix of strategic interdependence, and find that the eigendecomposition of this interdependence matrix, in combination with the network matrix, plays a significant role in determining the optimal intervention and welfare. Our analysis generalizes the principal component analysis in \cite{ggg} to a multiple activity setting. In particular, we obtain analogous ``simple'' interventions that the planner can adopt to obtain an asymptotically optimal welfare. We also establish results on the allocation of the planner's budget when the budget is large, both across activities, and across agents within an activity. We show that the two allocations are in a sense independent of each other, and are determined solely by the eigenvectors of the two matrices mentioned before: the matrix of strategic interdependence and the network adjacency matrix. More precisely, we show that the first principal component of the interdependence matrix determines budget resource allocation across different activities, while the first (last) principal component of the network matrix shapes the resource allocation across different agents when network effects are strategic complements (substitutes). We also perform some comparative statics analysis to obtain monotonicity results in the case where activities are complements. To broaden the scope of our study, we also consider the loss in welfare due to the restriction in intervention (called partial intervention) whereby the planner is unable to intervene in certain activities. This could be due to a lack of infrastructure to facilitate effective targeted intervention. For instance, in the drug consumption example above, there may be underground supply chains that the planner is unable to control. Consequently, the planner could decide to focus on intervening in the dimension of criminal activity, and indirectly influence drug consumption through the strategic interdependence. In this paper, we show that when the activities are complements and the budget is large, a restriction in the intervention space will lead to a percentage loss in total social welfare among the players, and this welfare loss is proportional to the number of restricted activities. On the other hand, there is no loss in the case of substitutable activities as long as at least two activities are available for intervention. These results showcase the difference between complementary and substitute activities, where we see that increasing the intervention space is more important when the activities are complements. In contrast, when the budget is small, there are decreasing marginal benefits to the number of activities that the planner intervenes in, regardless of the type of strategic interactions. This trend thus lies between the extreme cases observed when the planner's budget is large, and provides another consideration for policy implementation. Finally, we find that the impact of a restriction in the planner's intervention is larger when the strategic interactions or the network spillovers are intense. Intuitively, the maximum welfare return to the budget is larger in these cases due to the greater strategic relationships in effort levels that the planner can exploit with a suitable choice of intervention, but the planner is unable to do so under a restricted intervention space. Similarly, the effect of a restriction is also greater for denser graphs when there are strategic complementarities among the agents due to the network spillovers.
On the other hand, the effect of an increase in interconnectedness between the agents is ambiguous when there are strategic substitutabilities instead, as the additional linkages can result in a decrease in consumption. We also consider several extensions to our model. Firstly, we relax our assumptions to allow for the network spillovers to be different in each activity. We find that our main results on the spectral properties of the optimal intervention are still largely applicable with some modifications. In particular, the first and last principal components of the adjacency matrix still feature prominently in the optimal intervention. We also study the problem of nonnegative interventions, in which the planner is only able to shift incentives in one direction. This is relevant in situations where a planner may find it impractical to decrease accessibility to an activity such as education, and thus is only able to incentivise further participation. While the problem is in general shown to be NP-hard, our simulations offer some intuition about the optimal interventions, and may still guide policy direction. Finally, we offer an alternative interpretation of our model in a monopoly setting. A price discriminating monopolist distorts the market in a manner equivalent to the targeted interventions we study, but aims to maximize its total profit instead of the consumers' welfare. We show that the results in \cite{cbo} and \cite{bloch2013pricing} can be generalized to our case of a multiple-good market. Throughout our analysis, we assume that the underlying network structure is the same across all activities.\footnote{In one extension, \cite{cyz} consider a model with multiple distinct networks.} While this seems restrictive, as a first exploration into interventions on multiplex networks, our model still lends itself to various economic applications. For example, under the same criminal social network, players may choose their involvement in both drug consumption and criminal activity. A planner could then choose to target each activity separately---the planner could decrease criminal activity by increasing law enforcement and police presence, or reduce drug consumption by an intervention in the drug market. Since the players enjoy complementary effects from participation in both activities \citep{gold}, our results suggest that the planner should allocate equal amounts to the intervention on both activities. A planner who ignores the cross-activity interactions and only intervenes in one dimension would incur a significant decrease in optimal welfare. Another multiple activity setting, where consumers instead participate in substitutable activities, can be found in a social network with the consumers choosing their participation levels among numerous video-sharing and networking platforms such as Facebook, Instagram, and Tiktok. Our results thus imply that a planner who wants to maximize utility from these platforms could simply focus their efforts on promoting the usage of just one or two applications. \subsection*{Related Literature} There is a range of related literature in both the multi-activity dimension and the study of interventions.
\cite{bcz} introduced the seminal single-activity network model, showing that the equilibrium activity level is related to the Katz-Bonacich centralities of the network.\footnote{See \cite{SET2022} for recent developments in network models with nonlinear responses.} \cite{bd} extends the single-activity model to the case of agents participating in two perfectly substitutable activities, and \cite{cyz} generalises the analysis to arbitrary strategic interactions between multiple activities. \cite{walsh} also considers a network model where agents invest in two public goods. \cite{goyal2017aer} develop a multi-activity model where individuals in a social network choose whether to participate in their network and whether to participate in the market. In the area of network interventions, a wide variety of methods have been proposed in \cite{val} for a planner to conduct. Structural interventions, such as in \cite{bcz}, \cite{gl}, \cite{cs} and \cite{SZZ2021structural}, adjust activity levels through modifying the network structure. The creation or deletion of links in such interventions affects the centralities of the agents, which leads to a change in the equilibrium. Other papers such as \cite{dem} and \cite{ggg} have considered characteristic interventions, where the planner instead modifies the agents' intrinsic valuations of the activities.\footnote{\cite{kor2022joint} analyze joint interventions in both characteristics of nodes and weights on the links that connect nodes.} Such characteristic interventions are the main focus of our paper. A closely related field of research is that of discriminatory pricing within a network of consumers, where a decrease in price for a consumer has a similar effect as an increase in intrinsic utility. \cite{cyz2,O2p2022} study the optimal pricing for both monopolies and oligopolies, as well as the implications on total welfare, while \cite{uz} considers a version with product varieties. \cite{fg} allows for incomplete information of the network structure. However, to the best of our knowledge, the previous literature has only analysed characteristic interventions in a single activity. Therefore, this is the first attempt in including interventions into a multiple activity network model. Our paper follows and extends the methodology in \cite{ggg} to analyse a novel set of issues that appear only in the setting with multiple activities. For instance, we study the effects of restrictions on the planner's intervention space, and highlight the importance of cross-activity interdependence in shaping the optimal interventions. Our paper and \cite{ggg}, together, provide a more complete theory about targeting interventions in networks with complex interactions across agents and across activities. The remainder of this paper is organized as follows. Section \ref{sect-2} introduces the model, as well as the key assumption and notations used in this paper. Section \ref{sect-3} solves for the equilibrium and analyses some comparative statics. Section \ref{sect-4} broadens the scope of the model by considering a case where the planner is limited in its interventions, and Section \ref{sect-5} explores several other extensions which apply the model in different contexts. Finally, Section \ref{sect-6} concludes the paper. Appendix \ref{sect-A} presents some preliminary mathematical results, and Appendix \ref{sect-B} provides the proofs of the results presented in this paper. 
\section{Model}\label{sect-2} We introduce a general multi-activity network model and analyze the optimal interventions. \underline{\bf Network} Consider a network game where a set of agents \(\mathcal{N}=\{1,2,\cdots,n\}\) participate in a set of activities \(\mathcal{K}=\{1,2,\cdots,k\}\), with \(k\geq2\).\footnote{When \(k=1\), the network model reduces to the single activity model discussed in \cite{bcz} and \cite{bk}, with the optimal intervention characterized thoroughly by \cite{ggg}.} Let \(\b G=(g_{ij})\) be the adjacency matrix of the network, allowing for arbitrarily weighted graphs, so that \(g_{ij}\in\mathbb R_+\) for all \(i,j\in\c N\).\footnote{For unweighted graphs, we have the specific case \(g_{ij}\in\{0,1\}\) for all \(i,j\in\c N\).} We also assume that \(g_{ii}=0\) for all \(i\in\mathcal{N}\), that is, there are no self-loops. Further assume that \(\b G\) represents an undirected network, so that \(\b G\) is symmetric, i.e., \(g_{ij}=g_{ji}\) for all \(i,j\in\mathcal{N}\). \underline{\bf Payoffs} Each agent \(i\) chooses actions \(\b x_i=(x_i^1, x_i^2,\cdots, x_i^k)\in\mathbb R^k\) simultaneously, where each \(x_i^s\) represents agent \(i\)'s level of participation in activity \(s\in\c K\). We suppose that the payoff to each agent \(i\) is given by the utility function\footnote{This utility specification extends that in \cite{cyz}, which assumes $\beta_{ij}=\beta,\forall i\neq j$. } \begin{equation}U_i(\b x_i;\b x_{-i},\b a)= \underbrace{\sum_{s\in\mathcal{K}}a_i^sx_i^s-\left(\frac{1}{2}\sum_{s\in\mathcal{K}}(x_i^s)^2+\frac{1}{2}\sum_{s\in\mathcal{K}}\sum_{\substack{t\in\mathcal{K}\\t\neq s}}\beta_{st}x_i^sx_i^t\right)}_\text{private utility}+\underbrace{\delta\sum_{s\in\mathcal{K}}\sum_{j\in\mathcal{N}}g_{ij}x_i^sx_j^s}_\text{network spillovers}.\label{eq-1}\end{equation} We explain the utility function \eqref{eq-1} term by term. For each \(i\in\mathcal{N}\) and \(s\in\mathcal{K}\), the parameter \(a_i^s\) represents player \(i\)'s intrinsic marginal utility from activity \(s\). The cost of actions consists of two parts: the first is the sum of the quadratic term \(\frac{1}{2}(x_i^s)^2\) over $s$; the second is the sum of interaction terms $\beta_{st} x_i^sx_i^t$ over different activities $s\neq t$. Here \(\beta_{st}\) represents the degree of strategic substitutability or complementarity between the activities \(s\) and \(t\): \[\partial^2 U_i/\partial x_i^s\partial x_i^t=-\beta_{st},\] so a positive \(\beta_{st}\) corresponds to the case where the activities are substitutes, while a negative \(\beta_{st}\) corresponds to the case where the activities are complements. When \(\beta_{st}=0\), there are no direct interactions between the activities. Note that without loss of generality, we can let \(\beta_{st}=\beta_{ts}\); otherwise we can replace them with their average without changing the utility function. We will impose a regularity condition on the $\beta_{st}$ to guarantee concavity of the utility function. Lastly, the third term captures the total network externalities enjoyed by agent \(i\), where \(\delta\) represents the strength of the network externalities. We assume that the strength of these externalities is the same for each activity. Notice that, for each $s$, \[\partial^2U_i/\partial x_i^s\partial x_j^s=\delta g_{ij},\] so the sign of \(\delta\) determines the strategic interaction between agents, and an increase in \(|\delta|\) reflects an increase in the intensity of these network spillovers.
\cite{bcz} investigates the case of \(\delta>0\), representing strategic complementarities between the agents in a model of a network with a single activity. On the other hand, \(\delta<0\) corresponds to strategic substitutability between agents, as in the model of a public good network by \cite{bk}. When \(\delta=0\), there are no network effects and each agent receives utility only based on their own choices of actions. Both \cite{bcz} and \cite{bk} consider a single activity network model. Note that in our setting, the network effects aggregate over different activities. Throughout this paper, we reserve the indices \(i,j\) to represent players and \(s,t\) to represent activities. For convenience, we introduce the following notation:\[\b a^s=\begin{bmatrix}a_1^s\\\vdots\\a_n^s\end{bmatrix}\in\mathbb R^n,\ \b a=\begin{bmatrix}\b a^1\\\vdots\\\b a^k\end{bmatrix}\in\mathbb R^{kn},\ \b x^s=\begin{bmatrix}x_1^s\\\vdots\\x_n^s\end{bmatrix}\in\mathbb R^n,\ \text{ and }\b x=\begin{bmatrix}\b x^1\\\vdots\\\b x^k\end{bmatrix}\in\mathbb R^{kn}.\] \underline{\bf Targeted Intervention} We let \(\hb a\) be the original vector of the agents' marginal utilities. The social planner intervenes by shifting \(\hb a\) to a new vector \(\b a\), with the aim of maximizing the total social welfare \[W(\b a)=\sum_{i\in\mathcal{N}}U_i(\b x_i^*(\b a);\b x_{-i}^*(\b a),\b a),\] where \(\b x^*(\b a)\) denotes the equilibrium activity level given the planner's choice of \(\b a\) (see Proposition \ref{pr-1} for the equilibrium characterization). In other words, \(\b x_i^*(\b a)\) is a best response to \(\b x_{-i}^*(\b a)\) for all \(i\). This intervention comes at a cost to the planner, which we will model using a quadratic cost as \(\|\b a-\hb a\|^2\) following \cite{ggg}. We assume that the planner can incur a maximum expenditure of \(C\in\mathbb R^+\), and thus solves the constrained optimization problem \begin{align*} \max_{\b a\in\mathbb R^{kn}} \quad &\sm[N]iU_i(\b x_i^*(\b a);\b x_{-i}^*(\b a),\b a)\stepcounter{equation}\tag{\theequation}\label{eq-objective}\\ \text{s.t.} \quad &(\b a-\hb a)^T(\b a-\hb a)\leq C,&\text{(Budget constraint)}\\ &\b x^*_i(\b a)\in\underset{\b x_i\in\mathbb R^k}{\text{argmax}}\ U_i(\b x_i;\b x_{-i}^*(\b a),\b a)\text{ for all }i\in\c N.&\text{(Agents' equilibrium)} \end{align*} We write the optimal choice of intervention as \(\b a^*(C)\), with a corresponding total welfare of \(W(\b a^*(C))=W^*(C)\). \subsection{Assumptions and Notation} We first define some standard matrices. Let \(\b I_p\) be the \(p\times p\) identity matrix, \(\b 0_p\) be the \(p\times p\) matrix of zeroes, and \(\b J_p\) be the \(p\times p\) matrix of ones. Given a real symmetric matrix \(\b M\), define \(\lambda_{max}(\b M)\) and \(\lambda_{min}(\b M)\) be the largest and smallest eigenvalues of \(\b M\) respectively. Also denote their respective eigenspaces by \(E_{max}(\b M)\) and \(E_{min}(\b M)\). Note that since \(\b G\) is nonnegative, by the Perron-Frobenius theorem, we have that \(\lambda_{max}(\b G)\geq0\geq\lambda_{min}(\b G)\geq-\lambda_{max}(\b G)\). Define the strategic interdependence matrix \[\b \Phi=\begin{pmatrix}0&\beta_{12}&\cdots&\beta_{1k}\\\beta_{21}&0&\cdots&\beta_{2k}\\\vdots&\vdots&\ddots&\vdots\\\beta_{k1}&\beta_{k2}&\cdots&0\end{pmatrix},\] and further write $$ \tb \Phi=\b I_k+\b \Phi.$$ We observe that \(\b\Phi\) and \(\tb\Phi\) are both symmetric, \(\lambda_{min}(\tb \Phi)=1+\lambda_{min}(\b \Phi)\), and \(E_{min}(\tb \Phi)=E_{min}(\b \Phi)\). 
Throughout the paper, we impose the following assumption: \begin{assumption} \label{ass-1} \(1+\lambda_{min}(\b \Phi)-\lambda_{max}(\delta\b G)>0\). \end{assumption} This assumption is equivalent to $\lambda_{min}(\tb \Phi)-\lambda_{max}(\delta\b G)>0$, which specifies a sufficient condition to ensure that the underlying network game among agents has a unique Nash equilibrium for any $\b a$. Note that Assumption \ref{ass-1} directly implies that \(\tb \Phi\) is positive definite. When the network externalities are positive, \(\delta>0\) and \(\lambda_{max}(\delta\b G)=\delta\lambda_{max}(\b G)\geq 0\). On the other hand, when the network externalities are negative, \(\delta<0\) and \(\lambda_{max}(\delta\b G)=\delta\lambda_{min}(\b G)\geq 0\). In all cases, we have \(\lambda_{min}(\tb \Phi)>\lambda_{max}(\delta\b G)\geq 0\), so \(\tb \Phi\) is positive definite. Thus each agent's payoff is concave in her own strategy. Assumption \ref{ass-1} makes sure the network effects are not too strong. Assumption \ref{ass-1} generalizes the condition on the spectral radius of \(\b G\) as found in previous literature. Suppose the interaction between any pair of activities is the same, so that \(\beta_{st}=\beta\) for all \(s,t\). Then the distinct eigenvalues of \(\b \Phi\) are \((k-1)\beta\) and \(-\beta\). Therefore, Assumption \ref{ass-1} reduces to the requirement that \[1-\beta-\lambda_{max}(\delta\b G)>0\text{ and }1+(k-1)\beta-\lambda_{max}(\delta\b G)>0,\] which is equivalent to the condition stated in \cite{cyz}. As a further specialization, when the activities are independent and \(\beta_{st}=0\) for all \(s,t\), we have \(\b \Phi=\b 0_k\) and \(\lambda_{min}(\b \Phi)=0\). Thus Assumption \ref{ass-1} reduces to \(1-\lambda_{max}(\delta\b G)>0\), which is further simplified to $\delta <\frac{1}{\lambda_{max}(\b G)}$ when $\delta>0$ (see \cite{bcz}), and $|\delta| <-\frac{1}{\lambda_{min}(\b G)}$ when $\delta<0$ (see, for instance, \cite{bk}). Finally, for any matrices \(\b A_{m\times n}\) and \(\b B_{p\times q}\), let \(\otimes\) denote the Kronecker product, where \(\b A\otimes\b B\) represents the block matrix \[\begin{bmatrix}a_{11}\b B&\cdots&a_{1n}\b B\\\vdots&\ddots&\vdots\\a_{m1}\b B&\cdots&a_{mn}\b B\end{bmatrix}.\] \section{Analysis}\label{sect-3} \subsection{Optimal Intervention} We first derive the equilibrium activity profile among agents and the aggregate equilibrium payoff, fixing $\b a$. \begin{proposition}\label{pr-1} Suppose Assumption \ref{ass-1} holds. There exists a unique equilibrium in which the agents choose activity levels \[\b x^*(\b a)=[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}\b a,\] for a total equilibrium welfare of \[W(\b a)=\frac{1}{2}\b a^T[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}(\tb \Phi\otimes\b I_n)[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}\b a.\] \end{proposition} For simplicity, we will write \begin{equation} \b P=\frac{1}{2}[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}(\tb \Phi\otimes\b I_n)[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}, \label{eq-P} \end{equation} so that \(W(\b a)=\b a^T\b P\b a\). Proposition \ref{pr-1} generalises the equilibrium results found in previous literature to the case of multiple heterogeneous activities, and the proof can be found in Appendix \ref{sect-B}. Relating our results to the existing literature, we illustrate Proposition \ref{pr-1} with a few simple cases. 
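As a purely numerical complement, the following minimal sketch (NumPy assumed; the small network $\b G$, the matrix $\b\Phi$, $\delta$, and $\b a$ below are illustrative toy choices of ours rather than values used in the paper) evaluates the formulas of Proposition \ref{pr-1} directly, before we turn to the analytic special cases.
\begin{verbatim}
import numpy as np

G = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)         # symmetric adjacency matrix
Phi = np.array([[0.0, -0.2, 0.1],
                [-0.2, 0.0, -0.3],
                [0.1, -0.3, 0.0]])                # strategic interdependence
delta = 0.2
k, n = Phi.shape[0], G.shape[0]
Phi_t = np.eye(k) + Phi                           # tilde Phi = I_k + Phi

# Assumption 1: 1 + lambda_min(Phi) - lambda_max(delta*G) > 0
assert 1 + np.linalg.eigvalsh(Phi).min() - np.linalg.eigvalsh(delta * G).max() > 0

M = np.kron(Phi_t, np.eye(n)) - np.kron(np.eye(k), delta * G)
a = np.ones(k * n)                                # stacked marginal utilities
x_star = np.linalg.solve(M, a)                    # equilibrium activity profile
P = 0.5 * np.linalg.inv(M) @ np.kron(Phi_t, np.eye(n)) @ np.linalg.inv(M)
print(x_star.reshape(k, n))                       # one row per activity
print(a @ P @ a)                                  # equilibrium welfare W(a)
\end{verbatim}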
\underline{\bf One activity} When there is only one activity, then \(k=1, \tb \Phi=\b I_1\), so \[\b x^*(\b a)= (\b I_n-\delta\b G)^{-1}\b a,\mbox{ and } W(\b a)=\frac{1}{2}\b a^T(\b I_n-\delta\b G)^{-2}\b a.\] This reduces to the equilibrium obtained in \cite{bcz}, where activity levels are equal to the Katz-Bonacich centralities of each agent, and welfare proportional to the squared activity levels. \cite{ggg} studies the optimal targeted intervention under this framework. \underline{\bf Two activities} When there are two activities ($k=2$), we have \(\tb\Phi=\begin{pmatrix}1&\beta\\\beta&1\end{pmatrix}\), where we let \(\beta_{12}=\beta_{21}=\beta\) by the symmetry assumption. We can expand the tensor products to obtain the equilibrium activity levels\[\b x^*(\b a)=\left(\begin{bmatrix}\b I_n&\beta\b I_n\\\beta\b I_n&\b I_n\end{bmatrix}-\begin{bmatrix}\delta\b G&\b 0\\\b 0&\delta\b G\end{bmatrix}\right)^{-1}\b a=\begin{bmatrix}\b I_n-\delta\b G&\beta\b I_n\\\beta\b I_n&\b I_n-\delta\b G\end{bmatrix}^{-1}\b a.\] Writing \(\b M^+=[(1+\beta)\b I_n-\delta\b G]^{-1}\) and \(\b M^-=[(1-\beta)\b I_n-\delta\b G]^{-1}\), the solution (see \cite{cyz}) is given by \[\begin{bmatrix}\b x^1\\\b x^2\end{bmatrix}=\frac{1}{2}\begin{bmatrix}\b M^+(\b a^1+\b a^2)+\b M^-(\b a^1-\b a^2)\\\b M^+(\b a^1+\b a^2)-\b M^-(\b a^1-\b a^2)\end{bmatrix}.\] Therefore, \(\b M^+\) determines the total action over both activities, while \(\b M^-\) determines the difference in action between both activities. Finally, the total welfare is \begin{align*} W(\b a)&=\frac{1}{2}\b a^T\begin{bmatrix}\b I_n-\delta\b G&\beta\b I_n\\\beta\b I_n&\b I_n-\delta\b G\end{bmatrix}^{-1}\begin{bmatrix}\b I_n&\beta\b I_n\\\beta\b I_n&\b I_n\end{bmatrix}\begin{bmatrix}\b I_n-\delta\b G&\beta\b I_n\\\beta\b I_n&\b I_n-\delta\b G\end{bmatrix}^{-1}\b a\\& =\frac{1}{2}\b x^T\begin{bmatrix}\b I_n&\beta\b I_n\\\beta\b I_n&\b I_n\end{bmatrix}\b x\\&=\frac{1}{2}[(\b x^1)^T\b x^1+2\beta(\b x^1)^T\b x^2+(\b x^2)^T\b x^2]\\&=\frac{1}{4}[(1+\beta)(\b a^1+\b a^2)^T(\b M^+)^2(\b a^1+\b a^2)+(1-\beta)(\b a^1-\b a^2)^T(\b M^-)^2(\b a^1-\b a^2)].\end{align*} The presence of the cross-activity interaction term \(2\beta(\b x^1)^T\b x^2\) means that the total welfare is no longer equal to the squared activity levels. Instead, welfare can again be decomposed into two parts involving the total marginal utilities and the difference in marginal utilities, in a manner similar to the equilibrium \(\b x^*\). However, further equilibrium analysis via expanding the relevant tensor products will not be feasible for large number of activities. Instead, we reformulate the planner's optimization problem \eqref{eq-objective} using the above equilibrium characterization to obtain a constrained quadratic maximization problem: \begin{align} \max_{\b a\in\mathbb R^{kn}} \quad &\b a^T\b P\b a\label{eq-qp}\\ \text{s.t.} \quad &(\b a-\hb a)^T(\b a-\hb a)\leq C.\notag \end{align} We then exploit general results of constrained quadratic programming, which we collectively state in Lemma \ref{lem-1}. Lemma \ref{lem-1} is applicable both in our paper and \cite{ggg}, given the structural similarity in the underlying programs. \begin{lemma}\label{lem-1} Let \(\b S\) be a positive definite matrix, and \(\b v\) a vector in \(\mathbb R^n\). 
Then the solution \(\b x^*\) to the maximization problem \begin{align*} \max_{\mathbf x\in\mathbb R^{n}} \quad &V(\b x)=\mathbf x^T\mathbf{Sx}+\b v^T\b x\\ \text{s.t.} \quad &\b x^T\b x\leq C \end{align*} satisfies\\ (a) \(\lim_{C\to\infty}\frac{V(\b x^*)}{C}=\lambda_{max}(\b S)\), and for any unit vector \(\b u\in E_{max}(\b S)\), \(\lim_{C\to\infty}\frac{V(\sqrt C\b u)}{V(\b x^*)}=1\), \\ (b) \(\lim_{C\to\infty}\frac{\|\mathrm{proj}_{E_{max}(\b S)}\b x^*\|}{\|\b x^*\|}=1\).\\ Furthermore, if \(\b v\neq\b 0\), we have\\ (c) \(\lim_{C\to0}\frac{V(\b x^*)}{\sqrt C}=\|\b v\|\),\\ (d) \(\lim_{C\to0}\frac{\|\text{proj}_{\b v}\b x^*\|}{\|\b x^*\|}=1\). \end{lemma} Lemma \ref{lem-1} shows that as \(C\to\infty\), the solution to \eqref{eq-qp} simply hinges on the largest eigenvalue of \(\b P\) and its corresponding eigenspace. Consequently, our problem reduces to obtaining the spectral decomposition of $\b P$, which can be rewritten as\footnote{Given a positive definite matrix \(\b M\), the square root of \(\b M\), written \(\b M^{\frac{1}{2}}\), is the unique positive definite matrix satisfying \((\b M^{\frac{1}{2}})^2=\b M\).} \[\b P=\frac{1}{2}[\tb \Phi^{\frac{1}{2}}\otimes\b I_n-\tb \Phi^{-\frac{1}{2}}\otimes\delta\b G]^{-2}.\] Here we make use of the fact that \(\tb \Phi^{\frac{1}{2}}\otimes\b I_n\) and \(\tb \Phi^{-\frac{1}{2}}\otimes\delta\b G\) are simultaneously diagonalizable since they commute. Let \(\b Q\b \Lambda\b Q^{-1}\), \(\b R\b \Sigma\b R^{-1}\) be spectral decompositions of \(\tb\Phi\) and \(\delta\b G\), respectively. Then \begin{align*}\b P&=\frac{1}{2}[(\b Q\otimes\b R)(\b \Lambda^{\frac{1}{2}}\otimes\b I_n-\b \Lambda^{-\frac{1}{2}}\otimes\b \Sigma)(\b Q\otimes\b R)^{-1}]^{-2}\\&=\frac{1}{2}(\b Q\otimes\b R)(\b \Lambda^{\frac{1}{2}}\otimes\b I_n-\b \Lambda^{-\frac{1}{2}}\otimes\b \Sigma)^{-2}(\b Q\otimes\b R)^{-1},\end{align*} so the eigenvalues of \(\b P\) are the entries of the diagonal matrix \[\frac{1}{2}(\b \Lambda^{\frac{1}{2}}\otimes\b I_n-\b \Lambda^{-\frac{1}{2}}\otimes\b \Sigma)^{-2},\] which are \begin{equation}\frac{1}{2}[\lambda_i(\tb\Phi)^{\frac{1}{2}}-\lambda_i(\tb\Phi)^{-\frac{1}{2}}\lambda_j(\delta\b G)]^{-2}\text{ for }i=1,\dots,k\text{ and } j=1,\dots,n.\label{eq-eigenvalues}\end{equation} Equation \eqref{eq-eigenvalues} fully characterizes the spectral properties of \(\b P\). Employing Lemma \ref{lem-1}, we now state our first main result regarding the planner's optimal intervention policy. \begin{theorem} \label{th-1} Suppose Assumption \ref{ass-1} holds.\footnote{The expression in Theorem \ref{th-1}(a) is well defined when Assumption \ref{ass-1} holds. When \(\lambda_{min}(\tb\Phi)-\lambda_{max}(\delta\b G)\leq0\), the network effects are too strong, so the optimal welfare is unbounded and no Nash equilibrium exists.}\\ (a) As the planner's budget \(C\to\infty\), \[\frac{W^*(C)}{C}\to\frac{\lambda_{min}(\tb \Phi)}{2(\lambda_{min}(\tb \Phi)-\lambda_{max}(\delta\b G))^2}\equiv \Gamma(\b \Phi,\delta\b G).\] (b) Furthermore, if \(\b u\in E_{min}(\b \Phi),\b v\in E_{max}(\delta\b G)\) are unit vectors, then the intervention \(\tb a(C)=\sqrt C\b u\otimes\b v+\hb a\) is asymptotically optimal in the sense that \[\lim_{C\to\infty}\frac{W(\tb a(C))}{W^*(C)}=1.\] \end{theorem} Theorem \ref{th-1} characterizes the growth rate of the optimal welfare $W^*$ and provides a \emph{simple} intervention $\tb a$ to achieve $W^*$ asymptotically under large budgets.
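The following minimal NumPy sketch (reusing the illustrative toy values of the earlier sketch, repeated so the snippet is self-contained; none of these numbers come from the paper) checks \eqref{eq-eigenvalues} and Theorem \ref{th-1} numerically: the largest eigenvalue of $\b P$ equals $\Gamma(\b\Phi,\delta\b G)$, and the associated eigenvector is $\b u\otimes\b v$.
\begin{verbatim}
import numpy as np

# Toy data (illustrative only), same structure as in Proposition 1.
G = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
Phi = np.array([[0.0, -0.2, 0.1], [-0.2, 0.0, -0.3], [0.1, -0.3, 0.0]])
delta, k, n = 0.2, 3, 4
Phi_t = np.eye(k) + Phi
M = np.kron(Phi_t, np.eye(n)) - np.kron(np.eye(k), delta * G)
P = 0.5 * np.linalg.inv(M) @ np.kron(Phi_t, np.eye(n)) @ np.linalg.inv(M)

lam_phi = np.linalg.eigvalsh(Phi_t)[0]            # lambda_min of tilde Phi
lam_g = np.linalg.eigvalsh(delta * G)[-1]         # lambda_max of delta G
Gamma = lam_phi / (2 * (lam_phi - lam_g) ** 2)

vals, vecs = np.linalg.eigh(P)                    # ascending eigenvalues
print(np.isclose(vals[-1], Gamma))                # True: lambda_max(P) = Gamma

u = np.linalg.eigh(Phi_t)[1][:, 0]                # eigenvector for lambda_min(Phi)
v = np.linalg.eigh(delta * G)[1][:, -1]           # eigenvector for lambda_max(delta G)
print(np.isclose(abs(np.kron(u, v) @ vecs[:, -1]), 1.0))   # True (up to sign)
\end{verbatim}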
Our result implies that the planner does not need to identify the prior marginal utilities to be able to implement an asymptotically efficient intervention. The term \(\Gamma(\b \Phi,\delta\b G)\) measures the marginal return of the planner's budget, as \[ \lim_{C\to\infty} W^{*'}(C)=\lim_{C\to\infty}\frac{W^*(C)}{C}=\Gamma(\b \Phi,\delta\b G) \] by Theorem \ref{th-1} and L'Hospital's rule. Theorem \ref{th-1} extends and generalises the results obtained in \cite{ggg} with a single activity to a network setting with multiple activities, and where the interaction between each pair of activities can be arbitrary. Here, in addition to the adjacency matrix $\b G$ of the network, there is another matrix \(\b \Phi\), describing the strategic interactions between the multiple activities, that is crucial in determining the optimal intervention and welfare. \begin{proposition}\label{pr-2} Let \(\Gamma(\b \Phi,\delta\b G)\) be defined as in Theorem \ref{th-1}. Then \[\frac{\partial \Gamma(\b \Phi,\delta\b G)}{\partial\lambda_{max}(\delta\b G)}>0 \text{ and } \frac{\partial \Gamma(\b \Phi,\delta\b G)}{\partial\lambda_{min}(\b \Phi)}<0.\] \end{proposition} Note that $\Gamma(\b \Phi,\delta\b G)$ depends only on the smallest eigenvalue of $\lambda_{min}(\tb \Phi)$ and the largest eigenvalue of $\lambda_{max}(\delta\b G)$. From Theorem \ref{th-1}(a), \(\Gamma(\mathbf\Phi,\delta\b G)\) is equal to the asymptotic return to the budget, \(W^*(C)/C\). Therefore, a larger value of \(\Gamma(\mathbf\Phi,\delta\b G)\) would imply a greater effect of the budget on welfare gain. Consequently, Proposition \ref{pr-2} shows that when the budget is sufficiently big, the planner's intervention will be more effective in the following two cases: \begin{enumerate} \item[(i)] \(\delta\b G\) has a large spectral radius, so the maximum social multiplier effect that can be caused by the network externalities is large, which the planner exploits by choosing the intervention in the direction of the first principal component. \item [(ii)] \(\mathbf \Phi\) has a small last eigenvalue, so the minimum cost incurred by the cross-activity terms of the utility function is small and the planner can conduct a more cost-efficient intervention by intervening in the direction of the last principal component. \end{enumerate} For ease of notation, we introduce another assumption: \begin{assumption} \label{ass-2} \(E_{min}(\b \Phi)=\text{span}(\b u)\) and \(E_{max}(\delta\b G)=\text{span}(\b v)\) for some unit vectors \(\|\b u\|=\|\b v\|=1\). \end{assumption} Assumption \ref{ass-2} implies that the dimensions of \(E_{min}(\b \Phi)\) and \(E_{max}(\delta\b G)\) are both one, and holds generically. This ensures that the optimal intervention is essentially unique,\footnote{There can be another optimal intervention obtained by multiplying by \(-1\), but this transformation does not affect our subsequent results.} and \(\lim_{C\to\infty}\rho(\b u\otimes\b v,\b a^*)=\pm 1\), where \(\rho\) represents the cosine similarity\footnote{See \cite{ggg}, or Lemma \ref{lem-1}(b) in Appendix \ref{sect-A}.} \[\rho(\b x,\b y)=\frac{\b x^T\b y}{\|\b x\|\|\b y\|}.\] We define the components of the planner's expenditure grouped by activities and individuals: \begin{align*} B^s(\b a)&=\sum_{i=1}^n(\b a_i^s-\hb a_i^s)^2\text{ is the total expenditure on activity }s,\\ B_j(\b a)&=\sum_{t=1}^k(\b a_j^t-\hb a_j^t)^2\text{ is the total expenditure on individual }j. 
\end{align*} By definition, $\sum_{s=1}^{k}B^s(\b a)=\sum_{j=1}^{n} B_j(\b a)=\|\b a-\hb a\|^2$, which equals $C$ whenever the budget constraint binds. \begin{corollary} \label{co-1} Suppose Assumptions \ref{ass-1} and \ref{ass-2} hold.\\ (a) Across-layer allocation. For any activities \(s,t\) with \(u_t\neq 0\), \[\lim_{C\to\infty}\frac{B^s(\b a^*)}{C}=u_s^2,\text{ and }\lim_{C\to\infty}\frac{B^s(\b a^*)}{B^t(\b a^*)}=\left(\frac{u_s}{u_t}\right)^2.\] (b) Within-layer allocation. For any individuals \(i,j\) with \(v_j\neq0\), \[\lim_{C\to\infty}\frac{B_i(\b a^*)}{C}=v_i^2,\text{ and }\lim_{C\to\infty}\frac{B_i(\b a^*)}{B_j(\b a^*)}=\left(\frac{v_i}{v_j}\right)^2.\] \end{corollary} Corollary \ref{co-1} follows from Theorem \ref{th-1}(b), and shows that the planner's expenditure on each activity or individual depends only on the entries of the corresponding eigenvectors. \cite{ggg} established Corollary \ref{co-1}(b) for a single-activity network. Here we extend the result to the multi-activity setting, and show that the budget allocation remains proportional to the squares of the entries of the principal components. \begin{proposition}\label{pr-3} Suppose all activities are pairwise complements, that is, \(\beta_{st}\leq0\) for all \(s,t\). Then there exists \(\b u\in E_{min}(\b \Phi)\) such that \(u_s\geq0\) for all \(s\). \end{proposition} Proposition \ref{pr-3} shows that when activities are complements, for each agent, the planner can choose to adjust marginal utilities in the same direction across activities. This allows the planner to exploit the complementarities across activities, since an agent obtains maximum utility when their activity levels are all positive or all negative. \begin{proposition}\label{pr-4} Suppose all activities are pairwise substitutes, that is, \(\beta_{st}\geq0\) for all \(s,t\), with strict inequality for some \(s,t\). Then for all \(\b u\in E_{min}(\b \Phi)\), there exists \(s,t\) such that \(u_s>0\) and \(u_t<0\). \end{proposition} Proposition \ref{pr-4} shows a contrasting result for the case of substitute activities. Now, the planner will instead adjust the marginal utilities in different directions across activities to maximize welfare. These two propositions together highlight a significant difference between complementary and substitute activities. \begin{example}\label{ex-1} We illustrate the above results on a network over five agents, and first consider the base case of having only one activity (Fig. \ref{fig:1}). A green node represents an intervention in the positive direction, while a red node represents an intervention in the negative direction. The size of the node depicts the magnitude of the intervention.\footnote{The adjacency matrix \(\mathbf G=\begin{pmatrix}0&1&1&0&0\\1&0&1&0&0\\1&1&0&1&1\\0&0&1&0&0\\0&0&1&0&0\end{pmatrix}\) has five distinct eigenvalues: \(\lambda_1=\lambda_{\max}=2.343, \lambda_2=0.471,\lambda_3=0,\lambda_4=-1,\lambda_5=\lambda_{\min}=-1.814\), so Assumption \ref{ass-2} holds.} \begin{figure} \caption{Optimal intervention for single-activity networks} \label{fig:1} \end{figure} \end{example} The two panels illustrate the contrasting effects of the network spillovers on the optimal intervention. As shown in \cite{ggg}, when \(\delta>0\), the planner follows the first principal component of the adjacency matrix and intervenes in the positive direction. However, when \(\delta<0\), the planner follows the last principal component of the adjacency matrix, and increases the marginal utilities of some agents, while decreasing those of others.
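The eigenvector computations behind Figure \ref{fig:1} are elementary; the following sketch (ours, assuming Python with \texttt{numpy}) recovers the first and last principal components of the adjacency matrix given in the footnote to Example \ref{ex-1}, which provide the intervention directions for \(\delta>0\) and \(\delta<0\) respectively.
\begin{verbatim}
import numpy as np

# Adjacency matrix of the five-agent network in Example ex-1 (from the footnote).
G = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 1, 0, 0]], dtype=float)

vals, vecs = np.linalg.eigh(G)   # eigenvalues in ascending order
print(np.round(vals, 3))         # matches the footnote: -1.814, -1, 0, 0.471, 2.343

u_first = vecs[:, -1]            # principal component: direction used when delta > 0
u_last  = vecs[:, 0]             # last component: direction used when delta < 0
# The normalized intervention is +/- sqrt(C) times these vectors; eigh returns
# eigenvectors only up to sign, cf. the footnote to Assumption ass-2.
print(np.round(u_first, 3), np.round(u_last, 3))
\end{verbatim}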
\begin{example} We continue with the previous example, increasing the number of activities that the agents participate in to three. The four figures below showcase all possible combinations of complementary or substitute activities and spillovers. Figures \ref{fig-1}(a) and \ref{fig-1}(b) extend the case of strategically complementary spillovers in Example \ref{ex-1}(a) to a three-activity network, while Figures \ref{fig-1}(c) and \ref{fig-1}(d) are the corresponding extension of Example \ref{ex-1}(b), with strategically substitutive spillovers.\footnote{We choose the strategic interdependence matrix \(\mathbf\Phi=\pm\begin{pmatrix}0&0.2&0.3\\0.2&0&0.4\\0.3&0.4&0\end{pmatrix}\) with eigenvalues \(\lambda_1=0.608,\lambda_2=-0.189,\lambda_3=-0.419\) in the positive case, and \(\lambda_1=0.419,\lambda_2=0.189,\lambda_3=-0.608\) in the negative case, so Assumption \ref{ass-2} holds.}\end{example} \begin{figure} \caption{Optimal intervention for multi-activity networks} \label{fig-1} \end{figure} Figures \ref{fig-1}(a) and \ref{fig-1}(c) correspond to pairwise complementary activities. As in Proposition \ref{pr-3}, the planner intervenes in the same direction across activities. This is particularly evident in Figure \ref{fig-1}(c), where the planner intervenes negatively for the central agent for all activities. On the other hand, Figures \ref{fig-1}(b) and \ref{fig-1}(d) feature pairwise substitute activities to demonstrate Proposition \ref{pr-4}. The last principal component chosen is only negative in the second entry, leading to the inversion of the signs in activity two. \subsection{Comparative Statics} \begin{proposition}\label{pr-5} Suppose Assumption \ref{ass-1} holds, and we have matrices \(\b\Phi=\{\beta_{st}\},\ \b\Phi'=\{\beta'_{st}\}\) satisfying \( \beta_{st}'\leq\beta_{st}\leq 0\) for all \(s,t\). Then \(\Gamma(\b \Phi',\delta\b G)\geq \Gamma(\b \Phi,\delta\b G)\). \end{proposition} Proposition \ref{pr-5} provides an easier method of comparing the marginal returns of the planner's budget. Instead of having to evaluate the minimal eigenvalue of the strategic interdependence matrices as in Proposition \ref{pr-2}, we show that as long as the activities are all pairwise complements ($\beta_{st}\leq 0$ for all \(s,t\)), any increase in the magnitude of the strategic interactions will result in an increase in the asymptotic welfare return \(\Gamma\). However, this monotonicity result does not apply in the case of substitute activities, which we demonstrate in the example below. \begin{example} Suppose \(\lambda_{max}(\delta\b G)=0.2\), and let \[\b \Phi=\begin{pmatrix}0&0.2&0.3\\0.2&0&\beta_{23}\\0.3&\beta_{23}&0\end{pmatrix}.\] We verify that Assumption \ref{ass-1} holds for \(\beta_{23}\in\{0.1,0.2,0.3\}\), and obtain the following table. \begin{center} \begin{tabular}{|c|c|c|c|} \hline \(\beta_{23}\)&0.1&0.2&0.3\\\hline $\Gamma$ &1.48&1.4&1.56\\\hline \end{tabular} \end{center} \end{example} Here, the intermediate value of \(\beta_{23}=0.2\) results in a decreased effectiveness of the planner's intervention compared to a lower or higher \(\beta_{23}\). Therefore, it is in general unclear whether more strategic interaction among the agents will be beneficial. The scope of Proposition \ref{pr-5} can be widened by considering the following transformations on the betas. For any \(t\), we can replace \[\b x^t,\hb a^t,\beta_{t1},\beta_{t2},\cdots,\beta_{tk}\text{ by }-\b x^t,-\hb a^t,-\beta_{t1},-\beta_{t2},\cdots,-\beta_{tk}\] with no effect on the equilibrium welfare and intervention.
In particular, when there are three activities, these transformations allow for the division of the possible signs of the betas into two equivalence classes:\\ (A): \(\b \Phi=\begin{bmatrix}0&+&+\\+&0&+\\+&+&0\end{bmatrix}\to\begin{bmatrix}0&-&-\\-&0&+\\-&+&0\end{bmatrix}\to\begin{bmatrix}0&-&+\\-&0&-\\+&-&0\end{bmatrix}\to\begin{bmatrix}0&+&-\\+&0&-\\-&-&0\end{bmatrix}\), and\\ (B): \(\b \Phi=\begin{bmatrix}0&+&+\\+&0&-\\+&-&0\end{bmatrix}\to\begin{bmatrix}0&+&-\\+&0&+\\-&+&0\end{bmatrix}\to\begin{bmatrix}0&-&+\\-&0&+\\+&+&0\end{bmatrix}\to\begin{bmatrix}0&-&-\\-&0&-\\-&-&0\end{bmatrix}\).\\ Therefore, with three activities, any situation with exactly two positive \(\beta_{st}\) is isomorphic to the situation of having all complementary activities, and the monotonicity result in Proposition \ref{pr-5} will hold. \subsection{Small Budgets} We also consider the other end of the spectrum, where the planner's budget is small. \begin{proposition}\label{pr-6} Suppose Assumption \ref{ass-1} holds, and the original marginal utilities are not all zero. (a) The optimal welfare satisfies \[\lim_{C\to0}\frac{W^*(C)-W^*(0)}{\sqrt{C}}=2\|\b P\hb a\|.\] (Note that $\b P$ is defined in \eqref{eq-P}.) (b) The optimal intervention satisfies \[\lim_{C\to 0}\rho(\b P \hb a,\b a^*-\hb a)=1.\] \end{proposition} Proposition \ref{pr-6} shows that when budgets are small, the original vector of marginal utilities \(\hb a\) plays a crucial role in determining the optimal intervention. This is in contrast with Theorem \ref{th-1}, where the original marginal utilities are not relevant and it is the spectral properties of the matrices \(\b \Phi\) and \(\b G\) that are important instead.

Another key difference between small and large budgets is the growth rate of welfare. For \(C\to 0\), Proposition \ref{pr-6}(a) shows that the optimal gain in welfare is of order \(\sqrt C\), while we have seen from Theorem \ref{th-1}(a) that as \(C\to\infty\), the planner can obtain a welfare gain that is linear in \(C\). This implies that the planner experiences significant diminishing marginal returns to the size of the budget when the budget is small, but eventually the marginal return plateaus at a positive constant \(\Gamma(\mathbf \Phi,\delta\b G)\). Therefore, while the initial intervention is the most cost-effective, an increase in the budget will still be meaningful for the planner across all budget sizes. \section{Partial Interventions}\label{sect-4} We turn to a modified version of the planner's problem, where the planner is only able to intervene in a subset \(\mathcal L=\{1,\cdots,l\}\subseteq \mathcal K\) of the activities available.\footnote{This can always be achieved by a relabelling of the activities.} When \(\mathcal L=\mathcal K\), we obtain the original formulation of the problem.
Under such a restriction, the planner solves the optimization problem \eqref{eq-objective}, but with an additional constraint: \begin{align*} \max_{\b a\in\mathbb R^{kn}} \quad &\sm[N]iU_i(\b x_i^*(\b a);\b x_{-i}^*(\b a),\b a)\stepcounter{equation}\tag{\theequation}\label{eq-3}\\ \text{s.t.} \quad&\b x^*_i(\b a)\in\underset{\b x_i\in\mathbb R^k}{\text{argmax}}\ U_i(\b x_i;\b x_{-i}^*(\b a),\b a)\text{ for all }i\in\c N,&\text{(Agents' equilibrium)}\\ &(\b a-\hb a)^T(\b a-\hb a)\leq C,&\text{(Budget constraint)}\\ &\b a^s=\hb a^s\text{ for all }s\in\mathcal K\setminus\c L.&\text{(Intervention restriction)} \end{align*} In this section, our focus is to study the difference in the planner's decisions when the planner faces a restriction in the intervention, which we parametrize by \(l\). Therefore, we extend our previous notation and denote the optimal intervention by \(\b a^*(l,C)\), and the optimal welfare by \(W^*(l,C)\). Additionally, to simplify our analysis and focus on the key issue of a restricted intervention space, we assume that the cross-activity interactions are homogeneous, so that \(\beta_{st}=\beta\) for all \(s\neq t\). \begin{assumption}\label{ass-3} The cross-activity interactions are homogeneous: \(\beta_{st}=\beta\) for all \(s\neq t\), that is, \(\b\Phi=\beta(\b J_k-\b I_k)\). \end{assumption} \subsection{Analysis} The agents' choice of \(\b x^*\) is unaffected by the restriction in the planner's intervention space, so we know from Proposition \ref{pr-1} that the planner chooses a feasible intervention \(\b a\) to maximize \(\b a^T\b P\b a\). Define \(\c H=\c K\setminus\c L\), and decompose \[\b a=\begin{bmatrix}\b a^{\c L}\\\b a^{\c H}\end{bmatrix},\hb a=\begin{bmatrix}\hb a^{\c L}\\\hb a^{\c H}\end{bmatrix},\b P=\begin{bmatrix}\b P^{\c {LL}}&\b P^{\c {LH}}\\\b P^{\c {HL}}&\b P^{\c {HH}}\end{bmatrix}\] in the natural way, so that \(\b a^{\c L}\) and \(\hb a^{\c L}\) are length \(ln\) vectors, and \(\b P^{\c {LL}}\) is an \(ln\times ln\) matrix. The restriction on the intervention means that \(\b a^{\c H}=\hb a^{\c H}\), so we can then rewrite the problem \eqref{eq-3} as \begin{align*} \max_{\b a^{\c L}\in\mathbb R^{ln}} \quad &\b a^T\b P\b a=(\b a^{\c L})^T\b P^{\c {LL}}\b a^{\c L}+2(\hb a^{\c H})^T\b P^{\c {HL}}\b a^{\c L}+(\hb a^{\c H})^T\b P^{\c {HH}}\hb a^{\c H}\stepcounter{equation}\tag{\theequation}\label{eq-4}\\ \text{s.t.} \quad &(\b a^{\c L}-\hb a^{\c L})^T(\b a^{\c L}-\hb a^{\c L})\leq C. \end{align*} Applying Lemma \ref{lem-1} in Appendix \ref{sect-A} to problem \eqref{eq-4}, we obtain the following result for the asymptotic welfare and intervention for large budgets. \begin{proposition} \label{pr-7} Let \(W^*(l,C)\) be the optimal value of the optimization problem \eqref{eq-4}, and suppose that Assumptions \ref{ass-1} and \ref{ass-3} hold. (a) \begin{align*}\lim_{C\to\infty}\frac{W^*(l,C)}{C}&=\lambda_{max}(\b P^{\c {LL}})\\&=\begin{cases}\frac{1-\beta}{2(1-\beta-\lambda_{max}(\delta\b G))^2},&l\geq 2\text{ and } \beta\geq0;\\\frac{l(1+(k-1)\beta)}{2k(1+(k-1)\beta-\lambda_{max}(\delta\b G))^2}+\frac{(k-l)(1-\beta)}{2k(1-\beta-\lambda_{max}(\delta\b G))^2},&\text{otherwise}.\end{cases}\end{align*} (b) \[\lim_{C\to\infty}\frac{\|\mathrm{proj}_{E_{max}(\b P^{\c {LL}})}\ (\b a^{\c L})^*\|}{\|(\b a^{\c L})^*\|}=1.\] \end{proposition} This generalises the cosine similarity result obtained in \cite{ggg} to eigenspaces of arbitrary dimension: in particular, when \(E_{max}(\b P^{\c {LL}})=\text{span}\{\b a_0\}\) is of dimension one, the ratio in Proposition \ref{pr-7} is by definition equivalent to \(\rho((\b a^{\c L})^*, \b a_0)\).
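The prediction in Proposition \ref{pr-7}(a) can also be checked numerically. The sketch below is ours and not part of the analysis: it assumes Python with \texttt{numpy}, uses hypothetical primitives (three activities, a four-agent cycle, intervention allowed on two activities), and builds \(\b P^{\c{LL}}\) directly from Proposition \ref{pr-1} under Assumption \ref{ass-3}.
\begin{verbatim}
import numpy as np

# Hypothetical primitives: k = 3 activities, 4-cycle on n = 4 agents, l = 2 activities open.
k, n, l, beta, delta = 3, 4, 2, -0.2, 0.1
G    = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
Phit = np.eye(k) + beta * (np.ones((k, k)) - np.eye(k))        # Assumption ass-3

# P as in Proposition pr-1; activities are the outer Kronecker factor, so P^{LL} is the
# top-left (l*n) x (l*n) block.
A    = np.kron(Phit, np.eye(n)) - np.kron(np.eye(k), delta * G)
Ainv = np.linalg.inv(A)
P    = 0.5 * Ainv @ np.kron(Phit, np.eye(n)) @ Ainv
PLL  = P[:l * n, :l * n]

# Proposition pr-7(a), "otherwise" branch (here beta < 0):
lmax = np.linalg.eigvalsh(delta * G).max()
pred = (l * (1 + (k - 1) * beta) / (2 * k * (1 + (k - 1) * beta - lmax) ** 2)
        + (k - l) * (1 - beta) / (2 * k * (1 - beta - lmax) ** 2))
print(np.isclose(np.linalg.eigvalsh(PLL).max(), pred))         # True
\end{verbatim}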
We see below in Theorem \ref{th-2} that there are important cases in our multi-activity setting where the maximum eigenvalue occurs with large multiplicity, and this generalisation using the projection operator is useful.\footnote{We could, by following the guidelines in \cite{ggg}, analytically solve and implement this optimal intervention by finding the Lagrange multiplier of the budget constraint.} More practically, Lemma \ref{lem-1} also tells us that any direction in this eigenspace will be almost optimal. Therefore, we can choose a more convenient vector lying in the eigenspace as an approximation for our intervention, and have the following: \begin{theorem} \label{th-2} Suppose Assumptions \ref{ass-1} and \ref{ass-3} hold. Let \(\b u\) be any unit vector in \(E_{max}(\delta\b G)\). Then the following interventions \(\tb a(C)\) satisfy \(\lim_{C\to\infty}\frac{W(\tb a(C))}{W^*(l,C)}=1\):\\ (i) When the activities are complements, i.e., \(\beta<0\), we can choose the intervention \[\tb a=\sqrt{\frac{C}{l}}(\b u^T,\cdots,\b u^T,\b 0^T,\cdots, \b 0^T)^T+\hb a.\] (ii) When the activities are substitutes, i.e., \(\beta>0\),\\ (iia) If \(l>1\), we choose\[\tb a=\sqrt{\frac{C}{2}}(\b u^T,-\b u^T,\b 0^T,\cdots,\b 0^T)^T+\hb a.\] (iib) If \(l=1\), we choose \[\tb a=\sqrt{C}(\b u^T,\b 0^T,\cdots,\b 0^T)^T+\hb a.\] \end{theorem} As in Theorem \ref{th-1}(b), we construct \emph{simple} interventions that only depend on the eigenvectors of the network matrix. Under homogeneous cross-activity interactions, we find that when the activities are complements, the planner should intervene equally in all the activities open to intervention, where the within-activity intervention remains parallel to \(\b u\in E_{max}(\delta\b G)\) as in \cite{ggg}. As the planner becomes able to intervene in more activities, he can spread out the intervention and get closer to the optimal intervention of the unrestricted case. This explains why the optimal welfare increases as the planner intervenes in more activities. On the other hand, when the activities are substitutes, the planner can apply a partial intervention in only two activities and still obtain an almost optimal welfare. This is done by conducting an intervention parallel to \(\b u\in E_{max}(\delta\b G)\) in both activities, but in opposite directions: the marginal utility of one activity is increased, while the other is decreased. This is because in the case of substitute activities, it is inefficient for agents to choose large amounts of multiple activities, so the planner focuses on encouraging one activity while discouraging the other. \begin{remark} In both cases, when \(\delta>0\), the eigenvector \(\b u\in E_{max}(\b G)\) is unique up to multiplication by \(-1\) by the Perron-Frobenius theorem (provided the network is connected). On the other hand, when \(\delta<0\), no such result exists. This is because the dimension of the eigenspace of \(\lambda_{min}(\b G)\) can be up to \(n-1\), such as in the case of a complete network with an equal weight on each edge. Taking these results together, we see that we can only be certain that the eigenspace \(E_{max}(\b P^{\c {LL}})\) is of dimension 1 when \(\delta>0\) and \(\beta<0\). \end{remark} We return to our aim of evaluating the impact of an intervention restriction on the optimal welfare.
To do so, we denote by \(\widehat W\) the original welfare without any intervention, and define the welfare improvement ratio as \[\eta_k(l,C)=\frac{W^*(l,C)-\widehat W}{W^*(k,C)-\widehat W},\ 1\leq l\leq k.\] This ratio captures the fraction of the welfare gain the planner can achieve when the intervention is restricted to \(l\) activities, compared to the case where there is no such restriction. It thus provides a measure of the benefit of intervening in more activities, giving an indication of the usefulness of having more instruments of intervention for the planner. For convenience, we also write \[\eta_k(l,\infty)=\lim_{C\to\infty}\eta_k(l,C).\] Clearly, \(\eta_k(l,C)\) is nondecreasing in \(l\) since an increase in \(l\) expands the feasible set for the planner while leaving the objective unchanged, so \(0\leq\eta_k(l,C)\leq\eta_k(l',C)\leq\eta_k(k,C)=1\) for all \(1\leq l\leq l'\leq k\). To further simplify our expressions, we define \begin{equation}\alpha\equiv \frac{(1+(k-1)\beta)(1-\beta-\lambda_{max}(\delta\b G))^2}{(1-\beta)(1+(k-1)\beta-\lambda_{max}(\delta\b G))^2}.\label{eq-5}\end{equation} Here \(\alpha\) depends on the number of activities, \(k\), the degree of complementarity, \(\beta\), and the network effects, \(\delta\b G\), but is independent of the intervention restriction, \(l\). Under Assumption \ref{ass-1}, we can show that when the activities are complements \((\beta<0)\) then \(\alpha>1\), when the activities are substitutes \((\beta>0)\) then \(0<\alpha<1\), and when the activities are independent \((\beta=0)\) then \(\alpha=1\). We have the following theorem on the behaviour of \(\eta_k\) when the budget is large. \begin{theorem} \label{th-3} Suppose Assumptions \ref{ass-1} and \ref{ass-3} hold.\\ (i) If the activities are substitutes, i.e., \(\beta>0\), then \begin{equation}\eta_k(l,\infty)=\begin{cases} \frac{1}{k}(k-1+\alpha),&l=1;\\1,&l\geq2.\end{cases}\label{eq-6} \end{equation} (ii) If the activities are complements, i.e., \(\beta<0\), then \begin{equation}\eta_k(l,\infty)= \frac{1}{k}\left(l+\frac{k-l}{\alpha}\right).\label{eq-7} \end{equation} \end{theorem} Theorem \ref{th-3} follows directly from Proposition \ref{pr-7}. From equation \eqref{eq-6}, when activities are substitutes, there is a loss in welfare when the planner can only intervene in one activity, but the planner can reach an asymptotically optimal intervention as long as there are at least two activities that can be intervened in. That is, \[\eta_k(1,\infty)<\eta_k(2,\infty)=\eta_k(3,\infty)=\cdots=\eta_k(k,\infty)=1.\] Furthermore, the planner is able to achieve at least \(\frac{k-1}{k}\) of the optimal welfare even when there is only one activity available for intervention ($l=1$), so the welfare loss will be small if the agents are simultaneously participating in many activities. As a result, a social planner does not need to consider implementing a variety of intervention channels across the multiple activities when the activities are substitutes, and it suffices to focus on one or two of them. On the other hand, when the activities are complements, Theorem \ref{th-3}(ii) shows that the planner cannot reach the optimal welfare asymptotically whenever there are activities the planner is unable to intervene in.
That is, \[\eta_k(1,\infty)<\eta_k(2,\infty)<\cdots<\eta_k(k,\infty)=1.\] Furthermore, we see from \eqref{eq-7} that the asymptotic welfare ratio is linear in \(l\), which implies that the gain in relative welfare is linear in the number of activities open to intervention.\footnote{Formally, we obtain $\partial \eta_k(l,\infty)/\partial l=(\alpha-1)/(\alpha k)$.} This demonstrates a stark contrast between the cases where the activities are substitutes or complements. Only when the activities are complements does the planner benefit from intervention in multiple activities. Intervening in more activities induces greater participation across activities and hence improves total welfare through the positive interactions between them. Finally, when the activities are independent, we have \(\beta=0\), thus $\alpha=1$ and \(\eta_k(l,\infty)=1\) for all \(k\) and \(l\). The planner asymptotically reaches the same level of welfare regardless of the number of activities that can be intervened in. In this situation, the planner only cares about the relative consumption across agents for each activity, and not the relative consumption across activities. Theorem \ref{th-3} generalizes the findings in \cite{ggg} to a setting with multiple activities. We see that the presence of interactions between the activities, as well as the restriction on the number of activities that the planner can intervene in, plays a significant role in determining the effectiveness of the intervention. In particular, a 2-activity partial intervention is sufficient in the case where the activities are substitutes, but a partial intervention will not be as effective as a complete intervention when the activities are complements. To complete our analysis, we also obtain the welfare loss for small budgets. \begin{theorem} \label{th-4} Suppose Assumption \ref{ass-1} holds, and the original marginal utilities are not all zero. For all \(\beta\) and \(l\), \[\lim_{C\to0}\eta_k(l,C)=\frac{\|\b P^{\c L}\hb a\|}{\|\b P\hb a\|},\] where \(\b P^{\c L}=\begin{bmatrix}\b P^{\c{LL}}&\b P^{\c{LH}}\end{bmatrix}\) collects the rows of \(\b P\) corresponding to the activities in \(\c L\). In particular, if the activities are homogeneous with \(\hb a^s=\hb a^t\) for all \(s,t\in\c K\), then \[\lim_{C\to0}\eta_k(l,C)=\sqrt{\frac{l}{k}}.\] \end{theorem} In contrast to the result for large \(C\), here the original marginal utility \(\hb a\) is crucial in determining the optimal welfare gain, while there is no significant dependence on the sign of \(\beta\). This echoes the result for the single-activity case in \cite{ggg}. Therefore, in general, the choice of activities that allow for intervention will affect the results. Since \(\sum_{s\in\c L}\|\b P^{\{s\}}\hb a\|^2=\|\b P^{\c L}\hb a\|^2\) and \(\sum_{s\in\c K}\|\b P^{\{s\}}\hb a\|^2=\|\b P\hb a\|^2\), given a fixed number of activities \(l\), choosing \(\c L\) to contain the \(l\) activities with the largest \(\|\b P^{\{s\}}\hb a\|\) yields \(\lim_{C\to0}\eta_k(l,C)\geq\sqrt{\frac{l}{k}}\). This shows that as long as the planner is able to pick the activities to intervene in, there are decreasing marginal returns to the number of activities in which the planner intervenes. This is in contrast to the linear gains found in Theorem \ref{th-3}. Therefore, when the budget is small, it is important for the planner to choose to intervene in the correct activities, after which including other activities will provide a smaller welfare gain. In particular, if the activities are homogeneous, then all choices of \(\c L\) of the same size result in the same welfare, and equality always holds. We illustrate the above results on the optimal welfare for a simple network below.
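As a complement to the example that follows, the asymptotic ratios in Theorem \ref{th-3} are straightforward to tabulate. The sketch below is ours (plain Python; the parameter values are the ones used in the example, and the function name is hypothetical) and simply evaluates \(\alpha\) and \(\eta_k(l,\infty)\).
\begin{verbatim}
# Asymptotic welfare improvement ratios from Theorem th-3.
def eta_inf(k, l, beta, lmax_dG):
    """eta_k(l, infinity); alpha is the constant defined in eq-5."""
    alpha = ((1 + (k - 1) * beta) * (1 - beta - lmax_dG) ** 2
             / ((1 - beta) * (1 + (k - 1) * beta - lmax_dG) ** 2))
    if beta > 0:                                   # substitutes, eq-6
        return (k - 1 + alpha) / k if l == 1 else 1.0
    if beta < 0:                                   # complements, eq-7
        return (l + (k - l) / alpha) / k
    return 1.0                                     # independent activities

# Dyad network with delta = 0.1, so lambda_max(delta G) = 0.1, as in the example below.
for l in (1, 2, 3):
    print(l, round(eta_inf(3, l, 0.4, 0.1), 3), round(eta_inf(3, l, -0.4, 0.1), 3))
\end{verbatim}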
\begin{example} Consider a dyad network\footnote{\(\b G^{dyad}=\begin{pmatrix}0&1\\1&0\end{pmatrix}\).} over three activities, setting \(\delta=0.1\), and suppose the activities are ex ante homogeneous with \(\hb a^1=\hb a^2=\hb a^3=(2,1)^T\). Letting the number of activities that allow for intervention vary, we obtain the following plots for optimal welfare as the budget $C$ ranges from 0 to 40: \begin{figure} \caption{Optimal welfare against budget for varying intervention restrictions. Panel (a): \(\beta=0.4\); panel (b): \(\beta=-0.4\).} \label{fig:2} \end{figure} \end{example} As we expect from Proposition \ref{pr-7}, the graphs in Figures \ref{fig:2}(a) and \ref{fig:2}(b) exhibit linear growth as \(C\) grows large. In Figure \ref{fig:2}(a), the curves for \(l=2\) and \(l=3\) are almost identical, reflecting our result in Theorem \ref{th-3} that when the activities are substitutes \((\beta>0)\), an intervention in two activities is always sufficient to obtain an almost optimal welfare. On the other hand, Figure \ref{fig:2}(b) shows the necessity of intervening in more activities in the case of complementary activities \((\beta<0)\), and the incremental effect of an additional activity is asymptotically constant. Moreover, we see that the gap between the curves \(l=1\) and \(l=2\) is larger than the gap between \(l=2\) and \(l=3\) when \(C\) is small, as we have seen in Theorem \ref{th-4}. \subsection{Comparative Statics} As a counterpart to Proposition \ref{pr-5}, we examine how the normalized welfare \(\Gamma\) is affected by a change in the model parameters in this case. \begin{proposition}\label{pr-8} Suppose Assumption \ref{ass-1} holds, and \(\b\Phi=\beta(\b J_k-\b I_k)\). Then for any \(1\leq l\leq k\), \\ (i) \(\Gamma(\b\Phi,\delta\b G,l)\) is increasing in \(\lambda_{max}(\delta\b G)\).\\ (ii) \(\Gamma(\b\Phi,\delta\b G,l)\) is increasing in \(|\beta|\). \end{proposition} Here, the homogeneity of the cross-activity interactions allows us to extend the monotonicity result to the case of substitute activities. An increase in the intensity of either the cross-activity interactions or the network spillovers will lead to an increase in the effectiveness of the intervention, as these spillovers serve to propagate the effects of the intervention throughout the network and act as a multiplier for the equilibrium welfare gain. Of further interest is the effect of these model parameters on the welfare ratio discussed in Theorem \ref{th-3}. From Theorem \ref{th-3}, we know that \[\lim_{C\to\infty}\frac{\partial\eta_k(l,C,\b G)}{\partial\alpha}=\begin{cases}-\frac{k-l}{k\alpha^2},&\beta<0;\\\frac{1}{k},&\beta>0,\ l=1;\\0,&\text{otherwise}.\end{cases}\] Therefore, we obtain the following comparative statics effects on the welfare ratio. \begin{proposition}\label{pr-9} (i) \(\eta_k(l,\infty)\) is weakly decreasing in \(\lambda_{max}(\delta\b G)\).\\ (ii) \(\eta_k(l,\infty)\) is weakly decreasing in \(|\beta|\). \end{proposition} Proposition \ref{pr-9} shows that the welfare ratio moves in the opposite direction of the welfare gain. Having stronger cross-activity interactions or network spillovers instead decreases the welfare ratio, making it more important for the planner to have access to intervention in all activities. While the interpretation of a change in \(\beta\) is straightforward, we want to link changes in the network structure \(\b G\) to the spectral radius of \(\delta\b G\). We first define a partial ordering on the set of networks.
Given two graphs \(\b G\) and \(\b G'\), we write \(\b G\triangleleft\b G'\) if \(g_{ij}\leq g'_{ij}\) for all \(i,j\). In particular, \(\b G\triangleleft\b G'\) whenever \(\b G\) represents a subgraph of \(\b G'\) over the same set of agents. We have the following standard results on the largest and smallest eigenvalues of \(\b G\). \begin{fact} \label{fact-1} (i) If \(\b G\triangleleft\b G'\), then \(\lambda_{max}(\b G)\leq\lambda_{max}(\b G')\).\\ (ii) If \(\b G\triangleleft\b G'\) and \(\b G'\) is bipartite, then \(\lambda_{min}(\b G)\geq\lambda_{min}(\b G')\). \end{fact} Using this characterization, we can restate part (i) of Proposition \ref{pr-9}. \begin{proposition} \label{pr-10} Let \(\b G\triangleleft\b G'\).\\ (i) If \(\delta>0\), then \(\eta_k(l,\infty,\b G)\geq\eta_k(l,\infty,\b G')\).\\ (ii) If \(\delta<0\) and \(\b G'\) is bipartite, then \(\eta_k(l,\infty,\b G)\geq\eta_k(l,\infty,\b G')\). \end{proposition} Therefore, increasing the connectivity of the network will decrease the welfare ratio when either \(\delta>0\), or \(\delta<0\) and \(\b G'\) is bipartite. That is, when the network spillovers are positive, a denser network would result in a greater welfare loss from a partial intervention, but the effect is in general ambiguous when the spillovers are negative. \section{Extensions} \label{sect-5} \subsection{Heterogeneous Network Externalities} In the previous sections, we have imposed that the strength of the network spillovers \(\delta\) is the same for each activity. We now relax this assumption, so that each activity \(s\) has its own corresponding coefficient \(\delta_s\). That is, each agent now receives a total utility of \begin{equation}U_i= \sum_{s\in\mathcal{K}}a_i^sx_i^s-\left(\frac{1}{2}\sum_{s\in\mathcal{K}}(x_i^s)^2+\frac{1}{2}\sum_{s\in\mathcal{K}}\sum_{\substack{t\in\mathcal{K}\\t\neq s}}\beta_{st}x_i^sx_i^t\right)+\sum_{s\in\mathcal{K}}\sum_{j\in\mathcal{N}} \delta_sg_{ij}x_i^sx_j^s.\label{eq-h}\end{equation} It is useful to define a new matrix of spillovers \(\b \Delta=\mathrm{diag}(\delta_1,\cdots,\delta_k)\) to capture the varying network externalities. The agents' equilibrium choice of actions, following a similar argument to that in Proposition \ref{pr-1}, can be shown to be \begin{equation} \b x^*=[\tb\Phi\otimes\b I_n-\b\Delta\otimes\b G]^{-1}\b a, \label{eq-h-x} \end{equation} with the corresponding total welfare given by \begin{equation} W(\b a)=\frac{1}{2}\b a^T\underbrace{[\tb \Phi\otimes\b I_n-\b \Delta\otimes\b G]^{-1}(\tb \Phi\otimes\b I_n)[\tb \Phi\otimes\b I_n-\b \Delta\otimes\b G]^{-1}}_{:=\c P}\b a. \label{eq-h-w} \end{equation} Here we need to impose the following assumption, which generalizes Assumption \ref{ass-1} to ensure that the model is well behaved under heterogeneous \(\delta_s\). \setcounter{assumption}{0} \renewcommand\theassumption{\arabic{assumption}'} \begin{assumption}\label{ass-4} The matrix \(\tb\Phi\otimes\b I_n-\b\Delta\otimes\b G\) is positive definite. \end{assumption} Assumption \ref{ass-4} is equivalent to the condition that \(\tb\Phi-t\b\Delta\succ 0\) for all eigenvalues \(t\) of \(\b G\).\footnote{For symmetric matrices \(\b A\) and \(\b B\), we write \(\b A\succ\b B\) if \(\b A-\b B\) is positive definite.} Under our baseline model of homogeneous network externalities, that is, \(\delta_s=\delta\) for all \(s\), Assumption \ref{ass-4} reduces to the condition \( \lambda_{min}(\tb \Phi)>\lambda_{max}(\delta\b G) \) and we recover Assumption \ref{ass-1}.
Also, when \(\delta_s>0\) for all \(s\), Assumption \ref{ass-4} reduces to \(\tb\Phi-\lambda_{max}(\b G)\b\Delta\succ0\), while when \(\delta_s<0\) for all \(s\), Assumption \ref{ass-4} instead reduces to \(\tb\Phi-\lambda_{min}(\b G)\b\Delta\succ0\). To state the optimal intervention under heterogeneous \(\delta_s\), we define the matrix \[\b M(t)=\tb\Phi-2t\b\Delta+t^2\b\Delta\tb\Phi^{-1}\b\Delta,\quad t\in\mathbb R.\] In our analysis, \(\b M(t)\) plays a similar role to \(\tb\Phi\), but takes into account the effects of \(\b\Delta\) to an extent parametrized by a scalar \(t\). \begin{proposition}\label{pr-12} Suppose Assumption \ref{ass-4} holds, and \(\hb a=\b 0\).\footnote{For general \(\hb a\), asymptotic optimality results for the interventions \(\b a^*\) in Proposition \ref{pr-12} under large $C$ could be stated in the same spirit as Theorem \ref{th-1}; we omit the details for brevity.} (a) If \(\delta_s>0\) for all \(s\), then the optimal intervention satisfies \(\b a^*=\sqrt{C}\b u\otimes\b v\) for some \(\b v\in E_{max}(\b G)\) and \(\b u\in E_{min}(\b M(\lambda_{max}(\b G)))\). (b) If \(\delta_s<0\) for all \(s\), then the optimal intervention satisfies \(\b a^*=\sqrt{C} \b u\otimes\b v\) for some \(\b v\in E_{min}(\b G)\) and \(\b u\in E_{min}(\b M(\lambda_{min}(\b G)))\). \end{proposition} Similarly to Theorem \ref{th-1}, Proposition \ref{pr-12} is obtained by a spectral analysis of the matrix \(\c P\), which we provide in the following proposition: \begin{proposition}\label{pr-11} Let \(\{(t_i,\b v_i)\}_{i=1}^n\) be a set of orthogonal eigenpairs of \(\b G\). For each \(i\), let \(\{(\mu_{ji},\b u_{ji})\}_{j=1}^k\) be a set of orthogonal eigenpairs of the matrix \(\b M(t_i)\). Then \(\{(\mu_{ji}^{-1},\b u_{ji}\otimes\b v_i)\}\) is a set of orthogonal eigenpairs of \(\c P\). \end{proposition} Proposition \ref{pr-11} identifies \(kn\) orthogonal eigenpairs of \(\c P\), so they must contain all the eigenvalues of \(\c P\). By applying Lemma \ref{lem-1}, Proposition \ref{pr-12} follows by selecting the largest eigenvalue of \(\c P\) and choosing the intervention in its corresponding eigenspace. Proposition \ref{pr-12} shows that our previous results on the within-activity intervention are robust to including heterogeneity in the strength of the network spillovers. If the network generates positive spillovers for each activity, the optimal intervention will still follow the first eigenvector of \(\b G\), while if the network generates negative spillovers for each activity, the optimal intervention will follow the last eigenvector of \(\b G\). The optimal across-activity intervention no longer follows the principal components of \(\tb\Phi\), but instead can be expressed in terms of the spectral properties of a new matrix \(\b M(t)\), which captures both the cross-activity interactions \(\tb\Phi\) and the heterogeneous within-activity spillovers \(\b\Delta\). We provide a simple example of the effects of varying \(\b\Delta\) on the optimal across-activity intervention below. \begin{example} Let \(k=2, \tb\Phi=\begin{bmatrix}1&-0.3\\-0.3&1\end{bmatrix},\ \delta_1=0.1\), and \(\b G\) be an arbitrary cycle graph. The table below illustrates the optimal intervention for varying \(\delta_2\).\footnote{Note \(\lambda_{\max}(\b G)=2\) with corresponding eigenvector \( \b v=(\frac{1}{\sqrt n},\cdots,\frac{1}{\sqrt n})^T\). For each $\delta_2$, the vector $\b u$ in the table is given by the last eigenvector of $\b M(\lambda_{max}(\b G))=\b M(2)$ by Proposition \ref{pr-12}(a).
} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|} \hline\(\delta_2\) & 0.1 & 0.2 & 0.3 \\ \hline\(\b u\)& \((0.707,0.707)^T\) & \((0.483,0.876)^T\) & \((0.390,0.921)^T\)\\ \hline\(\frac{u_1}{u_2}\)& 1 & 0.551 & 0.423\\ \hline \end{tabular} \end{table} \end{example} By Proposition \ref{pr-12}, the cross-activity intervention is proportional to the eigenvector \(\b u\), so \(\frac{u_1}{u_2}\) represents the ratio of the amount of intervention between activities 1 and 2 (see Corollary \ref{co-1}). When \(\delta_2=0.1\), we have homogeneous spillovers as in our baseline model, and the planner performs an equal amount of intervention in each activity. Intuitively, as \(\delta_2\) increases, we see that the ratio \(\frac{u_1}{u_2}\) decreases, as the larger spillovers in activity 2 lead to an increase in the effectiveness of intervention, and a larger proportion of the planner's budget will be allocated to activity 2. \subsection{Nonnegative Interventions} We return to the baseline model \eqref{eq-objective}, but now assume that the planner is only able to conduct a nonnegative intervention. That is, we must have \(\b a\geq\hb a\). For instance, for a planner seeking an intervention in education, a reduction in an individual's returns to education could be seen as unethical. Therefore, it is reasonable for a planner to be bound by such a constraint in an applied setting. For simplicity, we suppose that \(\hb a=\b 0\). Applying Proposition \ref{pr-1}, we can write this problem as \begin{align} \max_{\b a\in\mathbb R^{kn}}&\qquad W(\b a)=\b a^T\b P\b a\label{eq-positive}\\\text{s.t.}&\qquad \b a^T\b a\leq C,\ \b a\geq\b 0.\nonumber \end{align} When \(\delta>0\) and \(\beta_{st}\leq0\) for all \(s,t\), we know from Proposition \ref{pr-3} and Example \ref{ex-1} that the optimal unconstrained intervention can be chosen to be nonnegative. Therefore, the additional constraint is not restrictive, and does not affect the optimal value. However, if \(\delta<0\) or \(\beta_{st}>0\) for some \(s,t\), the optimal unconstrained intervention will typically be negative in some component, and is then infeasible. We can still perform a spectral analysis to obtain a necessary condition: \begin{proposition}\label{pr-positive} Suppose \(\b a^*\) solves \eqref{eq-positive}. Let the support of \(\b a^*\) be the set of indices \(S=\{i:a_i^*>0\}\), and let \(\b a_S^*\) be the subvector of \(\b a^*\) induced by the indices in \(S\). Similarly, let \(\b P_S\) be the principal submatrix of \(\b P\) induced by the indices in \(S\). Then \(\b a_S^*\in E_{max}(\b P_S)\), and \(\frac{W(\b a^*)}{C}=\lambda_{max}(\b P_S)\). \end{proposition} Proposition \ref{pr-positive} implies that we can obtain the optimal intervention by checking each principal submatrix of \(\b P\) for a nonnegative leading eigenvector, and taking the largest eigenvalue among the resulting candidates. While this process may seem inefficient, the following proposition suggests that we will not be able to do much better. \begin{proposition}\label{pr-14} The problem \eqref{eq-positive} is NP-hard. \end{proposition} Consequently, adding the nonnegativity constraints to the intervention problem increases its computational complexity. Finally, we make use of two sample networks from \cite{ggg} to illustrate the optimal nonnegative intervention in the single activity case, i.e., $k=1$. Both of these graphs can be checked to be 3-regular.
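For small networks, the enumeration suggested by Proposition \ref{pr-positive} can be carried out directly. The sketch below is ours (it assumes Python with \texttt{numpy}; the function name is hypothetical): it searches over supports, and its exponential running time is consistent with Proposition \ref{pr-14}, so it is intended only for examples of the size considered below.
\begin{verbatim}
import numpy as np
from itertools import combinations

def best_nonnegative_intervention(P):
    """Enumerate supports S as in Proposition pr-positive (exponential in the size of P).
    Ties in the leading eigenvalue of a principal submatrix are ignored for simplicity."""
    m = P.shape[0]
    best_val, best_dir = -np.inf, None
    for r in range(1, m + 1):
        for S in combinations(range(m), r):
            vals, vecs = np.linalg.eigh(P[np.ix_(S, S)])
            v = vecs[:, -1]                      # leading eigenvector of P_S
            if v.sum() < 0:
                v = -v                           # eigenvectors are defined up to sign
            if (v >= -1e-12).all() and vals[-1] > best_val:
                best_val, best_dir = vals[-1], np.zeros(m)
                best_dir[list(S)] = v
    return best_val, best_dir                    # lambda_max(P_S) and a* / sqrt(C)
\end{verbatim}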
\begin{example} \label{ex-6} The following diagrams illustrate a representative optimal intervention\footnote{Multiple optimal interventions exist due to the symmetries of the graph, but they are equivalent up to a graph automorphism.} for \(\delta=-0.05\) and \(\delta=-0.2\) (Fig. \ref{fig-4}). Nodes coloured black receive zero intervention, while red nodes are labelled by their normalized optimal intervention, \(\frac{\b a^*}{\sqrt C}\). \begin{figure} \caption{Optimal nonnegative interventions} \label{fig-4} \end{figure} Observe that when \(|\delta|\) is small, the nodes with positive intervention form an independent set, but this is not the case when \(|\delta|\) is large. Recall from Proposition \ref{pr-1} (or see \cite{bcz}) that in the single-activity case, we can simplify the expression for welfare as \(2\b a^T\b P\b a=\b a^T[\b I-\delta\b G]^{-2}\b a\), so a Taylor expansion at \(\delta=0\) gives \[2\b a^T\b P\b a\approx\b a^T\b a+2\delta\b a^T\b G\b a.\] Recall $\b a^T\b a= C$. Intuitively, for \(\delta\) sufficiently close to zero, we maximize the linear term \(2\delta\b a^T\b G\b a\). Since \(\delta<0\), it suffices to minimize \(\b a^T\b G\b a=\sum_{i,j} g_{ij} a_i a_j\), which equals zero if and only if the support of \(\b a\) is an independent set of $\b G$. See panel (a) of the above figure. On the other hand, when \(|\delta|\) is large, the higher order terms in the Taylor expansion\footnote{$2\b a^T\b P\b a=\b a^T\b a+2\delta\b a^T\b G\b a+3\delta^2\b a^T\b G^2\b a+\cdots.$} become more significant, and we observe a trade-off between minimizing the odd-length paths which decrease total welfare, and maximizing the even-length paths which increase welfare. We expect similar results to hold for other graphs as well. \end{example} \subsection{An Alternative Intervention Model: Pricing} Besides welfare-maximizing interventions, we show that our model can also find applications in other contexts. Here, we consider a model where a monopolist provides multiple goods to a network of consumers, as a generalization of \cite{cyz2}. We assume that the firm faces constant marginal costs, which we denote by the vector \(\b c=(c_1^1,\cdots,c_n^k)^T\), where \(c^s_i\) represents the marginal cost of producing good \(s\) for consumer \(i\). Furthermore, we assume that \(a_i^s>c_i^s\) for all \(s,i\) to ensure that the firm can obtain positive profits. The firm then chooses a price vector \(\b p=(p_1^1,\cdots,p_n^k)^T\), where \(p_i^s\) represents the price of good \(s\) to consumer \(i\). Consumer \(i\) thus incurs a total cost of \(\sum_{s=1}^k p_i^sx_i^s\) from purchasing the quantity \(\b x_i\), resulting in a total utility of \[U_i= \sum_{s\in\mathcal{K}}(a_i^s-p_i^s)x_i^s-\left(\frac{1}{2}\sum_{s\in\mathcal{K}}(x_i^s)^2+\frac{1}{2}\sum_{s\in\mathcal{K}}\sum_{\substack{t\in\mathcal{K}\\t\neq s}}\beta_{st}x_i^sx_i^t\right)+\sum_{s\in\mathcal{K}}\sum_{j\in\mathcal{N}} \delta_sg_{ij}x_i^sx_j^s.\] For a given price vector $\b p$, the equilibrium demand, using \eqref{eq-h-x}, is given by \[\b x^*=[\tb\Phi\otimes\b I_n-\b\Delta\otimes\b G]^{-1}(\b a-\b p).\] The firm thus solves the profit-maximizing problem \begin{equation}\label{eq-firm}\max_{\b p\in\mathbb R^{kn}}\ \pi=(\b p-\b c)^T\b x^*=(\b p-\b c)^T[\tb\Phi\otimes\b I_n-\b\Delta\otimes\b G]^{-1}(\b a-\b p).\end{equation} \begin{proposition}\label{pr-13} Suppose Assumption \ref{ass-4} holds.
The solution to \eqref{eq-firm} is \[\b p^*=\frac{\b a+\b c}{2},\] with a corresponding maximal monopoly profit \[\pi^*=\frac{1}{4}(\b a-\b c)^T[\tb\Phi\otimes\b I_n-\b \Delta\otimes\b G]^{-1}(\b a-\b c).\] \end{proposition} That is, the firm sets prices equal to the average of the intrinsic marginal utilities of the consumers and the marginal costs of production. The network structure and cross-activity interactions are irrelevant in determining optimal pricing, and only contribute to the equilibrium quantities and profit. This network-independent pricing result generalizes the optimal pricing problem for a single good (see \cite{cbo} and \cite{bloch2013pricing}) to multiple products, and we find that similar results hold in this setting. We note that the network independence result here is due to certain special features of the model (such as linear demand and no competition). See \cite{Bloch2015} for a survey of targeted pricing in social networks and \cite{cyz2,O2p2022} for settings with competitive pricing.\footnote{Relatedly, \cite{ggg3} study taxation in a network market under an oligopolistic setting, and find that optimal taxes and surplus can also be described in terms of the spectral properties of the network.} \section{Conclusion}\label{sect-6} We analyse the problem of a welfare-maximizing intervention on a heterogeneous multiplex network using principal component analysis. By solving a general quadratic programming problem, we show that when the planner's budget is large, the optimal intervention and welfare can be simply described by the spectral properties of two matrices, one representing the within-network spillovers among the agents, and the other representing the strategic interdependence between activities. We study the effect of this interdependence on welfare, and find that when activities are pairwise complements, the marginal welfare gained from the planner's budget increases with the degree of complementarity. Furthermore, we establish a class of strategic interdependencies that exhibit the same property. We also consider the problem for small budgets, and show that in contrast, the optimal intervention becomes dependent on the original marginal utilities rather than on the spectral decomposition. However, our paper only considers the situation where the underlying network is the same across each activity, and future research may be able to weaken this strong assumption. As an application of our model, we have also studied a related problem where the planner is unable to intervene in all activities. We analyse the optimal welfare under such a constraint, and find that there are significant differences in the results depending on whether activities are substitutes or complements, as well as the size of the budget. In particular, we find that when activities are substitutes, the planner has little incentive to intervene in more than two activities as long as all the spillovers are taken into account when deciding on the choice of intervention. This can have useful policy implications, as a planner does not have to spend resources on developing instruments for intervention in each activity.
In this extension, we focus on the case where the interaction between each pair of activities is the same, and it would be interesting for future research to analyse the \emph{key activity/layer problem} in situations where the cross-activity interactions are heterogeneous.\footnote{We are grateful to Francis Bloch for this comment.} We believe that the richer network structures offered by such multiplex networks can lead to further interesting results, and allow for the modelling of more economic scenarios. Our paper focuses on studying a formulation of the optimal intervention problem. Further research can be done to extend other results on single-activity networks to the multi-activity case, such as other issues in intervention, or to study network formation and design in multiplex networks. See, for instance, \cite{cheng2021theory} on how to sustain cooperation among players embedded in multiple social relations, and \cite{joshi2020network} and \cite{billand2021model} on models of formation of multilayer networks. \appendix \begin{center}{\Large \bf APPENDIX }\end{center} \section{Two Useful Lemmas}\label{sect-A} This section contains the proof of Lemma \ref{lem-1}, which characterizes the solutions to a general quadratic programming problem. For completeness, we include Lemma \ref{lem-3} as an extension of Lemma \ref{lem-1} to the case of intermediate budgets, which largely follows \cite{ggg}. \textbf{Proof of Lemma \ref{lem-1}.} Let \(\b y=C^{-1/2}\b x\), so we solve \begin{align*} \max_{\mathbf y\in\mathbb R^{n}} \quad &C\b y^T\b S\b y+\sqrt C\b v^T\b y\\ \text{s.t.} \quad &\b y^T\b y\leq 1. \end{align*} In general, we can solve for the optimal \(\b y\) by maximizing the Lagrangian \[\c L=C\b y^T\b S\b y+\sqrt C\b v^T\b y+\lambda(\b y^T\b y-1).\] This has first order conditions \begin{align*}2C\b S\b y+\sqrt C\b v+2\lambda\b y&=0\\ \b y^T\b y&=1,\end{align*} but there is no closed form solution to the system. We thus focus our analysis on the cases \(C\to\infty\) and \(C\to0\).\\ (a) We bound each term of the objective function individually. It is known that the problem \begin{align*} \max_{\mathbf y\in\mathbb R^{n}} \quad &C\b y^T\b S\b y\\ \text{s.t.} \quad &\b y^T\b y\leq 1. \end{align*} has a maximum of \(\lambda_{max}(\b S)C\), which is attained for any \(\b y\in E_{max}(\b S)\) such that \(\|\b y\|=1\).\\ Also, for any \(\b y\) with \(\b y^T\b y\leq 1\), we have \(|\sqrt C\b v^T\b y|\leq\sqrt C\|\b v\|\|\b y\|\).\\ Therefore, \(V^*\in[\lambda_{max}(\b S)C-\sqrt C\|\b v\|,\ \lambda_{max}(\b S)C+\sqrt C\|\b v\|].\) Since \(\|\b v\|\) is a fixed constant, we have \(\lim_{C\to\infty}\frac{V^*}{C}=\lambda_{max}(\b S)\).\\ Also, \(V(\sqrt C\b u)\geq\lambda_{max}(\b S)C-\sqrt C\|\b v\|\) for all unit \(\b u\in E_{max}(\b S)\), so \(\lim_{C\to\infty}\frac{V(\sqrt C\b u)}{V(\b x^*)}=1\).\\ (b) Since \(\b S\) is symmetric, we can write \(\b S=\b U\b D\b U^T\), where \(\b D\) is a diagonal matrix consisting of the eigenvalues of \(\b S\), and \(\b U=[\b u_1,\b u_2,\cdots,\b u_{n}]\) is an orthogonal matrix consisting of the corresponding eigenvectors. If \(\b S\) has only one eigenvalue the result is trivial, so let \(\theta\) be the second largest eigenvalue of \(\b S\).\\ Since \(\b U\) forms a basis of \(\mathbb{R}^{n}\), we can uniquely write \(\frac{\b x}{\|\b x\|}=\sum_{i=1}^{n}k_i\b u_i\) for some \(k_i\in[-1,1]\) and \(\sum_{i=1}^{n}k^2_i=1\).
Then \begin{align*} \frac{\b x^T\b S\b x}{\|\b x\|^2}&=\left(\sum_{i=1}^{n}k_i\b u_i\right)^T\b S\left(\sum_{i=1}^{n}k_i\b u_i\right)\\ &=\left(\sum_{i=1}^{n}k_i\b u_i\right)^T\left(\sum_{i=1}^{n}k_id_{ii}\b u_i\right)\\ &=\sum_{i=1}^{n}k^2_id_{ii}\b u_i^T\b u_i+\sum_{i\neq j}k_ik_jd_{jj}\b u_i^T\b u_j\\ &=\sum_{i=1}^{n}k^2_id_{ii}+0\\ &\leq\sum_{\{\b u_i\in E_{max}(\b S)\}}k_i^2\lambda_{max}(\b S)+\sum_{\{\b u_i\notin E_{max}(\b S)\}}k_i^2\theta\\ &=\lambda_{max}(\b S)-\left(1-\sum_{\{\b u_i\in E_{max}(\b S)\}}k_i^2\right)(\lambda_{max}(\b S)-\theta) \end{align*} Let \(\b A\) be the submatrix of \(\b U\) containing the columns of \(\b U\) which lie in \(E_{max}(\b S)\). Then the columns of \(\b A\) are orthonormal, so \(\b A^T\b A=\b I\) and \(\b A\b A^T\) is the orthogonal projection onto \(E_{max}(\b S)\); hence \begin{align*} \|\mathrm{proj}_{E_{max}(\b S)}\b x\|^2&=\|\b A(\b A^T\b A)^{-1}\b A^T\b x\|^2 =\b x^T\b A\b A^T\b A\b A^T\b x =\b x^T\b A\b A^T\b x =\|\b A^T\b x\|^2\\ &=\|\b x\|^2\sum_{\b u_i\in E_{max}(\b S)}\left(\b u_i^T\sum_{j=1}^{n}k_j\b u_j\right)^2\\ &=\|\b x\|^2\sum_{\b u_i\in E_{max}(\b S)}k_i^2 \end{align*} As \(C\to\infty\), from part (a), \(\frac{\b x^T\b S\b x}{\|\b x\|^2}\to\lambda_{max}(\b S)\) for \(\b x=\b x^*\), hence we have \[\sum_{\b u_i\in E_{max}(\b S)}k^2_i=\frac{\|\mathrm{proj}_{E_{max}(\b S)}\b x\|^2}{\|\b x\|^2}\to 1.\]\\ (c) We again bound each term of the objective function individually. Following part (a), we have \(C\b y^T\b S\b y\in[-\lambda_{max}(\b S) C,\lambda_{max}(\b S) C]\), while \(\sqrt C\b v^T\b y\) has maximum \(\sqrt C\|\b v\|\). Thus \(V^*\in[\sqrt C\|\b v\|-\lambda_{max}(\b S) C, \sqrt C\|\b v\|+\lambda_{max}(\b S) C].\) Given that \(\|\b v\|>0\), we have \(\lim_{C\to0}\frac{V(\b x^*)}{\sqrt C}=\|\b v\|\).\\ (d) Let \(\epsilon>0\). For any \(C\) with \(\sqrt C<\frac{\epsilon\|\b v\|}{2\lambda_{max}(\b S)}\), we must have \begin{align*}&V(\b y^*)\geq\sqrt{C}\|\b v\|-\lambda_{max}(\b S) C\\ \implies&C(\b y^*)^T\b S\b y^*+\sqrt C\b v^T\b y^*\geq\sqrt{C}\|\b v\|-\lambda_{max}(\b S) C\\ \implies&\lambda_{max}(\b S) C +\sqrt C\b v^T\b y^*\geq\sqrt{C}\|\b v\|-\lambda_{max}(\b S) C\\ \implies&\b v^T\b y^*\geq\|\b v\|-2\lambda_{max}(\b S)\sqrt C\\ \implies&\frac{\|\mathrm{proj}_{\b v}\b y^*\|}{\|\b y^*\|}=\frac{\b v^T\b y^*}{\|\b v\|\|\b y^*\|}\geq1-\frac{2\lambda_{max}(\b S)\sqrt C}{\|\b v\|}>1-\epsilon. \end{align*} Since \(\b y^*\) is a scalar multiple of \(\b x^*\), the desired result follows. \begin{lemma}\label{lem-3} Let \(\b S\) be a positive definite matrix. Let \(\b S=\b U\b D\b U^T\) be a diagonalization of \(\b S\), so \(d_{ii}=\lambda_i(\b S)\) are the eigenvalues of \(\b S\). Write \(\bb v=\b U^T\b v\), \(\bb x=\b U^T\b x\). The solution to \begin{align*} \max_{\mathbf x\in\mathbb R^{n}} \quad &V(\b x)=\mathbf x^T\mathbf{Sx}+\b v^T\b x\\ \text{s.t.} \quad &\b x^T\b x\leq C \end{align*} is given by \[\bar x_i=\frac{\bar v_i}{2(\mu-\lambda_i(\b S))},\] where \(\mu\) satisfies the equation \[\sum_{i=1}^n\frac{\bar v_i^2}{4(\mu-\lambda_i(\b S))^2}=C.\] \end{lemma} \textbf{Proof of Lemma \ref{lem-3}.} We have \(\b x^T\b S\b x=\bb x^T\b D\bb x=\sum_{i=1}^n\lambda_i(\b S)\bar x_i^2,\) so the original problem can be reformulated as \begin{align*} \max_{\bb x\in\mathbb R^{n}} \quad & \sum_{i=1}^n\left(\lambda_i(\b S)\bar x_i^2+\bar v_i\bar x_i\right)\\ \text{s.t.} \quad &\sum_{i=1}^n \bar x_i^2\leq C.
\end{align*} Consider the Lagrangian \[L=\sum_{i=1}^n\left(\lambda_i(\b S)\bar x_i^2+\bar v_i\bar x_i\right)+\mu(C-\sum_{i=1}^n \bar x_i^2).\] The first order conditions are \[\frac{\partial L}{\partial\bar x_i}=2\lambda_i(\b S)\bar x_i+\bar v_i-2\mu\bar x_i=0\ \forall i,\] so \[\bar x_i=\frac{\bar v_i}{2(\mu-\lambda_i(\b S))}.\] The constraint \(\b x^T\b x\leq C\) thus implies that \(\mu\) satisfies the equation \[\sum_{i=1}^n\frac{\bar v_i^2}{4(\mu-\lambda_i(\b S))^2}=C.\] When applied to the quadratic program \eqref{eq-objective}, Lemma \ref{lem-3} implicitly determines the optimal intervention \(\b a^*\) and welfare \(W(\b a^*)\) in terms of the spectral decomposition of \(\b P\). \section{Proofs}\label{sect-B} \textbf{Proof of Proposition \ref{pr-1}.} Adapting from \cite{cyz}, the agents' choice \(\b x\) satisfies the first order conditions \[a_i^s-x_i^s-\sum_{t\neq s}\beta_{st}x_i^t+\delta\sum_{j\neq i}g_{ij}x_j^s=0\] for all \(i\in\c N,\ s\in\c K\). In matrix form, this is \[\b 0=\b a-\b x-(\b \Phi\otimes \b I_n)\b x+(\b I_k\otimes\delta\b G)\b x=\b a-[\b I_k\otimes\b I_n+\b \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]\b x,\] so \[\b x=[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}\b a.\] Therefore, the total welfare across all agents is \begin{align*} W(\b a)&=\sum_{i\in\c N}\left(\sum_{s\in\c K} a_i^sx_i^s-\frac{1}{2}\sum_{s\in\c K}(x_i^s)^2-\frac{1}{2}\sum_{s\in\mathcal{K}}\sum_{\substack{t\in\mathcal{K}\\t\neq s}}\beta_{st}x_i^sx_i^t+\delta\sum_{s\in\mathcal{K}}\sum_{j\in\mathcal{N}}g_{ij}x_i^sx_j^s\right)\\ &=\b a^T\b x-\frac{1}{2}\b x^T\b I_{kn}\b x-\frac{1}{2}\b x^T(\b \Phi\otimes\b I_n)\b x+\b x^T(\b I_k\otimes\delta\b G)\b x\\ &=\b x^T\left[(\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G)-\frac{1}{2}(\b I_k\otimes\b I_n)-\frac{1}{2}(\b \Phi\otimes\b I_n)+(\b I_k\otimes\delta\b G)\right]\b x\\ &=\frac{1}{2}\b x^T(\tb \Phi\otimes\b I_n)\b x\\ &=\frac{1}{2}\b a^T[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}(\tb \Phi\otimes\b I_n)[\tb \Phi\otimes\b I_n-\b I_k\otimes\delta\b G]^{-1}\b a. \end{align*} \textbf{Proof of Theorem \ref{th-1}.} The expression in \eqref{eq-eigenvalues} is decreasing in \(\lambda(\tb\Phi)\) and increasing in \(\lambda(\delta\b G)\). Therefore, the expression is maximized at the smallest eigenvalue of \(\tb\Phi\) and the largest eigenvalue of \(\delta\b G\). Theorem \ref{th-1} then follows from an application of Lemma \ref{lem-1}. \textbf{Proof of Proposition \ref{pr-2}.} Proposition \ref{pr-2} follows directly from Theorem \ref{th-1}(a). \textbf{Proof of Corollary \ref{co-1}.} Since \(\lim_{C\to\infty}\rho(\b u\otimes\b v,\b a^*)=\pm 1\) and \(\hb a\) is fixed, we have \[\lim_{C\to\infty}\frac{B^s(\b a^*)}{C}=\lim_{C\to\infty}\frac{\sum_{i=1}^n(\b a^{s*}_i-\hb a^s_i)^2}{C}=\lim_{C\to\infty}\frac{C\sum_{i=1}^n(u_sv_i)^2}{C}=(u_s)^2.\] The other limits can be obtained similarly. \textbf{Proof of Proposition \ref{pr-3}.} Let \(\b M=\b I_k-\b \Phi\). Then \(\b M\) is a nonnegative matrix, so by the Perron-Frobenius theorem, the largest eigenvalue \(\lambda_{max}(\b M)\) has a corresponding nonnegative eigenvector \(\b u\). Note that \(\lambda_{max}(\b M)=1-\lambda_{min}(\b \Phi)\), so \(\b u\) is the desired nonnegative eigenvector in \(E_{min}(\b \Phi)\). \textbf{Proof of Proposition \ref{pr-4}.} Here \(\tb \Phi=\b I_k+\b\Phi\) is a nonnegative matrix, so by the Perron-Frobenius theorem, its largest eigenvalue is at least its smallest row sum, which is at least 1. Furthermore, \(\mathrm{tr}(\tb \Phi)=k\).
Suppose \(\lambda_{max}(\tb \Phi)=1\); then, since \(\mathrm{tr}(\tb\Phi)=k\), all the eigenvalues of \(\tb \Phi\) are 1, but that means that \(\beta_{st}=0\) for all \(s,t\), a contradiction. Thus \(\lambda_{max}(\tb \Phi)>1\), and \(\lambda_{min}(\tb \Phi)<1\). Let \(\b u\) be an element of \(E_{min}(\tb \Phi)=E_{min}(\b \Phi)\). If \(\b u\geq\b 0\), then \(\lambda_{min}(\tb \Phi)\b u=\tb \Phi\b u\geq\b I_k\b u=\b u\) entrywise, a contradiction. Thus \(\b u\) must have both positive and negative components. \textbf{Proof of Proposition \ref{pr-5}.} Since \(\beta_{st}\leq0\) for all \(s,t\), the matrix \(-\b\Phi\) is nonnegative, and by the Perron-Frobenius theorem \(\lambda_{max}(-\b \Phi)=-\lambda_{min}(\b \Phi)\) is weakly increasing in the entries of \(-\b\Phi\), so \(\lambda_{min}(\b \Phi')\leq\lambda_{min}(\b \Phi)\). By Proposition \ref{pr-2}, \(\Gamma(\b \Phi,\delta\b G)\) is decreasing in \(\lambda_{min}(\b \Phi)\), so \(\Gamma(\b \Phi',\delta\b G)\geq \Gamma(\b \Phi,\delta\b G)\). \textbf{Proof of Proposition \ref{pr-6}.} This result follows from an application of parts (c) and (d) of Lemma \ref{lem-1} on the optimization problem \eqref{eq-objective}. \textbf{Proof of Proposition \ref{pr-7}.} Let \[\mathbf M_+=[(1+(k-1)\beta)\mathbf I_n-\delta \mathbf G]^{-1},\text{ and }\mathbf M_-=[(1-\beta)\mathbf I_n-\delta \mathbf G]^{-1}.\] Then (see \cite{cyz})\[\b P^{\mathcal{LL}}=\frac{1-\beta}{2}\b I_l\otimes\b M_-^2+\frac{1}{2k}\b J_l\otimes[(1+(k-1)\beta)\b M_+^2-(1-\beta)\b M_-^2].\] We can apply a similar method to that in Theorem \ref{th-1} (see \eqref{eq-eigenvalues}) to obtain the eigenvalues of \(\b P^{\mathcal{LL}}\) as \[\frac{\lambda_j(\b J_l)(1+(k-1)\beta)}{2k(1+(k-1)\beta-\lambda_i(\delta\b G))^2}+\frac{(k-\lambda_j(\b J_l))(1-\beta)}{2k(1-\beta-\lambda_i(\delta\b G))^2}\text{ for }i=1,\dots,n,\ j=1,\dots,l.\] When \(l>1\), the eigenvalues of \(\b J_l\) are \(l\) and \(0\), so the eigenvalues of \(\b P^{\c {LL}}\) are \[\frac{l(1+(k-1)\beta)}{2k(1+(k-1)\beta-\lambda_i(\delta\b G))^2}+\frac{(k-l)(1-\beta)}{2k(1-\beta-\lambda_i(\delta\b G))^2}\text{ and }\frac{1-\beta}{2(1-\beta-\lambda_i(\delta\b G))^2}\] for \(1\leq i\leq n\).\\ When \(l=1\), the unique eigenvalue of \(\b J_l\) is 1, so the eigenvalues of \(\b P^{\c{LL}}\) are \[\frac{1+(k-1)\beta}{2k(1+(k-1)\beta-\lambda_i(\delta\b G))^2}+\frac{(k-1)(1-\beta)}{2k(1-\beta-\lambda_i(\delta\b G))^2}\] for \(1\leq i\leq n\). To obtain the largest eigenvalue in each case, define the function \(f(a,b)\coloneqq\frac{1+a}{2(1+a-b)^2}\) on the domain \(a>-1\), and \(a+1>b\). Then \[\frac{\partial f}{\partial a}=-\frac{1+a+b}{2(1+a-b)^3}<0,\ \frac{\partial f}{\partial b}=\frac{(1+a)}{(1+a-b)^3}>0,\] so \(f\) is decreasing in \(a\) and increasing in \(b\).\\ When \(l=1\), the eigenvalues of \(\b P^{\c{LL}}\) are \(\frac{1}{k}f((k-1)\beta,\lambda_i(\delta\b G))+\frac{k-1}{k}f(-\beta,\lambda_i(\delta\b G))\), which are maximized when \(\lambda_i(\delta\b G)\) is maximal, so the largest eigenvalue is \(\frac{1}{k}f((k-1)\beta,\lambda_{max}(\delta\b G))+\frac{k-1}{k}f(-\beta,\lambda_{max}(\delta\b G))\).\\ When \(l>1\), the eigenvalues of \(\b P^{\c{LL}}\) are \(f(-\beta,\lambda_i(\delta\b G))\) and \(\frac{l}{k}f((k-1)\beta,\lambda_i(\delta\b G))+\frac{k-l}{k}f(-\beta,\lambda_i(\delta\b G))\). The latter is larger if and only if \(f((k-1)\beta,\lambda_i(\delta\b G))>f(-\beta,\lambda_i(\delta\b G))\), which is equivalent to the condition \(\beta<0\) since \(f\) is decreasing in its first argument. Thus the largest eigenvalue is \(f(-\beta,\lambda_{max}(\delta\b G))\) when \(\beta>0\), and is \(\frac{l}{k}f((k-1)\beta,\lambda_{max}(\delta\b G))+\frac{k-l}{k}f(-\beta,\lambda_{max}(\delta\b G))\) when \(\beta<0\).
\textbf{Proof of Theorem \ref{th-2}.} Suppose \(\beta<0\), and let \(\b u\in E_{max}(\delta \b G),\ \b v\in E_{max}(\b J_l)\). Then, writing \(\lambda=\lambda_{max}(\delta\b G)\), we have \begin{align*}\b P^{\c {LL}}(\b v\otimes\b u)&=\frac{1-\beta}{2}\b v\otimes\frac{1}{(1-\beta-\lambda)^2}\b u+\frac{l}{2k}\b v\otimes\left(\frac{1+(k-1)\beta}{(1+(k-1)\beta-\lambda)^2}-\frac{1-\beta}{(1-\beta-\lambda)^2}\right)\b u\\&=\lambda_{max}(\b P^{\c {LL}})(\b v\otimes\b u),\end{align*} so \(\b v\otimes\b u\) is in the eigenspace of \(\lambda_{max}(\b P^{\c {LL}})\). By Lemma \ref{lem-1}, \(\b v\otimes\b u\) is asymptotically optimal. Furthermore, we know that \(E_{max}(\b J_l)=\text{span}\{\b 1_l\}\), which means that the planner should choose to conduct the same intervention \(t\b u\) across all activities for some scalar \(t\). Suppose \(\beta>0\) and \(l>1\) instead, so we choose \(\b u\in E_{max}(\delta \b G),\ \b v\in E_{min}(\b J_l)\). A similar expansion of \(\b P^{\c {LL}}(\b v\otimes\b u)\) implies that \(\b v\otimes\b u\) is in the eigenspace of \(\lambda_{max}(\b P^{\c {LL}})\). Since \(0\) is an eigenvalue of \(\b J_l\) with multiplicity \(l-1\), the choice of \(\b v\) is no longer unique. In particular, \((1,-1,0,\cdots,0)^T\) is a possible choice of \(\b v\), giving the simple intervention we constructed. \textbf{Proof of Theorem \ref{th-3}.} Theorem \ref{th-3} follows directly from Proposition \ref{pr-7}. \textbf{Proof of Theorem \ref{th-4}.} Applying Lemma \ref{lem-1} to \eqref{eq-4}(c), we obtain that for all \(1\leq l\leq k\), \[\lim_{C\to0}\frac{W^*(l,C)-\widehat W}{\sqrt C}=2\left\|\begin{bmatrix}\b P^{\c{LL}}&\b P^{\c{LH}}\end{bmatrix}\hb a\right\|.\] This gives the first part of Theorem \ref{th-4}.\\ Now suppose \(\hb a^s=\hb a^t\) for all \(s,t\). Then for all \(l\), we have \(\b P^{\c L}\hb a=\b 1_l\otimes\b P^{\{1\}}\hb a\), so \(\|\b P^{\c L}\hb a\|=\sqrt l\|\b P^{\{1\}}\hb a\|\propto\sqrt l\), and hence \(\lim_{C\to0}\eta_k(l,C)=\sqrt{\frac{l}{k}}\). \textbf{Proof of Proposition \ref{pr-8}.} Proposition \ref{pr-8} follows directly from Proposition \ref{pr-7}(a). \textbf{Proof of Proposition \ref{pr-9}.} From the proof of Proposition \ref{pr-7}, the function \(f(a,b)=\frac{1+a}{2(1+a-b)^2}\) is decreasing in \(a\). Then \[\alpha=\frac{f((k-1)\beta,\lambda_{max}(\delta\b G))}{f(-\beta,\lambda_{max}(\delta\b G))}\] is decreasing in \(\beta\). Therefore, by Theorem \ref{th-3}, when \(\beta<0\), the welfare improvement ratio \(\lim_{C\to\infty}\eta_k(l,C)\) is increasing in \(\beta\), while when \(\beta>0\), the welfare improvement ratio \(\lim_{C\to\infty}\eta_k(1,C)\) is decreasing in \(\beta\) instead. \textbf{Proof of Proposition \ref{pr-10}.} Proposition \ref{pr-10} follows directly from Fact \ref{fact-1} and Proposition \ref{pr-9}(i). \textbf{Proof of Proposition \ref{pr-12}.} By Lemma \ref{lem-1}, the optimal intervention corresponds to \(\lambda_{max}(\b P)=\frac{1}{\lambda_{min}(\b P^{-1})}\) since Assumptions \ref{ass-1} and \ref{ass-4} imply that \(\b P\) is positive definite. By Proposition \ref{pr-11}, \(\lambda_{min}(\b P^{-1})\) is equal to \(\min_t \lambda_{min}[\b M(t)]\) as \(t\) varies across the eigenvalues of \(\b G\). Choose any \(\b x\in\mathbb R^{k}\) with \(\b x^T\b x=1\). Then \[\frac{d(\b x^T[\b M(t)]\b x)}{dt}=\b x^T(-2\b\Delta+2t\b\Delta\tb\Phi^{-1}\b\Delta)\b x.\] Let \(\b S=\b\Delta-t\b\Delta\tb\Phi^{-1}\b\Delta\), so that \(\frac{d(\b x^T[\b M(t)]\b x)}{dt}=-2\b x^T\b S\b x\). 
(a) Define the square root of \(\b\Delta\) as \(\b\Delta^{1/2}=\text{diag}(\sqrt{\delta^1},\cdots,\sqrt{\delta^k})\), so that we have \(\b S=\b\Delta^{1/2}(\b I_k-t\b\Delta^{1/2}\tb\Phi^{-1}\b\Delta^{1/2})\b\Delta^{1/2}\). When \(\lambda_{min}(\b G)\leq t\leq0\), it is clear that \[\b\Delta^{1/2}\tb\Phi^{-1}\b\Delta^{1/2}\succ0\implies \b I_k-t\b\Delta^{1/2}\tb\Phi^{-1}\b\Delta^{1/2}\succ0\implies\b S\succ 0.\] Suppose instead \(0<t\leq\lambda_{max}(\b G)\). Then from Assumption \ref{ass-4}, \begin{align*}\tb\Phi-t\b\Delta\succ 0&\implies\b\Delta^{-1/2}\tb\Phi\b\Delta^{-1/2}\succ t\b I_k\implies\lambda_{min}(\b\Delta^{-1/2}\tb\Phi\b\Delta^{-1/2})>t\\&\implies\lambda_{max}(\b\Delta^{1/2}\tb\Phi^{-1}\b\Delta^{1/2})<\frac{1}{t}\implies \b\Delta^{1/2}\tb\Phi^{-1}\b\Delta^{1/2}\prec\frac{1}{t}\b I_k\\&\implies\b I_k-t\b\Delta^{1/2}\tb\Phi^{-1}\b\Delta^{1/2}\succ0\implies\b S\succ 0.\end{align*} Thus for all \(t\in[\lambda_{min}(\b G),\lambda_{max}(\b G)]\) and \(\b x\neq 0\), we have \(\b x^T\b S\b x>0\), and \(\b x^T[\b M(t)]\b x\) is decreasing in \(t\). Therefore, \(\min_{\b x} \b x^T[\b M(t)]\b x=\lambda_{min}(\b M(t))\) is decreasing in \(t\), so \(\lambda_{min}(\b P^{-1})\) is obtained when \(t=\lambda_{max}(\b G)\) and \(\mu=\lambda_{min}(\b M(t))\), with corresponding eigenvector \(\b u\otimes\b v\) satisfying \(\b u\in E_{min}(\b M(t))\) and \(\b v\in E_{max}(\b G)\). (b) Define \(\b D=-\b\Delta\); then \(\frac{d(\b x^T[\b M(t)]\b x)}{dt}=2\b x^T(\b D+t\b D\tb\Phi^{-1}\b D)\b x\). A similar argument as in part (a) shows that \(\b D+t\b D\tb\Phi^{-1}\b D\) is positive definite, hence \(\b x^T[\b M(t)]\b x\) is increasing in \(t\) and the optimal intervention \(\b u\otimes\b v\) satisfies \(\b u\in E_{min}(\b M(t))\) and \(\b v\in E_{min}(\b G)\). \textbf{Proof of Proposition \ref{pr-11}.} We have \begin{align*}\c P^{-1}&=[\tb \Phi\otimes\b I_n-\b \Delta\otimes\b G](\tb \Phi^{-1}\otimes\b I_n)[\tb \Phi\otimes\b I_n-\b \Delta\otimes\b G]\\&=[\tb\Phi\otimes\b I_n-2\b\Delta\otimes\b G+\b\Delta\tb\Phi^{-1}\b\Delta\otimes\b G^2],\end{align*} so for any eigenpair \((t_i,\b v_i)\) of \(\b G\) and any eigenpair \((\mu_{ji},\b u_{ji})\) of \(\b M(t_i)\), \begin{align*}\c P^{-1}(\b u_{ji}\otimes\b v_i)&=\tb\Phi\b u_{ji}\otimes\b I_n\b v_i-2\b\Delta\b u_{ji}\otimes\b G\b v_i+\b\Delta\tb\Phi^{-1}\b\Delta\b u_{ji}\otimes\b G^2\b v_i\\&=\tb\Phi\b u_{ji}\otimes\b v_i-2t_i\b\Delta\b u_{ji}\otimes\b v_i+t_i^2\b\Delta\tb\Phi^{-1}\b\Delta\b u_{ji}\otimes\b v_i\\&=\b M(t_i)\b u_{ji}\otimes\b v_i\\&=\mu_{ji}\b u_{ji}\otimes\b v_i.\end{align*} Hence \((\mu_{ji}^{-1},\b u_{ji}\otimes\b v_i)\) is an eigenpair of \(\c P\). It remains to show that these eigenvectors are orthogonal. Take any \(\b u_{ji}\otimes\b v_i\) and \(\b u_{lk}\otimes\b v_k\). If \(i=k\), then by construction, \(\b u_{ji}\) and \(\b u_{lk}\) are orthogonal eigenvectors of \(\b M(t_i)\), so \((\b u_{ji}\otimes\b v_i)^T(\b u_{lk}\otimes\b v_k)=0\times\b v_i^T\b v_i=0\). Otherwise \(i\neq k\), so \(\b v_i\) and \(\b v_k\) are orthogonal eigenvectors of \(\b G\) and \((\b u_{ji}\otimes\b v_i)^T(\b u_{lk}\otimes\b v_k)=\b u_{ji}^T\b u_{lk}\times0=0\). \textbf{Proof of Proposition \ref{pr-positive}.} Since \(\b a^*_i=0\) for all \(i\notin S\), we have \((\b a^*)^T\b P\b a^*=(\b a^*_S)^T\b P_S\b a^*_S\). Furthermore, since \(\b a^*_S\) is strictly positive, it must be a local maximum of the function \(f(\b z)=\b z^T\b P_S\b z\) in the domain \(\{\b z:\b z^T\b z\leq C\}\). Therefore, \(\b a_S^*\) must satisfy \(\b a_S^*\in E_{max}(\b P_S)\), and \(\frac{(\b a^*)^T\b P\b a^*}{C}=\frac{(\b a^*_S)^T\b P_S\b a^*_S}{C}=\lambda_{max}(\b P_S)\). 
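As a simple consistency check of Proposition \ref{pr-11} (a special case recorded here for illustration), suppose that \(\b\Delta=\delta\b I_k\) for some scalar \(\delta>0\). Then \(\b M(t)=\tb\Phi-2t\delta\b I_k+t^2\delta^2\tb\Phi^{-1}\) commutes with \(\tb\Phi\), so the \(\b u_{ji}\) may be taken to be eigenvectors of \(\tb\Phi\), and if \(\varphi_j\) denotes the corresponding eigenvalue of \(\tb\Phi\) then \[\mu_{ji}=\varphi_j-2t_i\delta+\frac{t_i^2\delta^2}{\varphi_j}=\frac{(\varphi_j-t_i\delta)^2}{\varphi_j},\] so the eigenvalues of \(\c P\) are \(\varphi_j/(\varphi_j-t_i\delta)^2\).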
\textbf{Proof of Proposition \ref{pr-14}.} Let \(S=\{\b a:\b a\in\mathbb R^{kn},\ \b a^T\b a\leq C,\ \b a\geq\b 0\}\). For any \(z\in\mathbb R\), we have \begin{align*}\max_{\b a\in S}\ \b a^T\b P\b a>z&\iff\exists(\b a\in S)\ \b a^T\b P\b a>z\\&\iff \exists(\b a\in S)\ \b a^T(\b P-\frac{z}{C}\b I)\b a>0\\&\iff\exists(\b a\in S)\ \b a^T(\frac{z}{C}\b I-\b P)\b a<0\\&\iff \frac{z}{C}\b I-\b P\notin CoPn, \end{align*} where \(CoPn\) denotes the set of copositive matrices. By \cite{murty}, the problem of checking whether a symmetric matrix is copositive is an NP-hard problem. Therefore, the nonnegative intervention problem \(\max_{\b a\in S}\ \b a^T\b P\b a\) is also NP-hard. \textbf{Proof of Proposition \ref{pr-13}.} Under Assumption \ref{ass-4}, the matrix $\tb\Phi\otimes\b I_n-\b \Delta\otimes\b G$, which is symmetric, is positive definite, and so is its inverse. Therefore, the profit $\pi(\b p)$, defined in \eqref{eq-firm}, is strictly concave in $\b p$. It suffices to check the first order conditions. Differentiating the profit function in \eqref{eq-firm}, we obtain the first order condition \[\pi'(\b p^*)=[\tb\Phi\otimes\b I_n-\b \Delta\otimes\b G]^{-1}(\b a-\b p^*)-[\tb\Phi\otimes\b I_n-\b \Delta\otimes\b G]^{-1}(\b p^*-\b c)=0,\] so \(\b a-\b p^*=\b p^*-\b c\), giving the optimal price vector \(\b p^*=\frac{\b a+\b c}{2}\), with corresponding profit following from substituting back into \eqref{eq-firm}. \end{document}
\begin{document} \title[Bessel K Series for the Riemann Zeta Function]{Bessel K Series for the Riemann Zeta Function} \begin{abstract} This paper provides some expansions of the Riemann xi function, $\xi$, as a series of Bessel K functions. \end{abstract} \maketitle \section{Introduction} Some expansions of the Riemann $\xi$ function (and expansions of related functions) are provided in this article as sums of the form, \begin{equation*} \sum_{j=1}^{\infty} c_j(s) K_s(x_j), \end{equation*} where $K_s(x)$ denotes the Bessel K function. Theorem~\ref{EasyBessel} and Theorem~\ref{ris4} use transformation formulas of theta functions. Conjecture~\ref{ZetaDecomposition} is based on a partition of unity different from the one Riemann used in his memoir. Partitions of unity, such as that employed by Riemann, are more or less equivalent in the derivation of the meromorphy and functional equation of $\zeta(s)$, but we found only one of them that gave Bessel expansions. We suspect that among algebraic functions of $\lambda$ there are others. One of these is given in the appendix. Half of the appendix is a discussion of a curious zeros phenomenon which occured in relation to partitions of unity that arose in the use of Hecke operators. Some numerical data hinted that an eigenform for Hecke operators would be interesting. We were able to construct a formal eigenform for weight one half Hecke operators indexed over odd indices, but we could not show that it converged. Conjecture~\ref{ZetaDecomposition} is a provisional result, based on a conjectured upper bound on certain partition coefficients. The result holds up under numerical tests, and it looks like an elementary proof, somewhat lengthy, would establish the upper bounds in the same way as the elementary proofs establish the upper bounds of the standard partition function. As usual, \begin{align*} \theta_2 &=& \sum_{n=-\infty}^{\infty}q^{(n - 1/2)^2}\\ \theta_3 &=& \sum_{n=-\infty}^{\infty}q^{n^2}\\ \theta_4 &=& \sum_{n=-\infty}^{\infty}(-1)^n q^{n^2}\\ \lambda &=& \left(\frac{\theta_2}{\theta_3}\right)^4. \end{align*} \section{A brief note}\label{BriefNote1Section} For the purposes of much of this paper, it will be convenient to define the functions, $\theta_2$, $\theta_3$, $\theta_4$ and $\lambda$ as functions of $y$ where $y$ is real positive and $q=e^{-\pi y}$. \section{A first $\zeta(s)$ Bessel Series} \begin{theorem}\label{EasyBessel} Define a multiplicative function, $a(n)$, on prime powers by, \begin{align*} a(n) &= \sigma_1(n) & n & = p^f, \ p>2 \\ &= n & n & = 2^f. \end{align*} This is to say that the $a(n)$ are the coefficients in the expansion, $$ \dot\theta_4/\theta_4 = 2 \pi \sum_{n=1}^{\infty}a(n) q^n, $$ where the derivative is taken with respect to $y$. Then, \begin{align*} \frac{s}{4\pi}&\left(2^{\frac{s-1}{2}} - 2^{\frac{1-s}{2}}\right)\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s) \\ & = \sum_{n=1}^{\infty}a(n) n^{-\frac{1+s}{4}}\sum_{m=0}^{\infty} (2m+1)^{\frac{s+1}{2}} K_{\frac{s+1}{2}}(\pi \sqrt{n}(2m+1)). 
\end{align*} \end{theorem} \begin{proof} For $\sigma > 0$, \begin{align*} (1 - 2^{1-s})&\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s) \\ &= -\frac{1}{2}\int_0^{\infty} y^{\frac{s}{2}-1}(\theta_4 - 1)dy \\ &=\frac{1}{s}\int_0^{\infty} y^{\frac{s}{2}}\frac{\dot \theta_4}{\theta_4} \theta_4 dy \\ &= \frac{1}{s}\int_0^{\infty} y^{\frac{s}{2}}\frac{\dot \theta_4}{\theta_4}(y) \frac{1}{\sqrt{y}}\theta_2\left(\frac{1}{y}\right) dy \\ &= \frac{1}{s}\int_0^{\infty} y^{\frac{1+s}{2}-1}\frac{\dot \theta_4}{\theta_4}(y) \ \theta_2\left(\frac{1}{y}\right) dy \\ &= \frac{1}{s}\sum_{n=1}^{\infty}(2\pi )a(n) \sum_{m=0}^{\infty}2 \int_0^{\infty} y^{\frac{1+s}{2}-1}e^{-\pi(n y + \frac{(2 m + 1)^2}{4 y})} dy \\ &= \frac{4\pi}{s} \sum_{n=1}^{\infty}a(n)\sum_{m=0}^{\infty} \Big(\frac{2m+1}{2\sqrt{n}}\Big)^{\frac{s+1}{2}}\int_0^{\infty} x^{\frac{1+s}{2}-1} e^{-\pi\frac{\sqrt{n}(2m+1)}{2}(x + \frac{1}{x})}dx \\ &= \frac{4\pi}{s} \sum_{n=1}^{\infty}a(n)\sum_{m=0}^{\infty} \Big(\frac{2m+1}{2\sqrt{n}}\Big)^{\frac{s+1}{2}} 2 K_{\frac{s+1}{2}}(\pi \sqrt{n}(2m+1)) \\ &= \frac{8\pi}{s} \sum_{n=1}^{\infty}a(n)\sum_{m=0}^{\infty} \Big(\frac{2m+1}{2\sqrt{n}}\Big)^{\frac{s+1}{2}} K_{\frac{s+1}{2}}(\pi \sqrt{n}(2m+1)), \end{align*} and the result follows on multiplying both sides by \begin{equation*} \frac{s}{8\pi} \ 2^{\frac{s+1}{2}}. \end{equation*} \end{proof} The result provides a rapidly convergent series for $\zeta(s)$. How it would compare with the methods currently in use for calculating zeros we cannot say. We include this result, and some others like it, as a comparison to the kinds of expansions using a partition of unity mentioned in the introduction. \section{A second $\zeta(s)$ Bessel Series} In preparation for Theorem 4.1, define a second multiplicative function, $b(m)$, on prime powers, by \begin{eqnarray*} b(m) &=& \sigma_1(m) \qquad m = p^f, \ p > 2 \\ &=& 0 \qquad \qquad m = 2^f. \end{eqnarray*} The $b(m)$ are therefore the coefficients in the expansion, $$ \theta_2^4 = 16 \sum_{m=1}^{\infty} b(m) q^m. $$ With the $a(n)$ and $b(m)$ as given above, define entire functions of the complex variable, $s$, and positive integers, $j$, by \begin{equation*} c_j(s) = \sum_{d|j} a(d) b\left(\frac{j}{d}\right)\Big(\frac{j}{d^2}\Big)^{s/2}, \end{equation*} which functions, $$ c_j(s) = \prod_{p^f||j}c_{p^f}(s), $$ are multiplicative in $j$ for all $s$. We then have, \begin{theorem}\label{ris4} \begin{equation*} \frac{s(s+1)}{32\pi^2\sqrt{2}}\Big(2^{\frac{s}{2}}-2^{-\frac{s}{2}}\Big)\Big(2^{\frac{s-1}{2}}-2^{-\frac{s-1}{2}}\Big)\zeta^{*}(s)\zeta^{*}(s+1) = \sum_{j=1}^{\infty} c_j(s)K_s( 2 \pi \sqrt{j}) \end{equation*} \end{theorem} \begin{proof} The proof of theorem~\ref{ris4} is very much like the proof of theorem~\ref{EasyBessel}. Replace $\theta_4$ with $\theta_4^4$ on the right hand side. Use the formula for the number of representations of an integer as the sum of four squares on the left hand side. \end{proof} The asterisks indicate the usual, \begin{eqnarray*} \zeta^{*}(s) &=& \pi^{-\frac{s}{2}}\Gamma(\frac{s}{2})\zeta(s) \\ \zeta^{*}(s+1) &=& \pi^{-\frac{s+1}{2}}\Gamma(\frac{s+1}{2})\zeta(s+1). \end{eqnarray*} We mention that another Bessel K expansion of zeta originates from $\theta_4^8$ in the same way as that obtained from $\theta_4^4$. There is also an expansion using $\theta_4^2$. Regarding theorem~\ref{ris4}, there is an additional result that we can prove about the zeros of $c_j(s)$. \begin{theorem}\label{ris4czeros} For any $j$, all the zeros of $c_j(s)$ are pure imaginary. 
\end{theorem} \begin{proof} Since $c_j(s)$ is multiplicative, we only need to look at the zeros of $c_{p^k}(s)$ where $p$ is prime. If $p=2$ there are no zeros of \begin{equation*} c_{2^m}(s) = 2^m2^{-m s/2}. \end{equation*} For $p$ an odd prime, the equation for $c_{p^k}(s)$ is as follows \begin{equation*} c_{p^k}(s) = \sum_{j=0}^{k}\dfrac{p^{j+1}-1}{p-1}\dfrac{p^{k-j+1}-1}{p-1}p^{(k-2j)s/2}. \end{equation*} Now we will perform a couple of changes of variable in turn: \begin{align*} z & = p^{s/2} \\ e^{i\phi} & = z \end{align*} Note that $s$ is pure imaginary exactly when $z$ is on the unit circle which also corresponds to real $\phi$ between 0 and $2\pi$. We will use the following approximation of $c_{p^k}(s)$, \begin{equation*} c_{p^k}(s) \approx \dfrac{p^{k+2}}{(p-1)^2}\sum_{j=0}^kz^{(k-2j)} =\dfrac{p^{k+2}}{(p-1)^2}\dfrac{z^{k+1}-z^{-k-1}}{z-z^{-1}}. \end{equation*} Note that the formula on the right has all its zeros when $z$ is on the unit circle. We will attempt to use this approximation to show that the zeros of $c_{p^k}(s)$ only occur when $z$ is on the unit circle. To make this argument we use the following estimate for all $z$ on the unit circle: \begin{align*} \Big|\dfrac{(p-1)^2}{p^{k+2}}&\dfrac{z-z^{-1}}{2i}c_{p^k}(s)-\dfrac{z^{k+1}-z^{-k-1}}{2i}\Big| \\ & = \left|\dfrac{z-z^{-1}}{2i}\sum_{m=0}^k\left(\dfrac{1}{p^{k+2}}-\dfrac{1}{p^{m+1}}-\dfrac{1}{p^{k-m+1}}\right)z^{k-2m}\right| \\ & \le \sum_{m=0}^k\left(\dfrac{1}{p^{m+1}}+\dfrac{1}{p^{k-m+1}}-\dfrac{1}{p^{k+2}}\right) < 1 \end{align*} (The last inequality holds since the sum is less than $2/(p-1)\le 1$ for the odd prime $p$.) If we now make the change of variable \begin{equation*} z = e^{i\phi} \end{equation*} then this inequality indicates that \begin{equation*} \dfrac{(p-1)^2}{p^{k+2}}\sin{\phi}\,c_{p^k}(s) \end{equation*} differs from \begin{equation*} \sin{(k+1)\phi} \end{equation*} by less than one. This means that for each integer $m$ in the interval \begin{equation*} 0 < m < 2(k+1) \end{equation*} where $m\neq k+1$ we have a zero for $c_{p^k}(s)$ for some $\phi$ in the range \begin{equation*} \dfrac{1}{k+1}\left(m-\dfrac{1}{2}\right)\pi<\phi<\dfrac{1}{k+1}\left(m+\dfrac{1}{2}\right)\pi. \end{equation*} Since $c_{p^k}(s)$ is $z^{-k}$ times a degree $2k$ polynomial in $z$ this accounts for all the zeros of $c_{p^k}(s)$. \end{proof} \section{A Partition of Unity} The function, \begin{equation} \zeta^{*}(s,r) = \frac{1}{2r} \int_0^{\infty} y^{\frac{r s}{2}-1}\left(\theta^r -1- \frac{1}{y^{\frac{r}{2}}}\right) dy \tag{$0 < \sigma < 1$} \end{equation} $$ \theta = \theta_3, $$ may be combined with the partition of unity, \begin{eqnarray*} 1 &=& (1 - \lambda( y)) + \lambda( y) \\ &=& \lambda\left(\frac{1}{y}\right) + \lambda( y), \end{eqnarray*} together with two integrations by parts, to give two meromorphic expressions in lines 3 and 4 below: \begin{eqnarray*} 2r \zeta^{*}(s,r) &=& \int_0^{\infty} y^{\frac{r s}{2}-1}(\theta^r (1 - \lambda) -1) dy + \int_0^{\infty} y^{\frac{r s}{2}-1}(\theta^r \lambda - \frac{1}{y^{\frac{r}{2}}}) dy \\ &=& \int_0^{\infty} y^{\frac{r s}{2}-1}(F_r(y) -1) dy + \int_0^{\infty} y^{\frac{r(1-s)}{2}-1}(F_r(y)-1) dy \\ &=& -\frac{2}{rs} \int_0^{\infty} y^{\frac{r s}{2}}\dot F_r dy - \frac{2}{r(1-s)} \int_0^{\infty} y^{\frac{r(1-s)}{2}}\dot F_r dy \\ &=& \frac{4}{r^2 s(s-1)} \int_0^{\infty}\Big(y^{-\frac{r(1-s)}{2}} + y^{-\frac{rs}{2}}\Big)(\dot F_r y^{\frac{r}{2} + 1})'dy. \end{eqnarray*} In the above, the notation is, $$ F_r = \theta^r (1 - \lambda). 
$$ We have chosen not to work with the entire function that comes from the fourth line above; i.e., \begin{align*} Z(s,r) &= \frac{r^3}{2} s (s-1) \zeta^{*}(s,r) \\ &= \int_0^{\infty}\Big(y^{-\frac{r(1-s)}{2}} + y^{-\frac{rs}{2}}\Big)(\dot F_r y^{\frac{r}{2} + 1})'dy \\ &= \begin{aligned}[t] \int_0^{\infty}&\Big(y^{\frac{rs}{2}+1} + y^{\frac{r(1-s)}{2}+1}\Big)\ddot F_r dy \\ &+ \left(\frac{r}{2}+1\right)\int_0^{\infty}\Big(y^{\frac{rs}{2}} + y^{\frac{r(1-s)}{2}}\Big)\dot F_r dy, \end{aligned} \end{align*} but rather with the expression that precedes it; i.e., with $$ r^2 \zeta^{*}(s,r) = -\Big(\frac{\Psi_r(s)}{s} + \frac{\Psi_r(1-s)}{1-s}\Big), $$ where $$ \Psi_r(s) = \int_0^{\infty} y^{\frac{r s}{2}}\dot F_r dy. $$ The expression, $\Psi_r(s)$, with one integration by parts will introduce one set of coefficients, $a_r(n)$, to consider in upcoming formulas, while two integrations by parts involve two sets of coefficients, $a_r(n), a_r'(n)$, and a bit more bookkeeping. We stick with the simpler single integration by parts, and present three expansions for $r = 1, 2, 3$. While two integrations by parts give pure Bessel expansions in all three cases, in the absence of any insight into the value of Bessel expansions for the study of zeta functions, we go for the method with fewer coefficients. For example, if the prime number theorem could be established via Theorem 4.1, then Bessel functions would be in the game. As things are, they're a step too far or a step in the wrong direction. Define sequences, $a_r(n)$, for $r = 1, 2, 3$, by \begin{align*} 4 \frac{\dot\theta_4}{\theta_4} - (4 - r) \frac{\dot\theta_3}{\theta_3} &= 4(2\pi) \sum_{n=1}^{\infty} a(n)q^n -(4-r)(2\pi)\sum_{n=1}^{\infty} a(n)(-1)^n q^n \\ &= 2\pi \sum_{n=1}^{\infty}\Big(4 - (4 -r)(-1)^n \Big) a(n) q^n\\ &= \sum_{n=1}^{\infty}a_r(n) q^n, \end{align*} where the $a(n)$ are defined earlier: \begin{equation*} a(n) = \left\{\begin{array}{cl} n & \mbox{if $n$ is a power of 2} \\ \sigma_1(n) & \mbox{otherwise}. \end{array}\right. \end{equation*} We define sequences, $b_r(n)$, for $r = 1, 2, 3$, by \begin{align*} \theta_3^r \lambda &= \theta_3^{r-4}\theta_2^4 \\ &= \sum_{n=1}^{\infty}b_r(n)q^n. 
\end{align*} \section{A Provisional Result} \begin{conjecture}\label{ZetaDecomposition} For $r = 1, 2, 3,$ $$ \zeta^{*}(s, r) = -\Big(\frac{\Psi_r(s)}{s} + \frac{\Psi_r(1-s)}{1-s}\Big), $$ where, \begin{eqnarray*} \Psi_r(s) &=& 2 \pi \sum_{j=1}^{\infty} c_j(s, r)K_{\frac{rs+2 - r}{2}}( 2 \pi \sqrt{j}) \\ c_j(s, r) &=& \sum_{d|j} a_r(d) b_r\left(\frac{j}{d}\right)\left(\frac{j}{d^2}\right)^{\frac{r s+2-r}{4}} \qquad j\ge 1 \end{eqnarray*} \end{conjecture} \begin{proof} We have, \begin{eqnarray*} \Psi_r(s) &=& \int_0^{\infty} y^{\frac{rs}{2}} \dot F_r (y) dy \\ &=& \int_0^{\infty} y^{\frac{rs}{2}} \Big( 4\frac{\dot\theta_4}{\theta_4} - (4 - r) \frac{\dot\theta_3}{\theta_3} \Big)F_r (y) dy \\ &=& \int_0^{\infty} y^{\frac{rs}{2}} \Big( 4\frac{\dot\theta_4}{\theta_4} - (4 - r) \frac{\dot\theta_3}{\theta_3} \Big)\frac{1}{y^{\frac{r}{2}}}\theta^r(\frac{1}{y})\lambda(\frac{1}{y}) dy \\ &=& \sum_{m, n = 1}^{\infty}a_r(n) b_r(m) \int_0^{\infty} y^{\frac{rs+2 -r}{2}-1} e^{-\pi (n y + \frac{m}{y})} dy \\ &=& \sum_{m, n = 1}^{\infty}a_r(n) b_r(m)\Big(\frac{m}{n}\Big)^{\frac{rs+2 -r}{4}} \int_0^{\infty} y^{\frac{rs+2 -r}{2}-1} e^{-\pi \sqrt{mn}(y +\frac{1}{y})} dy \\ &=& 2 \sum_{m, n = 1}^{\infty}a_r(n) b_r(m)\Big(\frac{m}{n}\Big)^{\frac{rs+2 -r}{4}} K_{\frac{rs+2 - r}{2}}( 2 \pi \sqrt{mn}), \end{eqnarray*} where we recall, $$ 2 K_w(x) = \int_0^{\infty}u^{w - 1}e^{-\frac{x}{2}(u + \frac{1}{u})} du, \qquad x > 0. $$ The result follows from Dirichlet convolution, collecting $m$ and $n$ according to $j = mn.$ \end{proof} The result is provisional upon the justification of the interchange of the sum and integral in the fourth step. \section{Upper Bounds} We conjecture that for any $\epsilon >0$, \begin{align*} b_1(m) &\le e^{\pi\sqrt{3m}(1+\epsilon)} \qquad m\ge m(\epsilon) \\ b_2(m) &\le e^{\pi\sqrt{2m}(1+\epsilon)} \qquad m\ge m(\epsilon) \\ b_3(m) &\le e^{\pi\sqrt{m}(1+\epsilon)} \qquad m\ge m(\epsilon). \end{align*} We have numerical evidence for this conjecture and it conforms with the elementary estimates as obtained in~\cite{ErdosElementary}. We recall the estimate, $$ K_{w}(2\pi \sqrt{m}) \sim \Big( \frac{1}{4\sqrt{m}} \Big)^{\frac{1}{2}}e^{-2\pi \sqrt{m}}, $$ so that the $-2\pi\sqrt{m}$ swamps the $\pi\sqrt{3m}$, the $\pi\sqrt{2m}$, and the $\pi\sqrt{m}$, and the absolute convergence of, $$ \sum_{m=1}^{\infty} b_r(m) K_{\frac{r s+ 2 - r}{2}}(2\pi \sqrt{m}), $$ holds for $r =1, 2, 3$, which justifies the interchange of sum and integral. Had we applied the inversion, \begin{equation*} y\longrightarrow \frac{1}{y} \end{equation*} to $\lambda$ alone, the interchange of limits could not be justified. The coefficients, $b_0(m)$, that appear as coefficients in, $$ \lambda(\tau) = \sum_{m=1}^{\infty}b_0(m) q^m, $$ (see Simons' paper \cite{Simons1952}, eq.\ (4)), satisfy $$ b_0(m) \sim \frac{\pi}{8\sqrt{m}}I_1(2\pi\sqrt{m}). $$ As $$ I_{\nu}(x) \sim \frac{1}{\sqrt{2\pi x}}e^x \qquad x\longrightarrow \infty, $$ then, $$ \sum_{m=1}^{\infty} b_0(m)K_w(2\pi \sqrt{m}) $$ does not converge absolutely. 
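To make the convergence quantitative (a rough estimate recorded here for orientation, assuming the conjectured bounds above), note that for $r=1$ the general term of the last series is at most a constant, depending on $s$, times $$ e^{\pi\sqrt{3m}(1+\epsilon)}\, m^{-\frac14}\, e^{-2\pi\sqrt m}=m^{-\frac14}\,e^{-\left(2-\sqrt3(1+\epsilon)\right)\pi\sqrt m}, $$ which decays like $e^{-c\sqrt m}$ with $c>0$ once $\epsilon<\frac{2}{\sqrt3}-1$; the same computation with $\sqrt2$ and $1$ in place of $\sqrt3$ applies to $r=2$ and $r=3$.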
\section{$r=1$} From the formulas above, with $\frac{r s + 2 - r}{2} = \frac{s + 1}{2}$ for $r = 1$, \begin{align*} \Psi_1(s) &= \int_0^{\infty} y^{\frac{s}{2}}\dot F_1 dy \\ &= \pi \int_0^{\infty} y^{\frac{s}{2}}\sum_{n=1}^{\infty} a_1(n)e^{-\pi n y}\sum_{m=1}^{\infty} b_1(m)e^{-\pi\frac{m}{y}} dy \\ &= \pi \int_0^{\infty}y^{\frac{s-1}{2}} \sum a_1(n)\sum b_1(m)e^{-\pi (n y + \frac{m}{y})} dy \\ &= \pi \sum a_1(n)\sum b_1(m) \Big(\frac{m}{n} \Big)^{\frac{s+1}{4}}\int_0^{\infty}y^{\frac{s+1}{2}-1}e^{-\pi \sqrt{mn}(y + \frac{1}{y})} dy \\ &= 2\pi \sum a_1(n)\sum b_1(m) \Big(\frac{m}{n} \Big)^{\frac{s+1}{4}}K_{\frac{s+1}{2}}(2\pi \sqrt{mn}). \end{align*} The inclusion of the weight one half form, $\theta$, as a factor of $\lambda$, reduces the growth of the $b_1(m)$ significantly in comparison with the growth of $b_0(m)$, with the result that the sum, $$ \sum_{m=1}^{\infty} b_1(m) K_{\frac{s+1}{2}}(2\pi \sqrt{m}), $$ is absolutely convergent. Having obtained a Bessel K expansion of the zeta function, the coefficients, $b_1(m)$, are then of interest and we followed the method of Simons paper to obtain the partition expression of these coefficients. Here we encountered major and minor differences in the $r=1$ calculations from Simon's $r = 0$ calculation (and from the classical partition expansion). In contrast with the $r=0$ case the sum for $r=1$ in the next section does not converge absolutely. In fact we did not prove that it converges. Numerical tests indicate that it does. We evaluated the arithmetic components of the terms in elementary terms; i. e., sines and hyperbolic sines. The case of $r=2$ is like $r=1$ though the arithmetic components were not all elementary. \begin{conjecture}\label{LambdaCoefficients} The coefficients of $\theta \lambda$ are given by, \begin{equation*} b_1(m) = \frac{2\pi }{8\sqrt{2}}\left(\frac{3}{m}\right)^{\frac{1}{4}} \sum_{k\equiv 2 (mod 4)} \frac{A_k(m)}{k} \ I_{\frac{1}{2}}\left(\frac{2\pi\sqrt{3m}}{k}\right) + \delta_1(m) \end{equation*} where, \begin{align*} A_k(m) & = -i \sum_{(h, k) = 1} \ \kappa(\gamma^{-1}) e^{-2\pi i (\frac{4mh + 3h'}{4k})}, \\ \delta_1(m) &=\left\{\begin{array}{cl} 1 & \mbox{if m is a square} \\ 0 & \mbox{otherwise} \end{array} \right. \end{align*} and where, $\kappa(\gamma^{-1})$, is a theta multiplier. \end{conjecture} This conjecture was obtained using a method similar to that used in~\cite{Simons1952}. \begin{theorem}\label{AkAsSalie01} Let $k = 2 l$, $l$ odd. The $A_k(m)$ may be factored as, \begin{equation} A_k(m) = i^{\frac{(l+3)(l-1)}{4}} (-1)^{m-1} S(m, -3 B_l^4, l) \tag{$B_l = \frac{l+1}{2}$} \end{equation} where $S$ is a Salie sum defined as, $$ S(a, b, l) = \sum_{(h, l) = 1} \Big(\frac{h}{l}\Big) e^{\frac{2\pi i}{l}( a h + b \bar h)}. $$ \end{theorem} \begin{proof} Using Rademacher's explicit transformation formula~\cite{Rademacher-AnalyticNumber} for the $\theta_3$-multiplier, it is a straight-forward calculation to show that the formula holds. \end{proof} The Bessel function, $I_{\frac{1}{2}}$, is a hyperbolic sine. 
This fact combined with conjecture~\ref{LambdaCoefficients} and theorem~\ref{AkAsSalie01}, gives a final form for $b_1(m)$: \begin{align*} \frac{\pi}{4\sqrt{2}}& \begin{aligned}[t] \left(\frac{3}{m}\right)^{\frac{1}{4}}&(-1)^{m-1} \\ &\sum_{l \ge 1 \ \mbox{odd}}i^{\frac{(l+3)(l-1)}{4}}\frac{1}{2 l} S(m, -3 B_l^4, l) I_{\frac{1}{2}}\left(\frac{\pi \sqrt{3m}}{l}\right) \\ &+\delta_1(m) \end{aligned} \\ &= \begin{aligned}[t] \frac{1}{8\sqrt{m}} &(-1)^{m-1}\\ &\sum_{j=0}^{\infty}\frac{i^{j(j+2)}}{\sqrt{2j+1}} S(m, -3 (j+1)^4, 2j + 1) \sinh\left(\frac{\pi \sqrt{3m}}{2j+1}\right) \\ & +\delta_1(m) . \end{aligned} \end{align*} The bulk of the remaining calculations is the determination of the Salie sums as trigonometric expressions. \section{$S(a,b,l)$ Evaluation} When the Salie sums are not zero they are simple trigonometric functions after a $\sqrt{l}$ is extracted. We saw that the exponential sum, $$ S(a, b, l) = \sum_{(h, l) = 1}\Big(\frac{h}{l}\Big) e^{\frac{2\pi i}{l}(a h + b \bar h)}, $$ evaluates the $A_k(m), k = 2 l,$ as, \begin{align} A_k(m) &= i^{\frac{(l+3)(l-1)}{4}} (-1)^{m-1} \ S(m, -3 B^4, l). \notag \end{align} We must consider all choices of $m$ and $l$, in which $m$ and $l$ might or might not share factors of $3$ and might or might not share an odd factor other than $3$. All possibilities can occur in the $A_k(m)$. Thus, \begin{eqnarray*} m &=& 3^e \times l_1 \times a \qquad (a, 3l) = 1 \\ l &=& 3^f \times l'_1\times c \qquad (c, 3m) = 1\\ (l_1l'_1, 3) &=& 1, \end{eqnarray*} where $l_1$ and $l'_1$ are the factors of $m$ and $l$ that share primes other than $3$. The combinations of $e$ and $f$ that modify the values of $S$ do so in more complicated arrangements for $p = 3$ than for the other primes, due to the $3$ in $S(m, -3B^4, l)$. As we saw, this $3$ comes from the first power of $\theta$ in the formula for the coefficients of $\theta \lambda$. The prime $3$ is no longer special for any higher power of $\theta$. \section{Some General Results} We observe the following general results: \begin{align} &1.\ S(a, b, l) = S(b, a, l) \notag\\ &2.\ S(c a, b, l) = \left(\frac{c}{l}\right) S(a, cb, l) \tag{$(c, l) = 1$} \\ &3.\ S(a, b, l) = S(a + m l, b+ n l, l) \notag \\ &4.\ S(a, b, l) = S(a \bar l_1, b \bar l_1, l_2) S(a \bar l_2, b \bar l_2, l_1), \tag{$ l = l_1 l_2, (l_1, l_2) = 1 $} \end{align} It follows from 1 and 2 that if $ a = b = g$, then $$ S(c g, g, l) =0 \qquad \mbox{if} \ \Big(\frac{c}{l}\Big) = -1. $$ If $ c = g_1 \bar g, (g_1 g, l) = 1$, then $$ S(g_1, g, l) =0 \qquad \mbox{if} \ \Big(\frac{g_1 g}{l}\Big)= -1. $$ \section{A useful theorem} A result of~\cite{IwaniecAutomorphic} evaluates many cases: \begin{theorem}\label{IwaniecResult} Let $(l, 2 b) = 1$. Then, $$ S(a, b, l) = \Big(\frac{b}{l}\Big) \epsilon_l \sqrt{l}\sum_{y^2\equiv ab \ (l)}e^{\frac{4\pi y i}{l}} $$ \end{theorem} \begin{corollary} Let $p\ge 3$ be a prime. If $(a, p) = 1$, $e\ge 1$, and $f\ge 2$, then, $$ S(a, p^e b, p^f) = 0 $$ \end{corollary} \section{$p > 3$} For a prime, $p > 3$ dividing $l_1$ and $l_1'$, a sum of the form, \begin{equation*} S(p^e a, 3 b, p^f), (b, p) = 1, \end{equation*} will be a factor of \begin{equation*} S(m, -3 B^4, l) = S(3^e l_1 a, -3B^4, 3^f \times l'_1\times c). \end{equation*} The few cases are contained in the following result. \begin{theorem}\label{AkAsSalie02} \quad Let $p > 3$ be a prime with $(p, ab) = 1$ and let $S = S(p^e a , 3 b, p^f)$. 
Then, \begin{align*} e &\ge 1, \ f\ge 2, \quad S = 0 \\ e &\ge 1, \ f = 1, \ \quad S = \Big(\frac{3b}{p}\Big) \epsilon_p \sqrt{p} \\ e &= 0, \ f\ge 1, \quad S = \Big(\frac{3b}{p}\Big)\epsilon_{p^f} \sqrt{p^f} \sum_{y^2\equiv 3ab\ (p^f)}e^{\frac{2\pi i y}{p^f}} \\ \end{align*} \end{theorem} \begin{proof} All of these cases are contained in the Iwaniec formula. We separated the ones that vanished, and the simple Gauss sum, from the last case. \end{proof} \section{$p = 3$} The combination of powers of $3$ dividing $m$ and $l$ can be grouped according to $e = 0, 1, 2$, where we have let, \begin{eqnarray*} m &=& 3^e \times l_1 \times a \qquad (a, 3l) = 1 \\ l &=& 3^f \times l'_1\times c \qquad (c, 3m) = 1 \qquad (l_1l'_1, 3) = 1. \end{eqnarray*} The (twisted) multiplicativity result, 4, above shows that a sum of the type, $S = S(3^e a, 3 b, 3^f)$, is a factor of $S(3^e l_1 a, -3B^4, 3^f \times l'_1\times c)$. The situation regarding this factor is given in the following result. \begin{theorem}\label{S3powers} Let $S = S(3^e a , 3 b, 3^f)$. Then, \begin{align*} e &\ge 2, f = 2, S= -3 \\ e &\ge 2, f\ge 3, S=0 \\ e &\ge 1, f = 1, S = 0 \\ e &= 1, f = 2, S = 6, \left(\frac{ab}{3}\right) = -1 \\ e &= 1, f = 2, S = -3, \left(\frac{ab}{3}\right) = 1 \\ e &= 1, f \ge 3, S = 0 \qquad \mbox{$ab$ is not a square mod $3^f$} \\ e & \begin{aligned}[t] = 1, f = 2n, n\ge 2, S &= -\left(\frac{g}{3}\right) 2\times 3^n \sqrt{3} \ \sin\left(\frac{4 \pi g}{3^{2n-1}}\right), \\ g^2&\equiv ab (3^{2n}) \end{aligned} \\ e & \begin{aligned}[t] = 1, f = 2n-1, n\ge 2, 6S &= 2 i\left(\frac{g}{3}\right) \times 3^{n+1} \ \sin\left(\frac{4 \pi g}{3^{2n-2}}\right), \\ g^2&\equiv ab (3^{2n-1}) \end{aligned}\\ e &= 0, f\ge 2, S = 0 \\ e&=0, f=1, S=\left(\frac{a}{3}\right)\sqrt{3}i \end{align*} \end{theorem} \begin{proof} The two $e = 0$ cases are direct results of Theorem~\ref{IwaniecResult}. The results for $e \ge 1$ follow from a formula that generalizes Theorem~\ref{IwaniecResult} in an obvious way. We worked out each of the cases that can occur and presented them in their simplest form. \end{proof} \section{$r = 2$} With $r = 2$, we have $\frac{r s + 2 - r}{2} = s$, and \begin{eqnarray*} \zeta^{*}(s, 2) &=& \frac{1}{4} \int_0^{\infty} y^{s-1}(\theta^2 -1) dy \\ &=& \pi^{-s} \Gamma(s) \zeta(s, 2) \\ &=& \pi^{-s} \Gamma(s) \zeta(s) L(s, \chi). \end{eqnarray*} \begin{theorem}\label{BesselExpansionrIs2} $$ 4 \zeta^{*}(s, 2) = -\left(\frac{\Psi_2(s)}{s} + \frac{\Psi_2(1-s)}{1-s}\right), $$ $$ \Psi_2(s) = 2\pi \sum_{j=1}^{\infty} c_j(2, s)K_s(2\pi \sqrt{j}) $$ $$ c_j(2, s) = \sum_{d|j}a_2(d) b_2\left(\frac{j}{d}\right)\left(\frac{j}{d^2}\right)^{\frac{s}{2}} $$ \end{theorem} \begin{conjecture}\label{b2mEvaluation} \begin{equation*} b_2(m) = \frac{\pi}{2}\sum_{k\equiv 2 (mod 4)} \frac{A_k(m)}{k} \ I_0\left(\frac{2\pi\sqrt{2m}}{k}\right) + 2\delta_2(m) \end{equation*} where $$ \frac{\theta^2 -1}{4} = \sum_{m=1}^{\infty} \delta_2(m) q^m, $$ and $$ A_k(m) = - \sum_{(h, k) = 1} \ \kappa^2(\gamma^{-1})e^{-2\pi i (\frac{h'}{2k} + \frac{mh}{k})} $$ is the arithmetic factor with the same multiplier, $\kappa(\gamma^{-1})$, as before, but squared. \end{conjecture} \begin{theorem}\label{ris2Zeta} \begin{equation} A_k(m) = (-1)^{(l-1)/2}(-1)^{m-1} T(-2m A^2, A^2, l), \tag{$k=2l$, $A = \frac{l+1}{2}$} \end{equation} where $T$ is the Kloosterman sum: $$ T(a, b, l) = \sum_{(h,l)=1}e^{\frac{2\pi i}{l}(a h + b \bar h)}. 
$$ \end{theorem} Combining conjecture~\ref{b2mEvaluation} and theorem~\ref{ris2Zeta} above provides a final evaluation for the $b_2(m)$: $$ b_2(m) = \frac{\pi}{4}(-1)^{m-1}\sum_{j=0}^{\infty}\frac{(-1)^{j}\,T(-2m(j+1)^2, (j+1)^2, 2j+1)}{2j+1}\, I_0\left(\frac{\pi\sqrt{2m}}{2j+1}\right) + 2 \delta_2(m). $$ \section{$r = 3$} The function, $$ \zeta^{*}(s, 3) = \frac{1}{6} \int_0^{\infty} y^{\frac{3s}{2}-1}(\theta^3 -1) dy $$ has the Bessel K series expansion, \begin{theorem} $$ 6 \zeta^{*}(s, 3) = -\Big(\frac{\Psi_3(s)}{s} + \frac{\Psi_3(1-s)}{1-s}\Big), $$ $$ \Psi_3(s) = 2\pi \sum_{j=1}^{\infty} c_j(3, s)K_{\frac{3s-1}{2}}(2\pi \sqrt{j}) $$ $$ c_j(3, s) = \sum_{d|j}a_3(d) b_3\left(\frac{j}{d}\right)\Big(\frac{j}{d^2}\Big)^{\frac{3s-1}{4}} $$ \end{theorem} Following the same steps in the calculations for $b_1$ and $b_2$ we obtained the expression, \begin{equation}\label{b3mEquation} b_3(m) = \sqrt{2}\pi \ m^{\frac{1}{4}} \ \sum_{k\equiv 2 (mod 4)} \frac{A_k(m)}{k} \ I_{-\frac{1}{2}}\Big(\frac{2\pi\sqrt{m}}{k}\Big) + 3\delta_3(m). \end{equation} where \begin{equation*} \frac{\theta^3 -1}{6} = \sum_{m=1}^{\infty} \delta_3(m) q^m, \end{equation*} and $$ A_k(m) = - i \sum_{(h, k) = 1} \ \kappa^3(\gamma^{-1})e^{-2\pi i (\frac{h'}{4k} + \frac{mh}{k})}. $$ We do not believe that the sum in equation~\ref{b3mEquation} converges. \begin{theorem}\label{ris3AkmCalculation} \begin{equation} A_k(m) = (-1)^{m-1} i^{\frac{l-1}{2}} S(-4m A^3, A^3, l) \tag{$A = \frac{l+1}{2}, k = 2l$} \end{equation} where $$ S(a, b, l) = \sum_{(h, l) =1} \left(\frac{h}{l}\right)e^{\frac{2\pi i}{l}(a h + b \bar h)}. $$ \end{theorem} \appendix \section{} Assume that for arithmetic sequences $[a(n)]_{n=1}^\infty$ and $[b(n)]_{n=1}^\infty$, the function \begin{equation*} \Psi(s) = \sum_{j=1}^{\infty} c_j(s)K_s(2\pi \sqrt{j}), \end{equation*} is entire, where, as defined earlier, \begin{equation*} c_j(s) = \sum_{d|j} a(d)b\left(\frac{j}{d}\right)\left(\frac{j}{d^2} \right)^{\frac{s}{2}} \qquad j\ge 1. \end{equation*} For the special sequences that appear in Theorem~\ref{ris4}, numerical tests for the zeros of $\Psi$ gave what was expected: zeros on four vertical lines, $\sigma=-\frac{1}{2},\,0,\,\frac{1}{2},\,1$. For $a(n)=b(n)=1$, the zeros of $\Psi$ that we found were pure imaginary to within machine precision. Other pairs of multiplicative sequences gave zeros that were sporadically distributed. A rationale for which sequences produced highly organized zeros for the associated $\Psi$ is an attractive puzzle. We considered briefly an operator that would encompass the second order equations satisfied by the $K_s(x_j)$, but dropped it. We take up a curious property of Hecke operators in Appendix~\ref{partitionAppendix}. \section{Other partitions of unity}\label{partitionAppendix} We looked at other partitions of unity that may be used to provide Bessel expansions for the Riemann Zeta function. For a modular function of weight zero to be a partition of unity it must satisfy the functional equation \begin{equation*} f(\tau)+f\left(-\dfrac{1}{\tau}\right)=1. \end{equation*} If $f$ is such a partition of unity then this leads to a decomposition of the Riemann $\xi$ of the form \begin{equation}\label{psifunctional} 2\xi(s) = (1-s)\Psi_f(s) + s\Psi_f(1-s) \end{equation} where \begin{equation}\label{psidefinition} \Psi_f(s) = \int_0^\infty x^{s/2} \left(\dfrac{d}{dx}\theta_3(ix)f(ix)\right)dx. \end{equation} Certain partitions of unity accumulated zeros of the associated $\Psi(s)$ close to the $\frac{1}{2}$ line. 
As a first example, consider the zeros of $\Psi_{1-\lambda}$: \begin{equation*} \begin{split} -6.85993 + 18.5302 i \\ 8.13082 + 18.4858 i \\ -7.17924 + 28.5267 i \\ 8.33654 + 28.5203 i \\ -8.228 + 36.2544 i \\ 9.40533 + 36.2529 i \\ -8.20765 + 42.7431 i \\ 9.33192 + 42.7147 i \\ -8.69688 + 49.6138 i \\ 9.82173 + 49.6313 i \\ -9.19901 + 55.0839 i \\ 10.3377 + 55.0643 i. \end{split} \end{equation*} An infinite class of partitions of unity arises from weight one-half Hecke operators acting on weight one-half modular forms. For an odd prime $p$, the Hecke operator acting on a weight one-half modular form $g$ is expressed as \begin{equation*} {\mathcal H}_p(g)(\tau) = \begin{aligned}[t] g(p^2\tau) & + \dfrac{1}{\sqrt{p}\epsilon_p}\sum_{m=1}^{p-1}\left(\dfrac{m}{p}\right)g\left(\tau+\dfrac{2m}{p}\right) \\ & + \dfrac{1}{p} \sum_{m=0}^{p^2-1}g\left(\dfrac{\tau+2m}{p^2}\right), \end{aligned} \end{equation*} where \begin{equation*} \epsilon_p = \left\{\begin{array}{cl} 1 & p\equiv 1(4) \\ i & p\equiv 3(4) \end{array}\right. \end{equation*} and \begin{equation*} \left(\dfrac{m}{p}\right) \end{equation*} is a Jacobi symbol. If $\phi$ is a partition of unity then \begin{equation*} \dfrac{1}{p+1}\dfrac{1}{\theta_3}\mathcal H_p(\theta_3\phi) \end{equation*} is also a partition of unity. For example, if \begin{equation*} f_1(\tau) = \theta_3(\tau)(1-\lambda(\tau)), \end{equation*} then \begin{align*} \dfrac{1}{4}\mathcal H_3&(f_1)(\tau) \\ &= \dfrac{1}{4} \begin{aligned}[t] \Big(f_1(9\tau) &-\dfrac{i}{\sqrt{3}}f_1\left(\tau+\dfrac{2}{3}\right) +\dfrac{i}{\sqrt{3}}f_1\left(\tau+\dfrac{4}{3}\right) \\ &+\dfrac{1}{3}\sum f_1\left(\dfrac{\tau+2k}{9}\right)\Big) \end{aligned}\\ & = \theta_3(\tau)(1 - \lambda(\tau)) \begin{aligned}[t](1 &- 12000 \lambda(\tau) + 441792 \lambda(\tau)^2 \\ & - 3350528 \lambda(\tau)^3 + 9224192 \lambda(\tau)^4 \\ &- 10485760 \lambda(\tau)^5 + 4194304 \lambda(\tau)^6) \end{aligned} \end{align*} is a partition of unity attached to the prime $3$. \begin{figure} \caption{Plot of the integrand of the $\Psi$ function obtained from $\mathcal H_3$ of $\theta_3(1-\lambda)$} \label{PsiIntegrand03} \end{figure} \begin{figure} \caption{Plot of the integrand of the $\Psi$ function obtained from $\mathcal H_{11}$ of $\theta_3(1-\lambda)$} \label{PsiIntegrand11} \end{figure} Regarding numeric evaluations of the $\Psi$ functions for these partitions of unity, we used Mathematica's numeric integration technology to perform the integrations for equation~\ref{psidefinition}. We were a bit worried about the accuracy of such calculations because as the prime, $p$, gets larger, the behavior of the integrand becomes extremely oscillatory. For example, for $p=3$ (see figure~\ref{PsiIntegrand03}), the integrand goes between -20,000 and +20,000 but the curve is smooth. But as the prime $p$ gets larger the oscillations of the integrand become more extreme. For example, figure~\ref{PsiIntegrand11} shows a plot of the integrand when $p=11$. Concerned that Mathematica could not accurately calculate the value of $\Psi$ when the integrand was behaving in such an extreme manner, we tested our numerical values for $\Psi$ in equation~\ref{psifunctional} on page~\pageref{psifunctional} and the results were very encouraging. For example, when $s=.5+i$ numeric calculations showed \begin{align*} \xi(s) & = -0.485757 \\ \Psi_{11}(s) & = 4.738132-2.85482i\\ 2\xi(s) -( (1-s)\Psi_{11}(s) + s\Psi_{11}(1-s)) & = 0 \end{align*} These results continued to be encouraging for other larger values of $s$. 
For example at $s=50+50i$ we obtained \begin{align*} \xi(s) & = -8.08211*10^{9}+1.715833*10^9 i\\ \Psi_{11}(s) & = -5.135*10^{38}-3.14968*10^{38} i \\ 2\xi(s) -&( (1-s)\Psi_{11}(s) + s\Psi_{11}(1-s)) \\ & = -5.63493*10^{-19}+3.19468*10^{-19}i \end{align*} This seemed like pretty convincing evidence because it did not seem likely that Mathematica was using integration by parts in such a manner as to obtain erroneous results that were consistent with equation~\ref{psifunctional}. It may be worth noting that in order to avoid errors in Mathematica we had to use some rather extreme values for the numeric integration precision. For example, we set Mathematica up to perform integration with a working precision of 200 places of accuracy and a precision goal of 30 places of accuracy. We noticed that with these settings Mathematica seemed to return results that were significantly more accurate than requested. Regarding the zeros clustering phenomenon around the one-half line, the behavior was not particularly striking for the $\Psi$-function associated with $\mathcal H_3$ compared with the previous example, but it was apparent. Thus, for zeros in the square between -50 and $50+40i$: \begin{equation*} \begin{array}{cc} 0.235237 + 0.77205 i \\ 0.516168 + 3.6727 i \\ 0.498618 + 6.13523 i \\ -21.1074 + 19.7716 i & 22.1074 + 19.7716i \\ -23.4598 + 29.768 i & 24.4599 + 29.768 i \\ -25.2761 + 37.4269 i & 26.2762 + 37.4269i \end{array} \end{equation*} The $\Psi$-function associated with $\mathcal H_5$ of $1-\lambda$ gave the following zeros in the square from -50 to $50+50i$: \begin{align*} 0.233242 + 0.437961 i\\ 0.519924 + 2.43977 i\\ 0.49621 + 4.48092 i\\ 0.500645 + 6.21101 i\\ 0.499978 + 9.29704 i\\ 0.500002 + 11.2699 i\\ 0.5 + 13.5882 i\\ 0.5 + 17.5795 i\\ -34.6505 + 21.523 i \\ 35.6505 + 21.523 i\\ 0.5 + 22.4764 i \\ -38.8576 + 31.7487 i \\ 39.8576 + 31.7487 i\\ -41.6848 + 39.6315 i \\ 42.6848 + 39.6315 i\\ -43.8901 + 46.6857 i \\ 44.8901 + 46.6857 i \end{align*} An occurrence of a real part of $.5$ does not mean that the zero is exactly on the one-half line; it just means that such a zero is within machine precision of the one-half line. The zeros of the $\Psi$-function associated with $\mathcal H_7$ of $1-\lambda$ show a greater clustering behavior. 
The results for zeros in the box between -50 and $50 +50i$: \begin{equation*} \begin{split} 0.234502 + 0.298452 i\\ 0.519389 + 2.01942 i\\ 0.495116 + 3.64979 i\\ 0.501157 + 5.24145 i \\ 0.49982 + 6.98016 i\\ 0.500017 + 9.04737 i\\ 0.499999 + 11.023 i\\ 0.5 + 12.9791 i\\ 0.5 + 15.0838 i\\ -1.01888 + 18.9618 i \\ 2.01888 + 18.9618 i\\ 0.5 + 22.496 i\\ -47.6872 + 23.3824 i \\ 48.6872 + 23.3824 i\\ 0.5 + 25.9916 i\\ 0.5 + 29.3926 i\\ 0.5 + 33.1953 i\\ 0.5 + 37.4991 i\\ 0.5 + 42.5267 i\\ 0.5 + 48.8834 i \end{split} \end{equation*} Finally, the zeros of the $\Psi$-function associated with $\mathcal H_{11}$ of $1-\lambda$ in the box from $-50$ to $50+50i$: \begin{align*} 0.236946 + 0.139035 i \\ 0.517157 + 1.64704 i \\ 0.494551 + 2.94259 i \\ 0.501784 + 4.27374 i \\ 0.499487 + 5.50646 i \\ 0.500101 + 7.19582 i \\ 0.499972 + 8.26041 i \\ 0.500003 + 10.6066 i \\ 0.499998 + 11.0806 i \\ 0.5 + 13.5367 i \\ 0.5 + 15.0404 i \\ 0.5 + 16.5906 i \\ 0.5 + 18.7044 i \\ -0.00762622 + 21.1219 i \\ 1.00763 + 21.1219 i \\ -0.423696 + 26.1218 i \\ 1.4237 + 26.1218 i \\ 0.5 + 26.6352 i \\ 0.5 + 31.5323 i \\ 0.5 + 32.6608 i \\ -1.6787 + 35.8675 i \\ 2.6787 + 35.8675 i \\ 0.119661 + 39.9134 i \\ 0.880339 + 39.9134 i \\ 0.5 + 44.4793 i \\ 0.5 + 48.8203 i \end{align*} The exceptional zeros suggested we try for hybrids of Hecke operators arising from collections of primes, and we found a sum, which, formally, was an eigenvector for the Hecke operators for all odd primes, but we could not prove that the series converged. There are partitions of unity that come from pieces of the Hecke operator. For example, \begin{equation*} f(\tau)=1-\lambda\left(\dfrac{\tau+a}{p}\right) \end{equation*} is a partition of unity if $a$ is even, $p$ is an odd prime and \begin{equation*} a^2=-1 \mbox{ mod } p. \end{equation*} This example can be generalized. Such functions are algebraic functions of $\lambda$ although their coefficients grow too quickly to produce a Bessel K expansion of the $\zeta$ function. Another class of algebraic functions of $\lambda$ that are also partitions of unity may be obtained using the lemma below. In some cases, these have the advantage that their coefficients go to infinity slowly enough that they may be used to provide Bessel K expansions for the zeta function. \begin{lemma}\label{AlgebraicPartition} Suppose that $R(x)$ and $S(x)$ are polynomials of odd and even degrees respectively with real coefficients such that \begin{itemize} \item $R(1-x)+R(x)=S(x)$ \item The polynomial $R(x)S'(x)-R'(x)S(x)$ is zero only when $x=0$ or $x=1$ \item $S(x)$ has no real zeros when $0< x < 1$ \item $S(1)=R(1)\neq 0.$ \end{itemize} Then there is a modular function, $\phi$, over a finite index subgroup of the $\Lambda$ group which satisfies the following conditions: \begin{align*} R(\phi(\tau))-\lambda(\tau)S(\phi(\tau)) &= 0 \\ \phi(\tau)+\phi(-1/\tau) &= 1 \\ \phi(0) &= 1 \\ \phi(i\infty) &= 0 \\ \phi(i) &= 1/2. \end{align*} \end{lemma} \begin{proof} For any fixed $y$, the equation, \begin{equation*} R(x)-y S(x) = 0, \end{equation*} will only have a double root in $x$ when \begin{equation*} R'(x)-y S'(x) = 0. \end{equation*} This means that, \begin{equation*} R'(x)S(x)-R(x)S'(x) = 0, \end{equation*} so $x$ must be either zero or one. It then follows that $y$ is either zero or one. 
Now if we consider $x$ and $y$ to be functions of $\tau$ with, say, $x=\phi(\tau)$ and $y=\lambda(\tau)$, then there are no double roots in the $x$ variable of \begin{equation*} R(x)-y S(x) = 0, \end{equation*} when $\tau$ is in the upper half plane. In addition, since the degree of $R(x)$ is at least one more than the degree of $S(x)$, $x$ cannot go to infinity for $\tau$ in the upper half plane. Thus, for any given $\tau$ in the upper half plane, any solution, $\phi(\tau)$, of the equations \begin{equation}\label{phiEquation} R(\phi(\tau)) - \lambda(\tau) S(\phi(\tau))=0 \end{equation} can be extended to a single-valued root for all $\tau$ in the upper half plane. Now we will consider the behavior of \begin{equation*} \dfrac{R(x)}{S(x)} \end{equation*} as $x$ goes from zero to one in order to fix attention on a specific root of \begin{equation*} R(\phi(\tau)) - \lambda(\tau) S(\phi(\tau))=0. \end{equation*} We have \begin{equation*} \dfrac{R(1)}{S(1)}=1 \end{equation*} and \begin{equation*} \dfrac{R(0)}{S(0)}=0. \end{equation*} In addition, for $x$ in the open interval from 0 to 1, \begin{equation*} \dfrac{d}{dx}\left(\dfrac{R(x)}{S(x)}\right) = \dfrac{R'(x)S(x)-R(x)S'(x)}{S(x)^2} \neq 0. \end{equation*} This means that \begin{equation*} \dfrac{R(x)}{S(x)} \end{equation*} is monotonically increasing for $x$ in the open interval from zero to one. In addition, \begin{equation*} \dfrac{R(x)}{S(x)}+\dfrac{R(1-x)}{S(1-x)}=1. \end{equation*} So \begin{equation*} \dfrac{R(1/2)}{S(1/2)}=1/2. \end{equation*} We thus can choose the root of \begin{equation*} R(\phi(\tau)) - \lambda(\tau) S(\phi(\tau))=0, \end{equation*} with \begin{equation}\label{PhiOfI} \phi(i) = 1/2. \end{equation} Now we use~\ref{PhiOfI} to show that the solution, $\phi$, of equation~\ref{phiEquation} must be a partition of unity. The conditions of lemma~\ref{AlgebraicPartition} have been crafted so that if \begin{equation*} R(x_0)-y_0 S(x_0) = 0, \end{equation*} then \begin{equation*} R(1-x_0)-(1-y_0) S(1-x_0) = 0. \end{equation*} This means that \begin{equation*} R(1-\phi(-1/\tau)) - (1-\lambda(-1/\tau)) S(1-\phi(-1/\tau))=0, \end{equation*} or \begin{equation*} R(1-\phi(-1/\tau)) - \lambda(\tau) S(1-\phi(-1/\tau))=0. \end{equation*} Thus, by looking at what happens in a neighborhood of $\tau=i$ we get \begin{equation*} \phi(\tau)=1-\phi(-1/\tau). \end{equation*} This completes the proof. \end{proof} The algebraic equations \begin{align*} 2\phi(\tau)^3-3\phi(\tau)^2+\lambda(\tau) &= 0 \\ \phi(\tau)^5-\lambda(\tau)(5\phi(\tau)^4-10\phi(\tau)^3+10\phi(\tau)^2-5\phi(\tau)+1) &= 0 \\ \begin{aligned}[b] 2 \phi(\tau)^7&-7\phi(\tau)^5 \\ &+\lambda(\tau)(35\phi(\tau)^4-70\phi(\tau)^3+63\phi(\tau)^2-28\phi(\tau)+5) \end{aligned} &=0 \\ \begin{aligned}[b] 2\phi(\tau)^{11}&-11\phi(\tau)^{10} \\ &\begin{aligned}[b] +\lambda(\tau)(165\phi(\tau)^8&-600\phi(\tau)^7+1386\phi(\tau)^6 \\ &-1848\phi(\tau)^5+1650\phi(\tau)^4 \\ &-990\phi(\tau)^3+385\phi(\tau)^2-88\phi(\tau)+9) \end{aligned} \end{aligned} &= 0 \end{align*} satisfy the conditions of lemma~\ref{AlgebraicPartition}. At this time, we don't know if there are other similar equations that can be derived using lemma~\ref{AlgebraicPartition} or if a more sophisticated lemma would produce more roots. We will now provide some expanded analysis of the first of these equations: \begin{equation}\label{CubicEquation} 2\phi(\tau)^3-3\phi(\tau)^2+\lambda(\tau)=0. 
\end{equation} For $\tau$ in the upper half plane, this equation has no double roots and thus it defines three single valued functions, $\phi_1$, $\phi_2$ and $\phi_3$, on the upper half plane. The partition of unity root of equation~\ref{CubicEquation} is the root, $\phi_1$, where \begin{equation*} \phi_1(i)=\dfrac{1}{2}. \end{equation*} It is easy to see (see figure~\ref{CubicPlot}) that for the $\phi_1$ root we have \begin{align*} \phi_1(0) & = 1 \\ \phi_1(\infty) & = 0 \\ \phi_1(\tau)+\phi_1(-1/\tau) &= 1. \end{align*} The other two roots can be characterized by their evaluation at $\tau=i$: \begin{align*} \phi_2(i) &= \dfrac{1-\sqrt{3}}{2} \\ \phi_3(i) &= \dfrac{1+\sqrt{3}}{2}. \end{align*} It is also easy to see (see figure~\ref{CubicPlot}) that \begin{align*} \phi_2(0)&=-\dfrac{1}{2} \\ \phi_3(0)&=1 \\ \phi_2(i\infty) &= 0 \\ \phi_3(i\infty) &= \dfrac{3}{2} \\ \phi_2(\tau)+\phi_3(-1/\tau) &= 1. \end{align*} From this last identity, we could use $\phi_2$ and $\phi_3$ also to provide partitions of unity. \begin{figure} \caption{Plot of $3\phi^2-2\phi^3$ for $\phi\in (-1,2)$} \label{CubicPlot} \end{figure} The $\lambda$-group ($\Gamma[2]$) permutes $\phi_1$, $\phi_2$ and $\phi_3$. If \begin{align*} \lambda_1 &= \left(\begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array}\right) \\ \lambda_2 &= \left(\begin{array}{cc} 1 & 0 \\ 2 & 1 \end{array}\right), \end{align*} then \begin{align*} \phi_1\circ\lambda_1 &= \phi_2 \\ \phi_2\circ\lambda_1 &= \phi_1 \\ \phi_3\circ\lambda_1 &= \phi_3 \\ \phi_1\circ\lambda_2 &= \phi_3 \\ \phi_2\circ\lambda_2 &= \phi_2 \\ \phi_3\circ\lambda_2 &= \phi_1. \end{align*} It then follows from combinatorial group theory that the subgroup of $\Gamma[2]$ that leaves $\phi_1$ invariant is freely generated by the elements \begin{align*} \lambda_1^2 &= \left(\begin{array}{cc} 1 & 4 \\ 0 & 1 \end{array}\right) \\ \lambda_2^2 &= \left(\begin{array}{cc} 1 & 0 \\ 4 & 1 \end{array}\right) \\ \lambda_1\lambda_2\lambda_1^{-1} &= \left(\begin{array}{cc} 5 & -8 \\ 2 & -3 \end{array}\right) \\ \lambda_2\lambda_1\lambda_2^{-1} &= \left(\begin{array}{cc} -3 & 2 \\ -8 & 5 \end{array}\right). \end{align*} Simon's method may be applied to $\phi_1$ and used to give another Bessel K expansion for $\zeta(s)$. The main term of Simon's formula for the coefficients of $\phi_1$ gives an order of magnitude of the $m$th coefficient of $\phi_1(q)$ as \begin{equation*} \dfrac{1}{m}\exp{(2\pi\sqrt{m/3})}. \end{equation*} This order of magnitude estimate allows the interchange of summation and integration. It is consistent with numeric calculations of the coefficients wherein, for example, the 1000th coefficient of $\phi_1(q)$ is \begin{equation*} -2.21563*10^{46} \end{equation*} and \begin{equation*} \dfrac{1}{1000}\exp\left(2\pi\sqrt{1000/3}\right)=6.60664*10^{46}. \end{equation*} We searched for zeros of $\Psi_{\phi_1}$ inside the box extending to -20 on the left and +20 on the right and going up from 0 to 40: \begin{equation*} \begin{array}{lr} -0.196822 + 9.837 i & 0.773957 + 12.8053 i \\ -0.305477 + 19.939 i & 1.67883 + 19.3617 i \\ -1.51671 + 26.1875 i & 2.14412 + 25.9197 i \\ -0.729122 + 31.5098 i & 1.73372 + 31.7993 i\\ -1.18117 + 36.8074 i & 2.50435 + 36.6545 i. \end{array} \end{equation*} {} \end{document}
\begin{document} \maketitle \begin{abstract} In this paper we study the Newton stratification on the reduction of Shimura varieties of PEL type with hyperspecial level structure. Our main result is a formula for the dimension of Newton strata and the description of their closure, where the dimension formula was conjectured by Chai. As a key ingredient of its proof we calculate the dimension of some Rapoport-Zink spaces. Our result yields a dimension formula, which was conjectured by Rapoport (up to a minor correction). As an interesting application to deformation theory, we determine the dimension and closure of Newton strata on the algebraisation of the deformation space of a \BTpEL . Our result on the closure of a Newton stratum generalises conjectures of Grothendieck and Koblitz. \end{abstract} \section{Introduction} \label{sect introduction} We fix a prime $p$ and denote by $\sigma$ the Frobenius automorphism over $\FF_p$ or $\QQ_p$ (where the latter is considered in an unramified field extension). We denote by $\breve\QQ_p := \widehat\QQ_p^{\nr}$ the completion of the maximal unramified extension of $\QQ_p$. Let $\Dscr$ be a PEL-Shimura datum unramified at $p$ as in \cite{kottwitz92}~ch.~5 such that the associated linear algebraic group $\Gsf$ is connected. We denote by $\Ascr_0$ the reduction modulo $p$ of the associated moduli space defined by Kottwitz in \cite{kottwitz92}. The points of $\Ascr_0$ correspond to abelian varieties equipped with polarisation, endomorphisms and level structure. The Newton stratification is the stratification corresponding to the isogeny class of \BT s (with endomorphisms and polarisation) $\Aund[p^\infty] = (A[p^\infty],\lambda_{|A[p^\infty]},\iota_{|A[p^\infty]})$ of points $\Aund = (A,\lambda,\iota,\eta)$ of $\Ascr_0$. We call \BT s with additional structure induced by $\Dscr$ ``{\BTDs}'' (for a more precise definition see section~\ref{ss preliminaries}). By Dieudonn\'e theory their isogeny classes correspond to a certain finite subset $B(\Gsf_{\QQ_p},\mu)$ of the set $B(\Gsf_{\QQ_p})$ of $\sigma$-conjugacy classes in $G(\breve\QQ_p)$. For $b \in B(\Gsf_{\QQ_p},\mu)$ denote by $\Ascr_0^{b}$ the associated Newton stratum of $\Ascr_0$. Viehmann and Wedhorn have shown in \cite{VW13}, Thm.~11.1 that $\Ascr_0^{b}$ is always non-empty. The set $B(\Gsf_{\QQ_p})$ is equipped with a partial order, which is given in group theoretic terms. In the ``classical'' case of \BT s without additional structure, i.e.\ $\Gsf_{\QQ_p} = \GL_n$, we have the following description of this order. By a result of Dieudonn\'e, the set $B(\Gsf_{\QQ_p})$ equals the set of (concave) Newton polygons over $[0,n]$. Then $b' \leq b$ iff the polygons have the same endpoint and $b$ lies above $b'$ (for more details see section~\ref{ss sigma conjugacy}). It is known that the closure of $\Ascr_0^{b}$ in $\Ascr_0$ is contained in $\Ascr_0^{\leq b} := \bigcup_{b' \leq b} \Ascr_0^{b'}$ by a theorem of Rapoport and Richartz (\cite{RR96}~Thm.~3.6). Their result generalises of Grothendieck's specialisation theorem which states that (concave) Newton polygons only ``go down'' under specialisation. \subsection{The main results} The primary goal of this paper is the following theorem. 
\begin{theorem} \label{thm dimension shimura} \begin{subenv} \item $\Ascr_0^{\leq b}$ is equidimensional of dimension \begin{equation} \label{term dimension shimura} \langle \rho, \mu+\nu(b) \rangle - \frac{1}{2} \defect (b) \end{equation} where $\rho$ denotes the half-sum of (absolute) positive roots of $\Gsf$, $\mu$ is the cocharacter induced by $\Dscr$ and $\nu(b)$ and $\defect (b)$ denote the Newton point resp.\ the defect of $b$ (cf.~section~\ref{ss sigma conjugacy}). \item $\Ascr_0^{\leq b}$ is the closure of $\Ascr_0^{b}$ in $\Ascr_0$. \end{subenv} \end{theorem} In the case of the Siegel moduli variety this theorem was proven by Oort (cf.~\cite{oort00}~Cor.~3.5). In the general PEL-case the dimension formula (\ref{term dimension shimura}) proves a conjecture of Chai. In \cite{chai00}~Question~7.6 he conjectured a formula for the codimension of $\Ascr_0^b$ using his notion of chains of Newton points. We prove the equivalence of his conjecture and our dimension formula in section~\ref{ss chains}. The most important ingredient of the proof of the above theorem is the following result, which is also interesting in its own right. \begin{theorem} \label{thm dimension RZ-space} Let $\Mscr_G(b,\mu)$ be the underlying reduced subscheme of the Rapoport-Zink space associated to an unramified Rapoport-Zink datum (cf.\ Def.~\ref{def RZ datum}). \begin{subenv} \item The dimension of $\Mscr_G(b,\mu)$ equals \begin{equation} \label{term dimension RZ-space} \langle \rho, \mu - \nu_G(b) \rangle - \frac{1}{2} \defect_G (b) \end{equation} \item If $b$ is superbasic then the connected components of $\Mscr_G (b,\mu)$ are projective. \end{subenv} \end{theorem} The dimension formula for $\Mscr_G(b,\mu)$ coincides with the formula conjectured by Rapoport in \cite{rapoport05}, p.296 up to a minor correction. See also Remark~\ref{rem dimension formula}. Theorem~\ref{thm dimension RZ-space} is already known in the case of moduli spaces of \BT s without endomorphism structure by results of Viehmann (\cite{viehmann08}, \cite{viehmann08b}). Moreover, the dimension formula (\ref{term dimension RZ-space}) is known to hold for some affine Deligne-Lusztig varieties, which are a function field analogue of Rapoport-Zink spaces. The dimension formula was proved for affine Deligne-Lusztig varieties in the affine Grassmannian of split groups in \cite{GHKR06} and \cite{viehmann06}, this proof was generalized to the case of unramified groups in \cite{hamacher}. \subsection{Application to deformation theory} Let $\Xund$ be a {\BTD} over a perfect field $k_0$ of characteristic $p$. We denote by $\Def(\Xund)$ the deformation functor of $\Xund$. It is known that $\Def(\Xund)$ is representable, we denote by $\Sscr_{\Xund}$ its algebraisation. By a result of Drinfeld there exists a (unique) algebraisation of the universal deformation of $\Xund$ to a {\BTD} over $\Sscr_{\Xund}$ (for more details see section~\ref{ss deformation}). This induces a Newton stratification on $\Sscr_{\Xund}$ for which we use the analogous notation as above. We derive the following theorem from Theorem~\ref{thm dimension shimura} by using a Serre-Tate argument (see section~\ref{ss comparision}). \begin{theorem} \label{thm dimension deformation} Denote by $b_0$ the isogeny class of $\Xund$ and let $b \in B(\Gsf_{\QQ_p},\mu)$ with $b \geq b_0$. \begin{subenv} \item $\Sscr_{\Xund}^{\leq b}$ is equidimensional of dimension \[ \langle \rho_{\Gsf}, \mu+\nu(b) \rangle - \frac{1}{2} \defect (b). 
\] \item $\Sscr_{\Xund}^{\leq b }$ is the closure of $\Sscr_{\Xund}^{b}$ in $\Sscr_{\Xund}$. \end{subenv} \end{theorem} In the case of \BT s without additional structure and for polarised \BT s (without endomorphism structure) this was also proven by Oort (\cite{oort00}~Thm.~3.2,~3.3). We give an intrinsic definition of the \BT s with additional structure which coincides with $\Dscr'$-structure for a suitable PEL-Shimura datum $\Dscr'$ (and thus Theorem~\ref{thm dimension deformation} applies). We show that one basically has to exclude case D in Kottwitz's notation and hence call these groups of type (AC) (for more details, see section~\ref{ss PEL vs D}). \subsection{Conjectures of Grothendieck and Koblitz} Let $\Xund_0$ and $\Xund_\eta$ be two \BT s with EL or PEL structure of type (AC). We say that $\Xund_0$ is a specialisation of $\Xund_\eta$ if there exists an integral local scheme $S$ of characteristic $p$ and a {\BTpEL} $\Xund$ over $S$ which has generic fibre $\Xund_\eta$ and special fibre $\Xund_0$. Now assume that $\Xund_0$ is a specialisation of $\Xund_\eta$ and denote by $b_0$ and $b$ their respective isogeny classes. Then \cite{RR96}~Thm.~3.6 states that $b_0 \leq b$. In the case of \BT s without additional structure this is a result of Grothendieck known as Grothendieck's specialisation theorem. Grothendieck conjectured in a letter to Barsotti (see e.g.\ the appendix of \cite{grothendieck74}) that the converse of his specialisation theorem also holds true. He writes ``The necessary conditions (1) (2) that $G'$ is a specialisation of $G$ are also sufficient. In other words, taking the formal modular deformation in char.~$p$ (over a modular formal variety $S$ [...]) and the BT group $G$ over $S$ thus obtained, we want to know if for every sequence of rational numbers $(\lambda_i)_i$ which satisfies (1) and (2), these numbers occur as the sequence of slopes of a fibre of $G$ at some point $S$.'' Here he considers the isogeny class $b$ of a {\BT} via the family of the slopes of its Newton polygon, and the conditions (1) and (2) reformulate to $b_0 \leq b$ where $b_0$ denotes the isogeny class of $G'$. The following generalisation follows from Theorem~\ref{thm dimension deformation}, as it is a reformulation of the non-emptiness of Newton strata. In particular, it was already shown in the case of \BT s without additional structure and for polarised \BT s by Oort (\cite{oort00}, Thm.~6.2, Thm.~6.3). \begin{proposition} Let $\Xund$ be a \BT\ with EL or PEL structure of type (AC) and let $b_0$ denote its isogeny class. For any isogeny class $b$ with $b \geq b_0$ there exists a deformation of $\Xund$ which generically has isogeny class $b$. \end{proposition} Motivated by Grothendieck's conjecture, Koblitz conjectured in \cite{koblitz75}~p.~211 that ``all totally ordered sequences of Newton polygons can be realized by successive specialisations of principally polarised abelian varieties''. In other words, if $\Ascr_0$ is the Siegel moduli space and $b_1 > \ldots > b_h$ is a chain of isogeny classes (or equivalently a chain of symmetric Newton polygons with slopes between $0$ and $1$) then \[ \overline{\overline{\overline{\Ascr_0^{b_1}} \cap \Ascr_0^{b_2}} \cap \ldots} \cap \Ascr_0^{b_h} \not= \emptyset. \] The second assertion of Theorem~\ref{thm dimension shimura} implies that the analogue holds for arbitrary Shimura varieties of PEL-type $\Ascr_0$, as the left hand side equals $\Ascr_0^{b_h}$. \subsection{Overview} The proof of Theorem~\ref{thm dimension shimura} follows an idea of Viehmann.
By an argument analogous to the one in \cite{viehmann13} we prove in section~\ref{sect newton stratification} that the dimension formula as well as the closure relations follow if we show that $\dim \Ascr_0^{\leq b}$ is less than or equal to the term (\ref{term dimension shimura}). By the work of Mantovan (\cite{mantovan05}) each Newton stratum is in a finite-to-finite correspondence with the product of a (truncated) Rapoport-Zink space and a so-called central leaf inside the Newton stratum. In particular the dimension of a Newton stratum is the sum of the dimensions of a central leaf and of a Rapoport-Zink space. Here a central leaf is defined as the locally closed subset of $\Ascr_0(\FFbar_p)$ where $\Aund[p^\infty] \cong \Xund$ for a fixed {\BTD} $\Xund$. In sections~\ref{sect central leaves} and \ref{sect EO strata} we calculate the dimension of the central leaves, thus reducing Theorem~\ref{thm dimension shimura} to the claim that $\Mscr_G(b,\mu)$ has dimension less than or equal to (\ref{term dimension RZ-space}). We construct a correspondence between $\Mscr_G(b,\mu)$ and a disjoint union of Rapoport-Zink spaces associated to data with superbasic $\sigma$-conjugacy classes in section~\ref{sect BT again}, using moduli spaces similar to those in \cite{mantovan08}. In section~\ref{sect numerical dimension} we translate the dimension of the fibres of the correspondence into group theoretical terms and calculate it in section~\ref{sect fibre dimension}. This allows us to reduce to the case of a superbasic Rapoport-Zink datum of EL type in section~\ref{sect reduction to superbasic}. In section~\ref{sect superbasic} we prove Theorem~\ref{thm dimension RZ-space} in this special case, thus finishing the proof of Theorem~\ref{thm dimension shimura}. We remark that the reduction step which reduces the task of estimating the dimension of Rapoport-Zink spaces to the case of a superbasic datum of EL type is analogous to the reduction step in \cite{GHKR06}~sect.~5 for affine Deligne-Lusztig varieties. In particular, the arguments given in section~\ref{sect fibre dimension} and section~\ref{sect reduction to superbasic} are very similar to those in \cite{GHKR06}. The proof of Theorem~\ref{thm dimension RZ-space} in the superbasic EL case in section~\ref{sect superbasic} follows the proof in \cite{viehmann08}, replacing her combinatorial invariants by their generalisation introduced in \cite{hamacher}. The article is subdivided as follows. Sections~\ref{sect shimura} to \ref{sect BT} are mostly recapitulations of already known facts, except for sections~\ref{ss defect} and \ref{ss PEL vs D}. In sections~\ref{sect newton stratification} to \ref{sect EO strata} we consider the Newton stratification on the special fibre of Shimura varieties. We give an overview of the relationship between Theorems~1.1--1.3 in section~\ref{sect epilogue}. In the subsequent sections we exclusively deal with the geometry of Rapoport-Zink spaces. \begin{notation} Throughout the article we keep the following notation. For any ring $R$ we denote by $W(R)$ its ring of Witt vectors. If $R$ has characteristic $p$, we denote by $\sigma$ the Frobenius endomorphism of $R$, as well as the Frobenius of $W(R)$ and $W(R)_\QQ$. For any $p$-adic field $F$ we denote by $O_F$ its ring of integers, by $k_F$ its residue field and set $q_F := \# k_F$. We denote $\Gamma_F := \Gal (\kbar_F/k_F) = \Aut_{F,\cont} (\hat{F}^\nr)$ and $\Gamma := \Gamma_{\QQ_p}$. We denote by $k$ an arbitrary algebraically closed field of characteristic $p$ and $L = W(k)_\QQ$, $O_L = W(k)$.
Starting in section~\ref{sect BT again} we will assume $k = \FFbar_p$. In most cases we will denote objects defined over number fields by letters in sans serif while we use the usual italic letters for objects defined over $p$-adic fields. \end{notation} \emph{Acknowledgements:} I would like to express my sincere gratitude to my advisor E.~Viehmann for entrusting me with this topic and for her steady encouragement and advice. I thank M.~Chen, M.~Kisin, E.~Viehmann and M.~Rapoport for giving me a preliminary version of their respective articles. The author was partially supported by ERC starting grant 277889 ``Moduli spaces of local $G$-shtukas''. \section{Shimura varieties of PEL type with good reduction} \label{sect shimura} \subsection{Moduli spaces of abelian varieties} We recall Kottwitz's definition of an integral PEL-Shimura datum with hyperspecial level structure at $p$ (which we will abbreviate to ``unramified PEL-Shimura datum'') and the associated moduli spaces. They were defined in \cite{kottwitz92}, which is also the main reference for this section. The moduli problem is given by a datum $\Dscr= (\Bsf,\, ^{\ast},\Vsf,\pair,\Osf_\Bsf,\Lambda,h)$. To this datum one associates a linear algebraic group $\Gsf$ over $\QQ$, a conjugacy class $[\mu_h]$ of cocharacters of $\Gsf_\CC$ and a number field $\Esf$. These data have the following meaning. \begin{itemlist} \item $\Bsf$ is a finite-dimensional semi-simple $\QQ$-algebra such that $\Bsf_{\QQ_p}$ is a product of matrix algebras over unramified extensions of $\QQ_p$. \item $^{\ast}$ is a positive involution of $\Bsf$ over $\QQ$. \item $\Vsf$ is a non-zero finitely generated left-$\Bsf$-module. \item $\pair$ is a symplectic form on the underlying $\QQ$-vector space of $\Vsf$ which is $\Bsf$-adapted, i.e.\ for all $v,w \in \Vsf$ and $b\in \Bsf$ \[ \langle bv,w\rangle = \langle v,b^\ast w\rangle. \] \item $\Osf_\Bsf$ is a $\ZZ_{(p)}$-order of $\Bsf$, whose $p$-adic completion $\Osf_{\Bsf,p}$ is a maximal order of $\Bsf_{\QQ_p}$. \item $\Lambda$ is a lattice in $\Vsf_{\QQ_p}$ which is self-dual for $\pair$ and preserved under the action of $\Osf_\Bsf$. \item The group $\Gsf$ represents the functor \[ \Gsf (R) = \{ g \in \GL_{\Bsf}(\Vsf \otimes R) \mid \exists c(g) \in R^\times: \langle g(v), g(w) \rangle = c(g) \cdot \langle v,w \rangle\}. \] We assume henceforth that $\Gsf$ is connected. \item $h: \CC \rightarrow \End_\Bsf(\Vsf)_\RR$ is a homomorphism of algebras with involution (where on the left hand side the involution is the complex conjugation and on the right hand side the involution maps an endomorphism to its adjoint with respect to $\pair$) such that the form $\langle v, h(i)\cdot w\rangle$ on $\Vsf_\RR$ is positive definite. \item Let $\Vsf_\CC = \Vsf^0 \oplus \Vsf^1$, where $\Vsf^0$ resp.\ $\Vsf^1$ is the subspace where $h(z)$ acts by $\zbar$ resp.\ $z$. We define $\mu_h$ to be the cocharacter of $\Gsf_\CC$ which acts with weight $0$ on $\Vsf^0$ and with weight $1$ on $\Vsf^1$. Then $[\mu_h]$ is defined as the $\Gsf(\CC)$-conjugacy class of $\mu_h$. \item $\Esf$ is the field of definition of $[\mu_h]$. \end{itemlist} \begin{definition} We call a datum $\Dscr$ as above an unramified PEL-Shimura datum. The field $\Esf$ is called the reflex field of $\Dscr$. \end{definition} Let $K^p \subset \Gsf(\AA^p)$ be an open compact subgroup.
We consider the functor $\Ascr_{\Dscr,K^p}$ which associates to an $\Osf_\Esf \otimes \ZZ_{(p)}$-algebra $R$ the set of isomorphism classes of tuples $(A,\lambda,\iota,\eta)$ where \begin{itemlist} \item $A$ is a projective abelian scheme over $\Spec R$. \item $\lambda$ is a polarization of $A$ of degree prime to $p$. \item $\iota: \Osf_\Bsf \rightarrow \Aut(A)$ is a homomorphism satisfying the following two conditions. For every $a\in \Osf_\Bsf$ we have the compatibility of $\lambda$ and $\iota$ \begin{equation} \label{term kottwitz compatibility} \iota(a) = \lambda^{-1} \circ \iota(a^\ast)^\vee \circ \lambda \end{equation} and $\iota$ satisfies the Kottwitz determinant condition. That is, we have an equality of characteristic polynomials \begin{equation} \label{term kottwitz determinant} \cha (\iota(a)|\Lie A) = \cha (a|\Vsf^1). \end{equation} The polynomial on the right hand side has actually coefficients in $\Osf_\Esf \otimes \ZZ_{(p)}$, but we consider it as element of $R[X]$ via the structural morphism. \item $\eta$ is a level structure of type $K^p$ in the sense of \cite{kottwitz92} \S 5. \end{itemlist} Two such tuples $(A,\lambda,\iota,\eta), (A',\iota',\lambda',\eta')$ are isomorphic if there exists an isogeny $A \rightarrow A'$ of degree prime to $p$ which commutes with $\iota$, carrying $\lambda$ into a $\ZZ_{(p)}^\times$-scalar multiple of $\lambda'$ and carrying $\eta$ to $\eta'$. If $K^p$ is small enough, this functor is representable by a smooth quasi-projective $\Osf_\Esf \otimes \ZZ_{(p)}$-scheme. Henceforth we will always assume that this is the case. We fix $\Dscr,K^p$ as above and choose an embedding $\QQbar \mono \QQbar_p$ of the algebraic closure of $\QQ$ in $\CC$ into an algebraic closure of $\QQ_p$. We denote by $v$ the place of $\Esf$ over $p$ which is given by this embedding and by $\kappa$ its residue field. We write \[ \Ascr_0 := \Ascr_{\Dscr,K^p,0} = \Ascr_{\Dscr,K^p} \times \bar\kappa \] for the geometric special fibre of $\Ascr_{\Dscr,K^p}$ at $v$. We fix an isomorphism $\bar\kappa \cong \bar\FF_p$ and denote by $\Aund^{\univ}$ the universal object over $\Ascr_0$. \subsection{\BTDs} \label{ss preliminaries} Let $R$ be a $\FFbar_p$-algebra and $(A,\iota,\lambda,\eta) \in \Ascr_0(R)$. Then by functoriality we obtain additional structure on the Barsotti-Tate group $X = A[p^\infty]$ of $A$. That is, we get an action of $\Osf_{\Bsf,p}$ on $A[p^\infty]$ and a polarisation up to $\ZZ_p^\times$-scalar multiple, satisfying the same compatibility conditions as $\iota$ and $\lambda$. \begin{notation} We call a polarisation up to $\ZZ_p^\times$-scalar multiple as above a $\ZZ_p^\times$-homogeneous polarisation. We will use the analogous notion for bilinear forms which are defined up $\ZZ_p^\times$- or $\QQ_p^\times$-scalar multiple. \end{notation} \begin{definition} Let $X$ be a \BT\ over a $\kappa$-algebra $R$, together with a homomorphism $\iota:\Osf_{\Bsf,p} \to \End X$ and a $\ZZ_p^\times$-homogeneous polarisation $\lambda:X \to X^\vee$. The tuple $\Xund = (X,\iota,\lambda)$ is called a \BTD\ if the following conditions are satisfied. \begin{assertionlist} \item Let $\Bsf_{\QQ_p} = \prod B_i$ be a decomposition into simple factors and $\Vsf_{\QQ_p} = \prod V_i$, $\Osf_{\Bsf,p} = \prod O_{B_i}$,the induced decompositions. Denote by $\epsilon_i$ the multiplicative unit in $B_i$ and let $X_i := \image \epsilon_i$. Then $X_i$ is a \BT\ with height $\dim_{\QQ_p} V_i$. \item $\iota(a) = \lambda^{-1} \circ \iota(a^\ast)^\vee \circ \lambda$. 
\item $\cha (\iota(a)|\Lie X) = \cha (a| \Vsf_v^1)$ where $\Vsf_v^1$ denotes the $v$-adic completion of $\Vsf^1$. \end{assertionlist} \end{definition} \begin{remark} The condition that $X_i$ is a \BT\ is automatic, cf.~section~\ref{ss normal forms of BTpELs}. \end{remark} Some of the data above are superfluous for the general study of \BTDs. Therefore we introduce the following (simpler) objects. \begin{definition} \begin{subenv} \item Let $B$ be a finite product of matrix algebras over finite unramified field extensions of $\QQ_p$ and $O_B \subset B$ be a maximal order. We call a \BT\ with $O_B$-action a \BT\ with EL structure. \item Let $O_B,B$ be as above and $^\ast$ be a $\QQ_p$-linear involution of $B$ which stabilizes $O_B$. We call a \BT\ with polarisation $\lambda$ and $O_B$-action $\iota$ satisfying \[ \iota(a) = \lambda^{-1} \circ \iota(a^\ast)^\vee \circ \lambda \] a \BT\ with PEL structure. \item We call a tuple $(B,O_B)$ resp.\ $(B,O_B,\ast)$ as above an EL-datum resp.\ a PEL-datum. \end{subenv} \end{definition} \begin{notation} When we want to consider the EL case and the PEL case simultaneously, we will write the additional data in brackets, e.g.\ ``Let $\Xund$ be a \BTpEL .'' \end{notation} \begin{definition} Let $\Xund = (X,\iota,(\lambda))$ and $\Xund' = (X',\iota',(\lambda'))$ be two \BTpELs. \begin{subenv} \item A morphism $\Xund \to \Xund'$ is a homomorphism of \BT s $X \to X'$ which commutes with the $O_B$-action and the polarisation in the PEL case. \item An isogeny $\Xund \to \Xund'$ is an isogeny $X \to X'$ which commutes with the $O_B$-action and in the PEL case also commutes with the polarisation up to $\QQ_p^\times$-scalar. The scalar given in the PEL case is called the similitude factor of the isogeny. \end{subenv} \end{definition} We note that an isogeny of {\BTPELs} is not necessarily a homomorphism. \subsection{Deformation theory} \label{ss deformation} Let $\Xund = (X,\iota,(\lambda))$ be a \BTpEL\ over a perfect field $k_0$ of characteristic $p$. We briefly recall the construction of its universal deformation $\Xund^\univ$, where we only consider deformations in equal characteristic. By \cite{illusie85},~Cor.~4.8 the deformation space $\Def (X)$ is pro-representable by $\Spf k_0 \pot{X_1, \ldots , X_{d\cdot (n-d)}}$ where $n$ denotes the height of $X$ and $d$ its dimension. In order to describe $\Def(\Xund)$ we use the following result of Drinfeld. For any Artinian local $k_0$-algebra $A$ and \BT s $X',X''$ over $A$ the canonical map \[ \Hom(X',X'') \otimes_{\ZZ_p} \QQ_p \to \Hom(X'_{k_0},X''_{k_0}) \otimes_{\ZZ_p} \QQ_p \] is an isomorphism. Now the condition that an element of $\Hom(X',X'') \otimes \QQ_p$ is a homomorphism is closed on $\Spec A$ by \cite{RZ96},~Prop.~2.9. Thus $\Def (\Xund)$ is a closed subfunctor of $\Def(X)$ and in particular pro-representable by $\Spf \Rscr_{\Xund}$ for some adic ring $\Rscr_{\Xund}$. Denote by $\Xund^{\rm def}$ the universal object over $\Spf \Rscr_X$. By a result of Messing (\cite{messing72}~Lemma~II.4.16) for any $I$-adic ring $R$ the functor \begin{eqnarray*} \{\textnormal{\BT s over } \Spec R\} &\to& \{\textnormal{\BT s over} \Spf R\} \\ X' &\mapsto& (X' \mod I^n)_{n\in\NN} \end{eqnarray*} is an equivalence of categories. Thus $\Xund^{\rm def}$ induces a \BTpEL\ $\Xund^\univ$ over $\Spec\Rscr_{\Xund}$. We note that the same construction also works for \BTPELs\ with $\ZZ_p^\times$-homogeneous polarisation and in particular yields a canonically isomorphic deformation functor. 
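For instance (as a simple illustration, with trivial endomorphism structure and forgetting the polarisation), if $X$ is the \BT\ of an elliptic curve over $k_0$, so that $n = 2$ and $d = 1$, the above gives \[ \Def(\Xund) = \Def(X) \cong \Spf k_0 \pot{X_1}, \] a one-dimensional formal disc; in view of Proposition~\ref{prop serre-tate} below this is consistent with the fact that the corresponding moduli spaces of elliptic curves are curves.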
\begin{definition} \label{defi deformation space} We call $\Xund^\univ$ the universal deformation of $\Xund$. We denote $\Sscr_{\Xund} = \Spec \Rscr_{\Xund}$. \end{definition} The Serre-Tate theorem states that the canonical homomorphism $\Def (A) \to \Def (A[p^\infty])$ is an isomorphism for every abelian variety $A$ over an algebraically closed field $k$ of characteristic $p$. We obtain the following consequence. \begin{proposition} \label{prop serre-tate} Let $x = \Aund \in \Ascr_0 (\FFbar_p)$ and $\Xund := \Aund[p^\infty]$. Then the morphism $\Rscr_{\Xund} \to \widehat\Oscr_{\Ascr_0,x}$ induced by the deformation $(A^{\univ}[p^\infty])_{\widehat\Oscr_{\Ascr_0,x}}$ is an isomorphism and the pull-back of $(A^{\univ}[p^\infty])_{\widehat\Oscr_{\Ascr_0,x}}$ equals $\Xund^\univ$. \end{proposition} \section{Group theoretic preliminaries} \label{sect group theory} \subsection{Reductive group schemes over $\ZZ_p$} \label{ss group theory} We recall the following definitions from \cite{SGA3-3}. \begin{definition} Let $S$ be an arbitrary scheme. A group scheme $G \to S$ is called reductive if the structure morphism is smooth and affine with connected fibres and for every geometric point $\sbar$ of $S$ the linear algebraic group $G_{\sbar}$ is reductive. \end{definition} \begin{definition} Let $G \to S$ be a reductive group scheme. \begin{subenv} \item A maximal torus of $G$ is a subtorus $T \subset G$ such that $T_{\sbar}$ is a maximal torus of $G_{\sbar}$ for all geometric points $\sbar$ of $S$. \item A Borel subgroup of $G$ is a subgroup $B \subset G$ such that $B_{\sbar}$ is a Borel subgroup of $G_{\sbar}$ for all geometric points $\sbar$ of $S$. \end{subenv} \end{definition} In the case where $S = \Spec R$ is the spectrum of a local ring we make the following definitions. A reductive group scheme $G$ over $S$ is called \emph{split} if it contains a split maximal torus and it is called \emph{quasi-split} if it contains a Borel subgroup. The notions of split resp.\ quasi-split reductive group schemes also exist over arbitrary bases, but are a bit more complicated. Let $G$ be a reductive group scheme over $\ZZ_p$. Then it is automatically quasi-split and splits over a finite unramified extension $O_F$ of $\ZZ_p$ (cf. \cite{VW13}~A.4). We fix $T \subset B \subset G$, where $T$ is a maximal torus and $B$ a Borel subgroup of $G$. Denote \begin{eqnarray*} X^*(T) &=& \Homfunc (T,\GG_m) \\ X_*(T) &=& \Homfunc (\GG_m,T). \end{eqnarray*} These sheaves become constant after base change to $O_F$; thus we regard them as abstract groups with $\Aut_{\ZZ_p}(O_F)$ action. We obtain canonical isomorphisms of Galois-modules \[ X_*(T) \cong X_*(T_{\QQ_p}) \cong X_*(T_{\FF_p}) \] where we also identify $\Gal(F/\QQ_p) = \Aut_{\ZZ_p}(O_F) = \Gal(k_F/\FF_p)$ (and analogously for $X^*(T)$). We denote by $R$ (resp.\ $R^\vee$) the set of absolute roots (resp.\ coroots) of $G$ with respect to $T$, that is, the lifts of the absolute roots (resp.\ coroots) of $G_{\FF_p}$ to $O_F$. This definition coincides with the definition of roots (resp.\ coroots) of $G_{O_F}$ given in \cite{SGA3-3}, Exp.~XXII, ch.~1. In particular, the absolute roots (resp.\ coroots) of $G$ also coincide with the absolute roots (resp.\ coroots) of $G_{\QQ_p}$ w.r.t. the identifications above. We denote by $R^+,\Delta^+$ and $R^{\vee,+},\Delta^{\vee,+}$ the systems of positive/simple roots resp.\ positive/simple coroots determined by $B$. Let $\pi_1(G)$ denote the fundamental group of $G$, i.e. the quotient of $X_*(T)$ by the coroot lattice.
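For instance, if $G = \GL_n$ over $\ZZ_p$ and $T$ is the diagonal torus, then $X_*(T) \cong \ZZ^n$, the coroot lattice is the subgroup of tuples whose coordinates sum to zero, and the sum of coordinates induces an isomorphism \[ \pi_1(\GL_n) \cong \ZZ; \] this is the identification used in Example~\ref{ex isocrystal} below, where the Kottwitz point becomes $\val \det$.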
The Weyl group of $G$ is defined as the quotient $W := (\Norm_G T)/T$. It is represented by a finite \'etale scheme which becomes constant after base change to $O_F$. Thus we may identify $W = (\Norm_G T)(O_F)/T(O_F)$ with the canonical Galois action. In particular $W$ is canonically isomorphic to the absolute Weyl groups of $G_{\FF_p}$ and $G_{\QQ_p}$ equipped with Galois action. We denote by $\Wtilde := \Norm_G(T)(L)/T(O_L) \cong \Norm_G T(F)/T(O_F)$ the (absolute) extended affine Weyl group of $G$, equipped with the canonical Galois action. We will often consider an element $x \in \Wtilde$ as an element of $G(L)$ by which we mean an arbitrary lift of $x$. We have $\Wtilde \cong W \rtimes X_*(T)$ canonically; denote by $p^\mu$ the image of a cocharacter $\mu$ in $\Wtilde$. The canonical inclusion of the affine Weyl group $W_a$ into $\Wtilde$ yields a short exact sequence \begin{center} \begin{tikzcd} 0 \arrow{r} & W_a \arrow{r} & \Wtilde \arrow{r} & \pi_1(G) \arrow{r} & 0. \end{tikzcd} \end{center} The isomorphism $\Wtilde \cong W \rtimes X_*(T)$ defines an action of $\Wtilde$ on the apartment $\afr := X_*(T)_\RR$ by affine linear maps. As $W_a$ acts simply transitively on the set of alcoves in $\afr$, the stabilizer $\Omega \subset \Wtilde$ of a fixed ``base'' alcove defines a right-splitting of the exact sequence above. We choose as the base alcove the alcove in the anti-dominant chamber whose closure contains the origin. This alcove corresponds to the Iwahori subgroup $\Iscr$ of $G(L)$ which is defined as the preimage of $B(\FFbar_p)$ w.r.t.\ the canonical projection $G(O_L) \epi G(\FFbar_p)$. We define the length function on $\Wtilde$ by \[ \ell(w\tau) = \ell(w). \] for $w \in W_a,\tau\in \Omega$. In particular, the elements of length $0$ are precisely those which are contained in $\Omega$. \subsection{$\sigma$-conjugacy classes} \label{ss sigma conjugacy} Using Dieudonn\'e theory one gets a bijection between the isogeny classes of \BTDs\ over $k$ and certain $\sigma$-conjugacy classes in the $L$-valued points of a reductive group scheme over $\ZZ_p$ (cf.\ next section). For this reason we briefly recall Kottwitz's classification of $\sigma$-conjugacy classes in the case of unramified groups. The main reference for this subsection is the article of Rapoport and Richartz \cite{RR96}. We keep the notation of the previous subsection. Recall that two elements $b,b' \in G(L)$ are called $\sigma$-conjugated if there exists an element $g\in G(L)$ such that $b' = gb\sigma(g)^{-1}$. The equivalence classes with respect to this relation are called $\sigma$-conjugacy classes; we denote the $\sigma$-conjugacy class of an element $b \in G(L)$ by $[b]$. Let $B(G)$ denote the set of all $\sigma$-conjugacy classes in $G(L)$. By \cite{RR96} Lemma 1.3 the sets of $\sigma$-conjugacy classes does not depend on the choice of $k$ (up to canonical bijection), so this notation is without ambiguity. Kottwitz assigns in \cite{kottwitz85} to each $\sigma$-conjugacy class $[b]$ two functorial invariants \[ \nu_G(b) \in X_*(T)_{\QQ,\dom}^\Gamma \] \[ \kappa_G (b) \in \pi_1(G)_\Gamma, \] which are called the Newton point resp.\ the Kottwitz point of $[b]$. Those two invariants determine $[b]$ uniquely. \begin{example} \label{ex isocrystal} \emph{(1)} Assume $G = \GL_n$. We have a one-to-one correspondence \begin{eqnarray*} B(G) &\leftrightarrow& \{ \textnormal{isocrystals over } k \textnormal{ of height } n \}/\cong \\ \left[ b \right] &\mapsto& (L^n,b\sigma). 
\end{eqnarray*} The above bijection is easy to see, as a base change of $(L^n,b\sigma)$ by the matrix $g$ replaces $b$ with $gb\sigma(g)^{-1}$. Now we choose $T$ to be the diagonal torus and $B$ to be the Borel subgroup of upper triangular matrices. Then we have canonical isomorphisms $X_*(T)_{\QQ}^\Gamma = X_*(T)_{\QQ} = \QQ^n$ and $\pi_1(G)_\Gamma = \pi_1(G) = \ZZ$. The first isomorphism identifies \[ X_*(T)_{\QQ,\dom} = \{\nu = (\nu_1,\ldots,\nu_n) \in \QQ^n\mid \nu_1 \geq \ldots \geq \nu_n\}. \] Then $\nu_G(b) = (\nu_1, \ldots, \nu_n)$ is the vector of Newton slopes of the isocrystal $(L^n,b\sigma)$ given in descending order. Of course, this already determines $[b]$ uniquely. The Kottwitz point is given by \[ \kappa_G (b) = \val \det b = \nu_1 + \ldots + \nu_n. \] \emph{(2)} Let $F/\QQ_p$ be a finite unramified field extension of degree $d$ and let $G = \Res_{F/\QQ_p} \GL_n$. Similarly to the above one sees that $[b] \mapsto ((F\otimes L)^n,b(1\otimes\sigma))$ defines a bijection between $B(G)$ and the isomorphism classes of isocrystals $(N,\Phi)$ over $k$ of height $n$ together with an $F$-action $\iota: F \mono \Aut(N,\Phi)$. We have a canonical isomorphism $F\otimes L \cong \prod_{\tau: F \mono L} L$ and likewise \[ (L\otimes F)^n \cong \prod_{\tau:F\mono L} L^n =: \prod_{\tau:F\mono L} N_\tau. \] Then $\sigma$ defines a bijection of $N_\tau$ onto $N_{\sigma\tau}$ and any element $b\in G(L)$ stabilizes the $N_\tau$. Fixing an embedding $\tau: F \mono L$, we thus obtain an equivalence of categories \begin{eqnarray*} \{ \textnormal{isocrystals over } k \textnormal{ of height } n \textnormal{ with } F\textnormal{-action} \} &\rightarrow& \{ \sigma^d\textnormal{-isocrystals over } k \textnormal{ of height } n \} \\ (N,\Phi,\iota) &\mapsto& (N_\tau, \Phi^d). \end{eqnarray*} Using that in $\GL_n(L)$ any element $g$ is $\sigma^d$-conjugate to $\sigma(g)$ (this holds as every $\sigma^d$-conjugacy class contains a $\sigma$-stable element, e.g.\ a suitable lift of an element in $\Wtilde$) one sees that the isomorphism class of the object on the right hand side does not depend on the choice of $\tau$. Hence if we denote the Newton slopes of $(N_\tau, \Phi^d)$ by $(\lambda_1,\ldots,\lambda_n)$, the slopes of the isocrystal $(N,\Phi)$ (forgetting the $F$-action) equal \[ (\underbrace{\frac{\lambda_1}{d},\ldots,\frac{\lambda_1}{d}}_{d\textnormal{ times}},\ldots,\underbrace{\frac{\lambda_n}{d},\ldots,\frac{\lambda_n}{d}}_{d \textnormal{ times}}). \] We choose $T$ to be the diagonal torus and $B$ to be the Borel subgroup of upper triangular matrices. Then $X_*(T) \cong \prod_{\tau:F\mono L} \ZZ^n$ canonically, identifying \[ X_*(T)_{\QQ,\dom}^\Gamma = \{\nu = ((\nu_1,\ldots,\nu_n))_{\tau} \in \prod \QQ^n\mid \nu_1 \geq\ldots\geq \nu_n \}. \] Then by functoriality $\nu_G(b) = ((\frac{\lambda_1}{d},\ldots , \frac{\lambda_n}{d}))_\tau$. \end{example} Recall that we have the Cartan decomposition \[ G(L) = \bigsqcup_{\mu \in X_*(T)_\dom} G(O_L)p^{\mu}G(O_L). \] An estimate for $\nu$ and $\kappa$ on a $G(O_L)$-double coset is given by the generalised Mazur inequality. Before we can state it, we need to introduce some more notation. We equip $X_*(T)_\QQ$ with a partial order $\leq$ where we say that $\mu' \leq \mu$ if $\mu-\mu'$ is a linear combination of positive coroots with non-negative rational coefficients. For any cocharacter $\mu \in X_*(T)$ we denote by $\mubar$ the average of its $\Gamma$-orbit. \begin{proposition}[\cite{RR96},~Thm.~4.2] Let $b \in G(O_L)\mu(p)G(O_L)$ for $\mu \in X_*(T)_\dom$.
Then the following assertions hold. \begin{subenv} \item We have $\nu_G(b) \leq \mu$. \item The Kottwitz point $\kappa_G(b)$ equals the image of $\mu$ in $\pi_1(G)_\Gamma$. \end{subenv} \end{proposition} \begin{definition} \begin{subenv} \item We define the partial order $\leq$ on $B(G)$ by \[ [b'] \leq [b] :\Lra \nu_G(b') \leq \nu_G(b) \textnormal{ and } \kappa_G(b') = \kappa_G(b). \] \item We denote \begin{eqnarray*} B(G,\mu) &=& \{[b] \in B(G)\mid \nu_G(b) \leq \mu \textnormal{ and } \kappa_G(b) \textnormal{ is the image of } \mu \textnormal{ in } \pi_1(G)_\Gamma\} \\ &=& \{[b] \in B(G)\mid [b] \leq [p^\mu]\}. \end{eqnarray*} \end{subenv} \end{definition} By the generalised Mazur inequality $B(G,\mu)$ contains all $\sigma$-conjugacy classes which intersect $G(O_L)\mu(p)G(O_L)$ non-emptily. It is known that the converse is also true. Many authors have worked on this conjecture; the result in the generality we need was proven by Kottwitz and Gashi (\cite{kottwitz03}~ch.~4.3, \cite{gashi10}~Thm.~5.2). To every $\sigma$-conjugacy class $[b]$ one associates linear algebraic groups $M_b$ and $J_b$ which are defined over $\QQ_p$. The group $M_b$ is defined as the centraliser of $\nu_G(b)$ in $G$. In particular $M_b$ is a standard Levi subgroup of $G$. Kottwitz showed that the intersection of $M_b$ and $[b]$ is non-empty (\cite{kottwitz85}~ch.~6). The group $J_b$ represents the functor \[ J_b(R) = \{g \in G(R \otimes_{\QQ_p} L)\mid gb = b\sigma(g) \}. \] This group is an inner form of $M_b$ which (up to canonical isomorphism) does not depend on the choice of the representative of $[b]$ (\cite{kottwitz85}~\S~5.2). \begin{definition} \label{def basic} Let $[b] \in B(G)$. \begin{subenv} \item $[b]$ is called basic if $\nu_G(b)$ is central. \item $[b]$ is called superbasic if it does not intersect any proper Levi subgroup of $G$. \end{subenv} \end{definition} We note that $M_b$ is a proper subgroup of $G$ if and only if $[b]$ is not basic. As $[b]$ always intersects $M_b$ non-emptily, this observation shows that every superbasic $\sigma$-conjugacy class is also basic. We have a bijection between the basic $\sigma$-conjugacy classes of $G$ and $\pi_1(G)_\Gamma$ induced by the Kottwitz point (\cite{kottwitz85}~Prop.~5.6). Finally we can define the last group theoretic invariant which appears in the dimension formula. \begin{definition} Let $[b] \in B(G)$. We define the defect of $[b]$ by \[ \defect_G (b) := \rank_{\QQ_p} G - \rank_{\QQ_p} J_b. \] \end{definition} \subsection{A formula for the defect} \label{ss defect} We keep the notation above. Furthermore, let $\omegaund_1,\ldots,\omegaund_l$ be the sums of the elements of the Galois orbits of the absolute fundamental weights of $G$. Recall that we have an embedding $\pi_1(G) \cong \Omega \mono \Wtilde$. For $\varpi \in \pi_1(G)$ let $\dot\varpi$ be its image in $\Wtilde$. Then by construction $\dot\varpi$ is basic and $\kappa_G(\dot\varpi)$ is the image of $\varpi$ in $\pi_1(G)_\Gamma$. \begin{proposition} \label{prop calculation of defect} Let $b \in G(L)$. Then \[ \defect_G (b) = 2\cdot\sum_{i=1}^l \{ \langle \nu_G(b), \omegaund_i \rangle \} \] where $\{\cdot\}$ denotes the fractional part of a rational number. \end{proposition} The proposition is a generalisation of \cite{kottwitz06}~Cor.~1.10.2, and the proof given here generalises Kottwitz's proof. The calculations will use the following combinatorial result.
\begin{lemma} \label{lem root theoretic} Let $\Psi = (X,R,R^+,X^\vee,R^\vee,R^{\vee,+})$ be a based reduced root datum. We fix the following notations, which will be used until the end of the proof of this lemma. \begin{simplelist} \item $P^\vee := $ coweight lattice of $\Psi$. \item $Q^\vee := $ coroot lattice of $\Psi$. \item $\pi_1 := P^\vee/Q^\vee$ denotes the fundamental group. \item $I := \Aut\Psi$. \item $\omega_1,\ldots,\omega_l := $ fundamental weights of $\Psi$. \item $\varpi_1,\ldots,\varpi_l := $ images of $\omega_1^\vee,\ldots,\omega_l^\vee$ in $\pi_1$. \item $\chi_1,\ldots, \chi_l$ are the characters of $\pi_1$ defined by $\chi_j(\varpi) := \exp (-2\pi i \cdot \langle \varpi,\omega_j\rangle)$. \item $\Xi := \mathbf{1} \oplus \chi_1 \oplus \cdots \oplus \chi_l$ seen as $I \ltimes \pi_1$-representation. Here the action of $I$ is given by the permutation of the $\chi_j$ according to its action on the fundamental weights. \item $V' :=$ vector space of affine linear functions on $X^\vee_\RR$ seen as $I \ltimes \pi_1$-representation. \end{simplelist} Then $V' \cong \Xi$. \end{lemma} \begin{remark} If $\Psi$ is the root datum of a reductive group $G$ then in general $\pi_1(G)$ is only a subgroup of $\pi_1$. We have equality if and only if $P^\vee = X^\vee$, i.e. $G$ is adjoint. \end{remark} \begin{proof} The assertion that $\Xi$ and $V'$ are isomorphic as representations of $\pi_1$ was proven in \cite{kottwitz06}~Lemma~4.1.1. In particular, it proves our assertion in the case where $I$ is trivial. We assume without loss of generality that the Dynkin diagram of $\Psi$ is connected. We have to check the assertion only in the cases where $I$ is non-trivial, i.e.\ when the Dynkin diagram is of type $A_l$, $D_l$ or $E_6$. We show that $V'$ and $\Xi$ are isomorphic by calculating their characters. It is obvious how to calculate the character of $\Xi$. For the character of $V'$ we use that the action of $I \ltimes \pi_1$ permutes the simple affine roots. We obtain for $\sigma\cdot\varpi \in I \ltimes \pi_1$ \begin{eqnarray*} \tr (\sigma\cdot\varpi \mid V') &=& \# \textnormal{ simple affine roots fixed by } \sigma\cdot\varpi, \\ \tr (\sigma\cdot\varpi \mid \Xi) &=& 1 + \sum_{i; \sigma(\omega_i) = \omega_i} \omega_i(\varpi). \end{eqnarray*} Now all data we need to calculate the right hand sides are given in \cite{bourbaki68} and thus the claim is reduced to some straightforward calculations. We give these calculations for type $A_l$ as an example and skip the calculations in the other two cases as they are analogous. We use the notation of \cite{bourbaki68}~ch.~VI,~planche~I. That is $\alpha_0,\alpha_1,\ldots,\alpha_l$ denote the simple affine roots, where $\alpha_0$ is the unique root which is not finite and $\omega_i$ (in Bourbaki's notation $\bar\omega_i$) denotes the fundamental weight associated to $\alpha_i$. We will consider the indices as elements of $\ZZ/(l+1)\ZZ$. We have $\pi_1 = \{1, \varpi_1,\ldots,\varpi_l \} \cong \ZZ/(l+1)\ZZ$ with distinguished generator $\varpi_1$. The group $\pi_1$ acts on the set of simple affine roots by cyclic permutation. In particular an element of $\pi_1$ different from the identity acts fixed point free on the set of simple affine roots. Now $I = \{1,\sigma\} \cong \ZZ/2\ZZ$. The non-trivial automorphism $\sigma$ acts on the set of simple affine roots by $\sigma(\alpha_j) = \alpha_{-j}$. 
Altogether, we obtain \[ (\sigma\cdot\varpi_1^k)(\alpha_j) = \alpha_{-j-k}. \] Thus the number of simple affine roots fixed by $\sigma\cdot\varpi_1^k$ equals the number of solutions of the equation \[2j \equiv -k \mod l+1.\] This yields the following values for the character of $V'$. If $l$ is even, then \begin{eqnarray*} \tr (1\cdot \varpi_1^k \mid V') &=& \left\{ \begin{array}{ll} 0 \quad & \textnormal{ if } k = 1,\ldots,l \\ l+1 \quad & \textnormal{ if } k = 0 \end{array} \right. \\ \tr (\sigma\cdot\varpi_1^k \mid V') &=& 1 \end{eqnarray*} and if $l$ is odd we have \begin{eqnarray*} \tr (1\cdot \varpi_1^k \mid V') &=& \left\{ \begin{array}{ll} 0 \quad & \textnormal{ if } k = 1,\ldots,l \\ l+1 \quad & \textnormal{ if } k = 0 \end{array} \right. \\ \tr (\sigma\cdot\varpi_1^k \mid V') &=& \left\{ \begin{array}{ll} 0 \quad & \textnormal{ if } k \textnormal{ is odd,} \\ 2 \quad & \textnormal{ if } k \textnormal{ is even.} \end{array} \right. \end{eqnarray*} Now we have to compare the above formulas to the character of $\Xi$. We note that \[ \omega_1^\vee = \frac{1}{l+1} ( l \alpha_1^\vee + (l-1) \alpha_2^\vee + \ldots + 2 \alpha_{l-1}^\vee + \alpha_l^\vee). \] Hence \[ \chi_j(\varpi_1^k) = \chi_j(\varpi_1)^k = \exp(2\pi i \cdot \frac{j}{l+1})^k. \] We obtain \[ \tr (1\cdot \varpi_1^k \mid \Xi) = \sum_{j=0}^{l} \exp\left(2\pi i \cdot \frac{jk}{l+1}\right) = \left\{ \begin{array}{ll} 0 \quad & \textnormal{ if } k = 1,\ldots,l \\ l+1 \quad & \textnormal{ if } k = 0 \end{array} \right. , \] coinciding with the formula above. Now we have $\sigma(\omega_j) = \omega_j$ if and only if $l$ is odd and $j = \frac{l+1}{2}$. We obtain for even $l$ \[ \tr (\sigma\cdot\varpi_1^k \mid \Xi) = 1 + 0 = 1, \] and for odd $l$ \[ \tr (\sigma\cdot\varpi_1^k \mid \Xi) = 1 + \chi_{\frac{l+1}{2}} (\varpi_1^k) = 1 + (-1)^k. \] Thus we have indeed $ \tr ( \cdot \mid V') = \tr( \cdot \mid \Xi)$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop calculation of defect}] First we note that the equation does not change if we replace $G$ by $M_b^{\ad}$ (cf.~\cite{kottwitz06}~Lemma~1.9.1). Thus we may assume that $b$ is basic and $G$ of adjoint type. Hence $[b]$ is uniquely determined by its Kottwitz point, so we may even assume that $b$ is a representative of an element $\wtilde\in\Omega$ in the normaliser $(\Norm_G T)(L)$. Now conjugation with $b$ fixes $T$ and the standard Iwahori subgroup. By Bruhat-Tits theory the twist of $T$ by $b$ is a maximal torus of $J_b$ which contains a maximal split torus (see for example \cite{hamacher}~Lemma~4.5). Now the automorphism of $\afr$ induced by conjugation with $\wtilde$ equals the finite Weyl group part of $\wtilde$, which we denote by $w$. Thus $\rank_{\QQ_p} J_b = \dim \afr^{w\sigma}$ and we obtain \[ \defect_G (b) = \dim \afr^\sigma - \dim \afr^{w\sigma}. \] As the action of $\Gamma$ factorises through the automorphism group of the root datum of $G$, we have an isomorphism $\afr \cong \chi_1 \oplus \cdots \oplus \chi_l$ of $\pi_1(G) \rtimes \Gamma$-representations by Lemma~\ref{lem root theoretic}. We have to calculate the dimension of $\afr^{w\sigma}$. Assume that $\chi_1,\ldots,\chi_{m}$ form one $\Gamma$-orbit with $\sigma$ mapping $\chi_j$ to $\chi_{j+1}$. Let $v = (v_1,\ldots,v_m) \in \chi_1 \oplus \cdots\oplus\chi_m$. Then \[ (w\sigma) (v) = (\chi_1(\wtilde)\cdot v_m, \chi_2(\wtilde)\cdot v_1, \ldots , \chi_m(\wtilde)\cdot v_{m-1}).
\] We see that $v$ is fixed by $w\sigma$ if and only if $v_1 = \chi_1(\wtilde) \cdot \cdots \cdot \chi_m(\wtilde) \cdot v_1$ and $v_j = \chi_j(\wtilde) \cdot v_{j-1}$ for $j>1$. Thus the subspace of $\chi_1 \oplus \cdots\oplus\chi_m$ of vectors fixed by $w\sigma$ has dimension $1$ if $\chi_1(\wtilde) \cdot \ldots \cdot \chi_m(\wtilde) =1$ and dimension $0$ otherwise. We obtain \[ \rank_{\QQ_p} J_b = \dim \afr^{w\sigma} = \#\{i \mid \langle \nu, \omegaund_i \rangle \in \ZZ\}. \] As the $\Omega \rtimes \Gamma$-representation $\afr$ (and thus $\chi_1 \oplus\cdots\oplus \chi_l$) is self-contragredient, we have \begin{eqnarray*} 2 \sum_{i=1}^l \{\langle \nu, \omegaund_i\rangle\} &=& \sum_{i=1}^l \{\langle \nu, \omegaund_i\rangle\} + \sum_{i=1}^l \{\langle \nu, -\omegaund_i\rangle\} \\ &=& l - \#\{ i \mid \langle \nu,\omegaund_i \rangle \in \ZZ \} \\ &=& \rank_{\QQ_p} G - \rank_{\QQ_p} J_b \\ &=& \defect_G (b). \end{eqnarray*} \end{proof} \subsection{Chains of Newton points} \label{ss chains} Our calculations in the previous subsection allow us to reformulate the dimension formula (\ref{term dimension shimura}) in terms of Chai's chains of Newton points. We first recall some of his notions and results from \cite{chai00}. Denote by $\Nscr(G)$ the image of $\nu_G$. For $\nu \in \Nscr(G)$ and $[b] \in B(G)$ with $\nu_G(b) = \nu$, the image $\Nscr(G)_{\leq\nu}$ of the set $ \{ [b'] \in B(G)\mid \nu(b') \leq \nu, \kappa(b') = \kappa(b) \} $ in $\Nscr(G)$ only depends on $\nu$ (\cite{chai00}~Prop.~4.4). For elements $\nu'' \leq \nu'$ in $\Nscr(G)_{\leq \nu}$ define \[ [\nu'',\nu'] = \{ \xi \in \Nscr(G)_{\leq\nu}\mid \nu'' \leq \xi \leq \nu'\}. \] We note that $[\nu'',\nu']$ does not change if we replace $\nu$ by $\nu'$; thus it is independent of the choice of $\nu$. We denote by $\lengthG{\nu'}{\nu}$ the maximum of all integers $n$ such that there exists a chain $\nu_0 \leq \ldots \leq \nu_n$ of pairwise distinct elements in $[\nu',\nu]$. Chai gave a formula for $\lengthG{\nu'}{\nu}$ but made a small mistake in his calculations. In the formula at the bottom of page 982 one has to replace the relative fundamental weights $\omega_{F,j}$ by the sum of elements of a Galois orbit of absolute fundamental weights $\omegaund_j$. As the $\omega_{F,j}$ and $\omegaund_j$ are scalar multiples of each other, the other assertions in \cite{chai00}~section~7 and the proofs remain valid. \begin{proposition}[cf.~\cite{chai00}~Thm.~7.4~(iv)] \label{prop chai} Let $\nu \in \Nscr(G)$ and $\nu' \in \Nscr(G)_{\leq\nu}$. Then \[ \lengthG{\nu'}{\nu} = \sum_{j=1}^{l} \lceil\langle\nu,\omegaund_j\rangle - \langle\nu',\omegaund_j\rangle\rceil. \] \end{proposition} Now we can reformulate the dimension formula of Theorem~\ref{thm dimension shimura} in more elegant terms. \begin{corollary} \label{cor dimension formulas} Let $[b] \in B(G,\mu)$ and $\nu$ its Newton point. Then \[ \langle \mu+\nu, \rho \rangle - \frac{1}{2} \defect_G(b) = 2 \langle \rho, \mu \rangle - \lengthG{\nu}{\mu}. \] \end{corollary} \begin{proof} \begin{eqnarray*} \langle \mu+\nu, \rho \rangle - \frac{1}{2} \defect_G(b) &=& 2 \langle \rho, \mu \rangle - \left( \langle \rho, \mu-\nu \rangle + \frac{1}{2} \defect_G(b) \right)\\ &\stackrel{\rm Prop.~\ref{prop calculation of defect}}{=}& 2 \langle \rho, \mu \rangle - \sum_{j=1}^l \lceil \langle\omegaund_j, \mu-\nu \rangle\rceil \\ &\stackrel{\rm Prop.~\ref{prop chai}}{=}& 2 \langle \rho, \mu \rangle - \lengthG{\nu}{\mu}.
\end{eqnarray*} \end{proof} \begin{remark} \label{rem dimension formula} In a similar fashion one can rewrite the dimension formula of Theorem~\ref{thm dimension RZ-space} \[ \langle \rho, \mu-\nu \rangle - \frac{1}{2} \defect_G(b) = \sum_{j=1}^l \lfloor \langle \omegaund_j, \mu-\nu\rangle \rfloor. \] This is almost the same as the formula conjectured by Rapoport in \cite{rapoport05},~p.~296 \[ \langle 2\rho, \overline{\mu}-\nu \rangle + \sum_{j=1}^l \lfloor -\langle \omega_{\QQ_p,j}, \bar\mu-\nu\rangle \rfloor, \] where $\overline\mu$ denotes the average over the $\Gamma$-orbit of $\mu$ and $\omega_{\QQ_p,1},\ldots,\omega_{\QQ_p,l}$ are the relative fundamental weights. One just has to replace the $\omega_{\QQ_p,j}$ in Rapoport's formula by $\omegaund_j$. \end{remark} \section{\BT s with additional structure} \label{sect BT} \subsection{Decomposition of \BTpELs} \label{ss normal forms of BTpELs} In this subsection we recall a mechanism that will often allow us to reduce to the case where $B=F$ is a finite unramified field extension of $\QQ_p$. Even though this mechanism is well known (see for example \cite{fargues04}), I could not find a reference for its proof. \begin{lemma} \label{lem BT decomposition} Let $X$ be a Barsotti-Tate group over a scheme $S$. \begin{subenv} \item Assume we have a subalgebra $O_1 \times \ldots \times O_r \subset \End X$. Denote by $\epsilon_i$ the multiplicative unit in $O_i$ and let $X_i := \image \epsilon_i$. Then $X= X_1 \times \ldots \times X_r$ and the $X_i$ are \BT s over $S$. \item Assume there is a subalgebra $\M_d(O) \subset \End X$. Let \[ \epsilon = \left( \begin{array}{ccc} 1 & 0 & \ldots \\ 0 & 0 & \\ \vdots & & \ddots \end{array} \right) \in \M_d(O) \] and $X' = \image \epsilon$. Then $X'$ is a \BT\ with $O$-action and $X \cong (X')^d$ compatible with $\M_d(O)$-action. \end{subenv} \end{lemma} \begin{proof} The first assertion of (1) is obvious. The sheaves $X_i$ are $p$-divisible and $p$-torsion because $X$ is. It remains to show that $X_i[p]$ is representable by a finite locally free group scheme over $S$. Let $\epsilon_i' := \sum_{j \not= i} \epsilon_j$. Then $X_i = \ker \epsilon_i'$, in particular $X_i[p]$ is represented by a closed subgroup scheme of $X[p]$. Let $\Escr, \Escr_i$ be coherent $\Oscr_S$-algebras such that $X[p] = \Specfunc\Escr$ and $X_i[p] = \Specfunc\Escr_i$. Then the closed embedding $X_i[p] \mono X[p]$ induces a surjection $\Escr \epi \Escr_i$ and $\epsilon_{i |X[p]}:X[p] \epi X_i[p]$ induces a splitting of this surjection. Thus $\Escr_i$ is a direct summand of $\Escr$ as $\Oscr_S$-module, in particular it is again locally free. Now the isomorphism $X \cong (X')^d$ is the standard Morita argument and the fact that $X'$ is a \BT\ can be shown by the same argument as above. \end{proof} \begin{corollary} \label{cor BTEL decomposition} \begin{subenv} \item Let $(B,O_B)$ be an unramified EL-datum and let $B= \prod B_i$ be a decomposition into simple factors and $O_{B_i} = O_B \cap B_i$. Let $\Xund$ be a \BT\ with $(B,O_B)$-EL structure. Then $\Xund$ decomposes as $\Xund = \prod \Xund_i$ where $\Xund_i$ is a \BT\ with $(B_i,O_{B_i})$-EL structure. This defines an equivalence of categories \[ \left\{ \begin{array}{l}\textnormal{\BT s with} \\ (B,O_B) \textnormal{-EL structure} \end{array}\right\} \cong \prod_i \left\{\begin{array}{l} \textnormal{\BT s with} \\ (B_i,O_{B_i}) \textnormal{-EL structure} \end{array}\right\} \] that preserves isogenies. \item Let $(B,O_B)$ be an unramified EL-datum with $B \cong \M_d(F)$ simple. 
Let $\Xund$ be a \BT\ with $(B,O_B)$-EL structure. Then $O_B \cong \M_d(O_F)$ and $\Xund \cong (\Xund')^d$ where $\Xund'$ is a \BT\ with $(F,O_F)$-EL structure. This defines an equivalence of categories \[ \left\{\begin{array}{l} \textnormal{\BT s with} \\ (B,O_B) \textnormal{-EL structure} \end{array}\right\} \cong \left\{ \begin{array}{l}\textnormal{\BT s with} \\ (F,O_F) \textnormal{-EL structure} \end{array}\right\} \] which preserves isogenies. \end{subenv} \end{corollary} \begin{definition} Let $X$ be a \BT\ with $(B,O_B)$-EL structure. We define the relative height of $X$ as the tuple \[ \relht X = \left(\frac{\height X_1}{[B_1:\QQ_p]^{\frac{1}{2}}}, \ldots , \frac{\height X_r}{[B_r:\QQ_p]^{\frac{1}{2}}}\right) \] with $X_i, B_i$ as above. \end{definition} Let $(B,O_B, ^\ast)$ be a PEL-datum. Following \cite{RZ96}~ch.~A, we decompose $(B,^\ast) = \prod B_i$ where $B_i$ is isomorphic to one of the following. \begin{enumerate}[topsep = -3pt, parsep = 3pt] \item[(I)] $\M_d (F) \times \M_d(F)^{\rm opp}$ where $F$ is an unramified $p$-adic field and $(a,b)^\ast = (b,a)$. \item[(II\textsubscript{C})] $\M_d (F)$ with $F$ as above, $a^\ast = a^t$. \item[(II\textsubscript{D})] $\M_d (F)$ with $F$ as above, $a^\ast = J^{-1}a^\ast J$ with $J^tJ =-1$. \item[(III)] $\M_d (F)$ with $\QQ_p \subset F' \subset F$ finite unramified field extensions, $[F:F'] = 2$ and $a = \abar^t$ where $\overline{\cdot}$ denotes the non-trivial $F'$-automorphism of $F$, acting on $M_d(F)$ componentwise. \end{enumerate} We call the algebras with involution which are isomorphic to one of the above indecomposable. Note that we also have the analogous decomposition for $O_B$. \begin{definition} A PEL-datum $(B,O_B,\,^\ast)$ is of type (AC) if no factors of type (II\textsubscript{D}) appear in the decomposition of $(B,^\ast)$. \end{definition} Now Lemma~\ref{lem BT decomposition} implies the following result. \begin{corollary} \label{cor polarised BT decomposition} \begin{subenv} \item Let $(B,O_B, ^\ast)$ be an unramified PEL-datum of type (AC) and let $(B, ^\ast) = \prod B_i$ be a decomposition into indecomposable factors. Let $\Xund$ be a \BT\ with $(B,O_B, ^\ast)$-PEL structure. Then $\Xund$ decomposes as $\Xund = \prod \Xund_i$ where $\Xund_i$ is a \BT\ with $(B_i,O_{B_i}, ^\ast)$-PEL structure. This defines an equivalence of categories \[ \left\{\begin{array}{l} \textnormal{\BT s with} \\ (B,O_B,\,^\ast) \textnormal{-PEL structure}\end{array}\right\} \cong \prod_i \left\{\begin{array}{l}\textnormal{\BT s with} \\ (B_i,O_{B_i},\,^\ast) \textnormal{-PEL structure} \end{array}\right\} \] and also a bijection of isogenies on the left hand side with tuples of isogenies on the right hand side which have the same similitude factor. \item Let $(B,O_B, ^\ast)$ be an unramified PEL-datum of type (AC) with $(B, ^\ast)$ indecomposable and let $\Xund$ be a \BT\ with $(B,O_B, ^\ast)$-PEL structure. Using the notation above, we may describe $\Xund$ as follows. \\ (I) $\Xund \cong (\Xund')^d \times (\Xund'^\vee)^d$ where $X'$ is a \BT\ with $(F,O_F)$-EL structure and the polarisation is given by $\lambda (a,b) = (b,-a)$. \\ (II\textsubscript{C}) $\Xund \cong (\Xund')^d$ where $X'$ is a \BT\ with $(F,O_F,\id)$-PEL structure. \\ (III) $\Xund \cong (\Xund')^d$ where $X'$ is a \BT\ with $(F,O_F,\bar\cdot)$-PEL structure. 
\\ We obtain equivalences of categories \begin{flalign*} (I) & \quad \left\{\begin{array}{l} \textnormal{\BT s with}\\ (B,O_B,\,^\ast) \textnormal{-PEL structure}\end{array}\right\} \cong \left\{\begin{array}{l} \textnormal{\BT s with} \\ (F,O_F) \textnormal{-EL structure} \end{array} \right\} & \\ (II_C) & \quad \left\{ \begin{array}{l} \textnormal{\BT s with} \\ (B,O_B,\,^\ast) \textnormal{-PEL structure}\end{array}\right\} \cong \left\{\begin{array}{l} \textnormal{\BT s with} \\ (F,O_F,\id) \textnormal{-PEL structure}\end{array} \right\} & \\ (III) & \quad \left\{ \begin{array}{l} \textnormal{\BT s with} \\ (B,O_B,\,^\ast) \textnormal{-PEL structure}\end{array} \right\} \cong \left\{\begin{array}{l} \textnormal{\BT s with} \\ (F,O_F,\bar\cdot) \textnormal{-PEL structure}\end{array} \right\} \end{flalign*} and in the cases (II\textsubscript{C}) and (III) also a bijection of the sets of isogenies. This is also true in case (I) if one fixes the similitude factor. \end{subenv} \end{corollary} \begin{proof} Write $X_i = \image \epsilon_i$ as in Lemma \ref{lem BT decomposition}. Let $\Xund = (X,\iota,\lambda)$. Then \[ \image \lambda_{|X_i} = \image (\lambda \circ \epsilon_i) = \image (\epsilon_i^\ast \circ \lambda) = \image (\epsilon_i \circ \lambda) = X_i, \] which proves (1). For the second assertion assume first that $(B,\,^\ast)$ is of type (II\textsubscript{C}) or (III). Let $\epsilon_{i,j}$ be the matrix with $1$ as $(i,j)$-th entry and $0$ otherwise and denote $X_i := \image (\epsilon_{i,i})$. Then we have by Lemma~\ref{lem BT decomposition} that $X = \prod X_i$ with $O_F$-equivariant isomorphisms $\epsilon_{i,j}: X_j \isom X_i$. Now the same argument as in (1) shows $\image \lambda_{|X_i} = X_i$ (consider $\epsilon_{i,i}$) and $X_i \cong X_j$ as {\BTPELs} (consider $\epsilon_{i,j}$). If $(B,\,^\ast)$ is of type (I) let $\epsilon_0$ and $\epsilon_1$ be the units of the $\M_d(O_F)$-factors. Decompose $X = X_0 \oplus X_1$ as in Corollary~\ref{cor BTEL decomposition}. Then \[ \image \lambda_{|X_i} = \image (\lambda \circ \epsilon_i) = \image (\epsilon_i^\ast \circ \lambda) = \image(\epsilon_{1-i} \circ \lambda) = X_{1-i}. \] Now the claim follows as in cases (II\textsubscript{C}) and (III). \end{proof} \subsection{Dieudonn\'e theory for {\BT}s with additional structure} \label{ss isocrystals} Given a {\BTpEL} $X$ over $k$, we denote by $M(X)$ resp.\ $N(X)$ the Dieudonn\'e module resp.\ the rational Dieudonn\'e module of $X$. The (P)EL structure on $X$ induces additional structure on $M(X)$ and $N(X)$, which motivates the following definition. \begin{definition} \begin{subenv} \item Let $\Bscr = (B,O_B,(^\ast))$ be a (P)EL-datum. A $\Bscr$-isocrystal is a pair $(N,\Phi)$ such that \begin{itemlist} \item $N$ is a finite-dimensional $L$-vector space with $B$-action. \item In the PEL case $N$ is equipped with a perfect anti-symmetric $B$-compatible $\QQ_p$-bilinear pairing $\pair$. \item $\Phi:N \to N$ is a $\sigma$-semilinear bijection which commutes with the $B$-action and in the PEL-case satisfies \begin{equation} \label{term polarisation isocrystal} \langle \Phi(v),\Phi(w) \rangle = p^a \langle v,w \rangle \end{equation} for some fixed integer $a$. \end{itemlist} \item Let $N = \prod N_i$ be the decomposition of $N$ induced by the decomposition $B = \prod B_i$ into simple algebras. Then the tuple $\relht (N,\Phi) := (\frac{\dim_L N_i}{[B_i:\QQ_p]^{\frac{1}{2}}})_i$ is called the relative height of $(N,\Phi)$.
\item A $\Bscr$-isocrystal $(N,\Phi)$ is called relevant if there exists an $O_B$-stable lattice $\Lambda \subset N$ which in the PEL-case is self-dual w.r.t.\ some representative of $\pair$. \end{subenv} \end{definition} Now given a $\Bscr$-{\BTpEL} $X$ of relative height $\nbf$, the associated isocrystal $N(X)$ is a relevant $\Bscr$-isocrystal and $M(X) \subset N(X)$ is a (self-dual) $O_B$-lattice. In order to associate a $\sigma$-conjugacy class to $N(X)$, we need the following lemma. \begin{lemma} \label{lem isocrystal} \begin{subenv} \item Assume $\Bscr$ is an EL-datum and let $(N,\Phi)$ be a $\Bscr$-isocrystal. Then there exists a unique $B$-module $V$ such that $N \cong V \otimes L$. In particular $(N,\Phi)$ is relevant; $\Lambda$ may be chosen to be defined over $\ZZ_p$ and is unique up to $O_B$-linear isomorphism. \item Let $\Bscr$ be a PEL-datum of type (AC) and let $(N,\Phi)$ be a relevant $\Bscr$-isocrystal. Then there exists a unique $B$-module $V$ with $\QQ_p^\times$-homogeneous polarisation, depending only on the relative height of $(N,\Phi)$, such that $N \cong V \otimes L$. Furthermore, $\Lambda$ can be chosen to be defined over $\ZZ_p$ and is unique up to $O_B$-linear isometry. \end{subenv} \end{lemma} \begin{proof} For the proof of part (1) we may assume that $B=F$ is a finite unramified field extension of $\QQ_p$ by Morita equivalence. Then $F\otimes L \cong \prod_{\tau:F\mono L} L$ inducing a decomposition $N \cong \prod_{\tau:F\mono L} N_\tau$. As $\Phi$ induces a $\sigma$-linear bijection $N_\tau \to N_{\sigma \circ \tau}$, we have that all $N_\tau$ have dimension $\relht (N,\Phi)$ over $L$, thus $N \cong L \otimes F^{\relht (N,\Phi)}$. Obviously $O_L \otimes O_F^{\relht (N,\Phi)}$ is an $O_F$-stable lattice in $N$ and any $O_F$-stable lattice of $N$ is isomorphic to it. Now part (2) follows from part (1) and \cite{kottwitz92}~Lemma~7.2 and Remark~7.5. \end{proof} In particular the underlying (polarised) $B$-module of a relevant $\Bscr$-isocrystal is uniquely determined by its relative height. Thus fixing a (polarised) left-$B$-module $V$ and a (self-dual) $O_B$-stable lattice $\Lambda \subset V$, we obtain a bijection \begin{eqnarray*} B(G) &\to& \{ \textnormal{relevant } \Bscr\textnormal{-isocrystals of relative height } \nbf\}/\cong \\ \left[ b \right] &\mapsto& (V_L,b\sigma), \end{eqnarray*} where $G$ denotes the group scheme of $O_B$-linear automorphisms (similitudes) of $\Lambda$ and $\nbf = (\frac{\dim_{\QQ_p} V_i}{[B_i:\QQ_p]^{\frac{1}{2}}})_i$. We warn the reader that in the PEL-case the above bijection requires twisting $\pair$ by a scalar, which is unique up to $\QQ_p^\times$, so that (\ref{term polarisation isocrystal}) is satisfied (see \cite{RZ96}~1.38). This scalar would not necessarily exist if $k$ were only perfect and not algebraically closed. Combining this bijection with the Dieudonn\'e functor, we obtain an injective map \[ \left\{ \begin{array}{c} \textnormal{\BT s over } k \textnormal{ with } \Bscr\textnormal{-(P)EL structure} \\ \textnormal{of relative height } \nbf \textnormal{ up to isogeny} \end{array}\right\} \mono B(G). \] In the case $k = \FFbar_p$ denote by $C(G)$ the set of $G(O_L)$-$\sigma$-conjugacy classes in $G(L)$. By a similar argument one gets an injection \[ \{\textnormal{\BT s over } \FFbar_p \textnormal{ with } \Bscr\textnormal{-(P)EL structure of relative height } \nbf\} \mono C(G) \] where in the PEL case the polarisation is meant to be defined up to $\ZZ_p^\times$-scalar.
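In the simplest case $\Bscr = (\QQ_p,\ZZ_p)$, so that $G = \GL_n$, the injection into $B(G)$ is the classical Dieudonn\'e--Manin classification of \BT s over $k$ up to isogeny by their Newton polygons (the image consisting of the classes with slopes in $[0,1]$), in the normalisation of Example~\ref{ex isocrystal}.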
For $b \in G(L)$ we denote by $\pot{b}$ its $G(O_L)$-$\sigma$-conjugacy class. \subsection{Rapoport-Zink spaces} \label{ss RZ spaces} We restrict the definition of a Rapoport-Zink space in \cite{RZ96} to our case, that is with unramified (P)EL-datum and hyperspecial level structure. \begin{definition} \label{def RZ datum} \begin{subenv} \item An unramified Rapoport-Zink datum of type EL is a tuple $(B,O_B,V,\mu_{\QQbar_p},\Lambda,[b])$ where \begin{itemlist} \item $\Bscr = (B,O_B)$ is an EL-datum. \item $V$ is a finite left-$B$-module, \item $\Lambda$ is an $O_B$-stable lattice in $V$, \item $\mu_{\QQbar_p}$ is a conjugacy class of cocharacters $\mu_{\QQbar_p}: \GG_{m,\QQbar_p} \to G_{\QQbar_p}$, where $G = \underline{\Aut}_{O_B} (\Lambda)$. \item $[b] \in B(G,\mu_{\QQbar_p})$ \end{itemlist} which satisfy the conditions given below. \item An unramified Rapoport-Zink datum of type PEL is a tuple $\hat\Dscr = (B,O_B,\ast,V,\pair, \mu_{\QQbar_p},\Lambda,[b])$ where \begin{itemlist} \item $\Bscr = (B,O_B,^\ast)$ is a PEL-datum, \item $V$ is a finite left-$B$-module with a $B$-adapted symplectic form $\pair$ on the underlying $\QQ_p$-vector space, \item $\Lambda$ is an $O_B$-stable self-dual lattice in $V$, \item $\mu_{\QQbar_p}$ is a conjugacy class of cocharacters $\mu_{\QQbar_p}: \GG_{m,\QQbar_p} \to G_{\QQbar_p}$, where $G$ is the linear algebraic group with $R$-valued points (for every $\ZZ_p$-algebra $R$) \begin{eqnarray*} G(R) &=& \{g\in \underline{\Aut}_{O_B}(\Lambda)(R) \mid \langle g(v_1),g(v_2) \rangle = c(g) \langle v_1,v_2 \rangle \textnormal{ for some } c(g) \in R^\times\} \end{eqnarray*} \item $[b] \in B(G,\mu_{\QQbar_p})$. \end{itemlist} which satisfy the conditions below. \item The field of definition $E$ of $\mu_{\QQbar_p}$ is called the local Shimura field. \item A Rapoport-Zink datum as above is called superbasic if $[b]$ is superbasic. \end{subenv} \end{definition} Let $(V_L,\Phi) := (L \otimes_{\QQ_p} V, b(\sigma \otimes \id))$ denote the $\Bscr$-isocrystal associated to $[b]$. The additional conditions we impose on the data are the following. \begin{assertionlist} \item $G$ is connected. \item The Newton slopes of $(V_L,\Phi)$ are in $[0,1]$. \item The weight decomposition of $V_{L}$ w.r.t.\ $\mu_{\QQbar_p}$ contains only the weights $0$ and $1$, \item In the PEL-case, we have $c(\mu(p)) = p$. \end{assertionlist} The first condition implies that $G$ is a reductive group scheme, in particular $E$ is a finite unramified field extension of $\QQ_p$. We fix a Killing pair $T \subset B$ of $G$. Denote by $\mu$ the dominant element in $\mu_{\QQbar_p}$ and let $V_E = V^0 \oplus V^1$ be the decomposition into weight spaces w.r.t. $\mu$. Now the $\Bscr$-isocrystal $(V,\Phi)$ gives rise to an isogeny class of \BTpELs. We fix an element $\underline{\XX}$ of this isogeny class. \begin{definition} \label{def RZ space} Let $Nilp_{O_L}$ denote the full subcategory of schemes over $O_L$ on which $p$ is locally nilpotent. Then the Rapoport-Zink space is the functor $\Mscr_G(b,\mu): Nilp_{O_L} \rightarrow Sets, S \mapsto \Mscr_G(b,\mu)(S)$ given by the following data up to isomorphism. \begin{itemlist} \item A {\BTpEL} $\underline{X}$ over $S$ with associated (P)EL-datum $\Bscr$. Moreover the Kottwitz determinant condition holds. \item An isogeny $\underline{\XX}_{\overline{S}} \to \underline{X}_{\overline{S}}$, where $\overline{S}$ denotes the closed subscheme of $S$ defined by $p\Oscr_S$. 
\end{itemlist} \end{definition} \begin{example} Let $\Dscr= (\Bsf, ^{\ast},\Vsf,\pair,\Osf_\Bsf,\Lambda,h)$ be a PEL-Shimura datum and $[b]$ be an isogeny class of \BTDs. Then $(\Bsf_{\QQ_p},\,^\ast,\Vsf_{\QQ_p},\pair,\Osf_{\Bsf,p},\Lambda,[\mu_h],[b])$ is an unramified Rapoport-Zink datum and $\Mscr_G(b,\mu)$ is the moduli functor parameterizing {\BTDs} inside the isogeny class $[b]$. \end{example} \begin{theorem}[\cite{RZ96}, Thm.~3.25] The functor $\Mscr_G(b,\mu)$ is representable by a formal scheme locally of finite type. \end{theorem} \begin{notation} From now on we denote by $\Mscr_G(b,\mu)$ the \emph{underlying reduced subscheme} of the moduli space above. \end{notation} By \cite{RZ96}~\S~3.23(b) the Kottwitz determinant condition can be reformulated as follows. Let $B = \prod B_i = \prod_i \M_{d_i}(F_i)$ be a decomposition of $B$ into simple factors inducing $X = \prod X_i$ and $V^1 = \prod V_{i}^1$. Furthermore we decompose $\Lie X_i$ and $V_{i}^1$ according to the $O_{F_i}$- resp.\ $F_i$-action. \begin{eqnarray*} \Lie X_i &=& \prod_{\tau: F_i \mono L} (\Lie X_i)_\tau \\ V_{i}^1 &=& \prod_{\tau: F_i \mono L} V_{i,\tau}^1 \end{eqnarray*} Now $X$ satisfies the determinant condition if and only if $(\Lie X_i)_\tau$ has rank $\dim_{\QQ_p} V_{i,\tau}^1$ for all $i,\tau$. We obtain that by Dieudonn\'e theory the $k$-valued points of $\Mscr_G(b,\mu)$ correspond to $O_B$-stable lattices in $V_L$ which are self-dual up to $L^\times$-scalar and satisfy $p\Lambda \subset b\sigma(\Lambda) \subset \Lambda$ and $\dim_k (\Lambda/b\sigma(\Lambda))_\tau = \dim V_{i,\tau}^1$. By \cite{kottwitz92}~Cor.~7.3 and Rem.~7.5 the group $G(L)$ acts transitively on the set of $O_B$-stable lattices in $V_L$ which are self-dual up to a constant, thus \begin{eqnarray*} \Mscr_G(b,\mu)(k) &=& \{g \Lambda \mid g \in G(L), gb\sigma(g)^{-1} \in G(O_L)\mu(p)G(O_L)\} \\ &=& \{g \in G(L)/G(O_L) \mid gb\sigma(g)^{-1} \in G(O_L)\mu(p)G(O_L)\}. \end{eqnarray*} \subsection{Comparison of PEL structure and $\Dscr$-structure} \label{ss PEL vs D} The aim of this subsection is to show the following proposition. \begin{proposition} \label{prop PEL vs D} Any {\BTD} is a {\BTPEL} of type (AC) with $\ZZ_p^\times$-homogeneous polarisation. On the other hand, for every {\BTPEL} of type (AC) with $\ZZ_p^\times$-homogeneous polarisation over a connected $\FFbar_p$-scheme there exists an unramified PEL-Shimura datum $\Dscr'$ such that the PEL structure defines a $\Dscr'$-structure. \end{proposition} The construction of the PEL-Shimura datum associated to the {\BTPEL} consists of two steps. First we construct a local datum similar to a Rapoport-Zink datum and then we show that it can be obtained by localizing a PEL-Shimura datum. The first step also works for Barsotti-Tate groups with EL structure and will be needed later. Thus we include this case in our construction. Let $\Xund$ be a {\BTEL} or a {\BTPEL} of type (AC). As in section \ref{ss isocrystals} we associate $(V,\Lambda)$ and $G$ to $\Xund$. It remains to construct a $G(\QQbar_p)$-conjugacy class of cocharacters such that $\Xund$ satisfies the Kottwitz determinant condition. For this we give an explicit description of the group $G$. Assume first that we are in the EL-case. Let $B=\prod B_i$ be a decomposition into simple factors, inducing a decomposition $\Lambda = \prod \Lambda_i$ into $O_{B_i}$-modules. Then \[ G = \prod_i \GL_{O_{B_i}} (\Lambda_i), \] thus it suffices to give a description of $G$ in the case where $B$ is simple.
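Concretely, the reduction to simple $B$ combines with Morita equivalence as follows (a standard fact which we only recall for the reader's convenience): if, say, $O_{B_i}$ is identified with $\M_{d_i}(O_{F_i})$ and $e \in O_{B_i}$ denotes the idempotent with a single non-zero diagonal entry, then $\Lambda_i \mapsto e\Lambda_i$ identifies $O_{B_i}$-modules with $O_{F_i}$-modules and induces an isomorphism $\GL_{O_{B_i}}(\Lambda_i) \cong \GL_{O_{F_i}}(e\Lambda_i)$; this is the reduction used in the next step.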
By Morita equivalence we may furthermore assume that $B=F$ is a finite unramified field extension of $\QQ_p$. By choosing an $O_F$-basis $e_1,\ldots,e_n$ of $\Lambda$, we obtain an isomorphism \[ G \cong \Res_{O_F/\ZZ_p} \GL_n. \] We denote $\GL_{O_F,n} = \Res_{O_F/\ZZ_p} \GL_n$. If $\Xund$ is a {\BTPEL} of type (AC) a similar construction works. Let $(B,\, ^\ast) = \prod B_i$ be the decomposition into indecomposable factors and $\Lambda = \prod_i \Lambda_i$ accordingly. We denote by $G_i$ the group scheme of similitudes of $\Lambda_i$. Then \[ G = (\prod_i G_i)^1 := \{(g_i)_i \in \prod_i G_i \mid c(g_i) = c(g_j) \textnormal{ for all } i,j\}. \] Hence it suffices to give a description of $G$ in the case where $(B,^\ast)$ is indecomposable. Using the notation of section~\ref{ss normal forms of BTpELs} we may replace $\pair$ by a $\M_d(F)$-hermitian form on $V$ without changing $G$ (\cite{knus91}~Thm.~7.4.1). Furthermore we assume $d=1$ by hermitian Morita equivalence (\cite{knus91}~Thm.~9.3.5), i.e.\ $B =F \times F$ in case (I) and $B=F$ in cases (II\textsubscript{C}) and (III). We first consider case (I). We decompose $\Lambda = \Lambda' \oplus \Lambda''$ with the first resp.\ second factor of $O_F \times O_F$ acting on $\Lambda'$ resp.\ $\Lambda''$. Then $\pair$ induces an isomorphism $\Lambda' \cong (\Lambda'')^\vee$ of $O_F$-modules. Now \begin{eqnarray*} G &=& \{(g',g'') \in \underline{\Aut} (\Lambda') \times \underline{\Aut} (\Lambda''); g'' = c(g)\cdot g'^{-t}\} \\ &\cong& \underline{\Aut} (\Lambda') \times \GG_m \\ &\cong& \GL_{O_F,n} \times \GG_m. \end{eqnarray*} Here $\GG_m$ parametrizes the similitude factor. By choosing dual bases of $\Lambda'$ and $\Lambda''$, we may also consider $G$ as a closed subgroup of $\GL_{O_F,n} \times \GL_{O_F,n}$. Now assume that $(B,\,^\ast)$ is of the form (II\textsubscript{C}). By the uniqueness assertion of Lemma \ref{lem isocrystal} there exists an $O_F$-basis $x_1, \ldots, x_n$ of $\Lambda$ such that \[ \langle x_i , x_j \rangle = \delta_{i,n+1-j} \textnormal{ for } i \leq j. \] This identifies $G$ with the closed subgroup $\GSp_{O_F,n}$ of $\GL_{O_F,n}$, which is given (for every $\ZZ_p$-algebra $R$) by \[ \GSp_{O_F,n}(R) = \{g \in \GL_n(O_F\otimes R)\mid gJ_a g^t = c(g)J_a \textnormal{ for some } c(g)\in R^\times\}, \] where $J_a$ denotes the matrix \begin{center} \begin{tikzpicture} \node (lower entry) at (0,0) {-1} ; \node (lower central entry) at (1,1) {-1} ; \node (upper central entry) at (1.4,1.4) {1} ; \node (upper entry) at (2.4,2.4) {1} ; \node[left delimiter = (, right delimiter = ),fit=(lower entry) (upper entry)] {}; \draw (lower entry.north east) -- (lower central entry.south west) ; \draw (upper central entry.north east) -- (upper entry.south west) ; \draw[decorate,decoration=brace] let \p1=(lower entry.west), \p2=(lower central entry) in (\x2,-0.5) -- (\x1,-0.5) node[midway,yshift=-10] {\footnotesize $\frac{n}{2}$} ; \draw[decorate,decoration=brace] let \p1=(upper central entry), \p2=(upper entry.east) in (\x2,-0.5) -- (\x1,-0.5) node[midway,yshift=-10] {\footnotesize $\frac{n}{2}$} ; \end{tikzpicture} \end{center} If $(B,^\ast)$ is of type (III), we choose an $O_F$-basis $x_1,\ldots,x_n$ of $\Lambda$ such that \[ \langle x_i,x_j\rangle = u\cdot\delta_{i,n+1-j} \] where $u \in O_F^\times$ with $\sigma_{F'}(u) = -u$. This defines a closed embedding of $G$ into $\GL_{O_F,n}$ with image \[ \GU_{O_F,n}(R) = \{g \in \GL_n(O_F\otimes R)\mid gJ \sigma_{F'}(g)^t = c(g)J \textnormal{ for some } c(g)\in R^\times\}.
\] Here $J$ denotes the matrix with ones on the anti-diagonal and zeros everywhere else. We are interested in the $G(\QQbar_p)$-conjugacy classes of $\Hom(\GG_m,G_{\QQbar_p})$. As $G$ is reductive, the conjugacy classes are in canonical one-to-one correspondence with the subset of dominant elements $X_*(T)_\dom$ of $X_*(T)$ for some choice $T\subset B \subset G$ of a maximal torus $T$ and a Borel subgroup $B$. For $G = \GL_{O_F,n}, G=\GSp_{O_F,n}, G= \GU_{O_F,n}$ we choose $T$ to be the diagonal torus and $B$ to be the subgroup of upper triangular matrices. In case (I) take $T$ and $B$ to be induced by our choice for $\GL_{O_F,n}$ and the isomorphism $G \cong \GL_{O_F,n} \times \GG_m$. With respect to the canonical embedding $G \mono \GL_{O_F,n} \times \GL_{O_F,n}$ this means that $B$ denotes the Borel subgroup of pairs $(g',g'')$ where the first factor is an upper triangular matrix (or equivalently the second factor is a lower triangular matrix). Using the canonical identification $X_*(T) \cong \prod \ZZ^n$ in the $\GL_{O_F,n}$ case we obtain in the EL-case \[ X_*(T)_{\dom} = \{ \mu \in \prod_{\tau:F\mono\QQbar_p} \ZZ^n \mid \mu_{\tau,1} \geq \ldots \geq \mu_{\tau,n}\} \] and in the PEL case \begin{eqnarray*} \textnormal{(I)} \quad X_*(T)_\dom &=& \{ \mu= (\mu',\mu'') \in (\prod_\tau \ZZ^n)^2\mid \mu'_{\tau,1} \geq \ldots \geq \mu'_{\tau, n},\; \mu''_{\tau,i} = c(\mu) - \mu'_{\tau,n+1-i} \textnormal{ with } c(\mu) \in \ZZ\}. \\ \textnormal{(II\textsubscript{C})} \quad X_*(T)_\dom &=& \{ \mu \in \prod_\tau \ZZ^n\mid \mu_{\tau,1} \geq \ldots \geq \mu_{\tau, n},\; \mu_{\tau,i}+\mu_{\tau,n+1-i} = c(\mu) \textnormal{ with } c(\mu) \in \ZZ\}. \\ \textnormal{(III)} \quad X_*(T)_\dom &=& \{ \mu \in \prod_\tau \ZZ^n\mid \mu_{\tau,1} \geq \ldots \geq \mu_{\tau, n},\; \mu_{\tau,i}+\mu_{\tau+\sigma_{F'},n+1-i} = c(\mu) \textnormal{ with } c(\mu) \in \ZZ\}. \end{eqnarray*} For details, see the appendix. \begin{lemma} \label{lem mu} Let $\Xund$ be a {\BTEL} or a {\BTPEL} of type (AC) and $V,\Lambda,G$ be as above. There exists a (unique) $[\mu]$ with weights $0$ and $1$ on $V_{\QQbar_p}$ such that $\Xund$ satisfies the determinant condition, i.e. \[ \cha(\iota(a)|\Lie X) = \cha(a|V^1) \textnormal{ for all } a \in O_B. \] \end{lemma} \begin{proof} We start with the EL-case. By virtue of section~\ref{ss normal forms of BTpELs} we may assume without loss of generality that $B=F$ is an unramified field extension of $\QQ_p$. Recall that the determinant condition is equivalent to \[ \dim (\Lie X)_\tau = \dim_{\QQ_p} V_{\tau}^1 = \#\{i\mid \mu_{i,\tau} = 1\}. \] Thus $\mu = ((\underbrace{1,\ldots,1}_{\dim (\Lie X)_\tau},0,\ldots,0))_{\tau:F \mono \QQbar_p}$. In the PEL-case assume without loss of generality that $(B,\,^\ast)$ is indecomposable and that $B=F$ resp.\ $B = F \times F$ in case (I). We exclude case (I) for the moment. Denote by $n$ the relative height of $\Xund$. As in the EL-case, we need $\mu$ to satisfy \[ \dim (\Lie X)_\tau = \#\{i\mid \mu_{i,\tau} = 1\}. \] Now use the argument given in \cite{RZ96}~\S~3.23(c). The above condition is open and closed on $S$ and local for the \'etale topology. Hence we assume that $S = \Spec k$ is the spectrum of an algebraically closed field of characteristic $p$. Let $E(X)$ be the universal extension of $X$. By functoriality we obtain an $O_B$-action on $E(X)$ and an isomorphism $E(X) \isom E(X^\vee)$. Now by Lemma~\ref{lem isocrystal} the crystal induced by $X$ over $k$ is a free $W(k) \otimes O_F$-module, thus the value of this crystal at $\Spec k$ is a free $k \otimes O_F$-module.
The value at $\Spec k$ is the Lie-algebra of the universal extension $E(X)$ of $X$. Thus \[ \dim (\Lie E(X))_\tau = n. \] Now we have a short exact sequence of $O_F \otimes k$-modules \begin{equation} \label{term universal extension} 0 \to (\Lie X^\vee)^\vee \to \Lie E(X) \to \Lie X \to 0. \end{equation} Thus \[ \dim (\Lie X)_\tau = \dim((\Lie X^\vee)^\vee)_\tau = \left\{ \begin{array}{ll} n - \dim (\Lie X)_\tau & \textnormal{ in case (II\textsubscript{C})} \\ n - \dim (\Lie X)_{\tau+\sigma_{F'}} & \textnormal{ in case (III)} \end{array}\right., \] which is equivalent to the constraint $c(\mu) = 1$. In case (I) let $V = V' \oplus V''$ and $\Lie X = (\Lie X)' \oplus (\Lie X)''$ be induced by the decomposition of $B$ resp.\ $O_B$ into two factors. The relative height of $\Xund$ is of the form $(n,n)$. The determinant condition is equivalent to \begin{eqnarray*} \dim (\Lie X)'_\tau &=& \dim {V'_{\tau}}^1 \\ \dim (\Lie X)''_\tau &=& \dim {V''_{\tau}}^1. \end{eqnarray*} Now the side condition is $\dim {V'_{\tau}}^1 = \dim {V''_{\tau}}^0$, or equivalently $c(\mu) = 1$. Indeed by (\ref{term universal extension}) we have \[ \dim (\Lie X)''_\tau + \dim (\Lie X)'_\tau = n. \] \end{proof} \begin{proof}[Proof of Proposition~\ref{prop PEL vs D}] We have associated a datum $(B,\,^\ast,V,\pair,O_B,\Lambda,[\mu])$ to $\Xund$ so far. It remains to show that there exists a PEL-Shimura datum $(\Bsf,\,^\ast,\Vsf,\pair',\Osf_{\Bsf},\Lambda,h)$ such that \begin{itemlist} \item $(B,^\ast) \cong (\Bsf_{\QQ_p},^\ast)$, inducing $O_B = \Osf_{\Bsf,p}$. \item $(V,\pair) \cong (\Vsf_{\QQ_p},\pair')$ inducing $\Lambda \cong \Lambda'$, $[\mu] = [\mu_h]$. \end{itemlist} This is proven in \cite{VW13}~Lemma~10.4. In the other direction, the assertion that the PEL structure induced by $\Dscr$ is of type (AC) is well-known to be equivalent to the condition that $\Gsf$ is connected. \end{proof} \section{The Newton stratification} \label{sect newton stratification} \subsection{Comparison between the Newton stratifications on $\Ascr_0$ and on $\Sscr_{\Xund}$} \label{ss comparision} Let $\Xund$ be a {\BTEL} or a {\BTPEL} of type (AC) of fixed relative height over an $\FF_p$-scheme $S$. As in the previous section this yields an unramified reductive group scheme $G$ over $\ZZ_p$ and we choose $T \subset B \subset G$ as above. Now for every geometric point $\sbar$ of $S$ the isogeny class of $\Xund_{\sbar}$ defines a $\sigma$-conjugacy class of $G_{\QQ_p}$ and thus $\Xund$ induces functions \begin{eqnarray*} b_{\Xund}: \{\textnormal{ geometric points of } S \} &\to& B(G) \\ \nu_{\Xund}: \{\textnormal{ geometric points of } S \} &\to& X_*(T)_{\QQ,\dom}^\Gamma \\ \kappa_{\Xund}: \{\textnormal{ geometric points of } S \} &\to& \pi_1(G)_\Gamma. \end{eqnarray*} \begin{lemma}[\cite{RR96}, Thm.~3.6]\label{lem RR} The map $b_{\Xund}$ is lower semi-continuous. \end{lemma} \begin{definition} For $\Xund$ as above and $b \in B(G)$ we consider \begin{eqnarray*} S^{b} &:=& \{s \in S; b_{\Xund} (\sbar) = b \} \\ S^{\leq b} &:=& \{s \in S; b_{\Xund} (\sbar) \leq b\} \end{eqnarray*} as locally closed subschemes with reduced structure. The $S^b$ are called Newton strata and the $S^{\leq b}$ are called closed Newton strata. \end{definition} Now the isogeny classes of {\BTDs} correspond to $B(G,\mu)$ (see for example \cite{VW13}~ch.~8). Hence $\kappa$ is locally constant on $\Ascr_0$, thus it suffices to consider the Newton points to distinguish the Newton strata. We will also write $\Ascr_0^{\nu(b)}$ instead of $\Ascr_0^b$ whenever it is convenient.
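As an illustration, again in the simplest EL-case $B = \QQ_p$, $G = \GL_n$: $b_{\Xund}(\sbar)$ is the isocrystal of $\Xund_{\sbar}$ up to isomorphism, $\nu_{\Xund}(\sbar)$ is its dominantly ordered slope vector, and $\kappa_{\Xund}(\sbar) \in \pi_1(\GL_n)_\Gamma \cong \ZZ$ is the $p$-adic valuation of the determinant of Frobenius, i.e.\ the dimension of $\Xund_{\sbar}$; in particular $\kappa_{\Xund}$ is locally constant on connected $S$, matching the discussion above.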
\begin{proposition} \label{prop relation} Assume Theorem~1.1 holds true. Let $\Xund$ be a {\BTEL} or a {\BTPEL} of type (AC) over a perfect field $k_0$; denote by $G$ the associated reductive group over $\ZZ_p$ and by $b_0 \in B(G)$ the isogeny class of $\Xund_{\kbar_0}$. For any $b \in B(G)$ which corresponds to an isogeny class of {\BTpELs} the following assertions hold. \begin{subenv} \item $\Sscr_{\Xund}^b$ is non-empty if and only if $b \geq b_0$. \item If $\Sscr_{\Xund}^{\leq b}$ is non-empty then it is equidimensional of dimension \[ \langle \rho_{G}, \mu+\nu(b) \rangle - \frac{1}{2} \defect (b). \] \item $\Sscr_{\Xund}^{\leq b}$ is the closure of $\Sscr_{\Xund}^{b}$ in $\Sscr_{\Xund}$. \end{subenv} \end{proposition} \begin{proof} As mentioned in section~\ref{ss deformation}, we may assume in the PEL case that the polarisation is $\ZZ_p^\times$-homogeneous. In the EL case we may replace $\Xund$ by $\Xund \times \Xund^\vee$ with the obvious $\ZZ_p^\times$-homogeneous PEL structure by Corollary~\ref{cor polarised BT decomposition}. We note that the similitude factor is constant modulo $\ZZ_p^\times$, as it is determined by the Kottwitz point, which is constant on $\Sscr_{\Xund}$ by \cite{RR96}~Thm.~3.8. Also, we may replace $\Xund$ by $\Xund_{\kbar_0}$ and thus assume that $k_0$ is algebraically closed. Now by Proposition~\ref{prop PEL vs D} $\Xund$ is a {\BT} with $\Dscr'$-structure for a suitable unramified PEL-Shimura datum $\Dscr'$. Thus by \cite{VW13}~Thm.~10.2 there exists $x \in \Ascr_{\Dscr',0}(k_0)$ such that the associated {\BTD} is isomorphic to $\Xund$. Now the assertions follow from Theorem~\ref{thm dimension shimura} by applying Proposition~\ref{prop serre-tate} to $x$. \end{proof} \subsection{The Newton polygon stratification} Our medium-term goal is to generalize the following improvement, due to Yang, of the de Jong-Oort purity theorem to \BTpELs. It will be our main tool to compare the two assertions of Theorem~\ref{thm dimension shimura}. \begin{proposition}[cf.~\cite{yang11}~Thm.~1.1] \label{prop yang} Let $S$ be a locally noetherian connected $\FF_p$-scheme and $X$ be a \BT\ over $S$. If there exists a neighbourhood $U$ of a point $s\in S$ such that the Newton polygons $\NP(X)(x)$ of $X$ over all points $x \in U\setminus \overline{\{s\}}$ have a common break point, then either $\codim_U(\overline{\{s\}}) \leq 1$ or $\NP(X)(s)$ has the same break point. \end{proposition} \begin{remark} The original formulation of this theorem gives the assertion for arbitrary $F$-crystals. For simplicity, we will work with \BT s instead. However, we remark that our generalisation also works in the setting of $F$-crystals. \end{remark} We will generalise the above proposition by comparing the Newton stratification on $S$ to a stratification given by a family of Newton polygons. We consider the following stratification. \begin{definition} Let $\Xund$ be a {\BTpEL} over a connected $\FF_p$-scheme $S$ and $(B,O_B,(^\ast))$ the associated (P)EL-datum. Let $B = \prod B_i$ be the decomposition into simple algebras and $X = \prod \Xund_i$ be as in Lemma \ref{lem BT decomposition}. Denote by $\NP(\Xund) = (\NP(\Xund_i))_i$ the family of (classical) Newton polygons associated to the $\Xund_i$. We call the decomposition of $S$ according to the invariant $\NP(\Xund)$ the Newton polygon stratification. \end{definition} \begin{remark} One might also think of considering the stratification given by the Newton polygon of $X$, i.e.
the stratification given by the isogeny class of $X_{\sbar}$ (forgetting the (P)EL-structure) for geometric points $\sbar$ of $S$. We warn the reader that this stratification is in general coarser than the Newton stratification. \end{remark} Now the aim of this subsection is to show that the Newton polygon stratification and the Newton stratification on $S$ coincide. It follows from the definition that the Newton polygon stratification is at worst coarser than the Newton stratification. We reformulate the invariant $\NP$ in group theoretic terms to show that they are in fact equal. Let $\Xund,S$ be as above and assume in the PEL-case that $(B,O_B,\, ^\ast)$ is of type (AC). Let $V,\Lambda$ and $G$ be associated to $\Xund$ as in subsection \ref{ss PEL vs D}. Define $H := \prod_i \GL_{\ZZ_p} (\Lambda_i)$ with canonical embedding $i: G \mono H$. Then the Newton polygon stratification is given by the invariant \[ \NP(\sbar) := \nu_H(i(b_{\Xund}(\sbar))). \] \begin{lemma} \label{lem newton polygon stratification} The Newton stratification and the Newton polygon stratification on $S$ coincide. \end{lemma} \begin{proof} We start with the EL-case. Obviously it suffices to show the claim when $B$ is simple; by Morita equivalence we may further assume that $B=F$ is a finite unramified field extension of $\QQ_p$. In this case the assertion was already proven in Example \ref{ex isocrystal}. In the PEL-case, $i$ factorizes as $G \stackrel{i_1}{\mono} \GL_{O_B}(\Lambda) \stackrel{i_2}{\mono} H$. We have already seen that $i_2$ separates Newton points, thus it suffices to prove this for $i_1$. But we have already noted in section~\ref{ss PEL vs D} that $i_1$ induces an injective map on the dominant cocharacters, thus in particular separates Newton points. \end{proof} \subsection{Break points of Newton polygons and Newton points} In order to generalize Proposition~\ref{prop yang} to the PEL-case, we need the group theoretic description of break points. \begin{definition} Let $G$ be a reductive group over $\ZZ_p$, $T\subset B \subset G$ be a maximal torus and a Borel subgroup of $G$ and let $\Delta_{\QQ_p}^+(G)$ denote the set of simple relative roots of $G_{\QQ_p}$. For any $\nu \in \Nscr(G)$ and $\beta\in \Delta_{\QQ_p}^+(G)$ we make the following definitions. \begin{subenv} \item We say that $\nu$ has a break point at $\beta$ if $\langle \nu, \beta \rangle > 0$. We denote by $J(\nu)$ the set of break points of $\nu$. \item Let $\omega_\beta^\vee$ be the relative fundamental coweight of $G^\ad$ corresponding to $\beta$ and let \\ $pr_\beta:X_*(T)^\Gamma \to X_*(T^\ad)^\Gamma \to \QQ\cdot\omega_\beta^\vee$ be the orthogonal projection (cf.~\cite{chai00}~chapter~6). If $j \in J(\nu)$, we call $(\beta, pr_\beta(\nu))$ the break point of $\nu$ at $\beta$. \end{subenv} \end{definition} \begin{example} Consider the case $G=\GL_n$ with diagonal torus $T$ and Borel of upper triangular matrices $B$. Then the simple roots are \[ \beta_i: \diag(t_1,\ldots,t_n) \mapsto t_{i}-t_{i+1}. \] Hence $J(\nu)$ is the set of all $j$ such that $\nu_j >\nu_{j+1}$. In terms of the Newton polygon $P$ associated to $\nu$, this is the set of all places where the slope on the left and the slope on the right do not coincide. We use the standard identification $X_*(T^\ad)_\QQ \cong \QQ^n/\QQ$ where $\QQ\mono\QQ^n$ is the diagonal embedding. We write $[\nu_1,\ldots,\nu_n]$ for $(\nu_1, \ldots, \nu_n) \mod \QQ$.
Now $\omega_{\beta_j}^\vee = [1,\ldots,1,0,\ldots,0]$ (with $j$ ones) and a short calculation shows \begin{eqnarray*} pr_{\beta_j}(\nu) &=& [\underbrace{\frac{\nu_1+\cdots +\nu_j}{j},\ldots,\frac{\nu_{1}+\cdots+\nu_j}{j}}_{j \textnormal{ times}}, \underbrace{\frac{\nu_{j+1}+\cdots+\nu_n}{n-j},\ldots,\frac{\nu_{j+1}+\cdots+\nu_n}{n-j}}_{n-j \textnormal{ times}}] \\ &=& (\frac{\nu_1+\ldots+\nu_j}{j} - \frac{\nu_{j+1}+\ldots+\nu_n}{n-j})\cdot \omega_{\beta_j}^\vee \\ &=& (\frac{n}{j(n-j)} P(j)-\frac{1}{n-j} P(n) ) \cdot \omega_{\beta_j}^\vee. \end{eqnarray*} In particular if $\nu,\nu' \in \Nscr(G)$ with corresponding Newton polygons $P,P'$ satisfying $P(n) = P'(n)$, we have $pr_{\beta_j}(\nu) = pr_{\beta_j}(\nu')$ if and only if $P(j) = P'(j)$. Thus, under the premise that $P(n) = P'(n)$, the notion of a common break point for $\nu,\nu' \in \Nscr(\GL_n)$ coincides with the classical definition for Newton polygons. \end{example} \begin{lemma} \label{lem break points} In the situation of the previous subsection, let $\beta$ be a relative root of $G_{\QQ_p}$. There exists a relative root $\beta'$ of $H_{\QQ_p}$ such that for any subset $\{\nu_i\} \subset \Nscr(G)$ the $\nu_i$ have a common break point at $\beta$ if and only if the $i(\nu_i)$ have a common break point at $\beta'$. \end{lemma} \begin{proof} We proceed similarly as in Lemma~\ref{lem newton polygon stratification}. In the EL-case we reduce to the case $B=F$ which follows from the explicit description given in Example~\ref{ex isocrystal}. In the PEL-case it thus suffices to consider the embedding $i_1:G \mono \GL_{O_F} \Lambda$. We note that the simple roots of $G$ are precisely the restriction of the simple roots of $\GL_{O_F} \Lambda$. Denote by $R'(\beta)$ the set of simple roots of $\GL_{O_F} \Lambda$ above $\beta$. Obviously we have for any $\beta' \in R'(\beta)$ \[ \langle \nu, \beta \rangle = \langle i_1(\nu), \beta'\rangle. \] Thus $\nu$ has a break point at $\beta$ if and only if $i_1(\nu)$ has a break point at some (or equivalently every) element of $R'(\beta)$. Now it remains to show that for $\nu_1,\nu_2 \in \Nscr(G)$ we have $pr_\beta (\nu_1) = pr_\beta (\nu_2) \Lra pr_{\beta'} (i_{1}(\nu_1)) = pr_{\beta'} (i_{1}(\nu_2))$. As all maps involved are linear, it suffices to show \[ pr_\beta (\nu) = 0 \Lra pr_{\beta'}(i_{1}(\nu)) = 0. \] By the definition of the projections $pr_\beta, pr_{\beta'}$ this is equivalent to \[ \nu \in \sum_{\alpha \in \Delta_{\QQ_p}^+(G) \setminus \{\beta\}} \RR\alpha^\vee \Lra i_{1}(\nu) \in \sum_{\alpha' \in \Delta_{\QQ_p}^+(\GL_{O_F}\Lambda) \setminus \{\beta'\}} \RR\alpha'^\vee. \] This follows from the fact that $\alpha^\vee$ is a scalar multiple of the sum of the coroots corresponding to elements in $R'(\alpha)$, which is easily checked using the data given in the appendix. \end{proof} \subsection{The relationship between the dimension of Newton strata and their position relative to each other} Now we can finally formulate and prove the generalisation of Proposition~\ref{prop yang}. \begin{proposition}\label{prop yang generalised} Let $S$ be a connected locally noetherian $\FF_p$-scheme and $\Xund$ be a \BT\ over $S$ with EL structure or PEL structure of type (AC). If there exists a neighbourhood $U$ of a point $s\in S$ such that the Newton polygons $\nu_{\Xund}(x)$ of $\Xund$ over all points $x \in U\setminus \overline{\{s\}}$ have a common break point, then either $\codim_U(\overline{\{s\}}) \leq 1$ or $\nu_{\Xund}(s)$ has the same break point.
\end{proposition} \begin{proof} By Lemmas~\ref{lem newton polygon stratification} and \ref{lem break points} it suffices to prove the assertion for a family $(X_i)$ of Barsotti-Tate groups without additional structure. But this assertion obviously follows from Proposition~\ref{prop yang}. \end{proof} We now proceed to one of the central pieces of the proof of the main theorem. \begin{proposition} \label{prop stratification} We fix an unramified PEL-Shimura datum $\Dscr$ and assume that for any $b \in B(G,\mu)$, writing $\nu = \nu(b)$ for its Newton point, we have \begin{equation} \label{term blah} \dim \Ascr_0^{\nu} \leq \langle \rho,\mu+\nu\rangle - \frac{1}{2} \defect_G (b). \end{equation} Then Theorem~\ref{thm dimension shimura} holds true. \end{proposition} \begin{proof} This proof is the same as the proof of the analogous assertion in the equal characteristic case considered in \cite{viehmann13}. Due to the importance of this assertion, we give the full proof anyway. Note that by Corollary~\ref{cor dimension formulas} the inequality on the dimension is equivalent to $\codim \Ascr_0^{\nu} \geq \lengthG{\nu}{\mu}$. We prove the claim by induction on $\lengthG{\nu}{\mu}$. For $\lengthG{\nu}{\mu} = 0$, i.e.\ $\nu=\mu$, the dimension formula of Theorem~\ref{thm dimension shimura} certainly holds true and the density of $\Ascr_0^{\mu}$ is known by Wedhorn \cite{wedhorn99}. Now fix an integer $i$ and assume the statement holds true for all $\nu\leq\mu$ with $\lengthG{\nu}{\mu} < i$. Let $\nu'\leq\mu$ be such that $\lengthG{\nu'}{\mu} = i$. We fix an element $\nu \in [\nu',\mu]$ such that $\lengthG{\nu'}{\nu} = 1$. Then $\lengthG{\nu}{\mu} = i-1$ and by \cite{viehmann13}~Lemma~5 (or more precisely the remark after this lemma), we have \[ \Nscr_{\leq\nu'} = \{\xi \in \Nscr_{\leq\nu}\mid pr_\beta(\xi) < \lambda\} \] for some break point $(\beta,\lambda)$ of $\nu$. In particular, $pr_\beta(\nu') < \lambda$. Let $\eta$ be a maximal point of $\Ascr_0^{\leq\nu'}$ (i.e.~the generic point of an irreducible component). By applying Proposition~\ref{prop yang generalised} to $S=\Spec \Oscr_{\Ascr_0^{\leq\nu},\eta}$ and $s = \eta$ we see that every irreducible component of $\Ascr_0^{\leq\nu'}$ has at most codimension 1 in $\Ascr_0^{\leq\nu}$. By (\ref{term blah}) and the induction hypothesis we have $\dim \Ascr_0^{\leq\nu'} < \dim \Ascr_0^\nu$, thus $\Ascr_0^{\leq\nu'}$ is of pure codimension $\lengthG{\nu'}{\mu}$ in $\Ascr_0$. By (\ref{term blah}) we know that for any $\xi < \nu'$ we have $\codim \Ascr_0^\xi \geq \lengthG{\xi}{\mu} > \lengthG{\nu'}{\mu}$, thus $\Ascr_0^{\nu'}$ is dense in $\Ascr_0^{\leq\nu'}$ and Theorem~\ref{thm dimension shimura} follows. \end{proof} \section{Central leaves of $\Ascr_0$} \label{sect central leaves} In the previous proposition we reduced Theorem~\ref{thm dimension shimura} to an estimate of the dimension of Newton strata in $\Ascr_0$. In the special case of the Siegel moduli variety, Oort has calculated the dimension of Newton strata by writing irreducible components ``almost'' as a product of so-called central leaves and isogeny leaves and calculating the dimension of these. We will use a similar approach to prove the estimate, applying Mantovan's theorem which implies that a Newton stratum of a PEL-Shimura variety is in finite-to-finite correspondence with the product of a central leaf with a truncated Rapoport-Zink space (cf.\ Proposition~\ref{prop almost decomposition}). First, let us recall the definition and the basic properties of central leaves.
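Before doing so, let us record, for the reader's orientation, the elementary identity on which this bookkeeping rests: \[ \langle \rho,\mu+\nu_G(b)\rangle - \frac{1}{2} \defect_G(b) = \langle 2\rho,\nu_G(b)\rangle + \Big( \langle \rho,\mu-\nu_G(b)\rangle - \frac{1}{2} \defect_G(b) \Big). \] It splits the right hand side of (\ref{term blah}) into a term that will turn out to be the dimension of a central leaf and a term bounding the dimension of the corresponding Rapoport-Zink space (cf.\ Corollary~\ref{cor dimension of central leaves} and Proposition~\ref{prop main} below).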
\subsection{The geometry of central leaves} \begin{definition} Let $\Xund$ be a completely slope divisible {\BTD} over $k$ with isogeny class $b$. (Recall that by \cite{OZ02}~Lemma~1.4 a \BT $X$ over $k$ is slope divisible if and only if it is isomorphic to a direct sum of isoclinic \BT s which are defined over a finite field.) The corresponding central leaf is defined as \[ C_{\Xund} := \{x \in \Ascr_0 \mid \Aund^{\univ}_{\overline{x}}[p^\infty] \cong \Xund_{\overline{k(x)}}\}. \] \end{definition} The central leaf is a smooth closed subscheme of $\Ascr_0^{b}$ by \cite{mantovan05}~Prop.~1. The fact that $C_{\Xund}$ is closed in the Newton stratum is proven by writing $C_{\Xund}$ as a union of irreducible components of \[ C_{X} := \{x \in \Ascr_0 \mid A^{\univ}_{\overline{x}}[p^\infty] \cong X_{\overline{k(x)}}\}, \] which is closed in $\Ascr_0^{b}$ by a result of Oort. Viehmann and Wedhorn showed that every central leaf is non-empty (\cite{VW13}~Thm.~10.2). Furthermore, the dimension of $C_{\Xund}$ only depends on $b$. \begin{proposition} \label{prop dimension central leaves} Let $\Dscr' = (B,\ast,V,\langle\, , \rangle, O_B,\Lambda,h)$ be a second unramified Shimura datum which agrees with $\Dscr$ except for $h$ and let ${K^p}' \subset G(\AA^p)$ be a sufficiently small open compact subgroup. Denote by $\Ascr_0'$ the special fibre of the associated moduli space. We consider two \BT s $\Xund = (X,\iota,\lambda)$ and $\Xund' = (X',\iota',\lambda')$ over $k$ equipped with $\Dscr$- resp.\ $\Dscr'$-structure and assume that we have an isogeny $\rho: \Xund \to \Xund'$. Denote by $C_{\Xund}$ and $C_{\Xund'}$ the associated central leaves in $\Ascr_0$ resp.\ $\Ascr'_0$. Then \[ \dim C_{\Xund} = \dim C_{\Xund'}. \] \end{proposition} \begin{remark} \begin{subenv} \item Oort proved the analogous assertion for moduli spaces of (not necessarily principally) polarised abelian varieties (\cite{oort04}~Thm.~3.13). Our proof is a generalisation of his proof. \item In the case $\Dscr = \Dscr'$ the assertion was already proved by Mantovan. In the proof of \cite{mantovan04}~Prop.~4.7 she showed the proposition only for some special cases of PEL-Shimura data, but her proof can be generalised to arbitrary unramified PEL-Shimura data using \cite{mantovan05}~ch.~4 and 5. \end{subenv} \end{remark} \begin{proof} First we fix some notation. Let $n' = \dim_\QQ V$. Oort showed in \cite{oort04}~Cor.~1.7 that there exists an integer $N(n')$ such that any two \BT s $X,X'$ of height $n'$ over an algebraically closed field with $X[p^{N(n')}] \cong X'[p^{N(n')}]$ are isomorphic. Denote $H := \ker \rho$, furthermore $i := \deg \rho$ and $n = N(n')+i$. Choose $x =(A,\iota,\lambda,\eta) \in C_{\Xund}$ and let $x' = (A/H,\iota',\lambda',\eta')$. We denote by $C_x$ and $C_{x'}$ the connected components of the central leaves containing $x$ resp.\ $x'$. We denote the corresponding universal abelian varieties with additional structure by \begin{eqnarray*} \Mund &\rightarrow& C_x \\ \Mund'&\rightarrow& C_{x'} \end{eqnarray*} and \begin{eqnarray*} \Yund &=& \Mund[p^\infty] \\ \Yund' &=& \Mund'[p^\infty]. \end{eqnarray*} Using a slight generalisation of \cite{oort04}~Lemma~1.4 (the same proof still works), there exists a scheme $T \to C_{x}$, finite and surjective, such that $\Mund [p^n]_T$ is constant. We assume that $T$ is irreducible. The abelian variety with additional structure $\Mund_T/H_T$ defines a morphism $f:T \rightarrow \Ascr_0'$.
Using \cite{oort04}~Cor.~1.7, we see that $f$ factors through $C_{\Xund'}$; as $T$ is irreducible it thus factors through $C_{x'}$. We now show that $f$ is quasi-finite. We denote by $\varphi: \Yund_T \rightarrow (\Mund_T/H_T)[p^\infty]$ the isogeny constructed above and choose $\psi$ such that $\varphi \circ \psi = \psi\circ\varphi = p^n$. Let $u\in C_{x'}$ and let $S \subset f^{-1}(u)$ be a reduced and irreducible subscheme. Then \[ \psi_S: (\Yund'_u)_S = (\Mund_T/H_T)[p^\infty]_S \rightarrow X_S. \] By arguing as in \cite{oort04}~\S~1.11 one checks that the kernel of $\psi_S$ is constant. Thus \[ \Mund_S \cong \Mund'_S/\ker \psi_S \] is also constant, i.e.\ the image of $S$ in $C_x$ is a single point. As $T$ is finite over $C_x$, this implies that $S$ is a single point. So we get $\dim C_x = \dim T \leq \dim C_{x'}$. As the assertion of the proposition is symmetric in $X$ and $X'$, the claim follows. \end{proof} \begin{proposition} \label{prop almost decomposition} Let $b \in B(G,\mu)$ and let $\XXund$ be a slope divisible \BT\ over $\FFbar_p$ with $\Dscr$-structure in the isogeny class $b$. Denote by $\Mscr_G(b,\mu)$ the Rapoport-Zink space associated to $\XXund$. Then \[ \dim \Ascr_0^b = \dim C_{\XXund} + \dim \Mscr_G(b,\mu). \] \end{proposition} \begin{proof} By \cite{mantovan05}~ch.~5 there exists a finite surjective map $\pi_N: J_{m,b} \times \Mscr_b^{n,d} \to \Ascr_0^b$ ($m,n,d\gg 0$). Here $J_{m,b}$ is an Igusa variety over $C_\XXund$ (cf.~\cite{mantovan05}~ch.~4), which in particular means finite \'etale over $C_\XXund$, and $\Mscr_b^{n,d}$ are truncated Rapoport-Zink spaces (cf.~\cite{mantovan05}~ch.~5), which are quasi-compact and of the same dimension as $\Mscr_G(b,\mu)$ for $n,d \gg 0$. Hence \[ \dim \Ascr_0^b = \dim J_{m,b} + \dim \Mscr_b^{n,d} = \dim C_\XXund + \dim \Mscr_G(b,\mu). \] \end{proof} \subsection{Results on Ekedahl-Oort strata} Proposition \ref{prop almost decomposition} reduces the dimension formula of Theorem \ref{thm dimension shimura} to computing the dimension of the central leaves and of Rapoport-Zink spaces. Because of Proposition~\ref{prop dimension central leaves} it suffices to calculate the dimension of the central leaf for one representative of each isogeny class. In order to find a suitable central leaf we consider the Ekedahl-Oort stratification, which is studied in great detail in the paper \cite{VW13} of Viehmann and Wedhorn. We recall some of their notions and results. For a \BTD\ $\Xund$ over $k$ we denote by $\tc(\Xund)$ the isomorphism class of $\Xund[p]$. The map $\tc$ has the following group theoretic interpretation (cf.~\cite{VW13}~ch.~8). Recall that an isomorphism class of \BTDs\ corresponds to an element of $C(G)$, the set of $G(O_L)$-$\sigma$-conjugacy classes in $G(L)$. By Dieudonn\'e theory the isomorphism classes of truncated \BTDs\ correspond to $G(O_L)$-$\sigma$-conjugacy classes in $G_1 \backslash G(L) / G_1$ where $G_1 := \ker(G(O_L) \epi G(k))$. Now the truncation map $\tc$ is given by the canonical projection \[ C(G) \epi \{ \sigma\textnormal{-conjugacy classes in } G_1 \backslash G(L) / G_1 \}. \] The Ekedahl-Oort stratification is the decomposition of $\Ascr_0(\FFbar_p)$ given by the invariant $\tc$. The Ekedahl-Oort strata are locally closed subsets of $\Ascr_0(\FFbar_p)$ and are indexed by elements $w\in W$, each of which is the shortest element in its coset $W_{\sigma^{-1}(\mu)}w$. Here $W_{\sigma^{-1}(\mu)}$ denotes the Weyl group of the centralizer of $\sigma^{-1}(\mu)$. We denote the Ekedahl-Oort stratum corresponding to $w$ by $\Ascr_{0,w}$.
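A standard example, recalled only for orientation, is the Siegel case of genus one: here $G = \GSp_2 = \GL_2$ and $\mu = (1,0)$, the group $W_{\sigma^{-1}(\mu)}$ is trivial, and the two elements of $W$, of lengths $0$ and $1$, index the supersingular and the ordinary Ekedahl-Oort stratum of the modular curve, in accordance with the dimension formula below.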
\begin{proposition}[\cite{VW13}~Thm.~11.1 and Prop.~11.3] $\Ascr_{0,w}$ is non-empty and of pure dimension $\ell(w)$. \end{proposition} As we want to calculate the dimension of central leaves, we are interested in Ekedahl-Oort strata corresponding to $p$-torsion subgroups (with additional structure) which determine their {\BTD} uniquely. \begin{definition} \begin{subenv} \item A \BTD\ $\Xund$ over $k$ is called minimal if every \BTD\ $\Yund$ with $\Xund[p] \cong \Yund[p]$ is isomorphic to $\Xund$. \item An Ekedahl-Oort stratum is called minimal if the fibre of $\Aund^{\univ}[p^\infty]$ over some (or equivalently every) point of it is minimal. \end{subenv} \end{definition} Certainly an Ekedahl-Oort stratum is a central leaf if and only if it is minimal and the corresponding {\BT} over $\FFbar_p$ is slope-divisible. Viehmann and Wedhorn show in their paper that every isogeny class contains a minimal \BTD, more precisely they show that this is true for the stronger notion of fundamental elements. We recall their definition. \begin{definition} \begin{subenv} \item Let $P$ be a semistandard parabolic subgroup of $G_{O_F}$, denote by $N$ its unipotent radical and let $M$ be the Levi factor containing $T_{O_F}$. Furthermore denote by $\Nbar$ the unipotent radical of the opposite parabolic. We denote \[ \Iscr_M = \Iscr \cap M(O_L),\quad \Iscr_N = \Iscr \cap N(O_L),\quad \Iscr_{\Nbar} = \Iscr\cap \Nbar(O_L). \] Then an element $x \in \Wtilde$ is called $P$-fundamental if \begin{eqnarray*} \sigma(x \Iscr_M x^{-1}) &=& \Iscr_M, \\ \sigma(x \Iscr_N x^{-1}) &\subseteq& \Iscr_N, \\ \sigma(x \Iscr_{\Nbar} x^{-1}) &\supseteq& \Iscr_{\Nbar}. \end{eqnarray*} \item We call an element $x \in \Wtilde$ fundamental if it is $P$-fundamental for some semi-standard parabolic subgroup $P \subset G_{O_F}$. \item We call $\pot{c} \in C(G)$ fundamental if it contains a fundamental element of $\Wtilde$. A {\BTD} over $\FFbar_p$ is called fundamental if the corresponding element of $C(G)$ is fundamental. \item An Ekedahl-Oort stratum is called fundamental if the fibre of $\Aund^{\univ}[p^\infty]$ over some point of it is fundamental. \end{subenv} \end{definition} By \cite{VW13}~Rem.~9.8 every fundamental {\BTD} is also minimal. As mentioned above they also show the following assertion. \begin{proposition}[\cite{VW13}~Thm.~9.18] \label{prop VW} Let $G$ and $\mu$ be the reductive group scheme and the cocharacter associated to an unramified PEL-Shimura datum. Then for every $b \in B(G,\mu)$ there exists a fundamental element $x \in \Wtilde$ such that $x \in b$ and $x \in W\mu'$ for some minuscule cocharacter $\mu'$. Furthermore the {\BTD} associated with $\pot{x}$ is completely slope divisible. \end{proposition} The slope divisibility is not mentioned in \cite{VW13}, but following their construction of $x$ one easily checks that the induced {\BTD} is completely slope divisible. \section{The dimension of certain Ekedahl-Oort-strata} \label{sect EO strata} In order to apply the formula $\dim \Ascr_{0,w} = \ell(w)$ to fundamental Ekedahl-Oort strata, we have to explain how to compute $w$, or rather $\ell(w)$, if one is given a fundamental element $x \in \Wtilde$. This can be done by the algorithm provided by Viehmann in the proof of \cite{viehmann}~Thm.~1.1. Before we apply this algorithm, we recall some combinatorics of the Weyl group which it uses. \subsection{Shortest elements in cosets of the extended affine Weyl group} We denote by $S$ resp.\ $S_a$ the set of simple reflections in $W$ resp.\ $W_a$. 
For $J \subset S_a$ let $W_J \subset \Wtilde$ be the subgroup generated by $J$. Then every right (resp.\ left) $W_J$-coset of $\Wtilde$ contains a unique shortest element. We denote by $\Wtilde^J$ (resp. ${^J\Wtilde}$) the set of those elements. Then every $x \in \Wtilde$ can be written uniquely as $x = x^J \cdot w_J = {_Jw} \cdot {^Jx}$ with $x^J \in \Wtilde^J, w_J \in W_J, {_Jw} \in {_JW}, {^Jx} \in {^J\Wtilde}$. We have $\ell(x^J) + \ell(w_J) = \ell(x) = \ell(_Jw) + \ell(^Jx)$. (cf.\ \cite{DDPW}~Prop.~4.16) Moreover there exists a unique shortest element for every double coset $W_J\backslash \Wtilde/W_K$. we denote the set of all shortest elements in their respective double coset by ${^J\Wtilde^K}$. Let $u \in {^J\Wtilde^K}$ and $K' := K \cap u^{-1}Ju$. Then $W_{K'} = W_K \cap u^{-1}W_Ju$ and every element $x \in W_JuW_K$ can be written uniquely as $x = w_Juw_{K'}$ with $w_J \in W_J$ and $w_{K'}\in W_{K'}$. Moreover, we have $\ell(x) = \ell(w_J) + \ell(u) + \ell(w_{K'})$. (cf.\ \cite{DDPW}~Lemma~4.17 and Thm.~4.18) Of course the above statements also hold for $J,K \subset S$ and $W$ instead of $\Wtilde$. We denote the respective sets of shortest elements by $W^J, {^JW}$ resp.\ ${^JW^K}$. For $\mu \in X_*(T)_{\dom}$ denote by $\tau_\mu$ the shortest element of $Wp^\mu W$. Then $\tau_\mu$ is of the form $x_\mu \cdot p^\mu$ with $x_\mu \in W$. Let $M_\mu$ be the centralizer of $\mu$ and $W_\mu = \{w\in W\mid w(\mu) = \mu\}$ the Weyl group of $M_\mu$. We have the following useful lemmas. \begin{lemma} $W_\mu = W_{S \cap \tau_\mu^{-1}S\tau_\mu} = W \cap \tau_\mu^{-1} W \tau_\mu$. \end{lemma} \begin{proof} It suffices to show that for $s\in S$ we have $s(\mu) = \mu$ if and only if $s \in \tau_\mu^{-1}S\tau_\mu$. If $s(\mu)=\mu$ then $\tau_\mu s \tau_\mu^{-1} \in W$. Thus \[ \ell(\tau_\mu s \tau_\mu^{-1}) = \ell(\tau_\mu s \tau_\mu^{-1} \tau_\mu ) - \ell(\tau_\mu) = \ell(\tau_\mu s) - \ell(\tau_\mu) = \ell(\tau_\mu) + \ell(s) - \ell(\tau_\mu) = \ell(s) = 1 \] and hence $\tau_\mu s \tau_\mu^{-1} \in S$. On the other hand we have \[ \tau_\mu s \tau_\mu^{-1} = x_\mu p^\mu s p^{-\mu} x_{\mu}^{-1} = x_\mu s x_{\mu}^{-1} p^{x_\mu(s(\mu)-\mu)}. \] Thus $\tau_\mu s \tau_{\mu}^{-1} \in S$ implies $s(\mu)-\mu = 0$. \end{proof} We denote $S_\mu := S \cap \tau_\mu^{-1}S\tau_\mu$. \begin{lemma} \label{lem length preservation} Let $J,K \subset S_a, u \in {^J\Wtilde^K}$. Denote by $K' := K \cap u^{-1}Ju$ and let $w \in W_{K'}$. Then \[ \ell(uwu^{-1}) = \ell(w). \] \end{lemma} \begin{proof} We have \begin{eqnarray*} l(w) &=& \ell(uw) - \ell(u) \\ &=& \ell((uwu^{-1})u) - \ell(u) \\ &=& \ell(uwu^{-1}) + \ell(u) - \ell(u) \\ &=& \ell(uwu^{-1}) \end{eqnarray*} \end{proof} \begin{corollary} \label{cor length preservation} For $x \in \Wtilde_\mu$ we have \[ \ell(x_\mu x x_\mu^{-1}) = \ell(x) \] \end{corollary} \begin{proof} This is a consequence of $x_\mu x x_\mu^{-1} = \tau_\mu x \tau_\mu^{-1}$ and the preceding two lemmas. \end{proof} \subsection{$G(O_L)$-$\sigma$-conjugacy classes in $\widetilde{W}$} We have the following result on $G(O_L)$-$\sigma$-conjugacy classes of $G_1\backslash G(L) / G_1$. \begin{proposition}[\cite{viehmann}~Thm.~1.1] Let $\Tscr = \{(w,\mu) \in W \times X_*(T)_\dom \mid w \in {^{M_{\sigma^{-1}(\mu)}}W}\}$. Then the map assigning to $(w,\mu)$ the $G(O_L)$-$\sigma$-conjugacy class of $G_1w\tau_\mu G_1$ is a bijection between $\Tscr$ and the $G(O_L)$-$\sigma$-conjugacy classes in $G_1\backslash G(L) / G_1$. 
\end{proposition} In the case where $G$ is given by an unramified PEL-Shimura datum, the stratum $\Ascr_{0,w}$ corresponds to the $G(O_L)$-$\sigma$-conjugacy class of $G_1 w\tau_{\mu_h} G_1$. The proof of the above theorem provides an algorithm which determines the pair $(w,\mu)$ associated to the $G(O_L)$-$\sigma$-conjugacy class of $G_1bG_1$ for any $b \in G(L)$. For the special case that $b \in \Wtilde$ the algorithm simplifies as follows. \begin{enumerate} \item Denote by $\mu'$ the image of $b$ under the canonical projection $\Wtilde \epi X_*(T), w\cdot p^\lambda \mapsto p^\lambda$ and let $\mu = \mu'_{\dom}$. Then $b \in W\tau_\mu W$, thus we may write \[ b = w \tau_\mu w' \] with $w,w' \in W$, $\ell(b) = \ell(w) + \ell(\tau_\mu) + \ell(w')$. Now replace $b$ by its $\sigma$-conjugate \[ \sigma^{-1}(w') b w^{-1} = \sigma^{-1}(w') w \tau_\mu =: b_0 \tau_\mu. \] \item Define sequences of subsets $J_i,J_i' \subset S$ and sequences of elements $u_i \in W,b_i \in W_{J_i'}$ as follows: \begin{enumerate} \item $J_0 = J_0' = S$, \newline $J_1 = \sigma^{-1}(S_\mu)$ and $J_1' = x_\mu S_\mu x_\mu^{-1}$, \newline $J_i = J_{i-1}' \cap u_{i-1}J_1u_{i-1}^{-1}$ and $J_i' = x_\mu \sigma(u_{i-1} J_i u_{i-1}^{-1}) x_\mu^{-1}$. \item $u_0 =1$ and $b_0$ as above. Let $\delta_i$ be the shortest length representative of $W_{J_i}b_{i-1}W_{J_i'}$ in $W_{J_{i-1}'}$. Then $u_i = u_{i-1}\delta_i$ and $b_i$ is defined as follows. Decompose \[ b_{i-1} = w_i \delta_i w_i' \] with $w_i \in W_{J_i}$, $w_i' \in W_{J_i'}$ such that $\ell(b_{i-1}) = \ell(w_i) + \ell(\delta_i) + \ell(w_i')$. Then \[ b_i = w_{i}' \cdot x_\mu \sigma(u_{i-1}w_{i}u_{i-1}^{-1}) x_\mu^{-1}. \] \end{enumerate} The above sequences satisfy the following properties. The sequences $(J_i)$ and $(J_i')$ are descending and $u_i \in {^{J_1} W ^{J_i'}}$. \item Let $n$ be sufficiently large such that $J_n = J_{n+1}, J_n' = J_{n+1}'$. Then $(w,\mu)$ is given by $w=u_n$ and $\mu$ as in step $1$. \end{enumerate} \begin{remark} Following the construction above, we see that not only the $G_1$-double cosets of $b$ and $w\tau_\mu$ are $G(O_L)$-$\sigma$-conjugate, but also $b$ and $w\tau_\mu$ themselves. Indeed, we have a chain of $\sigma$-conjugations \begin{eqnarray*} u_0b_0\tau_\mu &=& \sigma^{-1}(w') b w^{-1} \\ u_ib_i\tau_\mu &=& (u_{i-1}w_iu_{i-1}^{-1})^{-1} u_{i-1}b_{i-1}\tau_\mu\; \sigma(u_{i-1}w_iu_{i-1}^{-1}) \textnormal{ for } i > 0 \end{eqnarray*} and Viehmann shows in her proof of the above theorem that $u_n\tau_\mu$ is $G(O_L)$-$\sigma$-conjugate to $u_nb_n\tau_\mu$. \end{remark} \begin{proposition} \label{prop length tr} Let $b$ and $(w,\mu)$ be as above. Then \[ \ell(b) \geq \ell(w\tau_\mu). \] \end{proposition} \begin{proof} We follow the algorithm above. We have \[ \ell(b_0\tau_\mu) = \ell(\sigma^{-1}(w')w\tau_\mu) \leq \ell(w') + \ell(w) + \ell(\tau_\mu) = \ell(b). \] Next, we show that $\ell(u_ib_i)\leq \ell(u_{i-1}b_{i-1})$ for all $i$. Note that \[ \ell(u_{i-1}b_{i-1}) = \ell(u_{i-1}) + \ell(b_{i-1}) = \ell(u_{i-1}) + \ell(w_i) + \ell(\delta_i) + \ell(w_i') = \ell(u_i) + \ell(w_i) +\ell(w_i'). \] Thus we have to show that $\ell(b_i) \leq \ell(w_i) + \ell(w_i')$.
Now \begin{eqnarray*} \ell(b_i) &=& \ell(w_i' x_\mu \sigma(u_{i-1} w_i u_{i-1}^{-1}) x_\mu^{-1}) \\ &\leq& \ell(w_i') + \ell(x_\mu \sigma(u_{i-1} w_i u_{i-1}^{-1}) x_\mu^{-1}) \\ &\stackrel{\rm Cor.~\ref{cor length preservation}}{=}& \ell(w_i') + \ell(\sigma(u_{i-1}w_iu_{i-1}^{-1})) \\ &=& \ell(w_i') + \ell(u_{i-1}w_iu_{i-1}^{-1}) \\ &\stackrel{\rm Lemma~\ref{lem length preservation}}{=}& \ell(w_i') + \ell(w_i). \end{eqnarray*} Altogether, we have \[ \ell(w\tau_\mu) = \ell(w)+\ell(\tau_\mu) = \ell(u_n)+ \ell(\tau_\mu) = \ell(u_nb_n) - \ell(b_n) + \ell(\tau_\mu) \leq \ell(b_0) + \ell(\tau_\mu) \leq \ell(b). \] \end{proof} \subsection{Fundamental Ekedahl-Oort strata} \label{ss fundamental elements} It is well-known that one has for any $x \in \Wtilde$ the inequality $\ell(x) \geq \langle 2\rho, \nu(x) \rangle$. One calls $x$ a $\sigma$-straight element if equality holds, or equivalently if \[ \ell(x\cdot \sigma(x) \cdot \ldots \cdot \sigma^n(x)) = (n+1) \cdot \ell(x) \] for every non-negative integer $n$ (see for example \cite{he}~Lemma~8.1). \begin{lemma} Let $G$ be a reductive group scheme with extended affine Weyl group $\Wtilde$. Then any fundamental element $x \in \Wtilde$ is $\sigma$-straight. \end{lemma} \begin{proof} In the split case this was proven in \cite{GHKR10}~Prop.~13.1.3. Our proof uses the same idea but is a bit more technical than the proof in the split case. We show \[ \ell(x\cdot \sigma(x) \cdot \ldots \cdot \sigma^n(x)) = (n+1)\cdot \ell(x) \] via induction on $n$. For $n=0$ the assertion is tautological. For $n \geq 1$ let $x_{(n)} = \sigma(x)\cdot \ldots \cdot \sigma^n(x)$. As $x$ is fundamental we deduce \[ \sigma(x\Iscr_Mx^{-1}) = \Iscr_M \Rightarrow \Iscr_{\sigma^k(M)} = \sigma^k(x^{-1}) \Iscr_{\sigma^{k-1}(M)} \sigma^k(x) \textnormal{ for all } k \Rightarrow x_{(n)}^{-1} \Iscr_M x_{(n)} = \Iscr_{\sigma^n(M)} \subset \Iscr, \] \[ \sigma(x\Iscr_{\Nbar}x^{-1}) \supseteq \Iscr_{\Nbar} \Rightarrow \Iscr_{\sigma^k(\Nbar)} \supseteq \sigma^k(x^{-1}) \Iscr_{\sigma^{k-1}(\Nbar)} \sigma^k(x) \textnormal{ for all } k \Rightarrow x_{(n)}^{-1} \Iscr_{\Nbar} x_{(n)} \subseteq \Iscr_{\sigma^n(\Nbar)} \subset \Iscr. \] Hence by Iwahori factorisation \[ x\Iscr x_{(n)} = x\Iscr_N \Iscr_M \Iscr_{\Nbar} x_{(n)} = (x\Iscr_Nx^{-1})x x_{(n)} (x_{(n)}^{-1}\Iscr_M x_{(n)}) (x_{(n)}^{-1} \Iscr_{\Nbar}x_{(n)}) \subset \Iscr x x_{(n)} \Iscr \] which implies \[ \ell(x\cdot x_{(n)}) = \ell(x) + \ell(x_{(n)}) = \ell(x) + n\cdot \ell(x) = (n+1) \cdot \ell(x). \] \end{proof} \begin{corollary} \label{cor dimension of central leaves} We fix $b \in B(G,\mu)$. \begin{subenv} \item Let $x\in\Wtilde$ and $\mu' \in X_*(T)$ be as in Proposition~\ref{prop VW}. Denote by $\Ascr_0'$ the special fibre of the moduli space associated to the datum $(\Bsf,\Osf_\Bsf,\Vsf, \langle\,\, ,\, \rangle, \Lambda, \mu'_\dom)$ and by $\Ascr'_{0,w}$ the Ekedahl-Oort stratum corresponding to $\tc (x)$. Then $\dim \Ascr'_{0,w} = \langle 2\rho,\nu_G(b) \rangle$. \item Any central leaf in $\Ascr_0^{b}$ has dimension $\langle 2\rho,\nu_G(b) \rangle$. \item We have $\dim \Ascr_0^{b} = \dim \Mscr_G(b,\mu) + \langle 2\rho,\nu_G(b) \rangle$. \end{subenv} \end{corollary} \begin{proof} We have $\dim \Ascr'_{0,w} = \ell(w) = \ell(w\tau_\mu)$. Here the last equality holds as $\ell(\tau_\mu) = 0$ because $\mu$ is minuscule. Now $\nu_G(w \tau_\mu) = \nu_G(b)$, thus $\ell(w\tau_\mu) \geq \langle 2\rho,\nu_G(b) \rangle$.
On the other hand we have $\ell(w\tau_\mu) \leq \ell(x) = \langle 2 \rho, \nu_G(b) \rangle$ by Proposition \ref{prop length tr} and the previous lemma, thus proving part (1). The second part is a direct consequence of part (1) and Proposition~\ref{prop dimension central leaves}, and the last assertion follows from Prop.~\ref{prop almost decomposition}. \end{proof} \section{Epilogue} \label{sect epilogue} Now we have reduced the task of proving the main theorems to proving the following proposition. \begin{proposition} \label{prop main} \begin{subenv} \item We have \begin{equation} \dim \Mscr_G(b,\mu) \leq \langle \rho, \mu-\nu_G(b) \rangle - \frac{1}{2} \defect_G(b). \end{equation} \item Assume that $b$ is superbasic. Then the connected components of $\Mscr_G(b,\mu)$ are projective. \end{subenv} \end{proposition} We recall the reduction steps we made so far for the reader's convenience. First we reduced Theorem~\ref{thm dimension shimura} to the inequality (\ref{term blah}) in Proposition~\ref{prop stratification}. Assuming that the above proposition holds true, we obtain the inequality (\ref{term blah}) by applying the above estimate for $\dim \Mscr_G(b,\mu)$ to Corollary~\ref{cor dimension of central leaves}~(3). Now Theorem~\ref{thm dimension deformation} follows from Theorem~\ref{thm dimension shimura} by Proposition~\ref{prop relation}. In turn, applying the dimension formula for $\Ascr_0^b$ of Theorem~\ref{thm dimension shimura} to Corollary~\ref{cor dimension of central leaves}~(3) yields the first assertion of Theorem~\ref{thm dimension RZ-space}. The other assertion is identical to the second assertion of the proposition above. \section{The correspondence between the general and superbasic case} \label{sect BT again} \subsection{Simple RZ-data and Rapoport-Zink spaces over perfect fields} \label{ss simple RZ} We call an un\-ra\-mi\-fied Rapoport-Zink datum $(B,O_B,(*),V,(\pair),\Lambda)$ \emph{simple} if $B=F$ is an unramified field extension of $\QQ_p$. In \cite{fargues04}~\S~2.3.7 Fargues gives an open and closed embedding \[ \Mscr_G(b,\mu) \mono \prod_i \Mscr_{G_i}(b_i,\mu_i) \] of an arbitrary Rapoport-Zink space into a product of Rapoport-Zink spaces associated to simple data using assertions as in section~\ref{ss normal forms of BTpELs}. In particular, \[ \dim \Mscr_G(b,\mu) = \sum_i \dim \Mscr_{G_i}(b_i,\mu_i) \] and one obtains an analogous formula for the right hand side of the dimension estimate of Proposition \ref{prop main}. Thus it suffices to prove the proposition for simple Rapoport-Zink data. We note that as a consequence of \cite{RR96}~Lemma~1.3 the reduced subscheme of a Rapoport-Zink space can be defined over $\FFbar_p$, thus we can make the following assumption. \begin{notation} In the following we will assume that $k = \FFbar_p$. \end{notation} In order to estimate the dimension of the Rapoport-Zink spaces we will use some methods which only work for schemes defined over a finite field. So we consider the more general setup as in section~\ref{ss RZ spaces} where the RZ-spaces are defined over perfect fields. We fix a perfect field $k_0$ and denote $L_0 = W(k_0)_\QQ$.
\begin{definition} \label{def relative RZ datum} A simple Rapoport-Zink datum relative to $L_0$ is a datum $\hat\Dscr = (F,O_F,(^\ast),V,$ $(\pair),\Lambda,[b])$ as in Definition \ref{def RZ datum} but $[b]$ denotes a $\sigma$-conjugacy class in $G(L_0)$ which is assumed to be decent, i.e.\ it contains an element $b$ satisfying \begin{equation} \label{term decent} (b\sigma)^s = \nu_G (b)(p^s) \sigma \end{equation} for some natural number $s$. \end{definition} As in the case of an algebraically closed field, one can associate an isomorphism class of $\Bscr$-isocrystals to $[b]$ (\cite{RZ96}~Lemma~3.37). Now let $(\XX,\iota_\XX,(\lambda_\XX))$ be a {\BTpEL} with $\Bscr$-isocrystal $(L_0 \otimes V, b(\sigma \otimes \id))$. \begin{proposition}[\cite{RZ96}, Cor.~3.40] \label{prop relative RZ space} Assume that $L_0$ contains the local Shimura field and $\QQ_{p^s}$ (where $s$ is chosen as in (\ref{term decent})). Then the associated functor $\Mscr_G$ defined as in Definition \ref{def RZ space} is representable by a formal scheme formally locally of finite type over $\Spf W(k_0)$. \end{proposition} We note that every class $[b] \in B(G)$ contains an element $b_0$ which satisfies the decency equation (\ref{term decent}) above. By \cite{RZ96}~Cor.~1.9 we have $b_0 \in G(\QQ_{p^s})$, thus every Rapoport-Zink space can be defined over a finite field $k_0$. \subsection{Construction of the correspondence} We fix a simple Rapoport-Zink datum $\hat\Dscr = (B,O_B,(^\ast),V,(\pair),\Lambda,\mu,[b])$. Let $P \subset G$ be a standard parabolic subgroup defined over $\ZZ_p$ and $M \subset P$ be its Levi subgroup which contains $T$. We will later assume that $[b]$ induces a superbasic $\sigma$-conjugacy class in $M(L)$. However, our construction works in greater generality. By \cite{SGA3-3}~Exp.~XXVI~ch.~1 the pairs $M \subset P$ as above are given by the root datum of $G$ in analogy with reductive groups over fields. Using the explicit description of the root data in the appendix, one sees that there exists a decomposition $\Lambda = \bar\Lambda_1 \oplus \ldots \oplus \bar\Lambda_r$ such that \begin{eqnarray*} M &=& \{g \in G; g(\bar\Lambda_i) = \bar\Lambda_i \textnormal{ for all } i\} \\ P &=& \{g \in G; g(\Lambda_i) = \Lambda_i \textnormal{ for all } i\}, \end{eqnarray*} where $\Lambda_{i} = \bar\Lambda_1 \oplus \ldots \oplus \bar\Lambda_i$. We get in the EL-case \[ M \cong \prod_{i=1}^{r'} \GL_{O_F,n_i}, \] where $n_i = \dim_F \overline{V_i}$ (with $\overline{V_i} := \bar\Lambda_i \otimes_{\ZZ_p} \QQ_p$) and $r'=r$. In the PEL-case we can assume that for any $i$ \[ \bar\Lambda_i^\perp = \bar\Lambda_1 \oplus \ldots \oplus \bar\Lambda_{r-i} \oplus \bar\Lambda_{r-i+2} \oplus \ldots \oplus \bar\Lambda_{r}. \] We denote $r' = \lfloor\frac{r}{2}\rfloor+1$. Then we have \[ M \cong \left\{ \begin{array}{ll} \prod\limits_{i=1}^{r'-1} \GL_{O_F,n_i} \times \GG_m & \textnormal{ if } r \textnormal{ is even} \\ \prod\limits_{i=1}^{r'-1} \GL_{O_F,n_i} \times \GSp_{O_F,n_{r'}} & \textnormal{ if } r \textnormal{ is odd, } ^\ast = \id \\ \prod\limits_{i=1}^{r'-1} \GL_{O_F,n_i} \times \GU_{O_F, n_{r'}} & \textnormal{ if } r \textnormal{ is odd, } ^\ast = \bar\cdot . \end{array} \right. \] We also write this decomposition as $M = \prod M_i$ with $M_{r'}$ denoting the right factor in the PEL-case. After replacing $b$ by a $G(L)$-$\sigma$-conjugate, we may assume that $b \in M(L)$. Denote by $[b]_M$ the $\sigma$-conjugacy class in $M(L)$. After $\sigma$-conjugating by an element of $M(L)$ we furthermore assume that $b$ satisfies a decency equation (\ref{term decent}). Denote by $b_i$ the images of $b$ in $M_i$.
Also note that the $b_i$ satisfy the same decency equation in $M_i$ as $b$. Let $k_0$ be a finite field containing $k_F$ and $\FF_{p^s}$ where $s$ is given by the decency equation of $b$. We choose {\BTpELs} $\XXbar_i$ over $k_0$ with $\Bscr$-isocrystals isomorphic to $(V_{i,L_0}, b\sigma)$ having the property that in the PEL-case the pairing $\pair$ induces an isomorphism $\XXbar_i \cong \XXbar_{r+1-i}^\vee$. In particular the isogeny class of $\XX = \XXbar_1 \oplus \ldots \oplus \XXbar_r$ corresponds to the isomorphism class of $(V,b\sigma)$ (with additional structure). We denote $\XX_i = \XXbar_1 \oplus \ldots \oplus \XXbar_i$. The following objects will allow us to relate $\Mscr_G(b,\mu)$ to the (underlying reduced subschemes of) Rapoport-Zink spaces corresponding to the $\sigma$-conjugacy classes $[b_i]_{M_i}$. \begin{definition} Using the notation introduced above we make the following definitions. \begin{subenv} \item Let $\Mscr_P(b,\mu)$ be the functor associating to each scheme $S$ over $k_0$ the following data up to isomorphism. \begin{itemlist} \item $(X,\rho) \in \Mscr_G(b,\mu)(S)$ and \item a filtration $X_\bullet = (X_1 \subset \ldots \subset X_r = X)$ of $X$ such that the restriction $\rho_{|\XX_i}$ defines a quasi-isogeny onto $X_i$. \end{itemlist} \item Similarly, let $\Mscr_M(b,\mu)$ be the functor associating the following data (up to isomorphism) to a scheme $S$ over $k_0$. \begin{itemlist} \item $(X,\rho) \in \Mscr_G(b,\mu)(S)$ and \item a direct sum decomposition $X = \Xbar_1 \oplus \ldots \oplus \Xbar_r$ such that the restriction $\rho_{|\overline{\XX_i}}$ defines a quasi-isogeny onto $\Xbar_i$. \end{itemlist} \end{subenv} \end{definition} We note that if $(X,X_\bullet,\rho) \in \Mscr_P(b,\mu)(S)$ then $X_i = \rho(\XX_i)$. Thus the filtration $X_\bullet$ is uniquely determined by $(X,\rho)$ if it exists, i.e.\ if $\rho(\XX_i)$ is a \BT. Following the proof of \cite{mantovan08}, Prop.~5.1, we conclude that $i_P: \Mscr_P(b,\mu) \rightarrow \Mscr_G(b,\mu), (X,X_\bullet,\rho) \mapsto (X,\rho)$ is the decomposition of $\Mscr_G(b,\mu)$ into locally closed subsets given by the invariant $(\height(\rho_{|\XX_i}))_i$, i.e.\ there is a canonical isomorphism \[ \Mscr_P(b,\mu) \cong \coprod_{(\alpha_i)\in\ZZ^r} \{ (X,\rho) \in \Mscr_G(b,\mu)(k) \mid \height (\rho_{|\XX_i}) = \alpha_i \textnormal{ for all } i \} \] with the reduced subscheme structure on the right hand side which identifies $i_P$ with the canonical embedding of the components of the coproduct into $\Mscr_G(b,\mu)$. In particular, $\Mscr_P(b,\mu)$ is locally of finite type and $\dim \Mscr_P(b,\mu) = \dim \Mscr_G(b,\mu)$. Now the $\XX_i$ are $O_F$-stable and in the PEL-case $\lambda_\XX$ induces isomorphisms $\XX_i \stackrel{\sim}{\rightarrow} (\XX/\XX_{r-i})^\vee$. As $\rho$ is compatible with the PEL structure, the analogous compatibility assertion also holds for the filtration $X_\bullet$ with respect to $\lambda$ and $\iota$. Hence we have a canonical $O_F$-action and polarisation on $\bigoplus_{i=1}^r X_i/X_{i-1}$ (where $X_0 := 0$). Thus we get a morphism \[ p_M:\Mscr_P(b,\mu) \to \Mscr_M(b,\mu), (X,X_\bullet,\rho) \mapsto (\bigoplus_{i=1}^r X_i/X_{i-1},\oplus\bar\rho_i) \] where the $\bar\rho_i$ denote the induced isogenies on the subquotients.
So we have a correspondence \begin{equation} \begin{tikzcd} \label{diag correspondence} & \Mscr_P(b,\mu) \arrow{ld}[swap]{p_M} \arrow{rd}{i_P} \\ \Mscr_M(b,\mu) & & \Mscr_G(b,\mu) \end{tikzcd} \end{equation} Now we want to describe $\Mscr_M(b,\mu)$ in terms of Rapoport-Zink spaces. Analogously to our consideration above we get that for $(\bigoplus_i \overline{X_i},\rho) \in \Mscr_M(b,\mu)(S)$ the $\Xbar_i$ are $O_F$-stable and that $\lambda$ induces isomorphisms $\overline{X_i} \stackrel{\sim}{\to} \Xbar_{r-i}^\vee$. As in \cite{RV}~ch.~5.2 we get an isomorphism \begin{eqnarray*} \Mscr_M(b,\mu) &\stackrel{\sim}{\longrightarrow}& \coprod_{\mu_M \in I_{\mu,b,M}} \Mscr_{M,b,\mu_M} \\ (\bigoplus_i \Xbar_i, \rho) &\mapsto& (\Xbar_i,\rho_{|\XXbar_i})_i \end{eqnarray*} where $I_{\mu,b,M}$ denotes the $M(\QQbar_p)$-conjugacy classes of cocharacters $\mu_{M}: \GG_{m,\QQbar_p} \to M_{\QQbar_p}$ which are contained in the conjugacy class of $\mu$ and satisfy $b \in B(M,\mu_M)$ and $\Mscr_{M,b,\mu_M}$ is defined by \[ \Mscr_{M,b,\mu_M} = \prod_{i=1}^{r'} \Mscr_{M_i}(b_i,\mu_{M,i}) \] Now consider again the diagram (\ref{diag correspondence}). As we have $\dim \Mscr_P(b,\mu) = \dim \Mscr_G(b,\mu)$, it remains to calculate the dimension of the fibres of $p_M$ to relate the dimension of $\Mscr_G(b,\mu)$ to the dimension of $\Mscr_M(b,\mu)$. For this we consider the $k$-valued points of the diagram (\ref{diag correspondence}) with the natural action of $\Gamma_0 = \Gal(k/k_0)$ on the set of points. By section~\ref{ss RZ spaces} we have \begin{eqnarray*} \Mscr_G(b,\mu)(k) &=& \{g \in G(L)/G(O_L); gb\sigma(g)^{-1} \in G(O_L)\mu(p)G(O_L)\} \\ \Mscr_P(b,\mu)(k) &=& \{g \in P(L)/P(O_L);\, gb\sigma(g)^{-1} \in G(O_L)\mu(p)G(O_L)\} \\ \Mscr_M(b,\mu_M)(k) &=& \{m \in M(L)/M(O_L);\, m b\sigma(m)^{-1} \in G(O_L) \mu(p) G(O_L)\} \end{eqnarray*} and \begin{eqnarray*} \quad\quad\,\,\,\,\Mscr_{M,b,\mu_M}(k) &=& \{ m \in M(L)/M(O_L);\, mb\sigma(m)^{-1} \in M(O_L)\mu_M(p)M(O_L)\} \end{eqnarray*} with the canonical $\Gamma_0$-action. Now the diagram (\ref{diag correspondence}) induces \begin{equation} \begin{tikzcd} \label{diag correspondence 2} & \Mscr_P(b,\mu)(k) \arrow{ld}[swap]{p_M: mnP(O_L) \mapsto mM(O_L)} \arrow{rd}{i_P: gP(O_L) \mapsto gG(O_L)} \\ \Mscr_M(b,\mu)(k) & & \Mscr_G(b,\mu)(k) \end{tikzcd} \end{equation} Here $p_M$ uses the decomposition $P \cong M\times N$ where $N$ denotes the unipotent radical of $P$. \begin{proposition} \label{prop fibre dimension} Let $x \in \Mscr_{M,b,\mu_M}$. Then \[ \dim p_M^{-1}(x) = \langle\rho, \mu-\nu_G(b) \rangle -\langle \rho_M, \mu_M\rangle. \] where $\rho$ resp.\ $\rho_M$ denotes the half-sum of all positive (absolute) roots in $G$ resp.\ $M$. \end{proposition} Before we can finally prove this proposition at the end of section~\ref{sect fibre dimension}, we need to establish some notions and lemmas. We denote $\Fscr := p_M^{-1}(x)$. Choose an element $m \in M(L)$ such that $x = mM(O_L)$. If we replace $\Lambda$ by $m\cdot\Lambda$, we may assume that $m = \id$. Note that this replaces $b$ by a $\sigma$-conjugate. However, the formula above shows that the dimension of $\Fscr$ does not change if we replace $b$ by a $\sigma$-conjugate. \section{Numerical dimension of subsets of $N(L)$} \label{sect numerical dimension} Let $\tilde{\Fscr}$ denote the preimage of $\Fscr(k)$ in $N(L)$ with respect to the canonical projection $N(L) \twoheadrightarrow N(L)/N(O_L)$. 
In this section we define the notion of ``numerical dimension'' for certain subsets of $N(L)$ and show that with respect to this notion the dimension of $\tilde\Fscr$ coincides with the dimension of $\Fscr$. We will calculate the numerical dimension of $\tilde{\Fscr}$ and thus prove Proposition \ref{prop fibre dimension} in the next section. \subsection{The concept of numerical dimension} \label{ss idea} Let us first sketch the idea behind the numerical dimension. The starting point is the following result of Lang and Weil. If $X$ is a variety over $\FF_q$ of dimension $d$, then the cardinality of $X(\FF_{q^s})$ grows at the same rate as $q^{s\cdot d}$. In particular the dimension of $X$ is uniquely determined by the $\Gal(k/\FF_q)$-set $X(k)$. Now if we tried to calculate the dimension of $\Fscr$ by applying this theorem and counting points, we would run into two problems. First of all, $\Fscr$ is only locally of finite type. We will solve this problem by defining an exhausting filtration of $N(L)$ by so-called bounded subsets which correspond to closed quasi-compact (and thus finite type) subschemes of $\Fscr$. The second problem is that Galois-sets without a geometric background are in general too badly behaved to work with directly. Here the general idea is that by applying the theorem of Lang and Weil twice, one sees that any finite type scheme $Y$ with $Y(k) \cong X(k)$ as Galois-sets has the same dimension as $X$. This applies to our situation as follows. Let $(\Fscr_m)_{m\in\NN}$ be the above mentioned filtration by finite type subschemes and $(\tilde\Fscr_m)_{m\in\NN}$ be the corresponding bounded subsets in $N(L)$. Unfortunately, it is not known (and probably not true) whether there is any good structure of an ind-scheme of ind-finite type on $N(L)/N(O_L)$ which induces a useful scheme structure on $\tilde\Fscr_m/N(O_L)$. Therefore we will use the following work-around. We replace $\tilde\Fscr_m$ by its image $c(\tilde\Fscr_m)$ under conjugation by a suitable semisimple element of $M(L)$ such that $c(\tilde\Fscr_m)$ is contained in $N(O_L)$. Here we have a canonical structure of affine spaces on the quotients $N(O_L/p^j)$ given by the truncated $p$-adic loop groups. We will prove that $c(\tilde\Fscr_m)$ is the full preimage of a locally closed subvariety $\Ybar$ of $N(O_L/p^j)$ if $j$ is big enough. Thus we can define the numerical dimension of $\tilde\Fscr_m$ via the dimension of $\Ybar$. We note that $\tilde\Fscr_m/N(O_L)$ will in general not be isomorphic to $\Ybar(k)$ (yet it will be the analogue of an affine fibration over $\Ybar(k)$). This is because we have shrunk $\tilde\Fscr_m$ by applying $c$ and have only divided out a subgroup of $N(O_L)$ instead of $N(O_L)$ itself. We can (and will) compensate for these deviations by adding a constant to $\dim \Ybar$ depending on $c$ and subtracting the dimension of $N(O_L/p^j)$ as a variety. This definition of the numerical dimension will give us the same value as we would get if we were counting points (cf.~Proposition~\ref{prop nd counting points}), but has the advantage that it also allows us to use some tools from geometry. \subsection{Notation and conventions} Let $G$ be a reductive group scheme over $\ZZ_p$. We use the same notation as in section~\ref{ss group theory}. Moreover, we denote $I := \Gal (O_F/\ZZ_p)$. In particular the action of $\Gamma$ on $X^*(T)$ and $X_*(T)$ factorizes through $I$. For any $\alpha \in R$ denote by $\gfr^\alpha$ the corresponding weight space in the Lie algebra of $G_{O_F}$ and by $U_\alpha$ the corresponding root subgroup.
By the uniqueness of $U_\alpha$ (\cite{SGA3-3}~Exp.~XXII~Thm.~1.1) the action of an element $\tau \in I$ on $G_{O_F}$ maps $U_\alpha$ isomorphically onto $U_{\tau(\alpha)}$. We fix a standard parabolic subgroup $P = MN$ of $G$. We write $K = G(O_L), K_M = M(O_L)$ and $K_N = N(O_L)$. Now \[ \Lie P_{O_F} = \bigoplus_{\alpha\in R'} \gfr^\alpha \] for a closed $I$-stable subset $R'\subset R$. We denote $R_N := \{\alpha \in R^+\mid -\alpha \not\in R'\}$. Then multiplication in $G$ defines an isomorphism of schemes \begin{equation} \label{term N isom} N_{O_F} \cong \prod_{\alpha \in R_N} U_\alpha \end{equation} where the product is taken with respect to an arbitrary (but fixed) total order on $R_N$ (cf.~\cite{SGA3-3} Exp.~XXVI~ch.~1). Let $\delta_N$ be the sum of all fundamental coweights corresponding to simple roots in $R_N$. For $i \geq 1$ we define the group schemes \begin{eqnarray*} N[i] &:=& \prod_{\alpha\in R_N \atop \langle \alpha, \delta_N\rangle \geq i} U_\alpha \subseteq N \\ N\langle i \rangle &:=& \bigslant{N[i]}{N[i+1]} \end{eqnarray*} We note that the sets $\{\alpha\in R_N\mid \langle \alpha, \delta_N \rangle \geq i\}$ are $I$-stable, thus the $I$-action permutes the $U_\alpha$ in the above product. Now the commutator $[u_\alpha,u_\beta]$ of two elements $u_\alpha\in U_\alpha, u_\beta\in U_\beta$ is contained in $\prod U_{i\alpha+j\beta}$, which is contained in $N[i]$ if $U_\alpha$ and $U_\beta$ are. We conclude that the $I$-action on $N$ stabilizes $N[i]$. Thus $N[i]$ descends to $\ZZ_p$ and a posteriori also $N\langle i \rangle$. Also note that the canonical isomorphism of schemes \[ N\langle i \rangle \cong \prod_{\alpha \in R_N \atop \langle \alpha, \delta_N \rangle = 1} U_\alpha \] is in fact an isomorphism of group schemes. Let $\lambda$ be a regular dominant coweight of $T$ defined over $\ZZ_p$. For $i\in\ZZ$ we define $N(i) := \lambda(p^i)N(O_L)\lambda(p^{-i})$. This defines an exhausting filtration \[ \ldots N(-2) \subset N(-1) \subset N(0) \subset N(1) \subset N(2) \subset \ldots \] of $N(L)$. For every subset $Y \subset N(L)$ we denote $Y(i) := \lambda(p^i)Y\lambda(p^{-i})$. For any algebraic group $H$ over $\ZZ_p$ and any integer $i$ we denote $H_i = \ker (H(O_L) \epi H(O_L/p^iO_L))$. In particular, we get a second filtration \[ N(0) = N_0 \supset N_1 \supset N_2 \supset \ldots \] of $N(0)$. \begin{remark} \label{rem N(j)} The isomorphism (\ref{term N isom}) induces an homeomorphism \[ N(L) \cong \prod_{\alpha\in R_N} L \] which yields identifications \begin{eqnarray*} N(j) &=& \prod_{\alpha\in R_N} p^{j\langle \alpha, \delta_N\rangle} O_L \\ N_j &=& \prod_{\alpha\in R_N} p^jO_L \end{eqnarray*} \end{remark} \begin{definition} A subset $Y\subset N(L)$ is called bounded if it is contained in $N(-j)$ for an $j \in \ZZ$. \end{definition} \begin{lemma} \label{lem bounded} A subset $Y \subset N(L)$ is bounded if and only if it is bounded as a subset of $G(L)$ in the sense of Bruhat and Tits. \end{lemma} \begin{proof} To avoid confusion we temporarily call $Y$ ``BT-bounded'' if it is bounded in the sense of Bruhat and Tits. By definition a subset $Y \subset N(L)$ is BT-bounded if and only if $\val_p (f(Y))$ is bounded from below for every $f\in \Gamma(G,\Oscr_G)$, or equivalently for every $f\in\Gamma(N,\Oscr_N)$. Using the isomorphism $N_L \cong \AA_L^{R_N}$, we may reformulate the condition as follows. The set $Y \subset L^{R_N}$ is BT-bounded if and only if $\val_p (f(Y))$ is bounded from below for every $f\in L[X_\alpha]_{\alpha\in R_N}$. 
It obviously suffices to check this for the coordinate functions. Thus $Y$ is BT-bounded if and only if $Y\subset(p^{-k}O_L)^{R_N}$ for some integer $k$. But this is equivalent to $Y$ being bounded by the description given in Remark~\ref{rem N(j)}. \end{proof} \subsection{Admissible subsets of $N(0)$ and the $p$-adic loop group} As a first step we now define the numerical dimension for a family of subsets of $N(0)$. We will extend this definition to certain subsets of $N(L)$ (called ``admissible'') in the next subsection. Before that we give a reminder on $p$-adic loop groups. \begin{definition} Let $H$ be an affine smooth group scheme over $\ZZ_p$. \begin{subenv} \item The $p$-adic loop group is the ind-affine ind-scheme over $\FF_p$ representing the functor \[ L_pH (R) = H(W(R)_\QQ). \] \item The positive $p$-adic loop group is the affine scheme over $\FF_p$ with $R$-valued points \[ L_p^+H(R) = H(W(R)). \] \item Let $j$ be a positive integer. We define the positive $p$-adic loop group truncated at level $j$ as the affine scheme of finite type over $\FF_p$ representing the functor \[ L_{p}^{+,j}H(R) = H(W_j(R)). \] \end{subenv} \end{definition} For the proof that the functors above can be represented as claimed, we refer the reader to \cite{kreidl}~sect.~3. \begin{lemma} \label{lem truncation map is open} Let $H$ be an affine smooth group scheme over $\ZZ_p$. The truncation maps $t_j:L_p^+ H \to L_p^{+,j} H$ are open and surjective. \end{lemma} \begin{proof} We have for any $\FF_p$-algebra $R$ \begin{eqnarray*} L_p^+H(R) &=& \Mor( \Spec \varprojlim W_j(R), H) \\ &=& \varinjlim \Mor(\Spec W_j(R), H) \\ &=& \varinjlim L_p^{+,j}H(R). \end{eqnarray*} Thus $L_p^+H$ is the projective limit of the $L_p^{+,j}H$ and the truncation maps are the canonical projections. In particular we have an homeomorphism of the underlying topological spaces $L_p^+ H \cong \varprojlim L_p^{+,j}H$ by \cite{EGA4-3}, Cor.~8.2.10. Thus it remains to show that the transition maps are surjective. But this follows from the infinitesimal lifting property, as $H$ is smooth. \end{proof} We note that $L_pH(k) = H(L)$, $L_p^+H(k) = H(O_L)$ and that we have a canonical isomorphism of $\Gamma$-groups \[ L_{p}^{+,j}H(k) \cong H_0/H_j. \] In our future considerations we equip the quotient $N_0/N_j$ with the structure of a variety over $\FF_p$ via this identification. We denote \[ d_j := \dim L_p^{+,j}N \] Now our considerations in subsection~\ref{ss idea} motivate the following definition. \begin{definition} \label{def admissible} \begin{subenv} \item A subset $Y \subset N_0$ is called admissible if there exists a positive integer $j$ such that $Y$ is the preimage of a locally closed subset $\overline{Y} \subset N_0/N_j$. \item We define the numerical dimension of an admissible subset as \[ \nd Y := \dim \Ybar - d_j \] with $\Ybar$ and $j$ as above. \end{subenv} \end{definition} Note that the definition of the numerical dimension is certainly independent of the choice of $j$. \subsection{Admissible and ind-admissible subsets of $N(L)$} It is a straightforward idea to define admissibility of bounded subsets of $N(L)$ by checking admissibility for a $\lambda(p^i)$-conjugate which is a subset of $N_0$. For this definition to make sense, we have to check that any $\lambda(p^i)$-conjugate of an admissible subset of $N_0$ is again admissible. \begin{lemma} \label{lem admissible is well-defined} Let $Y \subset N_0$ and $i$ be a positive integer. \begin{subenv} \item $Y(i)$ is admissible if and only if $Y$ is admissible. 
\item If $Y$ is admissible then $Y \cap N(i)$ is admissible. \end{subenv} \end{lemma} \begin{proof} \emph{(1)} Assume first that $Y$ is admissible. Let $j,\overline{Y}$ as in Definition \ref{def admissible} and let $\Yscr := t_j^{-1}(\overline{Y})$ where $\overline{Y}$ is regarded as sub\emph{scheme} of $L_p^{+,j}N$. Then $\Yscr$ is a locally closed subset of $L_p^+N$, which we equip with the reduced subscheme structure. Since $L_p^+N$ is a closed subfunctor of $L_pN$, the functor $\Yscr$ is also locally closed in $L_pN$. Now conjugation with $\lambda (p^i)$ defines an automorphism of $L_pN$, hence $\Yscr' := \lambda(p^i)\Yscr\lambda(p^{-i})$ is again a locally closed subfunctor of $L_pN$ and thus a locally closed subscheme of $L_p^+N$. We note that $Y(i) = \Yscr'(\Spec k)$ and that $\Yscr'$ is the preimage of a subset $\overline{Y}' \subset L_p^{+,j'}N$ for $j'$ big enough such that $N_{j'} \subset \lambda(p^i)N_j\lambda(p^i)$ (One checks the second assertion on geometric points). Thus $\Yscr' = t_{j'}^{-1}(t_{j'} \Yscr')$, hence we have $t_ {j'}(\overline{\Yscr'}) = \overline{t_{j'}(\Yscr')}$ as $t_{j'}$ is open and surjective. So the restriction $t_{j'}: \overline{\Yscr'} \rightarrow \overline{t_{j'}(\Yscr')}$ is again open, in particular $\Ybar'=t_{j'}(\Yscr)$ is open in its closure. Hence $Y'$ is admissible. The other direction has the same proof. \emph{(2)} We have seen that $N(i)$ is admissible and obviously the intersection of two admissible sets is again admissible, proving the claim. \end{proof} \begin{definition} \begin{subenv} \item A subset $Y \subset N(L)$ is called admissible, if $Y \subset N(-k)$ for some non-negative integer $k$ and $\lambda(p^k)Y\lambda(p^{-k})$ is admissible in the sense of Definition~\ref{def admissible} \item A subset $Y \subset N(L)$ is called ind-admissible if $Y \cap N(-k)$ is admissible for all non-negative integers $k$. \end{subenv} \end{definition} We note that by Lemma~\ref{lem admissible is well-defined}~(1), the definition of admissibility does not depend on the choice of the integer $k$ and by Lemma~\ref{lem admissible is well-defined}~(2) that every admissible subset is also ind-admissible. Before we can introduce our new notion of dimension, we have to show a few auxiliary results to ensure that our definition will be well-defined. First, we recall a result of Lang and Weil which reduces the calculation of the dimension of certain schemes to the counting of points. \begin{definition} Let $q$ be a $p$-power and let $f,g$ be two functions defined on the set of $q$-powers with values in the non-negative integers. Then we write $f \sim g$ if there exist positive real constants $C_1,C_2$ with $ C_1 f (q^n) \leq g(q^n) \leq C_2 f(q^n)$ for $n \gg 0$. \end{definition} \begin{proposition} \label{prop lang weil} Let $V$ be a scheme of finite type over $\FF_q$. Then \[ \# V(\FF_{q^n}) \sim q^{n \dim V}. \] \end{proposition} \begin{proof} This is an easy consequence of \cite{LW54}, Thm.~1. \end{proof} \begin{lemma} \label{lem nd is well-defined} Let $Y \subset N(0)$ be admissible and let $i$ be a positive integer. Then \[ \nd Y(i) = \nd Y - 2 \langle\rho_N, i \lambda\rangle. \] \end{lemma} \begin{proof} Choose $j$ such that $Y(i)$ (and thus $Y$) is $N_j$-stable and denote by $\Ybar$ (resp. $\Ybar(i)$) their images in $N_0/N_j$. Let $\cbar_i: N_0/N_j \to N_0/N_j$ be the morphism induced by conjugation with $\lambda(p^i)$. Then $\Ybar$ is the full preimage of $\Ybar(i)$ w.r.t. 
$\cbar_i$ and the restriction $\cbar_{i | \Ybar}: \Ybar \to \Ybar(i)$ is surjective. Thus \[ \dim \Ybar = \dim \Ybar (i) + \dim\ker \cbar_i. \] Now by Proposition~\ref{prop lang weil} we have \[ q_F^{(\dim \ker \cbar_i)\cdot s} \sim \prod_{\alpha\in R_N} \#\ker(W_j(\FF_{q_F^s}) \stackrel{\cdot p^{\langle\alpha, i\lambda\rangle}}{\lto} W_j(\FF_{q_F^s})) = \prod_{\alpha\in R_N} q_F^{(\min\{j,\langle\alpha, i \lambda\rangle\})\cdot s}. \] Thus for $j$ big enough, which we may assume (and which is actually automatic), \[ \dim \ker \cbar_i = \sum_{\alpha\in R_N} \langle\alpha, i\lambda\rangle = 2 \langle \rho_N, i \lambda \rangle. \] \end{proof} We denote \[ d(i) = 2 \langle\rho_N, i\lambda\rangle. \] \begin{definition} \begin{subenv} \item Let $Y \subset N(-k)$ be admissible. We define the numerical dimension of $Y$ as \[ \nd Y := \nd Y(k) + d(k). \] \item The numerical dimension of an ind-admissible subset $Y \subset N(L)$ is defined as \[ \nd Y = \sup_{k>0} \nd (Y \cap N(-k)). \] \end{subenv} \end{definition} \begin{corollary} \label{cor calculation of nd} Let $Y \subset N(L)$ be admissible and $m\in M(L)$. Then $mYm^{-1}$ is admissible and has numerical dimension $\nd Y -2\langle \rho_N, \nu_M(m)\rangle$. \end{corollary} \begin{proof} By the Cartan decomposition it suffices to consider the following two cases. \noindent If $m = \lambda'(p)$ for a coweight $\lambda'$ which is dominant w.r.t.\ $T_{O_L} \subset B_{O_L} \subset M_{O_L}$ then the claim follows as in Lemma~\ref{lem nd is well-defined}. \noindent If $m \in M(O_L)$ then conjugation with $m$ stabilizes the $N_j$, thus the admissibility and the numerical dimension do not change. \end{proof} \subsection{Equality of $\dim \Fscr$ and $\nd \tilde\Fscr$} By Proposition~\ref{prop lang weil} the dimension of $\Fscr$ can be determined by knowing the cardinality of $\Fscr(k)^{\sigma_E^s}$ for any positive integer $s$. In order to relate $\dim \Fscr$ to $\nd \tilde\Fscr$ (under the assumption that the latter is well-defined) we need a similar assertion for the numerical dimension. For this we need to do a bit of preparatory work. \begin{lemma} Let $F\subset \QQ_{p^s}$. We have short exact sequences \begin{tikzcd} 0 \arrow{r} & N[i+1](L)^{\sigma^s} \arrow{r} & N[i](L)^{\sigma^s} \arrow{r} & N\langle i \rangle(L)^{\sigma^s} \arrow{r} & 0 \end{tikzcd} \noindent and \begin{tikzcd} 0 \arrow{r} & N[i+1](O_L)^{\sigma^s} \arrow{r} & N[i](O_L)^{\sigma^s} \arrow{r} & N\langle i \rangle(O_L)^{\sigma^s} \arrow{r} & 0. \end{tikzcd} \end{lemma} \begin{proof} This is an easy consequence of the description of $N[i]$ resp.\ $N\langle i \rangle$ as a product of root groups. \end{proof} \begin{lemma} Let $F \subset \QQ_{p^s}$. For any $i$, the map $N[i](L)^{\sigma^s} \rightarrow (N[i](L)/N[i](O_L))^{\sigma^s}$ is surjective. \end{lemma} \begin{proof} We prove the claim by descending induction on $i$. For the induction beginning, choose $i \gg 0$ such that $N[i] = 0$. Then the claim is certainly true. Now assume that our assertion is true for $i+1$.
Then we get a commutative diagram \begin{tikzcd}[column sep = small] 0 \arrow{r} & N[i+1](O_L)^{\sigma^s} \arrow{r}\arrow{d} & N[i](O_L)^{\sigma^s} \arrow{r}\arrow{d} & N\langle i \rangle(O_L)^{\sigma^s} \arrow{r}\arrow{d} & 0 \\ 0 \arrow{r} & N[i+1](L)^{\sigma^s} \arrow{r}{\phi}\arrow[two heads]{d} & N[i](L)^{\sigma^s} \arrow{r}{\psi}\arrow{d} & N\langle i \rangle(L)^{\sigma^s} \arrow{r}\arrow[two heads]{d} & 0 \\ & \bigslant{N[i+1](L)}{N[i+1](O_L)}^{\sigma^s} \arrow{r}{\overline{\phi}} & \bigslant{N[i](L)}{N[i](O_L)}^{\sigma^s} \arrow{r}{\overline{\psi}} & \bigslant{N\langle i \rangle(L)}{N\langle i \rangle (O_L)}^{\sigma^s}. \end{tikzcd} One easily checks that $\overline{\phi}$ is injective, $\overline{\psi}$ surjective and that $\image \overline{\phi} = \ker \overline{\psi}$ in the category of pointed sets. We choose an element $\overline{n} \in (N[i](L)/N[i](O_L))^{\sigma^s}$. By diagram chasing we find an $n\in N(L)^{\sigma^s}$ such that $n$ and $\overline{n}$ have the same image $\hat{n}$ in $(N\langle i \rangle (L) / N\langle i \rangle (O_L))^{\sigma^s}$. Thus $n^{-1} \cdot \overline{n} \in \ker \overline{\psi}$ and we find $n' \in N[i+1](L)^{\sigma^s}$ such that $n'$ is mapped to $n^{-1} \cdot \overline n$. Hence $n\phi(n') \in N(L)^{\sigma^s}$ is mapped to $\overline{n}$, finishing the proof. \end{proof} \begin{corollary} \label{cor nd is well-defined} Let $F \subset \QQ_{p^s}$. For any integer $j$, the map $N(L)^{\sigma^s} \rightarrow (N(L)/N(j))^{\sigma^s}$ is surjective. \end{corollary} \begin{proof} By conjugating with $\lambda(p)$, we see that it suffices to prove the claim for $j=0$. Now our assertion coincides with the assertion of the previous lemma for $i=0$. \end{proof} \begin{lemma} \label{lem N_j is weakly admissible} The canonical map $N_0^{\sigma_F} \rightarrow (N_0/N_j)^{\sigma_F}$ is surjective. \end{lemma} \begin{proof} (1) We show that $H^1_\cont (\Gamma_F, N[i]_j) = 0$ by descending induction on $i$. In particular for $i=0$ we get $H^1_\cont (\Gamma_F,N_j) = 0$ and the claim of the lemma follows. For $i \gg 0$ we have $N[i]_j = 0$ and thus $H^1_\cont(\Gamma_F,N[i]) = 0$. Now the short exact sequence \begin{tikzcd} 0 \arrow{r} & N[i+1]_j \arrow{r} & N[i]_j \arrow{r} & N\langle i \rangle_j \arrow{r} & 0 \end{tikzcd} \noindent gives the exact sequence \begin{tikzcd} H^1_\cont(\Gamma, N[i+1]_j) \arrow{r} & H^1_\cont(\Gamma, N[i]_j) \arrow{r} & H^1_\cont(\Gamma, N\langle i \rangle_j). \end{tikzcd} \noindent Under the identification $N\langle i \rangle (L) \cong \prod L$ the group $N\langle i \rangle_j$ is identified with $\prod p^j O_L$. So $N\langle i \rangle_j$ is isomorphic to a finite product of copies of $O_L$ and thus has trivial cohomology. We may assume that $H^1_\cont(\Gamma,N[i+1]) = 0$ by induction assumption, then the sequence above implies that also $H^1_\cont(\Gamma_F,N[i])=0$. \end{proof} \begin{proposition}\label{prop nd counting points} Let $Y \subset N(L)$ be admissible and $N(i)$-stable, assume that $F$ is big enough such that $Y$ is $\sigma_F$-stable. Then \[ (\bigslant{Y}{N(i)})^{\sigma_F^s} \sim q^{(\nd Y + d(i)) \cdot s}. \] \end{proposition} \begin{proof} Choose $k$ such that $Y \subset N(-k)$. Then conjugation with $\lambda(p^k)$ induces an isomorphism of $\Gamma_F$-sets \[ \bigslant{Y}{N(i)} \cong \bigslant{Y(k)}{N(k+i)} \] We choose $j$ such that $N_j \subset N(k+i)$. Let $\pi^{(s)}: (Y(k)/N_j)^{\sigma_F^s} \to (Y(k)/N(k+i))^{\sigma_F^s}$ be the map induced by the canonical projection. We get a commutative diagram. 
\begin{center} \begin{tikzcd}[column sep = large] & Y(k)^{\sigma_F^s} \arrow[two heads]{dr} \arrow[two heads]{dl} & \\ (\bigslant{Y(k)}{N_j})^{\sigma_F^s} \arrow{rr}{\pi^{(s)}} & & (\bigslant{Y(k)}{N(k+i)})^{\sigma_F^s} \end{tikzcd} \end{center} Thus $\pi^{(s)}$ is surjective. Each of its fibres is canonically bijective to \[ \bigslant{N(k+i)^{\sigma_F^s}}{N_j^{\sigma_F^s}} \cong \prod_{\alpha\in R_N} \bigslant{(p^{(i+k)\cdot \langle \delta_N,\alpha\rangle} O_{F_s})}{p^j O_{F_s}} \] where $F_s$ denotes the (unique) unramified extension of $F$ of degree $s$. In particular every fibre has $q^{(d_j - d(i+k))\cdot s}$ elements. Altogether, \begin{eqnarray*} \# (Y/N(k))^{\sigma_F^s} &=& \# (Y(i)/N(k+i))^{\sigma_F^s} \\ &=& \# (Y(k)/N_j)^{\sigma_F^s} \cdot q_F^{(d(k+i)-d_j)\cdot s} \\ &\sim& q_F^{(\nd Y(k) + d_j) \cdot s} \cdot q_F^{(d(k+i)-d_j) \cdot s} \\ &=& q_F^{(\nd Y + d(i)) \cdot s}. \end{eqnarray*} \end{proof} \begin{proposition} \label{prop numerical dimension equals dimension} Assume that $\tilde\Fscr$ is ind-admissible. Then \[ \dim \Fscr = \nd \tilde\Fscr. \] \end{proposition} \begin{proof} The obstacle that prevents us from applying Proposition \ref{prop lang weil} directly to $\Fscr$ is the fact that $\Fscr$ is not quasi-compact in general. Thus our method of proof is to find a filtration of $\Fscr$ by quasi-compact subschemes and compare it to the filtration of $\tilde\Fscr$ by $\tilde\Fscr \cap N(-k)$. Now as $\height\rho_{|\XX_i}$ is constant on $\Fscr$ for every $i$, the restriction of $i_P$ defines an isomorphism of $\Fscr$ onto its image in $\Mscr_G(b,\mu)$. Thus by \cite{RZ96}, Cor.~2.31 such a filtration is given by \[ \Fscr_{-k} := \{ (X,\rho) \in \Fscr \mid p^k \rho \textnormal{ and } p^k \rho^{-1} \textnormal{ are isogenies}\}. \] Its preimage in $N(L)$ is \[ \tilde\Fscr_{-k} := \{n \in N(L); p^k\Lambda \subset n \Lambda \subset p^{-k} \Lambda\}. \] Now by Proposition \ref{prop lang weil} and \ref{prop nd counting points} we have \[ \dim \Fscr = \sup \dim \Fscr_{-k} = \sup \nd \tilde\Fscr_{-k}. \] It remains to show that the filtrations $(\tilde\Fscr_{-k})_{k\in\NN}$ and $(\tilde\Fscr \cap N(-k))_{k\in\NN}$ are refinements of each other. Indeed, by Bruhat-Tits theory the sets $\tilde\Fscr_{-k}$ are bounded and any bounded subset of $\tilde\Fscr$ is contained in one of the sets $\tilde\Fscr_{-k}$. \end{proof} \section{Calculation of the fibre dimension} \label{sect fibre dimension} Let $K := G(O_L)$ and $K_M := M(O_L)$. Analogous to \cite{GHKR06} we define the function $f_{m_1,m_2}$ for $m_1,m_2 \in M(L)$ by \[ f_{m_1,m_2}: N(L) \to N(L), n \mapsto m_1 n^{-1} m_1^{-1} \cdot m_2 \sigma(n)m_2^{-1}. \] Then we have \begin{eqnarray*} \tilde\Fscr &=& \{n \in N(L)\mid n^{-1}b\sigma(n) \in K\mu(p)K\} \\ &=& \{n \in N(L)\mid n^{-1}b\sigma(n)b^{-1} \in K\mu(p)Kb^{-1} \cap N(L)\} \\ &=& f_{1,b}^{-1} (K\mu(p) Kb^{-1} \cap N(L)). \end{eqnarray*} Hence we divide the computation of $\nd \tilde\Fscr$ into two steps: \begin{itemlist} \item We have to show that $K\mu(p)Kb^{-1} \cap N(L)$ is admissible and compute its dimension. \item We have to calculate the difference $\nd f_{m_1, m_2}^{-1} Y - \nd Y$ for admissible $Y \subset N(L)$. \end{itemlist} The maps $f_{m_1,m_2}$ are defined in greater generality than the functions we actually need, but this will turn out to be an advantage in the second step. 
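Before carrying out these two steps, it may be instructive to see what $f_{m_1,m_2}$ looks like in the simplest possible (purely illustrative) example: take $G = \GL_2$ with $M = T$ the diagonal torus and $N$ the group of upper triangular unipotent matrices. Writing \[ n = \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix}, \qquad m_1 = \begin{pmatrix} u_1 & \\ & v_1 \end{pmatrix}, \qquad m_2 = \begin{pmatrix} u_2 & \\ & v_2 \end{pmatrix}, \] a direct computation gives \[ f_{m_1,m_2}(n) = \begin{pmatrix} 1 & \tfrac{u_2}{v_2}\sigma(x) - \tfrac{u_1}{v_1}x \\ 0 & 1 \end{pmatrix}. \] Under the identification $N(L) \cong L$ the map $f_{m_1,m_2}$ is thus the difference of a $\sigma$-linear bijection and a linear isomorphism; maps of precisely this form are studied in Proposition~\ref{prop isocrystal} below.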
\subsection{Admissibility and dimension of $K\mu(p)Kb^{-1} \cap N(L)$} \label{ssect fibre dimension 1} We note that we have two notions of dominant elements in $X_*(T)$, one coming from the Killing pair $T\subset B$ in $G$ and one coming from $T \subset B\cap M$ in $M$. Let $X_*(T)_\dom$ resp.\ $X_*(T)_{M-\dom}$ denote the set of cocharacters that are dominant in $G$ resp.\ $M$. For $\mu_M \in X_*(T), \mu \in X_*(T)_{\dom}$, we denote \[ C(\mu,\mu_M) := (N(L)\mu_M(p)K \cap K\mu(p)K)/K \] considered as $\Gamma_F$-set. In order to calculate the numerical dimension of $K\mu(p)Kb^{-1} \cap N(L)$, we first study the sets $C(\mu,\mu_M)$. We denote for $\mu,\mu_M$ as above and $\kappa \in \pi_1(M)_\Gamma$ \begin{eqnarray*} S_M(\mu) &:=& \{\mu_M \in X_*(T)_{M-\dom}\mid C(\mu,\mu_M) \not= \emptyset\} \\ S_M(\mu,\kappa) &:=& \{\mu_M \in S_M(\mu,\kappa)\mid \kappa_M(\mu_M)=\kappa\}. \end{eqnarray*} We will compare these sets to \begin{eqnarray*} \Sigma(\mu) &:=& \{\mu' \in X_*(T)\mid \mu'_\dom \leq \mu\} \\ \Sigma(\mu)_{M-\dom} &:=& \{\mu' \in \Sigma(\mu)\mid \mu' \in X_*(T)_{M-\dom} \} \\ \Sigma(\mu)_{M-\mmax} &:=& \{\mu' \in \Sigma(\mu)_{M-\dom}\mid \mu' \textnormal{ is maximal w.r.t.\ the Bruhat-order of } M\}. \end{eqnarray*} \begin{lemma} There are inclusions \[ \Sigma(\mu)_{M-\mmax} \subset S_M(\mu) \subset \Sigma(\mu)_{M-\dom}. \] \end{lemma} \begin{proof} This is the $p$-adic analogue of Lemma 5.4.1 in \cite{GHKR06}. The proof is literally the same. \end{proof} \begin{remark} Assume that we are in the situation of Prop.~\ref{prop fibre dimension}. Then \[ I_{M,\mu,b} = \{\mu_M\in\Sigma(\mu)_{M-\dom}\mid \kappa_M(\mu_M) = \kappa_M(b)\} = \{\mu \in \Sigma(\mu)_{M-\mmax}\mid \kappa_M(\mu_M) = \kappa (b)\}, \] where the first equality follows from the fact that $b$ is superbasic in $M(L)$ and the second equality holds because $\mu$ is minuscule. Now the above lemma implies that $I_{M,\mu,b} = S_M(\mu,\kappa(b))$. \end{remark} \begin{proposition} \label{prop GHKR 544} \begin{subenv} \item Let $\mu$ be a dominant coweight. Then for every $\mu_M \in S(\mu)$ there exists an integer $d(\mu,\mu_M)$ such that \[ C(\mu,\mu_M)^{\sigma_F^s} \sim q_F^{d(\mu,\mu_M) \cdot s}. \] \item We have an inequality \begin{equation} \label{term d(mu,mu_M)} d(\mu,\mu_M) \leq \langle \rho, \mu+\mu_M\rangle - 2\langle \rho_M,\mu_M \rangle. \end{equation} \item If $\mu_M \in \Sigma(\mu)_{M-\mmax}$, then the inequality (\ref{term d(mu,mu_M)}) is an equality. \end{subenv} \end{proposition} \begin{proof} The function field analogue of (1) and (2) are proven in \cite{GHKR06}, Prop.\ 5.4.2. Its proof determines the number of points \[ \# \left(\bigslant{N(\FF_q\rpot{t}) \mu_M(t) G(\FF_q\pot{t}) \cap G(\FF_q\pot{t}) \mu(t) G(\FF_q\pot{t})}{G(\FF_q\pot{t})}\right) \] for split groups, which still works in the $p$-adic case. Now (1) and (2) follow by applying the proof to the (split) group $G_{O_F}$ and (3) is the analogue of Corollary 5.4.4 in \cite{GHKR06}. \end{proof} \begin{proposition} \label{prop first step} \begin{subenv} \item The set $K\mu(p)Kb \cap N(L)$ is admissible. \item Let $b \in K_M\mu_M(p)K_M$. Then \[ \nd K\mu(p) Kb \cap N(L) = d(\mu,\mu_M) - 2\langle \rho_N, \nu(b) \rangle \] \end{subenv} \end{proposition} \begin{proof} The set $K\mu(p) Kb \cap N(L)$ is bounded by Lemma~\ref{lem bounded}. Choose $k$ such that it is contained in $N(-k)$. We denote $Y' := \lambda(p^k)(K\mu(p)Kb \cap N(L))\lambda(p^{-k})$. Let $j$ be big enough such that $N_j \subset \lambda(p^k)b^{-1}N_0b\lambda(p^{-k})$. 
Then $Y'$ is right-$N_j$-stable. For every $\FFbar_p$-algebra $R$ and $g \in L_pG(R)$ the subset \[ \{s\in\Spec R\mid g_{\overline{k(s)}} \in L_p^+(\overline{k(s)}) \mu(p) L_p^+(\overline{k(s)})\} \] is locally closed in $\Spec R$ (cf.~\cite{CKV}~Lemma~2.1.6). Here $k(s)$ denotes the fraction field at $s$. Thus the set of all $s\in L_p^+N$ whose geometric points are an element of \[ \lambda(p^k) (L_p^+(\overline{k(s)})\mu(p)L_p^+(\overline{k(s)})b \cap L_pN(\overline{k(s)}))\lambda(p^{-k}) \] form a locally closed subset $\Yscr'$ of $L_p^+N$ with $\Yscr'(k) = Y'$. Furthermore, $\Yscr'$ is the preimage of some subset of $L_p^{+,j}N$ w.r.t.\ $t_j$. As we have seen in the proof of Lemma \ref{lem admissible is well-defined}, this implies that $Y'$ is admissible. Thus $K\mu(p) Kb \cap N(L)$ is admissible. Now the map $x \mapsto xb^{-1}$ induces an isomorphism of $\Gamma_F$-sets \[ \bigslant{N(L)bK \cap K\mu_M(p)K}{K} \stackrel{\sim}{\longrightarrow} \bigslant{N(L)\cap K\mu(p)Kb^{-1} }{bN(0)b^{-1}}. \] Now choose $k_M \in K_M$ such that $bK_M= k_M\mu_M(p)K_M$. Then multiplication by $k_M$ defines an isomorphism of $\Gamma_F$-sets \[ \bigslant{N(L)\mu_M(p)K \cap K\mu_M(p)K}{K} \stackrel{\sim}{\longrightarrow} \bigslant{N(L)bK \cap K\mu_M(p)K}{K}. \] Altogether, we get \begin{eqnarray*} \nd K\mu Kb \cap N(L) &=& d(\mu,\mu_M) + \nd bN(0)b^{-1} \\ &=& d(\mu,\mu_M) - \langle 2\rho_N, \nu(b)\rangle. \end{eqnarray*} The last equality is an easy consequence of Corollary~\ref{cor calculation of nd}. \end{proof} \subsection{Relative dimension of certain morphisms $f:L^n \rightarrow L^n$} Before we can continue with the second step of our proof, we need to explain how the analogue of section three and four of \cite{GHKR06} works in the $p$-adic case. Let $V$ be a finite dimensional vector space over $L$ and $\Lambda_2 \subset \Lambda_1$ be two lattices in $V$. We define the structure of a variety on $\Lambda_1/\Lambda_2$ as follows. By the elementary divisor theorem, we find a basis $v_1, \ldots, v_n$ of $\Lambda_1$ such that $\Lambda_2$ has a basis of the form $p^{\alpha_1}v_1, \ldots , p^{\alpha_n}v_n$. This induces an isomorphism \[ \bigslant{\Lambda_1}{\Lambda_2} \stackrel{\sim}{\longrightarrow} \prod W_{\alpha_i} (k), \quad \sum \beta_i v_i \mod \Lambda_2 \mapsto (\beta_i \mod p^{\alpha_i})_i. \] As $W_{\alpha_i}$ is represented by the scheme $\AA^{\alpha_i}$, this defines the structure of an affine space on $\Lambda_1/\Lambda_2$. The variety structure does not depend on the choice of $v_1,\ldots,v_n$: Let $w_1,\ldots,w_n$ another basis as above. Define $\phi$ such that the diagram \begin{tikzcd} \bigslant{\Lambda_1}{\Lambda_2} \arrow{r}{\id} \arrow{d}[swap]{\sum \beta_i v_i \mod \Lambda_2 \mapsto \atop \beta_i \mod p^{\alpha_i}} & \bigslant{\Lambda_1}{\Lambda_2} \arrow{d}{\sum \beta_i w_i \mod \Lambda_2 \mapsto \atop \beta_i \mod p^{\alpha_i}} \\ \prod W_{\alpha_i}(k) \arrow{r}{\phi} & \prod W_{\alpha_i}(k) \end{tikzcd} \noindent commutes. Now $\phi$ is $W(k)$-linear and hence can be expressed as family of polynomials in the coordinates of the truncated Witt vectors. So $\phi$ is a morphism of varieties. The same argument shows that $\phi^{-1}$ is also a morphism of varieties, thus $\phi$ is an isomorphism and the structure of an affine space on $\Lambda_1/\Lambda_2$ given by the bases $v_1,\ldots,v_n$ and $w_1,\ldots,w_n$ are the same. Now one can can define admissible resp.\ ind-admissible subsets of $V$ and their dimension literally as in \cite{GHKR06}. 
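As a trivial but perhaps clarifying illustration of this construction: for $V = L$, $\Lambda_1 = O_L$ and $\Lambda_2 = p^2 O_L$ one may take $v_1 = 1$ and $\alpha_1 = 2$, so that the above isomorphism becomes \[ \bigslant{O_L}{p^2 O_L} \stackrel{\sim}{\longrightarrow} W_2(k) \] and $\Lambda_1/\Lambda_2$ is endowed with the structure of the affine plane $\AA^2$, with the coordinates of the truncated Witt vectors as coordinates. For instance, the set of units $O_L^\times \subset O_L$ is the full preimage of the open subvariety of $W_2(k)$ where the first Witt coordinate does not vanish, and hence gives an example of an admissible subset of $V$ in this sense.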
Also, the $p$-adic analogue of the statements and proofs of section 4 in \cite{GHKR06} hold. Thereof we will need the following notations and results. \begin{definition} Let $(V,\Phi)$ be an isocrystal. We define \[ d(V,\Phi) = \sum_{\lambda < 0} \lambda \dim V_\lambda, \] where $V_\lambda$ is the isoclinic component of $(V,\Phi)$ of slope $\lambda$. \end{definition} \begin{proposition} \label{prop isocrystal} Let $V,V'$ be two finite dimensional $L$-vector spaces of the same dimension. Let $\phi: V \rightarrow V'$ be an $L$-linear isomorphism and $\psi:V \rightarrow V'$ be a $\sigma$-linear bijection. We define $f:V \rightarrow V'$ by $f := \psi - \phi$. Then for any lattice $\Lambda$ in $V$ there exists a lattice $\Lambda'$ in $V'$ and a non-negative integer $j$ such that \[ p^j\Lambda' \subset f\Lambda \subset \Lambda'. \] For any such triple $j,\Lambda,\Lambda'$ and $l \geq j$, consider the induced morphism \[ \overline{f}: \bigslant{\Lambda}{p^l \Lambda} \rightarrow \bigslant{\Lambda'}{p^l \Lambda'}. \] Then \begin{subenv} \item $\image \overline{f} \supset p^j \Lambda'/p^l \Lambda'$. \item $\dim\ker \overline{f} = d(V,\phi^{-1}\psi)+ [\Lambda : \phi^{-1}\Lambda']$. \item $(\ker \overline{f})^0 \subset p^{l-j}\Lambda/p^l\Lambda$ \end{subenv} \end{proposition} \begin{proof} The proposition is the $p$-adic analogue of Prop.~4.2.2 in \cite{GHKR06}. Its proof is literally the same. \end{proof} \subsection{Dimension of the preimage under $f_{m_1,m_2}^{-1}$} \begin{proposition} \label{prop GHKR 532} The map $f_{m_1,m_2}$ is surjective. Moreover, for any admissible subset $Y \subset N(L)$ the inverse image is ind-admissible and \begin{equation} \label{term dim eq} \nd f^{-1}_{m_1,m_2} Y - \nd Y = d(\nfr(L), \Ad_\nfr (m_1)^{-1} \Ad_\nfr (m_2\sigma)) + \val\det \Ad_\nfr(m_1). \end{equation} We denote the right hand side of (\ref{term dim eq}) by $d(m_1,m_2)$ for convenience. \end{proposition} \begin{proof} This is the analogue of Prop.\ 5.3.2 of \cite{GHKR06}. Its proof is almost literally the same. As this is the part where the main part of the calculation of $\dim \Fscr$ is done, we give a brief outline of the proof and explain why the arguments carry over. By multiplying $m_1$ and $m_2$ by a suitable power of $\lambda(p)$ we may assume that the $N_j$ are stable under conjugation with $m_1$ and $m_2$, in particular $f$ maps $N_j$ into $N_j$. One easily checks by replacing $m_1$ and $m_2$ by their $\lambda(p^k)$-multiple increases both sides of (\ref{term dim eq}) by $d(k)$, thus the assertion of the proposition does not change. Now we consider the maps $f\langle i \rangle: N\langle i \rangle \rightarrow N\langle i \rangle$ induced by $f$. By choosing an isomorphism of the root subgroups with their Lie algebra, we identify $N\langle i \rangle \cong \Lie N\langle i \rangle$. Under this identification $f\langle i \rangle$ is identified with $\Ad_{\Lie N\langle i \rangle} (m_2)\sigma - \Ad_{\Lie N\langle i \rangle}(m_1)$. Thus we are in the situation considered in the previous subsection. The following two claims are obtained from Proposition \ref{prop isocrystal} (resp.\ Prop. 4.2.2 in the proof in \cite{GHKR06}) using purely group theoretic arguments. Therefore the proofs given in \cite{GHKR06} also work in the $p$-adic case. \begin{claim} There exists an integer $k$ such that for any $i \geq 1$ \[ \lambda(p^k)N[i]_0\lambda(p^{-k}) \subset f(N[i]_0). 
\] \end{claim} \begin{claim} Choose positive integers $j,k,l$ such that $f\langle i \rangle N\langle i \rangle_0 \supset p^j N\langle i \rangle$ for any $i$, $k$ as above and $N[i]_{l-j} \subset \lambda(p^k)N[i]_0\lambda(p^{-k})$. We denote by $H = N_0/N_l$ and by $\overline{f}:H \rightarrow H$ the morphism induced by $f$. Then \[ \dim \overline{f}^{-1} (1) = d(m_1,m_2). \] \end{claim} We will give the rest of the proof in greater detail, as this is the part where we have to work with the notion of admissibility, which is slightly different from the one in \cite{GHKR06}. However the concept is still the same as in their proof. We denote by $f_0$ the restriction of $f$ to $N_0$. \begin{claim} Assume that $Y$ is an admissible subset of $N(k)$ with $k$ as in claim 2 (ensuring that $Y$ is contained in the image of $f_0$). Then $f_0^{-1}Y$ is admissible and \[ \nd f_0^{-1}Y - \nd Y = d(m_1,m_2). \] \end{claim} To prove claim 3, we choose $l \gg 0$ such that Claim 2 holds and such $Y$ is the preimage of a locally closed subset $\overline{Y}$ in $H = N_0/N_l$. Then all non-empty (reduced) fibres of $\overline{f}$ are isomorphic to each other. Indeed, if $n \in \image (f)$ and $n_0$ is a preimage of $n$ then \[ \overline{f}^{-1}(n) = \overline{f}^{-1}(1) n_0. \] Now $\overline{Y}$ is contained in the image of $\overline{f}$ by Claim 1 and hence every fibre of $\overline{f}$ has dimension $d(m_1,m_2)$ by claim 2, so \begin{equation}\label{term 1} \dim f_0^{-1} \overline{Y} - \dim \overline{Y} = d(m_1,m_2). \end{equation} As $f_0^{-1}Y = t_l^{-1} (\overline{f}^{-1}\overline{Y})(k)$, we see that it is admissible that the equation (\ref{term 1}) implies \[ \nd f_0^{-1}Y - \nd Y = d(m_1,m_2) \] This finishes the proof of the claim. Now let $Y \subset N(L)$ be admissible and $j$ big enough such that $Y\subset N(-j)$. Now $f_{m_1,m_2}$ and conjugation with (a power of) $\lambda(p)$ commute. Hence \[ f_{m_1,m_2}^{-1}(Y) \cap N(-j-k) = (f_0^{-1}( Y (j+k))) (-j-k), \] where $k$ is chosen as above. Hence $f_{m_1,m_2}^{-1}(Y) \cap N(-j-k)$ is admissible by Claim 3 and has numerical dimension $\nd Y + d(m_1,m_2)$, proving the ind-admissibility of $f_{m_1,m_2}^{-1}(Y)$ and the dimension formula (\ref{term dim eq}). For the surjectivity of $f$, note that by Claim 1 there exists an integer $k$ such that $N(k)$ is contained in the image of $f$. As $f$ commutes with conjugation with $\lambda(p)$ this implies that $N(j)$ is contained in the image of $f_{m_1,m_2}$ for every $j$. As the $N(j)$ exhaust $N(L)$, the assertion follows. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop fibre dimension}] Altogether, we get \begin{eqnarray*} \dim \Fscr &\stackrel{\rm Prop.~\ref{prop numerical dimension equals dimension}}{=}& \nd \tilde\Fscr \\ &\stackrel{\rm Prop.~\ref{prop GHKR 532}}{=}& \nd (K\mu(p)Kb^{-1} \cap N(L)) + \langle \rho,\nu_M(b)-\nu_M(b)_\dom \rangle \\ &\stackrel{\rm Prop.~\ref{prop first step}}{=}& d(\mu,\mu_M) - 2\langle \rho_N,\nu_M(b)\rangle + \langle \rho,\nu_M(b)-\nu_M(b)_\dom \rangle \\ &\stackrel{\rm Prop.~\ref{prop GHKR 544}}{=}& \langle\rho,\mu+\mu_M\rangle - 2\langle \rho_M,\mu_M\rangle - 2\langle \rho_N,\nu_M(b)\rangle + \langle \rho,\nu_M(b)-\nu_M(b)_\dom \rangle\\ &=& \langle\rho,\mu-\nu_M(b)_\dom\rangle - \langle\rho_M,\mu_M\rangle \\ & & + \underbrace{\langle\rho_N,\mu_M\rangle - \langle\rho_N,\nu_M(b)\rangle}_{=0} + \underbrace{\langle\rho,\nu_M(b)\rangle-\langle \rho_N,\nu_M(b)\rangle}_{=0} \\ &=& \langle\rho, \mu-\nu_G(b) \rangle -\langle \rho_M, \mu_M\rangle. 
\end{eqnarray*} \end{proof} \section{Reduction to the superbasic EL-case} \label{sect reduction to superbasic} We first reduce to the case of a superbasic Rapoport-Zink datum which still might be of PEL type. \begin{proposition} Assume Proposition~\ref{prop main} is true for every simple superbasic Rapoport-Zink datum. Then it is also true in general. \end{proposition} \begin{proof} We use the notation of Prop.~\ref{prop fibre dimension} and choose $b$ and $M$ such that $b \in M(L)$ is superbasic. If Proposition~\ref{prop main} is true in the superbasic case, we have \[ \dim \Mscr_{M,b,\mu_M} \leq \langle \rho_M,\mu_M\rangle - \frac{1}{2}\defect_M(b). \] By Proposition~\ref{prop fibre dimension} we get for $\mu_M \in I_{\mu,b,M}$ \begin{eqnarray*} \dim p_{M}^{-1} \Mscr_{M,b,\mu_M} &\leq& \langle \rho, \mu - \nu_G(b) \rangle - \frac{1}{2}\defect_M (b). \end{eqnarray*} Since $M$ is a Levi subgroup of $M_b$, the group $J_{M,b}$ is a Levi subgroup of $J_{G,b}$. As the $\QQ_p$-rank of a linear algebraic group is the same as the $\QQ_p$-rank of its Levi subgroups, this implies $\defect_G(b) = \defect_M(b)$. Altogether, \[ \dim \Mscr_G(b,\mu) = \dim \Mscr_P(b,\mu) = \max \dim p_M^{-1} \Mscr_{M,b,\mu_M} \leq \langle \rho, \mu - \nu_G(b) \rangle. \] \end{proof} In the superbasic case we can (and will) prove Theorem~\ref{thm dimension RZ-space} directly, i.e.\ without the detour via Proposition~\ref{prop main}. \begin{lemma} Assume that Theorem \ref{thm dimension RZ-space} holds for every superbasic simple Rapoport-Zink datum of EL type. Then it is also true for every superbasic Rapoport-Zink datum of PEL type. \end{lemma} \begin{proof} Let $\hat\Dscr$ be a superbasic unramified RZ-datum of PEL type. Then by \cite{CKV}~Lemma~3.1.1 the adjoint group $G^\ad$ is isomorphic to a product of $\Res_{F_i/\QQ_p} \PGL_{h_i}$. As mentioned in section~\ref{ss simple RZ}, we can (and will) assume that $\hat\Dscr$ is simple. Thus either $G \cong \GSp_{F,n}$ or $G \cong \GU_{F,n}$. Comparing the Dynkin diagrams (with Galois action) of the groups $G^\ad$ and $\Res_{F_i/\QQ_p} \PGL_{h_i}$, we see that there are only two cases where these groups are isomorphic: 1. $G \cong \GSp_{F,2}$. 2. $G \cong \GU_{F,2}$. Let $\hat\Dscr'$ be the Rapoport-Zink datum of EL type one gets by forgetting the polarization and $G'$ the associated linear algebraic group. We get a canonical closed embedding \[ \Mscr_{G}(b,\mu) \hookrightarrow \Mscr_{G'}(b,\mu). \] In the first case this is an isomorphism since $\Res_{F/\QQ_p} \GSp_2 = \Res_{F/\QQ_p} \GL_2$, so Theorem \ref{thm dimension RZ-space} is true for $\Dscr$. Now assume that $G \cong \GU_{F,2}$. We first show that $\hat\Dscr'$ is also superbasic. Using the explicit description of $X_*(T)$ given after Proposition~\ref{prop PEL vs D}, we see that the Newton point of the $\sigma$-conjugacy class associated to $\hat\Dscr'$ is of the form $(\alpha, 1-\alpha)$. This is central in $\GU_{F,2}$ if and only $\alpha = \frac{1}{2}$, thus $\hat\Dscr'$ is also superbasic. Now each connected component of $\Mscr_G(b,\mu)$ is isomorphic to a closed subset of a connected component of $\Mscr_{G'}(b,\mu)$ and thus projective. By \cite{CKV}~Thm.~1.3, all connected components of a Rapoport-Zink space are isomorphic, so we can determine their dimension by counting points of some connected component. 
For any reductive group $H$ over $\ZZ_p$, $b' \in H(E)$, $\mu' \in X_*(H_{O_L})$ let \[ X_H(b',\mu') := \{h\cdot H(O_L) \in \bigslant{H(L)}{H(O_L)}\mid hb'\sigma(h)^{-1} \in H(O_L)\mu'(p)H(O_L), \} \] denote the affine Deligne-Lusztig set. Let $\eta_H: H(L) \to \pi_1(H)$ denote the unique $H(O_L)$-bi-invariant map which maps $\mu_H(p)$ to the image of $\mu_H$ in $\pi_1(H)$ for every dominant cocharacter $\mu_H$. We denote for $\omega' \in \pi_1(H)$ \[ X_H(b',\mu')_{\omega'} := X_H(b',\mu') \cap \eta_H^{-1}(\{\omega'\}). \] By \cite{CKV}~Thm.~1.1 the connected components of $\Mscr_G(b,\mu)(k)$ are precisely the subspaces $X_G(b',\mu)_\omega$ which are non-empty. As $\GL_{F',2}$ and $\GU_{F,2}$ are not isomorphic, we compare their affine Deligne-Lusztig sets via their (isomorphic) adjoint groups. For $\omega\in \pi_1(G)$ with $X_G(b,\mu)_\omega \not= \emptyset$ we have \[ X_G(b,\mu)_\omega \cong X_{G^\ad}(b^\ad,\mu^\ad)_{\omega^\ad} \] as $\Gamma_E$-sets, where $b^\ad, \mu^\ad, \omega^\ad$ denote the images of $b,\mu,\omega$ in $G^\ad(L), X_*(G^\ad), \pi_1(G^\ad)$ respectively. Using the same argument for $G'' = \GL_{F',2}$ and suitable $b'',\mu'',\omega''$ we get \[ X_G(b,\mu)_\omega \cong X_{G''}(b'',\mu'')_{\omega''} \] as $\Gamma_E$-sets and thus $\dim \Mscr_G(b,\mu) = \dim \Mscr_{G''}(b'',\mu'')$ by Proposition~\ref{prop lang weil}. As the value of the right hand side of (\ref{term dimension RZ-space}) only depends on the images in $G^\ad(L)$ resp.\ $X_*(T^\ad)^\Gamma_\QQ$ this proves Theorem \ref{thm dimension RZ-space} for $\hat\Dscr$. \end{proof} So Theorem \ref{thm dimension RZ-space} is reduced to the claim that it holds in the case of a simple superbasic Rapoport-Zink datum of EL type. This is proved in the next section, see Corollary \ref{cor superbasic RZ-space is qc} and Proposition \ref{prop main for superbasic RZ-data of EL type}. \section{The superbasic EL-case} \label{sect superbasic} \subsection{Notation and conventions} From now on we restrict to the EL-case with $[b]$ superbasic. Fixing a basis $e_i$ of $V$ (as $F \otimes L$-module), we get an identification $G = \Res_{O_F/\ZZ_p} \GL_h$. Let $d$ be the degree of the unramified field extension $F/\QQ_p$, then $I := \Gal(F/\QQ_p) \cong \Gal(k_F/\FF_p) \cong \ZZ / d\cdot \ZZ$. We choose the isomorphism such that the Frobenius $\sigma$ is mapped to $1$. Let $T \subset B \subset G$ where $T$ is the diagonal (maximal) torus and $B$ is the Borel subgroup of lower triangular matrices in $G$. We fix a superbasic element $b\in G(L)$ with Newton point $\nu \in X_*(T)_\QQ^\Gamma$ and a dominant cocharacter $\mu \in X_*(T)$ such that $b\in B(G,\mu)$. We have to show that \[ \dim \Mscr_G(b,\mu) = \langle \rho, \mu - \nu \rangle - \frac{1}{2} \defect_G(b) \label{term superbasic}. \] By Remark~\ref{rem dimension formula} this is equivalent to \begin{equation} \label{term main EL} \dim \Mscr_G(b,\mu) = \sum_{i=1}^{h-1} \lfloor \langle \omega_i, \mu-\nu \rangle \rfloor, \end{equation} where $\omega_i$ are lifts of the fundamental weights of $G^{\rm der}$ (see also \cite{hamacher}~Prop.~4.4). As $T$ splits over $O_F$, the action of the absolute Galois group on $X_*(T)$ factorizes over $I$. We identify $X_*(T) = \prod_{\tau\in I} \ZZ^h$ with $I$ acting by cyclically permuting the factors. This yields an identification of $X_*(T)^I$ with $\ZZ^h$ such that \[ X_*(T)^I \hookrightarrow X_*(T), \nu' \mapsto (\nu')_{\tau\in I}. 
\] We denote by $(N,F) \cong ((L \otimes_{\QQ_p} F)^h, b\sigma)$ the $F$-isocrystal associated to our Rapoport-Zink datum. We decompose $N = \prod_{\tau\in I} N_\tau$ according to the $F$-action as in Example~\ref{ex isocrystal}. Denote by $e_{\tau,i}$ the image of $e_i$ in $N_\tau$, then $\{e_{\tau,i}\}_{i=1}^h$ is a basis of the $N_\tau$ and $\varsigma(e_{\tau,i})= e_{\varsigma\tau,i}$ for all $\varsigma \in I$. For $\tau \in I, l \in \ZZ, i=1,\ldots ,h$ denote $e_{\tau,i+l\cdot h} := p^l\cdot e_{\tau,i}$. Then each $v\in N_\tau$ can be written uniquely as infinite sum \[ v = \sum_{n \gg -\infty} [a_n]\cdot e_{\tau,n} \] with $a_n \in k$. Using Dieudonn\'e theory, we get an identification \[ \Mscr_G(b,\mu)(k) = \{(M_\tau \subset N_\tau \textnormal{ lattice})_{\tau\in I}\mid \inv (M_\tau, b\sigma (M_{\tau-1})) = \mu_\tau\}. \] Here $\inv$ means the invariant and is defined as follows. Suppose we are given two lattices $M,M' \subset L^h$. By the elementary divisor theorem we find a basis $v_1, \ldots, v_n$ of $M$ and a unique tuple of integers $a_1 \leq \ldots \leq a_n$ such that $p^{a_1}v_1,\ldots , p^{a_n}v_n$ form a basis of $M'$. We define the cocharacter $\inv (M,M'):\GG_m \rightarrow \GL_h, x \mapsto \diag (x^{a_1}, \ldots , x^{a_n})$. If we write $M' = gM$ with $g \in GL_h (L)$ we may equivalently define $\inv (M,M')$ to be the unique cocharacter of the diagonal torus which is dominant w.r.t.\ the Borel subgroup of lower triangular matrices and satisfies $g \in \GL_h(O_L) \inv(M,M')(p) \GL_h(O_L)$. \begin{definition} \begin{enumerate} \item We call a tuple of lattices $(M_\tau \subset N_\tau)_{\tau\in I}$ a $G$-lattice. \item We define the volume of a $G$-lattice $M = gM^0$ to be the tuple \[ \vol (M) = (\val \det g_\tau)_{\tau \in I}. \] Similarly, we define the volume of $M_\tau$ to be $\val\det g_\tau$. We call $M$ special if $\vol (M) = (0)_{\tau \in I}$. \end{enumerate} \end{definition} As $[b]$ is superbasic, $\nu$ is of the form $(\frac{m}{d\cdot h}, \frac{m}{d \cdot h}, \ldots , \frac{m}{d \cdot h})$ with $(m,h) = 1$. Now the condition $[b] \in B(G,\mu)$ translates to $\sum_{\tau \in I, i=1,\ldots h} \mu_{\tau,i} = m$. Replacing $b$ by a $\sigma$-conjugate if necessary, we can assume that $b$ is the form $b(e_{\tau,i}) = e_{\tau,i+m_\tau}$ where $m_\tau = \sum_{i=1}^h \mu_{\tau,i}$ (see also \cite{CKV}~Lemma~3.2.1). We could have chosen any tuple of integers $(m_\tau)$ such that $\sum_{\tau\in I} m_\tau = m$ but this particular choice has the advantage that the components of any $G$-lattice in $X_\mu(b)$ have the same volume. In general, \begin{eqnarray*} \vol M_\tau - \vol M_{\tau-1} &=& (\vol M_\tau - \vol b\sigma(M_{\tau-1})) + (\vol b\sigma(M_{\tau-1}) - \vol M_{\tau-1}) \\ &=& (\sum_{i=1}^h \mu_{\tau,i}) - m_\tau. \end{eqnarray*} In the cases $\nu = (0)$ and $\nu = (1)$ the moduli space $\Mscr_G(b,\mu)$ is isomorphic to $\bigslant{\End_{\QQ}(\XXund)}{\End(\XXund)} \cong \ZZ$, considered as discrete union of points (\cite{CKV}~Thm.~1.1). In this case we have $\mu = \nu$ thus the right hand side of (\ref{term main EL}) is also zero and Theorem \ref{thm dimension RZ-space} holds. We assume $\nu \not= (0), (1)$ from now on. Then the connected components of $\Mscr_G(b,\mu)$ are \[ \Mscr_G(b,\mu)^i = \{ M \in \Mscr_G(b,\mu)(k); \vol M = (i)_{\tau\in I}\}. \] where $i$ ranges over the integers (use \cite{CKV}~Thm.~1.1). In particular, we have \[ \dim \Mscr_G(b,\mu) = \dim \Mscr_G(b,\mu)^0. 
\] \subsection{A decomposition of $\Mscr_G(b,\mu)$} In order to calculate the dimension of $\Mscr_G(b,\mu)$, we decompose $\Mscr_G(b,\mu)^0$ into locally closed sets, whose dimension is given by a purely combinatorial formula. Let \begin{eqnarray*} \Iscr_\tau: N_\tau \setminus \{ 0\} \quad &\rightarrow& \ZZ \\ \sum_{n \gg -\infty} [a_n]\cdot e_{\tau,n} &\mapsto& \min \{n\in\ZZ; a_n \not= 0\}. \end{eqnarray*} Note that $\Iscr_\tau$ satisfies the strong triangle inequality for every $\tau$. We denote $N_{hom} := \coprod_{\tau \in I} (N_\tau\setminus \{0\})$, analogously $M_{hom}$. For $M \in \Mscr_G(b,\mu)^0 (k)$, we define \[ A(M) := \Iscr (M_{hom}) \] where $\Iscr = \sqcup \Iscr_\tau: N_\tau \to \coprod_{\tau\in I} \ZZ$. For a subset $A$ of $\coprod_{\tau\in I} \ZZ$ we denote $\Sscr_A$ the subset of all $G$-lattices in $\Mscr_G(b,\mu)^0$ whose image under $\Iscr$ equals $A$. We first study the possible values of $A(M)$ to determine which $\Sscr_A$ are non-empty. \begin{definition} Let $\ZZ^{(d)} := \coprod_{\tau \in I} \ZZ_{(\tau)}$ be the disjoint union of $d$ isomorphic copies of $\ZZ$. For $a\in\ZZ$ we denote by $a_{(\tau)}$ the corresponding element of $\ZZ_{(\tau)}$ and write $|a_{(\tau)}| := a$. We equip $\ZZ^{(d)}$ with a partial order ``$\leq$'' defined by \[ a_{(\tau)} \leq c_{(\varsigma)} :\Lra a\leq c \textnormal{ and } \tau = \varsigma \] and a $\ZZ$-action given by \[ a_{(\tau)} + n = (a+n)_{(\tau)}. \] Furthermore we define the function \begin{eqnarray*} f:\ZZ^{(d)} \rightarrow \ZZ^{(d)} &,& a_{(\tau)} \mapsto (a+m_{\tau+1})_{(\tau+1)} \end{eqnarray*} \end{definition} We impose the notation that for any subset $A \subset \ZZ^{(d)}$ we write $A_{(\tau)} := A\cap \ZZ_{(\tau)}$. \begin{definition} \begin{subenv} \item An EL-chart is a non-empty subset $A \subset \ZZ^{(d)}$ which is bounded from below, stable under $f$ and addition of $h$. We call an EL-chart $A$ small if it satisfies $A+h \subset f(A)$. \item Let $A$ be an EL-chart and $B = A\setminus (A+h)$. We say that $A$ is normalized if $\sum_{b \in B_{(0)}} b = \frac{h\cdot (h-1)}{2}$. \end{subenv} \end{definition} Let $A$ be a small EL-chart and $B = A\setminus (A+h)$. It is easy to see that $\#B_{(\tau)} = h$ for all $\tau \in I$ and $B = B^- \sqcup B^+$ where \begin{eqnarray*} B^+ &=& \{ b\in B\mid f(b) \in B \} \\ B^- &=& \{ b\in B\mid f(b)-h \in B \}. \end{eqnarray*} We define a sequence $b_0, \ldots b_{d\cdot h -1}$ of distinct elements of $B$ as follows. Denote by $b_0$ the minimal element of $B_{(0)}$ and let \[ b_{i+1} = \left\{ \begin{array}{ll} f(b) & \textnormal{ if } b \in B^+ \\ f(b)-h & \textnormal{ if } b \in B^-. \end{array} \right. \] These elements are indeed distinct: If $b_i = b_j$ then obviously $i \equiv j \mod d$ and then $b_{i + k\cdot d} \equiv b_i + k\cdot m \mod h$ implies that $i=j$ as $m$ and $h$ are coprime. Define the cocharacter $\mu' \in X_*(T)$ by \[ \mu'_\tau = (\underbrace{0,\ldots,0}_{\# B_{(\tau-1)}^+},\underbrace{1,\ldots,1}_{\# B_{(\tau-1)}^-}). \] We call $\mu'$ the Hodge-point of $A$. \begin{remark} One easily checks that a small EL-chart is the same as an EL-Chart in the sense of Def.~5.2 of \cite{hamacher} whose type only has coordinates $0$ and $1$ and that the definition of the Hodge-point in both cases coincides. In particular the Hodge-point $\mu'$ is minuscule so that by Cor.~5.10 of \cite{hamacher} the notion of an EL-chart for $\mu'$ and an extended EL-chart for $\mu'$ are also equivalent, allowing us to use the combinatorics of \cite{hamacher} in our case. 
\end{remark} \begin{proposition} \label{prop A(M) is EL-chart} Let $M \in \Mscr_G(b,\mu)^0(k)$. Then $A=A(M)$ is a normalized small EL-chart with Hodge-point $\mu$. \end{proposition} \begin{proof} $A(M)$ is stable under $f$ and addition with $h$ since \begin{eqnarray*} \Iscr(F v) &=& f(\Iscr(v)) \\ \Iscr(p\cdot v) &=& \Iscr(v)+h. \end{eqnarray*} Furthermore, we have \[ A(M)+h = \Iscr(pM) \subset \Iscr(FM) = f(A(M)). \] The fact that $A(M)$ is bounded from below is obvious. Let $M = gM^0$. We have \[ 0 = \val\det g_0 = |\NN^d \setminus A(M)_{(0)}| - |A(M)_{(0)} \setminus \NN^d|, \] hence \[ \sum_{b \in B(M)_{(0)}} b = \sum_{i=0}^{h-1} i = \frac{h(h-1)}{2} \] and thus $A(M)$ is indeed a normalized small EL-chart. We have for every $\tau \in I$ \[ \#\{i\mid \mu_{\tau,i} = 1\} = \dim_{k_0} \bigslant{M_\tau}{b\sigma(M)_\tau} = \# B^-_{(\tau-1)}, \] thus the Hodge point of $A(M)$ is $\mu$. \end{proof} \begin{corollary} \label{cor decomposition} The $\Sscr_A$ define a decomposition of $\Mscr_G(b,\mu)^0$ into finitely many locally closed subsets. In particular, $\dim \Mscr_G(b,\mu)^0 = \max_{A} \dim \Sscr_A$. \end{corollary} \begin{proof} By Proposition \ref{prop A(M) is EL-chart}, $\Mscr_G(b,\mu)$ is the (disjoint) union of the $\Sscr_{A}$ with $A$ being a small EL-chart with Hodge-point $\mu$. By Corollary 5.11 of \cite{hamacher} this union is finite. It remains to show that $\Sscr_{A}$ is locally closed. One shows that the condition $A(M)_{(\tau)} = A_{(\tau)}$ is locally closed analogously to the proof of Prop.~5.1 in \cite{viehmann08}. Then $\Sscr_{A}$ is locally closed as it is the intersection of finitely many locally closed subsets. \end{proof} \begin{definition} Let $A$ be a small EL-chart with Hodge-point $\mu$. We define \[ \Vscr_A = \{ (j,i) \in (\ZZ/dh\ZZ)^2 \mid b_j \in B^-, b_i \in B^+, b_j < b_i\} \] \end{definition} \begin{remark} Our notion $\Vscr_A$ coincides with $\Vscr(A,\varphi)$ in \cite{hamacher} with the slight difference that the latter considers pairs $(b_j,b_i)$ instead of $(j,i)$. \end{remark} \begin{proposition}\label{prop decomposition} Let $A$ be an EL-chart with Hodge-point $\mu$. Then $\Sscr_A \cong \AA^{\Vscr_A}$. \end{proposition} \begin{proof} This proposition is proven for the case $G=\GL_h$ in \cite{viehmann08}, \S 5. The construction of a morphism $f: \AA^\Vscr_A \to \Sscr_A$ is very similar to that in \cite{viehmann08} and the proof that it is well-defined and an isomorphism is the same. Therefore we only explain the construction of $f$. We denote $R = k[t_{j,i}\mid (j,i) \in \Vscr_A]$. The morphism $f:\AA^{\Vscr_A} \to \Sscr_A$ corresponds to a quasi-isogeny $X \mapsto \XX_{\AA^{\Vscr_A}}$, which we will describe by the construction a subdisplay of the isodisplay $N_{W(R)_\QQ}$ of $\XX_{\AA^{\Vscr_A}}$. There exists a unique family $\{v_i; 0 \leq i < dh\} \subset N_{W(R)_\QQ}$ which satisfies the following relations: \begin{eqnarray*} v_0 &=& e_{b_0} \\ v_{i+1} &=& \left\{ \begin{array}{ll} Fv_i & \textnormal{ if } b_i,b_{i+1} \in B^+ \\ Fv_i + \sum_{(j,i) \in \Vscr_A} [t_{j,i}]v_i & \textnormal{ if } b_i \in B^+, b_{i+1} \in B^- \\ \frac{F(v_i)}{p} & \textnormal{ if }b_i \in B^-, b_{i+1} \in B^+ \\ \frac{F(v_i)}{p} + \sum_{(j,i) \in \Vscr_A} [t_{j,i}] v_i & \textnormal{ if } b_i,b_{i+1} \in B^- \end{array} \right. \end{eqnarray*} The proof that $(v_i)$ exists and is unique is literally the same as in \cite{viehmann08}. 
Let \begin{eqnarray*} L &=& \spa_{W(R)}(v_i\mid b_i \in B^-) \\ T &=& \spa_{W(R)}(v_i\mid b_i \in B^+) \\ P &=& L \oplus T \\ Q &=& L \oplus I_R T. \end{eqnarray*} Then $(P,Q,F,\frac{F}{p})$ is a subdisplay of $N_{W(R)_\QQ}$, which yields a quasi-isogeny $X \mapsto \XX_{\AA^{\Vscr_A}}$ corresponding to a point $f\in \Sscr_A(\AA^{\Vscr_A})$. \end{proof} \begin{corollary} \label{cor superbasic RZ-space is qc} $\Mscr_G(b,\mu)^0$ is projective. \end{corollary} \begin{proof} By the above proposition and Corollary \ref{cor decomposition}, the underlying topological space of $\Mscr_G(b,\mu)^0$ has a decomposition into finitely many quasi-compact subspaces. Thus it is quasi-compact. Now \cite{RZ96}~Cor.~2.31 and Prop.~2.32 imply that $\Mscr_G(b,\mu)^0$ is quasi-projective with projective irreducible components. Thus it is projective. \end{proof} \begin{proposition} \label{prop main for superbasic RZ-data of EL type} The dimension formula (\ref{term dimension RZ-space}) holds for superbasic Rapoport-Zink data of EL type. \end{proposition} \begin{proof} As we remarked at the beginning of this section, we have to show that \[ \dim \Mscr_G(b,\mu) = \sum_{i=1}^{h-1} \lfloor \langle \omega_i, \nu-\mu \rangle \rfloor. \] Now by Proposition \ref{prop decomposition} we get \[ \dim \Mscr_G(b,\mu) = \max \{\#\Vscr_A\mid A \textnormal{ is a small EL-chart with Hodge-point } \mu\}. \] By Prop.~7.1 and Thm.~7.2 of \cite{hamacher} the right hand side equals $\sum_{i=1}^{h-1} \lfloor \langle \omega_i, \nu-\mu \rangle \rfloor$, finishing the proof. \end{proof} \begin{remark} As a consequence of the decomposition above we get an analogous description of the (top-dimensional) irreducible components of $\Mscr_G(b,\mu)^0$ resp.\ the $J_b(\QQ_p)$-orbits of irreducible components of $\Mscr_G(b,\mu)$ as in the case of affine Deligne-Lusztig varieties with $\mu$ minuscule. \end{remark} \appendix \section{Root data of some reductive group schemes} Here we use the notation of section \ref{ss group theory}. Furthermore, we denote all relative root data of $G_{\QQ_p}$ by a subscript $\QQ_p$. Let $I$ denote the Galois group of $O_F$ over $\ZZ_p$. \subsection{$\GL_{O_F,n}$} \quad\newline We have $\GL_{O_F,n} \otimes O_F \cong \prod_{\tau\in I} \GL_n$ with the Galois action cyclically permuting the $\GL_n$-factors. We choose $T_1 \subset B_1 \subset \GL_{O_F,n}$ to be the diagonal torus resp.\ the Borel subgroup of upper triangular matrices. Furthermore, let \begin{eqnarray*} e_{\varsigma,i}: T_1 \to \GG_m &,& (\diag(t_{\tau,1},\ldots,t_{\tau,n}))_{\tau \in I} \mapsto t_{\varsigma,i} \\ e_{\varsigma,i}^\vee: \GG_m \to T_1 &,& x \mapsto (\diag(1,\ldots,1),\ldots, \diag(1,\ldots,1,x,1,\ldots,1),\ldots,\diag(1,\ldots,1)) \end{eqnarray*} where the entry $x$ is placed in the $(\varsigma,i)$\textsuperscript{th} position. The $e_{\varsigma,i}$ resp. $e_{\varsigma,i}^\vee$ form a basis of $X^*(T_1)$ resp. $X_*(T_1)$, thus \begin{eqnarray*} X^*(T_1) &\cong& \prod_{\tau\in I} \ZZ^n \\ X_*(T_1) &\cong& \prod_{\tau\in I} \ZZ^n \\ R &=& \{e_{\tau,i} -e_{\tau,j} \mid \tau \in I, i \not=j \in \{1,\ldots,n\}\} \\ R^\vee &=& \{e_{\tau,i}^\vee -e_{\tau,j}^\vee \mid \tau \in I, i \not=j \in \{1,\ldots,n\}\} \\ R^+ &=& \{e_{\tau,i} -e_{\tau,j} \mid \tau \in I, i <j \in \{1,\ldots,n\}\} \\ R^{\vee,+} &=& \{e_{\tau,i}^\vee -e_{\tau,j}^\vee \mid \tau \in I, i <j \in \{1,\ldots,n\}\} \\ \Delta^+ &=& \{e_{\tau,i} -e_{\tau,i+1} \mid \tau \in I, i \in \{1,\ldots,n-1\}\} \\ \Delta^{\vee,+} &=& \{e_{\tau,i}^\vee -e_{\tau,i+1}^\vee \mid \tau \in I, i \in \{1,\ldots,n-1\}\}.
\end{eqnarray*} Hence \[ X_*(T_1)_{\dom} = \{ \mu \in \prod_{\tau\in I} \ZZ^n \mid \mu_{\tau,1} \geq \ldots \geq \mu_{\tau, n} \textnormal{ for all } \tau\}. \] Now the maximal split torus $S_1 \subset T_{1,\QQ_p}$ is given by \[ S_1 = \{ (\diag(t_1,\ldots,t_n))_{\tau \in I} \in T_{1,\QQ_p}\}. \] We define \begin{eqnarray*} e_{\QQ_p,i} &:=& e_{\tau,i|S_1} \textnormal{ for some } \tau\in I\\ e_{\QQ_p,i}^\vee &:=& \sum_{\tau\in I} e_{\tau,i}^\vee. \end{eqnarray*} The $e_{\QQ_p,i}$ resp. $e_{\QQ_p,i}^\vee$ form a basis of $X^*(S_1)$ resp. $X_*(S_1)$, thus \begin{eqnarray*} X^*(S_1) &\cong& \ZZ^n \\ X_*(S_1) &\cong& \ZZ^n \\ R_{\QQ_p} &=& \{e_{\QQ_p,i} -e_{\QQ_p,j} \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^\vee &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^+ &=& \{e_{\QQ_p,i} -e_{\QQ_p,j} \mid i <j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee \mid i <j \in \{1,\ldots,n\}\} \\ \Delta_{\QQ_p}^+ &=& \{e_{\QQ_p,i} -e_{\QQ_p,i+1} \mid i \in \{1,\ldots,n-1\}\} \\ \Delta_{\QQ_p}^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,i+1}^\vee \mid i \in \{1,\ldots,n-1\}\}. \end{eqnarray*} As a consequence \[ X_*(S_1)_{\QQ,\dom} = \{\nu \in \QQ^n\mid \nu_1 \geq \ldots \geq \nu_n\}. \] \subsection{$\GSp_{O_F,n}$} \quad\newline We have \[ \GSp_{O_F,n} \otimes O_F \cong (\prod_{\tau\in I} \GSp_n)^1 = \{(g_\tau) \in \prod_{\tau\in I} \GSp_n \mid c(g_\tau) \textnormal{ does not depend on } \tau\} \] with the Galois action cyclically permuting the factors. Let $T_2 \subset B_2 \subset \GSp_{O_F,n}$ denote the maximal torus resp.\ the Borel subgroup of upper triangular matrices. We denote by \[ c:T_2 \to \GG_m \] the similitude factor. Then \begin{eqnarray*} X^*(T_2) &\cong& \bigslant{X^*(T_1)}{\langle (e_{\tau,i} + e_{\tau,n+1-i}) - ( e_{\varsigma,j} + e_{\varsigma,n+1-j}) \rangle_{\tau,\varsigma ,i,j}} \\ X_*(T_2) &=& \{ \mu \in X_*(T_1) \mid \mu_{\tau,i} + \mu_{\tau,n+1-i} = c(\mu) \textnormal{ for some integer } c(\mu) \} \\ R &=& \{e_{\tau,i|T_2} -e_{\tau,j|T_2} \mid \tau\in I, i \not=j \in \{1,\ldots,n/2\}\}\\ & & \cup \{\pm(e_{\tau,i|T_2} +e_{\tau,j|T_2} -c) \mid \tau\in I, i\not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{\pm(2e_{\tau,i|T_2} -c) \mid \tau\in I, i\in\{1,\ldots,n/2\}\} \\ &=& \{e_{\tau,i|T_2} - e_{\tau,j|T_2} \mid \tau\in I, i \not=j \in \{1,\ldots,n\}\} \\ R^\vee &=& \{e_{\tau,i}^\vee -e_{\tau,j}^\vee + e_{\tau,n+1-j}^\vee - e_{\tau,n+1-i}^\vee \mid \tau\in I, i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\tau,i}^\vee+e_{\tau,j}^\vee - e_{\tau,n+1-i}^\vee-e_{\tau,n+1-j}^\vee) \mid \tau\in I, i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm e_{\tau,i}^\vee\mid \tau\in I, i \in \{1,\ldots,n/2\}\} \\ R^+ &=& \{e_{\tau,i|T_2} -e_{\tau,j|T_2} \mid \tau\in I, i < j \in \{1,\ldots,n/2\}\}\\ & & \cup \{e_{\tau,i|T_2} +e_{\tau,j|T_2} -c \mid \tau\in I, i\not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{2e_{\tau,i|T_2} -c \mid \tau\in I, i\in\{1,\ldots,n/2\}\} \\ &=& \{e_{\tau,i|T_2} - e_{\tau,j|T_2} \mid \tau\in I, i \not=j \in \{1,\ldots,n\}\} \\ R^{\vee,+} &=& \{e_{\tau,i}^\vee -e_{\tau,j}^\vee + e_{\tau,n+1-j}^\vee - e_{\tau,n+1-i}^\vee \mid \tau\in I, i <j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\tau,i}^\vee+e_{\tau,j}^\vee - e_{\tau,n+1-i}^\vee-e_{\tau,n+1-j}^\vee)\mid \tau\in I, i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm e_{\tau,i}^\vee\mid \tau\in I, i \in \{1,\ldots,n/2\}\} \\ \Delta^+ &=& \{e_{\tau,i|T_2} -e_{\tau,i+1|T_2} \mid \tau \in I, i \in \{1,\ldots,n/2-1\}\} \cup \{2e_{\tau,n/2|T_2} -c \mid \tau \in I\} \\ &=&
\{e_{\tau,i|T_2} -e_{\tau,i+1|T_2} \mid \tau \in I, i \in \{1,\ldots,n-1\}\} \\ \Delta^{\vee,+} &=& \{e_{\tau,i}^\vee -e_{\tau,i+1}^\vee + e_{\tau,n+1-i}^\vee - e_{\tau,n-i}^\vee \mid \tau \in I, i \in \{1,\ldots,n/2-1\}\} \cup \{e_{\tau,n/2}^\vee\mid \tau \in I\} \end{eqnarray*} Hence \[ X_*(T_2)_{\dom} = \{ \mu \in \prod_{\tau\in I} \ZZ^n \mid \mu_{\tau,1} \geq \ldots \geq \mu_{\tau, n} \textnormal{ for all } \tau, \ \mu_{\tau,i} + \mu_{\tau,n+1-i} = c(\mu) \textnormal{ for some integer } c(\mu)\}. \] Denote by $S_2$ the maximal split torus of $T_2$. Now \begin{eqnarray*} X^*(S_2) &\cong& \bigslant{X^*(S_1)}{\langle (e_{\QQ_p,i} + e_{\QQ_p,n+1-i}) - (e_{\QQ_p,j} + e_{\QQ_p,n+1-j}) \rangle_{i,j}} \\ X_*(S_2) &=& \{ \nu \in X_*(S_1) \mid \nu_{i} + \nu_{n+1-i} = c(\nu) \textnormal{ for some integer } c(\nu) \} \\ R_{\QQ_p} &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n/2\}\}\\ & & \cup \{\pm(e_{\QQ_p,i|S_2} +e_{\QQ_p,j|S_2} -c) \mid i\not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{\pm(2e_{\QQ_p,i|S_2} -c) \mid i\in\{1,\ldots,n/2\}\} \\ &=& \{e_{\QQ_p,i|S_2} - e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^\vee &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee + e_{\QQ_p,n+1-j}^\vee - e_{\QQ_p,n+1-i}^\vee \mid i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\QQ_p,i}^\vee+e_{\QQ_p,j}^\vee - e_{\QQ_p,n+1-i}^\vee-e_{\QQ_p,n+1-j}^\vee) \mid i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,n/2\}\} \end{eqnarray*} \begin{eqnarray*} R_{\QQ_p}^+ &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,j|S_2} \mid i < j \in \{1,\ldots,n/2\}\}\\ & & \cup \{e_{\QQ_p,i|S_2} +e_{\QQ_p,j|S_2} -c \mid i\not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{2e_{\QQ_p,i|S_2} -c \mid i\in\{1,\ldots,n/2\}\} \\ &=& \{e_{\QQ_p,i|S_2} - e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee + e_{\QQ_p,n+1-j}^\vee - e_{\QQ_p,n+1-i}^\vee \mid i <j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\QQ_p,i}^\vee+e_{\QQ_p,j}^\vee - e_{\QQ_p,n+1-i}^\vee-e_{\QQ_p,n+1-j}^\vee)\mid i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,n/2\}\} \\ \Delta_{\QQ_p}^+ &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,i+1|S_2} \mid i \in \{1,\ldots,n/2-1\}\} \cup \{2e_{\QQ_p,n/2|S_2} -c\} \\ &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,i+1|S_2} \mid i \in \{1,\ldots,n-1\}\} \\ \Delta_{\QQ_p}^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,i+1}^\vee + e_{\QQ_p,n+1-i}^\vee - e_{\QQ_p,n-i}^\vee \mid i \in \{1,\ldots,n/2-1\}\} \cup \{e_{\QQ_p,n/2}^\vee\}. \end{eqnarray*} Thus \[ X_*(S_2)_{\QQ,\dom} = \{\nu \in \QQ^n\mid \nu_1 \geq \ldots \geq \nu_n, \ \nu_{i} + \nu_{n+1-i} = c(\nu) \textnormal{ for some integer } c(\nu)\}. \] \subsection{$\GU_{O_F,n}$} \quad \newline We have \[ \GU_{O_F,n} \otimes O_F = \{ (g_\tau) \in \prod_{\tau\in I} \GL_n \mid g_\tau J g_{\sigma_{F'}\circ\tau} = c(g) J \} \] with $I$ cyclically permuting the factors. Let $T_3 \subset B_3 \subset \GU_{O_F,n}$ denote the maximal torus resp.\ the Borel subgroup of upper triangular matrices. We denote by \[ c:T_3 \to \GG_m \] the similitude factor.
We fix a system of representatives $I' \subset I$ of $I/\sigma_{F'}$ Now \begin{eqnarray*} X^*(T_3) &\cong& \bigslant{X_*(T_1)}{\langle (e_{\tau,i} + e_{\sigma_{F'} + \tau,n+1-i}) - ( e_{\varsigma,j} + e_{\sigma_{F'}+ \varsigma,n+1-j}) \rangle_{\tau,\varsigma ,i,j}} \\ X_*(T_3) &=& \{ (\mu \in X_*(T) \mid \mu_{\tau,i} + \mu_{\sigma_{F'} + \tau,n+1-i} = c(\mu) \textnormal{ for some integer } c(\mu) \} \\ R &=& \{e_{\tau,i|T_3} -e_{\tau,j|T_3} \mid \tau \in I', i \not=j \in \{1,\ldots,n\}\} \\ &=& \{e_{\tau,i|T_3} -e_{\tau,j|T_3} \mid \tau \in I, i \not=j \in \{1,\ldots,n\}\} \\ R^\vee &=& \{e_{\tau,i}^\vee -e_{\tau,j}^\vee + e_{\sigma_{F'}+\tau,n+1-j} - e_{\sigma_{F'}+\tau,n+1-i} \mid \tau \in I', i \not=j \in \{1,\ldots,n\}\} \\ R^+ &=& \{e_{\tau,i|T_3} -e_{\tau,j|T_3} \mid \tau \in I', i <j \in \{1,\ldots,n\}\} \\ &=& \{e_{\tau,i|T_3} -e_{\tau,j|T_3} \mid \tau \in I, i <j \in \{1,\ldots,n\}\} \\ R^{\vee,+} &=& \{e_{\tau,i}^\vee -e_{\tau,j}^\vee + e_{\sigma_{F'}+\tau,n+1-j} - e_{\sigma_{F'}+\tau,n+1-i} \mid \tau \in I, i <j \in \{1,\ldots,n\}\} \\ \Delta^+ &=& \{e_{\tau,i|T_3} -e_{\tau,i+1|T_3} \mid \tau \in I', i \in \{1,\ldots,n-1\}\} \\ &=& \{e_{\tau,i|T_3} -e_{\tau,i+1|T_3} \mid \tau \in I, i \in \{1,\ldots,n-1\}\} \\ \Delta^{\vee,+} &=& \{e_{\tau,i}^\vee -e_{\tau,i+1}^\vee + e_{\sigma_{F'}+\tau,n-i} - e_{\sigma_{F'}+\tau,n+1-i} \mid \tau \in I', i \in \{1,\ldots,n-1\}\}. \end{eqnarray*} In particular, \[ X_*(T_3)_{\dom} = \{ (\mu \in X_*(T) \mid \mu_{\tau,1} \geq \ldots \geq \mu_{\tau,n}, \mu_{\tau,i} + \mu_{\sigma_{F'} + \tau,n+1-i} = c(\mu) \textnormal{ for some integer } c(\mu) \}. \] We denote by $S_3$ the maximal split torus of $T_3$. If $n$ is even, then \begin{eqnarray*} X^*(S_3) &\cong& \bigslant{X_*(S_1)}{\langle (e_{\QQ_p,i} + e_{\QQ_p,n+1-i}) - (e_{\QQ_p,j} + e_{\QQ_p,n+1-j}) \rangle_{i,j}} \\ X_*(S_3) &=& \{ (\nu \in X_*(S) \mid \nu_{i} + \nu_{n+1-i} = c(\nu) \textnormal{ for some integer } c(\nu) \} \\ R_{\QQ_p} &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n/2\}\}\\ & & \cup \{\pm(e_{\QQ_p,i|S_2} +e_{\QQ_p,j|S_2} -c) \mid i\not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{\pm(2e_{\QQ_p,i|S_2} -c) \mid i\in\{1,\ldots,n/2\}\} \\ &=& \{e_{\QQ_p,i|S_2} - e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^\vee &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee + e_{\QQ_p,n+1-j}^\vee - e_{\QQ_p,n+1-i}^\vee \mid i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\QQ_p,i}^\vee+e_{\QQ_p,j}^\vee - e_{\QQ_p,n+1-i}^\vee-e_{\QQ_p,n+1-j}^\vee) \mid i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,n/2\} \\ R_{\QQ_p}^+ &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,j|S_2} \mid i < j \in \{1,\ldots,n/2\}\}\\ & & \cup \{e_{\QQ_p,i|S_2} +e_{\QQ_p,j|S_2} -c \mid i\not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{2e_{\QQ_p,i|S_2} -c \mid i\in\{1,\ldots,n/2\}\} \\ &=& \{e_{\QQ_p,i|S_2} - e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee + e_{\QQ_p,n+1-j}^\vee - e_{\QQ_p,n+1-i}^\vee \mid i <j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\QQ_p,i}^\vee+e_{\QQ_p,j}^\vee - e_{\QQ_p,n+1-i}^\vee-e_{\QQ_p,n+1-j}^\vee\mid i \not=j \in \{1,\ldots,n/2\}\} \\ & & \cup \{ \pm(e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,n/2\} \\ \Delta^+ &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,i+1|S_2} \mid i \in \{1,\ldots,n/2-1\}\} \cup \{2e_{\QQ_p,n/2|S_2} -c\} \\ &=& \{e_{\QQ_p,i|S_2} -e_{\QQ_p,i+1|S_2} \mid i \in \{1,\ldots,n-1\}\} \\ \Delta^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,i+1}^\vee + e_{\QQ_p,n+1-i}^\vee - 
e_{\QQ_p,n-i}^\vee \mid i \in \{1,\ldots,n/2-1\}\} \cup \{e_{\QQ_p,n/2}^\vee\}. \end{eqnarray*} If $n$ is odd, then \begin{eqnarray*} X^*(S_3) &\cong& \bigslant{X_*(S_1)}{\langle (e_{\QQ_p,i} + e_{\QQ_p,n+1-i}) - (e_{\QQ_p,j} + e_{\QQ_p,n+1-j}) \rangle_{i,j}} \\ X_*(S_3) &=& \{ (\nu \in X_*(S_1) \mid \nu_{i} + \nu_{n+1-i} = c(\nu) \textnormal{ for some integer } c(\nu) \} \\ R_{\QQ_p} &=& \{e_{\QQ_p,i|S_3} -e_{\QQ_p,j|S_3} \mid i \not=j \in \{1,\ldots,(n-1)/2\}\}\\ & & \cup \{\pm(e_{\QQ_p,i|S_3} +e_{\QQ_p,j|S_3} -c) \mid i\not=j \in \{1,\ldots(n-1)/2\}\} \\ & & \cup \{\pm(2e_{\QQ_p,i|S_3} -c) \mid i\in\{1,\ldots,(n-1)/2\}\} \\ & & \cup \{\pm(e_{\QQ_p,i|S_3} -c) \mid i\in\{1,\ldots,(n-1)/2\}\} \\ &=& \{e_{\QQ_p,i|S_2} - e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n\}\} \\ R_{\QQ_p}^\vee &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee + e_{\QQ_p,n+1-j}^\vee - e_{\QQ_p,n+1-i}^\vee \mid i \not=j \in \{1,\ldots,(n-1)/2\}\} \\ & & \cup \{ \pm(e_{\QQ_p,i}^\vee+e_{\QQ_p,j}^\vee - e_{\QQ_p,n+1-i}^\vee-e_{\QQ_p,n+1-j}^\vee) \mid i \not=j \in \{1,\ldots,(n-1)/2\}\} \\ & & \cup \{ \pm e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,(n-1)/2\} \\ & & \cup \{ \pm 2e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,(n-1)/2\} \\ R_{\QQ_p}^+ &=& \{e_{\QQ_p,i|S_3} -e_{\QQ_p,j|S_3} \mid i < j \in \{1,\ldots,(n-1)/2\}\}\\ & & \cup \{e_{\QQ_p,i|S_3} +e_{\QQ_p,j|S_3} -c \mid i\not=j \in \{1,\ldots,(n-1)/2\}\} \\ & & \cup \{2e_{\QQ_p,i|S_3} -c \mid i\in\{1,\ldots,(n-1)/2\}\} \\ & & \cup \{e_{\QQ_p,i|S_3} \mid i\in\{1,\ldots,(n-1)/2\}\} \\ &=& \{e_{\QQ_p,i|S_2} - e_{\QQ_p,j|S_2} \mid i \not=j \in \{1,\ldots,n\}\} \\ \end{eqnarray*} \begin{eqnarray*} R_{\QQ_p}^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,j}^\vee + e_{\QQ_p,n+1-j}^\vee - e_{\QQ_p,n+1-i}^\vee \mid i <j \in \{1,\ldots,(n-1)/2\}\} \\ & & \cup \{ e_{\QQ_p,i}^\vee+e_{\QQ_p,j}^\vee - e_{\QQ_p,n+1-i}^\vee-e_{\QQ_p,n+1-j}^\vee\mid i \not=j \in \{1,\ldots,(n-1)/2\}\} \\ & & \cup \{ e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,(n-1)/2\} \cup \{ 2e_{\QQ_p,i}^\vee\mid i \in \{1,\ldots,(n-1)/2\} \\ \Delta^+ &=& \{e_{\QQ_p,i|S_3} -e_{\QQ_p,i+1|S_3} \mid i \in \{1,\ldots,(n-1)/2-1\}\} \cup \{e_{\QQ_p,(n-1)/2|S_2} -c\} \\ &=& \{e_{\QQ_p,i|S_3} -e_{\QQ_p,i+1|S_3} \mid i \in \{1,\ldots,n-1\}\} \\ \Delta^{\vee,+} &=& \{e_{\QQ_p,i}^\vee -e_{\QQ_p,i+1}^\vee + e_{\QQ_p,n+1-i}^\vee - e_{\QQ_p,n-i}^\vee \mid i \in \{1,\ldots,(n-1)/2-1\}\} \cup \{e_{\QQ_p,n/2}^\vee\}. \end{eqnarray*} In any case we get \[ X_*(S_3)_{\QQ,\dom} = \{\nu \in \QQ^n\mid \nu_1 \geq \ldots \geq \nu_n, \nu_{i} + \nu_{n+1-i} = c(\nu) \textnormal{ for some integer } c(\nu)\} \] \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2} \end{document}
\begin{document} \title{Orthogonal product sets with strong quantum nonlocality on plane structure} \author{Huaqi Zhou$^1$} \author{Ting Gao$^1$} \email{[email protected]} \author{Fengli Yan$^2$} \email{[email protected]} \affiliation{$^1$ School of Mathematical Sciences, Hebei Normal University, Shijiazhuang 050024, China \\ $^2$ College of Physics, Hebei Key Laboratory of Photophysics Research and Application, Hebei Normal University, Shijiazhuang 050024, China} \begin{abstract} In this paper, we consider orthogonal product sets (OPSs) with strong quantum nonlocality in multipartite quantum systems. Based on the decomposition of plane geometry, we present a sufficient condition for the triviality of orthogonality-preserving POVMs on a fixed subsystem and partially answer an open question given by Yuan et al. [\href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.102.042228} {Phys. Rev. A \textbf{102}, 042228 (2020)}]. The connection between the nonlocality and the plane structure of an OPS is established. We successfully construct a strongly nonlocal OPS in $\mathcal{C}^{d_{A}}\otimes \mathcal{C}^{d_{B}}\otimes \mathcal{C}^{d_{C}}$ $(d_{A},d_{B},d_{C}\geq 4)$, which contains fewer quantum states than previously known constructions, and generalize the structures of known OPSs to any possible three- and four-partite systems. In addition, we propose several entanglement-assisted protocols for perfect local discrimination of these sets. It is shown that the protocols without teleportation consume less entanglement on average, and that these sets can always be discriminated locally with multiple copies of 2-qubit maximally entangled states. These results also exhibit the nontrivial significance of maximally entangled states in the local discrimination of quantum states.\\ \end{abstract} \maketitle \section{Introduction} Quantum nonlocality, one of the fundamental properties and most celebrated manifestations of quantum mechanics, arises from entangled states. Quantum entanglement has received extensive attention, and many results have been obtained \cite{Horodecki, Zhou11, Gao}. Since entangled pure states violate Bell-type inequalities, they are nonlocal \cite{Bell,Clauser,Freedman,Yan,Meng,Chen,Ding1,Ding2}. However, in 1999, Bennett et al. \cite{Bennett} proposed a complete orthogonal product basis with nonlocality, i.e., a basis whose states cannot be reliably discriminated by local operations and classical communication (LOCC) but can be identified by a global measurement. This means that nonlocal properties are not restricted to entangled systems. Later, this phenomenon, quantum nonlocality without entanglement, attracted wide research interest \cite{Niset,Zhang1,Wang1,Wang2,Feng,Xu,Zhang3,Halder1,Jiang}. Zhang et al. \cite{Zhang1} gave a class of nonlocal orthogonal product bases in the quantum system $\mathcal{C}^{d}\otimes \mathcal{C}^{d}$, where $d$ is odd. Wang et al. \cite{Wang1} obtained a small set with only $3(m+n)-9$ orthogonal product states in an arbitrary bipartite quantum system $\mathcal{C}^{m}\otimes \mathcal{C}^{n}$ and proved that these states are LOCC indistinguishable. Xu et al. \cite{Xu} presented a locally indistinguishable set of multipartite orthogonal product states of size $2n$, which in essence can be projected onto the quantum system $\otimes_{i=1}^{n}\mathcal{C}^{2}$. Jiang et al. \cite{Jiang} proposed a simple method to construct a nonlocal set of orthogonal product states in the quantum system $\otimes_{i=1}^{n}\mathcal{C}^{d_{i}}$ $(n\geq 3,d_{i}\geq 2)$.
It is also shown that local indistinguishability is a crucial primitive for quantum data hiding \cite{Terhal,DiVincenzo,Eggeling} and quantum secret sharing \cite{Hillery,Guo,Hsu,Markham,Rahaman,JWang}. Recently, the concept of quantum nonlocality without entanglement was further developed \cite{Halder,Zhang2,Rout,Yuan,Shi1,Shi2,Shi3,Che,LiM,Bhunia,He}. Halder et al. \cite{Halder} presented a stronger manifestation of this kind of nonlocality in multiparty systems. Specifically, an orthogonal product set (OPS) on $\otimes_{i=1}^{n}\mathcal{C}^{d_{i}}(n\geq 3,d_{i}\geq 3)$ is defined to be strongly nonlocal if it is locally irreducible in every bipartition. Local irreducibility means that it is not possible to eliminate one or more states from the set by orthogonality-preserving local measurements \cite{Halder}. Soon after, Zhang et al. \cite{Zhang2} gave a more general definition of strong quantum nonlocality for multipartite quantum states, where the set is strongly nonlocal if it is locally irreducible in every $(n-1)$-partition. Naturally, a set of orthogonal quantum states which is locally irreducible in every bipartition exhibits the strongest manifestation of nonlocality. It is well known that entanglement is a very valuable resource which allows remote parties to communicate \cite{Gao3,Gao4} as in teleportation \cite{Bennett1,Gao1,Gao2}. In fact, a set of orthogonal quantum states with quantum nonlocality can always be perfectly discriminated by sharing additional entanglement resources among the parties \cite{Rout,Cohen1,Cohen2,Bandyopadhyay,Zhang4,Li,Zhang5}. Most generally, by using enough entanglement resources, we can teleport the full multipartite states to one of the parties by LOCC, and these states can then be determined by performing a suitable measurement. In 2008, Cohen \cite{Cohen2} proposed protocols using entanglement more efficiently than teleportation to distinguish certain classes of unextendible product bases (UPBs), where less entanglement is consumed in comparison to the teleportation-based method. Rout et al. \cite{Rout} studied local state discrimination protocols with Einstein-Podolsky-Rosen (EPR) states and Greenberger-Horne-Zeilinger (GHZ) states. Zhang et al. \cite{Zhang4,Zhang5} presented several protocols to locally distinguish particular UPBs by using different entanglement resources and proved that some sets can also be locally distinguished with multiple copies of EPR states. In this paper, we investigate OPSs with strong nonlocality. In Sec. \ref{Q2}, we introduce some notations and required preliminary concepts and results. In Sec. \ref{Q3}, we study a sufficient condition for the local irreducibility of an OPS and determine the smallest size of an OPS under some specific constraints. Next, in Sec. \ref{Q4}, we generalize the structure of given sets to higher-dimensional systems and construct a smaller OPS with the strongest quantum nonlocality in $\mathcal{C}^{d_{A}}\otimes \mathcal{C}^{d_{B}}\otimes \mathcal{C}^{d_{C}}$ $(d_{A},d_{B},d_{C}\geq 4)$. Furthermore, we also investigate the local distinguishability of our OPSs by using different entanglement resources in Sec. \ref{Q5}. Finally, we conclude with a brief summary in Sec. \ref{Q6}. \section{Preliminaries}\label{Q2} In this section, we introduce some definitions and notations needed in the rest of the paper. \emph{Definition 1} \cite{Walgate}. A measurement is trivial if all the POVM elements are proportional to the identity operator. Otherwise, the measurement is nontrivial.
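For instance, for a single qubit the two-outcome POVM $\{\frac{1}{2}\mathcal{I},\frac{1}{2}\mathcal{I}\}$ is trivial, whereas the projective measurement $\{|0\rangle\langle0|,|1\rangle\langle1|\}$ is nontrivial; a trivial measurement leaves every input state undisturbed up to normalization and therefore reveals no information about it.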
In an $n$-partite system, a set $\{|\varphi\rangle\}$ of orthogonal states is locally irreducible if the orthogonality-preserving POVM \cite{Halder} on any party can only be trivial. The converse does not hold in general. Let $X_{1}=\{2,3,\ldots,n\}$, $X_{2}=\{3,\ldots,n,1\}$, $X_{3}=\{4,\ldots,n,1,2\},\ldots,~X_{n}=\{1,2,\ldots,n-1\}$. \emph{Lemma~1} \cite{Shi2}. If the $X_{i}$ party can only perform a trivial orthogonality-preserving POVM for all $1\leq i\leq n$, then the set $\{|\varphi\rangle\}$ is of the strongest nonlocality \cite{Zhang2}. Let the $d\times d$ matrix $E=(a_{ij})_{i,j\in\mathcal{Z}_{d}}$ be the matrix representation of the operator $E=M^{\dagger}M$ in the basis $\mathcal{B}=\{|0\rangle,\ldots,|d-1\rangle\}$. Define \begin{equation} _{\mathcal{S}}E_{\mathcal{T}}=\sum_{|i\rangle\in\mathcal{S}}\sum_{|j\rangle\in\mathcal{T}}a_{ij}|i\rangle\langle j|, \end{equation} where $\mathcal{S}$ and $\mathcal{T}$ are two nonempty subsets of $\mathcal{B}$. In particular, $_{\mathcal{T}}E_{\mathcal{T}}$ is abbreviated as $E_{\mathcal{T}}$. Let $\{|\psi_{i}\rangle\}_{i=0}^{s-1}$ and $\{|\phi_{j}\rangle\}_{j=0}^{t-1}$ be two sets of mutually orthogonal states in the subspaces spanned by $\mathcal{S}$ and $\mathcal{T}$, respectively, where $s=|\mathcal{S}|$ and $t=|\mathcal{T}|$. \emph{Lemma~2} \cite{Shi2}. If the subsets $\mathcal{S}$ and $\mathcal{T}$ are disjoint and $\langle\psi_{i}|E|\phi_{j}\rangle=0$ for any $i\in\mathcal{Z}_{s}$, $j\in\mathcal{Z}_{t}$, then $_{\mathcal{S}}E_{\mathcal{T}}=\textbf{0}$ and $_{\mathcal{T}}E_{\mathcal{S}}=\textbf{0}$. \emph{Lemma~3} \cite{Shi2}. Suppose that $\langle\psi_{i}|E|\psi_{j}\rangle=0$ for any $i\neq j\in\mathcal{Z}_{s}$. If there exists a state $|i_{0}\rangle\in \mathcal{S}$ such that $_{\{|i_{0}\rangle\}}E_{\mathcal{S}\setminus\{|i_{0}\rangle\}}=\textbf{0}$ and $\langle i_{0}|\psi_{j}\rangle\neq 0$ for any $j\in\mathcal{Z}_{s}$, then $E_{\mathcal{S}}\propto \mathcal{I}_{s}$, i.e., $E_{\mathcal{S}}$ is proportional to the identity matrix. Consider an $n$-partite quantum system $\mathcal{H}=\otimes_{i=1}^{n}\mathcal{C}^{d_{i}}$. The computational basis of the whole quantum system is denoted by $\mathcal{B}=\{|i\rangle\}_{i=0}^{d_1d_2\cdots d_n -1}=\{\otimes_{k=1}^{n}|i_{k}\rangle ~ | ~ i_k=0, 1, \cdots, d_{k}-1 \}=\mathcal{B}^{\{1\}}\otimes \mathcal{B}^{\{2\}}\otimes \cdots\otimes \mathcal{B}^{\{n\}}$, where $\mathcal{B}^{\{k\}}=\{|i_{k}\rangle\}_{{i_k}=0}^{d_{k}-1}$ is the computational basis of the $k$th subsystem. Let \begin{equation} \mathcal{B}_{r}=\mathcal{B}_{r}^{\{1\}}\otimes \mathcal{B}_{r}^{\{2\}}\otimes \cdots\otimes \mathcal{B}_{r}^{\{n\}} \end{equation} be a subset of the basis $\mathcal{B}$ with $\mathcal{B}_{r}^{\{i\}}\subset \mathcal{B}^{\{i\}}$.
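For instance, in $\mathcal{C}^{3}\otimes\mathcal{C}^{3}$ the choice $\mathcal{B}_{r}^{\{1\}}=\{|0\rangle\}$ and $\mathcal{B}_{r}^{\{2\}}=\{|0\rangle,|1\rangle\}$ gives $\mathcal{B}_{r}=\{|0\rangle|0\rangle,|0\rangle|1\rangle\}$, which is exactly the support of the subset $S_{1}$ of Example 1 below. Similarly, taking $\mathcal{S}=\{|0\rangle,|1\rangle\}$ and $\mathcal{T}=\{|2\rangle\}$ in the notation above gives $_{\mathcal{S}}E_{\mathcal{T}}=a_{02}|0\rangle\langle2|+a_{12}|1\rangle\langle2|$ and $E_{\mathcal{S}}=\sum_{i,j\in\{0,1\}}a_{ij}|i\rangle\langle j|$.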
Suppose that $\mathcal{B}_{r}$ ($1\leq r \leq q$) are disjoint subsets of $\mathcal{B}$. Then there is a class of OPSs \begin{equation}\label{5} \begin{aligned} S=\cup_{r\in Q}S_{r},~~Q=\{1,2, \ldots,q\}, \end{aligned} \end{equation} in $\mathcal{H}$, where $S_{r}$ denotes an orthogonal product basis of the subspace spanned by $\mathcal{B}_{r}$, and each component of every vector in $S_{r}$ is nonzero with respect to the computational basis $\mathcal{B}_r$; that is, each vector $|\phi\rangle_{r}$ in $S_{r}$ has the form \begin{equation}\label{phi-r} \begin{aligned} |\phi\rangle_{r}=\bigg(\sum_{|j_{1}\rangle\in \mathcal{B}_{r}^{\{1\}}}a_{j_{1}}^{(1)}|j_{1}\rangle\bigg)\otimes\cdots\otimes\bigg(\sum_{|j_{n}\rangle\in \mathcal{B}_{r}^{\{n\}}}a_{j_{n}}^{(n)}|j_{n}\rangle\bigg) \end{aligned} \end{equation} with nonzero complex numbers $a_{j_{k}}^{(k)}$ for $k=1, 2, \ldots,n$. If the set $S$ is invariant under cyclic permutation of all subsystems, then we call it symmetric. A plane structure of the set $S$ refers to a two-dimensional grid diagram in which each subset $S_{r}$ corresponds to a domain. \emph{Example 1}. In $\mathcal{C}^{3}\otimes\mathcal{C}^{3}$, let \begin{equation}\label{2} \begin{aligned} &S_{1}=|0\rangle|0\pm 1\rangle, && S_{2}=|1\pm 2\rangle|0\rangle,\\ &S_{3}=|2\rangle|1\pm 2\rangle, && S_{4}=|0\pm 1\rangle|2\rangle, \end{aligned} \end{equation} the plane structure of the OPS \cite{Bennett} $S=\cup_{i=1}^4 S_i$ is depicted in Fig.~\ref{8}. The four dominoes in this geometric structure represent the four subsets $S_{1}$, $S_{2}$, $S_{3}$ and $S_{4}$, respectively. \begin{figure} \caption{The plane structure of the OPS given by Eq.~(\ref{2}).} \label{8} \end{figure} To facilitate establishing the connection between the nonlocality and the plane structure of a given set $S$, some notation is introduced. Given a subset $X$ of $\{1,2,\ldots,n\}$ and its complement $Y=\bar{X}$, we use $\mathcal{B}^{X}=\{|i\rangle_{X}\}_{i=0}^{d_{X}-1}$ with $d_X=\prod_{j\in X} d_j$ to represent the computational basis of the Hilbert space $\mathcal{H}_X=\otimes_{j\in X} \mathcal{C}^{d_j}$ corresponding to the $X$ party, and analogously $\mathcal{B}^{Y}=\{|i\rangle_{Y}\}_{i=0}^{d_{Y}-1}$ corresponding to the $Y$ party. Under the basis $\mathcal{B}$, the projection set of $S_{r}$ on the $\tau$ $(\tau=X,Y)$ party is defined as $S_{r}^{(\tau)}=\{\textup{Tr}_{\bar{\tau}}(|i\rangle\langle i|)~|~|i\rangle\in \mathcal{B}~\textup{and}~\langle i|\phi^{r}\rangle\neq 0~\textup{for~any}~|\phi^{r}\rangle\in S_{r}\}$. Naturally, the projection set $S_{r}^{(\tau)}$ is a subset of the basis $\mathcal{B}^{\tau}$. For a fixed $i\in\mathcal{Z}_{d_{X}}$, let $\mathcal{B}_{i}^{X}:=\{|k\rangle_{X}\}_{k=i}^{d_{X}-1}$, $V_{i}:=\{\bigcup_{v}S_{v}^{(Y)}~|~|i\rangle_{X}\in S_{v}^{(X)}\}$ and $\widetilde{S}_{V_{i}}:=\{\bigcup_{j}S_{j}^{(X)}~|~S_{j}^{(Y)}\cap V_{i}\neq \emptyset\}$. \emph{Example 2}. Consider the OPS given by Eq. (\ref{2}). $X$ and $Y$ represent $B$ and $A$, respectively. Observing its plane structure as shown in Fig.~\ref{8}, we see that the projection set of a subset on the $B$ $(\textup{or}~A)$ party is simply the set of coordinates of the corresponding domain on the $B$ $(\textup{or}~A)$ party.
We have \begin{equation}\label{3} \begin{aligned} &S_{1}^{(B)}=\{|0\rangle_{B},|1\rangle_{B}\}, && S_{2}^{(B)}=\{|0\rangle_{B}\},\\ &S_{3}^{(B)}=\{|1\rangle_{B},|2\rangle_{B}\}, && S_{4}^{(B)}=\{|2\rangle_{B}\},\\ \end{aligned} \end{equation} and \begin{equation}\label{4} \begin{aligned} &S_{1}^{(A)}=\{|0\rangle_{A}\}, && S_{2}^{(A)}=\{|1\rangle_{A},|2\rangle_{A}\},\\ &S_{3}^{(A)}=\{|2\rangle_{A}\}, && S_{4}^{(A)}=\{|0\rangle_{A},|1\rangle_{A}\}.\\ \end{aligned} \end{equation} For all $i\in\mathcal{Z}_{3}$, $\mathcal{B}_{i}^{B}$ is a subset of the basis $\mathcal{B}^B$ and $\mathcal{B}_{0}^{B}$ is equal to $\mathcal{B}^B$. It is easy to see that \begin{equation*} \begin{aligned} &\mathcal{B}_{0}^{B}=\{|0\rangle_{B},|1\rangle_{B},|2\rangle_{B}\},\\ &\mathcal{B}_{1}^{B}=\{|1\rangle_{B},|2\rangle_{B}\},\\ &\mathcal{B}_{2}^{B}=\{|2\rangle_{B}\}. \end{aligned} \end{equation*} Since $V_{i}$ is the union of the projection sets $S_{v}^{(A)}$ of those $S_v$ whose projection sets $S_{v}^{(B)}$ on the $B$ party contain the state $|i\rangle_{B}$, we have \begin{equation*} \begin{aligned} &V_{0}=S_{1}^{(A)}\cup S_{2}^{(A)}=\{|0\rangle_{A},|1\rangle_{A},|2\rangle_{A}\},\\ &V_{1}=S_{1}^{(A)}\cup S_{3}^{(A)}=\{|0\rangle_{A},|2\rangle_{A}\},\\ &V_{2}=S_{3}^{(A)}\cup S_{4}^{(A)}=\{|0\rangle_{A},|1\rangle_{A},|2\rangle_{A}\}. \end{aligned} \end{equation*} Note that every projection set $S_j^{(A)}$ contains a quantum state in $V_{i}$, so $\widetilde{S}_{V_{i}}$ is the union of all the projection sets $S_j^{(B)}$ of $S_j$ on the $B$ party. That is, \begin{equation*} \begin{aligned} &\widetilde{S}_{V_{0}}=\widetilde{S}_{V_{1}}=\widetilde{S}_{V_{2}}=\cup_{j=1}^{4}S_{j}^{(B)}=\{|0\rangle_{B},|1\rangle_{B},|2\rangle_{B}\}. \end{aligned} \end{equation*} \emph{Definition 2}. A family of projection sets $\{S_{r}^{(\tau)}\}_{r\in Q}$ is connected if it cannot be divided into two groups of sets $\{S_{k}^{(\tau)}\}_{k\in T}$ $(T\subsetneqq Q)$ and $\{S_{l}^{(\tau)}\}_{l\in Q\setminus T}$ such that \begin{equation}\label{1} \begin{aligned} \bigg(\bigcup\limits_{k\in T}S_{k}^{(\tau)}\bigg)\bigcap \bigg(\bigcup\limits_{l\in Q\setminus T}S_{l}^{(\tau)}\bigg)=\emptyset. \end{aligned} \end{equation} \emph{Definition 3}. $R_{r}=\bigcup_{k\in T}S_{k}$ $(r\notin T\subset Q)$ is called a projection inclusion (PI) set of $S_{r}$ on the $X$ party if the projection sets satisfy $\bigcap_{k\in T}S_{k}^{(Y)}\neq \emptyset$ and $S_{r}^{(X)}\subset \bigcup_{k\in T} S_{k}^{(X)}$. In particular, $R_{r}$ is called a more useful projection inclusion (UPI) set if there exists a subset $S_{k}\subset R_{r}$ such that $|S_{r}^{(X)}\bigcap S_{k}^{(X)}|=1$. By the definition, neither the PI set nor the UPI set of a subset $S_{r}$ of an OPS $S$ need be unique. By observing the plane tile as shown in Fig. \ref{8}, it is easy to see that both $S_1$ and $S_1\cup S_4$ are PI sets of $S_2$ in (\ref{2}) on the $B$ party, and $S_{2}\cup S_{3}$ is a PI set of $S_{1}$ on the $B$ party. Since $|S_{1}^{(B)}\cap S_{2}^{(B)}|=1$, these PI sets are also UPI sets. For the set $S$ in (\ref{5}), we construct a set sequence $G_{1},G_{2},\ldots,G_{s}$. The set $G_{1}$, denoted by $\cup_{r_{1}\in T_{1}}S_{r_{1}}$, is the union of all subsets $S_{r_{1}}$ that have UPI sets. The remaining sets $G_{2},\ldots,G_{s}$ are written as $\cup_{r_{2}\in T_{2}}S_{r_{2}},\ldots,\cup_{r_{s}\in T_{s}}S_{r_{s}}$, respectively. Moreover, this sequence also satisfies the following two conditions.
1) The sets $G_{x}$ are pairwise disjoint and the union of all sets is $S$. 2) For any $S_{r_{x+1}}\subset G_{x+1}$ $(x=1,\ldots,s-1)$, there is always a subset $S_{r_{x}}\subset G_{x}$ such that $S_{r_{x}}^{(X)}\cap S_{r_{x+1}}^{(X)}\neq\emptyset$. Note that such a set sequence $G_{1},G_{2},\ldots,G_{s}$ satisfying 1) and 2) above does not necessarily exist. In addition, we call $S_{r_{x}}$ an included (IC) subset with respect to the set $G_{x}$ $(x=1,\ldots,s)$ if there is a subset $S_{r_{x}'}\subset G_{x}$ such that $S_{r_{x}}^{(X)}\subsetneqq S_{r_{x}'}^{(X)}$. Otherwise, it is called a non-included (NIC) subset. \emph{Example 3}. We consider the OPS in (\ref{2}), where each subset has a corresponding UPI set \begin{equation} \begin{aligned} &R_{1}=S_{2}\cup S_{3},~ R_{2}=S_{1},~ R_{3}=S_{1}\cup S_{4},~ R_{4}=S_{3}. \end{aligned} \end{equation} So, there is only one set in its set sequence, which happens to be this OPS. That is, $G_{1}=\cup_{i=1}^{4}S_{i}$. \section{The sufficient condition for the triviality of orthogonality-preserving POVM and the smallest size of OPS under some constraints}\label{Q3} An important way to establish the irreducibility of an OPS is to prove that the orthogonality-preserving POVMs on the subsystems can only be trivial \cite{Jiang,Halder,Zhang2,Shi2,Yuan,Bhunia}. Here, we will present a sufficient condition for an orthogonality-preserving POVM to be trivial. In terms of the plane structure, this condition is effective for constructing OPSs with strong nonlocality and for demonstrating the irreducibility of a given OPS. \emph{Theorem~1}. For the given set $S$ in (\ref{5}), any orthogonality-preserving POVM performed on the $X$ party can only be trivial if the following conditions are satisfied. i) There is an inclusion relationship $\mathcal{B}_{i}^{X}\subset \widetilde{S}_{V_{i}}$ for any $i\in\mathcal{Z}_{d_{X}-1}$. ii) For any subset $S_{r}$, there exists a corresponding PI set $R_{r}$ on the $X$ party. iii) There is a set sequence $G_{1},\ldots,G_{s}$ satisfying 1) and 2). Moreover, for each NIC subset $S_{r_{x+1}}\subset G_{x+1}$, there exist a subset $S_{r_{x}}\subset G_{x}$ and a subset $S_{r_{x+1}'}\subset R_{r_{x+1}}$ such that $S_{r_{x}}^{(X)}\cap S_{r_{x+1}}^{(X)}\supset S_{r_{x+1}}^{(X)}\cap S_{r_{x+1}'}^{(X)}$ with $x=1,2,\ldots,s-1$. iv) The family of sets $\{S_{r}^{(X)}\}_{r\in Q}$ is connected. $Proof.$ Let $\{E\}$ be any orthogonality-preserving POVM performed on $X$. Without loss of generality, we assume \begin{equation} \begin{aligned} E=\begin{pmatrix} a_{00} & a_{01} & \cdots & a_{0(d_{X}-1)} \\ a_{10} & a_{11} & \cdots & a_{1(d_{X}-1)} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(d_{X}-1)0} & a_{(d_{X}-1)1} & \cdots & a_{(d_{X}-1)(d_{X}-1)} \end{pmatrix}, \end{aligned} \end{equation} in the computational basis $\mathcal{B}^{X}$. Because the postmeasurement states should be mutually orthogonal, for any two states $|\psi_{1}\rangle_{X}|\phi_{1}\rangle_{Y}$ and $|\psi_{2}\rangle_{X}|\phi_{2}\rangle_{Y}$ in $S$, we have $_{X}\langle\psi_{1}|_{Y}\langle\phi_{1}|E\otimes \mathcal{I}|\psi_{2}\rangle_{X}|\phi_{2}\rangle_{Y}=0$. If $\langle\phi_{1}|\phi_{2}\rangle_{Y}\neq 0$, then $_{X}\langle\psi_{1}|E|\psi_{2}\rangle_{X}=0$. Let $S_{r}^{\tau}=\{\textup{Tr}_{\bar{\tau}}(|\phi^{r}\rangle\langle\phi^{r} |)~|~|\phi^{r}\rangle\in S_{r}\}$ $(\tau=X,Y)$ denote the set of reduced density matrices.
For any two different subsets $S_{q_{1}}$ and $S_{q_{2}}$, if $S_{q_{1}}^{(Y)}\cap S_{q_{2}}^{(Y)}\neq \emptyset$, then $S_{q_{1}}^{(X)}\cap S_{q_{2}}^{(X)}= \emptyset$ and there always exist two states $|\phi_{q_{1}}\rangle_{Y}\in S_{q_{1}}^{Y}$ and $|\phi_{q_{2}}\rangle_{Y}\in S_{q_{2}}^{Y}$ such that $\langle \phi_{q_{1}}|\phi_{q_{2}}\rangle_{Y}\neq 0$. Due to the orthogonality-preserving property, we obtain $_{X}\langle \psi_{q_{1}}|E|\psi_{q_{2}}\rangle_{X}=0$ for all $|\psi_{q_{1}}\rangle_{X}\in S_{q_{1}}^{X}$ and $|\psi_{q_{2}}\rangle_{X}\in S_{q_{2}}^{X}$. According to Lemma 2, we deduce $_{S_{q_{1}}^{(X)}}E_{S_{q_{2}}^{(X)}}=\textbf{0}$. Using this result, we can prove that $E\propto \mathcal{I}$ by the following four steps. Here, figures \ref{51}-\ref{54} depict the process of proving. \emph{Step 1}. When $i=0$, we know $V_{0}=\{\cup_{v}S_{v}^{(Y)}~|~|0\rangle_{X}\in S_{v}^{(X)}\}$. For each $|j\rangle_{Y}\in V_{0}$, let $\{S_{j_{s}}\}_{j_{s}\in Q_{j}}$ $(Q_{j}\subset Q)$ represent the all subsets whose projection sets on $Y$ party contain the state $|j\rangle_{Y}$. Suppose $S_{j_{1}}$ is the subset such that $|0\rangle_{X}\in S_{j_{1}}^{(X)}$, then one has $_{S_{j_{1}}^{(X)}}E_{S_{j_{s}}^{(X)}}=\textbf{0}$ for any $s$ $(s\neq 1)$. By the definition and condition i), it is easy to derive $\cup_{j,s}S_{j_{s}}^{(X)}=\widetilde{S}_{V_{0}}=\mathcal{B}_{0}^{X}=\{|i\rangle_{X}\}_{i=0}^{d_{X}-1}$. Thus, we get $a_{0k_{0}}=a_{k_{0}0}=0$ for $|k_{0}\rangle_{X}\notin \widehat{V}_{0}$, where $\widehat{V}_{0}=\{\cup_{v}S_{v}^{(X)}~|~|0\rangle_{X}\in S_{v}^{(X)}\}$. See Fig. \ref{51}. Similarly, when $i=1,\ldots,d_{X}-2$, we obtain $a_{ik_{i}}=a_{k_{i}i}=0$ for $|k_{i}\rangle_{X}\notin \widehat{V}_{i}$ and $k_{i}>i$. Here $\widehat{V}_{i}=\{\cup_{v}S_{v}^{(X)}~|~|i\rangle_{X}\in S_{v}^{(X)}\}$. \emph{Step 2}. According to the condition ii), for each $r\in Q$, there exists a PI set $R_{r}=\bigcup_{t\in T_r}S_{t}$ ($r\notin T_r\subset Q$) of $S_{r}$ on $X$ party, where $\bigcap_{t\in T_r}S_{t}^{(Y)}\neq \emptyset$ and $S_{r}^{(X)}\subset \bigcup_{t\in T_r} S_{t}^{(X)}$. For any two different indexes $t_1$ and $t_2$ in $T_r$, it is not difficult to deduce that $a_{kl}=a_{lk}=0$ with $|k\rangle_{X}\in S_{t_1}^{(X)}\cap S_{r}^{(X)}$ and $|l\rangle_{X}\in S_{t_2}^{(X)}\cap S_{r}^{(X)}$ for $k\neq l$. \emph{Step 3}. For any subset $S_{r_{1}}$ in $G_{1}$, the corresponding set $R_{r_{1}}$ is UPI set. From Definition 3, there is a subset $S_{r_{1}'}\subset R_{r_{1}}$ such that $|S_{r_{1}}^{(X)}\cap S_{r_{1}'}^{(X)}|=1$. It is a special case in the step 2. Let $|k\rangle_{X}$ be the only one element of $S_{r_{1}}^{(X)}\cap S_{r_{1}'}^{(X)}$, then $a_{kl}=0$ for all $|l\rangle_{X}\in S_{r_{1}}^{(X)}\setminus \{|k\rangle_{X}\}$. Since each component of the vector in $S_{r_{1}}$ is nonzero under the computation basis $\mathcal{B}_{r_1}$ from (\ref{phi-r}), it is easy to know $\langle k|\psi\rangle_{X}\neq 0$ for any $|\psi\rangle_{X}\in S_{r_{1}}^{X}$. According to Lemma 3, we deduce $E_{r_{1}}=E_{S_{r_{1}}^{(X)}}\propto \mathcal{I}$. By the condition iii), for each NIC subset $S_{r_{2}}\subset G_{2}$, there exist a subset $S_{r_{1}}\subset G_{1}$ and a subset $S_{r_{2}'}\subset R_{r_{2}}$ such that $S_{r_{1}}^{(X)}\cap S_{r_{2}}^{(X)}\supset S_{r_{2}}^{(X)}\cap S_{r_{2}'}^{(X)}$. Then $a_{kl}=0$ for $|k\rangle_{X},|l\rangle_{X}\in S_{r_{2}}^{(X)}\cap S_{r_{2}'}^{(X)}$ and $k\neq l$. 
Combining this with Step 2 yields $a_{kl}=0$ for $|k\rangle_{X}\in S_{r_{2}}^{(X)}\cap S_{r_{2}'}^{(X)}$, $|l\rangle_{X}\in S_{r_{2}}^{(X)}$ and $k\neq l$. It follows from Lemma 3 that $E_{r_{2}}=E_{S_{r_{2}}^{(X)}}\propto \mathcal{I}$. For each IC subset $S_{r_{2}''}$, there is always a corresponding NIC subset $S_{r_{2}}$ that satisfies the inclusion relationship $S_{r_{2}''}^{(X)}\subsetneqq S_{r_{2}}^{(X)}$, which implies $E_{r_{2}''}=E_{S_{r_{2}''}^{(X)}}\propto \mathcal{I}$. Similarly, $E_{r}\propto \mathcal{I}$ for each $r$. That is, there is a positive real number $b_{r}$ such that $E_{r}=b_{r}\mathcal{I}$. See also Fig. \ref{52}. \emph{Step 4}. Consider the set $\widehat{V}_{0}=\{\cup_{v}S_{v}^{(X)}~|~|0\rangle_{X}\in S_{v}^{(X)}\}$ of Step 1. Since $E_{v}\propto \mathcal{I}$ for each such $v$, we have $a_{0k_{0}}=0$ for $|k_{0}\rangle_{X}\in \widehat{V}_{0}$ and $k_{0}\neq 0$. Combining this with Step 1 yields $a_{0k_{0}}=0$ for all $k_{0}>0$. We can obtain the similar result for the other $\widehat{V}_{i}$ $(i=1,\ldots,d_{X}-2)$. So, we deduce that the off-diagonal elements of $E$ are all zero. This is shown in Fig. \ref{53}. In addition, for any $x,y\in Q$, if $S_{x}^{(X)}\cap S_{y}^{(X)}\neq \emptyset$, then $b_{x}=b_{y}$. The condition iv) indicates that the family of sets $\{S_{r}^{(X)}\}_{r\in Q}$ is connected. This means that these scalars $b_{r}$ are all equal. Therefore, the POVM element can only be proportional to the unit operator $\mathcal{I}$. See also Fig. \ref{54}. ~ $\square$ \begin{figure} \caption{In step 1, taking $i=0$ as an example, we show that all the elements of $E$ in the first row and in the first column except those of $E_{\widehat{V}_{0}}$ are zero.} \label{51} \end{figure} \begin{figure} \caption{In steps 2 and 3, it is proved that the operator $E_{r}$ is proportional to the identity for each $r$.} \label{52} \end{figure} \begin{figure} \caption{Consider the operator $E_{\widehat{V}_{i}}$: in step 4 the off-diagonal elements of $E$ are shown to be zero.} \label{53} \end{figure} \begin{figure} \caption{It follows from condition iv) that the scalars $b_r$ are all equal. Then the diagonal entries of the POVM element $E$ are all equal, that is, $E=a_{00}\mathcal{I}$.} \label{54} \end{figure} \emph{Corollary~1}. If the conditions i)-iv) in Theorem 1 are satisfied for $X=X_1, X_2, \cdots, X_n$ with $X_j=\{1, 2, \cdots, j-1, j+1, \cdots, n\}$, then the set (\ref{5}) is an OPS of the strongest quantum nonlocality. Note that $E_{r}\propto \mathcal{I}$ obviously holds for each $r\in Q$ if the set $G_{1}$ is equal to the set $S$. That is, when the set sequence has only one set $G_{1}$, we still say that the condition iii) is valid. Next we will provide an example to show the application of this theorem on the plane structure. \emph{Example 4}. We revisit the quantum nonlocality of the following OPS \cite{Yuan} in $\mathcal{C}^{3}\otimes\mathcal{C}^{3}\otimes\mathcal{C}^{3}$ \begin{equation}\label{20} \begin{aligned} &S_{1}=\{|0\rangle|1\rangle|0\pm 1\rangle\}, && S_{7}=\{|0\rangle|2\rangle|0\pm 2\rangle\}, \\ &S_{2}=\{|1\rangle|0\pm 1\rangle|0\rangle\}, && S_{8}=\{|2\rangle|0\pm 2\rangle|0\rangle\}, \\ &S_{3}=\{|0\pm 1\rangle|0\rangle|1\rangle\}, && S_{9}=\{|0\pm 2\rangle|0\rangle|2\rangle\}, \\ &S_{4}=\{|1\rangle|2\rangle|0\pm 1\rangle\}, && S_{10}=\{|2\rangle|1\rangle|0\pm 2\rangle\}, \\ &S_{5}=\{|2\rangle|0\pm 1\rangle|1\rangle\}, && S_{11}=\{|1\rangle|0\pm 2\rangle|2\rangle\}, \\ &S_{6}=\{|0\pm 1\rangle|1\rangle|2\rangle\}, && S_{12}=\{|0\pm 2\rangle|2\rangle|1\rangle\}. \end{aligned} \end{equation} Due to Lemma 1 and the symmetry of the OPS given by Eq. (\ref{20}), we only need to consider the orthogonality-preserving POVM performed on the $BC$ party.
Fig.~\ref{9} shows the plane structure of this OPS in the $A|BC$ bipartition. By observing this tile graph, we can easily verify the four conditions of Theorem 1. \begin{figure} \caption{The corresponding $3\times 9$ grid of $\{S_{r}\}_{r=1}^{12}$ given by Eq.~(\ref{20}) in the $A|BC$ bipartition.} \label{9} \end{figure} First, the projection set $\cup_{r}S_{r}^{(ABC)}$ differs from the computational basis $\mathcal{B}$ only by the states $|000\rangle$, $|111\rangle$ and $|222\rangle$, so it is obvious that $\widetilde{S}_{V_{ij}}=\mathcal{B}^{BC}$ for $i,j\in\mathcal{Z}_{3}$. Here $\mathcal{B}^{BC}$ is the computational basis of the subsystem $BC$. Naturally, $\mathcal{B}_{ij}^{BC}\subset\widetilde{S}_{V_{ij}}$. The condition i) holds. Second, for each subset $S_{r}$, we have the corresponding PI sets $R_{1}=S_{5}\cup S_{10}$, $R_{2}=S_{8}\cup S_{10}$, $R_{3}=S_{5}$, $R_{4}=S_{8}\cup S_{12}$, $R_{5}=S_{1}\cup S_{3}$, $R_{6}=S_{10}$, $R_{7}=S_{4}\cup S_{11}$, $R_{8}=S_{2}\cup S_{4}$, $R_{9}=S_{11}$, $R_{10}=S_{2}\cup S_{6}$, $R_{11}=S_{7}\cup S_{9}$ and $R_{12}=S_{4}$. The condition ii) is thus verified. Furthermore, for any two subsets $S_{x}$ and $S_{y}$, we have $|S_{x}^{(BC)}\cap S_{y}^{(BC)}|\leq 1$. So, each $R_{r}$ is a UPI set, i.e., $G_{1}$ is the union of all subsets. It is obvious that the condition iii) holds. Finally, we find a sequence of projection sets $(S_{5}^{(BC)},~S_{10}^{(BC)})\rightarrow S_{1}^{(BC)}\rightarrow S_{2}^{(BC)}\rightarrow S_{8}^{(BC)}\rightarrow S_{4}^{(BC)}\rightarrow S_{7}^{(BC)}\rightarrow S_{11}^{(BC)}$. In this sequence, the intersection of the two sets on either side of each arrow is nonempty, and the union of these sets is the computational basis $\mathcal{B}^{BC}$. So it is impossible to divide all projection sets into two disjoint groups. That is, the family of projection sets $\{S_{r}^{(BC)}\}_{r=1}^{12}$ is connected. The condition iv) is satisfied. According to Theorem 1, we deduce that the POVM performed on the $BC$ party can only be trivial. Therefore, the OPS given by Eq. (\ref{20}) is of the strongest quantum nonlocality. For the same set as stated in Theorem 1, we have the following corollary. \emph{Corollary~2}. If any orthogonality-preserving POVM element performed on the $X$ party can only be proportional to the identity operator, then the set $\cup_{r\in Q}S_{r}^{(X)}$ is the basis $\mathcal{B}^{X}$ and the family of projection sets $\{S_{r}^{(X)}\}_{r\in Q}$ is connected. By using Corollary 2, in the systems $\mathcal{C}^{3}\otimes\mathcal{C}^{3}\otimes\mathcal{C}^{3}$ and $\mathcal{C}^{4}\otimes\mathcal{C}^{4}\otimes\mathcal{C}^{4}$, we can discuss the minimum size of the OPS given by Eq. (\ref{5}) under specific restrictions. Let $N$ denote the maximum size of all subsets, i.e., $N=\max_{r}|S_{r}|$. We have the following two theorems. \emph{Theorem~2}. In $\mathcal{C}^{3}\otimes\mathcal{C}^{3}\otimes\mathcal{C}^{3}$, for the set $S$ in (\ref{5}), if the set $S$ is symmetric and any orthogonality-preserving POVM performed on the $BC$ party can only be trivial, then the set $S$ is an OPS of the strongest nonlocality. The smallest size of such a set is 24. \emph{Theorem~3}. In $\mathcal{C}^{4}\otimes\mathcal{C}^{4}\otimes\mathcal{C}^{4}$, for the set $S$ in (\ref{5}), if $S$ is symmetric with $N=2$ and any orthogonality-preserving POVM element performed on the $BC$ party can only be proportional to the identity, then the set $S$ is an OPS of the strongest nonlocality. The smallest size of such a set $S$ is 48. The detailed proofs are given in Appendices \ref{B} and \ref{C}, respectively.
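As a quick consistency check, the symmetric OPS given by Eq. (\ref{20}) consists of twelve subsets with two states each, so its size is $\sum_{r=1}^{12}|S_{r}|=12\times2=24$ and it attains the bound of Theorem 2; likewise, the symmetric set given by Eq. (\ref{22}) in Sec. \ref{Q4} has $N=2$ and $24\times2=48$ states, matching the bound of Theorem 3.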
Theorems 2 and 3 show the minimum sizes of two kinds of OPSs with strong nonlocality, respectively. They are partial answers to an open question in Ref. \cite{Yuan}, ``Can we find the smallest strongly nonlocal set in $\mathcal{C}^{3}\otimes\mathcal{C}^{3}\otimes\mathcal{C}^{3}$, and more generally in any tripartite systems?''. \section{OPS with the strongest quantum nonlocality in $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}$ and $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}\otimes\mathcal{C}^{d_{D}}$}\label{Q4} From Theorem 1, we know that the nonlocality of an OPS is closely related to its plane structure. In this section, we will provide several strongly nonlocal OPSs in three- and four-partite systems. By extending the dimension of the grid in Fig. \ref{9}, we can generalize the structure of the set (\ref{20}) to any finite dimension. The OPS in $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}$ is described as \begin{equation}\label{21} \begin{aligned} &H_{1}=\{|0\rangle_{A}|\xi_{i}\rangle_{B}|\eta_{j}\rangle_{C}\}_{i,j}, \\ &H_{2}=\{|\xi_{i}\rangle_{A}|\eta_{j}\rangle_{B}|0\rangle_{C}\}_{i,j}, \\ &H_{3}=\{|\eta_{j}\rangle_{A}|0\rangle_{B}|\xi_{i}\rangle_{C}\}_{i,j},\\ &H_{4}=\{|\xi_{i}\rangle_{A}|d_{B}'\rangle_{B}|\eta_{j}\rangle_{C}\}_{i,j}, \\ &H_{5}=\{|d_{A}'\rangle_{A}|\eta_{j}\rangle_{B}|\xi_{i}\rangle_{C}\}_{i,j}, \\ &H_{6}=\{|\eta_{j}\rangle_{A}|\xi_{i}\rangle_{B}|d_{C}'\rangle_{C}\}_{i,j},\\ &H_{7}=\{|0\rangle_{A}|d_{B}'\rangle_{B}|0\pm d_{C}'\rangle_{C}\}, \\ &H_{8}=\{|d_{A}'\rangle_{A}|0\pm d_{B}'\rangle_{B}|0\rangle_{C}\}, \\ &H_{9}=\{|0\pm d_{A}'\rangle_{A}|0\rangle_{B}|d_{C}'\rangle_{C}\},\\ &H_{10}=\{|d_{A}'\rangle_{A}|\xi_{i}\rangle_{B}|0\pm d_{C}'\rangle_{C}\}_{i}, \\ &H_{11}=\{|\xi_{i}\rangle_{A}|0\pm d_{B}'\rangle_{B}|d_{C}'\rangle_{C}\}_{i}, \\ &H_{12}=\{|0\pm d_{A}'\rangle_{A}|d_{B}'\rangle_{B}|\xi_{i}\rangle_{C}\}_{i}, \end{aligned} \end{equation} where $|\xi_{i}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{iu}|u+1\rangle$, $|\eta_{j}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-2}\omega_{d_{\tau}-1}^{ju}|u\rangle$, $d_{\tau}'=d_{\tau}-1$ for $i\in \mathcal{Z}_{d_{\tau}-2}$, $j\in \mathcal{Z}_{d_{\tau}-1}$, and $\tau\in\{A,B,C\}$. Here and below we use the notation $\omega_{n}:=\textup{e}^{\frac{2\pi\textup{i}}{n}}$ for any positive integer $n$. Fig. \ref{10} is a geometric representation of this OPS in the $A|BC$ bipartition. We explain the strong nonlocality of the OPS (\ref{21}) in the following theorem. \begin{figure} \caption{The corresponding $d_{A}\times d_{B}d_{C}$ grid of the OPS given by Eq.~(\ref{21}) in the $A|BC$ bipartition.} \label{10} \end{figure} \emph{Theorem~4}. In $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}$, the set $\cup_{i=1}^{12}H_{i}$ given by Eq. (\ref{21}) is an OPS of the strongest nonlocality. The size of this set is $2[(d_{A}d_{B}+d_{B}d_{C}+d_{A}d_{C})-2(d_{A}+d_{B}+d_{C})+3]$. $Proof.$ We only need to discuss the orthogonality-preserving POVM performed on the $BC$ party. The tile structure is depicted in Fig. \ref{10}. Because the set $\cup_{i=1}^{12}H_{i}$ has the same structure as the set $\cup_{i=1}^{12}S_{i}$ given by Eq. (\ref{20}), the conditions i), ii) and iv) of Theorem 1 are obvious. Here $R_{1}=H_{5}\cup H_{10}$, $R_{2}=H_{8}\cup H_{10}$, $R_{3}=H_{5}$, $R_{4}=H_{8}\cup H_{12}$, $R_{5}=H_{1}\cup H_{3}$, $R_{6}=H_{10}$, $R_{7}=H_{4}\cup H_{11}$, $R_{8}=H_{2}\cup H_{4}$, $R_{9}=H_{11}$, $R_{10}=H_{2}\cup H_{6}$, $R_{11}=H_{7}\cup H_{9}$ and $R_{12}=H_{4}$. Now consider the condition iii).
It is not difficult to show that the set sequence \begin{equation*} \begin{aligned} &G_{1}=H_{2}\cup H_{4}\cup H_{7}\cup H_{8}\cup H_{9}\cup H_{11}, \\ &G_{2}=H_{1}\cup H_{10}\cup H_{12}, \\ &G_{3}=H_{5}\cup H_{6}, \\ &G_{4}=H_{3}, \end{aligned} \end{equation*} satisfies 1) and 2). Here each subset contained in $G_{x}$ $(x=2,3,4)$ is a NIC subset. For $H_{1}\subset G_{2}$, we find that there are $H_{2}\subset G_{1}$ and $H_{10}\subset R_{1}$ such that $H_{1}^{(BC)}\cap H_{2}^{(BC)}=H_{1}^{(BC)}\cap H_{10}^{(BC)}$. For the subsets $H_{10}$, $H_{12}$, $H_{5}$, $H_{6}$ and $H_{3}$, there are $H_{2}=G_{1}\cap R_{10}$, $H_{4}=G_{1}\cap R_{12}$, $H_{1}=G_{2}\cap R_{5}$, $H_{10}=G_{2}\cap R_{6}$ and $H_{5}=G_{3}\cap R_{3}$, respectively. It follows that the condition iii) in Theorem 1 holds. According to Theorem 1, the orthogonality-preserving POVM performed on the $BC$ party can only be trivial. Therefore, the set $\cup_{i=1}^{12}H_{i}$ given by Eq. (\ref{21}) is of the strongest nonlocality. $\square$ Applying Theorem 1, we propose a strongly nonlocal OPS in $\mathcal{C}^{4}\otimes\mathcal{C}^{4}\otimes\mathcal{C}^{4}$. The newly constructed OPS contains fewer quantum states than those in Refs. \cite{Yuan, Shi2}. The specific OPS is given by \begin{equation}\label{22} \begin{aligned} &S_{11}=\{|0\rangle|1\rangle|2\pm 3\rangle\}, && S_{51}=\{|1\rangle|3\rangle|2\pm 3\rangle\},\\ &S_{12}=\{|1\rangle|2\pm 3\rangle|0\rangle\}, && S_{52}=\{|3\rangle|2\pm 3\rangle|1\rangle\},\\ &S_{13}=\{|2\pm 3\rangle|0\rangle|1\rangle\}, && S_{53}=\{|2\pm 3\rangle|1\rangle|3\rangle\},\\ &S_{21}=\{|0\rangle|2\rangle|1\pm 2\rangle\}, && S_{61}=\{|2\rangle|3\rangle|1\pm 2\rangle\},\\ &S_{22}=\{|2\rangle|1\pm 2\rangle|0\rangle\}, && S_{62}=\{|3\rangle|1\pm 2\rangle|2\rangle\},\\ &S_{23}=\{|1\pm 2\rangle|0\rangle|2\rangle\}, && S_{63}=\{|1\pm 2\rangle|2\rangle|3\rangle\},\\ &S_{31}=\{|0\rangle|3\rangle|0\pm 2\rangle\}, && S_{71}=\{|3\rangle|0\rangle|2\pm 3\rangle\},\\ &S_{32}=\{|3\rangle|0\pm 2\rangle|0\rangle\}, && S_{72}=\{|0\rangle|2\pm 3\rangle|3\rangle\},\\ &S_{33}=\{|0\pm 2\rangle|0\rangle|3\rangle\}, && S_{73}=\{|2\pm 3\rangle|3\rangle|0\rangle\},\\ &S_{41}=\{|1\rangle|0\rangle|0\pm 1\rangle\}, && S_{81}=\{|3\rangle|1\rangle|0\pm 1\rangle\},\\ &S_{42}=\{|0\rangle|0\pm 1\rangle|1\rangle\}, && S_{82}=\{|1\rangle|0\pm 1\rangle|3\rangle\},\\ &S_{43}=\{|0\pm 1\rangle|1\rangle|0\rangle\}, && S_{83}=\{|0\pm 1\rangle|3\rangle|1\rangle\}. \end{aligned} \end{equation} A geometric representation of this OPS in the $A|BC$ bipartition is depicted in Fig. \ref{11}. \begin{figure} \caption{The corresponding $4\times 16$ grid of $\{S_{ij}\}$ given by Eq.~(\ref{22}) in the $A|BC$ bipartition.} \label{11} \end{figure} \emph{Theorem~5}. In $\mathcal{C}^{4}\otimes\mathcal{C}^{4}\otimes\mathcal{C}^{4}$, the set $\cup_{i=1}^{8}(\cup_{j=1}^{3}S_{ij})$ given by Eq. (\ref{22}) is of the strongest nonlocality. The size of this set is 48. The detailed proof is shown in Appendix \ref{E}. Up to now, we have constructed a strongly nonlocal OPS containing 48 states in $\mathcal{C}^{4}\otimes\mathcal{C}^{4}\otimes\mathcal{C}^{4}$, which is 6 and 8 states fewer than the sets presented in Refs. \cite{Yuan} and \cite{Shi2}, respectively. Next, we generalize the structures of the OPSs given by Eq. (\ref{22}) and Ref. \cite{Yuan} to the systems $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}$ and $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}\otimes\mathcal{C}^{d_{D}}$, respectively.
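Before doing so, we record a small numerical sanity check (not needed for, and not part of, the proof of Theorem 5): the pairwise orthogonality of the 48 states in Eq. (\ref{22}) can be verified mechanically. The short Python sketch below transcribes Eq. (\ref{22}); the helper \texttt{ket} and the list \texttt{templates} are ad hoc bookkeeping for this check only, where an integer entry $a$ stands for the local basis state $|a\rangle$ and a pair $(a,b)$ stands for the two local states $|a\pm b\rangle$. It builds the 48 product vectors and confirms that their Gram matrix is diagonal.
\begin{verbatim}
# Sanity check for Eq. (22): build the 48 product states in C^4 x C^4 x C^4
# and verify that they are pairwise orthogonal.
import numpy as np

def ket(spec, sign=+1, d=4):
    # Local (unnormalized) vector: |a> for an integer, |a> + sign*|b> for a pair.
    v = np.zeros(d)
    if isinstance(spec, int):
        v[spec] = 1.0
    else:
        a, b = spec
        v[a], v[b] = 1.0, float(sign)
    return v

templates = [  # one entry per subset S_ij of Eq. (22)
    (0, 1, (2, 3)), (1, (2, 3), 0), ((2, 3), 0, 1),   # S_11, S_12, S_13
    (0, 2, (1, 2)), (2, (1, 2), 0), ((1, 2), 0, 2),   # S_21, S_22, S_23
    (0, 3, (0, 2)), (3, (0, 2), 0), ((0, 2), 0, 3),   # S_31, S_32, S_33
    (1, 0, (0, 1)), (0, (0, 1), 1), ((0, 1), 1, 0),   # S_41, S_42, S_43
    (1, 3, (2, 3)), (3, (2, 3), 1), ((2, 3), 1, 3),   # S_51, S_52, S_53
    (2, 3, (1, 2)), (3, (1, 2), 2), ((1, 2), 2, 3),   # S_61, S_62, S_63
    (3, 0, (2, 3)), (0, (2, 3), 3), ((2, 3), 3, 0),   # S_71, S_72, S_73
    (3, 1, (0, 1)), (1, (0, 1), 3), ((0, 1), 3, 1),   # S_81, S_82, S_83
]

states = []
for tpl in templates:
    for sign in (+1, -1):
        f = [ket(x, sign) for x in tpl]
        states.append(np.kron(np.kron(f[0], f[1]), f[2]))

gram = np.array([[u @ v for v in states] for u in states])
print(len(states))                                        # expected: 48
print(np.allclose(gram - np.diag(np.diag(gram)), 0.0))    # expected: True
\end{verbatim}
An analogous check, with the Fourier-type local factors appearing in Eqs. (\ref{21}) and (\ref{23}), can be set up in the same way.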
In the quantum system $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}$ $(d_{A},d_{B},d_{C}\geq 4)$, consider the following OPS \begin{equation}\label{23} \begin{aligned} &H_{11}=\{|0\rangle_{A}|1\rangle_{B}|\alpha_{3}^{l}\rangle_{C}\}_{l}, \\ &H_{12}=\{|1\rangle_{A}|\alpha_{3}^{l}\rangle_{B}|0\rangle_{C}\}_{l}, \\ &H_{13}=\{|\alpha_{3}^{l}\rangle_{A}|0\rangle_{B}|1\rangle_{C}\}_{l}, \\ &H_{21}=\{|0\rangle_{A}|\alpha^{i}\rangle_{B}|\alpha_{1}^{k}\rangle_{C}\}_{i,k}, \\ &H_{22}=\{|\alpha^{i}\rangle_{A}|\alpha_{1}^{k}\rangle_{B}|0\rangle_{C}\}_{i,k}, \\ &H_{23}=\{|\alpha_{1}^{k}\rangle_{A}|0\rangle_{B}|\alpha^{i}\rangle_{C}\}_{i,k}, \\ &H_{31}=\{|0\rangle_{A}|d_{B}'\rangle_{B}|\alpha_{0}^{j}\rangle_{C}\}_{j}, \\ &H_{32}=\{|d_{A}'\rangle_{A}|\alpha_{0}^{j}\rangle_{B}|0\rangle_{C}\}_{j}, \\ &H_{33}=\{|\alpha_{0}^{j}\rangle_{A}|0\rangle_{B}|d_{C}'\rangle_{C}\}_{j}, \\ &H_{41}=\{|1\rangle_{A}|0\rangle_{B}|0\pm 1\rangle_{C}\}, \\ &H_{42}=\{|0\rangle_{A}|0\pm 1\rangle_{B}|1\rangle_{C}\}, \\ &H_{43}=\{|0\pm 1\rangle_{A}|1\rangle_{B}|0\rangle_{C}\}, \\ &H_{51}=\{|1\rangle_{A}|d_{B}'\rangle_{B}|\alpha_{3}^{l}\rangle_{C}\}_{l}, \\ &H_{52}=\{|d_{A}'\rangle_{A}|\alpha_{3}^{l}\rangle_{B}|1\rangle_{C}\}_{l}, \\ &H_{53}=\{|\alpha_{3}^{l}\rangle_{A}|1\rangle_{B}|d_{C}'\rangle_{C}\}_{l}, \\ &H_{61}=\{|\alpha^{i}\rangle_{A}|d_{B}'\rangle_{B}|\alpha_{1}^{k}\rangle_{C}\}_{i,k}, \\ &H_{62}=\{|d_{A}'\rangle_{A}|\alpha_{1}^{k}\rangle_{B}|\alpha^{i}\rangle_{C}\}_{i,k}, \\ &H_{63}=\{|\alpha_{1}^{k}\rangle_{A}|\alpha^{i}\rangle_{B}|d_{C}'\rangle_{C}\}_{i,k}, \\ &H_{71}=\{|d_{A}'\rangle_{A}|0\rangle_{B}|\alpha_{3}^{l}\rangle_{C}\}_{l}, \\ &H_{72}=\{|0\rangle_{A}|\alpha_{3}^{l}\rangle_{B}|d_{C}'\rangle_{C}\}_{l}, \\ &H_{73}=\{|\alpha_{3}^{l}\rangle_{A}|d_{B}'\rangle_{B}|0\rangle_{C}\}_{l}, \\ &H_{81}=\{|d_{A}'\rangle_{A}|1\rangle_{B}|0\pm 1\rangle_{C}\}, \\ &H_{82}=\{|1\rangle_{A}|0\pm 1\rangle_{B}|d_{C}'\rangle_{C}\}, \\ &H_{83}=\{|0\pm 1\rangle_{A}|d_{B}'\rangle_{B}|1\rangle_{C}\}. \end{aligned} \end{equation} Here $|\alpha^{i}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-4}\omega_{d_{\tau}-3}^{iu}|u+2\rangle$, $|\alpha_{0}^{j}\rangle_{\tau}=|0\rangle+\sum_{u=1}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{ju}|u+1\rangle$, $|\alpha_{1}^{k}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{ku}|u+1\rangle$, $|\alpha_{3}^{l}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{lu}|u+2\rangle$, $d_{\tau}'=d_{\tau}-1$ for $i\in \mathcal{Z}_{d_{\tau}-3}$, $j,k,l\in \mathcal{Z}_{d_{\tau}-2}$ and $\tau=A,B,C$. Since the above OPS has the same structure as the set (\ref{22}), we find that it is strongly nonlocal. \emph{Theorem~6}. In $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}$, the set $\cup_{i=1}^{8}(\cup_{j=1}^{3}H_{ij})$ given by Eq. (\ref{23}) is an OPS of the strongest nonlocality. The size of this set is $2[(d_{A}d_{B}+d_{B}d_{C}+d_{A}d_{C})-3(d_{A}+d_{B}+d_{C})+12]$. The detailed proof is in Appendix \ref{F}. In $\mathcal{C}^{d}\otimes\mathcal{C}^{d}\otimes\mathcal{C}^{d}$, the size $6[(d-1)^2-d+3]$ of the strongly nonlocal OPS of Theorem 6 is strictly smaller, by exactly $6(d-3)$ states, than the size $6(d-1)^{2}$ of the strongly nonlocal OPS in Ref. \cite{Yuan}.
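For concreteness, setting $d_{A}=d_{B}=d_{C}=d$ in Theorem 6 gives $2[3d^{2}-9d+12]=6[(d-1)^{2}-d+3]$ states, so that \begin{equation*} 6(d-1)^{2}-6\big[(d-1)^{2}-d+3\big]=6(d-3); \end{equation*} for $d=4$ this recovers the $48$ states of Theorem 5 against the $54$ states of the corresponding OPS in Ref. \cite{Yuan}.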
Similarly, we propose the following OPS in $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}\otimes\mathcal{C}^{d_{D}}$ \begin{equation}\label{24} \begin{aligned} &U_{11}=\{|0\rangle_{A}|\xi_{i}\rangle_{B}|\eta_{j}\rangle_{C}|0\pm d_{D}'\rangle_{D}\}_{i,j}, \\ &U_{12}=\{|\xi_{i}\rangle_{A}|\eta_{j}\rangle_{B}|0\pm d_{C}'\rangle_{C}|0\rangle_{D}\}_{i,j}, \\ &U_{13}=\{|\eta_{j}\rangle_{A}|0\pm d_{B}'\rangle_{B}|0\rangle_{C}|\xi_{i}\rangle_{D}\}_{i,j}, \\ &U_{14}=\{|0\pm d_{A}'\rangle_{A}|0\rangle_{B}|\xi_{i}\rangle_{C}|\eta_{j}\rangle_{D}\}_{i,j}, \\ &U_{21}=\{|\xi_{i}\rangle_{A}|d_{B}'\rangle_{B}|\gamma_{k}\rangle_{C}|\eta_{j}\rangle_{D}\}_{i,j,k}, \\ &U_{22}=\{|d_{A}'\rangle_{A}|\gamma_{k}\rangle_{B}|\eta_{j}\rangle_{C}|\xi_{i}\rangle_{D}\}_{i,j,k}, \\ &U_{23}=\{|\gamma_{k}\rangle_{A}|\eta_{j}\rangle_{B}|\xi_{i}\rangle_{C}|d_{D}'\rangle_{D}\}_{i,j,k}, \\ &U_{24}=\{|\eta_{j}\rangle_{A}|\xi_{i}\rangle_{B}|d_{C}'\rangle_{C}|\gamma_{k}\rangle_{D}\}_{i,j,k}, \\ &U_{31}=\{|d_{A}'\rangle_{A}|0\rangle_{B}|0\pm d_{C}'\rangle_{C}|\gamma_{k}\rangle_{D}\}_{k}, \\ &U_{32}=\{|0\rangle_{A}|0\pm d_{B}'\rangle_{B}|\gamma_{k}\rangle_{C}|d_{D}'\rangle_{D}\}_{k}, \\ &U_{33}=\{|0\pm d_{A}'\rangle_{A}|\gamma_{k}\rangle_{B}|d_{C}'\rangle_{C}|0\rangle_{D}\}_{k}, \\ &U_{34}=\{|\gamma_{k}\rangle_{A}|d_{B}'\rangle_{B}|0\rangle_{C}|0\pm d_{D}'\rangle_{D}\}_{k}, \\ &U_{41}=\{|\xi_{i}\rangle_{A}|\xi_{i}\rangle_{B}|0\rangle_{C}|\gamma_{k}\rangle_{D}\}_{i|_{A},i|_{B},k}, \\ &U_{42}=\{|\xi_{i}\rangle_{A}|0\rangle_{B}|\gamma_{k}\rangle_{C}|\xi_{i}\rangle_{D}\}_{i|_{A},i|_{D},k}, \\ &U_{43}=\{|0\rangle_{A}|\gamma_{k}\rangle_{B}|\xi_{i}\rangle_{C}|\xi_{i}\rangle_{D}\}_{i|_{C},i|_{D},k}, \\ &U_{44}=\{|\gamma_{k}\rangle_{A}|\xi_{i}\rangle_{B}|\xi_{i}\rangle_{C}|0\rangle_{D}\}_{i|_{B},i|_{C},k}, \\ &U_{51}=\{|d_{A}'\rangle_{A}|d_{B}'\rangle_{B}|\xi_{i}\rangle_{C}|0\pm d_{D}'\rangle_{D}\}_{i}, \\ &U_{52}=\{|d_{A}'\rangle_{A}|\xi_{i}\rangle_{B}|0\pm d_{C}'\rangle_{C}|d_{D}'\rangle_{D}\}_{i}, \\ &U_{53}=\{|\xi_{i}\rangle_{A}|0\pm d_{B}'\rangle_{B}|d_{C}'\rangle_{C}|d_{D}'\rangle_{D}\}_{i}, \\ &U_{54}=\{|0\pm d_{A}'\rangle_{A}|d_{B}'\rangle_{B}|d_{C}'\rangle_{C}|\xi_{i}\rangle_{D}\}_{i}, \\ &U_{61}=\{|0\rangle_{A}|0\rangle_{B}|d_{C}'\rangle_{C}|\eta_{j}\rangle_{D}\}_{j}, \\ &U_{62}=\{|0\rangle_{A}|d_{B}'\rangle_{B}|\eta_{j}\rangle_{C}|0\rangle_{D}\}_{j}, \\ &U_{63}=\{|d_{A}'\rangle_{A}|\eta_{j}\rangle_{B}|0\rangle_{C}|0\rangle_{D}\}_{j}, \\ &U_{64}=\{|\eta_{j}\rangle_{A}|0\rangle_{B}|0\rangle_{C}|d_{D}'\rangle_{D}\}_{j}, \\ &U_{71}=\{|0\rangle_{A}|\xi_{i}\rangle_{B}|0\rangle_{C}|\xi_{i}\rangle_{D}\}_{i|_{B},i|_{D}}, \\ &U_{72}=\{|\xi_{i}\rangle_{A}|0\rangle_{B}|\xi_{i}\rangle_{C}|0\rangle_{D}\}_{i|_{A},i|_{C}}, \\ &U_{81}=\{|0\rangle_{A}|d_{B}'\rangle_{B}|0\rangle_{C}|d_{D}'\rangle_{D}\}, \\ &U_{82}=\{|d_{A}'\rangle_{A}|0\rangle_{B}|d_{C}'\rangle_{C}|0\rangle_{D}\}, \\ &U_{91}=\{|\xi_{i}\rangle_{A}|d_{B}'\rangle_{B}|\xi_{i}\rangle_{C}|d_{D}'\rangle_{D}\}_{i|_{A},i|_{C}}, \\ &U_{92}=\{|d_{A}'\rangle_{A}|\xi_{i}\rangle_{B}|d_{C}'\rangle_{C}|\xi_{i}\rangle_{D}\}_{i|_{B},i|_{D}}, \end{aligned} \end{equation} where $|\xi_{i}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{iu}|u+1\rangle$, $|\eta_{j}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-2}$ $\omega_{d_{\tau}-1}^{ju}|u\rangle$, $|\gamma_{k}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-2}\omega_{d_{\tau}-1}^{ku}|u+1\rangle$, $d_{\tau}'=d_{\tau}-1$ for $i\in \mathcal{Z}_{d_{\tau}-2}$, $j,k\in \mathcal{Z}_{d_{\tau}-1}$, and $\tau=A,B,C,D$. \emph{Theorem~7}. 
In the system $\mathcal{C}^{d_{A}}\otimes\mathcal{C}^{d_{B}}\otimes\mathcal{C}^{d_{C}}\otimes\mathcal{C}^{d_{D}}$, the set $\{\cup_{i=1}^{6}(\cup_{j=1}^{4}U_{ij})\}\cup\{\cup_{i=7}^{9}(\cup_{j=1}^{2}U_{ij})\}$ given by Eq. (\ref{24}) is an OPS of the strongest nonlocality. The size of this set is $d_{A}d_{B}d_{C}d_{D}-(d_{A}-2)(d_{B}-2)(d_{C}-2)(d_{D}-2)-2$.

The detailed proof is shown in Appendix \ref{G}. It is worth noting that the set (\ref{24}) is still of the strongest nonlocality even though it contains fewer quantum states than the set in Ref. \cite{Yuan}. Moreover, its size is smaller than that of the strongly nonlocal OPS in Ref. \cite{Shi2}. Each of Theorems 2-7 gives a positive answer to the open problem raised in Ref. \cite{Halder}, namely ``whether incomplete orthogonal product bases can be strongly nonlocal.''

\section{Entanglement-assisted discrimination}\label{Q5}

The above OPSs cannot be distinguished under LOCC even if any $n-1$ parties are allowed to come together. However, the discrimination becomes possible when enough entanglement resources are supplied. Let $|\phi^{+}(d)\rangle$ denote the maximally entangled state $\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|ii\rangle$ in $\mathcal{C}^{d}\otimes \mathcal{C}^{d}$. Let $(s,|\phi^{+}(d)\rangle_{AB})$ denote a resource configuration, which means that on average an amount $s$ of the two-qudit maximally entangled state is consumed between Alice and Bob. In this section, we present several different entanglement-assisted discrimination protocols. Without loss of generality, from now on we only consider the case $d_{A}\geq d_{B}\geq d_{C}\geq d_{D}$.

\emph{Theorem~8}. The entanglement resource configuration $\{(1,|\phi^{+}(2)\rangle_{AB});(1,|\phi^{+}(d_{C})\rangle_{BC})\}$ is sufficient for local discrimination of the set ({\ref{21}}).

The detailed process is provided in Appendix \ref{H}. In this protocol, we use quantum teleportation once and consume $(1+\log_{2} d_{C})$ ebits of entanglement in total. This is strictly less than the amount consumed in the protocol which teleports all subsystems to one party (for instance, teleporting both $B$ and $C$ to Alice costs $\log_{2}d_{B}+\log_{2}d_{C}$ ebits, and $1<\log_{2}d_{B}$ since $d_{B}\geq 3$). Next, we discuss the local discrimination of the OPS ({\ref{21}}) without teleportation.

\emph{Theorem~9}. When all the parties are separated, the set $\cup_{i=1}^{12}H_{i}$ given by Eq.~({\ref{21}}) can be locally distinguished by using the entanglement resource $\{(s,|\phi^{+}(2)\rangle_{AB});$ $(1,|\phi^{+}(2)\rangle_{AC})\}$, where $s=1+\frac{e-3f+6}{2e-4f+6}$ for $e=d_{A}d_{B}+d_{A}d_{C}+d_{B}d_{C}$ and $f=d_{A}+d_{B}+d_{C}$.

The specific process is given in Appendix \ref{I}. The entanglement consumed in this protocol is $(1+s)$ ebits; since $s<1.5<\log_{2} d_{C}$, this is less than the resource used in Theorem 8. Since the set (\ref{22}) is a special case of (\ref{23}) and they have the same structure, we only need to consider the entanglement-assisted discrimination protocols for the set ({\ref{23}}).

\emph{Theorem~10}. The set $\cup_{i=1}^{8}(\cup_{j=1}^{3}H_{ij})$ given by Eq. ({\ref{23}}) can be locally distinguished by using the entanglement resource configuration $\{(1,|\phi^{+}(2)\rangle_{AB});(1,|\phi^{+}(d_{C})\rangle_{BC})\}$.

\emph{Theorem~11}. The set $\cup_{i=1}^{8}(\cup_{j=1}^{3}H_{ij})$ given by Eq. ({\ref{23}}) can be locally distinguished by using the entanglement resource configuration $\{(1,|\phi^{+}(4)\rangle_{AB});(1,|\phi^{+}(2)\rangle_{AC})\}$.

The detailed proofs of Theorems 10 and 11 are given in Appendices \ref{J} and \ref{K}, respectively.
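Before comparing the last two configurations, we illustrate the entanglement saving claimed in Theorem 9 with a concrete example, taking $d_{A}=d_{B}=d_{C}=4$ for definiteness, so that $e=48$ and $f=12$:
\begin{equation*}
s=1+\frac{e-3f+6}{2e-4f+6}=1+\frac{18}{54}=\frac{4}{3},
\qquad
1+s=\frac{7}{3}\approx 2.33<3=1+\log_{2}d_{C},
\end{equation*}
i.e., in this case the teleportation-free protocol of Theorem 9 indeed consumes less entanglement than the protocol of Theorem 8.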
The protocol in Theorem 10 uses teleportation, while the protocol in Theorem 11 does not. Clearly, $1+\log_{2} d_{C}$ ebits of entanglement are consumed in the former protocol, which is not less than the $3$ ebits used in the latter protocol, because $d_{C}\geq 4$. In other words, the latter resource configuration is more efficient when the smallest dimension $d_{C}$ is greater than $4$. Next, following the method presented by Zhang et al. in Ref. \cite{Zhang5}, namely using multiple copies of EPR states instead of high-dimensional entangled states, we obtain a new resource configuration.

\emph{Theorem~12}. The entanglement resource configuration $\{(2,|\phi^{+}(2)\rangle_{AB});(1,|\phi^{+}(2)\rangle_{AC})\}$ is sufficient for local discrimination of the set $\cup_{i=1}^{8}(\cup_{j=1}^{3}H_{ij})$ given by Eq. ({\ref{23}}).

In fact, using two EPR states has the same effect as using one maximally entangled state $|\phi^{+}(4)\rangle_{AB}$. In the ancillary system of one party, $|00\rangle$, $|01\rangle$, $|10\rangle$ and $|11\rangle$ can correspond to $|0\rangle$, $|1\rangle$, $|2\rangle$ and $|3\rangle$, respectively. For the detailed procedure, please refer to Appendix \ref{L}. This also shows that, in similar discrimination protocols, we can replace a maximally entangled state $|\phi^{+}(d)\rangle$ with $n$ EPR states whenever $2^{n}\geq d$. Although more resources may be used, this method should be easier to implement in real experiments because it only requires a device that produces two-qubit maximally entangled states. Besides, we also obtain several entanglement resource configurations for discriminating the set ({\ref{24}}) by LOCC.

\emph{Theorem~13}. The entanglement resource configuration $\{(1,|\phi^{+}(3)\rangle_{AB});(1,|\phi^{+}(d_{C})\rangle_{BC});(1,|\phi^{+}(d_{D})\rangle_{BD})\}$ is sufficient for local discrimination of the set $\{\cup_{i=1}^{6}(\cup_{j=1}^{4}U_{ij})\}\cup\{\cup_{i=7}^{9}(\cup_{j=1}^{2}U_{ij})\}$ given by Eq. ({\ref{24}}).

The protocol of Theorem 13 is given in Appendix \ref{M}.

\emph{Theorem~14}. Any one of the resource configurations $\{(1,|\phi^{+}(3)\rangle_{AB});(1,|\phi^{+}(3)\rangle_{AC});(1,|\phi^{+}(3)\rangle_{AD})\}$ and $\{(2,|\phi^{+}(2)\rangle_{AB});(2,|\phi^{+}(2)\rangle_{AC});(2,|\phi^{+}(2)\rangle_{AD})\}$ is sufficient for local discrimination of the set ({\ref{24}}).

We will not repeat the protocol of Theorem 14, because it is similar to that of Theorems 11 and 12. In Theorem 13, we perform quantum teleportation twice and consume $\log_{2}(3d_{C}d_{D})$ ebits of entanglement. In comparison, the first configuration of Theorem 14 is more efficient because $\log_{2} 27\leq \log_{2}(3d_{C}d_{D})$, and the second configuration is simpler because it only needs multiple EPR states.

\section{Conclusion}\label{Q6}

We have investigated OPSs with strong quantum nonlocality in multipartite quantum systems through a decomposition based on plane geometry. Sufficient conditions for the triviality of an orthogonality-preserving POVM on a fixed subsystem are presented. We have determined the minimum size of strongly nonlocal OPSs under some restrictions in $\mathcal{C}^{3}\otimes \mathcal{C}^{3}\otimes \mathcal{C}^{3}$ and $\mathcal{C}^{4}\otimes \mathcal{C}^{4}\otimes \mathcal{C}^{4}$, which partially answers an open question in Ref. \cite{Yuan}: ``Can we find the smallest strongly nonlocal set in $\mathcal{C}^{3}\otimes\mathcal{C}^{3}\otimes\mathcal{C}^{3}$, and more generally in any tripartite systems?''.
Furthermore, we have constructed a smaller OPS with the strongest nonlocality in $\mathcal{C}^{d_{A}}\otimes \mathcal{C}^{d_{B}}\otimes \mathcal{C}^{d_{C}}$ $(d_{A},d_{B},d_{C}\geq 4)$ and generalized the previously known structures of strongly nonlocal OPSs to all possible three- and four-partite systems. In addition, we have studied local discrimination protocols for our OPSs with different types of entangled resources. Among them, three protocols only need multiple copies of EPR states. We found that the protocols without teleportation can be more efficient on average. Moreover, our results may also be helpful for a better understanding of the properties of maximally entangled states.

\begin{acknowledgments}
This work was supported by the National Natural Science Foundation of China under Grant Nos. 12071110 and 62271189, the Hebei Natural Science Foundation of China under Grant No. A2020205014, the Science and Technology Project of Hebei Education Department under Grant Nos. ZD2020167 and ZD2021066, and funded by School of Mathematical Sciences of Hebei Normal University under Grant No. 2021sxbs002.
\end{acknowledgments}

\begin{appendix}

\section{The proof of theorem 2}\label{B}

According to Corollary 2, the union $\cup_{r}S_{r}^{(BC)}$ of all projection sets is the basis $\mathcal{B}^{BC}$, and the family of projection sets $\{S_{r}^{(BC)}\}_{r}$ is connected. When $N=1$, it is obvious that the set $S$ is locally distinguishable. When $N=2$, due to the symmetry, there is a collection $\{S_{t_{1}},S_{t_{2}},S_{t_{3}}\}$ containing 6 quantum states, which satisfies $|S_{t_{1}}|=|S_{t_{2}}|=|S_{t_{3}}|=2$, $|S_{t_{1}}^{(BC)}|=|S_{t_{2}}^{(BC)}|=2$ and $|S_{t_{3}}^{(BC)}|=1$. Moreover, the collection is invariant under the cyclic permutation of the parties. According to the completeness and connectedness of the projection sets, the set $S$ contains at least 8 subsets whose projection sets on the $BC$ party have two elements. That is, there are no fewer than 4 disjoint collections of the above form. In other words, when $N=2$, the size of the set $S$ cannot be less than 24. The case $N=3$ does not exist. If $N=3$, then there must be a subset satisfying $|S_{t}^{(A)}|=3$ and $|S_{t}^{(BC)}|=1$. Meanwhile, $S_{t}^{(A)}=\mathcal{B}^A$. We have $S_{t}^{(BC)}\cap (\cup_{t'\in Q\setminus\{t\}}S_{t'}^{(BC)})=\emptyset$. Hence, the family of projection sets $\{S_{r}^{(BC)}\}_{r}$ is unconnected, which is a contradiction. Similarly, the cases $N=6,9$ do not exist. In the case $N=4$, because of symmetry, there is a collection $\{S_{u_{1}},S_{u_{2}},S_{u_{3}}\}$ containing 12 quantum states, which is symmetric and satisfies $|S_{u_{1}}|=|S_{u_{2}}|=|S_{u_{3}}|=4$, $|S_{u_{1}}^{(BC)}|=4$ and $|S_{u_{2}}^{(BC)}|=|S_{u_{3}}^{(BC)}|=2$. Similarly, due to the completeness and connectedness of the projection sets, there is at least one more subset whose projection set on the $BC$ party has four elements, or there are three additional subsets whose projection sets on the $BC$ party have two elements. In either case, the size of the set $S$ is not less than 24. It is obvious that $|S_{r}|\neq 5,7$ for any $r\in Q$. If there is a subset such that $|S_{r}|=8$, then for an arbitrary cyclic permutation $P_{c}$ of the subsystems, the two subspaces spanned by $S_{r}$ and $P_{c}(S_{r})$, respectively, are not orthogonal. It follows that there must be two nonorthogonal quantum states, one belonging to $S_{r}$ and the other to $P_{c}(S_{r})$. This contradicts the fact that $P_{c}(S_{r})\subset S$.
Consequently, the cases $N=5,7,8$ do not hold. On the other hand, the strongly nonlocal OPS given by Eq. (\ref{20}) satisfies all conditions and contains 24 quantum states. Thus, in $\mathcal{C}^{3}\otimes\mathcal{C}^{3}\otimes\mathcal{C}^{3}$, the minimum size of the set $S$ is 24. The proof is completed. \section{The proof of theorem 3}\label{C} Because the set is symmetric and the maximum size of all subsets is 2, there is a collection $\{S_{t_{1}},S_{t_{2}},S_{t_{3}}\}$ containing 6 quantum states. It satisfies the same requirements as the proof of Theorem 2. Due to the completeness and connectedness of projection sets, there are at least 15 subsets whose projection sets on $BC$ party have size 2. So, we have no less than 8 disjoint collections, each of which contains 6 quantum states. That is, the set $S$ contains at least 48 quantum states. On the other side, we find the OPS given by Eq. (\ref{22}) satisfies all conditions and the size is 48. Therefore, the minimum size of set $S$ is 48. \section{The proof of theorem 5}\label{E} According to Lemma 1 and the invariance of the set (\ref{22}) under cyclic permutations, we only need to discuss the orthogonal-preserving measurement on $BC$ party. The tile structure is illustrated in Fig. \ref{11}. It is obvious $\tilde{S}_{V_{kl}}=\mathcal{B}^{BC}$ for all $k,l\in\mathcal{Z}_{4}$, which implies that $\mathcal{B}_{kl}^{BC}\subset\tilde{S}_{V_{kl}}$. Hence the condition i) holds. For each subset $S_{ij}$, there is the corresponding PI set $R_{ij}$, which is shown in table \ref{30}. It follows that the condition ii) is satisfied. \begin{table}[tbp] \centering \caption{Corresponding PI set $R_{ij}$ for each subset $S_{ij}$.}\label{30} \begin{tabular}{cl|cl} \hline \hline Subset~~ & ~~~~~PI set~~~~~ & ~~Subset~~ & ~~~~~PI set~~~~~ \\ \hline $S_{11}$ & $R_{11}=S_{53}\cup S_{62}$ & $S_{51}$ & $R_{51}=S_{31}\cup S_{72}$ \\ $S_{12}$ & $R_{12}=S_{32}\cup S_{73}$ & $S_{52}$ & $R_{52}=S_{21}\cup S_{83}$ \\ $S_{13}$ & $R_{13}=S_{41}$ & $S_{53}$ & $R_{53}=S_{11}$ \\ $S_{21}$ & $R_{21}=S_{52}\cup S_{62}$ & $S_{61}$ & $R_{61}=S_{51}\cup S_{83}$ \\ $S_{22}$ & $R_{22}=S_{32}\cup S_{81}$ & $S_{62}$ & $R_{62}=S_{11}\cup S_{21}$ \\ $S_{23}$ & $R_{23}=S_{71}$ & $S_{63}$ & $R_{63}=S_{72}$ \\ $S_{31}$ & $R_{31}=S_{12}\cup S_{51}$ & $S_{71}$ & $R_{71}=S_{23}\cup S_{82}$ \\ $S_{32}$ & $R_{32}=S_{12}\cup S_{41}$ & $S_{72}$ & $R_{72}=S_{51}\cup S_{63}$ \\ $S_{33}$ & $R_{33}=S_{82}$ & $S_{73}$ & $R_{73}=S_{12}$ \\ $S_{41}$ & $R_{41}=S_{13}\cup S_{32}$ & $S_{81}$ & $R_{81}=S_{42}\cup S_{43}$ \\ $S_{42}$ & $R_{42}=S_{13}\cup S_{81}$ & $S_{82}$ & $R_{82}=S_{11}\cup S_{33}$ \\ $S_{43}$ & $R_{43}=S_{81}$ & $S_{83}$ & $R_{83}=S_{61}$ \\ \hline \end{tabular} \end{table} Since $|S_{ij}^{(BC)}\cap S_{kl}^{(BC)}|\leq 1$ for any two subsets $S_{ij}$ and $S_{kl}$, each $R_{ij}$ is a UPI set. Therefore $G_{1}$ is the union of all subsets. Thus, the condition iii) is true. In addition, we have a sequence of projection sets $S_{41}^{(BC)}\rightarrow S_{42}^{(BC)}\rightarrow S_{81}^{(BC)}\rightarrow S_{22}^{(BC)}\rightarrow S_{12}^{(BC)}\rightarrow S_{31}^{(BC)}\rightarrow S_{51}^{(BC)}~(\rightarrow S_{72}^{(BC)})\rightarrow S_{61}^{(BC)}\rightarrow S_{52}^{(BC)}\rightarrow S_{21}^{(BC)}\rightarrow S_{62}^{(BC)}\rightarrow S_{11}^{(BC)}\rightarrow S_{82}^{(BC)}\rightarrow S_{71}^{(BC)}$, where the intersection of the sets on both sides of the arrow is not empty and the union of these sets is the computation basis $\mathcal{B}^{BC}$. 
Here the set $S_{72}^{(BC)}$ in the bracket is related only to the preceding set $S_{51}^{(BC)}$. This means that it is impossible to divide all the projection sets into two disjoint groups. That is, the family of projection sets $\{S_{ij}^{(BC)}\}_{ij}$ is connected, so condition iv) holds. By Theorem 1, the orthogonality-preserving POVM performed on the $BC$ party can only be trivial. Therefore, the OPS (\ref{22}) is of the strongest quantum nonlocality.

\section{The proof of theorem 6}\label{F}

We only need to consider the orthogonality-preserving POVM on the $BC$ party. Because the set (\ref{23}) has the same structure as the set (\ref{22}), conditions i), ii) and iv) are obvious. Consider the set sequence \begin{equation} \begin{aligned} G_{1}=& H_{11}\cup H_{12}\cup H_{13}\cup H_{22}\cup H_{31}\cup H_{32}\cup H_{33}\\ &\cup H_{41}\cup H_{42}\cup H_{43}\cup H_{51}\cup H_{52}\cup H_{53}\cup H_{61}\\ &\cup H_{71}\cup H_{72}\cup H_{73}\cup H_{81}\cup H_{82}\cup H_{83}, \\ G_{2}=& H_{21}\cup H_{23}\cup H_{62}\cup H_{63}. \end{aligned} \end{equation} Here each subset contained in $G_{2}$ is a NIC subset. Referring to table \ref{30}, that is, substituting $H_{ij}$ for $S_{ij}$ there, one obtains the PI set $R_{ij}$ of $H_{ij}$ on the $BC$ party. For the subsets $H_{21}$, $H_{23}$, $H_{62}$ and $H_{63}$, there are $H_{52}=G_{1}\cap R_{21}$, $H_{71}=G_{1}\cap R_{23}$, $H_{11}=G_{1}\cap R_{62}$ and $H_{72}=G_{1}\cap R_{63}$, respectively. This implies that condition iii) holds. The set (\ref{23}) satisfies the four conditions in Theorem 1; therefore, it is locally irreducible in every bipartition. That is, the OPS (\ref{23}) is a set of the strongest nonlocality.

\section{The proof of theorem 7}\label{G}

\begin{figure}
\caption{The corresponding $3\times 27$ grid of $\{U_{ij}\}$ given by Eq. (\ref{24}) in the $A|BCD$ bipartition.}
\label{12}
\end{figure}

We need to prove that the orthogonality-preserving POVM performed on the $BCD$ party can only be trivial. To this end, we will prove that the OPS (\ref{24}) satisfies the four conditions in Theorem 1. Fig. \ref{12} shows the tile structure of this OPS. Note that $\tilde{S}_{V_{jkl}}=\mathcal{B}^{BCD}$ for any $j,k,l\in\mathcal{Z}_{3}$. It is obvious that $\mathcal{B}_{jkl}^{BCD}\subset\tilde{S}_{V_{jkl}}$, so condition i) holds. For each subset $U_{ij}$, the corresponding PI set $R_{ij}$ is shown in table \ref{32}. Hence, condition ii) holds.
\begin{table}[tbp] \centering \caption{Corresponding PI set $R_{ij}$ for each subset $U_{ij}$.}\label{32} \begin{tabular}{cl|cl} \hline \hline Subset~~ & ~~~~~~~~~~~PI set~~~~~~~~~~~ &~~Subset~~ & ~~~~~~~~PI set~~~~~~~~~~~ \\ \hline $U_{11}$ & \tabincell{l}{$R_{11}=U_{12}\cup U_{23}$\\$~~~~~~~~\cup U_{41}\cup U_{44}$} & $U_{44}$ & $R_{44}=U_{11}$ \\ \hline $U_{12}$ & \tabincell{l}{$R_{12}=U_{33}\cup U_{63}$\\$~~~~~~~~\cup U_{82}$} & $U_{51}$ & $R_{51}=U_{32}\cup U_{62}$ \\ \hline $U_{13}$ & $R_{13}=U_{22}\cup U_{31}$ & $U_{52}$ & $R_{52}=U_{11}\cup U_{24}$ \\ \hline $U_{14}$ & $R_{14}=U_{42}\cup U_{72}$ & $U_{53}$ & $R_{53}=U_{32}$ \\ \hline $U_{21}$ & \tabincell{l}{$R_{21}=U_{22}\cup U_{33}$\\$~~~~~~~~\cup U_{51}\cup U_{54}$} & $U_{54}$ & $R_{54}=U_{21}$ \\ \hline $U_{22}$ & \tabincell{l}{$R_{22}=U_{13}\cup U_{43}$\\$~~~~~~~~\cup U_{71}$} & $U_{61}$ & $R_{61}=U_{12}\cup U_{42}$\\\hline $U_{23}$ & $R_{23}=U_{11}\cup U_{32}$ & $U_{62}$ & $R_{62}=U_{34}\cup U_{51}$\\ \hline $U_{24}$ & $R_{24}=U_{52}\cup U_{92}$ & $U_{63}$ & $R_{63}=U_{12}$ \\ \hline $U_{31}$ & \tabincell{l}{$R_{31}=U_{13}\cup U_{42}$\\$~~~~~~~~\cup U_{53}\cup U_{64}$} & $U_{64}$ & $R_{64}=U_{31}$ \\ \hline $U_{32}$ & \tabincell{l}{$R_{32}=U_{23}\cup U_{53}$\\$~~~~~~~~\cup U_{91}$} & $U_{71}$ & $R_{71}=U_{41}$ \\ \hline $U_{33}$ & $R_{33}=U_{12}\cup U_{21}$ & $U_{72}$ & $R_{72}=U_{14}$ \\ \hline $U_{34}$ & $R_{34}=U_{62}\cup U_{81}$ & $U_{81}$ & $R_{81}=U_{34}$ \\ \hline $U_{41}$ & $R_{41}=U_{11}\cup U_{71}$ & $U_{82}$ & $R_{82}=U_{12}$ \\ \hline $U_{42}$ & $R_{42}=U_{14}\cup U_{61}$ & $U_{91}$ & $R_{91}=U_{51}$ \\ \hline $U_{43}$ & $R_{43}=U_{22}$ & $U_{92}$ & $R_{92}=U_{24}$ \\ \hline \end{tabular} \end{table} Furthermore, we construct the set sequence \begin{equation} \begin{aligned} &G_{1}=U_{12}\cup U_{21}\cup U_{31}\cup U_{33}\cup U_{34}\cup U_{61}\\ &\quad~~~~~\cup U_{62}\cup U_{64}\cup U_{81}\cup U_{82},\\ &G_{2}=U_{11}\cup U_{13}\cup U_{42}\cup U_{51}\cup U_{54}\cup U_{63},\\ &G_{3}=U_{14}\cup U_{22}\cup U_{23}\cup U_{32}\cup U_{41}\cup U_{44}\\ &\quad~~~~~\cup U_{52}\cup U_{91},\\ &G_{4}=U_{24}\cup U_{43}\cup U_{53}\cup U_{71}\cup U_{72},\\ &G_{5}=U_{92}. \end{aligned} \end{equation} For the subset $U_{32}\subset G_{3}$, there are subsets $U_{51}\subset G_{2}$ and $U_{91}\subset R_{32}$ such that $U_{32}^{(BCD)}\cap U_{51}^{(BCD)}=U_{32}^{(BCD)}\cap U_{91}^{(BCD)}$. For any other subset $U_{t}\subset G_{x}$ $(x=2,\ldots,5)$, the intersection of set $G_{x-1}$ and PI set $R_{t}$ is exhibited in table \ref{33}. This shows that the condition iii) is true. 
\begin{table}[tbp] \centering \caption{The intersection of set $G_{x-1}$ and PI set $R_{t}$ about subset $U_{t}\subset G_{x}$ $(x=2,\ldots,5)$.}\label{33} \begin{tabular}{cc|cc} \hline \hline ~~~Subset~~~ & ~~~~~Intersection~~~~~ & ~~~Subset~~~ & ~~~~~Intersection~~~~~ \\ \hline $U_{11}\subset G_{2}$ & $U_{12}=G_{1}\cap R_{11}$ & $U_{44}\subset G_{3}$ & $U_{11}=G_{2}\cap R_{44}$ \\ $U_{13}\subset G_{2}$ & $U_{31}=G_{1}\cap R_{13}$ & $U_{52}\subset G_{3}$ & $U_{11}=G_{2}\cap R_{52}$ \\ $U_{42}\subset G_{2}$ & $U_{61}=G_{1}\cap R_{42}$ & $U_{91}\subset G_{3}$ & $U_{51}=G_{2}\cap R_{91}$ \\ $U_{51}\subset G_{2}$ & $U_{62}=G_{1}\cap R_{51}$ & $U_{24}\subset G_{4}$ & $U_{52}=G_{3}\cap R_{24}$ \\ $U_{54}\subset G_{2}$ & $U_{21}=G_{1}\cap R_{54}$ & $U_{43}\subset G_{4}$ & $U_{22}=G_{3}\cap R_{43}$ \\ $U_{63}\subset G_{2}$ & $U_{12}=G_{1}\cap R_{63}$ & $U_{53}\subset G_{4}$ & $U_{32}=G_{3}\cap R_{53}$\\ $U_{14}\subset G_{3}$ & $U_{42}=G_{2}\cap R_{14}$ & $U_{71}\subset G_{4}$ & $U_{41}=G_{3}\cap R_{71}$\\ $U_{22}\subset G_{3}$ & $U_{13}=G_{2}\cap R_{22}$ & $U_{72}\subset G_{4}$ & $U_{14}=G_{3}\cap R_{72}$\\ $U_{23}\subset G_{3}$ & $U_{11}=G_{2}\cap R_{23}$ & $U_{92}\subset G_{5}$ & $U_{24}=G_{4}\cap R_{92}$\\ $U_{41}\subset G_{3}$ & $U_{11}=G_{2}\cap R_{41}$ &\\ \hline \end{tabular} \end{table} We find the tree sequence of projection sets $U_{12}^{(BCD)}\rightarrow U_{61}^{(BCD)}(\rightarrow U_{42}^{(BCD)}\rightarrow U_{14}^{(BCD)})\rightarrow U_{31}^{(BCD)}\rightarrow U_{32}^{(BCD)}\rightarrow U_{51}^{(BCD)}(\rightarrow U_{62}^{(BCD)}\rightarrow U_{34}^{(BCD)})\rightarrow U_{21}^{(BCD)}\rightarrow U_{22}^{(BCD)}\rightarrow U_{41}^{(BCD)}(\rightarrow U_{11}^{(BCD)})\rightarrow U_{52}^{(BCD)}\rightarrow U_{24}^{(BCD)}$, where the subsequence in parentheses is a branch of the previous adjacent set. In this sequence, the intersection of the sets on both sides of the arrow is nonempty and the union of all these sets is the computation basis $\mathcal{B}^{BCD}$. This means that the family of projection sets $\{U_{ij}^{(BCD)}\}_{ij}$ is connected. The condition iv) is proven. Therefore, one can only perform a trivial orthogonality-preserving POVM on the $BCD$ party. Combining Lemma 1 with the symmetry of (\ref{24}) ensures that the OPS (\ref{24}) is of the strongest quantum nonlocality. \section{The proof of theorem 8}\label{H} Suppose that the whole quantum system is shared among Alice, Bob, and Charlie. By taking advantage of entangled resource $|\phi^{+}(d_{C})\rangle$, Charlie first teleports the state in his subsystem $C$ to Bob. Let the subindex $\widetilde{B}$ represent the joint part of $B$ and $C$. Whereafter, to locally discriminate the states in ({\ref{21}}), the EPR state $|\phi^{+}(2)\rangle_{ab}$ is shared by Alice and Bob. The initial state is \begin{equation}\label{41} \begin{aligned} |\psi\rangle_{A\widetilde{B}}\otimes|\phi^{+}(2)\rangle_{ab}, \end{aligned} \end{equation} where $|\psi\rangle_{A\widetilde{B}}$ is one of the states from the set ({\ref{21}}), $a$ and $b$ are ancillary systems of Alice and Bob, respectively. Because each subset $H_{r}$ $(r\in Q)$ is LOCC distinguishable, one only needs to locally distinguish these subsets. Now the discrimination protocol proceeds as follows. 
$Step~1.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{1}\equiv\{&M_{11}:=P[(|0\rangle,\ldots,|d_{A}'-1\rangle)_{A};|0\rangle_{a}]\\ &\quad\quad\quad+P[|d_{A}'\rangle_{A};|1\rangle_{a}],~\\ &M_{12}:=I-M_{11}\}, \end{aligned} \end{equation*} where $P[(|0\rangle,\ldots,|d_{A}'-1\rangle)_{A};|0\rangle_{a}]:=(|0\rangle\langle 0|+\cdots+|d_{A}'-1\rangle\langle d_{A}'-1|)_A\otimes(|0\rangle\langle 0|)_{a}$; this notation is used in all the protocols below. Suppose the outcome corresponding to $M_{11}$ clicks (see Fig. \ref{34}); then the resulting postmeasurement states are \begin{equation*} \begin{aligned} &H_{1} \rightarrow \{|0\rangle_{A}|\xi_{i}\circ \eta_{j}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{2} \rightarrow \{|\xi_{i}\rangle_{A}|\eta_{j}\circ 0\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{3} \rightarrow \{|\eta_{j}\rangle_{A}|0\circ \xi_{i}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{4} \rightarrow \{|\xi_{i}\rangle_{A}|d_{B}'\circ \eta_{j}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{5} \rightarrow \{|d_{A}'\rangle_{A}|\eta_{j}\circ \xi_{i}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{6} \rightarrow \{|\eta_{j}\rangle_{A}|\xi_{i}\circ d_{C}'\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{7} \rightarrow \{|0\rangle_{A}|d_{B}'\circ (0\pm d_{C}')\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{8} \rightarrow \{|d_{A}'\rangle_{A}|(0\pm d_{B}')\circ 0\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{9} \rightarrow \{(|0\rangle_{A}|00\rangle_{ab}\pm |d_{A}'\rangle_{A}|11\rangle_{ab})|0\circ d_{C}'\rangle_{\widetilde{B}}\}, \\ &H_{10} \rightarrow \{|d_{A}'\rangle_{A}|\xi_{i}\circ (0\pm d_{C}')\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{11} \rightarrow \{|\xi_{i}\rangle_{A}|(0\pm d_{B}')\circ d_{C}'\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{12} \rightarrow \{(|0\rangle_{A}|00\rangle_{ab}\pm |d_{A}'\rangle_{A}|11\rangle_{ab})|d_{B}'\circ \xi_{i}\rangle_{\widetilde{B}}\}. \end{aligned} \end{equation*} Henceforth, the symbol `$\circ$' represents the union of the parties. For example, $|\psi_{1}\circ \psi_{2}\rangle_{\widetilde{B}}=|\psi_{1}\rangle_{B}|\psi_{2}\rangle_{C}$ for any two quantum states $|\psi_{1}\rangle_{B}$ and $|\psi_{2}\rangle_{C}$. In particular, let $|(0,\ldots,d_{B}-1)\circ (0,\ldots,d_{C}-1)\rangle_{\widetilde{B}}$ denote the set $\{|ij\rangle_{\widetilde{B}}~ |~i=0,1,\cdots,d_B-1;\ j=0,1,\cdots,d_C-1 \}$, also written as $(|00\rangle,\ldots,|0(d_{C}-1)\rangle,|10\rangle,\ldots,|(d_{B}-1)(d_{C}-1)\rangle)_{\widetilde{B}}$.
\begin{figure}
\caption{The postmeasurement states after $M_{11}$ clicks, while Alice and Bob share the EPR state $|\phi^{+}(2)\rangle_{ab}$.}
\label{34}
\end{figure}
$Step~2.$ Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{2}\equiv\{&M_{21}:=P[|0\circ (1,\ldots,d_{C}'-1)\rangle_{\widetilde{B}};|0\rangle_{b}],~\\ &M_{22}:=P[|(1,\ldots,d_{B}'-1)\circ d_{C}'\rangle_{\widetilde{B}};|0\rangle_{b}],~\\ &M_{23}:=P[|(0,d_{B}')\circ 0\rangle_{\widetilde{B}};|1\rangle_{b}],~\\ \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &M_{24}:=P[|(0,\ldots,d_{B}'-1)\circ (1,\ldots,d_{C}'-1)\rangle_{\widetilde{B}};\\ &\quad\quad\quad\quad~~|1\rangle_{b}],~\\ &M_{25}:=P[|(1,\ldots,d_{B}'-1)\circ (0,d_{C}')\rangle_{\widetilde{B}};|1\rangle_{b}],~\\ &M_{26}:=I-M_{21}-M_{22}-M_{23}-M_{24}-M_{25}\}. \end{aligned} \end{equation*} This step is shown in Fig. \ref{35}. If the corresponding operators $M_{21}$, $M_{22}$, $M_{23}$, $M_{24}$ and $M_{25}$ click, we can distinguish the subsets $H_{3}$, $H_{6}$, $H_{8}$, $H_{5}$ and $H_{10}$, respectively.
If $M_{26}$ clicks, the given state belongs to one of the remaining seven subsets $\{H_{1},H_{2},H_{4},H_{7},H_{9},H_{11},H_{12}\}$. At this point, we move on to the next step.
\begin{figure}
\caption{The $d_{A}\times d_{B}d_{C}$ grid illustrating Bob's measurement $\mathcal{M}_{2}$ in Step 2.}
\label{35}
\end{figure}
$Step~3.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{3}\equiv\{&M_{31}:=P[|0\rangle_{A};|0\rangle_{a}]+P[|d_{A}'\rangle_{A};|1\rangle_{a}],~\\ &M_{32}:=I-M_{31}\}. \end{aligned} \end{equation*} Fig. \ref{38} shows the intuitive situation. If $M_{31}$ clicks, we can determine the four subsets $\{H_{1}, H_{7}, H_{9}, H_{12}\}$. Otherwise, the subset is one of the remaining three, $\{H_{2}, H_{4}, H_{11}\}$. Moreover, they are all perfectly LOCC distinguishable.
\begin{figure}
\caption{The remaining states after Bob performs the measurement. The area covered by light gray is the measurement effect $M_{31}$.}
\label{38}
\end{figure}
In addition, if $M_{12}$ clicks in Step 1, we can find a similar protocol by which these states can be perfectly distinguished by LOCC.

\section{The proof of theorem 9}\label{I}

Naturally, we only need to locally distinguish the subsets $H_{r}$. To this end, let Alice and Bob share an EPR state $|\phi^{+}(2)\rangle_{a_{1}b_{1}}$, while Alice and Charlie share the EPR state $|\phi^{+}(2)\rangle_{a_{2}c_{1}}$. Therefore, the initial state is \begin{equation}\label{42} \begin{aligned} |\psi\rangle_{ABC}\otimes|\phi^{+}(2)\rangle_{a_{1}b_{1}}\otimes|\phi^{+}(2)\rangle_{a_{2}c_{1}}, \end{aligned} \end{equation} where the state $|\psi\rangle_{ABC}$ is one of the states from the set $\cup_{r=1}^{12}H_{r}$ given by Eq. ({\ref{21}}), $a_{1}$ and $a_{2}$ are ancillary systems of Alice, and $b_{1}$ and $c_{1}$ are ancillary systems of Bob and Charlie, respectively. The specific process is as follows.

$Step~1.$ Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{1}\equiv\{&M_{11}:=P[(|0\rangle,\ldots,|d_{B}'-1\rangle)_{B};|0\rangle_{b_{1}}]\\ &\quad\quad\quad+P[|d_{B}'\rangle_{B};|1\rangle_{b_{1}}],~\\ &M_{12}:=I-M_{11}\}, \end{aligned} \end{equation*} and Charlie performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{2}\equiv\{&M_{21}:=P[(|0\rangle,\ldots,|d_{C}'-1\rangle)_{C};|1\rangle_{c_{1}}]\\ &\quad\quad\quad+P[|d_{C}'\rangle_{C};|0\rangle_{c_{1}}],~\\ &M_{22}:=I-M_{21}\}. \end{aligned} \end{equation*} Suppose $M_{11}$ and $M_{21}$ click (refer to Fig.
\ref{36}), the resulting postmeasurement states are \begin{equation*} \begin{aligned} &H_{1} \rightarrow \{|0\rangle_{A}|\xi_{i}\rangle_{B}|\eta_{j}\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\},\\ &H_{2} \rightarrow \{|\xi_{i}\rangle_{A}|\eta_{j}\rangle_{B}|0\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\},\\ &H_{3} \rightarrow \{|\eta_{j}\rangle_{A}|0\rangle_{B}|\xi_{i}\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\},\\ &H_{4} \rightarrow \{|\xi_{i}\rangle_{A}|d_{B}'\rangle_{B}|\eta_{j}\rangle_{C}|11\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\},\\ &H_{5} \rightarrow \{|d_{A}'\rangle_{A}|\eta_{j}\rangle_{B}|\xi_{i}\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\},\\ &H_{6} \rightarrow \{|\eta_{j}\rangle_{A}|\xi_{i}\rangle_{B}|d_{C}'\rangle_{C}|00\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\},\\ &H_{7} \rightarrow \{|0\rangle_{A}|d_{B}'\rangle_{B}|11\rangle_{a_{1}b_{1}}(|0\rangle_{C}|11\rangle_{a_{2}c_{1}}\pm \\ &\quad\quad\quad~|d_{C}'\rangle_{C}|00\rangle_{a_{2}c_{1}})\},\\ &H_{8} \rightarrow \{|d_{A}'\rangle_{A}(|0\rangle_{B}|00\rangle_{a_{1}b_{1}}\pm |d_{B}'\rangle_{B}|11\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~|0\rangle_{C}|11\rangle_{a_{2}c_{1}}\},\\ &H_{9} \rightarrow \{|0\pm d_{A}'\rangle_{A}|0\rangle_{B}|d_{C}'\rangle_{C}|00\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\},\\ &H_{10} \rightarrow \{|d_{A}'\rangle_{A}|\xi_{i}\rangle_{B}|00\rangle_{a_{1}b_{1}}(|0\rangle_{C}|11\rangle_{a_{2}c_{1}}\pm \\ &\quad\quad\quad~~|d_{C}'\rangle_{C}|00\rangle_{a_{2}c_{1}})\},\\ &H_{11} \rightarrow \{|\xi_{i}\rangle_{A}(|0\rangle_{B}|00\rangle_{a_{1}b_{1}}\pm |d_{B}'\rangle_{B}|11\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|d_{C}'\rangle_{C}|00\rangle_{a_{2}c_{1}}\},\\ &H_{12} \rightarrow \{|0\pm d_{A}'\rangle_{A}|d_{B}'\rangle_{B}|\xi_{i}\rangle_{C}|11\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}. \end{aligned} \end{equation*} \begin{figure} \caption{The two $2d_{A} \label{36} \end{figure} $Step~2.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{3}\equiv\{&M_{31}:=P[(|0\rangle,\ldots,|d_{A}'-1\rangle)_{A};|0\rangle_{a_{1}};|1\rangle_{a_{2}}],~\\ &M_{32}:=P[(|1\rangle,\ldots,|d_{A}'-1\rangle)_{A};|1\rangle_{a_{1}};|1\rangle_{a_{2}}],~\\ &M_{33}:=I-M_{31}-M_{32}\}. \end{aligned} \end{equation*} This process is described in Fig. \ref{37}. If $M_{31}$ clicks, the given subset is one of $\{H_{1},H_{2},H_{3}\}$, which contains $e-3f+6$ quantum states in total. Here $e=d_{A}d_{B}+d_{A}d_{C}+d_{B}d_{C}$ and $f=d_{A}+d_{B}+d_{C}$. It is obvious that these three subsets can not be perfectly distinguished by LOCC. Let Alice and Bob share the maximally entangled state $|\phi^{+}(2)\rangle_{a_{3}b_{2}}$. Moreover, Bob performs the measurement $\mathcal{M}_{3}'\equiv\{M_{31}':=P[|0\rangle_{B};I_{b_{1}};|0\rangle_{b_{2}}]+P[(|1\rangle,\ldots,|d_{B}'\rangle)_{B};I_{b_{1}};|1\rangle_{b_{2}}],~M_{32}':=I-M_{31}'\}$. When $M_{31}'$ clicks, Alice performs the measurement $\mathcal{M}_{3}''\equiv\{M_{31}'':=P[|0\rangle_{A};I_{a_{1}};I_{a_{2}};|1\rangle_{a_{3}}],~M_{32}'':=I-M_{31}''\}$. The results corresponding to operators $M_{31}''$ and $M_{32}''$ are $H_{1}$ and $\{H_{2},H_{3}\}$, respectively. The collection $\{H_{2},H_{3}\}$ is LOCC distinguishable. Similarly, when $M_{32}'$ clicks, the task of local discrimination can also be accomplished. 
The average entanglement consumed in this process is $\frac{e-3f+6}{2e-4f+6}$ copies of the maximally entangled state $|\phi^{+}(2)\rangle_{a_{3}b_{2}}$ \cite{Rout}, because the size of the set ({\ref{21}}) is $2e-4f+6$. If $M_{32}$ clicks, the subset is $H_{4}$. Otherwise, the subset is one of the remaining eight.
\begin{figure}
\caption{The states after clicking $M_{11}$ and $M_{21}$ in Step 1.}
\label{37}
\end{figure}
$Step~3.$ Charlie performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{4}\equiv\{&M_{41}:=P[(|1\rangle,\ldots,|d_{C}'-1\rangle)_{C};|1\rangle_{c_{1}}],~\\ &M_{42}:=I-M_{41}\}. \end{aligned} \end{equation*} Referring to Fig. \ref{39}, if $M_{41}$ clicks, the given subset is one of $\{H_{5},H_{12}\}$. Obviously, this collection is locally distinguishable.
\begin{figure}
\caption{The states with the auxiliary system $|11\rangle_{a_{2}c_{1}}$.}
\label{39}
\end{figure}
$Step~4.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{5}\equiv\{M_{51}:=P[|0\rangle_{A};|1\rangle_{a_{1}};I_{a_{2}}],~M_{52}:=I-M_{51}\}. \end{aligned} \end{equation*} If $M_{51}$ clicks, the subset is $H_{7}$. Otherwise, the subset is one of $\{H_{6},H_{8},H_{9},H_{10},H_{11}\}$.

$Step~5.$ Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{6}\equiv\{&M_{61}:=P[|0\rangle_{B};|0\rangle_{b_{1}}]+P[|d_{B}'\rangle_{B};|1\rangle_{b_{1}}],~\\ &M_{62}:=I-M_{61}\}. \end{aligned} \end{equation*} The results corresponding to the operators $M_{61}$ and $M_{62}$ are $\{H_{8},H_{9},H_{11}\}$ and $\{H_{6},H_{10}\}$, respectively. They are all locally distinguishable. In summary, we consume a total of $1+\frac{e-3f+6}{2e-4f+6}$ EPR states between Alice and Bob and one EPR state between Alice and Charlie for this distinguishing task. If other operators click in Step 1, we can find similar protocols to distinguish these subsets perfectly by LOCC.

\section{The proof of theorem 10}\label{J}

Suppose that the whole quantum system is shared among Alice, Bob, and Charlie. Since $d_{C}\leq d_{B}$, the subsystem $C$ is teleported to Bob by using the entanglement resource $|\phi^{+}(d_{C})\rangle_{BC}$, and the new union subsystem is represented by $\widetilde{B}$. To locally discriminate the states, Alice and Bob should share a maximally entangled state $|\phi^{+}(2)\rangle_{ab}$. The discrimination protocol proceeds as follows.

$Step~1.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{1}\equiv\{&M_{11}:=P[(|0\rangle,|1\rangle)_{A};|0\rangle_{a}]+P[(|2\rangle,\ldots,|d_{A}'\rangle)_{A}\\ &\quad\quad\quad~;|1\rangle_{a}],~\\ &M_{12}:=I-M_{11}\}.
\end{aligned} \end{equation*} Suppose $M_{11}$ clicks, then the resulting postmeasurement states are \begin{equation*} \begin{aligned} &H_{11}\rightarrow \{|0\rangle_{A}|1\circ \alpha_{3}^{l}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{12}\rightarrow \{|1\rangle_{A}|\alpha_{3}^{l}\circ 0\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{13}\rightarrow \{|\alpha_{3}^{l}\rangle_{A}|0\circ 1\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{21}\rightarrow \{|0\rangle_{A}|\alpha^{i}\circ \alpha_{1}^{k}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{22}\rightarrow \{|\alpha^{i}\rangle_{A}|\alpha_{1}^{k}\circ 0\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{23}\rightarrow \{(|1\rangle_{A}|00\rangle_{ab}+|\alpha_{1}^{k,2}\rangle_{A}|11\rangle_{ab})|0\circ \alpha^{i}\rangle_{\widetilde{B}}\}, \\ &H_{31}\rightarrow \{|0\rangle_{A}|d_{B}'\circ \alpha_{0}^{j}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{32}\rightarrow \{|d_{A}'\rangle_{A}|\alpha_{0}^{j}\circ 0\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{33}\rightarrow \{(|0\rangle_{A}|00\rangle_{ab}+|\alpha_{0}^{j,2}\rangle_{A}|11\rangle_{ab})|0\circ d_{C}'\rangle_{\widetilde{B}}\}, \\ &H_{41}\rightarrow \{|1\rangle_{A}|0\circ (0\pm 1)\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{42}\rightarrow \{|0\rangle_{A}|(0\pm 1)\circ 1\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{43}\rightarrow \{|0\pm 1\rangle_{A}|1\circ 0\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{51}\rightarrow \{|1\rangle_{A}|d_{B}'\circ \alpha_{3}^{l}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{52}\rightarrow \{|d_{A}'\rangle_{A}|\alpha_{3}^{l}\circ 1\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{53}\rightarrow \{|\alpha_{3}^{l}\rangle_{A}|1\circ d_{C}'\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{61}\rightarrow \{|\alpha^{i}\rangle_{A}|d_{B}'\circ \alpha_{1}^{k}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{62}\rightarrow \{|d_{A}'\rangle_{A}|\alpha_{1}^{k}\circ \alpha^{i}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{63}\rightarrow \{(|1\rangle_{A}|00\rangle_{ab}+|\alpha_{1}^{k,2}\rangle_{A}|11\rangle_{ab})|\alpha^{i}\circ d_{C}'\rangle_{\widetilde{B}}\}, \\ &H_{71}\rightarrow \{|d_{A}'\rangle_{A}|0\circ \alpha_{3}^{l}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{72}\rightarrow \{|0\rangle_{A}|\alpha_{3}^{l}\circ d_{C}'\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{73}\rightarrow \{|\alpha_{3}^{l}\rangle_{A}|d_{B}'\circ 0\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{81}\rightarrow \{|d_{A}'\rangle_{A}|1\circ (0\pm 1)\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &H_{82}\rightarrow \{|1\rangle_{A}|(0\pm 1)\circ d_{C}'\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &H_{83}\rightarrow \{|0\pm 1\rangle_{A}|d_{B}'\circ 1\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \end{aligned} \end{equation*} where $|\alpha_{0}^{j,2}\rangle_{A}=\sum_{u=1}^{d_{A}-3}\omega_{d_{A}-2}^{ju}|u+1\rangle$ and $|\alpha_{1}^{k,2}\rangle_{A}=\sum_{u=1}^{d_{A}-3}\omega_{d_{A}-2}^{ku}|u+1\rangle$. 
$Step~2.$ Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{2}\equiv\{&M_{21}:=P[|(1,\ldots,d_{B}'-1)\circ (2,\ldots,d_{C}'-1)\rangle_{\widetilde{B}}\\ &\quad\quad\quad~;|1\rangle_{b}],~\\ &M_{22}:=P[(|(2,\ldots,d_{B}')\circ (0,d_{C}')\rangle,|d_{B}'\circ (2,\ldots,\\ &\quad\quad\quad~ d_{C}'-1)\rangle)_{\widetilde{B}};|0\rangle_{b}]+P[|(2,\ldots,d_{B}'-1)\\ &\quad\quad\quad~\circ d_{C}'\rangle_{\widetilde{B}};|1\rangle_{b}],~\\ &M_{23}:=P[|01\rangle_{\widetilde{B}};|1\rangle_{b}],~\\ &M_{24}:=P[(|(0,\ldots,d_{B}'-1)\circ 0\rangle,|11\rangle)_{\widetilde{B}};|1\rangle_{b}],~\\ \end{aligned} \end{equation*} \begin{equation*} \begin{aligned} &M_{25}:=P[|1d_{C}'\rangle_{\widetilde{B}};|1\rangle_{b}],~\\ &M_{26}:=P[(|(2,\ldots,d_{B}')\circ 1\rangle,|d_{B}'2\rangle)_{\widetilde{B}};|1\rangle_{b}],~\\ &M_{27}:=P[|d_{B}'0\rangle_{\widetilde{B}};|1\rangle_{b}],~\\ &M_{28}:=P[(|00\rangle,|01\rangle,|11\rangle)_{\widetilde{B}};|0\rangle_{b}],~\\ &M_{29}:=P[|10\rangle_{\widetilde{B}};|0\rangle_{b}],~\\ &M_{210}:=P[|d_{B}'1\rangle_{\widetilde{B}};|0\rangle_{b}],~\\ &M_{211}:=P[|(2,\ldots,d_{B}'-1)\circ (1,\ldots,d_{C}'-1)\rangle_{\widetilde{B}}\\ &\quad\quad\quad~~~;|0\rangle_{b}],~\\ &M_{212}:=I-\sum_{i=1}^{11}M_{2i}\}. \end{aligned} \end{equation*} For the operator $M_{2i}$ $(i=1,\ldots,12)$, the result of postmeasurement is \begin{equation*} \begin{aligned} &M_{21}\Rightarrow H_{62},&&M_{22}\Rightarrow H_{12},H_{31},H_{51},H_{72},H_{63},\\ &M_{23}\Rightarrow H_{13},&&M_{24}\Rightarrow H_{22},H_{32},H_{81},\\ &M_{25}\Rightarrow H_{53},&&M_{26}\Rightarrow H_{52},H_{61},\\ &M_{27}\Rightarrow H_{73},&&M_{28}\Rightarrow H_{41},H_{42},\\ &M_{29}\Rightarrow H_{43},&&M_{210}\Rightarrow H_{83},\\ &M_{211}\Rightarrow H_{21},&&M_{212}\Rightarrow H_{11},H_{23},H_{33},H_{71},H_{82}. \end{aligned} \end{equation*} Clearly, $\{H_{52},H_{61}\}$ and $\{H_{41},H_{42}\}$ are locally distinguishable. If $M_{22}$ clicks, Alice performs the measurement $\mathcal{M}_{2}'\equiv\{M_{21}':=P[|0\rangle_{A};|0\rangle_{a}],~M_{22}':=I-M_{21}'\}$. The outcomes corresponding to the operators $M_{21}'$ and $M_{22}'$ are $\{H_{31},H_{72}\}$ and $\{H_{12},H_{51},H_{63}\}$, respectively. They are also locally distinguishable. If $M_{24}$ clicks, Alice performs the measurement $\mathcal{M}_{2}''\equiv\{M_{21}'':=P[(|2\rangle,\ldots,|d_{A}'-1\rangle)_{A};|1\rangle_{a}],~M_{22}'':=I-M_{21}''\}$. The outcomes corresponding to the operators $M_{21}''$ and $M_{22}''$ are $H_{22}$ and $\{H_{32},H_{81}\}$, respectively. Moreover, $\{H_{32},H_{81}\}$ is a LOCC distinguishable collection. If $M_{212}$ clicks, we proceed to the next step. $Step~3.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{3}\equiv\{M_{31}:=P[|d_{A}'\rangle_{A};|1\rangle_{a}],~M_{32}:=I-M_{31}\}. \end{aligned} \end{equation*} If $M_{31}$ clicks, the subset is $H_{71}$. If $M_{32}$ clicks, the subset is one of the remaining four. $Step~4.$ Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{4}\equiv\{&M_{41}:=P[|0\circ (2,\ldots,d_{C}'-1)\rangle_{\widetilde{B}};I_{b}],~\\ &M_{42}:=I-M_{41}\}. \end{aligned} \end{equation*} If $M_{41}$ clicks, the subset is $H_{23}$. If $M_{42}$ clicks, the result is one of the three remaining subsets. $Step~5.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{5}\equiv\{M_{51}:=P[|1\rangle_{A};|0\rangle_{a}],~M_{52}:=I-M_{51}\}. \end{aligned} \end{equation*} If $M_{51}$ clicks, the subset is $H_{82}$. 
If $M_{52}$ clicks, the subset is one of $\{H_{33},H_{11}\}$, which is locally distinguishable. On the other hand, when $M_{12}$ clicks in the step 1, we can find the distinction protocol similarly. \section{The proof of theorem 11}\label{K} To locally distinguish the set ({\ref{23}}), let Alice and Bob share a maximally entangled state $|\phi^{+}(4)\rangle_{a_{1}b_{1}}$, while Alice and Charlie share an EPR state $|\phi^{+}(2)\rangle_{a_{2}c_{1}}$. $Step~1.$ Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{1}\equiv\{&M_{11}:=P[|0\rangle_{B};|0\rangle_{b_{1}}]+P[|1\rangle_{B};|1\rangle_{b_{1}}]\\ &\quad\quad\quad~+P[(|2\rangle,\ldots,|d_{B}'-1\rangle)_{B};|2\rangle_{b_{1}}]\\ &\quad\quad\quad~+ P[|d_{B}'\rangle_{B};|3\rangle_{b_{1}}],~\\ &M_{12}:=P[|0\rangle_{B};|1\rangle_{b_{1}}]+P[|1\rangle_{B};|2\rangle_{b_{1}}]\\ &\quad\quad\quad~+P[(|2\rangle,\ldots,|d_{B}'-1\rangle)_{B};|3\rangle_{b_{1}}]\\ &\quad\quad\quad~+P[|d_{B}'\rangle_{B};|0\rangle_{b_{1}}],~\\ &M_{13}:=P[|0\rangle_{B};|2\rangle_{b_{1}}]+P[|1\rangle_{B};|3\rangle_{b_{1}}]\\ &\quad\quad\quad~+P[(|2\rangle,\ldots,|d_{B}'-1\rangle)_{B};|0\rangle_{b_{1}}]\\ &\quad\quad\quad~+ P[|d_{B}'\rangle_{B};|1\rangle_{b_{1}}],~\\ &M_{14}:=I-M_{11}-M_{12}-M_{13}\}. \end{aligned} \end{equation*} Charlie performs the measurement \begin{equation*} \begin{aligned}\mathcal{M}_{2}\equiv\{&M_{21}:=P[(|0\rangle,|1\rangle)_{C};|0\rangle_{c_{1}}]+P[(|2\rangle,\\ &\quad\quad\quad\quad~~\ldots,|d_{C}'\rangle)_{C};|1\rangle_{c_{1}}],~\\ &M_{22}:=I-M_{21}\}. \end{aligned} \end{equation*} Suppose the outcomes corresponding to $M_{11}$ and $M_{21}$ click, the resulting postmeasurement states are \begin{equation}\label{K1} \begin{aligned} &H_{11}\rightarrow\{|0\rangle_{A}|1\rangle_{B}|\alpha_{3}^{l}\rangle_{C}|11\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{12}\rightarrow\{|1\rangle_{A}(|\alpha_{3}^{l,1}\rangle_{B}|22\rangle_{a_{1}b_{1}}+ |\alpha_{3}^{l,2}\rangle_{B}|33\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|0\rangle_{C}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{13}\rightarrow\{|\alpha_{3}^{l}\rangle_{A}|0\rangle_{B}|1\rangle_{C}|00\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{21}\rightarrow\{|0\rangle_{A}|\alpha^{i}\rangle_{B}|22\rangle_{a_{1}b_{1}}(|1\rangle_{C}|00\rangle_{a_{2}c_{1}}+ |\alpha_{1}^{k,2}\rangle_{C}\\ &\quad\quad\quad~~|11\rangle_{a_{2}c_{1}})\}, \\ &H_{22}\rightarrow\{|\alpha^{i}\rangle_{A}(|1\rangle_{B}|11\rangle_{a_{1}b_{1}}+ |\alpha_{1}^{k,2}\rangle_{B}|22\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|0\rangle_{C}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{23}\rightarrow\{|\alpha_{1}^{k}\rangle_{A}|0\rangle_{B}|\alpha^{i}\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{31}\rightarrow\{|0\rangle_{A}|d_{B}'\rangle_{B}|33\rangle_{a_{1}b_{1}}(|0\rangle_{C}|00\rangle_{a_{2}c_{1}}+ |\alpha_{0}^{j,2}\rangle_{C}\\ &\quad\quad\quad~~|11\rangle_{a_{2}c_{1}})\}, \\ &H_{32}\rightarrow\{|d_{A}'\rangle_{A}(|0\rangle_{B}|00\rangle_{a_{1}b_{1}}+ |\alpha_{0}^{j,2}\rangle_{B}|22\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|0\rangle_{C}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{33}\rightarrow\{|\alpha_{0}^{j}\rangle_{A}|0\rangle_{B}|d_{C}'\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{41}\rightarrow\{|1\rangle_{A}|0\rangle_{B}|0\pm 1\rangle_{C}|00\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{42}\rightarrow\{|0\rangle_{A}(|0\rangle_{B}|00\rangle_{a_{1}b_{1}}\pm |1\rangle_{B}|11\rangle_{a_{1}b_{1}})|1\rangle_{C}\\ &\quad\quad\quad~~|00\rangle_{a_{2}c_{1}}\}, \\ &H_{43}\rightarrow\{|0\pm 
1\rangle_{A}|1\rangle_{B}|0\rangle_{C}|11\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{51}\rightarrow\{|1\rangle_{A}|d_{B}'\rangle_{B}|\alpha_{3}^{l}\rangle_{C}|33\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{52}\rightarrow\{|d_{A}'\rangle_{A}(|\alpha_{3}^{l,1}\rangle_{B}|22\rangle_{a_{1}b_{1}}+ |\alpha_{3}^{l,2}\rangle_{B}|33\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|1\rangle_{C}|00\rangle_{a_{2}c_{1}}\}, \\ &H_{53}\rightarrow\{|\alpha_{3}^{l}\rangle_{A}|1\rangle_{B}|d_{C}'\rangle_{C}|11\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{61}\rightarrow\{|\alpha^{i}\rangle_{A}|d_{B}'\rangle_{B}|33\rangle_{a_{1}b_{1}}(|1\rangle_{C}|00\rangle_{a_{2}c_{1}}+ |\alpha_{1}^{k,2}\rangle_{C}\\ &\quad\quad\quad~~|11\rangle_{a_{2}c_{1}})\}, \\ &H_{62}\rightarrow\{|d_{A}'\rangle_{A}(|1\rangle_{B}|11\rangle_{a_{1}b_{1}}+ |\alpha_{1}^{k,2}\rangle_{B}|22\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|\alpha^{i}\rangle_{C}|11\rangle_{a_{2}c_{1}}\}, \\ \end{aligned} \end{equation} \begin{equation*} \begin{aligned} &H_{63}\rightarrow\{|\alpha_{1}^{k}\rangle_{A}|\alpha^{i}\rangle_{B}|d_{C}'\rangle_{C}|22\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{71}\rightarrow\{|d_{A}'\rangle_{A}|0\rangle_{B}|\alpha_{3}^{l}\rangle_{C}|00\rangle_{a_{1}b_{1}}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{72}\rightarrow\{|0\rangle_{A}(|\alpha_{3}^{l,1}\rangle_{B}|22\rangle_{a_{1}b_{1}}+ |\alpha_{3}^{l,2}\rangle_{B}|33\rangle_{a_{1}b_{1}})\\ &\quad\quad\quad~~|d_{C}'\rangle_{C}|11\rangle_{a_{2}c_{1}}\}, \\ &H_{73}\rightarrow\{|\alpha_{3}^{l}\rangle_{A}|d_{B}'\rangle_{B}|0\rangle_{C}|33\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\},\\ &H_{81}\rightarrow\{|d_{A}'\rangle_{A}|1\rangle_{B}|0\pm 1\rangle_{C}|11\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\},\\ &H_{82}\rightarrow\{|1\rangle_{A}(|0\rangle_{B}|00\rangle_{a_{1}b_{1}}\pm |1\rangle_{B}|11\rangle_{a_{1}b_{1}})|d_{C}'\rangle_{C}\\ &\quad\quad\quad~~|11\rangle_{a_{2}c_{1}}\}, \\ &H_{83}\rightarrow\{|0\pm 1\rangle_{A}|d_{B}'\rangle_{B}|1\rangle_{C}|33\rangle_{a_{1}b_{1}}|00\rangle_{a_{2}c_{1}}\}, \end{aligned} \end{equation*} where $|\alpha_{0}^{j,2}\rangle_{\tau}=\sum_{u=1}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{ju}|u+1\rangle$, $|\alpha_{1}^{k,2}\rangle_{\tau}=\sum_{u=1}^{d_{\tau}-3}\omega_{d_{\tau}-2}^{ku}|u+1\rangle$, $|\alpha_{3}^{l,1}\rangle_{\tau}=\sum_{u=0}^{d_{\tau}-4}\omega_{d_{\tau}-2}^{lu}|u+2\rangle$ and $|\alpha_{3}^{l,2}\rangle_{\tau}=\omega_{d_{\tau}-2}^{l(d_{\tau}-3)}|d_{\tau}-1\rangle$ for $j,k,l\in \mathcal{Z}_{d_{\tau}-2}$ and $\tau=B,C$. $Step~2.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{3}\equiv\{&M_{31}:=P[|d_{A}'\rangle_{A};|0\rangle_{a_{1}};|1\rangle_{a_{2}}],~\\ &M_{32}:=P[|1\rangle_{A};|0\rangle_{a_{1}};|0\rangle_{a_{2}}],~\\ &M_{33}:=P[(|2\rangle,\ldots,|d_{A}'-1\rangle)_{A};(|1\rangle,|2\rangle)_{a_{1}};|0\rangle_{a_{2}}],~\\ &M_{34}:=P[|0\rangle_{A};|1\rangle_{a_{1}};|1\rangle_{a_{2}}],~\\ &M_{35}:=P[|d_{A}'\rangle_{A};|1\rangle_{a_{1}};|0\rangle_{a_{2}}],~\\ &M_{36}:=P[(|1\rangle,\ldots,|d_{A}'-1\rangle)_{A};|2\rangle_{a_{1}};|1\rangle_{a_{2}}],~\\ &M_{37}:=P[|1\rangle_{A};|3\rangle_{a_{1}};|1\rangle_{a_{2}}],~\\ &M_{38}:=I-\sum_{i=1}^{7}M_{3i}\}. \end{aligned} \end{equation*} The result of postmeasurement, corresponding to the operator $M_{3i}$ $(i=1,\ldots,7)$ is \begin{equation*} \begin{aligned} &M_{31}\Rightarrow H_{71},&&M_{32}\Rightarrow H_{41},&&M_{33}\Rightarrow H_{22},&&M_{34}\Rightarrow H_{11},\\ &M_{35}\Rightarrow H_{81},&&M_{36}\Rightarrow H_{63},&&M_{37}\Rightarrow H_{51}. 
\end{aligned} \end{equation*} If $M_{38}$ clicks, we proceed to the next step. $Step~3.$ Charlie performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{4}\equiv\{M_{41}:=P[|d_{C}'\rangle_{C};|1\rangle_{c_{1}}],~M_{42}:=I-M_{41}\}. \end{aligned} \end{equation*} If $M_{41}$ clicks, the given subset is one of $\{H_{33},H_{53},H_{72},H_{82}\}$. It is locally distinguishable. Otherwise, we continue to the next step. $Step~4.$ Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{5}\equiv\{&M_{51}:=P[(|0\rangle,|1\rangle)_{A};(|0\rangle,|1\rangle)_{a_{1}};|0\rangle_{a_{2}}], \\ &M_{52}:=P[(|2\rangle,\ldots,|d_{A}'\rangle)_{A};(|1\rangle,|2\rangle)_{a_{1}};|1\rangle_{a_{2}}], \\ &M_{53}:=P[(|1\rangle,\ldots,|d_{A}'-1\rangle)_{A};|0\rangle_{a_{1}};|1\rangle_{a_{2}}], \\ &M_{54}:=P[|0\rangle_{A};|2\rangle_{a_{1}};I_{a_{2}}], \\ &M_{55}:=P[(|0\rangle,|1\rangle)_{A};|3\rangle_{a_{1}};|0\rangle_{a_{2}}]+P[|0\rangle_{A};\\ &\quad\quad\quad\quad~~|3\rangle_{a_{1}};|1\rangle_{a_{2}}]+P[|1\rangle_{A};|2\rangle_{a_{1}};|0\rangle_{a_{2}}],\\ &M_{56}:=I-\sum_{i=1}^{5}M_{5i}\}. \end{aligned} \end{equation*} Corresponding to the operator $M_{5i}$ $(i=1,\ldots,6)$, there is the following result \begin{equation*} \begin{aligned} &M_{51}\Rightarrow H_{43},H_{42},&&M_{54}\Rightarrow H_{21},\\ &M_{52}\Rightarrow H_{62},&&M_{55}\Rightarrow H_{12},H_{31},H_{83},\\ &M_{53}\Rightarrow H_{23},&&M_{56}\Rightarrow H_{13},H_{32},H_{52},H_{61},H_{73}. \end{aligned} \end{equation*} If $M_{55}$ clicks, then Charlie performs the measurement $\mathcal{M}_{5}'\equiv\{M_{51}':=P[|1\rangle_{C};|0\rangle_{c_{1}}],~M_{52}':=I-M_{51}'\}$. The outcomes corresponding to the operators $M_{51}'$ and $M_{52}'$ are $H_{83}$ and $\{H_{12},H_{31}\}$, respectively. Obviously, $\{H_{42},H_{43}\}$ and $\{H_{12},H_{31}\}$ are locally distinguishable. If $M_{56}$ clicks, we move on to the next step. $Step~5.$ Charlie performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{6}\equiv\{M_{61}:=P[|0\rangle_{C};|0\rangle_{c_{1}}],~M_{62}:=I-M_{61}\}. \end{aligned} \end{equation*} Corresponding to the operators $M_{61}$ and $M_{62}$, the subsets of postmeasurement are $\{H_{32},H_{73}\}$ and $\{H_{13},H_{52},H_{61}\}$, respectively. They are all LOCC distinguishable. If another operator clicks in the step 1, then also a similar entanglement-assisted discrimination protocol follows. \section{The proof of theorem 12}\label{L} Let Alice and Bob share two EPR states $|\phi^{+}(2)\rangle_{a_{1}b_{1}}|\phi^{+}(2)\rangle_{a_{2}b_{2}}$, while Alice and Charlie share an EPR state $|\phi^{+}(2)\rangle_{a_{3}c_{1}}$. 
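The protocol relies on the identification, already used in the main text, of two EPR pairs with a single four-dimensional maximally entangled state; explicitly,
\begin{equation*}
|\phi^{+}(2)\rangle_{a_{1}b_{1}}\otimes|\phi^{+}(2)\rangle_{a_{2}b_{2}}
=\frac{1}{2}\sum_{i,j=0}^{1}|ij\rangle_{a_{1}a_{2}}|ij\rangle_{b_{1}b_{2}}
=|\phi^{+}(4)\rangle_{(a_{1}a_{2})(b_{1}b_{2})},
\end{equation*}
under the relabeling $|00\rangle\rightarrow|0\rangle$, $|01\rangle\rightarrow|1\rangle$, $|10\rangle\rightarrow|2\rangle$, $|11\rangle\rightarrow|3\rangle$ of the joint ancillary systems.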
Bob performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{1}\equiv\{&M_{11}:=P[|0\rangle_{B};|0\rangle_{b_{1}};|0\rangle_{b_{2}}]\\ &\quad\quad\quad~+P[|1\rangle_{B};|0\rangle_{b_{1}};|1\rangle_{b_{2}}]\\ &\quad\quad\quad~+P[(|2\rangle,\ldots,|d_{B}'-1\rangle)_{B};|1\rangle_{b_{1}};|0\rangle_{b_{2}}]\\ &\quad\quad\quad~+ P[|d_{B}'\rangle_{B};|1\rangle_{b_{1}};|1\rangle_{b_{2}}],~\\ &M_{12}:=P[|0\rangle_{B};|0\rangle_{b_{1}};|1\rangle_{b_{2}}]\\ &\quad\quad\quad~+P[|1\rangle_{B};|1\rangle_{b_{1}};|0\rangle_{b_{2}}]\\ &\quad\quad\quad~+P[(|2\rangle,\ldots,|d_{B}'-1\rangle)_{B};|1\rangle_{b_{1}};|1\rangle_{b_{2}}]\\ &\quad\quad\quad~+ P[|d_{B}'\rangle_{B};|0\rangle_{b_{1}};|0\rangle_{b_{2}}],~\\ &M_{13}:=P[|0\rangle_{B};|1\rangle_{b_{1}};|0\rangle_{b_{2}}]\\ &\quad\quad\quad~+P[|1\rangle_{B};|1\rangle_{b_{1}};|1\rangle_{b_{2}}]\\ &\quad\quad\quad~+P[(|2\rangle,\ldots,|d_{B}'-1\rangle)_{B};|0\rangle_{b_{1}};|0\rangle_{b_{2}}]\\ &\quad\quad\quad~+ P[|d_{B}'\rangle_{B};|0\rangle_{b_{1}};|1\rangle_{b_{2}}],~\\ &M_{14}:=I-M_{11}-M_{12}-M_{13}\}. \end{aligned} \end{equation*} Charlie performs the measurement \begin{equation*} \begin{aligned}\mathcal{M}_{2}\equiv\{&M_{21}:=P[(|0\rangle,|1\rangle)_{C};|0\rangle_{c_{1}}]+P[(|2\rangle,\\ &\quad\quad\quad\quad~~\ldots,|d_{C}'\rangle)_{C};|1\rangle_{c_{1}}],~\\ &M_{22}:=I-M_{21}\}. \end{aligned} \end{equation*} Similar to the proof of Theorem 11, when $a_{1}a_{2}a_{3}$ and $b_{1}b_{2}$ are substituted for ancillary systems $a_{1}a_{2}$ and $b_{1}$ in (\ref{K1}), respectively, the outcomes are obtained. It is easy to prove that these postmeasurement states are also locally distinguishable. \section{The proof of theorem 13}\label{M} Notice that $d_{C},d_{D}\leq d_{B}$. The states of subsystems $C$ and $D$ are teleported to Bob using the maximally entangled states $|\phi^{+}(d_{C})\rangle_{BC}$ and $|\phi^{+}(d_{D})\rangle_{BD}$, respectively. Their union is represented by $\widetilde{B}$. In addition, to locally discriminate the set (\ref{24}), Alice and Bob share a maximally entangled state $|\phi^{+}(3)\rangle_{ab}$. The specific protocol is as follows. Alice performs the measurement \begin{equation*} \begin{aligned} \mathcal{M}_{1}\equiv\{&M_{11}:=P[|0\rangle_{A};|0\rangle_{a}]+P[(|1\rangle,\ldots,|d_{A}'-1\rangle)_{A};\\ &\quad\quad\quad\quad~~|1\rangle_{a}]+P[|d_{A}'\rangle_{A};|2\rangle_{a}],~\\ &M_{12}:=P[|0\rangle_{A};|1\rangle_{a}]+P[(|1\rangle,\ldots,|d_{A}'-1\rangle)_{A};\\ &\quad\quad\quad\quad~~|2\rangle_{a}]+P[|d_{A}'\rangle_{A};|0\rangle_{a}],~\\ &M_{13}:=I-M_{11}-M_{12}\}. 
\end{aligned} \end{equation*} Suppose the outcome corresponding to $M_{11}$ clicks, the resulting postmeasurement states are \begin{equation*} \begin{aligned} &U_{11}\rightarrow\{|0\rangle_{A}|\xi_{i}\circ \eta_{j}\circ (0\pm d_{D}')\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &U_{12}\rightarrow\{|\xi_{i}\rangle_{A}|\eta_{j}\circ (0\pm d_{C}')\circ 0\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &U_{13}\rightarrow\{(|0\rangle_{A}|00\rangle_{ab}+|\eta_{j}^{1}\rangle_{A}|11\rangle_{ab})|(0\pm d_{B}')\circ 0\\ &\quad\quad\quad~\circ \xi_{i}\rangle_{\widetilde{B}}\}, \\ &U_{14}\rightarrow\{(|0\rangle_{A}|00\rangle_{ab}\pm |d_{A}'\rangle_{A}|22\rangle_{ab})|0\circ \xi_{i}\circ \eta_{j}\rangle_{\widetilde{B}}\}, \\ &U_{21}\rightarrow\{|\xi_{i}\rangle_{A}|d_{B}'\circ \gamma_{k}\circ \eta_{j}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &U_{22}\rightarrow\{|d_{A}'\rangle_{A}|\gamma_{k}\circ \eta_{j}\circ \xi_{i}\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \\ &U_{23}\rightarrow\{(|\gamma_{k}^{1}\rangle_{A}|11\rangle_{ab}+|\gamma_{k}^{2}\rangle_{A}|22\rangle_{ab})|\eta_{j}\circ \xi_{i}\circ d_{D}'\rangle_{\widetilde{B}}\}, \\ &U_{24}\rightarrow\{(|0\rangle_{A}|00\rangle_{ab}+|\eta_{j}^{1}\rangle_{A}|11\rangle_{ab})|\xi_{i}\circ d_{C}'\circ \gamma_{k}\rangle_{\widetilde{B}}\},\\ &U_{31}\rightarrow\{|d_{A}'\rangle_{A}|0\circ (0\pm d_{C}')\circ \gamma_{k}\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \\ &U_{32}\rightarrow\{|0\rangle_{A}|(0\pm d_{B}')\circ \gamma_{k}\circ d_{D}'\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &U_{33}\rightarrow\{(|0\rangle_{A}|00\rangle_{ab}\pm |d_{A}'\rangle_{A}|22\rangle_{ab})|\gamma_{k}\circ d_{C}'\circ 0\rangle_{\widetilde{B}}\}, \\ &U_{34}\rightarrow\{(|\gamma_{k}^{1}\rangle_{A}|11\rangle_{ab}+|\gamma_{k}^{2}\rangle_{A}|22\rangle_{ab})|d_{B}'\circ 0\circ (0\\ &\quad\quad\quad~\pm d_{D}')\rangle_{\widetilde{B}}\}, \\ &U_{41}\rightarrow\{|\xi_{i}\rangle_{A}|\xi_{i}\circ 0\circ \gamma_{k}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &U_{42}\rightarrow\{|\xi_{i}\rangle_{A}|0\circ \gamma_{k}\circ \xi_{i}\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &U_{43}\rightarrow\{|0\rangle_{A}|\gamma_{k}\circ \xi_{i}\circ \xi_{i}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &U_{44}\rightarrow\{(|\gamma_{k}^{1}\rangle_{A}|11\rangle_{ab}+|\gamma_{k}^{2}\rangle_{A}|22\rangle_{ab})|\xi_{i}\circ \xi_{i}\circ 0\rangle_{\widetilde{B}}\}, \\ &U_{51}\rightarrow\{|d_{A}'\rangle_{A}|d_{B}'\circ \xi_{i}\circ (0\pm d_{D}')\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \\ &U_{52}\rightarrow\{|d_{A}'\rangle_{A}|\xi_{i}\circ (0\pm d_{C}')\circ d_{D}'\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \\ &U_{53}\rightarrow\{|\xi_{i}\rangle_{A}|(0\pm d_{B}')\circ d_{C}'\circ d_{D}'\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &U_{54}\rightarrow\{(|0\rangle_{A}|00\rangle_{ab}\pm |d_{A}'\rangle_{A}|22\rangle_{ab})|d_{B}'\circ d_{C}'\circ \xi_{i}\rangle_{\widetilde{B}}\}, \\ &U_{61}\rightarrow\{|0\rangle_{A}|0\circ d_{C}'\circ \eta_{j}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &U_{62}\rightarrow\{|0\rangle_{A}|d_{B}'\circ \eta_{j}\circ 0\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &U_{63}\rightarrow\{|d_{A}'\rangle_{A}|\eta_{j}\circ 0\circ 0\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \\ &U_{64}\rightarrow\{(|0\rangle_{A}|00\rangle_{ab}+|\eta_{j}^{1}\rangle_{A}|11\rangle_{ab})|0\circ 0\circ d_{D}'\rangle_{\widetilde{B}}\}, \\ &U_{71}\rightarrow\{|0\rangle_{A}|\xi_{i}\circ 0\circ \xi_{i}\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ &U_{72}\rightarrow\{|\xi_{i}\rangle_{A}|0\circ \xi_{i}\circ 0\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ 
&U_{81}\rightarrow\{|0\rangle_{A}|d_{B}'\circ 0\circ d_{D}'\rangle_{\widetilde{B}}|00\rangle_{ab}\}, \\ \end{aligned} \end{equation*}
\begin{equation*} \begin{aligned} &U_{82}\rightarrow\{|d_{A}'\rangle_{A}|0\circ d_{C}'\circ 0\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \\ &U_{91}\rightarrow\{|\xi_{i}\rangle_{A}|d_{B}'\circ \xi_{i}\circ d_{D}'\rangle_{\widetilde{B}}|11\rangle_{ab}\}, \\ &U_{92}\rightarrow\{|d_{A}'\rangle_{A}|\xi_{i}\circ d_{C}'\circ \xi_{i}\rangle_{\widetilde{B}}|22\rangle_{ab}\}, \end{aligned} \end{equation*}
where $|\eta_{j}^{1}\rangle_{A}=\sum_{u=1}^{d_{A}-2}\omega_{d_{A}-1}^{ju}|u\rangle$, $|\gamma_{k}^{1}\rangle_{A}=\sum_{u=0}^{d_{A}-3}\omega_{d_{A}-1}^{ku}|u+1\rangle$ and $|\gamma_{k}^{2}\rangle_{A}=\omega_{d_{A}-1}^{k(d_{A}-2)}|d_{A}-1\rangle$ for $j,k\in \mathcal{Z}_{d_{A}-1}$. Evidently, these states can be perfectly distinguished by LOCC. In all other cases, a similar protocol applies.
\end{appendix}
\end{document}
\begin{document}
\title{Quantum damped oscillator II:\ Bateman's Hamiltonian vs. 2D Parabolic Potential Barrier}
\begin{abstract}
We show that the quantum Bateman system, which arises in the quantization of a damped harmonic oscillator, is equivalent to a quantum problem with a 2D parabolic potential barrier, also known as a 2D inverted isotropic oscillator. It turns out that this system displays a family of complex eigenvalues corresponding to the poles of the analytic continuation of the resolvent operator to the complex energy plane. It is shown that this representation is more suitable than the hyperbolic one used recently by Blasone and Jizba.
\end{abstract}
\numberwithin{equation}{section}
\section{Introduction}
\setcounter{equation}{0}
In the previous paper \cite{I} we investigated the quantization of a 1D damped harmonic oscillator defined by the following equation of motion
\begin{equation}\label{damped_osc} \ddot{x}+2\gamma \dot{x}+ \kappa x \;=\; 0\ , \end{equation}
where $\gamma>0$ denotes the damping constant. To quantize this system we follow an old observation of Bateman \cite{Bat31} and double the number of degrees of freedom, that is, together with (\ref{damped_osc}) we consider
\begin{equation}\label{damped_osc2} \ddot{y}-2\gamma \dot{y}+ \kappa y \;=\; 0\ , \end{equation}
i.e. an amplified oscillator. A detailed historical review of Bateman's idea may be found in \cite{Dekker81}. For more recent papers see e.g. \cite{Vit92} and \cite{Bla04}. The enlarged system is a Hamiltonian one and it is governed by the following classical Bateman Hamiltonian:
\begin{equation}\label{BHam} H(x,y,p_x,p_y)= p_x p_y -\gamma(xp_x - yp_y)+ \omega^2 xy\ , \end{equation}
where $ \omega = \sqrt{\kappa - \gamma^2}\, .$\footnote{Throughout the paper we shall consider the underdamped case, i.e. $\kappa > \gamma^2$.} Now, performing a linear canonical transformation $(x,y,p_x,p_y) \longrightarrow (x_1,x_2,p_1,p_2)$:
\begin{eqnarray} x_1 & = & \frac{p_y}{\sqrt{\omega}} \ , \hspace{1.5cm} p_1\, = \,-\sqrt{\omega}\, y \\ x_2 & = & -\sqrt{\omega}\, x \ , \hspace{1cm} p_2 \,=\, -\frac{p_x}{\sqrt{\omega}} \ , \end{eqnarray}
and applying a standard symmetric Weyl ordering, one obtains the following quantum Hamiltonian
\begin{equation}\label{H-old} \hat{H} = \omega\, \hat{\bf p} \wedge \hat{\bf x} - \gamma \, \hat{\bf p} \odot\hat{\bf x}\ , \end{equation}
where $\hat{\bf x}=(\hat{x}_1,\hat{x}_2)$, $\hat{\bf p}=(\hat{p}_1,\hat{p}_2)$ and we define two natural operations:
\[ \hat{\bf p} \wedge \hat{\bf x} = \hat{p}_1 \hat{x}_2 - \hat{p}_2 \hat{x}_1\ , \hspace{1cm} \hat{\bf p} \odot \hat{\bf x} = \hat{\bf x} \odot \hat{\bf p} = \frac 12\,\sum_{k=1}^2 ( \hat{x}_k \hat{p}_k + \hat{p}_k\hat{x}_k) \ . \]
Note that $[\hat{\bf p} \wedge \hat{\bf x},\hat{\bf p} \odot\hat{\bf x}] =0$. This operator was carefully analyzed in \cite{I}. In particular it was shown that the family of complex eigenvalues
\begin{equation} \hat{H} |\mathfrak{f}^\pm_{nl} \rangle = E^\pm_{nl} |\mathfrak{f}^\pm_{nl}\rangle\ , \end{equation}
with
\begin{equation}\label{Enl} E^\pm_{nl} = \hbar\omega l \pm i \hbar \gamma (|l| + 2n +1)\ , \end{equation}
found already by Feshbach and Tikochinsky \cite{FT}, corresponds to the poles of the resolvent operator $\hat{R}(z,\hat{H}) = (\hat{H} - z)^{-1}$. Therefore, the corresponding generalized eigenvectors $|\mathfrak{f}^\pm_{nl} \rangle$ may be interpreted as resonant states of the Bateman system.
This shows that dissipation of energy is directly related to the presence of resonances. In the present paper we continue to study this system, but in a different representation. Let us observe that, performing the linear canonical transformation $({\bf x},{\bf p}) \longrightarrow ({\bf u},{\bf v})$:
\begin{equation}\label{CAN} {\bf x} = \frac{ \gamma {\bf u} - {\bf v}}{\sqrt{2\gamma}}\ , \ \ \ \ \ {\bf p} = \frac{ \gamma {\bf u} + {\bf v}}{\sqrt{2\gamma}}\ , \end{equation}
one obtains for the Hamiltonian
\begin{equation}\label{H-new} \hat{H} = \omega\, \hat{\bf v}\wedge \hat{\bf u} + \hat{H}_{\rm iho}\ , \end{equation}
where
\begin{equation}\label{H-iho} \hat{H}_{\rm iho} = \frac 12\,( \hat{\bf v}^2 -\gamma^2 \hat{\bf u}^2)\ , \end{equation}
represents the Hamiltonian of a 2D isotropic inverted harmonic oscillator (iho) or, equivalently, of a 2D potential barrier $- \gamma^2 \hat{\bf u}^2$. Now, $\omega\, \hat{\bf v}\wedge \hat{\bf u}$ generates an SO(2) rotation on the $(u_1,u_2)$--plane. Therefore, in the rotating frame the problem is described by the following Schr\"odinger equation
\begin{equation} i\hbar\dot{\psi}_{\rm rf} = \hat{H}_{\rm iho}{\psi}_{\rm rf}\ , \end{equation}
where the \emph{rotating frame} wave function ${\psi}_{\rm rf} = \exp(i\omega\, \hat{\bf v}\wedge \hat{\bf u}\, t/\hbar)\, \psi$. A 1D inverted (or reversed) oscillator was studied by several authors in various contexts \cite{Kemble,Wheeler,Friedman,Ann1,Ann2,Castagnino,Shimbori1}. Recently, this system was studied in the context of dissipation in quantum mechanics and a detailed analysis of its resonant states was performed in \cite{damp2}. The present paper is mostly devoted to the analysis of the 2D iho. We find its energy eigenvectors and show that they become singular when the energy is continued into the complex plane. The complex poles correspond to resonant states of the 2D potential barrier \cite{RES-1,RES-2}. Finally, we analyze the Bateman system in the hyperbolic representation used recently in \cite{Bla04} by Blasone and Jizba. It turns out that this representation is not appropriate for describing resonant states, and hence the family of generalized complex eigenvalues found in \cite{Bla04} is not directly related to the spectral properties of the Bateman Hamiltonian. We stress that this does not mean that the two representations are physically inequivalent; clearly, they are equivalent. Different representations lead to different mathematical realizations, connected with different functional spaces and different boundary conditions. These may lead to different analytical properties, and hence some representations may display resonant states while others do not. From the mathematical point of view, the natural language for analyzing the spectral properties of Bateman's system is the so-called rigged Hilbert space approach to quantum mechanics \cite{RHS1,RHS2,Bohm-Gadella,Bohm}. We show (cf. Section~\ref{ANALYTICITY}) that there are two dense subspaces $\Phi_\pm \subset L^2( \mathbb{R}^2_{\bf u})$ such that the restriction of the unitary group $\hat{U}(t) = e^{-i\hat{H}t/\hbar}$ to $\Phi_\pm$ no longer defines a group but gives rise to two semigroups: $\hat{U}_-(t)=\hat{U}(t)|_{\Phi_-}$ defined for $t\geq 0$ and $\hat{U}_+(t)=\hat{U}(t)|_{\Phi_+}$ defined for $t\leq 0$.
It means that the quantum damped oscillator corresponds to the following Gel'fand triplets:
\begin{equation} \Phi_\pm \, \subset \, L^2( \mathbb{R}^2_{\bf u}) \, \subset \, \Phi_\pm'\ , \end{equation}
and hence it serves as a simple example of Arno Bohm's theory of resonances \cite{Bohm}.
\section{2D inverted oscillator and complex eigenvalues}
\setcounter{equation}{0}
\subsection{2D harmonic oscillator}
Let us briefly recall the spectral properties of the 2D harmonic oscillator (see e.g. \cite{Fluge,Kleinert}):
\begin{equation}\label{H-ho} \hat{H}_{\rm ho} = -\frac{\hbar^2}{2}\, \triangle_2 + \frac{\Omega^2}{2}\, \rho^2\ , \end{equation}
where the 2D Laplacian reads
\begin{equation}\label{2D-laplacian} \triangle_2 = \frac{\partial^2}{\partial\rho^2} + \frac{1}{\rho} \frac{\partial}{\partial\rho} + \frac{1}{\rho^2} \frac{\partial^2}{\partial\varphi^2}\ , \end{equation}
and $(\rho,\varphi)$ are standard polar coordinates on the $(u_1,u_2)$--plane. The corresponding eigenvalue problem
\begin{equation} \hat{H}_{\rm ho} \psi_{nl}^{\rm ho} = \varepsilon_{nl}^{\rm ho}\psi_{nl}^{\rm ho}\ , \end{equation}
is solved by
\begin{equation} \psi_{nl}^{\rm ho}(\rho,\varphi) = R_{nl}(\rho)\Phi_l(\varphi)\ , \end{equation}
where
\begin{equation} \Phi_l(\varphi) = \frac{e^{il\varphi}}{\sqrt{2\pi}}\ ,\ \ \ \ \ l=0,\pm1,\pm2,\ldots \ , \end{equation}
and the radial functions
\begin{equation} R_{nl}(\rho) = C_{nl}\, (\sqrt{\Omega/\hbar}\, \rho)^{|l|}\, \exp(-{\Omega}\rho^2/2\hbar)\, {_1}F_1(-n,|l|+1,\Omega\,\rho^2/\hbar)\ , \end{equation}
where the normalization constant reads as follows
\begin{equation} C_{nl} = \frac{\sqrt{2\Omega/\hbar}}{|l|!}\, \sqrt{ \frac{(n+|l|)!}{n!}}\ , \ \ \ \ \ \ n=0,1,2,\ldots\ . \end{equation}
Finally, the corresponding eigenvalues $\varepsilon_{nl}^{\rm ho}$ are given by the following formula
\begin{equation} \varepsilon_{nl}^{\rm ho} = \hbar \Omega ( |l| + 2n + 1)\ . \end{equation}
Note that, using the well-known relation between the confluent hypergeometric function $_1F_1$ and the generalized Laguerre polynomials \cite{GR,AS}
\begin{equation}\label{L-1F1} L^\mu_n(z) = \frac{\Gamma(n+\mu +1)}{\Gamma(n+1)\Gamma(\mu +1)}\, _1F_1(-n,\mu+1,z)\ , \end{equation}
one may rewrite $R_{nl}$ alternatively as follows
\begin{equation} R_{nl}(\rho) = \sqrt{2\Omega/\hbar}\,\sqrt{ \frac{n!}{(n+|l|)!}} \, (\sqrt{\Omega/\hbar}\, \rho)^{|l|}\, \exp(-{\Omega}\rho^2/2\hbar)\, L^{|l|}_n(\Omega\,\rho^2/\hbar)\ . \end{equation}
It is evident that the family $\psi^{\rm ho}_{nl}$ is orthonormal
\begin{equation} \langle\, \psi^{\rm ho}_{nl}| \psi^{\rm ho}_{n'l'}\,\rangle = \delta_{nn'}\delta_{ll'}\ , \end{equation}
and complete
\begin{equation} \sum_{n=0}^\infty \sum_{l=-\infty}^\infty\, \overline{\psi^{\rm ho}_{nl}(\rho,\varphi)}\, \psi^{\rm ho}_{nl}(\rho',\varphi') = \frac{1}{\rho}\,\delta(\rho-\rho')\delta(\varphi-\varphi')\ , \end{equation}
where $\langle\, \cdot \,|\, \cdot \,\rangle$ denotes the standard scalar product in the Hilbert space
\begin{equation} {\cal H} = L^2( \mathbb{R}_+,\rho\, d\rho) \otimes L^2([0,2\pi),d\varphi)\ .
\end{equation}
\subsection{Scaling and complex eigenvalues}
Let us note that $\hat{H}_{\rm iho}$ defined in (\ref{H-iho}) corresponds to the Hamiltonian of the harmonic oscillator with purely imaginary frequency $\Omega = \pm i\gamma$. The connection with a harmonic oscillator may be established by the following scaling operator
\begin{equation}\label{V-lambda} \hat{V}_\lambda := \exp\left( \frac{\lambda}{\hbar}\ \hat{\bf v} \odot \hat{\bf u} \right)\ , \end{equation}
with $\lambda \in \mathbb{R}$. Using the commutation relation $[\hat{u}_k,\hat{v}_l]=i\hbar\delta_{kl}$, this operator may be rewritten as follows
\begin{equation} \hat{V}_\lambda = e^{-i{\lambda}}\, \exp\left(-i\lambda\, \rho\frac{\partial}{\partial \rho}\right) \ , \end{equation}
and therefore it defines a complex dilation, i.e. the action of $ \hat{V}_\lambda$ on a function $\psi=\psi(\rho,\varphi)$ is given by
\begin{equation} \hat{V}_\lambda\, \psi(\rho,\varphi) = e^{-i{\lambda}}\, \psi( e^{-i{\lambda}}\, \rho,\varphi)\ . \end{equation}
In particular one easily finds:
\begin{equation} \hat{V}_\lambda\, \hat{H}_{\rm iho} \, \hat{V}_\lambda^{-1} = \frac 12\,e^{2i\lambda} \, \left( -{\hbar^2}\triangle_2 - e^{-4i\lambda}\gamma^2 \rho^2\right)\ . \end{equation}
Therefore, for $e^{4i\lambda} = -1$, i.e. $\lambda = \pm \pi/4$, one has
\begin{equation} \hat{V}_{\pm \pi/4}\, \hat{H}_{\rm iho} \, \hat{V}_{\pm \pi/4}^{-1} = \pm i \, \left( -\frac{\hbar^2}{2}\,\triangle_2 + \frac{\gamma^2}{2}\, \rho^2\right)\ . \end{equation}
Now, let us introduce
\begin{equation}\label{psi-psi-ho} \mathfrak{u}^{\pm}_{nl} = \hat{V}_{\mp \pi/4}\, \psi^{\rm ho}_{nl} \ , \end{equation}
that is
\begin{equation} \mathfrak{u}^{\pm}_{nl}(\rho,\varphi) = \sqrt{\pm i}\, \psi^{\rm ho}_{nl}(\sqrt{\pm i}\,\rho,\varphi)\ . \end{equation}
It is evident that
\begin{equation} \hat{H}_{\rm iho} \, \mathfrak{u}^{\pm}_{nl} = \varepsilon^\pm_{nl}\, \mathfrak{u}^{\pm}_{nl}\ , \end{equation}
where
\begin{equation} \varepsilon^\pm_{nl} = \pm i\, \varepsilon^{\rm ho}_{nl} = \pm i\hbar \gamma ( |l| + 2n + 1)\ . \end{equation}
We stress that $\hat{V}_\lambda$ is not unitary (for $\lambda \in \mathbb{R}$) and hence, in general, $\hat{V}_\lambda\psi$ does not belong to $\cal H$ even for $\psi \in {\cal H}$. In particular, the generalized eigenvectors $\mathfrak{u}^\pm_{nl}$ do not belong to $\cal H$ (the radial part $R_{nl}(\sqrt{\pm i}\,\rho)$ is not an element of $L^2( \mathbb{R}_+,\rho\, d\rho)$).
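For the reader's convenience, the last two relations can be checked in one line from the similarity relation above. Since $\hat{V}_{\mp \pi/4}=\hat{V}_{\pm \pi/4}^{-1}$, applying $\hat{V}_{\pm \pi/4}^{-1}$ to the harmonic oscillator eigenvalue equation (with $\Omega=\gamma$) gives
\begin{equation*}
\hat{H}_{\rm iho}\, \mathfrak{u}^{\pm}_{nl} = \hat{V}_{\pm \pi/4}^{-1}\Big( \hat{V}_{\pm \pi/4}\, \hat{H}_{\rm iho} \, \hat{V}_{\pm \pi/4}^{-1}\Big)\, \psi^{\rm ho}_{nl} = \pm i\, \hat{V}_{\pm \pi/4}^{-1}\, \hat{H}_{\rm ho}\, \psi^{\rm ho}_{nl} = \pm i\hbar \gamma\,( |l| + 2n + 1)\, \mathfrak{u}^{\pm}_{nl}\ .
\end{equation*}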
\begin{proposition} \label{PRO-psi}
The two families of generalized eigenvectors $\mathfrak{u}^\pm_{nl}$ satisfy the following properties:
\begin{enumerate}
\item they are bi-orthonormal
\begin{equation} \int_0^{2\pi}\!\!\!\!\int_0^\infty\, \overline{\mathfrak{u}^\pm_{nl}(\rho,\varphi)}\, \mathfrak{u}^\mp_{n'l'}(\rho,\varphi)\, \rho\, d\rho\,d\varphi = \delta_{nn'}\delta_{ll'}\ , \end{equation}
\item they are bi-complete
\begin{equation} \sum_{n=0}^\infty \sum_{l=-\infty}^\infty\, \overline{\mathfrak{u}^\pm_{nl}(\rho,\varphi)}\, \mathfrak{u}^\mp_{nl}(\rho',\varphi') = \frac{1}{\rho}\,\delta(\rho-\rho')\delta(\varphi-\varphi')\ . \end{equation}
\end{enumerate}
\end{proposition}
The proof follows immediately from the orthonormality and completeness of the oscillator eigenfunctions $\psi^{\rm ho}_{nl}$.
\section{Spectral properties of the Bateman Hamiltonian}
\setcounter{equation}{0}
Now, we solve the corresponding spectral problem for the Bateman Hamiltonian (\ref{H-new}). Note that $\hat{H}$ is bounded neither from below nor from above, and hence its spectrum is $\sigma(\hat{H}) = (-\infty,\infty)$. The corresponding generalized eigenvectors satisfy
\begin{equation} \hat{H} \psi_{\varepsilon,l} = E_{\varepsilon,l} \psi_{\varepsilon,l}\ , \end{equation}
where $l \in \mathbb{Z}$ and $\varepsilon \in \mathbb{R}$. Assuming the following factorized form of $\psi_{\varepsilon,l}$
\begin{equation}\label{psi-el} \psi_{\varepsilon,l}(\rho,\varphi) = R_{\varepsilon,l}(\rho) \Phi_l(\varphi)\ , \end{equation}
one has
\begin{equation}\label{E-el} E_{\varepsilon,l} = \omega\hbar l + \varepsilon\ , \end{equation}
with
\begin{equation} \hat{H}_{\rm iho} R_{\varepsilon,l} = \varepsilon R_{\varepsilon,l}\ . \end{equation}
The above equation, rewritten in terms of the $(\rho,\varphi)$-variables, takes the following form
\begin{equation} \left( \frac{\partial^2}{\partial\rho^2} + \frac{1}{\rho} \frac{\partial}{\partial\rho} - \frac{|l|^2}{\rho^2} + \frac{\gamma^2}{\hbar^2}\, \rho^2 + \frac{2\varepsilon}{\hbar^2} \right) R_{\varepsilon,l} =0\ , \end{equation}
and its solution reads as follows
\begin{equation}\label{R-e} R_{\varepsilon,l}(\rho) = N_{\varepsilon,l}\, (\sqrt{i\gamma/\hbar}\,\rho)^{|l|}\, \exp(-i{\gamma}\rho^2/2\hbar)\, {_1}\!F_1(a,|l|+1,i\gamma\rho^2/\hbar)\ , \end{equation}
with
\begin{equation}\label{a} a = \frac 12 \left( |l| + 1 - \frac{\varepsilon}{i\gamma\hbar} \right)\ . \end{equation}
The normalization factor $N_{\varepsilon,l}$ is chosen such that
\begin{equation} \int_0^\infty \overline{R_{\varepsilon,l}(\rho)}\, R_{\varepsilon',l}(\rho)\, \rho\, d\rho = \delta(\varepsilon-\varepsilon')\ . \end{equation}
It turns out (see Appendix A) that
\begin{equation}\label{N-el} N_{\varepsilon,l} = \sqrt{\frac{\gamma}{\pi |l|!}}\, (-i)^a\, \Gamma(a)\ , \end{equation}
with $a$ defined in (\ref{a}).
\begin{proposition} \label{psi-e-l}
The family of generalized eigenvectors $\psi_{\varepsilon,l}$ satisfies the following properties:
\begin{enumerate}
\item orthonormality
\begin{equation} \int_0^{2\pi}\!\!\!\!\int_0^\infty\, \overline{\psi_{\varepsilon,l}(\rho,\varphi)}\, \psi_{\varepsilon',l'}(\rho,\varphi)\, \rho\, d\rho\,d\varphi = \delta(\varepsilon-\varepsilon')\delta_{ll'}\ , \end{equation}
\item completeness
\begin{equation} \sum_{l=-\infty}^\infty\, \int_{-\infty}^\infty d\varepsilon\ \overline{\psi_{\varepsilon,l}(\rho,\varphi)}\, \psi_{\varepsilon,l}(\rho',\varphi') = \frac{1}{\rho}\,\delta(\rho-\rho')\delta(\varphi-\varphi')\ . \end{equation}
\end{enumerate}
\end{proposition}
Let us define another family of generalized energy eigenvectors
\begin{equation} \chi_{\varepsilon,l} = {\cal T}\psi_{\varepsilon,l}\ , \end{equation}
where the anti-unitary operator $\cal T$ is defined as follows
\begin{equation}\label{T-def} {\cal T}\psi_{\varepsilon,l}(\rho,\varphi) = \overline{R_{\varepsilon,l}(\rho)}\Phi_l(\varphi)\ . \end{equation}
It is easy to show that Bateman's Hamiltonian $\hat{H}$ is $\cal T$--invariant
\begin{equation} {\cal T} \hat{H} {\cal T}^\dag = \hat{H}\ . \end{equation}
Moreover, if $\psi(t) = \hat{U}(t)\psi_0$, then ${\cal T}\psi(t) = \hat{U}(-t)({\cal T}\psi_0)$, which shows that $\cal T$ is a time reversal operator. Finally, Proposition~\ref{psi-e-l} gives rise to the following spectral representation of the Bateman Hamiltonian
\begin{equation}\label{H-Psi} \hat{H} = \sum_{l=-\infty}^{+\infty} \int_{-\infty}^\infty\, d\varepsilon\, E_{\varepsilon,l} | \psi_{\varepsilon,l}\rangle \langle \psi_{\varepsilon,l}| = \sum_{l=-\infty}^{+\infty} \int_{-\infty}^\infty\, d\varepsilon\, E_{\varepsilon,l} | \chi_{\varepsilon,l}\rangle \langle \chi_{\varepsilon,l}| \ , \end{equation}
with $E_{\varepsilon,l}$ defined in (\ref{E-el}).
\section{Analyticity, resolvent and resonances}
\setcounter{equation}{0}
\label{ANALYTICITY}
Now, we continue the energy eigenfunctions $\psi_{\varepsilon,l}$ and $\chi_{\varepsilon,l}$ into the complex $\varepsilon$-plane. Note that the $\varepsilon$-dependence enters $R_{\varepsilon,l}$ via the normalization factor $N_{\varepsilon,l}$ and the function $_1F_1(a,|l|+1,i\gamma\rho^2/\hbar)$ ($a$ is $\varepsilon$-dependent, see (\ref{a})). It is well known (see e.g. \cite{AS}) that the confluent hypergeometric function $_1\!F_1(a,b,z)$ defines a convergent series for all values of the complex parameters $a$, $b$ and $z$ provided $a\neq -n$ and $b\neq -m$, with $m$ and $n$ positive integers. Moreover, if $a=-n$ and $b\neq -m$, then $_1\!F_1(a,b,z)$ is a polynomial of degree $n$ in $z$. In our case $b=|l|+1$, which is never negative, and hence $_1F_1(a,|l|+1,i\gamma\rho^2/\hbar)$ is analytic in $\varepsilon$. However, this is no longer true for the normalization constant $N_{\varepsilon,l}$ given by (\ref{N-el}). The $\Gamma$-function has simple poles at $a=-n$, with $n=0,1,2,\ldots$, which correspond to
\begin{equation}\label{e-n-l} \varepsilon = \varepsilon_{nl} = i\gamma \hbar(|l| + 2n +1)\ , \end{equation}
on the complex $\varepsilon$-plane. On the other hand, the time-reversed function $ \overline{R_{\varepsilon,l}}$ has simple poles at $\varepsilon=\overline{\varepsilon_{nl}} = -\varepsilon_{nl}$.
It is, therefore, natural to introduce two classes of functions that respect these analytical properties of $\psi_{\varepsilon,l}$ and $\chi_{\varepsilon,l}$. Recall \cite{Duren} that a smooth function $f=f(\varepsilon)$ is in the Hardy class from above ${\cal H}^2_+$ (from below ${\cal H}^2_-$) if $f(\varepsilon)$ is a boundary value of an analytic function in the upper, i.e. $\mbox{Im}\, \varepsilon\geq 0$ (lower, i.e. $\mbox{Im}\, \varepsilon\leq 0$), half of the complex $\varepsilon$-plane, vanishing faster than any power of $\varepsilon$ on the upper (lower) semi-circle $|\varepsilon| \rightarrow \infty$. Define
\begin{equation} \Phi_- := \Big\{ \phi \in {\cal S}( \mathbb{R}^2_{\bf u})\, \Big| \, f(\varepsilon):= \langle \chi_{\varepsilon,l} | \phi \rangle \in {\cal H}^2_-\, \Big\} \ , \end{equation}
and
\begin{equation} \Phi_+ := \Big\{ \phi \in {\cal S}( \mathbb{R}^2_{\bf u})\, \Big| \, f(\varepsilon):=\langle \psi_{\varepsilon,l} | \phi \rangle \in {\cal H}^2_+\, \Big\} \ , \end{equation}
where ${\cal S}( \mathbb{R}^2_{\bf u})$ denotes the Schwartz space \cite{Yosida}, i.e. the space of $C^\infty(\mathbb{R}^2_{\bf u})$ functions $f=f(u_1,u_2)$ vanishing at infinity ($|{\bf u}| \longrightarrow \infty$) faster than any polynomial. It is evident from (\ref{T-def}) that
\begin{equation} \Phi_+ = {\cal T}({\Phi_-})\ . \end{equation}
The main result of this section is contained in the following
\begin{theorem}\label{MAIN}
For any function $\phi^\pm \in \Phi_\pm$ one has
\begin{eqnarray} \label{phi+} \phi^+ = \sum_{n=0}^\infty\sum_{l=-\infty}^\infty \mathfrak{u}^+_{nl} \langle \mathfrak{u}^-_{nl}|\phi^+\rangle \ , \end{eqnarray}
and
\begin{eqnarray} \label{phi-} \phi^- = \sum_{n=0}^\infty\sum_{l=-\infty}^\infty \mathfrak{u}^-_{nl} \langle \mathfrak{u}^+_{nl}|\phi^-\rangle \ . \end{eqnarray}
\end{theorem}
For the proof see Appendix B. The above theorem implies the following spectral resolutions of the Hamiltonian:
\begin{equation} \label{GMH-1} \hat{H}_- \equiv \hat{H}\Big|_{\Phi_-} \, = \, \sum_{n=0}^\infty \sum_{l=-\infty}^\infty\, {E^-_{nl}}\, |\mathfrak{u}^-_{nl}\rangle\langle\mathfrak{u}^+_{nl}|\ , \end{equation}
and
\begin{equation} \label{GMH-2} \hat{H}_+ \equiv \hat{H}\Big|_{\Phi_+} \, = \, \sum_{n=0}^\infty \sum_{l=-\infty}^\infty\, {E^+_{nl}}\, |\mathfrak{u}^+_{nl}\rangle\langle\mathfrak{u}^-_{nl}|\ . \end{equation}
In the above formulae $E^\pm_{nl}$ is given by (\ref{Enl}). The same techniques may be applied to the resolvent operator
\begin{equation} \hat{R}(z,\hat{H}) = \frac{1}{\hat{H}-z}\ . \end{equation}
One obtains
\begin{eqnarray} \hat{R}_+(z,\hat{H}) &=& \sum_{l=-\infty}^\infty \int_{-\infty}^\infty\, \frac{d\varepsilon}{E_{\varepsilon,l}-z}\, |\psi_{\varepsilon,l}\rangle \langle \psi_{\varepsilon,l}|\ \Big|_{\Phi_+}\nonumber \\ &=& \sum_{n=0}^\infty\sum_{l=-\infty}^\infty \frac{1}{E^+_{nl}-z}\, |\mathfrak{u}^-_{nl}\rangle \langle \mathfrak{u}^+_{nl}|\ , \end{eqnarray}
on $\Phi_+$, and
\begin{eqnarray} \hat{R}_-(z,\hat{H}) &=& \sum_{l=-\infty}^\infty \int_{-\infty}^\infty\, \frac{d\varepsilon}{E_{\varepsilon,l}-z}\, |\chi_{\varepsilon,l}\rangle \langle \chi_{\varepsilon,l}|\ \Big|_{\Phi_-}\nonumber \\ &=& \sum_{n=0}^\infty\sum_{l=-\infty}^\infty \frac{1}{{E^-_{nl}}-z}\, |\mathfrak{u}^+_{nl}\rangle \langle \mathfrak{u}^-_{nl}|\ , \end{eqnarray}
on $\Phi_-$.
Hence, $\hat{R}_+(z,\hat{H})$ has poles at $z=E^+_{nl}$, and $\hat{R}_-(z,\hat{H})$ has poles at $z={E^-_{nl}}$. As usual, the eigenvectors $\mathfrak{u}^+_{nl}$ and $\mathfrak{u}^-_{nl}$ corresponding to the poles of the resolvent are interpreted as resonant states. Note that the Cauchy integral formula implies
\begin{equation} \hat{P}^+_{nl} \, :=\, |\mathfrak{u}^-_{nl}\rangle\langle\mathfrak{u}^+_{nl}| \, =\, \frac{1}{2\pi i} \oint_{\Gamma^+_{nl}} \hat{R}_+(z,\hat{H})dz \ , \end{equation}
where $\Gamma^+_{nl}$ is a clockwise closed curve that encircles the singularity $z=E^+_{nl}$. Similarly,
\begin{equation} \hat{P}^-_{nl} \, :=\, |\mathfrak{u}^+_{nl}\rangle\langle\mathfrak{u}^-_{nl}| \, =\, \frac{1}{2\pi i} \oint_{\Gamma^-_{nl}} \hat{R}_-(z,\hat{H})dz \ , \end{equation}
where $\Gamma^-_{nl}$ is an anti-clockwise closed curve that encircles the singularity $z=E^-_{nl}$. One easily shows that
\begin{equation} \hat{P}^\pm_{nl} \cdot \hat{P}^\pm_{n'l'} = \delta_{nn'}\delta_{ll'}\,\hat{P}^\pm_{nl}\ , \end{equation}
and hence the spectral decompositions (\ref{GMH-1}) and (\ref{GMH-2}) may be written as follows:
\begin{equation}\label{H+-} \hat{H}_\pm = \sum_{n=0}^\infty\sum_{l=-\infty}^\infty \, E^\pm_{nl}\, \hat{P}^\pm_{nl} \ . \end{equation}
Finally, let us note that the restriction of the unitary group $\hat{U}(t)=e^{-i\hat{H}t/\hbar}$ to $\Phi_\pm$ no longer defines a group. It gives rise to two semigroups:
\begin{equation} \hat{U}_-(t)\, :=\, e^{-i\hat{H}_-t/\hbar} \ :\ \Phi_- \ \longrightarrow\ \Phi_-\ ,\ \ \ \ \ \ {\rm for}\ \ \ t\geq 0\ , \end{equation}
and
\begin{equation} \hat{U}_+(t) \, :=\, e^{-i\hat{H}_+t/\hbar} \ :\ \Phi_+ \ \longrightarrow\ \Phi_+\ ,\ \ \ \ \ \ {\rm for}\ \ \ t\leq 0\ . \end{equation}
Using (\ref{H+-}) and the formula for $E^\pm_{nl}$ one finds:
\begin{eqnarray} \label{phi:-} \phi^-(t) = \hat{U}_-(t)\phi^- = \sum_{l=-\infty}^\infty e^{-i\omega lt}\, \sum_{n=0}^\infty e^{-\gamma(2n + |l| +1)t} \, \hat{P}^-_{nl}\, \phi^- \ , \end{eqnarray}
for $t\geq 0$, and
\begin{eqnarray} \label{phi:+} \phi^+(t) = \hat{U}_+(t)\phi^+ = \sum_{l=-\infty}^\infty e^{-i\omega lt}\, \sum_{n=0}^\infty e^{\gamma(2n + |l| +1)t} \, \hat{P}^+_{nl}\, \phi^+\ , \end{eqnarray}
for $t\leq 0$. We stress that $\phi^-(t)$ ($\phi^+(t)$) does belong to $L^2(\mathbb{R}^2_{\bf u} )$ also for $t<0$ ($t>0$). However, $\phi^-(t) \in \Phi_-$ ($\phi^+(t) \in \Phi_+$) only for $t\geq 0$ ($t\leq 0$). In this way irreversibility enters the dynamics of the reversed oscillator, by restricting it to the dense subspaces $\Phi_\pm$ of $L^2(\mathbb{R}^2_{\bf u})$. From the mathematical point of view, the above construction gives rise to so-called rigged Hilbert spaces (or Gel'fand triplets) \cite{RHS1,RHS2,Bohm-Gadella,Bohm}:
\begin{equation} \Phi_- \subset {\cal H} \subset \Phi_-'\ , \end{equation}
and
\begin{equation} \Phi_+ \subset {\cal H} \subset \Phi_+'\ , \end{equation}
where $\Phi_\pm'$ denote the dual spaces, i.e. the spaces of linear functionals on $\Phi_\pm$. Note that the generalized eigenvectors $\mathfrak{u}^\pm_{nl}$ are not elements of $\cal H$. However, they do belong to $\Phi_\pm'$. The first triplet $(\Phi_-,{\cal H},\Phi_-')$ corresponds to the evolution for $t\geq 0$, whereas the second one $(\Phi_+,{\cal H},\Phi_+')$ corresponds to the evolution for $t\leq 0$.
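As a simple illustration of these formulae, consider the lowest resonance pair $n=l=0$. The corresponding poles sit at $E^\pm_{00}=\pm i\hbar\gamma$ and, by (\ref{phi:-}) together with the relation $\hat{P}^-_{nl}\hat{P}^-_{n'l'}=\delta_{nn'}\delta_{ll'}\hat{P}^-_{nl}$, the component $\hat{P}^-_{00}\phi^-$ of any $\phi^-\in\Phi_-$ evolves as
\begin{equation*}
\hat{U}_-(t)\,\hat{P}^-_{00}\phi^- = e^{-\gamma t}\, \hat{P}^-_{00}\phi^-\ , \qquad t\geq 0\ ,
\end{equation*}
so the slowest resonance decays precisely at the classical damping rate $\gamma$.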
\section{Bateman's system in the hyperbolic representation}
\label{Hyperbolic}
\setcounter{equation}{0}
In a recent paper \cite{Bla04} Blasone and Jizba used another representation. They transform Bateman's Hamiltonian (\ref{BHam}) into the following form
\begin{equation} H(y_1,y_2,w_1,w_2) = \frac 12 (w_1^2 - w_2^2) - \gamma( y_1w_2 + y_2w_1) + \frac 12 \omega^2 (y_1^2 - y_2^2) \ , \end{equation}
with the new positions
\begin{equation} y_1 = \frac{x+y}{\sqrt{2}}\ , \hspace{1cm} y_2 = \frac{x-y}{\sqrt{2}}\ , \end{equation}
and new canonical momenta
\begin{equation} w_1 = \frac{p_x+p_y}{\sqrt{2}}\ , \hspace{1cm} w_2 = \frac{p_x-p_y}{\sqrt{2}}\ . \end{equation}
Now, introducing hyperbolic coordinates $(\varrho,u)$:
\begin{equation} y_1 = \varrho \, \cosh u \ , \hspace{1cm} y_2 = \varrho \, \sinh u \ , \end{equation}
the canonical quantization leads to the following Hamiltonian defined on the Hilbert space ${\cal H} = L^2( \mathbb{R}_+,\varrho d\varrho) \otimes L^2( \mathbb{R},du)$:
\begin{equation}\label{HB-hyper} \hat{H} = \hat{H}_0 + \hat{H}_{\rm iho}\ , \end{equation}
with
\begin{equation}\label{H0} \hat{H}_0 = - \frac{\hbar^2}{2}\, \square_{\,2} + \frac{\omega^2}{2}\, \varrho^2\ , \end{equation}
and the iho part $\hat{H}_{\rm iho}$
\begin{equation} \hat{H}_{\rm iho}= i\gamma\hbar\, \frac{\partial}{\partial u}\ . \end{equation}
In the above formulae $\square_{\,2}$ denotes the 2D wave operator, that is
\begin{equation} \square_{\,2} = \frac{\partial^2}{\partial y_1^2} - \frac{\partial^2}{\partial y_2^2}\, =\, \frac{\partial^2}{\partial\varrho^2} + \frac{1}{\varrho} \frac{\partial}{\partial\varrho} - \frac{1}{\varrho^2} \frac{\partial^2}{\partial u^2}\ . \end{equation}
Clearly, in the $(\varrho,u)$ variables the formula for $\hat{H}_{\rm iho}$ simplifies considerably, and $\hat{H}_{\rm iho}$ represents the generator of an SO(1,1) hyperbolic rotation on the $(y_1,y_2)$--plane. In this particular representation $\hat{H}_{\rm iho}$ defines a self-adjoint operator on $L^2( \mathbb{R},du)$. The corresponding eigenproblem is immediately solved
\begin{equation} \hat{H}_{\rm iho} \Phi_\nu = \gamma \hbar\nu \, \Phi_\nu\ , \end{equation}
with $\Phi_\nu(u) = {e^{-i\nu u}}/{\sqrt{2\pi}}\ $, and hence it reproduces the continuous spectrum of the 2D iho, $ \sigma( \hat{H}_{\rm iho}) = (-\infty,\infty)$. However, there is a crucial difference between the elliptic $(\rho,\varphi)$ and the hyperbolic $(\varrho,u)$ representations. The generalized eigenvectors $\Phi_\nu$ may be analytically continued to the entire complex $\nu$--plane. Therefore, the hyperbolic representation does not display the family of resonances corresponding to the complex eigenvalues $\varepsilon_{nl}$ defined in (\ref{e-n-l}). Of course, one may fix the values of $\nu$ by hand to $\nu = i(2n + |l| + 1)$, but then the corresponding discrete family $\Phi_{nl}$ is neither bi-orthogonal nor bi-complete (cf. Proposition~\ref{psi-e-l}). To show how the complex eigenvalues of Blasone and Jizba \cite{Bla04} appear, let us consider $\hat{H}_0$ defined in (\ref{H0}). Note that $\hat{H}_0$ resembles the 2D harmonic oscillator given by (\ref{H-ho}). There is, however, a crucial difference between $\hat{H}_0$ and $\hat{H}_{\rm ho}$.
The hyperbolic operator `$-\square_{\,2}$', contrary to the elliptic one `$-\triangle_2$', is not positive definite and hence allows for negative eigenvalues. This is clear, since in the elliptic $(\rho,\varphi)$--representation $\hat{H}_0 = i\omega\hbar\, \partial_\varphi$ defines a self-adjoint operator on $L^2([0,2\pi),d\varphi)$ with purely discrete spectrum $\omega\hbar l$ ($l \in \mathbb{Z}$). Now, the spectral analysis of the Bateman Hamiltonian represented by (\ref{HB-hyper}) is straightforward:
\begin{equation} \hat{H}\psi_{\epsilon\nu} = {\cal E}_{\epsilon\nu}\psi_{\epsilon\nu} \ , \end{equation}
with
\begin{equation} {\cal E}_{\epsilon\nu} = \epsilon + \gamma\hbar\nu\ , \end{equation}
and the following factorized form of $\psi_{\epsilon\nu}$:
\begin{equation}\label{psi-e-nu} \psi_{\epsilon\nu}(\varrho,u) = {\cal R}_{\epsilon\nu}(\varrho)\, \Phi_\nu(u)\ . \end{equation}
The radial function ${\cal R}_{\epsilon\nu}$ solves
\begin{equation} \hat{H}_0 {\cal R}_{\epsilon\nu} = \epsilon\, {\cal R}_{\epsilon\nu}\ , \end{equation}
and in analogy to (\ref{R-e}) it is given by
\begin{equation}\label{R-U} {\cal R}_{\epsilon\nu}(\varrho) = N_{\epsilon\nu} \, (\sqrt{\omega/\hbar}\varrho)^{i\nu} \exp( -\omega\varrho^2/2\hbar)\, U(b,i\nu + 1,\omega\varrho^2/\hbar)\ , \end{equation}
with
\begin{equation} b = \frac 12 \left( i\nu + 1 - \frac{\epsilon}{\hbar\omega} \right)\ . \end{equation}
In (\ref{R-U}) we have used, instead of the standard confluent hypergeometric function $_1F_1$, the so-called Tricomi function $U$ (see e.g. \cite{BE}).\footnote{Actually, in \cite{BE} (and also in \cite{LL}) this function is denoted by $G$. We follow the notation of Abramowitz and Stegun \cite{AS}.} It is defined by
\begin{equation} U(a,c,z) = \frac{\Gamma(1-c)}{\Gamma(a-c+1)}\, _1F_1(a,c,z) + \frac{\Gamma(c-1)}{\Gamma(a)}\, z^{1-c}\, _1F_1(a-c+1,2-c,z) \ . \end{equation}
The Tricomi function $U(a,c,z)$ is an analytic function of its arguments and for $a=-n$ $(n=0,1,2,\ldots)$ it defines a polynomial of order $n$ in $z$:
\begin{equation}\label{U-L} U(-n,\alpha+1,z) = (-1)^n n!\, L^\alpha_n(z) \ . \end{equation}
Moreover, using the following property of $U$ (an analog of (\ref{FeF}) for $_1F_1$)
\begin{equation} U(a,c,z) = z^{1-c}\, U(1+a-c,2-c,z)\ , \end{equation}
one obtains
\begin{equation}\label{RR} \int_0^\infty \overline{ {\cal R}_{\epsilon\nu}(\varrho)} {\cal R}_{\epsilon\nu}(\varrho) \, \varrho d\varrho = \frac{\hbar}{2\omega}\,|N_{\epsilon\nu}|^2 \int_0^\infty\, z^{i\nu} \, e^{-z} U^2(b,i\nu +1,z)\, dz\ , \end{equation}
with $z = \omega\varrho^2/\hbar$. Now, for $b=-n$, ${\cal R}_{\epsilon\nu}$ belongs to the Hilbert space $L^2( \mathbb{R}_+,\varrho d\varrho)$. This implies
\begin{equation} \epsilon = \hbar \omega (2n + 1 + i\nu)\ , \end{equation}
and hence reproduces the discrete spectrum `$\hbar\omega \times {\rm integer}$' iff $i\nu = l = 0,\pm 1, \pm 2, \ldots\ $. Now, using (\ref{U-L}), (\ref{RR}) and
\begin{equation} \int_0^\infty e^{-z}z^\alpha L^\alpha_n(z)L^\alpha_m(z)\, dz = \frac {1}{n!}\, \Gamma(n+\alpha+1)\, \delta_{nm}\ , \end{equation}
with $\alpha>-1$, one obtains the following family ${\cal R}_{nl} \in L^2( \mathbb{R}_+,\varrho d\varrho)$:\footnote{There is a difference in the normalization factor in formula (37) of \cite{Bla04}.
It follows from a slightly different definition of $L^\alpha_n$.}
\begin{equation} {\cal R}_{nl}(\varrho) = \sqrt{\frac{2\omega/\hbar}{n!\Gamma(n+l+1)}}\ (\sqrt{\omega/\hbar}\varrho)^{l} \exp( -\omega\varrho^2/2\hbar)\, L^l_n(\omega\varrho^2/\hbar)\ , \end{equation}
with $n=0,1,2,\ldots\ $, and $l=0,1,2,\ldots\ $, satisfying
\begin{equation} \int_0^\infty \overline{ {\cal R}_{nl}(\varrho)} {\cal R}_{n'l}(\varrho) \, \varrho d\varrho = \delta_{nn'}\ . \end{equation}
We stress that the family ${\cal R}_{nl}$ is defined for $l \geq 0$ only (otherwise it cannot be normalized!). Finally, defining
\begin{equation} \phi_{nl}(\varrho,u) = \frac{1}{\sqrt{2\pi}}\,{\cal R}_{nl}(\varrho) \, e^{-ul} \ , \end{equation}
one has
\begin{equation} \hat{H}\phi_{nl} = {\cal E}_{nl}\, \phi_{nl}\ , \end{equation}
with
\begin{equation}\label{enl} {\cal E}_{nl} = \hbar\omega(2n + l +1) - i\hbar\gamma\, l\ . \end{equation}
There is, however, a crucial difference between the families $\mathfrak{u}^\pm_{nl}(\rho,\varphi)$ and $\phi_{nl}(\varrho,u)$. The family $|\mathfrak{u}^\pm_{nl}\rangle$ corresponds to the poles of $\psi_{\varepsilon,l}$ from (\ref{psi-el}). No such correspondence holds for $|\phi_{nl}\rangle$ and $\psi_{\epsilon\nu}$ from (\ref{psi-e-nu}). In particular, there is no analog of Theorem~\ref{MAIN} for $|\phi_{nl}\rangle$. Moreover, ${\cal E}_{nl}$, contrary to $E_{nl}$ from (\ref{Enl}), does not fit the formula for the complex eigenvalues of Feshbach and Tikochinsky \cite{FT} (see the detailed discussion in \cite{I}). It simply defines another family, which is, however, not directly related to the spectral properties of the Bateman Hamiltonian.
\section*{Appendix A}
\def\theequation{A.\arabic{equation}}
\setcounter{equation}{0}
\label{Normalization}
To compute $N_{\varepsilon,l}$ in (\ref{R-e}) let us analyze the quantity $I_\varepsilon= \int_0^\infty \overline{R_{\varepsilon,l}(\rho)}\, R_{\varepsilon,l}(\rho)\, \rho\, d\rho$. Clearly, this integral diverges ($I_\varepsilon=\delta(0)$); however, its structure enables the calculation of $N_{\varepsilon,l}$. One has
\begin{equation}\label{A1} I_\varepsilon = \frac{1}{2\gamma}\, |N_{\varepsilon,l}|^2 \, \int_0^\infty z^{|l|}\, _1\! F_1(a,|l|+1,iz)\, _1\! F_1(\overline{a},|l|+1,-iz)\, dz\ , \end{equation}
where we defined $z = \gamma \rho^2$. Now, the integral in (\ref{A1}) belongs to the general class
\begin{equation} J = \int_0^\infty\, e^{-\lambda z}\, z^{\mu -1}\, _1\! F_1(\alpha,\mu,kz)\, _1\! F_1(\alpha',\mu,k'z)\, dz\ , \end{equation}
given by the following formula (see Appendix f in \cite{LL}):
\begin{equation}\label{J} J = \Gamma(\mu)\, \lambda^{\alpha+\alpha'-\mu} (\lambda-k)^{-\alpha}(\lambda-k')^{-\alpha'}\, _2F_1\left( \alpha,\alpha',\mu; \frac{kk'}{(\lambda-k)(\lambda-k')} \right)\ . \end{equation}
Using the above formula with $\lambda=0$, $\mu=|l|+1$, $\alpha=a$, $\alpha'=\overline{a}$ and $k=-k'=i$ one finds
\begin{equation} I_\varepsilon= \frac{1}{2\gamma}\, |N_{\varepsilon,l}|^2 \, (-i)^{-a}\, \overline{ (-i)^{-a}}\, _2F_1(a,\overline{a},|l|+1;1)\ .
\end{equation}
Finally, noting that
\begin{equation*} _2F_1(\alpha,\beta,\gamma;1) = \frac{\Gamma(\gamma) \Gamma(\gamma-\alpha-\beta)}{\Gamma(\gamma-\alpha) \Gamma(\gamma-\beta)}\ , \end{equation*}
one has
\begin{equation}\label{II} I_\varepsilon= \frac{|l|!}{2\gamma}\, |N_{\varepsilon,l}|^2 \, (-i)^{-a}\, \overline{ (-i)^{-a}}\, \frac{\Gamma(0)}{\Gamma(a)\Gamma( \overline{a})}\ . \end{equation}
Therefore, comparing (\ref{II}) with $I_\varepsilon=\delta(0)$ one finds
\begin{equation} N_{\varepsilon,l} = \sqrt{\frac{\gamma}{\pi |l|!}}\, (-i)^a\, \Gamma(a)\ , \end{equation}
which proves (\ref{N-el}).
\section*{Appendix B}
\def\theequation{B.\arabic{equation}}
\label{Proof}
\setcounter{equation}{0}
Due to the Gel'fand-Maurin spectral theorem \cite{RHS1,RHS2}, an arbitrary function $\phi^+\in \Phi_+$ may be decomposed with respect to the basis $\psi_{\varepsilon,l}$:
\begin{equation} \label{GM-1} \phi^+ = \sum_{l=-\infty}^\infty \int_{-\infty}^\infty d\varepsilon\, \psi_{\varepsilon,l} \langle \psi_{\varepsilon,l}|\phi^+\rangle \ . \end{equation}
Now, since $ \langle \psi_{\varepsilon,l} | \phi^+\rangle \in {\cal H}^2_+$, we may close the integration contour along the upper semi-circle $|\varepsilon|\rightarrow \infty$. Applying the Residue Theorem one obtains
\begin{eqnarray} \label{phi-R} \phi^+(\rho,\varphi) = 2\pi i \sum_{l=-\infty}^\infty \sum_{n=0}^\infty \,\mbox{Res}\, \psi_{\varepsilon,l}(\rho,\varphi)\Big|_{\varepsilon=\varepsilon_{nl}}\, \langle \psi_{\varepsilon,l} | \phi^+\rangle\Big|_{\varepsilon=\varepsilon_{nl}} \, . \end{eqnarray}
Using the well-known formula for the residue
\begin{equation} \mbox{Res}\, \Gamma(a)\Big|_{a=-n} = \frac{(-1)^n}{n!}\ , \end{equation}
one obtains
\begin{equation} \mbox{Res}\, \psi_{\varepsilon,l}\Big|_{\varepsilon=\varepsilon_{nl}} = \frac{-i}{\sqrt{i^{2n+|l|+1}}}\, \sqrt{\frac{1}{2\pi\hbar}}\, \sqrt{\frac{(n+|l|)!}{n!|l|!}}\ {\mathfrak{u}^+_{nl}}\ . \end{equation}
Moreover, the analytic function $\overline{\psi_{\varepsilon,l}}$ evaluated at $\varepsilon=\varepsilon_{nl}$ reads:
\begin{equation*} \overline{\psi_{\varepsilon,l}}\Big|_{\varepsilon=\varepsilon_{nl}} = i^{n+|l|+1}\,\sqrt{\frac{\gamma}{\pi |l|!}}\, (\sqrt{-i\gamma/\hbar}\,\rho)^{|l|}\, \exp(i{\gamma}\rho^2/2\hbar)\, {_1}\!F_1(n+|l|+1,|l|+1,-i\gamma\rho^2/\hbar)\, \overline{\Phi_l(\varphi)}\ . \end{equation*}
Due to the well-known relation \cite{GR,Morse,AS}
\begin{equation}\label{FeF} _1F_1(a,b,z) = e^z\, _1F_1(b-a,b,-z)\ , \end{equation}
one finds
\begin{equation} \overline{\psi_{\varepsilon,l}}\Big|_{\varepsilon=\varepsilon_{nl}} = \sqrt{i^{2n+|l|+1}}\, \sqrt{\frac{\hbar}{2\pi}}\, \sqrt{\frac{n!|l|!}{(n+|l|)!}}\ \overline{\mathfrak{u}^-_{nl}}\ , \end{equation}
and hence formula (\ref{phi+}) follows. In a similar way one shows (\ref{phi-}).
\section*{Acknowledgments}
This work was partially supported by the Polish State Committee for Scientific Research Grant {\em Informatyka i in\.zynieria kwantowa} No. PBZ-Min-008/P03/03.
\end{document}
\begin{document}
\title{A hybrid landmark Aalen-Johansen estimator for transition probabilities in partially non-Markov multi-state models}
\begin{abstract}
Multi-state models are increasingly being used to model complex epidemiological and clinical outcomes over time. It is common to assume that the models are Markov, but the assumption can often be unrealistic. The Markov assumption is seldom checked, and violations can lead to biased estimation for many parameters of interest. As argued by Datta and Satten (2001), the Aalen-Johansen estimator of occupation probabilities is consistent also in the non-Markov case. Putter and Spitoni (2018) exploit this fact to construct a consistent estimator of state transition probabilities, the landmark Aalen-Johansen estimator, which does not rely on the Markov assumption. A disadvantage of landmarking is data reduction, leading to a loss of power. This is problematic for ``less traveled'' transitions, and undesirable when such transitions indeed exhibit Markov behaviour. Using a framework of partially non-Markov multi-state models, we suggest a hybrid landmark Aalen-Johansen estimator for transition probabilities. The proposed estimator is a compromise between regular Aalen-Johansen and landmark estimation, using transition-specific landmarking, and can drastically improve statistical power. The methods are compared in a simulation study and in a real data application modelling individual transitions between states of sick leave, disability, education, work and unemployment. In the application, a birth cohort of \mbox{184 951} Norwegian men is followed for 14 years from the year they turn 21, using data from national registries.
\end{abstract}
\keywords{Landmarking \and Non-Markov multi-state models \and The Aalen-Johansen estimator \and Transition probabilities}
\section{Introduction}
\label{sec1}
Multi-state models are increasingly being used to model complex epidemiological and clinical outcomes over time. One example is the analysis of long-term sick leave and health-related absence from work, where detailed longitudinal data on individuals are available through administrative registries (see e.g. \citet{hoff17}). Multi-state models extend traditional hazard-based time-to-event models to situations with a higher, finite, number of states, each of which defines a possible competing risks situation (\citealp{hougaard99,andersen02,putter07,meira08}). In such models, the objects of interest for estimation, besides covariate effects and the transition hazards themselves, are typically occupation and transition probabilities. In a Markov multi-state model, occupation and transition probabilities can be estimated consistently by a plug-in estimator based on the estimated transition intensities, using the Aalen-Johansen (AJ) estimator \citep{aalen08}. However, for non-Markov models, the AJ estimator is only consistent for occupation probabilities (\citealp{datta01, glidden02, Overgaard19, Beyersmann20}). Several methods have been proposed for estimating transition probabilities in general semi- and non-Markov multi-state models; see for example \cite{allignol14}, \cite{titman15}, \cite{deunaalvarez15} and \cite{putter16}, who all propose methods based on subsampling. The landmark Aalen-Johansen (LMAJ) method of Putter and Spitoni \citep{putter16} is based on analysing a subset of the population who are in a specific state at a specific time point. This reference time and state is referred to as a landmark.
Applying the AJ estimator to this landmark subset gives consistent estimates of transition probabilities from the landmark state at the landmark time, even for non-Markov models. A consequence of restricting to a landmark sample is that these subsets may become small, and estimation inefficient. In this paper we suggest an alternative approach, the hybrid landmark Aalen-Johansen (HAJ) estimator, for models consisting of Markov and non-Markov transitions. This hybrid estimator is based on a transition-wise consideration of whether to use data from the landmark subsample or all available data in the estimation procedure. Inspired by \cite{TitmanPutter19}, a type of two-sample test is suggested to select which transitions are Markov (or close to Markov) and which are not. The resulting HAJ estimator can be seen as a compromise between the two extremes of assuming either that all transitions are Markov or that none are. As our results demonstrate, the HAJ estimator will typically have less bias than the AJ estimator and higher precision than the LMAJ estimator. The outline of the paper is as follows. In Section \ref{sec2} we define partially non-Markov multi-state processes and present the HAJ estimator. In Section \ref{sec3} we give a heuristic justification of the estimator, discuss large-sample properties, and show how tests of Markov behaviour following \cite{TitmanPutter19} can be used to construct the estimator. In Section \ref{sec4} we consider a simulation study comparing the HAJ estimator to the AJ and LMAJ estimators. In Section \ref{sec5} we apply the techniques to our motivating example, using data from a Norwegian birth cohort to model sickness absence and work participation over time. A discussion is found in Section \ref{sec6}. R code for implementation and reproduction of the simulation study is available on GitHub (see Supplementary Material).
\section{A hybrid landmark Aalen-Johansen estimator}
\label{sec2}
Let us consider a multi-state model $X(t)$ over a bounded time interval $[0, \tau]$, taking values in the state space $\mathcal{K} = \{1, \ldots, K\}$. Let $E \subset \mathcal{K} \times \mathcal{K}$ be the set of possible transitions of $X$. For a given state $l \in \mathcal{K}$, we are interested in the transition probabilities from $l$ to each of the states in $\mathcal{K}$, given by
\begin{equation} \mathbf{P}_l(s, t) := \left( P_{l1}(s,t), \ldots, P_{lK}(s,t) \right)^\top, \label{intro1} \end{equation}
where $P_{lk}(s,t) := \mathsf{P}(X(t) = k \, | \, X(s) = l)$ and $k \in \{1,\ldots,K\}$. The problem of interest is to produce a consistent estimator of \eqref{intro1}. Note that the theory that follows is also valid for more than one landmark state $l$, so that $l$ in practice can be a set of states. However, to ease notation, we focus on the most common scenario where $l$ is one particular state in the state space $\mathcal{K}$.
\subsection{The Aalen-Johansen estimator}
When the multi-state process is Markov, a consistent estimator of these transition probabilities is provided by the Aalen-Johansen estimator \citep{AJ78}. For this, consider $n$ i.i.d. realisations $X_i(t)$ of $X(t)$, where for subject $i$ we define the at-risk process for state $j$ as $Y_j^{(i)}(t) = 1\{ X_i(t-) = j\}$ and the transition counting process for transition $j \to k \in E$ as $N_{jk}^{(i)}(s,t) = \sum_{u \in (s,t]} 1\{X_i(u-) = j, X_i(u) = k\}$.
Then, define the aggregated at-risk and transition counting processes as
\begin{align*} \overline{Y}_{j}(t) := \sum_{i = 1}^{n} 1\{X_i(t-) = j\} \quad \text{and} \quad \overline{N}_{jk}(t) := \sum_{i = 1}^{n} \sum_{u \in (0,t]} 1\{X_i(u-) = j, X_i(u) = k\}, \end{align*}
and let $\overline{\mathbf{Y}}(t) := (\overline{Y}_{1}(t), \ldots, \overline{Y}_{K}(t))$, and $\overline{Y}_{\bullet}(t) = \sum_{j=1}^K \overline{Y}_j(t)$ be the total number of subjects at risk at time $t$. For $J_{j}(t) := 1\{\overline{Y}_j(t) > 0\}$, let
\begin{align} \widehat{\Lambda}_{jk}(t) := \int_{0}^{t} \frac{J_{j}(u) \mathrm{d}\overline{N}_{jk}(u)}{\overline{Y}_{j}(u)} \label{NA1} \end{align}
be the Nelson-Aalen estimator of the transition rates, and let $\widehat{\mathbf{\Lambda}}(t)$ be the matrix with $(j,k)$th element equal to $\widehat{\Lambda}_{jk}(t)$ and diagonal elements $\widehat{\Lambda}_{jj}(t) = - \sum_{k \not= j} \widehat{\Lambda}_{jk}(t)$. The Aalen-Johansen (AJ) estimator of the transition probability matrix $\mathbf{P}(s, t)$, with elements $P_{jk}(s,t)$, is then given by
\begin{align*} \widehat{\mathbf{P}}^{\textrm{AJ}}(s,t) &:= \prod_{u \in (s,t]} \left(\mathbf{I} + \Delta\widehat{\mathbf{\Lambda}}(u) \right). \end{align*}
Estimated state occupation probabilities at time $t$ may be obtained by
\begin{equation}\label{eq:stateocc} \widehat{\mathbf{\pi}}^{\textrm{AJ}}(t) = \widehat{\mathbf{\pi}}(0) \widehat{\mathbf{P}}^{\textrm{AJ}}(0,t), \end{equation}
where $\widehat{\mathbf{\pi}}(0)$ is the row vector of empirical state occupation probabilities at $t=0$, given by $\widehat{\pi}_j(0) := \overline{Y}_j(0+) / \overline{Y}_{\bullet}(0+)$. Here, the $+$ in $(t+)$ means that the number of subjects observed to be in state $j$ \emph{at} time $t$, rather than just before time $t$, is to be taken. \citet{datta01} argued that the estimated state occupation probabilities in~\eqref{eq:stateocc} are consistent, even if the multi-state model is non-Markov. The Aalen-Johansen (AJ) estimator of the transition probabilities $\mathbf{P}_l(s, t)$ from \eqref{intro1} is then given by
\begin{align*} \widehat{\mathbf{P}}_{l}^{\textrm{AJ}}(s,t) &:= e_{l} \prod_{u \in (s,t]} \left(\mathbf{I} + \Delta\widehat{\mathbf{\Lambda}}(u) \right), \end{align*}
where $e_{l}$ is a row vector with the $l$th element equal to 1, and all other elements 0.
\subsection{The landmark Aalen-Johansen estimator}
Building on the results of \cite{datta01}, \cite{putter16} defined the landmark Aalen-Johansen (LMAJ) estimator of transition probabilities. This estimator uses the landmark population $\{i: X_i(s) = l\}$ for estimation of the transition intensities. In what follows we consider the landmark time $s$ and landmark state $l$, on which we condition, as fixed. We suppress the dependence on $s$ and $l$ in the notation and define the landmark at-risk and counting processes as $Y_j^{(i, \textrm{LM})}(t) := 1\{X_i(s) = l\}Y_j^{(i)}(t)$ and $N_{jk}^{(i, \textrm{LM})}(t) := 1\{X_i(s) = l\}N_{jk}^{(i)}(s,t)$. We define the aggregated at-risk and counting processes based on the landmark population as
\begin{align*} \overline{Y}_{j}^{(\textrm{LM})}(t) := \sum_{i = 1}^{n} Y_j^{(i, \textrm{LM})}(t) \quad \text{and} \quad \overline{N}_{jk}^{(\textrm{LM})}(t) := \sum_{i = 1}^{n} N_{jk}^{(i, \textrm{LM})}(t). \end{align*}
Let $\overline{\mathbf{Y}}^{(\textrm{LM})}(t) := (\overline{Y}_{1}^{(\textrm{LM})}(t), \ldots, \overline{Y}_{K}^{(\textrm{LM})}(t))$ and $\overline{Y}_{\bullet}^{(\textrm{LM})}(t) := \sum_{j = 1}^{K} \overline{Y}_{j}^{(\textrm{LM})}(t)$.
For $J_{j}^{(\textrm{LM})}(t) := 1\{\overline{Y}_{j}^{(\textrm{LM})}(t) > 0\}$, define the landmark Nelson-Aalen estimator of the transition rates as
\begin{align} \widehat{\Lambda}_{jk}^{(\textrm{LM})}(t) := \int_{s}^{t} \frac{J_{j}^{(\textrm{LM})}(u) \mathrm{d}\overline{N}_{jk}^{(\textrm{LM})}(u)}{\overline{Y}_{j}^{(\textrm{LM})}(u)}, \label{NALMAJ1} \end{align}
and let $\widehat{\mathbf{\Lambda}}^{(\textrm{LM})}(t)$ be the matrix with $(j,k)$th element $\widehat{\Lambda}_{jk}^{(\textrm{LM})}(t)$ and diagonal element $\widehat{\Lambda}_{jj}^{(\textrm{LM})}(t) = - \sum_{k \not= j} \widehat{\Lambda}_{jk}^{(\textrm{LM})}(t)$. Then the landmark Aalen-Johansen (LMAJ) estimator of \eqref{intro1} presented by \cite{putter16} is given by
\begin{align*} \widehat{\mathbf{P}}_{l}^{\textrm{LMAJ}}(s,t) &:= e_{l} \prod_{u \in (s,t]} \left(\mathbf{I} + \Delta\widehat{\mathbf{\Lambda}}^{(\textrm{LM})}(u) \right). \end{align*}
\subsection{The hybrid landmark Aalen-Johansen estimator}
An undesirable feature of landmark subsampling is a reduction of the number of individuals at risk used for estimation, for all transitions, including Markov transitions if such exist. An easy way of improving the estimation is to plug Nelson-Aalen estimates based on the landmark sample into the Aalen-Johansen (AJ) estimator only for the transitions that cause non-Markov behaviour; this is the main idea behind what we will refer to as the hybrid landmark Aalen-Johansen (HAJ) estimator. For such an estimation procedure, we are thus particularly interested in detecting to which specific transitions violations of the Markov assumption are attributable. If these form only a subset of the full set of transitions $E$, we refer to the multi-state process as \emph{partially non-Markov} and denote the set of non-Markov transitions as $A \subset E$. The HAJ estimator uses as estimators of the transition rates
\begin{align} \widehat{\Lambda}_{jk}^{(\textrm{H})}(t) := \left\{ \begin{array}{ll} \widehat{\Lambda}_{jk}(t), & \hbox{$jk \not\in A$;} \\ \widehat{\Lambda}_{jk}^{(\textrm{LM})}(t), & \hbox{$jk \in A$.} \end{array} \right. \label{NAHAJ1} \end{align}
Define $\widehat{\mathbf{\Lambda}}^{(\textrm{H})}(t)$ to be the matrix with $(j,k)$th element $\widehat{\Lambda}_{jk}^{(\textrm{H})}(t)$ and diagonal element $\widehat{\Lambda}_{jj}^{(\textrm{H})}(t) = - \sum_{k \not= j} \widehat{\Lambda}_{jk}^{(\textrm{H})}(t)$. Now, the HAJ estimator of \eqref{intro1} is
\begin{align*} \mathbf{\widehat{P}}_{l}^{\textrm{HAJ}}(s,t) &:= e_l \prod_{u \in (s,t]} (\mathbf{I} + \Delta\widehat{\mathbf{\Lambda}}^{(\textrm{H})}(u)). \end{align*}
Observe that for $A = \emptyset$ we get the classical AJ estimator, while for $A = E$ we get the LMAJ estimator. If we assume Markov behaviour only for certain transitions, this means in practice that we can implement the HAJ estimator by removing individuals that are not in the landmark state at the landmark time point from the specific risk sets used for estimating the intensities of the non-Markov transitions. Applying the AJ estimator to such a reduced dataset from the landmark time point and onward will produce the HAJ estimate. As already mentioned, the LMAJ estimator is expensive, in the sense that the data reduction reduces precision (increases variance). The HAJ estimator will guarantee equal or better precision, at the possible expense of introducing bias. In Section~\ref{sec4} we will study how these opposing effects balance out.
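To make this construction concrete, the following minimal Python sketch (an illustrative, hypothetical implementation written for this exposition, not the R code referred to in the Supplementary Material) computes $\mathbf{P}_l(s,t)$ from fully observed, uncensored paths recorded on a common time grid, with $s$ and $t$ assumed to be grid points and the set \texttt{A} of non-Markov transitions taken as given. With \texttt{A} empty it reduces to the AJ estimator, and with \texttt{A} equal to all transitions it reduces to the LMAJ estimator.
\begin{verbatim}
import numpy as np

def hybrid_aj(paths, times, s, t, l, A=frozenset(), K=None):
    # paths[i, r]: state (coded 0, ..., K-1) of subject i at times[r]
    # A: set of (j, k) transitions treated as non-Markov; their Nelson-Aalen
    #    increments use only the landmark subsample {i : X_i(s) = l}
    paths = np.asarray(paths)
    K = K if K is not None else int(paths.max()) + 1
    s_idx, t_idx = np.searchsorted(times, [s, t])
    landmark = paths[:, s_idx] == l          # subjects in state l at time s
    P = np.zeros(K)
    P[l] = 1.0                               # the row vector e_l
    for r in range(s_idx + 1, t_idx + 1):
        prev, curr = paths[:, r - 1], paths[:, r]
        dLam = np.zeros((K, K))
        for j in range(K):
            for k in range(K):
                if j == k:
                    continue
                use = landmark if (j, k) in A else np.ones(len(paths), bool)
                at_risk = np.sum(use & (prev == j))
                if at_risk > 0:
                    dLam[j, k] = np.sum(use & (prev == j) & (curr == k)) / at_risk
            dLam[j, j] = -dLam[j].sum()
        P = P @ (np.eye(K) + dLam)           # one product-integral factor
    return P
\end{verbatim}
For instance, \verb|hybrid_aj(paths, times, s=2, t=10, l=0, A={(1, 2)})| would use the landmark subsample only for the $1 \to 2$ intensity and the full sample for all other transitions.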
\section{Justification of the HAJ estimator}
\label{sec3}
\subsection{Product limits and transition probabilities}
\label{sub31}
Estimation of transition probabilities in multi-state models relies on a special relation between conditional probabilities and cumulative hazard functions. The relation is the multi-state version of the argument for the Kaplan-Meier estimator in classical time-to-event models. For Markov multi-state models the result is due to \cite{gill90} and says that if $\mathbf{P}(s, t)$ is the transition probability matrix of a Markov multi-state model and $\mathbf{\Lambda}$ the cumulative hazard rate matrix, then we have
\begin{align} \mathbf{P}(s, t) = \lim \prod_{m = 1}^{M} \mathbf{P}(t_{m-1},t_{m}) = \lim \prod_{m = 1}^{M} \left(I + \mathbf{\Lambda}(t_{m}) - \mathbf{\Lambda}(t_{m-1}) \right), \label{refine0} \end{align}
where the limits are taken over refinements of $(s,t]$, with $t_0=s$ and $t_M=t$. As a consequence, the product integral, when considered as a functional
\begin{align} \mathbf{\Lambda} \to \Prodi_{u \in (s, t]}(I + \mathrm{d}\mathbf{\Lambda}(u)), \label{functional} \end{align}
is a convenient construct for producing plug-in estimators of probabilities in multi-state models based on estimators of the cumulative hazard matrix. Utilizing the relation of the product integral above to Duhamel's equation (see e.g. \citet[Chapter 2]{andersen93}), two ways of deriving large-sample properties of plug-in estimators are obtained: one based on the functional delta method and empirical process theory, exploiting smoothness properties of \eqref{functional}, and one based on martingale theory (see e.g. \citet[p. 320]{andersen93}). In the non-Markov case two problems arise. The main problem is that the result of Gill and Johansen establishing \eqref{refine0} no longer holds, and thus the expression of probabilities through the product integral requires a new argument. This is resolved by \cite{Overgaard19} in a spirit similar to that of \cite{gill90}. In the non-Markov case the transition rates are no longer compensators of the transition counting processes, and thus the martingale property no longer holds. This makes it unclear whether the argument of \cite{datta01} is in fact valid, a concern also expressed and remedied by \cite{Beyersmann20}. However, consistency of the Aalen-Johansen estimator of state occupation probabilities still holds, and a proof is given by \cite{Overgaard19}. The same is true for the landmark estimator, and a proof can be found in \cite{Beyersmann20}. We do not go into details regarding the large-sample properties of the HAJ estimator. Instead we will give a heuristic justification of the estimator and rely on the results of \cite{Overgaard19} and \cite{Beyersmann20}. In order to justify the HAJ estimator we first consider the LMAJ estimator. As earlier, consider a fixed landmark time point $s$ and landmark state $l$, and define $\mathbf{P}_l(t, u) = \Bigl( P_{l,jk}(t, u) \Bigr)$ to be the matrix of transition probabilities in \emph{the landmark population}, with
\begin{equation*} P_{l,jk}(t, u) := \mathsf{P}(X(u) = k \, | \, X(t) = j, X(s) = l). \end{equation*}
By construction, and regardless of the Markov assumption, we have for $s \leq t$
\begin{align*} \mathbf{P}_{l}(s,t) = e_l \mathbf{P}_l(t_{0},t_{1}) \cdots \mathbf{P}_l(t_{M-1},t_{M}), \end{align*}
for $s = t_{0} < t_{1} < \cdots < t_{M} = t$.
For a sufficiently fine partition of the interval $(s,t]$ consider the approximation $\mathbf{P}_l(t_{m-1}, t_{m}) \approx I + \mathbf{\Lambda}^{(\textrm{LM})}(t_{m}) - \mathbf{\Lambda}^{(\textrm{LM})}(t_{m-1})$, where $\Lambda_{jk}^{(\textrm{LM})}(\mathrm{d}t)$ is the transition rate of the landmark population, i.e.
\begin{align*}
\Lambda_{jk}^{(\textrm{LM})}(\mathrm{d}t) = \mathsf{E}\left[ \mathrm{d} N_{jk}^{(\textrm{LM})}(t) \, | \, Y_{j}^{(\textrm{LM})}(t) = 1 \right].
\end{align*}
The desired result is then the equality
\begin{align}
\lim \prod_{m = 1}^{M} \mathbf{P}_l(t_{m-1},t_{m}) = \lim \prod_{m = 1}^{M} \left(I + \mathbf{\Lambda}^{(\textrm{LM})}(t_{m}) - \mathbf{\Lambda}^{(\textrm{LM})}(t_{m-1}) \right), \label{refine}
\end{align}
where the limits are taken over refinements of $(s,t]$. For the Markov case, this is derived by \cite{gill90}, and for the non-Markov case, it is derived by \cite{Overgaard19}. Hence, also in the non-Markov case, an estimator of the transition probability $\mathbf{P}_{l}(s,t)$ may be obtained once we have a consistent estimator of the right-hand side of \eqref{refine}.
\subsection{The LMAJ estimator}
Assuming no censoring, one can, as pointed out by \cite{aalen01} and alternatively by \citet[p. 296]{andersen93}, derive the landmark estimator of $\mathbf{P}_{l}(s,t)$ from a bookkeeping argument. In order to do so let
\begin{align*}
\widehat{\mathbf{P}}^{(\textrm{LM})}(s,t) &= \prod_{u \in (s, t]}(I + \Delta\widehat{\mathbf{\Lambda}}^{(\textrm{LM})}(u)).
\end{align*}
A natural way of estimating the transition probabilities $\mathbf{P}_{l}(s,t)$ for specific time points $s$ and $t$ is to consider the vector of empirical proportions $\mathbf{\overline{Y}}^{(\textrm{LM})}(t) / \overline{Y}_{\bullet}^{(\textrm{LM})}(s)$. The number of individuals from the landmark sample in state $j$ just after time $s$, $\overline{Y}_{j}^{(\textrm{LM})}(s + \Delta t)$, may be expressed as the number who were in state $j$ at time $s$ plus the net arrivals in the small time frame $\Delta t$. That is
\begin{align*}
\overline{Y}_{j}^{(\textrm{LM})}(s + \Delta t) &= \overline{Y}_{j}^{(\textrm{LM})}(s) + \sum_{k \neq j} (\overline{N}_{kj}^{(\textrm{LM})}(s + \Delta t) - \overline{N}_{kj}^{(\textrm{LM})}(s)) \ - \\
& \quad \sum_{k \neq j} (\overline{N}_{jk}^{(\textrm{LM})}(s + \Delta t) - \overline{N}_{jk}^{(\textrm{LM})}(s)) \\
&\to \mathbf{\overline{Y}}^{(\textrm{LM})}(s) [I + \Delta\widehat{\mathbf{\Lambda}}^{(\textrm{LM})}(s +)]_{j}
\end{align*}
for $\Delta t \to 0$, where $[I + \Delta \widehat{\mathbf{\Lambda}}^{(\textrm{LM})}(s +)]_{j}$ denotes the $j$th column of $I + \Delta \widehat{\mathbf{\Lambda}}^{(\textrm{LM})}(s +)$. From this observation one obtains the following algebraic relation
\begin{align}
\frac{ \mathbf{\overline{Y}}^{(\textrm{LM})}(t) }{\overline{Y}_{\bullet}^{(\textrm{LM})}(s)} &= \frac{\mathbf{\overline{Y}}^{(\textrm{LM})}(s) \widehat{\mathbf{P}}^{(\textrm{LM})}(s,t) }{\overline{Y}_{\bullet}^{(\textrm{LM})}(s)} = e_{l} \widehat{\mathbf{P}}^{(\textrm{LM})}(s,t) = \mathbf{\widehat{P}}_{l}^{\textrm{LMAJ}}(s,t).
\label{LMAJ2}
\end{align}
A different conclusion to draw from \eqref{LMAJ2} is that if the equality
\begin{align*}
\frac{ \mathbf{\overline{Y}}^{(\textrm{LM})}(t) }{\overline{Y}_{\bullet}^{(\textrm{LM})}(s)} &= \hat{\pi}^{(\textrm{LM})}(s) \widehat{\mathbf{P}}^{(\textrm{LM})}(s,t)
\end{align*}
is to be satisfied when using the plug-in estimator based on the product integral, then the cumulative transition hazard matrix has to be based on Nelson-Aalen estimates obtained from the landmark data.
\subsection{The HAJ estimator}
If $X_{1}, X_{2}, \ldots$ are partially non-Markov, then there is potential gain in terms of power and variance when using the HAJ estimator. Note that
\begin{align}
\mathsf{E}\left[ \mathrm{d} N_{jk}^{(\textrm{LM})}(t) \, | \, Y_{j}^{(\textrm{LM})}(t) = 1 \right] = \mathsf{E}\left[ \mathrm{d} N_{jk}(t) \, | \, Y_{j}(t) = 1, X(s) = l \right], \label{LMNA}
\end{align}
which for Markov transitions $(jk) \not\in A$ implies that $\mathrm{d}\Lambda_{jk}^{(\textrm{LM})}(t) = \mathrm{d}\Lambda_{jk}(t)$. Likewise, if we let $u \to \mathbf{\Lambda}^{(\textrm{H})}(u)$ be the hybrid cumulative hazard matrix based on $\Lambda_{jk}$ for $(jk) \not\in A$ and on $\Lambda_{jk}^{(\textrm{LM})}$ for $(jk) \in A$, we have $\mathbf{\Lambda}^{(\textrm{LM})} = \mathbf{\Lambda}^{(\textrm{H})}$. Thus, when \eqref{refine} holds, the same holds in the partially non-Markov setting using $\mathbf{\Lambda}^{(\textrm{H})}$. Given the consistency of the LMAJ estimator, e.g. following from \cite{putter16} and \cite{Overgaard19}, consistency will also hold for the HAJ estimator.

As pointed out by \cite{TitmanPutter19}, it is clear from \eqref{LMNA} that a test of the Markov assumption for a specific transition from state $j$ to state $k$ can be obtained by comparing intensities based on disjoint landmark states. One can use a two-sample test of the hypothesis
\begin{align*}
H_{0}:\lambda_{jk}^{l_1}(s,t) = \lambda_{jk}^{l_2}(s,t) \text{ on } (s, \tau].
\end{align*}
Note that the two disjoint landmark states (denoted by superscripts) ensure independence between the samples in the estimation procedure. Typically, $l_1$ would be the landmark state of main interest and $l_2$ the set of all other remaining possible states. In the application and the simulation study we use two different tests as selection criteria for determining Markov and non-Markov behaviour for specific transitions. Denote the log-rank test statistic for the transition $j \to k$ from landmark time point $s$ by $\mathfrak{X}_{s}$. This test statistic is the basis for a test referred to as the \textbf{point test}. Since this test statistic depends on $s$, \cite{TitmanPutter19} suggest a more global test of the Markov assumption for the transition $j \to k$ based on $\mathfrak{X} := \max_{i} \mathfrak{X}_{s_{i}}$, over a suitable grid $s_{1}, \ldots, s_{m}$. We refer to this as the \textbf{grid test}. For further discussion of the use of two-sample tests to identify non-Markov transitions see Web Appendix D.
\subsection{Censoring, covariates and variance}
For simplicity, we have so far not considered censoring. Censoring is not included in the simulation experiments and is not a major issue in the practical application, although in time-to-event analysis censoring is generally the rule rather than the exception. Note also that stronger censoring assumptions are typically needed in the setting of multi-state models, compared to the regular survival setting (see e.g. \citet[p. 123]{aalen08}).
\cite{Overgaard19} derives the result \eqref{refine0}, which would extend to \eqref{refine}, based on a form of independent censoring termed ``the status independent observation assumption'', and \cite{glidden02} considers right censoring with strong independence assumptions requiring censoring to be independent of states. \cite{datta02} consider an IPCW version of the AJ estimator of state occupation probabilities under dependent censoring. This weighted estimator was empirically investigated by \cite{gunnes07} and showed reasonable results for non-Markov behaviour induced by a joint frailty at baseline and various dependent censoring regimes.

We have not considered covariate-based hazard models in our formal treatment of the HAJ estimator, but the results are expected to generalize to classical covariate models, e.g. Cox or additive hazard regression, due to the smoothness property of \eqref{functional}. See e.g. \cite{hoff19} for a discussion of covariate-based models for the transition rates in relation to the LMAJ estimator.

Regarding variance estimates for the LMAJ and HAJ estimators, we recommend bootstrapping. Many of the classical variance estimators for the AJ estimator rely on the transition rates being the compensators of the transition counting processes. This is true under the Markov assumption, but not when that assumption is relaxed. We therefore expect additional variation; a formal argument (similar to that of \cite{Beyersmann20}) is included in Web Appendix A, where it is made explicit how the non-Markov behaviour induces additional variation for the Nelson-Aalen estimator. For the first simulation experiment we study the empirical coverage of the Greenwood type estimator for confidence intervals, and in the practical application we compare such confidence intervals to bootstrapped confidence intervals. The results are included in Web Appendices B.2 and C.2. Although the results suggest only minor deviations of the Greenwood type estimator, this is not expected to hold in general.
\section{A simulation study} \label{sec4}
A central feature of our motivating data application on sickness absence and work participation is recurrent periods of sick leave. Since previous individual health history is very likely to impact future events of sick leave, we expect to see non-Markov behaviour for various transitions in our model. Furthermore, we expect a considerable amount of individual heterogeneity in a number of transitions due to, for example, variations in socioeconomic status and educational and professional backgrounds. Motivated by these problems, we will focus on a multi-state simulation experiment with recurrent events and transition intensities subject to frailty effects. The smallest relevant multi-state model for such an investigation is the illness-death model with recovery depicted in Figure \ref{ID_mod}. However, the appropriate analogy to our application will not be illness and death. Rather, we will think of state 1 as employment, state 2 as sick leave and state 3 as permanent disability.
\begin{figure}
\caption{An illness-death model with recovery, where, for example, state 1 corresponds to employment, state 2 to sick leave and state 3 to permanent disability.}
\label{ID_mod}
\end{figure}
We consider two types of experiments using non-Markov models over the time interval $[0, \tau]$, with $\tau = 1000$.
In both experiments the jump transition intensities are given by
\begin{align*}
\lambda_{jk} = V_{jk} \alpha_{jk}, \quad \text{for } jk \in \{(1,2), (1,3), (2,1), (2,3)\},
\end{align*}
where $(\alpha_{12}, \alpha_{13}, \alpha_{21}, \alpha_{23}) = (0.12, 0.03, 0.15, 0.1)$ and the $V_{jk}$ are individual frailties. The simulation experiments were performed 1000 times and each experiment had a total sample size of 1000 individuals. The ``true'' transition probabilities were calculated from a separate simulation experiment as the mean over 1000 repetitions of the LMAJ estimator applied to each simulated experiment, each also with a sample size of 1000 individuals. Note however that the LMAJ estimates here are based on landmark subsamples, which will have a somewhat lower sample size than the total sample size of 1000. The asymptotic distribution from wild bootstrapping is based on 500 bootstrap samples using standardized compensated Poisson processes.

We hope to find that the HAJ estimator is a useful intermediary between the AJ and the LMAJ estimator. To investigate this claim we consider two experiments; one focused on how large frailty effects need to be for the HAJ estimator to be preferable to the AJ estimator and one focused on when the non-Markov behaviour is significant enough for the HAJ estimator to compete with the LMAJ estimator. The two experiments are:
\begin{enumerate}
\item $V_{12} = V_{13} = V_{23} = 1$ and $V_{21}$ is gamma distributed with mean 1 and variance $\sigma^{2} \in \{0, 0.4, 1.2, 2\}$, ranging from no frailty (i.e.~Markov) to heavily right-skewed frailty;
\item $V = (V_{1}, V_{2}, V_{3}, V_{4})$ is log-normally distributed with mean 1 and covariance matrix
\begin{align*}
\Sigma \approx \begin{pmatrix} 0.80 & 0.57 & -0.35 & 0.37 \\ 0.57 & 0.42 & -0.12 & 0.19 \\ -0.35 & -0.12 & 0.96 & -0.63 \\ 0.37 & 0.19 & -0.63 & 0.45 \end{pmatrix}.
\end{align*}
Then, $W = \log(V)$ is normally distributed with $EW_{j} = -\Sigma_{jj}/2$ and $\cov(W_{j}, W_{k}) = \log(1 + \Sigma_{jk})$.
\end{enumerate}
As mentioned, the HAJ estimator can be seen as a compromise between two extremes, and the two experiments investigate to what extent such a compromise is useful. In other words, do we need the HAJ estimator at all and, if so, how much do we gain by using it? In the first experiment the question of interest is how large the frailty variance should be in order to detect a change in the transition probabilities. In particular, we are interested in the point at which the HAJ estimator starts to outperform the standard AJ estimator. The second experiment investigates how the HAJ estimator performs compared to the LMAJ estimator under clear non-Markovian conditions induced by large correlated frailties (hence the choice of $\Sigma$).

We evaluate the performance of the estimators using two performance measures. We consider point-wise (in time) empirical bias and variance estimates (see Figure \ref{bias_var}) and the mean residual squared error (MRSE) (see Figures \ref{MRSE_mod1} and \ref{MRSE_mod2}). The MRSE is measured by the $L_{2}$ distance $\Vert f - g \Vert^{2} = \int_{s}^{\tau} (f(t) - g(t))^2 \mathrm{d}t$ between estimates of $t \to P_{lk}(s,t)$ and the (simulated) true transition probability, with $\tau = 1000$. Here $s$ ranges over the landmark grid times, and the $L_{2}$-distance is calculated as a Riemann sum over all jump times. All empirical performance measures are produced based on 1000 simulations of each of the model specifications above. In both experiments the HAJ estimator is constructed using the grid test.
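For concreteness, the following Python sketch generates paths as in Experiment 1 and evaluates the $L_2$-type criterion. It is an illustrative reimplementation, not the \texttt{sim\_fun} routine of the \texttt{multistate} package; the path format matches the \texttt{haj\_probabilities} sketch given earlier, and all names are our own.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
ALPHA = {(1, 2): 0.12, (1, 3): 0.03, (2, 1): 0.15, (2, 3): 0.10}
TAU = 1000.0

def simulate_path(sigma2):
    """One individual of Experiment 1: illness-death model with recovery and
    a gamma frailty (mean 1, variance sigma2) acting on the 2 -> 1 intensity."""
    v21 = rng.gamma(1.0 / sigma2, sigma2) if sigma2 > 0 else 1.0
    t, state, path = 0.0, 1, [(0.0, 1)]
    while state != 3 and t < TAU:
        rates = {k: (v21 if k == (2, 1) else 1.0) * a
                 for k, a in ALPHA.items() if k[0] == state}
        total = sum(rates.values())
        t += rng.exponential(1.0 / total)
        if t >= TAU:
            break
        dests, probs = zip(*[(k[1], r / total) for k, r in rates.items()])
        state = rng.choice(dests, p=probs)   # cause-specific competing risks
        path.append((t, state))
    return path

def mrse(t_grid, est, truth):
    """Riemann-sum L2 distance between estimated and true probability curves,
    as used for the MRSE criterion."""
    return np.sum(np.diff(t_grid) * (est[:-1] - truth[:-1]) ** 2)

paths = [simulate_path(sigma2=1.2) for _ in range(1000)]
\end{verbatim}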
\subsection{Experiment 1}
From Figure \ref{MRSE_mod1} we see that, for non-zero frailty variance, the HAJ estimator performs at least as well as the AJ estimator and better than the LMAJ estimator. In other words, the interpretation of the HAJ estimator as an intermediary between the AJ estimator and the LMAJ estimator seems to hold. In the Markov case, i.e. $\sigma^{2} = 0$, we see almost no performance difference between the HAJ and the AJ estimators. Due to the built-in error from testing the Markov assumption, we generally expect the AJ estimator to perform better than the HAJ estimator for Markov models, in particular for models with many transitions. For $\sigma^{2} = 0.4$, the LMAJ estimator performs worse than the AJ estimator. The HAJ estimator is comparable to AJ for $\sigma^2 = 0.4$, but starts to outperform AJ for larger values of $\sigma^2$. In this experiment, HAJ always performed better than LMAJ. In other words, Figure \ref{MRSE_mod1} suggests that if a perturbation of the transition intensity is sufficiently large, so as to induce significant non-Markov behaviour for the transition probabilities, then the HAJ estimator outperforms both the AJ and the LMAJ estimators. However, as we shall see in the next experiment, this conclusion has its limitations. In the experiment all transitions are tested and results from the point tests and grid tests are included in Web Appendix B.1.1.
\begin{figure}
\caption{Mean residual squared error for the transition probabilities from state $j$ to state $k$ as a function of landmark time points. All numbers are based on 1000 samples where each sample has a size of 1000 individuals. The landmark grid is $\{6, 9, 12, 14, 17, 20, 22, 25, 28, 30\}$.}
\label{MRSE_mod1}
\end{figure}
Regarding the bias-variance trade-off, we see from Figure \ref{bias_var} that the AJ estimator overestimates the transition probability $P_{21}(s, t)$, whereas the LMAJ and the HAJ estimators are close to the true transition probability. We also see that the HAJ estimator has smaller variance than the LMAJ estimator. This is exactly what we would expect from the HAJ estimator. Through the selection method it accounts for the partly non-Markov and partly Markov behaviour, resulting in smaller bias than the AJ estimator and less variance than the LMAJ estimator.
\begin{figure}
\caption{Bias and variance estimates for the transition probability from state 2 to state 1 based on the AJ, HAJ and LMAJ estimators, respectively. All estimates are computed from landmark time $s = 17$. Mean bias and variance estimates are based on 1000 samples, where each sample has a size of 1000 individuals.}
\label{bias_var}
\end{figure}
\subsection{Experiment 2}
From Figure \ref{MRSE_mod2} we see that for the estimated transition probabilities from state 2 to 1 and from state 2 to 3 the HAJ estimator performs slightly worse than the LMAJ estimator. This is expected when the non-Markov behaviour is strong enough, simply because the selection procedure for the HAJ estimator will make the wrong choice in some percentage of cases based on the significance level. In this way the significance level of the test used in the selection procedure for the HAJ estimator becomes a tuning parameter which could be optimised. The optimal choice is model dependent, and depends on the sample size and on how sensitive the transition probabilities are to changes in the hazards.
In the experiment all transitions are tested and results from the point tests and grid tests are included in Web Appendix B.1.2.
\begin{figure}
\caption{Mean residual squared error for the transition probability estimates from state $j$ to state $k$. The landmark grid is $s = 1, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30$.}
\label{MRSE_mod2}
\end{figure}
\section{An application to Norwegian registry data on sick leave, disability and work participation} \label{sec5}
To illustrate the HAJ estimator and compare it to LMAJ and AJ, we consider a multi-state model with five states related to work participation. The proposed model is shown in Figure \ref{fig:msm1} and consists of the following states: (1) work, (2) unemployment, (3) sick leave, (4) education (above high school) and (5) disability, where disability is an absorbing state. Individual multi-state histories through these states are constructed using data from various Norwegian national registries containing information on employment, education and welfare benefits. For more details on the data material and source registries, see \cite{hoff19}. In the dataset, information is available for the period 1992 -- 2011 for all Norwegian males born between 1971 and 1976 (n = 184 951). Additionally, several individual covariates, on socioeconomic background, health, acquired education levels and results from military conscript examination, were available. We included individuals in the study from the 1st of July the year they turned 21 (1992-1997) and observed them for 14.5 years, until the 31st of December (2006-2011). The time scale used in the model is days since inclusion, so that transition times are matched on age and season, but not year.
\begin{figure}
\caption{A multi-state model for work, education and health-related absence from work.}
\label{fig:msm1}
\end{figure}
One should realise that there are several potential violations of the Markov assumption for the model represented by Figure \ref{fig:msm1}. For example, it is plausible that individuals who have been working for a longer period are in more stable positions, further prolonging their stay in employment. Another example is individuals on long-term sick leave due to serious diseases, who may have lower probabilities of returning to work than people with minor illnesses. There are also laws and regulations that limit the possible duration of stays in states that are based on welfare benefits. Based on these circumstances we consider two examples comparing the AJ, LMAJ and HAJ estimators. In particular, we look at transitions from sick leave at time $t = 100$ (100 days since inclusion) and transitions from unemployment at time $t = 3000$ (3000 days since inclusion).
\subsection{Example 1: Transitions from sick leave at $t = 100$}
In our first example, we calculate transition probabilities from being on sick leave at day 100. In this particular analysis, the full dataset is reduced from 184 951 individuals to 23 288 by fixing a number of covariates. Specifically, we consider high school completers attending general education (non-vocational) who scored between 7 and 9 (i.e. high scores) on the cognitive test during military conscript examinations (scores range from 1 to 9, where 9 is considered the best). The landmark subset then consists of only 72 individuals. We start by looking at cumulative transition intensities in the landmark sample and comparing them with cumulative transition intensities calculated from the full dataset.
The various intensities are shown in Figure \ref{cumhaz0}. Differences between the curves from the two data samples indicate a violation of the Markov assumption. Inspection of Figure \ref{cumhaz0} suggests that the transitions exhibiting similar intensities in the landmark sample and the full sample are $1 \rightarrow 2$, $4 \rightarrow 1$ and $4 \rightarrow 2$. In addition to visual inspection we can test for non-Markov behaviour using the point test described in Section \ref{sec3}. Results from this test, found in Web Table C.1, imply that transitions $1 \rightarrow 2$, $2 \rightarrow 5$, $4 \rightarrow 1$ and $4 \rightarrow 2$ are not significantly non-Markov. Based on the above results, the HAJ estimator will here utilize all available data for the transitions $1 \rightarrow 2$, $2 \rightarrow 5$, $4 \rightarrow 1$ and $4 \rightarrow 2$, and only the landmark data for the other transitions. We illustrate the resulting transition probabilities from the landmark state (sick leave) into work and into education in Figure \ref{from3}. Estimated transition probabilities to all states based on the HAJ and LMAJ estimators are found in Web Appendix C.2.
\begin{figure}
\caption{Cumulative transition intensities starting at landmark time-point $s = 100$ days. Solid lines are Nelson-Aalen estimates based on the reduced cohort ($n=23288$), while dotted lines are Nelson-Aalen estimates based on the landmark sample of individuals on sick leave ($n=72$).}
\label{cumhaz0}
\end{figure}
\begin{figure}
\caption{Estimated transition probabilities from sick leave to work ($3 \to 1$) and from sick leave to education ($3 \to 4$). Dotted lines are 95$\%$ bootstrap (1000 samples) confidence intervals. The landmark sample consists of 72 individuals on sick leave at day 100. The full sample includes 23288 individuals.}
\label{from3}
\end{figure}
In the current example, the LMAJ and HAJ estimates are in close agreement and differ substantially from the AJ estimates. The AJ transition probability estimates are far greater than the estimates produced by the LMAJ and HAJ estimators, indicating that the AJ estimates cannot be trusted. As for precision, Figure \ref{from3} indicates that the HAJ estimator has slightly higher precision than the LMAJ estimator. The AJ estimates naturally come with smaller confidence intervals due to the much larger sample size going into the estimation. Confidence intervals for the two landmark estimators are based on the bootstrap, while for the AJ estimator, Greenwood plug-in estimates of the standard errors were used. In Web Appendix C.2, we also compare plug-in estimates of standard errors with bootstrap estimates for the HAJ estimator and find that they are very close, with the bootstrap standard errors being slightly larger.
\subsection{Example 2: Transitions from unemployment at $t = 3000$}
For the next example we still use data on high school completers of general education, but now consider individuals with cognitive scores between 4 and 6 (medium scores) and whose parents completed high school as their highest formal education. This amounts to a total of 10 451 individuals. Day 3000 is considered the landmark time point, and the landmark state is now unemployment. The landmark subset consists of 463 individuals. Results of log-rank tests for identifying Markov and non-Markov transitions can be found in Web Table C.2. The results indicate that transitions $2 \rightarrow 5$, $3 \rightarrow 2$, $3 \rightarrow 4$, $3 \rightarrow 5$, $4 \rightarrow 1$ and $4 \rightarrow 3$ are Markov.
Thus, all available data are used for these transitions when constructing the hybrid estimator. Estimates of transition probabilities from unemployment to education and to disability are presented in Figure \ref{from2}. Compared to our previous example, the AJ estimates are less misleading relative to the estimates from the two landmark methods, but they do fail to capture the development in the first third of the time period. In terms of precision, the HAJ estimator seems to give higher precision than the LMAJ estimator for both of the showcased transitions, as indicated by both the bootstrap and the Greenwood type plug-in estimates of the standard errors.
\begin{figure}
\caption{Estimated transition probabilities from unemployment to education ($2 \to 4$) and from unemployment to disability ($2 \to 5$). Dotted lines are 95$\%$ confidence intervals: model based for AJ and bootstrap (1000 samples) based for HAJ and LMAJ. The landmark sample consists of 463 individuals unemployed at day 3000. The full sample includes 10451 individuals.}
\label{from2}
\end{figure}
\section{Discussion} \label{sec6}
The idea behind the HAJ estimator is to utilize the interpretation of transition probabilities as a functional of transition-specific rates and to provide a framework for analyzing how specific transitions affect the estimation of transition probabilities. This sensitivity analysis point of view is useful when modelling non-Markov multi-state data. First, it frames the problem of non-Markov behaviour as a more familiar problem of bias-variance trade-off by considering the HAJ estimator as a compromise between the AJ (low variance) and the LMAJ (low bias) estimators. As a rule of thumb, one would generally expect the HAJ estimator to have higher bias (since one allows for Markov behaviour to be assumed in specific transitions) and lower variance (due to the increased sample size) than the LMAJ estimator. The opposite, i.e. higher variance and lower bias, is generally expected when compared to the AJ estimator. Based on the simulation experiments it seems reasonable to believe that the lower variance comes at close to zero cost in bias. There are of course exceptions to this rule, and one should be aware that, depending on the data generating mechanism, the HAJ estimator may in principle be superior (in terms of bias or variance) to both or neither of the alternatives (AJ and LMAJ). Second, with the HAJ estimator, one can think of the problem of non-Markov behaviour as a transition-specific modelling choice, where certain transitions are more sensitive to non-Markov behaviour than others. Such considerations suggest a more comprehensive exploratory analysis of where, in a specific model, non-Markov behaviour can be problematic and where it might be negligible. In our construction of the HAJ estimator we focused on test statistics, but other tools such as judgement based on expert knowledge and visual inspection of plotted cumulative hazards or transition probabilities can also be considered as selection mechanisms.

The choice between the HAJ, AJ and LMAJ estimators depends on what kind of non-Markov behaviour one is dealing with and how pronounced it is in the data. \cite{gunnes07} investigated the Datta-Satten estimator of state occupation probabilities in non-Markov models and reached, to some extent, a similar conclusion: the benefit of using an estimator which can handle non-Markov behaviour but is prone to bias under Markov regimes depends heavily on how much and why the model in question deviates from the Markov property.
In large cohort studies one often has to assume that heterogeneity will be a problem, simply due to the complexity of the underlying data generating mechanisms. Starting from the Markov assumption therefore seems problematic, even in cases where the non-Markov behaviour turns out to be negligible. Furthermore, a trivial but important advantage of the HAJ estimator over the LMAJ estimator is the increase in sample size. Besides a reduction in variance, this might in practice mean the difference between a feasible and an infeasible estimator. The HAJ estimator is therefore relevant for many applications of non-Markov and partially non-Markov multi-state models, in particular for studies of limited sample size, where the LMAJ estimator is not a viable option.
\section*{Supporting Information} \label{sec10}
The \texttt{R} functions producing the simulation experiments (see \texttt{sim\_fun}), the point test and the grid test (see \texttt{wbMarkov}) are part of the \texttt{R}-package \texttt{multistate} available on GitHub: \url{https://github.com/niklasmaltzahn/multistate}. The package \texttt{multistate} has a number of additional convenience functions useful for landmarking and hybrid estimation. Furthermore, the data output of \texttt{sim\_fun} is easily formatted for fitting multi-state models using the \texttt{mstate} package \citep{Putter11}. Additional supporting information may be found online in the Supporting Information section at the end of the article.
\vspace*{-8pt}
\includepdf[pages=-]{supplementary.pdf}
\end{document}
\begin{document}
\title{(Un)conditional consensus emergence under feedback controls}
\centerline{\scshape Mattia Bongini and Massimo Fornasier }
{\footnotesize
\centerline{Technische Universit\"at M\"unchen, Fakult\"at Mathematik}
\centerline{ Boltzmannstra\ss e 3, D-85748 Garching, Germany}
}
\centerline{\scshape Dante Kalise }
{\footnotesize
\centerline{ Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences }
\centerline{Altenbergerstra\ss e 69, A-4040 Linz, Austria}
}
\begin{abstract}
We study the problem of consensus emergence in multi-agent systems via external feedback controllers. We consider a set of agents interacting with dynamics given by a Cucker-Smale type of model, and study its consensus stabilization by means of centralized and decentralized control configurations. We present a characterization of consensus emergence for systems with different feedback structures, such as leader-based configurations, perturbed information feedback, and feedback computed upon spatially confined information. We characterize consensus emergence for this latter design as a parameter-dependent transition regime between self-regulation and centralized feedback stabilization. Numerical experiments illustrate the different features of the proposed designs.
\end{abstract}
\section{Introduction}\label{sec:first_results}
In recent years, the study of multi-agent systems has become a topic of increasing interest in mathematics, biology, sociology, and engineering, among many other disciplines. Building upon the seminal articles of Reynolds \cite{Reynolds87}, Vicsek et al. \cite{vics} and, more recently, Cucker and Smale \cite{CS}, a substantial amount of mathematical work has addressed, from both analytical and computational perspectives, different phenomena arising in this class of systems. Multi-agent systems are usually modeled as a large set of particles interacting under simple binary rules, such as attraction, repulsion, and alignment forces, which can depend either metrically or topologically on the agent configuration; the wide applicability of this setting ranges from modeling the collective behavior of bird flocks \cite{Reynolds87}, to the study of data transmission over communication networks \cite{ignaciuk2013congestion}, including the description of opinion dynamics in human societies \cite{krause02}, and the formation control of platoon systems \cite{murray07,Peters}. At a microscopic level, multi-agent systems are often represented by a large-scale set of differential (or difference) equations; in this context, the study of asymptotic behaviors, pattern formation, self-organization phenomena, and their basins of attraction is of interest. To make matters concrete, in this article we consider a set of $N$ $d$-dimensional agents interacting under a controlled Cucker-Smale model of the form
\begin{eqnarray}
\left\{
\begin{aligned}
\begin{split}
\label{eq:cuckersmale}
\dot{\bm{x}}_{i} & = \bm{v}_{i} \\
\dot{\bm{v}}_{i} & = \frac{1}{N} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right) +\bm{u}_i, \qquad i = 1, \ldots, N
\end{split}
\end{aligned}
\right.
\end{eqnarray}
where the pair $(\bm{x}_i,\bm{v}_i)\in\mathbb{R}^{2d}$ represents the position and velocity of each agent, $\bm{u}_i$ is an external controller to be suitably defined later, and $a:[0, +\infty) \rightarrow [0, +\infty)$ is a bounded, non increasing, continuous function, whereas $r_{ij}$ stands for the Euclidean distance $\vnorm{\bm{x}_i - \bm{x}_j}$.
This kind of model was introduced by Cucker and Smale in \cite{CS,cusm07}, for specific choices of the interaction function $a$, namely
\begin{equation}\label{eq:kernel}
a(r_{ij})=\frac{1}{(1+r_{ij}^2)^{\delta}} \quad \text{for every} \quad \delta \in [0,+\infty),
\end{equation}
later generalized to arbitrary positive interaction functions in \cite{HaHaKim}. Throughout our work we stick to this more general approach; we stress that, henceforth, every result we obtain applies to \emph{any} choice of the interaction kernel $a$ that is a positive, continuous, bounded and non increasing function.

For a group of agents evolving according to \eqref{eq:cuckersmale}, we shall be concerned with studying the asymptotic convergence of the velocity field to a common vector, a phenomenon often referred to as consensus. It is clear that if the group converges to consensus, the consensus velocity coincides with the mean velocity of the group
\begin{equation}
\overline{\bm{v}} \stackrel{\Delta}{=} \frac{1}{N} \sum^N_{i = 1} \bm{v}_i.
\end{equation}
\begin{definition}[Consensus] \label{def:consensus}
We say that a solution $(\bm{x}(t),\bm{v}(t))$ of system \eqref{eq:cuckersmale} \emph{tends to consensus} if the consensus parameter vectors $\bm{v}_i$ tend to the mean $\overline{\bm{v}}$, i.e.,
\begin{align*}
\lim_{t \rightarrow + \infty} \vnorm{\bm{v}_i(t) - \overline{\bm{v}}(t)} = 0 \quad \text{for every } i = 1,\ldots,N.
\end{align*}
\end{definition}
To set our work in perspective, let us begin by referring to the available results concerning consensus emergence for the uncontrolled system \eqref{eq:cuckersmale}, i.e. when $\bm{u}_i\equiv 0$. In \cite{cusm07} a first result related to the parameter $\delta$ in \eqref{eq:kernel} was presented; it asserts that for $\delta\leq 1/2$ the system tends asymptotically to consensus, independently of its initial configuration. For $\delta>1/2$, consensus emergence depends on the cohesion of the initial configuration. A precise characterization of this situation was obtained in \cite[Theorem 3.1]{HaHaKim}, where the authors give a sufficient condition depending on the initial configuration and the parameter $\delta$. Further results concerning variations of the original system and consensus emergence have been presented in \cite{CuGu}, where the authors study the effect of adding agents with preferred navigation directions (stubborn agents), and in \cite{hahakim10rayleigh}, where a Rayleigh-type damping is added to the dynamics.

In general, the aforementioned results can be interpreted, in a wider framework, as stability results for nonlinear systems around a consensus manifold. A natural extension is then to consider the case when a controller is included as in \eqref{eq:cuckersmale}, and consensus can be achieved not only by internal self-regulation, but also by means of an external action. In \cite{CFPT}, the authors consider consensus stabilization for the Cucker-Smale system by means of both feedback-based controllers and open-loop, sparse optimal control. In particular, in \cite[Proposition 2]{CFPT} it is shown that, with a controller of the form
\begin{equation}\label{eq:feed}
\bm{u}_i=-(\bm{v}_i-\bar{\bm{v}})\,,
\end{equation}
consensus emergence can be guaranteed for any initial configuration and any value of $\delta$.
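For illustration, a minimal Python sketch of the dynamics \eqref{eq:cuckersmale} with the kernel \eqref{eq:kernel} is given below; it uses an explicit Euler discretization and, for $\gamma > 0$, adds the feedback $\bm{u}_i = \gamma(\overline{\bm{v}} - \bm{v}_i)$, of which \eqref{eq:feed} is the case $\gamma = 1$. The step size, the number of agents and all function names are arbitrary choices made only for this sketch.
\begin{verbatim}
import numpy as np

def cs_step(x, v, dt, delta=1.0, gamma=0.0):
    """One explicit Euler step of the (controlled) Cucker-Smale system;
    gamma = 0 gives the uncontrolled dynamics."""
    N = x.shape[0]
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)   # r_ij
    a = 1.0 / (1.0 + r ** 2) ** delta                            # kernel
    align = (a[:, :, None] * (v[None, :, :] - v[:, None, :])).sum(axis=1) / N
    u = gamma * (v.mean(axis=0) - v)                             # feedback
    return x + dt * v, v + dt * (align + u)

def V(v):
    """Velocity spread V(t) = (1 / 2N^2) sum_{i,j} |v_i - v_j|^2."""
    d = v[:, None, :] - v[None, :, :]
    return (d ** 2).sum() / (2.0 * v.shape[0] ** 2)

rng = np.random.default_rng(0)
x, v = rng.normal(size=(50, 2)), rng.normal(size=(50, 2))
for _ in range(2000):
    x, v = cs_step(x, v, dt=0.01, delta=1.0, gamma=1.0)
print(V(v))   # decays towards zero when consensus emerges
\end{verbatim}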
A natural drawback of such a controller relates to the fact that it is always active, requiring what we call \textsl{full information or centralized control}: for a single agent, the feedback computation makes use, at every time, of the total velocity field. Moreover, even if full information is available, it is possible that some perturbation is present. A much more realistic setting relates to what is known in the literature as \textsl{decentralized control} \cite{Bakule}; in our context, it means a control design where every agent acts based on partial information, such as, for instance, the agents within a certain metric or topological neighborhood \cite{Ballerini,olfati}.

The aim of this article is to make a contribution along these directions. Starting from a control of the form \eqref{eq:feed}, we will study variations of the feedback structure for consensus stabilization, by covering different settings such as feedback under perturbed information, leader-following feedback, and decentralized, local feedback depending on a metric neighborhood of the agents. In every case, we present results concerning sufficient conditions for consensus emergence. In general, these results represent a transition between consensus emergence conditions for the uncontrolled Cucker-Smale system, and the feedback stabilization result under full information presented in \cite[Proposition 2]{CFPT}.

The paper is structured as follows. In Section 2, we present some preliminary definitions and results concerning consensus emergence in the Cucker-Smale model. In Sections 3 and 4, we introduce consensus stabilization results with feedback controllers based on perturbed information. Section 5 addresses the problem of consensus emergence under local feedback. Finally, in Section 6, numerical experiments are presented, aimed at illustrating the main features of the proposed designs; in particular, we numerically investigate the sharpness of the already existing and new estimates for local feedback stabilization.
\section{Preliminaries}
In order to start studying system \eqref{eq:cuckersmale}, we introduce the following notation and terminology: given a vector $\bm{a} = (\bm{a}_1, \ldots, \bm{a}_N) \in (\mathbb{R}^d)^N$, the symbol
\begin{align*}
\bm{a}^{\perp}_i \stackrel{\Delta}{=} \bm{a}_i - \overline{\bm{a}}
\end{align*}
shall stand for the deviation of the vector $\bm{a}_i$ with respect to the mean $\overline{\bm{a}}$. Note that
\begin{equation*}
\sum^N_{i = 1} \bm{a}^{\perp}_i = \sum^N_{i = 1} \bm{a}_i - N \overline{\bm{a}} = N \overline{\bm{a}} - N \overline{\bm{a}} = 0,
\end{equation*}
and thus, for any vector $\bm{c} \in \mathbb{R}^d$, denoting by $\scalarp{\cdot,\cdot}$ the usual scalar product on $\mathbb{R}^d$, it holds
\begin{align}
\label{eq:vertequalzero}
\sum^N_{i = 1} \scalarp{\bm{a}^{\perp}_i, \bm{c}} = \scalarp{\sum^N_{i = 1}\bm{a}^{\perp}_i, \bm{c}} = 0.
\end{align}
The following calculation shall often be exploited: given an $N\times N$ matrix $\omega$ which is symmetric and has positive entries, i.e., $\omega_{ij} = \omega_{ji}$ and $\omega_{ij} > 0$, for any $\bm{a} \in (\mathbb{R}^d)^N$ we have
\begin{equation} \label{eq:maintrick}
\begin{split}
\frac{1}{N^2} \sum^N_{i = 1} \sum^N_{j = 1} \omega_{ij} \scalarp{\bm{a}_j - \bm{a}_i, \bm{a}_i} & = \frac{1}{2 N^2} \Bigg( \sum^N_{i = 1} \sum^N_{j = 1} \omega_{ij} \scalarp{\bm{a}_j - \bm{a}_i, \bm{a}_i} \\
& \quad + \sum^N_{j = 1} \sum^N_{i = 1} \omega_{ji} \scalarp{\bm{a}_i - \bm{a}_j, \bm{a}_j} \Bigg) \\
& = -\frac{1}{2 N^2}\sum^N_{j = 1} \sum^N_{i = 1} \omega_{ij} \vnorm{\bm{a}_i - \bm{a}_j}^2 \\
& \leq - \min_{i,j} \omega_{ij} \frac{1}{N} \sum^N_{i = 1}\vnorm{\bm{a}_i^{\perp}}^2. \\
\end{split}
\end{equation}
In order to characterize consensus emergence in terms of the solutions $(\bm{x}(t),\bm{v}(t))$ of the system \eqref{eq:cuckersmale}, we define the following quantities
\[
X(t)=\frac{1}{2N^2}\sum_{i,j=1}^N\vnorm{\bm{x}_i(t)-\bm{x}_j(t)}^2\,,\quad\text{and}\quad V(t)=\frac{1}{2N^2}\sum_{i,j=1}^N\vnorm{\bm{v}_i(t)-\bm{v}_j(t)}^2\,,
\]
which provide a description of consensus in terms of the energy of the system, measuring the spread of the configuration both in positions and in velocities. A first result establishing a link between consensus in the sense of Definition \ref{def:consensus} and the quantities introduced above is stated as follows.
\begin{proposition}
The following are equivalent:
\begin{enumerate}
\item $\lim_{t \rightarrow + \infty} \vnorm{\bm{v}_i(t) - \overline{\bm{v}}(t)} = 0$ for every $i = 1,\ldots,N$,
\item $\lim_{t \rightarrow + \infty} \bm{v}_i^{\perp} = 0$ for every $i = 1,\ldots,N$,
\item $\lim_{t \rightarrow + \infty} V(t) = 0$.
\end{enumerate}
\end{proposition}
It is thus natural to prove a sufficiently strong decay of the functional $V(t)$ in order to establish that a solution of \eqref{eq:cuckersmale} tends to consensus. On the other hand, it is well known that not every solution of system \eqref{eq:cuckersmale} tends to consensus in the sense of Definition \ref{def:consensus}; in this context, a relevant result will be the characterization of consensus emergence introduced in \cite[Theorem 3.1]{HaHaKim}, which we recall in a concise version:
\begin{theorem}\label{thm:hhk}
Let $(\bm{x}_0, \bm{v}_0) \in (\mathbb{R}^d)^N \times (\mathbb{R}^d)^N$ and let $X_0$ and $V_0$ be the values of $X$ and $V$ corresponding to the initial datum $(\bm{x}_0, \bm{v}_0)$. If the following inequality is satisfied:
\begin{align} \label{eq:HaHaKim}
\int^{+\infty}_{\sqrt{X_0}} a(\sqrt{2N} r) \ dr \geq \sqrt{V_0},
\end{align}
then the solution of \eqref{eq:cuckersmale} with initial datum $(\bm{x}_0, \bm{v}_0)$ tends to consensus.
\end{theorem}
In general, we can induce consensus in the system by adding a feedback term measuring the deviation from the group velocity, leading to
\begin{eqnarray}
\left\{
\begin{aligned}
\begin{split}
\label{eq:cuckersmale_uniform}
\dot{\bm{x}}_{i} & = \bm{v}_{i} \\
\dot{\bm{v}}_{i} & = \frac{1}{N} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right) + \gamma(\overline{\bm{v}} - \bm{v}_i),
\end{split}
\end{aligned}
\right.
\end{eqnarray}
where $\gamma$ is a prescribed nonnegative constant, modeling the strength of the additional alignment term.
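The sufficient condition \eqref{eq:HaHaKim} can easily be checked numerically for a given initial datum. The following Python sketch does so for the kernel \eqref{eq:kernel}; it is only an illustration, and the helper names are ours. For $\delta \leq 1/2$ the integral diverges, so the condition holds for every initial configuration, in accordance with the unconditional result recalled above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def hhk_sufficient(x0, v0, delta):
    """Check int_{sqrt(X0)}^infty a(sqrt(2N) r) dr >= sqrt(V0) for the kernel
    a(r) = (1 + r^2)^(-delta), with X0, V0 the spreads of the initial datum."""
    N = x0.shape[0]
    spread = lambda z: ((z[:, None, :] - z[None, :, :]) ** 2).sum() / (2 * N ** 2)
    X0, V0 = spread(x0), spread(v0)
    if delta <= 0.5:
        return True                      # the integral diverges
    integrand = lambda r: (1.0 + 2.0 * N * r ** 2) ** (-delta)
    integral, _ = quad(integrand, np.sqrt(X0), np.inf)
    return integral >= np.sqrt(V0)

rng = np.random.default_rng(0)
x0, v0 = rng.normal(size=(50, 2)), 0.1 * rng.normal(size=(50, 2))
print(hhk_sufficient(x0, v0, delta=1.0))
\end{verbatim}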
As \eqref{eq:cuckersmale_uniform} can be rewritten as \eqref{eq:cuckersmale} with the function $a(r_{ij}) + \gamma$ replacing $a(r_{ij})$ (indeed, $\gamma(\overline{\bm{v}} - \bm{v}_i) = \frac{\gamma}{N}\sum_{j=1}^N(\bm{v}_j - \bm{v}_i)$), and since the integral in \eqref{eq:HaHaKim} then diverges, by Theorem \ref{thm:hhk} each solution of \eqref{eq:cuckersmale_uniform} tends to consensus. As pointed out in \cite{CFPT}, the main drawback of this approach, however, is that it requires each agent to have perfect information on the whole system at every instant, a condition which is seldom met in real-life situations; it is perhaps more realistic to assume that each agent computes an approximate mean velocity vector $\overline{\bm{v}}_i$, instead of the true mean velocity $\overline{\bm{v}}$ of the group. Therefore, we consider the model
\begin{eqnarray}
\left\{
\begin{aligned}
\begin{split}
\label{eq:cuckersmale_local}
\dot{\bm{x}}_{i} & = \bm{v}_{i} \\
\dot{\bm{v}}_{i} & = \frac{1}{N} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right) + \gamma(\overline{\bm{v}}_i - \bm{v}_i).
\end{split}
\end{aligned}
\right.
\end{eqnarray}
In studying under which conditions the solutions of system \eqref{eq:cuckersmale_local} tend to consensus, it is often desirable to express the approximated feedback as the combination of a \textsl{true information feedback} term, i.e., a feedback based on the real average $\overline{\bm{v}}$, and a perturbation term. We rewrite the system \eqref{eq:cuckersmale_local} in the following form:
\begin{eqnarray}
\left\{
\begin{aligned}
\begin{split}
\label{eq:cuckersmale_perturbed}
\dot{\bm{x}}_{i} & = \bm{v}_{i} \\
\dot{\bm{v}}_{i} & = \frac{1}{N} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right) + \alpha(\overline{\bm{v}} - \bm{v}_i) + \beta \Delta_i,
\end{split}
\end{aligned}
\right.
\end{eqnarray}
where $\alpha = \alpha(t)$ and $\beta = \beta(t)$ are two nonnegative, piecewise continuous functions, and $\Delta_i$ is a time-dependent, not necessarily continuous, deviation acting on agent $i$; therefore, solutions in this context have to be understood as weak solutions in the Carath\'eodory sense \cite{filipov} (we also refer the reader to \cite[Appendix]{FS13} for specific details in the context of multi-agent systems). System \eqref{eq:cuckersmale_perturbed} encompasses all the previously introduced models, as is readily seen:
\begin{itemize}
\item if $\alpha = \beta \equiv \gamma$ and $\Delta_i = \bm{v}_i - \overline{\bm{v}}$, or $\alpha = \beta = 0$, then we recover system \eqref{eq:cuckersmale},
\item the choices $\alpha = \gamma$, $\Delta_i = 0$ (or equivalently $\beta = 0$) yield system \eqref{eq:cuckersmale_uniform},
\item if $\alpha = \beta \equiv \gamma$ and $\Delta_i = \overline{\bm{v}}_i - \overline{\bm{v}}$ we obtain system \eqref{eq:cuckersmale_local}.
\end{itemize}
The introduction of the perturbation term in system \eqref{eq:cuckersmale_perturbed} may deeply modify the nature of the original model: for instance, an immediate consequence is that the mean velocity of the system is, in general, no longer a conserved quantity (as it is for system \eqref{eq:cuckersmale}).
\begin{proposition} \label{prop:derivative_mean}
For system \eqref{eq:cuckersmale_perturbed}, with perturbations given by the vector $\Delta = (\Delta_1, \ldots, \Delta_N)$, we have
\begin{align*}
\frac{d}{dt}\overline{\bm{v}} = \beta \overline{\Delta}.
\end{align*}
\end{proposition}
\begin{proof}
\begin{equation*}
\begin{split}
\frac{d}{dt} \overline{\bm{v}} & = \frac{1}{N} \sum^N_{i = 1} \frac{d}{dt} \bm{v}_i \\
& = \frac{1}{N} \sum^N_{i = 1} \left(\frac{1}{N} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right) + \alpha(\overline{\bm{v}} - \bm{v}_i) + \beta \Delta_i \right) \\
& = \underbrace{\frac{1}{N^2} \sum^N_{i = 1} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right)}_{= 0 \text{, by symmetry}} + \underbrace{\frac{\alpha}{N} \sum^N_{i = 1} \bm{v}^{\perp}_i}_{= 0} + \frac{\beta}{N} \sum^N_{i = 1} \Delta_i \\
& = \beta \overline{\Delta}.
\end{split}
\end{equation*}
\end{proof}
\begin{remark} \label{rem:derivative_mean}
As we have already pointed out, it is possible to recover system \eqref{eq:cuckersmale} by setting $\Delta_i = \bm{v}_i - \overline{\bm{v}}$, whereas we can recover system \eqref{eq:cuckersmale_uniform} with the choice $\Delta_i = 0$. Note that in both cases we have $\overline{\Delta} = 0$, and therefore the mean velocity is a conserved quantity both in system \eqref{eq:cuckersmale} and in system \eqref{eq:cuckersmale_uniform}. We also highlight the fact that $\overline{\bm{v}}$ is not conserved even when $\Delta_i(t) = \bm{c}$ for every $t \geq 0$ and every $1 \leq i \leq N$, with $\bm{c} \not = 0$, i.e., when each agent makes the same error in evaluating the mean velocity.
\end{remark}
\section{General results for consensus stabilization under perturbed information}
As already pointed out in the previous section, the main strategy for studying under which assumptions a solution of system \eqref{eq:cuckersmale} tends to consensus is to obtain an estimate of the decay of the functional $V(t)$. We follow a similar approach in order to study consensus emergence for system \eqref{eq:cuckersmale_perturbed}. We begin by proving the following lemma:
\begin{lemma} \label{lem:bigv_growth}
Let $(\bm{x}(t), \bm{v}(t))$ be a solution of system \eqref{eq:cuckersmale_perturbed}. For every $t \geq 0$, it holds
\begin{align} \label{eq:maintool}
\frac{d}{dt} V(t) \leq - 2 a\left(\sqrt{2NX(t)}\right) V(t) -2 \alpha V(t) + \frac{2 \beta}{N} \sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)}\,.
\end{align}
\end{lemma}
\begin{proof}
Observing that $V(t) = \frac{1}{N} \sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2$ (an identity which follows by expanding the squares in the definition of $V$), and differentiating for every $t \geq 0$, we have
\begin{eqnarray*}
\frac{d}{dt} V(t) & = & \frac{d}{dt} \frac{1}{N} \sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2 \\
& = & \frac{2}{N} \sum^N_{i = 1} \scalarp{\frac{d}{dt} \bm{v}^{\perp}_i(t), \bm{v}^{\perp}_i(t)} \\
& = & \frac{2}{N} \sum^N_{i = 1} \scalarp{\frac{d}{dt} \bm{v}_i(t), \bm{v}^{\perp}_i(t)} - \frac{2}{N} \sum^N_{i = 1} \scalarp{\frac{d}{dt} \overline{\bm{v}}(t), \bm{v}^{\perp}_i(t)},
\end{eqnarray*}
which, inserting the expression for $\frac{d}{dt} \bm{v}_i(t)$, yields
\begin{eqnarray} \label{eq:derivative_bigv}
\begin{split}
\frac{d}{dt} V(t) & = \underbrace{\frac{2}{N^2} \sum^N_{i = 1} \sum^N_{j = 1} a\left(r_{ij}\right)\scalarp{\bm{v}_{j}(t)-\bm{v}_{i}(t), \bm{v}^{\perp}_i(t)}}_{(i)} + \\
& \quad + \frac{2 \alpha}{N} \sum^N_{i = 1} \scalarp{\overline{\bm{v}}(t) - \bm{v}_i(t), \bm{v}^{\perp}_i(t)} + \frac{2 \beta}{N} \sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} \\
& \quad - \frac{2}{N} \sum^N_{i = 1} \scalarp{\frac{d}{dt} \overline{\bm{v}}(t), \bm{v}^{\perp}_i(t)}.
\end{split}
\end{eqnarray}
Since
\begin{eqnarray} \label{eq:distance_est}
\begin{split}
r_{ij} = \vnorm{\bm{x}_i - \bm{x}_j} & = \vnorm{\bm{x}_i^{\perp} - \bm{x}_j^{\perp}} \\
& \leq \vnorm{\bm{x}_i^{\perp}} + \vnorm{\bm{x}_j^{\perp}} \\
& \leq \sqrt{2} \left( \sum^N_{k = 1} \vnorm{\bm{x}_k^{\perp}}^2\right)^{\frac{1}{2}} \\
& \leq \sqrt{2 N X},
\end{split}
\end{eqnarray}
the fact that $a$ is non increasing and inequality \eqref{eq:maintrick} yield
\begin{align} \label{eq:lemma1_CFPT}
(i) \leq - 2 a\left(\sqrt{2NX(t)}\right) V(t).
\end{align}
Using Proposition \ref{prop:derivative_mean}, we can rewrite the remaining term as
\begin{equation} \label{eq:controlterm}
\begin{split}
\frac{d}{dt} V(t) - (i) & = -\frac{2 \alpha}{N} \sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2 + \frac{2 \beta}{N} \sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} - \underbrace{\frac{2\beta}{N} \sum^N_{i = 1} \scalarp{\overline{\Delta}(t), \bm{v}^{\perp}_i(t)}}_{= 0}\\
& = -2 \alpha V(t) + \frac{2 \beta}{N} \sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)}.
\end{split}
\end{equation}
Applying \eqref{eq:lemma1_CFPT} and \eqref{eq:controlterm} in \eqref{eq:derivative_bigv} concludes the proof.
\end{proof}
As a direct consequence we obtain the following theorem.
\begin{theorem} \label{th:perpbound_convergence}
Let $(\bm{x}(t), \bm{v}(t))$ be a solution of system \eqref{eq:cuckersmale_perturbed}, and suppose that there exists a $T \geq 0$ such that for every $t \geq T$,
\begin{align} \label{eq:smallerror}
\sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} \leq \phi(t) \sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2
\end{align}
for some function $\phi:[T,+\infty) \rightarrow [0,\ell]$, where $\ell<\frac{\alpha}{\beta}$. Then $(\bm{x}(t), \bm{v}(t))$ tends to consensus.
\end{theorem}
\begin{proof}
Under the assumption \eqref{eq:smallerror}, and discarding the nonpositive first term in \eqref{eq:maintool}, for every $t \geq T$ we obtain
\begin{equation*}
\begin{split}
\frac{d}{dt}V(t) & \leq -2 \alpha V(t) + \frac{2 \beta}{N} \sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} \\
& \leq -2 \alpha V(t) + \frac{2 \beta}{N} \phi(t) \sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2 \\
& = -2 \alpha V(t) + 2 \beta \phi(t) V(t) \\
& \leq 2 \beta \left(\ell - \frac{\alpha}{\beta}\right) V(t).
\end{split}
\end{equation*}
Integrating between $T$ and $t$ (where $t \geq T$) we get
\begin{equation*}
V(t) \leq V(T) e^{2 \beta \left(\ell - \frac{\alpha}{\beta}\right)(t - T)}
\end{equation*}
and as the factor $\ell - \frac{\alpha}{\beta}$ is negative, $V(t)$ approaches $0$ exponentially fast.
\end{proof}
\begin{corollary} \label{cor:noperp_convergence}
If there exists $T \geq 0$ such that $\Delta^{\perp}_i(t) = 0$ for every $t \geq T$ and for every $1 \leq i \leq N$, then any solution of system \eqref{eq:cuckersmale_perturbed} tends to consensus.
\end{corollary}
\begin{proof}
Noting that $\Delta^{\perp}_i = 0$ implies $\Delta_i = \overline{\Delta}$, we have, by \eqref{eq:vertequalzero},
\begin{align*}
\sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} = \sum^N_{i = 1} \scalarp{\overline{\Delta}, \bm{v}^{\perp}_i(t)} = 0.
\end{align*}
We apply Theorem \ref{th:perpbound_convergence} with $\phi(t) \equiv 0$ for every $t \geq T$ and obtain the result.
\end{proof}
\begin{remark}
Corollary \ref{cor:noperp_convergence} trivially implies that any solution of system \eqref{eq:cuckersmale_uniform} tends to consensus (this was already a consequence of Theorem \ref{thm:hhk}), but it has moreover a rather nontrivial implication: any solution of a system subject to a \emph{deviated} uniform control, i.e., a system like \eqref{eq:cuckersmale_perturbed} where $\Delta_i(t) = \Delta(t)$ for every $1 \leq i \leq N$ and for every $t \geq 0$, also tends to consensus, because for every $1 \leq i \leq N$ and for every $t \geq 0$ we have
\begin{align*}
\begin{split}
\Delta^{\perp}_i(t) = \Delta_i(t) - \frac{1}{N}\sum^N_{j = 1} \Delta_j(t) = \Delta(t) - \Delta(t) = 0,
\end{split}
\end{align*}
and thus Corollary \ref{cor:noperp_convergence} applies. This means that systems of this kind converge to consensus even if the agents have an incorrect knowledge of the mean velocity, provided they all possess the same deviation.
\end{remark}
A final consequence of the previously developed results is the following theorem, which provides an upper bound for tolerable perturbations under which consensus emergence can be unconditionally guaranteed.
\begin{theorem}
For every $i = 1, \ldots, N$, let $\varepsilon_i: [0, +\infty) \rightarrow [0,\ell]$ for a fixed $\ell < \frac{\alpha}{\beta}$. If there exists $T \geq 0$ such that $\vnorm{\Delta_i(t)} \leq \varepsilon_i(t) \vnorm{\bm{v}_i^{\perp}(t)}$ for every $t \geq T$ and for every $1 \leq i \leq N$, then any solution of system \eqref{eq:cuckersmale_perturbed} tends to consensus.
\end{theorem}
\begin{proof}
By using the Cauchy-Schwarz inequality we have
\begin{equation*}
\begin{split}
\sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} & \leq \sum^N_{i = 1} \varepsilon_i(t) \vnorm{\bm{v}^{\perp}_i(t)}^2 \\
& \leq \ell \sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2. \\
\end{split}
\end{equation*}
The conclusion follows by taking $\phi(t) \equiv \ell$ for $t \in [T, +\infty)$ in Theorem \ref{th:perpbound_convergence}.
\end{proof}
\begin{remark}
The result above shows that, provided the magnitude of the perturbation is smaller than that of the deviation of the agent velocity from the mean, convergence to consensus is obtained unconditionally with respect to the initial condition. This is the case of local estimations of the average, as the largest error that an agent can make when estimating the group average upon a subset of agents is precisely its own deviation from the mean, $\bm{v}^{\perp}_i$.
\end{remark}
\section{Perturbations as linear combinations of velocity deviations} \label{sec:cuckersmale_localmean}
We begin this section by considering a simple case study, which is nevertheless relevant as it addresses consensus stabilization based on a leader-following feedback. Let us use Lemma \ref{lem:bigv_growth} to study the convergence to consensus of a system like \eqref{eq:cuckersmale_local}, where each agent computes its local mean velocity $\overline{\bm{v}}_i$ by taking into account itself and a single common agent $(\bm{x}_1, \bm{v}_1)$, which in turn takes into account only itself, so that $\overline{\bm{v}}_1 = \bm{v}_1$. Formally, given two finite conjugate exponents $p, q$ (i.e., two positive real numbers satisfying $\frac{1}{p} + \frac{1}{q} = 1$), we assume that for any $i = 1, \ldots, N$
\begin{align*}
\overline{\bm{v}}_i(t) = \frac{1}{p}\bm{v}_i(t) + \frac{1}{q}\bm{v}_1(t).
\end{align*}
We shall prove that any solution of this system tends to consensus, no matter how small the weight $\frac{1}{q}$ of $\bm{v}_1$ in $\overline{\bm{v}}_i$ is. We start by writing the system in the form \eqref{eq:cuckersmale_perturbed}, with $\alpha = \beta = \gamma$ and
\begin{align*}
\Delta_i(t) = \frac{1}{p}\bm{v}^{\perp}_i(t) + \frac{1}{q}\bm{v}^{\perp}_1(t).
\end{align*}
The perturbation term in \eqref{eq:maintool} is thus
\begin{equation*}
\begin{split}
\frac{2 \gamma}{N}\sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} & = \frac{2 \gamma}{N}\sum^N_{i = 1} \scalarp{\frac{1}{p}\bm{v}^{\perp}_i(t) + \frac{1}{q}\bm{v}^{\perp}_1(t), \bm{v}^{\perp}_i(t)}\\
& = \frac{1}{p} \frac{2 \gamma}{N} \sum^N_{i = 1} \vnorm{\bm{v}_i^{\perp}(t)}^2 + \frac{1}{q} \frac{2 \gamma}{N} \scalarp{\bm{v}^{\perp}_1(t), \underbrace{\sum^N_{i = 1} \bm{v}^{\perp}_i(t)}_{= 0}} \\
& = \frac{2 \gamma}{p} V(t)\,,
\end{split}
\end{equation*}
and Lemma \ref{lem:bigv_growth} lets us bound the growth of $V(t)$ as
\begin{align*}
\frac{d}{dt}V(t) \leq 2 \gamma \left(-1 + \frac{1}{p}\right) V(t) = -\frac{2 \gamma}{q} V(t)\,.
\end{align*}
This ensures the exponential decay of the functional $V(t)$ for any finite $q$.

Motivated by the latter configuration, we turn our attention to the study of systems like \eqref{eq:cuckersmale_perturbed} where the perturbation of the mean of the $i$-th agent has the form
\begin{align} \label{eq:pert_cucker}
\Delta_i(t) = \sum^N_{j = 1} \omega_{ij}(t) \bm{v}^{\perp}_j(t).
\end{align}
We shall see that the results obtained in Section \ref{sec:first_results} help us identify under which assumptions on the coefficients $\omega_{ij}$ we can infer unconditional convergence to consensus.
\begin{theorem} \label{th:suff_cond_consensus}
Consider a system of the form \eqref{eq:cuckersmale_perturbed}, where $\Delta_i$ is given as in \eqref{eq:pert_cucker}. Then, if for every $t \geq 0$ and every $i,j = 1, \ldots, N$ we have $\omega_{ij}(t) = \omega_{ji}(t)$, and we set
\begin{align*}
I(t) := \min_{i,j} \omega_{ij}(t) \text{ and } S(t) := \max_{i} \sum^N_{j = 1}\omega_{ij}(t),
\end{align*}
the following estimate holds:
\begin{align} \label{eq:decayperp}
\frac{d}{dt}V(t) \leq - 2 a\left(\sqrt{2NX(t)}\right) V(t) + 2\beta \left( S(t) - NI(t) - \frac{\alpha}{\beta} \right) V(t).
\end{align}
Therefore, if there exists a $T \geq 0$ such that the quantity $S(t) - NI(t) - \frac{\alpha}{\beta}$ is bounded from above by a constant $C < 0$ in $[T, +\infty)$, then any solution of the system tends to consensus.
\end{theorem}
\begin{proof}
Standard calculations yield
\begin{equation*}
\begin{split}
\sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} & = \sum^N_{i = 1} \sum^N_{j = 1} \omega_{ij}(t) \scalarp{\bm{v}^{\perp}_j(t), \bm{v}^{\perp}_i(t)}\\
& = \sum^N_{i = 1} \sum^N_{j = 1} \omega_{ij}(t) \scalarp{\bm{v}^{\perp}_j(t) - \bm{v}^{\perp}_i(t), \bm{v}^{\perp}_i(t)} \\
& \quad + \sum^N_{i = 1} \left( \sum^N_{j = 1} \omega_{ij}(t) \right) \vnorm{\bm{v}^{\perp}_i(t)}^2\\
& = N^2 \left( -\frac{1}{2N^2} \sum^N_{i = 1} \sum^N_{j = 1} \omega_{ij}(t) \vnorm{\bm{v}_j(t) - \bm{v}_i(t)}^2\right) \\
& \quad + \sum^N_{i = 1} \left( \sum^N_{j = 1} \omega_{ij}(t) \right) \vnorm{\bm{v}^{\perp}_i(t)}^2\\
& \leq N \left(- NI(t) + S(t) \right) V(t),
\end{split}
\end{equation*}
having used \eqref{eq:maintrick}. Applying this estimate in \eqref{eq:maintool} and collecting $\beta$, we get \eqref{eq:decayperp}.
\end{proof} In the following results we shall assume $\sum^N_{j = 1} \omega_{ij}(t) = 1$, which, by Proposition \ref{prop:derivative_mean}, implies that $\overline{\Delta} =0$ and that $\overline{\bm{v}}$ is conserved. In particular, for $\alpha = \beta$ the system can eventually be rewritten as a Cucker-Smale system of the type \eqref{eq:cuckersmale}, with a different interaction function $a$, and the following results can be seen as consequences of Theorem \ref{thm:hhk}. \begin{corollary} Let, for any $t \geq 0$, $\omega(t) \in [0,+\infty)^{N \times N}$ be a symmetric stochastic matrix, i.e., $\sum^N_{j = 1} \omega_{ij}(t) = 1$ for every $i = 1, \ldots, N$. If there exists a $\vartheta > 0$ such that \begin{align*} I(t) = \min_{i,j} \omega_{ij}(t) \geq \vartheta > \frac{\beta - \alpha}{N \beta}, \end{align*} then any solution of the system tends to consensus. \end{corollary} \begin{proof} Under the above hypotheses, the quantity $S(t) - NI(t) - \frac{\alpha}{\beta}$ of Theorem \ref{th:suff_cond_consensus} is bounded from above by $1 - N \vartheta - \frac{\alpha}{\beta}$, which, by assumption, is negative. \end{proof} \begin{corollary} \label{cor:phi_consensus} Suppose that \begin{align*} \omega_{ij}(t) = \frac{\phi(r_{ij}(t))}{\eta(t)} \end{align*} where $\phi:[0, +\infty) \rightarrow \left(0,1\right]$ is a nonincreasing, positive, bounded function, and $\eta:[0, +\infty) \rightarrow \left[0,+\infty\right)$ is a nonnegative function. Then, given constants $\alpha, \beta \geq 0$ satisfying \begin{align} \label{eq:betaalpha} 1 \leq N \frac{\beta}{\alpha} \leq \eta(t) \quad \text{for every } t \in [0,+\infty), \end{align} we have that any solution of system \eqref{eq:cuckersmale_perturbed}, for $\Delta_i$ as in \eqref{eq:pert_cucker} and $\alpha$ and $\beta$ as above, tends to consensus. \end{corollary} \begin{proof} Concerning the quantity $X(t)$, there are two cases: either $X(t)$ is bounded from above by a constant $\overline{X}$ on $[0, +\infty)$, or $X(t)$ is unbounded. In the first case, we can bound $S(t)$ from above by $N$. Since $r_{ij}(t) \leq \sqrt{2NX(t)} \leq \sqrt{2N\overline{X}}$ and $\phi$ is nonincreasing, we have that \begin{align*} I(t) = \min_{ij} \omega_{ij}(t) \geq \phi\left(\sqrt{2N\overline{X}}\right), \end{align*} and thus, from \eqref{eq:betaalpha}, it follows that \begin{align} \label{eq:invoketheorem} S(t) - N I(t) - \frac{\alpha}{\beta}\eta(t) \leq N - N \phi\left(\sqrt{2N\overline{X}}\right) - \frac{\alpha}{\beta}\eta(t) \leq - N \phi\left(\sqrt{2N\overline{X}}\right). \end{align} Since $\phi$ is positive, the proof is completed by using Theorem \ref{th:suff_cond_consensus}. Suppose now, instead, that $X(t)$ is unbounded: in this case the term $I(t)$ is bounded from below only by a term going to $0$ (and hence not helping us) and we have to take advantage of $S(t)$, as we now show. By definition, $X(t)$ is unbounded if and only if there exist two agents with indexes $h$ and $k$ such that $r_{hk}(t)$ is unbounded. By the triangle inequality, it follows that for any index $i$, there is an index $j(i)$ for which $r_{ij(i)}$ is unbounded. Thus, we fix $\rho > 0$ and let $T > 0$ be the last time $t$ at which $\phi(r_{ij(i)}(t)) \geq 1 - \rho$ for some $i = 1, \ldots, N$, so that $\phi(r_{ij(i)}(t)) < 1 - \rho$ for every $i = 1, \ldots, N$ and every $t > T$.
Then for every $t > T$ we may bound $S(t)$ as \begin{align*} S(t) \leq \frac{1}{\eta(t)}\left(N - 1 + 1 - \rho \right) \leq \frac{\alpha}{N \beta}(N - \rho) \end{align*} and therefore, again from \eqref{eq:betaalpha}, \begin{align*} S(t) - N I(t) - \frac{\alpha}{\beta}\eta(t) \leq \frac{\alpha}{N \beta}(N - \rho) - \frac{\alpha}{\beta}\eta(t) \leq - \frac{\alpha}{N \beta}\rho + \frac{\alpha}{\beta} - N \leq - \frac{\alpha}{N \beta}\rho < 0 \end{align*} holds for every $t > T$, since $N \frac{\beta}{\alpha} \geq 1$ by assumption. Theorem \ref{th:suff_cond_consensus} thus yields the result. \end{proof} \begin{remark} \label{re:corollary_phi} A concrete example of a system for which we can apply Corollary \ref{cor:phi_consensus} is obtained by considering the functions \begin{align*} \phi(r) = \frac{1}{(1 + r^2)^{\varepsilon}}\,, \end{align*} and \begin{align*} \eta(t) = \max_i\left\{\sum^N_{j = 1} \phi(r_{ij}(t))\right\}. \end{align*} In this case, $\varepsilon$ can be thought of as a parameter tuning the ability of each particle to gather information about the speed of the other agents: indeed, consider a set of agents such that $r_{ij} > 0$ whenever $i \neq j$. Then, if $\varepsilon$ is $0$, $\overline{\bm{v}}_i = \overline{\bm{v}}$ for every $i$, and each particle communicates at the same rate with near and far away agents, while if $\varepsilon$ goes to $+\infty$ then $\overline{\bm{v}}_i$ approaches $\bm{v}_i$, hence each particle is unable to gain knowledge about the speed of the other agents. The function $\eta$ serves the purpose of being a common normalizing factor: naturally, one would choose for every agent $i$ the normalizing factor given by \begin{align} \label{eq:usual_choice} \sum^N_{j = 1} \phi(r_{ij}(t)), \end{align} but that would produce a nonsymmetric matrix $\omega$, for which the above results are not valid. In this context, the function $\eta$ is a suitable replacement, being also coherent with the asymptotic behavior of \eqref{eq:usual_choice} for $\varepsilon \rightarrow 0$ and $\varepsilon \rightarrow +\infty$. \end{remark} \begin{remark} \label{rem:functionR} The positivity requirement on the function $\phi$ cannot be removed from Corollary \ref{cor:phi_consensus}, as the function \begin{align*} \phi(r) = \chi_{[0,R]}(r) = \left\{ \begin{array}{ll} 1 & \text{ if } r \leq R \\ 0 & \text{ if } r > R \\ \end{array} \right. \end{align*} shows. Indeed, what fails in the argument of the proof is the case in which we suppose that $X(t)$ is bounded by $\overline{X}$: if the quantity $\sqrt{2N\overline{X}}$ is larger than $R$, then $\phi\left(\sqrt{2N\overline{X}}\right) = 0$ in the inequality \eqref{eq:invoketheorem}, and we cannot invoke Theorem \ref{th:suff_cond_consensus} to infer consensus. \end{remark} \section{Perturbations due to local averaging} \label{sec:localmeanR} An interesting case of a system like \eqref{eq:cuckersmale_local} is the one where the local mean is calculated as \begin{align*} \overline{\bm{v}}_i = \frac{1}{\# \Lambda_R(i)} \sum_{j \in \Lambda_R(i)} \bm{v}_j, \end{align*} where $\Lambda_R(i) = \left\{j \in \{1, \ldots, N\} \mid r_{ij} \leq R \right\}$ and $\# \Lambda_R(i)$ is its cardinality. In this case, we model the situation in which each agent computes its local mean by counting only those agents inside a ball of radius $R$ centered at itself.
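For the reader's convenience, the following minimal Python sketch illustrates the computation of the local means $\overline{\bm{v}}_i$ just defined; the function and variable names are purely illustrative and not part of the model, and the sketch only encodes the definition of $\Lambda_R(i)$, not the controlled dynamics.

\begin{verbatim}
import numpy as np

def local_means(x, v, R):
    """Local mean velocity of each agent, averaging only over the agents
    (itself included) lying within distance R of it.
    x, v: arrays of shape (N, d) with positions and velocities."""
    N = x.shape[0]
    # r[i, j] = distance between agents i and j
    r = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    v_bar = np.empty_like(v)
    for i in range(N):
        neighbors = r[i] <= R    # the index set Lambda_R(i); it contains i
        v_bar[i] = v[neighbors].mean(axis=0)
    return v_bar
\end{verbatim}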
We want to address the issue of characterizing the behavior of system \eqref{eq:cuckersmale_local} with the above choice for $\overline{\bm{v}}_i$ when the radius $R$ of each ball is either reduced to $0$ or set to grow to $+\infty$: we shall see that we can reformulate this decentralized system again as a Cucker-Smale model with a different interaction function, to which we can apply Theorem \ref{thm:hhk}. We shall show how tuning the radius $R$ affects the convergence to consensus, from the case of small $R \geq 0$, where only conditional convergence is ensured, to the unconditional convergence result obtained for $R = +\infty$. First of all, denoting by $\chi_{[0,R]}(r)$ the characteristic function of the interval $[0,R]$, we can rewrite $\overline{\bm{v}}_i$ as \begin{align} \label{eq:truncated_perturbation} \overline{\bm{v}}_i = \frac{1}{\sum^N_{k = 1} \chi_{[0,R]}(r_{ik})} \sum_{j = 1}^N \chi_{[0,R]}(r_{ij}) \bm{v}_j. \end{align} As already noted in Remark \ref{re:corollary_phi}, the normalizing terms $\sum^N_{k = 1} \chi_{[0,R]}(r_{ik}(t))$ give rise to a matrix of weights which is not symmetric. Since this will be an issue also in the present section, we take $\eta_R(t)$ to be a function that approximates the above normalizing terms and preserves their asymptotics for $R \rightarrow 0$ and $R \rightarrow +\infty$, for instance \begin{align} \label{eq:normalizingR} \eta_R(t) = \max_i \left\{\sum^N_{k = 1} \chi_{[0,R]}(r_{ik}(t)) \right\}. \end{align} For every $t \geq 0$, we replace the vector $\overline{\bm{v}}_i(t)$ by \begin{align*} \frac{1}{\eta_{R}(t)} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij}(t)) \bm{v}_j(t)\,. \end{align*} Moreover, the vector \begin{align*} \bm{v}_i(t) \cdot \left(\frac{1}{\eta_{R}(t)} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij}(t)) \right) \end{align*} is also an approximation of $\bm{v}_i(t)$ for $R \rightarrow 0$ and $R \rightarrow +\infty$. This motivates the replacement of the term $\overline{\bm{v}}_i - \bm{v}_i$ in system \eqref{eq:cuckersmale_local}, where $\overline{\bm{v}}_i$ is as in \eqref{eq:truncated_perturbation}, with \begin{equation} \begin{split} \label{eq:derivationR} \frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij})\bm{v}_j - \left(\frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij}) \right)\bm{v}_i & =\frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij}) (\bm{v}_j - \bm{v}_i) \\ & = \frac{1}{\eta_{R}} \sum^N_{j = 1} (\bm{v}_j - \bm{v}_i) \\ & \quad + \frac{1}{\eta_{R}} \sum^N_{j = 1} (1 - \chi_{[0,R]}(r_{ij})) (\bm{v}_i - \bm{v}_j) \\ & = \frac{N}{\eta_{R}} \left(\overline{\bm{v}} - \bm{v}_i\right) \\ & \quad + \frac{1}{\eta_{R}} \sum^N_{j = 1} (1 - \chi_{[0,R]}(r_{ij})) (\bm{v}_i - \bm{v}_j). \end{split} \end{equation} We can thus rewrite the original system as \eqref{eq:cuckersmale_perturbed}: \begin{eqnarray} \left\{ \begin{aligned} \begin{split} \label{eq:cuckersmale_R} \dot{\bm{x}}_{i} & = \bm{v}_{i} \\ \dot{\bm{v}}_{i} & = \frac{1}{N} \sum_{j = 1}^N a\left(r_{ij}\right)\left(\bm{v}_{j}-\bm{v}_{i}\right) + \gamma \frac{N}{\eta_{\varepsilon}} (\overline{\bm{v}} - \bm{v}_i) + \gamma \Delta^{\varepsilon}_i, \end{split} \end{aligned} \right. \end{eqnarray} where the perturbations have the general form \begin{align} \label{eq:pert_repulsion} \Delta^{\varepsilon}_i(t) = \frac{1}{\eta_{\varepsilon}(t)} \sum^N_{j = 1} (1 - \psi_{\varepsilon}(r_{ij}(t))) (\bm{v}_i(t) - \bm{v}_j(t)).
\end{align} For \eqref{eq:pert_repulsion} to be a coherent approximation of our case study, we prescribe that $\varepsilon$ is a parameter ranging in a nonempty set $\Omega$ satisfying: \begin{enumerate}[$(i)$] \item \label{item:i} $\psi_{\varepsilon}:[0, +\infty) \rightarrow [0,1]$ is a nonincreasing measurable function for every $\varepsilon \in \Omega$; \item \label{item:ii} $\eta_{\varepsilon}:[0, +\infty) \rightarrow \mathbb{R}$ is an $L^{\infty}$-function for every $\varepsilon \in \Omega$; \item \label{item:iii} there are two disjoint subsets $\Omega_{C\!S}$ and $\Omega_{U}$ of $\Omega$ such that, if $\varepsilon \in \Omega_{C\!S}$ then $\psi_{\varepsilon} \equiv \chi_{\{0\}}$ and $\eta_{\varepsilon} \equiv 1$, while if $\varepsilon \in \Omega_{U}$ then $\psi_{\varepsilon} \equiv \chi_{[0, +\infty)}$ and $\eta_{\varepsilon} \equiv N$. \end{enumerate} With requirement \eqref{item:iii}, we impose that if $\varepsilon \in \Omega_{C\!S}$ then $\Delta_i^{\varepsilon} = - \frac{N}{\eta_{\varepsilon}} (\overline{\bm{v}} - \bm{v}_i)$, therefore recovering system \eqref{eq:cuckersmale} from \eqref{eq:cuckersmale_R}, whereas if $\varepsilon \in \Omega_{U}$ then $\Delta_i^{\varepsilon} = 0$, and we obtain a particular instance of system \eqref{eq:cuckersmale_uniform}. In order to study under which conditions on the initial values the solutions of system \eqref{eq:cuckersmale_R} converge to consensus, we cannot use the results of the previous section. This is because, if we compare the following calculations \begin{equation*} \begin{split} \frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij})\bm{v}_j - \left(\frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij}) \right)\bm{v}_i & = \left(\frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij}) \right)(\overline{\bm{v}} - \bm{v}_i) \\ & \quad + \frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij})\bm{v}^{\perp}_j \end{split} \end{equation*} to \eqref{eq:derivationR}, we obtain that the system we are considering is precisely the version of system \eqref{eq:cuckersmale_perturbed} where $\alpha = \frac{1}{\eta_{R}} \sum^N_{j = 1} \chi_{[0,R]}(r_{ij})$ (which equals $1$ both for sufficiently small and for sufficiently large $R$) and $\Delta_i$ is of the same kind as the one mentioned in Remark \ref{rem:functionR}. We present the following result, which gives a sufficient condition on the initial data under which the solutions of system \eqref{eq:cuckersmale_R} converge to consensus. We point out that system \eqref{eq:cuckersmale_R} can be rewritten as a Cucker-Smale type model as follows: \begin{eqnarray*} \left\{ \begin{aligned} \begin{split} \dot{\bm{x}}_{i} & = \bm{v}_{i} \\ \dot{\bm{v}}_{i} & = \frac{1}{N} \sum_{j = 1}^N \left(a\left(r_{ij}\right) + \gamma \frac{N}{\eta_{\varepsilon}}\psi_{\varepsilon}(r_{ij})\right) (\bm{v}_{j} - \bm{v}_{i}), \end{split} \end{aligned} \right. \end{eqnarray*} and therefore, the following result is obtained as an application of Theorem \ref{thm:hhk}. \begin{theorem} \label{th:HaHaKimExtended} Fix $\gamma \geq 0$, consider system \eqref{eq:cuckersmale_R}, where $\Delta^{\varepsilon}_i$ is as in \eqref{eq:pert_repulsion}, and let $(\bm{x}_0,\bm{v}_0) \in (\mathbb{R}^d)^N \times (\mathbb{R}^d)^N$.
Then if $X_0 = B(\bm{x}_0, \bm{x}_0)$ and $V_0 = B(\bm{v}_0,\bm{v}_0)$ satisfy \begin{align} \label{eq:HaHaKimLarge} \int^{+\infty}_{\sqrt{X_0}} a\left(\sqrt{2N}r\right) \ dr + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \int^{+\infty}_{\sqrt{X_0}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \ dr \geq \sqrt{V_0}, \end{align} the solution of system \eqref{eq:cuckersmale_R} with initial datum $(\bm{x}_0,\bm{v}_0)$ tends to consensus. \end{theorem} \begin{proof} From requirement \eqref{item:i} we have \begin{equation*} \begin{split} \frac{1}{N}\sum^N_{i = 1} \scalarp{\Delta_i(t), \bm{v}^{\perp}_i(t)} & = \frac{1}{N\eta_{\varepsilon}(t)} \sum^N_{i = 1} \sum^N_{j = 1} (1 - \psi_{\varepsilon}(r_{ij}(t))) \scalarp{\bm{v}_i(t) - \bm{v}_j(t), \bm{v}^{\perp}_i(t)}\\ & \leq \frac{N}{\eta_{\varepsilon}(t)} \left(1 - \psi_{\varepsilon} \left(\sqrt{2N X(t)}\right)\right) V(t), \end{split} \end{equation*} and inequality \eqref{eq:maintool} reads \begin{align*} \frac{d}{dt}V(t) \leq - 2 \left(a\left(\sqrt{2NX(t)}\right) + \frac{\gamma N}{\eta_{\varepsilon}(t)} \psi_{\varepsilon} \left(\sqrt{2N X(t)}\right) \right) V(t). \end{align*} Moreover, since \begin{align*} \frac{d}{dt}\sqrt{V(t)} = \frac{1}{2 \sqrt{V(t)}} \frac{d}{dt}V(t), \end{align*} from \eqref{item:ii} we have \begin{equation} \begin{split}\label{eq:HaHaDecay} \frac{d}{dt}\sqrt{V(t)} & \leq - \left(a\left(\sqrt{2NX(t)}\right) + \frac{\gamma N}{\eta_{\varepsilon}(t)} \psi_{\varepsilon} \left(\sqrt{2N X(t)}\right) \right) \sqrt{V(t)} \\ & \leq - \left(a\left(\sqrt{2NX(t)}\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon} \left(\sqrt{2N X(t)}\right) \right) \sqrt{V(t)}, \end{split} \end{equation} and integrating between $0$ and $t$ we obtain \begin{align} \label{eq:growthbound} \sqrt{V(t)} - \sqrt{V(0)} \leq - \int^t_0 \left(a\left(\sqrt{2NX(s)}\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon} \left(\sqrt{2N X(s)}\right) \right) \sqrt{V(s)} \ ds. \end{align} We now work on changing the variable inside the integral. We can actually claim that \begin{align} \label{eq:changevariable} \frac{d}{dt} \sqrt{X(t)} \leq \sqrt{V(t)}, \end{align} since, indeed, the following \begin{equation*} \begin{split} \frac{d}{dt} X(t) = \frac{1}{N} \sum^N_{i = 1} \frac{d}{dt}\vnorm{\bm{x}^{\perp}_i(t)}^2 & = \frac{2}{N} \sum^N_{i = 1} \scalarp{\bm{x}^{\perp}_i(t),\frac{d}{dt}\bm{x}^{\perp}_i(t)} \\ & = \frac{2}{N} \sum^N_{i = 1} \scalarp{\bm{x}^{\perp}_i(t),\bm{v}^{\perp}_i(t)} \\ & \leq \frac{2}{N} \sum^N_{i = 1} \vnorm{\bm{x}^{\perp}_i(t)}\vnorm{\bm{v}^{\perp}_i(t)} \\ & \leq \frac{2}{N} \left(\sum^N_{i = 1}\vnorm{\bm{x}^{\perp}_i(t)}^2\right)^{\frac{1}{2}} \left(\sum^N_{i = 1} \vnorm{\bm{v}^{\perp}_i(t)}^2\right)^{\frac{1}{2}} \\ & = 2 \sqrt{X(t)}\sqrt{V(t)}, \end{split} \end{equation*} and $\frac{d}{dt} X(t) = \frac{d}{dt}(\sqrt{X(t)}\sqrt{X(t)}) = 2 \sqrt{X(t)} \frac{d}{dt} \sqrt{X(t)}$, together yield \eqref{eq:changevariable}. Note that we have used \begin{align*} \frac{d}{dt}\bm{x}^{\perp}_i(t) = \frac{d}{dt}\left(\bm{x}_i(t) - \frac{1}{N} \sum^N_{j = 1} \bm{x}_j(t)\right) = \bm{v}_i(t) - \frac{1}{N} \sum^N_{j = 1} \bm{v}_j(t) = \bm{v}^{\perp}_i(t).
\end{align*} Setting $r = \sqrt{X(s)}$ and using \eqref{eq:changevariable}, we can change variable in \eqref{eq:growthbound} as follows: \begin{align} \label{eq:cesemo} \sqrt{V(t)} - \sqrt{V(0)} \leq - \int^{\sqrt{X(t)}}_{\sqrt{X(0)}} \left(a\left(\sqrt{2N}r\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \right) \ dr. \end{align} Let us suppose that \eqref{eq:HaHaKimLarge} holds, and note that $X(0) = X_0$ and $V(0) = V_0$. If $V(0) = 0$, then there is nothing to prove, since we are already at consensus. If, instead, we have \begin{align} \label{eq:almostdone} 0 < \sqrt{V(0)} \leq \int^{+\infty}_{\sqrt{X(0)}} \left(a\left(\sqrt{2N}r\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \right) \ dr, \end{align} then there is a $\overline{X} > X(0)$ such that \begin{align*} \sqrt{V(0)} = \int^{\sqrt{\overline{X}}}_{\sqrt{X(0)}} \left(a\left(\sqrt{2N}r\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \right) \ dr \end{align*} (having used the fact that, from \eqref{item:i}, the integrand is a nonincreasing function). Now, either equality holds in \eqref{eq:almostdone}, and $\lim_{t \rightarrow +\infty} V(t) = 0$ follows by passing to the limit in \eqref{eq:cesemo}, or we have a strict inequality. But in this case $\overline{X} \geq X(t)$ must hold for every $t \geq 0$, since otherwise there would be a $T > 0$ for which we have \begin{equation*} \begin{split} \sqrt{V(0)} & \geq \sqrt{V(T)} + \int^{\sqrt{X(T)}}_{\sqrt{X(0)}} \left(a\left(\sqrt{2N}r\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \right) \ dr \\ & > \int^{\sqrt{\overline{X}}}_{\sqrt{X(0)}} \left(a\left(\sqrt{2N}r\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \right) \ dr \\ & = \sqrt{V(0)}, \end{split} \end{equation*} which is obviously a contradiction. Thus, we have that the inequality $\overline{X} \geq X(t)$ is true for every $t \geq 0$, and from \eqref{eq:HaHaDecay} we have \begin{align*} \frac{d}{dt}V(t) \leq - 2 \left(a\left(\sqrt{2N\overline{X}}\right) + \frac{\gamma N}{\vnorm{\eta_{\varepsilon}}_{\infty}} \psi_{\varepsilon} \left(\sqrt{2N \overline{X}}\right) \right) V(t). \end{align*} The fact that $\lim_{t \rightarrow +\infty} V(t) = 0$ follows from the inequality above. \end{proof} A first example of a family of functions $\left\{ \psi_{\varepsilon} \right\}_{\varepsilon \in \Omega}$ is given by \begin{align*} \psi_{\varepsilon}(r) = \frac{1}{(1 + r^2)^{\varepsilon}} \text{ where } \varepsilon \in \Omega = [0, \infty], \end{align*} for which we set $\psi_{\infty} \equiv \chi_{\{0\}}$ and \begin{align*} \eta_{\varepsilon}(t) = \max_i \left\{ \sum^N_{k = 1} \psi_{\varepsilon}(r_{ik}(t)) \right\}. \end{align*} In this case, $\Omega_{C\!S} = \{\infty\}$ and $\Omega_{U} = \{0\}$, and, since $\vnorm{\eta_{\varepsilon}}_{\infty} \leq N$, condition \eqref{eq:HaHaKimLarge} is satisfied as soon as \begin{align*} \int^{+\infty}_{\sqrt{X_0}} a\left(\sqrt{2N}r\right) \ dr + \gamma \int^{+\infty}_{\sqrt{X_0}} \psi_{\varepsilon}\left(\sqrt{2N}r\right) \ dr \geq \sqrt{V_0}\,. \end{align*}
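As a complement, the following short Python sketch shows one way to check the above sufficient condition numerically for this family; it assumes, purely for illustration, that the interaction kernel has the form $a(r) = (1+r^2)^{-\delta}$ (the kernel used in the numerical section below), and the function names are hypothetical.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def consensus_condition(X0, V0, N, gamma, eps, delta=1.0):
    """Check whether
        int_{sqrt(X0)}^inf a(sqrt(2N) r) dr
          + gamma * int_{sqrt(X0)}^inf psi_eps(sqrt(2N) r) dr >= sqrt(V0),
    which is enough for (eq:HaHaKimLarge) since ||eta_eps||_inf <= N.
    The integrals converge for delta > 1/2 and eps > 1/2; otherwise the
    left-hand side is infinite and the condition holds trivially."""
    a = lambda r: (1.0 + r ** 2) ** (-delta)    # assumed interaction kernel
    psi = lambda r: (1.0 + r ** 2) ** (-eps)
    Ia, _ = quad(lambda r: a(np.sqrt(2 * N) * r), np.sqrt(X0), np.inf)
    Ip, _ = quad(lambda r: psi(np.sqrt(2 * N) * r), np.sqrt(X0), np.inf)
    return Ia + gamma * Ip >= np.sqrt(V0)
\end{verbatim}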
But the most interesting example of such a family is the one which introduced this section: we consider $\Omega = [0, +\infty]$, the set of functions $\{\chi_{[0,R]}\}_{R \in \Omega}$ and $\eta_R$ as in \eqref{eq:normalizingR} (notice that here $\Omega_{C\!S} = \{0\}$ and $\Omega_{U} = \{+\infty\}$). Since $\vnorm{\eta_R}_{\infty} \leq N$, if $R$ is sufficiently large to satisfy $\sqrt{2NX_0} \leq R$, condition \eqref{eq:HaHaKimLarge} is satisfied as soon as \begin{align*} \int^{+\infty}_{\sqrt{X_0}} a\left(\sqrt{2N}r\right) \ dr + \gamma \left(\frac{R}{\sqrt{2N}} - \sqrt{X_0}\right) \geq \sqrt{V_0}, \end{align*} by means of a trivial integration. If, instead, $R$ is so small that $\sqrt{2NX_0} > R$, condition \eqref{eq:HaHaKimLarge} is satisfied as soon as \begin{align*} \int^{+\infty}_{\sqrt{X_0}} a\left(\sqrt{2N}r\right) \ dr \geq \sqrt{V_0}, \end{align*} recovering Theorem \ref{thm:hhk}. The above results can be seen as the asymptotic outcome of the following more general approach: consider the set $\Omega = [0,\infty] \times (1,\infty]$, write $\varepsilon$ as the pair of parameters $(R, \theta)$ and set \begin{align*} \psi_{R,\theta}(r) = \left\{ \begin{array}{ll} 1 & \text{ if } r \leq R, \\ \frac{1}{(r - R + 1)^{\theta}} & \text{ if } r > R. \\ \end{array} \right. \end{align*} This time we have $\Omega_{C\!S} = \{0\}\times\{+ \infty\}$ and $\Omega_{U} = \{+\infty\} \times (1,+\infty]$. Suppose first that $R$ is sufficiently large to satisfy $\sqrt{2NX_0} \leq R$ and take $\eta_{R, \theta}$ to be either \begin{align*} \eta_{R, \theta}(t) = \max_i \left\{\sum^N_{k = 1} \psi_{R, \theta}(r_{ik}(t)) \right\} \quad \text{or} \quad \eta_{R, \theta}(t) = \min_i \left\{\sum^N_{k = 1} \psi_{R, \theta}(r_{ik}(t))\right\}. \end{align*} Since in both cases we have $\vnorm{\eta_{R,\theta}}_{\infty} \leq N$, and since \begin{align*} \int^{+\infty}_{\sqrt{X_0}} \psi_{R, \theta}\left(\sqrt{2N}r\right) \ dr & = \int^{\frac{R}{\sqrt{2N}}}_{\sqrt{X_0}} \ dr + \int^{+\infty}_{\frac{R}{\sqrt{2N}}} \frac{1}{(\sqrt{2N}r - R + 1)^{\theta}} \ dr\\ & = \frac{R}{\sqrt{2N}} - \sqrt{X_0} + \frac{1}{\sqrt{2N}\,(\theta - 1)}, \end{align*} condition \eqref{eq:HaHaKimLarge} is satisfied as soon as \begin{align*} \int^{+\infty}_{\sqrt{X_0}} a\left(\sqrt{2N}r\right) \ dr + \gamma \left( \frac{R}{\sqrt{2N}} - \sqrt{X_0} + \frac{1}{\sqrt{2N}\,(\theta - 1)}\right) \geq \sqrt{V_0}, \end{align*} which shows that the consensus region grows linearly with the radius $R$ while it is inversely proportional to the growth of $\theta$. Otherwise, if $R$ is so small that $\sqrt{2NX_0} > R$, since \begin{align*} \int^{+\infty}_{\sqrt{X_0}} \psi_{R, \theta}\left(\sqrt{2N}r\right) \ dr & = \int^{+\infty}_{\sqrt{X_0}} \frac{1}{(\sqrt{2N}r - R + 1)^{\theta}} \ dr\\ & = \frac{1}{\sqrt{2N}\,(\theta - 1)(\sqrt{2NX_0} - R + 1)^{\theta - 1}}\,, \end{align*} we have that condition \eqref{eq:HaHaKimLarge} is true whenever \begin{align*} \int^{+\infty}_{\sqrt{X_0}} a\left(\sqrt{2N}r\right) \ dr + \frac{\gamma}{\sqrt{2N}\,(\theta - 1)} \frac{1}{(\sqrt{2NX_0} - R + 1)^{\theta - 1}} \geq \sqrt{V_0}\,. \end{align*} In this case, the consensus region is essentially not modified by $R$, but only by the decay rate $\theta$ of the far-away interaction (the faster the decay, the smaller the consensus region). In both cases, if $\theta$ goes to $+\infty$, we recover the result we obtained for the family of functions $\{\chi_{[0,R]}\}_{R \in [0,+\infty]}$. \section{Numerical tests} We present a series of numerical tests illustrating the main results developed throughout this article.
We begin by describing the generic setting from which the initial configurations of agents are determined; we follow ideas similar to those presented in \cite{cafotove10}. We consider a system of $N$ two-dimensional agents with a randomly generated initial configuration of positions and velocities \[(\tilde{\bm{x}},\tilde{\bm{v}})\in[-1,1]^{2N}\times[-1,1]^{2N}\,,\] interacting by means of the kernel \eqref{eq:kernel}, with $\delta=1$. We recall that relevant quantities for the analysis of our results are given by \[ X[\bm{x}](t)=\frac{1}{2N^2}\sum_{i,j=1}^N||\bm{x}_i(t)-\bm{x}_j(t)||^2\,,\quad\text{and}\quad V[\bm{v}](t)=\frac{1}{2N^2}\sum_{i,j=1}^N||\bm{v}_i(t)-\bm{v}_j(t)||^2\,. \] In fact, once a random initial configuration has been generated, it is possible to rescale it to a desired $(X_0,V_0)$ parametric pair, by means of \[ (\bm{x},\bm{v})=\left(\sqrt{\frac{X_0}{X[\tilde{\bm{x}}]}}\tilde{\bm{x}}, \sqrt{\frac{V_0}{V[\tilde{\bm{v}}]}} \tilde{\bm{v}}\right)\,, \] such that $(X[\bm{x}],V[\bm{v}])=(X_0,V_0)$. As the simulations concerning flock trajectories are generated by prescribing a value for the pair $(X_0,V_0)$, which is used to rescale randomly generated initial conditions, there are slight variations in the initial positions and velocities in every model run, which can affect the final consensus direction. However, our results are stated in terms of $X,V$, independently of the specific initial configuration. For simulation purposes, the system with the specific feedback controller is integrated in time by means of a 4th-order Runge-Kutta scheme. \noindent \textbf{Leader-based feedback.} The first case that we address is the one presented in Section \ref{sec:cuckersmale_localmean}, where the feedback is built upon local information and a single flock leader. In this case, the local feedback is defined as \[\bm{u}_i=-(\bm{v}_i-\overline{\bm{v}}_i)\,,\quad\text{with}\quad \overline{\bm{v}}_i=(1-q)\bm{v}_i+q \bm{v}_1\,,\;i=1,\ldots,N\,,\] where for convenience we have selected the first agent as the leader of the flock. Figure \ref{fig:1} shows the behavior of the flock depending on the parameter $q$, which represents the influence of the leader in the local average. Our result asserts that for $0< q\leq 1$, the system will converge to consensus independently of the initial configuration, which is illustrated by our numerical experiments, as shown in Figures \ref{fig:1} and \ref{fig:12}; it can be observed that the weaker the influence of the leader, the longer the flock takes to reach consensus. \begin{figure} \caption{Leader-based feedback control. Simulations with 100 agents; the value $q$ indicates the strength of the leader in the partial average. It can be observed how, as the strength of the leader is increased, convergent behavior is improved.} \label{fig:1} \end{figure} \begin{figure} \caption{Leader-based feedback control. Simulations with 100 agents; the value $q$ indicates the strength of the leader in the partial average. Evolution of $X(t)$ and $V(t)$ for the simulations in Figure \ref{fig:1}.} \label{fig:12} \end{figure} \begin{figure} \caption{Total feedback control under structured perturbations.
For a fixed strong structured perturbation term ($\beta=10$), different energies for the unperturbed control term $\alpha$ generate different consensus behavior; the stronger the correct information term is, the faster consensus is achieved.} \label{fig:2} \end{figure} \noindent\textbf{Feedback under perturbed information.} Next, we deal again with the setting presented in Section \ref{sec:cuckersmale_localmean}, by considering a feedback of the form \[ \bm{u}_i=-\alpha(\bm{v}_i-\overline{\bm{v}})-\beta\Delta_i\,,\quad\overline{\bm{v}}=\frac{1}{N}\sum_{i=1}^N\bm{v}_i\,, \] where $\Delta_i=\Delta_i(t)$ represents a structured perturbation written as \[ \Delta_i=\frac{1}{\eta_i(t)}\sum_{j=1}^N\omega_{ij}(\bm{v}_j-\overline{\bm{v}})\,. \] In particular, we address the case where the weighting function $\omega_{ij}$ corresponds to a Cucker-Smale kernel similar to the one of the dynamics, i.e., \[\omega_{ij}=\frac{1}{(1+||\bm{x}_i-\bm{x}_j||^2)^{\varepsilon}}\,,\quad \eta_i=\frac{1}{N}\sum_{j=1}^N \omega_{ij}\,.\] In this test, we fix a large value of $\beta=10$, representing a strong perturbation of the feedback, and a small value of $\varepsilon=10^{-5}$, related to a disturbance which is distributed among all the agents. In Figures \ref{fig:2} and \ref{fig:22}, it is shown how increasing the value of $\alpha$, representing the energy of the \textsl{correct information feedback}, induces faster consensus emergence. \begin{figure} \caption{Total feedback control under structured perturbations. Evolution of $X(t)$ and $V(t)$ for the simulations in Figure \ref{fig:2}.} \label{fig:22} \end{figure} \noindent\textbf{Local feedback control.} The last test case studies the results presented in Section \ref{sec:localmeanR}, where the feedback is computed according to the local average in \eqref{eq:truncated_perturbation}. Simulations in Figure \ref{fig:3} illustrate the setting. Starting from an uncontrolled system, represented by a local feedback radius $R=0$, and increasing this quantity, partial flocking is consistently achieved, until full consensus is observed for large radii, mimicking a total information feedback control. From a theoretical perspective, this result is presented in Theorem \ref{th:HaHaKimExtended}, which describes a sufficient consensus region for feedback control based on local averages. This theorem recovers, in its asymptotics, previous results in \cite{HaHaKim} and \cite{CFPT}, related to consensus regions for uncontrolled systems and for fully controlled systems under total information feedback, respectively. It has been reported in the literature \cite{cafotove10} that estimates for consensus regions, such as the one provided by Theorem \ref{thm:hhk}, are not sharp in many situations. In this direction, we proceed to contrast the theoretical consensus estimates with the numerical evidence. For this purpose, for a fixed number of agents, we span a large set of possible initial configurations determined by different values of $(X,V)$. For every pair $(X,V)$ we randomly generate a set of 20 initial conditions, and we simulate for a sufficiently large time frame. We measure consensus according to a threshold established on the final value of $V$; we consider that consensus has been achieved if the final value of $V$ is less than or equal to $10^{-5}$. We proceed by computing empirical probabilities of consensus for every point of our state space $(X,V)$; results in this direction are presented in Figures \ref{fig:4} and \ref{fig:5}.
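The following schematic Python code summarizes the protocol just described (random initial data rescaled to a prescribed pair $(X_0,V_0)$, integration of the controlled dynamics, consensus declared when the final value of $V$ drops below $10^{-5}$). The integrator \texttt{rk4\_simulate} stands for the Runge-Kutta scheme mentioned above and is assumed to be given; all names are illustrative.

\begin{verbatim}
import numpy as np

def spread(z):
    """X[z] (or V[z]) = (1 / (2 N^2)) * sum_{i,j} ||z_i - z_j||^2."""
    diff = z[:, None, :] - z[None, :, :]
    return (diff ** 2).sum() / (2 * z.shape[0] ** 2)

def consensus_probability(X0, V0, N, rk4_simulate, runs=20, T=50.0, tol=1e-5):
    """Empirical probability of consensus at the state-space point (X0, V0).
    rk4_simulate(x, v, T) is assumed to integrate the controlled system up to
    time T and return the final velocities; it is not implemented here."""
    hits = 0
    for _ in range(runs):
        x = np.random.uniform(-1, 1, size=(N, 2))
        v = np.random.uniform(-1, 1, size=(N, 2))
        x *= np.sqrt(X0 / spread(x))   # rescale so that X[x] = X0
        v *= np.sqrt(V0 / spread(v))   # rescale so that V[v] = V0
        v_final = rk4_simulate(x, v, T)
        if spread(v_final) <= tol:     # consensus threshold on the final V
            hits += 1
    return hits / runs
\end{verbatim}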
We first consider the simplified case of 2 agents; according to \cite{CFPT}, for this particular case, the consensus region estimate provided by Theorem \ref{thm:hhk} is sharp, as illustrated by the results presented in Figure \ref{fig:4}. Furthermore, the same holds for Theorem \ref{th:HaHaKimExtended}: for $R>0$, the consensus region predicted by the theorem coincides with the numerically observed one. \begin{figure} \caption{Local feedback control. Simulations with $N=40$ agents and different control radii $R$. By increasing the value of $R$ the system transits from uncontrolled behavior, to partial flocking, up to total, fast flocking.} \label{fig:3} \end{figure} \begin{figure} \caption{Local feedback control. Empirical consensus regions and theoretical estimates for two-agent systems.} \label{fig:4} \end{figure} Figure \ref{fig:5} illustrates the case when a larger number of agents is considered. In a similar way as for Theorem \ref{thm:hhk}, the consensus region estimate is conservative compared with the region where the numerical experiments exhibit convergent behavior. Nevertheless, Theorem \ref{th:HaHaKimExtended} is consistent in the sense that the theoretical consensus region increases gradually as $R$ grows, eventually covering any initial configuration, which is the case of the total information feedback control, as presented in \cite[Proposition 2]{CFPT}. The numerical experiments also confirm this phenomenon, as shown in Figure \ref{fig:6}, where the contour lines marking an $80\%$ probability of consensus for different radii are located farther from the origin as $R$ increases. \begin{figure} \caption{Local feedback control. Empirical consensus regions and theoretical estimates for $N=20$ agents and different control radii $R$.} \label{fig:5} \end{figure} \begin{figure} \caption{Local feedback control. Empirical contour lines for the 80$\%$ probability of consensus with different control radii.} \label{fig:6} \end{figure} \section*{Concluding remarks and perspectives} We have presented a set of feedback controllers for consensus emergence in nonlinear multi-agent systems of Cucker-Smale type. The proposed control designs address different situations concerning leader-following configurations, stabilization under perturbed information, and decentralized, local feedback control. In general, we characterize consensus emergence in every case, providing a coherent extension of the available results in the literature. Furthermore, numerical experiments assess the performance of the controllers in a consistent way. \\ Among possible future directions of research, let us mention that the numerical evidence suggests that it should be possible to derive sharper consensus estimates if the structure of the internal dynamics is used more intensively in the computations. Another natural extension of our work would be to consider the consensus emergence problem under a decentralized control computed via an optimality-based approach. \end{document}
\begin{document} \title{A Simple Approach for Non-stationary Linear Bandits} \author{Peng Zhao, Lijun Zhang, Yuan Jiang, Zhi-Hua Zhou} \affil{National Key Laboratory for Novel Software Technology\\ Nanjing University, Nanjing 210023, China} \date{} \maketitle \begin{abstract} This paper investigates the problem of non-stationary linear bandits, where the unknown regression parameter is evolving over time. Existing studies develop various algorithms and show that they enjoy an $\Ot(T^{2/3}P_T^{1/3})$ dynamic regret, where $T$ is the time horizon and $P_T$ is the path-length that measures the fluctuation of the evolving unknown parameter. In this paper, we discover that a serious technical flaw makes their results ungrounded, and then present a fix, which gives an $\Ot(T^{3/4}P_T^{1/4})$ dynamic regret without modifying the original algorithms. Furthermore, we demonstrate that instead of using sophisticated mechanisms, such as a sliding window or a weighted penalty, a simple restarted strategy is sufficient to attain the same regret guarantee. Specifically, we design an UCB-type algorithm to balance exploitation and exploration, and restart it periodically to handle the drift of unknown parameters. Our approach enjoys an $\Ot(T^{3/4}P_T^{1/4})$ dynamic regret. Note that to achieve this bound, the algorithm requires oracle knowledge of the path-length $P_T$. Combining our algorithm, as the base learner, with the bandits-over-bandits mechanism, we can further achieve the same regret bound in a parameter-free way. Empirical studies also validate the effectiveness of our approach. \end{abstract} \section{Introduction} \label{sec:SLB-intro} Multi-Armed Bandits (MAB)~\citep{Robbins:52} models sequential decision-making with partial information, where the player is required to choose one of the $K$ slot machines at each iteration in order to maximize the cumulative reward. MAB is a paradigmatic instance of the exploration versus exploitation trade-off, which is fundamental in many areas of artificial intelligence, such as reinforcement learning~\citep{book:reinforcement-learning} and evolutionary algorithms~\citep{survey'13:EE-evolution}. In many real-world decision-making problems, each arm is usually associated with certain side information. Therefore, researchers have started to formulate structured bandits, in which the reward distributions of the arms are connected by a common but unknown parameter. In particular, Stochastic Linear Bandits (SLB) have received much attention~\citep{JMLR'02:Auer-linear-bandits,NIPS'07:bandit-lower-bound,AISTATS'11:Chu-linear-bandits,NIPS'11:AY-linear-bandits,COLT'19:Yuan-linear-bandits}. In SLB, at iteration $t$, the player makes a decision $X_t$ from a feasible set $\X \subseteq \R^d$, and then observes the reward $r_t$ satisfying \begin{equation} \label{eq:LB-model-assume} \E[r_t|X_t] = X_t^\T \theta_*, \end{equation} where $\theta_*$ is an unknown regression parameter. The goal of the player is to minimize the (pseudo) regret, \begin{equation} \label{eq:regret-stationary} \mbox{Regret}_T = T\max_{\x \in \X} \x^{\T}\theta_* - \sum_{t=1}^{T} X_t^\T \theta_*. \end{equation} The stochastic linear bandits problem is well studied in the literature.
By exploiting the tool of upper confidence bounds, various approaches demonstrate an $\Ot(d\sqrt{T})$ regret~\citep{NIPS'07:bandit-lower-bound,NIPS'11:AY-linear-bandits},\footnote{We adopt the notation $\Ot(\cdot)$ to suppress logarithmic factors in the time horizon $T$.} which matches the $\Omega(d\sqrt{T})$ lower bound established by~\citet{NIPS'07:bandit-lower-bound}, up to $\log T$ factors. However, the observation model~\eqref{eq:LB-model-assume} assumes that the regression parameter $\theta_*$ is fixed, which is unfortunately hard to satisfy in real-life applications, because data are usually collected in non-stationary environments. For instance, in recommender systems the regression parameter models customers' interests, which could vary over time when customers look through product pages. Therefore, it is crucial to equip stochastic linear bandits with the capability of handling non-stationarity. To address the above issue,~\citet{AISTATS'19:window-LB} proposed the \emph{non-stationary} linear bandits model, which assumes that the reward $r_t$ satisfies \begin{equation*} \label{eq:LB-model-assume-dynamic} \E[r_t|X_t] = X_t^\T \theta_t, \end{equation*} where $\theta_t$ is the unknown regression parameter at iteration $t$. Different from the standard SLB setting in~\eqref{eq:LB-model-assume}, non-stationary linear bandits allow the unknown parameter to vary over time; its total variation is measured by the path-length, defined as $P_T = \sum_{t=2}^{T} \norm{\theta_{t-1} - \theta_t}_2$, which naturally quantifies the non-stationarity of the environment. The player's goal is to minimize the following (pseudo) \emph{dynamic regret}, \begin{equation} \label{eq:regret-dynamic} \textnormal{D-Regret}_T = \sum_{t=1}^{T} \max_{\x \in \X} \x^{\T}\theta_t - \sum_{t=1}^{T} X_t^\T \theta_t, \end{equation} namely, the cumulative regret against the optimal strategy that has full information of the unknown parameters. Recently,~\citet{AISTATS'19:window-LB} first established an $\Omega(d^{2/3}T^{2/3}P_T^{1/3})$ minimax lower bound for the non-stationary linear bandits problem. On the upper bound side,~\citet{AISTATS'19:window-LB} proposed an online UCB-type algorithm called WindowUCB, which is based on the sliding window least square estimator to track the evolving parameters. Subsequently,~\citet{NIPS'19:weighted-LB} developed the WeightUCB algorithm, which adopted the weighted least square estimator for parameter estimation. These works prove an $\Ot(d^{2/3}T^{2/3}P_T^{1/3})$ dynamic regret guarantee for their algorithms, matching the aforementioned lower bound up to $\log T$ factors. However, we show that a serious technical flaw makes their arguments and regret guarantees ungrounded. We revisit the analysis and demonstrate that it is actually impossible to upper bound the crucial term in their argument by the desired quantity as they expected. Further, we present a fix. Without modifying the original algorithms, we prove that they attain an $\Ot(d^{7/8}T^{3/4}P_T^{1/4})$ dynamic regret. This is the first contribution of this paper. Furthermore, although these two strategies~\citep{AISTATS'19:window-LB,NIPS'19:weighted-LB} attain nice dynamic regret guarantees (after fixing the technical flaws), their algorithms and analyses are fairly complicated. Instead, we discover that a quite simple algorithm based on the \emph{restarted strategy} (simply running a UCB-style algorithm and restarting it periodically), surprisingly, achieves the same dynamic regret guarantee and is more efficient.
This is the second contribution of this paper. Indeed, our proposed algorithm enjoys the following three advantages compared with previous studies. \begin{itemize} \item The proposed algorithm for non-stationary linear bandits is very simple, and its dynamic regret is thus easy to analyze, exploiting only the standard self-normalized concentration inequality for classical stochastic linear bandits. \item Compared with WindowUCB, the sliding window least square based approach~\citep{AISTATS'19:window-LB}, our approach supports online updates and operates in a one-pass manner \emph{without} storing historical data. Meanwhile, WindowUCB demands $\O(w)$ memory, where $w$ is the window length; by contrast, our approach only requires \emph{constant} memory. \item Compared with WeightUCB, the weighted least square based approach~\citep{NIPS'19:weighted-LB}, our approach and analysis are much simpler, without involving other complicated deviation results. Additionally, WeightUCB maintains and manipulates the covariance matrix and its variant, and thus takes a longer running time. \end{itemize} Overall, our approach is more friendly to resource-constrained learning scenarios due to its simplicity. In the following, we start with a brief review of related work in Section~\ref{sec:related-work}. Then, Section~\ref{sec:infinite-arm} presents our proposed approach and the theoretical results. Section~\ref{sec:analysis} provides the analysis. We further provide empirical studies in Section~\ref{sec:experiment} and finally conclude the paper in Section~\ref{sec:conclusion}. Appendix~\ref{appendix:tech-lemmas} collects technical lemmas. \section{Related Work} \label{sec:related-work} Online learning in non-stationary environments has drawn considerable attention recently, in both the full-information and the bandit settings. We focus on related work in the bandit setting. The non-stationary multi-armed bandits problem with abrupt changes was first studied by~\citet{JMLR'02:Auer-linear-bandits}. Denoting by $K$ the number of arms and by $L$ the number of distribution changes,~\citet{JMLR'02:Auer-linear-bandits} proposed \textsc{Exp3.S}, a variant of \textsc{Exp3}, which achieves an $\Ot(\sqrt{KLT})$ regret bound when $L$ is known. The rate is minimax optimal up to $\log T$ factors. Later studies demonstrated that $\Ot(\sqrt{KLT})$ regret is attainable by sliding window and weighted penalty strategies~\citep{ALT'11:switch-MAB}, as well as by the restarted strategy~\citep{journal'17:restart-MAB}. All these algorithms require the number of changes $L$ as an input parameter, which is undesired in practice. Recently,~\citet{COLT19:Auer-unknown-bandits} achieved a near-optimal rate $\Ot(\sqrt{KLT})$ without prior knowledge of $L$. On the other hand,~\citet{SS'19:non-stationary-MAB} studied the non-stationary MAB with slowly changing distributions, and proved an $\Ot((K\log K)^{1/3}V_T^{1/3}T^{2/3})$ dynamic regret, where $V_T = \sum_{t=2}^{T} \norm{\boldsymbol{\mu}_t - \boldsymbol{\mu}_{t-1}}_{\infty}$ is the total variation of changes in reward distributions. The non-stationary linear bandits problem was first studied by~\citet{AISTATS'19:window-LB}. The authors established an $\Omega(d^{2/3}T^{2/3}P_T^{1/3})$ minimax lower bound, and then proposed the WindowUCB algorithm based on the sliding window least square, achieving an $\Ot(d^{7/8}T^{3/4}P_T^{1/4})$ dynamic regret (after fixing the technical gap). Nevertheless, to implement the sliding window least square, WindowUCB needs to store historical data in a buffer.
A natural replacement is the weighted least square, which supports online updates and enjoys both nice empirical performance and sound theoretical guarantees~\citep{control'93:Guo-ffRLS,TKDE'21:DFOP}. Based on this idea,~\citet{NIPS'19:weighted-LB} proposed the WeightUCB algorithm and proved that the approach attains the same dynamic regret. Nevertheless, both the algorithmic design and the regret analysis of WeightUCB are fairly complicated. Besides, WeightUCB needs to maintain and manipulate the covariance matrix and a variant of it (of the same size), which leads to a noticeably longer running time. Finally, both WindowUCB and WeightUCB require the unknown quantity $P_T$ as an input. To avoid this limitation,~\citet{AISTATS'19:window-LB} developed the bandits-over-bandits mechanism as a meta-algorithm and finally obtained an $\Ot(d^{7/8}T^{3/4}P_T^{1/4})$ parameter-free dynamic regret guarantee. In this work, we first revisit the analysis of two existing algorithms designed for non-stationary linear bandits in the literature~\citep{AISTATS'19:window-LB,NIPS'19:weighted-LB}. We demonstrate that there exists a technical flaw in the analysis, making the claimed dynamic regret guarantee ungrounded. We present a new analysis to fix the technical gap. Next, we propose a simple algorithm based on the restarted strategy for non-stationary linear bandits and show that the simple algorithm can achieve the same dynamic regret guarantee as existing methods. We note that using the restarted strategy in non-stationary environments is not new; it has been applied in various scenarios, including non-stationary online convex optimization~\citep{OR'15:dynamic-function-VT}, MAB with abrupt changes~\citep{journal'17:restart-MAB}, and MAB with gradual changes~\citep{SS'19:non-stationary-MAB}. However, to the best of our knowledge, our work is the first to apply the restarted strategy to non-stationary linear bandits. \section{Our Results} \label{sec:infinite-arm} We first introduce the formal problem setup and then present our approach. \subsection{Problem Setup} \label{sec:setting-LB} \paragraph{Setting.} In non-stationary (infinite-armed) linear bandits, at each iteration $t$, let $X_t \in \X \subseteq \R^d$ denote the contextual information of the chosen arm and $r_t$ denote its associated reward, and the model is assumed to be linearly parameterized, i.e., \begin{equation} \label{eq:model-assume} r_t = X_t^\T\theta_t + \eta_t, \end{equation} where $\theta_t \in \R^d$ is the unknown parameter and $\eta_t$ is the noise satisfying a certain tail condition specified below. As mentioned earlier, to guide the algorithmic design of non-stationary linear bandits, it is natural to employ the following (pseudo) \emph{dynamic regret} as the performance measure: \begin{equation*} \textnormal{D-Regret}_T = \sum_{t=1}^{T} \max_{\x \in \X} \x^{\T}\theta_t - \sum_{t=1}^{T} X_t^{\T}\theta_t, \end{equation*} which is the cumulative regret against the optimal strategy that has full information of the unknown parameters. \paragraph{Assumptions.} We assume the noise $\eta_t$ to be conditionally $R$-sub-Gaussian with a fixed constant $R>0$. That is, $\E[\eta_t \mid X_{1 : t}, \eta_{1 : t-1}] = 0$ and, for any $\lambda \in \R$, \begin{equation*} \E[\exp(\lambda \eta_t) \mid X_{1 : t}, \eta_{1 : t-1}] \leq \exp \left(\frac{\lambda^{2} R^{2}}{2}\right). \end{equation*} The feasible set and the unknown parameters are assumed to be bounded, i.e., $\forall \x \in \X$, $\norm{\x}_2 \leq L$, and $\norm{\theta_t}_2 \leq S$ holds for all $t \in [T]$.
For convenience, we further assume $\inner{\x}{\theta_t} \leq 1$, but we will keep the dependence on $L$ and $S$ for better comprehension of the results. \subsection{RestartUCB Algorithm} The RestartUCB algorithm has two key ingredients: upper confidence bounds for trading off exploration and exploitation, and the restarted strategy for handling the non-stationarity of the environment. Specifically, our proposed RestartUCB algorithm proceeds in epochs. At each iteration, we first estimate the unknown regression parameter from historical data within the epoch, and then construct upper confidence bounds of the expected reward for selecting the arm. Finally, we periodically restart the algorithm to be resilient to the drift of the underlying parameter $\theta_t$. In the following, we first specify the estimator used in the RestartUCB algorithm, then investigate its estimate error to construct upper confidence bounds, and finally describe the restarted strategy. \paragraph{Estimator.} At iteration $t$, we adopt the regularized least square estimator by exploiting only the data in the current epoch. More precisely, the estimator $\thetah_t$ is the solution of the following problem: \begin{equation} \label{eq:estimator} \min_{\theta}~\lambda \norm{\theta}_2^2 + \sum_{s=t_0}^{t-1}(X_s^\T \theta - r_s)^2, \end{equation} where $t_0$ is the starting point of the current epoch, and $\lambda > 0$ is the regularization coefficient. Clearly, $\thetah_t$ admits a closed-form solution as \begin{equation} \label{eq:close-form} \thetah_t=V_{t-1}^{-1}\left(\sum_{s=t_0}^{t-1} r_s X_s \right), \end{equation} where $V_{t-1} = \lambda I + \sum_{s=t_0}^{t-1} X_sX_s^\T$. We remark that the estimator~\eqref{eq:close-form} (essentially, both terms $V_{t-1}$ and $\sum_{s=t_0}^{t-1} r_s X_s$) can be updated online \emph{without} storing historical data in memory, owing to the restarted strategy. Furthermore, it is known that~\eqref{eq:estimator} can be \emph{exactly} solved by the recursive least square algorithm, whose solution is provably equivalent to the closed-form expression~\eqref{eq:close-form}. This feature can further accelerate our approach in that it saves the computation of the inverse of the covariance matrix $V_{t-1}$, which is arguably the most time-consuming step at each iteration. By contrast,~\citet{AISTATS'19:window-LB} adopted the following sliding window least square estimator, \begin{equation} \label{eq:sw-close-form} \thetah^{\text{sw}}_t=(V^{\text{sw}}_{t-1})^{-1}\Bigg(\sum_{s=1 \vee (t-w)}^{t-1} r_s X_s\Bigg), \end{equation} where $V^{\text{sw}}_{t-1} = \lambda I + \sum_{s=1 \vee (t-w)}^{t-1} X_sX_s^\T$ is the covariance matrix formed by historical data in the sliding window and $w > 0$ is the window length. For an online update, WindowUCB removes the oldest data item in the window and then adds the new one. So it needs to store the most recent $w$ data items in memory for future updates, resulting in an $\O(w)$ space complexity, which cannot be regarded as constant because the setting of $w$ depends on the time horizon $T$. \paragraph{Upper Confidence Bounds.} Based on the estimator $\thetah_t$ in~\eqref{eq:close-form}, we further construct upper confidence bounds for the expected reward. To this end, we need to investigate the estimate error. Inspired by the analysis of WindowUCB~\citep{AISTATS'19:window-LB}, we have the following result.
\begin{myLemma} \label{lemma:estimate-error} For any $t \in [T]$ and $\delta \in (0,1)$, with probability at least $1-\delta$, the following holds for all $\x \in \X$, \begin{equation} \label{eq:estimate-error} \abs{\x^{\T}(\theta_t - \thetah_t)} \leq L^2\sqrt{\frac{dH}{\lambda}} \sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 + \beta_t \norm{\x}_{V_{t-1}^{-1}}, \end{equation} where $H > 0$ is the restarting period (or epoch size), and $\beta_t$ is the radius of confidence region, \begin{equation} \label{eq:confidence-radius} \beta_t = \sqrt{\lambda} S + R\sqrt{2\log \frac{1}{\delta} + d\log\left(1 + \frac{(t-t_0)L^2}{\lambda d} \right)}. \end{equation} \end{myLemma} \begin{myRemark} \label{remark:estimate-error} The analysis of the estimate error serves as the foundation for designing UCB-type algorithms. In fact, the pioneering study~\citep{AISTATS'19:window-LB} analyzed the estimate error of the sliding window least square estimator for non-stationary linear bandits; however, the technical reasoning suffers from some gaps, which make the estimate error bound and the claimed $\Ot(T^{2/3}P_T^{1/3})$ dynamic regret guarantee ungrounded. The flaw appears in a key technical lemma~\citep[Lemma 3]{arXiv'19:window-LB}, and is unfortunately inherited by later studies including WeightUCB~\citep[Theorem 2]{NIPS'19:weighted-LB}, the early version of this paper~\citep[Lemma 3]{AISTATS'20:restart}, and the perturbation-based method~\citep[Theorem 7]{UAI'20:kim20a}. In this version, we correct the previous results, at the price of another $\sqrt{dH}$ factor appearing in front of the path-length term compared to the original result (see Lemma 1 in the early version of our work~\citep{AISTATS'20:restart}). The additional $\sqrt{dH}$ factor in the estimation error will lead to an $\Ot(T^{3/4}P_T^{1/4})$ dynamic regret, which is slightly worse than the original $\Ot(T^{2/3}P_T^{1/3})$ rate. We present more technical discussions in Section~\ref{sec:revisit}. \end{myRemark} The estimate error~\eqref{eq:estimate-error} essentially suggests an upper confidence bound of the expected reward $\x^\T \theta_t$. Hence, we adopt the principle of \emph{optimism in the face of uncertainty}~\citep{JMLR'02:Auer-linear-bandits} and choose the arm that maximizes its upper confidence bound, \begin{equation} \label{eq:select-criteria} \begin{split} X_t = & \argmax_{\x \in \X} \Big\{ \inner{\x}{\thetah_t} + L^2\sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1}\norm{\theta_p - \theta_{p+1}}_2 + \beta_t \norm{\x}_{V_{t-1}^{-1}} \Big\} \\ = & \argmax_{\x \in \X} \big\{ \inner{\x}{\thetah_t} + \beta_t \norm{\x}_{V_{t-1}^{-1}} \big\}. \end{split} \end{equation} So at iteration $t$, the algorithm first computes the estimator via~\eqref{eq:close-form}, then obtains the confidence radius $\beta_t$ by~\eqref{eq:confidence-radius}, and finally pulls the arm $X_t$ according to the selection criterion~\eqref{eq:select-criteria}. \paragraph{Restarted Strategy.} To handle the changes of the unknown regression parameters, the RestartUCB algorithm proceeds in epochs and restarts the procedure every $H$ iterations, as illustrated in Figure~\ref{figure:restart}. We call the variable $H$ the restarting period or epoch size, which is the key parameter for trading off stability against non-stationarity. In each epoch, RestartUCB performs the UCB-style algorithm described above. We summarize the overall procedure in Algorithm~\ref{alg:restart-UCB}.
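To make the procedure concrete, we also include a minimal Python sketch of RestartUCB over a finite set of candidate arms (the maximization over a general feasible set $\X$ is problem-dependent and is not addressed here); the names are illustrative, and the confidence radius follows~\eqref{eq:confidence-radius}.

\begin{verbatim}
import numpy as np

def restart_ucb(arms, pull, T, H, lam, S, L, R, delta):
    """Sketch of RestartUCB. arms: (K, d) array of candidate actions;
    pull(x): returns the observed reward of action x."""
    d = arms.shape[1]
    for start in range(0, T, H):                  # restart every H rounds
        V = lam * np.eye(d)                       # regularized covariance
        b = np.zeros(d)                           # sum of r_s * X_s in epoch
        for t in range(start, min(start + H, T)):
            Vinv = np.linalg.inv(V)
            theta_hat = Vinv @ b                  # regularized LS estimate
            n = t - start                         # data points in this epoch
            beta = (np.sqrt(lam) * S
                    + R * np.sqrt(2 * np.log(1 / delta)
                                  + d * np.log(1 + n * L ** 2 / (lam * d))))
            ucb = arms @ theta_hat + beta * np.sqrt(
                np.einsum('kd,de,ke->k', arms, Vinv, arms))
            x = arms[np.argmax(ucb)]              # optimistic arm selection
            r = pull(x)
            V += np.outer(x, x)                   # online update, O(1) memory
            b += r * x
\end{verbatim}

In practice, as noted above, the explicit matrix inverse can be avoided by maintaining $V_{t-1}^{-1}$ recursively (e.g., via the Sherman-Morrison formula); the plain form is kept only to keep the sketch short.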
\begin{figure} \caption{Illustration of the RestartUCB algorithm.} \label{figure:restart} \end{figure} \begin{algorithm}[!t] \caption{RestartUCB} \label{alg:restart-UCB} \begin{algorithmic}[1] \REQUIRE time horizon $T$, restarting period $H$, confidence $\delta$, regularizer $\lambda$, scaling parameters $S$ and $L$ \STATE Set epoch counter $j = 1$ \WHILE{$j \leq \lceil T / H \rceil$} \STATE Set $\tau = (j - 1)H$ \STATE Initialize $X_\tau \in \X$ \STATE Set $V_\tau = \lambda I_d$ and $S_\tau = \bm{0}$ \FOR{$t = \tau +1,\ldots,\tau + H - 1$} \STATE Compute $\thetah_t = V_{t-1}^{-1}S_{t-1}$ \STATE Compute $\beta_t$ by~\eqref{eq:confidence-radius} with $t_0 = \tau$ \STATE Select $X_t = \argmax_{\x \in \X} \{ \inner{\x}{\thetah_t} + \beta_t \norm{\x}_{V_{t-1}^{-1}}\}$ \STATE Receive the reward $r_t$ \STATE Update $V_{t} = V_{t-1} + X_t X_t^\T$ and $S_t = S_{t-1} + r_t X_t$ \ENDFOR \STATE Set $j = j + 1$ \ENDWHILE \end{algorithmic} \end{algorithm} We will show in the next subsection that a fixed epoch length is sufficient to achieve the same theoretical guarantees as previous works~\citep{AISTATS'19:window-LB,NIPS'19:weighted-LB}. Nevertheless, since the regret guarantee is not optimal, it would be interesting to see whether an adaptive epoch length based on a certain statistical change detection would give an improved regret guarantee. \subsection{Theoretical Guarantees} \label{sec:infinite-theory} We show that, notwithstanding its simplicity, the RestartUCB algorithm enjoys the same dynamic regret guarantee as the existing methods for non-stationary linear bandits, including WindowUCB~\citep{AISTATS'19:window-LB} and WeightUCB~\citep{NIPS'19:weighted-LB}. In the following, we first analyze the regret within each epoch (Theorem~\ref{thm:dynamic-regret-in-epoch}), and then sum over epochs to obtain the guarantee over the whole time horizon (Theorem~\ref{thm:dynamic-regret}). \begin{myThm} \label{thm:dynamic-regret-in-epoch} For each epoch $\mathcal{E}$ whose size is $H$ and any $\delta \in (0,1)$, with probability at least $1-2\delta$, the dynamic regret within the epoch is upper bounded by \begin{equation*} \begin{split} \DReg(\mathcal{E}) \triangleq {} & \sum_{t \in \mathcal{E}} \max_{\x \in \X} \x^{\T}\theta_t - \sum_{t \in \mathcal{E}} X_t^\T \theta_t \\ \leq {} & 2L^2\sqrt{\frac{d}{\lambda}}\cdot H^{\frac{3}{2}}\mathcal{P}(\Ecal) + 2\beta_H \sqrt{2dH\log\left(1+ \frac{L^2 H}{\lambda d}\right)}, \end{split} \end{equation*} where $\beta_H = \sqrt{\lambda} S + R\sqrt{2\log(1/\delta) + d\log\left(1 + \frac{HL^2}{\lambda d} \right)}$ is the confidence radius of the epoch, and $\mathcal{P}(\mathcal{E})$ denotes the path-length within epoch $\mathcal{E}$, i.e., $\mathcal{P}(\mathcal{E}) = \sum_{t\in \mathcal{E}} \norm{\theta_{t-1} - \theta_t}_2$. \end{myThm} By summing the dynamic regret over epochs, we therefore obtain the dynamic regret over the whole time horizon. \begin{myThm} \label{thm:dynamic-regret} With probability at least $1-1/T$, the dynamic regret of RestartUCB (Algorithm~\ref{alg:restart-UCB}) over the whole time horizon is upper bounded by \begin{equation} \label{eq:main-result} \DReg_T = \sum_{t=1}^{T} \max_{\x \in \X} \x^{\T}\theta_t - \sum_{t=1}^{T} X_t^\T \theta_t \leq \Ot\left(d^{\frac{1}{2}}H^{\frac{3}{2}} P_T + dT/\sqrt{H}\right), \end{equation} where $P_T = \sum_{t=2}^{T} \norm{\theta_{t-1} - \theta_t}_2$ is the path-length, and $H$ is the restarting period.
Furthermore, by setting the restarting period optimally as
\begin{equation}
  \label{eq:optimal-tuning-parameter}
  H = \min\{H^*, T\} = \min\left\{ \floor{d^{\frac{1}{4}}T^{\frac{1}{2}} P_T^{-\frac{1}{2}}}, T \right\},
\end{equation}
RestartUCB achieves the following dynamic regret,
\begin{equation}
  \label{eq:optimal-tuning}
  \DReg_T \leq
  \begin{cases}
    \Ot\big(d^{\frac{7}{8}} T^{\frac{3}{4}} P_T^{\frac{1}{4}}\big) & \mbox{ when } P_T \geq \sqrt{d}/T, \\
    \Ot(d\sqrt{T}) & \mbox{ when } P_T < \sqrt{d}/T.
  \end{cases}
\end{equation}
\end{myThm}
\begin{myRemark}
As shown in Theorem~\ref{thm:dynamic-regret}, the optimal restarting period $H^*$ in~\eqref{eq:optimal-tuning-parameter} requires prior information on the path-length $P_T$, which is generally unavailable. We discuss how to remove this undesired dependence in the next subsection.
\end{myRemark}
\subsection{Adapting to Unknown Non-stationarity}
\label{sec:unknown-path-length}
As mentioned earlier, the restarting period $H$ plays a key role in dealing with the non-stationarity of the environment. Intuitively, a small restarting period should be employed when the environment changes dramatically, and a large one should be used when the environment is relatively stable. Our theoretical result validates this intuition: as one can see in Theorem~\ref{thm:dynamic-regret}, the restarting period $H$ trades off the path-length term $P_T$ against the total horizon $T$, which necessitates an appropriate balance. Nevertheless, the optimal configuration of the restarting period, as shown in~\eqref{eq:optimal-tuning-parameter}, requires prior knowledge of the path-length $P_T$, which essentially measures the non-stationarity of the underlying environment and is thus generally unavailable.

To compensate for the lack of information about the environmental non-stationarity, we design an online ensemble method~\citep{book'12:ensemble-zhou} based on the meta-base framework, which has recently been used in full-information non-stationary online learning~\citep{ICML'15:Daniely-adaptive,AISTATS'17:coin-betting-adaptive,NIPS'18:Zhang-Ader,NIPS'19:Zheng,AISTATS'20:Zhang,UAI'20:simple,NIPS'20:sword} and non-stationary bandit online learning~\citep{COLT'17:Corralling-bandits,AISTATS'19:window-LB,AISTATS'20:BCO}. In this paper, to address this issue for non-stationary linear bandits, we employ the \emph{Bandits-over-Bandits} (BOB) mechanism, proposed by~\citet{AISTATS'19:window-LB} for designing a parameter-free algorithm for non-stationary linear bandits based on the sliding window least square estimator.
\begin{figure}
\caption{Illustration of the Bandits-over-Bandits mechanism~\citep{arXiv'19:window-LB}.}
\label{figure:restartBOB}
\end{figure}
In the following, we describe how to apply the BOB mechanism to eliminate the requirement of knowing the path-length in RestartUCB. The essential idea is to use the RestartUCB algorithm as the base-learner to handle non-stationary linear bandits with a given restarting period, and, on top of that, to employ a second-layer bandit algorithm as the meta-learner to adaptively learn the optimal restarting period. We name RestartUCB equipped with the Bandits-over-Bandits mechanism ``RestartUCB-BOB''. Figure~\ref{figure:restartBOB} illustrates the meta-base structure of the RestartUCB-BOB algorithm. To be more concrete, although the exact value of the optimal restarting period $H^*$ (or, equivalently, of the path-length $P_T$) is unknown, we can make guesses about its possible value, since $P_T$ is always bounded.
Then, we can use a meta-algorithm to adaptively track the best restarting period. To achieve this goal, RestartUCB-BOB first needs to examine the performance of the base-learner with different restarting periods. Therefore, RestartUCB-BOB proceeds in several episodes; in each episode, it runs the base algorithm RestartUCB with a particular restarting period and receives the cumulative reward over the episode as the reward feedback. We denote by $\Delta \in [T]$ the episode length. The restarting period is adaptively adjusted by employing \textsc{Exp3}~\citep{SICOMP'02:Auer-EXP3} as the meta-algorithm.

In the configuration of RestartUCB-BOB, the episode length is set to $\Delta = \ceil{d\sqrt{T}}$. Further, the pool of candidate restarting periods $\H$ is configured as follows:
\begin{equation}
  \label{eq:candidate-pool}
  \H = \left\{H_i = \floor{ d^{\frac{1}{4}} S^{-\frac{1}{2}} \cdot 2^{i-1}} \mid i \in [N]\right\},
\end{equation}
where $N = \ceil{\frac{1}{2}\log_2(ST)} + 1$ is the number of candidate restarting periods; recall that $S$ is the upper bound on the norm of the underlying regression parameters, as specified in Section~\ref{sec:setting-LB}. Let $H_{\min}$ ($H_{\max}$) be the minimal (maximal) restarting period in the pool $\H$; it is easy to verify that
\begin{equation}
  \label{eq:min-max-epoch-size}
  H_{\min} = \floor{d^{\frac{1}{4}} S^{-\frac{1}{2}}}, \quad H_{\max} = \floor{d^{\frac{1}{4}}\sqrt{T}} \leq \Delta.
\end{equation}
To conclude, RestartUCB-BOB has a two-layer meta-base structure and proceeds in episodes. In each episode, the base-learner is RestartUCB with a particular restarting period from the candidate pool, determined by the meta-learner \textsc{Exp3}; the cumulative reward of the base-learner within the episode is then fed back to the meta-learner so that it can adaptively choose a better restarting period. We refer the reader to Section 7 of~\citet{arXiv'19:window-LB} for more algorithmic details. The following theorem presents the dynamic regret guarantee for RestartUCB-BOB. Note that the algorithm does not require prior knowledge of the path-length $P_T$.
\begin{myThm}
\label{thm:dynamic-regret-BOB}
\textsc{RestartUCB} together with the Bandits-over-Bandits mechanism satisfies
\begin{equation}
  \label{eq:dynamic-order-infinite}
  \DReg_T = \sum_{t=1}^{T} \max_{\x \in \X} \x^{\T}\theta_t - \sum_{t=1}^{T} X_t^\T \theta_t \leq \Ot\big(d^{\frac{7}{8}} T^{\frac{3}{4}} P_T^{\frac{1}{4}}\big),
\end{equation}
without requiring the path-length $P_T$ ahead of time.
\end{myThm}
\begin{myRemark}
From the theorem, we observe that RestartUCB-BOB enjoys the same dynamic regret bound as RestartUCB with the oracle tuning~\eqref{eq:optimal-tuning} shown in Theorem~\ref{thm:dynamic-regret}, while requiring no prior knowledge of the environmental non-stationarity measure $P_T$. Nevertheless, the attained dynamic regret upper bound still exhibits a gap to the $\Omega(d^{2/3}T^{2/3}P_T^{1/3})$ minimax lower bound for non-stationary linear bandits~\citep{AISTATS'19:window-LB}. Therefore, it remains open how to obtain rate-optimal and parameter-free dynamic regret. Indeed, even with the oracle tuning, RestartUCB does not achieve the optimal dynamic regret, and we are not sure whether this is due to a limitation of the regret analysis or of the algorithm itself.
Finally, we note that recent studies achieve near-optimal dynamic regret without prior information for multi-armed bandits~\citep{EWRL'18:Auer-MAB,COLT19:Auer-unknown-bandits} and contextual bandits~\citep{COLT'19:para-free-contextual} by means of change detection. These techniques could be useful in designing parameter-free algorithms for non-stationary linear bandits, which we will investigate in the future.
\end{myRemark}
\section{Analysis}
\label{sec:analysis}
In this section, we provide the proofs of the theoretical results presented in the previous section.
\subsection{Proof of Lemma~\ref{lemma:estimate-error}}
\begin{proof}
From the model assumption~\eqref{eq:model-assume} and the estimator~\eqref{eq:close-form}, we can verify that the estimate error can be decomposed as
\begin{equation*}
  \thetah_t - \theta_t = V_{t-1}^{-1}\Bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t) + \sum_{s=t_0}^{t-1} \eta_s X_s - \lambda \theta_t\Bigg).
\end{equation*}
Therefore, by the Cauchy-Schwarz inequality, we know that for any $\x \in \X$,
\begin{equation}
  \label{eq:bound-cauchy}
  \abs{\x^{\T}(\thetah_t - \theta_t)} \leq \norm{\x}_2\cdot A_t + \norm{\x}_{V_{t-1}^{-1}}\cdot B_t,
\end{equation}
where
\begin{align*}
  A_t = \left\| V_{t-1}^{-1}\left(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t)\right)\right\|_2, \mbox{ and } B_t = \left\|\sum_{s=t_0}^{t-1} \eta_s X_s - \lambda \theta_t\right\|_{V_{t-1}^{-1}}.
\end{align*}
We will give upper bounds for these two terms separately, as summarized in the following two lemmas.
\begin{myLemma} \label{lemma:path-length-A_t} For any $t \in [T]$, we have
\begin{equation}
  \label{eq:path-length-A_t}
  \left\| V_{t-1}^{-1}\bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t)\bigg)\right\|_2 \leq L \sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2.
\end{equation}
\end{myLemma}
\begin{myLemma} \label{lemma:path-length-B_t} For any $\delta \in (0,1)$, with probability at least $1-\delta$, the following holds for all $t \in [T]$,
\begin{equation}
  \left\|\sum_{s=t_0}^{t-1} \eta_s X_s - \lambda \theta_t\right\|_{V_{t-1}^{-1}} \leq \sqrt{\lambda} S + R\sqrt{2\log \frac{1}{\delta} + d\log\left(1 + \frac{(t-t_0)L^2}{\lambda d} \right)},
\end{equation}
where $\beta_t \triangleq \sqrt{\lambda} S + R\sqrt{2\log \frac{1}{\delta} + d\log\left(1 + \frac{(t-t_0)L^2}{\lambda d} \right)}$ is the confidence radius used in RestartUCB.
\end{myLemma}
Based on the inequality~\eqref{eq:bound-cauchy}, Lemma~\ref{lemma:path-length-A_t}, Lemma~\ref{lemma:path-length-B_t}, and the boundedness of the feasible set, we have, with probability at least $1-\delta$, for any $\x \in \X$,
\[
  \abs{\x^{\T}(\theta_t - \thetah_t)} \leq L^2\sqrt{\frac{dH}{\lambda}} \sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 + \beta_t \norm{\x}_{V_{t-1}^{-1}},
\]
which completes the proof.
\end{proof}
We proceed to prove Lemma~\ref{lemma:path-length-A_t} and Lemma~\ref{lemma:path-length-B_t}. It is worth mentioning that previous works on non-stationary linear bandits~\citep{arXiv'19:window-LB,NIPS'19:weighted-LB,AISTATS'20:restart} also need to upper bound quantities similar to $A_t$, but their results are generally invalid due to a serious technical flaw that will be explicitly stated in Section~\ref{sec:revisit}. Lemma~\ref{lemma:path-length-A_t} serves as the key component to fix the existing results.
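As a quick sanity check, the short Python script below (our own illustration; it is not part of the analysis, and all names are ours) numerically verifies the estimate error decomposition used at the beginning of the above proof on synthetic data.
\begin{verbatim}
import numpy as np

# Verify:  theta_hat - theta_t
#   = V^{-1}( sum_s X_s X_s^T (theta_s - theta_t) + sum_s eta_s X_s - lam*theta_t ),
# with V = lam*I + sum_s X_s X_s^T and theta_hat = V^{-1} sum_s r_s X_s.
rng = np.random.default_rng(1)
d, n, lam = 3, 40, 0.5
X = rng.normal(size=(n, d))                  # pulled arms X_s
thetas = rng.normal(size=(n, d))             # drifting parameters theta_s
eta = rng.normal(scale=0.1, size=n)          # noise eta_s
r = np.einsum('ij,ij->i', X, thetas) + eta   # rewards r_s = X_s^T theta_s + eta_s
theta_t = rng.normal(size=d)                 # parameter at the current round t

V = lam * np.eye(d) + X.T @ X
theta_hat = np.linalg.solve(V, X.T @ r)

term_drift = X.T @ np.einsum('ij,ij->i', X, thetas) - (X.T @ X) @ theta_t
term_noise = X.T @ eta
rhs = np.linalg.solve(V, term_drift + term_noise - lam * theta_t)
print(np.allclose(theta_hat - theta_t, rhs))   # expected: True
\end{verbatim}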
\begin{proof}[{Proof of Lemma~\ref{lemma:path-length-A_t}}]
Notice that
\begin{align}
  \left\|V_{t-1}^{-1} \bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t)\bigg)\right\|_2 = {} & \left\|V_{t-1}^{-1} \bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T \Big(\sum_{p=s}^{t-1} (\theta_p - \theta_{p+1})\Big)\bigg)\right\|_2 \nonumber \\
  = {} & \left\|V_{t-1}^{-1} \bigg(\sum_{p=t_0}^{t-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T (\theta_p - \theta_{p+1})\Big)\bigg)\right\|_2 \nonumber \\
  \leq {} & \sum_{p=t_0}^{t-1} \left\|V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big) (\theta_p - \theta_{p+1})\right\|_2 \nonumber \\
  \leq {} & \sum_{p=t_0}^{t-1} \left\|V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big)\right\|_2 \norm{\theta_p - \theta_{p+1}}_2.\nonumber
\end{align}
We now derive an upper bound for the term $\|V_{t-1}^{-1} (\sum_{s=t_0}^{p} X_s X_s^\T)\|_2$. Denote by $\S(1) = \{\x \mid \norm{\x}_2 = 1\}$ the unit sphere.
\begin{align*}
  \left \| V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big) \right \|_2 = {} & \sup_{\z \in \S(1)} \sup_{\widetilde{\z} \in \S(1)} \left| \z^\T V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big) \widetilde{\z} \right| \\
  = {} & \left| \z_*^\T V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big) \widetilde{\z}_* \right| \\
  \leq {} & \norm{\z_*}_{V_{t-1}^{-1}} \left\| \sum_{s=t_0}^{p} X_s (X_s^\T \widetilde{\z}_*)\right\|_{V_{t-1}^{-1}}\\
  \leq {} & \norm{\z_*}_{V_{t-1}^{-1}} \sum_{s=t_0}^{p} \abs{X_s^\T \widetilde{\z}_*} \norm{X_s}_{V_{t-1}^{-1}}\\
  \leq {} & \frac{L}{\sqrt{\lambda}} \sum_{s=t_0}^{p} \norm{X_s}_{V_{t-1}^{-1}} \\
  \leq {} & \frac{L}{\sqrt{\lambda}} \sqrt{H} \sqrt{\sum_{s=t_0}^{p} \norm{X_s}_{V_{t-1}^{-1}}^2} \\
  \leq {} & L \sqrt{\frac{dH}{\lambda}}.
\end{align*}
In the above, the first equality makes use of the following property of the matrix $2$-norm: for a matrix $M \in \R^{m \times n}$, $\norm{M}_2 = \sup_{\norm{\x}_2 = 1} \sup_{\norm{\y}_2 = 1} \abs{\x^\T M \y}$, whose proof can be found in the book~\citep[Chapter 5, Eq.~(5.2.9)]{meyer2000matrix} and also in Lemma~\ref{lemma:matrix-2-norm} for self-containedness. Further, $(\z_*,\widetilde{\z}_*)$ denotes an optimizer of the optimization problem on the right-hand side of the first line. The third step is the Cauchy-Schwarz inequality with respect to the inner product induced by $V_{t-1}^{-1}$, and the fourth step is the triangle inequality. The fifth step uses $\abs{X_s^\T \widetilde{\z}_*} \leq \norm{X_s}_2 \norm{\widetilde{\z}_*}_2 \leq L$, together with the fact that for any $\x$ we have $\norm{\x}_{V_{t-1}^{-1}} \leq \norm{\x}_2/\sqrt{\lambda}$ as $V_{t-1} \succeq \lambda I$, so that $\norm{\z_*}_{V_{t-1}^{-1}} \leq 1/\sqrt{\lambda}$. The second to last step holds by the Cauchy-Schwarz inequality, since the sum contains at most $H$ terms. Besides, the last step follows from the fact that for any $p \in \{t_0,\ldots,t-1\}$,
\begin{align*}
  {} & \sum_{s=t_0}^{p} \norm{X_s}_{V_{t-1}^{-1}}^2 = \sum_{s=t_0}^{p} \mathrm{Tr}(X_s^\T V_{t-1}^{-1} X_s) \\
  = {} & \mathrm{Tr}\left(V_{t-1}^{-1} \sum_{s=t_0}^{p} X_s X_s^\T\right) \\
  \leq {} & \mathrm{Tr}\left(V_{t-1}^{-1} \sum_{s=t_0}^{p} X_s X_s^{\T}\right) + \sum_{s=p+1}^{t-1}X_s^{\T}V_{t-1}^{-1} X_s + \lambda \sum_{i=1}^{d} \mathbf{e}_i^{\T} V_{t-1}^{-1} \mathbf{e}_i \\
  = {} & \mathrm{Tr}\left(V_{t-1}^{-1} \sum_{s=t_0}^{p} X_s X_s^{\T}\right) + \mathrm{Tr}\left(V_{t-1}^{-1} \sum_{s=p+1}^{t-1} X_s X_s^{\T}\right) + \mathrm{Tr}\left(V_{t-1}^{-1} \lambda \sum_{i=1}^{d} \mathbf{e}_i \mathbf{e}_i^{\T}\right) \\
  = {} & \mathrm{Tr}(I_d) = d.
\end{align*}
Hence, we complete the proof.
\end{proof}
\begin{proof}[{Proof of Lemma~\ref{lemma:path-length-B_t}}]
From the self-normalized concentration inequality~\citep[Theorem 1]{NIPS'11:AY-linear-bandits}, restated in Theorem~\ref{thm:self-normalize} of Section~\ref{appendix:tech-lemmas}, with probability at least $1-\delta$ we have, for all $t$,
\begin{align*}
  \left\| \sum_{s=t_0}^{t-1} \eta_s X_s \right\|_{V_{t-1}^{-1}} \overset{\eqref{eq:self-normal-concentration}}{\leq} {} & \sqrt{2R^2 \log\left(\frac{\det(V_{t-1})^{1/2} \det(\lambda I)^{-1/2}}{\delta}\right)}\\
  \leq {} & R\sqrt{2\log\frac{1}{\delta} + d\log\left(1 + \frac{(t-t_0)L^2}{\lambda d}\right)},
\end{align*}
where the last inequality is obtained from the analysis of the determinant, as shown in the proof of Lemma~\ref{lemma:potential}. Meanwhile, since $V_{t-1} \succeq \lambda I$, we know that
\begin{align*}
  \norm{\lambda \theta_t}_{V^{-1}_{t-1}}^2 \leq & \frac{1}{\lambda_{\min}(V_{t-1})} \norm{\lambda \theta_t}_2^2 \leq \frac{1}{\lambda} \norm{\lambda \theta_t}_2^2 \leq \lambda S^2.
\end{align*}
Therefore, the upper bound on $B_t$ is immediately obtained by combining the above two inequalities with the triangle inequality.
\end{proof}
\subsection{Proof of Theorems~\ref{thm:dynamic-regret-in-epoch} and~\ref{thm:dynamic-regret}}
\begin{proof}[Proof of Theorem~\ref{thm:dynamic-regret-in-epoch}]
Due to Lemma~\ref{lemma:estimate-error} and the fact that $X_t^*, X_t \in \X$, each of the following holds with probability at least $1-\delta$,
\begin{align*}
  \inner{X_t^*}{\theta_t} \leq {}& \inner{X_t^*}{\thetah_t} + L^2\sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 + \beta_t \norm{X_t^*}_{V_{t-1}^{-1}},\\
  \inner{X_t}{\theta_t} \geq {}& \inner{X_t}{\thetah_t} - L^2\sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 - \beta_t \norm{X_t}_{V_{t-1}^{-1}}.
\end{align*}
By the union bound, the following holds with probability at least $1-2\delta$,
\begin{align*}
  {} & \langle X_t^*,\theta_t\rangle - \langle X_t,\theta_t\rangle\\
  \leq {} & \inner{X_t^*}{\thetah_t} - \inner{X_t}{\thetah_t} + 2L^2\sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 + \beta_t (\norm{X_t^*}_{V_{t-1}^{-1}} + \norm{X_t}_{V_{t-1}^{-1}})\\
  \leq {} & 2L^2\sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 + 2 \beta_t \norm{X_t}_{V_{t-1}^{-1}},
\end{align*}
where the last step uses the arm selection criterion~\eqref{eq:select-criteria}, which implies $\inner{X_t^*}{\thetah_t} + \beta_t \norm{X_t^*}_{V_{t-1}^{-1}} \leq \inner{X_t}{\thetah_t} + \beta_t \norm{X_t}_{V_{t-1}^{-1}}$. Hence, the dynamic regret within epoch $\Ecal$ is bounded by
\begin{align*}
  \DReg(\Ecal) \leq {} & \sum_{t\in\Ecal} 2L^2\sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2 + 2 \beta_t \norm{X_t}_{V_{t-1}^{-1}}\\
  \leq {} & 2L^2\sqrt{\frac{d}{\lambda}}H^{\frac{3}{2}}\mathcal{P}(\Ecal) + 2\beta_H \sqrt{2dH\log\left(1+ \frac{L^2 H}{\lambda d}\right)},
\end{align*}
where the last inequality holds due to the standard elliptical potential lemma (Lemma~\ref{lemma:potential}), whose statement and proof are presented in Section~\ref{appendix:tech-lemmas}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:dynamic-regret}]
By taking the union bound over the dynamic regret of all $\ceil{T/H}$ epochs, we know that the following holds with probability at least $1-2/T$,
\begin{align*}
  \DReg_T = \sum_{s=1}^{\ceil{T/H}}\DReg(\mathcal{E}_s) \leq 2L^2\sqrt{\frac{d}{\lambda}}H^{\frac{3}{2}} P_T + 2T\widetilde{\beta}_H\sqrt{\frac{2d}{H}\log\left(1+ \frac{L^2 H}{\lambda d}\right)},
\end{align*}
where $\widetilde{\beta}_H = \sqrt{\lambda} S + R\sqrt{2\log(T\ceil{\frac{T}{H}}) + d\log\left(1 + \frac{HL^2}{\lambda d} \right)}$. Ignoring logarithmic factors in the time horizon $T$, we finally obtain
\[
  \DReg_T \leq \Ot\big(d^{\frac{1}{2}}H^{\frac{3}{2}}P_T + dT/\sqrt{H}\big).
\]
When $P_T < \sqrt{d}/T$ (which corresponds to a small amount of non-stationarity), we simply set the restarting period to $T$ and achieve an $\Ot(d\sqrt{T})$ regret bound. Note that under such a configuration our algorithm actually performs no restart and thereby recovers the standard LinUCB algorithm for stationary stochastic linear bandits~\citep{NIPS'11:AY-linear-bandits}. In the non-degenerate case $P_T \geq \sqrt{d}/T$, we set the restarting period optimally as $H = \floor{d^{1/4}T^{1/2} P_T^{-1/2}}$ and attain an $\Ot(d^{\frac{7}{8}} T^{\frac{3}{4}} P_T^{\frac{1}{4}})$ dynamic regret bound. This ends the proof.
\end{proof}
\subsection{Revisiting Existing Results}
\label{sec:revisit}
Previous studies claimed an $\Ot(T^{2/3}P_T^{1/3})$ dynamic regret for non-stationary linear bandits; however, the technical reasoning suffers from some gaps, which make the overall regret guarantee ungrounded. In the following, we first point out the flaws in the original proofs and then discuss the key component of our new analysis.

Indeed, the flaw appears in a key technical lemma for the regret analysis of WindowUCB, the pioneering study of non-stationary linear bandits~\citep[Lemma 3]{arXiv'19:window-LB}. The flaw is unfortunately inherited by later studies, including WeightUCB~\citep[Theorem 2]{NIPS'19:weighted-LB}, the early version of this paper~\citep[Lemma 3]{AISTATS'20:restart}, and the perturbation based method~\citep[Theorem 7]{UAI'20:kim20a}. To be more concrete, Lemma 3 of~\citet{arXiv'19:window-LB} (see also Lemma 3 of~\citet{AISTATS'20:restart}) claims that for any $t \in [T]$,
\begin{equation}
  \label{eq:claim}
  \left\| V_{t-1}^{-1}\bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t)\bigg)\right\|_2 \leq \sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2.
\end{equation}
Actually, the quantity $\| V_{t-1}^{-1}(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t))\|_2$ is of great importance for the regret analysis of non-stationary linear bandit algorithms, because it is eventually converted into the path-length of the unknown regression parameters. Our Lemma~\ref{lemma:path-length-A_t} gives an upper bound of $L \sqrt{\frac{dH}{\lambda}}\sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2$, which has a worse dependence on the dimension $d$ and the restarting period $H$ compared to~\eqref{eq:claim}. However, we will demonstrate that the proof of the above claim~\eqref{eq:claim} suffers from serious technical flaws, which make the result ungrounded.
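Before restating their argument, we note that the claim~\eqref{eq:claim} can also be checked numerically. The short Python script below (our own illustration, using the feature construction of Theorem~\ref{thm:impossibility} below with a single parameter change at $p = \floor{H/3}$) exhibits an instance in which the left-hand side of~\eqref{eq:claim} exceeds the path-length by a large factor.
\begin{verbatim}
import numpy as np

H, lam = 3000, 1.0
p = H // 3
# features of Theorem (impossibility): one unit vector is pulled for s <= p,
# and another unit vector for p < s <= H
x_a = np.array([1.0, np.sqrt(p - 1)]) / np.sqrt(p)
x_b = np.array([1.0, np.sqrt(H - p - 1)]) / np.sqrt(H - p)
A = p * np.outer(x_a, x_a)            # sum_{s <= p} X_s X_s^T
B = (H - p) * np.outer(x_b, x_b)      # sum_{s > p}  X_s X_s^T
V = A + B + lam * np.eye(2)
M = np.linalg.solve(V, A)             # M = V^{-1} A

# theta_s = theta_a for s <= p and theta_s = theta_t = theta_b afterwards, with
# theta_a - theta_b the top right singular vector of M (unit norm, path-length 1)
_, sigma, Vh = np.linalg.svd(M)
diff = Vh[0]
lhs = np.linalg.norm(M @ diff)        # left-hand side of the claim
print(lhs)                            # about 5.85 for H = 3000, far above 1
\end{verbatim}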
We restate their proof~\citep[Appendix B]{arXiv'19:window-LB} as follows:
\begin{align}
  {} & \left\|V_{t-1}^{-1} \bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T (\theta_s - \theta_t)\bigg)\right\|_2 \nonumber \\
  = {} & \left\|V_{t-1}^{-1} \bigg(\sum_{s=t_0}^{t-1} X_s X_s^\T \Big(\sum_{p=s}^{t-1} (\theta_p - \theta_{p+1})\Big)\bigg)\right\|_2 \nonumber \\
  = {} & \left\|V_{t-1}^{-1} \bigg(\sum_{p=t_0}^{t-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T (\theta_p - \theta_{p+1})\Big)\bigg)\right\|_2 \nonumber \\
  \leq {} & \sum_{p=t_0}^{t-1} \left\|V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big) (\theta_p - \theta_{p+1})\right\|_2 \nonumber \\
  \leq {} & \sum_{p=t_0}^{t-1} \sigma_{\max}\left(V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big)\right) \norm{\theta_p - \theta_{p+1}}_2 \nonumber\\
  \leq {} & \sum_{p=t_0}^{t-1} \norm{\theta_p - \theta_{p+1}}_2, \label{eq:step}
\end{align}
where $\sigma_{\max} (\cdot)$ is the largest singular value. The key is the last step~\eqref{eq:step}, but its proof is questionable: one needs the following to hold universally for all $p \in \{t_0,\ldots,t-1\}$,
\begin{equation}
  \label{eq:key-to-hold}
  \sigma_{\max}\left(V_{t-1}^{-1} \Big(\sum_{s=t_0}^{p} X_s X_s^\T\Big)\right) \leq 1.
\end{equation}
To this end, denoting $A = \sum_{s=t_0}^{p} X_s X_s^\T$, the authors show that $V_{t-1}^{-1} A$ has the same characteristic polynomial as $V_{t-1}^{-1/2} A V_{t-1}^{-1/2}$, namely, $\det(\eta I - V_{t-1}^{-1} A) = \det(\eta I - V_{t-1}^{-1/2} A V_{t-1}^{-1/2})$ holds for any $\eta$. Since $V_{t-1}^{-1/2} A V_{t-1}^{-1/2}$ is clearly symmetric positive semi-definite, they claim that
\begin{equation}
  \label{eq:wrong-claim}
  \z^{\T} V_{t-1}^{-1}A \z \geq 0
\end{equation}
also holds for any $\z \in \S(1) =\{\x \mid \norm{\x}_2 = 1\}$, which is crucial for the remainder of their proof.
\begin{align}
  & \sigma_{\max}\left(V_{t-1}^{-1}\bigg(\sum_{s=t_0}^{p} X_{s} X_{s}^\T\bigg)\right)=\sup_{\z \in \S(1)} \z^\T V_{t-1}^{-1}\left(\sum_{s=t_0}^{p} X_{s} X_{s}^\T\right) \z \label{eq:wrong-1}\\
  \overset{\eqref{eq:wrong-claim}}{\leq} & \sup_{\z \in \S(1)}\left\{\z^\T V_{t-1}^{-1}\bigg(\sum_{s=t_0}^{p} X_{s} X_{s}^\T\bigg) \z+\z^\T V_{t-1}^{-1}\bigg(\sum_{s=p+1}^{t-1} X_{s} X_{s}^\T\bigg) \z+\lambda \z^\T V_{t-1}^{-1} \z\right\} \label{eq:wrong-2}\\
  =& \sup_{\z \in \S(1)} \z^\T V_{t-1}^{-1} V_{t-1} \z=1. \nonumber
\end{align}
However, we identify two issues in the above argument. First, the step in~\eqref{eq:wrong-1} is doubtful. For a matrix $M \in \R^{m \times n}$, we have $\norm{M}_2 = \sup_{\norm{\x}_2 = 1} \sup_{\norm{\y}_2 = 1} \abs{\y^{\T} M \x}$ (see Lemma~\ref{lemma:matrix-2-norm}), while it is not warranted that $\norm{M}_2 = \sup_{\norm{\z}_2 = 1} \abs{\z^{\T} M \z}$, which is what the subsequent arguments rely on. The second issue, concerning the claim~\eqref{eq:wrong-claim} and the resulting step~\eqref{eq:wrong-2}, is even more severe: the claim~\eqref{eq:wrong-claim} is ungrounded (at least, its current proof does not establish it). The big caveat is that $V_{t-1}^{-1}A \in \R^{d\times d}$ is \emph{not} guaranteed to be symmetric.
The logic behind the claim is as follows: suppose $P,Q\in\R^{d\times d}$ have the same characteristic polynomial, i.e., $\det(\eta I - Q) = \det(\eta I - P)$ holds for any $\eta$, and meanwhile $P$ is symmetric positive semi-definite (which guarantees $\z^\T P \z \geq 0$ for any $\z \in \R^d$); then $\z^\T Q \z \geq 0$ would also hold for any $\z \in \R^d$. Unfortunately, this reasoning is not correct, and we give a simple counterexample. Let $P$ be the $2$-dimensional identity matrix $[1,0;0,1]$, and let $Q = [1,-10;0,1]$, an \emph{asymmetric} matrix; then clearly $\det(\eta I - P)=\det(\eta I - Q) = (\eta - 1)^2$ holds for any $\eta$. However, $\z^\T Q \z \geq 0$ does not hold in general; for example, $\z^\T Q \z = -8 < 0$ when $\z = (1,1)^{\T}$.
\paragraph{Fixing the gap.} The term on the left-hand side of~\eqref{eq:claim} is crucial for the dynamic regret analysis of non-stationary linear bandits, as it is expected to be converted into the path-length measuring the non-stationarity of the environment. In the proof of~\citet{AISTATS'19:window-LB}, as shown in~\eqref{eq:step} and~\eqref{eq:key-to-hold}, the authors aim to upper bound the crucial quantity $\sigma_{\max}\left(V_{t-1}^{-1} \big(\sum_{s=t_0}^{p} X_s X_s^\T\big)\right)$ by a constant, but the technical reasoning is flawed. We avoid this issue and provide a new analysis, exhibited in the proof of Lemma~\ref{lemma:path-length-A_t}, which serves as the key component of our fix. Following the analysis of Lemma~\ref{lemma:path-length-A_t}, it is not hard to give a similar estimate error analysis for the sliding window least square estimator~\citep{AISTATS'19:window-LB} and the weighted least square estimator~\citep{NIPS'19:weighted-LB}, which would correct their results from $\Ot(d^{2/3}T^{2/3}P_T^{1/3})$ to $\Ot(d^{7/8}T^{3/4}P_T^{1/4})$. See also the related discussion in Remark~\ref{remark:estimate-error}.
\paragraph{Impossibility result.} In the following, we further show that the desired claim~\eqref{eq:key-to-hold} is actually impossible. Specifically, we construct a hard problem instance showing that the key quantity $\sigma_{\max}\left(V_{t-1}^{-1} \big(\sum_{s=t_0}^{p} X_s X_s^\T\big)\right)$ cannot be upper bounded by any quantity independent of $H$; indeed, it can grow at the rate $\sqrt{H}$. For notational convenience, we focus on the first restarting epoch, so the starting index is $t_0 = 1$.
\begin{myThm}
\label{thm:impossibility}
Let $L=1$ and $\lambda = 1$. We construct the features as
\begin{equation}
  \label{eq:example}
  \begin{split}
    & X_1=\ldots=X_p = \left[\frac{1}{\sqrt{p}}, \frac{\sqrt{p-1}}{\sqrt{p}}\right]^{\T}, \mbox{ and } \\
    & X_{p+1}=\ldots=X_H = \left[\frac{1}{\sqrt{H-p}}, \frac{\sqrt{H-p-1}}{\sqrt{H-p}}\right]^{\T}.
  \end{split}
\end{equation}
Denote by $A = \sum_{s=1}^{p} X_s X_s^\T$ and $B = \sum_{s=p+1}^{H} X_s X_s^\T$; then the covariance matrix is $V_{t-1} = A + B + I_d$. In this case, at the checkpoint $p = \floor{H/3}$, we have
\begin{equation}
  \label{eq:impossible}
  \norm{V_{t-1}^{-1} A}_2 = \sigma_{\max} (V_{t-1}^{-1} A) \geq 0.0564\sqrt{H}.
\end{equation}
\end{myThm}
\begin{proof}
For simplicity of notation, let $y = \sqrt{p-1}$ and $z = \sqrt{H-p-1}$. By the constructed example in~\eqref{eq:example}, we have
\begin{align*}
  {} & A = \sum_{s=1}^{p} X_s X_s^\T =
  \begin{bmatrix}
    1 & y\\
    y & y^2
  \end{bmatrix}
  \mbox{ and }
  B = \sum_{s=p+1}^{H} X_s X_s^\T =
  \begin{bmatrix}
    1 & z\\
    z & z^2
  \end{bmatrix}
  .
\end{align*}
For convenience, we will write the covariance matrix $V_{t-1}$ simply as $V$ when no confusion can arise. The matrix of interest, $V^{-1}A$, can then be calculated as
\begin{align*}
  V^{-1}A = {} &
  \begin{bmatrix}
    2+\lambda & y+z\\
    y+z & y^2 + z^2 + \lambda
  \end{bmatrix}^{-1}
  \begin{bmatrix}
    1 & y\\
    y & y^2
  \end{bmatrix}\\
  = {} & \frac{1}{(2+\lambda)(y^2 + z^2 + \lambda) - (y+z)^2} \cdot
  \begin{bmatrix}
    y^2 + z^2 + \lambda & -(y+z)\\
    -(y+z) & 2+\lambda
  \end{bmatrix}
  \begin{bmatrix}
    1 & y\\
    y & y^2
  \end{bmatrix}\\
  = {} & \frac{1}{(1+\lambda)(y^2 + z^2) - 2yz + (2+\lambda)\lambda} \cdot
  \begin{bmatrix}
    z^2 - yz + \lambda & yz^2 - y^2z + \lambda y\\
    (1+\lambda)y-z & (1+\lambda)y^2-yz
  \end{bmatrix}.
\end{align*}
Denoting $s= (1+\lambda)(y^2 + z^2) - 2yz + (2+\lambda)\lambda$, $\alpha = z^2 - yz + \lambda$, and $\beta = (1+\lambda)y-z$, we then have
\begin{align*}
  V^{-1}A (V^{-1}A)^\T = \frac{1+y^2}{s^2}
  \begin{bmatrix}
    \alpha^2 & \alpha \beta\\
    \alpha \beta & \beta^2
  \end{bmatrix}.
\end{align*}
The eigenvalues (denoted by $\bar{\lambda}$, to distinguish them from the regularization coefficient $\lambda$) of the matrix $[\alpha^2, \alpha \beta; \alpha \beta, \beta^2]$ satisfy $(\alpha^2 - \bar{\lambda})(\beta^2 - \bar{\lambda}) - \alpha^2 \beta^2 = 0$. Solving this equation, we obtain
\begin{align*}
  \bar{\lambda}_{\max} = \alpha^2 + \beta^2 = (z^2 - yz + \lambda)^2 + ((1+\lambda)y-z)^2 \geq (z^2 - yz + \lambda)^2.
\end{align*}
When $\lambda=1$ and $p = aH$ (here we assume $aH$ is an integer for simplicity), we have
\begin{align}
  \bar{\lambda}_{\max} \geq {} & (z^2 - yz + \lambda)^2\nonumber \\
  = {} & \left( (1-a)H-1 - \sqrt{p-1}\sqrt{(1-a)H-1} + 1 \right)^2 \nonumber \\
  \geq {} & \left((1-a)H - \sqrt{a(1-a)}H\right)^2 \nonumber \\
  = {} & (1-a)(\sqrt{1-a} - \sqrt{a})^2 H^2 \label{eq:lb-1}.
\end{align}
Note that the condition $a\in(0,1/2)$ is required for the second inequality to hold. On the other hand, we have
\begin{equation}
  \label{eq:lb-2}
  \frac{1+y^2}{s^2} = \frac{p}{(2(y^2 + z^2) - 2yz + 3)^2} \geq \frac{p}{(2(y^2 + z^2) +4)^2} = \frac{a}{4H}.
\end{equation}
Combining~\eqref{eq:lb-1} and~\eqref{eq:lb-2}, we have
\begin{align*}
  \sigma_{\max}(V^{-1}A) = \sqrt{\lambda_{\max} \left(V^{-1}A (V^{-1}A)^\T\right)} \geq \sqrt{\bar{\lambda}_{\max} \cdot \frac{1+y^2}{s^2}} \geq \sqrt{\frac{a'}{4}} \cdot \sqrt{H},
\end{align*}
where $a' = (1-a)a (\sqrt{1-a} - \sqrt{a})^2$ is a universal constant. Choosing $a = 1/3$ as in the statement of the theorem gives $a' = 0.0127$ and hence the lower bound $\sigma_{\max}(V^{-1}A) \geq 0.0564 \sqrt{H}$.
\end{proof}
Moreover, we report some numerical results for validation: when $H = 3000$, $\sigma_{\max} (V_{t-1}^{-1} A) = 5.852$, while the theoretical lower bound is $0.0564\sqrt{H} = 3.087$; when $H = 30000$, $\sigma_{\max} (V_{t-1}^{-1} A) = 18.474$, while the theoretical lower bound is $0.0564\sqrt{H} = 9.763$.
\subsection{Proof of Theorem~\ref{thm:dynamic-regret-BOB}}
\begin{proof}
We begin with the following decomposition of the dynamic regret.
\begin{align*}
  \sum_{t=1}^T \langle X_t^*,\theta_t\rangle - \langle X_t,\theta_t\rangle = {} & \underbrace{\sum_{t=1}^T \langle X_t^*,\theta_t\rangle - \sum_{i=1}^{\ceil{T/\Delta}} \sum_{t = (i-1)\Delta + 1}^{i \Delta } \langle X_t(H^{\dagger}),\theta_t\rangle}_{\base}\\
  \quad {} & + \underbrace{\sum_{i=1}^{\ceil{T/\Delta}} \sum_{t = (i-1)\Delta + 1}^{i \Delta } \langle X_t(H^{\dagger}),\theta_t\rangle - \langle X_t(H_i),\theta_t\rangle}_{\meta},
\end{align*}
where $H^{\dagger}$ is the restarting period in the pool $\H$ that best approximates the optimal restarting period $H^* = \floor{d^{\frac{1}{4}}T^{\frac{1}{2}} P_T^{-\frac{1}{2}}}$ given in~\eqref{eq:optimal-tuning-parameter}. The first term is the dynamic regret of RestartUCB with the best restarting period in the candidate pool $\H$, and is hence called the base-regret. The second term is the regret overhead of the meta-algorithm due to the adaptive exploration of the unknown optimal restarting period, and is thus called the meta-regret. We bound the two terms separately.

We first consider the base-regret. From the construction of the candidate pool $\H$, there exists a restarting period $H^{\dagger} \in \H$ such that $H^{\dagger} \leq H^* \leq 2H^{\dagger}$. Therefore, employing the dynamic regret bound~\eqref{eq:main-result} in Theorem~\ref{thm:dynamic-regret}, we have the following upper bound for the base-regret:
\begin{align}
  \base \leq {} & \sum_{i=1}^{\ceil{T/\Delta}} \Ot\left(d^{\frac{1}{2}} (H^{\dagger})^{\frac{3}{2}} P_i + \frac{d\Delta}{\sqrt{H^\dagger}}\right) \label{eq:termb-step1} \\
  = {} & \Ot\left(d^{\frac{1}{2}} (H^{\dagger})^{\frac{3}{2}} P_T + \frac{dT}{\sqrt{H^\dagger}}\right) \label{eq:termb-step2} \\
  \leq {} & \Ot\left(d^{\frac{1}{2}} (H^{*})^{\frac{3}{2}} P_T + \frac{dT}{\sqrt{2H^*}}\right) \label{eq:termb-step3}\\
  = {} & \Ot \big( d^{\frac{7}{8}} T^{\frac{3}{4}} P_T^{\frac{1}{4}} \big), \label{eq:base-regret-bound}
\end{align}
where~\eqref{eq:termb-step1} is due to Theorem~\ref{thm:dynamic-regret} and $P_i$ denotes the path-length in the $i$-th episode of the meta-learner's update; \eqref{eq:termb-step2} follows by summing over all episodes; and inequality~\eqref{eq:termb-step3} holds since the optimal restarting period $H^*$ provably lies in the range $[H_{\min}, H_{\max}]$ and satisfies $H^{\dagger} \leq H^* \leq 2H^{\dagger}$.

Next, we give an upper bound for the meta-regret. The analysis follows the argument used for the sliding window based approach~\citep[Proposition~1]{arXiv'19:window-LB}. Note that the meta-regret is defined over the \emph{expected} rewards (namely, $\E[r_t(X)] = X^\T \theta_t$), whereas the actual feedback is the noisy reward (i.e., $r_t(X) = X^{\T} \theta_t + \eta_t$), which might be unbounded due to the additive sub-Gaussian noise. Fortunately, the light-tail property enables us to still use adversarial MAB algorithms, e.g., Exp3~\citep{SICOMP'02:Auer-EXP3}. Specifically, by the concentration inequality enjoyed by the sub-Gaussian noise, we know that the received rewards lie in a bounded region with high probability, as stated in Lemma~\ref{lemma:bob}. Denote by $\mathcal{E}$ the event that Lemma~\ref{lemma:bob} holds, and by $R_i \triangleq \sum_{t = (i-1)\Delta + 1}^{i \Delta} \big(\langle X_t(H^{\dagger}),\theta_t\rangle - \langle X_t(H_i),\theta_t\rangle\big)$ the instantaneous regret of the meta-learner in episode $i$.
The meta-regret satisfies
\begin{align}
  \meta = {} & \E\left[\sum_{i=1}^{\ceil{T/\Delta}} R_i\right] \nonumber \\
  = {} & \E\left[\sum_{i=1}^{\ceil{T/\Delta}} R_i~\Big\vert~\mathcal{E}\right] \cdot \Pr[\Ecal] + \E\left[\sum_{i=1}^{\ceil{T/\Delta}} R_i~\Big\vert~\overline{\Ecal}\right] \cdot \Pr[\overline{\Ecal}]\nonumber \\
  \leq {} & \O\sbr{L_{\max}\sqrt{\frac{T}{\Delta}N}} \cdot \sbr{1-\frac{2}{T}} + \O(T) \cdot \frac{2}{T}\nonumber \\
  = {} & \Ot\big(\sqrt{\Delta N T}\big) \le \Ot(d^{1/2}T^{3/4}), \label{eq:meta-regret-bound}
\end{align}
where $L_{\max} \triangleq \max_{i\in [\ceil{T/\Delta}]} L_i$ denotes the maximum cumulative reward magnitude over all episodes. The first equality is by definition, and the second one is by the law of total expectation. The inequality follows from two facts: when the event $\Ecal$ holds, the quantity is bounded according to the standard regret guarantee of Exp3~\citep{SICOMP'02:Auer-EXP3}; when the event $\Ecal$ does not hold, it is trivially upper bounded by $\O(T)$, and the failure probability is controlled by Lemma~\ref{lemma:bob}. The final step uses $L_{\max} = \Ot(\Delta)$ (which follows from Lemma~\ref{lemma:bob}), plugs in the episode length $\Delta = \ceil{d\sqrt{T}}$, and notes that the number of candidate restarting periods $N$ is of order $\O(\log T)$, which is absorbed into the $\Ot(\cdot)$-notation.

Combining the upper bounds of the base-regret~\eqref{eq:base-regret-bound} and the meta-regret~\eqref{eq:meta-regret-bound}, we obtain that the expected dynamic regret of RestartUCB-BOB is bounded by $\Ot( d^{\frac{7}{8}} T^{\frac{3}{4}} P_T^{\frac{1}{4}})$, which completes the proof of Theorem~\ref{thm:dynamic-regret-BOB}. It is also worth noting that the base-regret bound actually holds with high probability, while the meta-regret in our analysis only holds in expectation. We could boost the result to a high-probability version by employing advanced meta-algorithms such as Exp3.IX~\citep{NIPS'15:Neu}, which achieves a high-probability regret bound for adversarial MAB problems, together with a union bound in the analysis.
\end{proof}
\section{Experiments}
\label{sec:experiment}
Although the focus of this paper is on the theoretical side, we present empirical studies to further evaluate the proposed approach.
\paragraph{Contenders.} We study two kinds of non-stationary environments, in which the underlying parameter is either \emph{abruptly changing} or \emph{gradually changing}; we simulate both, and the details can be found in the next paragraph. We compare RestartUCB to (a) WindowUCB, based on sliding window least squares~\citep{AISTATS'19:window-LB}; (b) WeightUCB, based on weighted least squares~\citep{NIPS'19:weighted-LB}; and (c) StaticUCB, the algorithm designed for stationary linear bandits~\citep{NIPS'11:AY-linear-bandits}. In the abrupt-change scenario, we additionally compare with OracleRestartUCB, which knows the exact change points a priori and restarts the algorithm at each change point. Evidently, OracleRestartUCB is not a practical algorithm; it serves as a skyline for all the approaches.
\paragraph{Settings.} In abruptly-changing environments, the unknown regression parameter $\theta_t$ is periodically set to $[1,0]$, $[-1,0]$, $[0,1]$, $[0,-1]$ in the first half of the iterations, and to $[1,0]$ for the remaining iterations. In gradually-changing environments, the unknown regression parameter $\theta_t$ moves continuously from $[1,0]$ to $[-1,0]$ along the unit circle. In both scenarios, we set $T=50,000$ and the number of arms $n=20$.
The features are sampled from the normal distribution $\mathcal{N}(0,1)$ and rescaled such that $L=1$. The random noise is generated according to $\mathcal{N}(0,0.1)$. Since the path-length $P_T$ is available in the synthetic datasets, we set the weight $\gamma = 1-1/\tau$ for WeightUCB, the window size $w = \tau$ for WindowUCB, and the restarting period $H = \tau$ for RestartUCB, where $\tau = 10\cdot\floor{d^{1/4}T^{1/2} P_T^{-1/2}}$ is set as suggested by the theory. The simulation is repeated $50$ times, and we report the average and the standard deviation.
\paragraph{Results.} Figure~\ref{figure:change} shows the performance comparison of the different approaches for non-stationary linear bandits. The performance is measured by the (pseudo) dynamic regret, which is plotted on the y-axis in logarithmic scale. In the \emph{abruptly-changing environments}, OracleRestartUCB is, as expected, the best approach, since it knows the exact change points a priori, and StaticUCB ranks last as it does not take the non-stationarity into consideration. RestartUCB and WindowUCB have comparable performance, better than WeightUCB; in fact, RestartUCB is even slightly better than WindowUCB. We note that RestartUCB has an additional computational advantage over WindowUCB: RestartUCB supports one-pass updates without storing historical data, whereas WindowUCB has to maintain a buffer and thus needs to scan the data multiple times owing to the sliding window strategy. In the \emph{gradually-changing environments}, WeightUCB ranks first, followed by WindowUCB and RestartUCB. Nevertheless, as shown below, WeightUCB takes a significantly longer running time than our approach.

Figure~\ref{figure:time} reports the running time, including both mean and standard deviation. We can see that the time costs of RestartUCB, WindowUCB, and StaticUCB are almost the same. By contrast, WeightUCB requires a significantly longer running time, nearly twice the cost of the other contenders. The reason is that the WeightUCB algorithm involves computing the inverses of the covariance matrix $V_t \in \R^{d\times d}$ and of its variant $\widetilde{V}_t \in \R^{d\times d}$, while the other three methods maintain and manipulate only one covariance matrix. It is worth noting that our approach can be further accelerated by recursive least squares, which avoids the explicit inversion of the covariance matrix and is particularly desirable in high-dimensional problems.
\begin{figure}
\caption{Comparison of different approaches in terms of dynamic regret. Note that the y-axis is plotted in logarithmic scale.}
\label{figure:abrupt}
\label{figure:slow}
\label{figure:change}
\end{figure}
\begin{figure}
\caption{Comparison of different approaches in terms of running time.}
\label{figure:time}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we study the problem of non-stationary linear bandits, where the unknown regression parameter $\theta_t$ changes over time. We propose a simple algorithm based on the restarted strategy, which enjoys strong theoretical guarantees notwithstanding its simplicity. Concretely, when the path-length $P_T$ of the underlying parameters is known, the proposed RestartUCB algorithm enjoys an $\Ot(d^{7/8}T^{3/4}P_T^{1/4})$ dynamic regret, which matches the regret guarantees of previous methods developed in the literature while enjoying a more favorable computational cost.
In addition, we show that the same dynamic regret guarantee is attainable even when $P_T$ is unknown, by using RestartUCB as the base algorithm and employing the Bandits-over-Bandits mechanism for meta-level scheduling. Empirical studies validate the efficacy of the proposed approach, particularly in abruptly-changing environments.

The current upper bounds do not match the existing lower bound, even when the path-length is known. In the future, we would like to investigate how to close this gap and, further, how to design algorithms for non-stationary linear bandits that achieve rate-optimal dynamic regret without prior information.
\section*{Acknowledgment}
This research was supported by the National Science Foundation of China (61921006, 61673201) and the Collaborative Innovation Center of Novel Software Technology and Industrialization. We are grateful to the anonymous reviewers for their helpful comments. A preliminary version of this paper appeared at AISTATS 2020, in which we used the flawed argument~\eqref{eq:claim} spotted in Section~\ref{sec:revisit}; this version fixes the technical error. For the correction, the authors thank Jin-Hui Wu for many helpful discussions, especially on the impossibility result of Theorem~\ref{thm:impossibility}. We also thank Yu-Hu Yan for carefully proofreading the paper.
\appendix
\section{Technical Lemmas}
\label{appendix:tech-lemmas}
In this section, we provide several technical lemmas that are frequently used in the proofs.
\begin{myThm}[Self-Normalized Bound for Vector-Valued Martingales~{\citep[Theorem 1]{NIPS'11:AY-linear-bandits}}]
\label{thm:self-normalize}
Let $\{F_t\}_{t=0}^\infty$ be a filtration. Let $\{\eta_t\}_{t=0}^\infty$ be a real-valued stochastic process such that $\eta_t$ is $F_t$-measurable and conditionally $R$-sub-Gaussian for some $R>0$, namely,
\begin{equation}
  \label{eq:sub-Gaussian}
  \forall \lambda \in \mathbb{R}, \quad \mathbb{E}[\exp(\lambda \eta_t) \mid F_{t-1}] \leq \exp\left(\frac{\lambda^2 R^2}{2}\right).
\end{equation}
Let $\{X_t\}_{t=1}^\infty$ be an $\mathbb{R}^d$-valued stochastic process such that $X_t$ is $F_{t-1}$-measurable. Assume that $V$ is a $d\times d$ positive definite matrix. For any $t\geq 0$, define
\begin{equation}
  \label{eq:covariance-matrix}
  \bar{V}_t = V + \sum_{\tau = 1}^t X_\tau X_\tau^{\T},\quad S_t = \sum_{\tau = 1}^t \eta_\tau X_\tau.
\end{equation}
Then, for any $\delta>0$, with probability at least $1-\delta$, for all $t\geq 0$,
\begin{equation}
  \label{eq:self-normal-concentration}
  \norm{S_t}_{\bar{V_t}^{-1}}^2 \leq 2R^2 \log \left(\frac{\det(\bar{V_t})^{1/2} \det(V)^{-1/2}}{\delta}\right).
\end{equation}
\end{myThm}
\begin{myLemma}[Elliptical Potential Lemma]
\label{lemma:potential}
Suppose $U_0 = \lambda I$, $U_t = U_{t-1} + X_tX_t^{\T}$, and $\norm{X_t}_2 \leq L$, then
\begin{equation}
  \label{eq:potential}
  \sum_{t=1}^T \lVert U_{t-1}^{-\frac{1}{2}} X_t\rVert_2 \leq \sqrt{2dT \log\left(1 + \frac{L^2T}{\lambda d}\right)}.
\end{equation}
\end{myLemma}
\begin{proof}
First, we have the following decomposition,
\[
  U_t = U_{t-1} + X_tX_t^{\T} = U_{t-1}^{\frac{1}{2}}(I + U_{t-1}^{-\frac{1}{2}}X_t X_t^{\T}U_{t-1}^{-\frac{1}{2}})U_{t-1}^{\frac{1}{2}}.
\]
Taking the determinant on both sides, we get
\[
  \det(U_t) = \det(U_{t-1}) \det(I + U_{t-1}^{-\frac{1}{2}}X_t X_t^{\T}U_{t-1}^{-\frac{1}{2}}),
\]
which, in conjunction with Lemma~\ref{lemma:determinant}, yields
\[
  \det(U_t) = \det(U_{t-1}) (1 + \lVert U_{t-1}^{-\frac{1}{2}} X_t\rVert_2^2) \geq \det(U_{t-1}) \exp(\lVert U_{t-1}^{-\frac{1}{2}} X_t\rVert_2^2/2).
\]
The inequality uses the fact that $1 + x \geq \exp(x/2)$ holds for any $x\in [0,1]$. By telescoping, we have
\[
  \begin{split}
    & \sum_{t=1}^T \lVert U_{t-1}^{-\frac{1}{2}} X_t\rVert_2^2 \leq 2\log \frac{\det(U_T)}{\det(U_0)} \leq 2d\log \left(1 + \frac{L^2T}{\lambda d}\right),
  \end{split}
\]
where the last inequality follows from the fact that $\mbox{Tr}(U_T) \leq \mbox{Tr}(U_0) + L^2T = \lambda d + L^2T$, and thus $\det(U_T) \leq (\lambda + L^2T/d)^d$. Therefore, the Cauchy-Schwarz inequality implies
\[
  \sum_{t=1}^T \lVert U_{t-1}^{-\frac{1}{2}} X_t\rVert_2 \leq \sqrt{T\sum_{t=1}^T \lVert U_{t-1}^{-\frac{1}{2}} X_t\rVert_2^2} \leq \sqrt{2dT\log\left(1 + \frac{L^2T}{\lambda d}\right)}.
\]
\end{proof}
\begin{myLemma}
\label{lemma:determinant}
For any $\v \in \R^d$, we have
\[
  \det(I+\mathbf{v} \mathbf{v}^{\T}) = 1 + \norm{\mathbf{v}}_2^2.
\]
\end{myLemma}
\begin{proof}
Notice that
\begin{enumerate}
\item[(i)] $(I+\mathbf{v} \mathbf{v}^\T)\mathbf{v} = (1 + \norm{\mathbf{v}}_2^2)\mathbf{v}$; therefore, $\mathbf{v}$ is an eigenvector with eigenvalue $1 + \norm{\mathbf{v}}_2^2$;
\item[(ii)] $(I + \mathbf{v} \mathbf{v}^\T)\mathbf{v}^{\perp} = \mathbf{v}^{\perp}$ for any $\mathbf{v}^{\perp}$ orthogonal to $\mathbf{v}$; therefore, every such $\mathbf{v}^{\perp}$ is an eigenvector with eigenvalue $1$.
\end{enumerate}
Consequently, $\det( I+\mathbf{v} \mathbf{v}^\T) = 1 + \norm{\mathbf{v}}_2^2$.
\end{proof}
\begin{myLemma}[{Property 5.2.9 of~\citet{meyer2000matrix}}]
\label{lemma:matrix-2-norm}
For a real matrix $A \in \R^{m \times n}$, we have
\[
  \norm{A}_2 = \sup_{\norm{\x}_2 = 1} \sup_{\norm{\y}_2 = 1} \abs{\y^{\T} A \x}.
\]
\end{myLemma}
\begin{proof}
The proof follows the solution manual of~\citet{meyer2000matrix}. Applying the Cauchy-Schwarz inequality yields $\abs{\y^\T A \x} \leq \norm{\y}_2 \norm{A\x}_2$, which implies that
\[
  \sup_{\norm{\x}_2 = 1} \sup_{\norm{\y}_2 = 1} \abs{\y^{\T} A \x} \leq \sup_{\norm{\x}_2 = 1} \norm{A \x}_2 = \norm{A}_2.
\]
We now show that equality is attained for some pair $\x$ and $\y$ on the unit sphere. To do so, let $\x_*$ be a unit vector such that
\[
  \norm{A\x_*}_2 = \sup_{\norm{\x}_2=1} \norm{A\x}_2 = \norm{A}_2,
\]
and let
\[
  \y_* = \frac{A \x_*}{\norm{A \x_*}_2} = \frac{A \x_*}{\norm{A}_2};
\]
then
\[
  \y_*^\T A \x_* = \frac{\x_*^\T A^\T A \x_*}{\norm{A}_2} = \frac{\norm{A \x_*}_2^2}{\norm{A}_2} = \frac{\norm{A}_2^2}{\norm{A}_2} = \norm{A}_2.
\]
Hence we complete the proof.
\end{proof}
\begin{myLemma}
\label{lemma:bob}
Let $N= \ceil{T/\Delta}$. Denote by $L_i$ the absolute value of the cumulative reward in episode $i$, i.e., $L_i \triangleq \abs{\sum_{t = (i-1)\Delta + 1}^{i \Delta} r_t(X_t)}$; then
\begin{equation}
  \label{eq:concentration}
  \Pr\left[\forall i\in [N], L_i\leq LS\Delta+2R\sqrt{\Delta\ln\frac{T}{\sqrt{\Delta}}}\right] \geq 1-\frac{2}{T}.
\end{equation}
\end{myLemma}
\begin{proof}
For any episode $i$, the absolute value of the cumulative reward can be bounded as
\begin{align*}
  \left|\sum_{t=(i-1)\Delta+1}^{i\Delta}\langle X_t,\theta_t\rangle + \eta_t\right| \leq {}& \sum_{t=(i-1)\Delta+1}^{i\Delta} \left|\langle X_t,\theta_t\rangle\right|+\left|\sum_{t=(i-1)\Delta+1}^{i\Delta}\eta_t\right|\\
  \leq {}& \Delta LS+\left|\sum_{t=(i-1)\Delta+1}^{i\Delta}\eta_t\right|,
\end{align*}
where we have applied the triangle inequality together with the fact that $\left|\langle X_t,\theta_t\rangle\right|\leq LS$ for all $t$. Applying the standard concentration result for $R$-sub-Gaussian random variables~\citep[Corollary 1.7]{2019:HighDimension-book}, we have
\[
  \Pr\left[\left|\frac{1}{\Delta}\sum_{t=(i-1)\Delta+1}^{i\Delta}\eta_t\right| \geq \epsilon\right] \leq 2\exp\left( -\frac{\Delta \epsilon^2}{2R^2} \right),
\]
which, with the choice $\epsilon = 2R\sqrt{\ln(T/\sqrt{\Delta})/\Delta}$, implies that
\[
  \Pr\left[\left|\sum_{t=(i-1)\Delta+1}^{i\Delta}\eta_t\right|\geq 2R\sqrt{\Delta\ln\frac{T}{\sqrt{\Delta}}}\right]\leq\frac{2\Delta}{T^2}.
\]
Hence each single episode fails with probability at most $2\Delta/T^2$. By the union bound, we have
\begin{align*}
  & \Pr\left[\exists i\in [N]:\left|\sum_{t=(i-1)\Delta+1}^{i\Delta}\eta_t\right|\geq 2R\sqrt{\Delta\ln\frac{T}{\sqrt{\Delta}}}\right]\\
  \leq {} & \sum_{i=1}^{\lceil T/\Delta\rceil}\Pr\left[\left|\sum_{t=(i-1)\Delta+1}^{i\Delta}\eta_t\right|\geq 2R\sqrt{\Delta\ln\frac{T}{\sqrt{\Delta}}}\right]\leq\frac{2}{T}.
\end{align*}
Hence, we finish the proof.
\end{proof}
\end{document}
\begin{document}
\begin{frontmatter}
\title{On a classification of polynomial differential operators }
\author{Jinzhi Lei }
\address{Zhou Pei-Yuan Center for Applied Mathematics, Tsinghua University, Beijing, 100084, P.R.China}
\begin{abstract}
This paper gives a classification of first order polynomial differential operators of the form $\mathscr{X} = X_1(x_1,x_2)\delta_1 + X_2(x_1,x_2)\delta_2$, $(\delta_i = \partial/\partial x_i)$. The classification is given through the order of an operator, which is defined in this paper. Let $X=\mathscr{X}y$ be the differential polynomial associated with $\mathscr{X}$; the order of $\mathscr{X}$, $\mathrm{ord}(\mathscr{X})$, is defined as the order of a differential ideal $\Lambda$ of differential polynomials that is a nontrivial expansion of the ideal $\{X\}$ of lowest order. In this paper, we prove that the order of a differential operator can only take the values $0$, $1$, $2$, $3$, or $\infty$. Furthermore, when the order is finite, the expansion $\Lambda$ is generated by $X$ and a differential polynomial $A$, which can be obtained through a rational solution of a partial differential equation that is given explicitly in this paper. When the order is infinite, the expansion $\Lambda$ is just the unit ideal. In addition, the polynomial differential equation associated with $\mathscr{X}$ has Liouvillian first integrals if, and only if, the order of $\mathscr{X}$ is $0$, $1$, or $2$. Examples for each class of differential operators are given at the end of this paper.
\end{abstract}
\begin{keyword}
polynomial differential operator \sep classification \sep polynomial differential equation \sep differential algebra \sep Liouvillian first integral
\MSC 34A05 \sep 34A34 \sep 12H05
\end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec:0}
\subsection{Background}
This paper studies the polynomial differential operator
\begin{equation}
  \label{eq:1}
  \mathscr{X} = X_1(x_1,x_2)\delta_1 + X_2(x_1,x_2)\delta_2,
\end{equation}
where $\delta_i = \partial/\partial x_i\ (i=1,2)$, and $X_1(x_1,x_2), X_2(x_1,x_2)$ are polynomials in $x_1$ and $x_2$. We further assume that $X_1\not\equiv 0$, without loss of generality. We will give a classification of all operators of the form \eqref{eq:1}, according to which the solutions of the first order partial differential equation
\begin{equation}
  \label{eq:2}
  \mathscr{X}\omega = 0
\end{equation}
are discussed. The operator \eqref{eq:1} is closely related to the polynomial differential equation
\begin{equation}
  \label{eq:3}
  \dfrac{d x_1}{dt} = X_1(x_1,x_2),\quad \dfrac{d x_2}{dt} = X_2(x_1,x_2),
\end{equation}
and non constant solutions of \eqref{eq:2} give first integrals of \eqref{eq:3}. Therefore, our results also yield a classification of the polynomial systems \eqref{eq:3}.

The present study was motivated by the investigation of integration methods for polynomial differential equations of the form \eqref{eq:3}. We first look at a simple situation. If the equation \eqref{eq:3} has an integrating factor $\mu$ which is a rational function of $x_1$ and $x_2$, then a first integral $\omega$ of \eqref{eq:3} can be obtained by integrating a rational function, and, further, we have
\begin{equation}
  \label{eq:if2}
  \delta_1 \omega - a = 0,
\end{equation}
where $a = \mu X_1$ is a rational function. Therefore, there is a non constant function $\omega$ that satisfies both equations \eqref{eq:2} and \eqref{eq:if2}.
In this case, the differential operator $\mathscr{D}_A$ defined by
\begin{equation}
  \mathscr{D}_A\omega = \delta_1 \omega - a
\end{equation}
is compatible with $\mathscr{X}$. In other words, if we define the two differential polynomials
$$X = \mathscr{X}y,\quad A = \mathscr{D}_Ay,$$
they generate a differential ideal $\{X,A\}$ which is a nontrivial expansion of the ideal $\{X\}$ (see the detailed definitions below). This simple situation suggests that, to integrate the equation \eqref{eq:3} for first integrals, we need to find a differential polynomial $A$ such that $\{X,A\}$ is a nontrivial expansion of the ideal $\{X\}$. The differential polynomial $A$, if it exists, is not unique. Nevertheless, we will show that the lowest order among these differential polynomials is uniquely determined by the original differential operator $\mathscr{X}$ (and is called the order of $\mathscr{X}$, to be defined below), and therefore provides a classification.

The classification presented in this study is obtained from the order of the operator $\mathscr{X}$. This order is essential for understanding the integration methods for the polynomial differential equation \eqref{eq:3} in the different classes, as well as the classification of non-integrable systems. Furthermore, for a given equation \eqref{eq:3}, the differential polynomial $A$ defining the nontrivial expansion $\{X,A\}$ provides additional information about the first integral, which is important for further investigations of the structure of the integral curves (or foliations) of the equation. Applications based on the classification given here are of interest for future studies.
\subsection{Preliminary definitions}
Before stating the main results, we recall some preliminary concepts from differential algebra. For detailed discussions, see \cite{Ka:76} and \cite{Ritt:50}.

Let $K$ be the field of all rational functions of $(x_1, x_2)$ with complex coefficients, and let $\delta_1, \delta_2$ be two \textit{derivations} of $K$. Then $K$ together with the two derivations forms a \textit{differential field}, with $\mathbb{C}$ as the constant field. For a \textit{differential indeterminate} $y$, there is a standard way to adjoin $y$ to the differential field $K$, by adding the infinite sequence of symbols
\begin{equation}
  \label{eq:4}
  y, \delta_1y, \delta_2y,\delta_1\delta_2y,\cdots, \delta_1^{i_1}\delta_2^{i_2}y,\cdots
\end{equation}
to $K$ \cite{Ka:76}. This procedure results in a differential ring, denoted by $K\{y\}$. Each element of $K\{y\}$ is a polynomial in finitely many of the symbols in \eqref{eq:4}, and is therefore a \textit{differential polynomial} in $y$ with coefficients in $K$. We say that an algebraic ideal $\Lambda$ in $K\{y\}$ is a \textit{differential ideal} if $a\in \Lambda$ implies $\delta_i a \in \Lambda\ (i=1,2)$. Let $\Sigma$ be any aggregate of differential polynomials. The intersection of all differential ideals containing $\Sigma$ is called the \textit{differential ideal generated by $\Sigma$}, and is denoted by $\{\Sigma\}$. A differential polynomial $A$ is in $\{\Sigma\}$ if, and only if, $A$ is a linear combination of differential polynomials in $\Sigma$ and of derivatives, of various orders, of such differential polynomials.
\begin{defn}
Let
$$w_1= \delta_1^{i_1}\delta_2^{i_2}y,\quad w_2 = \delta_1^{j_1}\delta_2^{j_2}y$$
be two derivatives of $y$; $w_2$ is \textbf{higher} than $w_1$ if either $j_1>i_1$, or $j_1 = i_1$ and $j_2 > i_2$. The indeterminate $y$ is always higher than any element in $K$.
\end{defn}
\begin{defn}
Let $A$ be a differential polynomial. If $A$ contains $y$ (or its derivatives) effectively, the \textbf{leader} of $A$ is the highest of those derivatives of $y$ which appear in $A$.
\end{defn}
\begin{defn}
Let $A_1, A_2$ be two differential polynomials. We say that $A_2$ is of higher \textbf{rank} than $A_1$ if either
\begin{enumerate}
\item[(1)] $A_2$ has a higher leader than $A_1$; or
\item[(2)] $A_1$ and $A_2$ have the same leader, and the degree of $A_2$ in the leader exceeds that of $A_1$.
\end{enumerate}
A differential polynomial which effectively involves the indeterminate $y$ is of higher rank than one which does not. Two differential polynomials between which no difference in rank is created by the above are said to be of the same rank.
\end{defn}
The following fact is basic \cite[pp.~3]{Ritt:50}:
\begin{prop}
\label{prop:1}
Every aggregate of differential polynomials contains a differential polynomial which is not higher than any other differential polynomial in the aggregate.
\end{prop}
For the operator $\mathscr{X}$ given by \eqref{eq:1}, we have
\begin{equation}
  \label{eq:5}
  X= \mathscr{X}y = X_1 \delta_1y+X_2\delta_2y\in K\{y\}.
\end{equation}
Let $\{X\}$ denote the differential ideal in $K\{y\}$ generated by $X$. A differential ideal $\Lambda$ in $K\{y\}$ that contains $\{X\}$ as a proper subset will be called an \textit{expansion of $\{X\}$}, or an \textit{expansion of $\mathscr{X}$}. In this paper, we will show that expansions of $\mathscr{X}$ with the lowest order (to be defined below) are essential to providing the classification of $\mathscr{X}$.

Let $\Lambda$ be an expansion of $\{X\}$. Proposition \ref{prop:1} yields that there is a differential polynomial $A\in \Lambda$ of lowest rank. Therefore, the leader of $A$ is lower than the leader $\delta_1y$ of $X$. Thus, either the leader of $A$ has the form $\delta_2^r y\ (r\geq 0)$, or $A$ does not involve the indeterminate $y$, i.e., $A\in K$. In the former situation, $r$ will be called the \textbf{\textit{order}} of $\Lambda$, denoted by $\mathrm{ord}(\Lambda)$. In the latter situation, $\Lambda$ is said to have infinite order, i.e., $\mathrm{ord}(\Lambda)=\infty$.

For a differential polynomial $A\in K\{y\}$, we associate with $A$ a \textit{differential operator} $\mathscr{D}_A$ acting on the analytic functions $\mathcal{A}(\Omega)$, where $\Omega$ is an open subset of $\mathbb{C}^2$, such that
\begin{equation}
  \label{eq:oa}
  \mathscr{D}_Au = A|_{y=u},\quad \forall u\in \mathcal{A}(\Omega).
\end{equation}
By $S(\mathscr{D}_A)$, we denote the singularity set of $\mathscr{D}_A$, which consists of all singular points of the coefficients of the differential polynomial $A$. Because the coefficients of $A$ are rational functions, the singularity set $S(\mathscr{D}_A)$ is a closed subset of $\mathbb{C}^2$. Thus, for any $u\in \mathcal{A}(\Omega)$, $\mathscr{D}_A u$ is well defined on the open subset $\Omega\backslash S(\mathscr{D}_A)$.

We call an expansion $\Lambda$ of $\mathscr{X}$ \textit{nontrivial} if there exist an open subset $\Omega\subset \mathbb{C}^2$ and a non constant function $\omega\in \mathcal{A}(\Omega)$ such that $\mathscr{D}_A\omega = 0$ on $\Omega\backslash S(\mathscr{D}_A)$ for all $A\in \Lambda$. Otherwise, the expansion is called \textit{trivial}. Examples of trivial expansions include $\{X, p(y)\}$ with $p(y)$ a proper polynomial in $y$ with constant coefficients (not a differential polynomial), since a function satisfying $p(\omega)\equiv 0$ takes only finitely many values and is therefore locally constant.
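To fix ideas, here is a small illustrative example of a nontrivial expansion (it is added only for illustration and is not used in the sequel): take $X_1 = x_1$ and $X_2 = x_2$, so that $X = x_1\delta_1 y + x_2\delta_2 y$. The function $\omega = x_2/x_1$ is analytic and non constant on any open set $\Omega$ avoiding $\{x_1 = 0\}$, and a direct computation gives
$$
\mathscr{X}\omega = x_1\delta_1\omega + x_2\delta_2\omega = -\frac{x_2}{x_1} + \frac{x_2}{x_1} = 0,
\qquad
\mathscr{D}_A\omega = \omega - \frac{x_2}{x_1} = 0 \quad \mbox{for } A = y - \frac{x_2}{x_1}.
$$
Hence $\Lambda = \{X, y - x_2/x_1\}$ is a nontrivial expansion of $\mathscr{X}$, and its order is $0$; this is an instance of case (1) of Theorem \ref{th:1} below.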
For a nontrivial expansion $\Lambda$, a differential polynomial of the lowest rank can only take one of the following forms:
\begin{itemize}
\item a polynomial of $y$, with at least one coefficient that is non constant ($\mathrm{ord}(\Lambda) = 0$); or
\item a differential polynomial of $y$ that effectively involves derivatives ($1\leq \mathrm{ord}(\Lambda) < \infty$); or
\item an element in $K$, and therefore $\Lambda = K\{y\}$ ($\mathrm{ord}(\Lambda) = \infty$).
\end{itemize}

\subsection{Main results}

In this paper, we are interested in nontrivial expansions of $\mathscr{X}$ with the lowest order, called \textit{essential expansions of $\mathscr{X}$}. For a given differential operator $\mathscr{X}$, essential expansions of $\mathscr{X}$ need not be unique, but all essential expansions have the same order, which we call the \textit{\textbf{order} of $\mathscr{X}$} and denote by $\mathrm{ord}(\mathscr{X})$. We will show that $\mathrm{ord}(\mathscr{X})$ provides a classification of polynomial differential operators.

\begin{thm} \label{th:1}
Let the polynomial differential operator $\mathscr{X}$ be given by \eqref{eq:1}, with coefficients $X_1,X_2\in K$. Then either $0\leq \mathrm{ord}(\mathscr{X})\leq 3$, or $\mathrm{ord}(\mathscr{X}) = \infty$. Furthermore, when $0\leq \mathrm{ord}(\mathscr{X})\leq 3$, we can always select an essential expansion $\Lambda$ of $\mathscr{X}$ such that $\Lambda = \{X, A\}$, with $A\in K\{y\}$ given below:
\begin{enumerate}
\item[(1)] if $\mathrm{ord}(\mathscr{X}) = 0$, then
\begin{equation} \label{eq:7}
A = y - a,\quad (a\in K\backslash\mathbb{C});
\end{equation}
\item[(2)] if $\mathrm{ord}(\mathscr{X}) = 1$, then
\begin{equation} \label{eq:8}
A=(\delta_2y)^n - a,\quad (n\in \mathbb{N}, a\in K);
\end{equation}
\item[(3)] if $\mathrm{ord}(\mathscr{X}) = 2$, then
\begin{equation} \label{eq:9}
A=\delta_2^2y - a \delta_2 y,\quad (a\in K);
\end{equation}
\item[(4)] if $\mathrm{ord}(\mathscr{X}) = 3$, then
\begin{equation} \label{eq:10}
A= 2 (\delta_2 y) (\delta_2^3 y) - 3 (\delta_2^2 y)^2 - a (\delta_2 y)^2,\quad (a\in K).
\end{equation}
\end{enumerate}
\end{thm}

By Theorem \ref{th:1}, when the order of a differential operator $\mathscr{X}$ is finite, an essential expansion of $\mathscr{X}$ is given by $\Lambda = \{X,A\}$, with $A\in K\{y\}$ given by \eqref{eq:7}-\eqref{eq:10}. The discussion in \cite[Chapter 2]{Ritt:50} shows that the system of equations
\begin{equation}
\mathscr{X} y = 0,\quad \mathscr{D}_A y = 0
\end{equation}
has a solution in some extension field of $K$. It is easy to see that this solution gives a first integral of the polynomial differential equation \eqref{eq:3}. Theorem \ref{th:3} below, a classification of \eqref{eq:3}, is a straightforward consequence of Theorem \ref{th:1}.
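As a concrete illustration of the case $\mathrm{ord}(\mathscr{X}) = 0$, consider $\mathscr{X} = x_2\delta_1 - x_1\delta_2$, which corresponds to the system $\dot{x}_1 = x_2$, $\dot{x}_2 = -x_1$. The non constant rational function $a = x_1^2 + x_2^2$ satisfies $\mathscr{X}a = 2x_1x_2 - 2x_1x_2 = 0$, so $\omega = a$ is a first integral lying in $K$ and, with $A = y - a$ as in \eqref{eq:7}, the ideal $\{X, A\}$ is a nontrivial expansion of $\mathscr{X}$ of order $0$ (cf.\ Lemma \ref{lem:a.0} below); hence $\mathrm{ord}(\mathscr{X}) = 0$.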
\begin{thm} \label{th:3}
Consider the polynomial differential equation \eqref{eq:3}, and let $\mathscr{X}$ be the corresponding differential operator given by \eqref{eq:1}. We have the following:
\begin{enumerate}
\item[(1)] if $\mathrm{ord}(\mathscr{X}) = 0$, then \eqref{eq:3} has a first integral $\omega\in K$;
\item[(2)] if $\mathrm{ord}(\mathscr{X}) = 1$, then \eqref{eq:3} has a first integral $\omega$ such that
$$(\delta_2\omega)^n \in K$$
for some $n\in \mathbb{N}$;
\item[(3)] if $\mathrm{ord}(\mathscr{X}) = 2$, then \eqref{eq:3} has a first integral $\omega$ such that
$$\delta_2^2\omega/\delta_2\omega \in K;$$
\item[(4)] if $\mathrm{ord}(\mathscr{X}) = 3$, then \eqref{eq:3} has a first integral $\omega$ such that
$$\dfrac{2(\delta_2\omega) (\delta_2^3\omega) - 3 (\delta_2^2\omega)^2}{(\delta_2 \omega)^2} \in K;$$
\item[(5)] if $\mathrm{ord}(\mathscr{X}) = \infty$, then no first integral of \eqref{eq:3} satisfies a differential equation of the form
$$\mathscr{D}_Ay =0$$
with $A\in K\{y\}\backslash\{X\}$.
\end{enumerate}
\end{thm}

In 1992, Singer proved that the first three cases in Theorem \ref{th:3} (see also Theorem \ref{th:2} below) are the only cases having Liouvillian first integrals, i.e., a first integral that can be obtained from rational functions by finitely many steps of exponentiation, integration, and algebraic functions \cite{Singer:92} (see also \cite{Guan:02}). In the latter two cases, however, the first integral of \eqref{eq:3} cannot be obtained in finitely many steps by the above operations from rational functions (see \cite{Singer:92} or \cite{Guan:02}). From the proof of Lemma \ref{lem:a.15} given below, when $\mathrm{ord}(\mathscr{X}) = 3$, the first integral of \eqref{eq:3} can be obtained by finitely many such operations from rational functions together with a solution of a partial differential equation of the form \eqref{eq:a.4}.

In the rest of this paper, we will first give the proof of Theorem \ref{th:1} in Section \ref{sec:proof}, and then give examples for each type of equation in Section \ref{sec:appl}.

\section{Proof of the Main Result} \label{sec:proof}

\subsection{Outline of the proof}

We always assume $X_1 \not\equiv 0$ without loss of generality. Hereinafter, we denote $\delta_2^i y$ by $y_i$ ($y_0 = y$). For any essential expansion $\Lambda$ of $\mathscr{X}$, let $A\in \Lambda$ be of the lowest rank. From the above definitions, if $\mathrm{ord}(\mathscr{X}) = r\ (< \infty)$, then $A$ is a polynomial of $y_0, y_1, \cdots, y_r$, with coefficients in $K$. Write
\begin{equation}
A = \sum_\mathbf{m} a_\mathbf{m} y_0^{m_0}y_1^{m_1}\cdots y_r^{m_r},
\end{equation}
where $\mathbf{m} = (m_0, m_1,\cdots, m_r)\in \mathbb{Z}^{r+1}$, and $a_\mathbf{m}\in K$. To prove Theorem \ref{th:1}, we only need to determine all possible non-zero coefficients in $A$. Let
\begin{equation}
\mathcal{I}_A = \{\mathbf{m}\in \mathbb{Z}^{r+1}| a_\mathbf{m}\not=0\}.
\end{equation}
We only need to specify the finite set $\mathcal{I}_A$. The process is outlined below.
For $0<i<j\leq r$ and $\mathbf{m} = (m_0, m_1, \cdots, m_r)\in \mathbb{Z}^{r+1}$, we define operators $\Delta_{i,j}:\mathbb{Z}^{r+1}\to\mathbb{Z}^{r+1}$ by
\begin{equation}
\Delta_{i,j}(\mathbf{m}) = \mathbf{m} + \mathbf{e}_{j-i} - \mathbf{e}_j,
\end{equation}
where $\mathbf{e}_{k}\in\mathbb{Z}^{r+1}$ denotes the vector whose $k$-th component equals $1$ and whose other components equal $0$ (components being indexed from $0$ to $r$). Therefore
\begin{equation}
\Delta_{i,j}^{-1}(\mathbf{m}) = \mathbf{m} - \mathbf{e}_{j-i} + \mathbf{e}_j.
\end{equation}
For any $\mathbf{m}, \mathbf{n} \in \mathbb{Z}^{r+1}$, we will say $\mathbf{m}\succ \mathbf{n}$ if there exist $0<i<j\leq r$ such that
$$\Delta_{i,j}(\mathbf{m}) = \mathbf{n}.$$
The proof will be done by showing that if $r=\mathrm{ord}(\mathscr{X}) < \infty$, then $\mathcal{I}_A$ can only be one of the following cases:
\begin{enumerate}
\item[(1)] $r = 0$, and $\mathcal{I}_A = \{(1), (0)\}$; or
\item[(2)] $r = 1$, and $\mathcal{I}_A = \{(0,n), (0,0)\}$; or
\item[(3)] $r = 2$, and $\mathcal{I}_A = \{(0,0,1), (0,1,0)\}$, with
$$(0, 0,1) \succ (0,1,0);$$
or
\item[(4)] $r = 3$, and $\mathcal{I}_A = \{(0,1,0,1), (0, 0,2,0), (0, 2,0,0)\}$, with relations
\begin{center}
\unitlength=0.5cm
\begin{picture}(8,4.5)
\put(2,0){(0,2,0,0)}
\put(2,2){(0,1,1,0)}
\put(2,4){(0,1,0,1)}
\put(5.5,2){(0,0,2,0)}
\put(2.8,1){$\curlyvee$}
\put(2.8,3){$\curlyvee$}
\put(4.8,2){$\prec$}
\put(1.0,4.2){\line(1,0){0.8}}
\put(1.0,4.2){\line(0,-1){1.2}}
\put(1.0,0.2){\line(0,1){1.2}}\put(1.0,0.2){\line(1,0){0.8}}
\put(0.8,2.0){$\curlyvee$}
\end{picture}
\end{center}
Here $(0,1,1,0)$ is an auxiliary index with $a_{(0,1,1,0)} = 0$.
\end{enumerate}
The proof will be completed after fourteen preliminary lemmas, following the flow chart given in Figure \ref{fig:1}.
\begin{figure}
\caption{Flow chart of the proof of Theorem \ref{th:1}.\label{fig:1}}
\end{figure}

\subsection{Preliminary notations}

Before proving Theorem \ref{th:1}, we introduce some notations, as follows. Let
$$[\delta_2, \mathscr{X}] = \delta_2 \mathscr{X} -\mathscr{X}\delta_2 = (\delta_2X_1) \delta_1 + (\delta_2X_2) \delta_2,$$
$$b_0 = -X_1\,(\delta_2\frac{X_2}{X_1}),\quad b_i = X_1\,(\delta_2\frac{b_{i-1}}{X_1}) = -X_1\,(\delta_2^{i+1}\frac{X_2}{X_1}),\ \ i = 1,2,\cdots$$
For $F, R\in K\{y\}$, with $\{X\}$ the differential ideal generated by $X=\mathscr{X}y$, we write
\begin{equation}
F\sim R
\end{equation}
if $F - R\in\{X\}$.

Let $\mathbf{m}, \mathbf{n}\in {\mathbb{Z}}^{r+1}$. The \textit{degree} of $\mathbf{n}$ is higher than that of $\mathbf{m}$, denoted by $\mathbf{n} > \mathbf{m}$, if there exists $0\leq k\leq r$ such that $n_k > m_k$ and
$$n_i = m_i,\ \ i = k+1, \cdots, r.$$
It is easy to verify that the relation $\succ$ implies $>$, and that, for any $\mathbf{m}\in \mathbb{Z}^{r+1}$ and $0 < i < j \leq r$,
\begin{equation} \label{eq:a.12}
\Delta_{i,j}^{-1}(\mathbf{m}) \succ \mathbf{m} \succ \Delta_{i,j}(\mathbf{m}),
\end{equation}
and
\begin{equation} \label{eq:a.11}
\Delta_{i,j}^{-1}(\mathbf{m}) > \mathbf{m} > \Delta_{i,j}(\mathbf{m}).
\end{equation}
In the following discussion, by $\mathbf{m}^*$ we will always denote the element in $\mathcal{I}_A$ with the highest degree, and we always assume $a_{\mathbf{m}^*} = 1$ without loss of generality. This is possible as the coefficients of $A$ are rational functions in $K$.
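To illustrate this notation in the case $r = 3$ listed above, note that $\Delta_{1,3}((0,1,0,1)) = (0,1,1,0)$, $\Delta_{1,2}((0,1,1,0)) = (0,2,0,0)$, $\Delta_{2,3}((0,1,0,1)) = (0,2,0,0)$ and $\Delta_{1,2}((0,0,2,0)) = (0,1,1,0)$; these are exactly the relations recorded in the diagram, and by \eqref{eq:a.12} and \eqref{eq:a.11} they give
$$(0,1,0,1) \succ (0,1,1,0) \succ (0,2,0,0),\qquad (0,1,0,1) > (0,1,1,0) > (0,2,0,0).$$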
For any $\mathbf{m}\in \mathbb{Z}^{r+1}$, define
\begin{equation} \label{eq:a.13}
\mathcal{P}(\mathbf{m}) = \{\mathbf{p}\in \mathcal{I}_A\ |\ \Delta_{i,j}(\mathbf{p}) = \mathbf{m}\ \mathrm{for\ some}\ 0< i<j\leq r\}
\end{equation}
and $\#(\mathbf{m}) = |\mathcal{P}(\mathbf{m})|$. We define a function $C:\mathbb{Z}^{r+1}\to \mathbb{Z}$ by
\begin{equation} \label{eq:a.5}
C({\mathbf{m}}) = \sum_{j=1}^r jm_j,
\end{equation}
where $\mathbf{m} = (m_0, m_1, \cdots, m_r)\in \mathbb{Z}^{r+1}$. It is easy to verify that if $\mathbf{m}\succ \mathbf{p}$, then $C(\mathbf{m}) > C(\mathbf{p})$. In particular,
\begin{equation} \label{eq:a.c}
C(\mathbf{m}) - C(\Delta_{i,j}(\mathbf{m})) = i,\quad (0<i<j\leq r).
\end{equation}

\subsection{Preliminary Lemmas}

Now we can start the proof. First, the following lemma is immediate from the definition of a nontrivial expansion.

\begin{lem} \label{le:0}
Let $A\in K\{y\}$. The differential ideal $\Lambda = \{A, X\}$ is a nontrivial expansion of $\mathscr{X}$ if, and only if, the equation
\begin{equation} \label{eq:xa}
\left\{
\begin{array}{rcl}
\mathscr{X}y &=& 0\\
\mathscr{D}_Ay &=& 0
\end{array}\right.
\end{equation}
has a non constant solution in $\mathcal{A}(\Omega)$, with $\Omega$ an open subset of $\mathbb{C}^2$.
\end{lem}

The following result is a straightforward consequence of Lemma \ref{le:0}.

\begin{lem} \label{lem:a.0}
If there exists a non constant $a\in K$ such that $\mathscr{X}a = 0$, then, letting
$$A = y - a,$$
the differential ideal $\Lambda = \{X,A\}$ is a nontrivial expansion of $\mathscr{X}$.
\end{lem}

\begin{lem} \label{lem:a.3}
If there exists $a\in K$, $a\not=0$, such that
\begin{equation} \label{eq:a.8}
\mathscr{X} a = nb_0 a,
\end{equation}
where $n$ is a non-zero integer, then, letting
\begin{equation}
A = (\delta_2y)^{|n|} - a^{|n|/n},
\end{equation}
$\Lambda = \{X, A\}$ is a nontrivial expansion of $\mathscr{X}$.
\end{lem}

\begin{proof}
From Lemma \ref{le:0}, we only need to show that there is a non constant solution of the differential equation
\begin{equation} \label{eq:e1}
\left\{
\begin{array}{rcl}
X_1 \delta_1 y + X_2 \delta_2 y &=& 0\\
(\delta_2y)^{|n|} - a^{|n|/n} &=& 0.
\end{array}\right.
\end{equation}
Let
$$u = a^{1/n},\quad v = -\dfrac{X_2}{X_1} u,$$
and, taking \eqref{eq:a.8} into account, a direct calculation shows that
$$\delta_1 u = \dfrac{1}{X_1} (b_0 u - X_2 \delta_2 u) = \delta_2 v.$$
Thus, the 1-form $v dx_1 + u d x_2$ is closed, and therefore the function
$$\omega = \int_{(x_1^0, x_2^0)}^{(x_1,x_2)} v dx_1 + u dx_2$$
is well defined and analytic on a neighborhood of some $(x_1^0, x_2^0)\in \mathbb{C}^2$. Further,
$$\delta_1 \omega= v,\quad \delta_2 \omega= u.$$
It is easy to verify that $\omega$ satisfies equations \eqref{eq:e1}, and the Lemma is proved.
\end{proof}

\begin{lem} \label{lem:a.4}
Suppose there exists $a\in K$ satisfying
\begin{equation} \label{eq:a.9}
\mathscr{X} a = b_0 a + b_1.
\end{equation}
Then, letting
\begin{equation}
A = \delta_2^2y - a \delta_2y,
\end{equation}
$\Lambda = \{X,A\}$ is a nontrivial expansion of $\mathscr{X}$.
\end{lem}

\begin{proof}
Let
$$b = -\dfrac{X_2}{X_1} a + \dfrac{b_0}{X_1}.$$
From \eqref{eq:a.9}, we have
$$\delta_1 a = -\dfrac{X_2}{X_1} \delta_2 a - (\delta_2\dfrac{X_2}{X_1}) a + \delta_2 \dfrac{b_0}{X_1} = \delta_2 b.$$
Thus, the 1-form $bdx_1 + a dx_2$ is closed, and there exists a function $\eta$ that is analytic on a neighborhood of some $(x_1^0,x_2^0)\in \mathbb{C}^2$, such that
$$\delta_1 \eta = b, \quad \delta_2 \eta = a.$$
Furthermore, we may assume that $X_1(x_1^0,x_2^0)\not=0$.
Let $u = \exp(\eta)$. Then $u$ is a non zero function, and
$$
\mathscr{X}u = u (X_1 \delta_1 \eta + X_2 \delta_2 \eta)=u (X_1 b + X_2 a) = b_0 u.
$$
Thus, following the proof of Lemma \ref{lem:a.3}, letting
$$v = -\dfrac{X_2}{X_1}u,$$
the 1-form $v d x_1 + u d x_2$ is closed, and the function
$$\omega = \int_{(x_1^0,x_2^0)}^{(x_1,x_2)} vd x_1 + u dx_2$$
is well defined in a neighborhood of $(x_1^0,x_2^0)$ (we note that $X_1(x_1^0,x_2^0) \not=0$), non constant, and satisfies
$$X_1\delta_1\omega +X_2 \delta_2 \omega = 0,\quad \delta_2 \omega - u = 0.$$
Therefore,
$$X_1\delta_1\omega +X_2 \delta_2 \omega = 0, \quad \delta_2^2\omega - a \delta_2 \omega = 0.$$
Thus, the non constant function $\omega$ satisfies the equation
\begin{equation}
\left\{
\begin{array}{rcl}
X_1 \delta_1 y + X_2 \delta_2 y &=& 0 \\
\delta_2^2 y - a \delta_2 y &=& 0
\end{array}
\right.
\end{equation}
and hence the Lemma follows from Lemma \ref{le:0}.
\end{proof}

\begin{lem} \label{lem:a.15}
Suppose there exists $a\in K$ satisfying
\begin{equation} \label{eq:a.10}
\mathscr{X} a = 2 b_0 a + b_2.
\end{equation}
Then, letting
\begin{equation}
A = 2 (\delta_2 y) (\delta_2^3 y) - 3 (\delta_2^2 y)^2 - 2 a (\delta_2 y)^2,
\end{equation}
$\Lambda = \{X,A\}$ is a nontrivial expansion of $\mathscr{X}$.
\end{lem}

\begin{proof}
We will show that there is a function $\omega$ that is analytic on an open subset of $\mathbb{C}^2$, non constant, and satisfies
\begin{equation} \label{eq:o3}
\left\{
\begin{array}{l}
X_1 \delta_1 \omega + X_2 \delta_2 \omega = 0\\
2 (\delta_2 \omega) (\delta_2^3 \omega) - 3 (\delta_2^2 \omega)^2 - 2 a (\delta_2 \omega)^2 = 0.
\end{array}\right.
\end{equation}
Let
\begin{eqnarray*}
f(x_1,x_2,u) &=& -\delta_2^2 \dfrac{X_2}{X_1} - \dfrac{X_2}{X_1} a - (\delta_2 \dfrac{X_2}{X_1}) u - \dfrac{1}{2} (\dfrac{X_2}{X_1}) u^2,\\
g(x_1,x_2, u) &=& a + \dfrac{1}{2} u^2.
\end{eqnarray*}
Then $f$ and $g$ are analytic at some point $(x_1^0, x_2^0, u^0)\in \mathbb{C}^3$. We further assume that $X_1(x_1^0, x_2^0)\not=0$. We will show that there is a function $u(x_1,x_2)$ that is analytic on a neighborhood of $(x_1^0,x_2^0)$, with $u(x_1^0, x_2^0) = u^0$, such that
\begin{equation} \label{eq:a.4}
\left\{
\begin{array}{rcl}
\delta_1 u &=& f(x_1,x_2,u),\\
\delta_2 u &=& g(x_1,x_2,u)
\end{array}\right.
\end{equation}
is satisfied in a neighborhood of $(x_1^0,x_2^0)$. It follows from \eqref{eq:a.10} that
$$\delta_1a = - \delta_2^3 \dfrac{X_2}{X_1} - \delta_2 (\dfrac{X_2}{X_1} a) - (\delta_2 \dfrac{X_2}{X_1}) a .$$
Thus, using \eqref{eq:a.4}, we have
\begin{eqnarray*}
&&(\dfrac{\partial\ }{\partial x_2} + g(x_1,x_2,u)\dfrac{\partial\ }{\partial u} ) f(x_1,x_2,u)\\
&=&-\delta_2^3\frac{X_2}{X_1} - \delta_2(\frac{X_2}{X_1}a) - (\delta_2^2\frac{X_2}{X_1}) u - \frac{1}{2}(\delta_2\frac{X_2}{X_1}) u^2\\
&&{} - g(x_1,x_2,u) (\delta_2\frac{X_2}{X_1} +\frac{X_2}{X_1}u) \\
&=&-\delta_2^3\frac{X_2}{X_1} - \delta_2(\frac{X_2}{X_1}a) - (\delta_2\frac{X_2}{X_1}) a - (\delta_2^2\frac{X_2}{X_1}) u - (\frac{X_2}{X_1}a) u\\
&&{} - (\delta_2\frac{X_2}{X_1}) u^2 - \frac{1}{2}(\frac{X_2}{X_1}) u^3\\
&=& -\delta_2^3\frac{X_2}{X_1} - \delta_2(\frac{X_2}{X_1}a) - (\delta_2\frac{X_2}{X_1}) a + u f(x_1,x_2,u)\\
&=&\delta_1 a + u f(x_1,x_2,u)\\
&=&(\dfrac{\partial\ }{\partial x_1} + f(x_1,x_2,u)\dfrac{\partial\ }{\partial u}) g(x_1,x_2,u).
\end{eqnarray*}
Therefore, assuming
\begin{equation} \label{eq:us}
u(x_1,x_2) = \sum_{i=0}^\infty\sum_{j=0}^\infty u_{i,j} (x_1-x_1^0)^i (x_2-x_2^0)^j,\quad (u_{0,0} = u^0)
\end{equation}
and applying the Method of Majorants, we can obtain the coefficients $u_{i,j}$ by induction, and the power series \eqref{eq:us} is convergent in a neighborhood of $(x_1^0,x_2^0)$ (see the Appendix for details), which yields an analytic solution of \eqref{eq:a.4}.

Let $u$ be the above solution of \eqref{eq:a.4}, and
$$v = -\delta_2\frac{X_2}{X_1} - \frac{X_2}{X_1} u.$$
It is easy to verify that $\delta_2 v = \delta_1 u$, and hence the 1-form $v dx_1 + u dx_2$ is closed. Let
\begin{equation} \label{eq:eta}
\eta = \exp\left[\int_{(x_1^0,x_2^0)}^{(x_1,x_2)} v dx_1 + u dx_2\right],
\end{equation}
then the function $\eta$ is well defined, non zero, and analytic on a neighborhood of $(x_1^0, x_2^0)$ (here we note that $X_1(x_1^0, x_2^0) \not=0$), and
$$\mathscr{X}\eta = b_0 \eta.$$
Following the proof of Lemma \ref{lem:a.3}, there exists a non constant function $\omega$, analytic on a neighborhood of $(x_1^0, x_2^0)$ (we note that $b_0$ is analytic at $(x_1^0,x_2^0)$), such that
$$\mathscr{X}\omega = 0,\quad \delta_2 \omega = \eta.$$
From \eqref{eq:eta} and \eqref{eq:a.4}, we have $\delta_2 \eta = \eta u$, and
$$\eta \delta_2^2 \eta = \eta \left((\delta_2 \eta) u + \eta (a + \dfrac{1}{2} u^2)\right) = \dfrac{3}{2}(\delta_2 \eta)^2 + a \eta^2.$$
Taking $\eta = \delta_2 \omega$ into account, we have
$$(\delta_2 \omega) (\delta_2^3 \omega) - \dfrac{3}{2} (\delta_2^2 \omega)^2 - a (\delta_2 \omega)^2 = 0,$$
and, multiplying by $2$,
$$2 (\delta_2 \omega) (\delta_2^3 \omega) - 3 (\delta_2^2 \omega)^2 - 2 a (\delta_2 \omega)^2 = 0.$$
Thus, $\omega$ satisfies \eqref{eq:o3} and the Lemma is concluded.
\end{proof}

\begin{lem} \label{lem:a.1}
Let $[\delta_2, \mathscr{X}]$ and $y_i$ be defined as above. Then
\begin{enumerate}
\item[(1)] $[\delta_2, \mathscr{X}] = (\frac{\delta_2 X_1}{X_1}) \mathscr{X} - b_0 \delta_2$;
\item[(2)] $\mathscr{X}y_j = \delta_2\mathscr{X}y_{j-1} - (\frac{\delta_2 X_1}{X_1}) \mathscr{X}y_{j-1} + b_0\,y_j$.
\end{enumerate}
\end{lem}

\begin{proof}
(1) is straightforward from
\begin{eqnarray*}
[\delta_2,\mathscr{X}] &=& (\delta_2X_1) \delta_1 + (\delta_2X_2) \delta_2\\
&=&\frac{\delta_2 X_1}{X_1} (X_1\,\delta_1 + X_2 \delta_2) - \frac{X_2}{X_1} (\delta_2 X_1) \delta_2 + (\delta_2 X_2)\,\delta_2\\
&=&\frac{\delta_2 X_1}{X_1} \mathscr{X} + X_1\,\frac{X_1 \delta_2X_2 - X_2\,\delta_2 X_1}{X_1^2}\delta_2\\
&=&\frac{\delta_2X_1}{X_1} \mathscr{X} - b_0\,\delta_2.
\end{eqnarray*}
(2) can be obtained by direct calculation as follows:
\begin{eqnarray*}
\mathscr{X}y_j &=& \mathscr{X}\delta_2 y_{j-1}\\
&=&\delta_2\mathscr{X}y_{j-1} - [\delta_2, \mathscr{X}]y_{j-1}\\
&=& \delta_2\mathscr{X}y_{j-1} - (\frac{\delta_2X_1}{X_1}\,\mathscr{X} - b_0 \delta_2)y_{j-1}\\
&=& \delta_2\mathscr{X}y_{j-1} - \frac{\delta_2X_1}{X_1}\,\mathscr{X}y_{j-1} + b_0 \delta_2y_{j-1}\\
&=&\delta_2\mathscr{X}y_{j-1} - (\frac{\delta_2 X_1}{X_1})\,\mathscr{X}y_{j-1} + b_0 y_j.
\end{eqnarray*}
\end{proof}

\begin{lem} \label{lem:a.2}
We have
\begin{equation} \label{eq:a.14}
\mathscr{X}y_j \sim \sum_{i = 0}^{j-1}c_{i,j}\,b_i\,y_{j-i},\quad (j\geq 1)
\end{equation}
where the $c_{i,j}$ are positive integers, and $c_{0,j} = j$.
\end{lem}

\begin{proof}
From Lemma \ref{lem:a.1}, when $j = 1$, we have
$$
\mathscr{X}y_1 = \delta_2\mathscr{X}y_0 - (\frac{\delta_2 X_1}{X_1})\mathscr{X}y_0 + b_0 y_1 \sim b_0 y_1.
$$
Thus \eqref{eq:a.14} holds for $j=1$ with $c_{0,1} = 1$.
Assume that \eqref{eq:a.14} is valid for $j = k$ with positive integer coefficients $c_{i,k}$, and $c_{0,k}=k$, applying Lemma \ref{lem:a.1}, we have \begin{eqnarray*} \mathscr{X}y_{k+1} &=& \delta_2\mathscr{X}y_k - (\frac{\delta_2 X_1}{X_1})\,\mathscr{X}y_k + b_0\,y_{k+1}\\ &\sim&\delta_2(\sum_{i=0}^{k-1}c_{i,k}b_i y_{k-i}) - (\frac{\delta_2 X_1}{X_1}) (\sum_{i=0}^{k-1}c_{i,k}b_i y_{k-i}) + b_0\,y_{k+1}\\ &=&\sum_{i=0}^{k-1}c_{i,k} ((\delta_2b_i) y_{k-i} + b_i\,\delta_2y_{k-i}) - \sum_{i=0}^{k-1}c_{i,k}\frac{\delta_2 X_1}{X_1} b_i y_{k-i} + b_0y_{k+1}\\ &=&\sum_{i=0}^{k-1}c_{i,k}\left((\delta_2b_i - \frac{\delta_2 X_1}{X_1} b_i) y_{k-i} + b_i y_{k-i+1}\right) + b_0\,y_{k+1}\\ &=&(c_{0,k} + 1) b_0\,y_{k+1} + \sum_{i = 0}^{k-2}\left(c_{i,k} X_1 \delta_2(\frac{b_i}{X_1}) + c_{i+1,k}\,b_{i+1}\right)y_{k-i}\\ &&{} + c_{k-1,k} X_1\delta_2(\frac{b_{k-1}}{X_1}) y_1\\ &=&(c_{0,k} + 1) b_0 y_{k+1} + \sum_{i = 0}^{k-2}(c_{i, k} + c_{i+1, k}) b_{i+1}y_{k-i} + c_{k-1, k} b_{k} y_1. \end{eqnarray*} Thus, let $$\left\{\begin{array}{ll} c_{0,k+1} = c_{0,k} + 1 = k+1,& \\ c_{i,k+1} = c_{i-1,k} + c_{i,k},&\ \ (1\leq i\leq k-1),\\ c_{k,k+1} = c_{k-1,k},& \end{array}\right.$$ which are positive integers, we have $$\mathscr{X}y_{k+1} \sim \sum_{i = 0}^kc_{i,k+1} b_i y_{k + 1 - i},$$ and the Lemma is proved. \end{proof} \begin{lem} \label{lem:a.6} We have \begin{equation} \mathscr{X}a_{\mathbf{m}}\mathbf{y}^\mathbf{m}\sim (\mathscr{X}a_{\mathbf{m}} + C({\mathbf{m}}) b_0 a_{\mathbf{m}})\mathbf{y}^{\mathbf{m}} + \sum_{i = 1}^{r-1}\sum_{j=i+1}^rm_jc_{i,j}b_ia_\mathbf{m}\mathbf{y}^{\Delta_{i,j}(\mathbf{m})}, \end{equation} where $a_\mathbf{m}\in K$, $\mathbf{y}^{\mathbf{m}} = y_0^{m_0}y_1^{m_1}\cdots y_r^{m_r}$, and $c_{i,j}$ is defined as in Lemma \ref{lem:a.2}. \end{lem} \begin{proof} It is easy to have $$\mathscr{X}a_{\mathbf{m}}\mathbf{y}^{\mathbf{m}} = (\mathscr{X}a_{\mathbf{m}}) \mathbf{y}^{\mathbf{m}}+ a_{\mathbf{m}}\sum_{j=0}^r \dfrac{\partial \mathbf{y}^{\mathbf{m}}}{\partial y_j}\mathscr{X}y_j.$$ From Lemma \ref{lem:a.2}, we have \begin{eqnarray*} \mathscr{X}a_{\mathbf{m}}\mathbf{y}^{\mathbf{m}} &=&(\mathscr{X}a_\mathbf{m})\mathbf{y}^{\mathbf{m}} + a_\mathbf{m} \sum_{j=0}^r m_j \mathbf{y}^{\mathbf{m}-\mathbf{e}_j} \mathscr{X}y_i\\ &\sim&(\mathscr{X}a_\mathbf{m})\mathbf{y}^{\mathbf{m}} + a_\mathbf{m}\,\sum_{j=1}^r m_j \mathbf{y}^{\mathbf{m}-\mathbf{e}_j}(\sum_{i=0}^{j-1}c_{i,j} b_i y_{j-i})\\ &=&(\mathscr{X}a_\mathbf{m})\mathbf{y}^{\mathbf{m}} + a_\mathbf{m} b_0(\sum_{j = 1}^rc_{0, j} m_j)\mathbf{y}^{\mathbf{m}} + a_\mathbf{m}\sum_{j=1}^r\sum_{i = 1}^{j-1}m_jc_{i,j}b_i \mathbf{y}^{\mathbf{m} + \mathbf{e}_{j-i} -\mathbf{e}_j}\\ &=&(\mathscr{X} a_{\mathbf{m}} + C({\mathbf{m}})\, b_0 a_{\mathbf{m}})\,\mathbf{y}^{\mathbf{m}} + \sum_{i = 1}^{r-1}\sum_{j=i+1}^r m_j c_{i,j} b_i a_\mathbf{m}\,\mathbf{y}^{\Delta_{i,j}(\mathbf{m})}, \end{eqnarray*} and the Lemma is concluded. \end{proof} \begin{lem} \label{lem:a.9} Let $\Lambda$ be a nontrivial expansion of $\mathscr{X}$, $A\in \Lambda$ with the lowest rank and $r = \mathrm{ord}(\Lambda) $. Let $\mathbf{m}^*\in \mathcal{I}_A$ with the highest degree and assume that $a_{\mathbf{m}^*} = 1$, then for any $\mathbf{m}\in \mathbb{Z}^{r+1}$, $\mathbf{m}<\mathbf{m}^*$, we have \begin{equation} \label{eq:a.2} \mathscr{X}a_\mathbf{m} = (C({\mathbf{m}^*}) - C({\mathbf{m}}))b_0a_{\mathbf{m}} -\sum_{i=1}^{r-1}\sum_{j=i+1}^{r}(m_j+1)c_{i,j}b_ia_{{\Delta_{i,j}^{-1}}(\mathbf{m})}. \end{equation} Here $a_{\mathbf{m}} = 0$ whenever $\mathbf{m}\not\in \mathcal{I}_A$. 
\end{lem}

\begin{proof}
We can write
$$A = \sum_{\mathbf{m}\in \mathcal{I}_A}a_{\mathbf{m}}\mathbf{y}^{\mathbf{m}} = \sum_{\mathbf{m} \leq \mathbf{m}^*} a_{\mathbf{m}} \mathbf{y}^{\mathbf{m}}.$$
Hereinafter $a_{\mathbf{m}} = 0$ if $\mathbf{m}\not\in \mathcal{I}_A$. First, it is easy to see that
$$\mathscr{X}A = X_1 \delta_1A + X_2 \delta_2 A \in \Lambda.$$
On the other hand, from Lemma \ref{lem:a.6}, we have
\begin{eqnarray*}
\mathscr{X}A&=&\sum_{\mathbf{m}\leq \mathbf{m}^*}\mathscr{X}a_{\mathbf{m}}\mathbf{y}^{\mathbf{m}}\\
&\sim&\sum_{\mathbf{m}\leq \mathbf{m}^*}\left((\mathscr{X}a_{\mathbf{m}} + C({\mathbf{m}})b_0a_{\mathbf{m}})\mathbf{y}^{\mathbf{m}} + \sum_{i=1}^{r-1}\sum_{j = i+1}^{r}m_jc_{i,j} b_i a_{\mathbf{m}}\mathbf{y}^{\Delta_{i,j}(\mathbf{m})}\right)\\
&=&\sum_{\mathbf{m}\leq \mathbf{m}^*}\left(\mathscr{X}a_{\mathbf{m}} + C({\mathbf{m}})b_0a_{\mathbf{m}} + \sum_{i=1}^{r-1}\sum_{j = i+1}^{r}(m_j+1)c_{i,j}b_ia_{{\Delta_{i,j}^{-1}}(\mathbf{m})}\right)\mathbf{y}^{\mathbf{m}}
\end{eqnarray*}
Note that for any $j>i$, $\Delta_{i,j}^{-1}(\mathbf{m}^*) > \mathbf{m}^*$, and thus $\Delta_{i,j}^{-1}(\mathbf{m}^*) \not\in \mathcal{I}_A$, i.e., $a_{\Delta_{i,j}^{-1}(\mathbf{m}^*)} = 0$ for any $j>i$. Taking into account that $a_{\mathbf{m}^*} = 1$, we have $\mathscr{X}a_{\mathbf{m}^*} =0$, and hence
\begin{eqnarray*}
\mathscr{X}A &\sim& C(\mathbf{m}^*)b_0 \mathbf{y}^{\mathbf{m}^*} \\
&&{} + \sum_{\mathbf{m}<\mathbf{m}^*}\left(\mathscr{X}a_{\mathbf{m}} + C({\mathbf{m}})b_0a_{\mathbf{m}} + \sum_{i=1}^{r-1}\sum_{j = i+1}^{r}(m_j+1)c_{i,j}b_ia_{{\Delta_{i,j}^{-1}}(\mathbf{m})}\right)\mathbf{y}^{\mathbf{m}}.
\end{eqnarray*}
Therefore,
\begin{equation} \label{eq:l58}
\mathscr{X}A - C({\mathbf{m}^*})b_0A \sim R = \sum_{\mathbf{m} < \mathbf{m}^*} f_{\mathbf{m}} \mathbf{y}^{\mathbf{m}},
\end{equation}
where the coefficients $f_{\mathbf{m}}$ are
\begin{equation} \label{eq:l58-1}
f_{\mathbf{m}} = \mathscr{X}a_\mathbf{m} + (C({\mathbf{m}}) - C({\mathbf{m}^*}))b_0a_{\mathbf{m}} + \sum_{i=1}^{r-1}\sum_{j=i+1}^{r}(m_j+1)c_{i,j}b_ia_{{\Delta_{i,j}^{-1}}(\mathbf{m})}.
\end{equation}
We have thus obtained a differential polynomial $R$ that has lower rank than $A$ and is contained in the differential ideal $\Lambda$. But $A$ is an element of $\Lambda$ with the lowest rank. Thus, we must have $R\equiv 0$. Therefore the coefficients \eqref{eq:l58-1} are zero, from which \eqref{eq:a.2} is concluded. The Lemma has been proved.
\end{proof}

Note that, since $a_{\mathbf{m}^*} = 1$ and $\Delta_{i,j}^{-1}(\mathbf{m}^*) \not\in\mathcal{I}_A$, the equation \eqref{eq:a.2} is also valid for $\mathbf{m} = \mathbf{m}^*$. The equation \eqref{eq:a.2} can be rewritten in another form as follows.

\begin{lem} \label{lem:a.8}
In Lemma \ref{lem:a.9}, for any $\mathbf{m}\leq \mathbf{m}^*$, let $k=\#(\mathbf{m})$ and $\mathcal{P}(\mathbf{m}) = \{\mathbf{p}_1,\cdots, \mathbf{p}_k\}$, and assume $\Delta_{{i_l},{j_l}}(\mathbf{p}_l) = \mathbf{m}\ (l = 1,2,\cdots,k)$. Then the coefficients $a_{\mathbf{p}_l}$, $a_{\mathbf{m}}$ satisfy
\begin{equation} \label{eq:a.3}
\mathscr{X} a_{\mathbf{m}} = (C({\mathbf{m}^*}) - C({\mathbf{m}}))b_0a_{\mathbf{m}} -\sum_{l = 1}^{\#(\mathbf{m})} (m_{j_l}+1) c_{i_l,j_l} b_{i_l} a_{\mathbf{p}_l}.
\end{equation}
\end{lem}

\begin{lem} \label{lem:a.10}
Let $\Lambda$ be an essential expansion of $\mathscr{X}$, $A\in \Lambda$ with the lowest rank and $r = \mathrm{ord}(\Lambda) > 1$. Let $\mathbf{m}^*\in \mathcal{I}_A$ be of the highest degree. Then for any $\mathbf{m}\in \mathcal{I}_A$, $\#(\mathbf{m})=0$ if and only if $C(\mathbf{m}) = C(\mathbf{m}^*)$.
Furthermore, if $\#(\mathbf{m})=0$, then $a_{\mathbf{m}}$ is a constant.
\end{lem}

\begin{proof}
First, we prove that if $\#(\mathbf{m}) = 0$, then $C(\mathbf{m}) = C(\mathbf{m}^*)$. If $\#(\mathbf{m}) = 0$, then Lemma \ref{lem:a.8} yields
$$\mathscr{X}a_{\mathbf{m}} = (C({\mathbf{m}^*}) - C({\mathbf{m}}))b_0a_{\mathbf{m}}.$$
Suppose, to the contrary, that $C({\mathbf{m}})\not=C({\mathbf{m}^*})$. Then $n = C(\mathbf{m}^*) - C(\mathbf{m})$ is a non-zero integer, and $a_{\mathbf{m}}\not=0$ satisfies
$$\mathscr{X}a_{\mathbf{m}} = n b_0 a_{\mathbf{m}}.$$
From Lemma \ref{lem:a.3}, letting
$$A' = (\delta_2 y)^{|n|} - {a_{\mathbf{m}}}^{|n|/n},$$
the differential ideal $\Lambda' = \{X,A'\}$ is a nontrivial expansion of $\mathscr{X}$ of order $\leq 1$. This contradicts the assumption that $\Lambda$ is an essential expansion of order $> 1$. Thus, we have concluded that $C(\mathbf{m}) = C(\mathbf{m}^*)$.

Next, we prove that if $C(\mathbf{m}) = C(\mathbf{m}^*)$, then $\#(\mathbf{m}) = 0$. If, on the contrary, $C(\mathbf{m}) = C(\mathbf{m}^*)$ but $\#(\mathbf{m})>0$, there exists $\mathbf{m}_1\in \mathcal{P}(\mathbf{m})$. From \eqref{eq:a.c}, we have $C(\mathbf{m}_1) > C(\mathbf{m}) = C(\mathbf{m}^*)$. Applying the previous part of the proof to $\mathbf{m}_1$, we have $\#(\mathbf{m}_1) > 0$. Thus, we can repeat the above process and obtain $\mathbf{m}_2\in \mathcal{P}(\mathbf{m}_1)$ such that $C(\mathbf{m}_2) > C(\mathbf{m}_1)>C(\mathbf{m}^*)$ and $\#(\mathbf{m}_2) > 0$. Continuing this procedure, we obtain an infinite sequence $\{\mathbf{m}_k\}_{k=1}^\infty \subseteq \mathcal{I}_A$ such that $\#(\mathbf{m}_k) > 0$ and $C(\mathbf{m}_{k+1}) > C(\mathbf{m}_k) > C(\mathbf{m}^*)$. But $\mathcal{I}_A$ is a finite set. Thus, we arrive at a contradiction, and therefore $\#(\mathbf{m}) = 0$.

Now, we have proved that $\#(\mathbf{m}) = 0$ if and only if $C(\mathbf{m}) = C(\mathbf{m}^*)$. If $\#(\mathbf{m}) = 0$, then $C(\mathbf{m}^*) = C(\mathbf{m})$, and therefore \eqref{eq:a.3} yields $\mathscr{X}a_{\mathbf{m}} = 0$. But $\mathrm{ord}(\Lambda) > 1$, thus $a_{\mathbf{m}}$ is a constant according to Lemma \ref{lem:a.0}.
\end{proof}

\begin{lem} \label{lem:c}
Let $\Lambda$ be an essential expansion of $\mathscr{X}$, $A\in \Lambda$ with the lowest rank and $r = \mathrm{ord}(\Lambda) > 1$. Let $\mathbf{m}^*\in \mathcal{I}_A$ be of the highest degree. Then for any $\mathbf{m}\in \mathcal{I}_A$, $C(\mathbf{m})\leq C(\mathbf{m}^*)$.
\end{lem}

\begin{proof}
Otherwise, there is $\mathbf{m}\in\mathcal{I}_A$ such that $C(\mathbf{m}) > C(\mathbf{m}^*)$, and then $\#(\mathbf{m}) \geq 1$ by Lemma \ref{lem:a.10}. Thus, there is an $\mathbf{m}_1\in \mathcal{P}(\mathbf{m})$ with $C(\mathbf{m}_1) > C(\mathbf{m}) > C(\mathbf{m}^*)$. Repeating the procedure, we obtain an infinite sequence $\{\mathbf{m}_k\}_{k=1}^{\infty}\subseteq \mathcal{I}_A$. This contradicts the fact that $\mathcal{I}_A$ is a finite set, and the Lemma is concluded.
\end{proof}

\begin{lem} \label{lem:a.11}
Assume $r= \mathrm{ord}(\mathscr{X}) \geq 3$. Let $\Lambda$ be an essential expansion of $\mathscr{X}$, $A\in \Lambda$ with the lowest rank, and $\mathbf{m}^* = (m_0^*, m_1^*, \cdots,m_r^*)\in \mathcal{I}_A$ of the highest degree with $a_{\mathbf{m}^*} = 1$. Then $m_1^*>0$ and $m_2^* = 0$.
\end{lem}

\begin{proof}
(1). If $m_1^* = 0$, we can write $\mathbf{m}^*$ as
$$\mathbf{m}^* = (m_0^*, 0,\cdots, 0, m_k^*, \cdots, m_r^*),$$
where $1 < k \leq r$ and $m_k^* >0 $.
Let
$$\mathbf{m} = \Delta_{1,k}(\mathbf{m}^*) = (m_0^*,0,\cdots,0,1,m_k^*-1,m_{k+1}^*,\cdots,m_r^*).$$
It is easy to see that $\mathcal{P}(\mathbf{m}) = \{\mathbf{m}^*\}$. Hence,
$$\mathscr{X}a_{\mathbf{m}} = b_0 a_\mathbf{m} - m_k^* c_{1,k} b_1$$
from Lemma \ref{lem:a.8}. Here we have used $C(\mathbf{m}^*) - C(\mathbf{m}) = 1$ and $a_{\mathbf{m}^*} = 1$. Letting
$$a = -\dfrac{a_{\mathbf{m}}}{m_k^* c_{1,k}},$$
we obtain
$$\mathscr{X} a = b_0 a + b_1.$$
Thus, we have $\mathrm{ord}(\mathscr{X}) \leq 2$ from Lemma \ref{lem:a.4}, which contradicts $r \geq 3$.

(2). If $m_2^*>0$, let
$$\mathbf{p} = \Delta_{1,2}(\mathbf{m}^*) = (m_0^*, m_1^*+1,m_2^*-1,m_3^*,\cdots,m_r^*).$$
It is easy to verify that $\mathcal{P}(\mathbf{p}) = \{\mathbf{m^*}\}$, as follows. First, since $\Delta_{1,2}(\mathbf{m^*}) = \mathbf{p}$, we have $\mathbf{m^*}\in\mathcal{P}(\mathbf{p})$. Second, if there were any other $\mathbf{m'}\in \mathcal{P}(\mathbf{p})$, then $\Delta_{i,j}(\mathbf{m'}) =\mathbf{p}$ for some $(i,j)\not= (1,2)$; hence $j>2$, which yields $\mathbf{m}' >\mathbf{m^*}$ and contradicts the assumption that $\mathbf{m}^*$ has the highest degree. Hence, we have
$$\mathscr{X}a_{\mathbf{p}} = (C(\mathbf{m}^*) - C(\mathbf{p})) b_0 a_{\mathbf{p}} - c_{1,2} m_2^* b_1a_{\mathbf{m}^*} $$
from Lemma \ref{lem:a.8}. Arguing as in (1), we obtain $\mathrm{ord}(\mathscr{X})\leq 2$, which contradicts the assumption. Thus, we must have $m_2^* = 0$.
\end{proof}

\subsection{Proof of Theorem \ref{th:1}}

Now, we are ready to prove our main Theorem.

\begin{proof}[Proof of Theorem \ref{th:1}]
Let $\Lambda$ be an essential expansion of $\mathscr{X}$, let $A\in \Lambda$ be of the lowest rank, let $\mathbf{m}^*\in \mathcal{I}_A$ be of the highest degree, and assume $a_{\mathbf{m}^*} = 1$.

(1). If $r = 0$, let $n = m_0^*$, so that we can write $A$ as
$$A = y^n + a_1 y^{n-1} + \cdots + a_n,\quad (a_i \in K, i = 1,\cdots, n),$$
with at least one $a_i\in K\backslash\mathbb{C}$. Thus, the equation \eqref{eq:a.3} implies
$$\mathscr{X}a_i = 0.$$
Letting
$$B = y - a_i,$$
the non constant function $\omega = a_i$ satisfies the equations
\begin{equation}
\mathscr{X}\omega = 0,\quad \mathscr{D}_{B}\omega = 0.
\end{equation}
Hence, $\{X, B\}$ is a nontrivial expansion of $\mathscr{X}$ of order $0$, and (1) is concluded.

(2). If $r = 1$, we argue that there exists $\mathbf{m}\in \mathcal{I}_A$, with $\mathbf{m} < \mathbf{m}^*$, such that $C(\mathbf{m}) \not= C(\mathbf{m}^*)$. Otherwise, $C(\mathbf{m}) = C(\mathbf{m}^*)$ for every $\mathbf{m}\in \mathcal{I}_A$, and then $A$ must have the form $A = (\delta_2y)^{n}p(y)$, where $n = C(\mathbf{m}^*)$ and $p(y)$ is a polynomial of $y$ with coefficients in $K$. Thus, letting $\omega$ be a non constant solution of
$$\mathscr{X} y = 0,\quad \mathscr{D}_A y = 0,$$
we would have either
$$ \mathscr{X} \omega = 0,\quad \delta_2 \omega = 0, \quad \mathrm{or}\quad \mathscr{X} \omega = 0,\quad p(\omega) = 0. $$
But neither is possible, because the former case implies $X_1\equiv 0$ and the latter case implies $\mathrm{ord}(\Lambda) = 0$, both in contradiction to our assumptions. Now, let $\mathbf{m}$ be such that $C(\mathbf{m}) \not = C(\mathbf{m}^*)$. We note that $\#(\mathbf{m}) = 0$; thus, the equation \eqref{eq:a.3} yields
$$\mathscr{X}a_{\mathbf{m}} = (C(\mathbf{m}^*) - C(\mathbf{m})) b_0 a_\mathbf{m}.$$
From Lemma \ref{lem:a.3}, letting $n = C(\mathbf{m}^*) - C(\mathbf{m})$, $a = {a_{\mathbf{m}}}^{|n|/n}$, and
$$B = (\delta_2 y)^{|n|} - a,$$
the ideal $\{X, B\}$ is a nontrivial expansion of $\mathscr{X}$, and hence (2) is proved.

(3).
If $r = 2$, let $\mathbf{m}^* = (m_0^*, m_1^*,m_2^*)$ and $\mathbf{m} = \Delta_{1,2}(\mathbf{m}^*) = (m_0^*, m_1^*+1,m_2^* - 1)$. It is easy to verify that $\mathcal{P}(\mathbf{m}) = \{\mathbf{m}^*\}$. Thus, from Lemma \ref{lem:a.8}, we have
$$\mathscr{X}a_{\mathbf{m}} = b_0 a_{\mathbf{m}} - m_2^* c_{1,2} b_1.$$
Here we note that $C(\mathbf{m}^*) - C(\mathbf{m}) = 1$ and $a_{\mathbf{m}^*} = 1$. Letting
$$a = -\dfrac{a_{\mathbf{m}}}{m_2^* c_{1,2}},$$
$a$ satisfies
$$\mathscr{X} a = b_0 a + b_1.$$
From Lemma \ref{lem:a.4}, letting
$$B = \delta_2^2 y - a \delta_2 y,$$
the differential ideal $\{X, B\}$ is a nontrivial expansion of $\mathscr{X}$.

(4). If $r = 3$, we write $\mathbf{m}^* = (m_0^*, m_1^*, m_2^*, m_3^*)$. From Lemma \ref{lem:a.11}, we have $m_1^*>0$ and $m_2^* = 0$, i.e., $\mathbf{m}^* = (m_0^*, m_1^*, 0, m_3^*)$. Let
$$\mathbf{p} = \Delta_{1,3}(\mathbf{m}^*) = (m_0^*, m_1^*, 1, m_3^*-1),$$
$$ \mathbf{m} = {\Delta_{1,2}^{-1}}(\mathbf{p}) = (m_0^*, m_1^*-1,2, m_3^*-1),$$
$$ \mathbf{q} = \Delta_{1,2}(\mathbf{p}) = (m_0^*, m_1^*+1,0,m_3^*-1).$$
It is easy to see that $C(\mathbf{m}) = C(\mathbf{m}^*)$. Therefore, from Lemma \ref{lem:a.10}, $a_{\mathbf{m}}$ is a constant. Furthermore, we have $\mathcal{P}(\mathbf{p}) \subseteq \{\mathbf{m}^*,\mathbf{m}\}$ and $\mathcal{P}(\mathbf{q}) \subseteq \{\mathbf{m}^*,\mathbf{p}\}$. Applying Lemma \ref{lem:a.8} to $a_{\mathbf{p}}$ and $a_{\mathbf{q}}$, respectively, and noticing that $C(\mathbf{m}^*) - C(\mathbf{p}) =1 $ and $C(\mathbf{m}^*) - C(\mathbf{q}) = 2$, we have
\begin{equation} \label{eq:a.6}
\mathscr{X}a_{\mathbf{p}} = b_0 a_{\mathbf{p}} - (m_3^* c_{1,3} a_{\mathbf{m}^*} + 2 c_{1, 2} a_{\mathbf{m}}) b_1,
\end{equation}
and
\begin{equation} \label{eq:a.7}
\mathscr{X}a_{\mathbf{q}} = 2 b_0 a_{\mathbf{q}} - (m_3^* c_{2,3} b_2 a_{\mathbf{m}^*} + c_{1,2} b_1 a_{\mathbf{p}}).
\end{equation}
Since $a_{\mathbf{m}^*}$ and $a_\mathbf{m}$ are constants, we must have $m_3^* c_{1,3} a_{\mathbf{m}^*} + 2 c_{1,2} a_{\mathbf{m}} = 0$ and $a_{\mathbf{p}} = 0$; otherwise, we would have $r \leq 2$ by Lemma \ref{lem:a.3} or Lemma \ref{lem:a.4}. Setting $a_{\mathbf{p}} = 0$ and $a_{\mathbf{m}^*}= 1$ in \eqref{eq:a.7} and letting
$$a = -\dfrac{a_{\mathbf{q}}}{m_3^* c_{2,3}},$$
we see that $a$ satisfies
$$\mathscr{X} a = 2 b_0 a + b_2.$$
From Lemma \ref{lem:a.15}, letting
$$B = 2 (\delta_2y)(\delta_2^3 y) - 3 (\delta_2^2 y)^2 - 2 a (\delta_2 y)^2,$$
which is of the form \eqref{eq:10}, the differential ideal $\{X, B\}$ is a nontrivial expansion of $\mathscr{X}$, and (4) is proved.

(5). If $r > 3$, we will show that $r = \infty$. Otherwise, if $r$ is finite, then Lemma \ref{lem:a.11} yields $m_1^*>0$ and $m_2^* = 0$, and therefore $\mathbf{m}^*$ can be written as
$$\mathbf{m}^* = (m_0^*, m_1^*, 0,\cdots, 0, m_k^*, \cdots, m_r^*),$$
where $2 < k \leq r$ and $m_1^*, m_k^* >0 $. We have the following.
\begin{enumerate}
\item[(a)] If $k = 3$, then
$$\mathbf{m}^* = (m_0^*, m_1^*,0,m_3^*,m_4^*,\cdots, m_r^*).$$
Let
\begin{eqnarray*}
\mathbf{p}&=& \Delta_{1,3}(\mathbf{m}^*) = (m_0^*,m_1^*,1,m_3^*-1,m_4^*,\cdots,m_r^*)\\
\mathbf{m}&=&\Delta_{1,2}^{-1}(\mathbf{p}) = (m_0^*,m_1^*-1,2,m_3^*-1,m_4^*,\cdots,m_r^*)\\
\mathbf{q} &=& \Delta_{1,2}(\mathbf{p}) = (m_0^*,m_1^*+1,0,m_3^*-1,m_4^*,\cdots,m_r^*).
\end{eqnarray*}
Then $\#(\mathbf{m}) = 0$, $\mathcal{P}(\mathbf{p}) \subseteq \{\mathbf{m}^*, \mathbf{m}\}$ and $\mathcal{P}(\mathbf{q}) \subseteq \{\mathbf{m}^*, \mathbf{p}\}$. Following the discussion in (4), we have $\mathrm{ord}(\mathscr{X})\leq 3$, which contradicts $r>3$.
\item[(b)] If $k>3$, let
\begin{eqnarray*}
\mathbf{p}&=&\Delta_{1,k}(\mathbf{m}^*) =(m_0^*,m_1^*,0,\cdots,1,m_k^*-1,\cdots,m_r^*)\\
\mathbf{m}&=&\Delta_{k-2,k-1}^{-1}(\mathbf{p}) = (m_0^*,m_1^*-1,0,\cdots,2,m_k^*-1,\cdots,m_r^*).
\end{eqnarray*}
Then $\mathcal{P}(\mathbf{p})\subseteq \{\mathbf{m}^*,\mathbf{m}\}$. Therefore,
$$\mathscr{X}a_{\mathbf{p}} = b_0 a_{\mathbf{p}} - (m_k^* c_{1,k} b_1+ 2 c_{k-2,k-1} b_{k-2} a_{\mathbf{m}}).$$
Thus, we have $a_{\mathbf{m}}\not=0$, i.e., $\mathbf{m}\in \mathcal{I}_A$; otherwise we would have $\mathrm{ord}(\mathscr{X}) \leq 2$ as before. Furthermore, we have
$$C(\mathbf{m}) = C(\mathbf{m}^*) + k-3 > C(\mathbf{m}^*),$$
which contradicts Lemma \ref{lem:c}.
\end{enumerate}
Thus, the above arguments show that $r$ must be $\infty$, and the Theorem is proved.
\end{proof}

\section{Applications} \label{sec:appl}

In this section, we apply the previous results to the classification of polynomial differential equations \eqref{eq:3} and give some examples. First, from the proofs of Lemmas \ref{lem:a.3} - \ref{lem:a.15}, an explicit method for determining the class of a polynomial differential equation \eqref{eq:3} can be given as follows.

\begin{thm} \label{th:2}
Consider the polynomial differential equation \eqref{eq:3}, let
\begin{equation} \label{eq:Bi}
b_i = -X_1\delta_2^{i+1}(\frac{X_2}{X_1}),\quad (i = 0,1,2)
\end{equation}
and let $r$ be the order of the corresponding differential operator \eqref{eq:1}. Then:
\begin{enumerate}
\item[(1)] $r=0$ if, and only if, $K$ contains a first integral of \eqref{eq:3};
\item[(2)] $r=1$ if, and only if, $K$ contains no first integral of \eqref{eq:3}, and there exist $a\in K\backslash\{0\}$ and $n\in \mathbb{Z}\backslash\{0\}$ such that
\begin{equation} \label{eq:31}
\mathscr{X} a = n b_0 a.
\end{equation}
In this case, \eqref{eq:3} has an integrating factor
\begin{equation}
\eta = \dfrac{a^{1/n}}{X_1}.
\end{equation}
\item[(3)] $r=2$ if, and only if, \eqref{eq:31} is not satisfied by any $a\in K\backslash \{0\}$ and $n\in \mathbb{Z}\backslash\{0\}$, and there exists $a\in K$ such that
\begin{equation} \label{eq:32}
\mathscr{X} a = b_0 a + b_1.
\end{equation}
In this case, \eqref{eq:3} has an integrating factor of the form
\begin{equation} \label{eq:3.5}
\eta = \dfrac{1}{X_1}\exp\left[\int_{(x_1^0,x_2^0)}^{(x_1,x_2)} \dfrac{a}{X_1}\left( X_1 d x_2 -(X_2 a + b_0) d x_1 \right)\right].
\end{equation}
\item[(4)] $r=3$ if, and only if, \eqref{eq:32} is not satisfied by any $a\in K$, and there exists $a\in K$ such that
\begin{equation} \label{eq:33}
\mathscr{X}a = 2 b_0 a + b_2.
\end{equation}
In this case, \eqref{eq:3} has an integrating factor of the form
\begin{equation}
\eta = \dfrac{1}{X_1} \exp\left[\int_{(x_1^0,x_2^0)}^{(x_1,x_2)}(-\delta_2 \dfrac{X_2}{X_1} - \dfrac{X_2}{X_1} u) d x_1 + u d x_2\right],
\end{equation}
where $u$ is a solution of the following system of partial differential equations
\begin{equation}
\left\{
\begin{array}{rcl}
\delta_1 u &=& - \delta_2^2 \dfrac{X_2}{X_1} - \dfrac{X_2}{X_1}a - (\delta_2 \dfrac{X_2}{X_1}) u - \dfrac{1}{2}(\dfrac{X_2}{X_1})u^2\\
\delta_2 u &=& a + \dfrac{1}{2} u^2.
\end{array}
\right.
\end{equation}
\item[(5)] $r=\infty$ if, and only if, \eqref{eq:33} is not satisfied by any $a\in K$.
\end{enumerate}
\end{thm}

The proof is straightforward from the previous section and is omitted here. We will give examples for each of the classes in Theorem \ref{th:2}.
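Before turning to the examples, we record a simple observation: several of the examples below are written with $X_1 = 1$, i.e., $\dot{x}_1 = 1$, and in this case the quantities \eqref{eq:Bi} reduce to
$$b_i = -\delta_2^{\,i+1} X_2,\quad (i = 0,1,2).$$
In particular, $b_2 = 0$ whenever $X_2$ is a polynomial of degree at most $2$ in $x_2$; this observation is used below for the Riccati equation.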
It is easy to see that all equations \begin{equation} \frac{d x_1}{d t} = 1,\quad \dfrac{d x_2}{d t} = p(x_1), \end{equation} with $p(x_1)$ a polynomial, have order $r=0$. The general homogenous linear equations\footnote{Here by general we mean most equations of this form.} \begin{equation} \frac{d x_1}{d t} = 1,\quad \dfrac{d x_2}{d t} = p(x_1) x_2, \end{equation} with $p(x_1)$ a rational function, have order $r = 1$, and the general non homogenous linear equations \begin{equation} \frac{d x_1}{d t} = 1,\quad \dfrac{d x_2}{d t} = p(x_1) x_2 + q(x_1), \end{equation} where $p(x_1)$ and $q(x_1)$ are rational functions, have order $r = 2$. In following, we will show that the general Riccati equation is an example of order $r = 3$. \begin{prop} \label{cor:3} The general Riccati equations \begin{equation} \label{eq:riccati} \dfrac{d x_1}{d t} = 1,\quad \frac{d x_2}{d t} = p_2(x_1) x_2^2 + p_1(x_1) x_2 + p_0(x_1), \end{equation} where $p_i(x), (i=0,1,2)$ are rational functions, have order $r = 3$. \end{prop} \begin{proof} We have known that the general Riccati equation \eqref{eq:riccati} does not have Liouvillian first integral (refer \cite{L:41} and\cite{Singer:92}), and hence the order $r $ is either $3$ or $\infty$ according to \cite{Singer:92}. From the equation \eqref{eq:riccati}, we have $X_1 = 1$ and $X_2 = p_2(x_1) x_2^2 + p_1(x_1) x_2 + p_0(x_1).$ Thus, we have $b_2 = 0$ from \eqref{eq:Bi}, and the equation \eqref{eq:33} has solution $a = 0$, therefore the order is $3$. \end{proof} Finally, we will show an example of differential equation with order $r=\infty$. Consider the van der Pol equation \begin{equation} \label{vdp} \left\{\begin{array}{rcl} \dot{x}_1 &=& x_2 - \mu (\dfrac{x_1^3}{3} - x_1),\\ \dot{x}_2 &=& -x_1 \end{array}\right. \ \ \ (\mu \not = 0). \end{equation} The van der Pol equation is well known for its existence of a limit cycle. Following Lemma was proved independently by Cheng et al.\cite{Cheng:95} and Odani\cite{Oda:95}, respectively. \begin{lem} (\cite{Cheng:95} and \cite{Oda:95}) \label{lem:vdp} The system of the van der Pol equation \eqref{vdp} has no algebraic solution curves. In particular, the limit cycle is not algebraic. \end{lem} \begin{prop} \label{cor:2} The order of the van der Pol equation \eqref{vdp} is $r=\infty$. \end{prop} \begin{proof} Let $$X_1(x_1,x_2) = x_2 - \mu (\dfrac{x_1^3}{3} - x_1),\quad \ X_2(x_1,x_2) = -x_1,$$ then the equation \eqref{eq:33} for the van der Pol equation \eqref{vdp} reads \begin{equation} \label{eq:37} X_1^3 \mathscr{X}a + 2 x_1 X_1^2 a + 6 x_1 = 0. \end{equation} We only need to show that \eqref{eq:37} has no rational function solution $a$. If on the contrary, \eqref{eq:37} has a rational function solution $a = a_1/a_2$, where $a_1, a_2$ are relatively prime polynomials, then $a_1$ and $a_2$ satisfy $$X_1^3 (a_2 \mathscr{X} a_1 - a_1 \mathscr{X}a_2) + 2 x_1 X_1^2 a_1 a_2 + 6 x_1 a_2^2 = 0,$$ i.e. $$a_2 (X_1^3 \mathscr{X}a_1 + 2 x_1 X_1^2 a_1 + 6 x_1 a_2) = a_1 X_1^3 \mathscr{X}a_2.$$ Hence, there exist a polynomial $c(x_1,x_2)$, such that \begin{eqnarray} \label{eq:38} X_1^3 \mathscr{X} a_2 &=& c a_2,\\ \label{eq:39} X_1^3 \mathscr{X} a_1 &=& (c - 2 x_1 X_1^2) a_1 - 6 x_1 a_2. \end{eqnarray} Let $a_2 = X_1^k p$, where $k$ is the maximum integer such that the polynomial $p$ does not contain $X_1$ as a factor. 
Substitute $a_2$ into \eqref{eq:38}, we have $$X_1^3 \mathscr{X} p = p (c - k X_1^2 \mathscr{X} X_1).$$ Thus, $p| (X_1^3 \mathscr{X}p)$, and therewith $p | \mathscr{X}p$ because $X_1$ is a prime polynomial and $p$ does not contain $X_1$ as a factor. Therefore, either $p$ is a constant or the planar curve defined by $p(x_1,x_2)=0$ is an algebraic invariant curve of the van der Pol equation \eqref{vdp}. However, Lemma \ref{lem:vdp} has excluded the latter case. Therefore, $p$ must be a constant. We can let $p = 1$ without loss of generality, and therefore \begin{equation} \label{eq:40} a_2 = X_1^k,\quad c = k X_1^2\, \mathscr{X}X_1. \end{equation} Substitute \eqref{eq:40} into \eqref{eq:39}, we have \begin{equation} \label{eq:41} X_1^3 \mathscr{X}a_1 = (k \mathscr{X}X_1 - 2 x_1 ) X_1^2 a_1 - 6 x_1 X_1^k,\quad (k\geq 0). \end{equation} Note that $$(k \mathscr{X}X_1 - 2 x_1) = -k \mu (x_1^2 - 1) X_1 - (k + 2) x_1,$$ \eqref{eq:41} can be rewritten as \begin{equation} \label{eq:kk} X_1^3 \mathscr{X}a_1 = -k \mu (x_1^2 - 1) X_1^3 a_1 - (k + 2) x_1 X_1^2 a_1 - 6 x_1 X_1^k. \end{equation} From \eqref{eq:kk}, we claim that $k = 2$. If otherwise, we should have $X_1|(k+2) x_1a_1$ if $k > 2$, or $X_1| 6 x_1$ if $k < 2$, which are not possible. Let $k = 2$, then equation \eqref{eq:41} becomes \begin{equation} X_1 \mathscr{X}a_1 = (2 \mathscr{X}X_1 - 2 x_1 ) a_1 - 6 x_1, \end{equation} which gives \begin{equation} \label{eq:42} \begin{array}{rl} &\left(x_2 - \mu (\dfrac{x_1^3}{3} - x_1)\right) \left((x_2 - \mu (\dfrac{x_1^3}{3} - x_1)) \dfrac{\partial a_1}{\partial x_1} -x_1 \dfrac{\partial a_1}{\partial x_2}\right)\\ = &\left(-2 \mu (x_1^2 - 1) (x_2 - \mu (\dfrac{x_1^3}{3} - x_1)) - 4 x_1\right) a_1 - 6 x_1. \end{array} \end{equation} Let \begin{equation} \label{eq:a1} a_1(x_1,x_2) = \sum_{i = 0}^m h_i(x_2) x_1^i, \end{equation} where $h_i(x_2)$ are polynomials and $h_m(x_2) \not=0$. Substituting \eqref{eq:a1} into \eqref{eq:42}, and comparing the coefficient of $x_1^{m+5}$, we have $$\frac{1}{9} \mu^2\, m\, h_m(x_2) = \frac{2}{3} \mu^2 h_m(x_2),$$ which implies $m = 6$. Hence, we have $7$ coefficients $h_i(x_2), (i = 0,\cdots , 6)$ to be determined, which are all polynomials of $x_2$. 
Next, comparing the coefficients of $x_1^{i}\ ( 0\leq i\leq 10)$, we obtain following 11 differential-algebra equations for the coefficients: \begin{eqnarray*} 0 &=& x_2 ( -2 \mu h_0(x_2) + x_2 h_1(x_2) )\\ 0 &=& 6 - 2 ( -2 + {\mu }^2) h_0(x_2) + 2 x_2^2 h_2(x_2) - x_2 h_0'(x_2)\\ 0 &=& 2 \mu x_2 h_0(x_2) - ( -4 + {\mu }^2 ) h_1(x_2) + 2 \mu x_2 h_2(x_2) + 3 x_2^2 h_3(x_2)\\ &&{} - \mu h_0'(x_2) - x_2 h_1'(x_2)\\ 0 &=& \frac{8 \mu^2}{3} h_0(x_2) + \frac{4 \mu x_2 }{3} h_1(x_2) + 4 h_2(x_2) + 4 \mu x_2 h_3(x_2) + 4 x_2^2 h_4(x_2)\\ &&{} - \mu h_1'(x_2) - x_2 h_2'(x_2)\\ 0 &=& 2 {\mu }^2 h_1(x_2) + \frac{2 \mu x_2 }{3} h_2(x_2) + 4 h_3(x_2) + {\mu }^2 h_3(x_2) + 6 \mu x_2 h_4(x_2)\\ &&{} + 5 x_2^2 h_5(x_2) + \frac{\mu }{3} h_0'(x_2) - \mu h_2'(x_2) - x_2 h_3'(x_2)\\ 0 &=& \frac{1}{3}(-2 {\mu }^2 h_0(x_2) + 4 {\mu }^2 h_2(x_2) + 12 h_4(x_2) + 6 {\mu }^2 h_4(x_2) + 24 \mu x_2 h_5(x_2)\\ &&{} + 18 x_2^2 h_6(x_2) + \mu h_1'(x_2) - 3 \mu h_3'(x_2) - 3 x_2 h_4'(x_2))\\ 0 &=& \frac{1}{9} (-5 {\mu }^2 h_1(x_2) + 6 {\mu }^2 h_3(x_2) - 6 \mu x_2 h_4(x_2) + 36 h_5(x_2) + 27 {\mu }^2 h_5(x_2)\\ &&{} + 90 \mu x_2 h_6(x_2) + 3 \mu h_2'(x_2) - 9 \mu h_4'(x_2) - 9 x_2 h_5'(x_2))\\ 0 &=& -\frac{4 {\mu }^2}{9} h_2(x_2) - \frac{4 \mu x_2 }{3} h_5(x_2) + 4 h_6(x_2) + 4 {\mu }^2 h_6(x_2) + \frac{\mu }{3} h_3'(x_2)\\ &&{} - \mu h_5'(x_2) - x_2 h_6'(x_2)\\ 0 &=& -\frac{\mu}{3} ( \mu h_3(x_2) + 2 \mu h_5(x_2) + 6 x_2 h_6(x_2) - h_4'(x_2) + 3 h_6'(x_2) )\\ 0 &=& -\frac{\mu}{9} ( 2 \mu h_4(x_2) + 12 \mu h_6(x_2) - 3 h_5'(x_2) )\\ 0 &=&- \frac{\mu}{9} ( \mu h_5(x_2) - 3 h_6'(x_2)) \end{eqnarray*} The above equations yield the following \begin{equation} \label{eq:hm} x_2 (3 x_2 h_5'(x_2) - 2 \mu h_4'(x_2)) = 2 \mu^3. \end{equation} But \eqref{eq:hm} can not be satisfied because $h_4(x_2)$ and $h_5(x_2)$ are polynomials, and the left hand side contains a factor $x_2$, while the right hand side does not. Thus, we conclude that \eqref{eq:37} has no rational function solution, and hence the order of the van der Pol equation is infinity from Theorem \ref{th:2}. \end{proof} \section*{Appendix} \begin{lem} \label{le:app} Consider following partial differential equations \begin{equation} \label{eq:app.1} \left\{ \begin{array}{rcl} \dfrac{\partial u}{\partial x_1} &=& f(x_1,x_2,u)\\ \dfrac{\partial u}{\partial x_2} &=& g(x_1,x_2,u) \end{array} \right. \end{equation} Let $$D_1 = \dfrac{\partial\ }{\partial x_1} + f(x_1,x_2,u)\dfrac{\partial\ }{\partial u},\quad D_2 = \dfrac{\partial\ }{\partial x_2} + g(x_1,x_2,u)\dfrac{\partial\ }{\partial u}.$$ If the functions $f$ and $g$ are analytic, and satisfy \begin{equation} \label{eq:DD} D_2 f(x_1,x_2,u) \equiv D_1 g(x_1,x_2,u), \end{equation} in a neighborhood of $(0,0,0)$, then the equation \eqref{eq:app.1} has a unique solution $u = u(x_1,x_2)$ that is analytic on a neighborhood of $(0,0)$ and $u(0,0) = 0$. \end{lem} \begin{proof} Without loss of generality, we assume that $f$ and $g$ are analytic in $$\Omega = \{(x,y,u)\in \mathbb{C}^3 \Big| |x_1| + |x_2| + |u| \leq \rho\},$$ where $\rho$ is positive. Then we can write $f(x_1,x_2,u)$ and $g(x_1,x_2,u)$ as power series \begin{equation} f(x_1,x_2,u) = \sum_{i,j,k} f_{i,j,k} x_1^i x_2^ju^k \end{equation} and \begin{equation} g(x_1,x_2,u)=\sum_{i,j,k} g_{i,j,k} x_1^i x_2^j u^k, \end{equation} respectively, and these series are convergent in $\Omega$. 
Let \begin{equation} \label{eq:app.u} u(x_1,x_2) = \sum_{i=0}^\infty \sum_{j=0}^\infty u_{i,j} x_1^i x_2^j,\quad (u_{0,0}= 0),\end{equation} and substitute it into \eqref{eq:app.1}, we have the following equations \begin{eqnarray} \label{eq:app.2} \sum_{i,j} i u_{i,j}x_1^{i-1}x_2^j &=& \sum_{i,j,k} f_{i,j,k} x_1^i x_2^j (\sum_{p,q} u_{p,q}x_1^px_2^q)^k\\ \label{eq:app.3} \sum_{i,j} j u_{i,j} x_1^i x_2^{j-1} &=& \sum_{i,j,k} g_{i,j,k} x_1^i x_2^j (\sum_{p,q} u_{p,q} x_1^p x_2^q)^k. \end{eqnarray} First, from \eqref{eq:app.2} and comparing the coefficients of the same degrees of $x_1^m,\ (m \geq 1)$, we have \begin{equation} \label{eq:app.4} u_{1,0} = f_{0,0,0}, \end{equation} and \begin{equation} \label{eq:app.5} u_{m,0} = \dfrac{1}{m!} D_1^{m-1} f(x_1,x_2,u(x_1,x_2))|_{(x_1,x_2) = (0,0)}. \end{equation} Next, from \eqref{eq:app.3} and comparing the coefficients of the same degrees of $x_1^m x_2^n\ (n\geq 1)$, we have \begin{equation} \label{eq:app.6} u_{0,1} = g_{0,0,0}, \end{equation} and \begin{equation} \label{eq:app.7} u_{m,n} = \dfrac{1}{m! n!} D_1^mD_2^{n-1} g(x_1,x_2,u(x_1,x_2))|_{(x_1,x_2) =(0,0)}. \end{equation} The right hand side of \eqref{eq:app.5} is a polynomial of $u_{i,0}$ with $i<m$. Thus, the coefficients $u_{m, 0}\ (m>0)$ are well defined by \eqref{eq:app.4} and \eqref{eq:app.5} step by step. Similarly, the right hand side of \eqref{eq:app.7} is a polynomial of the coefficients $u_{i,j}$ with $i < n$, $j\leq m$ an $i+j \leq m+ n - 1$. Thus, the coefficients of form $u_{m,n}\ (n\geq 1)$ can be determined by \eqref{eq:app.6}, \eqref{eq:app.7}, and the coefficients $u_{m,0}$ obtained previously. Thus, the coefficients in the power series \eqref{eq:app.u} are well defined and unique. Convergency of this power series can be proved by the Method of Majorants as follows. Let $$M = \max_{(x,y,u)\in \Omega} \{|f(x_1,x_2,u)|, |g(x_1,x_2,u)|\}, $$ then \begin{equation} F(x,y,u) = \dfrac{M}{1 -\dfrac{x_1+x_2 + u}{\rho}} \end{equation} is a majorant function of both $f(x_1,x_2,u)$ and $g(x_1,x_2,u)$. Thus, following equation \begin{equation} \label{eq:app.m} \left\{ \begin{array}{rcl} \dfrac{\partial u}{\partial x_1} &=& F(x_1,x_2,u)\\ \dfrac{\partial u}{\partial x_2} &=& F(x_1,x_2,u) \end{array} \right. \end{equation} majorize the equation \eqref{eq:app.1}. It is easy to verify that, the equation \eqref{eq:app.m} has an analytic solution $u(x_1,x_2) = U(x_1+x_2)$, with $U(z)$ the analytic solution of \begin{equation} \dfrac{d U}{d z} = \dfrac{M}{1 - \dfrac{z + U}{\rho}},\quad U(0) = 0. \end{equation} Thus, the convergency of \eqref{eq:app.u} is concluded by the Method of Majorants. Therefore, the function $u(x_1,x_2)$ given by \eqref{eq:app.u} is well defined in $\Omega$. Finally, we need to show that the function $u(x_1,x_2)$ obtained above satisfies \eqref{eq:app.1}. We note that when $m \geq 1$, \eqref{eq:DD} yields $$D_1^m D_2^{n-1} g = D_1^{m-1} D_2^{n-1} D_1 g = D_1^{m-1} D_2^{n-1} D_2 f = D_1^{m-1} D_2^n f.$$ Thus, \eqref{eq:app.7} is equivalents to \begin{equation} \label{eq:app.8} u_{m,n} = \dfrac{1}{m! n!} D_1^{m-1} D_2^{n} f(x_1,x_2,u(x_1,x_2))|_{(x_1,x_2) = (0,0)}. \end{equation} Therefore, from \eqref{eq:app.4}-\eqref{eq:app.7} and \eqref{eq:app.8}, the function $u(x_1,x_2)$ satisfies both equations in \ref{eq:app.1}. The Lemma has been proved. \end{proof} \end{document}
\begin{document}
\title{An approach based on the wave equation in the time domain for active shielding of an unwanted wave with a fixed frequency}
\author{Masaru IKEHATA\footnote{Laboratory of Mathematics, Graduate School of Advanced Science and Engineering, Hiroshima University, Higashihiroshima 739-8527, JAPAN}
\footnote{Emeritus Professor at Gunma University}
}
\maketitle
\begin{abstract}
An approach for shielding an unwanted wave with a fixed frequency by generating a suitably controlled nontrivial wave with the same frequency is suggested. Unlike the well-known surface potential approach, the source of the controlled wave is given by solving the Cauchy problem for the wave equation in a finite time domain.

\noindent
AMS: 35J05, 35L05, 35C15, 31B10, 35R30

\noindent
KEY WORDS: Shielding, unwanted wave, Helmholtz equation, wave equation, Huygens's principle, time domain enclosure method, virtual sound barriers

\end{abstract}

\section{Introduction}

Assume that we are hearing an unwanted sound wave with a known fixed frequency caused by a known source. The support of the source is contained in the closure of a bounded open subset of ${\rm \bf R}^3$ which we denote by $D$. We are standing outside a known bounded domain $\Omega$ that contains $\overline D$. The problem considered in this paper is: can one shield the wave outside the domain $\Omega$ {\it completely}, by adding another wave generated by a controlled source which is supported in $\overline\Omega$ and different from that of the unwanted wave?

This is a typical problem of active shielding of a given unwanted wave. This note is concerned with the methodology of {\it virtual sound barriers} \cite{Q}, which is a special form of active noise control system and avoids blocking air, light, and access for shielding. We suggest one natural approach which is based on the wave equation in a finite time domain. To the best of our knowledge, our approach is new and, in particular, is not listed in the book \cite{Q}; see pages 26-33 therein.

Now let us formulate the problem more precisely. We assume that the unwanted wave satisfies the inhomogeneous Helmholtz equation
$$\begin{array}{ll}
\displaystyle
(\Delta+k^2)w+F(x)=0, & x\in{\rm \bf R}^3
\end{array}
$$
and the outgoing radiation condition
$$\begin{array}{lll}
\displaystyle
\lim_{r\rightarrow\infty}r\left(\frac{\partial}{\partial r}-ik\right)w(r\omega)=0, & \displaystyle r=\vert x\vert, & \displaystyle \omega=\frac{x}{\vert x\vert},
\end{array}
$$
where $k$ is a positive number, $F\in L^2({\rm \bf R}^3)$ with $\mbox{supp}\,F\subset\overline D$, and the limit is uniform with respect to $\omega$. It is known that, by virtue of the radiation condition, $w$ has the expression
$$\begin{array}{ll}
\displaystyle
w(x)=\frac{1}{4\pi}\int_D\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,F(y)dy, & x\in{\rm \bf R}^3.
\end{array}
\tag {1.1}
$$
See \cite{CK}. We consider the following problem.
{\bf\noindent Problem.}
Find another source term $G\in L^2({\rm \bf R}^3)$ whose support is contained in $\overline\Omega$ and satisfies
$$\displaystyle
\mbox{supp}\,G\cap\overline D=\emptyset
\tag {1.2}
$$
in such a way that the solution of the inhomogeneous Helmholtz equation
$$\begin{array}{ll}
\displaystyle
(\Delta+k^2)\tilde{w}+G(x)=0, & x\in{\rm \bf R}^3
\end{array}
$$
with the outgoing radiation condition
$$\begin{array}{lll}
\displaystyle
\lim_{r\rightarrow\infty}r\left(\frac{\partial}{\partial r}-ik\right)\tilde{w}(r\omega)=0, & \displaystyle r=\vert x\vert, & \displaystyle \omega=\frac{x}{\vert x\vert},
\end{array}
$$
satisfies
$$\begin{array}{ll}
\displaystyle
w(x)+\tilde{w}(x)=0, & \displaystyle x\in{\rm \bf R}^3\setminus\overline\Omega.
\end{array}
\tag {1.3}
$$

The source $G$ may depend on $k$, $D$, $F$ and $\Omega$ and is called an active source. The function $\tilde{w}$, which is called the secondary wave, has the expression
$$\begin{array}{ll}
\displaystyle
\tilde{w}(x)=\frac{1}{4\pi}\int_{\Omega\setminus D}\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,G(y)dy, & x\in{\rm \bf R}^3.
\end{array}
\tag {1.4}
$$
Since we require condition (1.2), one cannot choose the {\it trivial solution} $G=-F$. In practice, condition (1.2) is natural: consider, for instance, the case when the unwanted wave is radiated from a {\it volumetric real body}. In this note, to clearly indicate the idea, we consider the simplest case $D=B_{\epsilon}$, where $B_{\epsilon}$ is the open ball with radius $\epsilon$ centered at the origin of the Cartesian coordinates.

\section{A solution}

The conclusion is: one can construct the active source as a function of the source of the unwanted wave by using a solution of the wave equation in the finite time domain. Let us describe the solution $G$ to the problem in the case $D=B_{\epsilon}$ step by step.

(1) Let $T>2\epsilon$. Solve the Cauchy problem for the wave equation in the whole space
$$\left\{
\begin{array}{ll}
\displaystyle
(\partial_t^2-\Delta)u=0, & x\in{\rm \bf R}^3,\,0<t<T,\\
\\
\displaystyle
u(x,0)=0, & x\in{\rm \bf R}^3,\\
\\
\displaystyle
\partial_tu(x,0)=F(x), & x\in{\rm \bf R}^3.
\end{array}
\right.
\tag {2.1}
$$

(2) Define
$$\begin{array}{ll}
\displaystyle
G(x)=-e^{-ikT}(\partial_tu(x,T)+iku(x,T)), & x\in{\rm \bf R}^3.
\end{array}
\tag {2.2}
$$

Since $\mbox{supp}\,F\subset\overline B_{\epsilon}$, by the finite speed of propagation and Huygens's principle, we have
$$\displaystyle
\mbox{supp}\,G\subset\overline{B_{T+\epsilon}}
$$
and
$$\displaystyle
\mbox{supp}\,G\subset{\rm \bf R}^3\setminus B_{T-\epsilon}.
\tag {2.3}
$$
Thus the support of $G$ is contained in the shell domain $\overline B_{T+\epsilon}\setminus B_{T-\epsilon}$ and satisfies (1.2) provided $T>2\epsilon$. See Figure 1 for an illustration of the situation. Note that (2.3) is a consequence of the fact that the dimension of ${\rm \bf R}^3$ is an odd number. See, e.g., \cite{ES}. Thus, formula (1.4) becomes
$$\begin{array}{ll}
\displaystyle
\tilde{w}(x)=\frac{1}{4\pi}\int_{B_{T+\epsilon}\setminus B_{T-\epsilon}}\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,G(y)dy, & x\in{\rm \bf R}^3.
\end{array}
\tag {2.4}
$$

Now we are ready to state the main part of this note.

\proclaim{\noindent Theorem 2.1.}
Let $D=B_{\epsilon}$ and choose $\Omega=B_{T+\epsilon}$. Let $T>2\epsilon$. Then, the $G$ given by (2.2) satisfies (1.3).
\em
\vskip2mm

{\it\noindent Proof.}
Define
$$\begin{array}{ll}
\displaystyle
z(x)=\int_0^T e^{-ikt}u(x,t)\,dt, & x\in{\rm \bf R}^3.
\end{array} $$ It follows from (2.1) that $z$ satisfies $$\begin{array}{ll} \displaystyle \Delta z+k^2 z+F(x)+G(x)=0, & x\in{\rm \bf R}^3. \end{array} $$ Moreover, we have $$\displaystyle \mbox{supp}\,z\subset\overline{B_{T+\epsilon}}. \tag {2.5} $$ Since $z$ solves the inhomogeneous Helmholtz equation with the source $F+G$ and, by (2.5), has compact support, it satisfies the outgoing radiation condition, and therefore $z$ has the expression $$\displaystyle z=w+\tilde{w}. $$ Now (2.5) yields the desired conclusion (1.3). \noindent $\Box$ \begin{figure} \caption{Illustration of the situation: the support of $F$ lies in $\overline{B_{\epsilon}}$, while the support of $G$ lies in the shell $\overline{B_{T+\epsilon}}\setminus B_{T-\epsilon}$.} \label{fig:1} \end{figure} Note that $G$ also depends on $T$. Of course, $T$ should not be too large, since $\Omega=B_{T+\epsilon}$ becomes large as $T$ grows. By Theorem 2.1, we know that one can completely shield the wave field with the known source $F$ by adding another wave field. The added wave field has a source whose support is located outside of $\mbox{supp}\,F$ and thus is nontrivial. The proof employs a solution of the Cauchy problem for the wave equation over a {\it finite time interval}. As far as the author knows, no one has pointed out this fact. It should be pointed out that the construction of $G$ given by (2.2) was inspired by reconsidering the {\it time domain enclosure method} \cite{IE0, IW2, IW3, IW4, IW5}. \section{A consequence on the far field pattern} From (1.1) and (2.4) we see that, as $r\rightarrow\infty$ and uniformly for all $\omega\in S^2$, the functions $w$ and $\tilde{w}$ have the asymptotic expressions $$\displaystyle w(r\omega)\sim\frac{1}{4\pi}\frac{e^{ikr}}{r}\int_De^{-iky\cdot\omega}F(y)dy $$ and $$\displaystyle \tilde{w}(r\omega)\sim\frac{1}{4\pi}\frac{e^{ikr}}{r}\int_{B_{T+\epsilon}\setminus B_{T-\epsilon}}e^{-iky\cdot\omega}G(y)dy. $$ Therefore Theorem 2.1 ensures that $$\displaystyle \int_De^{-iky\cdot\omega}F(y)dy+\int_{B_{T+\epsilon}\setminus B_{T-\epsilon}}e^{-iky\cdot\omega}G(y)dy=0 $$ under the choice of $G$ given by (2.2). This means that the far field pattern (see \cite{CK}) of $w+\tilde{w}$ vanishes. In particular, the far field pattern of $w$ coincides with that of $-\tilde{w}$. This gives an example of nonuniqueness for the inverse source problem: one cannot uniquely determine the source term for the Helmholtz equation at a fixed frequency from the far-field pattern. This fact itself is well known; however, this example tells us more than mere nonuniqueness in the inverse source problem. One can {\it hide} the field radiated from a known source by generating another field radiated by a set of suitable sources distributed around the known source. \section{Comparison with an approach based on surface potentials} In this section we present another approach, given in \cite{LRT}, which is based on surface potentials. For more general settings see also \cite{U}, and for references to early studies see the introduction of \cite{NU}. The idea is simple. Set $$\begin{array}{ll} \displaystyle K(y-x)=\frac{1}{4\pi}\frac{e^{ik\vert y-x\vert}}{\vert y-x\vert}, & x\not=y. \end{array} $$ Let $x\in{\rm \bf R}^3\setminus\overline\Omega$. From the governing equation of $w$ in ${\rm \bf R}^3\setminus\overline\Omega$ and the radiation condition, we have the expression $$\begin{array}{ll} \displaystyle w(x)=\int_{\partial\Omega}\left(w\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}w\right)\,dS(y), & x\in{\rm \bf R}^3\setminus\overline\Omega, \end{array} \tag {4.1} $$ where $K_x(y)=K(y-x)$. Note that the support of $F$ is contained in $\Omega$. This is a well-known formula in scattering theory; see, e.g., \cite{CK}.
Rewrite (4.1) as $$\begin{array}{ll} \displaystyle w(x)-\int_{\partial\Omega}\left(w\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}w\right)\,dS(y)=0, & x\in{\rm \bf R}^3\setminus\overline\Omega. \end{array} \tag {4.2} $$ In \cite{LRT} this is regarded as a cancellation formula for $w$ outside $\Omega$. Instead of $\tilde{w}$, they define $$\begin{array}{ll} \displaystyle w'(x)=-\int_{\partial\Omega}\left(w\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}w\right)\,dS(y), & x\in{\rm \bf R}^3. \end{array} \tag {4.3} $$ Then, (4.2) means that $$\begin{array}{ll} \displaystyle w(x)+w'(x)=0, & \displaystyle x\in{\rm \bf R}^3\setminus\overline\Omega. \end{array} $$ The remarkable point of (4.3) is that $w'$ depends only on the Cauchy data of $w$ on $\partial\Omega$; it does not require any detailed knowledge of the source of $w$ in $\Omega$. Note also that $\Omega$ can be an arbitrary bounded domain. In \cite{Q}, on page 28, it is explained that the theoretical basis of the virtual sound barriers is this type of formula, which is called the Kirchhoff--Helmholtz equation and is a quantified version of Huygens's principle. Mathematically, it is an application of integration by parts combined with the property of the fundamental solution of the Helmholtz equation. Following \cite{LRT}, one can rewrite (4.3) further. Choose a function $\Psi\in C_0^2({\rm \bf R}^3)$ in such a way that $$\begin{array}{lll} \displaystyle \Psi=w, & \displaystyle \frac{\partial \Psi}{\partial\nu}=\frac{\partial w}{\partial\nu}, & x\in\partial\Omega. \end{array} \tag {4.4} $$ We have, for all $x\in{\rm \bf R}^3\setminus\overline\Omega$ $$\begin{array}{l} \displaystyle \,\,\,\,\,\, \int_{\partial\Omega}\left(w\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}w\right)\,dS(y)\\ \\ \displaystyle =\int_{\partial\Omega}\left(\Psi\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}\Psi\right)\,dS(y) \\ \\ \displaystyle =\int_{{\rm \bf R}^3\setminus\Omega}K(y-x)(\Delta+k^2)\Psi(y)\,dy-\Psi(x) \end{array} $$ and $$\begin{array}{ll} \displaystyle \Psi(x)=\int_{{\rm \bf R}^3}K(y-x)(\Delta+k^2)\Psi(y)\,dy, & x\in{\rm \bf R}^3. \end{array} $$ Thus, we obtain, for all $x\in{\rm \bf R}^3\setminus\overline\Omega$ $$\begin{array}{ll} \displaystyle \,\,\,\,\,\, -\int_{\partial\Omega}\left(w\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}w\right)\,dS(y) \\ \\ \displaystyle =\int_{\Omega}K(y-x)(\Delta+k^2)\Psi(y)dy. \end{array} $$ Therefore, we obtain $$\begin{array}{ll} \displaystyle w'(x)=\int_{\Omega}K(y-x)(\Delta+k^2)\Psi(y)dy, & x\in{\rm \bf R}^3\setminus\overline\Omega. \end{array} $$ Next let $x\in\Omega$. We have $$\begin{array}{l} \displaystyle \,\,\,\,\,\, -\int_{\partial\Omega}\left(w\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}w\right)\,dS(y)\\ \\ \displaystyle =-\int_{\partial\Omega}\left(\Psi\frac{\partial K_x}{\partial\nu}- K_x\frac{\partial}{\partial\nu}\Psi\right)\,dS(y) \\ \\ \displaystyle =\int_{{\rm \bf R}^3\setminus\Omega}K(y-x)(\Delta+k^2)\Psi(y)\,dy. \end{array} $$ Therefore, we obtain $$\begin{array}{ll} \displaystyle w'(x)=\int_{{\rm \bf R}^3\setminus\Omega}K(y-x)(\Delta+k^2)\Psi(y)dy, & x\in\Omega. \end{array} $$ Summing up, we have obtained the following formula. \proclaim{\noindent Proposition 4.1.} Let $\Psi\in C^2_0({\rm \bf R}^3)$ satisfy (4.4).
Then the $w'$ given by (4.3) has the expression $$\displaystyle w'(x)= \left\{ \begin{array}{ll} \displaystyle \int_{{\rm \bf R}^3\setminus\Omega}K(y-x)(\Delta+k^2)\Psi(y)dy, & x\in\Omega,\\ \\ \displaystyle \int_{\Omega}K(y-x)(\Delta+k^2)\Psi(y)dy, & x\in{\rm \bf R}^3\setminus\overline\Omega. \end{array} \right. \tag {4.5} $$ \em \vskip2mm Note that the support of $\Psi$ can be an arbitrarily small neighbourhood of $\partial\Omega$. One can rewrite (4.5) compactly as $$\displaystyle w'=\chi_{\Omega}K*\left\{\chi_{{\rm \bf R}^3\setminus\overline\Omega}(\Delta+k^2)\Psi\right\} +\chi_{{\rm \bf R}^3\setminus\overline\Omega}K*\left\{\chi_{\Omega}(\Delta+k^2)\Psi\right\}. $$ We see that $w'\vert_{\Omega}\in H^2(\Omega)$, $w'\vert_{B_R\setminus\overline\Omega}\in H^2(B_R\setminus\overline\Omega)$ for all $R\gg 1$ and $w'\in L^2({\rm \bf R}^3)$. However, $w'$ itself does not belong to $H^1$ in a neighbourhood of $\partial\Omega$. This is a consequence of the jump relation of the double layer potential and the radiation condition. Thus $w'$ cannot be realized as a wave field having a compact source in $L^2({\rm \bf R}^3)$. In contrast to this, our construction of $\tilde{w}$ automatically ensures $\tilde{w}\in H^2_{\mbox{loc}}({\rm \bf R}^3)$. This is an advantage of making use of the full knowledge of the source of the unwanted wave $w$. Let us make a comparison. \noindent {\bf The surface potential method.} $\bullet$ The selection of $\Omega$ is arbitrary as long as the condition $\mbox{supp}\,F\subset\Omega$ is satisfied. For the construction of $w'$ we need only the Cauchy data of $w$ on $\partial\Omega$. $\bullet$ $w'$ has a {\it singularity across} $\partial\Omega$ and its source does not belong to $L^2({\rm \bf R}^3)$. \noindent {\bf Our method.} $\bullet$ The selection of $\Omega$ depends on an upper bound of the size of $\mbox{supp}\,F$. The full knowledge of the source of the unwanted wave is required. $\bullet$ $\tilde{w}$ is locally $H^2$-regular in the whole space and its source belongs to $L^2({\rm \bf R}^3)$. It may be possible to approximate the source as a superposition of finitely many {\it monopoles} only. See (2.4). \section{Comparison with a naive approach in a special case} Consider the case $F(x)=(\epsilon-\vert x\vert)\chi_{B_{\epsilon}}(x)$. This $F$ belongs to $H^1({\rm \bf R}^3)$. The field generated by the source $F$ is given by $$\displaystyle w(x)=\frac{1}{4\pi}\int_{B_{\epsilon}}\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,(\epsilon-\vert y\vert)\,dy. $$ Here we recall \proclaim{\noindent Lemma 5.1.} Let $B$ be the open ball with radius $\eta$ and centered at the origin. We have, for all $x\in{\rm \bf R}^3\setminus\overline B$ $$\displaystyle \frac{1}{4\pi}\int_B\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,dy =-i\frac{\varphi(-ik\eta)}{k^3}\frac{e^{ik\vert x\vert}}{\vert x\vert} \tag {5.1} $$ and $$\displaystyle \frac{1}{4\pi}\int_B\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,(\eta-\vert y\vert)\,dy =\frac{1}{k^4}\frac{e^{ik\vert x\vert}}{\vert x\vert} P(-ik\eta), \tag {5.2} $$ where $$\left\{ \begin{array}{l} \displaystyle \varphi(s)=s\cosh s-\sinh s, \\ \\ \displaystyle P(s)=-2\cosh s+s\sinh s+2. \end{array} \right. $$ \em \vskip2mm The equation (5.1) is a consequence of the mean value theorem for the Helmholtz equation. For the proof of (5.1) see \cite{CH}; for that of (5.2) see the Appendix in \cite{IW4}.
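As a quick numerical sanity check of Lemma 5.1 (not needed for the arguments below), one may compare both sides of (5.1) and (5.2) by Monte Carlo integration over the ball. The following short Python sketch does this; it is ours and purely illustrative, and the values of $k$, $\eta$ and $x$ are arbitrary.
\begin{verbatim}
# Monte Carlo check of (5.1)-(5.2): volume potentials over the ball B_eta
# versus the closed forms involving phi and P.  Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
k, eta = 2.0, 0.7
x = np.array([3.0, 0.0, 0.0])        # observation point outside the ball
N = 400_000

# uniform sample in the ball of radius eta centered at the origin
u = rng.random(N)
d = rng.normal(size=(N, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = eta * u[:, None] ** (1.0 / 3.0) * d
vol = 4.0 / 3.0 * np.pi * eta ** 3

r = np.linalg.norm(pts - x, axis=1)
kernel = np.exp(1j * k * r) / (4.0 * np.pi * r)

lhs1 = vol * kernel.mean()                                          # LHS of (5.1)
lhs2 = vol * (kernel * (eta - np.linalg.norm(pts, axis=1))).mean()  # LHS of (5.2)

phi = lambda s: s * np.cosh(s) - np.sinh(s)
P = lambda s: -2.0 * np.cosh(s) + s * np.sinh(s) + 2.0
out = np.exp(1j * k * np.linalg.norm(x)) / np.linalg.norm(x)

rhs1 = -1j * phi(-1j * k * eta) / k ** 3 * out                      # RHS of (5.1)
rhs2 = P(-1j * k * eta) / k ** 4 * out                              # RHS of (5.2)

print(abs(lhs1 - rhs1), abs(lhs2 - rhs2))   # both differences should be small
\end{verbatim}
The two sides agree up to the Monte Carlo sampling error.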
Applying (5.2) to $w(x)$ for $x\in{\rm \bf R}^3\setminus\overline {B_{\epsilon}}$, we obtain $$\displaystyle w(x)=\frac{1}{k^4}\frac{e^{ik\vert x\vert}}{\vert x\vert} P(-ik\epsilon). \tag {5.3} $$ Consider also the secondary field $w'(x)$ generated by the source $G(x)=\chi_{B_{R_2}\setminus B_{R_1}}(x)$ with $R_2>R_1>\epsilon$: $$\displaystyle w'(x)=\frac{1}{4\pi}\int_{B_{R_2}\setminus B_{R_1}}\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,dy. $$ From (5.1) we have, for all $x\in{\rm \bf R}^3\setminus\overline{B_{R_2}}$ $$\displaystyle w'(x)=-\frac{i}{k^3}\frac{e^{ik\vert x\vert}}{\vert x\vert}(\varphi(-ik R_2)-\varphi(-ik R_1)). \tag {5.4} $$ Thus a combination of equations (5.3) and (5.4) yields that $w(x)+w'(x)=0$ for all $x\in{\rm \bf R}^3\setminus B_{R_2}$ if and only if $$\displaystyle \frac{1}{k^4} P(-ik\epsilon)-\frac{i}{k^3}(\varphi(-ik R_2)-\varphi(-ik R_1))=0, $$ that is $$\displaystyle P(-ik\epsilon)-ik(\varphi(-ik R_2)-\varphi(-ik R_1))=0. \tag {5.5} $$ So the problem is: given $\epsilon$ and $k$, find $R_2>R_1>\epsilon$ such that (5.5) is valid. We have $$\begin{array}{l} \displaystyle \,\,\,\,\,\, P(-ik\epsilon)\\ \\ \displaystyle =-2\cosh (-ik\epsilon)-ik\epsilon\sinh (-ik\epsilon)+2 \\ \\ \displaystyle =-2\cos k\epsilon-k\epsilon \sin k\epsilon+2, \end{array} $$ $$\begin{array}{ll} \displaystyle \,\,\,\,\,\, \varphi(-ikR_j) \\ \\ \displaystyle =-ikR_j\cosh (-ikR_j)-\sinh(-ikR_j)\\ \\ \displaystyle =-ikR_j\cos kR_j+i\sin kR_j \\ \\ \displaystyle =-i(kR_j\cos kR_j-\sin kR_j). \end{array} $$ Thus, (5.5) is equivalent to the equation $$\displaystyle -2\cos k\epsilon-k\epsilon \sin k\epsilon+2 =k(kR_2\cos kR_2-\sin kR_2-kR_1\cos kR_1+\sin kR_1). \tag {5.6} $$ Define $$\displaystyle Q(\xi)=\xi\cos\xi-\sin\xi. $$ Then, (5.6) becomes $$\displaystyle Q(kR_2)=Q(kR_1)+\frac{1}{k}\left\{2(1-\cos k\epsilon)-k\epsilon\sin k\epsilon\right\}. \tag {5.7} $$ From the behaviour of $Q(\xi)$ we see that, given $k>0$, $\epsilon>0$ and $R_1>\epsilon$, there exist {\it infinitely many} $R_2>R_1$ such that (5.7) is satisfied. Thus the shielding is possible, and the radius $R_2$ is characterized as a solution of the equation (5.7) with $R_2>R_1$. However, if the source term has the form $$\displaystyle F(x)=\rho(x)\chi_{B_{\epsilon}}(x), $$ where $\rho(x)$ is a function having a general form, the construction of $w'$ having the form $$\begin{array}{ll} \displaystyle w'(x)=\frac{1}{4\pi}\int_{B_{R_2}\setminus B_{R_1}}\frac{e^{ik\vert x-y\vert}}{\vert x-y\vert}\,g(y)\,dy, & R_2>R_1>\epsilon, \end{array} $$ would be difficult without using our natural approach based on Huygens's principle. Our method gives a simple solution: $R_2=T+\epsilon$, $R_1=T-\epsilon$ with $T>2\epsilon$, and $g(y)$ is given by the right-hand side of (2.2). This means that we made use of the Cauchy problem for the wave equation (2.1) as a {\it calculator} of the desired source. \section{Testing the approach} It would be interesting to do numerical testing of our method based on formula (2.4). The method consists of only two steps. $\quad$ {\bf Step 1.} Give $F$ and compute $G$ given by (2.2). $\quad$ {\bf Step 2.} Generate the secondary wave $\tilde{w}$ with the source $G$ computed in Step 1. $\quad$ In Step 1, the computation needs the solution of the Cauchy problem for the wave equation (2.1), together with its time derivative, at $t=T$ only, not for all $t\in\,]0,\,T[$.
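Since Step 1 only requires $u(\cdot,T)$ and $\partial_tu(\cdot,T)$, it can be carried out pointwise. A minimal Python sketch of Step 1, based on the spherical-mean (Kirchhoff) representation of the solution of (2.1) recalled in the next paragraph and on a central finite difference in time, is given below, with the source $F(x)=(\epsilon-\vert x\vert)\chi_{B_{\epsilon}}(x)$ of Section 5 as an example; the sphere quadrature and all parameter values are purely illustrative and are not part of the method itself.
\begin{verbatim}
# Step 1 sketch: compute G(x) = -exp(-ikT)*(du/dt(x,T) + ik*u(x,T)) from F,
# using u(x,t) = t * (spherical mean of F over the sphere |y - x| = t).
# The sphere quadrature (Fibonacci points, equal weights) and the finite
# difference in t are crude; everything here is illustrative only.
import numpy as np

def sphere_points(n):
    # Fibonacci-type point set on the unit sphere, equal weights 1/n
    i = np.arange(n) + 0.5
    theta = np.arccos(1.0 - 2.0 * i / n)
    phi = np.pi * (1.0 + 5.0 ** 0.5) * i
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)

eps, T, k = 0.5, 1.5, 2.0                      # T > 2*eps
F = lambda y: np.maximum(eps - np.linalg.norm(y, axis=1), 0.0)

OMEGA = sphere_points(4000)

def u(x, t):
    # t * spherical mean of F over the sphere of radius t centered at x
    return t * F(x + t * OMEGA).mean()

def G(x, h=1e-4):
    u_T = u(x, T)
    dudt = (u(x, T + h) - u(x, T - h)) / (2.0 * h)
    return -np.exp(-1j * k * T) * (dudt + 1j * k * u_T)

print(G(np.array([T, 0.0, 0.0])))          # inside the shell: generically nonzero
print(G(np.array([T + 2.0, 0.0, 0.0])))    # outside the shell: zero
\end{verbatim}
In particular, $G$ computed in this way vanishes (up to the width of the finite-difference stencil) outside the shell $\overline{B_{T+\epsilon}}\setminus B_{T-\epsilon}$, in agreement with (2.3).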
We have the Kirchhoff formula (see, e.g., \cite{ES}) for the solution of (2.1), which yields the exact value of the solution, together with its time derivative, at $t=T$ without solving (2.1) numerically. Numerically, Step 2 means computing $\tilde{w}$ via formula (2.4). To check our method numerically, we also have to compute the unwanted wave $w$ via formula (1.1). Then, compute the total wave $w+\tilde{w}$ and observe its behaviour for $x\in B$, where $B$ is a ball centered at the origin with a large radius. $$\quad$$ \centerline{{\bf Acknowledgment}} The author was partially supported by Grant-in-Aid for Scientific Research (C) (No. 17K05331) of the Japan Society for the Promotion of Science. $$\quad$$ \vskip1cm \noindent e-mail address [email protected] \end{document}
\begin{document} \begin{frontmatter} \pretitle{Research Article} \title{The risk model with stochastic premiums and a~multi-layer dividend strategy} \author{\inits{O.}\fnms{Olena}~\snm{Ragulina}\ead[label=e1]{[email protected]}\orcid{0000-0002-5175-6620}} \address{\institution{Taras Shevchenko National University of Kyiv},\break Department of Probability Theory, Statistics and Actuarial Mathematics,\break 64 Volodymyrska Str., 01601 Kyiv, \cny{Ukraine}} \markboth{O. Ragulina}{The risk model with stochastic premiums and a~multi-layer dividend strategy} \begin{abstract} The paper deals with a generalization of the risk model with stochastic premiums where dividends are paid according to a multi-layer dividend strategy. First of all, we derive piecewise integro-differential equations for the Gerber--Shiu function and the expected discounted dividend payments until ruin. In addition, we concentrate on the detailed investigation of the model in the case of exponentially distributed claim and premium sizes and find explicit formulas for the ruin probability as well as for the expected discounted dividend payments. Lastly, numerical illustrations for some multi-layer dividend strategies are presented. \end{abstract} \begin{keywords} \kwd{Risk model with stochastic premiums} \kwd{multi-layer dividend strategy} \kwd{Gerber--Shiu function} \kwd{expected discounted dividend payments} \kwd{ruin probability} \kwd{piecewise integro-differential equation} \end{keywords} \begin{keywords}[MSC2010] \kwd{91B30} \kwd{60G51} \end{keywords} \received{\sday{29} \smonth{3} \syear{2019}} \revised{\sday{4} \smonth{8} \syear{2019}} \accepted{\sday{4} \smonth{8} \syear{2019}} \publishedonline{\sday{28} \smonth{8} \syear{2019}} \end{frontmatter} \section{Introduction} \label{sec:1} The ruin measures such as the ruin probability,\index{ruin probability} the surplus prior to ruin and the deficit at ruin have recently attracted great interest from researchers (see, e.g., \cite{AsAl2010,MiRa2016,RoScScTe1999} and references therein). Gerber and Shiu \cite{GeSh1998} introduced the expected discounted penalty function for the classical risk model, which made it possible to study those risk measures together by combining them into one function. After that, the so-called Gerber--Shiu function\index{Gerber--Shiu function} has been investigated by many authors in more general risk models\index{risk models} (see, e.g., \cite{BoCoLaMa2006,ChVr2014,ChTa2003,CoMaMa2008,CoMaMa2010,GeSh2005,He2014,Su2005,XiWu2006,YoXi2012,ZhYa2011_1}). In particular, a lot of attention has been paid to the study of risk models\index{risk models} where shareholders receive dividends from their insurance company. De Finetti \cite{De1957}, who first considered dividend strategies\index{dividend ! strategies} in insurance, dealt with a binomial model. For the classical risk model and its different generalizations, various dividend strategies\index{dividend ! strategies} have been studied in a number of papers (see, e.g., \cite{ChLi2011,CoMaMa2011,CoMaMa2014,La2008,LiWuSo2009,LiGa2004,LiPa2006,LiWiDr2003,LiLiPe2014,ShLiZh2013,Wa2015,YaZh2008}). In addition, the monograph by Schmidli \cite{Sc2008} is devoted to optimal dividend problems in insurance risk models. Applying multi-layer dividend strategies makes it possible to change the dividend payment intensity depending on the current surplus.
Albrecher and Har\-tin\-ger \cite{AlHa2007} consider the modification of the classical risk model where both the premium intensity and the dividend payment intensity are assumed to be step functions depending on the current surplus level. The authors derive algorithmic schemes for the \xch{determination}{determanation} of explicit expressions for the Gerber--Shiu function\index{Gerber--Shiu function} and the expected discounted dividend payments.\index{dividend ! payments} A similar risk model\index{risk models} is considered by Lin and Sendova \cite{LiSe2008}, who derive a piecewise integro-differential equation for the Gerber--Shiu function\index{Gerber--Shiu function} and provide a recursive approach to obtain general solutions to that equation and its generalizations. Developing a recursive algorithm to calculate the moments of the expected discounted dividend payments\index{dividend ! payments} for a class of risk models with Markovian claim arrivals, Badescu and Landriault \cite{BaLa2008} generalize some of the results obtained in \cite{AlHa2007} (see also \cite{BaDrLa2007} for some results related to the class of Markovian risk models\index{risk models} with a multi-layer dividend strategy). The absolute ruin problem in the classical risk model with constant interest force and a multi-layer dividend strategy is investigated in \cite{YaZhLa2008}, where a piecewise integro-differential equation for the discounted penalty function is derived, some explicit expressions are given when claims are exponentially distributed and an asymptotic formula for the absolute ruin probability is obtained for heavy-tailed claim sizes. The dual model of the compound Poisson risk model with a multi-layer dividend strategy under stochastic interest is considered in \cite{Yi2012}. Results related to perturbed compound Poisson risk models under multi-layer dividend strategies can be found in \cite{MiSeTs2010,YaZh2009_2}. In addition, different classes of more general renewal risk models are investigated in \cite{DeZhDe2012,JiYaLi2012,YaZh2008,YaZh2009_1}, and some recent papers deal with risk models\index{risk models} that incorporate various dependence structures (see, e.g., \cite{LiMa2016,XiZo2017,ZhYa2011_2,ZhXiDe2015}). The present paper generalizes the risk model with stochastic premiums\index{stochastic premiums} introduced and investigated in \cite{Boi2003} (see also \cite{MiRa2016}). In that risk model,\index{risk models} both claims and premiums are modeled as compound Poisson processes,\index{compound Poisson processes} whereas premiums arrive with constant intensity and are not random in the classical compound Poisson risk model (see also \cite{MiRaSt2014,MiRaSt2015} for a generalization of the classical risk model where an insurance company gets additional funds whenever a claim arrives). In \cite{Boi2003}, claim sizes and inter-claim times are assumed to be mutually independent, and the same assumption is made concerning premium arrivals. In contrast to \cite{Boi2003}, the recent paper \cite{Ra2017} deals with the risk model with stochastic premiums\index{stochastic premiums} where the dependence structures between claim sizes and inter-claim times as well as premium sizes and inter-premium times are modeled by the Farlie--Gumbel--Morgenstern copulas, and dividends are paid according to a threshold dividend strategy. The Gerber--Shiu function, a special case of which is the ruin probability,\index{ruin probability} and the expected discounted dividend payments\index{dividend ! 
payments} until ruin are studied in \cite{Ra2017}. In the present paper, we develop those results and make the assumption that dividends are paid according to a multi-layer dividend strategy and all random variables and processes are mutually independent.\looseness=1 The rest of the paper is organized as follows. In Section~\ref{sec:2}, we give a description of the risk model with stochastic premiums\index{stochastic premiums} and a multi-layer dividend strategy. In Sections~\ref{sec:3} and~\ref{sec:4}, we derive piecewise integro-differential equations for the Gerber--Shiu\index{Gerber--Shiu function} function and the expected discounted dividend payments\index{dividend ! payments} until ruin. Next, in Section~\ref{sec:5}, we deal with exponentially distributed claim and premium sizes and obtain explicit formulas for the ruin probability\index{ruin probability} and the expected discounted dividend payments.\index{dividend ! payments} Finally, Section~\ref{sec:6} provides some numerical illustrations. \section{Description of the model} \label{sec:2} Let $(\varOmega , \mathfrak{F}, \mathbb{P})$ be a probability space satisfying the usual conditions, and let all the stochastic objects we use below be defined on it. In the risk model with stochastic premiums\index{stochastic premiums} introduced in \cite{Boi2003} (see also \cite{MiRa2016}), claim sizes form a sequence $(Y_{i})_{i\ge 1}$ of non-negative independent and identically distributed (i.i.d.) random variables (r.v.'s) with cumulative distribution function (c.d.f.) $F_{Y}(y)=\mathbb{P}[Y_{i}\le y]$, and the number of claims on the time interval $[0,t]$ is a Poisson process $(N_{t})_{t\ge 0}$ with constant intensity $\lambda >0$. In addition, premium sizes form a sequence $(\bar{Y}_{i})_{i\ge 1}$ of non-negative i.i.d. r.v.'s with c.d.f. $\bar{F}_{\bar{Y}}(y)=\mathbb{P}[\bar{Y} _{i}\le y]$, and the number of premiums on the time interval $[0,t]$ is a Poisson process $(\bar{N}_{t})_{t\ge 0}$ with constant intensity $\bar{\lambda }>0$. Thus, the total claims and premiums on $[0,t]$ equal $\sum_{i=1}^{N_{t}} Y_{i}$ and $\sum_{i=1}^{\bar{N}_{t}} \bar{Y}_{i}$, respectively. It is worth \xch{pointing}{to point} out that, here and subsequently, a sum is always set to 0 if the upper summation index is less than the lower one. In particular, we have $\sum_{i=1}^{0} Y_{i} =0$ if $N_{t} =0$, and $\sum_{i=1}^{0} \bar{Y}_{i} =0$ if $\bar{N}_{t} =0$. In what follows, we also assume that the r.v.'s $(Y_{i})_{i\ge 1}$ and $(\bar{Y}_{i})_{i \ge 1}$ have finite expectations $\mu >0$ and $\bar{\mu }>0$, respectively. Furthermore, we suppose that $(Y_{i})_{i\ge 1}$, $(\bar{Y}_{i})_{i\ge 1}$, $(N_{t})_{t\ge 0}$ and $(\bar{N}_{t})_{t \ge 0}$ are mutually independent. Next, we denote a non-negative initial surplus of the insurance company by $x$, and let $X_{t}(x)$ be its surplus at time $t$ provided that the initial surplus is $x$. Then the surplus process\index{surplus process} $ (X_{t}(x))_{t\ge 0}$ is defined by the \xch{equality}{equation} \begin{equation} \label{eq:1} X_{t}(x) =x+\sum_{i=1}^{\bar{N}_{t}} \bar{Y}_{i} -\sum_{i=1}^{N_{t}} Y _{i}, \quad t\ge 0. \end{equation} In contrast to the risk model\index{risk models} considered in \cite{Boi2003}, we make the additional assumption that the insurance company pays dividends to its shareholders according to a $k$-layer dividend strategy with $k\ge 2$. Let $\mathbf{b}=(b_{1},\ldots ,b_{k-1})$ be a $(k-1)$-dimensional vector with real-valued components such that $0<b_{1}<\cdots <b_{k-1}<\infty $. 
Besides that, we set $b_{0}=0$ and $b_{k}=\infty $. Let $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ denote the modified surplus process\index{surplus process} under the $k$-layer dividend strategy $\mathbf{b}$, which means that dividends are paid continuously at a rate $d_{j}>0$ whenever $b_{j-1}\le X_{t}^{\mathbf{b}}(x)< b_{j}$, i.e. the process $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ is in the $j$th layer at time $t$, where $1\le j\le k$. Then \begin{equation} \label{eq:2} X_{t}^{\mathbf{b}}(x) =x+\sum _{i=1}^{\bar{N}_{t}} \bar{Y}_{i} -\sum _{i=1}^{N_{t}} Y_{i} - \int _{0}^{t} \sum_{j=1}^{k} d_{j} \Eins \bigl( b_{j-1}\le X_{s}^{\mathbf{b}}(x)< b_{j} \bigr)\, \mathrm{d}s, \quad t\ge 0, \end{equation} where $\Eins (\cdot )$ is the indicator function. From now on, we suppose that the net profit condition\index{net profit condition} holds, which in this case means that \begin{equation} \label{eq:3} \bar{\lambda }\bar{\mu }> \lambda \mu +\max_{1\le j\le k} \{d_{j}\}. \end{equation} Let $(D_{t})_{t\ge 0}$ denote the dividend distributing process. For the $k$-layer dividend strategy described above, we have \begin{equation*} \mathrm{d}D_{t} =d_{j}\, \mathrm{d}t \quad \text{if}\ b_{j-1} \le X_{t}^{\mathbf{b}}(x)< b_{j}, \quad 1\le j\le k. \end{equation*} Next, let $\tau _{\mathbf{b}}(x)= \inf \{t\ge 0\colon X_{t}^{ \mathbf{b}}(x) <0\}$ be the ruin time for the risk process $ (X _{t}^{\mathbf{b}}(x) )_{t\ge 0}$ defined by \eqref{eq:2}. In what follows, we omit the dependence on $x$ and write $\tau _{\mathbf{b}}$ instead of $\tau _{\mathbf{b}}(x)$ when no confusion can arise. For $\delta _{0} \ge 0$, the Gerber--Shiu function\index{Gerber--Shiu function} is defined by \begin{equation*} m(x,\mathbf{b}) =\mathbb{E} \bigl[ e^{-\delta _{0} \tau _{\mathbf{b}}} \, w\bigl(X_{\tau _{\mathbf{b}}-}^{\mathbf{b}}(x), \bigl\vert X_{\tau _{\mathbf{b}}} ^{\mathbf{b}}(x) \bigr\vert \bigr)\, \Eins ( \tau _{\mathbf{b}}<\infty ) \,|\, X _{0}^{\mathbf{b}}(x)=x \bigr], \quad x\ge 0, \end{equation*} where $w(\cdot ,\cdot )$ is a bounded non-negative measurable function, $X_{\tau _{\mathbf{b}}-}^{\mathbf{b}}(x)$ is the surplus immediately before ruin and $|X_{\tau _{\mathbf{b}}}^{\mathbf{b}}(x)|$ is the deficit at ruin. Note that if $w(\cdot ,\cdot ) \equiv 1$ and $\delta _{0}=0$, then $m(x,\mathbf{b})$ becomes the infinite-horizon ruin probability \begin{equation*} \psi (x,\mathbf{b}) =\mathbb{E}\bigl[\Eins (\tau _{\mathbf{b}}<\infty ) \,|\, X_{0}^{\mathbf{b}}(x)=x\bigr]. \end{equation*} For $\delta >0$, the expected discounted dividend payments\index{dividend ! payments} until ruin are defined by \begin{equation*} v(x,\mathbf{b}) =\mathbb{E} \Biggl[ \int_{0}^{\tau _{\mathbf{b}}} e ^{-\delta t}\, \mathrm{d}D_{t} \,|\, X_{0}^{\mathbf{b}}(x)=x \Biggr], \quad x\ge 0. \end{equation*} For simplicity of notation, we also write $m(x)$, $\psi (x)$ and $v(x)$ instead of $m(x,\mathbf{b})$, $\psi (x,\mathbf{b})$ and $v(x,\mathbf{b})$, respectively. For all $1\le j\le k$ and $b_{j-1} \le x\le b_{j}$, we also set $m_{j}(x)=m(x,\mathbf{b})$, $\psi _{j}(x)= \psi (x,\mathbf{b})$ and $v_{j}(x)=v(x,\mathbf{b})$. Thus, the functions $m_{j}(x)$, $\psi _{j}(x)$ and $v_{j}(x)$ are defined on $[b_{j-1},b _{j}]$, and we have $m_{j}(b_{j})=m_{j+1}(b_{j})$, $\psi _{j}(b_{j})= \psi _{j+1}(b_{j})$ and $v_{j}(b_{j})=v_{j+1}(b_{j})$ for all $1\le j\le k-1$.
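Although the results below are analytic, both $\psi (x,\mathbf{b})$ and $v(x,\mathbf{b})$ can also be estimated by straightforward Monte Carlo simulation of the surplus process \eqref{eq:2}, and such estimates can later be compared with the explicit formulas of Section~\ref{sec:5}. The following Python sketch is ours and purely illustrative (it is not part of the model description): the parameter values are arbitrary but satisfy the net profit condition \eqref{eq:3}, claim and premium sizes are taken exponential, and the infinite horizon is truncated at a large finite time, so the ruin probability is slightly underestimated.
\begin{verbatim}
# Monte Carlo sketch for the surplus process (2) under a k-layer dividend
# strategy.  Between jumps the surplus decreases at rate d_j in layer j;
# jumps arrive at rate lambda + lambda_bar and are claims with probability
# lambda/(lambda + lambda_bar), premiums otherwise.  Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

lam, lam_bar = 1.0, 2.0        # claim and premium arrival intensities
mu, mu_bar = 1.0, 1.2          # mean claim and premium sizes (exponential here)
b = [2.0, 5.0]                 # layer boundaries b_1 < b_2, so k = 3 layers
d = [0.3, 0.6, 0.9]            # dividend rates d_1, ..., d_k
delta = 0.05                   # discount rate for the dividends
horizon = 400.0                # finite-horizon approximation of infinity

def layer(x):
    # 0-based layer index: number of boundaries strictly below x
    return sum(x > bj for bj in b)

def simulate(x0):
    """One path: returns (ruin occurred?, discounted dividends until ruin)."""
    x, t, divs = x0, 0.0, 0.0
    while t < horizon:
        remaining = rng.exponential(1.0 / (lam + lam_bar))  # time to next jump
        while remaining > 0.0:
            j = layer(x)
            floor = 0.0 if j == 0 else b[j - 1]
            t_floor = (x - floor) / d[j]     # time to drift down to the floor
            if remaining < t_floor:
                step, new_x = remaining, x - d[j] * remaining
            else:
                step, new_x = t_floor, floor
            divs += d[j] * np.exp(-delta * t) * (1.0 - np.exp(-delta * step)) / delta
            x, t, remaining = new_x, t + step, remaining - step
            if j == 0 and x <= 0.0 and remaining > 0.0:
                return True, divs            # ruin caused by the dividend drift
        if rng.random() < lam / (lam + lam_bar):
            x -= rng.exponential(mu)         # claim
            if x < 0.0:
                return True, divs
        else:
            x += rng.exponential(mu_bar)     # premium
    return False, divs

paths = [simulate(3.0) for _ in range(20_000)]
print("estimated psi(3):", np.mean([r for r, _ in paths]))
print("estimated v(3):  ", np.mean([v for _, v in paths]))
\end{verbatim}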
\begin{remark} \label{rem:1}Note that although we consider the interval $[b_{k-1},\infty )$ instead of $[b_{j-1},b_{j}]$ if $j=k$, for the sake of convenience and compactness, here and subsequently, we do write $[b_{j-1},b_{j}]$ for all $1\le j\le k$. In addition, in what follows, the derivatives of all functions at the ends of the closed intervals $[b_{j-1},b_{j}]$ are assumed to be one-sided. \end{remark} \section{Piecewise integro-differential equation for the Gerber--Shiu function\index{Gerber--Shiu function}} \label{sec:3} \begin{thm} \label{thm:1} Let the surplus process\index{surplus process} $ (X_{t}^{b}(x) )_{t\ge 0}$ be defined by \eqref{eq:2} under the above assumptions, and let $F_{Y}(y)$ and $w(u_{1},u_{2})$ be continuous on $\mathbb{R}_{+}$ and $\mathbb{R} _{+}^{2}$, respectively. Then the function $m(x)$ is differentiable on the intervals $[b_{j-1},b_{j}]$ for all $1\le j\le k$ and satisfies the piecewise integro-differential equation \begin{align} \label{eq:4} &d_{j} m'(x)+(\lambda +\bar{\lambda }+\delta _{0})m(x) = \lambda \int_{0}^{x} m(x-y)\, \mathrm{d}F_{Y}(y)\nonumber \\[-2pt] &\quad +\lambda \int_{x}^{\infty } w(x,y-x)\, \mathrm{d}F_{Y}(y) +\bar{ \lambda }\int_{0}^{\infty } m(x+y)\, \mathrm{d}F_{\bar{Y}}(y), \quad x\in [b_{j-1},b_{j}]. \end{align} \end{thm} \begin{proof} We now fix any $j$ such that $1\le j\le k$ and deal with the case $x\in [b_{j-1},b_{j}]$. For all $x\in [b_{j-1},b_{j}]$, we define the following functions: \begin{gather*} a_{1}(x)=(x-b_{j-1})/d_{j}+(b_{j-1}-b_{j-2})/d_{j-1} +\cdots +(b_{2}-b _{1})/d_{2}+(b_{1}-b_{0})/d_{1},\\[-2pt] a_{2}(x)=(x-b_{j-1})/d_{j}+(b_{j-1}-b_{j-2})/d_{j-1}+ \cdots +(b_{2}-b _{1})/d_{2},\\[-2pt] \ldots\\[-2pt] a_{j-1}(x)=(x-b_{j-1})/d_{j}+(b_{j-1}-b_{j-2})/d_{j-1},\\[-2pt] a_{j}(x)=(x-b_{j-1})/d_{j}. \end{gather*} From these equalities we conclude that for all $x\in [b_{j-1},b_{j}]$, the process\break $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ up to its first jump is in the $j$th layer if $t\in [0,a_{j}(x)]$ and in the $i$th layer if $t\in [a_{i+1}(x),a_{i}(x)]$, where $1\le i\le j-1$. Thus, for any $x\in [b_{j-1},b_{j}]$, the sequence $a_{j}(x)$, $a_{j-1}(x),\ldots, a_{1}(x)$ defines the times when $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ passes through the values $b_{j-1}$, $b_{j-2}, \ldots , b_{0}$ provided that it has no jumps until those times. It is easily seen that the time of the first jump of $ (X_{t}^{ \mathbf{b}}(x) )_{t\ge 0}$ is exponentially distributed with mean $1/(\lambda +\bar{\lambda })$. 
Considering the time and the size of the first jump of that process and applying the law of total probability we obtain \begin{equation} \label{eq:5} m(x)=I_{j}(x)+I_{j-1}(x)+\cdots +I_{1}(x)+I_{0}(x), \quad x\in [b_{j-1},b _{j}], \end{equation} where \begin{equation*} \begin{split} I_{j}(x) &=\int_{0}^{a_{j}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \lambda \int_{0}^{x-d_{j} t} e^{-\delta _{0} t}\, m (x-d_{j} t -y )\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\lambda \int_{x-d_{j} t}^{\infty } e^{-\delta _{0} t}\, w (x-d_{j} t,\, y-x+d_{j} t )\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\bar{\lambda }\int_{0}^{\infty } e^{-\delta _{0} t}\, m (x-d _{j} t +y )\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}t, \end{split} \end{equation*} \begin{equation*} \begin{split} I_{j-1}(x) &=\int_{a_{j}(x)}^{a_{j-1}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \lambda \int_{0}^{b_{j-1}-d_{j-1} (t-a_{j}(x))} e^{-\delta _{0} t} \\[-2pt] & \qquad \times m \bigl(b_{j-1}-d_{j-1} \bigl(t-a_{j}(x)\bigr) -y \bigr)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\lambda \int_{b_{j-1}-d_{j-1}(t-\xch{a_{j}(x))}{a_{j}(x)}}^{\infty } e^{- \delta _{0} t}\, w \bigl(b_{j-1}-d_{j-1} \bigl(t-a_{j}(x)\bigr), \\[-2pt] & \qquad y-b_{j-1}+d_{j-1} \bigl(t-a_{j}(x) \bigr) \bigr)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\bar{\lambda }\int_{0}^{\infty } e^{-\delta _{0} t}\, m \bigl(b _{j-1}-d_{j-1} \bigl(t-a_{j}(x)\bigr) +y \bigr)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}t,\\[-2pt] &\qquad\qquad\qquad\qquad\qquad\ldots \end{split} \end{equation*} \begin{equation*} \begin{split} I_{1}(x) &=\int_{a_{2}(x)}^{a_{1}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \lambda \int_{0}^{b_{1}-d_{1} (t-a_{2}(x))} e^{-\delta _{0} t} \\[-2pt] & \qquad \times m \bigl(b_{1}-d_{1} \bigl(t-a_{2}(x)\bigr) -y \bigr)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\lambda \int_{b_{1}-d_{1} (t-a_{2}(x))}^{\infty } e^{-\delta _{0} t}\, w \bigl(b_{1}-d_{1} \bigl(t-a_{2}(x)\bigr), \\[-2pt] & \qquad y-b_{1}+d_{1} \bigl(t-a_{2}(x) \bigr) \bigr)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\bar{\lambda }\int_{0}^{\infty } e^{-\delta _{0} t}\, m \bigl(b _{1}-d_{1} \bigl(t-a_{2}(x)\bigr) +y \bigr)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}t, \end{split} \end{equation*} \begin{equation*} I_{0}(x)=e^{-(\lambda +\bar{\lambda }+\delta _{0})\, a_{1}(x)}\, w(0,0). \end{equation*} Note that the term $I_{i}(x)$, $1\le i\le j$, in \eqref{eq:5} corresponds to the case where $ (X_{t}^{\mathbf{b}}(x) )_{t \ge 0}$ is in the $i$th layer when its first jump occurs, and the term $I_{0}(x)$ corresponds to the case where there are no jumps of $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ up to the time $a_{1}(x)$.\looseness=1 Changing the variable $x-d_{j} t=s$ in the outer integral in the expression for $I_{j}(x)$ yields \begin{align} \label{eq:6} I_{j}(x) &= \frac{1}{d_{j}}\, e^{-(\lambda +\bar{\lambda }+\delta _{0})x/d _{j}} \int_{b_{j-1}}^{x} e^{(\lambda +\bar{\lambda }+\delta _{0})s/d _{j}} \Biggl( \lambda \int_{0}^{s} m(s-y)\, \mathrm{d}F_{Y}(y)\nonumber \\[-2pt] &\quad +\lambda \int_{s}^{\infty } w(s,y-s)\, \mathrm{d}F_{Y}(y) +\bar{ \lambda }\int_{0}^{\infty } m(s+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}s. 
\end{align} Changing the variable $b_{j-1}-d_{j-1} (t-a_{j}(x))=s$ in the outer integral in the expression for $I_{j-1}(x)$ gives \begin{align} \label{eq:7} I_{j-1}(x) &= \frac{1}{d_{j-1}}\, e^{-(\lambda +\bar{\lambda }+\delta _{0})(a_{j}(x)+b_{j-1}/d_{j-1})}\nonumber \\[-2pt] &\quad \times \int_{b_{j-2}}^{b_{j-1}} e^{(\lambda +\bar{\lambda }+ \delta _{0})s/d_{j-1}} \Biggl( \lambda \int_{0}^{s} m(s-y)\, \mathrm{d}F_{Y}(y)\nonumber \\[-2pt] & \qquad +\lambda \int_{s}^{\infty } w(s,y-s)\, \mathrm{d}F_{Y}(y) +\bar{ \lambda }\int_{0}^{\infty } m(s+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}s. \end{align} In the same manner we change variables in all the outer integrals on the right-hand side of \eqref{eq:5}. Finally, changing the variable $b_{1}-d_{1} (t-a_{2}(x))=s$ in the outer integral in the expression for $I_{1}(x)$ yields \begin{align} \label{eq:8} I_{1}(x) &= \frac{1}{d_{1}}\, e^{-(\lambda +\bar{\lambda }+\delta _{0})(a _{2}(x)+b_{1}/d_{1})} \int_{b_{0}}^{b_{1}} e^{(\lambda +\bar{\lambda }+\delta _{0})s/d_{1}}\nonumber \\[-2pt] &\quad \times \Biggl( \lambda \int_{0}^{s} m(s-y)\, \mathrm{d}F_{Y}(y) +\lambda \int_{s}^{\infty } w(s,y-s)\, \mathrm{d}F_{Y}(y)\nonumber \\[-2pt] & \qquad +\bar{\lambda }\int_{0}^{\infty } m(s+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}s. \end{align} Thus, from the above and equality \eqref{eq:5} we deduce that $m(x)$ is continuous on $[b_{j-1},b_{j}]$, and hence, on $\mathbb{R} _{+}$. Therefore, \eqref{eq:6} implies that $I_{j}(x)$ is differentiable on $[b_{j-1},b_{j}]$, and for all $x\in [b_{j-1},b_{j}]$, we get \begin{equation*} \begin{split} I'_{j}(x) &=- \frac{\lambda +\bar{\lambda }+\delta _{0}}{d_{j}}\, I_{j}(x) +\frac{1}{d_{j}} \Biggl( \lambda \int_{0}^{x} m(x-y)\, \mathrm{d}F _{Y}(y) \\[-2pt] &\quad +\lambda \int_{x}^{\infty } w(x,y-x)\, \mathrm{d}F_{Y}(y) +\bar{ \lambda }\int_{0}^{\infty } m(x+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr). \end{split} \end{equation*} Moreover, it is easily seen, e.g. from \eqref{eq:7} and \eqref{eq:8}, that all the functions $I_{j-1}(x), \ldots , \break I_{1}(x)$ and $I_{0}(x)$ are differentiable on $[b_{j-1},b_{j}]$, and for all $x\in [b_{j-1},b_{j}]$, we have \begin{gather*} I'_{j-1}(x)=-\frac{\lambda +\bar{\lambda }+\delta _{0}}{d_{j}}\, I_{j-1}(x), \quad \ldots , \quad I'_{1}(x)=- \frac{\lambda +\bar{ \lambda }+\delta _{0}}{d_{j}}\, I_{1}(x), \\ I'_{0}(x)=-\frac{\lambda +\bar{\lambda }+\delta _{0}}{d_{j}}\, I_{0}(x). \end{gather*} From \eqref{eq:5} it follows that $m(x)$ is also differentiable on $[b_{j-1},b_{j}]$. Differentiating \eqref{eq:5} and taking into account expressions for $I'_{j}(x)$, $I'_{j-1}(x), \ldots , I'_{1}(x)$, $I'_{0}(x)$ we obtain \begin{equation*} \begin{split} &m'(x)=-\frac{\lambda +\bar{\lambda }+\delta _{0}}{d_{j}}\, m(x) + \frac{1}{d _{j}} \Biggl( \lambda \int_{0}^{x} m(x-y)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\lambda \int_{x}^{\infty } w(x,y-x)\, \mathrm{d}F_{Y}(y) +\bar{ \lambda }\int_{0}^{\infty } m(x+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr), \quad x\in [b_{j-1},b_{j}], \end{split} \end{equation*} from which equation \eqref{eq:4} follows immediately. \end{proof} \begin{remark} \label{rem:2} To solve equation \eqref{eq:4}, we use the following boundary conditions. The first $k-1$ conditions are easily obtained from the equality $m_{j}(b_{j})=m_{j+1}(b_{j})$ or, equivalently, $ \lim_{x\uparrow b_{j}} m(x) =\lim_{x\downarrow b_{j}} m(x)$ for all $1\le j\le k-1$. 
In addition, for the ruin probability,\index{ruin probability} using standard considerations (see, e.g., \cite{MiRa2016,MiRaSt2015,RoScScTe1999}) we can show that $ \lim_{x\to \infty } \psi (x) =0$ provided that the net profit condition\index{net profit condition} holds. Finally, it is evident that $\psi (0)=1$. Although equation \eqref{eq:4} is not solvable analytically in the general case, we can find explicit expressions for the corresponding ruin probability\index{ruin probability} in the case where claim and premium sizes are exponentially distributed (see Section~\ref{sec:5}). The uniqueness of the required solution to \eqref{eq:4} should be justified in each case.\looseness=1 \end{remark} \begin{remark} \label{rem:3} In the assertion of Theorem \ref{thm:1}, we require the continuity of $F_{Y}(y)$. Note that if $F_{Y}(y)$ has positive points of discontinuity, then $m(x)$ may be not differentiable at some interior points of the intervals $[b_{j-1},b_{j}]$, $1\le j\le k$ (for details, see \cite{MiRa2016,MiRaSt2015}). Moreover, it is easily seen from \eqref{eq:4} that $m(x)$ is not differentiable at $x=b_{j}$, $1\le j\le k-1$, since its one-sided derivatives do not coincide at those points. \end{remark} \section{Piecewise integro-differential equation for the expected discounted dividend payments\index{dividend ! payments} until ruin} \label{sec:4} \begin{thm} \label{thm:2} Let the surplus process\index{surplus process} $ (X_{t}^{b}(x) )_{t\ge 0}$ be defined by \eqref{eq:2} under the above assumptions, and let $F_{Y}(y)$ be continuous on $\mathbb{R}_{+}$. Then the function $v(x)$ is differentiable on the intervals $[b_{j-1},b_{j}]$ for all $1\le j \le k$ and satisfies the piecewise integro-differential equation \begin{align} \label{eq:9} &d_{j} v'(x)+(\lambda +\bar{\lambda }+\delta )v(x) =\lambda \int _{0} ^{x} v(x-y)\, \mathrm{d}F_{Y}(y)\nonumber \\[-2pt] &\quad +\bar{\lambda }\int_{0}^{\infty } v(x+y)\, \mathrm{d}F_{ \bar{Y}}(y)+d_{j}, \quad x\in [b_{j-1},b_{j}]. \end{align} \end{thm} \begin{proof} We now fix any $j$ such that $1\le j\le k$ and deal with the case $x\in [b_{j-1},b_{j}]$. As in the proof of Theorem~\ref{thm:1}, considering the time and the size of the first jump of $ (X_{t} ^{\mathbf{b}}(x) )_{t\ge 0}$ and applying the law of total probability we have \begin{align} \label{eq:10} v(x) &=I_{1,j}(x) +I_{2,j}(x) +I_{1,j-1}(x) +I_{2,j-1}(x) +\cdots\nonumber \\ &\quad +I_{1,1}(x) +I_{2,1}(x) +I_{1,0}(x), \quad x\in [b_{j-1},b _{j}], \end{align} where \begin{gather*} I_{1,j}(x)=\int_{0}^{a_{j}(x)} (\lambda + \bar{\lambda }) e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{0}^{t} d_{j} e^{-\delta s}\, \mathrm{d}s \Biggr)\mathrm{d}t, \\ \begin{split} I_{2,j}(x) &=\int_{0}^{a_{j}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \lambda \int_{0}^{x-d_{j} t} e^{-\delta t}\, v (x-d_{j} t -y )\, \mathrm{d}F_{Y}(y) \\ &\quad +\bar{\lambda }\int_{0}^{\infty } e^{-\delta t}\, v (x-d _{j} t +y )\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}t, \end{split} \\ I_{1,j-1}(x)=\int_{a_{j}(x)}^{a_{j-1}(x)}\! 
( \lambda +\bar{\lambda }) e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{0}^{a_{j}(x)} d_{j} e ^{-\delta s}\, \mathrm{d}s +\!\int _{a_{j}(x)}^{t} d_{j-1} e^{-\delta s}\, \mathrm{d}s \Biggr)\mathrm{d}t, \\ \begin{split} I_{2,j-1}(x) &=\int_{a_{j}(x)}^{a_{j-1}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \lambda \int_{0}^{b_{j-1}-d_{j-1} (t-a_{j}(x))} e^{- \delta t} \\ & \qquad \times v \bigl(b_{j-1}-d_{j-1} \bigl(t-a_{j}(x)\bigr) -y \bigr)\, \mathrm{d}F_{Y}(y) \\ &\quad +\bar{\lambda }\int_{0}^{\infty } e^{-\delta t}\, v \bigl(b _{j-1}-d_{j-1} \bigl(t-a_{j}(x)\bigr) +y \bigr)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}t, \end{split} \\ \ldots \\ \begin{split} I_{1,1}(x) &=\int_{a_{2}(x)}^{a_{1}(x)} \! (\lambda +\bar{\lambda }) e ^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{0}^{a_{j}(x)} d_{j} e ^{-\delta s}\, \mathrm{d}s +\!\int _{a_{j}(x)}^{a_{j-1}(x)} d_{j-1} e ^{-\delta s} \, \mathrm{d}s +\cdots \\ &\quad +\int_{a_{3}(x)}^{a_{2}(x)} d_{2} e^{-\delta s}\, \mathrm{d}s +\int_{a_{2}(x)}^{t} d_{1} e^{-\delta s}\, \mathrm{d}s \Biggr) \mathrm{d}t, \end{split} \end{gather*} \begin{gather*} \begin{split} I_{2,1}(x) &=\int_{a_{2}(x)}^{a_{1}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \lambda \int_{0}^{b_{1}-d_{1} (t-a_{2}(x))} e^{-\delta t} \\[-2pt] & \qquad \times v \bigl(b_{1}-d_{1} \bigl(t-a_{2}(x)\bigr) -y \bigr)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\bar{\lambda }\int_{0}^{\infty } e^{-\delta t}\, v \bigl(b _{1}-d_{1} \bigl(t-a_{2}(x)\bigr) +y \bigr)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}t, \end{split} \\[-2pt] \begin{split} I_{1,0}(x) &=\int_{a_{1}(x)}^{\infty } \! (\lambda +\bar{\lambda }) e ^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{0}^{a_{j}(x)} d_{j} e ^{-\delta s}\, \mathrm{d}s +\!\int _{a_{j}(x)}^{a_{j-1}(x)} d_{j-1} e ^{-\delta s} \, \mathrm{d}s +\cdots \\[-2pt] &\quad +\int_{a_{3}(x)}^{a_{2}(x)} d_{2} e^{-\delta s}\, \mathrm{d}s +\int_{a_{2}(x)}^{a_{1}(x)} d_{1} e^{-\delta s}\, \mathrm{d}s \Biggr) \mathrm{d}t, \end{split} \end{gather*} and the functions $a_{1}(x)$, $a_{2}(x), \ldots ,a_{j}(x)$ are defined in the proof of Theorem~\ref{thm:1}. Note that the terms $I_{1,i}(x)$ and $I_{2,i}(x)$, $1\le i\le j$, in \eqref{eq:10} correspond to the case where $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ is in the $i$th layer when its first jump occurs, and the term $I_{1,0}(x)$ corresponds to the case where there are no jumps of $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ up to the time $a_{1}(x)$. The terms $I_{1,i}(x)$, $0\le i\le j$, are equal to the discounted dividend payments\index{dividend ! payments} until the first jump of $ (X_{t}^{ \mathbf{b}}(x) )_{t\ge 0}$ provided that the process is in the $i$th layer, whereas the terms $I_{2,i}(x)$, $1\le i\le j$, are equal to the corresponding expected discounted dividend payments\index{dividend ! payments} after that time. Next, we set \begin{equation} \label{eq:11} I_{1,*}(x)=I_{1,j}(x) +I_{1,j-1}(x) +\cdots +I_{1,1}(x) +I_{1,0}(x), \quad x\in [b_{j-1},b_{j}]. \end{equation} Thus, $I_{1,*}(x)$ describes the expected discounted dividend payments\index{dividend ! payments} until the first jump of $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$. 
Rearranging terms in the expression for $I_{1,*}(x)$ gives \begin{align} \label{eq:12} I_{1,*}(x) &=(\lambda + \bar{\lambda }) \Biggl(\, \int_{0}^{a_{j}(x)} e ^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{0}^{t} d_{j} e^{-\delta s}\, \mathrm{d}s \Biggr)\mathrm{d}t\nonumber \\[-2pt] &\quad +\int_{a_{j}(x)}^{a_{j-1}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{a_{j}(x)}^{t} d_{j-1} e^{-\delta s}\, \mathrm{d}s \Biggr)\mathrm{d}t +\cdots\nonumber \\[-2pt] &\quad +\int_{a_{2}(x)}^{a_{1}(x)} e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{a_{2}(x)}^{t} d_{1} e^{-\delta s}\, \mathrm{d}s \Biggr) \mathrm{d}t\nonumber \\[-2pt] &\quad +\int_{a_{j}(x)}^{\infty } e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{0}^{a_{j}(x)} d_{j} e^{-\delta s}\, \mathrm{d}s \Biggr) \mathrm{d}t\nonumber \\[-2pt] &\quad +\int_{a_{j-1}(x)}^{\infty } e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{a_{j}(x)}^{a_{j-1}(x)} d_{j-1} e^{-\delta s}\, \mathrm{d}s \Biggr)\mathrm{d}t +\cdots\nonumber \\[-2pt] &\quad +\int_{a_{1}(x)}^{\infty } e^{-(\lambda +\bar{\lambda })t} \Biggl( \int_{a_{2}(x)}^{a_{1}(x)} d_{1} e^{-\delta s}\, \mathrm{d}s \Biggr) \mathrm{d}t \, \Biggr). \end{align} Taking all the integrals on the right-hand side of \eqref{eq:12} and simplifying the resulting expression we get \begin{align} \label{eq:13} I_{1,*}(x) &= \frac{d_{j}}{\lambda +\bar{\lambda }+\delta }\, \bigl(1-e ^{-(\lambda +\bar{\lambda }+\delta )\,a_{j}(x)} \bigr)\nonumber \\ &\quad +\frac{d_{j-1}}{\lambda +\bar{\lambda }+\delta }\, \bigl(e ^{-(\lambda +\bar{\lambda }+\delta )\,a_{j}(x)} -e^{-(\lambda +\bar{ \lambda }+\delta )\,a_{j-1}(x)} \bigr) +\cdots\nonumber \\ &\quad +\frac{d_{1}}{\lambda +\bar{\lambda }+\delta }\, \bigl(e^{-( \lambda +\bar{\lambda }+\delta )\,a_{2}(x)} -e^{-(\lambda +\bar{ \lambda }+\delta )\,a_{1}(x)} \bigr). \end{align} Changing the variable $x-d_{j} t=s$ in the outer integral in the expression for $I_{2,j}(x)$ gives \begin{align} \label{eq:14} I_{2,j}(x) &= \frac{1}{d_{j}}\, e^{-(\lambda +\bar{\lambda }+\delta )x/d _{j}} \int_{b_{j-1}}^{x} e^{(\lambda +\bar{\lambda }+\delta )s/d_{j}}\nonumber \\[-2pt] &\quad \times \Biggl( \lambda \int_{0}^{s} v(s-y)\, \mathrm{d}F_{Y}(y) +\bar{\lambda }\int _{0}^{\infty } v(s+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}s. \end{align} Likewise, changing the variable $b_{j-1}-d_{j-1} (t-a_{j}(x))=s$ in the outer integral in the expression for $I_{2,j-1}(x)$ yields \begin{align} \label{eq:15} I_{2,j-1}(x) &= \frac{1}{d_{j-1}}\, e^{-(\lambda +\bar{\lambda }+\delta )(a_{j}(x)+b_{j-1}/d_{j-1})} \int_{b_{j-2}}^{b_{j-1}} e^{(\lambda +\bar{ \lambda }+\delta )s/d_{j-1}}\nonumber \\[-2pt] & \qquad \times \Biggl( \lambda \int_{0}^{s} v(s-y)\, \mathrm{d}F_{Y}(y) +\bar{ \lambda }\int _{0}^{\infty } v(s+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}s. \end{align} Next, in the same manner we change variables in all those outer integrals on the right-hand side of \eqref{eq:10} that are not included in the sum \eqref{eq:11}. Eventually, changing the variable $b_{1}-d_{1} (t-a_{2}(x))=s$ in the outer integral in the expression for $I_{2,1}(x)$ we obtain \begin{align} \label{eq:16} I_{2,1}(x) &= \frac{1}{d_{1}}\, e^{-(\lambda +\bar{\lambda }+\delta )(a _{2}(x)+b_{1}/d_{1})} \int_{b_{0}}^{b_{1}} e^{(\lambda +\bar{\lambda }+\delta )s/d_{1}}\nonumber \\[-2pt] &\quad \times \Biggl( \lambda \int_{0}^{s} v(s-y)\, \mathrm{d}F_{Y}(y) +\bar{\lambda }\int _{0}^{\infty } v(s+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr) \mathrm{d}s. 
\end{align} From \eqref{eq:13} it follows immediately that $I_{1,*}(x)$ is differentiable on $[b_{j-1},b_{j}]$, and for all $x\in [b_{j-1},b_{j}]$, we get \begin{equation*} I'_{1,*}(x)=-\frac{\lambda +\bar{\lambda }+\delta }{d_{j}}\, \biggl( I _{1,*}(x) -\frac{d_{j}}{\lambda +\bar{\lambda }+\delta } \biggr). \end{equation*} Next, from the above and equality \eqref{eq:10} we conclude that $v(x)$ is continuous on $[b_{j-1},b_{j}]$, and hence, on $\mathbb{R} _{+}$. Therefore, \eqref{eq:14} implies that $I_{2,j}(x)$ is differentiable on $[b_{j-1},b_{j}]$, and for all $x\in [b_{j-1},b_{j}]$, we have \begin{equation*} \begin{split} I'_{2,j}(x) &=- \frac{\lambda +\bar{\lambda }+\delta }{d_{j}}\, I_{2,j}(x) \\[-2pt] &\quad +\frac{1}{d_{j}} \Biggl( \lambda \int_{0}^{x} v(x-y)\, \mathrm{d}F_{Y}(y) +\bar{\lambda }\int _{0}^{\infty } v(x+y)\, \mathrm{d}F_{\bar{Y}}(y) \Biggr). \end{split} \end{equation*} Furthermore, it follows immediately, e.g. from \eqref{eq:15} and \eqref{eq:16}, that all the functions $I_{2,j-1}(x),\ldots , I_{2,1}(x)$ are differentiable on $[b_{j-1},b_{j}]$, and for all $x\in [b_{j-1},b_{j}]$, we obtain \begin{equation*} I'_{2,j-1}(x)=-\frac{\lambda +\bar{\lambda }+\delta }{d_{j}}\, I_{2,j-1}(x), \quad \ldots , \quad I'_{2,1}(x)=- \frac{\lambda +\bar{ \lambda }+\delta }{d_{j}}\, I_{2,1}(x). \end{equation*} By \eqref{eq:10}, we conclude that $v(x)$ is also differentiable on $[b_{j-1},b_{j}]$. Differentiating \eqref{eq:10} and taking into account expressions for $I'_{1,*}(x)$, $I'_{2,j}(x)$, $I'_{2,j-1}(x), \ldots , I'_{2,1}(x)$ we get \begin{equation*} \begin{split} v'(x) &=-\frac{\lambda +\bar{\lambda }+\delta }{d_{j}}\, v(x) +1 +\frac{1}{d _{j}} \Biggl( \lambda \int_{0}^{x} v(x-y)\, \mathrm{d}F_{Y}(y) \\[-2pt] &\quad +\bar{\lambda }\int_{0}^{\infty } v(x+y)\, \mathrm{d}F_{ \bar{Y}}(y) \Biggr), \quad x\in [b_{j-1},b_{j}], \end{split} \end{equation*} which immediately yields equation \eqref{eq:9}. \end{proof} \begin{remark} \label{rem:4} To solve equation \eqref{eq:9}, we obtain the first $k-1$ boundary conditions from the equality $v_{j}(b_{j})=v_{j+1}(b_{j})$ or, equivalently, $\lim_{x\uparrow b_{j}} v(x) =\lim_{x\downarrow b_{j}} v(x)$ for all $1\le j\le k-1$. Moreover, if the net profit condition\index{net profit condition} holds, applying arguments similar to those in \cite[p.~70]{Sc2008} we can show that $\lim_{x\to \infty } v(x) =d _{k}/{\delta }$. Lastly, it is easily seen that $v(0)=0$. The uniqueness of the required solution to \eqref{eq:9} should be justified in each case. Explicit expressions for $v(x)$ in the case where claim and premium sizes are exponentially distributed are given in \xch{Section~\ref{sec:5}.}{Section~\ref{sec:5}).} \end{remark} \begin{remark} \label{rem:5} If $F_{Y}(y)$ has positive points of discontinuity, then $v(x)$ may not be differentiable at some interior points of the intervals $[b_{j-1},b_{j}]$, $1\le j\le k$. Furthermore, from \eqref{eq:9} we deduce that $v(x)$ is not differentiable at $x=b_{j}$, $1\le j\le k-1$. \end{remark} \section{Exponentially distributed claim and premium sizes} \label{sec:5} In this section, we concentrate on the case where claim and premium sizes are exponentially distributed, i.e. \begin{equation} \label{eq:17} f_{Y}(y) =\frac{1}{\mu }\, e^{-y/\mu } \quad \text{and} \quad f_{ \bar{Y}}(y) = \frac{1}{\bar{\mu }}\, e^{-y/\bar{\mu }}, \quad y\ge 0. \end{equation} \subsection{Explicit formulas for the ruin probability\index{ruin probability}} \label{sec:5.1} Let now $w(\cdot ,\cdot ) \equiv 1$ and $\delta _{0}=0$.
Taking into account \eqref{eq:17}, equation \eqref{eq:4} for the ruin probability\index{ruin probability} can be written as \begin{align} \label{eq:18} &d_{j} \psi '(x)+(\lambda +\bar{\lambda })\psi (x)\nonumber \\[-2pt] &\quad =\frac{\lambda }{\mu }\, e^{-x/\mu } \int_{0}^{x} \psi (u)e ^{u/\mu }\, \mathrm{d}u+ \lambda e^{-x/\mu } + \frac{\bar{\lambda }}{\bar{ \mu }}\, e^{x/\bar{\mu }} \int_{x}^{\infty } \psi (u)e^{-u/\bar{ \mu }}\, \mathrm{d}u \end{align} for all $x\in [b_{j-1},b_{j}]$ and $1\le j\le k$. We now reduce piecewise integro-differential equation \eqref{eq:18} to a piecewise linear differential equation with constant coefficients. \begin{lemma} \label{lem:1} Let the surplus process\index{surplus process} $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ be defined by \eqref{eq:2} under the above assumptions, and let claim and premium sizes be exponentially distributed with means $\mu $ and $\bar{\mu }$, respectively. Then $\psi (x)$ is a solution to the piecewise differential equation \begin{equation} \label{eq:19} d_{j}\mu \bar{\mu }\psi '''(x) + \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{ \mu }(\lambda +\bar{ \lambda }) \bigr)\psi ''(x) +(\bar{\lambda }\bar{ \mu }-\lambda \mu -d_{j})\psi '(x)=0 \end{equation} for all $x\in [b_{j-1},b_{j}]$ and $1\le j\le k$. \end{lemma} \begin{proof} It is easily seen that the right-hand side of \eqref{eq:18} is differentiable on $[b_{j-1},b_{j}]$. Therefore, $\psi (x)$ is twice differentiable on $[b_{j-1},b_{j}]$. Differentiating \eqref{eq:18} gives \begin{align} \label{eq:20} &d_{j} \psi ''(x) +(\lambda +\bar{\lambda })\psi '(x) =-\frac{1}{ \mu } \Biggl( \frac{\lambda }{\mu }\, e^{-x/\mu } \int_{0}^{x} \psi (u)e ^{u/\mu }\, \mathrm{d}u +\lambda e^{-x/\mu } \Biggr)\nonumber \\[-2pt] &\quad +\frac{\bar{\lambda }}{\bar{\mu }^{2}}\, e^{x/\bar{\mu }} \int_{x}^{\infty } \psi (u) e^{-u/\bar{\mu }}\, \mathrm{d}u + \biggl( \frac{ \lambda }{\mu } - \frac{\bar{\lambda }}{\bar{\mu }} \biggr)\psi (x), \quad x\in [b_{j-1},b_{j}]. \end{align} Multiplying \eqref{eq:20} by $\mu $ and adding \eqref{eq:18} we get \begin{align} \label{eq:21} &d_{j}\mu \psi ''(x) + \bigl(d_{j}+\mu (\lambda + \bar{\lambda }) \bigr) \psi '(x) +\bar{\lambda } \biggl(1+ \frac{\mu }{\bar{\mu }} \biggr) \psi (x)\nonumber \\[-2pt] &\quad =\frac{\bar{\lambda }}{\bar{\mu }} \biggl(1+\frac{\mu }{\bar{ \mu }} \biggr) e^{x/\bar{\mu }} \int_{x}^{\infty } \psi (u) e^{-u/\bar{ \mu }}\, \mathrm{d}u, \quad x\in [b_{j-1},b_{j}]. \end{align} From \eqref{eq:21} it follows that $\psi (x)$ has the third derivative on $x\in [b_{j-1},b_{j}]$. Differentiating \eqref{eq:21} yields \begin{align} \label{eq:22} &d_{j}\mu \psi '''(x) + \bigl(d_{j}+ \mu (\lambda +\bar{\lambda }) \bigr)\psi ''(x) + \bar{\lambda } \biggl(1+\frac{\mu }{\bar{\mu }} \biggr)\psi '(x)\nonumber \\[-2pt] &=\frac{\bar{\lambda }}{\bar{\mu }^{2}} \biggl(1+\frac{\mu }{\bar{ \mu }} \biggr) e^{x/\bar{\mu }} \int_{x}^{\infty }\! \psi (u) e^{-u/\bar{ \mu }}\, \mathrm{d}u -\frac{\bar{\lambda }}{\bar{\mu }} \biggl(1+ \frac{ \mu }{\bar{\mu }} \biggr)\psi (x), \quad x\in [b_{j-1},b_{j}]. \end{align} Finally, multiplying \eqref{eq:22} by $(-\bar{\mu })$ and adding \eqref{eq:21} we obtain \eqref{eq:19}. 
\end{proof} For $1\le j\le k$, we now define the following constants, which are used in the assertion of Theorem \ref{thm:3} below: \begin{equation*} \mathrm{D}_{j}= \bigl( d_{j} (\mu +\bar{\mu }) +\mu \bar{\mu }(\lambda -\bar{\lambda }) \bigr)^{2} +4\lambda \bar{\lambda }\mu ^{2} \bar{ \mu }^{2}, \end{equation*} \begin{equation*} z_{1,j}=\frac{- ( d_{j}(\bar{\mu }-\mu ) +\mu \bar{\mu }(\lambda +\bar{\lambda }) ) +\sqrt{\mathrm{D}_{j}}}{2d_{j}\mu \bar{ \mu }} \end{equation*} and \begin{equation*} z_{2,j}=\frac{- ( d_{j}(\bar{\mu }-\mu ) +\mu \bar{\mu }(\lambda +\bar{\lambda }) ) -\sqrt{\mathrm{D}_{j}}}{2d_{j}\mu \bar{ \mu }}. \end{equation*} \begin{thm} \label{thm:3} Let the surplus process\index{surplus process} $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ be defined by \eqref{eq:2} under the above assumptions, and let claim and premium sizes\vadjust{\goodbreak} be exponentially distributed with means $\mu $ and $\bar{\mu }$, respectively. If the net profit condition\index{net profit condition} \eqref{eq:3} holds, then we have \begingroup \abovedisplayskip=8pt \belowdisplayskip=8pt \begin{equation} \label{eq:23} \psi _{j}(x)=C_{1,j} e^{z_{1,j} x} +C_{2,j} e^{z_{2,j} x} +C_{3,j} \end{equation} for all $x\in [b_{j-1},b_{j}]$ and $1\le j\le k$, where $C_{3,k}=0$ and all the other constants $C_{1,j}$, $C_{2,j}$ and $C_{3,j}$ are determined from the system of linear equations \eqref{eq:24}--\eqref{eq:27}: \begin{align} \label{eq:24} &\lambda e^{-b_{j-1}/\mu } \sum _{l=1}^{j-1} \Biggl( \sum _{i=1}^{2} \frac{C _{i,l}}{\mu z_{i,l}+1}\, \bigl(e^{(z_{i,l}+1/\mu )b_{l}} -e^{(z_{i,l}+1/ \mu )b_{l-1}} \bigr)\nonumber \\[-2pt] &\; + \bigl(e^{b_{l}/\mu } \!-\! e^{b_{l-1}/\mu } \bigr) C_{3,l} \Biggr) \!+\!\sum_{i=1}^{2} \biggl( \frac{\bar{\lambda }e^{b_{j-1}/\bar{ \mu }}}{\bar{\mu }z_{i,j}-1}\, \bigl(e^{(z_{i,j}-1/\bar{\mu })b_{j}} \!-\! e^{(z_{i,j}-1/\bar{\mu })b_{j-1}} \bigr)\nonumber \\[-2pt] &\; -(d_{j} z_{i,j} +\lambda +\bar{\lambda }) e^{z_{i,j}b_{j-1}} \biggr) C_{i,j} - \bigl(\bar{\lambda }e^{(b_{j-1}-b_{j})/\bar{\mu }} + \lambda \bigr) C_{3,j}\nonumber \\[-2pt] &\; +\bar{\lambda }e^{b_{j-1}/\bar{\mu }} \sum_{l=j+1}^{k} \Biggl( \sum_{i=1}^{2} \frac{C_{i,l}}{\bar{\mu }z_{i,l}-1}\, \bigl(e^{(z_{i,l}-1/\bar{ \mu })b_{l}} -e^{(z_{i,l}-1/\bar{\mu })b_{l-1}} \bigr)\nonumber \\[-2pt] &\; - \bigl(e^{-b_{l}/\bar{\mu }} -e^{-b_{l-1}/\bar{\mu }} \bigr) C _{3,l} \Biggr) =-\lambda e^{-b_{j-1}/\mu }, \quad 1\le j\le k, \end{align} \begin{equation} \label{eq:25} C_{1,1} +C_{2,1} +C_{3,1}=1, \end{equation} \begin{equation} \label{eq:26} d_{j} \sum_{i=1}^{2} z_{i,j} e^{z_{i,j} b_{j}} C_{i,j} -d_{j+1} \sum_{i=1}^{2} z_{i,j+1} e^{z_{i,j+1} b_{j}} C_{i,j+1}=0, \quad 1\le j\le k-1, \end{equation} and \begin{equation} \label{eq:27} \sum_{i=1}^{2} e^{z_{i,j} b_{j}} C_{i,j} +C_{3,j} -\sum _{i=1}^{2} e ^{z_{i,j+1} b_{j}} C_{i,j+1} -C_{3,j+1}=0, \quad 1\le j\le k-1, \end{equation} \endgroup provided that its determinant is not equal to 0. \end{thm} \begin{proof} Taking into account the notation introduced in Section \ref{sec:2} and applying\break Lemma~\ref{lem:1} we conclude that the function $\psi _{j}(x)$ is a solution to \eqref{eq:19} on $x\in [b_{j-1},b_{j}]$ for each $1\le j\le k$. The corresponding characteristic equation has the form \begin{align} \label{eq:28} &d_{j} \mu \bar{\mu }z^{3} + \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{ \mu }(\lambda +\bar{\lambda }) \bigr) z^{2} +(\bar{\lambda }\bar{ \mu }- \lambda \mu -d_{j}) z =0 \end{align} for all $1\le j\le k$. 
We first show that the equation \begin{align} \label{eq:29} &d_{j} \mu \bar{\mu }z^{2} + \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{ \mu }(\lambda +\bar{\lambda }) \bigr) z +(\bar{\lambda }\bar{\mu }- \lambda \mu -d_{j})=\xch{0}{0.} \end{align} has two negative roots. Indeed, its discriminant is equal to \begin{align*} & \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{\mu }(\lambda +\bar{\lambda }) \bigr)^{2} -4d_{j} \mu \bar{\mu }(\bar{\lambda }\bar{\mu }-\lambda \mu -d_{j}) \\ & \:\: =d_{j}^{2} (\bar{\mu }-\mu )^{2} +\mu ^{2} \bar{\mu }^{2} (\lambda +\bar{ \lambda })^{2} +2d_{j} \mu \bar{\mu }(\lambda +\bar{\lambda }) (\bar{ \mu }-\mu )\\ &\:\: \quad +4d_{j} \mu \bar{\mu }(d_{j} + \lambda \mu -\bar{\lambda }\bar{ \mu }) \\ & \:\: =d_{j}^{2} (\mu +\bar{\mu })^{2} +\mu ^{2} \bar{\mu }^{2} (\lambda +\bar{ \lambda })^{2} +2d_{j} \mu \bar{\mu }(\lambda -\bar{\lambda }) ( \mu +\bar{\mu }) \\ & \:\: = \bigl( d_{j} (\mu +\bar{\mu }) +\mu \bar{\mu }(\lambda - \bar{\lambda }) \bigr)^{2} +4\lambda \bar{\lambda }\mu ^{2} \bar{\mu }^{2}. \end{align*} Hence, it is positive and coincides with the constant $\mathrm{D}_{j}$ defined above. Consequently, $z_{1,j}$ and $z_{2,j}$ defined before the assertion of the theorem are two real roots of equation \eqref{eq:29}. By the net profit condition\index{net profit condition} \eqref{eq:3}, we have \begin{equation*} \bar{\lambda }\bar{\mu }-\lambda \mu -d_{j} >0 \end{equation*} and \begin{equation*} d_{j}(\bar{\mu }-\mu ) +\mu \bar{\mu }(\lambda +\bar{\lambda }) = \mu (\bar{\lambda }\bar{\mu }-\lambda \mu -d_{j}) +\lambda \mu ^{2} + \lambda \mu \bar{\mu }+d_{j}\bar{\mu }>0 \end{equation*} for all $1\le j\le k$, which implies that $z_{1,j}<0$ and $z_{2,j}<0$. Therefore, $z_{1,j}<0$, $z_{2,j}<0$ and $z_{3,j}=0$ are roots of equation \eqref{eq:28}, and we get \eqref{eq:23} with some constants $C_{1,j}$, $C_{2,j}$ and $C_{3,j}$. Moreover, since condition \eqref{eq:3} holds, using standard considerations (see, e.g., \cite{MiRa2016,MiRaSt2015,RoScScTe1999}) we can easily show that $\lim_{x\to \infty } \psi (x) =0$, which yields $C_{3,k}=0$. To determine all the other constants $C_{1,j}$, $C_{2,j}$ and $C_{3,j}$, we use the following boundary conditions. The first $k$ conditions are obtained by letting $x=b_{j-1}$ in \eqref{eq:18} for $1\le j\le k$: \begin{align} \label{eq:30} &d_{j} \psi '(b_{j-1})+(\lambda +\bar{\lambda })\psi (b_{j-1}) =\frac{ \lambda }{\mu }\, e^{-b_{j-1}/\mu } \int _{0}^{b_{j-1}} \psi (u)e^{u/ \mu }\, \mathrm{d}u\nonumber \\[-2pt] &\quad +\lambda e^{-b_{j-1}/\mu } +\frac{\bar{\lambda }}{\bar{\mu }} \, e^{b_{j-1}/\bar{\mu }} \int_{b_{j-1}}^{\infty } \psi (u)e^{-u/\bar{ \mu }}\, \mathrm{d}u. \end{align} One more condition is obtained from the equality $\psi (0)=1$. Finally, the last $2(k-1)$ conditions are derived from the equalities $d_{j} \psi '_{j}(b_{j})=d_{j+1} \psi '_{j+1}(b_{j})$ and $\psi _{j}(b _{j})=\psi _{j+1}(b_{j})$ for $1\le j\le k-1$. Note that the first equality easily follows from \eqref{eq:18}. 
Taking into account \eqref{eq:23}, for all $1\le j\le k$, we get: \begin{equation} \label{eq:31} \psi '_{j}(x)=C_{1,j} z_{1,j} e^{z_{1,j} x} +C_{2,j} z_{2,j} e^{z_{2,j} x}, \quad x\in [b_{j-1},b_{j}],\vspace*{-9pt} \end{equation} \begin{align} \label{eq:32} &\int_{0}^{b_{j-1}} \psi (u)e^{u/\mu }\, \mathrm{d}u =\sum_{l=1}^{j-1} \int_{b_{l-1}}^{b_{l}} \psi _{l}(u)e^{u/\mu } \, \mathrm{d}u =\sum_{l=1} ^{j-1} \Biggl( \sum_{i=1}^{2} \frac{C_{i,l}}{z_{i,l}+1/\mu }\nonumber \\[-2pt] &\quad \times \bigl( e^{(z_{i,l}+1/\mu )b_{l}} -e^{(z_{i,l}+1/\mu )b _{l-1}} \bigr) +C_{3,l} \mu \bigl( e^{b_{l}/\mu } -e^{b_{l-1}/\mu } \bigr) \Biggr) \end{align} and \begin{align} \label{eq:33} &\int_{b_{j-1}}^{\infty } \psi (u)e^{-u/\bar{\mu }}\, \mathrm{d}u = \sum_{l=j}^{k} \int_{b_{l-1}}^{b_{l}} \psi _{l}(u)e^{-u/\bar{\mu }} \, \mathrm{d}u =\sum_{l=j}^{k} \Biggl( \sum_{i=1}^{2} \frac{C_{i,l}}{z _{i,l}-1/\bar{\mu }}\nonumber \\[-2pt] &\quad \times \bigl( e^{(z_{i,l}-1/\bar{\mu })b_{l}} -e^{(z_{i,l}-1/\bar{ \mu })b_{l-1}} \bigr) -C_{3,l} \bar{\mu } \bigl( e^{-b_{l}/\bar{\mu }} -e^{-b_{l-1}/\bar{\mu }} \bigr) \Biggr). \end{align} Substituting \eqref{eq:23}, \eqref{eq:31}, \eqref{eq:32} and \eqref{eq:33} into \eqref{eq:30} and doing some simplifications yield \eqref{eq:24}. Next, from the equality $\psi (0)=1$ we get \eqref{eq:25}. Lastly, substituting \eqref{eq:31} into $d_{j} \psi '_{j}(b _{j})=d_{j+1} \psi '_{j+1}(b_{j})$ and \eqref{eq:23} into $\psi _{j}(b _{j})=\psi _{j+1}(b_{j})$ for $1\le j\le k-1$ immediately yields \eqref{eq:26} and \eqref{eq:27}, respectively.\vadjust{\goodbreak} Thus, we get the system of $3k-1$ linear equations \eqref{eq:24}--\eqref{eq:27} to determine the $3k-1$ unknown constants. That system has a unique solution provided that its determinant is not equal to 0. Hence, the piecewise differential equation \eqref{eq:19} has a unique solution satisfying these boundary conditions, and that solution is given by \eqref{eq:23}. Since we have derived \eqref{eq:19} from~\eqref{eq:18} without any additional assumptions concerning the differentiability of $\psi (x)$, we conclude that the functions $\psi _{j}(x)$, $1\le j \le k$, given by~\eqref{eq:23} are the unique solutions to~\eqref{eq:18} on the intervals $[b_{j-1},b_{j}]$ satisfying these boundary conditions. This guarantees that the functions $\psi _{j}(x)$ we have found coincide with the ruin probability\index{ruin probability} on $[b_{j-1},b_{j}]$, which completes the proof.
\end{proof} \begin{remark} \label{rem:6} In particular, if $k=2$, then $C_{3,2}=0$ and the constants $C_{1,1}$, $C_{2,1}$, $C_{3,1}$, $C_{1,2}$ and $C_{2,2}$ are determined from the system of linear equations \eqref{eq:34}--\eqref{eq:38}: \begin{align} \label{eq:34} &\sum_{i=1}^{2} \biggl( \frac{\bar{\lambda }}{\bar{\mu }z_{i,1}-1}\, \bigl(1- e^{(z_{i,1}-1/\bar{\mu })b_{1}} \bigr) +d_{1} z_{i,1} +\lambda +\bar{\lambda } \biggr) C_{i,1}\nonumber \\ &\quad + \bigl( \bar{\lambda }e^{-b_{1}/\bar{\mu }} +\lambda \bigr) C _{3,1} +\sum_{i=1}^{2} \frac{\bar{\lambda }e^{(z_{i,2}-1/\bar{\mu })b _{1}}}{\bar{\mu }z_{i,2}-1}\, C_{i,2} =\lambda ,\\[-30pt]\nonumber \end{align} \begin{align} \label{eq:35} &\sum_{i=1}^{2} \frac{\lambda e^{-b_{1}/\mu }}{\mu z_{i,1}+1}\, \bigl(1- e^{(z_{i,1}+1/\mu )b_{1}} \bigr) C_{i,1} + \lambda \bigl( e ^{-b_{1}/\mu }-1 \bigr) C_{3,1}\nonumber \\ &\quad +\sum_{i=1}^{2} \biggl( \frac{\bar{\lambda }e^{z_{i,2}b_{1}}}{\bar{ \mu }z_{i,2}-1} +(d_{2} z_{i,2} +\lambda +\bar{ \lambda }) e^{z_{i,2}b _{1}} \biggr) C_{i,2} =\lambda e^{-b_{1}/\mu }, \end{align} \begin{equation} \label{eq:36} C_{1,1} +C_{2,1} +C_{3,1}=1, \end{equation} \begin{equation} \label{eq:37} d_{1} \sum_{i=1}^{2} z_{i,1} e^{z_{i,1} b_{1}} C_{i,1} -d_{2} \sum_{i=1}^{2} z_{i,2} e^{z_{i,2} b_{1}} C_{i,2}=0 \end{equation} and \begin{equation} \label{eq:38} \sum_{i=1}^{2} e^{z_{i,1} b_{1}} C_{i,1} +C_{3,1} -\sum _{i=1}^{2} e ^{z_{i,2} b_{1}} C_{i,2}=0 \end{equation} provided that its determinant is not equal to 0. \end{remark} The proposition below enables us to check whether the system of equations \eqref{eq:34}--\eqref{eq:38} has a unique solution. Let \begin{align*} \Delta &=\frac{1}{\bar{\lambda }(\mu +\bar{\mu })^{2}} \\ &\times \bigl( d_{1} \bar{\mu }(z_{1,1} -z_{2,1}) \bigl( e^{b_{1}/\bar{ \mu }} -e^{-b_{1}/\mu } \bigr) \bigl( (\bar{\lambda }\bar{\mu }- \lambda \mu -d_{1}) e^{b_{1}/\bar{\mu }} -(\bar{\lambda }\bar{\mu }- \lambda \mu -d_{2}) \bigr) \\ & \;\;\quad -d_{1} \bar{\mu }e^{b_{1}/\bar{\mu }} (z_{1,1} -z_{2,1}) (\bar{\lambda }\bar{\mu }-\lambda \mu -d_{1}) \bigl( e^{b_{1}/\bar{\mu }} -e^{-b _{1}/\mu } \bigr) \\ & \;\;\quad +d_{1}\mu (1-1/\bar{\mu }) \bigl( e^{z_{1,1} b_{1}} -e^{z_{2,1} b_{1}} \bigr) \bigl( (\bar{\lambda }\bar{\mu }-\lambda \mu -d_{1}) e^{b_{1}/\bar{ \mu }} -(\bar{\lambda }\bar{\mu }-\lambda \mu -d_{2}) \bigr) \\ & \;\;\quad +(d_{2}-d_{1}) \bigl( e^{z_{1,1} b_{1}} -e^{z_{2,1} b_{1}} \bigr) (\bar{ \lambda }\bar{\mu }-\lambda \mu -d_{1}) \bigl( e^{b_{1}/\bar{\mu }} -e ^{-b_{1}/\mu } \bigr) \\ & \;\;\quad +d_{1}^{2}\mu e^{b_{1}/\bar{\mu }} (\bar{ \mu }-1) \bigl( z_{2,1} e ^{z_{1,1} b_{1}} -z_{1,1} e^{z_{2,1} b_{1}} \bigr) \\ & \;\;\quad +d_{1}\bar{\mu }(d_{2}-d_{1}) \bigl( e^{b_{1}/\bar{\mu }} -e^{-b_{1}/ \mu } \bigr) \bigl( z_{2,1} e^{z_{1,1} b_{1}} -z_{1,1} e^{z_{2,1} b _{1}} \bigr) \bigr). \end{align*} \begin{propos} \label{pr:1} The system of linear equations \eqref{eq:34}--\eqref{eq:38} has a unique solution if and only if $\Delta \neq 0$. \end{propos} \begin{proof} From \eqref{eq:36} we have $C_{3,1} =1 -C_{1,1} -C_{2,1}$. 
Substituting that into \eqref{eq:34}, \eqref{eq:35} and \eqref{eq:38} yields \begin{align} \label{eq:39} &\sum_{i=1}^{2} \biggl( \frac{\bar{\lambda }}{\bar{\mu }z_{i,1}-1}\, \bigl(1- e^{(z_{i,1}-1/\bar{\mu })b_{1}} \bigr) +d_{1} z_{i,1} +\bar{ \lambda }-\bar{\lambda }e^{-b_{1}/\bar{\mu }} \biggr) C_{i,1}\nonumber \\[-2pt] &\quad +\sum_{i=1}^{2} \frac{\bar{\lambda }e^{(z_{i,2}-1/\bar{\mu })b _{1}}}{\bar{\mu }z_{i,2}-1}\, C_{i,2} =-\bar{\lambda }e^{-b_{1}/\bar{ \mu }},\\[-30pt]\nonumber \end{align} \begin{align} \label{eq:40} &\sum_{i=1}^{2} \biggl( \frac{\lambda e^{-b_{1}/\mu }}{\mu z_{i,1}+1} \, \bigl(1- e^{(z_{i,1}+1/\mu )b_{1}} \bigr) +\lambda \bigl( 1-e^{-b _{1}/\mu } \bigr) \biggr) C_{i,1}\nonumber \\[-2pt] &\quad +\sum_{i=1}^{2} \biggl( \frac{\bar{\lambda }e^{z_{i,2}b_{1}}}{\bar{ \mu }z_{i,2}-1} +(d_{2} z_{i,2} +\lambda +\bar{ \lambda }) e^{z_{i,2}b _{1}} \biggr) C_{i,2} =\lambda \end{align} and \begin{equation} \label{eq:41} \sum_{i=1}^{2} \bigl(e^{z_{i,1}b_{1}}-1 \bigr) C_{i,1} -\sum _{i=1}^{2} e^{z_{i,2}b_{1}} C_{i,2}=-1. \end{equation} Thus, the system of equations \eqref{eq:34}--\eqref{eq:38} has a unique solution if and only if the system of equations \eqref{eq:39}, \eqref{eq:40}, \eqref{eq:37} and \eqref{eq:41} does. Multiplying \eqref{eq:41} by $(-d_{2} z_{1,2})$, adding \eqref{eq:37} and rearranging the terms we obtain \begin{equation} \label{eq:42} e^{z_{2,2} b_{1}} C_{2,2} =\frac{ d_{2} z_{1,2} -\sum_{i=1}^{2} ( d_{1} z_{i,1} e^{z_{i,1}b_{1}} +d _{2} z_{1,2} (1-e^{z_{i,1}b_{1}} ) ) C_{i,1}}{d_{2} (z _{1,2} -z_{2,2})}. \end{equation} Similarly, multiplying \eqref{eq:41} by $(-d_{2} z_{2,2})$, adding \eqref{eq:37} and rearranging the terms we obtain \begin{equation} \label{eq:43} e^{z_{1,2} b_{1}} C_{1,2} =\frac{ d_{2} z_{2,2} -\sum_{i=1}^{2} ( d_{1} z_{i,1} e^{z_{i,1}b_{1}} +d _{2} z_{2,2} (1-e^{z_{i,1}b_{1}} ) ) C_{i,1}}{d_{2} (z _{2,2} -z_{1,2})}. \end{equation} Substituting \eqref{eq:42} and \eqref{eq:43} into \eqref{eq:39} and doing some simplifications yield \begin{align} \label{eq:44} &\sum_{i=1}^{2} \biggl( \frac{\bar{\mu }z_{i,1} e^{b_{1}/\bar{\mu }} -e ^{z_{i,1}b_{1}}}{\bar{\mu }z_{i,1}-1} +d_{1} z_{i,1} e^{b_{1}/\bar{ \mu }} -1\nonumber \\[-2pt] & \qquad +\frac{-d_{1} \bar{\mu }z_{i,1} e^{z_{i,1}b_{1}} +d_{2} (1 -\bar{ \mu }z_{1,2} -\bar{\mu }z_{2,2}) (1-e^{z_{i,1}b_{1}})}{d_{2}(\bar{ \mu }z_{1,2}-1)(\bar{\mu }z_{2,2}-1)} \biggr) C_{i,1}\nonumber \\[-2pt] &\quad =-\frac{\bar{\mu }^{2} z_{1,2} z_{2,2}}{(\bar{\mu }z_{1,2}-1)(\bar{ \mu }z_{2,2}-1)}. \end{align} By Vieta's theorem applied to \eqref{eq:29} for $j=2$, we have \begin{equation*} z_{1,2} z_{2,2} =\frac{\bar{\lambda }\bar{\mu }-\lambda \mu -d_{2}}{d _{2} \mu \bar{\mu }},\vadjust{\goodbreak} \end{equation*} \begin{equation*} 1 -\bar{\mu }z_{1,2} -\bar{\mu }z_{2,2} = \frac{d_{2} \bar{\mu }+ \mu \bar{\mu }(\lambda +\bar{\lambda })}{d_{2} \mu } \end{equation*} and \begin{equation*} (\bar{\mu }z_{1,2}-1) (\bar{\mu }z_{2,2}-1) = \frac{\bar{\lambda }\bar{ \mu }(\mu +\bar{\mu })}{d_{2} \mu }. 
\end{equation*} Substituting these equalities into \eqref{eq:44} gives \begin{align} \label{eq:45} &\sum_{i=1}^{2} \biggl( \frac{\bar{\mu }z_{i,1} e^{b_{1}/\bar{\mu }} -e ^{z_{i,1} b_{1}}}{\bar{\mu }z_{i,1}-1} +d_{1} z_{i,1} e^{b_{1}/\bar{ \mu }} -1\nonumber \\[-2pt] & \qquad +\frac{-d_{1} \mu z_{i,1} e^{z_{i,1}b_{1}} +(d_{2} +\mu (\lambda +\bar{ \lambda })) (1-e^{z_{i,1}b_{1}})}{\bar{\lambda }(\mu +\bar{\mu })} \biggr) C_{i,1}\nonumber \\[-2pt] &\quad =\frac{d_{2} +\lambda \mu -\bar{\lambda }\bar{\mu }}{\bar{ \lambda }(\mu +\bar{\mu })}. \end{align} Next, multiplying \eqref{eq:41} by $(\lambda +\bar{\lambda })$ and adding \eqref{eq:37} and \eqref{eq:40} we get \begin{align} \label{eq:46} &\sum_{i=1}^{2} \biggl( \frac{\lambda e^{-b_{1}/\mu }}{\mu z_{i,1}+1} \, \bigl(1- e^{(z_{i,1}+1/\mu )b_{1}} \bigr) +\lambda \bigl( 1-e^{-b _{1}/\mu } \bigr) +d_{1} z_{i,1} e^{z_{i,1}b_{1}}\nonumber \\[-2pt] &\quad +(\lambda +\bar{\lambda }) \bigl( e^{z_{i,1}b_{1}}-1 \bigr) \biggr) C_{i,1} +\sum_{i=1}^{2} \frac{\bar{\lambda }e^{z_{i,2}b_{1}}}{\bar{ \mu }z_{i,2}-1} C_{i,2} =-\bar{\lambda }. \end{align} Multiplying \eqref{eq:39} by $(-e^{b_{1}/\bar{\mu }})$ and adding \eqref{eq:46} we obtain \begin{align} \label{eq:47} &\sum_{i=1}^{2} \biggl( \frac{-\lambda \mu z_{i,1} e^{-b_{1}/\mu } - \lambda e^{z_{i,1}b_{1}}}{\mu z_{i,1}+1} +\frac{-\bar{\lambda }e^{b _{1}/\bar{\mu }} +\bar{\lambda }e^{z_{i,1}b_{1}}}{\bar{\mu }z_{i,1}-1}\nonumber \\[-2pt] &\quad -(d_{1} z_{i,1} +\bar{\lambda }) e^{b_{1}/\bar{\mu }} +(d_{1} z_{i,1} +\lambda +\bar{ \lambda }) e^{z_{i,1}b_{1}} \biggr) C_{i,1} = \lambda -\bar{\lambda }. \end{align} Thus, if the system of equations \eqref{eq:45} and \eqref{eq:47} has a unique solution, then $C_{1,2}$ and $C_{2,2}$ can be found from \eqref{eq:43} and \eqref{eq:42}, respectively. Consequently, the system of equations \eqref{eq:39}, \eqref{eq:40}, \eqref{eq:37} and \eqref{eq:41} has a unique solution if and only if the system of equations \eqref{eq:45} and \eqref{eq:47} does. A standard computation shows that the determinant of the system of equations \eqref{eq:45} and \eqref{eq:47} is equal to $\Delta $ defined above. In particular, here we use Vieta's theorem applied to \eqref{eq:29} for $j=1$. Therefore, the system of equations \eqref{eq:34}--\eqref{eq:38} has a unique solution if and only if $\Delta \neq 0$. \end{proof} \subsection{Explicit formulas for the expected discounted dividend payments\index{dividend ! payments} until ruin} \label{sec:5.2} By \eqref{eq:17}, equation \eqref{eq:9} for the expected discounted dividend payments\index{dividend ! payments} can be written as \begin{align} \label{eq:48} &d_{j} v'(x)+(\lambda +\bar{\lambda }+\delta )v(x)\nonumber \\[-2pt] &\quad =\frac{\lambda }{\mu }\, e^{-x/\mu } \int_{0}^{x} v(u)e^{u/ \mu }\, \mathrm{d}u +\frac{\bar{\lambda }}{\bar{\mu }}\, e^{x/\bar{ \mu }} \int_{x}^{\infty } v(u)e^{-u/\bar{\mu }}\, \mathrm{d}u +d_{j} \end{align} for all $x\in [b_{j-1},b_{j}]$ and $1\le j\le k$. The piecewise integro-differential equation \eqref{eq:48} can also be reduced to a piecewise linear differential equation with constant coefficients. \begin{lemma} \label{lem:2}Let the surplus process\index{surplus process} $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ be defined by \eqref{eq:2} under the above assumptions, and let claim and premium sizes be exponentially distributed with means $\mu $ and $\bar{\mu }$, respectively. 
Then $v(x)$ is a solution to the piecewise differential equation \begin{align} \label{eq:49} &d_{j}\mu \bar{\mu }v'''(x) + \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{ \mu }(\lambda +\bar{\lambda }+ \delta ) \bigr)v''(x)\nonumber \\[-2pt] &\quad +\bigl(\bar{\mu }(\bar{\lambda }+\delta ) -\mu (\lambda +\delta ) -d _{j}\bigr)v'(x) -\delta v(x)=-d_{j} \end{align} for all $x\in [b_{j-1},b_{j}]$ and $1\le j\le k$. \end{lemma} The proof of the lemma is similar to the proof of Lemma~\ref{lem:1}. For $1\le j\le k$, let \begin{equation*} \begin{split} \tilde{\mathrm{D}}_{j} &=-18\delta d_{j} \mu \bar{\mu } \bigl( d_{j}(\bar{ \mu }-\mu ) + \mu \bar{\mu }(\lambda +\bar{\lambda }+\delta ) \bigr) \bigl( \bar{\mu }(\bar{ \lambda }+\delta ) -\mu (\lambda +\delta ) -d _{j} \bigr) \\ &\quad +4\delta \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{\mu }( \lambda +\bar{\lambda }+\delta ) \bigr)^{3} \\ &\quad + \bigl( d_{j}(\bar{\mu }-\mu ) +\mu \bar{\mu }(\lambda + \bar{ \lambda }+\delta ) \bigr)^{2} \bigl( \bar{\mu }(\bar{\lambda }+ \delta ) -\mu (\lambda +\delta ) -d_{j} \bigr)^{2} \\ &\quad -4d_{j} \mu \bar{\mu } \bigl( \bar{\mu }(\bar{\lambda }+ \delta ) -\mu (\lambda +\delta ) -d_{j} \bigr)^{3} -27 (\delta d_{j} \mu \bar{ \mu })^{2}. \end{split} \end{equation*} \begin{thm} \label{thm:4}Let the surplus process\index{surplus process} $ (X_{t}^{\mathbf{b}}(x) )_{t\ge 0}$ be defined by \eqref{eq:2} under the above assumptions, and let claim and premium sizes be exponentially distributed with means $\mu $ and $\bar{\mu }$, respectively. If the net profit condition\index{net profit condition} \eqref{eq:3} holds and $\min _{1\le j\le k} \tilde{\mathrm{D}}_{j}>0$, then we have \begin{equation} \label{eq:50} v_{j}(x)=\tilde{C}_{1,j} e^{\tilde{z}_{1,j} x} +\tilde{C}_{2,j} e^{ \tilde{z}_{2,j} x} + \tilde{C}_{3,j} e^{\tilde{z}_{3,j} x} +d_{j}/ \delta \end{equation} for all $x\in [b_{j-1},b_{j}]$ and $1\le j\le k$, where $\tilde{z} _{1,j}$, $\tilde{z}_{2,j}$ and $\tilde{z}_{3,j}$ are distinct real roots of the cubic equation \begin{equation} \label{eq:51} d_{j} \mu \bar{\mu }z^{3} + \bigl( d_{j} (\bar{\mu }-\mu ) +\mu \bar{ \mu }(\lambda +\bar{\lambda }+ \delta ) \bigr) z^{2} + \bigl( \bar{ \mu }(\bar{\lambda }+\delta ) - \mu (\lambda +\delta ) -d_{j} \bigr) z -\delta =0, \end{equation} $\tilde{C}_{3,k}=0$ and all the other constants $\tilde{C}_{1,j}$, $\tilde{C}_{2,j}$ and $\tilde{C}_{3,j}$ are determined from the system of linear equations \eqref{eq:52}--\eqref{eq:55}: \begin{align} \label{eq:52} &\lambda e^{-b_{j-1}/\mu } \sum _{l=1}^{j-1} \Biggl( \sum _{i=1}^{3} \frac{ \tilde{C}_{i,l}}{\mu \tilde{z}_{i,l}+1}\, \bigl(e^{(\tilde{z}_{i,l}+1/ \mu )b_{l}} -e^{(\tilde{z}_{i,l}+1/\mu )b_{l-1}} \bigr) \Biggr)\nonumber \\[-2pt] &\quad +\sum_{i=1}^{3} \biggl( \frac{\bar{\lambda }e^{b_{j-1}/\bar{ \mu }}}{\bar{\mu }\tilde{z}_{i,j}-1}\, \bigl(e^{(\tilde{z}_{i,j}-1/\bar{ \mu })b_{j}} - e^{(\tilde{z}_{i,j}-1/\bar{\mu })b_{j-1}} \bigr)\nonumber \\[-2pt] & \qquad -(d_{j} \tilde{z}_{i,j} +\lambda +\bar{ \lambda }+\delta ) e^{ \tilde{z}_{i,j}b_{j-1}} \biggr) \tilde{C}_{i,j}\nonumber \\[-2pt] &\quad +\bar{\lambda }e^{b_{j-1}/\bar{\mu }} \sum_{l=j+1}^{k} \Biggl( \sum_{i=1}^{3} \frac{\tilde{C}_{i,l}}{\bar{\mu }\tilde{z}_{i,l}-1} \, \bigl(e^{(\tilde{z}_{i,l}-1/\bar{\mu })b_{l}} -e^{(\tilde{z}_{i,l}-1/\bar{ \mu })b_{l-1}} \bigr) \Biggr)\nonumber \\[-2pt] &\quad =\frac{d_{j}(\lambda +\bar{\lambda })}{\delta } -\frac{\lambda e^{-b_{j-1}/\mu }}{\delta } \sum _{l=1}^{j-1} d_{l} \bigl(e^{b_{l}/ \mu } -e^{b_{l-1}/\mu } \bigr)\nonumber \\[-2pt] & \qquad 
+\frac{\bar{\lambda }e^{b_{j-1}/\bar{\mu }}}{\delta } \sum_{l=j}^{k} d _{l} \bigl(e^{-b_{l}/\bar{\mu }} -e^{-b_{l-1}/\bar{\mu }} \bigr), \quad 1 \le j\le k, \end{align} \begin{equation} \label{eq:53} \tilde{C}_{1,1} +\tilde{C}_{2,1} + \tilde{C}_{3,1}=-d_{1}/\delta , \end{equation} \begin{align} \label{eq:54} &d_{j} \sum _{i=1}^{3} \tilde{z}_{i,j} e^{\tilde{z}_{i,j} b_{j}} \tilde{C}_{i,j} -d_{j+1} \sum _{i=1}^{3} \tilde{z}_{i,j+1} e^{ \tilde{z}_{i,j+1} b_{j}} \tilde{C}_{i,j+1}\nonumber \\[-2pt] &\quad =d_{j} -d_{j+1}, \quad 1\le j\le k-1, \end{align} and \begin{equation} \label{eq:55} \sum_{i=1}^{3} e^{\tilde{z}_{i,j} b_{j}} \tilde{C}_{i,j} -\sum _{i=1} ^{3} e^{\tilde{z}_{i,j+1} b_{j}} \tilde{C}_{i,j+1} =\frac{d_{j+1}-d _{j}}{\delta }, \quad 1\le j\le k-1, \end{equation} provided that its determinant is not equal to 0. \end{thm} \begin{proof} By Lemma~\ref{lem:2} and the notation introduced in Section \ref{sec:2}, we deduce that the function $v_{j}(x)$ is a solution to \eqref{eq:49} on $x\in [b_{j-1},b_{j}]$ for each $1\le j\le k$. In addition, it is easily seen that \eqref{eq:51} is the corresponding characteristic equation and its discriminant coincides with the constant $\tilde{\mathrm{D}}_{j}$ introduced above. The assumption $\min _{1\le j\le k} \tilde{\mathrm{D}}_{j}>0$ guarantees that cubic equation \eqref{eq:51} has three distinct real roots $\tilde{z}_{1,j}$, $\tilde{z}_{2,j}$ and $\tilde{z}_{3,j}$. Hence, the general solution to \eqref{eq:49} is given by \eqref{eq:50} with some constants $\tilde{C}_{1,j}$, $\tilde{C}_{2,j}$ and $\tilde{C}_{3,j}$. By Vieta's theorem, we conclude that \eqref{eq:51} has either two or no negative roots for each $1\le j\le k$. Since the net profit condition\index{net profit condition} \eqref{eq:3} holds, applying arguments similar to those in \cite[p.~70]{Sc2008} shows that $\lim_{x\to \infty } v_{k}(x) =d_{k}/ {\delta }$. \xch{Consequently}{Concequently}, if equation \eqref{eq:51} for $j=k$ had no negative roots, the function $v_{k}(x)$ would be constant, which is impossible. Therefore, equation \eqref{eq:51} for $j=k$ has two negative roots. We denote those negative roots by $\tilde{z}_{1,k}$ and $\tilde{z} _{2,k}$, and let $\tilde{z}_{3,k}$ be the third root. Since $\tilde{z}_{3,k}>0$, we conclude that $\tilde{C}_{3,k}=0$. To determine all the other constants $\tilde{C}_{1,j}$, $\tilde{C} _{2,j}$ and $\tilde{C}_{3,j}$, we need $3k-1$ boundary conditions. The first $k$ conditions are obtained by letting $x=b_{j-1}$ in \eqref{eq:48} for $1\le j\le k$: \begin{align} \label{eq:56} &d_{j} v'(b_{j-1})+(\lambda +\bar{\lambda }+\delta )v(b_{j-1})\nonumber \\ &\quad =\frac{\lambda }{\mu }\, e^{-b_{j-1}/\mu } \int_{0}^{b_{j-1}} v(u)e^{u/\mu }\, \mathrm{d}u +\frac{\bar{\lambda }}{\bar{\mu }}\, e ^{b_{j-1}/\bar{\mu }} \int_{b_{j-1}}^{\infty } v(u)e^{-u/\bar{\mu }} \, \mathrm{d}u +d_{j}. \end{align} One more condition is obtained from the equality $v(0)=0$. The last $2(k-1)$ conditions are derived from the equalities $d_{j} v'_{j}(b _{j}) -d_{j} =d_{j+1} v'_{j+1}(b_{j}) -d_{j+1}$, which easily follow from \eqref{eq:48}, and $v_{j}(b_{j})=v_{j+1}(b_{j})$ for $1\le j \le k-1$. 
Substituting \eqref{eq:50} into \eqref{eq:56} as well as into the equalities $v(0)=0$, $d_{j} v'_{j}(b_{j}) -d_{j} =d_{j+1} v'_{j+1}(b_{j}) -d_{j+1}$ and $v_{j}(b_{j}) =v_{j+1}(b_{j})$ for $1\le j\le k-1$ and doing some simplifications yield the system of linear equations \eqref{eq:52}--\eqref{eq:55}, which has a unique solution provided that its determinant is not equal to 0. Thus, the piecewise differential equation \eqref{eq:49} has a unique solution satisfying these boundary conditions, and that solution is given by \eqref{eq:50}. Applying arguments similar to those in the proof of Theorem~\ref{thm:3} guarantees that the functions $v_{j}(x)$ we have found coincide with the expected discounted dividend payments\index{dividend ! payments} until ruin on $[b_{j-1},b_{j}]$, which completes the proof. \end{proof} \begin{remark} \label{rem:7} In particular, if $k=2$, then $\tilde{C}_{3,2}=0$ and the constants $\tilde{C}_{1,1}$, $\tilde{C}_{2,1}$, $\tilde{C}_{3,1}$, $\tilde{C} _{1,2}$ and $\tilde{C}_{2,2}$ are determined from the system of linear equations \eqref{eq:57}--\eqref{eq:61}: \begin{align} \label{eq:57} &\sum_{i=1}^{3} \biggl( \frac{\bar{\lambda }}{\bar{\mu }\tilde{z}_{i,1}-1}\, \bigl(1-e^{( \tilde{z}_{i,1}-1/\bar{\mu })b_{1}} \bigr) +d_{1} \tilde{z}_{i,1} + \lambda +\bar{\lambda }+\delta \biggr) \tilde{C}_{i,1}\nonumber \\ &\quad +\sum_{i=1}^{2} \frac{\bar{\lambda }e^{(\tilde{z}_{i,2}-1/\bar{ \mu })b_{1}}}{\bar{\mu }\tilde{z}_{i,2}-1}\, \tilde{C}_{i,2} =-\frac{d _{1} \lambda }{\delta } - \frac{\bar{\lambda }(d_{1}-d_{2}) e^{-b_{1}/\bar{ \mu }}}{\delta },\\[-24pt]\nonumber \end{align} \begin{align} \label{eq:58} &\lambda e^{-b_{1}/\mu } \sum _{i=1}^{3} \frac{\tilde{C}_{i,1}}{\mu \tilde{z}_{i,1}+1}\, \bigl(1-e^{(\tilde{z}_{i,1}+1/\mu )b_{1}} \bigr) + \sum_{i=1}^{2} \biggl( \frac{\bar{\lambda }e^{\tilde{z}_{i,2} b_{1}}}{\bar{ \mu }\tilde{z}_{i,2}-1}\nonumber \\ &\quad +(d_{2} \tilde{z}_{i,2} +\lambda +\bar{\lambda }+\delta ) e ^{\tilde{z}_{i,2}b_{1}} \biggr) \tilde{C}_{i,2} = \frac{\lambda (d_{1}-d _{2})}{\delta } -\frac{d_{1} \lambda e^{-b_{1}/\mu }}{\delta }, \end{align} \begin{equation} \label{eq:59} \tilde{C}_{1,1} +\tilde{C}_{2,1} + \tilde{C}_{3,1}=-d_{1}/\delta , \end{equation} \begin{equation} \label{eq:60} d_{1} \sum_{i=1}^{3} \tilde{z}_{i,1} e^{\tilde{z}_{i,1} b_{1}} \tilde{C}_{i,1} -d_{2} \sum_{i=1}^{2} \tilde{z}_{i,2} e^{\tilde{z} _{i,2} b_{1}} \tilde{C}_{i,2}=d_{1} -d_{2} \end{equation} and \begin{equation} \label{eq:61} \sum_{i=1}^{3} e^{\tilde{z}_{i,1} b_{1}} \tilde{C}_{i,1} -\sum _{i=1} ^{2} e^{\tilde{z}_{i,2} b_{1}} \tilde{C}_{i,2} =\frac{d_{2}-d_{1}}{ \delta } \end{equation} provided that its determinant is not equal to 0. \end{remark} \section{Numerical illustrations} \label{sec:6} To present numerical examples for the results obtained in Section~\ref{sec:5}, we set $\lambda =0.1$, $\bar{\lambda }=2.3$, $\mu =3$, $\bar{\mu }=0.2$, $b=5$ and $\delta =0.01$. In addition, we denote by $\psi ^{*}(x)$ the ruin probability\index{ruin probability} in the corresponding model without dividend payments.\index{dividend ! payments} It is given by \begin{equation*} \psi ^{*}(x)=\frac{\lambda (\mu +\bar{\mu })}{\bar{\mu }(\lambda +\bar{ \lambda })}\, \exp \biggl( - \frac{(\bar{\lambda }\bar{\mu }-\lambda \mu )x}{\mu \bar{\mu }(\lambda +\bar{\lambda })} \biggr), \quad x \in [0,\infty ), \end{equation*} (see \cite{Boi2003,MiRa2016}).
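
The quantities entering the numerical illustrations of this section can be evaluated directly with any numerical software. The following \texttt{R} sketch is not part of the original derivation; the helper names \texttt{psi.star} and \texttt{z.roots} are ours. It merely re-evaluates the closed-form expression for $\psi^{*}(x)$ above and the roots $z_{1,j}$, $z_{2,j}$ defined before Theorem~\ref{thm:3}, for the parameter values fixed at the beginning of this section and for the dividend rates $0.05$ and $0.1$ used below.
\begin{verbatim}
# Parameters fixed at the beginning of this section
lambda <- 0.1; lambda.bar <- 2.3; mu <- 3; mu.bar <- 0.2

# Ruin probability without dividend payments (closed form displayed above)
psi.star <- function(x) {
  lambda * (mu + mu.bar) / (mu.bar * (lambda + lambda.bar)) *
    exp(-(lambda.bar * mu.bar - lambda * mu) * x /
          (mu * mu.bar * (lambda + lambda.bar)))
}
psi.star(c(0, 1, 2))  # approx. 0.666667 0.596560 0.533825 (cf. the tables below)

# Roots z_{1,j} and z_{2,j} defined before Theorem 3, for a given dividend rate d
z.roots <- function(d) {
  D <- (d * (mu + mu.bar) + mu * mu.bar * (lambda - lambda.bar))^2 +
       4 * lambda * lambda.bar * mu^2 * mu.bar^2
  a <- d * (mu.bar - mu) + mu * mu.bar * (lambda + lambda.bar)
  c((-a + sqrt(D)) / (2 * d * mu * mu.bar),
    (-a - sqrt(D)) / (2 * d * mu * mu.bar))
}
z.roots(0.05)  # approx. -0.084781 -43.248552
z.roots(0.1)   # approx. -0.051863 -19.281470
\end{verbatim}
These values coincide with the exponents appearing in the expressions for $\psi_{1}$ and $\psi_{2}$ displayed below.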
For the parameters chosen above, $\psi ^{*}(x) \approx 0.666667 e^{-0.111111 x}$.\vadjust{\goodbreak} Moreover, let now $d_{1}=0.05$ and $d_{2}=0.1$. Applying Theorems~\ref{thm:3} and~\ref{thm:4} as well as Remarks~\ref{rem:6} and~\ref{rem:7} we can calculate the ruin probability $\psi (x)$\index{ruin probability} and the expected discounted dividend payments\index{dividend ! payments} until ruin $v(x)$: \begin{gather*} \psi _{1}(x) \approx 0.530821 e^{-0.084781 x} +0.179668 e^{-43.248552 x} +0.289512, \quad x\in [0,5], \\ \psi _{2}(x) \approx 0.826718 e^{-0.051863 x} -7.043723 \cdot 10^{38} e ^{-19.28147 x}, \quad x\in [5,\infty ); \\ \begin{split} v_{1}(x) \approx 5 &- 2.992137 e^{-43.470279 x} -4.421273 e^{-0.124597 x} \\ &+2.41341 e^{0.061543 x}, \quad x\in [0,5], \end{split} \\ v_{2}(x) \approx 10 -2.198169 \cdot 10^{40} e^{-19.405407 x} -6.97712 e^{-0.107684 x}, \quad x\in [5,\infty ). \end{gather*} Table~\ref{table:1} presents the results of calculations for some values of $x$. \begin{table} \caption{The ruin probabilities without and with dividend payments and the expected discounted dividend payments, $d_{1}=0.05$ and $d_{2}=0.1$} \label{table:1} \begin{tabular*}{6cm}{@{\extracolsep{\fill}}clll@{}} \hline $x$ & \multicolumn{1}{c}{$\psi ^{*}(x)$} & \multicolumn{1}{c}{$\psi (x)$} & \multicolumn{1}{c}{$v(x)$} \\ \hline 0 &0.666667 &1 &0\\ 1 &0.596560 &0.777184 &3.663273\\ 2 &0.533825 &0.737542 &4.283457\\ 5 &0.382502 &0.636926 &5.911685\\ 7 &0.306284 &0.575029 &6.716708\\ 10 &0.219462 &0.492173 &7.623108\\ 15 &0.125917 &0.379750 &8.612682\\ 20 &0.072245 &0.293007 &9.190265\\ 50 &0.002577 &0.061825 &9.967986\\ 70 &0.000279 &0.021912 &9.996285\\ \hline \end{tabular*} \end{table} Next, for $d_{1}=0.1$ and $d_{2}=0.05$, we get \begin{gather*} \psi _{1}(x) \approx 1.204304 e^{-0.051863 x} +0.218067 e^{-19.28147 x} -0.42237, \quad x\in [0,5], \\ \psi _{2}(x) \approx 0.772527 e^{-0.084781 x} +1.012903 \cdot 10^{91} e ^{-43.248552 x}, \quad x\in [5,\infty ); \\ \begin{split} v_{1}(x) \approx 10 &- 2.219094 e^{-19.405407 x} -5.609737 e^{-0.107684 x} \\[-2pt] &-2.171169 e^{0.079758 x}, \quad x\in [0,5], \end{split} \\ v_{2}(x) \approx 5 +5.716149 \cdot 10^{92} e^{-43.470279 x} -2.857069 e^{-0.124597 x}, \quad x\in [5,\infty ). \end{gather*} The values of $\psi ^{*}(x)$, $\psi (x)$ and $v(x)$ for some $x$ are given in Table~\ref{table:2}. \begin{table} \caption{The ruin probabilities without and with dividend payments and the expected discounted dividend payments, $d_{1}=0.1$ and $d_{2}=0.05$} \label{table:2} \begin{tabular*}{6cm}{@{\extracolsep{\fill}}clll@{}} \hline $x$ & \multicolumn{1}{c}{$\psi ^{*}(x)$} & \multicolumn{1}{c}{$\psi (x)$} & \multicolumn{1}{c}{$v(x)$} \\ \hline 0 &0.666667 &1 &0\\ 1 &0.596560 &0.721066 &2.611525\\ 2 &0.533825 &0.663275 &2.930525\\ 5 &0.382502 &0.506845 &3.490686\\ 7 &0.306284 &0.426750 &3.805635\\ 10 &0.219462 &0.330912 &4.178134\\ 15 &0.125917 &0.216577 &4.559200\\ 20 &0.072245 &0.141747 &4.763582\\ 50 &0.002577 &0.011141 &4.994372\\ 70 &0.000279 &0.002044 &4.999534\\ \hline \end{tabular*} \end{table} The results presented in Tables~\ref{table:1} and~\ref{table:2} indicate that dividend payments\index{dividend ! payments} substantially increase the ruin probability.\index{ruin probability} The first strategy is much more profitable, although the corresponding ruin probability\index{ruin probability} is larger in that case. \end{document}
\begin{document} \begin{frontmatter} \title{Letter to the Editor} \runtitle{Letter to the Editor} \begin{aug} \author{\fnms{Marco} \snm{Geraci}\corref{}\thanksref{t1}\ead[label=e1]{[email protected]}} \thankstext{t1}{Department of Epidemiology and Biostatistics, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia SC 29209, USA. \printead{e1}} \runauthor{M. Geraci} \affiliation{University of South Carolina\thanksmark{t1}} \end{aug} \begin{abstract} \quad \cite{galarza} have recently proposed a method of estimating linear quantile mixed models \citep{geraci_2014b} based on a Monte Carlo EM algorithm. They assert that their procedure represents an improvement over the numerical quadrature and non-smooth optimization approach implemented by \cite{geraci_2014a}. The objective of this note is to demonstrate that this claim is incorrect. We also point out several inaccuracies and shortcomings in their paper which affect other results and conclusions that can be drawn. \end{abstract} \end{frontmatter} \section{Linear quantile mixed models} Linear quantile mixed models (LQMMs) were developed by \cite{geraci_2014b} as an extension of the quantile regression model with random intercepts of \cite{geraci_2007}. We consider data from two-level nested designs in the form $(\mathbf{x}_{ij}^{\top},\mathbf{z}_{ij}^{\top},y_{ij})$, for $j=1,\ldots , n_{i}$ and $i=1,\ldots , M$, $N = \sum_i n_i$, where $\mathbf{x}_{ij}^{\top}$ is the $j$th row of a known $n_{i}\times p$ matrix $\mathbf{X}_i$, $\mathbf{z}_{ij}^{\top}$ is the $j$th row of a known $n_{i}\times q$ matrix $\mathbf{Z}_i$ and $y_{ij}$ is the $j$th observation of the response vector $\mathbf{y}_i = (y_{i1},\ldots,y_{in_{i}})^{\top}$ for the $i$th cluster. The $N\times 1$ vector of responses is denoted by $\mathbf{y} = (\mathbf{y}_{1}^{\top},\ldots,\mathbf{y}_{M}^{\top})^{\top}$. This kind of data arise from longitudinal or panel studies and other cluster sampling designs. The $\tau$th LQMM is defined as \begin{equation}\label{eq:1} Q_{y_{ij}|\mathbf{u}_{i}}(\tau) = \mathbf{x}_{ij}^{\top}\bm\beta_{\tau} + \mathbf{z}_{ij}^{\top}\mathbf{u}_{i}, \end{equation} where $0 < \tau < 1$ is the given quantile level, $\bm\beta_{\tau}$ is a $p \times 1$ vector of $\tau$-specific coefficients that are common to all clusters, while the $q \times 1$ vector $\mathbf{u}_{i}$ may vary with cluster. For estimation purposes only, \cite{geraci_2014b} introduced the convenient assumption that the responses $y_{ij}$, $j=1,\ldots , n_{i}$, $i=1,\ldots,M$, conditionally on a $q\times1$ vector of random effects $\mathbf{u}_i$, independently follow the asymmetric Laplace (AL) density \begin{equation}\label{eq:2} p(y_{ij}|\mathbf{u}_{i}) = \frac{\tau(1-\tau)}{\sigma_{\tau}}\exp\left\{-\frac{1}{\sigma_{\tau}}\rho_\tau\left(y_{ij}-\mu_{\tau,ij}\right)\right\}, \end{equation} where $\rho_\tau(r)=r\left\{\tau-I(r < 0)\right\}$ is the `check' function and $I$ denotes the indicator function, with location and scale parameters given by $\mu_{\tau,ij} = \mathbf{x}_{ij}^{\top}\bm\beta_{\tau} + \mathbf{z}_{ij}^{\top}\mathbf{u}_{i}$ and $\sigma_{\tau}$, respectively, which we write as $y_{ij} \sim \mathcal{AL}\left(\mu_{\tau,ij}, \sigma_{\tau}\right)$. (The third parameter of the AL is the skew parameter $\tau \in (0,1)$ which, in this model, is fixed and defines the quantile level of interest.) 
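
For concreteness, the check function and the density \eqref{eq:2} can be coded in a few lines; the following \texttt{R} sketch is only an illustration of the definition above, the helper names \texttt{rho} and \texttt{dal} are ours, and the code is not taken from either of the packages discussed below.
\begin{verbatim}
# Check function and AL density (2): an illustrative sketch of the definition
rho <- function(r, tau) r * (tau - (r < 0))
dal <- function(y, mu, sigma, tau) {
  tau * (1 - tau) / sigma * exp(-rho(y - mu, tau) / sigma)
}
# dal integrates to 1 for any mu, sigma and tau, e.g.
integrate(dal, -Inf, Inf, mu = 0, sigma = 0.2, tau = 0.25)$value  # 1
\end{verbatim}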
Also, they assumed that $\mathbf{u}_{i} = \left(u_{i1},\ldots,u_{iq}\right)^{\top}$, for $i=1,\ldots,M$, is a random vector independent from the model's error term with mean zero and $\tau$-specific variance-covariance matrix $\bm\Sigma_{\tau}$ of dimensions $q\times q$. The latter is reparameterized in terms of an $m$-dimensional vector, $1 \leq m \leq q(q+1)/2$, of non-redundant parameters $\bm\theta_{\tau}$, i.e. $\bm\Sigma_{\tau} = \bm\Sigma(\bm\theta_{\tau})$. The algorithm to estimate $(\bm\beta_{\tau}, \bm\theta_{\tau}, \sigma_{\tau})$ is described in detail by \cite{geraci_2014a,geraci_2014b}. First, the (quasi) log-likelihood is integrated numerically over the distribution of the random effects, i.e. \begin{align}\label{eq:3} & \ell_{\mathrm{GQ}}(\bm\beta_{\tau}, \bm\theta_{\tau}, \sigma_{\tau}|\mathbf{y}) = \\ \nonumber & \quad \sum_{i}^{M}\log\left\{\sum_{k_1=1}^{K}\cdots\sum_{k_q=1}^{K}p\left(\mathbf{y}_{i}| \mathbf{v}_{k_1,\ldots,k_q}\right) \prod_{l=1}^{q}w_{k_{l}}\right\}, \end{align} with $\mathbf{v}_{k_1,\ldots,k_q}=(v_{k_1},\ldots,v_{k_q})^{\top}$, where $v_{k_l}$ and $w_{k_l}$, $k_l = 1,\ldots,K$, $l=1,\ldots,q$, denote the abscissas and the weights of the (one-dimensional) Gaussian quadrature. Second, the integrated log-likelihood \eqref{eq:3} is maximized via a non-smooth optimizer. In principle, one can consider different distributions for the random effects, which may be naturally linked to different quadrature rules (or penalties). For example, it is immediate to verify that the normal distribution is akin to a Gauss-Hermite quadrature, a special case of LQMM discussed by Geraci and Bottai. In their paper, Galarza et al write \vskip 0.2cm \begin{small} [Geraci and Bottai (2014)] extended [Geraci and Bottai's (2007)] setup to accommodate multiple random effects [...]. Here, we consider a more general correlated random effects framework with general dispersion matrix $\bm\Psi = \bm\Psi(\mathbf{a})$. \end{small} \vskip 0.2cm This statement is not justified since their model is precisely the LQMM defined in \eqref{eq:1}-\eqref{eq:2} with normal random effects and general variance-covariance matrix $\bm\Sigma_{\tau} = \bm\Sigma(\bm\theta_{\tau})$. \section{Simulation study} To fit the LQMM \eqref{eq:1}-\eqref{eq:2}, Galarza et al proposed to use a stochastic approximation of the expectation-maximization (SAEM) algorithm. They compared their estimation approach, as implemented in the \texttt{qrLMM} package, to the quadrature-based algorithm implemented in the \texttt{lqmm} package \citep{geraci_2014a}. The description of their simulation study is not clear as it lacks several details. First of all, it does not specify which versions of \texttt{lqmm}, \texttt{qrLMM} or \texttt{R} were used in their study. Most importantly, there is no indication about \texttt{lqmm} optimization settings and the syntax used for modeling. We speculate that the default options were used. We tried to replicate their design and we believe that the setting described below is quite similar to theirs. We generated data according to the model \begin{equation}\label{eq:4} y_{ij} = \mathbf{x}_{ij}^{\top}\bm\delta + \mathbf{z}_{ij}^{\top}\mathbf{u}_{i} + \varepsilon_{\tau,ij}, \end{equation} where $\bm\delta = (0.8,0.5,1)^{\top}$, $x_{1,ij} = 1$, $x_{2,ij} \sim \mathcal{N}(0,1)$, $x_{3,ij} \sim \mathcal{N}(0,1)$, $z_{1,ij} \sim \mathcal{N}(0,1)$, $z_{2,ij} \sim \mathcal{N}(0,1)$, and $\varepsilon_{\tau,ij}\sim \mathcal{AL}\left(0, 0.2\right)$. 
Moreover, $\mathbf{u}_{i} \sim \mathcal{N}(\mathbf{0},\bm\Sigma)$, where \[ \bm\Sigma = \left[ \begin{array}{cc} 0.8 & 0.5 \\ 0.5 & 1 \\ \end{array} \right]. \] The number of clusters $M$ varied (50, 100, 200, 300) while the size of the clusters was fixed to $n_{i} = 3$, $i = 1,\ldots,M$, throughout the simulation. We considered five LQMMs \eqref{eq:1} for $\tau \in \{0.05, 0.1, 0.5, 0.9, 0.95\}$. (Note that the error $\varepsilon_{\tau,ij}$ in \eqref{eq:4} is sampled from an AL with skewness determined by the same $\tau$ that defines the quantile to be estimated, therefore $\bm\beta_{\tau} = \bm\delta$ for all $\tau$.) Data were replicated 100 times for each combination of sample size and quantile level. For this simulation, we used \texttt{lqmm} 1.5.3, which is, at the time of writing, the latest version available on the Comprehensive R Archive Network, and \texttt{qrLMM} 1.3 for \texttt{R} version 3.4.2 \citep{R}. By default, the function \texttt{QRLMM} starts the SAEM algorithm with estimates of $\bm\beta_{\tau}$ and $\sigma_{\tau}$ obtained from linear programming (package \texttt{quantreg}). In contrast, \texttt{lqmm} starts by default from ordinary least squares estimates. Therefore, we changed the option \texttt{lqmmControl(startQR = TRUE)} for the sake of comparability. Moreover, we used $K = 9$ quadrature knots instead of the default $K=7$ to improve accuracy since $q > 1$ \citep[see][for details]{geraci_2014b}. For the SAEM algorithm we used the same settings as in Galarza et al, namely 20 Monte Carlo simulations, 500 maximum iterations and $0.2$ for the cut-point that determines the proportion of initial iterations with no memory. The variance-covariance matrix was specified as a general positive-definite matrix in both estimation procedures. All the other estimation settings in \texttt{lqmm} and \texttt{QRLMM} were left unchanged to their default values. In a preliminary analysis, we assessed the computational time needed to run the full simulation and we estimated it would take approximately two months for \texttt{QRLMM}, but less than half an hour for \texttt{lqmm}. Given the excessive computational time needed for \texttt{QRLMM}, we ran the latter only for selected scenarios, namely $M \in\{50, 300\}$ and $\tau \in\{0.05, 0.5, 0.95\}$. \begin{landscape} \begin{table} \caption{Performance summary for the quadrature \& non-smooth optimization algorithm (lqmm) and the approximated EM algorithm (qrLMM) run on a 64-bit operating system machine with 32 Gb of RAM and 3.60 GHz clock-rate processor. All figures refer to the same subset of scenarios, namely $M \in\{50, 300\}$ and $\tau \in\{0.05, 0.5, 0.95\}$. 
Averages are calculated over $2\times 3 \times 100 = 600$ replicated datasets.} \label{tab1} \begin{tabular}{lrrrrr} \hline \textit{Algorithm} & \multicolumn{1}{l}{\textit{Average bias}} & \multicolumn{1}{l}{\textit{Average root mean}} & \multicolumn{1}{l}{\textit{Total elapsed time}} & \multicolumn{1}{l}{\textit{Average elapsed time}} & \textit{Percentage of convergence}\\ &\multicolumn{1}{l}{} & \multicolumn{1}{l}{\textit{squared error}} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{\textit{failures}}\\ \hline lqmm & 0.0019 & 0.0852 & 7.5 (min) & 0.7 (s) & $0\%$ \\ qrLMM & 0.0017 & 0.0910 & 24066.0 (min) & 2551.2 (s) & $21\%$ \\ \hline \end{tabular} \end{table} \end{landscape} Table~\ref{tab1} shows a summary of the actual performance of the two algorithms, while Figures~\ref{fig1} and \ref{fig2} show, respectively, the absolute bias and root mean squared error (RMSE) of the two estimators. For comparison, Figure~\ref{fig2} also shows the RMSE values reported for SAEM by Galarza et al in Table 2 of their paper. The average bias and RMSE calculated for $\bm\beta_{\tau}$ and $\sigma_{\tau}$ in selected scenarios were small for both estimators (Table~\ref{tab1}), with average RMSE slightly lower for \texttt{lqmm}. Most notably, the time needed by \texttt{qrLMM} to run one model was on average 42.52 minutes (min 5.82, max 132.30 minutes) with a $21\%$ convergence failure rate. In contrast, \texttt{lqmm} took less than 1 second for one replication (min 0.09, max 9.27 seconds) with no convergence failures in any of the selected scenarios (and no convergence failures in any of the other scenarios either). The bias and RMSE for specific sample sizes and quantiles were similar in most cases except for the intercept, in which case \texttt{lqmm} seemed to have an advantage over \texttt{qrLMM} at more extreme quantiles (Figures~\ref{fig1} and \ref{fig2}). Note also that the RMSE results reported by Galarza et al in their paper are close to those we obtained in our simulation for selected scenarios (Figure~\ref{fig2}); thus, it is reasonable to conclude that our simulation design is a faithful reproduction of theirs. \begin{landscape} \begin{figure} \caption{Absolute bias of the two estimators.} \label{fig1} \end{figure} \end{landscape} \begin{landscape} \begin{figure} \caption{Root mean squared error (RMSE) of the two estimators, with the values reported by Galarza et al shown for comparison.} \label{fig2} \end{figure} \end{landscape} Table~\ref{tab2} shows that the (scaled) average log-likelihood resulting from the two fitting algorithms was comparable in all selected scenarios. \begin{table} \caption{Average log-likelihood at convergence (scaled by $n$) for the quadrature \& non-smooth optimization algorithm (lqmm) and the approximated EM algorithm (qrLMM) run on a 64-bit operating system machine with 32 Gb of RAM and 3.60 GHz clock-rate processor. All figures refer to the same subset of scenarios, namely $M \in\{50, 300\}$ and $\tau \in\{0.05, 0.5, 0.95\}$. Averages are calculated over $2\times 3 \times 100 = 600$ replicated datasets.} \label{tab2} \begin{tabular}{lrrrr} \hline Algorithm & Sample size & $\tau = 0.05$ & $\tau = 0.5$ & $\tau = 0.95$ \\ \hline lqmm & 50 & $-$7.78 & $-$4.14 & $-$7.77 \\ & 300 & $-$7.83 & $-$4.28 & $-$7.83 \\ qrLMM & 50 & $-$7.81 & $-$4.11 & $-$7.80 \\ & 300 & $-$7.86 & $-$4.22 & $-$7.86 \\ \hline \end{tabular} \end{table} As a side note, we found Galarza et al's simulation setting rather unusual since the vector $\mathbf{z}_{ij}$ is typically a subset of $\mathbf{x}_{ij}$, for the random effects are constrained to have zero-mean (see also the discussion in the next section).
Moreover, the AL distribution in LQMM provides only a quasi-likelihood for point estimation, i.e., it is not assumed to be the true distribution. Galarza et al's simulation is very limited and is more of a sanity check than a validation study. It would be more realistic to simulate errors from a variety of distributions with different shapes, along with heteroscedastic variants of these models. An extensive simulation of this kind is provided by \cite{geraci_2014b}. \section{Framingham study} Galarza et al also provided a comparison between the two algorithms using a subset of the cholesterol data from the Framingham study \citep{zhang}. Once again, the description of their analysis is incomplete. First of all, the authors state \vskip 0.2cm \begin{small} Interestingly, for the extremes quantiles, some warnings messages on convergence were displayed while fitting Geraci's method, even after increasing the number of iterations and reducing [sic] the tolerance, as suggested in the \texttt{lqmm} manual. \end{small} \vskip 0.2cm The authors do not say which estimation settings were used initially and how these were changed afterwards. Most importantly, they do not say whether the warning messages were obtained during model fitting or bootstrapping, and how many warnings were produced. These warnings may be of little concern \citep[see][for a discussion on this point]{geraci_2014a} and, as shown further below, they can be addressed with an appropriate tweaking of optimization parameters \citep{geraci_2014a}, along with a thoughtful examination of the data and model. The authors are also silent on the model for $\bm\Sigma_{\tau}$. This is irrelevant for the \texttt{qrLMM} package since it provides only one model (i.e., the general positive-definite matrix). However, the \texttt{lqmm} package provides four different models, including the diagonal variance-covariance structure as the default. We then decided to replicate the analysis of Galarza et al who considered the model \begin{equation}\label{eq:5} Q_{y_{ij}|\mathbf{u}_{i}}(\tau) = \beta_{\tau,0} + \beta_{\tau,1}\mathrm{sex}_{i} + \beta_{\tau,2}\mathrm{age}_{i} + u_{1,i} + u_{2,i}T_{ij}, \end{equation} where $y_{ij}$ is the $ij$th measurement of cholesterol (divided by 100) and $T_{ij} = (t_{ij} - 5)/10$, with $t_{ij}$ denoting years since the beginning of the study. There is clearly something awkward about the specification of model \eqref{eq:5} since it does not include a fixed coefficient for $T_{ij}$. We can only surmise that Galarza et al misinterpreted equation (9) in \cite{zhang}, where the random slope is actually centered about the fixed slope (not about zero). Whatever the reason, suffice it to say that this oversight not only may introduce bias in the estimates, but it might also render estimation more difficult and prone to failure if the fixed slope is effectively different from zero. Therefore, we proceeded with the following corrected model \begin{align}\label{eq:6} Q_{y_{ij}|\mathbf{u}_{i}}(\tau) = & \, \beta_{\tau,0} + \beta_{\tau,1}\mathrm{sex}_{i} + \beta_{\tau,2}\mathrm{age}_{i} + \beta_{\tau,3}T_{ij}\\ \nonumber & + u_{1,i} + u_{2,i}T_{ij}, \end{align} where $(u_{1,i}, u_{2,i})^{\top} \sim \mathcal{N}(\mathbf{0}, \bm\Sigma_{\tau})$ and $\bm\Sigma_{\tau}$ is a general positive-definite matrix. Model fitting and bootstrap standard error estimation were carried out for 19 vigintiles as detailed in Appendix A.
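
In terms of \texttt{lqmm} syntax, the difference between \eqref{eq:5} and \eqref{eq:6} is simply whether the time variable enters the fixed part of the model formula. The sketch below is purely illustrative: the call for model \eqref{eq:5} reflects our reading of Galarza et al's specification rather than code taken from their paper, and the data preparation steps are those of Appendix A.
\begin{verbatim}
library(lqmm)
data(Cholesterol, package = "qrLMM")
Cholesterol$year.c <- (Cholesterol$year - 5)/10
Cholesterol$sex <- as.factor(Cholesterol$sex)
# Model (5) as we read it: no fixed slope for year.c (hypothetical reconstruction)
lqmm(I(cholst/100) ~ sex + age, random = ~ year.c, group = ID,
     data = Cholesterol, tau = 0.5, covariance = "pdSymm")
# Corrected model (6), as in Appendix A: fixed slope for year.c included
lqmm(I(cholst/100) ~ year.c + sex + age, random = ~ year.c, group = ID,
     data = Cholesterol, tau = 0.5, covariance = "pdSymm")
\end{verbatim}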
The tolerance parameters set for \texttt{lqmm} estimation were less restrictive than the default values but still within reasonable bounds. We obtained only two warning messages of failed convergence during bootstrapping, but not during model estimation. Considering that $19 \times 50 = 950$ models were fitted for the bootstrap, this issue is hardly worthy of note in this case. Finally, Galarza et al's statement \vskip 0.2cm \begin{small} We observe that our SAEM method leads to mostly smaller SEs and AIC compared to the Geraci method. [...] Hence [...] the substantial gain in the AIC criterion and the SEs establish that our SAEM approach provides a much better fit to the dataset \end{small} \vskip 0.2cm \noindent is a hodgepodge of claims that are not substantiated anywhere in their paper. First, the authors found in their simulation study that the asymptotic approximations provide valid standard errors \citep[][Table 1]{galarza}. This should be expected since the data were generated under the AL distribution. Theoretical results actually show that outside the AL case it is not appropriate to quantify uncertainty using this distribution as it leads to underestimation of the true variability \citep{yang}. This is why \texttt{lqmm} makes use of bootstrap. Moreover, the authors never provided a simulation study to compare SAEM's standard errors with those obtained with \texttt{lqmm}; thus, it is not possible to understand the nature of the differences found for one particular dataset (even if we grant for the sake of argument that the \texttt{lqmm}'s estimation settings and modeling syntax were appropriately specified in the Framingham data analysis). Secondly, the comparison between average log-likelihoods in our simulation (Table~\ref{tab2}) does not support the superiority of SAEM in terms of goodness of fit (GOF). In summary, the issue of standard error estimation remains to be investigated, while empirical evidence, although limited, contradicts Galarza et al's claim that SAEM gives a better GOF performance. \section{Conclusion} Linear quantile mixed models \citep{geraci_2014b} represent a valuable tool available to the scientific community. Computational issues are still an open problem and different approaches have been investigated by several researchers \citep[see][for an overview]{marino_2015}. Galarza et al's claim of SAEM's superior performance fails to stand up to closer examination. Our simulation shows that while SAEM produces finite-sample bias and RMSE comparable to those obtained from the quadrature-based algorithm in \texttt{lqmm}, its sluggish convergence and high proportion of convergence failures put Galarza et al's proposal at enormous disadvantage. \appendix \section{R code} In this Appendix, we provide the \texttt{R} code for the analysis of the Framingham cholesterol data using the \texttt{lqmm} package. \begin{verbatim} library(lqmm) data(Cholesterol, package = "qrLMM") Cholesterol$year.c <- (Cholesterol$year - 5)/10 Cholesterol$sex <- as.factor(Cholesterol$sex) # Set optimization parameters ctrl <- lqmmControl(method = "df", LP_tol_ll = 1e-3, LP_max_iter = 2000, startQR = TRUE) # Fit model for tau = 0.05, 0.1, ..., 0.95 fit <- lqmm(I(cholst/100) ~ year.c + sex + age, random = ~ year.c, group = ID, data = Cholesterol, tau = 1:19/20, covariance = "pdSymm", control = ctrl) # Bootstrap (50 replicates) fit.s <- summary(fit, R = 50, seed = 178) \end{verbatim} \end{document}
\begin{document} \title[{A note on connectedness of Blaschke products}]{A note on connectedness of Blaschke products} \author[Yue Xin]{Yue Xin} \address{School of Mathematics, Jilin University, 130012, Changchun, P. R. China} \email{[email protected]} \author[Bingzhe Hou]{Bingzhe Hou} \address{School of Mathematics, Jilin University, 130012, Changchun, P. R. China} \email{[email protected]} \subjclass{Primary 30J05, 30J10; Secondary 54C35.} \keywords{Blaschke products; path-connectedness; pseudo-hyperbolic distance; interpolating; one-component.} \begin{abstract} Consider the space $\mathcal{F}$ of all inner functions on the unit open disk under the uniform topology, which is a metric topology induced by the $H^{\infty}$-norm. In the present paper, a class of Blaschke products, denoted by $\mathcal{H}_{SC}$, is introduced. We prove that for each $B\in\mathcal{H}_{SC}$, $B$ and $zB$ belong to the same path-connected component of $\mathcal{F}$. A method for selecting a fine subsequence of zeros plays an important role in the proof. As a byproduct, we obtain that each Blaschke product in $\mathcal{H}_{SC}$ has an interpolating and one-component factor. \end{abstract} \maketitle \section{Introduction} Let $\mathbb{D}$ be the unit open disk in the complex plane $\mathbb{C}$ and let $\partial\mathbb{D}$ be the boundary of $\mathbb{D}$, i.e., the unit circle. The pseudo-hyperbolic distance on the unit open disk $\mathbb{D}$, denoted by $\rho$, is given by \begin{equation*} \rho(z,w)={\big \vert}\frac{z-w}{1-\overline{z}w}{\big \vert}, \ \ \ \text{for any} \ z, w \in\mathbb{D}. \end{equation*} Let $H^{\infty}$ be the Banach algebra of bounded analytic functions on $\mathbb{D}$ equipped with the norm ${\Vert}f{\Vert}_{\infty}=\sup_{z\in\mathbb{D}}{\vert}f(z){\vert}$. A bounded analytic function $f$ on $\mathbb{D}$ is called an inner function if it has unimodular radial limits almost everywhere on the boundary $\partial\mathbb{D}$ of $\mathbb{D}$. Furthermore, denote by $\mathcal{F}$ the set of all inner functions. We are interested in the space $\mathcal{F}$ under the uniform topology, which is a metric topology induced by the $H^{\infty}$-norm. Notice that the uniform topology on $\mathcal{F}$ is very complicated and interesting; see \cite{He74, Ne79, Ne80} for instance. A Blaschke product is an inner function of the form \begin{equation*} B(z)=\lambda z^{m}\prod_{n}\frac{{\vert}z_{n}{\vert}}{z_{n}}\frac{z_{n}-z}{1-\overline{z_{n}}z}, \end{equation*} where $m$ is a nonnegative integer, $\lambda$ is a complex number with ${\vert}\lambda{\vert}=1$, and $\{z_{n}\}$ is a sequence of points in $\mathbb{D}\setminus\{0\}$ satisfying the Blaschke condition $\sum_{n}(1-{\vert}z_{n}{\vert})<\infty$. Moreover, if $\lambda=1$, we say that $B$ is normalized. If for every bounded sequence of complex numbers $\{w_n\}_{n=1}^{\infty}$, there exists $f$ in $H^{\infty}$ satisfying $f(z_n)=w_n$ for every $n\in\mathbb{N}$, then both the sequence $\{z_n\}_{n=1}^{\infty}$ and the Blaschke product $B(z)$ are called interpolating. By a celebrated result of Carleson \cite{LC}, one can see that $B(z)$ is an interpolating Blaschke product if and only if $\{z_{n}\}$ is a uniformly separated sequence, i.e., \begin{equation*} \inf_{n\in \mathbb{N}}\prod_{k\neq n}{\big \vert}\frac{z_{k}-z_{n}}{1-\overline{z_{k}}z_{n}}{\big \vert}>0.
\end{equation*} Moreover, if \begin{equation*} \lim_{n\rightarrow\infty}\prod_{k\neq n}{\big \vert}\frac{z_{k}-z_{n}}{1-\overline{z_{k}}z_{n}}{\big \vert}=1, \end{equation*} both the sequence $\{z_n\}_{n=1}^{\infty}$ and the Blaschke product $B(z)$ are called thin. In addition, a Blaschke product is called Carleson-Newman if it is a product of finitely many interpolating Blaschke products. Interpolating Blaschke products and Carleson-Newman Blaschke products play an important role in the study of $H^{\infty}$. As is well known, inner functions can be approximated uniformly by Blaschke products, and Carleson-Newman Blaschke products can be approximated uniformly by interpolating Blaschke products \cite{DEA}. However, it is still an open problem whether the set of all interpolating Blaschke products is dense in the inner function space $\mathcal{F}$. Several related results have been obtained. For instance, Marshall \cite{DE} proved that finite linear combinations of Blaschke products are dense in $H^{\infty}$; Nicolau and Su\'{a}rez \cite{NA} characterized the connected components of the subset $CN^{*}$ of $H^{\infty}$ formed by the products $bh$, where $b$ is a Carleson-Newman Blaschke product and $h\in H^{\infty}$ is an invertible function. In particular, a result of K. Tse \cite{KF} tells us that a sequence $\{z_n\}_{n=1}^{\infty}$ of points contained in a Stolz domain $$ \{z\in\mathbb{D}: {\vert}1-\overline{\xi}z{\vert}\le C(1-{\vert}z{\vert})\}, $$ where $\xi$ is a constant with ${\vert}\xi{\vert}=1$, is interpolating if and only if it is separated, i.e., \begin{equation*} \inf_{m\neq n}\rho(z_{m}, z_{n})>0. \end{equation*} Hence, if the zeros lie in a Stolz domain, one can say more about the corresponding Blaschke products. For example, A. Reijonen gave a sufficient condition for a Blaschke product with zeros in a Stolz domain to be a one-component inner function in \cite{Re19}. An inner function $u$ in $H^{\infty}$ is said to be one-component if there is $\eta\in(0,1)$ such that the level set $\Omega_{u}(\eta) :=\{z\in\mathbb{D}:{\vert}u(z){\vert}<\eta\}$ is connected. For more details on one-component inner functions, we refer to \cite{AB, Jo, Re19}. In this paper, we focus on the path-connected components of the space $\mathcal{F}$ under the uniform topology. Recall that the topology of uniform convergence on the set $\mathcal{F}$ is induced by the following metric \begin{equation*} d(f,g)={\Vert}f-g{\Vert}_{\infty}=\sup_{z\in\mathbb{D}}{\vert}f(z)-g(z){\vert}=\mathop{\rm ess\,sup}_{\theta\in \mathbb{R}}{\vert}f(e^{{\bf i}\theta})-g(e^{{\bf i}\theta}){\vert}. \end{equation*} For any two inner functions $f$ and $g$, if they belong to the same path-connected component in the space $\mathcal{F}$, we denote $f\sim g$. Paths of inner functions have long been of great interest and are related to many important questions. D. Herrero \cite{He74} considered the path-connected components of the space $\mathcal{F}$. He showed that a component of $\mathcal{F}$ can contain nothing but Blaschke products with infinitely many zeroes, exactly one (up to a constant factor) singular inner function, or infinitely many pairwise coprime singular inner functions, which answered a problem of Douglas. Furthermore, V. Nestoridis studied the invariant and noninvariant connected components of the inner function space $\mathcal{F}$.
He proved that the inner functions $d(z)={\rm exp} \{(z+1)/(z-1)\}$ and $zd$ belong to the same connected component \cite{Ne79}, and gave a family of inner functions, denoted by $H$, such that for every $B\in H$, $B$ and $zB$ do not belong to the same component \cite{Ne80}. In particular, the family $H$ contains only Blaschke products, and contains all thin Blaschke products. In addition, several authors have studied the connected components of inner functions in the context of model spaces and operator theory (see \cite{Al16, BC} for instance). In the present paper, we aim to give a class of Blaschke products, denoted by $\mathcal{H}_{SC}$, such that for each $B\in\mathcal{H}_{SC}$, $B$ and $zB$ belong to the same connected component. Firstly, let us define a class of subsets of the unit open disk, named strip cones and denoted by $SC(\xi,\theta,T_{1}, T_{2})$. In this paper, we study the Blaschke products with zeros lying in a strip cone. \begin{definition}\label{SC} Let $\theta\in(0,\pi)$, $\xi\in \partial\mathbb{D}$, $T_1$ and $T_2$ be two nonzero real numbers. Denote by $J_{i}$ the arc on the circle $$ {\vert}z-(1-T_{i}e^{{\bf i}\theta})\xi{\vert}={\vert}T_i{\vert} $$ in the unit open disk, for $i=1,2$. Let $\xi_i$ be the intersection point of $J_{i}$ and $\partial\mathbb{D}$ other than $\xi$, $i=1,2$. We define $SC(\xi,\theta,T_{1}, T_{2})$ to be the region bounded by $J_{1}$, $J_{2}$ and $\wideparen{\xi_1\xi_2}$, which is the arc on the unit circle $\partial\mathbb{D}$ from $\xi_1$ to $\xi_2$ without $\xi$, and call it a strip cone. If $T_1=T_2\in \mathbb{R}\setminus\{0\}$, then $SC(\xi,\theta,T_{1}, T_{2})$ is just the arc $J_{1}=J_{2}$. In particular, if $T_1$ and $T_2$ are infinite, then $SC(\xi,\theta,T_{1}, T_{2})$ is just the segment $J_{1}=J_{2}=(-1,1)$. \end{definition} One can see some examples of strip cones in Figure \ref{figure1}. Next, we explain why we call it a strip cone. \begin{figure} \caption{Examples of strip cones} \label{figure1} \end{figure} \begin{definition} Let $\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$, and $L_{1}$ and $L_{2}$ be two parallel straight lines with an angle of $\theta$ to the real axis. Denote by $SL(\theta,L_{1},L_{2})$ the strip region between $L_{1}$ and $L_{2}$ in the right half plane. \end{definition} Given any strip cone $SC(\xi,\theta,T_{1}, T_{2})$, consider the fractional linear transformation $\varphi_{\xi}(z)=(\xi+z)/(\xi-z)$, where ${\vert}\xi{\vert}=1$. It is easy to see that $\varphi_{\xi}$ maps the unit open disk onto the right half plane, and maps the arcs $J_1$ and $J_2$ in Definition \ref{SC} to some parallel straight lines $L_{1}$ and $L_{2}$ at an angle of $\theta$ to the imaginary axis in the right half plane. Then, $\varphi_{\xi}$ is an analytic bijection from $SC(\xi,\theta,T_{1},T_{2})$ to $SL(\frac{\pi}{2}-\theta,L_{1},L_{2})$ (see Figure \ref{figure2} for instance). This is the reason we call the subset $SC(\xi,\theta,T_{1},T_{2})$ a strip cone. Without loss of generality, we may assume that $\xi=1$, because there is no essential difference between the case $\xi=1$ and that of a general $\xi$ with ${\vert}\xi{\vert}=1$. Moreover, we always denote $\varphi(z)=(1+z)/(1-z)$ throughout this paper.
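
The mapping property just described can also be checked numerically. The following \texttt{R} sketch, which is only an illustration and plays no role in the arguments below, uses arbitrarily chosen values of $\theta$ and $T_1$ (with $\xi=1$), maps points of the arc $J_{1}$ through $\varphi$, and verifies that the images lie on a straight line in the right half plane.
\begin{verbatim}
phi <- function(z) (1 + z) / (1 - z)
theta <- pi/3; T1 <- 0.4                        # arbitrary illustrative values
center <- 1 - T1 * exp(1i * theta)              # |z - center| = |T1| is a circle through 1
z <- center + abs(T1) * exp(1i * seq(1.2, 2.4, length.out = 50))
z <- z[abs(z) < 1]                              # keep the part of the circle inside the disk
w <- phi(z)                                     # images of the arc
u <- diff(w) / Mod(diff(w))                     # unit directions between consecutive images
all(Re(w) > 0)                                  # TRUE: images lie in the right half plane
max(abs(Im(Conj(u[1]) * u)))                    # numerically 0: the images are collinear
\end{verbatim}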
\begin{figure} \caption{$\varphi(z)$ maps $SC(1,\theta,T_{1},T_{2})$ onto $SL(\frac{\pi}{2}-\theta,L_{1},L_{2})$} \label{figure2} \end{figure} \begin{definition} Denote by $\mathcal{H}_{SC}$ the family of all Blaschke products $B$ satisfying the following conditions: \begin{enumerate} \item[(i)] the zeros $\{z_{n}\}_{n=1}^{\infty}$ of $B$ lie in some strip cone $SC(\xi,\theta,T_{1},T_{2})$; \item[(ii)] ${\vert}\xi-z_{n}{\vert}$ non-increasingly tends to $0$; \item[(iii)] there exists a positive number $\delta<1$, such that $\rho(z_{n},z_{n+1})\le\delta$ for any $n\in\mathbb{N}$. \end{enumerate} \end{definition} We now state our main result. ${\bf Main\ \ Theorem} \ \ $ For any $B\in\mathcal{H}_{SC}$, $B$ and $zB$ belong to the same path-connected component of the inner function space $\mathcal{F}$ under the uniform topology. In the next section, we will introduce a method to select a ``fine'' factor of a Blaschke product in $\mathcal{H}_{SC}$, which plays an important role in the proof of the main theorem. As a byproduct, we obtain that each Blaschke product in $\mathcal{H}_{SC}$ has an interpolating and one-component factor. We will then complete the proof of the main theorem in the last section. \section{Preliminaries} First of all, let us start with the following simple result. \begin{lemma}\label{factorconnect} Let $f=\varphi_{1}\cdot\varphi_{2}$ and $g=\psi_{1}\cdot\psi_{2}$, where $f,g,\varphi_{1},\varphi_{2},\psi_{1},\psi_{2}\in\mathcal{F}$. If $\varphi_{1}\sim \psi_{1}$ and $\varphi_{2}\sim \psi_{2}$, then $f\sim g$. In particular, if $\varphi_{1}\sim z\varphi_{1}$, then $f\sim zf$. \end{lemma} Thus, to prove ${B}\sim z{B}$, it suffices to prove $\widetilde{B}\sim z\widetilde{B}$ for some factor $\widetilde{B}$ of $B$. In this section, it will be shown that we can select a factor $\widetilde{B}$ of $B$ such that $\widetilde{B}$ satisfies more conditions than $B$. The main point is how to choose a suitable subsequence of the zeros sequence of $B$. Recall that for a Blaschke product $B\in \mathcal{H}_{SC}$ with zeros $\{z_{n}\}_{n=1}^{\infty}$, there exists a positive number $\delta<1$, such that $\rho(z_{n},z_{n+1})\le\delta$ for any $n\in\mathbb{N}$. \begin{lemma}\label{le2} Let $a$, $a{'}$, $b$, $b{'}$ be real numbers satisfying \begin{equation*} 0\leq a\leq a{'}<1\ \ \text{and} \ \ 0\leq b\leq b{'}<1. \end{equation*} Then, \begin{equation*} \frac{a+b}{1+{a}b}\leq\frac{a{'}+b{'}}{1+{a{'}}b{'}}. \end{equation*} \end{lemma} \begin{proof} \begin{align*} \frac{a{'}+b{'}}{1+{a{'}}b{'}}-\frac{a+b}{1+{a}b}&=\frac{(a{'}+b{'}+a{'}ab+abb{'})-(a+b+aa{'}b{'}+a{'}bb{'})}{(1+ab)(1+a{'}b{'})}\\ &=\frac{(a{'}-a)(1-bb{'})+(b{'}-b)(1-a{'}a)}{(1+ab)(1+a{'}b{'})}\\ &\geq 0. \end{align*} \end{proof} \begin{lemma}\label{sublu} Let $\{z_{n}\}_{n=1}^{\infty}$ be a sequence of complex numbers in the unit open disk, satisfying that \begin{enumerate} \item for any $m\in\mathbb{N}$, $\rho(z_{m},z_{n})\rightarrow 1$ as $n\rightarrow\infty$; \item there exists a positive number $0<\delta<1$, such that $\rho(z_{n},z_{n+1})\le\delta$ for any $n\in\mathbb{N}$. \end{enumerate} Then, for any $0<\varepsilon<1$, we can choose a subsequence $\{z_{n_{k}}\}_{k=1}^{\infty}$ of $\{z_{n}\}_{n=1}^{\infty}$ such that \begin{equation*} 0<\varepsilon\le\rho(z_{n_{k}},z_{n_{k+1}})\le\frac{\varepsilon+\delta}{1+\varepsilon\delta}<1. \end{equation*} \end{lemma} \begin{proof} Let $0<\varepsilon<1$ be given. Put $z_{n_{1}}=z_{1}$. Since $\rho(z_{n_{1}},z_{i})\rightarrow 1$ as $i\rightarrow\infty$, we can choose $$ n_{2}=\min\{i; \ \rho(z_{n_{1}},z_{i})\ge\varepsilon\}.
$$ It is obvious that $\rho(z_{n_{1}},z_{n_{2}})\geq\varepsilon$. With the same method, we choose ${n_{k+1}}$ by $$ {n_{k+1}}=\min\{{i}; \ i> n_k, \ \rho(z_{n_{k}},z_{i})\ge\varepsilon\}. $$ Then, $\rho(z_{n_{k}},z_{n_{k+1}})\geq\varepsilon$. Moreover, for each $k=1,2,\ldots$, $$ 0\leq\rho(z_{n_{k}},z_{n_{k+1}-1})<\varepsilon<1 \ \ \text{and} \ \ 0\leq\rho(z_{n_{k+1}-1},z_{n_{k+1}})<\delta<1. $$ By Lemma \ref{le2}, we have \begin{equation*} \frac{\rho(z_{n_{k}},z_{n_{k+1}-1})+\rho(z_{n_{k+1}-1},z_{n_{k+1}})}{1+\rho(z_{n_{k}},z_{n_{k+1}-1})\rho(z_{n_{k+1}-1},z_{n_{k+1}})} \le\frac{\varepsilon+\delta}{1+\varepsilon\delta}. \end{equation*} Therefore, $$ 0<\varepsilon\le\rho(z_{n_{k}},z_{n_{k+1}})\le\frac{\rho(z_{n_{k}},z_{n_{k+1}-1})+\rho(z_{n_{k+1}-1},z_{n_{k+1}})}{1+\rho(z_{n_{k}},z_{n_{k+1}-1})\rho(z_{n_{k+1}-1},z_{n_{k+1}})} \le\frac{\varepsilon+\delta}{1+\varepsilon\delta}<1. $$ \end{proof} \begin{lemma}\label{ab} Let $\alpha$ and $\beta$ be two complex numbers in the unit open disk with ${\vert}1-\alpha{\vert}<{\vert}1-\beta{\vert}$. Denote $\alpha=s_{1}+{\bf i}(1-s_{1})\cot\theta_{1}$ and $\beta=s_{2}+{\bf i}(1-s_{2})\cot\theta_{2}$, where $\theta_1, \theta_2\in(0, \pi)$. For the pair of $\alpha$ and $\beta$, let the positive numbers $\varepsilon$, $\delta$, $\theta_0$, $C$, $\tau$ and $\eta$ satisfy the following conditions, \begin{enumerate} \item[(1)] \ $0<\varepsilon\le\rho(\alpha,\beta)\le\delta<1$; \item[(2)] \ $\theta_0\in(0, \pi)$ and $C={\vert}1-{\bf i}\cot\theta_{0}{\vert}$; \item[(3)] \ ${\vert}\cot\theta_{i}-\cot\theta_{0}{\vert}<\tau<\sqrt{\frac{3 C^2\varepsilon^{2}}{16 C^2(1-\varepsilon^{2})+3\varepsilon^{2}}}$, for $i=1,2$; \item[(4)] \ $3\leq 4-2\eta\leq(1+s_{1}-(1-s_{1})\cot^{2}\theta_{1})(1+s_{2}-(1-s_{2})\cot^{2}\theta_{2})\leq 4$. \end{enumerate} Then \begin{equation*} 0<C_1\le{\big \vert}\frac{1-\alpha}{1-\beta}{\big \vert}\le C_2 <1. \end{equation*} where {\small \begin{align*} &C_1=\frac{C-\tau}{C+\tau}\left(\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}-2\sqrt{\delta^{4}+\delta^{2}(C^2-\tau^2)(1-\delta^{2})}}{(C+\tau)^{2}(1-\delta^{2})}\right), \\ &C_2=\frac{C+\tau}{C-\tau}\left(\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}-\sqrt{(2-\eta)^2\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^2-\tau^2)(1-\varepsilon^{2})}}{(C-\tau)^{2}(1-\varepsilon^{2})} \right). \end{align*} } \end{lemma} \begin{proof} For $i=1,2$, it follows from condition $(3)$ that $$ {\vert}(1-{\bf i}\cot\theta_{i})-(1-{\bf i}\cot\theta_{0}){\vert}<\tau, $$ and consequently, $$ 0<C-\tau<{\vert}1-{\bf i}\cot\theta_{i}{\vert}<C+\tau. $$ Denote $$ K=\frac{1-s_{1}}{1-s_{2}}. $$ Notice that \begin{equation*} \begin{aligned} &0<(1-s_{1})(C-\tau)\le{\vert}1-\alpha{\vert}=(1-s_{1}){\vert}1-{\bf i}\cot\theta_{1}{\vert}\le(1-s_{1})(C+\tau),\\ &0<(1-s_{2})(C-\tau)\le{\vert}1-\beta{\vert}=(1-s_{2}){\vert}1-{\bf i}\cot\theta_{2}{\vert}\le(1-s_{2})(C+\tau). \end{aligned} \end{equation*} Then \begin{equation}\label{eq7} \frac{K(C-\tau)}{C+\tau}\le{\big \vert}\frac{1-\alpha}{1-\beta}{\big \vert}\le \frac{K(C+\tau)}{C-\tau}. \end{equation} Since $$ \rho(\alpha,\beta)^{2}=\frac{{\vert}\alpha-\beta{\vert}^{2}}{(1-\overline{\alpha}\beta)(1-\overline{\beta}\alpha)}=\frac{1}{\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}+1}, $$ it follows from condition $(1)$ that \begin{equation}\label{eq:3} \begin{aligned} 0<\frac{1}{\delta^{2}}-1\le\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}\le\frac{1}{\varepsilon^{2}}-1. 
\end{aligned} \end{equation} Consider $\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}$. We have \begin{equation*} \begin{aligned} &\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}} \\ =&\frac{(1-(s_{1}^{2}+(1-s_{1})^{2}\cot^{2}\theta_{1}))\cdot(1-(s_{2}^{2}+(1-s_{2})^{2}\cot^{2}\theta_{2}))}{{\vert}(1-\beta)-(1-\alpha){\vert}^{2}}\\ =&\frac{(1-s_{1})(1-s_{2})(1+s_{1}-(1-s_{1})\cot^{2}\theta_{1})(1+s_{2}-(1-s_{2})\cot^{2}\theta_{2}))}{{\vert}(1-s_{1})(1-{\bf i}\cot\theta_{1})-(1-s_{2})(1-{\bf i}\cot\theta_{2}){\vert}^{2}}. \end{aligned} \end{equation*} Denote $$ {W}_{1}=1+s_{1}-(1-s_{1})\cot^{2}\theta_{1} \ \ \text{and} \ \ {W}_{2}=1+s_{2}-(1-s_{2})\cot^{2}\theta_{2}. $$ Then, $$ 3\leq 4-2\eta\leq{W}_{1}\cdot{W}_{2}\leq4. $$ Now we give the lower bound and upper bound of $K$, respectively. Then, by inequality (\ref{eq7}), we can complete the proof. {\bf Lower bound of $K$.} \ If $K\leq \frac{C-\tau}{C+\tau}$, we have \begin{equation*} \begin{aligned} \frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}=&\frac{(1-s_{1})(1-s_{2}){W}_{1}{W}_{2}}{{\vert}(1-s_{1})(1-{\bf i}\cot\theta_{1})-(1-s_{2})(1-{\bf i}\cot\theta_{2}){\vert}^{2}}\\ =&\frac{K{W}_{1}{W}_{2}}{{\vert}K(1-{\bf i}\cot\theta_{1})-(1-{\bf i}\cot\theta_{2}){\vert}^{2}}\\ \le&\frac{4K}{({\vert}(1-{\bf i}\cot\theta_{2}){\vert}-{\vert}K(1-{\bf i}\cot\theta_{1}){\vert})^{2}}\\ \le&\frac{4K}{((C-\tau)-K(C+\tau))^{2}}. \end{aligned} \end{equation*} It follows from inequality (\ref{eq:3}) that \begin{equation*} \frac{4K}{((C-\tau)-K(C+\tau))^{2}}\ge\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}\ge\frac1{\delta^{2}}-1. \end{equation*} Then, $$ K^{2}-2\left(\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}}{(C+\tau)^{2}(1-\delta^{2})}\right)K+\frac{(C-\tau)^2}{(C+\tau)^2}\le0, $$ Let $a_1$ and $a_2$ be the two roots of the above quadratic polynomial of $K$, $$ a_1=\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}-2\sqrt{\delta^{4}+\delta^{2}(C^2-\tau^2)(1-\delta^{2})}}{(C+\tau)^{2}(1-\delta^{2})}, $$ $$ a_2=\frac{C-\tau}{C+\tau}+\frac{2\delta^{2}+2\sqrt{\delta^{4}+\delta^{2}(C^2-\tau^2)(1-\delta^{2})}}{(C+\tau)^{2}(1-\delta^{2})}. $$ One can see that $$ 0<a_1<\frac{C-\tau}{C+\tau}<a_2. $$ Then, we always have $$ K\geq a_1>0. $$ Moreover, put $C_1=\frac{C-\tau}{C+\tau}\cdot a_1$, $$ {\big \vert}\frac{1-\alpha}{1-\beta}{\big \vert}\ge\frac{K(C-\tau)}{C+\tau}\geq C_1>0. $$ {\bf Upper bound of $K$.} Notice that \begin{equation*} \begin{aligned} &\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}} \\ =&\frac{K{W}_{1}{W}_{2}}{{\vert}K(1-{\bf i}\cot\theta_{1})-(1-{\bf i}\cot\theta_{2}){\vert}^{2}}\\ =&\frac{K{W}_{1}{W}_{2}}{{\big \vert}iK(\cot\theta_{0}-\cot\theta_{1})-i(\cot\theta_{0}-\cot\theta_{2})+(K-1)(1-{\bf i}\cot\theta_{0}){\big \vert}^{2}}\\ \ge&\frac{(4-2\eta)K}{\left({\big \vert}K(\cot\theta_{0}-\cot\theta_{1}){\big \vert}+{\big \vert}\cot\theta_{0}-\cot\theta_{2}{\big \vert}+{\big \vert}(K-1)C{\big \vert}\right)^{2}}\\ \ge&\frac{(4-2\eta)K}{({\vert}(K-1){\vert}C+(K+1)\tau)^{2}}. \end{aligned} \end{equation*} Firstly, let us prove that $K$ must be less than $1$. If $K\geq 1$, we have $$ \frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}\ge\frac{(4-2\eta)K}{((1-K)C+(K+1)\tau)^{2}}. 
$$ It follows from inequality (\ref{eq:3}) that \begin{equation*} \frac{(4-2\eta)K}{((K-1)C+(K+1)\tau)^{2}}\le\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}\le\frac1{\varepsilon^{2}}-1. \end{equation*} Then, \begin{equation*} K^{2}-2\left(\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C+\tau)^{2}}\right)K+\frac{(C-\tau)^2}{(C+\tau)^2}\ge0, \end{equation*} Let $\lambda_1$ and $\lambda_2$ be the two roots of the above quadratic polynomial of $K$, $$ \lambda_1=\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}-\sqrt{(2-\eta)^2\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^2-\tau^2)(1-\varepsilon^{2})}}{(C+\tau)^{2}(1-\varepsilon^{2})}, $$ $$ \lambda_2=\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}+\sqrt{(2-\eta)^2\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^2-\tau^2)(1-\varepsilon^{2})}}{(C+\tau)^{2}(1-\varepsilon^{2})}. $$ Since \begin{align*} &\left(\frac{C+\tau}{C-\tau} \right)^{2}-2\left(\frac{C-\tau}{C+\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C+\tau)^{2}}\right)\left(\frac{C+\tau}{C-\tau} \right)+\frac{(C-\tau)^2}{(C+\tau)^2} \\ = \ &\left(\frac{C+\tau}{C-\tau}-\frac{C-\tau}{C+\tau} \right)^{2}-\frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C+\tau)^{2}}\cdot\left(\frac{C+\tau}{C-\tau} \right) \\ = \ &\frac{16C^2\tau^2}{(C^2-\tau^2)^2}-\frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C^2-\tau^2)} \\ = \ &\frac{16C^2}{(C^2-\tau^2)^2}\left(\tau^2-\frac{3\varepsilon^{2}(C^2-\tau^2)}{16 C^2(1-\varepsilon^{2})}\right) \\ < \ &0, \end{align*} one can see that $$ 0<\lambda_1<\frac{C-\tau}{C+\tau}<1<\frac{C+\tau}{C-\tau}<\lambda_2. $$ Then, we have $$ K\geq \lambda_2>\frac{C+\tau}{C-\tau}>1. $$ However, by inequality (\ref{eq7}), $$ {\big \vert}\frac{1-\alpha}{1-\beta}{\big \vert}\ge\frac{K(C-\tau)}{C+\tau}\ge \lambda_2\cdot \frac{C-\tau}{C+\tau}>1. $$ It is a contradiction to ${\vert}1-\alpha{\vert}<{\vert}1-\beta{\vert}$. Now we have known $K<1$. Then, $$ \frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}\ge\frac{(4-2\eta)K}{((1-K)C+(K+1)\tau)^{2}}. $$ It follows from inequality (\ref{eq:3}) that \begin{equation*} \frac{(4-2\eta)K}{((1-K)C+(K+1)\tau)^{2}}\le\frac{(1-{\vert}\alpha{\vert}^{2})(1-{\vert}\beta{\vert}^{2})}{{\vert}\alpha-\beta{\vert}^{2}}\le\frac1{\varepsilon^{2}}-1. \end{equation*} Then, \begin{equation*} K^{2}-2\left(\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C-\tau)^{2}}\right)K+\frac{(C+\tau)^2}{(C-\tau)^2}\ge0. \end{equation*} Let $A_1$ and $A_2$ be the two roots of the above quadratic polynomial of $K$, $$ A_1=\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}-\sqrt{(2-\eta)^2\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^2-\tau^2)(1-\varepsilon^{2})}}{(C-\tau)^{2}(1-\varepsilon^{2})}, $$ $$ A_2=\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}+\sqrt{(2-\eta)^2\varepsilon^{4}+2\varepsilon^{2}(2-\eta)(C^2-\tau^2)(1-\varepsilon^{2})}}{(C-\tau)^{2}(1-\varepsilon^{2})}. 
$$ Since \begin{align*} &\left(\frac{C-\tau}{C+\tau} \right)^{2}-2\left(\frac{C+\tau}{C-\tau}+\frac{(2-\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C-\tau)^{2}}\right)\left(\frac{C-\tau}{C+\tau} \right)+\frac{(C+\tau)^2}{(C-\tau)^2} \\ = \ &\left(\frac{C-\tau}{C+\tau}-\frac{C+\tau}{C-\tau} \right)^{2}-\frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C-\tau)^{2}}\cdot\left(\frac{C-\tau}{C+\tau} \right) \\ = \ &\frac{16C^2\tau^2}{(C^2-\tau^2)^2}-\frac{(4-2\eta)\varepsilon^{2}}{(1-\varepsilon^{2})(C^2-\tau^2)} \\ = \ &\frac{16C^2}{(C^2-\tau^2)^2}\left(\tau^2-\frac{3\varepsilon^{2}(C^2-\tau^2)}{16 C^2(1-\varepsilon^{2})}\right) \\ < \ &0, \end{align*} one can see that $$ 0<A_1<\frac{C-\tau}{C+\tau}<1<A_2. $$ Then, we always have $$ K\leq A_1<\frac{C-\tau}{C+\tau}<1. $$ Moreover, put $C_2=\frac{C+\tau}{C-\tau}\cdot A_1$, $$ {\big \vert}\frac{1-\alpha}{1-\beta}{\big \vert}\le\frac{K(C+\tau)}{C-\tau}\leq C_2<1. $$ \end{proof} \begin{lemma}\label{genlu} Let $\{z_{n}\}_{n=1}^{\infty}$ be a sequence of complex numbers in a strip cone $SC(\xi,\theta_0,T_{1},T_{2})$, $\theta_0\in (0, \pi)$, satisfying that \begin{enumerate} \item ${\vert}1-z_{n}{\vert}$ non-increasingly tends to $0$, as $n\rightarrow\infty$; \item there exist two positive numbers $0<\varepsilon\le\delta<1$, such that for any $n\in\mathbb{N}$, \[ 0<\varepsilon\le \rho(z_{n},z_{n+1})\le\delta<1. \] \end{enumerate} Then there exists a positive integer $N$ and two positive constants $C_{1}$ and $C_{2}$, such that for any $n\geq N$, \begin{equation*} 0<C_{1}\le{\big \vert}\frac{1-z_{n+1}}{1-z_{n}}{\big \vert}\le C_{2}<1. \end{equation*} Furthermore, this implies $\sum_{n=1}^{\infty}{\vert}1-z_n{\vert}<\infty$. \end{lemma} \begin{proof} Without loss of generality, we may assume that $\{z_{n}\}_{n=1}^{\infty}$ lie in a strip cone $SC(1,\theta_0,T_{1},T_{2})$ (see Figure \ref{figure3}). \begin{figure} \caption{Zeros $\{z_{n}\}_{n=1}^{\infty}$ in the strip cone $SC(1,\theta_0,T_{1},T_{2})$} \label{figure3} \end{figure} Denote $$ z_{n}=x_{n}+{\bf i}(1-x_{n})\cot\theta_{n}, \ \ \ \ \text{for} \ n=1,2,\ldots, $$ and $$ C={\vert}1-{\bf i}\cot\theta_{0}{\vert}. $$ Since $z_n\in SC(1,\theta_0,T_{1},T_{2})$, we may also write $$ z_{n}=(1-t_{n}{\rm e}^{{\bf i}\theta_{0}})+t_n{\rm e}^{{\bf i}\zeta_{n}}, $$ where $t_n\in [T_1, T_2]$ if $T_1T_2>0$, and $t_n\in (-\infty, T_1]\cup [T_2, +\infty)$ if $T_1T_2<0$. Furthermore, $$ \lim\limits_{n\rightarrow\infty}1-z_n=\lim\limits_{n\rightarrow\infty}t_n({\rm e}^{{\bf i}\theta_{0}}-{\rm e}^{{\bf i}\zeta_{n}})=0=\lim\limits_{n\rightarrow\infty}1-x_n. $$ Notice that $t_n$ is uniformly far away from $0$ whenever $T_1T_2>0$ or $T_1T_2<0$. Then, $\zeta_{n}\rightarrow\theta_{0}$, as $n\rightarrow\infty$. Consequently, $$ \lim\limits_{n\rightarrow\infty}\cot\theta_{n}=\lim\limits_{n\rightarrow\infty}\frac{{\rm Im}\, z_n}{1-{\rm Re}\, z_n} =\lim\limits_{n\rightarrow\infty}\frac{t_n(\sin\zeta_n-\sin\theta_0)}{t_n(\cos\theta_0-\cos\zeta_n)}=\cot\theta_0. $$ Hence, for any two positive numbers $$ 0<\eta\le\frac{1}{2} \ \ \ \text{and} \ \ \ 0<\tau\le\sqrt{\frac{3 C^2\varepsilon^{2}}{16 C^2(1-\varepsilon^{2})+3\varepsilon^{2}}}, $$ there exists $N\in \mathbb{N}$ such that, for all $n\geq N$, $$ 3\leq 4-2\eta\leq(1+x_{n}-(1-x_{n})\cot^{2}\theta_{n})(1+x_{n+1}-(1-x_{n+1})\cot^{2}\theta_{n+1})\leq 4 $$ and $$ {\vert}\cot\theta_{n}-\cot\theta_{0}{\vert}<\tau. $$ Therefore, by Lemma \ref{ab}, there exist two positive constants $C_{1}$ and $C_{2}$, such that for any $n\geq N$, \begin{equation*} 0<C_{1}\le{\big \vert}\frac{1-z_{n+1}}{1-z_{n}}{\big \vert}\le C_{2}<1.
\end{equation*} Furthermore, this implies \[ \sum\limits_{n=1}^{\infty}{\vert}1-z_n{\vert}\le \sum_{n=1}^{\infty}{\vert}1-z_1{\vert}C_2^{n-1}=\frac{{\vert}1-z_1{\vert}}{1-C_2}<\infty. \] \end{proof} Now, we show that one can select a ``fine'' subsequence of zeros of each Blaschke product $B\in\mathcal{H}_{SC}$. \begin{definition}\label{fine} A sequence $\{z_{n}\}_{n=1}^{\infty}$ is said to be fine, if it satisfies the following conditions. \begin{enumerate} \item[(1)] \ The sequence $\{z_{n}\}_{n=1}^{\infty}$ lies in some strip cone $SC(\xi,\theta, T_{1},T_{2})$. \item[(2)] \ ${\vert}\xi-z_{n}{\vert}$ non-increasingly tends to $0$, as $n\rightarrow\infty$. \item[(3)] \ There exist two positive numbers $0<\varepsilon\le\delta<1$ such that for any $n\in\mathbb{N}$, \[ 0<\varepsilon\le\rho(z_{n},z_{n+1})\le\delta. \] Moreover, there exists a positive integer $N$ and two positive constants $C_{1}$ and $C_{2}$, such that for any $n\geq N$, \begin{equation*} 0<C_{1}\le{\big \vert}\frac{\xi-z_{n+1}}{\xi-z_{n}}{\big \vert}\le C_{2}<1. \end{equation*} In particular, $\sum\limits_{n=1}^{\infty}{\vert}\xi-z_{n}{\vert}<\infty$. \item[(4)] \ ${\rm Re} \varphi_{\xi}(z_{n})$ monotonically tends to $+\infty$, where $\varphi_{\xi}(z)=(\xi+z)/(\xi-z)$. Moreover, there are two positive numbers $\widetilde{C_1}$ and $\widetilde{C_2}$ such that $$ 0<\widetilde{C_1}\leq \frac{{\rm Re}\varphi_{\xi}(z_{n})}{{\rm Re}\varphi_{\xi}(z_{n+1})} \leq \widetilde{C_2}<1. $$ \end{enumerate} Furthermore, a Blaschke product is said to be fine if its zeros sequence $\{z_{n}\}_{n=1}^{\infty}$ is fine, and we denote by $\mathcal{\widetilde{H}}_{SC}$ the family of all fine Blaschke products. \end{definition} \begin{lemma}\label{finesub} Let $B$ be a Blaschke product in $\mathcal{H}_{SC}$ with zeros $\{z_{n}\}_{n=1}^{\infty}$. Then $B$ has a factor in $\mathcal{\widetilde{H}}_{SC}$, i.e., the sequence $\{z_{n}\}_{n=1}^{\infty}$ has a fine subsequence. \end{lemma} \begin{proof} Without loss of generality, assume $\{z_{n}\}_{n=1}^{\infty}$ lies in a strip cone $SC(1,\theta_0, T_{1},T_{2})$. Recall that $\{z_{n}\}_{n=1}^{\infty}$ satisfies the following conditions. \begin{enumerate} \item \ ${\vert}1-z_{n}{\vert}$ non-increasingly tends to $0$, as $n\rightarrow\infty$. \item \ There exists a positive number $0<\delta<1$, such that $\rho(z_{n},z_{n+1})\le\delta$ for any $n\in\mathbb{N}$. \end{enumerate} Denote by $SL(\frac{\pi}{2}-\theta_0,L_{1},L_{2})$ the image of the strip cone $SC(1, \theta_0, T_{1},T_{2})$ under $\varphi$. We may write $$ L_{1}:y-\tan(\frac{\pi}{2}-\theta_0)\cdot x-c_1=0 \ \ \ \text{and} \ \ \ L_{2}:y-\tan(\frac{\pi}{2}-\theta_0)\cdot x-c_2=0. $$ Since ${\vert}1-z_{n+1}{\vert}\leq {\vert}1-z_n{\vert}$ for each $n\in\mathbb{N}$, we have \begin{align*} {\vert}\varphi(z_{n+1}){\vert}=&{\big \vert}\frac{1+z_{n+1}}{1-z_{n+1}} {\big \vert} \\ =&{\big \vert}1-\frac{2}{1-z_{n+1}} {\big \vert} \\ \geq &{\big \vert}\frac{2}{1-z_{n+1}} {\big \vert}-1 \\ \geq &{\big \vert}\frac{2}{1-z_{n}} {\big \vert}-1 \\ \geq &{\big \vert}\frac{2}{1-z_{n}}-1 {\big \vert}-2 \\ =& {\vert}\varphi(z_{n}){\vert}-2. \end{align*} By Lemma \ref{sublu}, there exists a subsequence $\{z_{n_{k}}\}_{k=1}^{\infty}$ of $\{z_{n}\}_{n=1}^{\infty}$ and positive numbers $\varepsilon$ and $\delta$, such that $$ 0<\varepsilon\le\rho(z_{n_{k}},z_{n_{k+1}})\le \delta<1.
$$ Consequently, by Lemma \ref{genlu}, there are positive numbers $C_1$ and $C_2$ such that \begin{equation*} 0<C_{1}\le{\big \vert}\frac{1-z_{n_{k+1}}}{1-z_{n_k}}{\big \vert}\le C_2<1. \end{equation*} Since $z_{n_k}\rightarrow 1$ as $k\rightarrow\infty$ and $\varphi(z_{n_k})\in SL(\frac{\pi}{2}-\theta_0,L_{1},L_{2})$, we have $$ \lim\limits_{k\rightarrow\infty}{\vert}\varphi(z_{n_{k}}){\vert}=\lim\limits_{k\rightarrow\infty}{\big \vert}\frac{1+z_{n_{k}}}{1-z_{n_{k}}} {\big \vert}=+\infty, $$ $$ \lim\limits_{k\rightarrow\infty}\frac{{\rm Re}\varphi(z_{n_{k}})}{{\vert}\varphi(z_{n_{k}}){\vert}}=\cos(\frac{\pi}{2}-\theta_0)>0 $$ and $$ \lim\limits_{k\rightarrow\infty}{\big \vert}\frac{\varphi(z_{n_{k}})}{\varphi(z_{n_{k+1}})} {\big \vert} \bigg/ {\big \vert}\frac{1-z_{n_{k+1}}}{1-z_{n_{k}}} {\big \vert} =\lim\limits_{k\rightarrow\infty}{\big \vert}\frac{1+z_{n_{k}}}{1+z_{n_{k+1}}} {\big \vert}=1. $$ Furthermore, $$ \lim\limits_{k\rightarrow\infty}\left(\frac{{\rm Re}\varphi(z_{n_{k}})}{{\rm Re}\varphi(z_{n_{k+1}})} \right) \bigg/ {\big \vert}\frac{1-z_{n_{k+1}}}{1-z_{n_{k}}} {\big \vert}=1. $$ Then, there exists a positive integer $k_0$ such that for any $k\geq k_0$, $$ 0<\frac{C_1}{2}\leq \frac{{\rm Re}\varphi(z_{n_{k}})}{{\rm Re}\varphi(z_{n_{k+1}})} \leq \frac{1+C_2}{2}<1. $$ Therefore, the subsequence $\{z_{n_{k}}\}_{k=k_0}^{\infty}$ is as required. In particular, $$ {\rm Re}\varphi(z_{n_{k+1}})>{\rm Re}\varphi(z_{n_k}). $$ \end{proof} Then, to prove our main theorem, it suffices to consider the Blaschke products in $\widetilde{\mathcal{H}}_{SC}$. By the way, we could also find that fine sequences implies some other properties of Blaschke products. More precisely, we could obtain that each $B\in \mathcal{H}_{SC}$ has a factor being an interpolating and one-component Blaschke product. \begin{lemma}[Corollary 2.5 in \cite{Jo}]\label{separ} Let $B$ be a Blaschke product whose zeros ${z_n}$ are contained in a Stolz domain and are separated. Suppose that $\rho(z_n,z_{n+1})\le\eta<1$. Then $B$ is a one-component inner function. \end{lemma} \begin{theorem}\label{one-c} Each $B\in\mathcal{H}_{SC}$ has an interpolating and one-component Blaschke product factor. \end{theorem} \begin{proof} By Lemma \ref{finesub}, it suffices to prove that each $B\in\mathcal{\widetilde{H}}_{SC}$ has an interpolating and one-component Blaschke product factor. Obviously, any fine sequence has a tail contained in some certain Stolz domain, and then a Blaschke product in $\mathcal{\widetilde{H}}_{SC}$ is interpolating if and only if its zeros sequence is separated, i.e., \begin{equation*} \inf\limits_{m\neq n}\rho(z_n,z_m)>0. \end{equation*} Furthermore, together with Lemma \ref{separ}, we only need to prove any fine sequence has a separated subsequence. Without loss of generality, assume $\{z_{n}\}_{n=1}^{\infty}$ is a fine sequence in a strip cone $SC(1,\theta, T_{1},T_{2})$. 
Since $\varphi(z)=(1+z)/(1-z)$, we have $z=(\varphi(z)-1)/(\varphi(z)+1)$, and then \begin{align*} {\vert}z_m-z_n{\vert}&={\big \vert}\frac{\varphi(z_m)-1}{\varphi(z_m)+1}-\frac{\varphi(z_n)-1}{\varphi(z_n)+1}{\big \vert}\\ &={\big \vert}\frac{2(\varphi(z_m)-\varphi(z_n))}{(\varphi(z_m)+1)(\varphi(z_n)+1)}{\big \vert} \end{align*} and \begin{align*} {\vert}1-\overline{z_m}z_n{\vert}&={\big \vert}1-\frac{\overline{\varphi(z_m)}-1}{\overline{\varphi(z_m)}+1}\cdot\frac{\varphi(z_n)-1}{\varphi(z_n)+1}{\big \vert}\\ &={\big \vert}\frac{2(\overline{\varphi(z_m)}+\varphi(z_n))}{(\overline{\varphi(z_m)}+1)(\varphi(z_n)+1)}{\big \vert}\\ &={\big \vert}\frac{2[\overline{\varphi(z_m)}+\varphi(z_n)]}{(\varphi(z_m)+1)(\varphi(z_n)+1)}{\big \vert}. \end{align*} Therefore, \begin{align*} \rho(z_m,z_n)&={\big \vert}\frac{z_m-z_n}{1-\overline{z_m}z_n}{\big \vert}\\ &={\big \vert}\frac{\varphi(z_m)-\varphi(z_n)}{\overline{\varphi(z_m)}+\varphi(z_n)}{\big \vert}\\ &\ge{\big \vert}\frac{{\vert}\varphi(z_m){\vert}-{\vert}\varphi(z_n){\vert}}{{\vert}\varphi(z_m){\vert}+{\vert}\varphi(z_n){\vert}}{\big \vert}\\ &={\big \vert}\frac{1-{\vert}\frac{\varphi(z_n)}{\varphi(z_m)}{\vert}}{1+{\vert}\frac{\varphi(z_n)}{\varphi(z_m)}{\vert}}{\big \vert}. \end{align*} In addition, since $\{z_{n}\}_{n=1}^{\infty}$ is fine, there are positive numbers $C_1$ and $C_2$ such that, for all sufficiently large $n$, \begin{equation*} 0<C_{1}\le{\big \vert}\frac{1-z_{n+1}}{1-z_{n}}{\big \vert}\le C_2<1. \end{equation*} Since ${\vert}1-z_n{\vert}$ non-increasingly tends to $0$, as $n\to\infty$, there exists a positive integer $N\in\mathbb{N}$ such that for any $n\ge N$, \begin{equation*} {\big \vert}\frac{1+z_n}{1+z_{n+1}}{\big \vert}\le\frac{1+C_2}{2C_2}. \end{equation*} Then, we have \begin{align*} {\big \vert}\frac{\varphi(z_n)}{\varphi(z_{n+1})}{\big \vert}&={\big \vert}\frac{1+z_n}{1-z_n}\cdot\frac{1-z_{n+1}}{1+z_{n+1}}{\big \vert}\\ &={\big \vert}\frac{1+z_n}{1+z_{n+1}}{\big \vert}{\big \vert}\frac{1-z_{n+1}}{1-z_n}{\big \vert}\\ &\le\frac{C_2 +1}{2}\\ &<1. \end{align*} Consequently, for any $m>n$, \begin{align*} {\big \vert}\frac{\varphi(z_n)}{\varphi(z_m)}{\big \vert}\le&\left(\frac{C_2 +1}{2}\right)^{m-n} \\ \le& \frac{C_2 +1}{2} \\ <&1. \end{align*} Then, \begin{align*} \rho(z_m,z_n)&\ge{\big \vert}\frac{1-{\vert}\frac{\varphi(z_n)}{\varphi(z_m)}{\vert}}{1+{\vert}\frac{\varphi(z_n)}{\varphi(z_m)}{\vert}}{\big \vert}\\ &=1-\frac{2}{\frac{1}{{\big \vert}\frac{\varphi(z_n)}{\varphi(z_m)}{\big \vert}}+1}\\ &\ge 1-\frac{2}{\frac{1}{\frac{C_2 +1}{2}}+1}\\ &=\frac{1-C_2}{3+C_2}. \end{align*} Therefore, we have \begin{equation*} \inf\limits_{m\neq n}\rho(z_n,z_m)\ge\frac{1-C_2}{3+C_2}>0. \end{equation*} This finishes the proof. \end{proof} \section{Proof of the main theorem} In this section, we prove our main theorem. Consider a Blaschke product $$ B=\lambda z^m\prod^{\infty}_{n=1}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}-z}{1-\overline{z_{n}}z}, $$ where ${\vert}\lambda{\vert}=1$. It is well known that finite Blaschke products of the same order lie in the same component. Then, $B\sim zB$ if and only if, for some $N\in\mathbb{N}$, \begin{equation}\label{NtoN1} \prod^{\infty}_{n=N}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}-z}{1-\overline{z_{n}}z} \ \sim \ \prod^{\infty}_{n=N+1}\frac{\overline{z_{n}}}{\mid z_{n}\mid}\cdot\frac{z_{n}-z}{1-\overline{z_{n}}z}. \end{equation} Recall that $\varphi(z)=(1+z)/(1-z)$ is the fractional linear transformation from the unit open disk to the right half plane.
Let $\alpha_{n}(t)$ be the unique point in $\mathbb{D}$ such that $$ \varphi(\alpha_{n}(t))=(1-t)\varphi(z_{n})+t\varphi(z_{n+1}), $$ for $t\in[0,1]$ and $n=1,2,\cdots$. Furthermore, following from the idea of Nestoridis \cite{Ne79}, define the map $B_t$ from $[0,1]$ to $\mathcal{F}$ by \begin{equation}\label{path} t \longmapsto B_{t}=\prod^{\infty}_{n=N}\frac{\overline{\alpha_{n}(t)}}{\mid\alpha_{n}(t)\mid}\cdot\frac{\alpha_{n}(t)-z}{1-\overline{\alpha_{n}(t)}z}. \end{equation} If the above map is continuous, then the relation (\ref{NtoN1}) holds and consequently $B\sim zB$. To prove the continuity of $B_t$, the following theorem plays an important role. \begin{lemma}[Lemma $1$ in \cite{Ne79}]\label{Nest} Let \begin{equation*} K_{1}=\prod^{\infty}_{n=1}\frac{\overline{\alpha_{n}}}{{\vert}\alpha_{n}{\vert}}\frac{\alpha_{n}-z}{1-\overline{\alpha_{n}}z}\ \ \ \text{and}\ \ \ K_{2}=\prod^{\infty}_{n=1}\frac{\overline{\beta_{n}}}{{\vert}\beta_{n}{\vert}}\frac{\beta_{n}-z}{1-\overline{\beta_{n}}z} \end{equation*} be two infinite Blaschke products such that $K_{1}(0)>0$ and $K_{2}(0)>0$, then we have the following inequality, \begin{equation} \begin{aligned} {\Vert}K_{1}-K_{2}{\Vert}_{\infty}\le &\sum_{n}{\big \vert}\textrm{arg}\frac{\alpha_{n}}{\beta_{n}}{\big \vert}+2\sum_{n}{\big \vert}\textrm{arg}\frac{1-\alpha_{n}}{1-\beta_{n}}{\big \vert} \\ &+2\sup_{y\in \mathbb{R}}{\rm ess}\sum_{n}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n})-{\bf i}y}{\varphi(\beta_{n})-{\bf i}y}{\big \vert}. \end{aligned} \end{equation} \end{lemma} Now, we would apply Lemma \ref{Nest} to verify the continuity of the path $B_t$. More precisely, \begin{equation}\label{eq:1} \begin{aligned} {\Vert}B_{t}-B_{t+\Delta t}{\Vert}_{\infty}\le \ &\sum_{n}{\big \vert}\textrm{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t)}{\big \vert}+2\sum_{n}{\big \vert}\textrm{arg}\frac{1-\alpha_{n}(t)}{1-\alpha_{n}(t+\Delta t)}{\big \vert}\\ &+2\sup_{y\in \mathbb{R}}\textrm{ess}\sum_{n}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert}. \end{aligned} \end{equation} Then, to prove $\lim\limits_{\Delta t \to 0}{\Vert}B_{t}-B_{t+\Delta t}{\Vert}=0$, it suffices to prove the three items in the right of the inequality (\ref{eq:1}) tend to $0$ as $\Delta t \to 0$. \begin{proposition}\label{prop1} For any $B\in \mathcal{\widetilde{H}}_{SC}$, there exists a positive integer $N_1\in\mathbb{N}$ and a positive number $K_1$ such that for any $t\in [0,1]$ and small positive number $\Delta t$, \[ \sum\limits_{n=N_1}^{\infty}{\big \vert}\textrm{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t)}{\big \vert} \le K_1\Delta t. \] \end{proposition} \begin{proof} Without loss of generality, suppose that the zeros sequence of $B\in \mathcal{\widetilde{H}}_{SC}$ is fine in a strip cone $SC(1,\theta, T_1, T_2)$. By the definition of $\alpha_{n}(t)$, we have \begin{align*} 1-\alpha_{n}(t)=&1-\varphi^{-1}(\varphi(\alpha_{n}(t))) \\ =&1-\frac{\varphi(\alpha_{n}(t))-1}{\varphi(\alpha_{n}(t))+1} \\ =&\frac{2}{\varphi(\alpha_{n}(t))+1} \end{align*} Furthermore, \begin{align*} &\alpha_{n}(t+\Delta t)-\alpha_{n}(t) \\ =&(1-\alpha_{n}(t))-(1-\alpha_{n}(t+\Delta t)) \\ =&\frac{2\Delta t (\varphi(z_{n+1})-\varphi(z_{n}))}{(\varphi(\alpha_{n}(t))+1)(\varphi(\alpha_{n}(t+\Delta t))+1)} \\ =&\frac{4\Delta t (z_{n+1}-z_{n})}{(1-z_{n})(1-z_{n+1})} \cdot \frac{1}{(\varphi(\alpha_{n}(t))+1)(\varphi(\alpha_{n}(t+\Delta t))+1)}. 
\end{align*} Since ${\rm Re}\varphi(z_{n})$ increasingly tends to $+\infty$ and \[ \lim\limits_{k\rightarrow\infty}\frac{{\rm Re}\varphi(z_{n})}{{\vert}\varphi(z_{n}){\vert}}=\cos(\frac{\pi}{2}-\theta)>0, \] there is a positive number $R_1>0$ such that \[ {\vert}\varphi(\alpha_{n}(t))+1{\vert}\ge {\vert}{\rm Re}(\varphi(\alpha_{n}(t))+1){\vert}\ge {\vert}{\rm Re}(\varphi(z_{n})+1){\vert}\ge \frac{{\vert}\varphi(z_{n})+1{\vert}}{R_1}. \] In addition, the followings hold. \begin{enumerate} \item[(1)] \ The sequence ${\vert}1-z_{n}{\vert}$ non-increasingly tends to $1$, as $n\rightarrow\infty$. \item[(2)] \ by Lemma \ref{genlu}, there exists a positive constants $C_{1}$ such that \begin{equation*} 0<C_{1}\le{\big \vert}\frac{1-z_{n+1}}{1-z_{n}}{\big \vert}. \end{equation*} \end{enumerate} Then, we have \begin{align*} &{\vert}\alpha_{n}(t+\Delta t)-\alpha_{n}(t){\vert} \\ =&{\big \vert}\frac{4\Delta t (z_{n+1}-z_{n})}{(1-z_{n})(1-z_{n+1})} {\big \vert}\cdot {\big \vert}\frac{1}{(\varphi(\alpha_{n}(t))+1)(\varphi(\alpha_{n}(t+\Delta t))+1)} {\big \vert} \\ \le&\frac{4\Delta t ({\vert}1-z_{n+1}{\vert}+{\vert}1-z_{n}{\vert})}{{\vert}1-z_{n}{\vert}{\vert}1-z_{n+1}{\vert}} \cdot \frac{R_1^2}{{\vert}\varphi(z_{n})+1{\vert}^2} \\ \le&\frac{8\Delta t {\vert}1-z_{n}{\vert}}{{\vert}1-z_{n}{\vert}{\vert}1-z_{n+1}{\vert}} \cdot \frac{R_1^2}{{\vert}\frac{2}{1-z_n}{\vert}^2} \\ =&2R_1^2\cdot \Delta t \cdot \frac{{\vert}1-z_{n}{\vert}}{{\vert}1-z_{n+1}{\vert}} \cdot {\vert}{1-z_n}{\vert} \\ \le& \frac{2R_1^2}{C_1}\cdot{\vert}\Delta t{\vert}\cdot {\vert}{1-z_n}{\vert}. \end{align*} Given a positive number $0<r<1$, it follows from $z_{n}\rightarrow 1$ that there exists a positive integer $N_1\in \mathbb{N}$ such that, for any $n\ge N_1$ and any $t\in [0,1]$, \[ {\vert}\alpha_{n}(t){\vert}\ge r. \] Thus, considering the area of the triangle with vertices $0$, $\alpha_{n}(t+\Delta t)$ and $\alpha_{n}(t)$, we have \begin{align*} &{\big \vert}\textrm{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t)}{\big \vert} \\ \le& \frac{\pi}{2}\cdot \sin{\big \vert}\textrm{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t)}{\big \vert} \\ \le& \frac{\pi}{2}\cdot \frac{{\vert}\alpha_{n}(t+\Delta t)-\alpha_{n}(t){\vert}\cdot 1}{{\vert}\alpha_{n}(t+\Delta t){\vert}\cdot{\vert}\alpha_{n}(t){\vert}} \\ \le& \frac{R_1^2\pi}{r^2C_1}\cdot \Delta t \cdot {\vert}{1-z_n}{\vert}. \end{align*} Consequently, by $\sum_{n}{\vert}{1-z_n}{\vert}<\infty$, for any $t\in[0,1]$, \[ \sum\limits_{n=N_1}^{\infty}{\big \vert}\textrm{arg}\frac{\alpha_{n}(t)}{\alpha_{n}(t+\Delta t)}{\big \vert} \le \Delta t \cdot \frac{R_1^2\pi}{r^2C_1}\sum\limits_{n=N_1}^{\infty}{\vert}{1-z_n}{\vert}. \] Moreover, write \[ K_1=\frac{R_1^2\pi}{r^2C_1}\sum\limits_{n=1}^{\infty}{\vert}{1-z_n}{\vert} \] as required. \end{proof} \begin{proposition}\label{prop2} For any $B\in \mathcal{\widetilde{H}}_{SC}$, there exists a positive integer $N_2\in\mathbb{N}$ and a positive number $K_2$ such that for any $t\in [0,1]$ and small positive number $\Delta t$, \[ \sup\limits_{y\in \mathbb{R}}\textrm{ess}\sum\limits_{n=N_2}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert}\le K_2\Delta t. \] \end{proposition} \begin{proof} Without loss of generality, suppose that the zeros sequence of $B\in \mathcal{\widetilde{H}}_{SC}$ is fine in a strip cone $SC(1,\theta, T_1, T_2)$. 
Let the strip $SL(\frac{\pi}{2}-\theta,L_{1},L_{2})$ be the image of the strip cone $SC(1,\theta,T_{1},T_{2})$ under the map $\varphi(z)$, where \[ L_{1}:y-\tan(\frac{\pi}{2}-\theta)\cdot x-c_1=0 \ \ \ \text{and} \ \ \ L_{2}:y-\tan(\frac{\pi}{2}-\theta)\cdot x-c_2=0. \] Denote by $L$ the straight line passing $\varphi(z_n)$ and parallel $L_1$, and denote by $\omega_n$ the angle between $L$ and the straight line passing through $\varphi(z_{n})$ and $\varphi(z_{n+1})$. Then \[ \sin\omega_n\le \frac{{\vert}c_1-c_2{\vert}}{{\vert}\varphi(z_{n+1})-\varphi(z_n){\vert}}. \] Since there are positive numbers $C_1$ and $C_2$ such that \begin{equation*} 0<C_{1}\le{\big \vert}\frac{1-z_{n+1}}{1-z_{n}}{\big \vert}\le C_2<1, \end{equation*} and \[ \lim\limits_{n\rightarrow\infty}{\big \vert}\frac{\varphi(z_{n})}{\varphi(z_{n+1})} {\big \vert} \bigg/ {\big \vert}\frac{1-z_{n+1}}{1-z_{n}} {\big \vert} =\lim\limits_{k\rightarrow\infty}{\big \vert}\frac{1+z_{n}}{1+z_{n+1}} {\big \vert}=1, \] one can see that \begin{align*} \lim\limits_{n\rightarrow\infty}{\vert}\varphi(z_{n+1})-\varphi(z_n){\vert}=& \lim\limits_{n\rightarrow\infty}{\vert}\varphi(z_n){\vert}\left(\frac{{\vert}\varphi(z_{n+1}){\vert}}{{\vert}\varphi(z_n){\vert}}-1\right) \\ \le& \lim\limits_{n\rightarrow\infty}\left(\frac{1}{C_2}-1\right){\vert}\varphi(z_n){\vert} \\ =&\infty. \end{align*} Consequently, \[ \lim\limits_{n\rightarrow\infty}\omega_n\le \frac{\pi}{2}\lim\limits_{n\rightarrow\infty}\sin\omega_n\le \frac{\pi}{2}\lim\limits_{n\rightarrow\infty}\frac{{\vert}c_1-c_2{\vert}}{{\vert}\varphi(z_{n+1})-\varphi(z_n){\vert}}=0, \] that is \[ \lim\limits_{n\rightarrow\infty}\textrm{arg}(\varphi(z_{n+1})-\varphi(z_n))=\frac{\pi}{2}-\theta. \] Denote by $\vartheta_n$ the angle between the imaginary axis and the straight line passing through $\varphi(z_{n})$ and $\varphi(z_{n+1})$. Then, for any $\epsilon>0$, there exists a positive integer $N_2\in\mathbb{N}$ such that for every $n\ge N_2$ \[ (1-\epsilon)\sin\theta\le \sin\vartheta_n \le(1+\epsilon)\sin\theta \ \ \ \text{and} \ \ \ \textrm{Re}z_n>0. \] Denote by $h_n$ the distance from $\mathbf{i}y$ to the straight line passing through $\varphi(\alpha_{n}(t))$ and $\varphi(\alpha_{n}(t+\Delta t))$. Consider the area of the triangle with vertices $\mathbf{i}y$, $\varphi(\alpha_{n}(t))$ and $\varphi(\alpha_{n}(t+\Delta t))$, one can see that \begin{align*} &\sin\left(\textrm{arg}\frac{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\varphi(\alpha_{n}(t))-{\bf i}y}\right) \\ =&\frac{{\vert}\varphi(\alpha_{n}(t+\Delta t))-\varphi(\alpha_{n}(t)){\vert}\cdot h_n}{{\vert}\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y{\vert}\cdot {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}} \\ \le& \frac{\frac{\textrm{Re}\varphi(\alpha_{n}(t+\Delta t))-\textrm{Re}\varphi(\alpha_{n}(t))}{(1-\epsilon)\sin\theta}\cdot \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}(1+\epsilon)\sin\theta}{{\vert}\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y{\vert}\cdot {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}} \\ =& \Delta t\cdot\frac{1+\epsilon}{1-\epsilon}\cdot \frac{(\textrm{Re}\varphi(z_{n+1})-\textrm{Re}\varphi(z_{n}))\cdot \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{{\vert}\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y{\vert}\cdot {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}}. 
\end{align*} Since $\{\textrm{Re}\varphi(z_{n})\}$ is an increasing sequence of positive numbers tending to $+\infty$, we have \begin{align*} &{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert} \\ \le & \frac{\pi}{2}\sin\left(\textrm{arg}\frac{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\varphi(\alpha_{n}(t))-{\bf i}y}\right)\\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot \frac{(\textrm{Re}\varphi(z_{n+1})-\textrm{Re}\varphi(z_{n}))\cdot \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{{\vert}\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y{\vert}\cdot {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}}. \end{align*} Moreover, \begin{align*} &{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert} \\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot \frac{(\textrm{Re}\varphi(z_{n+1})-\textrm{Re}\varphi(z_{n}))\cdot \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{{\vert}\textrm{Re}\varphi(z_{n}){\vert}\cdot {\vert}\textrm{Re}\varphi(z_{n}){\vert}} \\ = & \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot \left(\frac{\textrm{Re}\varphi(z_{n+1})}{\textrm{Re}\varphi(z_{n})}-1 \right)\cdot\frac{\max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{\textrm{Re}\varphi(z_{n})} \\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{2\widetilde{C_1}(1-\epsilon)}\cdot\frac{\max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{\textrm{Re}\varphi(z_{n})}. \end{align*} $(1)$ \ Suppose that $y$ satisfies $\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\le {\vert}c_1-c_2{\vert}$. In this case, \[ \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\le \min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}+{\vert}c_1-c_2{\vert} \le 2{\vert}c_1-c_2{\vert}. \] Then, \begin{align*} &\sum\limits_{n=N_2}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert} \\ \le & \sum\limits_{n=N_2}^{\infty}\Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{2\widetilde{C_1}(1-\epsilon)}\cdot\frac{2{\vert}c_1-c_2{\vert}}{\textrm{Re}\varphi(z_{n})}\\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1}){\vert}c_1-c_2{\vert}}{\widetilde{C_1}(1-\epsilon)}\cdot\sum\limits_{n=N_2}^{\infty}\frac{1}{{\vert}\varphi(z_{n}){\vert}(1-\epsilon)\sin\theta} \\ = & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1}){\vert}c_1-c_2{\vert}}{\widetilde{C_1}(1-\epsilon)^2\sin\theta}\cdot\sum\limits_{n=N_2}^{\infty}\frac{{\vert}1-z_{n}{\vert}}{{\vert}1+z_{n}{\vert}} \\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1}){\vert}c_1-c_2{\vert}}{\widetilde{C_1}(1-\epsilon)^2\sin\theta}\cdot\sum\limits_{n=N_2}^{\infty}{\vert}1-z_{n}{\vert}. \end{align*} $(2)$ \ Suppose that $y$ satisfies $\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\ge {\vert}c_1-c_2{\vert}$. Let $N\ge N_2$ be the first positive such that \[ \textrm{Re}\varphi(z_{N})\ge \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}. 
\] Then, \begin{align*} &\sum\limits_{n=N}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert} \\ \le & \sum\limits_{n=N}^{\infty}\Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{\widetilde{C_1}(1-\epsilon)}\cdot\frac{\max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{\textrm{Re}\varphi(z_{n})}\\ = & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{\widetilde{C_1}(1-\epsilon)}\cdot\frac{\max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{\textrm{Re}\varphi(z_{N})} \cdot\sum\limits_{n=N}^{\infty}\frac{\textrm{Re}\varphi(z_{n})}{\textrm{Re}\varphi(z_{N})} \\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{\widetilde{C_1}(1-\epsilon)} \cdot\sum\limits_{k=0}^{\infty}\widetilde{C_2}^k \\ = & \Delta t\cdot\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{\widetilde{C_1}(1-\epsilon)(1-\widetilde{C_2})}. \end{align*} Notice that for $t\in[0,1]$ \[ {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}\ge \min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\sin\theta \ge {\vert}c_1-c_2{\vert}\sin\theta. \] It easy to see that \begin{align*} \frac{\max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{{\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}}\le& \frac{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}+{\vert}c_1-c_2{\vert}}{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\sin\theta} \\ =& \left(1+\frac{{\vert}c_1-c_2{\vert}}{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}\right)\cdot\frac{1}{\sin\theta} \\ \le& \frac{2}{\sin\theta}. \end{align*} Then, for any $N_2\le n \le N-1$, \begin{align*} &\sum\limits_{n=N_2}^{N-1}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert} \\ \le & \sum\limits_{n=N_2}^{N-1}\Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot \frac{(\textrm{Re}\varphi(z_{n+1})-\textrm{Re}\varphi(z_{n}))\cdot \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{{\vert}\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y{\vert}\cdot {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}}\\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)}{2(1-\epsilon)}\cdot \sum\limits_{n=N_2}^{N-1}\frac{(\textrm{Re}\varphi(z_{n+1})-\textrm{Re}\varphi(z_{n}))\cdot \max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\sin\theta\cdot {\vert}\varphi(\alpha_{n}(t))-{\bf i}y{\vert}}\\ \le & \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}\sin^2\theta}\cdot \sum\limits_{n=N_2}^{N-1}(\textrm{Re}\varphi(z_{n+1})-\textrm{Re}\varphi(z_{n}))\\ =& \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\sin^2\theta}\cdot \frac{\textrm{Re}\varphi(z_{N})-\textrm{Re}\varphi(z_{N_2})}{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}\\ \le& \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\sin^2\theta}\cdot \frac{\frac{\textrm{Re}\varphi(z_{N})}{\textrm{Re}\varphi(z_{N-1})}\cdot \textrm{Re}\varphi(z_{N-1})}{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}\\ \le& \Delta t\cdot\frac{\pi(1+\epsilon)}{(1-\epsilon)\sin^2\theta}\cdot \frac{\frac{1}{\widetilde{C_1}}\max\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}}{\min\{{\vert}y-c_1{\vert}, {\vert}y-c_2{\vert}\}} \\ =& \Delta t\cdot\frac{2\pi(1+\epsilon)}{(1-\epsilon)\widetilde{C_1}\sin^2\theta}. 
\end{align*} Thus, in this case, \begin{align*} &\sum\limits_{n=N_2}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))-{\bf i}y}{\varphi(\alpha_{n}(t+\Delta t))-{\bf i}y}{\big \vert} \\ \le& \Delta t\cdot\left(\frac{2\pi(1+\epsilon)}{(1-\epsilon)\widetilde{C_1}\sin^2\theta}+\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{\widetilde{C_1}(1-\epsilon)(1-\widetilde{C_2})}\right). \end{align*} Therefore, we could write \[ K_2=\left\{\begin{matrix} \frac{\pi(1+\epsilon)(1-\widetilde{C_1}){\vert}c_1-c_2{\vert}}{\widetilde{C_1}(1-\epsilon)^2\sin\theta}\cdot\sum\limits_{n=N_2}^{\infty}{\vert}1-z_{n}{\vert}, \\ \frac{2\pi(1+\epsilon)}{(1-\epsilon)\widetilde{C_1}\sin^2\theta}+\frac{\pi(1+\epsilon)(1-\widetilde{C_1})}{\widetilde{C_1}(1-\epsilon)(1-\widetilde{C_2})} \end{matrix} \right\} \] as required. \end{proof} \begin{proposition}\label{prop3} For any $B\in \mathcal{\widetilde{H}}_{SC}$, there exists a positive integer $N_3\in\mathbb{N}$ and a positive number $K_3$ such that for any $t\in [0,1]$ and small positive number $\Delta t$, \[ \sum_{n}{\big \vert}\textrm{arg}\frac{1-\alpha_{n}(t)}{1-\alpha_{n}(t+\Delta t)}{\big \vert}\le K_3\Delta t. \] \end{proposition} \begin{proof} Without loss of generality, suppose that the zeros sequence of $B\in \mathcal{\widetilde{H}}_{SC}$ is fine in a strip cone $SC(1,\theta, T_1, T_2)$. Let the strip $SL(\frac{\pi}{2}-\theta,L_{1},L_{2})$ be the image of $SC(1,\theta,T_{1},T_{2})$ under the map $\varphi(z)$, where \[ L_{1}:y-\tan(\frac{\pi}{2}-\theta)\cdot x-c_1=0 \ \ \ \text{and} \ \ \ L_{2}:y-\tan(\frac{\pi}{2}-\theta)\cdot x-c_2=0. \] For convenience, assume the line $L_2$ is on the right of the line $L_1$. By translating $L_{2}$ one unit to the right, we obtain a new line $\widehat{L_2}$, more precisely, \[ \widehat{L_{2}}: \ y-\tan(\frac{\pi}{2}-\theta)\cdot (x-1)-c_2=0. \] Let \[ \varphi(\widehat{z_n})=\varphi(z_{n})+1 \ \ \text{and} \ \ \varphi(\widehat{\alpha}_{n}(t))=\varphi(\alpha_{n}(t))+1. \] It is not difficult to see that $\{\varphi(\widehat{z_n})\}_{n=1}^{\infty}$ also satisfies the following conditions as well as a fine sequence. \begin{enumerate} \item[(1)] \ The sequence $\{\varphi(\widehat{z_n})\}_{n=1}^{\infty}$ lies in the strip $SL(\frac{\pi}{2}-\theta,L_{1},\widehat{L_{2}})$. \item[(2)] \ $\lim\limits_{n\rightarrow\infty}\textrm{arg}(\varphi(z_{n+1})-\varphi(z_n))=\frac{\pi}{2}-\theta$. \item[(3)] \ ${\rm Re} \varphi(\widehat{z_n})$ monotonically tends to $+\infty$, and there are two positive numbers $\widetilde{D_1}$ and $\widetilde{D_2}$ such that \[ 0<\widetilde{D_1}\leq \frac{{\rm Re}\varphi(\widehat{z_n})}{{\rm Re}\varphi(\widehat{z_{n+1}})} \leq \widetilde{D_2}<1. \] \end{enumerate} Then, the Proposition \ref{prop2} also holds for the sequence $\{\varphi(\widehat{z_n})\}_{n=1}^{\infty}$. That is, there exists a positive integer $N_3\in\mathbb{N}$ and a positive number $K_3$ such that for any $t\in [0,1]$ and small positive number $\Delta t$, \[ \sup\limits_{y\in \mathbb{R}}\textrm{ess}\sum\limits_{n=N_3}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\widehat{\alpha}_{n}(t))-{\bf i}y}{\varphi(\widehat{\alpha}_{n}(t+\Delta t))-{\bf i}y}{\big \vert}\le K_3\Delta t. 
\] In particular, the above inequality holds for $y=0$, and hence \begin{equation*} \begin{aligned} K_3\Delta t \ge & \sum\limits_{n=N_3}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\widehat{\alpha}_{n}(t))}{\varphi(\widehat{\alpha}_{n}(t+\Delta t))}{\big \vert} \\ =&\sum\limits_{n=N_3}^{\infty}{\big \vert}\textrm{arg}\frac{\varphi(\alpha_{n}(t))+1}{\varphi(\alpha_{n}(t+\Delta t))+1}{\big \vert}\\ =&\sum\limits_{n=N_3}^{\infty}{\big \vert}\textrm{arg}\frac{\frac{1+\alpha_{n}(t)}{1-\alpha_{n}(t)}+1}{\frac{1+\alpha_{n}(t+\Delta t)}{1-\alpha_{n}(t+\Delta t)}+1}{\big \vert}\\ =&\sum\limits_{n=N_3}^{\infty}{\big \vert}\textrm{arg}\frac{1-\alpha_{n}(t+\Delta t)}{1-\alpha_{n}(t)}{\big \vert}. \end{aligned} \end{equation*} This completes the proof. \end{proof} {\bf Proof of Main Theorem :} Given any $B\in\mathcal{{H}}_{SC}$. Without loss of generality, we may assume that its zeros lie in a strip cone $SC(1,\theta,T_{1},T_{2})$, $\theta\in(0, \pi)$. By Lemma \ref{finesub}, it has a factor $\widetilde{B}\in\mathcal{\widetilde{H}_{SC}}$, denoted by \[ \widetilde{B}(z)=\prod\limits_{n=1}^{\infty}\frac{{\vert}z_{n}{\vert}}{z_{n}}\frac{z_{n}-z}{1-\overline{z_{n}}z}. \] Following from Proposition \ref{prop1}, Proposition \ref{prop2}, Proposition \ref{prop3} and Lemma \ref{Nest}, we could choose $N\ge \max\{N_1, N_2, N_3\}$ and then there is a continuous path from \[ \prod\limits_{n=N}^{\infty}\frac{{\vert}z_{n}{\vert}}{z_{n}}\frac{z_{n}-z}{1-\overline{z_{n}}z}\ \ \ \text{to} \ \ \ \prod\limits_{n=N+1}^{\infty}\frac{{\vert}z_{n}{\vert}}{z_{n}}\frac{z_{n}-z}{1-\overline{z_{n}}z}. \] Therefore, by the path-connectedness of M\"{o}bius transformations and Lemma \ref{factorconnect}, we have $B\sim zB$. \section*{Declarations} \begin{itemize} \item Ethics approval \noindent Not applicable. \item Competing interests \noindent The author declares that there are no conflict of interest or competing interests. \item Authors' contributions \noindent All authors reviewed this paper. \item Funding \noindent There is no funding source for this manuscript. \item Availability of data and materials \noindent Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. \end{itemize} \end{document}
\begin{document} \title[Lucas non-Wieferich primes in arithmetic progressions and the $abc$ conjecture]{Lucas non-Wieferich primes in arithmetic progressions and the $abc$ conjecture} \author[K. Anitha, I. Mumtaj Fathima and A R Vijayalakshmi]{K. Anitha$^{(1)}$, I. Mumtaj Fathima$^{(2)}$ and A R Vijayalakshmi$^{(3)}$} \address{$^{(1)}$Department of Mathematics, SRM IST Ramapuram, Chennai 600089, India} \address{$^{(2)}$Research Scholar, Department of Mathematics, Sri Venkateswara College of Engineering \\ Affiliated to Anna University, Sriperumbudur, Chennai 602117, India} \address{$^{(3)}$Department of Mathematics, Sri Venkateswara College of Engineering, Sriperumbudur, Chennai 602117, India} \email{$^{(1)}[email protected]} \email{$^{(2)}[email protected]} \email{$^{(3)}[email protected]} \begin{abstract} We prove the lower bound for the number of Lucas non-Wieferich primes in arithmetic progressions. More precisely, for any given integer $k\geq 2$ there are $\gg \log x$ Lucas non-Wieferich primes $p\leq x$ such that $p\equiv\pm1\Mod{k}$, assuming the $abc$ conjecture for number fields. Further, we discuss some applications of Lucas sequences in Cryptography. \end{abstract} \subjclass[2020]{11B25, 11B39, 11A41} \keywords{abc conjecture, arithmetic progressions, Lucas sequences, Lucas non-Wieferich primes, Public-key cryptosystem (LUC), Wieferich primes} \maketitle \section{Introduction} Let $a\geq2$ be an integer. An odd rational prime $p$ is said to be a \textit{Wieferich prime for base $a$} if \begin{equation}\label{wl} a^{p-1}\equiv 1 \Mod {p^2}. \end{equation} Otherwise, it is called a \textit{non-Wieferich prime for base $a$}. In $1909$, Arthur Wieferich \cite{wief} proved that if the first case of Fermat's last theorem is not true for a prime $p$, then $p$ is a Wieferich prime for base $2$. Today, $1093$ and $3511$ are the only known Wieferich primes for base $2$. It is still unknown whether there are infinitely many Wieferich primes that exist or not, for any given base $a$. But, Silverman proved that there are infinitely many non-Wieferich primes that exist for any base $a$, assuming the $abc$ conjecture. \begin{thm}(Silverman \cite[Theorem 1]{sman}) For any fixed $a\in \mathbb{Q}^{*}$, where $\mathbb{Q}^{*}=\mathbb{Q}\backslash\{0\}$ and $a\neq\pm1$. If the $abc$ conjecture is true, then \begin{equation*} \big|\{primes \, p\leq x: a^{p-1} \not\equiv 1 \Mod {p^2} \}\big| \gg_a \log x. \end{equation*} \end{thm} \noindent In $2013$, Graves and Ram Murty extended Silverman's result to certain arithmetic progressions. \begin{thm}(Graves and Ram Murty \cite[Theorem 3.3]{graves}) If $a,k$ and $n$ are positive integers and one assumes the abc conjecture, then \begin{equation*} \big|\{primes \, p\leq x: p\equiv 1 \Mod k,\\a^{p-1} \not\equiv 1 \Mod {p^2}\}\big| \gg_{a,k} \frac{\log x}{\log\log x}. \end{equation*} \end{thm} \noindent Later, Chen and Ding \cite{chending} improved the lower bound to $\displaystyle\frac{\log x (\log\log\log x)^M}{\log\log x}$, where $M$ is any fixed positive integer. Recently, Ding \cite{ding} further sharpened it to $\log x$. \noindent We prove the similar lower bound for non-Wieferich primes in Lucas sequences under the assumption of the $abc$ conjecture for number fields. We first study the Lucas sequences and Lucas non-Wieferich primes. 
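We remark that the congruence \eqref{wl} is easy to test numerically. The following minimal sketch (written in Python, using only the built-in modular exponentiation \texttt{pow}) searches for Wieferich primes for the base $a=2$ and recovers the two known examples $1093$ and $3511$ mentioned above.
\begin{verbatim}
# Search for Wieferich primes for base a: primes p with a^(p-1) == 1 (mod p^2).
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def wieferich_primes(a, bound):
    # pow(a, p - 1, p * p) computes a^(p-1) modulo p^2 efficiently
    return [p for p in range(3, bound, 2)
            if is_prime(p) and pow(a, p - 1, p * p) == 1]

print(wieferich_primes(2, 4000))   # expected output: [1093, 3511]
\end{verbatim}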
\section{Lucas Sequences} \begin{defn}\cite{mci} The \textit{Lucas sequences of first kind} $(U_n(P, \, Q))_{n\geq0}$ and the \textit{second kind} $(V_{n}(P, \, Q))_{n\geq0}$ are defined by the recurrence relations, \begin{align*} U_n(P, Q) &= P U_{n-1}(P, Q)-Q U_{n-2}(P, Q),\\ V_{n}(P, Q) &= P V_{n-1}(P, Q)-Q\, V_{n-2}(P, Q), \end{align*} for all $n\geq 2$ with initial conditions $U_0(P, Q)=0$ and $U_1(P, Q)=1$, $V_0(P, \, Q)=2$ and $V_1(P, \, Q)=P$. Here, $P$ and $Q$ are non-zero fixed integers with $\gcd(P, Q)=1$. We always assume that the discriminant $\Delta:=P^2-4Q$ is positive. \noindent Alternatively, we have the formulae \cite{ribbook} \begin{align} U_n(P, Q) &= \frac{\alpha^n-\beta^n}{\alpha-\beta},\label{bin}\\ V_{n}(P, Q) &= \alpha^n+\beta^n, \end{align} \noindent where $\alpha$ and $\beta$ are the zeros of the polynomial $x^2-Px+Q$, following the convention that $|\alpha| > |\beta|$. It is also called the \textit{Binet formulae}. Lucas sequences have some other directions in solving Diophantine equations of the symmetric form. In \cite{mwang}, Min Wang et al. solved Diophantine equations of the form $A_{n_1}\cdots A_{n_k}=B_{m_1} \cdots B_{m_r}C_{t_1} \cdots C_{t_s}$, where $(A_n), \, (B_m)$, and $(C_t)$ are Lucas sequences of first or second kind and also they found all the solutions of certain symmetric Diophantine equations. \end{defn} We note that the \textit{Fibonacci sequences} are particular case of Lucas sequences of first kind associated to the pair $(1,-1)$ and it is denoted by $(F_n)_{n\geq0}$ with initial conditions $F_0=0$ and $F_1=1$. The sequence of \textit{Lucas numbers} are example of Lucas sequence of second kind associated to the pair $(1,-1)$ and it is denoted by $(L_n)_{n\geq0}$ with initial conditions $L_0=2$ and $L_1=1$ \cite{ribbook}. We observe that, $$ \displaystyle\lim_{n \to \infty}\frac{L_{n+1}}{L_n}=(1+\sqrt{5})/2=\lim_{n \to \infty}\frac{F_{n+1}}{F_n}. $$ The number $(1+\sqrt{5})/2$ is called \textit{the golden ratio} \cite{koshy}. The Fibonacci sequences and the golden ratio are used in mathematical models that appear in the field of Chemistry. In particular, in the structure of elements, the periodic table of the elements, and their symmetrical crystalline structures \cite{wlo}, \cite{bat}. Now, on returning to the general Lucas sequences of first kind $(U_n(P, \, Q))_{n\geq0}$. Throughout this paper, we simply write $U_n$ instead of $U_n(P, Q)$, if $P$ and $Q$ are fixed. In 2007, McIntosh and Roettger \cite{mci} defined Lucas non-Wieferich primes and studied their properties. \begin{defn}(McIntosh and Roettger \cite{mci}) An odd prime $p$ is called a \textit{Lucas--Wieferich prime associated to the pair $(P, Q)$} if $$ U_{p - \legendre{\Delta}{p}}\equiv 0 \Mod {p^2}, $$ where $\legendre{\Delta}{p}$ denotes the Legendre symbol. Otherwise, it is called a \textit{Lucas non-Wieferich prime associated to the pair $(P, Q)$}. \end{defn} If $(P,Q) = (3,2)$, then $\Delta =1$ and $U_{p - \legendre{\Delta}{p}} = 2^{p-1}-1$. Every Wieferich prime is thus a Lucas--Wieferich prime associated to the pair $(3, 2)$ \cite{mci}. In $2001$, Ribenboim \cite{riben} proved that there are infinitely many Lucas non-Wieferich primes under the assumption of the $abc$ conjecture. Recently, Rout \cite{rout2} proved some lower bound for the number of Lucas non-Wieferich primes $p$ such that $p\equiv1\Mod{k}$. However, we find some gaps in his proofs. More clearly, he used the following lemma which is a classical result of cyclotomic polynomial $\Phi_m(x)$. 
\begin{lem}(Ram Murty \cite{ram})\label{rammurty} If $p|\Phi_m(a)$, then either $p|m$ or $p\equiv 1 \Mod m$. \end{lem} The above lemma is true for any rational integer $a$, but it fails in general when $a$ is replaced by an algebraic number. Further, in Rout's proof (\cite[page 6, line 4]{rout2}), the rational integer $a$ is mixed up with the algebraic number $\alpha/\beta$. This was noted by Wang and Ding \cite{wang} in their paper. We now give an example that illustrates how Lemma \ref{rammurty} fails when $a$ is replaced by $\alpha/\beta$. \begin{exmp} Let $\alpha=(7+\sqrt{37})/2$ and $\beta=(7-\sqrt{37})/2$. For $m=6$, the cyclotomic polynomial $\Phi_6(\alpha/\beta)=(860+140\sqrt{37})/9$. For the prime $p=5$, we have $5\mid\Phi_6(\alpha/\beta)$, but $5\nmid6$ and $5\not\equiv1 \Mod{6}$. \end{exmp} In this paper, we fill these gaps and prove, using Ding's proof techniques \cite{ding}, that there are $\gg\log x$ Lucas non-Wieferich primes $p$ such that $p\equiv\pm1 \Mod{k}$. To the best of our knowledge, our main theorem is the first result which addresses the problem of Lucas non-Wieferich primes in arithmetic progressions. More precisely, we prove the following: \section{Main Theorem} \begin{thm}\label{theorem2} Let $k\geq2$ be a fixed integer and let $n>1$ be any integer. Assume the $abc$ conjecture for number fields (defined below). Then $$ \bigg|\bigg\{primes \, p\leq x: p \equiv\pm 1 \Mod k,\, U_{p-\legendre{\Delta}{p}}\not\equiv 0 \Mod { p^2}\bigg\} \bigg| \gg_{\alpha, k}\log x. $$ \end{thm} \section{The $abc$ conjecture} \subsection{\normalfont\textit{The $abc$ conjecture for integers (Oesterl\'{e}, Masser)}} Given any real number $\varepsilon>0$, there is a constant $C_{\varepsilon}$ such that for every triple of coprime positive integers $a, \, b, \,c$ satisfying $a+b=c$, we have \begin{equation*} c<C_{\varepsilon}(rad(abc))^{1+\varepsilon}, \end{equation*} where $rad(abc)=\textstyle\prod\limits_{p|abc}p$. We now recall the definition of the Vinogradov symbol (denoted by $\gg$). \begin{defn}\cite{vojta} Let $f$ and $g$ be two non-negative functions. If $f<cg$ for some positive constant $c$, then we write $f\ll g$ or $g\gg f$. This notation is also called the \textit{Vinogradov symbol}. \end{defn} \subsection{\normalfont\textit{The generalized $abc$ conjecture for algebraic number fields (\cite{vojta}, \cite{gyo})}} Let $K$ be an algebraic number field with ring of integers $\mathcal{O}_K$ and let $K^{*}=K\backslash\{0\}$. Let $V_K$ be the set of all primes on $K$, i.e., any $\upsilon\in V_K$ is an equivalence class of non-trivial norms on $K$ (finite or infinite). For $\upsilon\in V_K$, we define an absolute value $\|\cdot\|_\upsilon$ by \begin{displaymath} \|x\|_\upsilon = \begin{cases} |\psi(x)| & \text{if } \upsilon \text{ is infinite, corresponding } \\ & \text{to the real embedding } \psi:K\rightarrow\mathbb{C} ,\\ |\psi(x)|^2 & \text{if } \upsilon \text{ is infinite, corresponding } \\ & \text{to the complex embedding } \psi:K\rightarrow\mathbb{C} ,\\ N(\mathfrak{p})^{-ord_{\mathfrak{p}} (x)} & \text{if } \upsilon \text{ is finite and } \mathfrak{p} \text{ is the corresponding } \\ & \text{ prime ideal}, \end{cases} \end{displaymath} for all $x \in K^*$. For $x\neq0$, $ord_{\mathfrak{p}}(x)$ denotes the exponent of $\mathfrak{p}$ in the prime ideal factorization of the principal fractional ideal $(x)$.
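For instance, if $K=\mathbb{Q}$ (so that the finite places correspond to the rational primes and there is one real embedding) and $x=12=2^{2}\cdot3$, then $\|12\|_{2}=2^{-2}$, $\|12\|_{3}=3^{-1}$, $\|12\|_{q}=1$ for every prime $q\geq5$, and $\|12\|_{\infty}=12$; in particular, $\prod_{\upsilon\in V_{\mathbb{Q}}}\|12\|_{\upsilon}=1$, in accordance with the product formula.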
\noindent For any triple $(a, b, c)$ with $a, b, c\in K^*$, the \textit{height} of the triple is
$$
H_{K}(a, b, c):=\displaystyle\prod\limits_{\upsilon\in V_K}\max\big(\|a\|_{\upsilon}, \|b\|_{\upsilon}, \|c\|_{\upsilon}\big).
$$
The \textit{radical} of the triple $(a, b, c)$ is
$$
rad_K(a, b, c):=\displaystyle\prod\limits_{\mathfrak{p} \in I_{K}(a, b, c)}N(\mathfrak{p})^{ord_\mathfrak{p}(p)},
$$
where $p$ is the rational prime lying below the prime ideal $\mathfrak{p}$ and $I_K(a, \, b, \,c)$ is the set of all prime ideals $\mathfrak{p}$ of $\mathcal{O}_K$ for which $\|a\|_\upsilon, \, \|b\|_\upsilon, \, \|c\|_\upsilon$ are not all equal.
\noindent The $abc$ conjecture for an algebraic number field $K$ states that for any $\varepsilon>0$,
$$
H_{K}(a, \,b, \, c)\ll_{\varepsilon,\, K}(rad_K(a, b, c))^{1+\varepsilon},
$$
for all $a, b, c\in K^*$ satisfying $a+b+c=0$.
\section{Preliminaries}
\noindent We need the following results for the proof of our main theorem.
\begin{lem}(Rout \cite[Corollary 3.3]{rout2}) \label{lem1}
Let $p$ be a prime coprime to $2Q$. Suppose $U_n\equiv0 \Mod{p}$ and $U_n\not\equiv0 \Mod{p^2}$. Then $U_{p- \legendre{\Delta}{p}}\equiv0 \Mod{p}$ and $U_{p-\legendre{\Delta}{p}}\not\equiv0 \Mod{p^2}$.
\end{lem}
In other words, Lemma \ref{lem1} says that if a prime $p$ coprime to $2Q$ divides the square-free part of $U_n$ for some $n\in \mathbb{N}$, then $p$ is a Lucas non-Wieferich prime. The \textit{rank of appearance (or apparition)} of a positive integer $k$ in the Lucas sequence $(U_n)_{n\geq0}$ is the least positive integer $m$ such that $k|U_{m}$. We denote the rank of apparition of $k$ by $\omega(k)$ if it exists \cite{lucas}. In 1930, Lehmer \cite{lehmer} proved that a prime $p$ divides $U_n$ if and only if $\omega(p)$ divides $n$. Combining this with the well-known fact that $p\mid U_{p-\legendre{\Delta}{p}}$ for every prime $p\nmid 2Q\Delta$, we see that $\omega(p)$ divides $p-\legendre{\Delta}{p}$ for all such primes.
\begin{lem}(Rout \cite[Lemma 3.4]{rout2})\label{lem2}
For all sufficiently large $n$, we have
\begin{equation}\label{eqnlem2}
|\alpha|^{n/2} < |U_n| \leq 2 |\alpha|^n.
\end{equation}
\end{lem}
\noindent We now recall the cyclotomic polynomials and some of their properties.
\begin{defn}\cite{ram}\label{cyclotomic}
Let $m\geq1$ be any integer. Then the $m^{th}$ \textit{cyclotomic polynomial} is
$$
\Phi_m(X)=\prod\limits_{\substack{h=1 \\ \gcd(h,m)=1}}^{m}(X-\zeta_m^h),
$$
where $\zeta_m$ is a primitive $m^{th}$ root of unity. It follows that
\begin{equation}\label{pab}
X^m-1=\prod\limits_{\substack{d|m }}\Phi_d(X).
\end{equation}
\end{defn}
The following lemma characterizes the prime divisors of $\Phi_m(\alpha, \beta)$, where
$$
\Phi_m(\alpha,\beta)=\prod\limits_{\substack{h=1\\ \gcd(h,m)=1}}^{m} (\alpha-\zeta_m^h\beta).
$$
In the following lemma, let $P(r)$ denote the greatest prime factor of $r$, with the convention that $P(0)=P(\pm1)=1$.
\begin{lem}(Stewart \cite[Lemma 2]{stewart})\label{rm}
Let $(\alpha+\beta)^2$ and $\alpha\beta$ be coprime non-zero integers with $\alpha/\beta$ not a root of unity. If $m>4$ and $m\neq6, 12$, then $P(m/\gcd(3,m))$ divides $\Phi_m(\alpha,\beta)$ to at most the first power. All other prime factors of $\Phi_m(\alpha, \beta)$ are congruent to $\pm1 \Mod{m}$. Further, if $m>e^{452}4^{67}$, then $\Phi_m(\alpha, \beta)$ has at least one prime factor congruent to $\pm1 \Mod{m}$.
\end{lem}
We remark that Yu.\ Bilu et al.\ \cite{bilu} reduced the above lower bound $e^{452}4^{67}$ to $30$.
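For orientation, we record a small Fibonacci illustration of Lemma \ref{rm}; it is not needed in the sequel.
\begin{exmp}
Take the Fibonacci pair $(P,Q)=(1,-1)$, so that $(\alpha+\beta)^2=1$, $\alpha\beta=-1$ and $\alpha-\beta=\sqrt{5}$. For a prime $m=p$, homogenizing the factorization \eqref{pab} gives $\alpha^p-\beta^p=\Phi_1(\alpha,\beta)\,\Phi_p(\alpha,\beta)$ with $\Phi_1(\alpha,\beta)=\alpha-\beta$, whence $\Phi_p(\alpha,\beta)=F_p$. For instance,
$$
\Phi_7(\alpha,\beta)=F_7=13\equiv-1 \Mod{7},\quad
\Phi_{11}(\alpha,\beta)=F_{11}=89\equiv1 \Mod{11},\quad
\Phi_{13}(\alpha,\beta)=F_{13}=233\equiv-1 \Mod{13},
$$
in agreement with the conclusion of Lemma \ref{rm} that every prime factor of $\Phi_m(\alpha,\beta)$ other than possibly $P(m/\gcd(3,m))$ is congruent to $\pm1 \Mod{m}$.
\end{exmp}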
In the above lemma, Stewart \cite{stewart} considered the homogenized cyclotomic polynomials $\Phi_d(\alpha,\beta)$, which satisfy
\begin{equation}\label{stewart cyclo}
\alpha^m-\beta^m=\prod\limits_{\substack{d|m }}\Phi_d(\alpha, \beta).
\end{equation}
We, however, consider prime divisors $p$ of $\Phi_m(\alpha/\beta)$ such that $p\nmid\alpha\beta=Q$. Since $\Phi_m(\alpha,\beta)=\beta^{\phi(m)}\Phi_m(\alpha/\beta)$, such a prime $p$ divides $\Phi_m(\alpha/\beta)$ if and only if it divides $\Phi_m(\alpha,\beta)$. Thus, by the above Lemma \ref{rm}, the prime divisors of $\Phi_m(\alpha/\beta)$ (other than possibly $P(m/\gcd(3,m))$) are congruent to $\pm1 \Mod{m}$.
\begin{lem}(Rout \cite[Lemma 2.10]{rout1})\label{rg}
For any real number $\alpha/\beta$ with $|\alpha/\beta|>1$, there exists a constant $C>0$ such that
$$
|\Phi_m(\alpha/\beta)| \geq C|\alpha/\beta|^{\phi(m)},
$$
where $\phi(m)$ is the Euler totient function.
\end{lem}
\section{Main Results}
Let $n>1$ be any integer and let $k \geq 2$ be any fixed integer. We always write $U_{nk}=X_{nk}Y_{nk}$, where $X_{nk}$ and $Y_{nk}$ are the square-free and powerful parts of $U_{nk}$, respectively. \\
Let us also take $X^\prime_{nk} = \gcd (X_{nk}, \Phi_{nk}(\alpha/\beta))$ and $ Y^\prime_{nk} = \gcd (Y_{nk}, \Phi_{nk}(\alpha/\beta))$.
\noindent We note that, by the Binet formula \eqref{bin},
\begin{align*}
U_n &=\frac{\beta^n}{\alpha-\beta}\big((\alpha/\beta)^n-1\big)\\
&=\frac{\beta^n}{\sqrt{\Delta}}\big((\alpha/\beta)^n-1\big).
\end{align*}
Thus
\begin{equation}\label{delta}
\big((\alpha/\beta)^n-1\big) \,\big|\, \sqrt{\Delta}\,U_n.
\end{equation}
We now prove the following lemma, which is similar to a result in \cite{rout2}. For completeness, we present the proof here.
\begin{lem}
Assume that the $abc$ conjecture is true for the quadratic field $\mathbb{Q}(\sqrt{\Delta})$. Then for any $\varepsilon>0$, we have
$$
|X^\prime_{nk}| |Q|^{\phi(nk)}\gg_{\varepsilon}|U_{\phi(k)}|^{2(\phi(n)-\varepsilon)}.
$$
\end{lem}
\begin{proof}
By the Binet formula \eqref{bin}, we have
\begin{equation}\label{bin2}
\sqrt{\Delta}U_{nk}-\alpha^{nk}+\beta^{nk}=0.
\end{equation}
Applying the $abc$ conjecture for the number field $K=\mathbb{Q}(\sqrt{\Delta})$ to equation \eqref{bin2}, we obtain that for any $\varepsilon>0$ there exists a constant $C_{\varepsilon}$ such that
\begin{equation}\label{abc}
H(\sqrt{\Delta}U_{nk}, -\alpha^{nk}, \beta^{nk}) \leq C_\varepsilon(rad (\sqrt{\Delta}U_{nk}, -\alpha^{nk}, \beta^{nk}))^{1+\varepsilon},
\end{equation}
where
\begin{align}
rad (\sqrt{\Delta}U_{nk}, -\alpha^{nk}, \beta^{nk}) &= \prod_{\mathfrak{p}|Q\sqrt{\Delta}U_{nk}}N(\mathfrak{p})^{ord_\mathfrak{p} (p)} \leq Q^2\Delta X^2_{nk}Y_{nk}\label{r}\\
\nonumber H(\sqrt{\Delta}U_{nk}, -\alpha^{nk}, \beta^{nk})&=\max\{|\sqrt{\Delta}U_{nk}|, |-\alpha^{nk}|, |\beta^{nk}|\}\cdot \max\{|-\sqrt{\Delta}U_{nk}|, |\alpha^{nk}|, |-\beta^{nk}|\}\\
\nonumber &\geq|\sqrt{\Delta}U_{nk}||-\sqrt{\Delta}U_{nk}|=\Delta U^2_{nk}\\
&=\Delta X^2_{nk}Y^2_{nk}.\label{h}
\end{align}
Substituting \eqref{r} and \eqref{h} in \eqref{abc}, we get
\begin{equation}\label{ec}
Y_{nk}\ll_{\varepsilon} U^{2\varepsilon}_{nk}.
\end{equation}
By equation \eqref{pab}, we have
$$
\Phi_{nk}(\alpha/\beta)=\displaystyle\frac{(\alpha/\beta)^{nk}-1}{ \displaystyle\prod_{\substack{d|nk \\ d<nk}}\Phi_{d}(\alpha/\beta)}.
$$
In particular, $\Phi_{nk}(\alpha/\beta)\mid(\alpha/\beta)^{nk}-1$, so by using \eqref{delta} we get
$$
\Phi_{nk}(\alpha/\beta)\mid U_{nk}\sqrt{\Delta}.
$$
Since $U_{nk}=X_{nk}Y_{nk}$,
$$
\Phi_{nk}(\alpha/\beta)\mid X_{nk}Y_{nk}\sqrt{\Delta}.
$$
As $\Phi_{nk}(\alpha/\beta)\nmid\sqrt{\Delta}$, we have $\Phi_{nk}(\alpha/\beta)\mid X_{nk}Y_{nk}$.
Since $\gcd(X_{nk},Y_{nk})=1$, either $\Phi_{nk}(\alpha/\beta)$ divides $X_{nk}$ or $\Phi_{nk}(\alpha/\beta)$ divides $Y_{nk}$. If $\Phi_{nk}(\alpha/ \beta)$ divides $X_{nk}$, then $X^\prime_{nk}=\gcd(X_{nk}, \Phi_{nk}(\alpha/\beta))=\Phi_{nk}(\alpha/\beta)$ and $Y^\prime_{nk}=\gcd(Y_{nk}, \Phi_{nk}(\alpha/ \beta))=1$. Similarly, if $\Phi_{nk}(\alpha/ \beta)$ divides $Y_{nk}$, we get $X^\prime_{nk} = 1$ and $Y^\prime_{nk} = \Phi_{nk}(\alpha/ \beta)$. Thus, in either case, we obtain
\begin{equation}\label{ef}
X^\prime_{nk}Y^\prime_{nk}=\Phi_{nk}(\alpha/ \beta).
\end{equation}
By Lemma \ref{rg}, we have
\begin{equation}\label{es}
|X^\prime_{nk}Y^\prime_{nk}|=|\Phi_{nk}(\alpha/\beta)| \geq C|\alpha/\beta|^{\phi(nk)}= C|\alpha^{2}/Q|^{\phi(nk)}.
\end{equation}
Hence, from equations \eqref{ec}, \eqref{es} and \eqref{eqnlem2},
\begin{align*}
|X^\prime_{nk}U^{2\varepsilon}_{nk}| &\gg_{\varepsilon}|X^{\prime}_{nk}Y_{nk}|\\
&\geq |X^\prime_{nk}Y^\prime_{nk}|\\
&\gg_{\varepsilon}\frac{1}{|Q|^{\phi(nk)}}|\alpha|^{2\phi(nk)}\\
&\gg_{\varepsilon} \frac{1}{|Q|^{\phi(nk)}} | U_{\phi(k)}|^{2\phi(n)}.
\end{align*}
Therefore,
\begin{align*}
|X^\prime_{nk}| &\gg_{\varepsilon}\frac{1}{|Q|^{\phi(nk)}}\bigg|\frac{U^{2\phi(n)}_{\phi(k)}}{U^{2\varepsilon}_{nk}}\bigg|\\
&=\frac{1}{|Q|^{\phi(nk)}}\bigg|\frac{U_{\phi(k)}^{2\varepsilon}U_{\phi(k)}^{2(\phi(n)-\varepsilon)}}{U_{nk}^{2\varepsilon}}\bigg| \\
&\gg_{\varepsilon} \frac{1}{|Q|^{\phi(nk)}} |U_{\phi(k)}|^{2(\phi(n)-\varepsilon)}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
\noindent The following lemma is inspired by \cite[Lemma 2.4]{wang}.
\begin{lem}\label{l8}
If $m<n$, then $\gcd(X^\prime_{m}, X^\prime_{n})$ is either $1$ or a power of $\sqrt{\Delta}$.
\end{lem}
\begin{proof}
Suppose, for a contradiction, that $\gcd(X^{\prime}_m, X^{\prime}_n)>1$ and that it is not a power of $\sqrt{\Delta}$. Let $\gamma\,(\neq\sqrt{\Delta})$ be a prime element of the ring of integers of $\mathbb{Q}(\sqrt{\Delta})$ such that $\gamma\mid X^{\prime}_m$ and $\gamma\mid X^{\prime}_n$. By the definitions of $X^{\prime}_m$ and $X^{\prime}_n$, we have $\gamma\mid\Phi_m(\alpha/\beta)$ and $\gamma\mid\Phi_n(\alpha/\beta)$. Since $\Phi_m(\alpha/\beta)\mid(\alpha/\beta)^m-1$ and $\Phi_n(\alpha/\beta)\mid(\alpha/\beta)^n-1$, we get $\gamma\mid(\alpha/\beta)^m-1$ and $\gamma\mid(\alpha/\beta)^n-1$. Thus $\gamma\mid(\alpha/\beta)^{\gcd(m, n)}-1$. Since $m<n$, we have $\gcd(m, n)<n$. Moreover,
$$
(\alpha/\beta)^n-1=\frac{(\alpha/\beta)^n-1}{(\alpha/\beta)^{\gcd(m, n)}-1}\cdot\big((\alpha/\beta)^{\gcd(m, n)}-1\big).
$$
As $\gcd(\Phi_n(\alpha/\beta), (\alpha/\beta)^{\gcd(m, n)}-1)=1$, we obtain
$$
\Phi_n(\alpha/\beta)\bigg|\displaystyle\frac{(\alpha/\beta)^n-1}{(\alpha/\beta)^{\gcd(m, n)}-1}.
$$
It follows that $\gamma^2\mid(\alpha/\beta)^n-1$. Thus, using \eqref{delta}, we get $\gamma^2\mid\sqrt{\Delta}U_n$. As $\gamma\neq\sqrt{\Delta}$, it follows that $\gamma^2\mid U_n$. Let $p$ be the rational prime lying below $\gamma$. Since $X_n$ and $U_n$ are rational integers, and since $\gamma\mid X^{\prime}_n$ and $X^{\prime}_n\mid X_n$, it follows that $p\mid X_n$ and $p^2\mid U_n$. This contradicts the definition of $X_n$.
\end{proof}
\noindent We recall the following lemma from \cite{ding}.
\begin{lem}(Ding \cite[Lemma 2.5]{ding})\label{lim}
For any given positive integer $k$, we have
$$
\displaystyle\sum_{n\leq x}\frac{\phi(nk)}{nk}=c(k)x+O(\log x),
$$
where $c(k)=\prod\limits_{p}\big(1-\frac{\gcd(p, k)}{p^2}\big)>0$ and the implied constant depends on $k$.
\end{lem}
The following lemma is an analogue of \cite[Lemma 2.6]{ding} for Lucas sequences.
\begin{lem}\label{l11}
Let $S=\Big\{n: |Q|^{\phi(nk)}|X^\prime_{nk}|>nk\Big\}$ and $S(x)=|S\cap[1, \, x]|$. Then
$$
S(x)\gg_{\alpha, k} x.
$$
\end{lem}
\begin{proof}
Let $T=\Big\{\displaystyle n:\, \phi(nk)>2c(k)nk/3\Big\}$ and $T(x)=|T\cap[1, \, x]|$. By equations \eqref{eqnlem2} and \eqref{ec}, we have
\begin{equation}\label{fs}
|Y^\prime_{nk}|\leq |Y_{nk}|\ll_{\varepsilon}| U_{nk}|^{2\varepsilon}\leq2|\alpha^{nk}|^{2\varepsilon}.
\end{equation}
Substituting \eqref{fs} into \eqref{es}, we obtain
\begin{equation}\label{ne}
|Q|^{\phi(nk)} |X^\prime_{nk}|\gg_{\varepsilon} |\alpha|^{2(\phi(nk)-\varepsilon nk)}.
\end{equation}
Taking $\varepsilon=c(k)/3$ in equation \eqref{ne}, we obtain
\begin{equation}\label{f0}
\displaystyle|Q|^{\phi(nk)}|X^\prime_{nk}|\gg_{\varepsilon} \displaystyle|\alpha|^{2(\phi(nk)-c(k)nk/3)}.
\end{equation}
For any $n\in T$, equation \eqref{f0} gives
$$
|Q|^{\phi(nk)} |X^\prime_{nk}| \gg_{\varepsilon} |\alpha|^{2(\phi(nk)-c(k)nk/3)}> |\alpha|^{2c(k)nk/3}\gg_{\alpha, k} |\alpha|^{2c(k)nk/3-\log(nk)/\log|\alpha|}\cdot nk>nk,
$$
where the last inequality holds for all sufficiently large $nk$. Hence there exists an integer $n_0$, depending only on $\alpha$ and $k$, such that if $n\geq n_0$ and $n\in T$, then $|Q|^{\phi(nk)}|X^\prime_{nk}|>nk$. Now we have
\begin{equation}\label{f1}
S(x) = \sum\limits_{\substack{n\leq x \\ |Q|^{\phi(nk)}|X^\prime_{nk}|>nk}}1\geq \sum\limits_{\substack{n\leq x \\ n\geq n_0 \\ n\in T }}1 = \sum\limits_{\substack{n\leq x \\ n\geq n_0 \\ \phi(nk)>2c(k)nk/3}}1.
\end{equation}
Moreover,
\begin{equation}\label{f2}
\sum\limits_{\substack{n\leq x \\ \phi(nk)\leq2c(k)nk/3}} \displaystyle\frac{\phi(nk)}{nk} \leq \sum\limits_{\substack{n\leq x \\ \phi(nk)\leq2c(k)nk/3}} \displaystyle\frac{2c(k)}{3} \leq\displaystyle\frac{2c(k)}{3}x.
\end{equation}
Hence, by Lemma \ref{lim} and equation \eqref{f2},
\begin{align*}
S(x) &\geq \sum\limits_{\substack{n\leq x \\ n\geq n_0 \\ \phi(nk)>2c(k)nk/3}}1 \\
&\gg \sum\limits_{\substack{n\leq x \\ \phi(nk)>2c(k)nk/3}}1 \\
&\geq \sum\limits_{\substack{n\leq x \\ \phi(nk)>2c(k)nk/3}}\frac{\phi(nk)}{nk} \\
&= \sum\limits_{\substack{n\leq x }}\frac{\phi(nk)}{nk}-\sum\limits_{\substack{n\leq x \\ \phi(nk)\leq2c(k)nk/3}}\frac{\phi(nk)}{nk}\\
&\geq c(k)x+O(\log x)-\displaystyle\frac{2c(k)}{3}x\gg_{\alpha,k} x.
\end{align*}
This completes the proof of Lemma \ref{l11}.
\end{proof}
\section{Proof of Main Theorem}
For any $n\in S$, there exists a prime $p_n$ such that $p_{n}\mid X^{\prime}_{nk}$ and $p_n\nmid nk$. Since $p_n\mid X^{\prime}_{nk}$ and $X^{\prime}_{nk}\mid X_{nk}$, we observe that $p_n\mid U_{nk}$ and $p_n^2 \nmid U_{nk}$. Note that $p_n \nmid PQ \Delta$ for all but finitely many of the primes $p_n$. Hence, by Lemma \ref{lem1}, we obtain
\begin{equation*}
U_{p_{n}-\legendre{\Delta}{p_n}}\not\equiv 0 \Mod { p_n^2}.
\end{equation*}
As $p_n\mid\Phi_{nk}(\alpha/\beta)$ and $p_n\nmid nk$, Lemma \ref{rm} gives $p_{n} \equiv \pm1 \Mod {nk}$. Hence, for any $n\in S$, there is a prime $p_{n}$ satisfying
\begin{align*}
U_{p_{n}-\legendre{\Delta}{p_{n}}}&\not\equiv 0 \Mod {p_{n}^2}, \\
p_{n} &\equiv\pm 1 \Mod {nk}.
\end{align*}
From Lemma \ref{l8}, we conclude that the primes $p_n$ ($n\in S$) are pairwise distinct. Thus we deduce that
\begin{align*}
\bigg|\bigg\{\text{primes } p\leq x\, : \, p \equiv\pm 1 \Mod k, \, U_{p-\legendre{\Delta}{p}}\not\equiv 0 \Mod {p^2}\bigg\}\bigg| &\geq \bigg|\bigg\{n\, :\, n\in S, \,|Q|^{\phi(nk)}|X^\prime_{nk}|\leq x\bigg\}\bigg|.
\end{align*}
Since $|Q|=|\alpha\beta|\leq|\alpha|^{2}$, we have $|Q|^{\phi(nk)}\leq|\alpha|^{2nk}$, and also $|X^\prime_{nk}|\leq |X_{nk}|\leq |U_{nk}|\leq2|\alpha|^{nk}$. Therefore, $|Q|^{\phi(nk)}|X^\prime_{nk}|\leq2|\alpha|^{3nk}$. We now get
\begin{align*}
\bigg|\bigg\{n\, :\, n\in S, \, |Q|^{\phi(nk)} |X^\prime_{nk}|\leq x\bigg\}\bigg| &\geq \bigg|\bigg\{n\, :\, n\in S, \,2|\alpha|^{3nk}\leq x\bigg\}\bigg| \\
&= \bigg|\bigg\{n\, :\, n\in S, \, n\leq \frac{\log(x/2)}{3k\log |\alpha|} \bigg\}\bigg| \\
&= \textstyle S\big(\frac{\log(x/2)}{3k \log |\alpha|}\big).
\end{align*}
Hence, by Lemma \ref{l11},
\begin{align*}
\bigg|\bigg\{\text{primes } p\leq x\, : \, p \equiv\pm 1 \Mod k,\, U_{p-\legendre{\Delta}{p}}\not\equiv 0 \Mod { p^2}\bigg\}\bigg| &\geq \textstyle S\big(\frac{\log(x/2)}{3k \log |\alpha|}\big) \\
&\gg_{\alpha, k}\log(x/2) \\
&\gg_{\alpha, k} \log x.
\end{align*}
This completes the proof.
\section{Applications of Lucas sequences in Cryptography}
Lucas sequences appear frequently in many branches of number theory, and they also have applications in cryptography, where several public-key encryption schemes based on Lucas sequences have been studied. In \cite{smith}, Smith et al.\ introduced the public-key cryptosystem \textit{LUC}, which is based on Lucas sequences, and they discussed in detail the cryptographic properties of Lucas sequences and the cryptographic strength of LUC. The major advantage of Lucas-based cryptosystems is that they are not formulated in terms of exponentiation \cite{bosma}. Later, Jiang et al.\ \cite{jiang} proposed a (probabilistic) public-key encryption scheme based on Lucas sequences and analyzed the efficiency of the proposed schemes. Recently, a cryptosystem analogous to elliptic curve cryptosystems has been developed using Lucas sequences of the second kind \cite{sarbini}. The results of this paper suggest further investigations in this direction.
\section{Conclusions}
In this paper, we considered Lucas sequences, which generalize the Fibonacci and Lucas numbers. These special sequences are related to other research areas such as chemistry and cryptography. We proved that, under the assumption of the $abc$ conjecture for number fields, there are $\gg\log x$ Lucas non-Wieferich primes $p\leq x$ such that $p\equiv\pm1 \Mod{k}$, for any fixed integer $k\geq2$. We hope that our results will lead to further investigations in cryptography, especially in public-key (asymmetric) cryptography, and to applications in engineering and technology.

\noindent \textbf{Acknowledgment.} The author I. Mumtaj Fathima would like to express her gratitude to the Maulana Azad National Fellowship for minority students, UGC. This research work is supported by MANF-2015-17-TAM-56982, University Grants Commission (UGC), Government of India.
\end{document}
\begin{document}
\title{Equivariant vector bundles on varieties with codimension-one orbits}
\begin{abstract}
Let $G$ be an algebraic group and let $X$ be a smooth $G$-variety with two orbits: an open orbit and a closed orbit of codimension $1$. We give an algebraic description of the category of $G$-equivariant vector bundles on $X$ under a mild technical hypothesis. We deduce simpler classifications in the special cases of line bundles and vector bundles which are generically local systems. We apply our results to the study of admissible representations of semisimple Lie groups. Our main result gives a new set of constraints on the associated cycles of unipotent representations.
\end{abstract}
\tableofcontents
\section{Introduction}
\subsection{Summary}
\label{ss:summary}
Work over an algebraically closed field $k$ of characteristic zero. Consider a variety $X$ acted on by an algebraic group $G$. Make the following assumptions:
\begin{itemize}
\item $X$ is smooth.
\item $X$ consists of one open $G$-orbit and one closed $G$-orbit of codimension one.
\end{itemize}
Then our main result (Theorem~\ref{thm-main}) gives an algebraic description of the category of $G$-equivariant vector bundles on $X$, provided that $X$ satisfies a technical hypothesis called `fastenedness' (Definition~\ref{def:fastened}). In the rest of the paper, we deduce simpler classifications in the special cases of line bundles (Proposition~\ref{prop:linebundles}) and vector bundles which are generically $G$-equivariant local systems (Corollary~\ref{cor-locsys}). We also use the main result to describe $G$-equivariant vector bundles on \emph{weakly normal} $G$-varieties obtained by gluing together those of the form described above (Section~\ref{sec:wn}). As an application, we study the coherent sheaves which arise as associated graded modules of $(\mf{g}, K)$-modules, when the pair $(\mf{g}, K)$ arises from a real reductive group (Section~\ref{sec:application}). In the rest of this introduction, we discuss Theorem~\ref{thm-main} in the larger context of `gluing of categories,' and we describe some of these applications in more detail.
\subsection{The idea of gluing}
\label{intro-gluing}
Let $X$ be as in the first paragraph of~\ref{ss:summary}, and write $X = U \cup Z$ where $U$ is the open $G$-orbit and $Z$ is the closed $G$-orbit of codimension one, equipped with the reduced subscheme structure. Upon choosing basepoints $z \in Z$ and $u \in U$, it is easy to describe the categories of $G$-equivariant vector bundles on $Z$ and $U$ separately:
\e{
\Vect^G(Z) &\simeq \Rep(G^z) \\
\Vect^G(U) &\simeq \Rep(G^u),
}
where $G^z$ and $G^u$ are the stabilizer groups. It is natural to ask whether $\Vect^G(X)$ can be constructed by `gluing' the two categories displayed above. Motivation for this idea comes from Beilinson's description~\cite{Beilinson1987} of perverse (constructible) sheaves on a scheme $X$ which is decomposed as $U \cup Z$ where $Z = V(f)$ for some regular function $f \in \oh_X$. That description says that a perverse sheaf $\mc{F}$ on $X$ can be encoded as a quadruple $(\mc{F}_U, \mc{F}_Z, u, v)$ such that
\begin{itemize}
\item $\mc{F}_U$ and $\mc{F}_Z$ are perverse sheaves on $U$ and $Z$, respectively.
\item $u$ and $v$ are maps
\[
\Psi_f(\mc{F}_U) \xrightarrow{u} \mc{F}_Z \xrightarrow{v} \Psi_f(\mc{F}_U)
\]
where $\Psi_f(-)$ is the nearby cycles functor. (We have ignored the Tate twist.) The composition $u \circ v$ must equal $1 - (\text{monodromy endomorphism})$.
\end{itemize} This description has the good feature that it involves only the categories of interest on $U$ and $Z$, along with a functor between them (and an endomorphism of this functor). For vector bundles, the situation is not as nice. The standard approaches to gluing quasicoherent sheaves (e.g.\ recollement of categories) require knowing about sheaves on $X_{\wh{Z}}$ (the formal completion of $X$ along $Z$). For example, Beauville--Laszlo gluing~\cite{bl} implies that a $G$-equivariant vector bundle on $X$ can be encoded as a triple $(\mc{F}_U, \mc{F}_{\wh{Z}}, \alpha)$ where \begin{itemize} \item $\mc{F}_U$ and $\mc{F}_{\wh{Z}}$ are $G$-equivariant vector bundles on $U$ and $X_{\wh{Z}}$, respectively. \item $\alpha$ is an isomorphism between their restrictions to $X_{\wh{Z}} \setminus Z$. \end{itemize} Since $X_{\wh{Z}}$ is not easy to compute with in practice, we sought a different description of $\Vect^G(X)$ which does not involve $X_{\wh{Z}}$. In some special cases, knowledge about $X_{\wh{Z}}$ can be gleaned from looking at the action of $G$ on the normal bundle $\mc{N}_{Z/X}$. For instance, if $G$ is reductive and $X$ is affine, then the Luna Slice Theorem provides a one-dimensional $G^z$-invariant locally closed subvariety $\ell \subset X$ which contains $z$ and is transverse to $Z$, together with a $G^z$-equivariant \'etale map $\ell \to \mc{N}_{Z/X}|_z$. Combined with the previous paragraph, this essentially reduces the problem to studying $G$-equivariant vector bundles on $\mc{N}_{Z/X}$. However, in our desired application $X$ is usually not affine, so this reduction does not apply. In our desired application, something weaker is true (Proposition~\ref{prop:chXfastened}): there is a line $\ell \subset X$ which intersects $Z$ transversely, is invariant under a one-parameter subgroup $\gamma : \gm \hra G^z$, and is acted upon nontrivially by $\gamma(\gm)$. The existence of such a pair $(\ell, \gamma)$ is equivalent to the `fastenedness' hypothesis of Theorem~\ref{thm-main}. Starting from a choice of $(\ell, \gamma)$ as above, we arrive at a description of $\Vect^G(X)$ which bears little resemblance to `gluing.' Instead, it should be thought of in the following terms. If we pull back $\mc{F} \in \Vect^G(X)$ along $\ell \hra X$, two algebraic structures emerge: \begin{itemize} \item $\mc{F}|_{\ell}$ is equivariant with respect to a $\gm$-action which comes from the subgroup $\gamma$. Recall that the category of $\gm$-equivariant vector bundles on $\BA^1$ is equivalent to the category of finite-dimensional vector spaces equipped with exhausting filtrations, via the Artin--Rees construction (Proposition~\ref{ar}). \item Each fiber of $\mc{F}|_{\ell}$ is acted on by the stabilizer group of the corresponding point in $\ell$. \end{itemize} The classifying data defined in Theorem~\ref{thm-main} are a combination of these two bullet points. However, in the case of line bundles, it is possible to reformulate the main result in terms which more closely resemble `gluing via nearby cycles' see Lemma~\ref{lem-nearby}. The possibility of doing so reflects that equivariant quasicoherent sheaves are more `rigid' than arbitrary quasicoherent sheaves. \subsection{Applications} Suppose $G_{\mathbb{R}}$ is a real reductive group and $K_{\mathbb{R}} \subset G_{\mathbb{R}}$ is a maximal compact subgroup. 
Let $K$ be the complexification of $K_{\mathbb{R}}$ and let $\mathcal{N}$ be the cone of nilpotent elements in the complexified Lie algebra of $G_{\mathbb{R}}$. The determination of the irreducible unitary representations of $G_{\mathbb{R}}$ is one of the major unsolved problems in representation theory. There is evidence to suggest that every such representation can be constructed through a sequence of well-understood operations from a finite set of building blocks, called the \emph{unipotent representations}. These representations are `attached' (in a certain mysterious sense) to the nilpotent orbits of $G_{\mathbb{R}}$ on the dual space of its Lie algebra.

Attached to every irreducible admissible representation $V$ of $G_{\mathbb{R}}$ (for example, every irreducible unitary representation) is a Harish-Chandra module $M$, which has the structure of an irreducible $K$-equivariant $\mathfrak{g}$-module. Taking its associated graded gives a well-defined class $[\gr(M)]$ in the Grothendieck group of $K$-equivariant coherent sheaves on $\mathcal{N}$. This class provides valuable information about $V$, including, for example, its $K_{\mathbb{R}}$-types. Determining the classes $[\gr(M)]$ associated to unipotent representations is a fundamental problem with origins in the seminal paper of Vogan (\cite{Vogan1991}).

In Section~\ref{sec:application}, we define the $K$-chain $\overline{\mathrm{Ch}}(M)$ associated to $M$. This is the variety with $K$-action obtained by first deleting the orbits of codimension $\ge 2$ from the reduced support of $\gr(M)$ and then passing to the projectivization (see~\ref{gk-chain} for details). The class $[\gr(M)]$ gives rise to a class $[\overline{\gr}(M)]$ on $\overline{\mathrm{Ch}}(M)$, and when $V$ is unipotent, we conjecture that $[\overline{\mathrm{gr}}(M)]$ determines $[\gr(M)]$.

Our main result in Section \ref{sec:application} is that $\overline{\mathrm{Ch}}(M)$ is fastened (Proposition \ref{prop:chXfastened}). To prove this remarkable fact, we use in an essential way the special structure of $\mathcal{N}$ (e.g. the existence of `real' Slodowy slices with nice contracting $\gm$-actions). Since $\overline{\mathrm{Ch}}(M)$ is fastened, we can understand $\overline{\mathrm{gr}}(M)$ (and by extension $\gr(M)$ and $V$) using Theorem \ref{thm-main} (and its corollaries).
\subsection{Acknowledgments}
The authors would like to thank Roman Bezrukavnikov and David Vogan for many helpful discussions. The second author is supported by an NSF GRFP grant.
\section{Chains}
\label{sec:chains}
\subsection{Basic definitions}
\label{s-def-chains}
Let $k$ be an algebraically closed field of characteristic zero, and let $G$ be an affine algebraic group over $k$.
\begin{defn}\label{def:chain}
A $G$-chain is a $k$-variety\footnote{For us, a `variety' is a reduced, separated, finite type $k$-scheme. It is not necessarily irreducible.} $X$ equipped with an action by $G$ such that
\begin{enumerate}
\item\label{cond:chain1} $G$ acts on $X$ with finitely many orbits.
\item\label{cond:chain2} $X$ is pure dimensional.
\item\label{cond:chain3} Each $G$-orbit on $X$ has codimension $0$ or $1$.
\end{enumerate}
\end{defn}
These properties imply that every $G$-orbit on $X$ is either open or closed. Each open orbit $U \subset X$ has codimension 0, and each closed orbit $Z \subset X$ has codimension 1.
\begin{ex}
\label{ex-X}
Let $G = \gm$, and $X = \Spec \mathbb{C}[x,y]/(xy)$.
Let $G$ act on $X$ by the formulas $$t \cdot x = t^2x \qquad t \cdot y = t^{-2} y \qquad t \in \gm$$ Then $X$ is a $G$-chain. It has two open $G$-orbits (the punctured axes), and one closed $G$-orbit ($\{0\}$). This example is closely related to the (infinite-dimensional) representation theory of $\on{SL}_2(\mathbb{R})$, see Example~\ref{ex-su}. \end{ex} We say that a $G$-chain is \emph{irreducible} if it has only one open $G$-orbit. (Note that an irreducible $G$-chain need not be irreducible as a $k$-variety if $G$ is not connected.) For an arbitrary $G$-chain $X$, we define its \emph{irreducible components} to be the closures of the open orbits of $X$, equipped with the reduced subscheme structures. These are precisely the irreducible closed sub-chains of $X$ whose dimensions equal $\dim X$. We say that a $G$-chain is \emph{simple} if it is irreducible and has at most one closed $G$-orbit. If $X$ is irreducible, then the simple locally-closed sub-chains form an open cover of $X$. If $X$ is a $G$-chain, then its normalization $\tilde{X}$ is again a $G$-chain. Any normal $G$-chain is smooth, since its singular locus is a subset of codimension $\ge 2$ and hence empty. Any smooth $G$-chain is the disjoint union of its irreducible components. In particular, the normalization of an irreducible $G$-chain is a smooth, irreducible $G$-chain. \subsection{Fastened chains} \begin{defn}\label{def:fastened} We define the property of `fastenedness' for $G$-chains in three steps: \begin{itemize} \item If $X$ is a smooth irreducible $G$-chain, we say that a closed $G$-orbit $Z \subset X$ is \emph{fastened} to the open orbit $U \subset X$ if $G$ acts transitively on the punctured normal bundle \[ \mc{N}^{\circ}_{Z / X} := \mc{N}_{Z / X} \setminus (\text{zero section}). \] \item If $X$ is a smooth irreducible $G$-chain, we say that $X$ is \emph{fastened} if each closed $G$-orbit $Z \subset X$ is fastened to the open orbit. \item If $X$ is any $G$-chain, we say that $X$ is \emph{fastened} if each irreducible component of the normalization of $X$ is fastened. \end{itemize} The terminology is set up so that it is clear what it means for a closed orbit $Z \subset X$ to be fastened to one open orbit which is adjacent to $Z$ but not to another. \end{defn} \begin{rmk} \label{rmk-fastened-normal} Suppose that $X$ is a smooth irreducible $G$-chain and that $Z \subset X$ is a closed $G$-orbit. For any closed point $z \in Z$, the stabilizer subgroup $G^z \subset G$ acts linearly on the (one-dimensional) fiber $\mc{N}_{Z_i / X}|_{z}$. This action defines a character $G^z \to \gm$, which is surjective if and only if $Z$ is fastened to the open orbit of $X$. \end{rmk} \begin{prop}\label{prop:affinefastened} Every affine $G$-chain is fastened. \end{prop} \begin{proof} Let $X$ be an affine $G$-chain. Since normalization maps of finite type schemes are finite, hence affine, we may assume that $X$ is also smooth and irreducible. Pick an arbitrary closed $G$-orbit $Z \subset X$. We shall show that $Z$ is fastened to the open orbit of $X$. Choose a point $z \in Z$. Assume for sake of contradiction that the character $G^z \to \gm$ defined in Remark~\ref{rmk-fastened-normal} is not surjective. Then it factors through some finite subgroup $\mu_r \subset \gm$ where $\mu_r$ is the group of $r$-th roots of unity. Choose a linear coordinate $\mc{N}_{Z/X}|_z \simeq \BA^1_t$, and define the regular function \[ f : \mc{N}_{Z/X} \to \BA^1 \] via the formula $f(t) = t^r$. 
Since $\mc{N}_{Z/X}$ is the associated bundle produced by the homogeneous space $G / G^z \simeq Z$ and the action $G^z \acts \mc{N}_{Z/X}|_z$, the function $f$ defined above extends to a unique $G$-invariant function $\tilde{f}$ on $\mc{N}_{Z/X}$. By construction, it is nonconstant along the fibers of $\mc{N}_{Z/X}$. Let \[ X_h := \on{Bl}_{Z \times \{0\}}(X \times \BA^1_h) \setminus (X \times \{0\})^{\sim} \to \BA^1_h \] be the deformation to the normal cone of $Z \subset X$. (This construction appears again in~\ref{I}.) Since $X$ is affine, so is $X_h$. Since $G$ is reductive, the functor of $G$-invariants is exact, so we may non-uniquely extend $\tilde{f}$ to a $G$-invariant regular function $\tilde{f}_h$ on $X_h$. Then $\tilde{f}_h$ is constant along the $G$-orbits $U \times \{a\} \subset X_h$ and hence along their limit, which is $\mc{N}_{Z/X}^{\circ}$.\footnote{This argument uses that $X_h \to \BA^1_h$ is flat.} But $\tilde{f}_h|_{\mc{N}_{Z/X}^\circ} = \tilde{f}$ by construction, so this constancy contradicts the previous paragraph. \end{proof} Outside the world of affine varieties, there are many examples of non-fastened chains. Here is one: \begin{ex} Let $G = SL_2(\mathbb{C})$ and let $V$ be its four-dimensional irreducible representation. $V$ is identified with the space of homogeneous degree-3 polynomials in the variables $x$ and $y$. Let $U$ be the 3-dimensional $G$-orbit passing through the element $x^2y \in V$. There are two $G$-orbits in its boundary: $\{0\}$ and the 2-dimensional $G$-orbit $Z$ passing through the element $z=x^3 \in V$. Let $X$ be the irreducible chain $U \cup Z$. Note that \[ G^z = \left\{\begin{pmatrix} \xi & a \\ 0 & \xi^{-1}\end{pmatrix}: \xi^3=1\right\}. \] The variety $X$ is singular, but its normalization $\tilde{X}$ is smooth. For any closed orbit $\tilde{Z} \subset \tilde{X}$, the natural map $\tilde{Z} \to Z$ is a $G$-equivariant cover. In particular, for any $\tilde{z} \in \tilde{Z}$, we have an inclusion of stabilizer groups \[ G^{\tilde{z}} \subseteq G^z = \left\{\begin{pmatrix} \xi & a \\ 0 & \xi^{-1}\end{pmatrix}: \xi^3=1\right\}. \] Thus $G^{\tilde{z}}$ does not contain a torus, so Remark~\ref{rmk-fastened-normal} implies that $X$ is not fastened. \end{ex} \subsection{Fastening data} We will show that fastened chains are glued together (or `fastened') by certain $\gm$-invariant curves, as described below: \begin{defn}\label{def:fasteningdatum} Let $X$ be a smooth irreducible $G$-chain. Let $U$ be the open orbit of $X$, and let $Z$ be some closed orbit. A \emph{fastening datum} for $(U, Z)$ is a pair $(\gamma, \ell)$ given as follows: \begin{itemize} \item $\gamma: \gm \to G$ is a co-character. \item $\ell: \BA^1 \hra X$ is a locally-closed embedding. \end{itemize} We require $(\gamma, \ell)$ to satisfy the following properties: \begin{enumerate} \item $\ell^{-1}(Z) = \{0\}$ and $\ell$ is transverse to $Z$. \item $\gamma(\gm)$ leaves $\ell(\BA^1) \subset X$ invariant and acts transitively on $\ell(\BA^1 \setminus \{0\})$. \end{enumerate} \end{defn} \begin{prop}\label{prop:fasteningdata} In the setting of Definition \ref{def:fasteningdatum}, $Z$ is fastened to $U$ if and only if there is a fastening datum $(\gamma,\ell)$ for $(U,Z)$. \end{prop} \begin{proof} First, assume that $Z$ is fastened to $U$. Fix a closed point $z \in Z$. By Remark~\ref{rmk-fastened-normal}, the action $G^z \acts \mc{N}_{Z/X}|_z$ occurs via a surjective character $G^z \xrightarrow{\chi} \gm$. Let $L^z \subset G^z$ be the identity component of a Levi subgroup of $G$. 
Since $\ker(\chi)$ contains the unipotent radical of $G^z$, the surjectivity of $\chi$ implies that $L^z \xrightarrow{\chi} \gm$ is also surjective. Since one-dimensional tori are dense in reductive groups, there is a one-dimensional torus $T^z \subset L^z$ such that $T^z \xrightarrow{\chi} \gm$ is nontrivial (hence surjective). Since $T^z$ is reductive, the projection map $\mc{T}_zX \to \mc{N}_{Z/X}|_z$ has a $T^z$-invariant section. The image of this section is a line $k \cdot v \subset \mc{T}_z X$ on which $T^z$ acts by the same nontrivial character $\chi$. By Sumihiro's theorem on torus actions, which applies since $X$ is smooth and hence normal, there exists a $T^z$-invariant open affine subscheme $W_z \subset X$ which contains $z$. Applying the Luna Slice Theorem to the $T^z$-orbit given by $\{z\} \subset W_z$, we conclude (possibly upon replacing $W_z$ by a smaller $T^z$-invariant affine open subscheme) that there is a $T^z$-equivariant \'etale map $\varphi : W_z \to \mc{T}_zX$ which induces an isomorphism on tangent spaces at $z \in W_z$. Consider the base change \begin{cd}[column sep = 0.6in] C \arrow[r,hookrightarrow] \arrow[d] & W_z \arrow[d,"\varphi"]\\ k \cdot v \arrow[r,hookrightarrow, "\text{cl.emb.}"] & \mc{T}_zX \end{cd} where the bottom horizontal map was constructed in the previous paragraph. Certainly $C$ contains $z \in W_z$ because $k \cdot v$ contains $0 \in \mc{T}_z X$. Let $C_z \subset C$ be the connected component of $C$ which contains $z$. The map $C_z \to k \cdot v$ is \'etale and $\gm$-equivariant, so the previous two sentences imply that it is an isomorphism.\footnote{Proof: Since $C_z$ is a smooth affine connected curve equipped with a $\gm$-action with a closed orbit $\{z\}$, it is isomorphic to $\BA^1$ with some multiple of the standard $\gm$-action. Since the map $C_z \to k \cdot v$ induces an isomorphism on tangent spaces at $z \in C_z$, this multiple must be the same as that given by $\chi|_{T^z}$. Now the map $C_z \to k \cdot v$ is determined by its behavior on the open $\gm$-orbits; there it is an isomorphism, and one can easily check that this implies the claim.} We define $\ell$ to be the composition \[ \BA^1 \simeq k \cdot v \simeq C_z \xhookrightarrow{\text{cl.emb.}} W_z \xhookrightarrow{\text{open emb.}} X, \] and we define $\gamma$ to be the composition \[ \gm \simeq T^z \hra G^z \hra G \] where $\gm \simeq T^z$ is an arbitrary isomorphism. Then $(\gamma, \ell)$ is a fastening datum for $(U, Z)$. Conversely, assume that $(\gamma, \ell)$ is a fastening datum for $(U, Z)$. Definition~\ref{def:fasteningdatum}(2) implies that the map $\BA^1 \xrightarrow{\ell} X$ is $\gm$-equivariant with respect to some nontrivial linear action $\gm \acts \BA^1$ (which is not necessarily the standard one). Passing to tangent spaces, we get a $\gm$-equivariant map \[ \BA^1 \simeq \mc{T}_0 \BA^1 \xrightarrow{d\ell} \mc{T}_zX \] where the action of $\gm \acts \mc{T}_zX$ is via the cocharacter $\gamma : \gm \to G^z$ (this uses Definition~\ref{def:fasteningdatum}(1)). By the transversality statement in Definition~\ref{def:fasteningdatum}(2), the composition \[ \BA^1 \simeq \mc{T}_0 \BA^1 \xrightarrow{d\ell} \mc{T}_zX \to \mc{N}_{Z/X}|_z \] is nonzero, hence an isomorphism. It is also $\gm$-equivariant, so we conclude that the action $\gamma(\gm) \acts \mc{N}_{Z/X}|_z$ is nontrivial. Hence, the action $G^z \acts \mc{N}_{Z/X}|_z$ is nontrivial. By Remark~\ref{rmk-fastened-normal}, this implies that $Z$ is fastened to the open orbit in $X$. 
\end{proof} \section{Equivariant sheaves on smooth simple chains}\label{sec:main} \subsection{Overview} In this section, $X = U \cup Z$ is a smooth $G$-chain with one open and one closed orbit. Assume that $X$ is fastened, and let $(\gamma, \ell)$ be a fastening datum (Definition~\ref{def:fasteningdatum}). Given this datum, we will describe the category $\QCoh^G_{\tf}(X)$ of $G$-equivariant torsion-free quasicoherent sheaves on $X$ in algebraic terms (Theorem~\ref{thm-main}). The main idea is that the line $\ell : \BA^1 \to X$ functions as a `basepoint' which sees both the open and the closed orbit of $X$. More precisely, we will explain how $X$ is related to the `homogeneous space' given by $G \times \BA^1$ modulo the stabilizer scheme of $\ell(\BA^1)$, see Lemma~\ref{lem-gs}. The special case when $G \acts X$ is the standard (weight one) action $\gm \acts \BA^1$ will feature prominently in our analysis. Here, the desired classification is elementary and well-known: \begin{prop}[Artin--Rees] \label{ar} Let $\mathrm{Vect}^{\mathrm{filt}}$ be the category of vector spaces equipped with exhausting (but not necessarily separating) $\mathbb{Z}$-filtrations. There is a natural equivalence of monoidal categories: \[ \QCoh^{\gm}_{\tf}(\BA^1) \simeq \mathrm{Vect}^{\mathrm{filt}} \] \end{prop} \begin{proof} The category $\QCoh^{\gm}(\BA^1)$ is equivalent (via global sections) to the category of graded modules over the algebra $k[x]$, where the generator $x$ is placed in degree 1. Each such module is encoded by a diagram \[ \cdots \to V_i \to V_{i+1} \to \cdots \] of vector spaces, where the maps are arbitrary. The module is torsion-free if and only if the maps are all injective. In this case, the diagram of vector spaces is equivalent to the datum of a single vector space $V = \colim_i V_i$ equipped with the exhausting filtration $F$ defined by $F^{\ge i}V = V_i$. The reverse direction is clear. \end{proof} We will often refer to this equivalence as the Artin--Rees construction. Since filtrations already appear here, it is unsurprising that they also appear in the statement of the main classification result (Theorem~\ref{thm-main}). \begin{rmk} \label{rmk-monoidal} We now explain how we propose to get `algebraic' descriptions of categories of equivariant sheaves. The appearance of the word `monoidal' in the proposition implies that we get analogous equivalences involving algebro-geometric objects defined over $\BA^1 / \gm$. For example, considering algebra objects on both sides yields the equivalence \[ \on{Sch}^{\on{aff}, \tf}_{\BA^1 / \gm} \simeq (\text{filtered rings})^{\op} \] where the left hand side is the category of affine schemes over $\BA^1 / \gm$ for which the structure map $S \to \BA^1 / \gm$ is flat. \collapse This equivalence intertwines the Cartesian monoidal structure on the left hand side with the $\otimes$ monoidal structure on the right hand side. Passing to group objects with respect to these monoidal structures, we obtain a (contravariant) equivalence between affine group schemes flat over $\BA^1 / \gm$ and filtered Hopf algebras. This list of equivalences continues indefinitely, but essentially we will only need one more observation: given such a group scheme $\mc{G} \to \BA^1 / \gm$ corresponding to a filtered Hopf algebra $A$, the category of $\gm$-equivariant torsion-free quasicoherent sheaves on the stack $\BA^1 / \mc{G}$ is equivalent to the category of filtered comodules for $A$. This is applied in Definition~\ref{claim1}. 
\end{rmk} \subsubsection{Further comments} In the above proposition, the `torsion-free' requirement is equivalent to flatness. Similarly, a $G$-equivariant sheaf on $X$ is torsion-free if and only if it is flat, because $Z$ has codimension one. In the rest of this section, we use `torsion-free' and `flat' interchangeably. In this section, a (back)slash means a quotient in the stack-theoretic sense. To clarify what is canonical and what is not, let us explain where the fastening datum $(\gamma, \ell)$ is used. In~\ref{I}, the fastening datum is not used, but Lemma~\ref{deform} relies on the assumption that $X$ is fastened. In~\ref{slice-line}, the line $\ell$ (satisfying Definition~\ref{def:fasteningdatum}(1)) is used to define the stabilizer scheme $G^s$. From~\ref{slicing-gm} onward, we use the full datum $(\gamma, \ell)$. \subsection{The punctured ideal sheaf} \label{I} Let $\I_{Z/X}$ be the ideal sheaf of $Z \subset X$, which is a line bundle on $X$. Consider its total space $ \curSpec (\curSym_{\oh_X}^\cdot \I_{Z/X}^\vee) $ which we (abusively) also denote $\I_{Z/X}$. Let \[ \I^\circ_{Z/X} := \I_{Z/X} \setminus (\text{zero section}) \] be the punctured ideal sheaf. Since $Z$ is $G$-invariant in $X$, there are commuting actions \begin{equation*} G \acts \I^\circ_{Z/X} \lacts \gm \end{equation*} where $\gm$ acts by dilation on the fibers. Since $\I^\circ_{Z/X} / \gm \simeq X$, we deduce that \begin{equation}\label{equiv1} \QCoh^G(X) \simeq \QCoh^{G \times \gm}(\I^\circ_{Z/X}) \simeq \QCoh^{\gm}(G \back \I^\circ_{Z/X}). \end{equation} Our goal in this and the next subsection is to explain how $G \back \I^\circ_{Z/X}$ is related to the stabilizer scheme of the line $\ell : \BA^1 \to X$. \begin{rmk} \label{rmk-I-geo} We point out two facts about the geometry of $\I^\circ_{Z/X}$. \begin{enumerate}[label=(\roman*)] \item $\I^\circ_{Z/X}$ is closely related to the deformation of $X$ to the normal cone of $Z \subset X$. To see this, note that the injection of sheaves $\I_{Z/X} \hra \oh_X$ defines a (noninjective) map of varieties \[ \I_{Z/X} \xrightarrow{p} X \times \BA^1. \] It is easy to check that this is isomorphic to the map \[ \on{Bl}_{Z \times \{0\}}(X \times \BA^1) \setminus (Z \times \BA^1)^{\sim} \to X \times \BA^1 \] where the left hand side is obtained by taking the blow-up of $X \times \BA^1$ along $Z \times \{0\}$ and deleting the proper transform of $Z \times \BA^1$. On the other hand, the deformation to the normal cone is given by \[ \on{Bl}_{Z \times \{0\}}(X \times \BA^1) \setminus (X \times \{0\})^{\sim} \to X \times \BA^1. \] This is not the same as the previous map, but the deformation to the normal cone contains the open subscheme \[ \on{Bl}_{Z \times \{0\}}(X \times \BA^1) \setminus \Big((X \times \{0\})^{\sim} \sqcup (Z \times \BA^1)^{\sim}\Big) \] which is isomorphic to $\I^\circ_{Z/X}$ as schemes over $X \times \BA^1$. \item Note that $\mc{I}_{Z/X}^\circ|_{Z}$ is the punctured conormal bundle of $Z$, while the fiber over $0 \in \BA^1$ of the scheme displayed in the last line above is the punctured normal bundle of $Z$. This apparent contradiction is resolved by the observation that the punctured total spaces of $\mc{L}$ and $\mc{L}^{\vee}$ are canonically isomorphic (but not in a $\gm$-equivariant way), for any line bundle $\mc{L}$. Thus, there is no harm in writing $\mc{I}_{Z/X}^\circ|_Z \simeq \mc{N}_{Z/X}^\circ$. \end{enumerate} \end{rmk} Define the `projection' map $ \pi = \mathrm{pr}_2 \circ p : \I^\circ_{Z/X} \to \BA^1. 
$ \begin{lem} \label{deform} We have the following: \begin{enumerate}[label=(\roman*)] \item $\pi$ is smooth and $G \times \gm$-equivariant, where the action $(G \times \gm) \acts \BA^1$ is given by $(\on{triv}, \on{standard})$. \item As $G$-varieties, the closed fibers of $\pi$ are given by \[ \pi^{-1}(c) \simeq \begin{cases} U &\text{ if } c \neq 0 \\ \mc{N}^{\circ}_{Z/X} &\text{ if } c = 0. \end{cases} \] In particular, $G$ acts transitively on the fibers of $\pi$. \end{enumerate} \end{lem} \begin{proof} The map $\pi$ is $G \times \gm$-equivariant because $p$ and $\pr_2$ are both $G \times \gm$-equivariant. The description of fibers of $\pi$ follows from Remark~\ref{rmk-I-geo} and the transitivity of the $G$-action on fibers follows from the assumption that $X$ is fastened. This proves (ii). Since the domain and target of $\pi$ are smooth, and each fiber has dimension equal to $\dim X$, the Miracle Flatness Theorem implies that $\pi$ is flat. Since each fiber is also smooth, this implies that $\pi$ is smooth (since $\pi$ is finitely presented), which proves (i). \end{proof} \subsection{The group scheme \texorpdfstring{$G^s$}{Gs} associated to a fastening datum}\label{slice-line} The choice of a line $\ell : \BA^1 \to X$ which satisfies Definition~\ref{def:fasteningdatum}(1) is equivalent to the choice of a section $s : \BA^1 \to \I^\circ_{Z/X}$ of the projection map $\pi$. The forward direction of this equivalence sends $\ell$ to the proper transform of $(\ell, \id_{\BA^1}) : \BA^1 \hra X \times \BA^1$ under the blow-up considered in Remark~\ref{rmk-I-geo}(i), and the transversality hypothesis is needed to ensure that the proper transform lands in the open subscheme $\I^\circ_{Z/X}$. In the rest of this section, we will work in the category of schemes (or stacks) over $\BA^1$. (Starting from the next subsection, these objects will also be $\gm$-equivariant, i.e.\ they descend to $\BA^1 / \gm$.) From this point of view, the section $s$ functions as a `basepoint' for $\mc{I}^\circ_{Z/X}$, which we view as a homogeneous space for the group scheme $G \times \BA^1$. The rest of this subsection makes this statement precise. Define the map $a$ via the commutative diagram \begin{cd} G \times \BA^1 \ar[d, hookrightarrow, swap, "\id_G \times s"] \ar[rd, "a"] \\ G \times \I^\circ_{Z/X} \ar[r, "\act"] & \I^\circ_{Z/X} \end{cd} where $\act$ is the $G$-action on $\I^{\circ}_{Z/X}$. Define $G^s$ as the fibered product \begin{cd} G^s \ar[r, hookrightarrow] \ar[d, swap] & G \times \BA^1 \ar[d, "a"] \\ \BA^1 \ar[r, hookrightarrow, "s"] & \I^\circ_{Z/X} \end{cd} \begin{lem}\label{lem-gs} Work in the category of schemes over $\BA^1$. \begin{enumerate}[label=(\roman*)] \item $G^s$ is smooth over $\BA^1$. \item The map $G^s \hra G \times \BA^1$ expresses the former as a group subscheme of the latter. \item The quotient $(G \times \BA^1) / G^s$ taken in the smooth topology (or any finer one) identifies with $\I^\circ_{Z/X}$ via the map $a$. \end{enumerate} \end{lem} \begin{proof} (ii). The indicated map $G^s \hra G \times \BA^1$ is a closed embedding because the map $s$ is a closed embedding. The `group subscheme' property follows from the definition of group action applied to $(G \times \BA^1) \acts \I^\circ_{Z/X}$. (i). The Miracle Flatness Theorem implies that $a$ is flat, since the source and target of $a$ are smooth varieties, and Lemma~\ref{deform}(ii) ensures that the fibers have the correct dimensions. 
Furthermore, $a$ is finitely presented, and its fibers are smooth because they are isomorphic to stabilizer group schemes, and group schemes are smooth in characteristic zero. Therefore $a$ is smooth. Since $G^s \to \BA^1$ is obtained from $a$ by base change, it is smooth as well. (iii). The following statement is well-known in algebraic geometry: \begin{itemize} \item Let $S$ be a scheme, and work in the category of sheaves of groupoids on the category of affine schemes over $S$ (denoted $\on{Sch}^{\on{aff}}_{/S}$) with respect to some Grothendieck topology. Let $\mc{G}$ be a group object, and let $\mc{G} \acts \mc{I}$ be a left action on some other object $\mc{I}$. Let $s : S \to \mc{I}$ be a section, let $a : \mc{G} \to \mc{I}$ be the action map, and let $\mc{G}^s$ be the stabilizer subgroup of $s$ (defined as above). If $a$ is a cover, then $\mc{G} / \mc{G}^s \simeq \mc{I}$. \end{itemize} This implies (iii) by taking $S = \BA^1$, working in the smooth topology, and taking $\mc{G} = G \times \BA^1$ and $\mc{I} = \mc{I}^\circ_{Z/X}$. Our proof of (i) shows that $a$ is a smooth cover, so the statement applies. For sake of completeness, we prove the statement in the bullet point. Since $a$ is a cover, the geometric realization of the Cech nerve of $a$ is isomorphic to $\mc{I}$. The Cech nerve of $a$ is the simplicial object \begin{cd} \cdots \ar[r, shift left = 2] \ar[r, shift left = -2] \ar[r] & \mc{G} \underset{\mc{I}}{\times} \mc{G} \ar[r, shift left = 1] \ar[r, shift right = 1] & \mc{G} \end{cd} It suffices to show that this is isomorphic to the simplicial object which encodes the right action of $\mc{G}^s$ on $\mc{G}$, shown below: \begin{cd} \cdots \ar[r, shift left = 2] \ar[r, shift left = -2] \ar[r] & \mc{G} \times \mc{G}^s \ar[r, shift left = 1] \ar[r, shift right = 1] & \mc{G} \end{cd} At the level of objects, this isomorphism is defined by the maps \[ \mc{G} \times \mc{G}^s \times \cdots \times \mc{G}^s \to \mc{G} \underset{\mc{I}}{\times} \mc{G} \underset{\mc{I}}{\times} \cdots \underset{\mc{I}}{\times} \mc{G} \] given by the formula \[ (g_1, g_2, \ldots, g_n) \mapsto (g_1, g_1g_2, \ldots, g_1 g_2 \cdots g_n), \] where $g_1$ is a $T$-point of $\mc{G}$ and $g_2,\ldots, g_n$ are $T$-points of $\mc{G}^s$, for any test object $T \in \on{Sch}^{\on{aff}}_{/S}$. It is straightforward to make these maps compatible with the maps in the simplicial objects. \end{proof} \begin{rmk} \label{rmk-orbits} For $c \in \BA^1 \setminus \{0\}$, we have $G / G^{s(c)} \simeq U$ where $G^{s(c)}$ means the stabilizer of $s(c) \in U$. For $0 \in \BA^1$, we similarly have $G / G^{s(0)} \simeq \mc{N}^{\circ}_{Z/X}$. The preceding lemma smoothly interpolates between these two cases, since $G^s|_c \simeq G^{s(c)}$. \end{rmk} In~\ref{I}(\ref{equiv1}), replacing $\I^\circ_{Z/X}$ by $(G \times \BA^1) / G^s$ yields a preliminary result: \[ \QCoh^G(X) \simeq \QCoh^{\gm}(\BA^1 / G^s). \] The category $\QCoh(\BA^1 / G^s)$ is just the category of sheaves on $\BA^1$ equipped with an action by the group scheme $G^s$. The problem is that the action $\BA^1 / G^s \lacts \gm$ is inexplicit. It does \emph{not} arise from an action $G^s \lacts \gm$ via automorphisms of $G^s$ as a group scheme over $\BA^1$. We will rectify this problem in~\ref{slicing-gm}. 
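Before introducing the $\gm$-action on $G^s$, let us record how the preliminary result reads in the simplest possible case. The following elementary example is not taken from the constructions above and is not used elsewhere; it is included only as a sanity check of the definition of $G^s$, and we keep the verifications informal.
\begin{ex}
Let $G = \gm$ act on $X = \BA^1$ with weight one, so that $U = \gm$ and $Z = \{0\}$. This chain is fastened, and we may take the fastening datum $(\gamma, \ell)$ with $\ell = \id_{\BA^1}$ and $\gamma = \id_{\gm}$. As in Remark~\ref{rmk-orbits}, for $c \neq 0$ the group $G^{s(c)}$ is the stabilizer of a point of $U = \gm$, hence trivial, and for $c = 0$ the group $\gm$ acts simply transitively on $\mc{N}^{\circ}_{Z/X}$, so $G^{s(0)}$ is trivial as well. By Lemma~\ref{lem-gs}(i), $G^s$ is then the trivial group scheme over $\BA^1$, and the preliminary result above reduces to the identity $\QCoh^{\gm}(\BA^1) \simeq \QCoh^{\gm}(\BA^1)$, consistently with the Artin--Rees description of Proposition~\ref{ar}. If instead $\gm$ acts on $X = \BA^1$ with weight $n \geq 2$, then every fiber of $G^s$ equals $\mu_n$, so $G^s \simeq \mu_n \times \BA^1$; in this case the integer $n$ attached to the fastening datum in~\ref{slicing-gm} below is exactly the weight, and the $\gm$-action on $\BA^1 / G^s$ is most conveniently described by pulling back along the degree-$n$ cover introduced in the next subsection.
\end{ex}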
\subsection{Descent along a degree-\texorpdfstring{$n$}{n} cover} \label{descent}
Let $\BA^1_h$ be the affine line with coordinate $h$, let $n$ be a positive integer, and consider the degree-$n$ map
\[
\tau_n : \BA^1_{h^{1/n}} \to \BA^1_h
\]
defined by $c \mapsto c^n$.\footnote{We will use the notations $\BA^1_{h^{1/n}}$ and $\BA^1_h$ to distinguish between the domain and target of the map $\tau_n$. Of course, both are abstractly isomorphic to $\BA^1$.} Consider the standard $\gm$-actions on the domain and target. The map $\tau_n$ intertwines these actions, up to the degree-$n$ group homomorphism $\gm \xrightarrow{p_n} \gm$. The goal of this subsection is to relate the categories $\QCoh^{\gm}_{\tf}(\BA^1_h)$ and $\QCoh^{\gm}_{\tf}(\BA^1_{h^{1/n}})$.
\begin{lem} \label{lem-sub}
Consider the commutative diagram of stacks
\begin{cd}
\pt / \gm \ar[d] \ar[r, "p_n"] & \pt/\gm \ar[d] \\
\BA^1_{h^{1/n}} / \gm \ar[r, "\tau_n"] & \BA^1_h / \gm
\end{cd}
where the vertical maps go to $0 \in \BA^1$. The induced diagram of categories
\begin{cd}
\QCoh^{\gm}_{\tf}(\BA^1_h) \ar[r, "\tau_n^*"] \ar[d] & \QCoh_{\tf}^{\gm}(\BA^1_{h^{1/n}}) \ar[d] \\
\QCoh^{\gm}(\pt) \ar[r, "p_n^*"] & \QCoh^{\gm}(\pt)
\end{cd}
is Cartesian. Equivalently, $\tau_n^*$ is fully faithful, and its essential image is the subcategory
\[
\QCoh^{\gm}_{\tf, (n)}(\BA^1_{h^{1/n}}) \subset \QCoh_{\tf}^{\gm}(\BA^1_{h^{1/n}})
\]
consisting of $\gm$-equivariant sheaves $\mc{F}$ for which the weights of the action $\gm \acts \mc{F}|_0$ are all multiples of $n$.
\end{lem}
\begin{proof}
We apply Proposition~\ref{ar} to the above diagram of categories. Under this reformulation, the map $\tau_n^*$ sends a filtered vector space $(V, F)$ to $(V, F')$ where $F'$ is the filtration defined by $(F')^{\ge m} V = F^{\ge nm} V$. The map $p_n^*$ sends a graded vector space $\oplus_i V_i$ to the same vector space but with the degrees multiplied by $n$. The vertical maps send a filtered vector space to its associated graded. Now the claim follows because a filtered vector space is of the form $(V, F')$ if and only if its associated graded has nonzero components only in degrees which are multiples of $n$.
\end{proof}
\begin{rmk}
The restriction to torsion-free sheaves is necessary. An example of a torsion sheaf on $\BA^1_{h^{1/n}}$ which satisfies the condition that the weights of the $\gm$-action on the zero fiber are multiples of $n$ but which does not lie in the essential image of $\tau_n^*$ is a skyscraper sheaf at $0 \in \BA^1_{h^{1/n}}$ with $\gm$-weight 0.
\end{rmk}
Since the functors $\tau_n^*$ and $p_n^*$ are monoidal, one can immediately bootstrap Lemma~\ref{lem-sub} from a statement about sheaves on $\BA^1_h / \gm$ to a statement about stacks (and sheaves on stacks) over $\BA^1_h / \gm$, by the reasoning in Remark~\ref{rmk-monoidal}. This yields the following:
\begin{prop} \label{cor-cover}
Let $\mc{Y}$ be a stack with $\gm$-action, and let $\mc{Y} \to \BA^1_h$ be a $\gm$-equivariant map. Let $\tau_n^*\mc{Y}$ be the base change along $\tau_n : \BA^1_{h^{1/n}} \to \BA^1_h$. Then the following diagram is Cartesian and the horizontal functors are fully faithful:
\begin{cd}
\QCoh^{\gm}_{\tf}(\mc{Y}) \ar[r, "\tau_n^*"] \ar[d] & \QCoh^{\gm}_{\tf}(\tau_n^*\mc{Y}) \ar[d] \\
\QCoh^{\gm}_{\tf}(\mc{Y}|_0) \ar[r, "p_n^*"] & \QCoh^{\gm}_{\tf}((\tau_n^*\mc{Y})|_0)
\end{cd}
\end{prop}
Here the subscript `$\tf$' refers to sheaves which are flat over the base $\BA^1$, not necessarily flat over $\mc{Y}$.
The diagram of categories is induced from an analogous diagram of stacks, and the symbols $\mc{Y}|_0$ and $(\tau_n^*\mc{Y})|_0$ refer to the fibers over $0 \in \BA^1_h$ and $0 \in \BA^1_{h^{1/n}}$, respectively. Note that $\mc{Y}|_0 \simeq (\tau_n^*\mc{Y})|_0$, but the natural $\gm$-actions on them differ by $p_n$, which is why we denote the bottom horizontal map by $p_n^*$. Applying Proposition \ref{cor-cover} to $\mc{Y} = \BA^1 / G^s$ yields the following: \begin{cor}\label{cor-cover-apply} We have an equivalence \[ \QCoh^{\gm}_{\tf}(\BA^1_h / G^s) \xrightarrow{\tau_n^*} \QCoh^{\gm}_{\tf, (n)}(\BA^1_{h^{1/n}} / \tau_n^*G^s), \] where the right hand side is the full subcategory of $\QCoh^{\gm}_{\tf}(\BA^1_{h^{1/n}} / \tau_n^*G^s)$ consisting of sheaves whose pullback to the zero fiber $\pt / G^{s(0)}$ lies in the essential image of \[ \QCoh^{\gm}(\pt / G^{s(0)}) \xrightarrow{p_n^*} \QCoh^{\gm}(\pt / G^{s(0)}). \] \end{cor} Again, note that the two $\gm$-actions on $\pt / G^{s(0)}$ differ by $p_n$. \subsection{A \texorpdfstring{$\gm$}{gm}-action on \texorpdfstring{$G^s$}{Gs}} \label{slicing-gm} From this point onward, we use the cocharacter $\gamma : \gm \to G$ included in the fastening datum $(\gamma, \ell)$. This cocharacter will allow us to resolve the issue mentioned at the end of~\ref{slice-line}. The conditions of Definition~\ref{def:fasteningdatum} imply that there is a unique \emph{nontrivial} action $\gm \acts \BA^1$ for which the map $\BA^1 \xrightarrow{\ell} X$ is $\gm$-equivariant, where the action $\gm \acts X$ occurs via $\gamma$. Indeed, $\ell$ induces an isomorphism between $\BA^1$ and the locally closed subscheme $\ell(\BA^1) \subset X$, and the latter is invariant under the action of $\gamma(\gm)$. Furthermore, Definition~\ref{def:fasteningdatum}(1) implies that this action $\gm \acts \BA^1$ fixes $0 \in \BA^1$, so this action is linear. Thus it is given by $t \cdot h = t^n h$ for some nonzero integer $n$. Replacing $\gamma$ with $\gamma^{-1}$ if necessary, we can assume that $n > 0$. We specialize the general constructions of~\ref{descent} to the case of this particular $n$. Define a $\gm$-action on $G \times \BA^1_{h^{1/n}}$ by the formula \[ (g,x) \cdot t= (\gamma(t)g\gamma(t)^{-1}, t x ) \qquad t \in \gm, g \in G, x \in \BA^1_{h^{1/n}}. \] The compatibility between $\gamma$ and $\ell$ implies that the group subscheme $\tau_n^*G^s \subset G \times \BA^1_{h^{1/n}}$ is invariant under this action. Hence, we obtain an action \begin{equation}\label{eq-act} \tau_n^*G^s \lacts \gm. \end{equation} \begin{lem}\label{lem-crux} The action $\BA^1_{h^{1/n}} / \tau_n^*G^s \lacts \gm$ induced from (\ref{eq-act}) agrees with the pullback of the action $\BA^1_h / G^s \lacts \gm$ along the map $\tau_n$. \end{lem} \begin{proof} Let us start with the action $\BA^1_{h^{1/n}} / \tau_n^*G^s \lacts \gm$ obtained by pullback along $\tau_n$. We will define a $\gm$-equivariant map \[ \BA^1_{h^{1/n}} \xrightarrow{\phi} \BA^1_{h^{1/n}} / \tau_n^*G^s \] of stacks over $\BA^1_{h^{1/n}}$ such that the resulting $\gm$-action on the relative stabilizer scheme obtained by the fibered product \begin{cd} \tau_n^*G^s \ar[r] \ar[d] & \BA^1_{h^{1/n}} \ar[d, "\phi"] \\ \BA^1_{h^{1/n}} \ar[r, "\phi"] & \BA^1_{h^{1/n}} / \tau_n^* G^s \end{cd} coincides with the action of~(\ref{eq-act}). This proves the lemma. 
Recall from~\ref{slice-line} that the action $\BA^1_h / G^s \lacts \gm$ was defined using the isomorphism \[ G \back \I^\circ_{Z/X} \simeq \BA^1_h / G^s \] and the action $\I^\circ_{Z/X} \lacts \gm$ which scales the fibers of the line bundle. We display these actions in the following diagram: \begin{cd} G \ar[draw=none, shift right = 1]{r}{\acts} \ar[d, dash, shift left = 0.5] \ar[d, dash, shift right = 0.5] & G \times \BA^1_h \ar[d, swap, "a"] \\ G \ar[draw=none, shift right = 1]{r}{\acts} & \mc{I}^\circ_{Z/X} \ar[draw=none, shift right = 1]{r}{\lacts} & \gm \end{cd} Consider the base change along $\tau_n$: \begin{cd} G \ar[draw=none, shift right = 1]{r}{\acts} \ar[d, dash, shift left = 0.5] \ar[d, dash, shift right = 0.5] & G \times \BA^1_{h^{1/n}} \ar[d, swap, "a'"] & \\ G \ar[draw=none, shift right = 1]{r}{\acts} & \mc{I}^\circ_{Z/X} \underset{\BA^1_h}{\times} \BA^1_{h^{1/n}} \ar[draw=none, shift right = 1]{r}{\lacts} & \gm \end{cd} Here, $\gm$ acts on $\I^\circ_{Z/X}$ via the $n$-th power of the old action, and it acts on $\BA^1_{h^{1/n}}$ via the standard action. We define an action $(G \times \BA^1_{h^{1/n}}) \lacts \gm$ via the following formula: \begin{equation}\label{eq-act2} (g, c) \cdot t = (g \cdot \gamma^{-1}(t), tc). \end{equation} We claim that this action ensures that $a'$ is $\gm$-equivariant. Since the $\gm$-actions take place on irreducible varieties, it suffices to check this agreement on a dense open subset of the domain. Therefore, we can restrict to the locus $G \times (\BA^1_{h^{1/n}} \setminus \{0\})$ and use the isomorphism $\I^\circ_{Z/X}|_{\BA^1_h \setminus \{0\}} \simeq U \times (\BA^1_h \setminus \{0\})$ which was noted in Remark~\ref{rmk-I-geo}. With this identification in hand, the desired commutativity follows from a direct calculation: \begin{cd} (g, c) \ar[rr, mapsto, "\act_t"] \ar[d, mapsto, "a'"] & & (g \cdot \gamma^{-1}(t), tc) \ar[d, mapsto, "a'"] \\ \big((g \cdot \ell(c^n), c^n), c\big) \ar[r, mapsto, "\act_t"] & \big((g \cdot \ell(c^n), t^nc^n), tc\big) \ar[r, shift left = 0.5, dash] \ar[r, shift right = 0.1, dash] & \big((g \cdot \gamma^{-1}(t)\cdot \ell(t^nc^n), t^nc^n), tc\big) \end{cd} for $g \in G,\ c \in \BA^1_{h^{1/n}} \setminus \{0\}, \ t \in \gm$. Thus, we get a diagram \begin{cd} G \ar[draw=none, shift right = 1]{r}{\acts} \ar[d, dash, shift left = 0.5] \ar[d, dash, shift right = 0.5]& G \times \BA^1_{h^{1/n}} \ar[d, swap, "a'"] \ar[draw=none, shift right = 1]{r}{\lacts} & \gm \ar[d, dash, shift left = 0.5] \ar[d, dash, shift right = 0.5] \\ G \ar[draw=none, shift right = 1]{r}{\acts} & \mc{I}^\circ_{Z/X} \underset{\BA^1_h}{\times} \BA^1_{h^{1/n}} \ar[draw=none, shift right = 1]{r}{\lacts} & \gm \end{cd} Upon passing to $G$-quotients, the map $a'$ becomes the desired map $\phi$. \begin{rmk*} Let us summarize the previous construction in more conceptual language. The map $\phi$ corresponds to the choice of a $\gm$-equivariant $\tau_n^*G^s$-torsor on $\BA^1_{h^{1/n}}$. From this point of view, the original map $\BA^1_h \to \BA^1_h / G^s$ corresponds to the (trivial) $G^s$-torsor on $\BA^1_h$ obtained by taking the pullback of $a$ along $s$. Unfortunately, this $G^s$-torsor cannot be made $\gm$-equivariant over $\BA^1_h$. However, once we pull everything back along $\tau_n$, the resulting (trivial) $\tau_n^*G^s$-torsor does admit a structure of $\gm$-equivariance, which yields $\phi$. \end{rmk*} It remains to check that the action $\tau_n^*G^s \lacts \gm$ coincides with the action of~(\ref{eq-act}). 
We have a fibered diagram\footnote{This diagram is the $\tau_n$-pullback of part of the (augmented) simplicial object in Lemma~\ref{lem-gs}(iii) which, to use the terminology in that lemma, encodes the right action of $\mc{G}^s$ on $\mc{G}$.} \begin{cd} G \times \tau_n^*G^s \ar[r, "\id_G \times \pi'"] \ar[d, swap, "{(\on{mult}, \pi')}"] & G \times \BA^1_{h^{1/n}} \ar[d, "a'"] \\ G \times \BA^1_{h^{1/n}} \ar[r, "a'"] & \I^\circ_{Z/X} \underset{\BA^1_h}{\times} \BA^1_{h^{1/n}} \end{cd} Here $\pi'$ is the projection to the base $\BA^1_{h^{1/n}}$ and $\on{mult}$ is the group multiplication. The $\gm$-actions on the three lower factors collectively induce a $\gm$-action on the upper-left factor, which one checks is given by the formula \[ \big(g, (g', c)\big) \cdot t = \big(g\, \gamma^{-1}(t), (\gamma(t)\, g' \, \gamma^{-1}(t), tc)\big) \] for $g \in G,\ c \in \BA^1_{h^{1/n}},\ g' \in \tau_n^*G^s|_{c} \simeq G^{s(c^n)},\ t \in \gm$. On the upper-left factor, passing to the $G$-quotient amounts to forgetting the first coordinate, and the resulting $\gm$-action is exactly that of~(\ref{eq-act}). \end{proof} \subsection{Applying the criterion for descent} Let $\phi : \BA^1_{h^{1/n}} \to \BA^1_{h^{1/n}} / \tau_n^* G^s$ be the $\gm$-equivariant map constructed in the proof of Lemma~\ref{lem-crux}. Given a quasicoherent sheaf $\mc{F} \in \QCoh^{\gm}(\BA^1_{h^{1/n}} / \tau_n^*G^s)$, the pullback $\phi^*\mc{F}$ is a sheaf on $\BA^1_{h^{1/n}}$ which is equivariant with respect to $\gm$ and the group scheme $\tau_n^*G^s$. These two actions are intertwined (in an obvious manner) with the action $\tau_n^*G^s \lacts \gm$ defined in~\ref{slicing-gm}(\ref{eq-act}).\footnote{One could formulate this as an action of the semidirect product $\tau_n^*G^s \rtimes \gm$ on the sheaf $\phi^*\mc{F}$ on $\BA^1_{h^{1/n}}$.} We can now characterize the subcategory $\QCoh^{\gm}_{\tf, (n)} (\BA^1_{h^{1/n}} / \tau_n^*G^s)$ in concrete terms: \begin{cor}\label{cor-n} \begin{enumerate}[label=(\roman*)] \item[] \item Let $G^{s(0)} \rtimes \gm$ be defined using the $\gamma$-conjugation action of \ref{slicing-gm}(\ref{eq-act}). The image of the group homomorphism $G^{s(0)} \rtimes \gm \xrightarrow{(\iota, \gamma)} G$ is $G^{\ell(0)}$. \item The subcategory $\QCoh^{\gm}_{\tf, (n)}(\BA^1_{h^{1/n}} / \tau_n^*G^s)$ defined in Corollary~\ref{cor-cover-apply} consists of quasicoherent sheaves $\mc{F}$ satisfying the following condition: \begin{itemize} \item The action $G^{s(0)} \rtimes \gm \acts \mc{F}|_0$ factors through the map $(\iota, \gamma)$ in (i). \end{itemize} \end{enumerate} \end{cor} \subsubsection{Proof of (i)} \label{groupss} Consider the $G$-equivariant projection map $\mc{N}^\circ_{Z/X} \to Z$. Since $\gamma(\gm)$ acts transitively on the fiber over $\ell(0) \in Z$, and since this fiber contains $s(0) \in \mc{N}^\circ_{Z/X}$, we have $G^{\ell(0)} = \gamma(\gm) \cdot G^{s(0)}$ as subgroups of $G$. This proves (i). For expository purposes, let us record a few additional relations between these groups. 
We have a diagram of exact sequences \begin{equation}\label{groups} \begin{tikzcd} & \mu_n \ar[r, dash, shift left = 0.5] \ar[r, dash, shift right = 0.5] \ar[d, hookrightarrow, "{(\gamma \circ \on{inv} , \iota)}"] & \mu_n \ar[d, hookrightarrow] \\ G^{s(0)} \ar[d, dash, shift right = 0.5] \ar[d, dash, shift left = 0.5] \ar[r, hookrightarrow] & G^{s(0)} \rtimes \gm \ar[r, twoheadrightarrow] \ar[d, twoheadrightarrow, "{(\iota, \gamma)}"] & \gm \ar[d, twoheadrightarrow, "n"] \\ G^{s(0)} \ar[r, hookrightarrow] & G^{\ell(0)} \ar[r, twoheadrightarrow, "\act"] & \gm \end{tikzcd} \end{equation} Here, the map $\act$ encodes the action of $G^{\ell(0)}$ on the normal line to $z \in Z \subset X$ (i.e.\ the fiber mentioned in the previous paragraph), and the map `inv' is group inversion. \subsubsection{Proof of (ii)} To unpack the definition of $\QCoh^{\gm}_{\tf, (n)}(\BA^1_{h^{1/n}} / \tau_n^*G^s)$, we need to describe the map of stacks \[ (\pt / G^{s(0)}) / \gm^{(n)} \xrightarrow{p_n} (\pt / G^{s(0)}) / \gm \] which appears in Corollary~\ref{cor-cover-apply}.\footnote{Here, and in what follows, we use the notation $\gm^{(n)}$ to indicate a copy of $\gm$ which acts via $n$ times the standard action. This is to avoid collision of notation for nonisomorphic stacks.} As noted there, the map is the identity on the underlying stack $\pt / G^{s(0)}$, but the two $\gm$-actions differ by $p_n$, the $n$-th power group homomorphism $\gm \to \gm$. In view of Lemma~\ref{lem-gs} and Remark~\ref{rmk-orbits}, this map is given by \[ G\ \backslash \ ( \mc{N}^\circ_{Z/X} ) \ / \ \gm^{(n)} \to G \ \backslash \ ( \mc{N}^\circ_{Z/X} ) \ / \ \gm. \] The $\gm$-action on the right hand side is the standard one which scales the fibers of the normal bundle, while the $\gm$-action on the left hand side is $n$ times the former. Choosing the basepoint $s(0) \in \mc{N}^\circ_{Z/X}$, this becomes the map \[ \pt / \stab_{G \times \gm^{(n)}}(s(0)) \to \pt / \stab_{G \times \gm}(s(0)) \] which is induced by the map of groups $\psi_1$ in this Cartesian diagram: \begin{cd} \stab_{G \times \gm^{(n)}}(s(0)) \ar[r, "\psi_1"] \ar[d, hookrightarrow] & \stab_{G \times \gm}(s(0)) \ar[r, "\sim"] \ar[d, hookrightarrow] & G^{\ell(0)} \\ G \times \gm^{(n)} \ar[r, "\id_G \times p_n"] & G \times \gm \end{cd} This diagram contains the additional observation that $\stab_{G \times \gm}(s(0)) \simeq G^{\ell(0)}$, which follows because $Z \simeq \mc{N}^\circ_{Z/X} / \gm$ using the standard $\gm$-action, and under this quotient map $s(0) \in \mc{N}^\circ_{Z/X}$ projects to $\ell(0) \in Z$. We extend this by another Cartesian square \begin{cd} G^{s(0)} \rtimes \gm^{(n)} \ar{r}{\psi_2'}[swap]{\sim} \ar[d, hookrightarrow, "\iota \times \id_{\gm}"] & \stab_{G \times \gm^{(n)}}(s(0)) \ar[r, "\psi_1"] \ar[d, hookrightarrow] & \stab_{G \times \gm}(s(0)) \ar[r, "\sim"] \ar[d, hookrightarrow] & G^{\ell(0)} \\ G \rtimes \gm^{(n)} \ar{r}{\psi_2}[swap]{\sim} & G \times \gm^{(n)} \ar[r, "\id_G \times p_n"] & G \times \gm \end{cd} Here, the map $\psi_2$ sends $(g, t) \mapsto (g\, \gamma^{-1}(t), t)$, which is a group isomorphism by definition of the action~\ref{slicing-gm}(\ref{eq-act}). The pullback of $\stab_{G \times \gm^{(n)}}(s(0))$ under $\psi_2$ is indeed $G^{s(0)} \rtimes \gm^{(n)}$, because the formula for $\psi_2$ shows that the whole subgroup $\gm^{(n)} \subset G \rtimes \gm^{(n)}$ lies in the stabilizer. 
And it is straightforward to check that the upper horizontal composition $G^{s(0)} \rtimes \gm^{(n)} \sra G^{\ell(0)}$ coincides with the map $(\iota, \gamma)$ from (i), by comparing the formula for $\psi_2$ with the formula~\ref{slicing-gm}(\ref{eq-act2}). Thus, the map of stacks written at the beginning of this subsubsection identifies (via the isomorphisms of the previous paragraph) with the map \[ \pt / (G^{s(0)} \rtimes \gm^{(n)}) \to \pt / G^{\ell(0)} \] which is induced by the group homomorphism $(\iota, \gamma)$ from (i). Tracing through the construction of Lemma~\ref{lem-crux}, we find that the action $G^{s(0)} \rtimes \gm \acts \mc{F}|_0$ mentioned in (ii) coincides with the action obtained by pulling back along \[ \pt / (G^{s(0)} \rtimes \gm^{(n)}) \simeq (\pt / G^{s(0)}) / \gm^{(n)} \hra \BA^1_{h^{1/n}} / \tau_n^* G^s. \] This proves (ii). $\square$ \subsection{The main result} In this subsection, we assemble the previous constructions to obtain an algebraic description of the category $\QCoh^G_{\tf}(X)$. \begin{defn}\label{claim1} Let $A$ be the filtered Hopf algebra corresponding to the group scheme $\tau_n^*G^s \to \BA^1_{h^{1/n}}$ under Proposition~\ref{ar} and Remark~\ref{rmk-monoidal}. This map is flat by Lemma~\ref{lem-gs}(i), and~\ref{slicing-gm}(\ref{eq-act}) gives a $\gm$-action on $\tau_n^*G^s$, so the Artin--Rees construction applies. As is always the case with Artin--Rees, we have \e{ \Spec A &\simeq \big((\tau_n^*G^s)|_{\BA^1_{h^{1/n}} \setminus \{0\}}\big) / \gm \\ \Spec \gr A &\simeq \tau_n^*G^s|_0 \simeq G^{s(0)}. } In the second identification, the $\gm$-action on the left hand side given by the grading corresponds to the usual $\gm$-action on the right hand side (via $\gamma$). \end{defn} \begin{theorem}\label{thm-main} Let $\mc{C}_{(n)}(X)$ be the category whose objects are triples $(V, F, \alpha)$ as follows: \begin{itemize} \item $(V, F)$ is an exhaustively filtered vector space, and $\alpha : V \to V \otimes A$ is a coaction map which makes $(V, F)$ a filtered comodule for $A$. \item We furthermore require that the associated graded action \[ \gm \ltimes G^{s(0)} \acts \gr^F V \] factors through the quotient $\gm \ltimes G^{s(0)} \sra G^{\ell(0)}$ from Corollary~\ref{cor-n}(i). \end{itemize} The morphisms in this category are those of filtered $A$-comodules. Then there are equivalences of categories \[ \QCoh^G_{\tf}(X) \simeq \QCoh^{\gm}_{\tf, (n)}(\BA^1 / \tau_n^*G^s) \simeq \mathcal{C}_{(n)}(X). \] \end{theorem} \begin{proof} The first equivalence follows from Corollary~\ref{cor-cover-apply}. The second equivalence follows from the definition of $A$ and the last sentence of Remark~\ref{rmk-monoidal}. The reformulation of the $(-)_{(n)}$ condition given in the theorem statement follows from Corollary~\ref{cor-n}(ii). \end{proof} \begin{rmk} If one desires to remove the torsion-free requirement, then in Proposition~\ref{ar} the notion of `filtered vector space' must be replaced with that of `graded vector space equipped with a degree 1 endomorphism' and the criterion for descent given in Lemma~\ref{lem-sub} must be reformulated accordingly. \end{rmk} \begin{rmk} \label{rmk-concrete} Fixing the nonzero point $1 \in \BA^1_{h^{1/n}}$ allows one to view these constructions more concretely (albeit less canonically): \begin{enumerate}[label=(\roman*)] \item We get identifications \e{ \Spec A &\simeq G^{s(1)}\\ \Spec \gr A &\simeq G^{s(0)}, } where the identifications in the second line are $\gm$-equivariant. 
The filtration $F^{\ge m}$ induced on the ring of functions $\oh_{G^{s(1)}}$ is given as follows: for a function $f$, we have $f \in F^{\ge m}$ if and only if the limit \[ \lim_{t \to 0} \big(t^{-m} \act_{t^{-1}}^*f\big) \] exists as a function on $G^{s(0)}$. Here $\act_{t^{-1}}$ is the isomorphism $G^{s(t^n)} \to G^{s(1)}$ given by the $\gm$-action on $G^s$. \item Given an object $\mc{F} \in \QCoh^{G}_{\tf}(X)$ corresponding to $(V, F, \alpha)$ via Theorem~\ref{thm-main}, the fiber $\mc{F}|_{\ell(1)}$ identifies with $V$, and the action of $G^{\ell(1)}$ on this fiber is given by the coaction $\alpha$ (forgetting the filtrations). The fiber $\mc{F}|_{\ell(0)}$ is given by $\gr V$. \item The action of $G^{\ell(0)}$ on the fiber $\mc{F}|_{\ell(0)}$ is determined as follows. The action of $G^{s(0)}$ on $\mc{F}|_{\ell(0)}$ is given by $\gr \alpha$. Since this coaction map is graded, the action of $G^{s(0)}$ extends to an action by $G^{s(0)} \rtimes \gm$. The second bullet point in Theorem~\ref{thm-main} says that the action descends to one by the quotient group $G^{\ell(0)}$. \end{enumerate} \end{rmk} \section{Specialization maps} \label{sec:specialization} Retain the notations and assumptions of Section~\ref{sec:main}. This means that $X = U \cup Z$ is a smooth fastened chain equipped with fastening datum $(\gamma, \ell)$, which determines the group schemes $G^s$ and $\tau_n^*G^s$, the latter being $\gm$-equivariant over $\BA^1$. Theorem~\ref{thm-main} gives us an equivalence between $\QCoh^G_{\tf}(X)$ and a category of filtered co-modules for the filtered Hopf algebra $A = \oh_{G^{s(1)}}$. In this section, we show that some objects in $\QCoh^G_{\tf}(X)$ can be presented more economically, without referring explicitly to $A$. In the first half of this section, we give nicer classifications of line bundles (Proposition~\ref{prop:linebundles}) and sheaves whose restrictions to $U$ are $G$-equivariant local systems (Corollary~\ref{cor-locsys}). These classifications are `nicer' in the sense that the filtered Hopf algebra $A$ does not explicitly appear; instead, in each case $\tau_n^*G^s$ acts through a certain quotient \[ q : \tau_n^*G^s \to H \] (see~\ref{s:factor}), where $H \to \BA^1$ is a $\gm$-equivariant group scheme with the property that the action $\gm \acts H|_0$ is trivial. This implies that the specialization to the zero fiber for $H$ is encoded in a \emph{bona fide} map of algebraic groups $H|_0 \to H|_1$. In the second half of this section, we investigate the largest subcategory of $\Vect^G(X)$ for which this works. We show that there is a largest possible quotient of $\tau_n^*G^s$ which admits a `specialization map' description (Corollary~\ref{cor-tame-univ}) and we state the corresponding classification of vector bundles in Corollary~\ref{cor-tame-classify}. Lastly, in~\ref{apply}, we apply this idea to the problem of classifying sheaves whose restrictions to $U$ are \emph{twisted} $G$-equivariant local systems, which is the case of interest for our application (Section~\ref{sec:application}). \noindent \emph{Notations.} For the remainder of this section, we adopt the more concise notation $\tilde{G}^s = \tau_n^*G^s$. Recall from Remark~\ref{rmk-orbits} that $G^{s(0)} \simeq G^s|_0$ and $G^{s(1)} \simeq G^s|_1$. As a matter of convention, any scheme written as $S \times \BA^1$ (for some $S$) will be equipped with the $\gm$-action given by the standard action on the second factor. 
For an arbitrary map $H \to \BA^1$, we let $H_0$ and $H_1$ denote the fibers over $0, 1 \in \BA^1$, respectively. For a map $q$ of schemes over $\BA^1$, we let $q_0$ and $q_1$ denote the corresponding fibers of this map. \subsection{Line bundles} \label{s:line} In this subsection, we give a very simple description of the category of $G$-equivariant line bundles (Proposition~\ref{prop:linebundles}(ii)), which will be applied in~\ref{ssec:case-line}. Along the way, we prove a technical result which may be of independent interest: the specialization of a character of $G^{s(1)}$ to the zero fiber $G^{s(0)}$ is always well-defined (Proposition~\ref{prop:linebundles}(i)). This result is applied again in~\ref{apply}. Let $\Pic^G(X)$ denote the category of $G$-equivariant line bundles on $X$. For any character $\chi : G^{s(1)} \to \gm$, let $\Pic^G_{\chi}(X)$ be the full subcategory consisting of line bundles $\mc{L}$ for which the action $G^{s(1)} \acts \mc{L}|_{s(1)}$ factors through $\chi$. We have a decomposition \[ \Pic^G(X) \simeq \bigsqcup_{\chi} \Pic^G_{\chi}(X). \] \begin{prop}\label{prop:linebundles} Fix a character $\chi : G^{s(1)} \to \gm$. \begin{enumerate} \item[(i)] We have $\chi \in F^{\ge 0} A$. Concretely, this means that $\lim_{t \to 0} \on{act}_{t^{-1}}^* \chi$ exists as a character of $G^{s(0)}$. \end{enumerate} Let $q : \tilde{G}^s \to \gm \times \BA^1$ be the $\gm$-equivariant map of group schemes given by (i). The map $q$ is uniquely determined by the requirement that $q_1 : G^{s(1)} \to \gm$ equals $\chi$. \begin{enumerate} \item[(ii)] We have a canonical equivalence of groupoids \[ \Pic^G_{\chi}(X) \simeq (n\BZ + m) \times (\pt / \BC^\times). \] The residue class $m \bmod{n}$ is determined by the following property: if $\mu_n \simeq G^{s(0)} \cap \gamma(\gm)$ (see~\ref{groupss}), then the restriction of the character \begin{cd}[column sep = 0.8in] \mu_n \ar[r, hookrightarrow, "\on{inclusion}"] & G^{s(0)} \ar[r, "\lim_{t \to 0} \mathrm{Ad}_{\gamma(t^{-1})}^* \chi"] & \gm \end{cd} is given by $\zeta \mapsto \zeta^m$. \end{enumerate} \end{prop} \subsubsection{Proof that (i) implies (ii) in Proposition~\ref{prop:linebundles}} \label{prop:linebundles-1} The first equivalence in Theorem~\ref{thm-main} implies that an object $\mc{L} \in \Pic^G(X)$ is equivalent to a pair $(q, \mc{L}')$ defined as follows: \begin{itemize} \item $q : \tilde{G}^s \to \gm \times \BA^1$ is a $\gm$-equivariant homomorphism of group schemes. \item $\mc{L}'$ is a $\gm$-equivariant line bundle on $\BA^1$. \item We require that the restriction of $\gm \acts \mc{L}'|_{0}$ to $\mu_n$ is given by the group homomorphism $\mu_n \xrightarrow{\gamma} G^{s(0)} \xrightarrow{q_0} \gm$. \end{itemize} This is because, for any $\gm$-equivariant line bundle $\mc{L}'$ on $\BA^1$, the group scheme $\curAut_{\BA^1}(\mc{L}')$ is $\gm$-equivariantly isomorphic to $\gm \times \BA^1$. Furthermore, $\mc{L} \in \Pic^G_{\chi}(X)$ if and only if $q_1 = \chi$. Thus $q$ is determined, and the remaining datum $\mc{L}'$ amounts to the choice of a one-dimensional vector space (which plays the role of $\mc{L}'|_{1}$) and an integer (which determines the filtration). The third bullet point above says that this integer lies in $n\BZ + m$, because $q_1 = \chi$ implies that $q_0 = \lim_{t \to 0} \on{act}_{t^{-1}}^* \chi$. $\qed$ The rest of this subsection is devoted to the proof of Proposition~\ref{prop:linebundles}(i).
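\begin{rmk*} Before turning to the proof, the reader may wish to keep in mind a concrete instance of the situation treated in the next lemma; it is worked out in detail in Example~\ref{ex:tame} below, and we only sketch the translation here (nothing in the proof depends on it). In that example, $H = \tilde{G}^s$ is a flat $\gm$-equivariant group scheme of relative dimension $1$ over $\BA^1$ with $H_1 \simeq \gm$ and $H_0 \simeq \BZ/2 \ltimes \ga$, and, taking $\varphi$ to be the isomorphism $H_1 \simeq \gm$ exhibited there, the map $q$ of part (ii) is the universal tame quotient $\tilde{G}^s \to \gm \times \BA^1$ computed in that example; its fiber at $0$ is the composition $\BZ/2 \ltimes \ga \sra \BZ/2 \hra \gm$, which is the limit of $\varphi$ in the sense of Proposition~\ref{prop:linebundles}(i). \end{rmk*}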
\begin{lem}\label{lem-dim-1} Let $\pi : H \to \BA^1$ be a flat $\gm$-equivariant group scheme of relative dimension 1. If there is an isomorphism $\varphi: H_1 \simeq \gm$, then the following are true: \begin{enumerate}[label=(\roman*)] \item The $\gm$-action on $H_0$ is (weakly) expanding as $t \to 0$.\footnote{This means that $\oh_{H_0}$ lives in nonnegative degrees with respect to the $\gm$-action. We omit the word `weakly' from now on, because we will not use the `strong' notion anywhere.} \item There exists a (unique) $\gm$-equivariant map $q : H \to \gm \times \BA^1$ of group schemes with the property that $q_1 : H_1 \to \gm$ equals $\varphi$. \end{enumerate} \end{lem} \begin{proof} (i). Suppose for sake of contradiction that the action $\gm \acts H_0$ is not expanding. This implies that all weights of the $\gm$-action on the (two-dimensional) tangent space $\mc{T}_{1_{H_0}}H$ are positive (here $1_{H_0} \in H_0$ is the identity element). (It also implies that the identity component of $H_0$ is isomorphic to $\ga$, but we will not use this explicitly.) Applying the Luna Slice Theorem to the $\gm$-orbit $\{1_{H_0}\} \subset H$ gives us a $\gm$-invariant affine open neighborhood $W \subset H$ containing $1_{H_0}$, along with a $\gm$-equivariant \'etale map \[ \psi : W \to \mc{T}_{1_{H_0}} H \] which induces an isomorphism on tangent spaces at $1_{H_0} \in W$. (The last phrase uses that $\pi$ is smooth, which is true because $\pi$ is flat, finitely presented, and has smooth fibers.) The $\gm$-equivariance of $\psi$, along with the previous paragraph, implies that $\psi$ is surjective. This in turn implies that $H_1 \cap W \simeq \BA^1$, contradicting the assumption that $H_1 \simeq \gm$. (ii). Point (i) says that $\oh_{H_0} \simeq \gr \oh_{H_1}$ lives in nonnegative degrees. In view of Remark~\ref{rmk-concrete}(i), this implies that any regular function on $H_1$ extends to a $\gm$-invariant function on $H$. Applying this to the character $\varphi$ yields the desired result. \end{proof} One can describe all possibilities for $\pi : H \to \BA^1$ satisfying the hypotheses of this lemma. See the second paragraph of Remark~\ref{rmk-nonflat} for more details. \subsubsection{Proof of Proposition~\ref{prop:linebundles}(i)} Consider the codimension-one subgroup $\ker(\chi) \subset G^{s(1)}$, and extend it to a $\gm$-equivariant sub group scheme $K \subset \tilde{G}^s$ which is flat over $\BA^1$. (Concretely, $K$ is obtained by taking the flat limit of the $\gm$-saturation of $\ker(\chi)$. It satisfies the sub group scheme property because this is a closed condition, hence this condition is preserved under taking the flat limit.) Let $\tilde{G}^s \sra H$ be the quotient by $K$, so that $H$ is a flat $\gm$-equivariant group scheme of relative dimension $\le 1$. By construction, there is a group map $H_1 \xhookrightarrow{\on{cl.emb.}} \gm$ which expresses $H_1$ as the image of $\chi$. Assume that $H$ has relative dimension $1$, so that $H_1 \hra \gm$ is an isomorphism. Then Lemma~\ref{lem-dim-1}(ii) gives a map $q : H \to \gm \times \BA^1$ which witnesses the fact that the limit $\lim_{t \to 0} \on{act}_{t^{-1}}^*\chi$ exists. If $H$ has relative dimension $0$, then $\chi$ is locally constant. Now the desired limit exists because two different connected components of $G^{s(1)}$ cannot specialize to the same component of $G^{s(0)}$. Indeed, this would contradict the fact that $G^{s(0)}$ is a group scheme and hence reduced. (This case can also be handled using~\ref{s:locsys}.) 
$\qed$ \subsection{Factoring through a quotient of \texorpdfstring{$\tilde{G}^s$}{tildeGs}} \label{s:factor} Forget the notation of~\ref{s:line}. Now let $H$ be any $\gm$-equivariant group scheme over $\BA^1$, and let $q : \tilde{G}^s \to H$ be a map of $\gm$-equivariant group schemes over $\BA^1$. \begin{defn} Let $\QCoh^{\gm}_{\tf, (n)}(\BA^1 / H)$ be the full subcategory of $\QCoh^{\gm}_{\tf}(\BA^1 / H)$ consisting of sheaves $\mc{F}$ for which the restriction of the action $\gm \ltimes H_0 \acts \mc{F}|_0$ along the group map \[ q_0 : \gm \ltimes G^{s(0)} \to \gm \ltimes H_0 \] factors through the quotient $\gm \ltimes G^{s(0)} \to G^{\ell(0)}$. \end{defn} We would like to think of $\QCoh^{\gm}_{\tf, (n)}(\BA^1 / H)$ as encoding a subcategory of $\QCoh_{\tf}^G(X)$ which `corresponds' to the quotient $q$. The next result makes this precise. In~\ref{s:subcat}, we explain how to classify the objects of this subcategory. \begin{prop}\label{flat-tame-prop} We have the following: \begin{enumerate}[label=(\roman*)] \item If $q_1 : G^{s(1)} \to H_1$ is surjective and $H$ is flat (i.e.\ torsion-free) over $\BA^1$, then the pullback functor \[ \QCoh^{\gm}_{\tf, (n)}(\BA^1 / H) \xrightarrow{q^*} \QCoh^{\gm}_{\tf, (n)}(\BA^1 / \tilde{G}^s) \] is fully faithful. \item If furthermore $q_0 : G^{s(0)} \to H_0$ is surjective, then the image of this functor is the subcategory of sheaves $\mc{F}$ for which the action $G^{s(1)} \acts \mc{F}|_1$ factors through $q_1$. \end{enumerate} \end{prop} \begin{proof} (i). Since $H$ is flat, an object in the left hand side is given by $(V, F, \beta)$ where $(V, F)$ is a filtered vector space and $\beta : V \to V \otimes \oh_{H_1}$ is a filtered coaction. Since $q_1$ is surjective, $\oh_{H_1} \to \oh_{G^{s(1)}}$ is injective, so whether or not a filtered coaction $V \to V \otimes \oh_{G^{s(1)}}$ factors through a coaction $V \to V \otimes \oh_{H_1}$ is a property, not additional structure. (ii). The crux of the proof is the following lemma, which the second author learned from Dori Bejleri: \begin{lem*} Let $f : X \to Y$ be a map of schemes over a base $S$. Let $x \in X$ be a point, and let $y \in Y$ and $s \in S$ be its images. Then the following two statements are equivalent: \begin{itemize} \item $X\to S$ is flat at $x$, and $f|_s : X|_s \to Y|_s$ is flat at $x$. \item $Y \to S$ is flat at $y$, and $f$ is flat at $x$. \end{itemize} \end{lem*} Since $q_0$ is a surjective map of algebraic groups, it is flat. Since $\tilde{G}^s$ is flat over $\BA^1$, we may apply the lemma and conclude that $q$ is flat. Since flat maps are preserved by base change, we conclude that the scheme $\ker(q)$ is flat over $\BA^1$. The image of the functor is obviously contained in the indicated subcategory. For the reverse containment, suppose that the sheaf $\mc{F} \in \QCoh^{\gm}_{\tf, (n)}(\BA^1 / \tilde{G}^s)$ lies in the indicated subcategory. The action of $\tilde{G}^s$ corresponds to a map $\tilde{G}^s \xrightarrow{\varphi} \curEnd_{\BA^1}(\mc{F})$. The closed subscheme $\ker(\varphi)|_{\BA^1 \setminus \{0\}}$ contains $\ker(q)|_{\BA^1 \setminus \{0\}}$ by the hypothesis on $\mc{F}$. The flatness proved in the previous paragraph implies that $\ker(\varphi)$ contains $\ker(q)$. Therefore the action of $\tilde{G}^s$ factors through $q$, as desired. 
\end{proof} \begin{defn}\label{def-q-cat} As a notational convenience, when the hypotheses of Proposition~\ref{flat-tame-prop}(i) are satisfied, we define $\QCoh^{G}_{\tf, q}(X)$ via the following Cartesian diagram: \begin{cd} \QCoh^{G}_{\tf, q}(X) \ar[r, hookrightarrow] \ar{d}[rotate=90, anchor=south, swap]{\sim} & \QCoh^{G}_{\tf}(X) \ar{d}[rotate=90, anchor=south, swap]{\sim}{\text{Thm.~\ref{thm-main}}} \\ \QCoh^{\gm}_{\tf, (n)}(\BA^1 / H) \ar[r, hookrightarrow, "q^*"] & \QCoh^{\gm}_{\tf, (n)}(\BA^1 / \tilde{G}^s) \end{cd} Note that this subcategory of $\QCoh^{G}_{\tf}(X)$ \emph{a priori} depends on the choice of fastening $(\gamma, \ell)$ as well as the map $q$. But when Proposition~\ref{flat-tame-prop}(ii) applies, it tells us that this subcategory consists exactly of those sheaves $\mc{F} \in \QCoh^G_{\tf}(X)$ for which the action $G^{s(1)} \acts \mc{F}|_{\ell(1)}$ factors through $q_1$. This definition will appear only in Corollary~\ref{cor-tame-classify}. \end{defn} \subsection{Local systems} \label{s:locsys} We apply~\ref{s:factor} to the problem of describing the subcategory \[ \QCoh_{\tf, \locsys}^G(X) \hra \QCoh_{\tf}^G(X) \] consisting of sheaves $\mc{F}$ for which the restriction $\mc{F}_U$ is a $G$-equivariant local system. We construct a quotient map $q : \tilde{G}^s \to H_{\locsys}$ of group schemes over $\BA^1 / \gm$ as follows. For each connected (equivalently, irreducible) component $C \subset \tilde{G}^s$, let $H_C \to \BA^1$ be the image of the projection map $C \to \BA^1$. Since $\tilde{G}^s$ is flat over $\BA^1$, each $H_C$ is either $\BA^1$ or $\BA^1 \setminus \{0\}$. We define \[ H_{\locsys} := \bigsqcup_{\substack{C \subset \tilde{G}^s \\ \text{components}}} H_C. \] Then $H_{\locsys}$ is obviously a group scheme over $\BA^1 / \gm$, and the desired map $q$ is defined on components by the tautological map $C \to H_C$. Note that $H_{\locsys}|_1 \simeq \on{Com}(G^{s(1)})$ where $\on{Com}(-)$ means the group of components. Since each component of $G^{s(0)}$ lies in a unique component of $\tilde{G}^s$, and the latter components are in bijection with those of $G^{s(1)}$, we obtain a canonical map of groups \[ \sigma : \on{Com}(G^{s(0)}) \to \on{Com}(G^{s(1)}). \] \begin{cor}\label{cor-locsys} Let $\mc{C}_{\locsys, (n)}(X)$ be the category whose objects are triples $(V, F, \rho)$ as follows: \begin{itemize} \item $V$ is a vector space, $\rho : \on{Com}(G^{s(1)}) \acts V$ is a representation, and $F$ is an exhausting filtration on $V$ which is invariant under $\rho$. \item We furthermore require that the restriction of the associated graded action \[ \gm \ltimes G^{s(0)} \to \gm \times \on{Com}(G^{s(0)}) \xrightarrow{\sigma} \gm \times \on{Com}(G^{s(1)}) \acts \gr^F V \] factors through the quotient $\gm \ltimes G^{s(0)} \sra G^{\ell(0)}$. \end{itemize} The morphisms in this category are maps of filtered representations. Then there is an equivalence of categories \[ \QCoh^G_{\tf, \locsys}(X) \simeq \mc{C}_{\locsys, (n)}(X). \] \end{cor} \begin{proof} Since Proposition~\ref{flat-tame-prop}(ii) applies to $q : \tilde{G}^s \to H_{\locsys}$, we conclude that the category $\QCoh^{\gm}_{\tf, (n)}(\BA^1 / H_{\locsys})$ is equivalent to the subcategory of $\QCoh^{\gm}_{\tf, (n)}(\BA^1 / \tilde{G}^s)$ consisting of sheaves $\mc{F}$ for which the action $G^{s(1)} \acts \mc{F}|_1$ factors through $G^{s(1)} \sra \on{Com}(G^{s(1)})$. But this subcategory is also equivalent to $\QCoh^G_{\tf, \locsys}(X)$ by Theorem~\ref{thm-main}.
Thus, to finish the proof, it suffices to show that \[ \QCoh^{\gm}_{\tf, (n)}(\BA^1 / H_{\locsys}) \simeq \mc{C}_{\locsys, (n)}(X). \] Using the Artin--Rees construction, we interpret the left hand side as consisting of triples $(V, F, \beta)$ where $(V, F)$ is as before, and $\beta : V \to V \otimes \oh_{H_{\locsys}|_1}$ is a filtered coaction. We observed above that $H_{\locsys}|_1 \simeq \on{Com}(G^{s(1)})$. If $I \subset \oh_{\on{Com}(G^{s(1)})}$ is the ideal of the subscheme $\Im(\sigma)$, then the filtration on $\oh_{H_{\locsys}|_1}$ (given by the definition of $H_{\locsys}$) corresponds to the filtration \begin{equation} \label{filt} F^{\ge m} \oh_{\on{Com}(G^{s(1)})} = \begin{cases} \oh_{\on{Com}(G^{s(1)})} & \text{ if } m \le 0 \\ I & \text{ if } m > 0. \end{cases} \end{equation} In terms of this filtration, the filtered coaction $\beta$ is equivalent to the data in the first bullet point in the definition of $\mc{C}_{\locsys, (n)}(X)$. It is easy to check that the second bullet point corresponds to the $(-)_{(n)}$ condition on the left hand side. \end{proof} \subsection{The universal tame quotient}\label{sec:univtame} The analysis in~\ref{s:line} and \ref{s:locsys} is especially nice because in both cases we considered a quotient $q : \tilde{G}^s \to H$ (in the framework of~\ref{s:factor}) with the property that $H \to \BA^1$ is `almost' a constant group scheme. Indeed, in~\ref{s:line}, $H$ was the constant group scheme $H \simeq \gm \times \BA^1$. And in~\ref{s:locsys}, $H_{\locsys}$ is obtained from the constant group scheme $\on{Com}(G^{s(1)}) \times \BA^1$ by replacing the zero fiber by its closed subscheme $\Im(\sigma)$. One thing these two group schemes have in common is that the $\gm$-action on their zero fibers is trivial. In what follows, we explain why this property implies that a $\gm$-equivariant scheme over $\BA^1$ is `almost constant' in a sense that will be made precise (see Lemma~\ref{tame-describe}). \begin{defn} Define the full subcategory \[ \QCoh_{\tf, \tame}^{\gm}(\BA^1) \subset \QCoh_{\tf}^{\gm}(\BA^1) \] to consist of sheaves $\mc{F} \in \QCoh^{\gm}_{\tf}(\BA^1)$ for which the action $\gm \acts \mc{F}|_0$ is trivial. We call such a sheaf \emph{tame}. Under Artin--Rees, tame sheaves correspond to filtered vector spaces $(V, F)$ with the property that $F^{\ge 0}V = V$ and $F^{\ge 1}V = F^{\infty}V$.\footnote{By definition, $F^\infty V = \bigcap_{m \in \BZ} F^{\ge m} V$ and $F^{-\infty} V = \bigcup_{m \in \BZ} F^{\ge m} V$. The filtration $F$ is exhausting if and only if $F^{-\infty} V = V$.} Such vector spaces we call \emph{tamely filtered}. \end{defn} Since this is a monoidal subcategory, we obtain an analogous notion of tame affine schemes over $\BA^1 / \gm$ as in Remark~\ref{rmk-monoidal}. In the rest of the section, we study tame affine schemes in more detail. Any (flat) affine scheme over $\BA^1 / \gm$ is given by \[ \ul{\Spec} (R, F) := \Spec \oplus_m F^{\ge m} R \xrightarrow{\pi} \Spec k[h] = \BA^1 \] for some filtered ring $(R, F)$. Here $\oplus_m F^{\ge m} R$ and $k[h]$ are graded rings, with $F^{\ge m} R$ in degree $m$ and $h$ in degree $-1$. (This convention ensures that the action $\gm \acts \BA^1_h$ is standard.) Then $\pi$ is defined by $h \mapsto 1_R \in F^{\ge -1} R$. Now $S = \ul{\Spec}(R, F)$ is \emph{tame} if $(R, F)$ is tamely filtered, or equivalently if the action $\gm \acts S|_0$ is trivial. Note that any tame affine scheme is by definition torsion-free (equivalently flat) over $\BA^1$.
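For orientation, and only as a sanity check of these grading conventions, consider the two extreme filtrations on a $k$-algebra $R$. If $F^{\ge m} R = R$ for $m \le 0$ and $F^{\ge m} R = 0$ for $m > 0$, then $\oplus_m F^{\ge m} R \simeq R[h]$ with $R$ in degree $0$ and $h = 1_R$ in degree $-1$, so that \[ \ul{\Spec}(R, F) \simeq (\Spec R) \times \BA^1 \] is the constant family, with $\gm$ acting only on the second factor; this $(R, F)$ is tamely filtered, with $F^{\infty} R = 0$. If instead $F^{\ge m} R = R$ for all $m$, then $\oplus_m F^{\ge m} R \simeq R[h, h^{-1}]$ and \[ \ul{\Spec}(R, F) \simeq (\Spec R) \times (\BA^1 \setminus \{0\}), \] which has empty zero fiber; this $(R, F)$ is again tamely filtered, now with $F^{\infty} R = F^{\ge 0} R = R$. These two cases are exactly the two behaviors which occur, component by component, in Lemma~\ref{tame-describe-2}(ii) below.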
Next, we observe that tame affine schemes are `almost constant' as described in the first paragraph of this subsection. For a vector space $V$, let $F_0$ be the filtration for which $F_0^{\ge 0}V = V$ and $F_0^{\ge 1}V = 0$. \begin{lem} \label{tame-describe} We have the following. \begin{enumerate}[label=(\roman*)] \item The map of filtered rings $(F^{\ge 0} R, F_0) \to (R, F)$ yields a $\gm$-equivariant map \[ \ul{\Spec} (R, F) \xrightarrow{p} (\Spec R) \times \BA^1. \] \item If $(R, F)$ is tamely filtered, then the base change of $p$ along $\BA^1 \setminus \{0\} \hra \BA^1$ is an isomorphism and the base change along $\{0\} \hra \BA^1$ is the closed embedding $\Spec (R/F^{\ge 1} R) \hra \Spec R$. In particular, there is a canonical map from the zero fiber of $\ul{\Spec} (R, F)$ to the general fiber. \end{enumerate} \end{lem} \begin{proof} Point (i) is a tautology. For point (ii), if $(R, F)$ is tamely filtered, then $R = F^{\ge 0} R$, from which it follows that $p$ is an isomorphism over $\BA^1 \setminus \{0\}$. The base change of $p$ along $\{0\} \hra \BA^1$ is the induced map on associated graded rings, which is $R \sra R / F^{\ge 1} R$ if $(R, F)$ is tamely filtered. \end{proof} Tame affine schemes over $\BA^1$ are usually not of finite type. For example, let $(R, F)$ be the tamely filtered ring such that $R = k[x]$ and $F^{\ge 1}R = \la x \ra$. Then the Artin--Rees ring is $\oplus_m F^{\ge m} R \simeq k[h, h^{-1}x, h^{-2}x, \cdots]$, which is not finitely generated. Geometrically, this ring can be constructed from the affine plane $\Spec k[h, x]$ by taking an infinite sequence of affine charts of blow-ups at the origin, the first of which is $k[h, x] \to k[h, h^{-1}x]$. The following lemma tells us what finite type tame affine schemes look like: \begin{lem} \label{tame-describe-2} Let $(R, F)$ be a tamely filtered $k$-algebra. \begin{enumerate}[label=(\roman*)] \item $\oplus_m F^{\ge m} R$ is finitely generated (as a $k$-algebra) if and only if $R$ is finitely generated and the ideal $F^{\ge 1} R$ is idempotent. \item If the situation of (i) obtains, then the map $p$ from Lemma~\ref{tame-describe} looks as follows. For each connected component $C \subset \Spec R$, the map $p$ is either the identity map of $C \times \BA^1$ or the open embedding $C \times (\BA^1 \setminus \{0\}) \hra C \times \BA^1$. The former occurs if and only if the ideal $F^{\ge 1}R$ vanishes on $C$. \end{enumerate} \end{lem} \begin{proof} (i). If $\oplus_m F^{\ge m} R$ is finitely generated, choose a finite set of homogeneous generators, and partition them into two groups $\{r_i\} \cup \{s_j\}$ where the $r_i$ live in nonpositive degrees and the $s_j$ live in positive degrees. Let $N$ be the largest degree which occurs among the generators. For degree reasons, every polynomial in the generators which lies in the summand $F^{\ge N+1} R$ must be a sum of monomials each of which contains at least two $s_j$'s. The $s_j$'s lie in $F^{\ge 1}R$ when viewed as elements of $R$ (i.e.\ forgetting the grading). Therefore \[ F^{\ge 1} R = F^{\ge N+1} R \subset (F^{\ge 1} R)^2 \] as ideals of $R$. (The equality uses the tamely filtered hypothesis.) Hence $F^{\ge 1} R$ is idempotent. Also, the fiber of $\Spec \oplus_m F^{\ge m} R \xrightarrow{\pi} \Spec k[h]$ at $h=1$ is $\Spec R$, so $R$ is also finitely generated. Conversely, assume that $F^{\ge 1} R$ is idempotent and $R$ is finitely generated. Since $R$ is Noetherian, $F^{\ge 1} R$ is a finitely generated ideal.
A set of generators for $\oplus_m F^{\ge m} R$ is given by taking a set of generators for $R$ placed in degree 0, a set of ideal generators for $F^{\ge 1} R$ placed in degree $1$, and the element $1_R$ placed in degree $-1$. Point (ii) is deduced from Lemma~\ref{tame-describe} and the fact that an idempotent ideal of a Noetherian ring vanishes on a union of connected components. \end{proof} Next, we observe that every filtered ring has a universal tame quotient. \begin{lem} \label{lem-tame-cat} There are adjoint functors \[ \iota : \QCoh^{\gm}_{\tf, \tame}(\BA^1) \rightleftarrows \QCoh^{\gm}_{\tf}(\BA^1) : \tau_{\tame} \] where $\iota$ is the inclusion and $\tau_{\tame}$ sends $(V, F)$ to $(F^{\ge 0} V, F_1)$ where $F_1$ is the tame filtration defined by $F_1^{\ge m} = F^{\infty} V$ for all $m > 0$. Both of these functors are monoidal. \end{lem} \begin{proof} Clear. \end{proof} Let $\Sch^{\on{aff}, \tf}_{\BA^1/\gm}$ be the category of $\gm$-equivariant affine schemes flat over $\BA^1$, and let $\Sch_{\BA^1 / \gm}^{\on{aff}, \tame}$ be the subcategory of tame affine schemes. \begin{cor}\label{cor-tame} There are adjoint functors \[ \tau^{\tame} : \Sch_{\BA^1 / \gm}^{\on{aff}, \tf} \rightleftarrows \Sch_{\BA^1 / \gm}^{\on{aff}, \tame} : \iota \] where $\iota$ is the inclusion, and both functors are monoidal with respect to the Cartesian monoidal structures. The functor $\tau^{\tame}$ preserves the class of affine schemes each of whose connected components is integral and of finite type. \end{cor} \begin{proof} The last sentence is not obvious. It suffices to show that, if $\oplus_m F^{\ge m} R$ is finitely generated and integral, then the following statements hold: \begin{enumerate}[label=(\roman*)] \item $F^{\ge 0} R$ is a finitely generated $k$-algebra. \item The ideal $F^{\infty} R$ is either $\la 0 \ra$ or $F^{\ge 0} R$.\footnote{If $F^{\infty}R = F^{\ge 0} R$, then $F$ must be the trivial filtration on $R$, that is $F^{\ge m} R = R$ for all $m$.} \end{enumerate} This is because of Lemma~\ref{tame-describe-2}(i) and the fact that we can consider each connected component separately. Let $s_i$ for $i = 1, \ldots, r$ be a finite set of homogeneous generators for the graded ring $\oplus_m F^{\ge m} R$, where $F^{\ge m} R$ lives in degree $m$. Then we have \[ F^{\ge m} R = \underset{\substack{e_1, \ldots, e_r \ge 0\\ \Sigma_i\, e_i \deg(s_i) = m}}{\on{span}} \left(\underset{i=1}{\overset{r}{\Pi}} s_i^{e_i} \right) \] where the notation means the $k$-linear span. In particular, the monomials of degree zero (whose $k$-span is $F^{\ge 0} R$) are in bijection with the kernel of the map of monoids $\BN^{\oplus r} \xrightarrow{\on{deg}} \BZ$ which sends $(e_1, \ldots, e_r) \mapsto \sum_{i} e_i \deg(s_i)$. Gordan's Lemma says that the submonoid of $\BN^{\oplus r}$ determined by the equation $\on{deg}(e_1, \ldots, e_r) = 0$ is finitely generated. This provides a finite set of $k$-algebra generators of $F^{\ge 0} R$, which proves (i). Similarly, applying Gordan's Lemma to the submonoid of $\BN^{\oplus r}$ cut out by the inequality $\deg \ge 0$ yields a finite set of generators of the graded subring $\oplus_{m \ge 0} F^{\ge m} R$. Let us denote these generators by $t_1, \ldots, t_{\ell}$, so that $\on{deg}(t_i) \ge 0$. If all these degrees are zero, then $F^{\ge m} R = \la 0 \ra$ for $m > 0$, which proves (ii). Therefore, we may assume that not all degrees are zero.
Partition the generators into two parts \[ \{t_1, \ldots, t_{\ell}\} = \{t_1, \ldots, t_{\ell_0}\} \sqcup \{t_{\ell_0+1}, \ldots, t_{\ell}\} \] so that $\deg(t_i) > 0$ if and only if $i \le \ell_0$. Then \[ F^{\ge m} R = \sum_{\substack{e_1, \ldots, e_{\ell_0} \ge 0\\ \Sigma_i\, e_i \deg(t_i) = m}} \left \langle \underset{i=1}{\overset{\ell_0}{\Pi}} t_i^{e_i} \right \rangle \] where $\langle-\rangle$ denotes an ideal in $F^{\ge 0} R$. (Note that the sum ranges over possible exponents for $t_1$ through $t_{\ell_0}$. The remaining generators $t_{\ell_0 + 1}, \ldots, t_{\ell}$ appear as coefficients when one constructs the $F^{\ge 0} R$-ideal generated by the previous elements.) On the other hand, define $I = \langle t_1, \ldots, t_{\ell_0} \rangle$, so that \[ I^m = \sum_{\substack{e_1, \ldots, e_{\ell_0} \ge 0\\ \Sigma_i\, e_i = m}} \left \langle \underset{i=1}{\overset{\ell_0}{\Pi}} t_i^{e_i} \right \rangle. \] Let $d := \max_i \deg(t_i) > 0$. Then we have \[ I^m \subset F^{\ge m} R \subset I^{\lfloor \frac{m}{d} \rfloor}, \] because we have the implications \[ \Sigma_i\, e_i \ge m \quad \Longrightarrow \quad \Sigma_i\, e_i \deg(t_i)\ge m \quad \Longrightarrow \quad \Sigma_i\, e_i d \ge m. \] Taking intersections over all $m \ge 0$ implies that $F^{\infty} R = \cap_m I^m$. Since $R$ is integral by hypothesis, the Krull Intersection Theorem implies that $F^\infty R$ is either $\la 0 \ra$ or $F^{\ge 0}R$, which proves (ii). \end{proof} We shall omit instances of $\iota$ in the subsequent notation. \begin{cor} \label{cor-tame-univ} There is a finite type flat affine group scheme $\tau^{\tame}(\tilde{G}^s)$ over $\BA^1 / \gm$ and a map $q : \tilde{G}^s \to \tau^{\tame}(\tilde{G}^s)$ of group schemes over $\BA^1/\gm$ which is initial for all such maps from $\tilde{G}^s$ to a \underline{tame} affine group scheme over $\BA^1 / \gm$. The map $q$ is dominant because $q_1$ is surjective. \end{cor} \begin{proof} This follows from Corollary~\ref{cor-tame}. For the last sentence, note that $q_1$ is dominant because $F^{\ge 0} R \to R$ is an injection for any filtered ring $(R, F)$, and this implies that $q_1$ is surjective because it is a map of group schemes. \end{proof} In view of Lemma~\ref{tame-describe-2}(ii), $\tau^{\tame}(\tilde{G}^s)$ can be described very concretely: it is defined by a group $G^{\tame}_1$ and a subgroup $G^{\tame}_0$ corresponding to a subset of the connected components. The former is the general fiber and the latter is the special fiber of $\tau^{\tame}(\tilde{G}^s)$. There is a canonical surjective map $G^{s(1)} \sra G^{\tame}_1$ and a canonical $\gm$-equivariant map $G^{s(0)} \to G^{\tame}_0$, where the $\gm$-action on the target is trivial. \begin{cd}[column sep = 1.3in] G^{s(1)} \ar[d, twoheadrightarrow, "{\mathrm{surjective}}"] & G^{s(0)} \ar[d, "\gm\text{-equivariant}"] \\ G^{\tame}_1 \ar[hookleftarrow]{r}{\text{incl.\ of components}}[swap]{\text{Lemma~\ref{tame-describe}(ii)}} & G^{\tame}_0 \end{cd} And $\tau^{\tame}(\tilde{G}^s)$ is the `universal' example of a family of quotients of the fibers of $\tilde{G}^s$ which admits such a description. \begin{rmk} By the universal property of $\tau^{\tame}(\tilde{G}^s)$, Proposition~\ref{prop:linebundles}(i) is equivalent to the assertion that every character of $G^{s(1)}$ factors through the quotient map $G^{s(1)} \sra G_1^{\on{tame}}$. 
In addition, the character $\lim \chi$ of $G^{s(0)}$ obtained by taking the limit under specialization can be computed as the composition \[ G^{s(0)} \to G_0^{\on{tame}} \hra G_1^{\on{tame}} \xrightarrow{\chi} \gm. \] In effect, the computation of the limiting character is encapsulated in the construction of the tame quotient, which contains information about the limits of all functions on $G^{s(1)}$ under specialization toward the zero fiber. \end{rmk} \subsection{The subcategory associated to a tame quotient} \label{s:subcat} Let us apply~\ref{s:factor} to a map $q : \tilde{G}^s \to H$ where $H$ is tame. For example, one could take $q$ to be the universal map produced by Corollary~\ref{cor-tame-univ}. We deduce the following generalization of Corollary~\ref{cor-locsys}: \begin{cor}\label{cor-tame-classify} Let $\mc{C}_{q, (n)}(X)$ be the category whose objects are triples $(V, F, \rho)$ as follows: \begin{itemize} \item $V$ is a vector space, $\rho : H_1 \acts V$ is a representation, and $F$ is an exhausting filtration on $V$ which is invariant under $\rho$. \item We furthermore require that the restriction of the associated graded action \[ \gm \ltimes G^{s(0)} \to \gm \times H_0 \xrightarrow{\sigma} \gm \times H_1 \acts \gr^F V \] factors through the quotient $\gm \ltimes G^{s(0)} \sra G^{\ell(0)}$. Here the specialization map $\sigma$ arises from Lemma~\ref{tame-describe}(ii). \end{itemize} The morphisms in this category are maps of filtered representations. Then there is an equivalence of categories \[ \QCoh^G_{\tf, q}(X) \simeq \mc{C}_{q, (n)}(X) \] where the left hand side is as in Definition~\ref{def-q-cat}. \end{cor} \begin{proof} The proof is identical to that of Corollary~\ref{cor-locsys}. \end{proof} Recall that Proposition~\ref{flat-tame-prop}(ii) gives a criterion under which the subcategory \[ \QCoh^G_{\tf, q}(X) \subset \QCoh^G_{\tf}(X) \] can be characterized in terms of the behavior of a sheaf $\mc{F} \in \QCoh^G_{\tf}(X)$ on $U$. Thus, when this criterion is satisfied, we have a subcategory for which it is easy to detect membership and to classify objects. Unfortunately, the closer $q$ is to being `universal,' the less likely it is to satisfy this criterion. \begin{ex} \label{ex:tame} In closing, we illustrate the preceding notions by constructing an example of a family of groups over $\BA^1$ which plays the role of $\tilde{G}^s$ and a tame quotient of this family which fails the criterion of Proposition~\ref{flat-tame-prop}(ii). Let $G = \GL_2$ and define the $\gm$-action on $G \times \BA^1$ as in~\ref{slicing-gm}(\ref{eq-act}), i.e.\ via conjugation by the cocharacter $\gamma : \gm \hra G$ defined by $\gamma(t) = \begin{pmatrix} 1 & 0 \\ 0 & t \end{pmatrix}$ on the first factor and via the standard scaling action on the second factor. Define the closed subscheme $\tilde{G}^s \subset G \times \BA^1$ by requiring that \[ \tilde{G}^s|_1 = \left\{ \left. \begin{pmatrix} a & b \\ b & a \end{pmatrix}\ \right|\ a^2 - b^2 = 1 \right\}, \] that $\tilde{G}^s$ is invariant under the $\gm$-action, and that it is flat over $\BA^1$. This implies that \[ \tilde{G}^s|_t = \left\{ \left. \begin{pmatrix} a & bt^{-1} \\ bt & a \end{pmatrix}\ \right|\ a^2 - b^2 = 1 \right\} \] for all $t\neq 0$, and \[ \tilde{G}^s|_0 = \left\{ \left. \begin{pmatrix} a & c \\ 0 & a \end{pmatrix}\ \right|\ a^2 = 1 \right\}. \] Thus, $\tilde{G}^s$ is a degeneration from the group $\gm$ to the group $\BZ / 2 \ltimes \ga$. Next, we find the universal tame quotient $\tilde{G}^s \to \tau^{\tame}(\tilde{G}^s)$.
We have $A \simeq \oh_{\tilde{G}^s|_1} \simeq k[a, b] / \la a^2 - b^2 - 1\ra$, and the filtration on $A$ is the $\la b \ra$-adic filtration, meaning that \[ F^{\ge m} A = \begin{cases} A & \text{ if } m \le 0 \\ \la b \ra^m & \text{ if } m > 0. \end{cases} \] Since $\cap_m \la b \ra^m = \la 0 \ra$, we have \[ F^{\ge m} \tau^{\tame} A = \begin{cases} A & \text{ if } m \le 0 \\ \la 0 \ra & \text{ if } m > 0. \end{cases} \] Thus $\tau^{\tame}(\wt{G}^s) \simeq \gm \times \BA^1$. The map $G^s|_1 \sra G_1^{\tame}$ is an isomorphism, while the map $G^s|_0 \to G_0^{\tame}$ is the composition \[ \BZ/2 \ltimes \ga \sra \BZ / 2 \hra \gm. \] Thus the universal tame quotient is \emph{not} flat. In fact, the only flat tame quotient of $\tilde{G}^s$ is the trivial one. The characterization of the filtration on $A$ as the $\la b \ra$-adic filtration allows one to say a bit more. The group scheme $\wt{G}^s$ is obtained from the constant family $\gm \times \BA^1$ via deformation to the normal cone applied to the subgroup \[ \{\pm 1\} = V(b) \hra \Spec A \simeq \gm. \] In general, given a group $G$ and a subgroup $H$, the deformation to the normal cone of $H$ in $G$ yields a $\gm$-equivariant degeneration from $G$ to the normal bundle $\mc{N}_{H/G}$. (The group structure of the normal bundle is given by its realization as $\mc{N}_{H/G}|_1 \rtimes H$.) The degeneration from the Poincar\'e group to the Galilean group is realized in this way. \end{ex} \subsection{Application to equivariant twisted \texorpdfstring{$\D$}{D}-modules} \label{apply} Let us relate the tame quotient idea to the application discussed in Section~\ref{sec:application}. In~\ref{sec:admissibility}, we explain why the following two subcategories of $\Vect^G(X)$ are equal: \begin{itemize} \item The subcategory consisting of sheaves $\mc{F}$ whose restriction $\mc{F}|_U$ has the property of being a strongly $G$-equivariant twisted $\D$-module with respect to a fixed $G$-equivariant twisting on $U$. \item The subcategory consisting of sheaves $\mc{F}$ for which the restricted action \[ (G^{s(1)})_\circ \hra G^{s(1)} \acts \mc{F}|_{\ell(1)} \] factors through a fixed character $\chi : (G^{s(1)})_\circ \to \gm$ corresponding to the aforementioned twisting.\footnote{We emphasize that $\chi$ need not be defined on $G^{s(1)}$, but only on its identity component.} \end{itemize} We call this category $\Vect^G_\chi(X)$. This notation generalizes the notation $\Pic^G_\chi(X)$ which was introduced in~\ref{s:line}. The most important case (in view of Theorem~\ref{thm:admissibility}) is when the twisting is a power of the determinant line bundle on $U$. In this subsection, we construct a tame quotient of $\tilde{G}^s$ corresponding to $\chi$ as above, and we reformulate the criterion of Proposition~\ref{flat-tame-prop}(ii) more concretely. The main difference between this subsection and~\ref{s:line} is that here our character $\chi$ need only be defined on the identity component of $G^{s(1)}$. The case when $\chi$ is trivial corresponds to untwisted $\D$-modules. Since this case has already been discussed in~\ref{s:locsys}, we assume that $\chi$ is nontrivial. \begin{lem} \label{lem-tame-chi} If $\Vect^G_\chi(X)$ is nonzero, then $\ker(\chi)$ is normal in $G^{s(1)}$, not just in $(G^{s(1)})_\circ$. Let $q_1 : G^{s(1)} \to Q_1$ be the quotient by the subgroup $\ker(\chi)$. Then there is a $\gm$-equivariant map $\tilde{G}^s \to Q_1 \times \BA^1$ whose fiber at 1 equals $q_1$. 
\end{lem} \begin{proof} The first sentence follows from the fact that, if $G^{s(1)}$ has a nonzero representation for which the action of $(G^{s(1)})_\circ$ factors through $\chi$, then the intersection of the kernel of this representation with $(G^{s(1)})_\circ$ must be $\ker(\chi)$. Thus, $\ker(\chi)$ is an intersection of two normal subgroups, so it is normal. Now we prove the last statement. If $G^{s(1)}$ is connected, it follows from Proposition~\ref{prop:linebundles}(i). For the general case, we shall modify the proof, which uses Lemma~\ref{lem-dim-1}. Consider the codimension-one subgroup $\ker(\chi) \subset G^{s(1)}$, extend it to a $\gm$-equivariant sub group scheme $K \subset \tilde{G}^s$ which is flat over $\BA^1$, and let $\tilde{G}^s \sra \tilde{Q}$ be the quotient by $K$. Lemma~\ref{lem-dim-1}(i) applies to the identity component of $\tilde{Q}$, and it tells us that the action of $\gm$ on the identity component of $\tilde{Q}_0$ is expanding. If $(\tilde{Q}_0)_\circ \simeq \gm$, then the action $\gm \acts \tilde{Q}_0$ must be trivial (by rigidity of tori), so the proof that (i) implies (ii) in Lemma~\ref{lem-dim-1} gives a $\gm$-equivariant map $\tilde{Q} \to Q_1 \times \BA^1$, and we are done. If $(\tilde{Q}_0)_\circ \simeq \ga$, then since every action $\gm \acts \BA^1$ is equal to a translate of a power of the standard action, and since the action $\gm \acts \tilde{Q}_0$ respects the group structure, it follows that the action $\gm \acts \tilde{Q}_0$ is expanding. Now we finish as in the previous paragraph. \end{proof} \subsubsection{Definition of the tame quotient associated to $\chi$} \label{q-tame-chi} In the notation of Lemma~\ref{lem-tame-chi}, let $Q$ be the group scheme obtained from $Q_1 \times \BA^1$ by deleting from $Q_1 \times \{0\}$ the components which are not in the image of $G^{s(0)}$. Then consider the map $q : \tilde{G}^s \to Q$ obtained by factoring the map $\tilde{G}^s \to Q_1 \times \BA^1$ through $Q$. This is a tame quotient in the sense of~\ref{sec:univtame}, and in view of Corollary~\ref{cor-tame-univ} it must factor through the universal tame quotient of $\tilde{G}^s$. It is also related to the tame quotient of components considered in~\ref{s:locsys}. We have a map of exact sequences \begin{cd} \gm \ar[d, hookrightarrow] \ar[r, shift left = 0.5, dash] \ar[r, shift right = 0.5, dash] & \gm \ar[d, hookrightarrow] \\ Q_1 \ar[d, twoheadrightarrow] \ar[r, hookleftarrow] & Q_0 \ar[d, twoheadrightarrow] \\ \Com(G^{s(1)}) \ar[r, hookleftarrow] & \Im(\sigma) \end{cd} where $\sigma$ was defined in~\ref{s:locsys}. The bottom row is the `specialization map' for $H_{\locsys}$ as defined in~\ref{s:locsys}. When the twisting is given by a power of the determinant line, there is a nice criterion for testing whether this tame quotient satisfies Proposition~\ref{flat-tame-prop}(ii). \begin{lem} \label{lem-top} If the character $\chi$ is a (nonzero) rational power of the character $(G^{s(1)})_\circ \acts \wedge^{\on{top}} \mc{T}_{\ell(1)}U$, then the tame quotient $\tilde{G}^s \to Q$ constructed above is flat if and only if the action $(G^{s(0)})_{\circ} \acts \wedge^{\on{top}}\mc{T}_{\ell(0)}Z$ is nontrivial.\footnote{Since $G^{s(0)}$ acts trivially on the fibers of $\mc{N}_{Z/X}$, it does not matter whether $\mc{T}_{\ell(0)}Z$ is interpreted as the tangent space taken in $Z$ or in $X$.} \end{lem} \begin{proof} Let $\chi : G^{s(1)} \to \gm$ be the character given by acting on the line $\wedge^{\on{top}} \mc{T}_{\ell(1)}U$.
First, we claim the following: \begin{itemize} \item The map $G^{s(0)} \to Q_0$ is surjective if and only if the character $\lim \chi$ of $G^{s(0)}$ is nontrivial on $(G^{s(0)})_\circ$. \end{itemize} Since the map $G^{s(0)} \to Q_0$ is surjective on components (by definition of $Q$), it suffices to test whether the map $(G^{s(0)})_\circ \to (Q_0)_\circ \simeq \gm$ is a surjection. This occurs if and only if it is nontrivial. Furthermore, the construction of Lemma~\ref{lem-tame-chi} shows that the latter map is given by a nonzero rational multiple of $\lim \chi$. This character is surjective if and only if $\lim \chi$ is, so the bullet point statement holds. To finish, we show that $\lim \chi$ equals the character with which $G^{s(0)}$ acts on the line $\wedge^{\on{top}}\mc{T}_{\ell(0)}Z$. The line bundle $\wedge^{\on{top}}\ell^*\mc{T}_X$ on $\BA^1$ is acted on by $G^s$ and $\gm$. By~\ref{prop:linebundles-1}, this defines a (tame) quotient $q : \tilde{G}^s \to \gm \times \BA^1$. By construction, $q_1 = \chi$, so $q_0 = \lim \chi$, and the claim follows. \end{proof} \begin{rmk} \label{rmk-nonflat} \begin{enumerate}[label=(\roman*)] \item[] \item When the criterion of Lemma~\ref{lem-top} is satisfied, the category $\Vect^G_\chi(X)$ can be described using Corollary~\ref{cor-tame-classify}. Even when the criterion is not satisfied, it is always true (by continuity) that the action $\tilde{G}^s \acts \mc{F}|_{\ell}$ corresponding to $\mc{F} \in \Vect^G_\chi(X)$ factors through the (non-tame) quotient $\tilde{G}^s \to \tilde{Q}$ constructed in the proof of Lemma~\ref{lem-tame-chi}. Using the method of Lemma~\ref{lem-dim-1}, one can prove that $\tilde{Q}$ is obtained from $Q_1 \times \BA^1$ by iterating the following two operations: \begin{itemize} \item One may delete some connected components from the special fiber. \item One may apply the blow-up construction at the end of Example~\ref{ex:tame} to a (zero-dimensional) subgroup of the special fiber. \end{itemize} We do not need this result, so we did not include the proof. \item By the way, the criterion is not satisfied in Examples~\ref{ex-su} and \ref{ex-sp}. But the present subsection does not apply to them, because there the character $\chi : (G^{s(1)})_\circ \to \gm$ is trivial. Since those examples concern line bundles, we are free to use the material of~\ref{s:line} instead; for vector bundles, we could use~\ref{s:locsys}. \end{enumerate} \end{rmk} \section{Equivariant sheaves on weakly normal chains}\label{sec:wn} In this section, we explain how to classify vector bundles on \emph{weakly normal} $G$-chains. These are the $G$-chains obtained by gluing smooth simple ones (as considered in Section~\ref{sec:main}) as transversely as possible along their codimension-one orbits. In~\ref{weak} and \ref{weak2}, we describe how to construct vector bundles on weakly normal $G$-chains by patching together vector bundles on the normalizations of their irreducible components. This material does not use Theorem~\ref{thm-main}, but that theorem (or any other classification result) can be used to describe vector bundles on the normalizations of the irreducible components. In~\ref{ssec:case-line}, we specialize to the case of line bundles and apply Theorem~\ref{thm-main} to arrive at a description of $\Vect^G(X)$ in terms of a `nearby cycles' or `gluing' functor; see Lemma~\ref{lem-nearby}.
\subsection{Weak normality} \label{weak} The condition of \emph{weak normality} was originally defined (for complex analytic spaces) by Andreotti and Norguet in \cite{AndreottiNorguet} and (for schemes) by Andreotti and Bombieri in \cite{AndreottiBombieri}. A variety $X$ is weakly normal if every finite, birational, bijective map $Y \to X$ is an isomorphism (deleting the word 'bijective' from this definition recovers the usual notion of normality). All normal schemes are weakly normal, but many others are as well. For example, a nodal cubic is weakly normal but not normal. As proved in \cite{kollar}, every Noetherian scheme $X$ admits a \emph{weak normalization}, i.e. a finite, birational map $X^{\mathrm{wn}} \to X$ satisfying the obvious universal property. Let $G \acts X$ be a chain, let $Z \hra X$ denote the closed embedding of the disjoint union of the reduced codimension one orbits, let $\nu : \wt{X} \to X$ be the normalization, and consider the following Cartesian diagram: \begin{equation} \label{patch} \begin{tikzcd} \wt{Z} \ar[r, hookrightarrow] \ar[d] & \wt{X} \ar[d, "\nu"] \\ Z \ar[r, hookrightarrow, "\iota"] & X \end{tikzcd} \end{equation} Since $\tilde{X}$ is a normal $G$-chain, it is smooth (see the last paragraph of~\ref{s-def-chains}). \begin{prop}\label{weak-prop} If $X$ is weakly normal, then the following are true: \begin{enumerate}[label=(\roman*)] \item $\wt{Z}$ is reduced. \item The diagram~(\ref{patch}) is a Milnor square, meaning that these two equivalent conditions are satisfied: \begin{itemize} \item (\ref{patch}) is a pushout square in the category of schemes. \item The map \[ \oh_X \to \oh_{\wt{X}} \underset{\oh_{\wt{Z}}}{\times} \oh_Z \] is an isomorphism, where the terms on the right hand side are interpreted as sheaves of algebras on $X$, and the fibered product is computed in the category of such. \end{itemize} \item The following diagram of categories is Cartesian \begin{cd} \Vect^G(X) \ar[d, "\nu^*"] \ar[r, "\iota^*"] & \Vect^G(Z) \ar[d] \\ \Vect^G(\wt{X}) \ar[r] & \Vect^G(\wt{Z}) \end{cd} \end{enumerate} \end{prop} \begin{proof} (i). Let $C_X \hra X$ and $C_{\wt{X}} \hra \wt{X}$ be the conductor subschemes associated to $\nu$. Proposition 53 from~\cite{kollar} implies that $C_{\wt{X}}$ and $C_X$ are reduced. Hence $C_X$ is a union of components of $Z$. The relation $C_{\wt{X}} = \nu^*C_X$ (as closed subschemes of $\wt{Z}$) implies that the components of $\wt{Z}$ which lie over $C_X$ are reduced. The map $\wt{Z} \to Z$ must be an isomorphism over the components of $Z$ which are not contained in the conductor $C_X$, so the components of $\wt{Z}$ which do not lie over $C_X$ are also reduced. Point (ii) follows from the definition of the conductor ideal and the previous paragraph. Point (iii) follows from Milnor's Patching Theorem, as stated in~\cite{k-book}. Although this theorem is usually stated for affine schemes (i.e.\ rings), it also applies to non-affine schemes because vector bundles are a sheaf of categories over a scheme. \end{proof} \subsection{Patching vector bundles on chains} \label{weak2} \subsubsection{The graph of a chain} Preserve the notation of~\ref{weak}, so $G \acts X$ is a chain. Let $\{U_j\}_{j \in S_U}, \{Z_i\}_{i \in S_Z}, \{\tilde{Z}_k\}_{k \in S_{\tilde{Z}}}$ be the irreducible components\footnote{This term was defined in~\ref{s-def-chains}.} of $U$, $Z$, and $\tilde{Z}$, respectively. Here $S_U, S_Z, S_{\tilde{Z}}$ are sets that index the irreducible components. 
In this subsection we describe a diagram in the category of $G$-chains consisting of the objects $U_j, Z_i, \tilde{Z}_k$, and $U_j \cup \tilde{Z}_k$ whenever these two $G$-orbits are adjacent in $\tilde{X}$. The maps of this diagram will be given by the obvious ones shown below: \begin{cd} & U_j \cup \tilde{Z}_k \\ U_j \ar[ru, "\text{inclusion}"] & & \tilde{Z}_k \ar[lu, swap, "\text{inclusion}"] \ar[d, "\text{restriction of } \nu"] \\ & & Z_i \end{cd} First, we define maps of sets \begin{cd} S_U & S_{\tilde{Z}} \ar[l, swap, "a"] \ar[d, "\nu"] \\ & S_Z \end{cd} The map $a$ sends $k \in S_{\tilde{Z}}$ to the element $j \in S_U$ such that $\tilde{Z}_k$ is adjacent to $U_j$ in $\tilde{X}$. This $j$ is unique because $\tilde{X}$ is smooth (see~\ref{s-def-chains}). The map $\nu$ sends $k \in S_{\tilde{Z}}$ to the element $i \in S_Z$ such that the normalization map $\nu : \tilde{X} \to X$ sends $\tilde{Z}_k$ to $Z_i$. Next, we define a directed graph $\Gamma$ as follows: \begin{itemize} \item The vertex set is $S_U \sqcup S_{\tilde{Z}}^{(1)} \sqcup S_{\tilde{Z}}^{(2)} \sqcup S_Z$. As indicated here, we will use superscripts $(-)^{(1)}$ and $(-)^{(2)}$ to differentiate between the two copies of $S_{\tilde{Z}}$ in the vertex set. \item For every $k \in S_{\tilde{Z}}$, there are arrows \begin{cd} & k^{(1)} \\ a(k) \ar[ru] & & k^{(2)} \ar[lu] \ar[d] \\ & & \nu(k) \end{cd} \end{itemize} Finally, we define a functor $F : \Gamma \to G\mathrm{\text{-}Chains}$ which sends $j \in S_U$ to $U_j$, sends $k^{(1)} \in S_{\tilde{Z}}^{(1)}$ to $U_{a(k)} \cup \tilde{Z}_k$, sends $k^{(2)} \in S_{\tilde{Z}}^{(2)}$ to $\tilde{Z}_k$, and sends $i \in S_Z$ to $Z_i$. The behavior on arrows is given by the first diagram in this subsubsection. \begin{prop}\label{weak-prop-2} If $X$ is weakly normal, then the limit of the diagram of categories given by $\Vect^G(-) \circ F^{\op} : \Gamma^{\op} \to \on{Cat}$ is equivalent to $\Vect^G(X)$. \end{prop} \begin{proof} For the duration of this paragraph, fix $j \in S_U$ and let $k$ range over the subset $a^{-1}(j) \subset S_{\tilde{Z}}$. The subgraph of $\Gamma$ induced by the vertices $j$ and $k^{(1)}$ is connected to the rest of $\Gamma$ only via the edges $k^{(1)} \leftarrow k^{(2)}$ for each $k$. The limit of categories associated to this induced subgraph is $\Vect^G(\ol{U}_j)$ because the $U_j \cup \tilde{Z}_k$ constitute an open cover of $\ol{U}_j$. Thus, we may modify $F$ and $\Gamma$ (without changing the limit) by contracting this subgraph to a single vertex $v_{j}$ which maps to the category $\Vect^G(\ol{U}_j)$. Now we vary $j \in S_U$ and perform this replacement for all $j \in S_U$. We obtain a new directed graph $\Gamma'$ as follows: \begin{itemize} \item The vertex set is $\{v_j \, | \, j \in S_U\} \sqcup S_{\tilde{Z}}^{(2)} \sqcup S_{Z}$. \item For each $k \in S_{\tilde{Z}}$, there are edges $v_{a(k)} \leftarrow k^{(2)} \to \nu(k)$. \end{itemize} We also obtain a new diagram $(\Gamma')^{\op} \to \on{Cat}$ which sends $v_j \mapsto \Vect^G(\ol{U}_j)$ for each $j \in S_U$. By construction, this diagram has the same limit as the old one. The product of the categories associated to the $v_j$ is equivalent to $\Vect^G(\tilde{X})$. Similarly, the product of the categories associated to the $k \in S_{\tilde{Z}}$ is equivalent to $\Vect^G(\tilde{Z})$, and the product of the categories associated to the $i \in S_Z$ is equivalent to $\Vect^G(Z)$. 
Now it is easy to see (using the definition of $\Gamma'$) that the desired limit of categories coincides with that of Proposition~\ref{weak-prop}(iii), as desired. \end{proof} \begin{rmk} Concretely, this means that a $G$-equivariant vector bundle on $X$ is equivalent to the following data: \begin{itemize} \item We choose $\mc{V}_i \in \Vect^G(Z_i)$ for each $i \in S_Z$. \item We choose $\mc{W}_j \in \Vect^G(U_j)$ for each $j \in S_U$. \item For each $k \in S_{\tilde{Z}}$, we choose $\mc{E} \in \Vect^G(U_{a(k)} \cup \tilde{Z}_k)$ along with isomorphisms $\mc{E}|_{U_{a(k)}} \simeq \mc{W}$ and $\mc{E}|_{\tilde{Z}_k} \simeq \nu^*(\mc{V}_{\nu(k)})$, where $\nu : \tilde{Z}_k \to Z_{\nu(k)}$ is the restriction of the normalization map of $X$. \end{itemize} Since there are no two-dimensional cells in $\Gamma'$, cocycle conditions do not appear in the above list. This is a convenient consequence of the `codimension zero or one' requirement in the definition of a chain. This gives a purely algebraic description of $\Vect^G(X)$. Indeed, if we choose basepoints $z_i \in Z_i$ and $u_j \in U_j$, then we have equivalences $\Vect^G(Z_i) \simeq \on{Rep}(\stab_G(z_i))$ and $\Vect^G(U_j) \simeq \on{Rep}(\stab_G(u_j))$. Theorem~\ref{thm-main} can be used to describe the category $\Vect^G(U_j \cup \tilde{Z}_i)$. \end{rmk} \begin{cor} If $X$ is weakly normal and $\Gamma$ is contractible,\footnote{This means that the topological space associated to $\Gamma$ is contractible, i.e.\ the directed graph $\Gamma$ does not contain a `zig-zag cycle.'} then the assertion of Proposition~\ref{weak-prop-2} also holds at the level of isomorphism classes. \end{cor} \subsection{The case of line bundles} \label{ssec:case-line} \subsubsection{An analogue of the nearby cycles functor} \label{def-nearby} For this subsubsection only, we return to the setting of Section~\ref{sec:main}, so that $X = U \cup Z$ is a smooth simple fastened $G$-chain with one closed orbit. We shall describe the category $\Pic^G(X)$ in a way which more closely resembles the `gluing data' approach mentioned in~\ref{intro-gluing}. Choose a fastening datum $(\gamma, \ell)$ for $X$. This allows one to define a `nearby cycles' functor \[ \Psi : \Pic^G(U) \to \Pic^G(\mc{N}^{\circ}_{Z/X}), \] given by the composition \begin{cd} \Pic^G(U) \ar[r] & \Pic(\pt / G^{s(0)}) \ar[leftarrow]{r}{\text{restrict}}[swap]{\sim} & \Pic^G(\mc{N}^{\circ}_{Z/X}) \end{cd} To define the unlabeled arrow, interpret an object of $\Pic^G(U)$ as a pair $(\chi, V)$ where $\chi : G^{s(1)} \to \gm$ is a character and $V$ is a 1-dimensional vector space. This maps to $(\lim \chi, V) \in \Pic(\pt / G^{s(0)})$ where $\lim \chi$ is the character of $G^{s(0)}$ obtained by taking a limit of $\chi$ under $\gamma$-conjugation. (Note that $\lim \chi$ exists by Proposition~\ref{prop:linebundles}(i).) \begin{lem} \label{lem-nearby} We have a Cartesian diagram \begin{cd} \Pic^G(X) \ar[r] \ar[d] & \Pic^G(Z) \ar[d, "\mathrm{pullback}"] \\ \Pic^G(U) \ar[r, "\Psi"] & \Pic^G(\mc{N}^{\circ}_{Z/X}) \end{cd} \end{lem} \begin{proof} An object in the Cartesian product of the labeled maps is given by a triple $(\chi, V, \chi')$ as follows: \begin{itemize} \item $\chi : G^{s(1)} \to \gm$ is a character, $V$ is a 1-dimensional vector space, and $\chi' : G^{\ell(0)} \to \gm$ is a character. \item These data must satisfy the requirement that the restriction of $\chi'$ along $G^{s(0)} \xrightarrow{\iota} G^{\ell(0)}$ equals $\lim \chi$. 
\end{itemize} On the other hand, an object in $\Pic^G(X)$ corresponds (via~\ref{prop:linebundles-1}) to a triple $(V, F, \chi)$ as follows: \begin{itemize} \item $V$ is a 1-dimensional vector space, $F$ is an exhausting filtration on $V$ (i.e.\ an integer $d = \deg V$), and $\chi : G^{s(1)} \to \gm$ is a character. \item These data must satisfy the requirement that the character of $\gm \ltimes G^{s(0)}$ defined by $(d, \lim \chi)$ descends along the map $\gm \ltimes G^{s(0)} \sra G^{\ell(0)}$. \end{itemize} We define an equivalence between these two categories by sending \[ (V, F, \chi) \mapsto (\chi, V, \chi') \] where $\chi'$ is the character of $G^{\ell(0)}$ given by the last bullet point above, which in particular depends on $F$. The inverse equivalence is defined by sending $(\chi, V, \chi')$ to the triple $(V, F, \chi)$ where $F$ places $V$ in degree $d$, for the integer $d$ defined by the weight of the action $\gm \xrightarrow{\gamma} G^{\ell(0)} \acts V$. It is straightforward to check that these are mutually inverse equivalences, and that there are natural isomorphisms which make this equivalence compatible with the projections to $\Pic^G(U)$ and $\Pic^G(Z)$. For the latter, it is important to note that the vector spaces $V$ and $\gr^F V$ are \emph{canonically} isomorphic, because $V$ is 1-dimensional. \end{proof} \subsubsection{The category of line bundles} By combining Proposition~\ref{weak-prop-2} and Lemma~\ref{lem-nearby}, we obtain a characterization of $\Pic^G(X)$ as the limit of a diagram of categories with maps as follows, where $\Psi_{j, k}$ is the nearby cycles functor $\Psi$ for the smooth simple chain $U_j \cup \tilde{Z}_k$: \begin{cd} \Pic^G(U_j) \ar[rd, "\Psi_{j, k}"] & & \Pic^G(Z_{\nu(k)}) \ar{ld}[swap]{\text{pullback}}\\ & \Pic^G(\mc{N}^\circ_{\tilde{Z}_k / (U_j \cup \tilde{Z}_k)}) \end{cd} In what follows, we explain this observation in more detail. Let $\Gamma_{\on{pic}}$ be the directed graph defined as follows: \begin{itemize} \item The vertex set is $S_U \sqcup S_{\tilde{Z}} \sqcup S_Z$. \item For every $k \in S_{\tilde{Z}}$, there are arrows $a(k) \to k \leftarrow \nu(k)$. \end{itemize} Let $F_{\on{pic}} : \Gamma_{\on{pic}} \to \on{Cat}$ be the diagram defined as follows: \begin{itemize} \item It sends $j \in S_U$ to $\Pic^G(U_j)$. \item It sends $k \in S_{\tilde{Z}}$ to $\Pic^G(\mc{N}^{\circ}_{\tilde{Z}_k / (U_{a(k)} \cup \tilde{Z}_k)})$. \item It sends $i \in S_Z$ to $\Pic^G(Z_i)$. \item Its behavior on arrows is given by the above diagram. \end{itemize} \begin{cor}\label{cor:linebundles} If $X$ is weakly normal, then we have $\lim F_{\on{pic}} \simeq \Pic^G(X)$. \end{cor} \begin{proof} This follows from similar `diagram modifications' as were used in the proof of Proposition~\ref{weak-prop-2}. Starting with the diagram of Proposition~\ref{weak-prop-2} but with $\Pic^G(-)$ in place of $\Vect^G(-)$, one first replaces subdiagrams of the form \begin{cd} & \Pic^G(U_{a(k)} \cup \tilde{Z}_k) \ar[ld] \ar[rd] \\ \Pic^G(U_{a(k)}) & & \Pic^G(\tilde{Z}_k) \end{cd} with diagrams of the form \begin{cd} \Pic^G(U_{a(k)}) \ar[rd, "\Psi_{j, k}"] & & \Pic^G(\tilde{Z}_k) \ar[ld] \\ & \Pic^G(\mc{N}^\circ_{\tilde{Z}_k / (U_{a(k)} \cup \tilde{Z}_k)}) \end{cd} Lemma~\ref{lem-nearby} tells us that this does not change the resulting limit of categories. Lastly, for each $i \in S_Z$, one may replace the subdiagram induced by $\Pic^G(Z_i)$ and $\Pic^G(\tilde{Z}_k)$ for all $k \in \nu^{-1}(i)$ by the single object $\Pic^G(Z_i)$. 
This is because the limit does not change if we pass to an initial subdiagram. \end{proof} \begin{rmk} Concretely, this means that a $G$-equivariant line bundle on $X$ is equivalent to the following data: \begin{itemize} \item We choose $\mc{V}_i \in \Pic^G(Z_i)$ for each $i \in S_Z$. \item We choose $\mc{W}_j \in \Pic^G(U_j)$ for each $j \in S_U$. \item For each $k \in S_{\tilde{Z}}$, we choose an isomorphism $\eta : \Psi(\mc{W}_{a(k)}) \simeq \pi^*\mc{V}_{\nu(k)}$ of objects in $\Pic^G(\mc{N}^\circ_{\tilde{Z}_k / (U_{a(k)} \cup \tilde{Z}_k)})$, where $\pi$ is the composition \[ \mc{N}^\circ_{\tilde{Z}_k / (U_{a(k)} \cup \tilde{Z}_k)} \to \tilde{Z}_k \to Z_{\nu(k)}. \] \end{itemize} As was the case in Proposition~\ref{weak-prop-2}, there are no cocycle conditions because $\Gamma_{\on{pic}}$ contains no 2-cells. \end{rmk} \section{An application to the representation theory of real reductive groups}\label{sec:application} Let $G_{\mathbb{R}}$ be the real points of a connected reductive algebraic group and let $G$ be the complexification of $G_{\mathbb{R}}$. Choose a Cartan involution $\theta$ of $G$ and let $K = G^{\theta}$, the fixed-points of $\theta$. By definition, $K \cap G_{\mathbb{R}} \subset G_{\mathbb{R}}$ is a maximal compact subgroup. If we write $\mathfrak{g}$ and $\mathfrak{k}$ for the Lie algebras of $G$ and $K$, respectively, the differential of $\theta$ provides a decomposition $$\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$$ into $+1$ and $-1$ eigenspaces. Let $\mathcal{N} \subset \mathfrak{g}$ be the set of nilpotent elements of $\mathfrak{g}$. Then $\mathcal{N}$ is closed in the Zariski topology on $\mathfrak{g}$ and invariant under the commuting actions of $G$ and $\mathbb{C}^{\times}$. The $G$-action on $\mathcal{N}$ has finitely many orbits. An important player in the representation theory of $G_{\mathbb{R}}$ is the closed subvariety $$\mathcal{N}_{\theta} := \mathcal{N} \cap \mathfrak{p}$$ This subset is invariant under $K$ and $\mathbb{C}^{\times}$ (though not under $G$). By a result of Kostant-Rallis (\cite{KostantRallis1971}), the $K$-action on $\mathcal{N}_{\theta}$ has finitely many orbits. Each $K$-orbit on $\mathcal{N}_{\theta}$ is a Lagrangian submanifold of its $G$-saturation, which is a $G$-orbit on $\mathcal{N}$. If $V$ is an irreducible admissible representation of $G_{\mathbb{R}}$, there is an associated class in the Grothendieck group $K\mathrm{Coh}^K(\mathcal{N}_{\theta})$ of $K$-equivariant coherent sheaves on $\mathcal{N}_{\theta}$, constructed as follows. The Harish-Chandra module of $V$ is a $\mathfrak{g}$-module $M$ with an algebraic action of $K$. These structures are compatible in the two obvious ways: the action map $$\mathfrak{g} \otimes M \to M$$ is $K$-equivariant and the $\mathfrak{g}$-action coincides on $\mathfrak{k}$ with the differentiated action of $K$. We say, in short, that $M$ is a $(\mathfrak{g},K)$-module. The module $M$ is irreducible, since $V$ is. Every finite-length $(\mathfrak{g},K)$-module $M$ admits a good filtration $\cdots \subset M_{-1} \subset M_0 \subset M_1 \subset \cdots$ by $K$-invariant subspaces. The associated graded $\gr(M)$ has the structure of a graded, $K$-equivariant, coherent sheaf on $\mathfrak{p}$, with support contained in $\mathcal{N}_{\theta}$. Although $\gr(M)$ depends on the filtration used to define it, its class $[\gr(M)]$ in the Grothendieck group $K\mathrm{Coh}^K(\mathcal{N}_{\theta})$ does not. 
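For the reader's convenience, we sketch the key step of the standard argument behind this independence; the sketch is only illustrative and is not used in what follows. Assume first that the two good filtrations $F$ and $F'$ of $M$ are \emph{adjacent}, i.e. $F'_n \subset F_n \subset F'_{n+1}$ for all $n$ (the general case reduces to this one by a standard interpolation between $F$ and $F'$, e.g. via the good filtrations $F_n + F'_{n-j}$ for $j \in \mathbb{Z}$, together with the observation that shifting a filtration does not change the class). The inclusions $F'_n \subset F_n$ induce a morphism of graded, $K$-equivariant $S(\mathfrak{p})$-modules $\gr_{F'}(M) \to \gr_F(M)$, and a direct check yields an exact sequence $$0 \to C(-1) \to \gr_{F'}(M) \to \gr_F(M) \to C \to 0, \qquad C := \bigoplus_n F_n/F'_n,$$ where $C(-1)$ denotes $C$ with its grading shifted by one. Since $C$ and $C(-1)$ are isomorphic after forgetting the grading, it follows that $[\gr_F(M)] = [\gr_{F'}(M)]$ in $K\mathrm{Coh}^K(\mathcal{N}_{\theta})$.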
In particular, the set $$\mathrm{AV}(M) := \mathrm{Supp}(\gr(M)) \subset \mathcal{N}_{\theta},$$ called the \emph{associated variety} of $M$, is a well-defined invariant.\footnote{Here $\mathrm{AV}(M)$ is defined to be the \emph{reduced} support, so that the associated variety is indeed a variety. Although $\gr(M)$ may not be scheme-theoretically supported on $\mathrm{AV}(M)$ depending on the filtration of $M$, it is always possible to interpret $[\gr(M)]$ as an element of $K_0(\mathrm{AV}(M))$, and we shall do so.} The correspondence between irreducible $G_{\mathbb{R}}$-representations and classes in $K\mathrm{Coh}^K(\mathcal{N}_{\theta})$ is given by the assignment $V \mapsto [\gr(M)]$: \begin{center} \begin{tikzpicture} \draw (-5,0) node[rectangle, draw]{Irred $G_{\mathbb{R}}$-reps}; \draw (0,0) node[ rectangle,draw]{Irred $(\mathfrak{g},K)$-mods}; \draw (5,0) node[ rectangle,draw]{$K\mathrm{Coh}^K(\mathcal{N}_{\theta})$}; \draw[->] (-3.5,0) -- (-2,0); \draw[->] (2,0) -- (3.5,0); \draw (-2.75,.75) node[text width=3cm, align=center]{Harish-Chandra module}; \draw (2.75,.5) node{$\mathrm{gr}$}; \end{tikzpicture} \end{center} The Arthur conjectures (\cite{Arthur1983},\cite{Arthur1989}) predict the existence of a small finite set of irreducible representations of $G_{\mathbb{R}}$ -- called \emph{unipotent} by Arthur -- with very interesting properties. A working definition can be found for example in \cite{BarbaschVogan1985}. These representations have been studied extensively over the past several decades, but the general theory remains elusive. The unipotent representations of $G_{\mathbb{R}}$ give rise, by the correspondence described above, to a finite set of distinguished classes in $K\mathrm{Coh}^K(\mathcal{N}_{\theta})$, which we will call \emph{unipotent sheaves}. \begin{center} \begin{tikzpicture} \draw (-5,0) node[rectangle, draw]{Irred $G_{\mathbb{R}}$-reps}; \draw (0,0) node[ rectangle,draw]{Irred $(\mathfrak{g},K)$-mods}; \draw (5,0) node[ rectangle,draw]{$K\mathrm{Coh}^K(\mathcal{N}_{\theta})$}; \draw[->] (-3.5,0) -- (-2,0); \draw[->] (2,0) -- (3.5,0); \draw (-2.75,.75) node[text width=3cm, align=center]{Harish-Chandra module}; \draw (2.75,.5) node{$\mathrm{gr}$}; \draw(-5,-2) node[rectangle, draw]{Unipotent $G_{\mathbb{R}}$-reps}; \draw(5,-2) node[rectangle, draw]{Unipotent sheaves}; \draw [right hook->] (-5,-1.5) -- (-5,-.5); \draw [right hook->] (5,-1.5) -- (5,-.5); \draw [->] (-3,-2) -- (3,-2); \end{tikzpicture} \end{center} This discussion suggests the following natural question \begin{question}\label{question} Is there a purely geometric description of the unipotent sheaves? \end{question} \subsection{Admissibility} \label{sec:admissibility} In \cite{Vogan1991}, Vogan provides a partial answer to Question \ref{question}. The key definition is the following one, due originally to Schwartz: \begin{defn}[\cite{Schwartz}]\label{def:admissibility} Let $U$ be a homogeneous space for $K$. Choose $e \in U$ so that $U = K/\stab_K(e)$. Homogeneous sheaves of twisted differential operators on $U$ are parameterized by characters of the Lie algebra $\mathfrak{k}^{e}$ (see, for example, Appendix A in \cite{HechtMilicicSchmidWolf}). Let $\mathrm{tr}$ be the character of $\mathfrak{k}^{e}$ corresponding to the canonical line bundle on $U$. 
This character is given by the explicit formula $$\mathrm{tr}(X) = \mathrm{Tr}(\mathrm{ad}(X)|_{\mathfrak{k}^{e}}) - \mathrm{Tr}(\mathrm{ad}(X)|_{\mathfrak{k}}) \qquad X \in \mathfrak{k}^{e}$$ A $K$-equivariant vector bundle $\mathcal{E} \in \Vect^K(U)$ is \emph{admissible} if $\mathcal{E}$ has the structure of a strongly $K$-equivariant $\mathcal{D}_{U}^{\frac{1}{2}\mathrm{tr}}$-module, where $\mathcal{D}_{U}^{\frac{1}{2}\mathrm{tr}}$ is the homogeneous sheaf of twisted differential operators corresponding to the character $\frac{1}{2}\mathrm{tr}$ of $\mathfrak{k}^{e}$. More explicitly, the fiber $E = \mathcal{E}|_e$ carries a representation $\rho$ of $\stab_K(e)$, and $\mathcal{E}$ is admissible if $\rho$ satisfies the condition $$d\rho = \frac{1}{2}\mathrm{tr}\cdot \mathrm{Id}_{E} \in \mathrm{Rep}(\mathfrak{k}^e)$$ \end{defn} As is clear from the second formulation of Definition \ref{def:admissibility}, admissibility is a property of the class $[\mathcal{E}] \in K\Vect^K(U)$. Vogan proves the following. \begin{theorem}[\cite{Vogan1991}, Theorem 8.7]\label{thm:admissibility} Suppose $M$ is the Harish-Chandra module of a unipotent representation of $G_{\mathbb{R}}$. Let $U_1,\ldots,U_m$ be the open $K$-orbits on $\mathrm{AV}(M)$. Then the classes $[\gr(M)|_{U_i}] \in K\Vect^K(U_i)$ are admissible. \end{theorem} If $\mathcal{E}$ is a unipotent sheaf, Theorem \ref{thm:admissibility} imposes rigid constraints on the restrictions $\mathcal{E}|_{U_i}$ to the open $K$-orbits on its support. However, it says nothing about the relationship between the various $\mathcal{E}|_{U_i}$ or about the restrictions $\mathcal{E}|_{Z_j}$ to $K$-orbits of codimension $\geq 1$. The machinery developed in Sections \ref{sec:chains} and \ref{sec:wn} sheds light on both of these issues. \subsection{The chain associated to an irreducible \texorpdfstring{$(\mathfrak{g},K)$}{(g,K)}-module} \label{gk-chain} Let $M$ be an irreducible $(\mathfrak{g},K)$-module and let $[\gr(M)] \in K\mathrm{Coh}^K(\mathcal{N}_{\theta})$ be the class defined in Section \ref{sec:application}. Since $M$ is irreducible, $\mathrm{AV}(M)$ is of pure dimension (see \cite{Vogan1991}, Theorem 8.4). Hence, the subset $$\mathrm{Ch}(M) := \mathrm{AV}(M) \setminus (\text{all orbits of codimension} \geq 2)$$ is a $\mathbb{C}^{\times}$-invariant $K$-chain. Usually, $\mathrm{Ch}(M)$ is \emph{not} fastened. The following example is typical. \begin{ex}\label{ex:nonfastened2} Let $G_{\mathbb{R}} = Sp(4,\mathbb{R})$. Then $G = Sp(4,\mathbb{C})$ and $\mathfrak{g} = \mathfrak{sp}(4,\mathbb{C})$. In standard coordinates, $$\mathfrak{g} = \left\{\left( \begin{array}{c|c} A & B\\ \hline C & -A^t \end{array}\right): B,C \text{ symmetric}\right\}$$ Choose the Cartan involution $$\theta: G \to G \qquad \theta(X) = \ ^tX^{-1}$$ Then $K$ is identified with $GL_2(\mathbb{C})$ and $\mathfrak{p}$ is identified (as a representation of $K$) with $\mathrm{Sym}^2\mathbb{C}^2 \oplus \mathrm{Sym}^2(\mathbb{C}^2)^*$. The $K$-orbits on $\mathcal{N}_{\theta}$ are parameterized by partitions of $4$ with signs attached to even parts and all odd parts occurring with even multiplicity (see \cite{CollingwoodMcgovern}). 
The closure orderings and codimensions are indicated below: \begin{center} \begin{tikzcd} & \mathcal{O}_{4^+} & & \mathcal{O}_{4^-} & \\ \mathcal{O}_{2^+2^+} \arrow[dash,ur, "1"] & & \mathcal{O}_{2^+2^-} \arrow[dash,ul, "1"] \arrow[dash,ur,"1"] & & \mathcal{O}_{2^-2^-} \arrow[dash,ul,"1"]\\ & \mathcal{O}_{2^+11} \arrow[dash,ul,"1"] \arrow[dash,ur,"1"] & & \mathcal{O}_{2^-11} \arrow[dash,ul,"1"] \arrow[dash,ur,"1"] & \\ & & \mathcal{O}_{1111} =\{0\} \arrow[dash,ul,"2"] \arrow[dash,ur,"2"]& &\\ \end{tikzcd} \end{center} Choose elements $e_{4^+} \in \mathcal{O}_{4^+}$, $e_{4^-} \in \mathcal{O}_{4^-}$, and so on, and for each $e$, write $\mathrm{Red}_K(e)$ for the Levi factor of the isotropy group $\stab_K(e)$ (well-defined up to isomorphism). We compute \begin{align*} &\mathrm{Red}_K(e_{4^+}) \cong \mathrm{Red}_K(e_{4^-}) \cong \{\pm 1\} \\ &\mathrm{Red}_K(e_{2^+2^+}) \cong \mathrm{Red}_K(e_{2^-2^-}) \cong O_2(\mathbb{C}) \qquad \mathrm{Red}_K(e_{2^+2^-}) \cong \{\pm 1\} \times \{\pm 1\}\\ &\mathrm{Red}_K(e_{2^+11}) \cong \mathrm{Red}_K(e_{2^-11}) \cong \mathbb{C}^{\times} \times \{\pm 1\}\\ &\mathrm{Red}_K(0) \cong GL_2(\mathbb{C}) \end{align*} Let $M$ be the Harish-Chandra module of the spherical principal series representation of $G_{\mathbb{R}}$ of infinitesimal character $0$. Then $\mathrm{AV}(M) = \mathcal{N}_{\theta}$ and $$\mathrm{Ch}(M) = \mathcal{O}_{4^+} \cup \mathcal{O}_{4^-} \cup \mathcal{O}_{2^+2^+} \cup \mathcal{O}_{2^+2^-} \cup \mathcal{O}_{2^-2^-}$$ The irreducible components of $\mathrm{Ch}(M)$ are $$Y_+ := \mathcal{O}_{4^+} \cup \mathcal{O}_{2^+2^+} \cup \mathcal{O}_{2^+2^-} \qquad Y_- := \mathcal{O}_{4^-} \cup \mathcal{O}_{2^+2^-} \cup \mathcal{O}_{2^-2^-}$$ Both components are singular, but the normalizations $\tilde{Y}_+$ and $\tilde{Y}_-$ are smooth. Consider, for example, $\tilde{Y}_+$. It has a unique open $K$-orbit $U_+$ lying over $\mathcal{O}_{4^+}$. Over the codimension $1$ orbits in $Y_+$, $\tilde{Y}_+$ contains several closed orbits $Z_{++}^1,\ldots,Z_{++}^m$ and $Z_{+-}^1,\ldots,Z_{+-}^n$ lying over $\mathcal{O}_{2^+2^+}$ and $\mathcal{O}_{2^+2^-}$, respectively. If we choose $z_{+-}^1 \in Z_{+-}^1$ lying over $e_{2^+2^-}$, there is an injection $$\mathrm{Red}_K(z_{+-}^1) \subseteq \mathrm{Red}_K(e_{2^+2^-}) \cong \{\pm 1\} \times \{\pm 1\}$$ Hence, the character $\stab_K(z_{+-}^1) \to \mathbb{C}^{\times}$ corresponding to the normal bundle of $Z_{+-}^1$ in $\tilde{Y}_+$ factors through a finite group (of order dividing $4$). In particular, by Remark~\ref{rmk-fastened-normal}, $\mathrm{Ch}(M)$ fails to be fastened. \end{ex} There is a remedy for this problem which exploits the $\mathbb{C}^{\times}$-action on $\mathrm{Ch}(M)$. In almost all cases, $\mathrm{Ch}(M)$ does not contain $0$ (in the few cases when it does, $\mathrm{Ch}(M)$ is fastened, so there is nothing to worry about). Let $\overline{\mathrm{Ch}}(M)$ be the image of $\mathrm{Ch}(M) \subset \mathfrak{p}$ in the projectivization of $\mathfrak{p}$, that is \[ \overline{\mathrm{Ch}}(M):=\mathrm{Ch}(M)/\mathbb{C}^{\times} \subset \mathbb{P}\mathfrak{p}. \] Note that $\overline{\mathrm{Ch}}(M)$ is a $K$-chain of one dimension less than $\mathrm{Ch}(M)$. In the following subsection, we will prove that $\overline{\mathrm{Ch}}(M)$ is fastened (even when $\mathrm{Ch}(M)$ is not). \subsection{The Slodowy slice} Let $e \in \mathcal{N}_{\theta}$. By the Jacobson-Morozov theorem, there is an $\mathfrak{sl}_2$-triple $(e,f,h)$ containing $e$ as its nilpositive element. 
By a result of Kostant-Rallis (\cite{KostantRallis1971}, Proposition 4), we may arrange that $f \in \mathcal{N}_{\theta}$ and $h \in \mathfrak{k}$. \begin{defn} The Slodowy slice through $e$ is the affine subspace $S_e \subset \mathfrak{p}$ defined by $$S_e := e + \mathfrak{p}^f$$ \end{defn} \begin{prop}\label{prop:propertiesofSlodowy} The Slodowy slice has the following properties: \begin{enumerate} \item $S_e$ is transverse to the $K$-orbit $K \cdot e \subset \mathfrak{p}$. \item $S_e \cap K \cdot e = \{e\}$. \item $S_e$ is invariant under the adjoint action of $\stab_K(e,f,h)$. \end{enumerate} \end{prop} \begin{proof} These facts are standard. Proofs can be found in \cite{GanGinzburg2001}. \end{proof} There is a $\mathbb{C}^{\times}$-action on $S_e$ first defined by Gan and Ginzburg in \cite{GanGinzburg2001}. \begin{defn} Let $\tau: \mathbb{C}^{\times} \to K$ be the co-character defined by the requirement $$d\tau(1) = h$$ The \emph{Kazhdan} action of $\mathbb{C}^{\times}$ on $\mathfrak{p}$ is defined by the formula $$t \ast X = t^2\Ad(\tau(t^{-1}))(X) \qquad t \in \mathbb{C}^{\times},X \in \mathfrak{p}$$ \end{defn} This action has the following useful properties. \begin{prop}\label{prop:propsofKazhdanaction} The Kazhdan action on $\mathfrak{p}$ \begin{enumerate} \item fixes $e$, \item preserves $S_e$, \item contracts $S_e$ onto $e$, i.e. \[ \lim_{t \to 0} (t \ast X) = e \quad \forall X \in S_e, \] \item commutes with the adjoint action of $\stab_K(e,f,h)$ \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Compute $$t \ast e = t^2\Ad(\tau(t^{-1}))(e) = t^2t^{-2}e=e,$$ where we used that $\Ad(\tau(t^{-1}))(e) = t^{-2}e$ because $[h,e]=2e$. \item By part (1), it suffices to show that $\mathfrak{p}^f$ is preserved by the Kazhdan action of $\mathbb{C}^{\times}$. Suppose $Y \in \mathfrak{p}^f$. Compute $$[f,t\ast Y] = t^2[f,\Ad(\tau(t^{-1}))(Y)] = t^2\Ad(\tau(t^{-1}))[\Ad(\tau(t))f,Y] = \Ad(\tau(t^{-1}))[f,Y] = 0,$$ where the third equality uses $\Ad(\tau(t))f = t^{-2}f$, which holds since $[h,f]=-2f$. \item By part (1), it suffices to show that \begin{equation}\label{eqn:contracting}\lim_{t \to 0}(t \ast Y) = 0 \quad \forall Y \in \mathfrak{p}^f\end{equation} By the representation theory of $\mathfrak{sl}_2(\mathbb{C})$, $\tau(\mathbb{C}^{\times})$ acts on $\mathfrak{p}^f$ with nonpositive weights. Hence, the Kazhdan action on $\mathfrak{p}^f$ has strictly positive weights. Equation (\ref{eqn:contracting}) follows. \item The group $\stab_K(h)$ centralizes $\tau(\mathbb{C}^{\times})$ and therefore commutes in its action on $\mathfrak{p}$ with the Kazhdan action of $\mathbb{C}^{\times}$; since $\stab_K(e,f,h) \subseteq \stab_K(h)$, the claim follows. \qedhere \end{enumerate} \end{proof} The remainder of this section is devoted to proving Proposition~\ref{prop:chXfastened} which says that any $K$-chain obtained by projectivizing a $K$-chain in $\mc{N}_{\theta}$ (not containing $0$) is fastened. The proof explains how to use the Slodowy slice considered above to produce an explicit fastening datum. Since applying Theorem~\ref{thm-main} in practice requires choosing a fastening datum, the proof of Proposition~\ref{prop:chXfastened} is perhaps more useful than the statement itself. \begin{lem} \label{lem-slice} Let $X$ be a $K$-variety which consists of an open orbit $U$ and a closed orbit $Z$. Let $Y \hra X$ be a locally closed subscheme with the following properties: \begin{itemize} \item The codimension of $Y \cap Z \subset Z$ equals the codimension of $Y \subset X$. \item The scheme-theoretic intersections $Y \cap Z$ and $Y \cap U$ are smooth. \item $Y$ is Cohen--Macaulay. \end{itemize} Then the base change of the normalization map of $X$ along $Y \hra X$ is the normalization map of $Y$. 
\end{lem} \begin{proof} We first show that the $K$-saturation map $K \times Y \xrightarrow{a} X$ is smooth. \begin{enumerate}[label=(\arabic*)] \item The fiber over a closed point $u \in U$ is given by the Cartesian product \begin{cd} a^{-1}(u) \ar[r, hookrightarrow] \ar[d] & K \ar[d, "k \mapsto k^{-1} \cdot u"] \\ Y \cap U \ar[r, hookrightarrow] & U \end{cd} The right vertical map is smooth (since the fibers are isomorphic to the stabilizer group $K^u$, which is smooth because we are working in characteristic zero), and $Y \cap U$ is smooth, so $a^{-1}(u)$ is smooth as well. \item The fiber over a closed point $z \in Z$ is given by the Cartesian product \begin{cd} a^{-1}(z) \ar[r, hookrightarrow] \ar[d] & K \ar[d, "k \mapsto k^{-1} \cdot z"] \\ Y \cap Z \ar[r, hookrightarrow] & Z \end{cd} By the same argument as in (1), we know that $a^{-1}(z)$ is smooth. \item The first bullet point and the Cartesian squares in (1) and (2) imply that $a^{-1}(u)$ and $a^{-1}(z)$ have the same dimensions, for any $u$ and $z$. Because $Y$ is Cohen--Macaulay (and hence so is $K \times Y$) and $X$ is smooth, the Miracle Flatness Theorem implies that $a$ is flat. \item The fibers of $a$ are smooth by (1) and (2). Because $a$ is also flat (by (3)) and finitely presented, it follows that $a$ is smooth. \end{enumerate} Since normalization is preserved by smooth base change, the two squares in the following diagram are Cartesian: \begin{cd} \wt{Y} \ar[r] \ar[d] & (K \times Y)^{\sim} \ar[r] \ar[d] & \wt{X} \ar[d] \\ Y \ar[r, "1_K \times \id_Y"] & K \times Y \ar[r, "a"] & X \end{cd} where the vertical maps are normalization maps, and the upper horizontal maps arise via functoriality of normalization. (To see that the first square is Cartesian, apply the smooth base change result to the map $\pr_2 : K \times Y \to Y$ and use that $Y \xrightarrow{1_K \times \id_Y} K \times Y \xrightarrow{\pr_2} Y$ is the identity.) The outer square is Cartesian, which is the desired result. \end{proof} \begin{cor}\label{cor-slice} Let $X \hra \mc{N}_\theta$ be a locally closed $K$-invariant subvariety consisting of a $K$-orbit $U$ and another $K$-orbit $Z \subset \partial U$ of codimension one. Fix $e \in Z$. \begin{enumerate} \item[(i)] For the Kazhdan action, there is a unique one-dimensional $\gm$-orbit in $S_e \cap X$ whose closure contains $e$, and this orbit is contained in $U$. \end{enumerate} Let $C \hra S_e \cap X$ be the locally closed subvariety consisting of $e$ and the orbit in (i). \begin{enumerate} \item[(ii)] The preimage of $C$ under the normalization map of $X$ is smooth and transverse to the reduced preimage of $Z$. \end{enumerate} \end{cor} \begin{proof} Point (i) holds because $S_e$ is transverse to $Z \subset \mf{p}$ by Proposition~\ref{prop:propertiesofSlodowy}. Thus the (possibly singular) curve $C$ satisfies the requirements placed on $Y$ in Lemma~\ref{lem-slice}. (The second bullet point in the lemma holds because $C \cap Z$ is a reduced point, which follows from the transversality of $S_e$ and $Z$. The third bullet point holds because $C$ is a reduced curve.) The lemma implies the smoothness claim in (ii). If the preimage of $C$ is not transverse to the reduced preimage of $Z$, then pushing forward tangent vectors to $X$ shows that $\mc{T}_eC \cap \mc{T}_eZ$ is nonzero (because the map from the reduced preimage of $Z$ to $Z$ is \'etale, hence induces isomorphisms on tangent spaces), and this contradicts the fact that $S_e$ is transverse to $Z$. 
\end{proof} \begin{prop}\label{prop:chXfastened} Let $X \subset \mathcal{N}_{\theta}$ be a $K$-chain, not containing $0$. Write $\overline{X} \subset \mathbb{P}\mathfrak{p}$ for the image of $X$ in the projectivization of $\mathfrak{p}$. Then $\overline{X}$ acquires a $K$-action via the $K$-action on $X$ and is a fastened $K$-chain with respect to that action. \end{prop} \begin{proof} Thanks to the structure of Definition~\ref{def:fastened}, we may assume that $X = U \cup Z$ where $U$ is a $K$-orbit and $Z \subset \partial U$ is a $K$-orbit of codimension one. In the normalization $\wt{\overline{X}} \to \overline{X}$, we choose a $K$-orbit $\wt{\overline{Z}} \subset \wt{\overline{X}}$ lying over $\overline{Z}$, and we choose a point $\bar{e}' \in \wt{\overline{Z}}$ lying over the point $\bar{e} \in \overline{Z}$ which is the image of $e \in Z$ under projectivization. It suffices to construct a fastening datum $(\gamma, \ell)$ for $(\wt{\overline{U}}, \wt{\overline{Z}})$ at $\bar{e}'$. Let $C \hra X$ be as defined in Corollary~\ref{cor-slice}. Then we have a commutative diagram in which each vertical map is a normalization map, each square is Cartesian, and $\wt{C}$ is smooth: \begin{cd} \wt{C} \ar[r] \ar[d] & \wt{X} \ar[r] \ar[d] & \wt{\overline{X}} \ar[d] \\ C \ar[r] & X \ar[r] & \overline{X} \end{cd} Indeed, the first square is Cartesian by Corollary~\ref{cor-slice}. The second square is Cartesian because $X \to \ol{X}$ is smooth, and normalization commutes with smooth base change. Lastly, $\wt{C}$ is smooth because the normalization of a (reduced) curve is smooth. The whole diagram is $\gm$-equivariant with respect to the Kazhdan action, and Proposition~\ref{prop:propsofKazhdanaction}(3) implies that the action on every irreducible component of $C$ is nontrivial. In addition, the Kazhdan action on $\overline{X}$ and its normalization coincides with the ordinary action by the one-parameter subgroup $\tau(\gm) \subset K$, because the $t^2$ which appears in the definition of the Kazhdan action has no effect on the projectivization. Since $S_e \subset \mf{p}$ is an affine subspace which does not contain $0 \in \mf{p}$, the map $S_e \to \mathbb{P}\mf{p}$ is an open embedding. Hence the map $C \to \overline{X}$ in the diagram is a locally closed embedding. Since the outer square is Cartesian, the map $\wt{C} \to \wt{\overline{X}}$ is also a locally closed embedding. Thus, the previous paragraph implies that $\wt{C} \hra \wt{\overline{X}}$ is a smooth curve which is invariant under $\tau(\gm) \subset K$, and this action is nontrivial on every irreducible component of $\wt{C}$. Since $e \in C$, we know that $\bar{e}' \in \wt{C}$. Let $\wt{C}' \hra \wt{C}$ be the locally closed subvariety consisting of $\bar{e}'$ and the unique $\gm$-orbit adjacent to it. This curve is also invariant under $\tau(\gm)$, and the action of $\tau(\gm)$ on it is nontrivial. Next, let us show that $\wt{C}'$ is transverse to $\wt{\overline{Z}}$. Let $\wt{Z} \subset \tilde{X}$ be the preimage of $\wt{\overline{Z}} \subset \wt{\overline{X}}$ along the map $\wt{X} \to \wt{\overline{X}}$. It is a reduced $K$-orbit which is one component of the reduced preimage of $Z \subset X$ along the normalization map $\wt{X} \to X$. If $\wt{C}' \hra \wt{\overline{X}}$ is tangent to $\wt{\overline{Z}}$, then $\wt{C}' \hra \tilde{X}$ is tangent to $\wt{Z}$. But this contradicts the transversality statement in Corollary~\ref{cor-slice}(ii). Now $(\tau(\gm), \wt{C}')$ is the fastening datum we seek. 
\end{proof} \subsection{Unipotent sheaves} We begin with a conjecture. \begin{conjecture}\label{tfconjecture} Let $M$ be the Harish-Chandra module of a unipotent representation of $G_{\mathbb{R}}$ and let $U_1,\ldots,U_m$ be the open $K$-orbits on $\mathrm{AV}(M)$. Then \begin{enumerate} \item The classes $[\gr(M)|_{U_i}]$ are irreducible. \item There is a good filtration of $M$ such that $\gr(M)$ is torsion-free and scheme-theoretically supported on $\mathrm{AV}(M)$, and \item $\gr(M) = j_*j^*\gr(M)$, where $j: \mathrm{Ch}(M) \hookrightarrow \mathrm{AV}(M)$ is the inclusion. \end{enumerate} \end{conjecture} The first claim is a conjecture of Vogan in \cite{Vogan1991}. Evidence for the second and third claims comes from \cite{Vogan1991}, \cite{MasonBrown2018}, and many low-rank examples (including the two given below). If we assume Conjecture \ref{tfconjecture}(2), then $\gr(M)$ restricts to a torsion-free, $K \times \gm$-equivariant coherent sheaf on $\mathrm{Ch}(M)$. If $G_{\mathbb{R}}$ is classical, then the component groups $\Com(\stab_K(e_i))$, for $e_i \in U_i$, are abelian. Hence, Conjecture \ref{tfconjecture}(1) combined with Theorem \ref{thm:admissibility} implies that $\gr(M)|_{\mathrm{Ch}(M)}$ is a $K \times \gm$-equivariant line bundle. This line bundle descends along the projection map to a $K$-equivariant line bundle on $\overline{\mathrm{Ch}}(M)$. Since $\overline{\mathrm{Ch}}(M)$ is fastened (by Proposition \ref{prop:chXfastened}), we can describe this line bundle using Proposition \ref{prop:linebundles}. \begin{ex} \label{ex-su} Let $G_{\mathbb{R}} = SU(1,1)$. Define $\theta$ on $G= SL_2(\mathbb{C})$ by $$D = \begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix} \qquad \theta(X) = DXD^{-1}$$ Then $$K = \left\{ \mathrm{diag}(t,t^{-1}): t \in \mathbb{C}^{\times} \right\} \qquad \mathcal{N}_{\theta} = \left\{ \begin{pmatrix}0 & a \\ b & 0\end{pmatrix}: ab=0 \right\}$$ There are three $K$-orbits on $\mathcal{N}_{\theta}$: two non-zero $K$-orbits $$\mathcal{O}_+:= \left\{ \begin{pmatrix}0 & a \\ 0 & 0\end{pmatrix}: a \neq 0 \right\} \qquad \mathcal{O}_-:= \left\{ \begin{pmatrix}0 & 0 \\ b & 0\end{pmatrix}: b \neq 0 \right\},$$ each isomorphic (as homogeneous spaces) to $K/\{\pm 1\}$, and the zero orbit $\{0\} \cong K/K$. There are four $K$-chains in $\mathcal{N}_{\theta}$: $$\overline{\mathcal{O}_+} = \mathcal{O}_+ \cup \{0\} \qquad \overline{\mathcal{O}_-} = \mathcal{O}_- \cup \{0\} \qquad \mathcal{N}_{\theta} = \mathcal{O}_+ \cup \mathcal{O}_- \cup \{0\} \qquad \{0\}$$ All four $K$-chains are weakly normal and fastened. By Corollary \ref{cor:linebundles}, a $K$-equivariant line bundle on $\mathcal{N}_{\theta}$ is specified (up to isomorphism) by a tuple $(\rho_+,\rho_-,\lambda)$ consisting of two characters $\rho_+$ and $\rho_-$ of $\{\pm 1\}$ (defining line bundles on $\mathcal{O}_+$ and $\mathcal{O}_-$) and a character $\lambda$ of $\mathbb{C}^{\times}$ (defining a line bundle on $\{0\}$). The gluing condition of that corollary is the requirement that $\rho_+ = \lambda|_{\{\pm 1\}} = \rho_-$. 
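To unpack the gluing condition (this is only a restatement of the requirement above, included for concreteness): every character of $K \simeq \mathbb{C}^{\times}$ has the form $\lambda(t) = t^n$ for some $n \in \mathbb{Z}$, and its restriction to $\{\pm 1\} \subset K$ is the parity character $(-1)^n$. The condition $\rho_+ = \lambda|_{\{\pm 1\}} = \rho_-$ therefore amounts to the single parity constraint $\rho_+ = \rho_- \equiv n \pmod 2$.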
Similar statements can be formulated for the other three $K$-chains; the possibilities for all four are summarized in the following table: \begin{center} \begin{tabular}{|l|l|} \hline Chain & $(\rho_i,\lambda)$ \\ \hline $\{0\}$ & $(n)$ \ $n \in \mathbb{Z}$ \\ \hline $\mathcal{O}_+ \cup \{0\}$ & $(\epsilon, n)$ \ $\epsilon \in \mathbb{Z}/2\mathbb{Z}, n \in \mathbb{Z}, n \equiv \epsilon \mod{2}$ \\ \hline $\mathcal{O}_- \cup \{0\}$ & $(\epsilon, n)$ \ $\epsilon \in \mathbb{Z}/2\mathbb{Z}, n \in \mathbb{Z}, n \equiv \epsilon \mod{2}$ \\ \hline $\mathcal{O}_+ \cup \mathcal{O}_- \cup \{0\}$ & $(\epsilon,\epsilon,n)$ \ $\epsilon \in \mathbb{Z}/2\mathbb{Z}, n \in \mathbb{Z}, n \equiv \epsilon \mod{2}$ \\ \hline \end{tabular} \end{center} There are four unipotent representations of $SU(1,1)$ and they correspond to the line bundles $(\lambda_0) = (0), (\rho_+,\lambda_0) = (1,1), (\rho_-,\lambda_0) = (1,-1)$, and $(\rho_+,\rho_-,\lambda_0) = (0,0,0)$. They are, respectively, the trivial representation, the two limits of discrete series representations, and the spherical principal series representation of infinitesimal character $0$. \end{ex} \begin{ex} \label{ex-sp} Return to the setting of Example~\ref{ex:nonfastened2}. There are two unipotent representations of $Sp(4,\mathbb{R})$ (of infinitesimal character $(1,0)$) attached to the orbit $\mathcal{O}_{2^+2^+}$. The associated chain is $\mathcal{O}_{2^+2^+} \cup \mathcal{O}_{2^+11}$. It is smooth and fastened. By Corollary \ref{cor:linebundles}, a $K$-equivariant line bundle on $\mathcal{O}_{2^+2^+} \cup \mathcal{O}_{2^+11}$ is specified (up to isomorphism) by a pair $(\rho,\lambda)$ consisting of a character $\rho$ of $\stab_K(e_{2^+2^+}) = O_2(\mathbb{C})$ and a compatible character $\lambda$ of $$\stab_K(e_{2^+11}) = \left\{ \begin{pmatrix}\pm 1 & \ast \\ 0 & \ast \end{pmatrix} \right\}$$ A character of $O_2(\mathbb{C})$ is given by $\epsilon \in \mathbb{Z}/2\mathbb{Z}$ (either $\triv$ or $\det$). A character of $\stab_K(e_{2^+11})$ is given by $m \otimes \mu$ for $m \in \mathbb{Z}$ and $\mu \in \mathbb{Z}/2\mathbb{Z}$. The compatible pairs $(\rho,\lambda)$ are as follows $$(\rho_{2^+2^+},\lambda_{2^+11}) = (\epsilon, m \otimes \mu) \qquad \epsilon \equiv \mu \mod{2}$$ The unipotent representations correspond to the line bundles given by $(0,2 \otimes 0)$ and $(1, 2 \otimes 1)$. \end{ex} \end{document}
\begin{document} \title{Domination versus edge domination} {\small \begin{center} $^1$ Institute of Optimization and Operations Research, Ulm University, Germany, \texttt{\{julien.baste,maximilian.fuerst,elena.mohr,dieter.rautenbach\}@uni-ulm.de}\\[3mm] $^2$ Department of Mathematics and Applied Mathematics, University of Johannesburg, Auckland Park, 2006, South Africa, \texttt{[email protected]} \end{center} } \begin{abstract} We propose the conjecture that the domination number $\gamma(G)$ of a $\Delta$-regular graph $G$ with $\Delta\geq 1$ is always at most its edge domination number $\gamma_e(G)$, which coincides with the domination number of its line graph. We prove that $\gamma(G)\leq \left(1+\frac{2(\Delta-1)}{\Delta 2^{\Delta}}\right)\gamma_e(G)$ for general $\Delta\geq 1$, and $\gamma(G)\leq \left(\frac{7}{6}-\frac{1}{204}\right)\gamma_e(G)$ for $\Delta=3$. Furthermore, we verify our conjecture for cubic claw-free graphs. \end{abstract} {\small \begin{tabular}{lp{13cm}} {\bf Keywords:} & Domination; edge domination; minimum maximal matching\\ {\bf MSC 2010:} & 05C69, 05C70 \end{tabular} } \section{Introduction} We consider finite, simple, and undirected graphs, and use standard terminology. Let $G$ be a graph. A set $D$ of vertices of $G$ is a {\it dominating set} in $G$ if every vertex in $V(G)\setminus D$ has a neighbor in $D$, and the {\it domination number} $\gamma(G)$ of $G$ is the minimum cardinality of a dominating set in $G$. For a set $M$ of edges of $G$, let $V(M)$ denote the set of vertices of $G$ that are incident with an edge in $M$. The set $M$ is a {\it matching} in $G$ if the edges in $M$ are pairwise disjoint, that is, $|V(M)|=2|M|$. A matching $M$ in $G$ is {\it maximal} if it is maximal with respect to inclusion, that is, the set $V(G)\setminus V(M)$ is independent. Let the {\it edge domination number} $\gamma_e(G)$ of $G$ be the minimum size of a maximal matching in $G$. A maximal matching in $G$ of size $\gamma_e(G)$ is a {\it minimum maximal matching}. A natural connection between the domination number and the edge domination number of a graph $G$ becomes apparent when considering the line graph $L(G)$ of $G$. Since a maximal matching $M$ in $G$ is a maximal independent set in $L(G)$, the edge domination number $\gamma_e(G)$ of $G$ equals the independent domination number $i(L(G))$ of $L(G)$. Since $L(G)$ is always claw-free, and since the independent domination number equals the domination number in claw-free graphs \cite{alla}, $\gamma_e(G)$ actually equals the domination number $\gamma(L(G))$ of $L(G)$. While the domination number \cite{hahesl} and the edge domination number \cite{yaga}, especially with respect to computational hardness and algorithmic approximability \cite{caek,cafukopa,chch,golera,hoki,scvi}, have been studied extensively for a long time, little seems to be known about their relation. For regular graphs, we conjecture the following: \begin{conjecture}\label{conjecture1} If $G$ is a $\Delta$-regular graph with $\Delta\geq 1$, then $\gamma(G)\leq\gamma_e(G)$. \end{conjecture} The conjecture is trivial for $\Delta\leq 2$, and fails for non-regular graphs, see Figure \ref{fig1}. As pointed out by Felix Joos \cite{jo}, for $\Delta\geq 13$, Conjecture \ref{conjecture1} follows by combining the known results $\gamma(G)\leq \frac{(1+\ln(\Delta+1))n}{\Delta+1}$ (cf. \cite{alsp}) and $\gamma_e(G)\geq \frac{\Delta n}{4\Delta-2}$ (cf.~(\ref{e1}) below), that is, it is interesting for small values of $\Delta$ only. 
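To make the threshold in Joos's observation explicit (the following elementary computation is included only as an illustration and is not part of the cited argument), the two bounds combine to $$\frac{\gamma(G)}{\gamma_e(G)}\leq \frac{\left(1+\ln(\Delta+1)\right)(4\Delta-2)}{\Delta(\Delta+1)},$$ and the right-hand side is less than $1$ for every $\Delta\geq 13$ (for $\Delta=13$ it is roughly $0.9997$), while for $\Delta=12$ it is roughly $1.05$.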
Furthermore, he observed that the union of two triangles plus a perfect matching shows that Conjecture \ref{conjecture1} is tight for $\Delta=3$. \begin{figure} \caption{A non-regular graph $G$ with $\gamma(G)=2>1=\gamma_e(G)$.} \label{fig1} \end{figure} Our contributions are three results related to Conjecture \ref{conjecture1}. A simple probabilistic argument implies a weak version of Conjecture \ref{conjecture1}, which, for $\Delta\leq 12$, is better than the above-mentioned consequence of \cite{alsp} and (\ref{e1}). \begin{theorem}\label{theorem1} If $G$ is a $\Delta$-regular graph with $\Delta\geq 1$, then $\gamma(G)\leq \left(1+\frac{2(\Delta-1)}{\Delta 2^{\Delta}}\right)\gamma_e(G)$. \end{theorem} For cubic graphs, Theorem \ref{theorem1} implies $\gamma(G)\leq \frac{7}{6}\gamma_e(G)$, which we improve with our next result. Even though the improvement is rather small, we believe that it is interesting especially because of the approach used in its proof. \begin{theorem}\label{theorem2} If $G$ is a cubic graph, then $\gamma(G)\leq \left(\frac{7}{6}-\frac{1}{204}\right)\gamma_e(G)$. \end{theorem} Finally, we show Conjecture \ref{conjecture1} for cubic claw-free graphs. \begin{theorem}\label{theorem3} If $G$ is a cubic claw-free graph, then $\gamma(G)\leq \gamma_e(G)$. \end{theorem} All proofs are given in the following section. \section{Proofs} We begin with the simple probabilistic proof of Theorem \ref{theorem1}, which is also the basis for the proof of Theorem \ref{theorem2}. \begin{proof}[Proof of Theorem \ref{theorem1}] Let $M$ be a minimum maximal matching in $G$. Since every vertex in $V(G)\setminus V(M)$ has $\Delta$ neighbors in $V(M)$, and every vertex in $V(M)$ has at most $\Delta-1$ neighbors in $V(G)\setminus V(M)$, we have \begin{eqnarray}\label{e1} \Delta(n-2\gamma_e(G))\leq 2(\Delta-1)\gamma_e(G), \end{eqnarray} where $n$ is the order of $G$. Let the set $D$ arise by selecting, for every edge in $M$, one of the two incident vertices independently at random with probability $1/2$. Clearly, $|D|=\gamma_e(G)$. If $u$ is a vertex in $V(G)\setminus V(M)$, then $u$ has no neighbor in $D$ with probability at most $1/2^{\Delta}$. Note that $u$ might be adjacent to both endpoints of some edge in $M$ in which case it always has a neighbor in $D$. If $B$ is the set of vertices in $V(G)\setminus V(M)$ with no neighbor in $D$, then linearity of expectation implies $$\mathbb{E}[|B|] =\sum\limits_{u\in V(G)\setminus V(M)}\mathbb{P}[u\in B] \leq \frac{|V(G)\setminus V(M)|}{2^{\Delta}} =\frac{n-2\gamma_e(G)}{2^{\Delta}}.$$ Since $D\cup B$ is a dominating set in $G$, the first moment method implies $$\gamma(G)\leq |D|+\mathbb{E}[|B|] =\gamma_e(G)+\frac{n-2\gamma_e(G)}{2^{\Delta}} \stackrel{(\ref{e1})}{\leq} \gamma_e(G)+\frac{2(\Delta-1)\gamma_e(G)}{\Delta 2^{\Delta}},$$ which completes the proof. \end{proof} The next proof arises by modifying the previous proof. \begin{proof} [Proof of Theorem \ref{theorem2}] Clearly, we may assume that $G$ is connected. Let $M$ be a minimum maximal matching in $G$. Let $R_0$ be the set of vertices from $V(G)\setminus V(M)$ that are adjacent to both endpoints of some edge in $M$, and let $R$ be $(V(G)\setminus V(M))\setminus R_0$. Also in this proof, we construct a random set $D$ containing exactly one vertex from every edge in $M$. Note that every vertex from $R_0$ will always have a neighbor in $D$. Again, let $B$ be the set of vertices in $R$ with no neighbor in $D$. 
As before, we will use the estimate $$\gamma(G)\leq \gamma_e(G)+\mathbb{E}[|B|] =\gamma_e(G)+\sum_{u\in R}\mathbb{P}[u\in B].$$ Initially, we choose $D$ exactly as in the proof of Theorem \ref{theorem1}, which implies $$\mathbb{E}[|B|]=\frac{|R|}{8}.$$ In order to obtain an improvement, we iteratively modify the random choice of $D$ in such a way that $\mathbb{E}[|B|]$ becomes smaller. We do this using two operations. Each individual operation leads to some reduction of $\mathbb{E}[|B|]$, and we ensure that all these reductions combine additively. While the first operation leads to a reduction of $\mathbb{E}[|B|]$ regardless of additional structural properties of $G$, our argument that the second operation leads to a reduction is based on the assumption that the first operation has been applied as often as possible. The first operation is as follows. \begin{itemize} \item If there are two edges $uv$ and $u'v'$ in $M$ such that the set $X$ of vertices $x$ in $R$ with $$N_G(x)\cap \{ u,v,u',v'\}\in \big\{ \{ u,u'\},\{v,v'\}\big\}$$ is larger than the set $Y$ of vertices $y$ in $R$ with $$N_G(y)\cap \{ u,v,u',v'\}\in \big\{ \{ u,v'\}, \{v,u'\}\big\},$$ see Figure \ref{figcp}, then we {\it couple} the random choices for the pair $\{ uv, u'v'\}$ in such a way that $D$ contains $\{ u,v'\}$ with probability $1/2$ and $\{ u',v\}$ with probability $1/2$. \end{itemize} \begin{figure} \caption{The edges $uv$, $u'v'$ and the sets $X$ and $Y$.} \label{figcp} \end{figure} The choice for the {\it coupled pair} $\{ uv,u'v'\}$ will remain independent of all other random choices involved in the construction of $D$. Furthermore, the two edges in a coupled pair will not be involved in any other operation modifying the choice of $D$. Let $\pi$ be a coupled pair $\{ uv,u'v'\}$. By construction, we obtain $\mathbb{P}[x\in B]=0$ for every vertex $x$ in $X$. Now, consider a vertex $y$ in $Y$. The two neighbors of $y$ in the two coupled edges are either both in $D$ or both outside of $D$, each with probability exactly $1/2$. We will ensure that the third neighbor of $y$, which is necessarily in a third edge from $M$, will belong to $D$ still with probability exactly $1/2$. By the independence mentioned above, we have $\mathbb{P}[y\in B]=1/4$. Recall that, for the choice of $D$ as in the proof of Theorem \ref{theorem1}, each vertex from $X\cup Y$ belongs to $B$ with probability exactly $1/8$. Hence, by coupling the pair $\pi$, the expected cardinality $\mathbb{E}[|B|]$ of $B$ is reduced by $(|X|-|Y|)/8$, which is at least $1/8$. The second operation is as follows. \begin{itemize} \item We select a suitable vertex $z$ from $R$ such that it has no neighbor in any of the coupled edges. If the edges $u_1v_1$, $u_2v_2$, and $u_3v_3$ from $M$ are such that $u_1$, $u_2$, and $u_3$ are the three neighbors of $z$, then we {\it derandomize} the selection for these three edges, and $D$ will always contain $u_1$, $u_2$, and $u_3$. We call $\{ u_1v_1,u_2v_2,u_3v_3\}$ a {\it derandomized triple with center $z$}. \end{itemize} We will first couple a maximal number of pairs, and then derandomize triples one after the other as long as possible. Let $\tau=\{ u_1v_1,u_2v_2,u_3v_3\}$ be the next triple to be derandomized at some point. Let $S(\tau)$ be the set of all vertices that are incident with an edge $e$ from $M\setminus \tau$ such that some vertex in $R$ has a neighbor in $V(\tau)$ as well as in $e$, see Figure \ref{figstau}. 
\begin{figure} \caption{The set $S(\tau)$.} \label{figstau} \end{figure} During all changes of the initial random choice of $D$ performed so far, we ensure that the following property holds just before we derandomize the triple $\tau$: \begin{eqnarray}\label{eproptau} \begin{minipage}{0.8\textwidth} \it For every vertex $u$ in $R$ that has a neighbor in $V(\tau)\cup S(\tau)$, the three neighbors of $u$ in $V(M)$ belong to $D$ independently with probability $1/2$. \end{minipage} \end{eqnarray} All coupled pairs and derandomized triples will be disjoint. For every edge in $M$ that does not belong to any coupled pair or derandomized triple, we select the endpoint that is added to $D$ exactly as in the proof of Theorem \ref{theorem1}, that is, with probability $1/2$ independently of all other random choices involved in the construction of $D$. \noindent We fix a maximal collection ${\cal P}$ of pairwise disjoint coupled pairs $\pi_1,\ldots,\pi_p$. Let $S_{\rm paired}$ be the set of the $4p$ vertices from $V(M)$ that are incident with some of the $2p$ paired edges. Let $R_1$ be the set of vertices in $R$ with exactly one neighbor in $S_{\rm paired}$, and let $R_2$ be the set of vertices in $R$ with at least two neighbors in $S_{\rm paired}$. Note that the sets $R_0$, $R_1$, and $R_2$ are disjoint by definition. Let $S_{\rm paired}'$ be the set of vertices from $V(M)\setminus S_{\rm paired}$ that are incident with an edge in $M$ that contains a neighbor of some vertex in $R_0\cup R_2$. Let $R_3$ be the set of vertices from $R\setminus (R_0\cup R_1\cup R_2)$ that have a neighbor in $S_{\rm paired}'$. All sets are illustrated in Figure \ref{fig3}. Let $$R^{(1)}=R\setminus (R_0\cup R_1\cup R_2\cup R_3),$$ $r=|R|$, $r^{(1)}=|R^{(1)}|$, and $r_i=|R_i|$ for $i\in \{ 0,1,2,3\}$. \begin{figure} \caption{The sets $S_{\rm paired}$, $S_{\rm paired}'$, and $R_1,R_2,R_3$.} \label{fig3} \end{figure} Since $G$ has at most $8p$ edges leaving $S_{\rm paired}$, we have $2r_2+r_1\leq 8p$, which implies $r_1+7r_2\leq 28p$. By definition, we obtain $|S_{\rm paired}'|\leq 4r_0+2r_2$. Considering the number of edges leaving $S_{\rm paired}'$, we obtain $r_3\leq 3|S_{\rm paired}'|\leq 12r_0+6r_2$. Therefore, \begin{eqnarray} r^{(1)} & \geq & r-r_0-r_1-r_2-r_3\nonumber\\ & \geq & r-13r_0-r_1-7r_2\nonumber\\ & \geq & r-13r_0-28p.\label{er1} \end{eqnarray} Note that, after coupling only the pairs $\pi_1,\ldots,\pi_p$ and not yet derandomizing any triple, we have \begin{eqnarray}\label{e2} \mathbb{E}[|B|]\leq \frac{|R|}{8}-\frac{p}{8} =\frac{1}{8}\left(n-2\gamma_e(G)-r_0\right)-\frac{p}{8}. \end{eqnarray} If $r_0+p$ is large enough, then this already yields the desired improvement. Since we cannot guarantee this, we now form derandomized triples one by one with centers from $R^{(1)}$. For every selected triple to be derandomized, we remove suitable vertices from $R^{(1)}$ in order to ensure (\ref{eproptau}). Suppose that we have already formed $t-1$ such derandomized triples with centers $z_1,\ldots,z_{t-1}$; then the center $z_t$ for the triple $\tau_t$ will be selected from $R^{(t)}$, where $t$ is initially $1$, and $R^{(t+1)}$ is obtained from $R^{(t)}$ by removing every vertex from $R^{(t)}$ that has a neighbor in $V(\tau_t)\cup S(\tau_t)$. This ensures that all coupled pairs and derandomized triples are disjoint, and that (\ref{eproptau}) holds. Now, we analyze the reduction of $\mathbb{E}[|B|]$, or rather the reduction of the upper bound on $\mathbb{E}[|B|]$ given in (\ref{e2}), incurred by some derandomized triple $\tau_t$ with center $z_t$. 
Let $e_1$, $e_2$, and $e_3$ in $M$ be such that $e_i=u_iv_i$ for $i\in [3]$ and $z_t$ is adjacent to $u_1$, $u_2$, and $u_3$, that is, $\tau_t=\{ u_1v_1,u_2v_2,u_3v_3\}$. We consider two cases. \noindent {\bf Case 1} {\it Some vertex $z$ in $R$ distinct from $z_t$ has three neighbors in $V(\tau_t)$.} \noindent First, suppose that $z$ is adjacent to $u_1$ and $u_2$. In this case, the pair $e_1$ and $e_2$ could be coupled and added to ${\cal P}$, contradicting the choice of ${\cal P}$. Next, suppose that $z$ is adjacent to $v_1$, $v_2$, and $v_3$. Since the pair $e_1$ and $e_2$ cannot be coupled and added to ${\cal P}$, there are two vertices $z'$ and $z''$ in $R$ such that $z'$ is adjacent to $u_1$ and $v_2$, and $z''$ is adjacent to $u_2$ and $v_1$. Since the pair $e_2$ and $e_3$ cannot be coupled and added to ${\cal P}$, the vertex $z'$ is adjacent to $u_3$, which implies the contradiction that the pair $e_1$ and $e_3$ could be coupled and added to ${\cal P}$. Hence, by symmetry, we may assume that $z$ is adjacent to $u_1$, $v_2$, and $v_3$. Since the pair $e_2$ and $e_3$ cannot be coupled and added to ${\cal P}$, there are two vertices $z'$ and $z''$ in $R$ such that $z'$ is adjacent to $u_2$ and $v_3$, and $z''$ is adjacent to $u_3$ and $v_2$. If $z''$ is adjacent to $v_1$, then, considering the pair $e_1$ and $e_2$, it follows that $z'$ must be adjacent to $v_1$. In this case, the connected graph $G$ has order $10$, and $\{ u_1,u_2,u_3\}$ is a dominating set, which implies the statement. Hence, we may assume that $z''$ is not adjacent to $v_1$. A symmetric argument implies that $z'$ is not adjacent to $v_1$. See Figure \ref{figcase1} for an illustration. \begin{figure} \caption{The edges in $\tau$ and the vertices $z_t$, $z$, $z'$, and $z''$.} \label{figcase1} \end{figure} Our derandomized choice of adding always $u_1$, $u_2$, and $u_3$ to $D$ yields $$\mathbb{P}[z_t \in B] =\mathbb{P}[z\in B] =\mathbb{P}[z'\in B] =\mathbb{P}[z''\in B]=0.$$ Furthermore, property (\ref{eproptau}) implies $\mathbb{P}[w\in B]=1/4$ for every neighbor $w$ of $v_1$ in $R$. Since $v_1$ has at most two such neighbors, derandomizing the triple $\tau_t$ additionally reduces the upper bound on $\mathbb{E}[|B|]$ given in (\ref{e2}) by at least $\frac{4}{8}-\frac{2}{8}=\frac{1}{4}$. Since $z'$ and $z''$ both have at most one neighbor not in $V(\tau_t)$, and at most two neighbors of $v_1$ in $R$ both have at most two neighbors not in $V(\tau_t)$, we obtain $|S(\tau_t)|\leq 12$, and \begin{eqnarray*} |R^{(t+1)}| & = |R^{(t)}| & -\,\, \big|\big\{v\in R^{(t)}:v \mbox{ has a neighbor in }V(\tau_t)\cup S(\tau_t)\big\}\big|\\ & = |R^{(t)}| & -\,\, \big|\big\{v\in R^{(t)}:v \mbox{ has a neighbor in }V(\tau_t)\big\}\big|\\ && -\,\, \big|\big\{v\in R^{(t)}:v \mbox{ has a neighbor in }S(\tau_t) \mbox{ but no neighbor in }V(\tau_t)\big\}\big|\\ &\geq |R^{(t)}|&-\,\, 6-3\cdot 6\\ &=|R^{(t)}|&-\,\, 24. \end{eqnarray*} \noindent {\bf Case 2} {\it $z_t$ is the only vertex in $R$ that has three neighbors in $V(\tau_t)$.} \noindent Since the pair $e_1$ and $e_2$ cannot be coupled and added to ${\cal P}$, we may assume, by symmetry, that there is a vertex $z$ in $R$ that is adjacent to $u_1$ and $v_2$. Since the pair $e_2$ and $e_3$ cannot be coupled and added to ${\cal P}$, we may assume that there is a vertex $z'$ in $R$ such that either $z'$ is adjacent to $u_3$ and $v_2$ or $z'$ is adjacent to $u_2$ and $v_3$. 
If $z'$ is adjacent to $u_3$ and $v_2$, then the assumption of Case 2 implies the contradiction that the pair $e_1$ and $e_3$ can be coupled and added to ${\cal P}$. Hence, we may assume that $z'$ is adjacent to $u_2$ and $v_3$. Since the pair $e_1$ and $e_3$ cannot be coupled and added to ${\cal P}$, there is a vertex $z''$ in $R$ adjacent to $u_3$ and $v_1$. See Figure \ref{figcase2} for an illustration. \begin{figure} \caption{The edges in $\tau$ and the vertices $z_t$, $z$, $z'$, and $z''$.} \label{figcase2} \end{figure} The choice of ${\cal P}$ implies that no vertex from $R^{(t)}$ distinct from $z_t$, $z$, $z'$, and $z''$ has two neighbors in $V(\tau_t)$. Arguing as above, we obtain that derandomizing the triple $\tau_t$ additionally reduces the upper bound on $\mathbb{E}[|B|]$ given in (\ref{e2}) by at least $\frac{4}{8}-\frac{3}{8}=\frac{1}{8}$. Similarly as in Case 1, it follows that $|S(\tau_t)|\leq 18$, and that $$|R^{(t+1)}|=|R^{(t)}\setminus\{v\in R:~v \mbox{ has a neighbor in }V(\tau_t)\cup S(\tau_t)\}| \geq |R^{(t)}| -7-3\cdot 9= |R^{(t)}|-34.$$ \noindent Since we derandomize as many triples as possible, it follows that the number $t$ of derandomized triples satisfies $$t\geq \frac{r^{(1)}}{34}\stackrel{(\ref{er1})}{\geq} \frac{r-13r_0-28p}{34},$$ and that the joint reduction of the upper bound on $\mathbb{E}[|B|]$ given in (\ref{e2}) is at least $$\frac{t}{8}\geq \frac{r-13r_0-28p}{272}.$$ Altogether, coupling all $p$ pairs in ${\cal P}$, and derandomizing the $t$ triples, we obtain \begin{eqnarray*} \mathbb{E}[|B|] &\leq & \frac{1}{8}\left(n-2\gamma_e(G)-r_0\right)-\frac{p}{8}-\frac{t}{8}\\ &\leq & \frac{1}{8}\left(n-2\gamma_e(G)-r_0-p\right)-\frac{r-13r_0-28p}{272}\\ & = & \frac{1}{8}\left(n-2\gamma_e(G)-r_0-p\right)-\frac{n-2\gamma_e(G)-r_0-13r_0-28p}{272}\\ & = & \frac{33}{272}(n-2\gamma_e(G)) -\frac{5}{68}r_0 -\frac{3}{136}p\\ & \leq & \frac{33}{272}(n-2\gamma_e(G))\\ & \stackrel{(\ref{e1})}{\leq} & \frac{11}{68}\gamma_e(G). \end{eqnarray*} Therefore, $$\gamma(G)\leq \gamma_e(G)+\mathbb{E}[|B|] \leq \frac{79}{68}\gamma_e(G) =\left(\frac{7}{6}-\frac{1}{204}\right)\gamma_e(G),$$ which completes the proof. \end{proof} We proceed to the final proof. \begin{proof} [Proof of Theorem \ref{theorem3}] Let $M$ be a minimum maximal matching in $G$. Let the set $D$ of $|M|$ vertices intersecting each edge in $M$ be chosen such that the set $B=\{ u\in V(G)\setminus V(M):|N_G(u)\cap D|=0\}$ is smallest possible. For a contradiction, we may suppose that $B$ is non-empty. Let $C=\{ u\in V(G)\setminus V(M):|N_G(u)\cap D|=1\}.$ Let $b$ be a vertex in $B$. Let $u_{-1}v_{-1}$, $u_0v_0$, and $u_1v_1$ in $M$ be such that $N_G(b)=\{ v_{-1},v_0,v_1\}$. Since $D$ intersects each edge in $M$, we have $u_{-1},u_0,u_1\in D$. Since $G$ is claw-free, we may assume, by symmetry, that $v_0$ and $v_1$ are adjacent, which implies that $v_{-1}$ is not adjacent to $v_0$ or $v_1$. Let $x$ be the neighbor of $v_{-1}$ distinct from $u_{-1}$ and $b$. Since $G$ is claw-free, the vertex $x$ is adjacent to $u_{-1}$. If $x=u_0$, then $u_0$ has no neighbor in $C$, and exchanging $u_0$ and $v_0$ within $D$ reduces $|B|$, which is a contradiction. Hence, by symmetry between $u_0$ and $u_1$, the vertex $x$ is distinct from $u_0$ and $u_1$. Since exchanging $u_1$ and $v_1$ within $D$ does not reduce $|B|$, the vertex $u_1$ has a neighbor $c_1$ in $C$, which is necessarily distinct from $x$. 
Now, let $\sigma:v_1,u_1,c_1,v_2,u_2,c_2,\ldots,v_k,u_k,c_k$ be a maximal sequence of distinct vertices from $V(G)\setminus \{ u_{-1},u_0,v_{-1},v_0,b,x\}$ such that $u_iv_i\in M$, $u_i\in D$, $c_i\in C$, $u_i$ is adjacent to $c_i$ for every $i\in [k]$, and $v_{i+1}$ is adjacent to $u_i$ for every $i\in [k-1]$. Let $X=\{ u_{-1},u_0,v_{-1},v_0,b,x\}\cup \{ v_1,u_1,c_1,v_2,u_2,c_2,\ldots,v_k,u_k,c_k\}$, and see Figure \ref{figseq} for an illustration. \begin{figure} \caption{A subgraph of $G$ with vertex set $X$, where $k=4$.} \label{figseq} \end{figure} Let $v_{k+1}$ be the neighbor of $u_k$ distinct from $v_k$ and $c_k$. Since $G$ is claw-free, the vertex $v_{k+1}$ is adjacent to $c_k$. Since $V(G)\setminus V(M)$ is independent, we have $u_{k+1}v_{k+1}\in M$ for some vertex $u_{k+1}$. Since $c_k\in C$ and $u_k\in D$, we obtain $v_{k+1}\not\in D$ and $u_{k+1}\in D$, which implies that the vertex $v_{k+1}$ does not belong to $X$. If $u_{k+1}$ belongs to $X$, then $u_{k+1}=x$, and replacing $D$ with $$D'=(D\setminus \{ u_1,u_2,\ldots,u_{k+1}\})\cup \{ v_1,v_2,\ldots,v_{k+1}\}$$ reduces $|B|$, which is a contradiction. Hence, the vertex $u_{k+1}$ does not belong to $X$. If $u_{k+1}$ has a neighbor $c_{k+1}$ in $C$, then, by the structural conditions, the vertex $c_{k+1}$ does not belong to $X$, and the sequence $\sigma$ can be extended by appending $v_{k+1},u_{k+1},c_{k+1}$, contradicting its choice. Hence, the vertex $u_{k+1}$ has no neighbor in $C$, and replacing $D$ with the set $D'$ as above again reduces $|B|$. This final contradiction completes the proof. \end{proof} \noindent {\bf Acknowledgement} We thank Felix Joos for pointing out that Conjecture \ref{conjecture1} holds for large values of $\Delta$. \end{document}
\begin{document} \maketitle \begin{abstract} Multiple imputation provides us with efficient estimators in model-based methods for handling missing data under the true model. It is also well-understood that design-based estimators are robust methods that do not require accurately modeling the missing data; however, they can be inefficient. In any applied setting, it is difficult to know whether a missing data model may be good enough to win the bias-efficiency trade-off. Raking of weights is one approach that relies on constructing an auxiliary variable from data observed on the full cohort, which is then used to adjust the weights for the usual Horvitz-Thompson estimator. Computing the optimally efficient raking estimator requires evaluating the expectation of the efficient score given the full cohort data, which is generally infeasible. We demonstrate multiple imputation (MI) as a practical method to compute a raking estimator that will be optimal. We compare this estimator to common parametric and semi-parametric estimators, including standard multiple imputation. We show that while estimators such as the semi-parametric maximum likelihood and standard MI estimators attain optimal performance under the true model, the proposed raking estimator utilizing MI maintains a better robustness-efficiency trade-off even under mild model misspecification. We also show that the standard raking estimator, without MI, is often competitive with the optimal raking estimator. We demonstrate these properties through several numerical examples and provide a theoretical discussion of conditions for asymptotically superior relative efficiency of the proposed raking estimator. \end{abstract} \section{Background} In many settings, variables of interest may be too expensive or too impractical to measure precisely on a large cohort. Generalized raking is an important technique for using whole population or full cohort information in the analysis of a subsample with complete data,\citep{deville1992calibration, sarndal2007calibration, breslow2009using} closely related to the augmented inverse probability weighted (AIPW) estimators of Robins and co-workers.\citep{robins1994estimation, firth1998robust, lumley2011connections} Raking estimators use auxiliary data measured on the full cohort to adjust the weights of the Horvitz-Thompson estimator in a manner that leverages the information in the auxiliary data and improves efficiency. The technique is also, and perhaps more commonly, known as ``calibration of weights'', but we will avoid that term here because of the potential confusion with other uses of the word ``calibration''. An obvious competitor to raking is multiple imputation of the non-sampled data.\citep{rubin1996multiple} While multiple imputation was initially used for relatively small amounts of data missing by happenstance, it has more recently been proposed and used for large amounts of data missing by design, such as when certain variables are only measured on a subsample taken from a cohort.\citep{marti2011multiple, keogh2013using, jung2016fitting, seaman2012combining, morris2014tuning} In this paper we take a different approach. We use multiple imputation to construct new raking estimators that are more efficient than the simple adjustment of the sampling weights \cite{breslow2009using} and compare these estimators to direct use of multiple imputation in a setting where the imputation model may be only mildly misspecified.
Our work has connections to the previous literature, where multiple imputation and empirical likelihood are used in the missing data paradigm to construct multiply robust estimators that are consistent if any of a set of imputation models or a set of sampling models is correctly specified.\cite{han2016combining} We differ from this work in assuming known subsampling probabilities, which allows for a complex sampling design from the full cohort, and in evaluating robustness and efficiency under contiguous (local) misspecification following the ``nearly-true models'' paradigm.\cite{lumley2017robustness} Known sampling weights commonly arise in settings such as retrospective cohort studies using electronic health records (EHR) data, where a validation subset is often constructed to estimate the error structure in variables derived using automated algorithms rather than directly observed. Lumley (2017) \cite{lumley2017robustness} considered the robustness and efficiency trade-off of design-based estimators versus maximum likelihood estimators in the setting of nearly-true models. We build on this work by comparing multiple imputation with the standard raking estimator, and we examine to what extent raking that makes use of multiple imputation to construct the auxiliary variable may affect the bias-efficiency trade-off in this setting. We first introduce the raking framework in Section 2. In Section 3, we describe the proposed raking estimator, which makes use of multiple imputation to construct the potentially optimal raking variable. In Section 4, we compare design-based estimators with standard multiple imputation estimators in two simulated examples: a classic case-control study and a two-phase study where the linear regression model is of interest and an error-prone surrogate is observed on the full cohort in place of the target variable. For the latter example, we additionally study the relative performance of regression calibration, a popular method to address covariate measurement error.\citep{carroll2006} In Section 5, we consider the relative performance of multiple imputation versus raking estimators in the National Wilms Tumor Study. We conclude with a discussion of the robustness-efficiency trade-off in the studied settings. \section{Introduction to raking framework} Assume a full cohort of size $N$ and a probability subsample of size $n$ with known sampling probability $\pi_i$ for the $i$-th individual. Further, assume we observe an outcome variable $Y$, predictors $Z$, and auxiliary variables $A$ on the whole cohort, and observe predictors $X$ only on the sample. Our goal is to fit a model $P_\theta$ for the distribution of $Y$ given $Z$ and $X$ (but not $A$). Define the indicator variable for being sampled as $R_i$. We assume an asymptotic setting in which, as $n\to\infty$, a law of large numbers and a central limit theorem hold. In some places we will make the stronger asymptotic assumption that the sequence of cohorts consists of iid samples from some probability distribution and that the subsamples satisfy $\inf_i \pi_i>0$.\cite{breslow2009using,lumley2011connections,lumley2017robustness} With full cohort data with complete observations we would solve an estimating equation \begin{equation} \sum_{i=1}^N U(Y_i,X_i,Z_i;\theta)=0, \label{eq-census} \end{equation} where $U = U(Y,X,Z;\theta)$ is an estimate of the efficient score or influence function, chosen to give at least locally efficient estimation of $\theta$ with complete data.
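For instance, when $P_\theta$ is a linear regression model and $U$ is taken to be the least-squares score, solving \eqref{eq-census} is simply the ordinary full-cohort fit; the following toy R illustration (our own notation, with a simulated stand-in for the cohort) makes this explicit.
\begin{verbatim}
## With complete data, solving the full-cohort estimating equation above with U
## taken as the least-squares score is just the usual regression fit (toy data).
set.seed(1)
dat <- data.frame(x = rnorm(1000), z = rnorm(1000))
dat$y <- 1 + dat$x + 0.5 * dat$z + rnorm(1000)
theta_full <- coef(lm(y ~ x + z, data = dat))   # the full-cohort estimator
theta_full
\end{verbatim}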
We write $\tilde\theta_N$ for the estimator obtained by solving \eqref{eq-census} with complete data from the full cohort, and assume it converges in probability to some limit $\theta^*$. If the cohort is truly a realization of the model $P_\theta$ we write $\theta_0$ for the true value of $\theta$. We assume $\tilde\theta_N$ would be a locally efficient estimator in the model $P_\theta$ at $\theta_0$, given complete data. The Horvitz-Thompson-type estimator $\hat\theta_{HT}$ of $\theta$ solves \begin{equation} \sum_{i=1}^N \frac{R_i}{\pi_i}U(Y_i,X_i,Z_i;\theta)=0. \label{eq-ht} \end{equation} Under regularity conditions, for example the existence of a central limit theorem and sufficient smoothness for $U$, it is also consistent for $\theta^*$, and thus for $\theta_0$ if $P_\theta$ is correctly specified. A generalized raking estimator using an auxiliary variable $H=H(Y, Z,A;\eta)$, which may depend on some parameter $\eta$, solves a weighted estimating equation \begin{equation} \sum_{i=1}^N \frac{g_iR_i}{\pi_i} U(Y_i,X_i,Z_i;\theta)=0, \label{eq-aipw} \end{equation} where the weight adjustments $g_i$ are chosen to satisfy the calibration constraints \begin{align} \sum_{i=1}^N \frac{R_ig_i}{\pi_i} H(Y_i, Z_i,A_i;\eta) = \sum_{i=1}^N H(Y_i, Z_i,A_i;\eta) \label{cal-adj} \end{align} while minimizing a distance function $\sum_{i=1}^n d(g_i/\pi_i, 1/\pi_i)$. Lagrange multipliers can be used to construct an iteratively reweighted least squares algorithm for computing the $g_i$.\cite{deville1992calibration} In standard multiple imputation, we use a model for the distribution of $X$ given $Z$, $Y$ and $A$. For this, we generate $M$ samples from the predictive distribution to produce $M$ imputations $X_i^{(1)},\ldots, X_i^{(M)}$, giving rise to $M$ complete imputed datasets that represent samples from the unknown conditional distribution of the complete data given the observed data. It is now straightforward to solve equation (\ref{eq-census}) for each of the $M$ imputed datasets, giving $M$ values of $\tilde{\theta}_{N,(m)}$ with estimated variances $\tilde{\sigma}_{N,(m)}^2$, $1 \leq m \leq M$. The imputation estimator $\hat\theta_{\mathrm{MI}}$ of $\theta$ is the average of the $\tilde{\theta}_{N,(m)}$, and the variance can be estimated from the variance of the $\tilde{\theta}_{N,(m)}$ and the average of $\tilde{\sigma}_{N,(m)}^2$.\citep{rubin1996multiple} \section{Imputation for calibration} \label{impute-cal} \subsection{Estimation} The optimal function $H_i$ is $E[U_i|Y_i, Z_i, A_i]$, and using this optimal $H_i$ would give the optimal design-consistent estimator of $\theta$.\citep{robins1994estimation} However, the optimal $H_i$ is typically not available explicitly. In practice, one may estimate the optimal function $H_i$ with a single regression imputation $\hat X_i$ of $X_i$, where we first solve $$\sum_{i=1}^N U(Y_i,\hat X_i,Z_i;\theta)=0,$$ with respect to $\theta$, and then compute $U(Y_i,\hat X_i,Z_i;\theta)$ at the solution.\cite{breslow2009using,rivera2016using} We denote such a calibration estimator of $\theta$, based on a single regression imputation, by $\hat\theta_{\mathrm{cal,1}}$. In this study, we propose a raking estimator using multiple imputation. Specifically, we first solve the sets of equations $$\sum_{i=1}^N U(Y_i,\hat{X}_i^{(m)},Z_i;\theta)=0,$$ where $\hat{X}_1^{(m)}, \ldots, \hat{X}_N^{(m)}$ are the imputed values of $X_1,\ldots,X_N$ from the $m$-th imputation, to obtain multiple estimates $\hat\theta^{(m)}$, $1 \leq m \leq M$.
Define $H_i$, for each $1 \leq i \leq N$, as the average of the $M$ resulting $U(Y_i,\hat{X}_i^{(m)},Z_i;\hat\theta^{(m)})$: \begin{align} H_i = \frac{1}{M} \sum_{m=1}^M U(Y_i,\hat{X}_i^{(m)},Z_i;\hat\theta^{(m)}). \label{multical-adj} \end{align} Finally, we solve \eqref{eq-aipw} with the weight adjustments under the calibration constraint \eqref{cal-adj}, and denote the resulting estimator of $\theta$ by $\hat\theta_{\mathrm{cal,M}}$. \subsection{Efficiency and robustness} \label{efficient-robust} When all three of the sampling probability, the imputation model, and the regression model are correctly specified, the standard calibration estimator $\hat\theta_{\mathrm{cal,1}}$ gives a way to compute the efficient design-consistent estimator. If we are willing to assume only that the regression model and imputation model are correct, there appears to be no motivation for requiring a design-consistent estimator. In this case, the standard multiple imputation estimator $\hat\theta_{\textrm{MI}}$ will also be consistent and typically more efficient than a design-based approach. If the regression model and the imputation model are correctly specified with all the available variables, it is clear that the empirical average \eqref{multical-adj} over multiple imputations defining $H_i$ will converge to the optimal value $E[U_i|Y_i,Z_i, A_i]$ as $M$ and $N$ increase, so that the proposed raking estimator using multiple imputation provides the optimal calibration estimator. However, it is unreasonable in practice to assume that both the regression and imputation models are exactly correct. Recently, in the special case where the full cohort is an iid sample and the subsampling is independent (so-called Poisson sampling), it has been shown that inverse probability weighting adjusted by multiple imputation attains the semi-parametric efficiency bound for a model that assumes only $E[U_i]=0$ and $E[R_i|Z_i,Y_i,A_i]=\pi_i$.\cite{han2016combining} The proposed estimator $\hat\theta_{\mathrm{cal,M}}$ likewise solves a weighted estimating equation \eqref{eq-aipw} subject to the calibration constraints \eqref{cal-adj} computed by multiple imputation. In this paper, we argue one step further that the interesting questions of robustness and efficiency arise when the imputation model and potentially also the regression model are slightly misspecified. Under what conditions are $\|\hat\theta_{\mathrm{cal,M}}-\theta^*\|_2^2$ and $\|\hat\theta_{\mathrm{MI}}-\theta^*\|_2^2$ comparable, and do these correspond to plausible misspecifications of the regression model, the imputation model, or both? These questions were considered in a more abstract context by Lumley (2017)\cite{lumley2017robustness}, where the model is only nearly-true such that $$\sqrt{n}(\hat\theta_{\mathrm{cal,M}}-\theta^*){\rightsquigarrow} N(0,\sigma^2+\omega^2)$$ and $$\sqrt{n}(\hat\theta_{\mathrm{MI}}-\theta^*){\rightsquigarrow} N(\kappa\rho\omega,\sigma^2).$$ In the above equations, $\kappa$ is the limit of the Kullback--Leibler divergence between the true model $P_n$ and the outcome model $Q_n$, defined as the sequence of misspecified distributions chosen to be contiguous to the true model. We assume $\kappa$ is bounded. $\rho$ is the asymptotic correlation between the log-likelihood ratio of the two distributions, $P_n$ and $Q_n$, and the difference in influence functions for $\hat\theta_{\mathrm{cal,M}}$ and $\hat\theta_{\mathrm{MI}}$ under $P_n$ and $Q_n$, respectively.
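To make the calibration step computationally concrete, the following R sketch (ours, not the code released with the paper) computes the weight adjustments $g_i$ of \eqref{cal-adj} in closed form for the chi-squared distance; other distance functions require the iteratively reweighted least squares algorithm mentioned in Section 2. The auxiliary matrix \texttt{H} is a generic placeholder: in the proposed estimator its rows would be the averaged influence contributions \eqref{multical-adj}.
\begin{verbatim}
## Minimal sketch (our own, not the authors' code) of linear ("chi-squared
## distance") calibration: given design weights d_i = 1/pi_i on the sampled
## units and an auxiliary vector H_i known for the whole cohort, compute
## adjusted weights g_i / pi_i that reproduce the cohort totals of H.
calibrate_weights <- function(H, d, sampled) {
  # H: N x q matrix of auxiliary variables (known for all i)
  # d: length-N vector of design weights 1/pi_i (used only on sampled units)
  # sampled: logical vector, TRUE for phase-two units
  Hs <- H[sampled, , drop = FALSE]
  ds <- d[sampled]
  T_full <- colSums(H)                 # cohort totals of H
  T_ht   <- colSums(ds * Hs)           # Horvitz-Thompson estimate of the totals
  A      <- crossprod(Hs, ds * Hs)     # sum over sampled units of d_i H_i H_i'
  lambda <- solve(A, T_full - T_ht)    # Lagrange multipliers
  ds * as.vector(1 + Hs %*% lambda)    # calibrated weights g_i / pi_i
}
## toy check: the calibrated weights reproduce the cohort totals exactly
set.seed(3)
N <- 1000; pi_i <- runif(N, 0.1, 0.5); sampled <- runif(N) < pi_i
H <- cbind(1, rnorm(N))
w <- calibrate_weights(H, 1 / pi_i, sampled)
max(abs(colSums(w * H[sampled, ]) - colSums(H)))   # ~ 0 up to rounding
\end{verbatim}
By construction, the calibrated weights reproduce the cohort totals of $H$ exactly, which is the defining property \eqref{cal-adj}; solving \eqref{eq-aipw} with these weights then yields $\hat\theta_{\mathrm{cal,1}}$ or $\hat\theta_{\mathrm{cal,M}}$, depending on how $H$ was built.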
In this framework, the ``nearly-true'' models are defined by a sequence of outcome models such that one may not reliably reject misspecification, even using the most powerful test against the true data-generating distribution. In simple but common cases, including the case-control study and the linear regression analysis in the two-phase study, the model misspecification may neutralize the advantage of standard multiple imputation.\cite{lumley2017robustness} Indeed, the mean-squared error of $\hat\theta_{\mathrm{MI}}$ will be asymptotically larger than that of $\hat\theta_{\mathrm{cal,M}}$ whenever $|\kappa \rho|>1$.\cite{lumley2017robustness} We study the relative numerical performance of these two estimators and other standard competitors under the nearly-true model setting in the next section. \section{Simulations} \label{sec-sim} In this section we are interested in three questions: how much precision is gained by multiple versus single imputation in raking, whether imputation models can maintain an efficiency advantage while being more robust, and how these affect the efficiency-robustness trade-off between weighted and imputation estimators. Source code in R for these simulations is available at \url{https://github.com/kyungheehan/calib-mi}. \subsection{Case-control study}\label{sim1} We first demonstrate the numerical performance of multiple imputation for the case-control study, where calibration is not available but the maximum likelihood estimator can be easily computed. Let $X$ be a standard normal random variable and $Y$ be a binary response taking values in $\{0,1\}$ such that, for a given $X=x$, the associated logistic model is given by \begin{align} \textrm{logit}\,\mathbb{P}(Y=1 | X=x) = \alpha_0 + \beta_0 x + \delta_0(x-\xi) \mathbb{I}(x > \xi) \label{true1} \end{align} for some fixed $\delta_0$ and $\xi$, where $\textrm{logit}(p) = \log \big( \frac{p}{1-p} \big)$ for $0 < p < 1$. In accordance with the usual case-control study design, we assume $Y$ is known for everyone, but $X$ is available with sampling probability $1$ when $Y=1$ and a lower sampling probability when $Y=0$. To be specific, we first generate a full cohort $\mathcal{X}_N = \{ (Y_i, X_i) : 1 \leq i \leq N \}$ following the true model \eqref{true1} and denote the index set of the $n$ case subjects in $\mathcal{X}_N$ by $S_1 \subset \{ 1, \ldots, N \}$, $n < N$. Thus, $Y_i=1$ if $i \in S_1$ and $Y_i = 0$ otherwise. A balanced case-control design is then employed, which consists of observing $(Y_i, X_i)$ for all the subjects in $S_1$ and for a randomly chosen $n$-subsample $S_0$ from $\{1, \ldots, N \} \setminus S_1$. For cohort members in $\{1, \ldots, N \} \setminus (S_0 \cup S_1)$, only $Y_i$ is observed. Define $\mathcal{X}^\ast_n = \{ (Y_i, X_i) : i \in S_0 \cup S_1 \}$. We examine the sensitivity of the multiple imputation approach in the setting of nearly-true models.\citep{lumley2017robustness} For a practical definition of a nearly-true model, we consider a working model that may not be reliably rejected, even by the oracle likelihood ratio test against the true model \eqref{true1} that generated the data. In other words, instead of fitting the true model \eqref{true1}, we employ a simpler outcome model \begin{align} \textrm{logit} \, \mathbb{P}(Y=1 | X=x) = \alpha + \beta x. \label{nearly-true1} \end{align} We note that when $\delta_0=0$ the working model \eqref{nearly-true1} is correctly specified, but it is misspecified when $\delta_0\neq 0$.
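The following R snippet (our own sketch, not the source code referenced above) generates one replicate of this design and computes two of the four estimators introduced next, the complete-case MLE of the working model and the design-weighted (IPW) estimator.
\begin{verbatim}
## One replicate of the case-control simulation (illustrative sketch only).
## With delta0 = 0 the working model is correctly specified and both estimates
## should be close to beta0 = 1; increasing delta0 introduces the local
## misspecification studied below.
set.seed(2)
N <- 1e4; alpha0 <- -5; beta0 <- 1; delta0 <- 0; xi <- 1.8
x <- rnorm(N)
y <- rbinom(N, 1, plogis(alpha0 + beta0 * x + delta0 * (x - xi) * (x > xi)))
S1  <- which(y == 1)                        # all cases are sampled
S0  <- sample(which(y == 0), length(S1))    # balanced control subsample
sub <- c(S1, S0)
pr  <- ifelse(y == 1, 1, length(S1) / sum(y == 0))  # known sampling probabilities
## complete-case (unweighted) MLE of the working model logit P(Y=1|X) = a + b x
fit_mle <- glm(y ~ x, family = binomial, subset = sub)
## design-based IPW estimator: weight each sampled unit by 1/pi_i
fit_ipw <- glm(y ~ x, family = quasibinomial, weights = 1 / pr, subset = sub)
c(mle = unname(coef(fit_mle)["x"]), ipw = unname(coef(fit_ipw)["x"]))
\end{verbatim}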
It is worthwhile to mention that the single-knot linear spline logistic model \eqref{true1} is the worst-case misspecification of \eqref{nearly-true1} when $\alpha_0 = -5$, $\beta_0 = 1$ and $\xi \approx 1.8$, which maximizes the correlation between the most powerful test to reject the model misspecification and the bias of the misspecified maximum likelihood estimator.\citep{lumley2017robustness} In this case, the maximum likelihood estimator of \eqref{nearly-true1} is the unweighted logistic regression \citep{prentice1979logistic} in the complete-case analysis using only $\mathcal{X}_n^\ast$. Four different methods are compared in our example for estimating the nearly-true slope $\beta$ in \eqref{nearly-true1}: (i) maximum likelihood estimation (MLE), (ii) a design-based inverse probability weighting (IPW) approach, (iii) multiple imputation with a parametric imputation model (MI-P), and (iv) multiple imputation with non-parametric imputation based on bootstrap resampling (MI-B). Formally, the parametric MI (MI-P) imputes covariates $X_i$, $i \not\in S_0\cup S_1$, from a parametric model such that $X|Y=y$ is assumed to be distributed as $N(\mu + \eta y, \sigma^2)$, where $\mu = \mathbb{E}(X | Y=0)$, $\eta = \mathbb{E}(X | Y=1) - \mu$, and $\sigma^2 = \mathbb{V}\text{ar}(X)$. Here, the parameters $\mu$, $\eta$ and $\sigma^2$ are estimated from $\mathcal{X}_n^\ast$. On the other hand, the bootstrap method (MI-B) resamples covariates $X_i$, $i \not\in S_0\cup S_1$, from the empirical distribution of $X$ given $Y=0$. We note that the MLE only utilizes the sub-cohort information $\mathcal{X}_n^\ast$, while the other estimators additionally use the response observations $\{Y_i : i \not\in S_0 \cup S_1\}$, so that efficiency gains can be expected for estimating the nearly-true slope $\beta$, depending on the level of model misspecification. Using Monte Carlo iterations, we summarized the empirical performance of the four different estimators based on fitting the nearly-true model \eqref{nearly-true1} with the mean squared error (MSE) of the target parameter $\beta$, \begin{align} \textrm{MSE}(\hat{\beta}) = \frac{1}{K} \sum_{k=1}^K \big( \hat{\beta}^{[k]} - \beta \big)^2 \label{mse}, \end{align} where $\hat{\beta}^{[k]}$ is the estimate of $\beta$ from the $k$-th Monte Carlo replication, $1 \leq k \leq K$. Similarly, the empirical bias-variance decomposition, \begin{align} \textrm{Bias}(\hat{\beta}) = \textrm{E}{\hat{\beta}} - \beta \quad \textrm{and}\quad \textrm{Var}(\hat{\beta}) = \frac{1}{K} \sum_{k=1}^K \Big( \hat{\beta}^{[k]} - \textrm{E}{\hat{\beta}} \Big)^2, \label{bias-var} \end{align} was also reported to compare precision and efficiency, where $\textrm{E}{\hat{\beta}} = K^{-1} \sum_{k=1}^K \hat{\beta}^{[k]}$. For all simulations, we fixed $\beta=1$, $\alpha_0=-5$, $\xi=1.8$, and $N=10^4$, and the number of cases was around $n=110$ on average. We used $M=100$ multiple imputations and $K=1000$ Monte Carlo simulations. Results are provided in Table \ref{table1}. Table \ref{table1} demonstrates two principles. First, the parametric MI (MI-P) estimator closely matches the maximum likelihood estimator, whereas the resampling (MI-B) estimator closely matches the design-based estimator. Second, and more importantly, the design-based estimator is less efficient than the maximum likelihood estimator when the model is correctly specified, but has lower mean squared error when $\delta_0$ is greater than about $1.6$.
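Continuing the sketch above (again purely illustrative and relying on the objects \texttt{x}, \texttt{y}, and \texttt{sub} defined there), the MI-P step can be coded in a few lines: estimate the normal imputation model on the case-control subsample, impute the unsampled covariates, refit the working model on each completed cohort, and average the point estimates; the variance combination of Rubin's rules is omitted for brevity.
\begin{verbatim}
## MI-P sketch, continuing the objects x, y, sub from the previous snippet.
## Imputation model: X | Y = y ~ N(mu + eta * y, sigma^2), parameters estimated
## from the case-control subsample; point estimate pooled by averaging.
mu    <- mean(x[sub][y[sub] == 0])
eta   <- mean(x[sub][y[sub] == 1]) - mu
sigma <- sd(x[sub])
M <- 100
beta_mi <- replicate(M, {
  x_imp <- x
  miss  <- setdiff(seq_along(x), sub)      # unsampled cohort members
  x_imp[miss] <- rnorm(length(miss), mu + eta * y[miss], sigma)
  coef(glm(y ~ x_imp, family = binomial))["x_imp"]
})
mean(beta_mi)    # MI-P point estimate of the working-model slope
\end{verbatim}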
At that level of misspecification, even the most powerful one-sided test of the null $\delta_0=0$ against the spline alternative \eqref{true1} would have power less than approximately $0.5$, so any model diagnostic used in a practical setting would have lower power. Figure \ref{figure1} shows the relative efficiency of the methods as a function of the level of misspecification. In summary, we conclude that the efficiency gain of the model-based analysis is not robust even to mild forms of misspecification that would not be detectable in practical settings. \subsection{Linear regression with continuous surrogate}\label{sim2} We now evaluate the performance of the multiple imputation raking estimator in a two-phase sampling design. Let $Y$ be a continuous response associated with covariates $X=x$ and $Z=z$ such that \begin{align} \mathbb{E}(Y | X=x, Z=z)= \alpha_0 + \beta_0 x + \delta_0 x \cdot \mathbb{I}(|z| > \zeta_0), \label{true2} \end{align} for some fixed $\delta_0$ and $\zeta_0 = F_Z^{-1}(0.95)$, where $\mathbb{V}ar(Y|X,Z)=1$, $X$ is a standard normal random variable, $Z$ is a continuous surrogate of $X$, and $F_Z^{-1}$ is the inverse cumulative distribution function of $Z$. Similarly to the simulation study in Section \ref{sim1}, instead of the true model \eqref{true2}, which generally will not be known in a real data setting, we are interested in the typical linear regression analysis with the outcome model \begin{align} \mathbb{E}(Y | X=x) = \alpha + \beta x. \label{nearly-true2} \end{align} Two different scenarios for the surrogate variable $Z$ are considered: (a) $Z = X + \varepsilon$ for $\varepsilon \sim N(0,1)$ and (b) $Z= \eta X$ for $\eta \sim \Gamma(4,4)$, which represent additive and multiplicative error, respectively. In the first phase of sampling, we assume that outcomes $Y$ and auxiliary variables $Z$ are known for everyone, whereas covariate measurements of $X$ are available only at the second stage. The sampling for the second phase is stratified on $Z$. Specifically, we observe $X_i$ for all individuals with $|Z_i| > \zeta_0$, while $5\%$ of subjects in the intermediate stratum $|Z_i| \leq \zeta_0$ are randomly sampled, where $1 \leq i \leq N$. We write $S_2 \subset \{1, \ldots, N \}$ for the index set of subjects collected in the second phase, so that $\mathcal{X}_I = \{ (Y_i, Z_i) : 1 \leq i \leq N \}$ and $\mathcal{X}_{II} = \{ (Y_i, X_i, Z_i) : i \in S_2 \}$ denote the first and second stage samples, respectively. We compare five different methods of estimating the nearly-true parameter $\beta$: (i) maximum likelihood estimation (MLE), (ii) standard generalized raking using the auxiliary variable, (iii) regression calibration (RC), a single imputation method that imputes the missing covariate $X$ with an estimate of $\mathbb{E}[X|Z]$,\citep{carroll2006} (iv) multiple imputation without raking (MI), and (v) the proposed approach combining raking and multiple imputation (MIR). We note that when $Y$ is Gaussian, the semi-parametric efficient maximum likelihood estimator of $\beta$ is available in the \texttt{missreg3} package in R,\citep{wild2013missreg3} using the stratification information.\cite{scott2006calculating} We employ this for the MLE (i). For the standard raking method (ii), we construct a design-based efficient estimator \citep{breslow2009using} as below: \begin{itemize} \item[R1.]
Fit a single imputation model $X = a + b Y + c Z + \epsilon$, where $\epsilon \sim N(0,\tau^2)$, based on the second phase sample $\mathcal{X}_{II}$. \item[R2.] Fit the nearly-true model \eqref{nearly-true2} using $(Y_i, \hat{X}_i)$ for $1 \leq i \leq N$, where the $\hat{X}_i$ are fully imputed from (R1). \item[R3.] Calibrate the sampling weights for raking using the influence function induced by the nearly-true fits in (R2). \item[R4.] Fit the design-based estimator of the nearly-true model \eqref{nearly-true2} with the second phase sample $\mathcal{X}_{II}$ and the calibrated sampling weights from (R3). \end{itemize} For the conventional regression calibration approach (iii), we simply fit a linear model regressing $X_i$ on $Z_i$ for $i \in S_2$ and then impute missing observations $\hat{X}_i$ in the first phase, so that the nearly-true model \eqref{nearly-true2} is evaluated using $\{ (Y_i, \hat{X}_i) : i \not\in S_2\}$ and $\{ (Y_i, X_i) : i \in S_2 \}$. We consider two resampling techniques for the multiple imputation method (iv): the wild bootstrap \citep{cao1991rate, mammen1993bootstrap,hardle1993comparing} and a Bayesian approach with a non-informative prior. Note that the wild bootstrap gives consistent estimates in settings where Efron's conventional bootstrap does not, such as under heteroscedasticity and in high-dimensional settings. We refer to Appendix \ref{App-cal} for implementation details of multiple imputation with the wild bootstrap and with parametric Bayesian resampling. We now illustrate the proposed method that calibrates sampling weights using multiple imputation. \begin{itemize} \item[M1.] Resample $\hat{X}_i^\ast$ independently for all $1 \leq i \leq N$ by using either the wild bootstrap or the parametric Bayesian resampling. \item[M2.] Fit the nearly-true model \eqref{nearly-true2} based on a resample $\{ (Y_i, \hat{X}_i^\ast) : 1 \leq i \leq N\}$. \item[M3.] Repeat (M1) and (M2) multiple times, and take the average of the influence functions induced by the nearly-true models fitted in (M2). \item[M4.] Calibrate the sampling weights using the average influence function as auxiliary information. \item[M5.] Fit the design-based estimator of the nearly-true model \eqref{nearly-true2} with the second phase sample $\mathcal{X}_{II}$ and the calibrated sampling weights obtained from (M4). \end{itemize} Setting $N=5000$, we ran $M=100$ multiple imputations over $1000$ Monte Carlo replications. For all simulations, $\beta=1$ and $\alpha_0=0$; $\zeta_0\approx2.3$ when $Z$ is a surrogate of $X$ with additive measurement error and $\zeta_0\approx1.8$ with multiplicative error; and the phase two sample size was $|S_2|=750$ on average. We considered several values of $\delta_0$, and the level of misspecification is described by the empirical power to reject the misspecified working model \eqref{nearly-true2} with a level $0.05$ likelihood ratio test against the alternative \eqref{true2}. The numerical results with additive measurement errors are summarized in Table \ref{table2} and Figure \ref{figure2}. In this scenario, regression calibration (RC) performed the best for $\delta_0$ less than approximately 0.15, since RC correctly assumes a linear model for imputing $X$ from $Z$. The two standard multiple imputation estimators had estimation bias due to a misspecified imputation model and a larger MSE than the RC method. However, we note once again that the model diagnostic for linearity, i.e.,
$\delta_0=0$, had at most $20\%$ power for the levels of misspecification studied, which means that one may not reliably reject the misspecified model even when $\delta_0=0.3$, so that imputation under the correctly specified model is also unlikely. Indeed, the standard and proposed MIR raking estimators achieved lower MSE when $\delta_0 \geq 0.15$. Thus, raking successfully leveraged the information from the cohort members not in the phase two sample while maintaining its robustness, as seen in the previous literature.\citep{deville1992calibration, sarndal2007calibration, breslow2009using} In this simulation we further found that the efficiency of the standard raking estimator can be improved by using multiple imputation to estimate the optimal raking variable, with efficiency gains of about $10\%$ in this example. Table \ref{table3} and Figure \ref{figure3} summarize the results for the multiplicative error scenario. In this case, even for $\delta_0=0$, the RC and multiple imputation estimators have appreciable bias and worse relative performance compared to the two raking estimators, because of the misspecified imputation model. The two raking estimators outperformed all other estimators for all levels of misspecification. In this scenario, MIR had smaller gains over the standard raking estimator. \section{Data Example: The National Wilms Tumor Study} \label{sec-data} We apply our proposed approach to data from the National Wilms Tumor Study (NWTS). In this example, we assume a key covariate of interest is only available in a phase 2 subsample, and we compare the proposed MIR method with other standard estimators for this setting. In the NWTS data example, we are interested in the logistic model for the binary relapse response with predictors histology (unfavorable (UH) versus favorable (FH)), stage of disease (III/IV versus I/II), age at diagnosis (years), and tumor diameter (cm), namely \begin{eqnarray} \begin{split} \qquad &\textrm{logit} \, \mathbb{P}(\textrm{Relapse} \, | \, \textrm{Histology}, \textrm{Stage}, \textrm{Age}, \textrm{Diameter})\\ &\quad = \alpha + \beta_1 (\textrm{Age}) + \beta_2 (\textrm{Diameter}) + \beta_3 (\textrm{Histology}) + \beta_4 (\textrm{Stage}) + \beta_{3,4} (\textrm{Histology}\ast\textrm{Stage}), \end{split} \label{wilms-model} \end{eqnarray} where $\beta_{3,4}$ denotes the interaction coefficient between histology and stage.\cite{lumley2011complex} We consider \eqref{wilms-model} to be a nearly-true model of the relapse probability associated with the covariates, as it is difficult to specify the true model in this real data setting. Histology was evaluated by both a central laboratory and a local laboratory, where the latter is subject to misclassification due to the difficulty of diagnosing this rare disease. For the first phase data, we suppose that the $N=3915$ observations of outcomes and covariates are available for the full cohort, except that histology is obtained only from the local laboratory. Central histology is then obtained on a phase 2 subset. Following outcome-dependent sampling strategies,\cite{breslow1999design,lumley2011complex} we sampled individuals for the second phase by stratifying on relapse, local histology, and disease stage levels.
Specifically, all the subjects who either relapsed or had unfavorable local histology were selected, while only a random subset in the remaining strata (the non-relapsed, favorable histology strata for each stage level) was selected, so that there was a 1:1 case-control sample for each stage level.\cite{lumley2011complex} Similarly to the previous numerical studies, we compared four estimators, where the ``true parameters'' in \eqref{wilms-model} are given by the estimates from the full cohort analysis: (i) the maximum likelihood estimates (MLE) of the regression coefficients in \eqref{wilms-model} based on the complete case analysis of the second phase sample; (ii) the standard raking estimator, which calibrates the sampling weights by using the local histology information in the first phase sample, where the raking variable was generated from the influence functions. We imputed the (unobserved) central histology using a logistic model regressing the second phase histology observations on age, tumor diameter, and the three-way interaction among relapse, stage, and local histology, together with their nested interaction terms. The reason for introducing interactions in the imputation model is that subjects at an advanced disease stage or with unfavorable histology had mostly relapsed in the observed data. We also considered (iii) multiple imputation (MI) based on the conventional bootstrap procedure applied to the second phase sample, and (iv) the proposed combination of raking and multiple imputation (MIR) from the previous section. The relative performance of the methods was assessed by obtaining estimates for 1000 two-phase samples. 100 multiple imputations were applied for each two-phase sample. Table \ref{table4} summarizes the results. Similarly to the numerical illustration in the previous section, we found that the proposed method (MIR) had the best performance in terms of achieving the lowest MSE for the coefficient of the covariate available only on the phase 2 subset. While raking does not provide the lowest MSE for all parameters, in this example MIR had the lowest squared error summed over the model parameters. \section{Discussion} There are many settings in which variables of interest are not directly observed, either because they are too expensive or difficult to measure directly or because they come from a convenient data source, such as EHR, not originally collected to support the research question. In any practical setting, the chosen statistical model to handle the mismeasured or missing data will be at best a close approximation to the targeted true underlying relationship. A general discussion of the difficulty of testing for model misspecification demonstrates that the data at hand cannot be used to reliably test whether or not the basic assumptions in the regression analysis hold without good knowledge of the potential structure.\cite{freedman2009} Here, we have considered the robustness-efficiency trade-off of several estimators in the setting of mild model misspecification, where idealized tests with the correct alternative have low power.
When the misspecification is along the least-favorable direction contiguous to the true model, the bias will be proportional to the efficiency gain from a parametric model.\cite{lumley2017robustness} We studied the relative performance of design-based estimators for a nearly-true regression model in two cases, logistic regression in a case-control study and linear regression in a two-phase design, in which the misspecification was approximately in the least-favorable direction. In both cases, the misspecification took the form of a mild departure from linearity and, as expected, the raking estimators demonstrated better robustness than the parametric MLE and the standard multiple imputation estimators. Our approach to local robustness is related to that of Watson and Holmes (2016),\cite{watson2016} who consider making a statistical decision robust to model misspecification in a neighborhood of a given model in the sense of Kullback--Leibler divergence. Our approach is simpler than theirs for two reasons: we consider only asymptotic local minimax behavior, and we work in a two-phase sampling setting where the sampling probabilities are under the investigator's control and so can be assumed known. In this setting, the optimal raking estimator is consistent and efficient in the sampling model and so is locally asymptotically minimax. In more general settings of non-response and measurement error, it is substantially harder to find estimators that are locally minimax, even asymptotically, and more theoretical work is needed. Another contribution of our study is that we demonstrated a practical approach to the efficient design-based estimator under contiguous misspecification. Without an explicit form of the efficient influence function, the characterization of the efficient estimator does not always lead to a readily computable version of the standard raking estimator. We examined the use of multiple imputation to estimate the raking variable that confers the optimal efficiency.\citep{han2016combining} Our proposed raking estimator is easy to calculate and provides better efficiency than any raking estimator based on a single imputation auxiliary variable. In the two cases studied, the improvement in efficiency was evident, though at times small. On the other hand, the degree of improvement of the MI-raking estimator over the standard raking approach is expected to increase with the degree of non-linearity of the score for the target variable. In additional simulations, not shown, we did indeed see larger efficiency gains for MI-raking over single-imputation raking with large measurement error in $Z$. In many settings, there is a preference for simpler models when there is a lack of evidence to support a more complicated approach, because of their clarity of interpretation.\citep{box2005statistics, stone1985additive} In such settings, design-based estimators are easy to implement in standard software and provide the desired robustness. More theoretical work is also needed to find a more practical representation of the least-favorable contiguous model in the general setting, in order to better understand how much of a practical concern this type of misspecification may be. The bias--efficiency trade-off we describe is also important in the design of two-phase samples.
The optimal design for the raking estimator will be different from the optimal design for the efficient likelihood estimator, and the optimal design when the outcome model is ``nearly-true'' may be different again. \section*{Acknowledgments} This work was supported in part by the Patient Centered Outcomes Research Institute (PCORI) Award R-1609-36207 and U.S. National Institutes of Health (NIH) grant R01-AI131771. The statements in this manuscript are solely the responsibility of the authors and do not necessarily represent the views of PCORI or NIH. \section*{Data availability} Source code in R for these simulations and the National Wilms Tumor Study data are available at \url{https://github.com/kyungheehan/calib-mi}. \nocite{*} \begin{table}[htbp] \caption{\label{table1} Relative performance of the maximum likelihood (MLE), design-based estimator (IPW), parametric imputation (MI-P) and bootstrap resampling (MI-B) imputation estimators in the case-control design with cohort size $N=10^4$, case-control subset with $n=110$ in average, $M=100$ imputations, and $1000$ Monte Carlo runs. We report the root-mean squared error ($\sqrt{\textrm{MSE}}$) for $\beta=1$, its bias and variance decomposition \eqref{bias-var}, and the empirical power to reject the nearly-true model \eqref{nearly-true1} through the most powerful (MP) test and the goodness-of-fit test of linear fits.\citep{li2007nonparametric, hart2013nonparametric}} \centering \begin{tabular}{cc rrrr c cc} \hline \multirow{2}{*}{$(\beta_0, \delta_0)$} & \multirow{2}{*}{Criterion} & \multicolumn{4}{c}{Estimation performance} & & \multicolumn{2}{c}{Empirical power$^\dagger$}\\ \cline{3-6}\cline{8-9} & & {MLE} & {IPW} & MI-P & MI-B & & MP test & Lin. test\\ \hline \multirow{3}{*}{(1, 0)} & $\sqrt\textrm{MSE}$ & 0.145 & 0.239 & 0.140 & 0.240 & & \multirow{3}{*}{0.046} & \multirow{3}{*}{0.042}\\ & Bias & 0.014 & 0.071 & 0.011 & 0.071 & & \\ & $\sqrt\textrm{Var}$ & 0.144 & 0.229 & 0.140 & 0.229 & & \\ \hline \multirow{3}{*}{(0.844, 0.700)} & $\sqrt\textrm{MSE}$ & 0.148 & 0.229 & 0.147 & 0.229 & & \multirow{3}{*}{0.202} & \multirow{3}{*}{0.042}\\ & Bias & -0.067 & 0.064 & -0.077 & 0.064 & & \\ & $\sqrt\textrm{Var}$ & 0.132 & 0.219 & 0.125 & 0.219 & & \\ \hline \multirow{3}{*}{(0.692, 1.400)} & $\sqrt\textrm{MSE}$ & 0.199 & 0.217 & 0.204 & 0.217 & & \multirow{3}{*}{0.410} & \multirow{3}{*}{0.061}\\ & Bias & -0.156 & 0.054 & -0.168 & 0.054 & & \\ & $\sqrt\textrm{Var}$ & 0.124 & 0.211 & 0.116 & 0.211 & & \\ \hline \multirow{3}{*}{(0.541, 2.100)} & $\sqrt\textrm{MSE}$ & 0.257 & 0.201 & 0.262 & 0.201 & & \multirow{3}{*}{0.683} & \multirow{3}{*}{0.156}\\ & Bias & -0.233 & 0.047 & -0.242 & 0.047 & & \\ & $\sqrt\textrm{Var}$ & 0.109 & 0.196 & 0.102 & 0.195 & & \\ \hline \multirow{3}{*}{(0.381, 2.800)} & $\sqrt\textrm{MSE}$ & 0.317 & 0.206 & 0.320 & 0.206 & & \multirow{3}{*}{0.905} & \multirow{3}{*}{0.382}\\ & Bias & -0.301 & 0.056 & -0.306 & 0.056 & & \\ & $\sqrt\textrm{Var}$ & 0.098 & 0.199 & 0.093 & 0.199 & & \\ \hline \end{tabular} \begin{flushleft} \item $^\dagger$$P_n$ and $Q_n$ are likelihood functions at $\theta_0 = (\alpha_0, \beta_0, \delta_0)$ and $\theta^* = (\alpha, \beta)$, respectively. \end{flushleft} \end{table} \begin{table}[htbp] \caption{\label{table2} Multiple imputation in two-stage analysis with continuous surrogates when $Z = X + \varepsilon$ for independent $\varepsilon \sim N(0,1)$. 
We compare relative performance of the maximum likelihood (MLE), standard raking, regression calibration (RC), multiple imputations (MI) using either the wild bootstrap or Bayesian approach, and the proposed multiple imputation with raking (MIR) estimators for a two-phase design with cohort size $N=5000$, phase 2 subset $|S_2|=750$ in average, $M=100$ imputations, and $1000$ Monte Carlo runs. We report the root-mean squared error ($\sqrt{\textrm{MSE}}$) for $\beta=1$, its bias and variance decomposition \eqref{bias-var}, and the empirical power to reject the nearly-true model \eqref{nearly-true2} through the most powerful (MP) test and the goodness-of-fit test of linear fits.\citep{hart2013nonparametric, li2007nonparametric}} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{cc cccccccc ccc} \hline \multirow{3}{*}{$(\beta_0, \delta_0)$} & \multirow{3}{*}{Criterion} & \multicolumn{8}{c}{Estimation performance} & \multirow{3}{*}{\multirow{1}{*}{Abs Corr$^\dagger$}} & \multicolumn{2}{c}{\multirow{1}{*}{Empirical power$^\ddagger$}}\\ \cline{3-10}\cline{12-13} & & \multirow{2}{*}{MLE} & \multirow{2}{*}{Raking} & \multirow{2}{*}{RC} & \multicolumn{2}{c}{MI} & & \multicolumn{2}{c}{MIR} & & \multirow{2}{*}{MP test} & \multirow{2}{*}{Lin. test}\\ \cline{6-7}\cline{9-10} & & & & & Boot & Bayes & & Boot & Bayes\\ \hline \multirow{3}{*}{(1, 0)} & $\sqrt\textrm{MSE}$ & 0.019 & 0.038 & 0.017 & 0.019 & 0.019 & & 0.034 & 0.034 & \multirow{3}{*}{-} & \multirow{3}{*}{0.052} & \multirow{3}{*}{0.065}\\ & Bias & 0.004 & 0.000 & 0.000 & 0.002 & -0.003 & & 0.001 & 0.001 & & \\ & $\sqrt\textrm{Var}$ & 0.019 & 0.038 & 0.017 & 0.018 & 0.018 & & 0.034 & 0.034 & & \\ \hline \multirow{3}{*}{(0.951, 0.068)} & $\sqrt\textrm{MSE}$ & 0.033 & 0.037 & 0.022 & 0.023 & 0.026 & & 0.033 & 0.033 & \multirow{3}{*}{0.480} & \multirow{3}{*}{0.140} & \multirow{3}{*}{0.078}\\ & Bias & -0.027 & 0.000 & -0.014 & -0.014 & -0.019 & & 0.001 & 0.001 & & \\ & $\sqrt\textrm{Var}$ & 0.018 & 0.037 & 0.017 & 0.018 & 0.018 & & 0.033 & 0.033 & & \\ \hline \multirow{3}{*}{(0.904, 0.131)} & $\sqrt\textrm{MSE}$ & 0.058 & 0.036 & 0.032 & 0.034 & 0.039 & & 0.033 & 0.033 & \multirow{3}{*}{0.496} & \multirow{3}{*}{0.407} & \multirow{3}{*}{0.089}\\ & Bias & -0.056 & 0.000 & -0.027 & -0.029 & -0.034 & & 0.001 & 0.001 & & \\ & $\sqrt\textrm{Var}$ & 0.018 & 0.036 & 0.017 & 0.018 & 0.018 & & 0.033 & 0.033 & & \\ \hline \multirow{3}{*}{(0.861, 0.191)} & $\sqrt\textrm{MSE}$ & 0.084 & 0.036 & 0.042 & 0.047 & 0.052 & & 0.032 & 0.032 & \multirow{3}{*}{0.497} & \multirow{3}{*}{0.698} & \multirow{3}{*}{0.108}\\ & Bias & -0.082 & -0.001 & -0.038 & -0.043 & -0.048 & & 0.001 & 0.001 & & \\ & $\sqrt\textrm{Var}$ & 0.018 & 0.036 & 0.017 & 0.018 & 0.018 & & 0.032 & 0.032 & & \\ \hline \multirow{3}{*}{(0.820, 0.247)} & $\sqrt\textrm{MSE}$ & 0.108 & 0.035 & 0.052 & 0.059 & 0.064 & & 0.032 & 0.032 & \multirow{3}{*}{0.496} & \multirow{3}{*}{0.893} & \multirow{3}{*}{0.142}\\ & Bias & -0.107 & 0.000 & -0.049 & -0.057 & -0.062 & & 0.001 & 0.001 & & \\ & $\sqrt\textrm{Var}$ & 0.017 & 0.035 & 0.017 & 0.018 & 0.018 & & 0.032 & 0.032 & & \\ \hline \multirow{3}{*}{(0.781, 0.3)} & $\sqrt\textrm{MSE}$ & 0.132 & 0.035 & 0.062 & 0.072 & 0.077 & & 0.032 & 0.032 & \multirow{3}{*}{0.495} & \multirow{3}{*}{0.978} & \multirow{3}{*}{0.189}\\ & Bias & -0.131 & -0.001 & -0.060 & -0.069 & -0.074 & & 0.001 & 0.001 & & \\ & $\sqrt\textrm{Var}$ & 0.017 & 0.035 & 0.017 & 0.018 & 0.018 & & 0.032 & 0.032 & & \\ \hline \end{tabular} } \begin{flushleft} \item $^{\dagger,\ddagger}$The absolute 
value of the correlation between $\hat{\beta}_{\textrm{MLE}} - \hat{\beta}_{\textrm{Raking}}$ and $\log {Q}_n- \log {P}_n$, \item where $P_n$ and $Q_n$ are likelihood functions at $\theta_0 = (\alpha_0, \beta_0, \delta_0)$ and $\theta^* = (\alpha, \beta)$, respectively. \end{flushleft} \end{table} \begin{table}[htbp] \caption{\label{table3} Multiple imputation in two-stage analysis with continuous surrogates when $Z = \eta X$ for independent $\eta \sim \Gamma(4,4)$. We compare relative performance of the maximum likelihood (MLE), standard raking, regression calibration (RC), multiple imputations using (MI) either the wild bootstrap or Bayesian approach, and the proposed multiple imputation with raking (MIR) estimators for a two-phase design with cohort size $N=5000$, phase 2 subset $|S_2|=750$ in average, $M=100$ imputations, and $1000$ Monte Carlo runs. We report the root-mean squared error ($\sqrt{\textrm{MSE}}$) for $\beta=1$, its bias and variance decomposition \eqref{bias-var}, and the empirical power to reject the nearly-true model \eqref{nearly-true2} through the most powerful (MP) test and the goodness-of-fit test of linear fits.\citep{hart2013nonparametric, li2007nonparametric}} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{cc cccccccc ccc} \hline \multirow{3}{*}{$(\beta_0, \delta_0)$} & \multirow{3}{*}{Criterion} & \multicolumn{8}{c}{Estimation performance} & \multirow{3}{*}{\multirow{1}{*}{Abs Corr$^\dagger$}} & \multicolumn{2}{c}{\multirow{1}{*}{Empirical power$^\ddagger$}}\\ \cline{3-10}\cline{12-13} & & \multirow{2}{*}{MLE} & \multirow{2}{*}{Raking} & \multirow{2}{*}{RC} & \multicolumn{2}{c}{MI} & & \multicolumn{2}{c}{MIR} & & \multirow{2}{*}{MP test} & \multirow{2}{*}{Lin. test}\\ \cline{6-7}\cline{9-10} & & & & & Boot & Bayes & & Boot & Bayes\\ \hline \multirow{3}{*}{(1, 0)} & $\sqrt\textrm{MSE}$ & 0.018 & 0.030 & 0.216 & 0.099 & 0.094 & & 0.029 & 0.029 & \multirow{3}{*}{-} & \multirow{3}{*}{0.048} & \multirow{3}{*}{0.056}\\ & Bias & 0.006 & 0.001 & 0.215 & 0.097 & 0.092 & & 0.002 & 0.002 & & \\ & $\sqrt\textrm{Var}$ & 0.017 & 0.030 & 0.013 & 0.018 & 0.018 & & 0.029 & 0.029 & & \\ \hline \multirow{3}{*}{(1.045, -0.068)} & $\sqrt\textrm{MSE}$ & 0.040 & 0.030 & 0.227 & 0.111 & 0.106 & & 0.029 & 0.029 & \multirow{3}{*}{0.585} & \multirow{3}{*}{0.149} & \multirow{3}{*}{0.062}\\ & Bias & 0.036 & 0.001 & 0.227 & 0.109 & 0.104 & & 0.002 & 0.002 & & \\ & $\sqrt\textrm{Var}$ & 0.018 & 0.030 & 0.013 & 0.018 & 0.018 & & 0.029 & 0.029 & & \\ \hline \multirow{3}{*}{(1.087, -0.131)} & $\sqrt\textrm{MSE}$ & 0.068 & 0.031 & 0.239 & 0.123 & 0.117 & & 0.030 & 0.030 & \multirow{3}{*}{0.584} & \multirow{3}{*}{0.427} & \multirow{3}{*}{0.075}\\ & Bias & 0.065 & 0.001 & 0.238 & 0.121 & 0.116 & & 0.002 & 0.002 & & \\ & $\sqrt\textrm{Var}$ & 0.018 & 0.031 & 0.013 & 0.018 & 0.018 & & 0.030 & 0.030 & & \\ \hline \multirow{3}{*}{(1.127, -0.191)} & $\sqrt\textrm{MSE}$ & 0.095 & 0.032 & 0.249 & 0.134 & 0.128 & & 0.031 & 0.031 & \multirow{3}{*}{0.585} & \multirow{3}{*}{0.697} & \multirow{3}{*}{0.099}\\ & Bias & 0.093 & 0.001 & 0.249 & 0.133 & 0.127 & & 0.002 & 0.002 & & \\ & $\sqrt\textrm{Var}$ & 0.018 & 0.032 & 0.014 & 0.018 & 0.018 & & 0.030 & 0.031 & & \\ \hline \multirow{3}{*}{(1.165, -0.247)} & $\sqrt\textrm{MSE}$ & 0.121 & 0.032 & 0.259 & 0.144 & 0.139 & & 0.031 & 0.031 & \multirow{3}{*}{0.583} & \multirow{3}{*}{0.890} & \multirow{3}{*}{0.136}\\ & Bias & 0.119 & 0.001 & 0.259 & 0.143 & 0.138 & & 0.002 & 0.002 & & \\ & $\sqrt\textrm{Var}$ & 0.019 & 0.032 & 0.014 & 0.019 & 0.019 & & 0.031 & 
0.031 & & \\ \hline \multirow{3}{*}{(1.200, -0.3)} & $\sqrt\textrm{MSE}$ & 0.146 & 0.033 & 0.269 & 0.155 & 0.149 & & 0.032 & 0.032 & \multirow{3}{*}{0.580} & \multirow{3}{*}{0.967} & \multirow{3}{*}{0.179}\\ & Bias & 0.145 & 0.001 & 0.268 & 0.154 & 0.148 & & 0.003 & 0.002 & & \\ & $\sqrt\textrm{Var}$ & 0.019 & 0.033 & 0.014 & 0.019 & 0.019 & & 0.032 & 0.032 & & \\ \hline \end{tabular} } \begin{flushleft} \item $^{\dagger,\ddagger}$The absolute value of the correlation between $\hat{\beta}_{\textrm{MLE}} - \hat{\beta}_{\textrm{Raking}}$ and $\log {Q}_n- \log {P}_n$, \item where $P_n$ and $Q_n$ are likelihood functions at $\theta_0 = (\alpha_0, \beta_0, \delta_0)$ and $\theta^* = (\alpha, \beta)$, respectively. \end{flushleft} \end{table} \begin{table}[htbp] \caption{\label{table4} The National Wilms Tumor Study data example. We compare relative performance of the maximum likelihood (MLE), standard raking, multiple imputation (MI) using the wild bootstrap (MI), and the proposed multiple imputation with raking (MIR) estimators for a two-phase design with cohort size $N=3915$, phase 2 subset $|S_2|=1338$, $M=100$ imputations, and $1000$ Monte Carlo runs. We report the root-mean squared error ($\sqrt{\textrm{MSE}}$) for the parameter estimate obtained from the full cohort analysis of the outcome model \eqref{wilms-model}, and its bias and variance decomposition \eqref{bias-var}.} \centering \begin{tabular}{cc ccccc c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Criterion} & \multicolumn{5}{c}{Estimation performance by regressor} & \multirow{1}{*}{Sum of }\\ \cline{3-7} & & Hstg$^1$ & Stage$^2$ & Age$^3$ & Diam$^4$ & H$\ast$S$^5$ & Squares\\ \hline \multirow{3}{*}{MLE} & $\sqrt{\textrm{MSE}}$ & 1.768 & 0.777 & 0.014 & 0.014 & 0.605 & 4.096\\ & Bias & -1.768 & -0.777 & -0.007 & -0.012 & 0.603 & 4.091\\ & $\sqrt{\textrm{Var}}$ & 0.031 & 0.023 & 0.013 & 0.008 & 0.051 & 0.005\\ \hline \multirow{3}{*}{Raking} & $\sqrt{\textrm{MSE}}$ & 0.129 & 0.022 & 0.006 & 0.003 & 0.203 & 0.059\\ & Bias & 0.021 & -0.002 & 0.000 & 0.001 & -0.050 & 0.003\\ & $\sqrt{\textrm{Var}}$ & 0.127 & 0.022 & 0.006 & 0.003 & 0.197 & 0.056\\ \hline \multirow{3}{*}{MI} & $\sqrt{\textrm{MSE}}$ & 0.146 & 0.015 & 0.003 & 0.002 & 0.175 & 0.052\\ & Bias & 0.055 & -0.004 & 0.003 & 0.002 & -0.042 & 0.005\\ & $\sqrt{\textrm{Var}}$ & 0.135 & 0.015 & 0.002 & 0.001 & 0.170 & 0.047\\ \hline \multirow{3}{*}{MIR} & $\sqrt{\textrm{MSE}}$ & 0.124 & 0.022 & 0.006 & 0.003 & 0.189 & 0.051\\ & Bias & 0.018 & 0.006 & 0.001 & 0.001 & -0.038 & 0.002\\ & $\sqrt{\textrm{Var}}$ & 0.122 & 0.021 & 0.006 & 0.003 & 0.185 & 0.049\\ \hline \multirow{2}{*}{Full cohort} & Estimate & 1.193 & 0.285 & 0.089 & 0.028 & 0.816 & \multicolumn{1}{c}{-}\\ & Std. Error & 0.156 & 0.105 & 0.017 & 0.012 & 0.227 & \multicolumn{1}{c}{-}\\ \hline \end{tabular} \begin{flushleft} \item \hspace{1in}$^1$Unfavorable histology versus favorable; $^2$Disease stage III/IV versus I/II; \item \hspace{1in}$^3$Year at diagnosis; $^4$Tumor diameter (cm); $^5$Histology$\ast$Stage. \end{flushleft} \end{table} \begin{figure} \caption{Illustration of Table \ref{table1} \label{figure1} \end{figure} \begin{figure} \caption{Illustration of Table \ref{table2} \label{figure2} \end{figure} \begin{figure} \caption{Illustration of Table \ref{table3} \label{figure3} \end{figure} \appendix \section{Details of implementation} \label{appendix} \subsection{Imputation} \label{App-cal} The wild bootstrap multiple imputation estimator is computed as follows: \begin{itemize} \item[W1.] 
Generate ${X}_i^\ast = \hat{X}_i + V_i \hat{e}_i$ for $i \in S_2$, where $\hat{e}_i$ are the residuals from (R2) and $V_i$ is an independent dichotomous random variable that takes the value $(1+\sqrt{5})/2$ with probability $(\sqrt{5}-1)/ (2\sqrt{5})$ and $(1-\sqrt{5})/2$ otherwise, so that $\mathbb{E}V=0$ and $\textrm{Var}(V) = 1$. \item[W2.] Fit an imputation model regressing ${X}_i^\ast$ on $Y_i$ and $Z_i$ for $i \in S_2$. \item[W3.] Resample $\hat{X}_i^\ast \sim N\big({\nu}(Y_i, Z_i), {\tau}^2(Y_i, Z_i)\big)$ independently for $i \in S_1$, where the mean and variance functions $\nu(y, z)\equiv\mathbb{E}(X \mid Y=y, Z=z)$ and ${\tau}^2(y, z)\equiv\textrm{Var}(X \mid Y=y, Z=z)$ are estimated from the model in (W2). \item[W4.] Fit the nearly-true model \eqref{nearly-true2} using $\{ (Y_i, \hat{X}_i^\ast) : 1 \leq i \leq N\}$, where $\hat{X}_i^\ast = X_i$ for $i \in S_2$. \item[W5.] Repeat (W1)--(W4) and take the average of the resulting parameter estimates. \end{itemize} We employ a parametric Bayesian resampling technique as follows: \begin{itemize} \item[B1.] Find the posterior distribution of the parameters $(a,b,c,\tau^2)$ of the imputation model used in (R1) given the second phase sample $\mathcal{X}_{II}$. \item[B2.] Generate $(a^\ast,b^\ast,c^\ast,\tau_\ast^2)$ from the posterior distribution in (B1). \item[B3.] Resample ${X}_i^\ast \sim N\big(a^\ast + b^\ast Y_i + c^\ast Z_i, {\tau}_\ast^2\big)$ independently for $i \in S_1$. \item[B4.] Fit the nearly-true model \eqref{nearly-true2} using $\{ (Y_i, \hat{X}_i^\ast) : 1 \leq i \leq N\}$, where $\hat{X}_i^\ast = X_i$ for $i \in S_2$. \item[B5.] Repeat (B1)--(B4) and take the average of the resulting parameter estimates. \end{itemize} For the prior distribution of $(a,b,c,\tau^2)$, we adopt the non-informative prior $p(a,b,c,\tau^2) \propto 1/\tau^2$. In (B2), we first generate $\tau_\ast^2 | \mathcal{X}_{II} \sim \Gamma^{-1}(a_n/2, b_n/2)$, where $a_n = |S_2| - 3$ and $b_n$ is the residual sum of squares from the linear regression model. Then, we generate $(a^\ast,b^\ast,c^\ast)^\top|\tau_\ast^2, \mathcal{X}_{II} \sim N_3\big( (\hat{a}, \hat{b}, \hat{c})^\top, \tau_\ast^2 (\Xi^\top \Xi)^{-1}\big)$, where $\Xi$ is the design matrix of the linear regression model in (R1) and $(\hat{a}, \hat{b}, \hat{c})$ is the corresponding estimate of the regression coefficients.
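For concreteness, the following sketch illustrates one imputation draw from each of the two resampling schemes above. It is only an illustration under simplifying working assumptions (a linear imputation model $X = a + bY + cZ + \textrm{error}$ with homoscedastic errors, matching the non-informative prior described above); it is not the implementation used for the reported simulations, and the function and variable names are ours.
\begin{verbatim}
import numpy as np

def mammen_multipliers(n, rng):
    """Two-point wild-bootstrap multipliers V with E(V) = 0 and Var(V) = 1 (step W1)."""
    v_hi, v_lo = (1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2
    p_hi = (np.sqrt(5) - 1) / (2 * np.sqrt(5))
    return np.where(rng.random(n) < p_hi, v_hi, v_lo)

def wild_bootstrap_draw(X2, Y2, Z2, Y1, Z1, rng):
    """One draw of imputed X for the phase-1 subjects via steps (W1)-(W3)."""
    D2 = np.column_stack([np.ones_like(Y2), Y2, Z2])
    coef, *_ = np.linalg.lstsq(D2, X2, rcond=None)
    resid = X2 - D2 @ coef
    X2_star = D2 @ coef + mammen_multipliers(len(X2), rng) * resid        # W1
    coef_star, *_ = np.linalg.lstsq(D2, X2_star, rcond=None)              # W2
    tau_star = np.std(X2_star - D2 @ coef_star)
    D1 = np.column_stack([np.ones_like(Y1), Y1, Z1])
    return D1 @ coef_star + rng.normal(0.0, tau_star, size=len(Y1))       # W3

def bayesian_draw(X2, Y2, Z2, Y1, Z1, rng):
    """One draw of imputed X for the phase-1 subjects via steps (B1)-(B3)."""
    D2 = np.column_stack([np.ones_like(Y2), Y2, Z2])
    coef_hat, *_ = np.linalg.lstsq(D2, X2, rcond=None)
    rss = np.sum((X2 - D2 @ coef_hat) ** 2)
    tau2_star = rss / rng.chisquare(len(X2) - 3)      # inverse-gamma draw for tau^2
    cov = tau2_star * np.linalg.inv(D2.T @ D2)
    coef_star = rng.multivariate_normal(coef_hat, cov)                    # B2
    D1 = np.column_stack([np.ones_like(Y1), Y1, Z1])
    return D1 @ coef_star + rng.normal(0.0, np.sqrt(tau2_star), size=len(Y1))  # B3
\end{verbatim}
Repeating either draw $M$ times, refitting the outcome model on each completed data set, and averaging the resulting estimates corresponds to steps (W4)--(W5) and (B4)--(B5).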
\subsection{Goodness-of-fit test} \label{App-gof} We use the wild bootstrap \citep{cao1991rate, mammen1993bootstrap, hardle1993comparing} together with kernel smoothing techniques to test the specification of the parametric model. Suppose the true model is given by \begin{align} Y = m(X;\theta) + \varepsilon, \label{lof-eq1} \end{align} where $m$ is a known function depending on the parameter $\theta$ and $\varepsilon$ is a noise term uncorrelated with $X$, that is, $\mathbb{E}(\varepsilon | X) = 0$. In our study, we are mainly interested in testing the null hypothesis \begin{align} H_0 : m(X;\theta) = \alpha + \beta X \quad (a.e.) \nonumber \end{align} for some $\theta =(\alpha, \beta)^\top \in \mathbf{R}^2$. We note that under the null hypothesis $H_0$, estimating $\mathbb{E}(Y|X=\cdot)$ in a fully nonparametric way, by regressing the i.i.d. observations $Y_i$ on $X_i$, $1 \leq i \leq n$, is less efficient than directly fitting the parametric model \eqref{lof-eq1} to the same sample. However, when the model is misspecified, fitting the parametric model suffers from a bias that does not vanish as the sample size increases \citep{hart2013nonparametric, li2007nonparametric}. From the above observation, we may test whether the mean squared error quantifying the goodness of fit of the specified model \eqref{lof-eq1} is small compared to that of the nonparametric fit. Specifically, we measure $\ell_n = \textrm{MSE}(\hat{\theta}) - \textrm{MSE}(\hat{m})$ and examine whether the observed value of $\ell_n$ is larger than would be expected under $H_0$, where $\hat{m}(\cdot)$ is a univariate kernel regression estimator of $\mathbb{E}(Y|X=\cdot)$. Here, we choose the bandwidth for kernel smoothing by the leave-one-out cross-validation criterion, which empirically optimizes the prediction performance of the kernel-smoothed estimates; this can be implemented with the \texttt{npregbw} function of the \texttt{np} package in R \citep{racine2018np}. Following the same wild bootstrap resampling idea as above, the p-value for testing the null hypothesis $H_0$ is computed as follows: \begin{itemize} \item[T1.] Generate ${Y}_i^\ast = \hat{\alpha} + \hat{\beta}X_i + V_i \hat{e}_i$, $1 \leq i \leq n$, where $\hat{e}_i = Y_i - \hat{\alpha} - \hat{\beta}X_i$ and the $V_i$ are independent copies of a random variable $V$ that takes the value $(1+\sqrt{5})/2$ with probability $(\sqrt{5}-1)/ (2\sqrt{5})$ and $(1-\sqrt{5})/2$ otherwise, so that $\mathbb{E}V=0$ and $\textrm{Var}(V) = 1$. \item[T2.] Fit the parametric model with $(Y_1^\ast,X_1), \ldots, (Y_n^\ast, X_n)$ and let $\hat{\theta}^\ast = (\hat{\alpha}^\ast, \hat{\beta}^\ast)^\top$ be the resulting estimate of the parameter $\theta$. Compute the mean squared error $\textrm{MSE}(\hat{\theta}^\ast) = n^{-1}\sum_{i=1}^n (Y_i^\ast - \hat{\alpha}^\ast - \hat{\beta}^\ast X_i)^2$. \item[T3.] Find the kernel smoothed fits $\hat{Y}_i^\ast = \hat{m}^\ast(X_i)$, $1 \leq i \leq n$, and compute the mean squared error $\textrm{MSE}(\hat{m}^\ast) = n^{-1}\sum_{i=1}^n (Y_i^\ast - \hat{m}^\ast(X_i))^2$. \item[T4.] Repeat (T1)--(T3) independently multiple times to obtain draws of $\ell_n^\ast = \textrm{MSE}(\hat{\theta}^\ast) - \textrm{MSE}(\hat{m}^\ast)$, which approximate the null distribution of $\ell_n$. \item[T5.] Compute the empirical p-value as the fraction of the repeated runs in (T4) for which $\ell_n^\ast > \ell_n$. \end{itemize} \end{document}
\begin{document} \title{Small sets of locally indistinguishable orthogonal maximally entangled states} \author{ Alessandro Cosentino\hspace{5mm} Vincent Russo\\[2mm] \emph{\small School of Computer Science and Institute for Quantum Computing}\\ \emph{\small University of Waterloo, Canada} } \date{April 10, 2014} \maketitle \begin{abstract} We study the problem of distinguishing quantum states using local operations and classical communication (LOCC). A question of fundamental interest is whether there exist sets of $k \leq d$ orthogonal maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$ that are not perfectly distinguishable by LOCC. A recent result by Yu, Duan, and Ying [Phys. Rev. Lett. 109, 020506 (2012)] gives an affirmative answer for the case $k = d$. We give, for the first time, a proof that such sets of states indeed exist even in the case $k < d$. Our result is constructive and holds for an even wider class of operations known as positive-partial-transpose (PPT) measurements. The proof uses the characterization of the PPT-distinguishability problem as a semidefinite program. \end{abstract} \section{Introduction} A central subject of study in quantum information theory is the interplay between entanglement and nonlocality. An important tool to study this relationship is the paradigm of local quantum operations and classical communication (LOCC). This is a subset of all global quantum operations, with a fairly intuitive physical description. In a two-party LOCC protocol, Alice and Bob can perform quantum operations only on their local subsystems and the communication must be classical. This restricted paradigm has played a crucial role in understanding the role that entanglement plays in quantum information. It has also provided a framework for the description of basic quantum tasks such as quantum key distribution and entanglement distillation. A fundamental problem that has been studied to understand the limitations of LOCC protocols is the problem of distinguishing quantum states. The setup of the problem is simple in the bipartite case. The two parties are given a single copy of a quantum state chosen with some probability from a collection of states, and their goal is to identify which state was given, with the assumption that they have full knowledge of the collection. If the states are orthogonal and global operations are permitted, then it is always possible to determine the state with certainty. In contrast, if only LOCC protocols are allowed, Alice and Bob cannot in general discover the state they have been given, even if the states are orthogonal. The problem of distinguishing among a known set of orthogonal quantum states by LOCC has been studied by several researchers \cite{Bennett99,Walgate00,Ghosh2001,Walgate2002, Horodecki2003,Fan04,Ghosh04,Nathanson05,Watrous2005, Owari2006,Yu2011,Yu2012,Yu2012a,Cosentino2013,Bandyopadhyay2013}. Some direct applications of this problem include secret sharing \cite{Gottesman2000} and data hiding \cite{DiVincenzo2002}. A question of basic interest is how the size of LOCC-indistinguishable sets (denoted by $k$ in this paper) relates to the local dimension $d$ of each of Alice's and Bob's subsystems. We know that the dimension of a quantum system puts a bound on the degree of entanglement the system could possibly have with another system.
Analogously, one can ask whether the local dimension of the two subsystems plays any special role in the nonlocality exhibited by LOCC-indistinguishable sets of states. Walgate et al. \cite{Walgate00} proved that any two orthogonal pure states can always be perfectly distinguished by an LOCC measurement. A particularly interesting case is when the set is constituted of orthogonal states with full local rank. Regarding this case, Nathanson \cite{Nathanson05} showed that it is always possible to perfectly distinguish any three orthogonal maximally entangled states in $\mathbb{C}^{3}\otimes\mathbb{C}^{3}$ by means of LOCC. On the other hand, it is known that $k > d$ orthogonal maximally entangled states can never be distinguished with certainty by LOCC measurements \cite{Ghosh04}. An interesting question is whether there exist sets of $k \leq d$ orthogonal maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$ that are not perfectly distinguishable by LOCC, when $d > 3$. For the weaker model of \emph{one-way} LOCC protocols, Bandyopadhyay et al. \cite{Ghosh11} showed some explicit examples of indistinguishable sets of states with the size of the sets being equal to the dimension of the subsystems, i.e., $k=d$. Recently, Yu et al. \cite{Yu2012} gave an affirmative answer to the question for the case $k = d = 4$ in the setting of general LOCC protocols. Their result was later generalized in \cite{Cosentino2013} for the case $k = d = 2^t$, where $t \geq 2$. The answer has remained elusive for the case $k < d$. In this paper, we settle the question by exhibiting, for the first time, sets that contain fewer than $d$ orthogonal maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$, which are not perfectly distinguishable by LOCC measurements. Thus we show that the local dimension of the subsystems is not a tight bound on the size of sets of locally indistinguishable orthogonal maximally entangled states. Even though all these results are about maximally entangled states, it should be noted that entanglement is not a necessary feature of locally indistinguishable sets of states. In a famous result, Bennett et al. \cite{Bennett99} exhibited a set of orthogonal bipartite pure product states that are perfectly distinguishable by separable operations, but not by LOCC (see \cite{Childs2012} for a simplified proof and a generalization of this result). In fact, if we allow states that are not maximally entangled to be in the set, we can construct indistinguishable sets with a fixed size in any dimension we like. Indeed, whenever we find a set of indistinguishable maximally entangled states for certain local dimensions, those states remain indistinguishable when embedded in any larger local dimensions. Nonetheless they are no longer maximally entangled with respect to the new larger local dimensions. On the one hand, entanglement makes distinguishability harder, but on the other hand, it can be used as a resource by the parties involved in the protocol. This makes the distinguishability problem especially interesting in the case when the set contains only maximally entangled states. We tackle the problem by studying distinguishability of states for a class of operations broader than the class of LOCC measurements, which is the class of positive-partial-transpose (PPT) measurements. In fact, this class is even broader than the class of separable measurements, for which distinguishability of states has been studied as well \cite{Duan2009,Bandyopadhyay2013}. 
As opposed to the set of LOCC measurements, the set of PPT measurements has a nice mathematical structure. Moreover, optimizing over this set is a computationally easy task, whereas optimizing over the set of separable measurements is known to be an NP-hard problem \cite{Gurvits2003,Gharibian2008}. Several properties of PPT operations can indeed be characterized in the framework of semidefinite programming (see \cite{Rains00} for an example). In fact, semidefinite duality also helps to prove analytical bounds on the power of PPT operations, and therefore on the power of LOCC operations. A straightforward application of this idea is a simplified proof of the previously mentioned fact that $k > d$ orthogonal maximally entangled states cannot be perfectly distinguished by LOCC \cite{Ghosh04} (see \cite{Yu2012} and \cite{Cosentino2013} for a proof that this fact holds for PPT as well). The characterization of the PPT-distinguishability problem as a semidefinite program has been also exploited in \cite{Cosentino2013} to find indistinguishable sets with size $k = d$. A recent work by Yu et al. \cite{Yu2012a} has investigated further properties of state distinguishability by PPT. They prove a tight bound on the entanglement necessary to distinguish between three Bell states via PPT measurements. Furthermore, they show that regardless of the number of copies, a maximally entangled state cannot be distinguished from its orthogonal complement. Before giving the definition of a PPT measurement, we review some notation. We denote by $\mathcal{A}$ and $\mathcal{B}$ the complex Euclidean spaces corresponding to Alice's and Bob's systems, respectively. We assume that $\mathcal{A}$ and $\mathcal{B}$ are isomorphic copies of $\mathbb{C}^{d}$. A pure state $u \in \mathcal{A}\otimes\mathcal{B}$ is called maximally entangled if $\operatorname{Tr}_{\mathcal{A}}(uu^{*}) =\operatorname{Tr}_{\mathcal{B}}(uu^{*}) = \bm{1}/d$. The partial transpose is a mapping on $\mathcal{A}\otimes\mathcal{B}$ defined by tensoring the transpose mapping acting on $\mathcal{A}$ and the identity mapping acting on $\mathcal{B}$ and it is denoted as $\operatorname{T}_{\mathcal{A}} = \operatorname{T} \otimes \bm{1}_{\lin{\mathcal{B}}}$. Given a complex Euclidean space $\mathcal{A}$, we use the symbol $\herm{\mathcal{A}}$ to denote the set of Hermitian operators acting on $\mathcal{A}$. Let $\mathcal{A} = \mathcal{B} = \mathbb{C}^{2}$ and let $\psi_{i}$, for $i\in \{0, 1, 2, 3\}$, be the density operators corresponding to the standard Bell basis, that is, $\psi_{i} = \ket{\psi_{i}}\bra{\psi_{i}}$, for $i\in \{0, 1, 2, 3\}$, where \begin{equation} \label{eq:bell-states} \ket{\psi_{0}} = \frac{\ket{00}+\ket{11}}{\sqrt{2}},\quad \ket{\psi_{1}} = \frac{\ket{01}+\ket{10}}{\sqrt{2}},\quad \ket{\psi_{2}} = \frac{\ket{01}-\ket{10}}{\sqrt{2}},\quad \ket{\psi_{3}} = \frac{\ket{00}-\ket{11}}{\sqrt{2}}. \end{equation} Our construction is based on states that are tensor products of Bell states. We write down explicitly the action of the partial transpose on the Bell basis: \begin{equation} \label{eq:transposebells} \operatorname{T}_{\mathcal{A}}(\psi_{0}) = \frac{1}{2}\bm{1} - \psi_{2}, \quad \operatorname{T}_{\mathcal{A}}(\psi_{1}) = \frac{1}{2}\bm{1} - \psi_{3}, \quad \operatorname{T}_{\mathcal{A}}(\psi_{2}) = \frac{1}{2}\bm{1} - \psi_{0}, \quad \operatorname{T}_{\mathcal{A}}(\psi_{3}) = \frac{1}{2}\bm{1} - \psi_{1}. 
\end{equation} A positive operator $P \geq 0$ is called a \emph{PPT operator} if it remains positive under the action of partial transposition, that is, $\operatorname{T}_{\mathcal{A}}(P) \geq 0$. A measurement $\{ P_a \geq 0: a \in \Gamma \}$ is called a \emph{PPT measurement} if each measurement operator is PPT. The maximum probability of distinguishing a set of states $\{ \rho_{1}, \ldots, \rho_{k}\}$ by PPT measurements can be expressed as the optimal value of the following semidefinite program (for more details, see \cite{Cosentino2013}). We are interested in perfect distinguishability, so we will assume, without loss of generality, that the states are drawn from the set with uniform probability, that is, $p_{j} = 1/k$, for each $j = 1, \ldots, k$. \begin{center} \centerline{\underline{Primal problem}} \begin{equation} \label{sdp-primal} \begin{aligned} \text{maximize:}\quad & \frac{1}{k} \sum_{j = 1}^k \ip{P_j}{\rho_{j}}\\ \text{subject to:}\quad & P_1+ \cdots + P_k = \bm{1}_{\mathcal{A}} \otimes \bm{1}_{\mathcal{B}},\\ & P_1,\ldots,P_k \geq 0,\\ & \operatorname{T}_{\mathcal{A}}(P_{1}), \ldots, \operatorname{T}_{\mathcal{A}}(P_{k}) \geq 0. \end{aligned} \end{equation} \end{center} The dual of the problem is easily obtained by routine calculation. \begin{center} \centerline{\underline{Dual problem}} \begin{equation} \label{sdp-dual} \begin{aligned} \text{minimize:}\quad & \frac{1}{k}\operatorname{Tr}(Y)\\ \text{subject to:}\quad & Y - \rho_{j} \geq \operatorname{T}_{\mathcal{A}}(Q_{j}), \quad j=1,\ldots,k \; ,\\ & Y\in\herm{\mathcal{A}\otimes\mathcal{B}},\\ & Q_{1}, \ldots, Q_{k} \geq 0. \end{aligned} \end{equation} \end{center} Given a set of states, an upper bound on the probability of distinguishing them by PPT measurements can be obtained by exhibiting a feasible solution of the above dual problem. \section{Main Result} For any $d \geq 4$ that is a power of $2$, we show how to construct sets of $d$ orthogonal maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$, for which the above dual problem has optimal value less than or equal to $C$, where $C < 1$ is a constant. Given one such set, if we consider any of its subsets that contains only $k$ states, then we have a set of $k$ PPT-indistinguishable maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$, where $k < d$, as long as $C < k/d$. Since any LOCC measurement is a PPT measurement, such a set is also indistinguishable by LOCC. \begin{theorem} \label{th:maintheorem} For any $d = 2^{t}$, where $t \geq 2$, it is possible to construct a set of $k$ maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$ for which there exists a feasible solution of the dual problem \eqref{sdp-dual} with value of the objective function equal to $(7d)/(8k)$. \end{theorem} \vspace*{12pt} \noindent {\bf Proof:} For the case $t = 2$ ($d = 4$), a set of states was exhibited by Yu et al. in \cite{Yu2012}: \begin{equation} \label{eq:example4} \begin{aligned} \rho_{1}^{(2)} &= \psi_{0}\otimes\psi_{0},\quad &\rho_{2}^{(2)} &= \psi_{1}\otimes\psi_{1},\quad &\rho_{3}^{(2)} &= \psi_{2}\otimes\psi_{1}, \quad &\rho_{4}^{(2)} &= \psi_{3}\otimes\psi_{1}. \end{aligned} \end{equation} This being the first instance in the paper where we use Bell-diagonal states, we point out that the tensor product structure of those states should not mislead the reader when considering the cut between Alice's and Bob's systems.
If we denote the local systems by $\mathcal{A} = \mathcal{A}_{1}\otimes\mathcal{A}_{2}$ and $\mathcal{B} = \mathcal{B}_{1}\otimes\mathcal{B}_{2}$, then the cut is such that the states $\rho_{i}^{(2)}$ act on the space $(\mathcal{A}_{1}\otimes\mathcal{B}_{1})\otimes(\mathcal{A}_{2}\otimes\mathcal{B}_{2})$. A bound of $7/8$ on the optimal probability of distinguishing these states was proved in \cite{Cosentino2013}. Here we write the feasible solution of the dual that achieves the value $7/8$: \begin{align*} Y^{(2)} &= \frac{1}{4} \bm{1}\otimes\bm{1} - \frac{1}{2}\operatorname{T}_{\mathcal{A}}(\psi_2\otimes\psi_3), \\ Q_{1}^{(2)} &= \frac{1}{2}[(\psi_0 + \psi_1 + \psi_3)\otimes\psi_2 + \psi_2\otimes(\psi_0 + \psi_1)], \\ Q_{2}^{(2)} &= \frac{1}{2}[(\psi_0 + \psi_1)\otimes\psi_3 + \psi_3\otimes(\psi_0 + \psi_1 + \psi_2)], \\ Q_{3}^{(2)} &= \frac{1}{2}[(\psi_1 + \psi_3)\otimes\psi_3 + \psi_0\otimes(\psi_0 + \psi_1 + \psi_2)], \\ Q_{4}^{(2)} &= \frac{1}{2}[(\psi_0 + \psi_3)\otimes\psi_3 + \psi_1\otimes(\psi_0 + \psi_1 + \psi_2)]. \end{align*} By using the set of equations \eqref{eq:transposebells}, it is easy to check that the constraints of the dual problem hold for the above solution. In fact, it is a straightforward calculation to check that, for all $j \in \{1, 2, 3, 4\}$, the following equations hold: \begin{equation} Y^{(2)} - \rho_{j}^{(2)} = \operatorname{T}_{\mathcal{A}}(Q_{j}^{(2)}). \end{equation} Furthermore, we observe that $Q_{1}^{(2)}, Q_{2}^{(2)}, Q_{3}^{(2)}$, and $Q_{4}^{(2)}$ are positive semidefinite, and $\operatorname{Tr}(Y^{(2)}) = 7/2$. For $t \geq 3$, we give a recursive construction of the states $\rho_{j}^{(t)}$, i.e., \begin{equation} \label{eq:construction} \rho_{j}^{(t)} = \begin{cases} \psi_{0}\otimes\rho_{j}^{(t-1)} &\mbox{if } j \leq 2^{t-1},\\ \psi_{1}\otimes\rho_{j-2^{t-1}}^{(t-1)} &\mbox{if } j > 2^{t-1},\\ \end{cases} \end{equation} for $j \in \{1, \ldots, d\}$. Given this set of states, we can construct, again recursively, a feasible solution of the dual problem, which achieves the desired bound: \begin{equation} \label{eq:generalsolution} \begin{aligned} Y^{(t)} &= (\psi_0 + \psi_1)^{\otimes (t-2)} \otimes Y^{(2)}, \\ Q_{j}^{(t)} &= (\psi_0 + \psi_1)^{\otimes (t-2)} \otimes Q_{r}^{(2)}, \quad j \in \{1, \ldots, d\}, \end{aligned} \end{equation} where $r \in \{1,2,3,4\}$ is such that $r \equiv j \pmod 4$. We now prove that this solution satisfies the constraints of the dual problem. First, it is easy to see that $Y^{(t)}$ is Hermitian and that $Q_{j}^{(t)} \geq 0$, for any $j \in \{1, \ldots, d\}$. We prove by induction on $t$ that the rest of the constraints are also satisfied, namely all the constraints of the form \[ Y^{(t)} - \rho_{j}^{(t)} \geq \operatorname{T}_{\mathcal{A}}(Q_{j}^{(t)}),\quad j \in \{1, \ldots, d\}. \] The base case $t = 2$ was considered above. By the induction hypothesis, and from the fact that $\psi_0 + \psi_1 \geq 0$, it holds that \begin{equation} (\psi_0 + \psi_1) \otimes Y^{(t)} - (\psi_0 + \psi_1)\otimes\rho_{j}^{(t)} \geq (\psi_0 + \psi_1)\otimes\operatorname{T}_{\mathcal{A}}(Q_{j}^{(t)}). \end{equation} From Eq. \eqref{eq:construction}, we have $\rho_{j}^{(t+1)} = \psi_{0}\otimes\rho_{j}^{(t)}$ if $j \leq 2^{t}$, or $\rho_{j}^{(t+1)} = \psi_{1}\otimes\rho_{j-2^{t}}^{(t)}$ if $j > 2^{t}$. Since $\psi_{0}, \psi_{1} \geq 0$, in either of the two cases we have \begin{equation} (\psi_0 + \psi_1) \otimes Y^{(t)} - \rho_{j}^{(t+1)} \geq (\psi_0 + \psi_1)\otimes\operatorname{T}_{\mathcal{A}}(Q_{j}^{(t)}).
\end{equation} From the set of equations \eqref{eq:transposebells}, it is easy to see that \begin{equation} \label{eq:pt01} \operatorname{T}_{\mathcal{A}}(\psi_0 + \psi_1) = \psi_0 + \psi_1. \end{equation} It follows that \begin{equation} (\psi_0 + \psi_1) \otimes Y^{(t)} - \rho_{j}^{(t+1)} \geq \operatorname{T}_{\mathcal{A}}[(\psi_0 + \psi_1)\otimes(Q_{j}^{(t)})]. \end{equation} Finally, by the definition of the operators in Eq. \eqref{eq:generalsolution}, we have that \begin{equation} Y^{(t+1)} - \rho_{j}^{(t+1)} \geq \operatorname{T}_{\mathcal{A}}(Q_{j}^{(t+1)}). \end{equation} In the case where we consider only $k$ of the states we have constructed, the value of the program for this solution is equal to \begin{equation} \frac{\operatorname{Tr}(Y^{(t)})}{k} = \frac{2^{t-2}\operatorname{Tr}(Y^{(2)})}{k} = \frac{7d}{8k}. \end{equation} This concludes the proof. \,$\square$ \vspace*{6pt} It is possible to adapt the construction \eqref{eq:construction} and \eqref{eq:generalsolution} in order to use a pair of Bell states other than $\psi_0$ and $\psi_1$. However, these states are well-suited for a clearer proof, due to Eq. \eqref{eq:pt01}. \vspace*{6pt} \begin{cor} For any $d = 2^{t}$, where $t \geq 4$, there exists a set of $k < d$ maximally entangled states in $\mathbb{C}^{d}\otimes\mathbb{C}^{d}$ that cannot be perfectly distinguished by any LOCC measurement. \end{cor} \vspace*{12pt} \noindent {\bf Proof:} By the above Theorem, when $t \geq 4$, we can construct a set of $k < 2^{t}$ states that can be distinguished by any PPT measurement, and therefore by any LOCC measurement, only with probability of success strictly less than $1$. In fact, we have that $(7 \cdot 2^{t})/(8 \cdot k) < 1$ whenever $t \geq 4$ and $k > (7 \cdot 2^{t})/8$. \,$\square$ \vspace*{12pt} Notice that the states generated by the above construction are Bell-diagonal, like the sets exhibited in \cite{Yu2012} and \cite{Cosentino2013}. A construction not based on Bell-diagonal states would be needed to generalize the result to the case when the dimension is not a power of two. Unfortunately, the most straightforward generalization, which makes use of the states corresponding to the generalized Pauli operators (see \cite{Cosentino2013} for a formal definition of these states), leads to weak bounds and does not seem to give neat analytic solutions of the semidefinite program.
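The base case of the above proof can also be checked numerically. The following NumPy sketch is our own illustration (independent of the script referenced at \cite{code}; all names are ours): it rebuilds the states of Eq. \eqref{eq:example4} and the operators $Y^{(2)}, Q_{1}^{(2)},\ldots,Q_{4}^{(2)}$, and verifies that $Y^{(2)} - \rho_{j}^{(2)} = \operatorname{T}_{\mathcal{A}}(Q_{j}^{(2)})$, that each $Q_{j}^{(2)}$ is positive semidefinite, and that $\operatorname{Tr}(Y^{(2)})/k = 7/8$ for $k = 4$.
\begin{verbatim}
import numpy as np

# Bell projectors psi_0, ..., psi_3 on C^2 (x) C^2, ordered as in Eq. (1).
_kets = [np.array([1., 0., 0., 1.]), np.array([0., 1., 1., 0.]),
         np.array([0., 1., -1., 0.]), np.array([1., 0., 0., -1.])]
psi = [np.outer(v, v) / 2.0 for v in _kets]
I16 = np.eye(16)

def pt_alice(M):
    """Partial transpose over Alice's two qubits of an operator acting on
    (A_1 x B_1) x (A_2 x B_2); Alice's qubits sit at tensor positions 0 and 2."""
    T = M.reshape([2] * 8)        # four row indices followed by four column indices
    for q in (0, 2):
        T = np.swapaxes(T, q, 4 + q)
    return T.reshape(16, 16)

# The d = 4 states of Eq. (4) and the dual-feasible solution from the proof.
rho = [np.kron(psi[0], psi[0]), np.kron(psi[1], psi[1]),
       np.kron(psi[2], psi[1]), np.kron(psi[3], psi[1])]
Y = I16 / 4 - pt_alice(np.kron(psi[2], psi[3])) / 2
Q = [(np.kron(psi[0] + psi[1] + psi[3], psi[2]) + np.kron(psi[2], psi[0] + psi[1])) / 2,
     (np.kron(psi[0] + psi[1], psi[3]) + np.kron(psi[3], psi[0] + psi[1] + psi[2])) / 2,
     (np.kron(psi[1] + psi[3], psi[3]) + np.kron(psi[0], psi[0] + psi[1] + psi[2])) / 2,
     (np.kron(psi[0] + psi[3], psi[3]) + np.kron(psi[1], psi[0] + psi[1] + psi[2])) / 2]

for j in range(4):
    assert np.allclose(Y - rho[j], pt_alice(Q[j]))       # equality, hence feasibility
    assert np.linalg.eigvalsh(Q[j]).min() > -1e-12       # Q_j >= 0
print("Tr(Y)/k =", np.trace(Y) / 4)                      # prints 0.875 = 7/8
\end{verbatim}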
\section{Example} As an application of our construction, we consider an example where the two parties are given a state drawn with uniform probability from the following set of $k = 15$ orthogonal maximally entangled states in $\mathbb{C}^{16}\otimes\mathbb{C}^{16}$: \begin{align*} \rho_1 &= \psi_0 \otimes \psi_0 \otimes \psi_0 \otimes \psi_0, &\rho_2 &= \psi_0 \otimes \psi_0 \otimes \psi_1 \otimes \psi_1, \\ \rho_3 &= \psi_0 \otimes \psi_0 \otimes \psi_2 \otimes \psi_1, &\rho_4 &= \psi_0 \otimes \psi_0 \otimes \psi_3 \otimes \psi_1, \\ \rho_5 &= \psi_0 \otimes \psi_1 \otimes \psi_0 \otimes \psi_0, &\rho_6 &= \psi_0 \otimes \psi_1 \otimes \psi_1 \otimes \psi_1, \\ \rho_7 &= \psi_0 \otimes \psi_1 \otimes \psi_2 \otimes \psi_1, &\rho_8 &= \psi_0 \otimes \psi_1 \otimes \psi_3 \otimes \psi_1, \\ \rho_9 &= \psi_1 \otimes \psi_0 \otimes \psi_0 \otimes \psi_0, &\rho_{10} &= \psi_1 \otimes \psi_0 \otimes \psi_1 \otimes \psi_1, \\ \rho_{11} &= \psi_1 \otimes \psi_0 \otimes \psi_2 \otimes \psi_1, &\rho_{12} &= \psi_1 \otimes \psi_0 \otimes \psi_3 \otimes \psi_1, \\ \rho_{13} &= \psi_1 \otimes \psi_1 \otimes \psi_0 \otimes \psi_0, &\rho_{14} &= \psi_1 \otimes \psi_1 \otimes \psi_1 \otimes \psi_1,\\ \rho_{15} &= \psi_1 \otimes \psi_1 \otimes \psi_2 \otimes \psi_1. \end{align*} The probability of distinguishing this set by any PPT measurement is less than or equal to $14/15$. Examples in higher dimensions can be generated using the Python script available at \cite{code}. It is worth noting that the ``Entanglement Discrimination Catalysis'' phenomenon, observed in \cite{Yu2012} for the set \eqref{eq:example4}, also applies to the set of states in the above example and to any set derived from our construction. If Alice and Bob are provided with a maximally entangled state as a resource, then they are able to distinguish the states in these sets and, when the protocol ends, they are still left with an untouched maximally entangled state. When $t=2$, the catalyst is used to teleport the first qubit from one party to the other, say from Alice to Bob. Bob can then measure the first two qubits in the standard Bell basis and identify which of the four states was prepared. Since the third and fourth qubits are not being acted on, they can be used in a new round of the protocol. For the case $t > 2$, let us recall the recursive construction of the states $\rho_{j}^{(t)}$ from \eqref{eq:construction}. Distinguishing between the two cases of the recursion is equivalent to distinguishing between two Bell states. And the base case is exactly the case $t=2$ described above, with only one maximally entangled state involved in the catalysis. \section{Discussion} In this article we showed an explicit method to generate small sets of maximally entangled states that are not perfectly distinguishable by LOCC protocols. Thus we proved, for the first time, that the dimension of the local subsystems is not a tight bound on the size of sets of locally indistinguishable orthogonal maximally entangled states. Asymptotically, our construction allows for the cardinality of these sets to be as small as $C\cdot d$, where $C$ is a constant less than $1$, and $d$ is the dimension of each Alice's and Bob's subsystems. In particular, we have that $7/8 \leq C < 1$. It is possible that this constant can be improved by using a different construction or by starting our recursive construction from a different base case. A further improvement would be to show a construction of indistinguishable sets with size $o(d)$. 
Another open problem is to give a more general construction that works even when $d$ is not a power of two. Finally, the bounds we proved in the paper hold for the class of PPT measurements. Stronger bounds might hold for the more restricted classes of LOCC or separable measurements. Navascu\'es showed a hierarchy of semidefinite programs for the problem of state distinguishability by separable operations \cite{Navascues2008}. The first level of this hierarchy corresponds to the semidefinite program that we studied in this paper. An analysis of higher levels of the hierarchy may lead to stronger bounds than the one proved in this article. This idea will be developed in future work. \end{document}
\begin{document} \title{CHAOTIC BEHAVIOR OF UNIFORMLY CONVERGENT NONAUTONOMOUS SYSTEMS WITH RANDOMLY PERTURBED TRAJECTORIES} \author{LESZEK SZA\L{}A\footnote{[email protected]}} \maketitle \begin{abstract} We study nonautonomous discrete dynamical systems with randomly perturbed trajectories. We suppose that such a system is generated by a sequence of continuous maps which converges uniformly to a map $f$. We give conditions under which a recurrent point of a (standard) autonomous discrete dynamical system generated by the limit function $f$ is also recurrent for the nonautonomous system with randomly perturbed trajectories. We also provide a necessary condition for a nonautonomous discrete dynamical system to be nonchaotic in the sense of Li and Yorke with respect to small random perturbations. \end{abstract} \section{Introduction} We consider discrete dynamical systems generated by continuous functions defined on the Cartesian product $I^{m}$ of $m$ intervals $I=[0,1]$, where $m$ is a positive integer. Their trajectories are subject to small random perturbations. Nonautonomous discrete dynamical systems (with no perturbations) have been recently studied because of their applications, e.g., in biology (\cite{Sen:The}, \cite{Elyadi:Global}, \cite{Wright:Periodic}), medicine (\cite{Coutinho:Threshold} and \cite{Lou:Threshold}), economics (\cite{Zhang:Discrete}), and physics (\cite{Joshi:Nonlinear}). Our systems involve perturbations because, in practical situations, perturbations are often present. By $\mathcal{C}(X)$ we denote the set of all continuous functions $f\colon\ X\to X$, where $X$ is a compact metric space. A sequence of functions $(f_{n})_{n=0}^{\infty}$ is denoted by $f_{0,\infty}$. If such a sequence converges uniformly to the limit function $f$, we denote it by $f_{n}\rightrightarrows f$. The main aim of this paper is to study whether a recurrent point of a standard (i.e., autonomous with no perturbations) discrete dynamical system $(I^{m},f)$ with $f\in\mathcal{C}(I^{m})$ remains recurrent for a system generated by a sequence $f_{0,\infty}$ in $\mathcal{C}(I^{m})$, where $f_{n}\rightrightarrows f$ and a random perturbation is added to every iteration. The assumption about uniform convergence is common when nonautonomous discrete dynamical systems are studied. For example, \cite{Kolyada:Topological} showed that the topological entropy of a system $(X,f_{0,\infty})$, where $X$ is a compact metric space and $f_{0,\infty}$ is a sequence of continuous selfmaps of $X$ converging uniformly to $f$, is less than or equal to the topological entropy of $(X,f)$. Notice that this inequality does not hold if the convergence is not uniform (see \cite{Balibrea:Weak}). If, additionally, $X=I$, the elements of $f_{0,\infty}$ are surjective and the topological entropy of $(I,f)$ equals zero, then every infinite $\omega$-limit set of $(I,f)$ is an $\omega$-limit set of $(I,f_{0,\infty})$ (see \cite{Stefankova:Inheriting}). Throughout this paper $\mathbb{N}$ is the set of positive integers and $\mathbb{N}_{0}=\mathbb{N}\cup\{0\}$. The $i$-th coordinate of $x\in\mathbb{R}^{m}$ is denoted by $x^{(i)}$. Let $n\in\mathbb{N}$. The {\itshape $n$-th iteration} of $f$ is defined by $f^{n}(x)=f(f^{n-1}(x))$ and $f^{0}(x)=x$. Let $\delta>0$. We call $(x_{i})_{i=0}^{n}$ a {\itshape $\delta$-chain} if $|f(x_{i})-x_{i+1}|<\delta$ for each $i=0,\ldots,n-1$. If $f_{0,\infty}=(f_{0},f_{1},f_{2},\ldots)$ is a sequence in $\mathcal{C}(X)$, a {\itshape nonautonomous discrete dynamical system} is a pair $(X,f_{0,\infty})$.
The {\itshape trajectory} of $x_{0}$ under $f_{0,\infty}$ is the sequence $(x_{n})_{n=0}^{\infty}$ defined by $x_{n+1}=f_{n}(x_{n})$ for each $n\in\mathbb{N}_{0}$. A discrete (autonomous) dynamical system $(X,f)$, with $f\in\mathcal{C}(X)$, is a particular case of a nonautonomous system $(X,f_{0,\infty})$ with $f_{0,\infty}=(f,f,\ldots)$. When we deal with recurrence, sometimes it will be necessary to remove a certain number of the first elements of $f_{0,\infty}$. For such a sequence with the first $k$ elements removed we use the symbol $f_{k,\infty}$. Whenever we mention random variables, we assume that they are defined on $\Omega$, where $(\Omega,\Sigma,P)$ is a fixed probability space. By $\|\cdot\|$ we mean the maximum norm defined on $\mathbb{R}^{m}$, i.e., $\|x\|=\max_{i=1,\ldots,m}|x^{(i)}|$ for $x\in\mathbb{R}^{m}$. The open ball of radius $r>0$ centered at $x$ is denoted by $B(x,r)$. A family $\mathcal{F}\subseteq\mathcal{C}(I^{m})$ is {\itshape equicontinuous} at $x\in I^{m}$ if for each $\varepsilon>0$ there is $\delta>0$ such that for each $f\in\mathcal{F}$ and each $y\in I^{m}$, $\|x-y\|<\delta$ implies $\|f(x)-f(y)\|<\varepsilon$. For convenience we recall the well-known Ascoli theorem (see, e.g., \cite{Dieudonne:Foundations}), which is used in the proofs presented in this paper. \begin{theorem}[Ascoli] Let $E$ be a compact metric space, $F$ be a Banach space and $\mathcal{C}_{F}(E)$ be the space of all continuous functions defined on $E$ with values in $F$. Then the closure of $H\subseteq\mathcal{C}_{F}(E)$ is compact if and only if $H$ is equicontinuous and the closure of $\{f(x),f\in H\}$ is compact for each $x\in E$. \end{theorem} The results presented in this paper mainly concern $(f_{0,\infty},\delta)$-recurrence for the case of (nonautonomous) $(f_{0,\infty},\delta)$-processes. It is a generalization of $(f,\delta)$-recurrence, introduced in \cite{Szala:Recurrence} for $(f,\delta)$-processes, which have been studied in the literature, e.g., by \cite{Jankova:ChaosIn}, \cite{Jankova:Maps} and \cite{Jankova:Systems}. Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I^{m})$, $m\in\mathbb{N}$ and $\delta>0$. In order to define an $(f_{0,\infty},\delta)$-process it is necessary to consider continuous extensions of all the functions $f_{0}$, $f_{1}$, \ldots Let $g$ be any of these functions. Then we extend its domain to $\mathbb{R}^{m}$ in such a way that $g(\mathbb{R}^{m}\setminus I^{m})\subseteq g(\partial I^{m})$, where $\partial A$ denotes the boundary of $A$. In order to keep the notation simple, we denote this extension by $g$, as well. An {\itshape $(f_{0,\infty},\delta)$-process that begins at $x_{0}$} is a sequence of random variables defined by the formula $X_{n+1} = f_{n}(X_{n})+(\xi^{(1)}_{n},\ldots,\xi^{(m)}_{n})$, $n\in\mathbb{N}_{0}$ and $X_{0} = x_{0}$, where all $\xi^{(j)}_{k}$, $j=1,\ldots,m$, $k=0,1,\ldots$ are independent and have uniform continuous distributions on $[-\delta,\delta]$. Then a point $x\in I^{m}$ is called {\itshape $(f_{0,\infty},\delta)$-recurrent} if for each open neighborhood $U$ of $x$ and each $\delta'\in(0,\delta)$, \begin{eqnarray} \label{recurrent-def} P\left(\bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}\{X_{k}\in U\}\right)=1, \end{eqnarray} where $(X_{n})$ is any $(f_{0,\infty},\delta')$-process that begins at $x$. There is a link between the standard notion of recurrence for standard discrete dynamical systems (with no perturbations) and $(f_{0,\infty},\delta)$-recurrence. In the first case, in each neighborhood of the point which is said to be recurrent there are infinitely many points of its trajectory. In the second case, infinitely many of the events $\{X_{k}\in U\}$, $k=1,2,\ldots$, occur with probability one.
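To make the definition concrete, the following short numerical sketch simulates an $(f_{0,\infty},\delta)$-process and counts visits to a neighborhood, as in the event appearing in (\ref{recurrent-def}). It is only an illustration: clamping values to $I^{m}$ is one admissible choice of the continuous extension described above, and the sample sequence $f_{n}$ used at the end is ours.
\begin{verbatim}
import numpy as np

def perturbed_trajectory(f_seq, x0, delta, steps, rng):
    """Simulate an (f_{0,infty}, delta)-process on I^m started at x0.

    f_seq(n, x) returns f_n(x) for x in I^m = [0, 1]^m.  Arguments are clamped
    to I^m before f_n is applied, which realizes one admissible extension with
    g(R^m \ I^m) contained in g(boundary of I^m).  Each coordinate of xi_n is
    drawn independently and uniformly from [-delta, delta].
    """
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for n in range(steps):
        x = f_seq(n, np.clip(x, 0.0, 1.0)) + rng.uniform(-delta, delta, size=x.shape)
        traj.append(x)
    return np.array(traj)

# Toy check of the recurrence event: a sequence converging uniformly to
# f(x) = x/2, whose fixed point 0 is attractive, is visited again and again.
rng = np.random.default_rng(0)
f_seq = lambda n, x: x / 2.0 + 1.0 / (n + 10) ** 2      # f_n -> f uniformly
traj = perturbed_trajectory(f_seq, x0=np.zeros(1), delta=0.05, steps=10_000, rng=rng)
print("visits to B(0, 0.1):", int(np.sum(np.max(np.abs(traj), axis=1) < 0.1)))
\end{verbatim}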
Recall the definition of chaos in the sense of Li and Yorke. A function $f\in\mathcal{C}(I)$ is {\itshape chaotic in the sense of Li and Yorke} if there is an uncountable set $S\subseteq I$ such that for all $x,y\in S$ and $x\neq y$, \begin{eqnarray*} \liminf\limits_{n\to\infty}|f^{n}(x)-f^{n}(y)|=0,\ \limsup\limits_{n\to\infty}|f^{n}(x)-f^{n}(y)|>0. \end{eqnarray*} \newline Recall that a point $x\in I^{m}$, where $m\in\mathbb{N}$, is a {\itshape fixed point} of $f\in\mathcal{C}(I^{m})$ if $f(x)=x$. We denote the set of all fixed points of $f$ by $\mathrm{Fix}(f)$. A point $x\in I^{m}$ is {\itshape periodic} with {\itshape period $n$} if $f^{n}(x)=x$ and $f^{k}(x)\neq x$ for any $k=1,\ldots,n-1$. We denote the set of all periodic points of $f$ by $\mathrm{Per}(f)$. \newline We say that $f$ is {\itshape nonchaotic} if for each $x\in I$ and each $\varepsilon>0$ there exists a periodic point $p$ of $f$ such that $\limsup_{n\to\infty}|f^{n}(x)-f^{n}(p)|<\varepsilon$. It is well known that $f$ is either Li--Yorke chaotic or nonchaotic in the sense of the previous definition, which gives a dichotomy between chaos in the sense of Li and Yorke and the simplicity of any orbit of $f$ (see \cite{Smital:Chaotic} and \cite{Jankova:Characterization}). \newline The result providing the dichotomy between simplicity and chaoticity mentioned above was generalized to some nonautonomous systems by \cite{Canovas:Li}, as described in the following. Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I)$ converging uniformly to $f$. Define $F_{n}(x)=f_{n}\circ\ldots\circ f_{0}(x)$ for each $x\in I$. Then $f_{0,\infty}$ is called {\itshape chaotic in the sense of Li and Yorke} if there is an uncountable set $S\subseteq I$ such that for all $x,y\in S$ and $x\neq y$, $\liminf_{n\to\infty}|F_{n}(x)-F_{n}(y)|=0$ and $\limsup_{n\to\infty}|F_{n}(x)-F_{n}(y)|>0$. \cite{Canovas:Li} uses so-called pseudoperiodic points of $f_{0,\infty}$ when approximating orbits of $f_{0,\infty}$. If $f_{n}\rightrightarrows f$, these pseudoperiodic points are just periodic points of $f$. We use the same method when approximating orbits of nonautonomous systems with random perturbations. We obtain a generalization of the result published by \cite{Jankova:ChaosIn}, who studied connections between chaotic properties of $f\in\mathcal{C}(I)$ and chaotic behavior of $(f,\delta)$-processes. We call $f$ {\itshape nonchaotic stable} if for each $\varepsilon>0$ there exists $\delta>0$ such that for each $g\in\mathcal{C}(I)$ and each $x\in I$, $\|f-g\|<\delta$ implies $\limsup_{n\to\infty}|g^{n}(x)-g^{n}(p)|<\varepsilon$ for some periodic point $p$ of $g$. We call $f$ {\itshape nonchaotic with respect to small random perturbations} if for each $\varepsilon>0$ there exists $\delta>0$ such that for each $\delta'\in (0,\delta)$ and each $x_{0}\in I$, \begin{eqnarray} P\left(\exists p\in\mathrm{Per}(f)\colon\ \limsup_{n\to\infty}\left|X_{n}-f^{n}(p)\right|<\varepsilon\right)=1, \label{nonchaotic_with_respect_to_random_perturbations} \end{eqnarray} where $(X_{n})$ is an arbitrary $(f,\delta')$-process which begins at $x_{0}$.
\cite{Jankova:ChaosIn} proved that $f$ is nonchaotic with respect to small random perturbations provided that $f$ is nonchaotic stable, and also noted that the opposite implication is not true. We show an analogous theorem, which concerns nonautonomous systems with randomly perturbed trajectories. \section{Recurrent points} Recall that a fixed point $x$ is {\itshape attractive} if there is a neighborhood $U$ of $x$ such that for each $y\in U$, $\lim_{n\to\infty}f^{n}(y)=x$. Briefly speaking, Theorem \ref{fixed_points} states that each attractive fixed point of the limit function $f$ is also recurrent, in the sense defined above, for the uniformly convergent nonautonomous system connected to $f$ under a small additive stochastic perturbation. Notice that even for autonomous discrete dynamical systems such a statement is not true if the fixed point is not attractive. \begin{theorem} Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I^{m})$, $m\in\mathbb{N}$. Assume that $f_{0,\infty}$ converges uniformly to a function $f\in\mathcal{C}(I^{m})$. Let $x_{0}\in I^{m}$ be an attractive fixed point of $f$. Then there exist $K\in\mathbb{N}$ and $\delta>0$ such that for each integer $k>K$, $x_{0}$ is $(f_{k,\infty},\delta)$-recurrent. \label{fixed_points} \end{theorem} \begin{proof} Since $x_{0}$ is an attractive fixed point of $f$, there is $\kappa>0$ such that for each $x\in B(x_{0},\kappa)$, $\lim_{n\to\infty}f^{n}(x)=x_{0}$. By Lemma 1 in \cite{Szala:FC} there is $N\in\mathbb{N}$ such that \begin{eqnarray*} \forall x\in B(x_{0},\kappa)\ \exists j\in\{0,\ldots, N\}\colon\ f^{j}(x)\in B(x_{0},\kappa/3). \end{eqnarray*} First we show that there is $K\in\mathbb{N}$ such that for each integer $k>K$, for each $x\in B(x_{0},\kappa)$ and each sequence $(y_{n})$ defined by $y_{0}=x$ and $y_{i+1}=f_{k+i}(y_{i})$ with $i=0,1,\ldots$, \begin{eqnarray} \exists j\in\{0,\ldots,N\}\colon\ y_{j}\in B(x_{0},2\kappa/3).\label{twierdzenie1_istnienie} \end{eqnarray} It is sufficient to show that there exists $K\in\mathbb{N}$ such that for each integer $k>K$ and for each $j\in\{0,\ldots,N\}$, \begin{eqnarray} \left\|f_{k+j-1}\circ\ldots\circ f_{k+1}\circ f_{k}(x)-f^{j}(x)\right\|<\kappa/3.\label{nier6} \end{eqnarray} Then statement (\ref{twierdzenie1_istnienie}) is a direct consequence of (\ref{nier6}) and the triangle inequality. Let $j$ be any element of $\{0,\ldots,N\}$. Inequality (\ref{nier6}) holds if \begin{eqnarray} \left\|f_{k+j-1}\left(f_{k+j-2}\circ\ldots\circ f_{k}(x)\right)-f_{k+j-1}\left(f_{k+j-1}^{j-1}(x)\right)\right\|<\kappa/6\label{nier2} \end{eqnarray} and \begin{eqnarray} \left\|f_{k+j-1}^{j}(x)-f^{j}(x)\right\|<\kappa/6.\label{nier3} \end{eqnarray} Since $f_{n}\rightrightarrows f$, there is $K_{1}\in\mathbb{N}$ such that for each integer $k>K_{1}$ inequality (\ref{nier3}) holds. Since $f_{k+j-1}$ is continuous, there is $\varepsilon_{1}\in (0,\kappa/6)$ (by the Ascoli theorem it does not depend on $k$ and $j$) such that inequality (\ref{nier2}) holds if \begin{eqnarray*} \|f_{k+j-2}\circ\ldots\circ f_{k}(x)-f_{k+j-1}^{j-1}(x)\|<\varepsilon_{1}.
\end{eqnarray*} This inequality holds if \begin{eqnarray} \left\|f_{k+j-2}\circ\ldots\circ f_{k+1}\circ f_{k}(x)-f_{k+j-2}\left(f_{k+j-1}^{j-2}(x)\right)\right\|<\varepsilon_{1}/2\label{nier4} \end{eqnarray} and \begin{eqnarray} \left\|f_{k+j-2}\left(f_{k+j-1}^{j-2}(x)\right)-\left(f_{k+j-1}\left(f_{k+j-1}^{j-2}(x)\right)\right)\right\|<\varepsilon_{1}/2.\label{nier5} \end{eqnarray} Since $f_{0,\infty}$ is a Cauchy sequence, there is an integer $K_{2}>K_{1}$ such that for each integer $k>K_{2}$ inequality (\ref{nier5}) holds. For inequality (\ref{nier4}) we use the continuity of $f_{k+j-2}$ in the same way as above. It implies the existence of $\varepsilon_{2}\in (0,\varepsilon_{1}/2)$ such that (\ref{nier4}) holds if \begin{eqnarray*} \left\|f_{k+j-3}\circ\ldots\circ f_{k}(x)-f_{k+j-1}^{j-2}(x)\right\|<\varepsilon_{2}. \end{eqnarray*} Using the same procedure several times we obtain the sequence of positive integers $K_{1}<K_{2}<\ldots<K_{j}$. Choose $K=K_{j}$. In order to show $(f_{k,\infty},\delta)$-recurrence we use the Borel--Cantelli lemma in the same way as in the proof of Theorem 2 in \cite{Szala:Recurrence}. We can replace $f$ by elements of $f_{0,\infty}$ in the proof mentioned above, since $\{f,f_{0},f_{1},\ldots\}$ is an equicontinuous family, which follows from the Ascoli theorem. \end{proof} The result from Theorem \ref{fixed_points} can be generalized to the case of attractive periodic points. The proof is evident and is omitted. Recall that a periodic point with period $n$ is {\itshape attractive} if it is an attractive fixed point of $f^{n}$. \begin{theorem} \label{Tw_punkty_okresowe} Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I^{m})$, $m\in\mathbb{N}$. Assume that $f_{0,\infty}$ converges uniformly to a function $f\in\mathcal{C}(I^{m})$. Let $x_{0}\in I^{m}$ be an attractive periodic point of $f$ with period $n$. Then there exist $K\in\mathbb{N}$ and $\delta>0$ such that for each integer $k>K$, $x_{0}$ is $(f_{k,\infty},\delta)$-recurrent. \end{theorem} The following example shows that the assumption about uniform convergence in Theorem \ref{fixed_points} is necessary. \begin{example} \label{przyklad} \noindent Let $f\in\mathcal{C}(I)$ be defined by the formula \begin{eqnarray*} f(x)=\begin{cases}0, & x\in\left[0,\frac{1}{2}\right], \\ 4x-2, & x\in\left(\frac{1}{2},\frac{3}{4}\right], \\ 1, & x\in\left(\frac{3}{4},1\right] \end{cases} \end{eqnarray*} (see Figure \ref{wykres4}). \begin{figure} \caption{The graph of $f$ from Example \ref{przyklad}.} \label{wykres4} \end{figure} \noindent Obviously $x_{0}=0$ is an attractive fixed point of $f$. Consider a sequence $f_{0,\infty}$ in $\mathcal{C}(I)$ defined for each $n\in\mathbb{N}_{0}$ by \begin{eqnarray*} f_{n}\left(x\right)=\begin{cases}8\cdot 2^{n}x, & x\in \left[0,\frac{1}{8\cdot 2^{n}}\right], \\ -8\cdot 2^{n}x+2, & x\in \left(\frac{1}{8\cdot 2^{n}},\frac{1}{4\cdot 2^{n}}\right],\\ 0, & x\in\left(\frac{1}{4\cdot 2^{n}},\frac{1}{2}\right], \\ 4x-2, & x\in \left(\frac{1}{2},\frac{3}{4}\right],\\ 1, & x\in\left(\frac{3}{4},1\right]\end{cases} \end{eqnarray*} (the graphs of $f_{0}$ and $f_{1}$ are sketched in Figure \ref{wykres2}). Then $f_{0,\infty}$ converges pointwise to $f$, but the convergence is not uniform. \noindent In order to show that $0$ is not $(f_{k,\infty},\delta)$-recurrent for any $\delta>0$ and any $k\in\mathbb{N}_{0}$, let $\delta\in (0,1/5)$ and $k\in\mathbb{N}_{0}$. Define $X_{0}=x_{0}$ and $X_{n+1}=f_{k+n}(X_{n})+\xi_{n}$, where $n\in\mathbb{N}_{0}$ and $(\xi_{0},\xi_{1},\ldots)$ is a sequence of random variables which are independent and have uniform continuous distributions on $(-\delta,\delta)$. Then $P(X_{2}\in [4/5,1])>0$, which implies \begin{eqnarray*} P\left(\bigcap_{k=2}^{\infty}\left\{X_{k}\in\left[\frac{4}{5},1\right]\right\}\right)>0. \end{eqnarray*} \begin{figure} \caption{The graphs of $f_{0}$ and $f_{1}$ from Example \ref{przyklad}.} \label{wykres2} \end{figure} \end{example}
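The mechanism behind Example \ref{przyklad} can also be observed numerically: with positive probability the perturbed trajectory is pushed over the narrow peak of $f_{k+1}$ located at $1/(8\cdot 2^{k+1})$ and is then absorbed in $[4/5,1]$, so it never returns to a small neighborhood of $0$. The following sketch is only an illustration (in particular, clamping the argument to $[0,1]$ is our choice of the continuous extension):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def f_n(n, x):
    """The maps f_n of Example 1; their pointwise limit is the map f above."""
    a = 8.0 * 2 ** n
    if x <= 1.0 / a:
        return a * x
    if x <= 1.0 / (4 * 2 ** n):
        return -a * x + 2.0
    if x <= 0.5:
        return 0.0
    if x <= 0.75:
        return 4.0 * x - 2.0
    return 1.0

def run_process(k, delta, steps, rng):
    """One trajectory of an (f_{k,infty}, delta)-process started at x_0 = 0."""
    x = 0.0
    for n in range(steps):
        # clamping the argument realizes the extension g(R \ I) subset g(boundary I)
        x = f_n(k + n, min(max(x, 0.0), 1.0)) + rng.uniform(-delta, delta)
    return x

finals = np.array([run_process(k=0, delta=0.19, steps=200, rng=rng) for _ in range(2000)])
print("fraction of runs ending above 4/5:", np.mean(finals >= 0.8))  # a positive fraction
\end{verbatim}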
Recall that the {\itshape $\omega$-limit set} of a point $x\in I$ for $f\in\mathcal{C}(I)$ is the set of all limit points of the sequence $(f^{n}(x))_{n=0}^{\infty}$. We say that an $\omega$-limit set $\tilde{\omega}$ is {\itshape maximal} if for each $\omega$-limit set $\tilde{\omega}_{1}$, $\tilde{\omega}_{1}\subseteq\tilde{\omega}$ or $\tilde{\omega}_{1}\cap\tilde{\omega}=\emptyset$. We say that $f$ is of {\itshape type $2^{\infty}$} if it has a periodic point of period $2^{n}$ for all $n\in\mathbb{N}$ and no periodic points of other periods. Let $f\in\mathcal{C}(I)$ and $A\subseteq I$. We say that $A$ is {\itshape invariant} for $f$ if $f(A)\subseteq A$. The following theorem is formulated in dimension one only, because in this case an infinite $\omega$-limit set of a function of type $2^{\infty}$ has a special structure described in the following lemma (see Lemma 3.1 and Theorem A in \cite{Fedorenko:Characterization}). \begin{lemma} \label{rozklad_na_czesci_okresowe} Let $f\in\mathcal{C}(I)$ be of type $2^{\infty}$. Let $f$ have an infinite $\omega$-limit set $\tilde{\omega}$. Then for each $k\in\mathbb{N}$ there is a decomposition $\{M(i,k),i=1,\ldots,2^{k}\}$ of $\tilde{\omega}$ such that every two sets $M(i,k)$, $M(j,k)$, where $i\neq j$, are separated by disjoint compact intervals and $f(M(i,k))=M(i+1,k)$ for any $i \pmod{2^{k}}$. \end{lemma} The following result provides another class of points that are known to be recurrent both for standard discrete dynamical systems (without perturbations) and for $(f_{0,\infty},\delta)$-processes, provided that $\delta>0$ is small enough. \begin{theorem} \label{tw_gl_1} Let $f\in\mathcal{C}(I)$ be of type $2^{\infty}$ and have a maximal infinite $\omega$-limit set $\tilde{\omega}$. Assume that the minimal closed and invariant interval $V$ containing $\tilde{\omega}$ contains exactly one periodic orbit of period $2^{n}$ for each $n\in\mathbb{N}$ and this periodic orbit is not attractive. Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I)$, which converges uniformly to $f$. Then there exist $K\in\mathbb{N}$ and $\delta>0$ such that for each integer $k>K$, every point $x\in\tilde{\omega}$ is $(f_{k,\infty},\delta)$-recurrent. \end{theorem} \begin{proof} Let $U=[u,v]$ be the convex hull of $\tilde{\omega}$. By Lemma 3.4 in \cite{Fedorenko:Characterization} there is an interval $J\supseteq U$, relatively open in $[0,1]$, such that $f(\overline{J})\subseteq J$. First we show the following condition: \begin{eqnarray} \forall x\in J\ \exists n\in\mathbb{N}\colon\ f^{n}(x)\in U. \label{tw_gl_war_1} \end{eqnarray} To see this, assume that the opposite of (\ref{tw_gl_war_1}) is true. Then $(f^{n}(x))$ has a subsequence that converges to a point of $U$. This limit point can be $u$ or $v$ only. Without loss of generality suppose this limit point is $u$. Since $u\in\tilde{\omega}$ and it is not a periodic point, there is $\varepsilon_{u}>0$ and $n\in\mathbb{N}$ such that $f^{n}(B(u,\varepsilon_{u}))\subset (u,v)$.
There is $m\in\mathbb{N}$ such that $f^{m}(x)\in B(u,\varepsilon_{u})$. Then $f^{m+n}(x)\in (u,v)$, which is a contradiction. Therefore statement (\ref{tw_gl_war_1}) is proved. \newline Choose $x_{0}\in\tilde{\omega}$. Assume that $x_{0}$ is not an isolated point of $\tilde{\omega}$. Without loss of generality assume that $u\neq 0$, $v\neq 1$ and $J$ is open. Let $\kappa>0$ be such that $B(x_{0},\kappa)\subset J$. We show that \begin{eqnarray} \forall x\in J\ \forall\delta'>0\ \exists n\in\mathbb{N}\ \exists \delta'\textrm{-chain } (z_{0},\ldots,z_{n})\colon\ z_{0}=x, z_{n}\in B(x_{0},\kappa/4). \label{delta-lancuch} \end{eqnarray} By (\ref{tw_gl_war_1}) it is enough to prove the existence of the $\delta'$-chain mentioned above for each $x\in U$. Choose $\delta'>0$. Let $U(i,k)$ be the convex hull of $M(i,k)$ for each $i=1,\ldots,2^{k}$ and each $k=1,2,\ldots$, where $M(i,k)$ are as in Lemma \ref{rozklad_na_czesci_okresowe}. Each of these convex hulls can be written as follows: \begin{eqnarray*} U(i,k)=U(i_{1},k+1)\cup R(i,k)\cup U(i_{2},k+1) \end{eqnarray*} for some interval $R(i,k)$. It is obvious that there is $M>0$ such that for each $k\ge M$, the diameter of at least one of the sets $U(i,k)$, $i=1,\ldots,2^{k}$ is smaller than $\delta'$. In order to prove (\ref{delta-lancuch}) we consider the following cases: \begin{enumerate} \item[(a)] Since the endpoints of every $U(i,k)$ are elements of $\tilde{\omega}$, the existence of the $\delta'$-chain mentioned in (\ref{delta-lancuch}) is obvious whenever \begin{eqnarray*} x\in\bigcup_{k=M}^{\infty}\bigcup_{i=1}^{2^{k}}U(i,k). \end{eqnarray*} \item[(b)] Assume that $x$ is not a member of any $U(i,k)$ with $k\ge M$ and it is not a member of any $R(i,j)$. Then there is a sequence of positive integers $(i_{k})$ such that $i_{k}\in\{1,\ldots,2^{k}\}$ for each $k=1,2,\ldots$ and \begin{eqnarray*} x\in\bigcap_{k=1}^{\infty}U(i_{k},k). \end{eqnarray*} This leads to a contradiction. \item[(c)] Assume that $x$ is not a member of any $U(i,k)$ with $k\ge M$, that it is a member of some $R(i,j)$ and that it is a periodic point. Without loss of generality we assume $x\in R(1,1)$. Then $x$ is a fixed point. Since $x$ is not attractive, in every neighborhood of $x$ there is a point whose trajectory does not converge to $x$. Let $y$ be such a point in $B(x,\delta')$. Then $(f^{n}(y))$ has a subsequence which converges to a point of $\tilde{\omega}$ (which finishes this part of the proof) or converges to another periodic point with a different period. By the intermediate value theorem there is $z$ between $x$ and $y$ such that its orbit intersects $\bigcup_{i=1}^{2^{k}}U(i,k)$ for some $k\ge M$. Then (a) applies. \item[(d)] Assume that $x$ is not a member of any $U(i,k)$ with $k\ge M$, it is a member of some $R(i,j)$ and it is not a periodic point. Then $(f^{n}(x))$ has a subsequence which converges to a point of $\tilde{\omega}$ (which finishes this part of the proof) or converges to a periodic point. Denote this periodic point by $y$. Then the trajectory of $x$ intersects $B(y,\delta'/2)$. Since $y$ is not attractive, in every neighborhood of $y$ (in particular in $B(y,\delta'/2)$) there is $z$ such that $(f^{n}(z))$ has a subsequence which converges to a point of $\tilde{\omega}$ (which finishes this part of the proof) or converges to another periodic point with a different period (then we use (c)).
\end{enumerate} Then property (\ref{delta-lancuch}), i.e., the existence of a $\delta'$-chain from each point $x\in J$ to $B(x_{0},\kappa/4)$, is proved under the assumption that $x_{0}$ is not an isolated point of $\tilde{\omega}$. This assumption can be removed due to the structure of $\tilde{\omega}$. Define $r=\min\{|x-y|\colon\ x\in [0,1]\setminus J, y\in f(\overline{J})\}$. Choose $\delta\in (0,r)$ and $\delta'\in (0,\delta/2)$. Let $(x_{n}, n=0,\ldots,k_{x})$ be such a $\delta'$-chain from $x$ to $B(x_{0},\kappa/4)$. Let $c_{0},\ldots,c_{k_{x}-1}$ be such that $x_{n+1}=f(x_{n})+c_{n}$, $n=0,\ldots,k_{x}-1$. Continuity of $f$ implies that there is $r_{x}>0$ such that for each $y\in B(x,r_{x})$ and for each $\delta'$-chain $(y_{n})$ defined by $y_{0}=y$ and $y_{n+1}=f(y_{n})+c_{n}$ with $n=0,\ldots,k_{x}-1$ we have $y_{k_{x}}\in B(x_{0},\kappa/2)$. Continuity of $f$ also implies that there is $\varepsilon_{x}\in (0,\delta')$ such that whenever $d_{n}\in (0,\delta')$ and $|c_{n}-d_{n}|<\varepsilon_{x}$, then for each $z\in B(x,r_{x})$ and for each $\delta$-chain $(z_{n})$ defined by $z_{0}=z$ and $z_{n+1}=f(z_{n})+d_{n}$ for each $n=0,\ldots,k_{x}-1$ we have $z_{k_{x}}\in B(x_{0},3\kappa/4)$. \newline Since $\mathcal{A}=\{B(x,r_{x}),x\in\overline{J}\}$ is an open cover of $U$ and $U$ is compact, a finite subcover of $\mathcal{A}$ can be chosen. Let $\{B(x,r_{x}),x\in\mathcal{X}\}$ be such a finite subcover and define \begin{eqnarray*} N=\max_{x\in\mathcal{X}}k_{x},\ \varepsilon=\min_{x\in\mathcal{X}}\varepsilon_{x}. \end{eqnarray*} Notice that $N<\infty$ and $\varepsilon>0$, since $\mathcal{X}$ is a finite set. \newline The same argument as in the proof of Theorem \ref{fixed_points} leads to the conclusion that there is $K\in\mathbb{N}$ such that for each integer $k>K$ and each $j\in\{0,\ldots,N\}$, \begin{eqnarray} \left\|f_{k+j-1}\circ\ldots\circ f_{k}(x)-f^{j}(x)\right\|<\kappa/4. \end{eqnarray} Let $(X_{n})$ be any $(f_{k,\infty},\delta)$-process that begins at any point $x\in J$. For each $l\in\mathbb{N}$, \begin{eqnarray*} P\left(X_{lN}\in B(x_{0},\kappa)\right)\ge\left(\frac{\varepsilon}{\delta}\right)^{N}>0. \end{eqnarray*} Then $(f_{k,\infty},\delta)$-recurrence follows by the Borel--Cantelli lemma. Equicontinuity of $\{f,f_{0},f_{1},\ldots\}$ follows from the Ascoli theorem and it is used when replacing $f$ by elements of $f_{0,\infty}$. \end{proof} It is well known that an infinite $\omega$-limit set $\tilde{\omega}$ of a function $f\in\mathcal{C}(I)$ of type $2^{\infty}$ has the following property: there is a sequence of compact periodic intervals $(J_{n})$ such that for each $n\in\mathbb{N}_{0}$, $J_{n}\supset J_{n+1}$, $J_{n}$ is periodic with period $2^{n}$ and \begin{eqnarray*} \tilde{\omega}\subseteq\bigcap\limits_{n=1}^{\infty}\bigcup\limits_{k=0}^{2^{n}-1}f^{k}(J_{n}) \end{eqnarray*} (see \cite{Fedorenko:Characterization}). If $f\in\mathcal{C}(I)$ is of type $2^{\infty}$ and has two infinite $\omega$-limit sets $\tilde{\omega}_{1}\neq\tilde{\omega}_{2}$, then exactly one of the following three cases holds: (i) $\tilde{\omega}_{1}\subset\tilde{\omega}_{2}$, (ii) $\tilde{\omega}_{2}\subset\tilde{\omega}_{1}$, (iii) $\tilde{\omega}_{1}\cap\tilde{\omega}_{2}=\emptyset$ (see \cite{Schweizer:Measures}).
It is easy to see, that if (iii) holds and $(J^{(1)})$, $(J^{(2)})$ are the sequences of periodic intervals described above for $\tilde{\omega}_{1}$ and $\tilde{\omega}_{2}$ respectively, then there exists $N\in\mathbb{N}$ such that for each integer $n>N$, \begin{eqnarray*} \bigcup\limits_{k=0}^{2^{n}-1}f^{k}\left(J^{(1)}_{n}\right)\cap\bigcup\limits_{k=0}^{2^{n}-1}f^{k}\left(J^{(2)}_{n}\right)=\emptyset. \end{eqnarray*} The above remarks allow us to formulate Theorem \ref{tw_gl_1} in more general form, namely the assumption that $V$ contains only one maximal infinite $\omega$-limit set can be relaxed. \begin{corollary} \label{Wniosek1} Let $f\in\mathcal{C}(I)$ be of type $2^{\infty}$. Assume that $f$ has a maximal infinite $\omega$-limit set $\tilde{\omega}$. Assume that there exists $m\in\mathbb{N}$ such that the minimal closed and invariant interval $V$ containing $\tilde{\omega}$ contains exactly $m$ different maximal $\omega$-limit sets, $m$ periodic orbits of period $2^{n}$ for each $n\in\mathbb{N}$ and none of them is attractive. Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I)$, which converges uniformly to $f$. Then there exist $K\in\mathbb{N}$ and $\delta>0$ such that for each integer $k>K$, every point $x\in\tilde{\omega}$ is $(f_{k,\infty},\delta)$-recurrent. \end{corollary} \begin{remark} Let $f$ and $\tilde{\omega}$ be as above. If $x_{0}$ is an isolated point of $\tilde{\omega}$, then $x_{0}$ is not a recurrent point for $f$. \end{remark} \begin{remark} Let $x_{0}$ and $f$ be as in Theorem \ref{Tw_punkty_okresowe} or Corollary \ref{Wniosek1}. Then for each $k\in\mathbb{N}$ there exists $\delta_{k}>0$ such that $x_{0}$ is $(f^{k},\delta_{k})$-recurrent. \end{remark} The following example shows that the statement of Theorem \ref{tw_gl_1} is not true if $f$ has infinitely many infinite $\omega$-limit sets. Example \ref{przyklad_2_2} is a modification of Example 4.1 from \cite{Szala:Recurrence}. \begin{example} \label{przyklad_2_2} Let $\tau(x)=1-|2x-1|$ for each $x\in [0,1]$. Let $g\in\mathcal{C}(I)$ be of type $2^{\infty}$ defined by \begin{eqnarray*} g(x)=\begin{cases}\tau^{2}(\lambda ), & x\in[0,\tau(\lambda)], \\ \tau(x), & x\in (\tau(\lambda),1],\end{cases} \end{eqnarray*} where $\lambda=0.8249080\ldots$ (see \cite{Misiurewicz:Smooth}, Remark 4). We use the same notation as in Example 4.1. in \cite{Szala:Recurrence}, i.e., $\{M(i,k), i=1,\ldots,2^{k}\}$, $k\in\mathbb{N}$ is a decomposition of $\tilde{\omega}$ into periodic portions of period $2^{k}$ (see Lemma \ref{rozklad_na_czesci_okresowe}), $U^{(i)}_{k}=[u^{(i)}_{k},v^{(i)}_{k}]$ is a convex hull of $M(i,k)$ where $i=1,\ldots,2^{k}$, $k\in\mathbb{N}$. For each integer $k>1$ and $i=1,\ldots,2^{k}$ there is a periodic point $x^{(i)}_{k}$ of period $2^{k}$ in $U_{k}^{(i)}$. Moreover, \begin{eqnarray*} x_{k}^{(i)}\in\left(v_{k+1}^{(2i-1)},u_{k+1}^{(2i)}\right). \end{eqnarray*} For each $k\in\mathbb{N}$ and each $i=1,\ldots,2^{k}$ define \begin{eqnarray*} \varepsilon_{k}^{(i)}=\min\left\{x_{k}^{(i)}-v_{k+1}^{(2i-1)},u_{k+1}^{(2i)}-x_{k}^{(i)}\right\}. \end{eqnarray*} For each $k\in\mathbb{N}$ define \begin{eqnarray*} \varepsilon_{k}=\min\left\{\varepsilon_{k}^{(i)},i=1,\ldots,2^{k}\right\}. \end{eqnarray*} Let \begin{eqnarray*} I_{k}^{(i)}=\left(x_{k}^{(i)}-\varepsilon_{k},x_{k}^{(i)}+\varepsilon_{k}\right). \end{eqnarray*} Let $(i_{k})$ be a sequence such that for each $k\in\mathbb{N}$, $i_{k}\in\{1,\ldots,2^{k}\}$ and \begin{eqnarray*} \varepsilon_{k}^{(i_{k})}=\varepsilon_{k}. 
\end{eqnarray*} Let $f$ be defined such that \begin{enumerate} \item $f$ equals $g$ on the set $[0,1]\setminus\bigcup\nolimits_{k=1}^{\infty}\bigcup\nolimits_{i=1}^{2^{k}}I_{k}^{(i)}$; \item for each $k\in\mathbb{N}$ and $i=i_{k}$, \newline inside the square \begin{eqnarray*} \left[x_{k}^{(i_{k})}-\frac{2}{5}\varepsilon_{k},x_{k}^{(i_{k})}+\frac{2}{5}\varepsilon_{k}\right]\times \left[f\left(x_{k}^{(i_{k})}\right)-\frac{2}{5}\varepsilon_{k},f\left(x_{k}^{(i_{k})}\right)+\frac{2}{5}\varepsilon_{k}\right] \end{eqnarray*} we define $f$ as a ``diminished copy'' of the graph of $g$, \newline on the interval \begin{eqnarray*} \left[x_{k}^{(i_{k})}-\frac{4}{5}\varepsilon_{k},x_{k}^{(i_{k})}-\frac{2}{5}\varepsilon_{k}\right) \end{eqnarray*} we define $f$ as a constant function equal to \begin{eqnarray*} f\left(x_{k}^{(i_{k})}-\frac{2}{5}\varepsilon_{k}\right), \end{eqnarray*} on the interval \begin{eqnarray*} \left(x_{k}^{(i_{k})}+\frac{2}{5}\varepsilon_{k},x_{k}^{(i_{k})}+\frac{4}{5}\varepsilon_{k}\right] \end{eqnarray*} we define $f$ as a constant function equal to \begin{eqnarray*} f\left(x_{k}^{(i_{k})}+\frac{2}{5}\varepsilon_{k}\right); \end{eqnarray*} \item for each $k\in\mathbb{N}$, for each $i\in\{1,\ldots,2^{k}\}\setminus\{i_{k}\}$ and for each \begin{eqnarray*} x\in\left[x_{k}^{(i)}-\frac{4}{5}\varepsilon_{k},x_{k}^{(i)}+\frac{4}{5}\varepsilon_{k}\right] \end{eqnarray*} we define \begin{eqnarray*} f(x)=x+f\left(x_{k}^{(i)}\right)-x_{k}^{(i)}; \end{eqnarray*} \item for each $k\in\mathbb{N}$ and for each $i\in\{1,\ldots,2^{k}\}$, $f$ is affine on the intervals \begin{eqnarray*} \left[x_{k}^{(i)}-\varepsilon_{k},x_{k}^{(i)}-\frac{4}{5}\varepsilon_{k}\right] \end{eqnarray*} and \begin{eqnarray*} \left[x_{k}^{(i)}+\frac{4}{5}\varepsilon_{k},x_{k}^{(i)}+\varepsilon_{k}\right]. \end{eqnarray*} \end{enumerate} \noindent Assume that the points of the infinite $\omega$-limit set $\tilde{\omega}$ of $f$ are $(f,\delta)$-recurrent for some $\delta>0$. There exists a positive integer $k_{0}>1$ such that $\varepsilon_{k_{0}}<\delta$. Let $\delta'=2\varepsilon_{k_{0}}/5$. It follows from the definition that the points of $\tilde{\omega}$ are $(f,\delta')$-recurrent. Let $(X_{n})$ be an $(f,\delta')$-process that begins at some point $x\in\tilde{\omega}$. Let $J_{k}=\bigcup_{i=1}^{2^{k}}I_{k}^{(i)}$. Define the events $A_{n}=\{X_{n}\in J_{k_{0}}\}$, $n=1,2,\ldots$. Notice that if $X_{m}\in J_{k_{0}}$ for some $m\in\mathbb{N}$, then $X_{n}\in J_{k_{0}}$ for each $n\ge m$, i.e., $A_{m}\subseteq\bigcap_{n=m}^{\infty}A_{n}$ for each $m\in\mathbb{N}_{0}$. Then \begin{eqnarray*} P\left(\bigcup_{m=1}^{\infty}\bigcap_{n=m}^{\infty}A_{n}\right)\ge P\left(\bigcup_{m=1}^{\infty}A_{m}\right)=1, \end{eqnarray*} which means that the points of $\tilde{\omega}$ cannot be $(f,\delta)$-recurrent with any $\delta>0$. Notice that $(f,\delta)$-recurrence is a particular case of $(f_{0,\infty},\delta)$-recurrence with $f_{0,\infty}=(f,f,\ldots)$. \end{example} \begin{remark} Using a similar modification of the function $g$ from Example \ref{przyklad_2_2} we can obtain a function $f$ of type $2^{\infty}$ with the following properties: $f$ has exactly one infinite $\omega$-limit set $\tilde{\omega}$; every periodic point of $f$ is a member of an open interval of periodic points with the same period; for each $x\in\tilde{\omega}$ and each $\delta>0$, $x$ is not $(f,\delta)$-recurrent.
\end{remark} \section{Approximation by periodic orbits of the limit function} We generalize the definition of nonchaoticity (see Introduction) in the following way: a sequence $f_{0,\infty}$ in $\mathcal{C}(I)$, which converges uniformly to $f\in\mathcal{C}(I)$, is {\itshape nonchaotic with respect to small random perturbations} if for each $\varepsilon>0$ there exist $K\in\mathbb{N}$ and $\delta>0$ such that for each integer $k>K$, for each $\delta'\in (0,\delta)$ and for each $x_{0}\in I$, \begin{eqnarray} P\left(\exists p\in\mathrm{Per}(f)\colon\ \limsup_{n\to\infty}\left|X_{n}-f^{n}(p)\right|<\varepsilon\right)=1, \label{nonchaotic_with_respect_to_random_perturbations_2} \end{eqnarray} holds, where $(X_{n})$ is any $(f_{k,\infty},\delta')$-process which begins at $x_{0}$. Here we provide a sufficient condition for $f_{0,\infty}$ to be nonchaotic with respect to small random perturbations. \begin{theorem} Let $f_{0,\infty}$ be a sequence in $\mathcal{C}(I)$ converging uniformly to $f\in\mathcal{C}(I)$. Let $f$ be nonchaotic stable. Then $f_{0,\infty}$ is nonchaotic with respect to small random perturbations. \end{theorem} \begin{proof} Assume that $f$ is nonchaotic stable. It is already known that for each $\varepsilon>0$ there is $\delta>0$ such that for each $x_{0}\in I$ and each $(f,\delta)$-chain $(x_{n})$ starting at $x_{0}$ there is a periodic point $p$ of $f$ such that $\limsup_{n\to\infty}|x_{n}-f^{n}(p)|<\varepsilon$ (see \cite{Jankova:ChaosIn}). Let $\varepsilon>0$ be arbitrary. Let $\delta$ be as above. Since $f_{n}\rightrightarrows f$, there is $K\in\mathbb{N}$ such that for each integer $k\ge K$, $\|f_{k}-f\|<\delta/2$. Choose any $k>K$ and $x_{0}\in I$. Let $(Y_{n})$ be any $(f_{k,\infty},\delta/2)$-process that begins at $x_{0}$. For each $\omega\in\Omega$, $(Y_{n}(\omega))$ is an $(f,\delta)$-chain (we work with a fixed probability space $(\Omega,\Sigma,P)$ -- see Introduction). Thus there is $p\in\mathrm{Per}(f)$ (which depends on $\omega$) such that $\limsup_{n\to\infty}|Y_{n}(\omega)-f^{n}(p(\omega))|<\varepsilon$. Then (\ref{nonchaotic_with_respect_to_random_perturbations_2}) holds. \end{proof} \begin{remark} \label{uwaga} The notion of a function being nonchaotic with respect to small random perturbations was introduced in \cite{Jankova:ChaosIn} with the following, slightly different, condition: for each $\varepsilon>0$ there exists $\delta>0$ such that for each $x_{0}\in I$ and each $(f,\delta)$-process $(X_{n})$ starting at $x_{0}$ there is $p\in\mathrm{Per}(f)$ such that \begin{eqnarray*} P\left(\limsup_{n\to\infty}\left|X_{n}-f^{n}(p)\right|<\varepsilon\right)=1. \end{eqnarray*} In such a definition $(X_{n})$ is a sequence of random variables and $p$ has to be allowed to be a random variable too. Otherwise the following problem may occur: let $f$ be the function presented in Figure \ref{wykres5}. Let $(X_{n})$ be any $(f,\delta)$-process with $x_{0}=0.5$ and $\delta\in (0,0.2)$. Then \begin{eqnarray*} P\left(\bigcup_{k=0}^{\infty}\bigcap_{n=k}^{\infty}\left\{X_{n}\in\left[0,\frac{1}{5}\right)\right\}\right)=P\left(\bigcup_{k=0}^{\infty}\bigcap_{n=k}^{\infty}\left\{X_{n}\in \left(\frac{4}{5},1\right]\right\}\right)>0. \end{eqnarray*} It is obvious that there is no periodic point of $f$ whose orbit intersects a small neighborhood of 0 and a small neighborhood of 1 infinitely many times. \begin{figure} \caption{The graph of $f$ in Remark \ref{uwaga}} \label{wykres5} \end{figure} \end{remark} \end{document}
\begin{document} \title{Non-linear stability in photogravitational non-planar restricted three body problem with oblate smaller primary} \author{B. Ishwar and J. P. Sharma} \affil{University Department of Mathematics, B.R.A. Bihar University, Muzaffarpur-842001} \email{ishwar\[email protected]} \begin{abstract} We discuss non-linear stability in the photogravitational non-planar restricted three body problem with oblate smaller primary. By photogravitational we mean that both primaries are radiating. We normalised the Hamiltonian using Lie transforms as in Coppola and Rand (1989) and transformed the system into Birkhoff's normal form. Lie transforms reduce the system to an equivalent simpler system which is immediately solvable. Applying Arnold's theorem, we have found non-linear stability criteria. We conclude that $L_6$ is stable. We plotted graphs for $(\omega_1 ,D_2)$; they are rectangular hyperbolas. \end{abstract} \keywords{Non-linear stability: Photogravitational: Non-planar: Oblate primary: RTBP} \section{Introduction} G. Hori (1966, 1967) applied a theorem of Lie on canonical transformations to the theory of general perturbations. The theorem is applicable to cases where the undisturbed portion of the Hamiltonian depends on the angular variables as well as the momentum variables. A. Deprit (1969) extended the concept of Lie series to the cases where the generating function itself depends explicitly on the small parameter. Lie transforms naturally define a class of canonical mappings in the form of power series in the small parameter. He reviewed how a Lie series defines a canonical mapping as a formal power series of a small parameter $\epsilon$, provided the generating function itself does not depend upon $\epsilon$. This restriction is overcome by introducing the Lie transform. He showed how Lie transforms naturally define the canonical transformations contemplated by von Zeipel's method. Canonical mappings defined by Lie transforms as formal power series of a small parameter constitute the natural ingredient of transformation theory applied to Hamiltonian systems. Orbital stability of quasi-periodic motions in multidimensional Hamiltonian systems was studied by Sokolskii (1978). Applications to Birkhoff's normal form (along with its generalized form due to K. R. Meyer), the restricted problem of three bodies near $L_4$, the Birkhoff normalization procedure, and singular perturbations of Hamiltonian systems were discussed by Liu (1985). K. R. Meyer and D. S. Schmidt (1986) established the full stability of the Lagrange equilibrium point in the planar restricted three body problem even in the case $\mu= \mu_c$. The Hamiltonian is normalised up to order six and then KAM theory is applied; this establishes the stability of the equilibrium in the degenerate case. A. P. Markeev (1966) and K. T. Alfriend (1970, 1971) have shown that $L_4$ is unstable when the mass ratio is equal to $\mu_2$ or $\mu_3$. The Lie transform method is an efficient perturbation scheme which explicitly generates the functional form of the reduced Hamiltonian under an implicitly defined canonical, periodic, near-identity transformation. Coppola \& Rand (1989) applied the method of Lie transforms, a perturbation method for differential equations, to a general class of Hamiltonian systems using computer algebra. They developed explicit formulas for transforming the system into Birkhoff normal form and formed explicit nonlinear stability criteria solely in terms of $H$ for systems where the linear stability analysis is inconclusive.
They applied these results to the non-linear stability of $L_4$ in the circular RTBP. At $L_4$, Arnold's theorem (1961) must be used since a Lyapunov function cannot be found. They confirmed the previous computations of Deprit and Deprit-Bartholome (1967) and of Meyer and Schmidt (1986). Algorithms of linear and nonlinear normalization of a Hamiltonian system near an equilibrium point were described by Maciejewski and Gozdziewski (1991). A. Jorba (1997) described the effective computation of normal forms, centre manifolds and first integrals in Hamiltonian mechanics. These kinds of results are very useful: they allow one, for example, to give explicit estimates on the diffusion time or to compute invariant tori. The approach presented there is based on algebraic manipulation of the formal series while taking numerical coefficients for them. J. Palacian \& P. Yanguas (2000) described the reduction of perturbed Hamiltonian systems. They used a technique based on Lie transformations. By extending an integral of the unperturbed part to the whole transformed system up to a certain order of approximation, the number of degrees of freedom of such a system is reduced, under certain conditions. The idea of reducing a perturbed system is valid not only for Hamiltonians, but also for any system of differential equations. Recently there has been a resurgence of this subject. The problem of building formal integrals for Hamiltonian systems has received a wide treatment in the last forty-five years. The results have been applied in fields such as molecular physics and astrodynamics. For example, in galactic models with three degrees of freedom, the search for a third integral becomes very important for analyzing the onset of chaos. Their approach consists in generalising the concept of normal forms by selecting a function $G(x)$ and thereafter proposing a symplectic change of variables. The nonlinear stability of the triangular equilibrium points was studied by Kushvah et al. (2007) in the generalised photogravitational restricted three body problem with Poynting-Robertson drag. They performed first and second order normalization of the Hamiltonian of the problem and applied the KAM theorem to examine the condition of non-linear stability. After computation they found three critical mass ratios and concluded that the triangular points are stable in the nonlinear sense except at three critical mass ratios, at which the KAM theorem fails. Hence, we were motivated to examine the non-linear stability of $L_6$, an equilibrium point of the non-planar photogravitational restricted three body problem with oblate smaller primary. We examined the linear stability of the above problem in Shankaran (2011), where $q_1$ is the radiation pressure of the bigger primary and $q_2$ that of the smaller one. We found that $L_6$ is unstable due to a positive real part in the complex roots. Now, we proceed to normalize the Hamiltonian of the problem as in Coppola and Rand (1989) using Lie transforms. We transformed the system into Birkhoff's normal form. Using Arnold's theorem, we have found non-linear stability criteria in terms of $H$. Lie transforms reduce the system to an equivalent simpler system which is immediately solvable. Finally we find that $D_2 \neq 0.$ Hence, according to Arnold's theorem, we conclude that $L_6$ is stable. We plotted graphs of $(\omega_1 ,D_2)$; they are rectangular hyperbolas.
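As an illustration of the $(\omega_1 ,D_2)$ graphs discussed above, the short sketch below evaluates the simplified numerical expression for $D_2$ that is obtained at the end of the next section (for $\mu=0.00025$, $A=0.00025$, $q=0.025$, $Q=0.00025$ and $\omega_3=1$). It is only a minimal reimplementation of that final formula, not the Mathematica computation actually used here, and it exhibits the vertical asymptote near $\omega_1=0.5$ reported in the Conclusion.
\begin{verbatim}
def D2(w1):
    # Simplified numerical expression for D_2 from the end of Section 2
    # (mu = 0.00025, A = 0.00025, q = 0.025, Q = 0.00025, omega_3 = 1).
    a = w1 ** 2 - 4.0        # vanishes at omega_1 = 2
    b = 1.0 - 4.0 * w1 ** 2  # vanishes at omega_1 = 0.5 (the asymptote)
    main = (5.76096e14 - 3.2e14 / w1 - 1.92e11 * w1 + 3.6e10 * w1 ** 2
            + 1.152e15 * w1 / a - 1.44e14 * w1 ** 2 / a
            - 4.32e14 * w1 ** 3 / a)
    corr = 0.00025 * (-2.30423e23 + 1.024e23 / w1 + 8.30131e22 * w1
                      - 3.45744e22 * w1 ** 2
                      - 2.21184e23 / b + 2.21184e23 * w1 / b
                      + 1.10592e24 * w1 ** 2 / b
                      - 5.5296e23 * w1 / a + 6.912e22 * w1 ** 2 / a
                      + 2.0736e23 * w1 ** 3 / a)
    return main + corr

for w1 in (0.30, 0.45, 0.49, 0.51, 0.60, 1.00, 1.50):
    print("omega_1 =", w1, "  D_2 =", D2(w1))
\end{verbatim}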
\section{Computation of $D_2$} We suppose that $q$ is the radiation coefficient of the bigger primary and $Q$ that of the smaller primary in the photogravitational non-planar restricted three body problem. We want to apply Arnold's theorem (1961). We applied the method of Lie transforms using computer algebra, as given in the research article of Coppola and Rand (1989). Let $L_n(\cdot)=\{\,\cdot\,,w_n\}$ and $S_n=\frac{1}{n} \sum_{m=0}^{n-1} L_{n-m} S_m$ (where $n=1,2,3,\dots$) be the operators. Then we transformed from the $(x_m,z_m)$ variables to the $(X_m,Z_m)$ variables by the near-identity transformation. The Kamiltonian (the transformed Hamiltonian) is given by \begin{equation} K_n=H_n+\frac{1}{2} \{H_n,w_n\}+\frac{1}{2} \sum_{m=0}^{n-1}[ L_{n-m} K_m+mS_{n-m}H_m] \quad n=2,3,4\dots\end{equation} We choose the generating function $w_n$ so as to best simplify the Kamiltonian; each term to be canceled is of the form $A X_1^jY_1^lX_2^rY_2^s$, where $A$ is a constant. We choose $w_n$ to be a sum of terms, one for each term to be canceled, of the form $B X_1^jY_1^lX_2^rY_2^s$, where $B = \frac{i A}{\omega_1(l-j)+\omega_3(s-r)}$ and $n=j+l+r+s-2.$ Here $\omega_1$ and $\omega_3$ are the basic frequencies, which are rationally independent, and we suppose that the frequency in the vertical direction is constant, i.e. $\omega_3= 1$. If $j=l$ and $s=r$, then the denominator of $B$ vanishes; hence the resonant terms $(X_1Y_1)^j(X_2Y_2)^r$ cannot be canceled and remain in the normal form. The coefficient $D_2$ is given by (see Coppola and Rand (1989)) \begin{equation} D_2=-(K2200\omega_3^2 +K1111\omega_1\omega_3+K0022\omega_1^2) \end{equation} We performed the computation using Mathematica and found the following results: \begin{eqnarray} \text{K0022}&&=\text{H0022}+i\left(\frac{1 }{\omega _1\text{ }}\text{H0111}\times\text{H1011}+\frac{1 }{\omega _3\text{ }}(\text{H0012}\times\text{H0021}\right.\nonumber\\&&+\left.\text{H0003}\times\text{H0030})\right.\nonumber\\&&+ \left.\frac{ 1 }{\left( \omega _1-2\omega _3\right)}\text{H0120}\times\text{H1002}+\frac{1}{\left(\omega _1+2\omega _3\right)}\text{H0102}\times \text{H1020}\right); \end{eqnarray} \begin{eqnarray} \text{K1111}&&=\text{H1111}+i\left(\frac{ 1 }{ \omega _1 }(\text{H1011}\times\text{H1200}\right.\nonumber\\&&+\left.\text{H0111}\times\text{H2100})+\frac{ 1}{\left( \omega _1-2\omega _3\right)} \text{H0120}\times\text{H1002}+\right.\nonumber\\&& \frac{ 1 }{2 \omega _3 }(\text{H1101}\times\text{H0021}+\text{H1110}\times\text{H0012})+\frac{ 1}{\left( 2\omega _1-\omega _3\right)}\text{H0210}\times\text{H2001}+\nonumber\\&& \left.\frac{1}{\left(\omega _1+2\omega _3\right)}\text{H0102}\times\text{H1020}+\frac{1}{\left(2\omega _1+\omega _3\right)}\text{H0201}\times\text{H2010}\right); \end{eqnarray} \begin{eqnarray} \text{K2200}&&=\text{H2200}+i\left(\frac{ 1}{\omega _3\text{ }}\text{H1101}\times\text{H1110}+\frac{1 }{\omega _1\text{ }}(\text{H1200}\times\text{H2100}\right.\\&& +\text{H0300}\times\text{H3000})+\left.\frac{ 1}{\left(2 \omega _1-\omega _3\right)}\text{H0210}\times\text{H2001}+\nonumber\frac{ 1 }{\left(2 \omega _1+\omega _3\right)}\text{H0201}\times\text{H2010}\right)\nonumber\end{eqnarray} \begin{eqnarray} \text{K2200}&&=\frac{\left(5 a_1^2-6 b_1 \omega _1\right) \omega _3 \left(4 \omega _1^2-\omega _3^2\right)+2 a_2^2 \omega _1 \left(4 \omega _1^2+\omega_1 \omega _3-\omega _3^2\right)}{16 \omega _1^3 \omega _3-4 \omega _1 \omega _3^3}\\ \text{K1111}&&=\frac{1}{8} \left(-8 b_3+\frac{12 a_1 a_3}{\omega _1}+\frac{a_3^2}{\omega _1-2 \omega _3}+\frac{a_2^2}{2 \omega _1-\omega _3}+\frac{6a_2 a_4}{\omega _3}+\frac{a_2^2}{2 \omega _1+\omega _3}+\frac{a_3^2}{\omega _1+2 \omega
_3}\right)\nonumber\\&& \end{eqnarray} \begin{eqnarray} \text{K0022}&&= \frac{1}{8} \left(-12 b_5+\frac{10 a_4^2}{\omega _3}+a_3^2 \left(\frac{4}{\omega _1}+\frac{2 \omega _1}{\omega _1^2-4 \omega _3^2}\right)\right)\end{eqnarray} \begin{equation}D_2=-\left(\text{K2200} \omega _3{}^2+ \text{K1111} \omega _1 \omega _3 +\text{K0022} \omega _1{}^2\right)\end{equation} \noindent\(=\frac{1}{4} \left(-3 a_2 a_4 \omega _1+6 b_5 \omega _1^2-\frac{5 a_4^2 \omega _1^2}{\omega _3}-6 a_1 a_3 \omega _3+4 b_3 \omega _1 \omega _3+\right.\\ \left.6 b_1 \omega _3^2-\frac{5 a_1^2 \omega _3^2}{\omega _1}-\frac{a_3^2 \omega _1 \left(3 \omega _1^2+\omega _1 \omega _3-8 \omega _3^2\right)}{\omega _1^2-4 \omega _3^2}+\frac{2 a_2^2 \omega _3 \left(5 \omega _1^2+\omega _1 \omega _3-\omega _3^2\right)}{-4 \omega _1^2+\omega _3^2}\right)\) where \\ \noindent\(a\text{=}(1-\mu )+\frac{6(3)^{1/2} ((1-\mu ))(1-q)A^{3/2}}{\mu Q};\) \noindent\(c\text{=}(3*A)^{1/2} -\frac{9 ((1-\mu ))q A^2}{\mu Q};\) \noindent\(a_1=\frac{-q \mu ^4+Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-2 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}}{(-1+\mu )^2 \sqrt{(-1+\mu )^2} \mu ^4}-\\ \left(5 \left(-3 q \mu ^6+2 Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-8 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}-8 Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}+\right.\right.\\ \left.\left.\left.2 Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}+12 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) A\right)\right/\\ \left((-1+\mu )^4 \sqrt{(-1+\mu )^2} \mu ^6\right)+\left(24 \left(\sqrt{3} q \mu ^5-\sqrt{3} q^2 \mu ^5+\sqrt{3} Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-\right.\right.\\ \sqrt{3} q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-3 \sqrt{3} Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+\\ 3 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}-\sqrt{3} Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}+\\ \sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}+3 \sqrt{3} Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}-\\ \left.\left.3 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) A^{3/2}\right)/\left(Q (-1+\mu )^2 \sqrt{(-1+\mu )^2} \mu ^6\right)-\\ \left(945 \left(q \mu ^8+Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-6 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}-20 Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}+\right.\right.\\ 15 Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}-6 Q \sqrt{(-1+\mu )^2} \mu ^5 \sqrt{\mu ^2}+\\ \left.\left.\left.Q \sqrt{(-1+\mu )^2} \mu ^6 \sqrt{\mu ^2}+15 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) A^2\right)\right/\\ \left(8 \left((-1+\mu )^6 \sqrt{(-1+\mu )^2} \mu ^8\right)\right)+O[A]^{5/2};\) \noindent\( {a_2=\left(-\frac{6 \sqrt{3} q}{\left((1-\mu )^2\right)^{5/2}}+\frac{15 \sqrt{3} q \mu }{2 \left((1-\mu )^2\right)^{5/2}}-\frac{3 \sqrt{3} q \sqrt{(1-\mu )^2} \mu }{2 (1-\mu )^6}-\frac{6 \sqrt{3} Q \sqrt{\mu ^2}}{\mu ^5}\right) \sqrt{A}-}\\ {\frac{135 \left(\sqrt{3} q\right) A^{3/2}}{2 \left((-1+\mu )^5 \sqrt{(-1+\mu )^2}\right)}+}\\ {\left(54 \left(-10 q \mu ^6+9 q^2 \mu ^6+q^2 \mu ^7+10 Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-10 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}-\right.\right.}\\ {40 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+39 q Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}-}\\ {40 Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}+34 q Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}+10 Q \sqrt{(-1+\mu )^2} }\\ {\mu ^4 \sqrt{\mu ^2}-6 q Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}-q Q \sqrt{(-1+\mu )^2} \mu ^5 \sqrt{\mu ^2}+}\\ {\left.\left.\left.60 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}-56 q Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) 
A^2\right)\right/}\\ {\left(Q (-1+\mu )^3 \sqrt{(-1+\mu )^2} \mu ^7\right)+O[A]^{5/2};}\) \noindent\( {a_3=\left(\frac{3 q (1-\mu )}{2 \left((1-\mu )^2\right)^{5/2}}-\frac{3 q (1-\mu ) \mu }{2 \left((1-\mu )^2\right)^{5/2}}-\frac{3 Q \sqrt{\mu ^2}}{2 \mu ^4}\right)+}\\ {\left(-\frac{45 q (1-\mu )}{2 \left((1-\mu )^2\right)^{7/2}}-\frac{45 q}{4 (1-\mu ) \left((1-\mu )^2\right)^{5/2}}+\frac{45 q (1-\mu ) \mu }{2 \left((1-\mu )^2\right)^{7/2}}+\frac{45 q \mu }{4 (1-\mu ) \left((1-\mu )^2\right)^{5/2}}+\right.}\\ {\left.\frac{45 Q \sqrt{\mu ^2}}{2 \mu ^6}\right) A+\left(\frac{36 \sqrt{3} (1-q) q (1-\mu )}{Q \left((1-\mu )^2\right)^{5/2}}-\frac{36 \sqrt{3} (1-q) q (1-\mu )}{Q \left((1-\mu )^2\right)^{5/2} \mu }-\right.}\\ {\left.\frac{36 \sqrt{3} \sqrt{\mu ^2}}{\mu ^6}+\frac{36 \sqrt{3} q \sqrt{\mu ^2}}{\mu ^6}+\frac{36 \sqrt{3} \sqrt{\mu ^2}}{\mu ^5}-\frac{36 \sqrt{3} q \sqrt{\mu ^2}}{\mu ^5}\right) A^{3/2}+}\\ {\left(\frac{945 q}{4 (1-\mu ) \left((1-\mu )^2\right)^{7/2}}+\frac{945 q}{16 (1-\mu )^3 \left((1-\mu )^2\right)^{5/2}}-\frac{945 q \mu }{4 (1-\mu ) \left((1-\mu )^2\right)^{7/2}}-\right.}\\ {\left.\frac{945 q \mu }{16 (1-\mu )^3 \left((1-\mu )^2\right)^{5/2}}+\frac{4725 Q \sqrt{\mu ^2}}{16 \mu ^8}\right) A^2+O[A]^{5/2};}\) \noindent\( {a_4=\left(\frac{3 \sqrt{3} q}{2 \left((1-\mu )^2\right)^{5/2}}-\frac{3 \sqrt{3} q \sqrt{(1-\mu )^2} \mu }{2 (1-\mu )^6}+\frac{3 \sqrt{3} Q \sqrt{\mu ^2}}{2 \mu ^5}\right) \sqrt{A}+}\\ {\frac{75 \sqrt{3} q A^{3/2}}{4 (-1+\mu )^5 \sqrt{(-1+\mu )^2}}+\left(\frac{3}{2} q \left(\frac{\frac{9 q}{Q}-\frac{9 q}{Q \mu }}{\left((1-\mu )^2\right)^{5/2}}-\frac{90 (1-q)}{Q \left((1-\mu )^2\right)^{5/2} \mu }\right)+\right.}\\ {\frac{135 q \sqrt{(1-\mu )^2}}{Q (1-\mu )^6}-\frac{243 q^2 \sqrt{(1-\mu )^2}}{2 Q (1-\mu )^6}-\frac{27 q^2 \sqrt{(1-\mu )^2} \mu }{2 Q (1-\mu )^6}+\frac{135 \sqrt{\mu ^2}}{\mu ^7}-}\\ {\left.\frac{135 q \sqrt{\mu ^2}}{\mu ^7}-\frac{135 \sqrt{\mu ^2}}{\mu ^6}+\frac{243 q \sqrt{\mu ^2}}{2 \mu ^6}+\frac{27 q \sqrt{\mu ^2}}{2 \mu ^5}\right) A^2+O[A]^{5/2};}\) \noindent\( {b_1=\left(-q \mu ^5-Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+3 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+\right.}\\ {\left.Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-3 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right)/\left((-1+\mu )^3 \sqrt{(-1+\mu )^2} \mu ^5\right)-}\\ {\left(15 \left(-3 q \mu ^7-2 Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+10 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+20 Q \sqrt{(-1+\mu )^2} \right.\right.}\\ {\mu ^3 \sqrt{\mu ^2}-10 Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}+2 Q \sqrt{(-1+\mu )^2} \mu ^5 \sqrt{\mu ^2}-}\\ {\left.\left.20 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) A\right)/\left(2 \left((-1+\mu )^5 \sqrt{(-1+\mu )^2} \mu ^7\right)\right)+}\\ {\left(30 \left(\sqrt{3} q \mu ^6-\sqrt{3} q^2 \mu ^6-\sqrt{3} Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+\sqrt{3} q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+\right.\right.}\\ {4 \sqrt{3} Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}-4 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+}\\ {4 \sqrt{3} Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-4 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-}\\ {\sqrt{3} Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}+\sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}-}\\ {\left.6 \sqrt{3} Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}+6 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) }\\ {\left.A^{3/2}\right)/\left(Q (-1+\mu )^3 \sqrt{(-1+\mu )^2} \mu ^7\right)-}\\ {\left(945 \left(q \mu ^9-Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+7 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+35 Q 
\sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-\right.\right.}\\ {35 Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}+21 Q \sqrt{(-1+\mu )^2} \mu ^5 \sqrt{\mu ^2}-}\\ {\left.7 Q \sqrt{(-1+\mu )^2} \mu ^6 \sqrt{\mu ^2}+Q \sqrt{(-1+\mu )^2} \mu ^7 \sqrt{\mu ^2}-21 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) }\\ {\left.A^2\right)/\left(4 \left((-1+\mu )^7 \sqrt{(-1+\mu )^2} \mu ^9\right)\right)+O[A]^{5/2};}\) \noindent\( {b_3=}\\ {\left(-\frac{3 q}{\left((1-\mu )^2\right)^{5/2}}+\frac{3 q \mu }{\left((1-\mu )^2\right)^{5/2}}+\frac{15 Q \sqrt{\mu ^2}}{4 \mu ^7}-\frac{15 Q (1-\mu )^2 \sqrt{\mu ^2}}{4 \mu ^7}-\frac{15 Q \sqrt{\mu ^2}}{2 \mu ^6}+\frac{3 Q \sqrt{\mu ^2}}{4 \mu ^5}\right)+}\\ {\left(\frac{945 q}{8 \left((1-\mu )^2\right)^{7/2}}-\frac{45 q}{8 (1-\mu )^2 \left((1-\mu )^2\right)^{5/2}}-\frac{45 q \sqrt{(1-\mu )^2}}{4 (1-\mu )^8}-\frac{945 q \mu }{8 \left((1-\mu )^2\right)^{7/2}}+\right.}\\ {\left.\frac{45 q \mu }{8 (1-\mu )^2 \left((1-\mu )^2\right)^{5/2}}+\frac{45 q \sqrt{(1-\mu )^2} \mu }{4 (1-\mu )^8}+\frac{135 Q \sqrt{\mu ^2}}{2 \mu ^7}\right) A-}\\ {\left(90 \left(\sqrt{3} q \mu ^6-\sqrt{3} q^2 \mu ^6-\sqrt{3} Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+\sqrt{3} q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+\right.\right.}\\ {4 \sqrt{3} Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}-4 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+4 \sqrt{3} Q }\\ {\sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-4 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-\sqrt{3} Q \sqrt{(-1+\mu )^2} }\\ {\mu ^4 \sqrt{\mu ^2}+\sqrt{3} q Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}-6 \sqrt{3} Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}+}\\ {\left.\left.6 \sqrt{3} q Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) A^{3/2}\right)/\left(Q (-1+\mu )^3 \sqrt{(-1+\mu )^2} \mu ^7\right)+}\\ {\left(4725 \left(q \mu ^9-Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2}+7 Q \sqrt{(-1+\mu )^2} \mu \sqrt{\mu ^2}+35 Q \sqrt{(-1+\mu )^2} \mu ^3 \sqrt{\mu ^2}-\right.\right.}\\ {35 Q \sqrt{(-1+\mu )^2} \mu ^4 \sqrt{\mu ^2}+21 Q \sqrt{(-1+\mu )^2} \mu ^5 \sqrt{\mu ^2}-}\\ {\left.7 Q \sqrt{(-1+\mu )^2} \mu ^6 \sqrt{\mu ^2}+Q \sqrt{(-1+\mu )^2} \mu ^7 \sqrt{\mu ^2}-21 Q \sqrt{(-1+\mu )^2} \left(\mu ^2\right)^{3/2}\right) }\\ {\left.A^2\right)/\left(4 (-1+\mu )^7 \sqrt{(-1+\mu )^2} \mu ^9\right)+O[A]^{5/2};}\) \noindent\( {b_5=}\\ {\left(\frac{3 q}{8 \left((1-\mu )^2\right)^{5/2}}-\frac{3 q \mu }{8 \left((1-\mu )^2\right)^{5/2}}+\frac{3 Q \sqrt{\mu ^2}}{8 \mu ^5}\right)+\left(-\frac{45 q}{16 (1-\mu )^2 \left((1-\mu )^2\right)^{5/2}}-\frac{45 q \sqrt{(1-\mu )^2}}{4 (1-\mu )^8}+\right.}\\ {\left.\frac{45 q \mu }{16 (1-\mu )^2 \left((1-\mu )^2\right)^{5/2}}+\frac{45 q \sqrt{(1-\mu )^2} \mu }{4 (1-\mu )^8}-\frac{75 Q \sqrt{\mu ^2}}{8 \mu ^7}\right) A+}\\ {\left(\frac{45 \sqrt{3} (1-q) q}{4 Q \left((1-\mu )^2\right)^{5/2}}-\frac{45 \sqrt{3} (1-q) q}{4 Q \left((1-\mu )^2\right)^{5/2} \mu }+\frac{45 \sqrt{3} \sqrt{\mu ^2}}{4 \mu ^7}-\frac{45 \sqrt{3} q \sqrt{\mu ^2}}{4 \mu ^7}-\right.}\\ {\left.\frac{45 \sqrt{3} \sqrt{\mu ^2}}{4 \mu ^6}+\frac{45 \sqrt{3} q \sqrt{\mu ^2}}{4 \mu ^6}\right) A^{3/2}+}\\ {\left(\frac{945 q}{64 (1-\mu )^4 \left((1-\mu )^2\right)^{5/2}}+\frac{315 q \sqrt{(1-\mu )^2}}{2 (1-\mu )^{10}}-\frac{945 q \mu }{64 (1-\mu )^4 \left((1-\mu )^2\right)^{5/2}}-\right.}\\ {\left.\frac{315 q \sqrt{(1-\mu )^2} \mu }{2 (1-\mu )^{10}}-\frac{11025 Q \sqrt{\mu ^2}}{64 \mu ^9}\right) A^2+O[A]^{5/2};}\) From above results $D_2$ upto first order in $A$ is \noindent\( {D_2=\frac{9 q \sqrt{(1-\mu )^2} \omega _1^2}{16 (1-\mu )^6}-\frac{9 q \sqrt{(1-\mu )^2} \mu \omega 
_1^2}{16 (1-\mu )^6}+\frac{9 Q \sqrt{\mu ^2} \omega _1^2}{16 \mu ^5}+}\\ {\frac{9 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \omega _3}{4 (1-\mu )^6 (-1+\mu )^4}+\frac{9 Q^2 \omega _3}{4 (-1+\mu )^2 \mu ^6}-\frac{9 Q^2 \omega _3}{2 (-1+\mu )^2 \mu ^5}+\frac{9 Q^2 \omega _3}{4 (-1+\mu )^2 \mu ^4}-}\\ {\frac{9 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu \omega _3}{2 (1-\mu )^6 (-1+\mu )^4}+\frac{9 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu ^2 \omega _3}{4 (1-\mu )^6 (-1+\mu )^4}-}\\ {\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{4 (1-\mu )^6 (-1+\mu )^2}-\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{4 (1-\mu )^6 (-1+\mu )^2 \mu ^4}-\frac{9 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3}{4 (-1+\mu )^4 \mu ^4}+}\\ {\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^6 (-1+\mu )^2 \mu ^3}-\frac{27 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{2 (1-\mu )^6 (-1+\mu )^2 \mu ^2}+\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^6 (-1+\mu )^2 \mu }-}\\ {\frac{3 q \sqrt{(1-\mu )^2} \omega _1 \omega _3}{(1-\mu )^6}+\frac{3 q \sqrt{(1-\mu )^2} \mu \omega _1 \omega _3}{(1-\mu )^6}-\frac{3 Q \sqrt{\mu ^2} \omega _1 \omega _3}{\mu ^5}-}\\ {\frac{3 q \sqrt{(-1+\mu )^2} \omega _3^2}{2 (-1+\mu )^5}-\frac{3 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^3 \mu ^5}+\frac{9 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^3 \mu ^4}-\frac{9 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^3 \mu ^3}+}\\ {\frac{3 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^3 \mu ^2}-\frac{5 q^2 \omega _3^2}{4 (-1+\mu )^6 \omega _1}-\frac{5 Q^2 \omega _3^2}{4 (-1+\mu )^4 \mu ^6 \omega _1}+\frac{5 Q^2 \omega _3^2}{(-1+\mu )^4 \mu ^5 \omega _1}-}\\ {\frac{15 Q^2 \omega _3^2}{2 (-1+\mu )^4 \mu ^4 \omega _1}+\frac{5 Q^2 \omega _3^2}{(-1+\mu )^4 \mu ^3 \omega _1}-\frac{5 Q^2 \omega _3^2}{4 (-1+\mu )^4 \mu ^2 \omega _1}+\frac{5 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^6 \mu ^4 \omega _1}-}\\ {\frac{5 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{(-1+\mu )^6 \mu ^3 \omega _1}+\frac{5 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^6 \mu ^2 \omega _1}-\frac{27 q^2 \omega _1^3}{16 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{27 Q^2 \omega _1^3}{16 \mu ^6 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{27 q^2 \mu \omega _1^3}{4 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{81 q^2 \mu ^2 \omega _1^3}{8 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{27 q^2 \mu ^3 \omega _1^3}{4 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{27 q^2 \mu ^4 \omega _1^3}{16 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{27 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{8 (1-\mu )^6 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{27 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{4 (1-\mu )^6 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{27 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{8 (1-\mu )^6 \mu ^2 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{9 q^2 \omega _1^2 \omega _3}{16 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{9 Q^2 \omega _1^2 \omega _3}{16 \mu ^6 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{9 q^2 \mu \omega _1^2 \omega _3}{4 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{27 q^2 \mu ^2 \omega _1^2 \omega _3}{8 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{9 q^2 \mu ^3 \omega _1^2 \omega _3}{4 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{9 q^2 \mu ^4 \omega _1^2 \omega _3}{16 (1-\mu )^{10} \left(\omega _1^2-4 \omega 
_3^2\right)}+\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{8 (1-\mu )^6 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{4 (1-\mu )^6 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{8 (1-\mu )^6 \mu ^2 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{9 q^2 \omega _1 \omega _3^2}{2 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{9 Q^2 \omega _1 \omega _3^2}{2 \mu ^6 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{18 q^2 \mu \omega _1 \omega _3^2}{(1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{27 q^2 \mu ^2 \omega _1 \omega _3^2}{(1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{18 q^2 \mu ^3 \omega _1 \omega _3^2}{(1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{9 q^2 \mu ^4 \omega _1 \omega _3^2}{2 (1-\mu )^{10} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{18 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{9 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^2 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {A \left(\frac{81 q^2 \omega _1}{4 (1-\mu )^{10}}+\frac{81 Q^2 \omega _1}{4 \mu ^8}-\frac{81 q^2 \mu \omega _1}{2 (1-\mu )^{10}}+\frac{81 q^2 \mu ^2 \omega _1}{4 (1-\mu )^{10}}+\frac{81 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1}{2 (1-\mu )^6 \mu ^5}-\right.}\\ {\frac{81 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1}{2 (1-\mu )^6 \mu ^4}-\frac{675 q \sqrt{(1-\mu )^2} \omega _1^2}{32 (1-\mu )^8}+\frac{675 q \sqrt{(1-\mu )^2} \mu \omega _1^2}{32 (1-\mu )^8}-}\\ {\frac{225 Q \sqrt{\mu ^2} \omega _1^2}{16 \mu ^7}-\frac{135 q^2 \omega _1^2}{16 (1-\mu )^{10} \omega _3}-\frac{135 Q^2 \omega _1^2}{16 \mu ^8 \omega _3}+\frac{135 q^2 \mu \omega _1^2}{8 (1-\mu )^{10} \omega _3}-}\\ {\frac{135 q^2 \mu ^2 \omega _1^2}{16 (1-\mu )^{10} \omega _3}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2}{8 (1-\mu )^6 \mu ^5 \omega _3}+\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2}{8 (1-\mu )^6 \mu ^4 \omega _3}-}\\ {\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \omega _3}{4 (1-\mu )^6 (-1+\mu )^6}-\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \omega _3}{4 (1-\mu )^8 (-1+\mu )^4}-}\\ {\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \omega _3}{8 (1-\mu )^7 (-1+\mu )^4}-\frac{45 Q^2 \omega _3}{2 (-1+\mu )^4 \mu ^8}-\frac{135 Q^2 \omega _3}{4 (-1+\mu )^2 \mu ^8}+\frac{90 Q^2 \omega _3}{(-1+\mu )^4 \mu ^7}+}\\ {\frac{135 Q^2 \omega _3}{2 (-1+\mu )^2 \mu ^7}-\frac{135 Q^2 \omega _3}{(-1+\mu )^4 \mu ^6}-\frac{135 Q^2 \omega _3}{4 (-1+\mu )^2 \mu ^6}+\frac{90 Q^2 \omega _3}{(-1+\mu )^4 \mu ^5}-\frac{45 Q^2 \omega _3}{2 (-1+\mu )^4 \mu ^4}+}\\ {\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu \omega _3}{2 (1-\mu )^6 (-1+\mu )^6}+\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu \omega _3}{2 (1-\mu )^8 (-1+\mu )^4}+}\\ {\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu \omega _3}{8 (1-\mu )^7 (-1+\mu )^4}-\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu ^2 \omega _3}{4 (1-\mu )^6 (-1+\mu )^6}-}\\ {\frac{135 q^2 \sqrt{(1-\mu )^2} \sqrt{(-1+\mu )^2} \mu ^2 \omega _3}{4 (1-\mu )^8 (-1+\mu )^4}+\frac{45 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{2 (1-\mu )^6 (-1+\mu )^4}+}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{4 
(1-\mu )^8 (-1+\mu )^2}+\frac{45 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{2 (1-\mu )^6 (-1+\mu )^4 \mu ^6}+}\\ {\frac{135 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3}{4 (-1+\mu )^4 \mu ^6}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^6 (-1+\mu )^4 \mu ^5}+}\\ {\frac{675 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{2 (1-\mu )^6 (-1+\mu )^4 \mu ^4}+\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{4 (1-\mu )^8 (-1+\mu )^2 \mu ^4}+}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{8 (1-\mu )^7 (-1+\mu )^2 \mu ^4}+\frac{135 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3}{4 (-1+\mu )^6 \mu ^4}-}\\ {\frac{450 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^6 (-1+\mu )^4 \mu ^3}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^8 (-1+\mu )^2 \mu ^3}-}\\ {\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{8 (1-\mu )^7 (-1+\mu )^2 \mu ^3}+\frac{675 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{2 (1-\mu )^6 (-1+\mu )^4 \mu ^2}+}\\ {\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{2 (1-\mu )^8 (-1+\mu )^2 \mu ^2}+\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{8 (1-\mu )^7 (-1+\mu )^2 \mu ^2}-}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^6 (-1+\mu )^4 \mu }-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{(1-\mu )^8 (-1+\mu )^2 \mu }-}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3}{8 (1-\mu )^7 (-1+\mu )^2 \mu }+\frac{405 q \sqrt{(1-\mu )^2} \omega _1 \omega _3}{4 (1-\mu )^8}-\frac{405 q \sqrt{(1-\mu )^2} \mu \omega _1 \omega _3}{4 (1-\mu )^8}+}\\ {\frac{135 Q \sqrt{\mu ^2} \omega _1 \omega _3}{2 \mu ^7}+\frac{135 q \sqrt{(-1+\mu )^2} \omega _3^2}{4 (-1+\mu )^7}+\frac{45 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^5 \mu ^7}-\frac{225 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^5 \mu ^6}+}\\ {\frac{225 Q \sqrt{\mu ^2} \omega _3^2}{(-1+\mu )^5 \mu ^5}-\frac{225 Q \sqrt{\mu ^2} \omega _3^2}{(-1+\mu )^5 \mu ^4}+\frac{225 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^5 \mu ^3}-\frac{45 Q \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^5 \mu ^2}+}\\ {\frac{75 q^2 \omega _3^2}{2 (-1+\mu )^8 \omega _1}+\frac{25 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^8 \omega _1}-\frac{150 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^7 \omega _1}+\frac{375 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^6 \omega _1}-}\\ {\frac{500 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^5 \omega _1}+\frac{375 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^4 \omega _1}-\frac{150 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^3 \omega _1}+\frac{25 Q^2 \omega _3^2}{(-1+\mu )^6 \mu ^2 \omega _1}-}\\ {\frac{25 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{(-1+\mu )^8 \mu ^6 \omega _1}+\frac{100 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{(-1+\mu )^8 \mu ^5 \omega _1}-}\\ {\frac{375 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^8 \mu ^4 \omega _1}+\frac{175 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{(-1+\mu )^8 \mu ^3 \omega _1}-}\\ {\frac{125 q Q \sqrt{(-1+\mu )^2} \sqrt{\mu ^2} \omega _3^2}{2 (-1+\mu )^8 \mu ^2 \omega _1}+\frac{405 q^2 \omega _1^3}{8 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{405 q^2 \omega _1^3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{405 Q^2 \omega _1^3}{8 \mu ^8 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q^2 \mu \omega _1^3}{2 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{1215 q^2 \mu \omega _1^3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{1215 q^2 \mu ^2 \omega _1^3}{4 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{1215 q^2 \mu ^2 
\omega _1^3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q^2 \mu ^3 \omega _1^3}{2 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{405 q^2 \mu ^3 \omega _1^3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{405 q^2 \mu ^4 \omega _1^3}{8 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{8 (1-\mu )^6 \mu ^6 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{4 (1-\mu )^6 \mu ^5 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{8 (1-\mu )^8 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{16 (1-\mu )^7 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{8 (1-\mu )^6 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{4 (1-\mu )^8 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{16 (1-\mu )^7 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{405 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^3}{8 (1-\mu )^8 \mu ^2 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q^2 \omega _1^2 \omega _3}{8 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q^2 \omega _1^2 \omega _3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{135 Q^2 \omega _1^2 \omega _3}{8 \mu ^8 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q^2 \mu \omega _1^2 \omega _3}{2 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q^2 \mu \omega _1^2 \omega _3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{405 q^2 \mu ^2 \omega _1^2 \omega _3}{4 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{405 q^2 \mu ^2 \omega _1^2 \omega _3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q^2 \mu ^3 \omega _1^2 \omega _3}{2 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{135 q^2 \mu ^3 \omega _1^2 \omega _3}{16 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q^2 \mu ^4 \omega _1^2 \omega _3}{8 (1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{8 (1-\mu )^6 \mu ^6 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{4 (1-\mu )^6 \mu ^5 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{8 (1-\mu )^8 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{16 (1-\mu )^7 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{8 (1-\mu )^6 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{4 (1-\mu )^8 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{16 (1-\mu )^7 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{8 (1-\mu )^8 \mu ^2 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q^2 \omega _1 \omega _3^2}{(1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q^2 \omega _1 \omega _3^2}{2 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ 
{\frac{135 Q^2 \omega _1 \omega _3^2}{\mu ^8 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{540 q^2 \mu \omega _1 \omega _3^2}{(1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{405 q^2 \mu \omega _1 \omega _3^2}{2 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{810 q^2 \mu ^2 \omega _1 \omega _3^2}{(1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{405 q^2 \mu ^2 \omega _1 \omega _3^2}{2 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{540 q^2 \mu ^3 \omega _1 \omega _3^2}{(1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{135 q^2 \mu ^3 \omega _1 \omega _3^2}{2 (1-\mu )^{11} \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q^2 \mu ^4 \omega _1 \omega _3^2}{(1-\mu )^{12} \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^6 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{270 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^5 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^8 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{2 (1-\mu )^7 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^4 \left(\omega _1^2-4 \omega _3^2\right)}-}\\ {\frac{270 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^8 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}-\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{2 (1-\mu )^7 \mu ^3 \left(\omega _1^2-4 \omega _3^2\right)}+}\\ {\frac{135 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^8 \mu ^2 \left(\omega _1^2-4 \omega _3^2\right)}+\frac{270 q^2 \omega _1^2 \omega _3}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{270 Q^2 \omega _1^2 \omega _3}{\mu ^8 \left(-4 \omega _1^2+\omega _3^2\right)}-}\\ {\frac{540 q^2 \mu \omega _1^2 \omega _3}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{270 q^2 \mu ^2 \omega _1^2 \omega _3}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{540 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{(1-\mu )^6 \mu ^5 \left(-4 \omega _1^2+\omega _3^2\right)}-}\\ {\frac{540 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1^2 \omega _3}{(1-\mu )^6 \mu ^4 \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{54 q^2 \omega _1 \omega _3^2}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{54 Q^2 \omega _1 \omega _3^2}{\mu ^8 \left(-4 \omega _1^2+\omega _3^2\right)}-}\\ {\frac{108 q^2 \mu \omega _1 \omega _3^2}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{54 q^2 \mu ^2 \omega _1 \omega _3^2}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{108 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^5 \left(-4 \omega _1^2+\omega _3^2\right)}-}\\ {\frac{108 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _1 \omega _3^2}{(1-\mu )^6 \mu ^4 \left(-4 \omega _1^2+\omega _3^2\right)}-\frac{54 q^2 \omega _3^3}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}-}\\ {\frac{54 Q^2 \omega _3^3}{\mu ^8 \left(-4 \omega _1^2+\omega _3^2\right)}+\frac{108 q^2 \mu \omega _3^3}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}-\frac{54 q^2 \mu ^2 \omega _3^3}{(1-\mu )^{10} \left(-4 \omega _1^2+\omega _3^2\right)}-}\\ {\left.\frac{108 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3^3}{(1-\mu )^6 \mu ^5 \left(-4 \omega _1^2+\omega 
_3^2\right)}+\frac{108 q Q \sqrt{(1-\mu )^2} \sqrt{\mu ^2} \omega _3^3}{(1-\mu )^6 \mu ^4 \left(-4 \omega _1^2+\omega _3^2\right)}\right);}\) \noindent When $\mu \to 0.00025$, $A\to 0.00025$, $q\to 0.025$ and $Q\to 0.00025$, this becomes \noindent\( {D_2=-3.59996\times 10^{15} \left(-30720.2+\omega _1\right) \omega _1-\frac{8.64\times 10^{18} \omega _1^2}{\omega _3}+}\\ {\left(-1.12895\times 10^{20}+1.72798\times 10^{16} \omega _1\right) \omega _3+\frac{\left(2.55997\times 10^{19}-5.7599\times 10^{15} \omega _1\right) \omega _3^2}{\omega _1}+}\\ {\frac{\omega _1^2 \left(2.21184\times 10^{20} \omega _1+5.5296\times 10^{19} \omega _3\right)}{-4. \omega _1^2+1. \omega _3^2}+\frac{\omega _1^2 \left(-4.31996\times 10^{18} \omega _1-4.31996\times 10^{18} \omega _3\right)}{-0.25 \omega _1^2+1. \omega _3^2}}\) \noindent When $\mu \to 0.00025$, $A\to 0.00025$, $q\to 0.025$, $Q\to 0.00025$ and $\omega _3\to 1$, using all the values of the coefficients and algebraic manipulations we get the following expression for $D_2$:\\ \noindent\( {D_2=5.76096\times 10^{14}-\frac{3.2\times 10^{14}}{\omega _1}-1.92\times 10^{11} \omega _1+}\\ {3.6\times 10^{10} \omega _1^2+\frac{1.152\times 10^{15} \omega _1}{-4+\omega _1^2}-\frac{1.44\times 10^{14} \omega _1^2}{-4+\omega _1^2}-\frac{4.32\times 10^{14} \omega _1^3}{-4+\omega _1^2}+}\\ {0.00025 \left(-2.30423\times 10^{23}+\frac{1.024\times 10^{23}}{\omega _1}+8.30131\times 10^{22} \omega _1-\right.}\\ {3.45744\times 10^{22} \omega _1^2-\frac{2.21184\times 10^{23}}{1-4 \omega _1^2}+\frac{2.21184\times 10^{23} \omega _1}{1-4 \omega _1^2}+}\\ {\left.\frac{1.10592\times 10^{24} \omega _1^2}{1-4 \omega _1^2}-\frac{5.5296\times 10^{23} \omega _1}{-4+\omega _1^2}+\frac{6.912\times 10^{22} \omega _1^2}{-4+\omega _1^2}+\frac{2.0736\times 10^{23} \omega _1^3}{-4+\omega _1^2}\right)}\) We have computed the value of $D_2$ numerically for various values of the parameters when $\mu=0.0025$ and $\omega_3=1$. The graphs plot $D_2$ versus $\omega_1$. In figure \ref{fig:fig1}, plots I, II and III correspond to $q_2=0.25$, $0.50$ and $0.75$ respectively, with $q_1=0.25$ fixed; the vertical line shown in each graph is an asymptote. The effect of $A_2$ is also shown: the first three curves are for $A_2=0.0025$ and the second three curves for $A_2=0.0050$. Similarly, we have obtained the effect of $q_1$ in figure \ref{fig:fig2}, in which plots I and II correspond to $q_1=0.50$ and $0.75$ respectively, with $q_2=0.25$ fixed. Here the first two curves are plotted for $A_2=0.0025$ and the second two for $A_2=0.0050$. \begin{figure} \caption{$D_2$ versus $\omega_1$ for $q_2=0.25$, $0.50$, $0.75$ with $q_1=0.25$ fixed.} \label{fig:fig1} \end{figure} \begin{figure} \caption{$D_2$ versus $\omega_1$ for $q_1=0.50$, $0.75$ with $q_2=0.25$ fixed.} \label{fig:fig2} \end{figure} \section{Conclusion} It is evident from all the above figures that the curves have the form of rectangular hyperbolas with a singularity at $\omega_1=0.50$, and we find that $D_2\neq 0$ for various values of the different parameters. Thus we conclude that, since $D_2\neq 0$, according to Arnold's theorem the out-of-plane equilibrium point $L_6$ is stable in the non-linear sense. {\bf Acknowledgements:} We are thankful to D.S.T., Govt. of India, New Delhi, for sanctioning the project SR/S4/MS:380/06, 13/5/2008. We are also thankful to Dr Badam Singh Kushvah, Department of Applied Mathematics, ISM, Dhanbad, India, for valuable suggestions in preparing this manuscript during our visit to IUCAA, Pune. \end{document}
\begin{document} \title{Globally Irreducible Weyl modules} \author[S. Garibaldi]{Skip Garibaldi} \address{Center for Communications Research, San Diego, California 92121} \email{[email protected]} \author[R.M. Guralnick]{Robert M. Guralnick} \address{Department of Mathematics, University of Southern California, Los Angeles, CA 90089-2532} \email{[email protected]} \author[D.K. Nakano]{Daniel K. Nakano} \address{Department of Mathematics, University of Georgia, Athens, Georgia 30602, USA} \email{[email protected]} \dedicatory{Dedicated to Benedict Gross} \thanks{The second author was partially supported by NSF grants DMS-1265297 and DMS-1302886. Research of the third author was partially supported by NSF grants DMS-1402271 and DMS-1701768.} \subjclass[2010]{Primary 20G05, 20C20} \begin{abstract} In the representation theory of split reductive algebraic groups, it is well known that every Weyl module with minuscule highest weight is irreducible over every field. Also, the adjoint representation of $E_8$ is irreducible over every field. In this paper, we prove a converse to these statements, as conjectured by Gross: if a Weyl module is irreducible over every field, it must be either one of these, or trivially constructed from one of these. We also prove a related result on non-degeneracy of the reduced Killing form. \end{abstract} \maketitle \section{Introduction} Split semisimple linear algebraic groups over arbitrary fields can be viewed as a generalization of semisimple Lie algebras over the complex numbers, or even compact real Lie groups. As with Lie algebras, such algebraic groups are classified up to isogeny by their root system. Moreover, the set of irreducible representations of such a group is in bijection with the cone of dominant weights for the root system and the representation ring --- i.e., $K_0$ of the category of finite-dimensional representations --- is a polynomial ring with generators corresponding to a basis of the cone. One way in which this analogy breaks down is that, for an algebraic group $G$ over a field $k$ of \emph{prime} characteristic, in addition to the irreducible representation $L(\lambda)$ corresponding to a dominant weight $\lambda$, there are three other representations naturally associated with $\lambda$, namely the standard module $H^0(\lambda)$, the Weyl module $V(\lambda)$, and the tilting module $T(\lambda)$.\footnote{The definitions of these three modules make sense also when $\car k = 0$, and in that case all four modules are isomorphic.} The definition of $H^0(\lambda)$ is particularly simple: view $k$ as a one-dimensional representation of a Borel subgroup $B$ of $G$ where $B$ acts via the character $\lambda$, then define $H^0(\lambda):=\text{ind}_{B}^{G}\lambda$ to be the induced $G$-module. The \emph{Weyl module} $V(\lambda)$ is the dual of $H^0(-w_0\lambda)$ for $w_0$ the longest element of the Weyl group and has head $L(\lambda)$. Typical examples of Weyl modules are $\Lie(G)$ for $G$ semisimple simply connected ($V(\lambda)$ for $\lambda$ the highest root) and the natural module of $\SO_n$. See \cite{Jan} for general background on these three families of representations. It turns out that if any two of the four representations $L(\lambda)$, $H^0(\lambda)$, $V(\lambda)$, $T(\lambda)$ are isomorphic over a given field $k$, then all four are. Our focus is on the question: for which $\lambda$ are all four isomorphic for \emph{every} field $k$?
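To see what can go wrong, recall a standard example (included here only for orientation; it is not needed in what follows): take $G = \SL_2$ and $\lambda = p$ for a prime $p$. Over any field of characteristic $0$ the four modules above coincide and have dimension $p+1$, but over a field $k$ of characteristic $p$ the Weyl module $V(\lambda) \otimes k$ is not irreducible, having the two composition factors $L(p)$, of dimension $2$, and $L(p-2)$, of dimension $p-1$. So already for $\SL_2$ the answer depends on $\lambda$ in a nontrivial way.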
This can be interpreted as a question about representations of split reductive group schemes over $\mathbb{Z}$. Recall that isomorphism classes of such groups are in bijection with (reduced) root data as described in \cite[XXIII.5.2]{SGA3:new}. A root datum for a group $G$ includes a character lattice $X(T)$ of a split maximal torus $T$ and the set $R \subset X(T)$ of roots of $G$ with respect to $T$. Picking an ordering on $R$ specifies a cone of dominant weights $X(T)_+$ in $X(T)$. For each $\lambda \in X(T)_+$, there is a representation $V(\lambda)$ for $G$, defined over $\mathbb{Z}$, that is generated by a highest weight vector with weight $\lambda$ such that $V(\lambda) \otimes \mathbb{C}$ is the irreducible representation with highest weight $\lambda$ of the complex reductive group $G \times \mathbb{C}$ and for every field $k$, $V(\lambda) \otimes k$ is the Weyl module of $G \times k$ mentioned above, see \cite[II.8.3]{Jan} or \cite[p.~212]{St}. Consequently, the question in the preceding paragraph is the same as asking: \emph{For which $G$ and $\lambda$ is it true that $V(\lambda) \otimes k$ is an irreducible representation of $G \times k$ for every field $k$?} Because $G$ is split, $V(\lambda) \otimes k$ is irreducible if and only if $V(\lambda) \otimes P$ is irreducible, where $P$ is the prime field of $k$\footnote{See \cite[II.2.9]{Jan}. For a detailed study of how this fails when $G$ is not split, see \cite{Ti:R}.}; it is therefore natural to call such $V(\lambda)$ \emph{globally irreducible}. There is a well-known and elementary sufficient criterion: \begin{equation} \label{sufficient} \text{\emph{If $\lambda$ is minuscule, then $V(\lambda) \otimes k$ is irreducible for every field $k$.}} \end{equation} See \S\ref{defs} for the definition of minuscule. This provides an important family of examples, because representations occurring in this way include $\Lambda^r(V)$ for $1 \le r < n$ where $V$ is the natural module for $\SL_n$; the natural modules for $\SO_{2n}$, $\Sp_{2n}$, $E_6$ and $E_7$; and the (half) spin representations of $\Spin_n$. While these representations play an outsized role, it is nevertheless true that in any reasonable sense they are a set of measure zero among the list of irreducible representations. Therefore, we were surprised when Benedict Gross proposed to us that the sufficient condition \eqref{sufficient} is quite close to also being a \emph{necessary} condition, i.e., that there is only one other example. The purpose of this paper is to prove his claim, which is the following theorem. \begin{thm} \label{MT} Let $G$ be a split, simple algebraic group over $\mathbb{Z}$ with split maximal torus $T$ and fix $\lambda \in X(T)_+$. In the following cases, $V(\lambda) \otimes k$ is irreducible for every field $k$: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item \label{MT.min} $\lambda$ is a \underline{minuscule} dominant weight, or \item \label{MT.E8} $G$ is a group of type $E_8$ and $\lambda$ is the highest root (i.e., $V(\lambda)$ is the adjoint representation for $E_{8}$); \end{enumerate} Otherwise, there is a prime $p \le 2(\rank G) + 1$ such that $V(\lambda) \otimes k$ is a reducible representation of $G$ for every field $k$ of characteristic $p$. \end{thm} The bound $2(\rank G) + 1$ is sharp by Theorem \ref{B.thm} below. The case where $G$ is simple and simply connected (as in Theorem \ref{MT}) is the main case. We have stated the theorem with these simplified hypotheses for the sake of clarity.
See \S\ref{defs} for a discussion of the more general version where $G$ is assumed merely to be reductive. One surprising feature of our proof is the method we use to address a particular Weyl module of type $B$ in \S\ref{B.sec}, which we settle by appealing to modular representation theory of finite groups. The literature contains some results complementary to Theorem \ref{MT}, although we do not use them in our proof. For $G$ of type $A$, Jantzen gave in \cite[II.8.21]{Jan} a necessary and sufficient condition for the Weyl module $V(\lambda)$ to be irreducible over fields of characteristic $p$. McNinch \cite{McNinch:ss} (extending Jantzen \cite{Jantzen:low}) showed that for simple $G$ and for $\operatorname{dim}\nolimits V(\lambda) \le (\car k) \cdot (\rank G)$, $V(\lambda)$ is irreducible. We remark that John Thompson asked in \cite{Th} an analogous question where $G$ is finite: for which $\mathbb{Z}[G]$-lattices $L$ is $L/pL$ irreducible for every prime $p$? This was extended by Gross to the notion of globally irreducible representations, see \cite{Gross:JAMS} and \cite{Tiep}. Our results demonstrate that $F_{4}$ and $G_{2}$ are the only groups that do not admit globally irreducible representations other than the trivial representation. In an appendix, we prove another result that is similar in flavor to Theorem \ref{MT}: we determine the split simple $G$ over $\mathbb{Z}$ such that the reduced Killing form on $\Lie(G) \otimes k$ is nondegenerate for every field $k$. This is done by calculating the determinant of the form on $\Lie(G)$, completing the calculation for $G$ simply connected in \cite{SpSt}. \sigmaubsection*{Quasi-minuscule representations} The representations appearing in \eqref{MT.min} and \eqref{MT.E8} of Theorem \ref{MT} are \emph{quasi-minuscule} (called ``basic'' in \cite{Matsumoto:basic}), meaning that the non-zero weights are a single orbit under the Weyl group. For $G$ simple, the quasi-minuscule Weyl modules are the $V(\lambda)$ with $\lambda$ minuscule or equal to the highest short root $\alpha_0$. It is not hard to see that $V(\alpha_0) \otimes k$ is reducible for some $k$ when $G$ is not of type $E_8$. If $G$ has type $A$, $D$, $E_6$, or $E_7$, then $V(\alpha_0)$ is the action of $G$ on the Lie algebra of its simply connected cover ${\widetilde{G}}$, and the Lie algebra of the center $Z$ of ${\widetilde{G}}$ is a nonzero invariant submodule when $\car k$ divides the exponent of $Z$. The case where $G$ has type $B$ or $C$ is discussed in \S\ref{fund.sec}. If $G$ has type $G_2$ or $F_4$, then $V(\alpha_0)$ is the space of trace zero elements in an octonion or Albert algebra, and the identity element generates an invariant subspace if $\car k = 2$ or 3 respectively. \sigmaubsection*{Acknowledgements} The authors thank Dick Gross for suggesting the problem that led to the formulation of Theorem~\ref{MT}\eqref{MT.min}\eqref{MT.E8}, and for several useful discussions pertaining to the contents of the paper. The authors also thank Henning Andersen, James Humphreys, Jens Carsten Jantzen, George Lusztig, and the referee for their suggestions and comments on an earlier version of this manuscript. \sigmaection{Definitions and notation} \lambdabel{defs} We will follow the notation and conventions presented in \cite{Jan}. When we refer to an algebraic group $G$, we mean a smooth affine group scheme of finite type as in \cite{SGA3:new} or \cite[Ch.~VI]{KMRT}, as opposed to its (abstract) group of $k$-points, which we denote by $G(k)$. 
An example of this difference is that the natural map $\SL_p \to \PGL_p$ has nontrivial kernel the group scheme $\mu_p$, yet for $k$ a field of prime characteristic $p$, the map $\SL_p(k) \to \PGL_p(k)$ is injective. Let $G$ be a simple simply connected algebraic group, $T$ be a maximal split torus of $G$ and $\Phi$ be the root system associated to $(G,T)$. Fix a choice of simple roots $\Delta$. Let $B$ be a Borel subgroup containing $T$ corresponding to the negative roots and let $U$ denote the unipotent radical of $B$. One can naturally view $\Phi$ as contained in a Euclidean space ${\mathbb E}$ with inner product $\lambdangle\ , \ \rangle$. Let $X(T)$ be the integral weight lattice obtained from $\Phi$. The set $X(T)$ has a partial ordering defined as follows. If $\lambdambda,\mu\in X(T)$, then $\lambdambda\geq \mu$ if and only if $\lambdambda - \mu\in \sigmaum_{\alpha\in \Delta}\mathbb{N}\alpha$. \begin{table}[bth] {\centering\noindent\makebox[450pt]{ \begin{tabular}[c]{p{2.2in}|p{2.2in}} $\sigmamall{(A_n)~~}$ \begin{picture}(7,2)(0,0) \multiput(0,1)(20,0){3}{\circle{6}} \multiput(62,1)(20,0){3}{\circle{6}} \put(0,1){\circle*{3}} \put(0,1){\line(1,0){20}} \put(20,1){\circle*{3}} \put(20,1){\line(1,0){20}} \put(40,1){\circle*{3}} \put(40,-1.6){ \mbox{$\cdots$}} \put(62,1){\circle*{3}} \put(62,1){\line(1,0){20}} \put(82,1){\circle*{3}} \put(82,1){\line(1,0){20}} \put(102,1){\circle*{3}} \put(-2,-7){\mbox{\tiny $1$}} \put(18,-7){\mbox{\tiny $2$}} \put(38,-7){\mbox{\tiny $3$}} \put(54,-7){\mbox{\tiny $n$$-$$2$}} \put(75,-7){\mbox{\tiny $n$$-$$1$}} \put(100,-7){\mbox{\tiny $n$}} \end{picture} & $\sigmamall{(E_6)~~}$ \begin{picture}(7,2)(0,0) \put(0,-5){\circle*{3}} \put(0,-5){\circle{6}} \put(0,-5){\line(1,0){15}} \put(15,-5){\circle*{3}} \put(15,-5){\line(1,0){15}} \put(30,-5){\circle*{3}} \put(30,10){\circle*{3}} \put(30,-5){\line(0,1){15}} \put(30,-5){\line(1,0){15}} \put(45,-5){\circle*{3}} \put(45,-5){\line(1,0){15}} \put(60,-5){\circle*{3}} \put(60,-5){\circle{6}} \put(-2,-13){\mbox{\tiny $1$}} \put(13,-13){\mbox{\tiny $3$}} \put(28,-13){\mbox{\tiny $4$}} \put(43,-13){\mbox{\tiny $5$}} \put(58.5,-13){\mbox{\tiny $6$}} \put(33,9){\mbox{\tiny $2$}} \put(22,9){\mbox{\tiny $\sigmatar$}} \end{picture} \\ $\sigmamall{(B_n)~~}$ \begin{picture}(7,2)(0,0) \put(0,1){\circle*{3}} \put(0,1){\line(1,0){20}} \put(20,1){\circle*{3}} \put(20,1){\line(1,0){20}} \put(40,1){\circle*{3}} \put(40,-1.6){ \mbox{$\cdots$}} \put(62,1){\circle*{3}} \put(62,1){\line(1,0){20}} \put(82,1){\circle*{3}} \put(82,2){\line(1,0){20}} \put(82,0){\line(1,0){20}} \put(89,-1){{\tiny\mbox{$>$}}} \put(102,1){\circle*{3}} \put(102,1){\circle{6}} \put(-2,-7){\mbox{\tiny $1$}} \put(-2,5){\mbox{\tiny $\sigmatar$}} \put(18,-7){\mbox{\tiny $2$}} \put(38,-7){\mbox{\tiny $3$}} \put(54,-7){\mbox{\tiny $n$$-$$2$}} \put(75,-7){\mbox{\tiny $n$$-$$1$}} \put(100,-7){\mbox{\tiny $n$}} \end{picture} & $\sigmamall{(E_7)~~}$ \begin{picture}(7,2)(0,0) \put(0,-5){\circle*{3}} \put(0,-5){\circle{6}} \put(0,-5){\line(1,0){15}} \put(15,-5){\circle*{3}} \put(15,-5){\line(1,0){15}} \put(30,-5){\circle*{3}} \put(30,-5){\line(1,0){15}} \put(45,-5){\circle*{3}} \put(45,-5){\line(1,0){15}} \put(45,10){\circle*{3}} \put(45,-5){\line(0,1){15}} \put(60,-5){\circle*{3}} \put(60,-5){\line(1,0){15}} \put(75,-5){\circle*{3}} \put(-2,-13){\mbox{\tiny $7$}} \put(13,-13){\mbox{\tiny $6$}} \put(28,-13){\mbox{\tiny $5$}} \put(43,-13){\mbox{\tiny $4$}} \put(58.5,-13){\mbox{\tiny $3$}} \put(73,-13){\mbox{\tiny $1$}} \put(73,0){\mbox{\tiny $\sigmatar$}} 
\put(47.5,9){\mbox{\tiny $2$}} \end{picture} \\ $\sigmamall{(C_n)~~}$ \begin{picture}(7,2)(0,0) \put(0,1){\circle*{3}} \put(0,1){\line(1,0){20}} \put(20,1){\circle*{3}} \put(20,1){\line(1,0){20}} \put(40,1){\circle*{3}} \put(40,-1.6){ \mbox{$\cdots$}} \put(62,1){\circle*{3}} \put(62,1){\line(1,0){20}} \put(82,1){\circle*{3}} \put(82,2){\line(1,0){20}} \put(82,0){\line(1,0){20}} \put(89,-1){{\tiny\mbox{$<$}}} \put(102,1){\circle*{3}} \put(0,1){\circle{6}} \put(-2,-7){\mbox{\tiny $1$}} \put(18,-7){\mbox{\tiny $2$}} \put(18,5){\mbox{\tiny $\sigmatar$}} \put(38,-7){\mbox{\tiny $3$}} \put(54,-7){\mbox{\tiny $n$$-$$2$}} \put(75,-7){\mbox{\tiny $n$$-$$1$}} \put(100,-7){\mbox{\tiny $n$}} \end{picture} & $\sigmamall{(E_8)~~}$ \begin{picture}(7,2)(0,0) \put(0,-5){\circle*{3}} \put(0,-5){\line(1,0){15}} \put(15,-5){\circle*{3}} \put(15,-5){\line(1,0){15}} \put(30,-5){\circle*{3}} \put(30,-5){\line(1,0){15}} \put(45,-5){\circle*{3}} \put(60,-5){\line(0,1){15}} \put(60,-5){\circle*{3}} \put(60,10){\circle*{3}} \put(75,-5){\circle*{3}} \put(75,-5){\line(1,0){15}} \put(90,-5){\circle*{3}} \put(45,-5){\line(1,0){15}} \put(60,-5){\line(1,0){15}} \put(-2,-13){\mbox{\tiny $8$}} \put(-2,0){\mbox{\tiny $\sigmatar$}} \put(13,-13){\mbox{\tiny $7$}} \put(28,-13){\mbox{\tiny $6$}} \put(43,-13){\mbox{\tiny $5$}} \put(58.5,-13){\mbox{\tiny $4$}} \put(73,-13){\mbox{\tiny $3$}} \put(88,-13){\mbox{\tiny $1$}} \put(62.5,9){\mbox{\tiny $2$}} \end{picture} \\ $\sigmamall{(D_n)~~}$ \begin{picture}(7,2)(0,0) \put(0,1){\circle*{3}} \put(0,1){\circle{6}} \put(0,1){\line(1,0){20}} \put(20,1){\circle*{3}} \put(20,1){\line(1,0){20}} \put(40,1){\circle*{3}} \put(40,-1.6){ \mbox{$\cdots$}} \put(62,1){\circle*{3}} \put(62,1){\line(1,0){20}} \put(82,1){\circle*{3}} \put(82,2){\line(4,3){15}} \put(82,0){\line(4,-3){15}} \put(96.5,12.9){\circle*{3}} \put(96.5,12.9){\circle{6}} \put(96.5,-10.9){\circle*{3}} \put(96.5,-10.9){\circle{6}} \put(-2,-7){\mbox{\tiny $1$}} \put(18,5){\mbox{\tiny $\sigmatar$}} \put(18,-7){\mbox{\tiny $2$}} \put(38,-7){\mbox{\tiny $3$}} \put(54,-7){\mbox{\tiny $n$$-$$3$}} \put(86,-0.5){\mbox{\tiny $n$$-$$2$}} \put(100,-12){\mbox{\tiny $n$}} \put(100,11.8){\mbox{\tiny $n$$-$$1$}} \end{picture} & $\sigmamall{(F_4)~~}$ \begin{picture}(7,2)(0,0) \put(0,1){\circle*{3}} \put(0,1){\line(1,0){15}} \put(15,1){\circle*{3}} \put(15,0){\line(1,0){15}} \put(15,2){\line(1,0){15}} \put(19,-1){{\tiny\mbox{$>$}}} \put(30,1){\circle*{3}} \put(30,1){\line(1,0){15}} \put(45,1){\circle*{3}} \put(-2,-7){\mbox{\tiny $1$}} \put(13,-7){\mbox{\tiny $2$}} \put(28,-7){\mbox{\tiny $3$}} \put(43,-7){\mbox{\tiny $4$}} \put(43,5){\mbox{\tiny $\sigmatar$}} \put(65,1){$\sigmamall{(G_2)~~}$} \put(92,1){\circle*{3}} \put(92,0.1){\line(1,0){15}} \put(92,1.1){\line(1,0){15}} \put(92,2.1){\line(1,0){15}} \put(96,-1){{\tiny\mbox{$<$}}} \put(107,1){\circle*{3}} \put(90,-7){\mbox{\tiny $1$}} \put(90,5){\mbox{\tiny $\sigmatar$}} \put(105,-7){\mbox{\tiny $2$}} \end{picture} \end{tabular} }} \vskip .5cm \caption{Dynkin diagrams of simple root systems, with simple roots numbered. A circle around vertex $i$ indicates that the fundamental weight $\omega_i$ is minuscule. A $\sigmatar$ indicates that $\omega_i$ is the highest short root $\alpha_0$. 
The highest short root of $A_n$ is $\omega_1 + \omega_n$.} \lambdabel{dynks.table} \end{table} For $\alpha^{\vee}:=\frac{2\alpha}{\lambdangle\alpha,\alpha\rangle}$ the coroot corresponding to $\alpha\in \Phi$, the set of dominant integral weights is defined by $$X(T)_{+}:=\{\lambdambda\in X(T):\ 0\leq \lambdangle\lambdambda,\alpha^{\vee}\rangle\ \text{for all $\alpha \in \Delta$} \}.$$ The fundamental weights $\omega_{j}$ for $j=1,2,\dots, n$ are the dual basis to the simple coroots. That is, if $\Delta=\{\alpha_{1},\alpha_{2},\dots,\alpha_{n}\}$ then $\lambdangle \omega_{i},\alpha_{j}^{\vee} \rangle =\delta_{i,j}$. We call the weights in $X(T)_+$ that are minimal with respect to the partial ordering \emph{minuscule} weights. Note that the zero weight is minuscule by this definition (in some references this is not the case). Every nonzero minuscule weight is a fundamental dominant weight (one of the $\omega_i$'s), and we have marked them in Table \ref{dynks.table}. We remark that there is a unique minuscule weight in each coset of the root lattice $\mathbb{Z}\Phi$ in the weight lattice $X(T)$ by \cite[\S{VI.2}, Exercise 5a]{Bou:g4} or \cite[\S13, Exercise 13]{Hum:Lie}; this can be an aid for remembering the number of minuscule weights for each type and for determining which minuscule weight lies below a given dominant weight. \sigmaubsection*{Generalization of Theorem \ref{MT} to split reductive groups} Suppose now that $G$ is a split reductive group over a field $k$. Then there is a unique split reductive group scheme over $\mathbb{Z}$ whose base change to $k$ is $G$, which we denote also by $G$; it is the split reductive group scheme over $\mathbb{Z}$ with the same root datum as $G$. Moreover, there is a split reductive group scheme $G'$ over $\mathbb{Z}$ with a central isogeny $G' \to G$ where $G' = \prod_{i=0}^r G_i$ for $G_0$ a torus and $G_i$ simple and simply connected for $i \ne 0$, cf.~\cite[XXI.6.5.10]{SGA3:new}. A Weyl module $V(\lambda)$ for $G$ restricts to a Weyl module $V(\sigmaum \lambda_i)$ for $G'$, where $\lambda_i$ denotes the restriction of $\lambda$ to a maximal torus in $G_i$, and as in \cite[Lemma I.3.8]{Jan} we have $V(\sigmaum \lambda_i) \cong \otimes_{i=0}^r V(\lambda_i)$ where $V(\lambda_0)$ is one-dimensional. Therefore, $V(\lambda) \otimes k$ is an irreducible $G$-module for every field $k$ if and only if $V(\lambda_i) \otimes k$ is an irreducible $G_i$-module for every $k$, i.e., if and only if $(G_i, V(\lambda_i))$ satisfies condition \eqref{MT.min} or \eqref{MT.E8} of Theorem \ref{MT} for all $i \ne 0$. \sigmaection{Restriction to Levi subgroups} For $J\sigmaubseteq \Delta$, let $L_{J}$ be the Levi subgroup of $G$ generated by the maximal torus $T$ and the root subgroups corresponding to roots that are linear combinations of elements of $J$. Set \[ X_{J}(T)_{+}:=\{\lambdambda\in X(T)\ :\ 0\leq \lambdangle \lambdambda,\alpha^{\vee} \rangle \ \text{for all $\alpha\in J$}\}.\] For $\lambdambda\in X_{J}(T)_{+}$, we can construct an induced module $H^{0}_{J}(\lambdambda):=\text{ind}^{L_{J}}_{L_{J}\cap B}\lambdambda$ with simple $L_{J}$-socle $L_{J}(\lambdambda)$, and dually a Weyl module $V_{J}(\lambdambda)$ with head $L_{J}(\lambdambda)$. \begin{thm}\lambdabel{thm:Levireduction} Let $G$ be a simple simply connected algebraic group and $J\sigmaubseteq \Delta$. If $V(\lambdambda) \otimes k$ is an irreducible $G$-module, then $V_J(\lambda) \otimes k$ is an irreducible $L_J$-module. 
\end{thm} \begin{proof} For $k$ of characteristic 0, $V_J(\lambdambda)$ is just the set of fixed points of $Q_J$ on $V(\lambdambda)$ (the unipotent radical of the parabolic $P_J=L_JQ_J$); this is part of \cite{Smith}. Taking a $\mathbb{Z}$-form and reducing modulo $p$, we see that the dimension of the space of fixed points of $Q_J$ on $V(\lambdambda)$ can only go up in characteristic $p$. So if $V(\lambdambda)=L(\lambdambda)$, then again by \cite{Smith}, the fixed points of $Q_J$ on this module is $L_J(\lambdambda)$ but has dimension at least $V_J(\lambdambda)$. The other inequality is clear since $L_J(\lambdambda)$ is a quotient of $V_J(\lambdambda)$, so $L_J(\lambdambda)=V_J(\lambdambda)$. \end{proof} \begin{rmk} Given a group $G$ and a particular prime $p$, there are few known necessary and sufficient conditions in terms of $\lambdambda$ for the Weyl module $V(\lambdambda) \otimes k$ to be irreducible over every field $k$ of characteristic $p$. There is an easy-to-apply statement for $G = \SL_2$. For $G = \SL_n$, Jantzen gives a necessary and sufficient condition, but it is less easy to apply. There are also sporadic results in one direction or another, such as consequences of the Linkage Principle like \cite[II.6.24]{Jan} or irreducibility when $\lambdambda$ is restricted and $\operatorname{dim}\nolimits V(\lambdambda)$ is small. Theorem \ref{thm:Levireduction} provides an easy way to get necessary conditions on $\lambdambda$ by taking various small $J$. Writing $\lambdambda = \sigmaum c_i \omega_i$ and taking $J = \{\alpha_i\}$ one can apply the $\SL_2$ criterion to constrain the possible values of $c_i$. Taking $J$ to be pairs of adjacent roots of the same length allows one to reduce to the case of $A_2$, for which a lot is known, see \cite[II.8.20]{Jan}. \end{rmk} We mention the following related result that includes the case where $V(\lambda) \otimes k$ is reducible. \begin{prop} For every $\lambda \in X(T)_+$, every $J\sigmaubseteq \Delta$, and every field $k$, the irreducible representation $L_J(\lambda)$ of $L_J$ is a direct summand of $L(\lambda)\vert_{L_J}$. \end{prop} \begin{proof} For the sake of completeness we describe the analysis given in \cite[Section 8]{CN} which follows \cite{Smith} and \cite[II.5.21]{Jan}. There exists a weight space decomposition for the induced module given by $$ H^{0}(\lambdambda)=\left(\bigoplus_{\nu\in {\mathbb Z}J}H^{0}(\lambdambda)_{\lambdambda-\nu}\right) \oplus M. $$ where $M$ is the direct sum of all weight spaces $H^{0}(\lambdambda)_{\sigmaigma}$ where $\sigmaigma\neq \lambdambda-\nu$ for any $\nu\in {\mathbb Z}J$. Furthermore, $H^{0}_{J}(\lambdambda)=\oplus_{\nu\in {\mathbb Z}J}H^{0}(\lambdambda)_{\lambdambda-\nu}$ with the aforementioned decomposition being $L_{J}$-stable. This allows us to identify an $L_{J}$-direct summand \begin{equation} H^{0}(\lambdambda)|_{L_{J}}\cong H^{0}_{J}(\lambdambda)\oplus M. \end{equation} By definition $L(\lambdambda)=\text{soc}_{G}(H^{0}(\lambdambda))$. This implies that $\text{soc}_{L_{J}}L(\lambdambda)\sigmaubseteq \text{soc}_{L_{J}}(H^{0}(\lambdambda))$. Note that \begin{equation} L_{J}(\lambdambda)=\text{soc}_{L_{J}}(H^{0}_{J}(\lambdambda))\sigmaubseteq \text{soc}_{L_{J}}(H^{0}(\lambdambda)). \end{equation} Now $L_{J}(\lambdambda)$ appears as an $L_{J}$-composition factor of $L(\lambdambda)$ and $H^{0}(\lambdambda)$ with multiplicity one. Consequently, $L_{J}(\lambdambda)$ must occur in $\text{soc}_{L_{J}}L(\lambdambda)$. 
One can also apply the same argument for Weyl modules and see that \begin{equation} \lambdabel{eq;Weyldecomp} V(\lambdambda)|_{L_{J}}\cong V_{J}(\lambdambda)\oplus M^{\prime}. \end{equation} for some $L_{J}$-module $M^{\prime}$. By an argument dual to the one in the preceding paragraph, we deduce that $L_{J}(\lambdambda)$ appears in the head of $L(\lambdambda)|_{L_{J}}$. The fact that $L_{J}(\lambdambda)$ has multiplicity one in $L(\lambdambda)$ now shows that $L_{J}(\lambdambda)$ is an $L_{J}$-direct summand of $L(\lambdambda)$. \end{proof} \sigmaection{The case of fundamental weights} \lambdabel{fund.sec} We now verify Theorem \ref{MT} for every fundamental weight. We abuse notation and write $V(\lambda)$ instead of $V(\lambda) \otimes k$. \sigmaubsection*{Type $A_{n}$ ($n \ge 1$)} In this case, all the fundamental weights are minuscule, so $V(\lambdambda)=L(\lambdambda)$ for all $\lambdambda=\omega_{j}$, $j=1,2,\dots,n$. \sigmaubsection*{Type $B_{n}$ ($n \ge 2$)} For $B_n$, we claim that \emph{$V(\omega_i)$ is reducible for $1 \le i < n$ and $\car k = 2$}. The split adjoint group of type $B_n$ is $\SO(q)$ for a quadratic form $q$ on a vector space $X$ of dimension $2n+1$ where the tautological action on $X$ is $V(\omega_1)$, see \cite{KMRT} or \cite[\S23]{Borel}. As $\car k = 2$, the bilinear form $b_q$ deduced from $q$ by the formula $b_q(x,y) := q(x+y) - q(x) - q(y)$ is necessarily degenerate with 1-dimensional radical, providing an $\SO(q)$-invariant line, call it $S$. For $2 \le i < n$, we restrict to the Levi subgroup of type $B_{n-i+1}$ corresponding to $J = \{ \alpha_i, \alpha_{i+1}, \ldots, \alpha_n\}$. By the previous paragraph, $V_J(\omega_i)$ is reducible in characteristic 2, hence $V(\omega_i)$ is by Theorem \ref{thm:Levireduction}. Alternatively, one can see the reducibility concretely by noticing that $V(\omega_i)$ has the same character and dimension as $\Lambda^i(V(\omega_1))$, because this is so in case $k = \mathbb{C}$. In particular, $\Lambda^i(V(\omega_1))$ has a unique maximal weight, the highest weight of $V(\omega_i)$, and there is a nonzero $\SO(q)$-equivariant map $\phi \!: V(\omega_i) \to \Lambda^i(V(\omega_1))$. As $S \wedge \Lambda^{i-1}(V(\omega_1))$ is a proper and $\SO(q)$-invariant subspace of $\Lambda^i (V(\omega_1))$, it follows that $V(\omega_i)$ is reducible. \sigmaubsection*{Type $D_n$ ($n \ge 4$)} For type $D_n$, we claim that \emph{$V(\omega_i)$ is reducible for $2 \le i \le n - 2$ and $\car k = 2$.} The representation $V(\omega_2)$ has the same character and dimension as $\Lambda^2(V(\omega_1))$. The alternating bilinear form $b_q$ deduced from the invariant quadratic form $q$ on $V(\omega_1)$ gives an invariant line in $\Lambda^2(V(\omega_1))$ --- i.e., $D_n$ maps into $C_n$, which is already reducible on $\Lambda^2(V(\omega_1))$ --- proving the claim for $i = 2$. Alternatively, $V(\omega_2)$ is the adjoint action on the Lie algebra of $\Spin_{2n}$ (when $\car k = 2$, this is distinct from $\Lie(\SO_{2n})$), and the center $S$ is a proper submodule, namely $\Lie(\mu_2 \times \mu_2)$ (if $n$ is even) or $\Lie(\mu_4)$ (if $n$ is odd). For $2 < i \le n -2$, we may use either of the arguments employed in the $B_n$ case. \sigmaubsection*{Type $C_{n}$ ($n \ge 3$)} For type $C_n$ with $n \ge 3$, \cite[Th.~2(iv)]{PremetSup} gives that $V(\omega_2)$ is reducible when $\car k = p$ if and only if the prime $p$ divides $n$, compare \cite[p.~287]{Jan}. 
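For instance (a standard example, recorded here only for illustration), for $G = \Sp_6$, so $n = 3$, and $\car k = 3$, one has $\operatorname{dim}\nolimits V(\omega_2) = 14$ while $\operatorname{dim}\nolimits L(\omega_2) = 13$, the difference being a trivial composition factor.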
For $\omega_i$ with $2 < i < n$, restricting to the Levi of type $C_{n-i+2}$ corresponding to $J = \{ \alpha_{i-1}, \alpha_i, \ldots, \alpha_n \}$ shows that $V_J(\omega_i)$ is reducible if $p$ divides $n-i+2$. For $i = n$, we restrict to the Levi subgroup of type $C_2 = B_2$ corresponding to $J = \{ \alpha_{n-1}, \alpha_n \}$ to find that $V_J(\omega_n)$ is the 5-dimensional natural module for $B_2$, which is reducible in characteristic 2. \sigmaubsection*{Exceptional types} For exceptional types, tables of which fundamental weights $\omega$ have $V(\omega)$ reducible in which characteristics can be found in \cite[p.~299]{Jan:first} or, for smaller dimensions, in \cite{Lubeck}. These confirm our main theorem, and, in case $V(\omega) \otimes k$ is reducible for some $k$, it is so for a $k$ with $\car k = 2$ or 3. We remark that for the representations $V(\omega_i)$ of $E_8$ for $i \ne 8$, one can verify Theorem \ref{MT} by restricting to a Levi and using induction, instead of referring to \cite{Jan:first} or \cite{Lubeck} directly. Because it is such an important example, we mention specifically that the adjoint representation $V(\omega_8)$ of $E_8$ is irreducible because $\Lie(E_8)$ is simple for every field, see \cite{St:aut} or \cite{Hogeweij}. Here is an alternative argument provided to us by Gross: As $E_8$ is simply-laced, the Weyl group acts transitively on the roots, so the normalizer $N_{E_8}(T)$ of a split torus has an irreducible submodule in the adjoint representation $\Lie(E_8)$ given by the sum of all the root spaces. The miracle that is special to $E_8$ is that the Weyl group acts irreducibly on the submodule $\Lie(T)$, which is the $E_8$-lattice mod $\car k$.\footnote{This is an illustration of a specific case, for $G$ the Weyl group of $E_8$, of Thompson's question mentioned in the introduction.} Then the restriction of the representation $\Lie(E_8)$ to $N_{E_8}(T)$ is the direct sum of two irreducible representations, one of dimension 240 and the other of dimension 8. Since $E_8$ has no nontrivial map into $\SL_8$, it does not preserve either submodule, and so acts irreducibly on $\Lie(E_8)$. This is in contrast to the case where $G$ is simple of type other than $E_8$, where $N_G(T)$ acts reducibly on $\Lie(T)$ for some characteristic (2 for types $B$, $C$, $D$, $E_7$ and $F_4$; 3 for $E_6$ and $G_2$; and dividing $n$ for type $A_{n-1}$). And of course if $G$ has roots of different lengths and is simply connected, then for $\car k = 2$ or 3, the short roots generate a subalgebra of $\Lie(G)$ invariant under $G$, see e.g.~\cite[Lemma 3.2]{Hiss} or \cite[p.~1121]{St:aut}. Here is yet another argument to see that $\Lie(E_8)$ is an irreducible representation for every field $k$. Namely, it is a special case of the following observation: \emph{If $G$ is simple and simply connected over a field $k$, the center $Z$ of $G$ is \'etale\footnote{For example, this is true if $\car k$ is ``very good'' for $G$.}, and all the roots of $G$ have the same length, then $\Lie(G)$ is an irreducible representation of $G$.} To prove this general statement, note that the natural map $\Lie(G) \to \Lie(G/Z)$ has kernel $\Lie(Z) = 0$, so is an isomorphism by dimension count. But the domain is the Weyl module $V(\widetilde{\alpha})$ and the codomain is its dual $V(\widetilde{\alpha})^* = H^0(\widetilde{\alpha})$ because of the assumption on the roots \cite[3.5]{G:vanish}. Since $V(\widetilde{\alpha}) \cong H^0(\widetilde{\alpha})$, they are irreducible $G$-modules. 
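For instance (this is only meant to illustrate the observation under its stated hypotheses), it applies to $G = \SL_n$ over any field $k$ with $\car k \nmid n$: the center $\mu_n$ is then \'etale and all roots have the same length, so $\Lie(\SL_n) \otimes k = \mathfrak{sl}_n(k)$ is an irreducible $\SL_n$-module; the scalar matrices only enter when $\car k$ divides $n$.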
\section{Type $B_n$, weight $\omega_1 + \omega_n$} \label{B.sec} Let $k$ be an algebraically closed field of characteristic $p \ge 0$. Let $G= \Spin_{2n+1}(k)$ for $n \ge 2$. The irreducible $G$-module $L(\omega_1)$ has dimension $2n+1$ if $\car k \ne 2$ and dimension $2n$ if $\car k = 2$. Moreover, the irreducible $G$-module $L(\omega_n)$ is the spin module for $G$ of dimension $2^n$. In this section we show the following, which amounts to a specific case of Theorem \ref{MT}. Although a different proof can be found in Lemmas 2.3.4 and 2.2.7 of \cite{BGT}, we include the proof below because it is a nice illustration of the use of finite group theory to prove a result about connected algebraic groups. \begin{thm} \label{B.thm} Let $G=\Spin_{2n+1}(k)$ with $n \ge 2$. Then \[ \operatorname{dim}\nolimits L(\omega_1 + \omega_n) = \begin{cases} 2^n \cdot 2n & \text{if $\car k$ does not divide $2n+1$;} \\ 2^n \cdot (2n-1) & \text{if $\car k$ does divide $2n+1$.} \end{cases} \] \end{thm} The proof will appear at the end of the section. The analysis will entail the restriction of modules to a monomial subgroup of $\SO_{2n+1}$ via its lift to $\Spin_{2n+1}$ and the use of permutation modules for the alternating group. Let $U := L(\omega_1) \otimes L(\omega_n)$. If $p=0$ (and so also for all but finitely many $p$), this is a direct sum of the two irreducible modules $L(\omega_1 + \omega_n)$ and $L(\omega_n)$. In particular, the Weyl module for the dominant weight $\omega_1 + \omega_n$ has dimension $2n\cdot 2^n$. If $p = 2$ then, as in \cite{St:rep}, $U$ is $L(\omega_1 + \omega_n)$, verifying the theorem. We assume for the rest of the section that $p$ is odd. Note that in $G/Z(G) = \SO_{2n+1}(k)$, there is a finite subgroup $X$ isomorphic to $A.A_{2n+1}$ where $A$ is an elementary abelian $2$-group of rank $2n$ and $A_{2n+1}$ denotes the alternating group on $2n+1$ symbols. The group $X$ is the derived subgroup of the group of orthogonal transformations preserving an orthogonal set of $2n+1$ lines. Let $H$ denote the lift of $X$ to $G$. Let $E$ be the lift of $A$ to $G$. First we note: \begin{lemma} $E$ is extraspecial of order $2^{1+2n}$. \end{lemma} \begin{proof} Since $X$ acts irreducibly on $E/Z(G)$, $E$ is either elementary abelian or extraspecial of the given order. By induction it suffices to see that $E$ is nonabelian in the case $n=2$ (actually we could start with $n=1$). This is clear since $\Spin_5(k) \cong \Sp_4(k)$ and so contains no rank $5$ elementary abelian $2$-groups. \end{proof} Note that $H/E \cong A_{2n+1}$. Let $H_1$ be a subgroup of $H$ containing $E$ with $H_1/E \cong A_{2n}$. The group $E$ has a unique faithful irreducible module over $k$ of dimension $2^n$ that is the restriction of $L(\omega_n)$. (It is a tensor product of $n$ 2-dimensional representations of the central factors of $E$, cf. \cite[5.5.4, 5.5.5]{Gorenstein}.) Since $Z(G)=Z(E)$ acts nontrivially on $U$, every composition factor for $E$ on $U$ is isomorphic to $L(\omega_n)$. It follows immediately that $L(\omega_1)$ and $L(\omega_n)$ are each irreducible for $H$. Note also that $L(\omega_1)$ is induced from a linear character $\phi$ of $H_1$. Thus, as an $H$-module, $U \cong L(\omega_n) \otimes \phi_{H_1}^H$. In fact, we see that we can replace $\phi$ by the trivial character of $H_1$: \begin{lemma} \label{induced1} $U \cong L(\omega_n) \otimes k_{H_1}^H$ as an $H$-module.
\end{lemma} \begin{proof} It suffices to show that $L(\omega_n) \otimesimes \phi_{H_1} \cong L(\omega_n) \otimesimes k_{H_1}$ as $H_1$-modules. Note that they are both irreducible since they are irreducible $E$-modules. If $n=2$, the result is easy to see (alternatively, one can modify the argument below). So assume that $n > 2$. In fact, we observe that any $H_1$-module $V_1$ that is isomorphic to $L(\omega_n)$ as an $E$-module is isomorphic to $L(\omega_n)$ as an $H_1$-module. This follows by noting that $\operatorname{Hom}\nolimits_E(L(\omega_n), V_1)$ is $1$-dimensional and since $H_1/E$ is perfect, $H_1/E$ acts trivially on this $1$-dimensional space, whence $\operatorname{Hom}\nolimits_{H_1}(L(\omega_n), V_1)$ is also $1$-dimensional. Since the two modules are irreducible, this shows they are isomorphic. \end{proof} \begin{lemma}\lambdabel{lem.hom} $\operatorname{dim}\nolimits \operatorname{Hom}\nolimits_H(U,U)=2$. \end{lemma} \begin{proof} This follows by Lemma \ref{induced1} and Frobenius reciprocity. \end{proof} Let $V$ be the unique nontrivial composition factor of $k_{A_{2n}}^{A_{2n+1}}$ (for $n > 1$). This has dimension $2n$ if $p$ does not divide $2n+1$ and dimension $2n-1$ if $p$ does divide $2n+1$. By \cite{Dade} or \cite[Cor.~8.19]{Navarro}, we know: \begin{lemma} Viewing $V$ as an $H$-module (that is trivial on $E$), $L(\omega_n) \otimesimes_k V$ is irreducible. $ \qed$ \end{lemma} \begin{proof}[Proof of Theorem~\ref{B.thm}] From the above, we see that $U$ has two $H$-composition factors if $p$ does not divide $2n+1$ and three composition factors if $p$ does divide $2n+1$. This immediately implies that if $p$ does not divide $2n+1$, then $L(\omega_1 + \omega_n)$ is irreducible for $H$ and has dimension $2n \cdot 2^n$ (whence also for $G$). Now assume that $p$ does divide $2n+1$. For sake of contradiction, suppose that $L(\omega_1 + \omega_n)$ has the same dimension as the Weyl module $V(\omega_1 + \omega_n)$ for $G$, so $U$ has precisely two nonisomorphic composition factors as a $G$-module, $L(\omega_1 + \omega_n)$ and $L(\omega_n)$. Since $U$ is self-dual it would be a direct sum of the two modules. Recall $U$ has three $H$-composition factors (two isomorphic to $L(\omega_n))$. Thus, the $G$-submodule $L(\omega_1 + \omega_n)$ must have two nonisomorphic $H$-composition factors. Again, since $L(\omega_1 + \omega_n)$ is self dual, this implies that $U$ is a direct sum of three simple $H$-modules. This contradicts Lemma~\ref{lem.hom} and completes the proof of Theorem \ref{B.thm}. \end{proof} Our analysis shows that when $p\mid 2n+1$ the Weyl module $V(\omega_{1}+\omega_{n})$ has two composition factors: $L(\omega_{1}+\omega_{n})$, $L(\omega_{n})$. Therefore, one can apply \cite[II.2.14]{Jan} to determine $\operatorname{Ext}\nolimits^{1}$ between these simple modules. \begin{cor} \lambdabel{B.extcor} Let $G = \Spin_{2n+1}$. Then \[ \operatorname{dim}\nolimits \operatorname{Ext}\nolimits^{1}_{G}(L(\omega_1 + \omega_n),L(\omega_{n})) = \begin{cases} 0 & \text{if $\car k$ does not divide $2n+1$;} \\ 1 & \text{if $\car k$ does divide $2n+1$.} \end{cases} \] \end{cor} \sigmaection{Proof of Theorem \ref{MT}} \lambdabel{proofofMain} We now prove Theorem \ref{MT} by induction on the Lie rank of $G$. \sigmaubsection*{Type $A_1$} In case of rank 1, $G$ is $\SL_2$ and $V(d) \otimes k$ is irreducible if and only if, for $p = \car k$, $d+1 =cp^e$ for some $0 < c < p$ and $e \ge 0$ \cite[pp.~239, 240]{WinterSL2}. 
(This can be seen by comparing dimensions: Write out $d$ in base $p$ as $d = \sigmaum_i c_i p^i$. Then $\operatorname{dim}\nolimits V(d) = d+1$ whereas the irreducible module $L(d)$ over $k$ has dimension $\prod_i (c_i + 1)$ by Steinberg's tensor product theorem.) As a consequence, for $d \ge 2$, it is impossible for $V(d) \otimes \mathbb{F}_p$ to be irreducible for both $p = 2$ and 3. An alternate argument (as noted by Andersen) can be provided if one does not require $p \le 3$. Choose a prime $p$ dividing $d$. Now $\operatorname{dim}\nolimits V(d)=d+1$ with $V(d)\twoheadrightarrow L(d)\cong L(d/p)^{(1)}$. But, $\operatorname{dim}\nolimits L(d/p)^{(1)} \leq \frac{d}{p}+1 < d+1=\operatorname{dim}\nolimits V(d)$. So $V(d)$ is reducible in characteristic $p$. \sigmaubsection*{Reductions} So suppose $\rank G \ge 2$ and Theorem \ref{MT} holds for all groups of lower rank. Write $\lambda = \sigmaum c_i \omega_i$ with every $c_i \ge 0$. If some $c_i > 1$, then taking $J = \{ \alpha_i \}$, the Levi subgroup $L_J$ has semisimple type $A_1$ and the restriction of $V_J(\lambda)$ to $L_J$ is reducible when $\car k$ is 2 or 3 by the argument for type $A_{1}$. Therefore, by Theorem \ref{thm:Levireduction} we may assume that $c_i \in \{ 0, 1 \}$ for all $i$. If $\lambda = 0$ or $\lambda = \omega_i$ for some $i$, then we are done by \S\ref{fund.sec}. Hence, we may assume that at least two of the $c_i$'s are nonzero. If there is a connected and proper subset $J$ of $\Delta$ such that $c_i \ne 0$ for at least two indexes $i$ with $\alpha_i \in J$, then we are done by induction and Theorem \ref{thm:Levireduction}. \sigmaubsection*{Sums of extreme weights} The remaining case is when the Dynkin diagram has no branches (i.e., $G$ has type $A$, $B$, $C$, $F_4$, or $G_2$) and $\lambda = \omega_1 + \omega_n$ is the sum of dominant weights corresponding to the simple roots at the two ends of the diagram. For type $A_n$, $G = \SL_{n+1}$ and $V(\omega_1 + \omega_n)$ is the natural action on $\Lie(\SL_{n+1})$, the trace zero matrices. If $p$ divides $n+1$, then the scalar matrices are a $G$-invariant subspace. Type $B$ was handled in Theorem \ref{B.thm}. For type $C_n$ with $n \ge 3$, we restrict to the Levi subgroup of type $C_2$ and find that $V_J(\omega_1 + \omega_n)$ has dimension 5 and is reducible in characteristic 2. Alternatively, as in \cite[\S11]{St:rep}, in characteristic 2 one finds $L(\omega_1 + \omega_n) \cong L(\omega_1) \otimes L(\omega_n)$, which has dimension $n2^{n+1}$, whereas by the Weyl dimension formula, \[ \operatorname{dim}\nolimits V(\omega_1 + \omega_n) = 7\cdot 2^{n-1} \cdot \frac{n(2n+1)}{n+3} \cdot \prod_{i=6}^{n+1} \frac{2i - 3}{i} \quad \text{for $n \ge 4$.} \] In the case of exceptional groups, for type $F_4$, $V(\omega_1 + \omega_4)$ is reducible in characteristic 2 because it has dimension 1053, yet by \cite{St:rep} $L(\omega_1 + \omega_4) \cong L(\omega_1) \otimes L(\omega_4)$ has dimension $26^2 = 676$. For type $G_2$, $\operatorname{dim}\nolimits V(\omega_1 + \omega_2) = 64$ yet by Steinberg in characteristic 3 $L(\omega_1 + \omega_2)$ has dimension $7^2 = 49$. Alternatively, one can refer to \cite[Tables A.49, A.50]{Lubeck}. This completes the proof of Theorem \ref{MT}. \qed \sigmaection{Complements to Theorem \ref{MT}} \sigmaubsection*{Invariant bilinear forms} Let $G$ be a split reductive group over $\mathbb{Z}$ with a split maximal torus $T$ and $\lambda \in X(T)_+$. 
A $G$-invariant bilinear form $b$ on the Weyl module $V(\lambda)$ corresponds to a $G$-equivariant homomorphism $\delta_b \!: V(\lambda) \to V(\lambda)^*$ via $\delta_b(v)(v') = b(v,v')$. The map $\delta_b$ is determined by what it does to a highest weight vector in $V(\lambda)$ and in order that $\delta_b$ be nonzero it must be that $\lambda$ is a weight of $V(\lambda)^*$, and in particular that $\lambda \le -w_0 \lambda$, from which it follows that $\lambda = -w_0 \lambda$. As the highest weight spaces in $V(\lambda)$ and $V(\lambda)^*$ are rank 1 $\mathbb{Z}$-modules, we conclude that the space of $G$-invariant bilinear forms on $V(\lambda)$ is $\mathbb{Z}$ if $\lambda = -w_0 \lambda$ and 0 otherwise. So suppose $\lambda = -w_0 \lambda$ and let $b$ be an indivisible $G$-invariant bilinear form on $V(\lambda)$ --- it is determined up to sign. For each field $k$, the map $\delta_b \!: V(\lambda) \otimes k \to V(\lambda)^* \otimes k$ has kernel the unique maximal proper submodule of $V(\lambda) \otimes k$, see \cite[II.2.4, II.2.14]{Jan}, so \emph{$b \otimes k$ is nondegenerate if and only if $V(\lambda) \otimes k$ is irreducible.} Therefore, Theorem \ref{MT} gives: \begin{cor} \lambdabel{Killing.cor} Suppose, in the notation of the previous two paragraphs, that $G$ is simple and split and $\lambda = -w_0 \lambda$. Then $b \otimes k$ is nondegenerate for every field $k$ if and only if \begin{enumerate} \renewcommand{\alph{enumi}}{\alph{enumi}} \item $\lambda$ is minuscule; or \item $G = E_8$ and $\lambda$ is the highest root $\widetilde{\alpha}$. \end{enumerate} \end{cor} \sigmaubsection*{Failure of converse to Theorem \ref{thm:Levireduction}} As another complement to Theorem \ref{MT}, we make precise the settings where the converse to ``$V(\lambda)$ irreducible implies $V_J(\lambda)$ irreducible'' fails. \begin{thm}\lambdabel{qm} Let $\lambdambda$ be a dominant weight for $G$ a split, simple, and simply connected group over $\mathbb{Z}$ with $\rank G > 1$. Then $V_J(\lambda) \otimesimes k$ is irreducible for every $J \sigmaubsetneq \Delta$ if and only if one of the following occurs: \begin{enumerate} \renewcommand{\alph{enumi}}{\alph{enumi}} \item \lambdabel{qm.min} $\lambda$ is minuscule. \item \lambdabel{qm.hsr} $\lambda$ is the highest short root $\alpha_0$. \item \lambdabel{qm.B} $\Phi=B_n$ and $\lambdambda = \omega_1 + \omega_n$. \item \lambdabel{qm.G} $\Phi=G_2$ and $\lambdambda = \omega_2$ or $\omega_1 + \omega_2$. \end{enumerate} \end{thm} \begin{proof} First assume that one of conditions \eqref{qm.min}--\eqref{qm.G} holds. Then one can directly verify that $V_J(\lambda) \otimesimes k$ is irreducible for every $J \sigmaubsetneq \Delta$ (by Theorem \ref{MT}). On the other hand, suppose that $V_J(\lambda) \otimesimes k$ is irreducible for every $J \sigmaubsetneq \Delta$. Write $\lambda = \sigmaum_i a_i \omega_i$. If some $a_i > 1$ or at least three of the $a_i$ are nonzero, then by Theorem \ref{MT}, we see that $V_J(\lambda)$ is not irreducible with $J$ obtained by removing an end node other than $i$ in the first case or any end node in the second case. Next consider the case when $\lambda = \omega_i + \omega_j$, $i\neq j$. The result follows unless $\{i,j \}$ correspond to all the end nodes. If there are three end nodes, this is not possible. Thus, we only need consider types $A, B, C, F$ and $G$. If $\Phi=A_n$, this leads to \eqref{qm.hsr}. If $\Phi=B_n$, this leads to \eqref{qm.B}. 
Moreover, if $\Phi=C_n, n \ge 3$, then $V_J(\lambda) \otimes k$ is reducible for $J= \Delta-\{\alpha_{1} \}$ and $\car k = 2$. Similarly, if $\Phi=F_4$, $V_J(\lambda) \otimes k$ is reducible for $J=\Delta-\{ \alpha_{4} \}$ and $\car k = 3$. If $\Phi= G_2$, this leads to one of the cases in \eqref{qm.G}. It remains to consider the case that $\lambda = \omega_i$ for some $i$. If $\Phi=A_n$, then $\omega_i$ is minuscule. If $G$ has rank $2$, then removing a single node gives a Levi of type $A_{1}$ and so we have irreducibility as in \eqref{qm.min}, \eqref{qm.hsr}, and \eqref{qm.G}. So assume that $\Phi$ is not of type $A_n$ and has rank at least $3$. It suffices to check, for any $J$ obtained by removing an end node, that $V_J(\lambda)$ irreducible implies that $\lambda$ is either minuscule or $\lambda = \alpha_0$. Suppose that $\Phi=D_n$, $n \ge 4$. If $\lambda$ is not minuscule and $\lambda \ne \alpha_0 = \omega_2$, then we can remove the first node and see that $V_J(\lambda) \otimes k$ is reducible for $\car k = 2$. It remains to consider types $B,C, E$ and $F$. If $i$ does not correspond to an end node, then we can choose $J$ in such a way that the Levi factor $L_J$ of the reduced system does not have type $A_n$ and $V_J(\lambda)$ does not correspond to an end node, whence by Theorem \ref{MT}, $V_J(\lambda)$ is not irreducible. If $G$ has type $B_n$ or $C_n$, then $\omega_1$ and $\omega_n$ either correspond to the highest short root or are minuscule for $L_J$. In the case when $G$ has type $E_6$, then $\omega_i$ corresponding to an end node is either $\alpha_0$ or minuscule for $L_J$. If $G$ has type $F_4$ or $E_n, n \ge 7$, then one checks that the only end node satisfying the hypotheses is $\alpha_0$. \end{proof} \subsection*{Connection with $B$-cohomology} Let $B$ be the Borel subgroup of $G$ corresponding to the negative roots. For $2\rho$ the sum of the positive roots and $N$ the number of positive roots, one can use Serre duality to show that \begin{equation} \text{Hom}_{G}(k,V(-w_{0}\lambda))\cong \text{Ext}^{N}_{B}(k,\lambda-2\rho)\cong \text{H}^{N}(B,\lambda-2\rho), \end{equation} see \cite{HN} and \cite[Theorem 5.5]{GN}. For $\lambda = \widetilde{\alpha}$ the highest root, $\widetilde{\alpha} = -w_0 \widetilde{\alpha}$ and $V(\widetilde{\alpha})$ is the Lie algebra of the simply connected cover of $G$. The adjoint representation for $E_{8}$ is simple for all $p>0$, so $$\text{H}^{120}(B,\widetilde{\alpha}-2\rho)=0.$$ On the other hand, if $G$ is of type $A_{n}$ then $$ \text{H}^{n(n+1)/2}(B,\widetilde{\alpha}-2\rho)\cong \begin{cases} k & \text{if $p\mid n+1$} \\ 0 & \text{if $p\nmid n+1$.} \end{cases} $$ Similar statements can be formulated for other types. The calculation of the $B$-cohomology with coefficients in a one-dimensional representation is an open problem in general. Complete answers are known for degrees $0$, $1$, and $2$ and for most primes in degree $3$. See \cite{BNP} for a survey. \subsection*{Quantum Groups} For quantum groups (Lusztig ${\mathcal A}$-form) at roots of unity, one can ask when the quantum Weyl modules are globally irreducible. The Weyl modules with minuscule highest weights will yield globally irreducible representations. One can prove an analog of Theorem~\ref{thm:Levireduction} and use Levi factors to reduce to the case of fundamental weights or weights of the form $\omega_{1}+\omega_{n}$. For type $A_n$, if the root of unity has order $l$ and $l\mid n+1$ then $V(\omega_{1}+\omega_{n})$ is not simple (see \cite{Fayers}).
This uses representation theory of the Hecke algebra of type $A_n$. From this one can prove the analog of our main theorem (Theorem~\ref{MT}) for quantum groups in the $A_n$ case. In order to handle root systems other than $A_{n}$, more detailed information needs to be worked out, such as the tables given in \cite{Jan:first} and analogs of the results for Weyl modules in type $C_{n}$ given in \cite{PremetSup}. \subsection*{Further Directions} Suppose now that $G$ is a split simply connected algebraic group over $\mathbb{Z}$ and $\lambda$ is a dominant weight. In a preliminary version of this manuscript, we asked to what extent the following statement is true: \emph{If $\mu$ is a dominant weight that is maximal among the dominant weights $< \lambda$, then there is a field $k$ such that $V(\lambda) \otimes k$ has $L(\mu)$ as a composition factor.} Certainly, it is false for $G = E_8$, $\lambda$ the highest root, and $\mu = 0$. Jantzen has recently shown in \cite{Jan:max} that, apart from this one counterexample, the statement holds when $G$ is simple. Note that, in contrast to Theorem \ref{MT}, this result does not include an upper bound on $\car k$ that only depends on the rank of $G$. For example, take $G = \SL_2$ and pick a prime $p$ and a $d > p$ not divisible by $p$. Then $d-2$ is a weight of the irreducible representation $L(d)$ with highest weight $d$ over $\mathbb{F}_p$, so $L(d-2)$ is not in the composition series for $V(d) \otimes \mathbb{F}_p$. \section{Appendix: the Killing form over $\mathbb{Z}$} \subsection*{The reduced Killing form} Let $G$ be a split simple algebraic group over $\mathbb{Z}$. There is a canonical indivisible $G$-invariant bilinear form on $\Lie(G)$, the \emph{reduced Killing form}, which we denote $b_G$. In this appendix, we prove the following result, which is similar in flavor to Theorem \ref{MT} and answers a question raised by George Lusztig. \begin{thm} \label{Killing} The reduced Killing form on $\Lie(G) \otimes k$ is nondegenerate for every field $k$ if and only if $G$ is one of the following groups: \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item $E_8$; \item \label{Killing.SO} $\SO_{2n}$ for some $n \ge 4$; \item \label{Killing.HSpin} $\HSpin_{2n}$ for $n$ divisible by $4$; or \item \label{Killing.SL} $\SL_{m^2}/\mu_m$ for some $m > 1$. \end{enumerate} \end{thm} We actually prove something more. The Lie algebra of $G$ is a free $\mathbb{Z}$-module, so it has a basis $v_1, \ldots, v_n$ for some $n$. The determinant of $b_G$ (denoted $\det b_G$) is the determinant of the matrix with $(i,j)$-entry $b_G(v_i, v_j)$; note that $\det b_G$, as an element of $\mathbb{Z}$, does not depend on the choice of basis. Furthermore, $b_G \otimes k$ is degenerate if and only if $\det b_G = 0$ in $k$. The point of this appendix is to calculate $\det b_G$ for every simple $G$ over $\mathbb{Z}$, from which Theorem \ref{Killing} quickly follows. \subsection*{The case where $G$ is simply connected} In case $G$ is simply connected, the Killing form on $\Lie(G)$ --- a bilinear form over $\mathbb{Z}$ --- is divisible by $2h^\vee$ for $h^\vee$ the dual Coxeter number of $G$; we define $b_G$ to be the quotient. It is even and indivisible \cite[Prop.~4]{GrossNebe}. It is natural to call $b_G$ the reduced Killing form because it is obtained from the Killing form by dividing by the greatest common divisor of its values.
(Note that $b_G$ has the advantage that $b_G \otimes k$ is nonzero for every $k$, a property not satisfied by the Killing form.) For simply connected $G$, $\det b_G$ was calculated in \cite[I.4.8(a)]{SpSt}. Specifically, let $N$ denote the number of positive roots, $N_s$ denote the number of short roots, and $N_{ss}$ the number of short simple roots. Put $c$ for the ratio of the square-lengths of the long to short roots (so $c \in \{ 1, 2, 3 \}$) and $f$ for the determinant of the Cartan matrix. Let $T$ be a split maximal torus in $G$ over $\mathbb{Z}$. Then $\Lie(G)$ is an orthogonal sum of $\mathfrak{t} := \Lie(T)$ and a subspace $\mathfrak{n}$ spanned by the root subalgebras of $G$ with respect to $T$. One checks that $\det b_G\vert_{\mathfrak{n}} = (-1)^N c^{N_{ss}}$. The Lie algebra $\mathfrak{t}$ is naturally identified with the coroot lattice $Q^\vee$, and this identifies the restriction of $b_G$ to $\mathfrak{t}$ with the Weyl-group invariant bilinear form such that $b_G(\alpha^\vee, \alpha^\vee) = 2$ for every short coroot $\alpha^\vee$. In summary, one finds that \begin{equation} \lambdabel{SpSt.formula} \det b_G = (-1)^N c^{N_s + N_{ss}} f \quad \text{for $G$ simply connected.} \end{equation} The value of $\det b_G$ can be found in Table \ref{redkill.table}. The conflicting values given in the table on p.~634 of \cite{GrossNebe} are typos. \begin{table}[hbt] \begin{tabular}{c|c|ccc} Killing-Cartan&$G$ simply connected&\multicolumn{2}{c}{$G$ adjoint} \\ type of $G$&$\det b_G$&$e$&$\det b_G$ \\ \hline $A_n$&$(-1)^{(n^2+n)/2} (n+1)$&$n+1$&$(-1)^{(n^2+n)/2}(n+1)^{n^2+2n-1}$ \\ $B_n$&$(-1)^n 2^{2n+2}$&1&$(-1)^n 2^{2n}$ \\ $C_n$&$(-1)^n 2^{2n^2 - n}$&$\rho(n)$&$(-1)^n 2^{2n^2-n-2} \rho(n)^{2n^2 + n}$ \\ $D_n$ ($n \ge 4$)&$2^2$&$2\rho(n)$&$2^{2n^2-n-2} \rho(n)^{2n^2 - n}$ \\ $G_2$&$3^7$&\multicolumn{2}{c}{$\cdots$} \\ $F_4$&$2^{26}$&\multicolumn{2}{c}{$\cdots$} \\ $E_6$&3&3&$3^{77}$ \\ $E_7$&$-2$&2&$-2^{132}$ \\ $E_8$&1&\multicolumn{2}{c}{$\cdots$} \end{tabular} \caption{Determinant of the reduced Killing form for $G$ simply connected or adjoint. The value of $e$ is taken from \cite[\S3]{G:vanish}. The notation $\rho(n)$ means 1 if $n$ is even and 2 if $n$ is odd.} \lambdabel{redkill.table} \end{table} \sigmaubsection*{Definition of reduced Killing form for $G$ not simply connected} Suppose that $G$ is split simple over $\mathbb{Z}$ and let $f \!: {\widetilde{G}} \to G$ denote the simply connected cover in the sense of \cite[XXI.6.2.6, XXII.4.3.3]{SGA3:new}. Note that ${\widetilde{G}}$ and $G$ may have distinct Lie algebras; the natural map $\Lie({\widetilde{G}}) \otimes k \to \Lie(G) \otimes k$ has kernel $\Lie(\ker f \times k)$, which may be nonzero. The differential $\mathrm{d}f \!: \Lie({\widetilde{G}}) \to \Lie(G)$ gives an isomorphism $\mathrm{d}f_\mathbb{Q} \!: \Lie({\widetilde{G}}) \otimes \mathbb{Q} \to \Lie(G) \otimes \mathbb{Q}$; pushing forward $b_{\widetilde{G}}$ gives a $G$-invariant bilinear form $\hat{b}_G$ on $\Lie(G) \otimes \mathbb{Q}$ and we define the reduced Killing form on $\Lie(G)$ to be $b_G := e \cdot \hat{b}_G$, where $e$ is the smallest positive rational number such that the resulting $b_G$ has integer values. It follows from the indivisibility of $b_{\widetilde{G}}$ that $e$ is an integer. Pick a pinning of $G$ with respect to a split maximal torus $T$, and fix a corresponding pinning of ${\widetilde{G}}$ with respect to the maximal torus $\widetilde{T} := f^{-1}(T)^\circ$. 
The two groups have the same root system and $\mathrm{d}f$ restricts to give an isomorphism for each of the 1-dimensional root subalgebras of $\Lie(G)$ with the corresponding root subalgebra of $\Lie({\widetilde{G}})$. The map $\mathrm{d}f$ embeds $\Lie(\widetilde{T}) = Q^\vee$ in $\Lie(T)$. In case $G$ is adjoint, \cite[I.4.8(b)]{SpSt} says that $\det \hat{b}_G = (\det b_G)/f^2$ and therefore $\det b_G = e^{\operatorname{dim}\nolimits G} (\det b_{\widetilde{G}})/f^2$. This gives the values in the last column of Table \ref{redkill.table}. Note that $\Lie(T)$ is naturally identified with the lattice $P^\vee$ of weights for the dual root system. \sigmaubsection*{Groups that are neither simply connected nor adjoint} It remains to treat the case where $G$ is neither simply connected nor adjoint. Recall that for even $n > 4$, the simply connected group $\Spin_{2n}$ has two non-isomorphic quotients by a central $\mu_2$: $\SO_{2n}$ and one more called a \emph{half-spin group}; we denote it by $\HSpin_{2n}$. (For $n = 4$, $\HSpin_8$ is defined, but it is isomorphic to $\SO_8$.) So suppose $G$ is $\SL_n / \mu_m$ for some $m \mid n$, $\SO_{2n}$ for some $n \ge 4$, or $\HSpin_{2n}$ for some even $n \ge 4$. For each of these three possibilities, we determine $\hat{b}_G\vert_{\Lie(T)}$ and $e$. As all roots have the same length, we use the canonical identification of the root system with its dual and calculate $\Lie(T)$ as a sublattice of the weight lattice $P$. We may identify $\Lie(T)$ by noting that for each possibility for $G$, $\Lie(T)/Q$ is cyclic, so it suffices to find a fundamental weight $\omega$ such that $\Lie(T)$ is generated by $\omega$ and $Q$, and to find a simple root $\alpha$ and $c \in \mathbb{N}$ so that $c\omega$ is in $Q$ and is a sum of $\alpha$ and a linear combination of the other simple roots. Then $\{ \omega \} \cup \Delta \sigmaetminus \{ \alpha \}$ is a basis for $\Lie(T)$. We take, with roots numbered as in Table \ref{dynks.table}: \begin{itemize} \item for $G = \SL_n / \mu_m$: $\omega = (n/m)\omega_{n-1}$, $\alpha = \alpha_1$, $c = m$; \item for $G = \SO_{2n}$: $\omega = \omega_1$, $\alpha = \alpha_n$, $c = 2$; or \item for $G = \HSpin_{2n}$: $\omega = \omega_n$, $\alpha = \alpha_1$, $c = 2$, \end{itemize} see \cite[3.6, 3.7, 5.2]{G:vanish}, although a slightly different choice of basis was taken for $\HSpin_{2n}$ there. (We can now see $e$: as $\hat{b}_G(\omega, \omega) = n(n-1)/m^2$ (as in \cite[Lemma 5.2]{G:vanish}), $1$, and $n/4$ respectively, and $\hat{b}_G(\omega, Q) \sigmaubseteq \mathbb{Z}$, we find that $e = m/\gcd(m, n/m)$, $1$, and $\rho(n/2)$ respectively.) We can write down the Gram matrix for $\det \hat{b}_G\vert_{\Lie(T)}$ with respect to the basis ordered by taking the simple roots in the Bourbaki ordering, deleting $\alpha$, and appending $\omega$. This leads to $\det b_G$ via the formula \begin{equation}\lambdabel{e.det} \det b_G = (-1)^N e^{\operatorname{dim}\nolimits G} \det \hat{b}_G\vert_{\Lie(T)}, \end{equation} which follows from the definition of $e$ and the fact that $\mathrm{d}f$ restricts to an isomorphism on the span of the root subalgebras. For $G = \SL_n / \mu_m$, the Gram matrix is $(n-1)$-by-$(n-1)$, with a Cartan matrix of type $A_{n-2}$ in the upper left corner and the right column and bottom row are both $(0, \ldots, 0, n/m, n(n-1)/m^2)$, so its determinant is $n/m^2$. 
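For the reader's convenience, here is the cofactor expansion behind this determinant (spelled out using only $\det C_{A_r} = r+1$, where $C_{A_r}$ denotes the Cartan matrix of type $A_r$ and we set $\det C_{A_0} := 1$): expanding along the last row and then the last column gives \[ \det \hat{b}_G\vert_{\Lie(T)} = \frac{n(n-1)}{m^2}\det C_{A_{n-2}} - \Bigl(\frac{n}{m}\Bigr)^{2}\det C_{A_{n-3}} = \frac{n(n-1)^{2} - n^{2}(n-2)}{m^{2}} = \frac{n}{m^{2}}. \]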
Equation \eqref{e.det} gives that \begin{equation} \lambdabel{sl.det} \det b_{\SL_n/\mu_m} = (-1)^{\binom{n}{2}} \frac{n}{m^2} \left( \frac{m}{\gcd(m, n/m)} \right)^{n^2 - 1}. \end{equation} For $G = \SO_{2n}$, the Gram matrix is $n$-by-$n$, has a Cartan matrix of type $A_{n-1}$ in the upper left corner, and has right column and bottom row both $(1, 0, \ldots, 0, 1)$. So its determinant is 1, and $\det b_G = 1$. For $G = \HSpin_{2n}$, the Gram matrix is $n$-by-$n$, has a Cartan matrix of type $D_{n-1}$ in the upper left corner, and has right column and bottom row both $(0, \ldots, 0, 1, n/4)$. So its determinant is 1, and $\det b_G = \rho(n/2)$. \begin{proof}[Proof of Theorem \ref{Killing}] Given the calculation of $\det b_G$ above, it suffices to consider the case $G = \SL_n / \mu_m$ where $n \ge 4$. If $n = m^2$, then $\det b_G = \pm 1$ and $b_G$ is nondegenerate. Conversely, assume $b_G$ is nondegenerate, so the restriction to the root subalgebras must be nondegenerate, i.e., $e = 1$, equivalently, $\gcd(m, n/m) = m$, equivalently, $m^2$ divides $n$. Then $\det b_G = \pm n/m^2$. \end{proof} \begin{rmk*} The subdivision of groups of type $D_n$ for $n$ even into the cases $n \equiv 0, 2 \bmod 4$, as seen in Theorem \ref{Killing}\eqref{Killing.HSpin}, can also be seen in the representation theory of these groups: the half-spin representation of $\Spin_{2n}$ over $\mathbb{C}$ is orthogonal for $n$ divisible by 4 and symplectic for $n \equiv 2 \bmod{4}$, see \cite[Ch.~VIII, Table 1]{Bou:g7} or \cite[8.4]{KMRT}. \end{rmk*} \begin{eg} Let $G := \SL_{p^d}/\mu_p$ for some prime $p$ and some $d \ge 3$. Then the reduced Killing form $b_G \otimes k$ on $\Lie(G) \otimes k$ is degenerate when $\car k = p$, where the element $\omega$ in $\Lie(T) \otimes k$ is in the radical. Yet $\Lie(G) \otimes k$ is self-dual because $\Lie(G) \otimes k$ is a direct sum of the self-dual representations $L(\widetilde{\alpha})$ and $k$ by \cite[9.4]{Hum:p}. Furthermore, by \eqref{sl.det}, $b_G \otimes k'$ is non-degenerate for every field $k'$ of characteristic different from $p$. \end{eg} \end{document}
\begin{document} \nocite{*} \title{Meditations on the Farey Fractal} \begin{abstract} We define the ``coronas'', which are especially spikey paths in the Farey graph going from $\underline{0}=(1,0)$ to $\underline{\infty}=(0,1)$. We show that for $R\ge 2$, $\left\{ (x,y)\in \mathbb{N}^{+}\times\mathbb{N}^{+},\; \gcd(x,y)=1, \; x+y\le R \right\}$ is a corona. \end{abstract} \section*{Preface} For me, the greatest mystery of mathematics was Andr\'e Weil's ``Rosetta Stone'' \cite{W1939}, \cite{W1940a}: the analogies between number fields and function fields. \\ Regarding the Riemann hypothesis, initially I followed Weil's approach \cite{W1966} to Tate's thesis \cite{T1950}, viewing it as the harmonic analysis of the action of $\mathbb{A}^{\star}_{K}/K^{\star}$ on $\mathbb{A}_{K}/K^{\star}$ (or on distributions on $\mathbb{A}_{K}$ that are $K^{\star}$-invariant), and on $\mathbb{P}^{1}\left( \mathbb{A}_{K} \right)/K^{\star}$. This suggested that a (real valued) index-theorem, an analogue of the Riemann-Roch theorem for the associated surface, would give a proof of the Riemann Hypothesis along the lines of Weil's proof, see \cite{H1989}. The formula \cite{H1990} for Weil's explicit-sums-distribution \cite{W1952} was also the starting point for the program of Alain Connes and collaborators (cf. \cite{C1999} appendix, where the formula of \cite{H1990} is stated in an asymptotic form). But note that there is still not even a proof of Weil's Riemann-Hypothesis for a function field $K$ \cite{W1941} using the ``non-commutative'' space $\mathbb{A}_{K}/K^{\star}$! \\ The mysterious analogy between number fields and function fields is clarified by the concept of generalized-ring (see \cite{H2010} for a quick introduction). The language of generalized-rings can be used as the foundation of algebraic geometry in the style of Grothendieck (see \cite{H2017}): \begin{itemize} \item The final object of geometry is the \underline{absolute-point} $\text{spec}(\mathbb{F})$, where $\mathbb{F}$ is the initial object of generalized-rings, the \underline{``Field with one element''}. \item The real $\mathbb{R}$ and complex $\mathbb{C}$ numbers, when viewed as (topological) \break generalized-rings, have (maximal compact topological)-sub-generalized-rings $\mathbb{Z}_{\mathbb{R}}\subseteq \mathbb{R}$, and $\mathbb{Z}_{\mathbb{C}}\subseteq \mathbb{C}$ (analogous to $\mathbb{Z}_{p}\subseteq \mathbb{Q}_{p}$); and $\text{spec}(\mathbb{Z})$ and $\text{spec}(O_{K})$, $K$ a number field, have natural compactifications. \item There are non-trivial Arithmetical surfaces, and higher arithmetical dimensions, as the tensor-product ($=$the categorical sum) does not reduce to its diagonal: \begin{equation*} \mathbb{Z}\otimes_{\mathbb{F}}\mathbb{Z} \neq \mathbb{Z}. \end{equation*} \item There is a natural generalization of homological algebra, and of the derived category of quasi-coherent sheaves of $O_{X}$-modules, and the derived functors of direct and inverse images, \cite{H2020}. \\ \end{itemize} However, we are still missing an arithmetical analogue of the \underline{Frobenius}\break \underline{correspondence}.
For the \underline{tropical} examples (see \cite{H2010} and \cite{H2017}, p.29): \begin{equation*} \mathcal{B} = \left\{ 0,1 \right\}_{t} \subseteq \mathcal{J}=[0,1]_{t}\subseteq\mathcal{R}=[0,\infty)_{t} \end{equation*} where the subscript ``$t$'' indicates that addition is $x+y:=\max\left\{ x,y \right\}$, we have that $\mathcal{R}$ is a generalized-field, and the (multiplicative) group $\mathbb{R}^{+}$ acts on $\mathcal{R}$ by the automorphisms $x\mapsto x^{p}$, with fixed field the \underline{Boolean-field} $\mathcal{B}$. This resembles the Frobenius-automorphism $x\mapsto x^{p}$ of the field $\overline{\mathbb{F}_{p}}$ with fixed field $\mathbb{F}_{p}$. Unfortunately, the tensor-product (=categorical sum, in the categories of generalized-rings or of semi-rings) vanishes: $\mathbb{Z}\otimes\mathcal{B}=\left\{ 0 \right\}$, the zero $=$ final object. \\ As Weil suggested \cite{W1942}, the arithmetical analogue of extending scalars to $\overline{\mathbb{F}_{p}}$ is the cyclotomic extension obtained by adding (all!) roots of unity $\bbmu_{\infty}$ (this idea, in the $p$-power cyclotomic extension, $\mathbb{Z}\left[ \mu_{p^{\infty}} \right]$, was developed by K. Iwasawa, who related it to the Kubota-Leopoldt $p$-adic $L$-function \cite{Iw1972}). \\ All this makes the prospect of seeing, in our life-time, a proof of the Riemann-Hypothesis, along the lines of Weil's proof for a function field, unrealistic! But perhaps there is an alternative route: after all, the field of rational numbers $\mathbb{Q}$ is the analogue of the field of rational functions $\mathbb{F}_{p}(T)$, and the Riemann-Hypothesis for $\mathbb{F}_{p}(T)$, that is for $\mathbb{P}^1/\mathbb{F}_{p}$, is a triviality: there are no zeros of the zeta function, only the poles, and we can \underline{constructively} generate all the primes, $\overline{\mathbb{F}}_{p}/\left(x\hspace{-.15cm}\sim\hspace{-.15cm} x^{p}\right)$, and we can count exactly the number of points, \begin{equation*} \# \mathbb{P}^{1}\left( \mathbb{F}_{p^{d}} \right) = \frac{p^{2d}-1}{p^{d}-1}=p^{d}+1 \end{equation*} (and the Riemann-Hypothesis for a general function field follows from this by the Bombieri-Stepanov argument). So perhaps, for the basic number field of rational numbers $K=\mathbb{Q}$, there is a \underline{constructive} route. \section{Introduction} The non-zero natural numbers, $\mathbb{N}^{+}=\mathbb{N}\setminus \left\{ 0 \right\}$, are the free commutative unital monoid on the set of \underline{primes}, \begin{equation} \text{Prime} = \mathbb{N}^{++}\setminus \left( \mathbb{N}^{++}\bullet \mathbb{N}^{++} \right)\quad , \quad \mathbb{N}^{++} = \mathbb{N}^{+} \setminus \left\{ 1 \right\}. \label{eq:1} \end{equation} Writing $\left\{ \alpha \right\}$ for $1$ unconditionally, and for $\alpha$ if we assume the \underline{Riemann Hypothesis}, we have \begin{equation} \frac{1}{\zeta(s)} = \prod\limits_{p\in\text{Prime}} \left( 1-p^{-s} \right) = \sum\limits_{n\ge 1} \frac{\mu(n)}{n^{s}} \quad \text{converges for $\real(s)>\left\{ \frac{1}{2} \right\}$} \label{eq:2} \end{equation} This is equivalent to the statement that \begin{equation} \displaystyle \left| \sum\limits_{n=1}^{R} \mu(n)\right| = \left| \sum\limits_{ \begin{array}[H]{c} 1\le x \le y \le R \\ \gcd(x,y) = 1 \end{array} } \displaystyle e^{2\pi i {x}/{y}} \right| = O \left( R^{ \left\{ 1/2 \right\} } \log\, R \right).
\label{eq:3} \end{equation} Similarly, the positive rational numbers $\mathbb{Q}^{+}$ are the free abelian group on the primes \begin{equation} \mathbb{Q}^{+} = \mathbb{Z}\; \text{Prime} = \bigoplus\limits_{p\in \text{Prime}} p^{\mathbb{Z}} \label{eq:4} \end{equation} While this does not determine the set of primes as in (\ref{eq:1}), we do have that every $v\in \mathbb{Q}^{+}$ can be written uniquely as $v=y/x$ with $x,y\in \mathbb{N}^{+}$, and $\gcd(x,y) = 1 $; we write $\underline{v}=(x,y)$, so that \begin{equation} \underline{\mathbb{Q}}^{+}\equiv \left\{ (x,y)\in \mathbb{N}^{+}\times\mathbb{N}^{+},\ \gcd (x,y) = 1 \right\} \equiv (\mathbb{N}^{+}\times \mathbb{N}^{+})\setminus \mathbb{N}^{++}\bullet (\mathbb{N}^{+}\times \mathbb{N}^{+}) \label{eq:5} \end{equation} We have now quite similarly to (\ref{eq:3}), with $\Phi (n)= \#\left(\mathbb{Z} /{n\mathbb{Z}} \right)^{*}$, \begin{equation} \begin{array}[H]{lll} \left| \sum\limits_{n=1}^{R} \Phi(n)\right| &=& 1 + \# \left\{ (x,y)\in \underline{\mathbb{Q}}^{+}, x+y \le R \right\} \\\\ &=& 1 + \sum\limits_{n\ge 1} \mu(n)\cdot \# \left\{ (x,y)\in \mathbb{N}^{+}\times \mathbb{N}^{+}, x+y\le\frac{R}{n} \right\}\\\\ &\approx& \sum\limits_{n\ge 1} \mu(n)\cdot \frac{1}{2}\left( \frac{R}{n} \right)^2 \\\\ &=& \frac{R^2}{2\zeta(2)}+O\left( R^{ \left\{ \frac{1}{2} \right\}} \log R \right) \end{array} \label{eq:6} \end{equation} Indeed, on the analytic side we have, since $\Phi$ is multiplicative, \begin{equation} \begin{array}[H]{lll} \sum\limits_{n\ge 1}\frac{\Phi(n)}{n^{s}} &=& \prod\limits_{p\in\text{Prime}} \left( 1+(1-p^{-1})\sum\limits_{n\ge 1}p^{n(1-s)} \right) \\\\ &=& \prod\limits_{p\in\text{Prime}} \frac{1-p^{-s}}{1-p^{1-s}} = \frac{\zeta(s-1)}{\zeta(s)}. \end{array} \label{eq:7} \end{equation} This has a simple pole at $s=2$, with residue $\frac{1}{\zeta(2)}$, and otherwise is analytic for $\real(s)> \left\{ \frac{1}{2} \right\}$. \\ The functions $\mu(n)$ and $\Phi(n)$ are as mysterious as the primes, e.g. $\text{Prime} \equiv \left\{ n\in \mathbb{N}^{++}, \Phi(n) = n-1 \right\}$. But the sum in (\ref{eq:6}) is better than the sum in (\ref{eq:3}), because it can be made \underline{constructive}. Our purpose here is to \underline{linearize} this constructive approach so as to have explicit recursive formulas for the sum in (\ref{eq:6}) and to explore some of the structures behind it. Curiously, there is some kind of interaction between the binary and the Fibonacci expansions of integers. \section{The Farey Graph} The group $\mathbb{Q}^{+}$ embeds as a dense subgroup of the multiplicative group of positive real numbers \begin{equation} \mathbb{Q}^{+} \hookrightarrow\mathbb{R}^{+}= (0,\infty) \subseteq [0,\infty] \label{eq:8} \end{equation} and we have an induced total order $\le$ on $\underline{\mathbb{Q}}^{+}$.
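As a small aside before we continue, the counting identity behind (\ref{eq:6}) is the kind of statement a machine can confirm instantly. The following Python sketch (an illustration only, not part of the formal development; the helper names are ours) compares $\sum_{n\le R}\Phi(n)$ with $1+\#\{(x,y)\in\underline{\mathbb{Q}}^{+},\, x+y\le R\}$ for a few values of $R$.
\begin{verbatim}
from math import gcd

def phi(n):
    """Euler's totient Phi(n), by trial division (fine for small n)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def coprime_pairs(R):
    """#{ (x, y) : x, y >= 1, gcd(x, y) = 1, x + y <= R }."""
    return sum(1 for x in range(1, R) for y in range(1, R)
               if x + y <= R and gcd(x, y) == 1)

for R in (5, 10, 50, 200):
    lhs = sum(phi(n) for n in range(1, R + 1))
    print(R, lhs, 1 + coprime_pairs(R), lhs == 1 + coprime_pairs(R))
\end{verbatim}
Each line should report equality, in accordance with (\ref{eq:6}).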
We add to $\underline{\mathbb{Q}}^{+}$ the points in the plane \begin{equation} \underline{0}:= (1,0)\qquad , \qquad \underline{\infty}:= (0,1), \label{eq:9} \end{equation} and we have the \underline{Farey Graph} $\mathcal{G}$ with vertices $\mathcal{G}_0 = \left\{ \underline{\infty} \right\} \amalg \underline{\mathbb{Q}}^{+}\amalg \left\{ \underline{0} \right\}$ and edges \begin{equation} \mathcal{G}_1=\left\{ \begin{array}[h]{l} \left( v_{-}=(x_{-},y_{-}), v_{+}=(x_{+},y_{+})\right)\in \mathcal{G}_0\times \mathcal{G}_0 \;, \\ \det \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} = x_{-}y_{+}-y_{-}x_{+}=1 \end{array}\right\} \label{eq:10} \end{equation} Every edge $(v_{-},v_{+})\in \mathcal{G}_1$ gives a \underline{parallelogram} \begin{equation} \begin{array}[H]{ll} P_{v} = \left\{ t_{+}v_{+}+t_{-}v_{-}; t_{\pm}\in [0,1] \right\} \\ \area\left( P_{v} \right) = \det \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} =1 \quad , \quad P_{v} \cap (\mathbb{N}\times \mathbb{N}) = \left\{ (0,0), v_{-},v,v_{+} \right\} \end{array} \label{eq:11} \end{equation} and a \underline{triangle} \begin{equation} \begin{array}[H]{l} \Delta_{v} = \left\{ t_{+}v_{+}+t_{-}v_{-}; \; t_{\pm}\in [0,1]\; , \; t_{+}+t_{-}\ge 1 \right\} \\ \area \left( \Delta_v \right)=\frac{1}{2} \quad , \quad \Delta_{v}\cap \left( \mathbb{N}\times\mathbb{N} \right) = \left\{ v_{+}>v>v_{-} \right\} \end{array} \label{eq:12} \end{equation} where we denote them using the \underline{mediant} \begin{equation} v=v_{+}+v_{-} = \left( x_{+}+x_{-}, y_{+}+y_{-} \right) \label{eq:13} \end{equation} (Obtained as the vector addition in the plane; not addition in $\mathbb{Q}^{+}$!). \\ If we add to the triangles $\left\{ \Delta_{v} \right\}_{v\in \underline{\mathbb{Q}}^{+}}$ the triangle \begin{equation} \Delta_{0} = \left\{ (t_{+},t_{-}); \; t_{\pm}\in[0,1],\; t_{+}+t_{-}\le 1 \right\} \label{eq:14} \end{equation} we get a triangulation of the first quadrant of the plane minus the multiples $t\cdot v$, $v\in \underline{\mathbb{Q}}^{+}\amalg \left\{ \underline{0},\underline{\infty} \right\}$, $t>1$: \begin{equation} [0,\infty)\times [0,\infty)\, \setminus\, (1,\infty)\bullet \left( \underline{\mathbb{Q}}^{+}\amalg \left\{ \underline{0},\underline{\infty} \right\} \right) \equiv \Delta_{0} \amalg \coprod\limits_{v\in\underline{\mathbb{Q}}^{+}}\Delta_{v} \label{eq:15} \end{equation} \section{The binary tree} The monoid \begin{equation} \begin{array}[H]{lll} \SL_2(\mathbb{N})&:=& \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \SL_2(\mathbb{Z}),\; a,b,c,d\ge 0 \right\} \\\\ &\equiv & \left\{ \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix}, \; (v_{-},v_{+})\in \mathcal{G}_1 \right\} \equiv \mathcal{G}_{1} \end{array} \label{eq:16} \end{equation} acts on $\underline{\mathbb{Q}}^{+}$, (and on $\underline{\mathbb{Q}}^{+}\amalg \left\{ \underline{0},\underline{\infty} \right\})$, on the right: \begin{equation} \begin{array}[H]{lll} (x,y) \begin{pmatrix} v_{-}\\ v_{+} \end{pmatrix} = x\cdot v_{-}+y\cdot v_{+} \\\\ \underline{0} \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix}= v_{-} \qquad , \qquad \underline{\infty} \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} = v_{+} \end{array} \label{eq:17} \end{equation} This action preserves the graph structure $\mathcal{G}$, and the triangles $\Delta_{v}$: \begin{equation} \Delta_{v}\cdot g = \Delta_{vg}\quad , \quad v\in\underline{\mathbb{Q}}^{+}\quad , \quad g\in \SL_2(\mathbb{N}).
\label{eq:18} \end{equation} For $g= \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \SL_2(\mathbb{N})\text{Set}minus \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right\} $, $ad=bc+1$, we have \begin{equation} \begin{array}[H]{ll} \text{either} & g\cdot \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} a & b-a \\ c & d-c \end{pmatrix} \in \SL_2(\mathbb{N}) \\\\ \text{or} & g \cdot \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^{-1} = \begin{pmatrix} a-b & b \\ c-d & d \end{pmatrix} \in \SL_2(\mathbb{N}) \end{array} \label{eq:19} \end{equation} i.e. either ($b\ge a$ and $d\ge c$) \underline{or} ($b\le a$ and $d\le c$). \\ It follows that $\SL_2(\mathbb{N})$ is the \underline{free} monoid on these two generators \begin{equation} \SL_{2}(\mathbb{N}) = \left\langle g_{+} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} , g_{-}= \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \right\rangle \label{eq:20} \end{equation} and every $g\in \SL_{2}(\mathbb{N})$ has a unique representation as a ``\underline{word}'' \begin{equation} \begin{array}[H]{l} g = g_{\delta}^{a_\ell}\dots g_{+}^{a_{2}}g_{-}^{a_{1}}g_{+}^{a_0}\quad , \quad a_0\ge 0, \quad a_{1+i}\ge 1 \quad , \quad \delta = \delta(g) = (-1)^{\ell} \end{array} \label{eq:21} \end{equation} The action of $\SL_{2}(\mathbb{N})$ on $\underline{\mathbb{Q}}^{+}$ is free, and we get an \underline{identification} \begin{equation} \begin{array}[H]{rcl} \SL_2(\mathbb{N}) &\xleftrightarrow{\qquad \sim \qquad}& \underline{\mathbb{Q}}^{+} \\\\ \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} &\xleftrightarrow{\quad\quad} & v \\\\ g= \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} &\xmapsto{\quad \quad}& v=(1,1)g = v_{+}+v_{-} \\\\ g_{v} = \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} & \tikz \draw[arrows={stealth[scale=.5]-|[scale=.5]}] (0,0)--(.9,0); & v \end{array} \label{eq:22} \end{equation} Under this identification we have \begin{equation} g_{v}=g_{\delta}^{a_{\ell}}\cdots g_{-}^{a_1}g_{+}^{a_0}\xleftrightarrow{\quad\quad} v=(x,y) \label{eq:23} \end{equation} iff we have the \underline{continued fraction expansion} \begin{equation} \begin{array}[H]{l} \ldbrack a_0,\cdots , a_{\ell} \rdbrack := a_{0}+\frac{1}{a_1+\frac{1}{ \begin{array}[H]{lll} a_2+ \hspace{-.3cm} & \ \\ &\ddots & \\ & & \hspace{-0.2cm} a_{\ell-1}+\frac{1}{a_{\ell}+1} \end{array} }} = y/x \\\\ \text{here $\delta=\delta(v)=(-1)^{\ell}\in \left\{ \pm \right\}$} \end{array} \label{eq:24} \end{equation} We get the maps of \underline{upper} and \underline{lower bounds}: \begin{equation} \begin{array}[H]{l} t_{+}: \underline{\mathbb{Q}}^{+} \xrightarrow{\quad\quad}\underline{\mathbb{Q}}^{+} \amalg \left\{ \underline{\infty} \right\}, \quad t_{+}(v) = v_{+} = \underline{\infty}\; g_v \\\\ t_{-}: \underline{\mathbb{Q}}^{+} \xrightarrow{\quad\quad} \underline{\mathbb{Q}}^{+}\amalg \left\{ \underline{0} \right\}, \quad t_{-}(v)= v_{-} = \underline{0}\; g_{v} \\\\ t_{+}^{-1}(\underline{\infty}) = \left\{ (1,n), n\ge 1 \right\} = f_{\underline{\infty}}^{-}, \quad t_{-}^{-1}(\underline{0}) = \left\{ (n,1), n\ge 1 \right\} = f_{\underline{0}}^{+} \end{array} \label{eq:25} \end{equation} We get a structure of a binary tree on $\underline{\mathbb{Q}}^{+}$, the Stern-Brocot tree, with root $\underline{1}=(1,1)$, and each $v\in \underline{\mathbb{Q}}^{+}$ has the two \underline{offsprings} \begin{equation} \begin{array}[H]{l} \begin{tikzpicture} \coordinate [label=left:$v$] (A) at (-1cm,-1cm); \coordinate [label=right:$v+v_{+}$] (B) at (0.5cm,-0.2cm); \coordinate [label=right:$v+v_{-}$] (C) at 
(0.5cm,-2cm); \draw[->] (A)--(B); \draw[->] (A)--(C); \end{tikzpicture} \end{array} \label{eq:26} \end{equation} In terms of the identification (\ref{eq:22}), this can be written as \begin{equation} \begin{array}[H]{l} \begin{tikzpicture} \node (A) at (-1cm,-1cm) {$ g= \begin{pmatrix} v_{-} \\ v_{+} \end{pmatrix} $}; \node (B) at (2.5cm,-0.2cm) {$g_{+}\cdot g = \begin{pmatrix} v \\ v_{+}\end{pmatrix}$}; \node (C) at (2.5cm,-2cm) {$g_{-}\cdot g = \begin{pmatrix}v_{-} \\ v \end{pmatrix}$}; \draw[->] (A)--(B); \draw[->] (A)--(C); \end{tikzpicture} \end{array} \quad , \quad v=v_{+}+v_{-}. \label{eq:27} \end{equation} cf. figure 1 \begin{figure} \caption{The Stern-Brocot tree} \label{fig:1} \end{figure} \ \\ Thus each $v\in \underline{\mathbb{Q}}^{+}\text{Set}minus \left\{ (1,1) \right\}$ has a unique \underline{Mother} given by \begin{equation} \begin{array}[H]{c} M(v) = t_{-\delta(v)}(v) = \left\{ \begin{array}[H]{ll} v_{-} & \delta(v)=+ \\ v_{+} & \delta(v)=- \end{array} \right\} \in \underline{\mathbb{Q}}^{+} \\ M^{-1}(v) = \left\{ v+v_{+}, v+v_{-} \right\} \end{array} \label{eq:28} \end{equation} We also define the \underline{Father} of $v\in \underline{\mathbb{Q}}^{+}\text{Set}minus \left\{ (1,1) \right\}$ to be \begin{equation} F(v) = t_{\delta(v)}(v)= \left\{ \begin{array}[H]{cc} v_{+} & \delta(v)=+ \\\\ v_{-} & \delta(v)=- \end{array} \right\}\in \underline{\mathbb{Q}}^{+}\amalg \left\{ \underline{0},\infty \right\} \label{eq:29} \end{equation} We have for all $v\in \underline{\mathbb{Q}}^{+}\text{Set}minus \left\{ (1,1) \right\}$, \begin{equation} \begin{array}[H]{c} \left\{ F(v),M(v) \right\} \equiv \left\{ v_{+},v_{-} \right\} \\\\ \text{if } \delta(v) =\; +,\;\text{"convex"}:\;\; F(v)=v_{+}, \; M(v)=v_{-} \\\\ \text{if } \delta(v) =\; - , \;\text{"concave"}:\;\; F(v)=v_{-}, \; M(v)=v_{+} \\\\ \end{array} \label{eq:30} \end{equation} For the root $v=\underline{1}=(1,1)$, we have $g_{v}= \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, $ $v_{+}=\underline{\infty}$, $v_{-}=\underline{0}$, and we consider these two points, $\underline{\infty}$ and $\underline{0}$, to be \underline{both} the mothers and fathers of $v=\underline{1}$. \\ If $v=(x,y)$ has continued fraction expansion (\ref{eq:24}), then \begin{equation} \begin{array}[H]{l} \begin{array}[H]{ll} \begin{array}[H]{ll} \left. \begin{array}[H]{ll} M(v) = \ldbrack a_0,\cdots , a_{\ell} -1\rdbrack \\\\ F(v)=\ldbrack a_0,\cdots , a_{\ell-1} \rdbrack \end{array} \right\} & \text{if}\; a_{\ell}>1; \end{array}\\\\ \begin{array}[H]{ll} \text{if $a_{\ell}=1$: } \left. \begin{array}[H]{ll} M(v) = \ldbrack a_0,\cdots , a_{\ell -1}\rdbrack \\\\ F(v)=\ldbrack a_0,\cdots , a_{\ell-2} \rdbrack \end{array} \right\} & \ell\ge 2 ; \end{array}\\\\ \text{for } \ell=1\;:\; F(v)=a_0\; ; \; \text{for}\; \ell=0:\; F(a_0)=\underline{\infty}. \end{array} \end{array} \label{eq:31} \end{equation} Note that if $F(v)=v_{\delta}$, there is a unique $m=m(v)\ge 1$, such that \begin{equation} v_{\delta} = F(v) = M^{1+m}(v) \label{eq:32} \end{equation} i.e. every father is an ``m-th great grandmother'', and more precisely we have \begin{equation} v= \left\{\begin{array}[H]{ll} (1+m)v_{-}+t_{+}(v_{-}) & \delta(v) = - \\\\ (1+m)v_{+}+t_{-}(v_{+}) & \delta(v) = + \end{array}\right. \label{eq:33} \end{equation} cf. figure \ref{fig:2}. 
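The descriptions (\ref{eq:28})--(\ref{eq:31}) are easy to experiment with. The following Python sketch (an illustration only, with ad hoc names) recovers the Farey parents $v_{-},v_{+}$ of a coprime pair by descending the Stern-Brocot tree, and reads off the Mother and the Father: for $v\neq(1,1)$ the Mother is the Farey parent with the larger coordinate sum, i.e. the parent created later in the tree, which matches (\ref{eq:28})--(\ref{eq:29}).
\begin{verbatim}
from fractions import Fraction
from math import gcd

def farey_parents(x, y):
    """Return (v_minus, v_plus): v = v_minus + v_plus, x_- y_+ - y_- x_+ = 1."""
    assert x >= 1 and y >= 1 and gcd(x, y) == 1
    lo, hi = (1, 0), (0, 1)              # underline 0 and underline infinity
    cur = (1, 1)                         # the root underline 1
    target = Fraction(y, x)              # the value y/x of v
    while cur != (x, y):
        if Fraction(cur[1], cur[0]) < target:
            lo = cur                     # descend to the offspring v + v_plus
        else:
            hi = cur                     # descend to the offspring v + v_minus
        cur = (lo[0] + hi[0], lo[1] + hi[1])
    return lo, hi

def mother_father(x, y):
    """For v != (1,1): M(v) is the parent with the larger coordinate sum."""
    vm, vp = farey_parents(x, y)
    return (vm, vp) if vm[0] + vm[1] > vp[0] + vp[1] else (vp, vm)

print(farey_parents(3, 5))     # ((2, 3), (1, 2)): 5/3 is the mediant of 3/2, 2/1
print(mother_father(3, 5))     # M(3,5) = (2,3),  F(3,5) = (1,2)
print(mother_father(1, 3))     # M(1,3) = (1,2),  F(1,3) = (0,1) = infinity
\end{verbatim}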
\begin{figure} \caption{ Concavity and convexity of $\delta(v)$ } \label{fig:2} \end{figure} \ \\ The \underline{direct descendants} of $v\in\mathbb{Q}^{+}$ are its offsprings $M^{-1}\left( v \right)= \left\{ v+v_{+},v+v_{-} \right\}$, as well as the elements of $F^{-1}(v)$, and they form the \underline{Fin around $v$}, with its positive and negative parts: \begin{equation} \begin{array}[H]{lll} f_{v} = M^{-1}(v)\amalg F^{-1}(v) = f_{v}^{+}\amalg f_{v}^{-} \\\\ f_{v}^{+} := \left\{ v_{+}+nv, n\ge 1 \right\} \quad , \quad f_{v}^{-} := \left\{ v_{-}+nv, n\ge 1 \right\} \\\\ \text{and we put also }\\\\ f_{\underline{\infty}} = f_{\underline{\infty}}^{-} = F^{-1}(\underline{\infty}) = \left\{ (1,n) \right\}_{n\ge 1}\quad , \quad f_{\underline{0}} = f_{\underline{0}}^{+} = F^{-1}(\underline{0}) = \left\{ (n,1) \right\}_{n\ge 1}. \end{array} \label{eq:34} \end{equation} cf. figure 3. \begin{figure} \caption{Fin around $v$} \label{fig:3} \end{figure} \section{The collection of $\lhd$-sets} The binary tree structure on $\mathbb{Q}^{+}$ gives a partial order $\lhd$ on $\mathbb{Q}^{+}$, \begin{equation} v^{\prime}\lhd v \quad \text{iff}\quad v^{\prime} = M^n(v) \quad \text{for some $n\ge 0$.} \label{eq:35} \end{equation} \begin{definition} A $\lhd$-set $c$ is a finite subset of $\mathbb{Q}^{+}$, such that \[ v\in c, \quad v^{\prime}\lhd v \Longrightarrow v^{\prime} \in c \] \label{def:36} or equivalently, $M(c)\subseteq c$ and $c$ is a \underline{finite subtree} of $\mathbb{Q}^{+}$. \end{definition} We let $\mathcal{C}$ denote the collection of all $\lhd$-sets, it is a lattice: \begin{equation} c,c^{\prime}\in \mathcal{C} \Longrightarrow c\cap c^{\prime}, \quad c\cup c^{\prime}\in \mathcal{C} \label{eq:37} \end{equation} \begin{equation} \mathcal{C}=\coprod_{m\ge 1}\mathcal{C}_{m}\quad , \quad \mathcal{C}_{m}=\left\{ c\in \mathcal{C}, \# c= m-1 \right\} \label{eq:38} \end{equation} For $v\in \mathbb{Q}^{+}$ we have the $\lhd$-set $c_{v}$ consisting of the path from the root ${\underline{1}}=(1,1)$ to $v$, \begin{equation} c_v = \left\{ v^\prime\in \mathbb{Q}^{+}, v^{\prime}\lhdeq v \right\} = \left\{ M^{n}(v) \right\}_{n\ge 0} \label{eq:39} \end{equation} We have \begin{equation} \begin{array}[H]{ll} c_{v_1}\cap c_{v_2} = c_{v_1\Lambda v_2} \\\\ \Lambda:\mathbb{Q}^{+}\times \mathbb{Q}^{+} \to \mathbb{Q}^{+} \quad \text{associative, commutative and } \\\\ v_{1}\lhd v_{2} \Longleftrightarrow v_{1}\Lambda v_{2}= v_{1} \\\\ v_{1}< v_{2} \Longleftrightarrow v=v_{1}\Lambda v_{2} \quad \text{satisfies} \quad v+v_{+}\lhd v_{2} \text{ or } v+v_{-}\lhd v_1. \end{array} \label{eq:40} \end{equation} Every $\lhd$-set $c$ is determined by its set of $\lhd$-maximal elements $c^{\max}$: \begin{equation} c = \bigcup\limits_{v\in c^{\max}}c_{v} \label{eq:41} \end{equation} For a $\lhd$-set $c\in \mathcal{C}_{m}$, we write its elements in increasing $\le$ order \begin{equation} c= \left\{ c_{m-1}>\cdots > c_2 > c_{1} \right\}\quad , \quad m=1+\# c, \label{eq:42} \end{equation} and we put \begin{equation} \begin{array}[H]{lll} c_{m}=\underline{\infty}\quad , \quad c_{0}=\underline{0} \\\\ \partial c = \left\{ [c_{i-1},c_i] \right\}_{i=1}^{m} \end{array} \label{eq:43} \end{equation} The collection of edges $\partial c=\left\{ \left[ c_{i-1},c_{i} \right] \right\}$ forms a \underline{polygonal path} in the Farey graph going from $\underline{0}$ to $\underline{\infty}$. 
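This correspondence can be tested mechanically: the following Python sketch (an illustration only; all names are ours) builds a $\lhd$-set as a union of paths $c_{v}$, cf. (\ref{eq:39}) and (\ref{eq:41}), and checks that consecutive vertices of $\partial c$, cf. (\ref{eq:43}), have determinant one.
\begin{verbatim}
from fractions import Fraction

def parents(v):
    """Farey parents (v_minus, v_plus) of v in Q+, by Stern-Brocot descent."""
    lo, hi, cur = (1, 0), (0, 1), (1, 1)
    t = Fraction(v[1], v[0])
    while cur != v:
        if Fraction(cur[1], cur[0]) < t:
            lo = cur
        else:
            hi = cur
        cur = (lo[0] + hi[0], lo[1] + hi[1])
    return lo, hi

def c_of(v):
    """c_v = { M^n(v) : n >= 0 }, the path from the root to v, cf. (39)."""
    out = set()
    while v != (1, 1):
        out.add(v)
        vm, vp = parents(v)
        v = vm if sum(vm) > sum(vp) else vp      # the Mother M(v)
    out.add((1, 1))
    return out

def boundary_is_farey_path(c):
    """Order c, adjoin 0 and infinity, check det = 1 on consecutive pairs."""
    pts = [(1, 0)] + sorted(c, key=lambda p: Fraction(p[1], p[0])) + [(0, 1)]
    return all(a[0] * b[1] - a[1] * b[0] == 1 for a, b in zip(pts, pts[1:]))

c = c_of((3, 5)) | c_of((4, 3))                  # a union of two paths c_v
print(sorted(c, key=lambda p: Fraction(p[1], p[0])))
print(boundary_is_farey_path(c))                 # True
\end{verbatim}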
Conversely, every path in the Farey graph $\left\{ \left[ c_{i-1},c_{i} \right] \right\}$, $c_{0}=\underline{0}$, $c_m=\underline{\infty}$, $\det \begin{pmatrix}c_{i-1} \\ c_{i}\end{pmatrix}=1$, forms a $\lhd$-set $c=\left\{ c_{i} \right\}_{i=1}^{m-1}\in \mathcal{C}_{m} $. \\ Thus the finite subtrees of $\underline{\mathbb{Q}}^{+}$ are precisely the paths in $\mathcal{G}$ from $\underline{0}$ to $\underline{\infty}$. \section{Structure of $\lhd$-sets} For $c=\left\{ c_{i} \right\}\in \mathcal{C}$ we have \begin{equation} \begin{array}[H]{lll} c^{\max} &= \left\{ c_i,c_{i+1}\lhd c_i, c_{i-1}\lhd c_i \right\} \\\\ &= \left\{ c_{i},c_{i}=c_{i+1}+c_{i-1} \right\} \\\\ &= \left\{ v\in c,\; v+v_{+}\not\in c \; \text{and} \; v+v_{-}\not\in c \right\} \text{ the leaves of the tree $c$.} \end{array} \label{eq:45} \end{equation} We also define the \underline{local $\lhd$-minima} \begin{equation} \begin{array}[H]{lll} \Phi c &= \left\{ c_{j}, c_{j}\lhd c_{j+1} \; \text{and}\; c_{j}\lhd c_{j-1} \right\} \\\\ &= \left\{ v\in c, v+v_{+}\in c \; \text{and} \; v+v_{-}\in c \right\} \end{array} \label{eq:46} \end{equation} and we put \begin{equation} c^{\min} = \Phi c \amalg \left\{ \underline{0},\underline{\infty} \right\}. \label{eq:47} \end{equation} These sets are intertwined: \begin{equation} \begin{array}[H]{lll} c^{\max} = \left\{c_{i_{\ell}}>\cdots > c_{i_{1}}>c_{i_{0}} \right\} \\\\ c^{\min} = \left\{ \underline{\infty}>c_{j_{\ell}}> \cdots > c_{j_{1}}> \underline{0} \right\} \\\\ \text{with} \quad m>i_{\ell}>j_{\ell}> \cdots > i_{1}>j_{1}>i_{0}> 0 \end{array} \label{eq:48} \end{equation} \ \\ \begin{remark} There are still more perspectives on $\lhd$-sets. \\ With $c\in\mathcal{C}_{m}$ we have the \underline{triangulated polygon} \[\Delta (c):= \bigcup\limits_{v\in c}\Delta_{v}\] and the associated \underline{Frieze pattern} \[ \begin{array}[H]{lll} f(c):= \left\{ f(c)_{0}, f(c)_{1}, \cdots , f(c)_{m} \right\}, \quad f(c)_{j}=\# \left\{ v\in c, c_{j}\in \Delta_{v} \right\} \\\\ c^{\max}=\left\{ c_{j},f(c)_{j}=1 \right\} \end{array} \] \label{remark:44} \end{remark} Clearly $f(c)$ determines $c$. We have the associated \underline{bipartite graph} \[ \begin{array}[H]{lll} c \amalg \left\{ \underline{0},\underline{\infty} \right\} \xleftarrow{\;\;\pi_{0}\;\;} \mathbb{B}(c)\xrightarrow{\;\;\pi_{1}\;\;}c \\\\ \mathbb{B}(c):= \left\{ (v^{\prime},v), v^{\prime}\in \Delta_{v},v\in c \right\} \\\\ \# \pi_{0}^{-1}(c_{j}) = f(c)_{j}, \quad j=0,\cdots,m \; ; \quad \# \pi_{1}^{-1}(c_{j}) \equiv 3, \;\; j=1,\cdots,m-1. \end{array} \] \ \\ \begin{remark} The monoid $\SL_{2}(\mathbb{N})$ has two commuting \underline{involutions.} \\ One is the automorphism (outer in $\SL_{2}(\mathbb{Z})$, inner in $\mbox{GL}_{2}(\mathbb{Z})$), \begin{equation*} \begin{array}[H]{lll} g = \begin{pmatrix} x_{-} & y_{-} \\ x_{+} & y_{+} \end{pmatrix} \longmapsto g^{\star} := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} g \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} y_{+} & x_{+} \\ y_{-} & x_{-} \end{pmatrix} \\\\ \left( g_{\pm} \right)^{\star} = g_{\mp}\quad , \quad \left( g_{1}\cdot g_{2} \right)^{\star} = g_{1}^{\star}\cdot g_{2}^{\star}\quad , \quad g^{\star\star}=g.
\end{array} \end{equation*} \label{remark:2} \end{remark} The other is the \underline{anti}-automorphism \begin{equation*} \begin{array}[H]{ll} g= \begin{pmatrix} x_{-} & y_{-} \\ x_{+} & y_{+} \end{pmatrix} \longmapsto g^{t}:= \begin{pmatrix} x_{-} & x_{+} \\ y_{-} & y_{+} \end{pmatrix} \\\\ \left( g_{\pm} \right)^{t} = g_{\mp} \quad , \quad \left( g_1 \cdot g_2 \right)^{t} = g_{2}^{t}\cdot g_{1}^{t}\quad , \quad g^{tt}=g. \end{array} \end{equation*} In terms of the identification $\SL_{2}(\mathbb{N})\equiv \underline{\mathbb{Q}}^{+}$ these read: \begin{equation*} \begin{array}[H]{ll} v^{\star}= \left( y/x \right)^{\star} = \left( \frac{y_{+}+y_{-}}{x_{+}+x_{-}} \right)^{\star} = \frac{ x_{-}+x_{+}}{y_{-}+y_{+}} = x/y = v^{-1} \\\\ v^{t} = \left( \frac{y_{+}+y_{-}}{x_{+}+x_{-}} \right)^{t} = \frac{y_{+}+x_{+}}{y_{-}+x_{-}} = \frac{\;|v_{+}|_{1}\;}{|v_{-}|_{1}}. \end{array} \end{equation*} \subsection{Creation and annihilation operators} For $c=\left\{ c_{i} \right\}_{i=1}^{m-1}\in\mathcal{C}_{m}$, we have $c\cup \left\{ c_{i}+c_{i-1} \right\}\in \mathcal{C}_{m+1}$, $i=1,\cdots, m$, and we obtain the \underline{creation operator} \begin{equation} \begin{array}[H]{lll} d^{\star}:\mathbb{Z}\mathcal{C}_{m}\longrightarrow \mathbb{Z}\mathcal{C}_{m+1}\\\\ d^{\star}[c]:= \sum\limits_{i=1}^{m}\left[ c\cup \left\{ c_{i}+c_{i-1} \right\} \right]. \end{array} \label{eq:49} \end{equation} Similarly, for $c_{j}\in c^{\max}$, we have $c\setminus \left\{ c_{j} \right\}\in \mathcal{C}_{m-1}$, and we obtain the \underline{annihilation} \underline{operator} \begin{equation} \begin{array}[H]{lll} d:\mathbb{Z}\mathcal{C}_m \longrightarrow \mathbb{Z}\mathcal{C}_{m-1} \\\\ d[c] := \sum\limits_{c_{j}\in c^{\max}} \left[ c\setminus \left\{ c_{j} \right\} \right] \end{array} \label{eq:50} \end{equation} Since the operations of adding a mediant and of removing a (different) maximal point commute, we see that the \underline{Number operator} \begin{equation} N=d\circ d^{\star}-d^{\star}\circ d : \mathbb{Z} \mathcal{C}_{m}\longrightarrow \mathbb{Z}\mathcal{C}_{m} \label{eq:51} \end{equation} is diagonalizable in the basis of $\lhd$-sets, and we have \begin{equation} N[c] = \left( m-\# c^{\max} \right)\cdot [c], \quad c\in \mathcal{C}_m . \label{eq:52} \end{equation} \ \\ \begin{remark} \underline{Frieze indices}: For $c\in\mathcal{C}$, $c_{i}=v\in c$, we have: \begin{equation*} \begin{array}[H]{lll} c_{i-1}\in f_{v}^{-}\amalg \left\{ v_{-} \right\}=\left\{ v_{-}+n\cdot v, n\ge 0 \right\},\quad c_{i-1}=v_{-}+n\cdot v, \quad n=n(c)_{i }^{-}\ge 0, \\\\ c_{i+1}\in f_{v}^{+}\amalg \left\{ v_{+} \right\}= \left\{ v_{+}+n\cdot v, n\ge 0 \right\}, \quad c_{i+1} = v_{+}+ n\cdot v, \quad n=n(c)_{i}^{+}\ge 0. \\\\ c_{i}\in c^{\max} \Longleftrightarrow n(c)_{i}^{-}=n(c)_{i}^{+} = 0 \\\\ c_{i}\in \Phi c \Longleftrightarrow n(c)_{i}^{-}> 0 \;\; \text{and} \;\; n(c)_{i}^{+}>0 \\\\ f(c)_{i}=1+n(c)_{i}^{+}+n(c)_{i}^{-}, \quad i=1,\cdots , m-1.
\end{array} \end{equation*} \label{remark:53} Note that if $n=n(c)_{i}^{-}>0$, then there exists \begin{equation} i_{0}<i_{1}<\cdots < i_{n}\equiv i-1, \quad \text{with} \quad c_{i_{k}} = v_{-}+k\cdot v, \quad k=0,\cdots n; \label{eq:54} \end{equation}\ Similarly, if $m=n(c)_{i}^{+}>0$, then there exists \begin{equation} j_{0}>j_{1}> \cdots > j_{m}\equiv i+1, \quad \text{with}\quad c_{j_{k}} = v_{+}+k\cdot v, \quad k=0,\cdots , m\; ; \label{eq:55} \end{equation} \end{remark} \subsection{The operad structure} For $c=\left\{ c_{i} \right\}_{i=1}^{m-1}\in\mathcal{C}_{m}$, so $\begin{pmatrix}c_{i-1}\\ c_{i}\end{pmatrix}\in \SL_{2}(\mathbb{N})$, and for $b^{(i)}=\left\{ b_{j}^{(i)} \right\}_{j=1}^{n_{i}-1}\in\mathcal{C}_{n_{i}}$ $i=1,\cdots, m$ (as usual $c_{m}=b_{n_{i}}^{(i)}=\underline{\infty}$, $c_{0}= b_{0}^{(i)}=\underline{0}$), we obtain the path $I_{i}=\left\{ b_{j}^{(i)} \begin{pmatrix}c_{i-1} \\ c_{i}\end{pmatrix} \right\}_{j=0}^{n_{i}}$ from $\underline{0} \begin{pmatrix}c_{i-1}\\ c_{i}\end{pmatrix} = c_{i-1}$ to $\underline{\infty}\begin{pmatrix} c_{i-1}\\ c_{i}\end{pmatrix}=c_{i}$, and together $\bigcup\limits_{i}I_{i}$ gives a new path from $\underline{0}$ to $\underline{\infty}$ in $\mathcal{G}$, denoted by $c\circ b$, and we obtain, \begin{proposition} The set $\mathcal{C}=\coprod\limits_{m\ge 1}\mathcal{C}_{m}$ is an \underline{operad} via \begin{equation*} \begin{array}[H]{l} \begin{array}[H]{ccccccrl} \mathcal{C}_{m} &\times &\mathcal{C}_{n_1} &\times &\cdots &\times &\mathcal{C}_{n_{m}} & \xrightarrow{\qquad}\mathcal{C}_{n_1+\ldots+ n_{m}} \\\\ c &, &b^{(1)} &, &\cdots &, &b^{(m)} & \xmapsto{\qquad} c\circ b \end{array} \\\\ c \circ b:= \left\{ b_{j}^{(i)} \begin{pmatrix} c_{i-1} \\ c_{i} \end{pmatrix}\right\}\;\; 1\le i \le m, \;\; 1\le j\le n_{i} \\\\ \left( c\circ b \right)\circ a = c\circ \left( b\circ a \right)\quad , \quad \left\{ \phi \right\}\circ c = c = c\circ \left\{ \phi \right\}^{m}. \end{array} \end{equation*} \label{prop:56} \end{proposition} The unit is the empty $\lhd$-set $\phi$, $\mathcal{C}_{1}=\left\{ \phi \right\}$, $\partial \phi =\left\{ [0,\infty] \right\} $; the root $\underline{1}=(1,1)$ satisfy $\partial\left\{ \underline{1}\right\}=\left\{ [\underline{0},\underline{1}],[\underline{1},\underline{\infty}] \right\}$, $\Phi\left\{ \underline{1}\right\}=\phi $; and we put $\Phi(\phi)=\phi$. \\ \section{Coronas} We shall identify the integers $\mathbb{Z}$ with $\lhd$-set via \begin{equation} \mathbb{Z}\ni m \xleftrightarrow{\qquad}\nu_{m}= \left\{ \begin{array}[H]{l} \left\{ (1,1),(1,2),\cdots, (1,m+1) \right\}, m\ge 0 \\\\ \left\{ (1,1) \right\}\quad m=0 \\\\ \left\{ (1,1),(2,1),\cdots, (|m|+1,1) \right\},\;\; m\le0 \end{array} \right\}\in \mathcal{C}_{|m|+2} \label{eq:57} \end{equation} Note that the $\lhd$-set $\nu_{m}$ has a unique $\lhd$-maximal point, and the path $\partial \nu_{m}$ consists of a straight line from $\underline{0}$ to the maximal point, followed by a straight line from the maximal point to $\underline{\infty}$; this properly characterizes the $\lhd$-sets $\left\{ \nu_{m} \right\}, \; m\in \mathbb{Z}$. Thus for a $\lhd$-set $c\in\mathcal{C}_{m}$, and vector $\nu=\sum\limits_{i=1}^{m}n_{i}\cdot \left[ c_{i-1},c_{i} \right]\in \mathbb{Z}\partial c$, we obtain the $\lhd$-set \begin{equation} c\circ \nu:=c\circ \left\{ \nu_{n_i} \right\}\in \mathcal{C}_{2m+|v|} \quad , \quad |\nu|=\sum\limits_{i=1}^{m} |n_{i}|. 
\label{eq:58} \end{equation} Explicitly, the $\lhd$-set $c\circ \nu$ is obtained by replacing $\left\{ c_{i}>c_{i-1} \right\}$ in $c$ by \begin{equation} \begin{array}[H]{lll} \left\{ c_{i}> (n+1)\cdot c_{i}+c_{i-1}> \cdots > c_{i}+c_{i-1}>c_{i-1} \right\}, \quad n_{i}=n\ge 0, \\\\ \left\{ c_{i}>c_{i}+c_{i-1}> c_{i-1} \right\}\quad , \quad n_{i}=0, \\\\ \left\{ c_{i}> c_{i}+c_{i-1}>\cdots > c_{i}+(|n|+1)\cdot c_{i-1}> c_{i-1} \right\}, \quad n_{i}=n\le 0. \end{array} \label{eq:59} \end{equation} We have \begin{equation} \Phi(c\circ \nu) = c. \label{eq:60} \end{equation} The $\lhd$-set $c\circ \nu$ is a $\star$-set according to the \begin{definition} $A$ $\star$-set $c\in \mathcal{C}_{m}^{\star}$ is a $\lhd$-set $c\in \mathcal{C}_{m}$ such that for any consecutive edges of $\partial c$, $c_{i+1}> c_{i}> c_{i-1}$, we have: \begin{equation*} \begin{array}[H]{lll} \text{either} & c_{i}\in c^{\max}\Longleftrightarrow c_{i}= c_{i+1}+c_{i-1} \\\\ \text{or}\quad & c_{i}\in c^{\min} \Longleftrightarrow c_{i}\lhd c_{i+1} \quad \text{and}\quad c_{i}\lhd c_{i-1} \end{array} \end{equation*} or they form a straight line: $c_{i}-c_{i-1}= c_{i+1}-c_{i}\Longleftrightarrow c_{i}=\frac{1}{2}\left( c_{i+1}+c_{i-1} \right)$. \label{eq:61} \end{definition} We write $\mathcal{C}^{\star}=\coprod\limits_{m\ge 1}\mathcal{C}_{m}^{\star}$ for the collection of $\star$-sets. \begin{remark} For $c\in\mathcal{C}^{\star}$, and for $c_{i'}>c_{i} $ two consecutive points of $c^{\min}$, let $c_{j}$ be the unique point of $c^{\max}$ between them, $i'>j>i$, we have either a positive or a negative fin between them: there exists $m>n\ge 0$ such that either \begin{equation} \begin{array}[H]{ll} \left( f_{c_{i}}^{+} \right): \; c\cap \left[ c_{i}, c_{i'} \right] \equiv \left\{\begin{array}[H]{rl} c_{i'}\equiv (c_{i})_{+}+n\cdot c_{i} &> (c_{i})_{+}+(n+1)c_{i} >\cdots \\ \cdots &> c_{j}=c_{i+1}= (c_{i})_{+}+mc_{i}> c_{i} \end{array}\right\} \\\\ \text{(note that $c_{i'}\lhd c_{i}$; unless $n=0$ and $c_{i}\lhd c_{i'}$); put $\lambda_{c}\left( \left[ c_{i}, c_{i'} \right] \right)=n-m+1$; or} \\\\ \left( f_{c_{i'}}^{-} \right): c\cap \left[ c_{i},c_{i'} \right]\equiv \left\{ \begin{array}[H]{rl} c_{i'}>c_{j}=c_{i'-1}= (c_{i'})_{-}+mc_{i'} &>\cdots \\ \cdots &> (c_{i'})_{-}+ nc_{i'} =c_{i} \end{array} \right\} \end{array} \label{eq:63} \end{equation} \label{remark:62} \end{remark} (note that $c_{i}\lhd c_{i'}$; unless $n=0$ and $c_{i'}\lhd c_{i}$); put $\lambda_{c}\left( \left[ c_{i},c_{i'} \right] \right)=m-n-1$. Thus in any case $[c_{i}, c_{i'}]$ is an edge of the Farey graph, so that $\partial c^{\min}$ is again a path from $\underline{0}$ to $\underline{\infty}$ in $\mathcal{G}$, and $\Phi c\in \mathcal{C}$ is again a $\lhd$-set. Thus any $\star$-set $c$ can be written (uniquely) as \begin{equation} c= \Phi c\circ \lambda_{c}. 
\label{eq:64} \end{equation} We obtain the fibration \begin{equation} \begin{array}[H]{lll} \Phi: \mathcal{C}^{\star}\xtworightarrow{\quad} \mathcal{C} \\\\ c\mapsto \Phi c= c^{\min}\setminus \left\{ \underline{0},\underline{\infty} \right\} \\\\ \mathbb{Z} \partial \overline{c}\equiv \Phi^{-1}(\overline{c}) \end{array} \label{eq:65} \end{equation} We get the pull-back diagram \begin{equation} \begin{array}[H]{lllll} \mathcal{C}^{\star} \xrightarrow{\qquad\Phi\qquad} & \mathcal{C} & \ & \\\\ \text{\protect\rotatebox{90}{$\subseteq$}} \;\; & \text{\protect\rotatebox{90}{$\subseteq$}} & \ & \\\\ \Phi^{-1}(\mathcal{C}^{\star})\xrightarrow{\qquad\Phi\qquad} &\mathcal{C}^{\star}& \xrightarrow{\qquad\Phi\qquad} & \mathcal{C} & \\\\ \text{\protect\rotatebox{90}{$\subseteq$}} & \text{\protect\rotatebox{90}{$\subseteq$}} & & \text{\protect\rotatebox{90}{$\subseteq$}} \\\\ \Phi^{-2}(\mathcal{C}^{\star})\xrightarrow{\qquad\Phi\qquad} & \Phi^{-1}(\mathcal{C}^{\star})& \xrightarrow{\qquad\Phi\qquad} & \mathcal{C}^{\star} \xrightarrow{\qquad\Phi\qquad} \mathcal{C} & \\\\ \vdots \end{array} \label{eq:66} \end{equation} \begin{definition} The set of \underline{Coronas} is \begin{equation*} \begin{array}[H]{lll} \cor:= \bigcap\limits_{n\ge 0} \Phi^{-n}(\mathcal{C}^{\star}) = \coprod\limits_{m\ge 1}\cor_{m} \\\\ \cor_{m}=\left\{ c\in \cor, \# c= m-1 \right\} \end{array} \end{equation*} \label{definition:1.67} \end{definition} We get the fibration \begin{equation} \begin{array}[H]{cc} \Phi:&\cor\xrightarrow{\qquad}\cor \\\\ \ & c\xmapsto{\qquad} \Phi c \\\\ \mathbb{Z} \partial \overline{c} &\equiv \Phi^{-1}(\overline{c}) \end{array} \label{eq:68} \end{equation} Given $c\in \cor_{m}$ and any vector $\nu=\sum\limits_{i=1}^{m}n_{i} \left[ c_{i-1},c_i \right] \in \mathbb{Z}\partial c$, we get $c\circ \nu\in \cor_{2m+|\nu|}, |\nu|=\sum\limits_{i=1}^{m}|n_{i}|$, and $\Phi(c\circ \nu)=c$. Conversely, any $c\in \cor_{m}$ can be written uniquely as \begin{equation} c=\Phi(c)\circ\lambda_{c}, \qquad \text{with}\quad \Phi(c)\in \cor_{\frac{1}{2}\left( m-|\lambda_{c}| \right)}, \quad \lambda_c \in \mathbb{Z}\partial \Phi c. \label{eq:69} \end{equation} \section{Structure of coronas} We give next a constructive approach to coronas based on the \\ \underline{Inductive Principle}: For $c\in \cor_{m}$, $m>1$, there exists \begin{equation*} c_{j}\in c^{\max} \;\; \text{such that} \;\; c\setminus \left\{ c_{j} \right\}\in \cor_{m-1}. \end{equation*} Indeed, writing $c=\Phi c\circ \lambda_{c}$, $\lambda_{c}=\sum n_{i}\left[ \overline{c}_{i-1},\overline{c}_{i} \right]$, $\Phi c=\overline{c}=\left\{ \overline{c}_{i} \right\}$, if $n_{i_{0}}\neq 0$ we can take $c_{j}$ to be the $\lhd$-maximal element in $c\cap \left[ \overline{c}_{i_{0}-1},\overline{c}_{i_0} \right]$, $\Phi(c\setminus \left\{ c_{j} \right\})\equiv \Phi c$. Otherwise, $\lambda_{c}\equiv 0$, $\overline{c}=\Phi c \in \cor_{\frac{m}{2}}$, by induction there is $\overline{c}_{i}\in \overline{c}^{\max}$ such that $\overline{c}\setminus \left\{ \overline{c}_{i} \right\}\in\cor_{\frac{m}{2}-1}$, and we can take $c_{j}$ to be either $\overline{c}_{i+1}+\overline{c}_{i}$ or $\overline{c}_{i}+\overline{c}_{i-1}$.
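The definition of coronas is effectively computable: one only has to iterate $\Phi$ and test the $\star$-condition at each stage, using the fin description of Remark \ref{remark:53}. The following Python sketch (an illustration only; all names are ours) represents a $\lhd$-set by its boundary path from $\underline{0}$ to $\underline{\infty}$ and decides whether it is a corona.
\begin{verbatim}
from fractions import Fraction

def parents(v):
    """Farey parents (v_minus, v_plus) of v in Q+, by Stern-Brocot descent."""
    lo, hi, cur = (1, 0), (0, 1), (1, 1)
    t = Fraction(v[1], v[0])
    while cur != v:
        if Fraction(cur[1], cur[0]) < t:
            lo = cur
        else:
            hi = cur
        cur = (lo[0] + hi[0], lo[1] + hi[1])
    return lo, hi

def fin_index(w, v, vpar):
    """The n >= 0 with w = vpar + n*v, or None if there is no such n."""
    dx, dy = w[0] - vpar[0], w[1] - vpar[1]
    if dx % v[0] == 0:
        n = dx // v[0]
        if n >= 0 and dy == n * v[1]:
            return n
    return None

def classify(path):
    """Maxima and minima of a Farey path; ok records the star-condition."""
    mx, mn, ok = [], [], True
    for i in range(1, len(path) - 1):
        v, prv, nxt = path[i], path[i - 1], path[i + 1]
        vm, vp = parents(v)
        n_minus = fin_index(prv, v, vm)
        n_plus = fin_index(nxt, v, vp)
        if n_minus == 0 and n_plus == 0:
            mx.append(v)                            # v = prv + nxt
        elif n_minus and n_plus:
            mn.append(v)                            # v lies in Phi(c)
        elif (2 * v[0], 2 * v[1]) != (prv[0] + nxt[0], prv[1] + nxt[1]):
            ok = False                              # neither max, min, nor straight
    return mx, mn, ok

def is_corona(path):
    """Iterate Phi and test the star-condition at every stage."""
    while len(path) > 2:
        _, mn, ok = classify(path)
        if not ok:
            return False
        path = [path[0]] + mn + [path[-1]]
    return True

tri = [(1, 0), (2, 1), (1, 1), (1, 2), (0, 1)]          # the full second level
bad = [(1, 0), (1, 1), (2, 3), (1, 2), (1, 3), (0, 1)]  # a tree, but not a corona
print(is_corona(tri), is_corona(bad))                    # True False
\end{verbatim}
The first path is a corona; the second is a $\lhd$-set which already fails the $\star$-condition at $(1,1)$.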
\\ Thus every $c\in \cor_{m}$ is obtained from the empty corona $\phi\in\cor_{1}$ by adding one point at a time, and the set $\cor=\coprod\limits_{m\ge 1}\cor_{m}$ forms the vertices of a connected rooted graph with edges \begin{equation} \cor^{1} \equiv \left\{ (c,c^{\prime})\in\cor\times\cor , c\subseteq c^{\prime}, \# c^{\prime}=\# c+1 \right\} \label{eq:71} \end{equation} cf. figure 4. \begin{figure} \caption{First six levels of Coronas} \label{fig:4} \end{figure} \ \begin{figure} \caption{Level seven of the corona tree} \label{fig:5} \end{figure} \ \\ \ \subsection{Creation and annihilation operators} For $c\in\cor_{m}$ define the set of \underline{closed points} of $c$, $\cl(c)\subseteq c^{\max}$, by declaring all $c_{j}\in c^{\max}$ to be closed except when $c_{j+1},c_{j-1}\in c^{\min}$ and we have \begin{equation} \begin{array}[H]{llll} \text{either} & c_{j+1}\lhd c_{j-1}\lhd c_{j-2} & \text{and} & c_{j-1} \neq \frac{1}{2}\left( c_{j+1}+c_{j-2} \right) \\\\ \text{or} & c_{j-1}\lhd c_{j+1}\lhd c_{j+2} & \text{and} & c_{j+1}\neq \frac{1}{2}\left( c_{j+2}+c_{j-1} \right) \end{array} \label{eq:72} \end{equation} Thus, \begin{equation} \begin{array}[H]{ll} \cl(c) = \left\{ c_{j}\in c^{\max}, c\setminus\left\{ c_{j} \right\}\in \cor_{m-1} \right\} \\\\ \# \cl(c) = 1+\# \Phi c -h^{0}(c), \quad 0\le h^{0}(c) \le \# \Phi c. \end{array} \label{eq73} \end{equation} We obtain the \underline{annihilation operator} \begin{equation} \begin{array}[h]{ll} d:\mathbb{Z}\cor_{m}\xrightarrow{\qquad} \mathbb{Z}\cor_{m-1} \\\\ d\left[ c \right] = \sum\limits_{c_{j}\in\cl(c)} \left[ c\setminus \left\{ c_{j} \right\} \right] \end{array} \label{eq:74b} \end{equation} Similarly, for $c\in \cor_{m}$ define the set of \underline{open edges} of $c$ \begin{equation} \begin{array}[H]{lll} \text{Op}(c)=\left\{ \left[ c_{i-1},c_{i} \right]\in\partial c, c\cup \left\{ c_{i}+c_{i-1} \right\}\in\cor_{m+1} \right\} \\\\ \#\text{Op}(c)=2+2\cdot \#\Phi c - h^{1}(c)\quad, \quad 0\le h^{1}(c) \le \#\Phi c. \end{array} \label{eq:75} \end{equation} Indeed, for $c_{i^{\prime}}>c_{i}$ consecutive points of $c^{\min}$, $c_{j}\in c^{\max}$ the $\lhd$-maximal point between them, cf. (\ref{eq:63}), the edge $\left[ c_{i},c_{j} \right]\equiv \left[ c_{i},c_{i+1} \right]$ (resp. $\left[ c_{j},c_{i^{\prime}} \right]\equiv \left[ c_{i^{\prime}-1},c_{i^{\prime}} \right]$) is always open in case of $f_{c_{i}}^{+}$ (resp. $f^{-}_{c_{i^{\prime}}}$); the only other possibly open edge in $c\cap \left[ c_{i},c_{i^{\prime}} \right]$ is the edge $[c_{i^{\prime}-1},c_{i^{\prime}}]$ (resp. $[c_{i},c_{i+1}]$), and this edge is open iff $\left[ c_{i},c_{i^{\prime}} \right]$ is open in $\Phi c$.
\\ We obtain the \underline{creation operator} \begin{equation} \begin{array}[H]{l} d^{\star}:\mathbb{Z}\cor_{m}\xrightarrow{\quad}\mathbb{Z}\cor_{m+1} \\\\ d^{\star}[c] = \mathlarger{\mathlarger{\mathlarger{\sum}}}\limits_{\left[ c_{i-1},c_{i} \right]\in\text{Op}(c)} \left[ c\cup \left\{ c_{i}+c_{i-1} \right\} \right] \end{array} \label{eq:76} \end{equation} Since the operations of adding a mediant and that of removing a (different) maximal point commute, we see that the \underline{Number operator} \begin{equation} N=d\circ d^{\star} - d^{\star}\circ d: \mathbb{Z}\cor_{m}\xrightarrow{\qquad} \mathbb{Z}\cor_{m} \label{eq:77} \end{equation} is diagonalizable in the basis of coronas with eigenvalues \begin{equation} \begin{array}[H]{c} N[c] = e_{c}\cdot [c] \\\\ e_{c} = \#\text{Op}(c)-\#\cl(c) = 1 +\#\Phi c + h^{0}(c)-h^{1}(c), \qquad 1\le e_{c}\le 1+2\cdot \#\Phi c \\\\ \text{and when } h^{0}(c)=h^{1}(c):e_{c}=1+\#\Phi c = \# c^{\max}. \end{array} \label{eq:78} \end{equation} \subsection{The d.n.a. of a corona} For $c\in \cor_{m}$ define its \underline{height} to be \begin{equation} \text{ht}(c) = \min \left\{ n, \Phi^{n}c = \phi \right\} \label{eq:7.9} \end{equation} For $n=1,2,\cdots, \text{ht}(c)$ we have $\Phi^{n}c\in \cor_{\ell_{n}}$, and \begin{equation} \Phi^{n-1}c = \Phi^{n}c\circ \lambda_{c}^{n} \quad , \quad \lambda_{c}^{n}\in\mathbb{Z}\partial\Phi^{n}c\cong \mathbb{Z}^{\ell_{n}}. \label{eq:7.10} \end{equation} So that we have \begin{equation} c=\phi \circ \lambda_{c}^{\text{ht}(c)}\circ \cdots \lambda_{c}^{n}\circ \cdots \lambda_{c}^{1} \label{eq:7.11} \end{equation} Thus $c$ is determined by the $\mathbb{Z}$-valued vectors \begin{equation} \lambda_{c}^{n} = \left( \lambda_{1}^{n},\lambda_{2}^{n},\ldots , \lambda_{\ell_n}^{n} \right)\in \mathbb{Z}^{\ell_{n}}, \;\; \lambda_{j}^{n}\in \mathbb{Z}, \label{eq:7.12} \end{equation} of length $\ell_n=2\cdot \ell_{n+1}+|\lambda_{c}^{n+1}|$, $\left|\lambda_{c}^{n+1}\right|=\sum\limits_{j}|\lambda_{j}^{n+1}|$; $\ell_{\text{ht}(c)}=1$. \\ We refer to $\lambda_c^{\text{ht}(c)}\in\mathbb{Z}, \ldots , \lambda_{c}^{n}\in \mathbb{Z}^{\ell_n}, \ldots , \lambda_{c}^{1}\in \mathbb{Z}^{\ell_1}$ as the \underline{d.n.a. of $c$.} We have \begin{equation} \begin{array}[H]{ll} m &= \# c+1=2^{\text{ht}(c)}+2^{\text{ht}(c)-1}\cdot |\lambda_c^{\text{ht}(c)}|+ \cdots 2^{n}\cdot |\lambda_{c}^{n+1}|+\cdots |\lambda_{c}^{1}| \\\\ &= "\ell_0" = 2\cdot \ell_1+|\lambda_{c}^{1}| \end{array} \label{eq:7.13} \end{equation} \section{The main examples} Beside the real total order $\le$, and the tree partial order $\lhd$, we shall use the following two partial orders on $\underline{\mathbb{Q}}^{+}$. 
We have the \underline{pointwise order} $\preceq$ : \begin{equation} \begin{array}[H]{l} v\preceq v^{\prime} \Longleftrightarrow v^\prime-v\in \mathbb{N}\times \mathbb{N} \\ \text{equivalently} \\ (x,y)\preceq (x^{\prime},y^{\prime}) \Longleftrightarrow x\le x^{\prime} \text{ and } y\le y^{\prime} \end{array} \label{eq:79} \end{equation} We have the \underline{fundamental order} $\llcurly$ : \begin{equation} v\llcurly v^{\prime} \Longleftrightarrow v_{+}\preceq v_{+}^{\prime} \quad \text{and} \quad v_{-}\preceq v_{-}^{\prime} \label{eq:80} \end{equation} Note that we have the strict implications \begin{equation} v\lhd v^{\prime} \Longrightarrow v\llcurly v^{\prime}\Longrightarrow v\preceq v^{\prime} \label{eq:81} \end{equation} Let $\left|\,\underline{\;}\,\right|:\underline{\mathbb{Q}}^{+}\to \mathbb{R}$ be any map that is \underline{fundamentally monotone}: \begin{equation} v\llcurly v^{\prime} \Longrightarrow |v|\le |v^{\prime}| \label{eq:82} \end{equation} For $R\in\mathbb{R}$ put \begin{equation} c\left( \left|\,\underline{\;}\,\right|\le R \right):= \left\{ v\in\underline{\mathbb{Q}}^{+}, |v|\le R \right\} \label{eq:83} \end{equation} \begin{theorem} If $c\left( \left|\,\underline{\;}\,\right|\le R \right)$ is finite, then it is a corona. \label{thm:84} \end{theorem} Note that $v\lhd v^{\prime}$ implies $|v|\le |v^{\prime}|$, by (\ref{eq:81})--(\ref{eq:82}), so that $c(\left|\,\underline{\;}\,\right| \le R)$ is a $\lhd$-set. \\ We begin the proof of the theorem with the \begin{claim*} $c\left( \left|\,\underline{\;}\,\right|\le R \right)\in\mathcal{C}^{\star}$ is a $\star$-set. \label{claim:85} \end{claim*} \begin{proof}[Proof of claim:] Write $c=c(\left|\,\underline{\;}\,\right|\le R)=\left\{ c_{i} \right\}_{i=1}^{m-1}$ and let $c_{i}=v\in c\setminus \left( c^{\max} \amalg c^{\min} \right)$. We have, cf. Remark \ref{remark:53}, either \begin{equation} \begin{array}[H]{l} f^{-}_{v}: n(c)_{i}^{+} = 0, \;\; n(c)^{-}_{i}=m>0 : c_{i+1} = (c_{i})_{+} = v_{+}, \;\; c_{i-1}=v_{-}+m\cdot v \\ \text{or} \\ f_{v}^{+}: n(c)_{i}^{+}=m>0, \;\; n(c)_{i}^{-}=0\,:\, c_{i+1}=v_{+}+m\cdot v\;\; , \;\; c_{i-1}=(c_{i})_{-} = v_{-} \end{array} \label{eq:86} \end{equation} If $m=1$ then $c_{i+1}-c_{i}=c_{i}-c_{i-1}$ and $\left\{ c_{i-1}<c_{i}<c_{i+1} \right\}$ are on a straight line, so assume $m>1$. We obtain either \begin{equation} \begin{array}[H]{ll} \begin{array}[H]{ll} f_{v}^{-}: |c_{i+1}+c_{i}| &= |v_{+}+v| = \left| \begin{pmatrix} v \\ v_{+} \end{pmatrix} \right| \le \left| \begin{pmatrix} (m-1)v+v_{-} \\ v \end{pmatrix} \right| \\\\ &= |m\cdot v + v_{-}| =|c_{i-1}| \le R \end{array} \\\\ \text{a contradiction since $c_{i+1}+c_{i}\not\in c$ ; or} \\\\ \begin{array}[H]{ll} f_{v}^{+}: |c_{i}+c_{i-1}| &= |v+v_{-}| = \left| \begin{pmatrix} v_{-} \\ v \end{pmatrix} \right| \le \left| \begin{pmatrix} v \\ (m-1)v+v_{+} \end{pmatrix} \right| \\\\ &= |mv+v_{+}| = |c_{i+1}| \le R \end{array} \end{array} \label{eq:87} \end{equation} a contradiction since $c_{i}+c_{i-1}\not\in c$.
\end{proof} Note that \begin{equation} \begin{array}[H]{lll} v\in \Phi c(\left|\,\underline{\;}\,\right|\le R) &\Longleftrightarrow & v+v_{+}, \;\; v+v_{-}\in c(\left|\,\underline{\;}\,\right|\le R) \\\\ &\Longleftrightarrow& |v|_{1}:= \sup \left\{ |v+v_{+}|, |v+v_{-}| \right\}\le R \end{array} \label{eq:88} \end{equation} The map $\left|\,\underline{\;}\,\right|_{1}$ is again fundamentally monotone so that by the claim: \\ $\Phi c(\left|\,\underline{\;}\,\right|\le R)=c(\left|\,\underline{\;}\,\right|_1\le R)\in \mathcal{C}^{\star}$ is again a $\star$-set. \\ Denoting the \underline{Fibonacci numbers} by \begin{equation} a_{1}=a_{2}=1\;\; , \;\; a_{n}=a_{n-1}+a_{n-2}=\frac{1}{\sqrt{5}}\left[ \left( \frac{1+\sqrt{5}}{2} \right)^n-\left( \frac{1-\sqrt{5}}{2} \right)^{n} \right] \label{eq:89} \end{equation} we define, for $n\ge 0$, \begin{equation} |v|_{n}:= \sup \left\{ |a_{n+2}v_{+}+a_{n+1}v_{-}|,\; |a_{n+1}v_{+}+a_{n+2}v_{-}| \right\} \equiv |a_{n+2}M(v)+a_{n+1}F(v)| \label{eq:90} \end{equation} Then $\left|\,\underline{\;}\,\right|_{n}$ is again fundamentally monotone, and from our claim we may deduce inductively that \begin{equation} \Phi^n c\left( \left|\,\underline{\;}\,\right|\le R \right) \equiv c\left( \left|\,\underline{\;}\,\right|_{n}\le R \right)\in \mathcal{C}^{\star} \label{eq:91} \end{equation} so $c\left( \left|\,\underline{\;}\,\right|\le R \right)$ is indeed a corona. We get (\ref{eq:91}) by induction: \begin{equation*} \begin{array}[H]{lll} v\in\Phi^{n}c(\left|\,\underline{\;}\,\right|\le R) &\Longleftrightarrow& v+v_{+}, v+v_{-}\in\Phi^{n-1} c \left( \left|\,\underline{\;}\,\right|\le R \right) \\\\ &\Longleftrightarrow& R \begin{array}[t]{l} \ge \sup\left\{|v+v_{+}|_{n-1},|v+v_{-}|_{n-1} \right\} \\\\ = \sup \left\{ \begin{array}[H]{l} |a_{n}v_{+}+a_{n+1} v|, |a_{n+1}v+a_{n}v_{-}| \end{array} \right\} \\\\ = \sup \left\{ \begin{array}[H]{l} |(a_{n+1}+a_{n})v_{+}+ a_{n+1} v_{-}|, \\\\ |a_{n+1}v_{+}+(a_{n+1}+a_{n})v_{-}| \end{array} \right\} \\\\ =\sup\left\{ |a_{n+2}v_{+}+a_{n+1}v_{-}|, |a_{n+1}v_{+}+a_{n+2}v_{-}| \right\} \\\\ = |v|_{n} \end{array} \end{array} \end{equation*} This completes the proof of Theorem \ref{thm:84}. \\ Note that while a pointwise-monotone $\left|\,\underline{\;}\,\right|$, i.e. one with $v\preceq v^{\prime}\Rightarrow |v|\le |v^{\prime}|$, is a fortiori fundamentally monotone, the maps $\left|\,\underline{\;}\,\right|_{n}$, $n\ge 1$, need not be pointwise-monotone. \\ Let $\left|\,\underline{\;}\,\right|:\mathbb{N}\times \mathbb{N}\rightarrow \mathbb{R}$ be a pointwise-monotone map that is also \underline{homogeneous}: $|a\cdot v|=a\cdot |v|$, $a\in \mathbb{N}^{+}$; then we have \begin{equation} |v|_{n}=\sup\left\{ |a_{n+2}v_{+}+a_{n+1}v_{-}|, |a_{n+1}v_{+}+a_{n+2}v_{-}| \right\} \le |a_{n+2}\cdot v|= a_{n+2}\bullet |v| \label{eq:92} \end{equation} Moreover, if $\left|\,\underline{\;}\,\right|$ comes from a \underline{norm}, i.e. satisfies the triangle inequality, we have for $v\in \Phi^{n}c(\left|\,\underline{\;}\,\right|\le R)$ \begin{equation} a_{n+3}\bullet |v| = |a_{n+3}\cdot v|\le |a_{n+2}v_{+}+a_{n+1}v_{-}|+ |a_{n+1}v_{+}+a_{n+2}v_{-}| \le 2\cdot R \label{eq:93} \end{equation} Together we have the \underline{exponential decay} of $\Phi^{n} c (\left|\,\underline{\;}\,\right|\le R)$.
For a norm $\left|\,\underline{\;}\,\right|$: \begin{equation} c\left( \left|\,\underline{\;}\,\right| \le \frac{1}{a_{n+2}}\cdot R \right)\subseteq \Phi^{n} c\left( \left|\,\underline{\;}\,\right| \le R \right) \subseteq c\left( \left|\,\underline{\;}\,\right| \le \frac{2}{a_{n+3}}\cdot R \right) \label{eq:94} \end{equation} Examples of such coronas are given by, $p\ge 1$, $R\ge 2$, \begin{equation} c_{R}^{(p)}:=c\left( x^{p}+y^{p}\le R^{p} \right)\quad , \quad c_{R}^{(\infty)}:= c\left( \max \left\{ x,y \right\}\le R \right) \label{eq:95} \end{equation} We have \begin{equation} c_{R}^{(1)}\subseteq c_{R}^{(2)}\subseteq c_{R}^{(\infty)}\subseteq c_{2R}^{(1)} \subseteq c_{2R}^{(2)} \subseteq c_{2R}^{(\infty)} \subseteq c_{4R}^{(1)} \subseteq \cdots \label{eq:96} \end{equation} We have as well the norms $\left|\,\underline{\;}\,\right|_{A}$, for a positive real matrix $A= \begin{pmatrix} a_{1 1} & a_{1 2} \\ a_{2 1} & a_{2 2} \\ \end{pmatrix} $, $a_{i j}>0$, \begin{equation} \left| g\right|_{A} = \left| \begin{pmatrix} x_{-} & y_{-} \\ x_{+} & y_{+} \end{pmatrix} \right|_{A} := \text{tr}\left( g\cdot A^{t} \right) = x_{-}\cdot a_{1 1}+y_{-}\cdot a_{1 2}+x_{+}\cdot a_{2 1}+y_{+}\cdot a_{2 2}. \label{eq:97} \end{equation} In particular taking $A= \begin{pmatrix} \alpha & \beta \\ \alpha & \beta \end{pmatrix} $, we have the \underline{linear-norms}, $\alpha,\beta>0$, \begin{equation} \begin{array}[H]{ll} |x,y|_{(\alpha,\beta)} := \alpha \cdot x+\beta\cdot y \\\\ |v|_{(\alpha,\beta)} = |v_{+}+v_{-}|_{(\alpha,\beta)}= |v_{+}|_{(\alpha,\beta)}+|v_{-}|_{(\alpha,\beta)} \\\\ |vg|_{(\alpha,\beta)}=\text{tr}\left( g_{v}\cdot g\cdot \begin{pmatrix} \alpha & \beta \\ \alpha & \beta \end{pmatrix}^{t} \right) = \text{tr} \left( g_{v}\cdot \left( \begin{pmatrix} \alpha & \beta \\ \alpha & \beta \end{pmatrix} \cdot g^{t}\right)^t\right) = |v|_{(\alpha,\beta)g^{t}} \end{array} \label{eq:98} \end{equation} Thus we have the coronas \begin{equation} c_{R}^{(\alpha,\beta)}:= \left\{ (x,y)\in \underline{\mathbb{Q}}^{+}, \alpha\cdot x+\beta\cdot y\le R \right\} \label{eq:99} \end{equation} We have for $g\in \SL_{2}(\mathbb{N})$, with $\underline{0}g$, $\underline{\infty}g\in c_{R}^{(\alpha,\beta)}$, \begin{equation} c_{R}^{(\alpha,\beta)}\cap \left( \underline{0}g,\underline{\infty}g \right) = \left( c_{R}^{(\alpha,\beta)g^{t}} \right)g \label{eq:100} \end{equation} \section{The d.n.a of $c_{R}^{(\alpha,\beta)}$} Fix a linear-norm $\left|\,\underline{\;}\,\right|=\left|\,\underline{\;}\,\right|_{(\alpha,\beta)}$, $\alpha,\beta>0$, and $R\in\mathbb{R}$, $n\ge 1$, and let \begin{equation} \begin{array}[H]{ll} c^{n-1}=\Phi^{n-1}c_{R}^{(\alpha,\beta)}=\left\{ v\in\mathbb{Q}^{+}, a_{n+1}|v_{+}|+a_{n}|v_{-}|, a_{n}|v_{+}|+a_{n+1}|v_{-}|\le R \right\} \\\\ c^{n}=\Phi^{n}c_{R}^{(\alpha,\beta)}=\left\{ v\in\mathbb{Q}^{+}, a_{n+2}|v_{+}|+a_{n+1}|v_{-}|, a_{n+1}|v_{+}|+a_{n+2}|v_{-}|\le R \right\} \\\\ c^{n-1}=c^{n}\circ \lambda^{n} \\ \lambda^{n}=\sum\limits_{i=1}^{m}\lambda_{i}^{n}\left[ c_{i-1},c_{i} \right]\in \mathbb{Z}\partial c^{n}\;\; , \;\; c^{n}=\left\{ c_{i} \right\}_{i=1}^{m-1}. 
\end{array} \label{eq:101} \end{equation} Let $v\in \left( c^{n-1}\right)^{\max}$, and let $c_{i}>c_{i-1}\in \left( c^{n-1} \right)^{\min} $ be the points imediately above and below it, $c_{i}>v>c_{i-1}$, we have via (\ref{remark:62}-\ref{eq:63}), that there exists $k_{1}>k_{0}\ge 0$ such that either \begin{equation} \begin{array}[H]{lll} \left( f^{+}_{c_{i-1}} \right): c^{n-1}\cap \left[ c_{i-1},c_{i} \right]= \left\{ \begin{array}[H]{ll} c_{i}&=\left( c_{i-1} \right)_{+}+k_{0}c_{i-1} > \cdots > v \\\\ &=\left( c_{i-1} \right)_{+}+k_{1}c_{i-1}>c_{i-1} \end{array} \right\}, \\\\ \lambda^{n}_{i}=-\left( k_1-k_0 -1\right); \text{ or } \\\\ \left( f_{c_i}^{-} \right): c^{n-1}\cap \left[ c_{i-1},c_{i} \right]= \left\{ \begin{array}[H]{lll} c_{i} > v &=\left( c_{i} \right)_{-}+k_{1}c_{i}>\cdots > \left( c_{i} \right)_{-}+k_{0}c_{i} \\\\ &=c_{i-1} \end{array} \right\}, \\\\ \lambda_{i}^{n} = \left( k_{1}-k_{0}-1 \right). \end{array} \label{eq:102} \end{equation} Thus for the case of $\left( f_{c_{i-1}}^{+} \right)$ we have \begin{equation} \begin{array}[H]{lll} c_{i}=\left( c_{i-1} \right)_{+}+k_{0}\cdot c_{i-1}\in c^{n} & , & c_{i}+c_{i-1} = \left( c_{i-1} \right)_{+}+\left( k_{0}+1 \right)c_{i-1}\not\in c^{n} \\\\ v=\left( c_{i-1} \right)_{+}+k_{1}\cdot c_{i-1}\in c^{n-1} & , & v+c_{i-1}=(c_{i-1})_{+}+(k_1+1)\cdot c_{i-1}\not\in c^{n-1} \end{array} \label{eq:103} \end{equation} \begin{equation} \Longleftrightarrow \begin{array}[t]{l} \left|\left( c_{i-1} \right)_{+}+k_{0}c_{i-1}\right|_{n}\le R < \left|\left( c_{i-1} \right)_{+}+ \left( k_{0}+1 \right)c_{i-1}\right|_{n} \\\\ \left|\left( c_{i-1} \right)_{+}+k_{1}c_{i-1}\right|_{n-1}\le R < \left| \left( c_{i-1} \right)_{+}+\left( k_1 +1 \right)c_{i-1}\right|_{n-1} \end{array} \label{eq:104} \end{equation} \begin{equation} \begin{array}[H]{l} \Longleftrightarrow \\\\ \begin{array}[t]{l} a_{n+2}\left|\left( c_{i-1} \right)_{+}+ \left( k_{0}-1 \right)c_{i-1}\right|+a_{n+1}\left| c_{i-1}\right| \le R < \ \hspace{-.5cm} \ \begin{array}[t]{ll} a_{n+2}\big|\left( c_{i-1}\right)_{+} \hspace{-.5cm} \ &+k_{0}c_{i-1}\big| \\ &+a_{n+1}\left|c_{i-1}\right| \end{array} \\\\ a_{n+1}|(c_{i-1})_{+}+(k_1-1)c_{i-1}|+a_n|c_{i-1}|\le R < \ \hspace{-.5cm} \ \begin{array}[t]{ll} a_{n+1}|(c_{i-1})_{+}\hspace{-.5cm} \ &+k_1c_{i-1}| \\ &+a_n|c_{i-1}| \end{array} \end{array} \end{array} \label{eq:105} \end{equation} Thus $k=|\lambda_i^n|=k_1-k_0-1$ is the maximal integer such that \begin{equation*} (a_{n+1}\cdot k+a_n)\cdot|c_{i-1}|+a_{n+1}|c_i| = a_{n+1}|(c_{i-1})_{+}+(k_1-1)c_{i-1}|+a_n|c_{i-1}| \le R \end{equation*} and we obtain, $c_i=k_0 c_{i-1}+(c_{i-1})_{+}$, \begin{equation} \begin{array}[H]{lll} \left|\lambda_{i}^{n}\right| &= \left\lfloor\frac{R-a_{n+1}|c_{i}|-a_{n}|c_{i-1}|}{a_{n+1}|c_{i-1}|} \right\rfloor \\\\ &= \left\lfloor\frac{R-a_{n+1}|(c_{i-1})_{+}|-a_{n}|c_{i-1}|}{a_{n+1}|c_{i-1}|} \right\rfloor -k_{0},\quad \text{and when $k_{0}>0$:} \\\\ \left|\lambda_{i}^{n}\right| &= \left\lfloor \frac{R}{a_{n+1}|c_{i-1}|} - \frac{|(c_{i-1})_{+}|}{|c_{i-1}|}-\frac{a_n}{a_{n+1}} \right\rfloor - \left\lfloor \frac{R}{a_{n+2}|c_{i-1}|}-\frac{|(c_{i-1})_{+}|}{|c_{i-1}|}-\frac{a_{n+1}}{a_{n+2}} \right\rfloor-1 \end{array} \label{eq:106} \end{equation} Similarly in the case of $\left( f_{c_{i}}^{-} \right)$ we have, $c_{i-1}=k_0 c_i+(c_i)_{-}$, \begin{equation} \begin{array}[H]{lll} \lambda_{i}^{n} &= \left\lfloor \frac{R-a_{n+1}|c_{i-1}|-a_{n}|c_{i}|}{a_{n+1}|c_{i}|} \right\rfloor \\\\ &= \left\lfloor \frac{R-a_{n+1}|(c_{i})_{-}|-a_{n}|c_{i}|}{a_{n+1}|c_{i}|} 
\right\rfloor -k_0 \quad, \quad \text{and when $k_0>0$:} \\\\ &= \left\lfloor \frac{R}{a_{n+1}|c_i|}-\frac{|(c_i)_{-}|}{|c_{i}|} - \frac{a_n}{a_{n+1}} \right\rfloor - \left\lfloor \frac{R}{a_{n+2}|c_i|}-\frac{|(c_i)_{-}|}{|c_{i}|} - \frac{a_{n+1}}{a_{n+2}} \right\rfloor -1 \end{array} \label{eq:107} \end{equation} Note that, in the case of $\left( f_{c_{i-1}}^{+} \right)$, we have $\lambda_{i}^{n}\neq 0$ iff \begin{equation} \begin{array}[H]{ccc} \ & \frac{R-a_{n+1}|c_i|-a_n|c_{i-1}|}{a_{n+1}|c_{i-1}|}\ge 1 \\\\ \Longleftrightarrow & a_{n+2}|c_{i-1}|+a_{n+1}|c_i| \le R \end{array} \label{eq:108} \end{equation} We cannot have $k_{0}=0$, because then $c_{i}=\left( c_{i-1} \right)_{+}$, $k_{1}\ge 2$, so that \[ \begin{array}[h]{lll} a_{n+1}|(c_{i-1})_{+}+c_{i-1}|+a_{n}|c_{i-1}| &=& \left|(c_{i-1})_{+}+2\cdot c_{i-1}\right|_{n-1}\le R \\\\ &<& \left|(c_{i-1})_{+}+c_{i-1}\right|_{n}\\\\ &=& a_{n+2}|c_{i-1}|+a_{n+1}\left|(c_{i-1})_{+}\right| \end{array} \] a contradiction. Therefore $k_{0}\ge 1$, and so $k_1=|\lambda_{i}^{n}|+k_0+1\ge 3$. We see that in this case $c_{i-1}\in \left( c^{n} \right)^{\min}$, that is $c_{i-1}+(c_{i-1})_{-}\in c^{n}$, for otherwise we get \begin{equation} \begin{array}[H]{lll} a_{n+2}|c_{i-1}|+a_{n+1}|(c_{i-1})_{-}| &= |c_{i-1}+(c_{i-1})_{-}|_{n} \\\\ &> R \\\\ &\ge |3\cdot c_{i-1}+(c_{i-1})_{+}|_{n-1} \\\\ &= a_{n+1}|2\cdot c_{i-1}+(c_{i-1})_{+}|+a_{n}|c_{i-1}| \\\\ &= a_{n+2}|c_{i-1}|+a_{n+1}|c_{i-1}+(c_{i-1})_{+}| \end{array} \label{eq:109} \end{equation} a contradiction. \\ Similarly in the case of $(f_{c_{i}}^{-})$ we have $\lambda_{i}^{n}\neq 0$ iff \begin{equation} a_{n+2}|c_{i}|+a_{n+1}|c_{i-1}| \le R \label{eq:110} \end{equation} and this implies $|c_{i-1}|>|c_{i}|$, $k_{0}\ge 1$, $k_{1}\ge 3$, and $c_{i}\in \left( c^{n} \right)^{\min}$. \\ We summarize this in the following description of the ``d.n.a.
of $c_{R}^{(\alpha,\beta)}$ '': \begin{theorem} For a linear norm $|x,y|:=\alpha x+\beta y$, $\alpha,\beta>0$, we have for $n\ge 1$, \begin{equation*} \begin{array}[H]{lll} \Phi^{n-1}c_{R}^{(\alpha,\beta)}= \Phi^{n}c_{R}^{(\alpha,\beta)}\circ \lambda_{R}^{n} \\\\ \lambda_{R}^{n}=\sum\limits_{i=1}^{m}\lambda_{i}^{n}\cdot [c_{i-1},c_{i}]\in \mathbb{Z}\partial\Phi^{n}c_{R}^{(\alpha,\beta)}\quad , \quad \Phi^{n}c_{R}^{(\alpha,\beta)}=\left\{ c_{i} \right\}_{i=1}^{m-1}, \end{array} \end{equation*} and \begin{equation*} \begin{array}[H]{lll} \lambda_{R}^{n}=\lambda_{\underline{\infty}}^{-}\cdot\left[ c_{m-1},\underline{\infty} \right]+\sum\limits_{c_{i}\in\Phi^{n+1}c_{R}^{(\alpha,\beta)}} \left( \lambda_{c_{i}}^{+}\cdot \left[ c_{i},c_{i+1} \right]+\lambda_{c_{i}}^{-}[c_{i-1},c_{i}] \right)+ \lambda_{\underline{0}}^{+}\cdot \left[ \underline{0},c_{1} \right] \\\\ |\lambda_{c_i}^{+}| = \begin{cases} \left\lfloor\frac{R}{a_{n+1}|c_{i}|}-\frac{|(c_i)_{+}|}{|c_i|} - \frac{a_{n}}{a_{n+1}}\right\rfloor - \left\lfloor \frac{R}{a_{n+2}|c_{i}|}- \frac{|(c_{i})_{+}|}{|c_{i}|} - \frac{a_{n+1}}{a_{n+2}}\right\rfloor - 1, \\\\ 0 \qquad \text{ if }\qquad a_{n+2}|c_{i}|+a_{n+1}|c_{i+1}|>R; \end{cases} \\\\ \lambda_{c_i}^{-} = \begin{cases} \left\lfloor\frac{R}{a_{n+1}|c_{i}|}-\frac{|(c_i)_{-}|}{|c_i|} - \frac{a_{n}}{a_{n+1}}\right\rfloor - \left\lfloor \frac{R}{a_{n+2}|c_{i}|}- \frac{|(c_{i})_{-}|}{|c_{i}|} - \frac{a_{n+1}}{a_{n+2}}\right\rfloor - 1, \\\\ $0$ \qquad \text{ if } \qquad a_{n+2}|c_{i}|+a_{n+1}|c_{i-1}|>R; \end{cases} \\\\ |\lambda_{\underline{0}}^{+}| = \begin{cases} \left\lfloor \frac{R}{a_{n+1}\alpha}-\frac{\beta}{\alpha}-\frac{a_n}{a_{n+1}}\right\rfloor - \left\lfloor\frac{R}{a_{n+2}\alpha}-\frac{\beta}{\alpha}-\frac{a_{n+1}}{a_{n+2}}\right\rfloor - \underline{1}, \\\\ 0 \qquad \text{ if } \qquad a_{n+2}\alpha+a_{n+1}|c_1|>R; \end{cases} \\\\ \lambda_{\underline{\infty}}^{-}= \begin{cases} \left\lfloor \frac{R}{a_{n+1}\beta}-\frac{\alpha}{\beta}-\frac{a_n}{a_{n+1}}\right\rfloor - \left\lfloor \frac{R}{a_{n+2}\beta}-\frac{\alpha}{\beta}-\frac{a_{n+1}}{a_{n+2}}\right\rfloor -1 , \\\\ 0 \qquad \text{ if } \qquad a_{n+2}\beta+a_{n+1}|c_{m-1}|>R. \end{cases} \end{array} \end{equation*} \label{thm:111} \end{theorem} Thus if $\Phi^{n} c_{R}^{(\alpha,\beta)}=\left\{ c_{i} \right\}_{i=1}^{m-1}$, we obtain $\Phi^{n-1}c_{R}^{(\alpha,\beta)}$ from it by adding all the mediants $c_{i}+c_{i-1}$, and for those $c_{i}\in \Phi^{n+1}c_{R}^{(\alpha,\beta)}$, as well as $c_{m}=\underline{\infty}$, $c_{0}=\underline{0}$, we add the fin around $c_{i}$ whose length is given by the $\lambda_{c_i}^{\pm}$. 
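By (\ref{eq:90})--(\ref{eq:91}) the iterates $\Phi^{n}c_{R}^{(1,1)}$ can be tabulated directly from the weights $|\,\underline{\;}\,|_{n}$, without constructing any paths. The following Python sketch (an illustration only; all names are ours, and the small parents helper from the earlier sketches is repeated for self-containedness) does this for the linear norm $|x,y|=x+y$; the printed table makes the exponential decay (\ref{eq:94}) visible.
\begin{verbatim}
from fractions import Fraction
from math import gcd

def parents(v):
    # Farey parents (v_minus, v_plus) of v in Q+, by Stern-Brocot descent.
    lo, hi, cur = (1, 0), (0, 1), (1, 1)
    t = Fraction(v[1], v[0])
    while cur != v:
        if Fraction(cur[1], cur[0]) < t:
            lo = cur
        else:
            hi = cur
        cur = (lo[0] + hi[0], lo[1] + hi[1])
    return lo, hi

def a(n):
    # Fibonacci numbers a_1 = a_2 = 1, a_n = a_{n-1} + a_{n-2}.
    x, y = 1, 1
    for _ in range(n - 1):
        x, y = y, x + y
    return x

def size_phi_n(R, n):
    """#Phi^n c(| |<=R) for the linear norm |x,y| = x+y, computed via | |_n."""
    norm = lambda p: p[0] + p[1]
    count = 0
    for x in range(1, R):
        for y in range(1, R):
            if x + y <= R and gcd(x, y) == 1:
                vm, vp = parents((x, y))
                vn = max(a(n + 2) * norm(vp) + a(n + 1) * norm(vm),
                         a(n + 1) * norm(vp) + a(n + 2) * norm(vm))
                count += (vn <= R)
    return count

for n in range(5):
    print(n, [size_phi_n(R, n) for R in (10, 20, 40)])
\end{verbatim}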
\begin{corollary*} We have, with $\lfloor R\rfloor_{+}:=\max\left\{ 0,\lfloor R\rfloor \right\}$, \begin{equation*} \# \Phi^{n-1}c_{R}^{(\alpha,\beta)} = \begin{array}[t]{lll} 2\cdot \#\Phi^{n} c_{R}^{(\alpha,\beta)} \\\\ + \hspace{-.8cm} \mathlarger{\mathlarger{\mathlarger{\sum}}}\limits_{c_i=v\in \Phi^{n+1} c_{R}^{(\alpha,\beta)}} \hspace{-.6cm} \begin{array}[t]{l} \left\lfloor\frac{R}{a_{n+1}|v|}-\frac{|c_{i+1}|}{|v|}-\frac{a_{n}}{a_{n+1}}\right\rfloor_{+}+ \left\lfloor \frac{R}{a_{n+1}|v|}-\frac{|c_{i-1}|}{|v|}-\frac{a_n}{a_{n+1}}\right\rfloor_{+} \end{array} \\\\ +\left\lfloor \frac{R}{a_{n+1}\alpha}-\frac{|c_1|}{\alpha}-\frac{a_n}{a_{n+1}}\right\rfloor_{+} + \left\lfloor \frac{R}{a_{n+1}\beta}-\frac{|c_{m-1}|}{\beta}-\frac{a_n}{a_{n+1}}\right\rfloor_{+} \end{array} \end{equation*} \label{cor:112} \end{corollary*} The formulas of Theorem \ref{thm:111} show a kind of interaction between the binary and Fibonacci bases. \\ Recall that every integer $R\in \mathbb{N}$ has a binary expansion \begin{equation} R=2^{n_1}+\cdots + 2^{n_{j}}+\cdots + 2^{n_{\ell}}\quad , \quad n_{j}\ge 0. \label{eq:113} \end{equation} This expansion is unique if we require $n_j>n_{j+1}$. We can add such numbers and bring them to the canonical form using the ``carry-remainder'' rule $2^{n}+2^{n}=2^{n+1}$. We can multiply numbers using the simple rule $2^{n}\cdot 2^{m}=2^{n+m}$. \\ Rewriting the Fibonacci numbers as \begin{equation} \begin{array}[H]{lll} \varphi^{n}:= a_{1+n} &= \frac{1}{\sqrt{5}}\left[ \left( \frac{1+\sqrt{5}}{2} \right)^{n+1}- \left( \frac{1-\sqrt{5}}{2} \right)^{n+1} \right] \\\\ &=\frac{1}{2^{n}}\sum\limits_{k=0}^{n}(1+\sqrt{5})^{k}(1-\sqrt{5})^{n-k} \end{array} \label{eq:114} \end{equation} Similarly, every $R\in \mathbb{N}$ has a Fibonacci or Zeckendorf expansion as a sum \begin{equation} R=\varphi^{n_1}+\cdots+\varphi^{n_j}+\cdots+ \varphi^{n_{\ell}} \label{eq:115} \end{equation} This expansion is unique if we require that $n_j>n_{j+1}+1$, i.e. we can represent $R$ by a sequence of $0$'s and $1$'s, where no two $1$'s are neighbours. We can add numbers in this representation, and bring them to the canonical form using the ``carry-remainder'' rules: $\varphi^n+\varphi^{n+1}=\varphi^{n+2}$, and $\varphi^{n}+\varphi^{n}=\varphi^{n+1}+\varphi^{n-2}$. We can also multiply numbers using the rule, for $m\ge n$: \begin{equation*} \begin{array}[H]{lll} \left( \star \right)_{n,m} \varphi^{n}\cdot \varphi^{m} &=& \varphi^{n+m}+\varphi^{n+m-4}+\cdots+\varphi^{n+m-4j}+\cdots \\\\ &\ & + \left\{ \begin{array}[H]{ll} \varphi^{m-n+4}+\varphi^{m-n} & n\equiv 0(2) \\\\ \varphi^{m-n+2}+\varphi^{m-n-1} & n\equiv 1(2)\;\; n<m \\\\ \varphi^{2}+\varphi^{0} & m=n\equiv 1(2) \end{array} \right. \end{array} \end{equation*} One proves $(\star)_{n,m}$ by induction, via \begin{equation*} \begin{array}[H]{l} (\star)_{n,n}+(\star)_{n-1,n} \Longrightarrow (\star)_{n,n+1} \\\\ (\star)_{n-1,n}+(\star)_{n-2,n} \Longrightarrow (\star)_{n,n} \\\\ (\star)_{n,n}+(\star)_{n,n+1} \Longrightarrow (\star)_{n,m} \quad, \quad m\ge n. \end{array} \end{equation*} Note the curious $4$-periodicity of $(\star)_{n,m}$. \\ Perhaps this interaction of the \underline{binary} and \underline{Fibonacci} expansions should come as no surprise since our very approach to $\mathbb{Q}^{+}$ is as a \underline{binary} tree of \underline{Fibonacci} growth. \section{Equidistribution} We end with some remarks on equidistribution.
\\ We have the exact \underline{potential function} \begin{equation} \begin{array}[H]{lll} h:\mathcal{G}_{1}=\SL_2(\mathbb{N})\rightarrow [0,1] \\\\ h(v_{-},v_{+}) := \frac{1}{|v_{-}|\cdot |v_{+}|} \quad ,\quad |(x,y)|=x+y. \end{array} \label{eq:117} \end{equation} For each triangle $\Delta_{v}$ we have the \underline{exactness} \begin{equation} h(v_{-},v_{+}) = h\left( v_{-},v \right)+h\left( v,v_{+} \right) \label{eq:118} \end{equation} We get the function \begin{equation} \begin{array}[H]{l} H:\left\{ \underline{\infty} \right\}\amalg \underline{\mathbb{Q}}^{+} \amalg \left\{ \underline{0} \right\}=\mathcal{G}_{0}\xrightarrow{\qquad} [0,1] \\\\ \displaystyle H(v)=\displaystyle\int\limits_{\underline{0}}^v h\left( dg \right) = \text{sum of $h$ along (any) path from $\underline{0}$ to $v$.} \end{array} \label{eq:119} \end{equation} We have \begin{equation} H(x,y)=\frac{y}{x+y} \label{eq:120} \end{equation} Indeed, \begin{equation} \begin{array}[H]{lll} \displaystyle \partial H \begin{pmatrix} x_{-} & y_{-} \\ x_{+} & y_{+} \end{pmatrix} &= \displaystyle \frac{y_{+}}{x_{+}+y_{+}} - \frac{y_{-}}{x_{-}+y_{-}} \\\\ &= \displaystyle \frac{1}{(x_{+}+y_{+})(x_{-}+y_{-})}\\\\ &\displaystyle= \displaystyle h \begin{pmatrix} x_{-} & y_{-} \\ x_{+} & y_{+} \end{pmatrix} \end{array} \label{eq:121} \end{equation} For a $\lhd$-set $c=\left\{ c_{i} \right\}^{m-1}_{i=1}\in\mathcal{C}_{m}$, we get the function \begin{equation} \begin{array}[H]{l} R_{c}: \left\{ \underline{\infty} \right\} \amalg c \amalg \left\{ \underline{0} \right\} \xrightarrow{\qquad} [0,1] \\\\ R_{c}(v) = \left\{ \begin{array}[H]{ll} 1 & v=c_{m}=\underline{\infty} \\ j/m & v=c_j \\ 0 & v=c_{0}=\underline{0} \end{array} \right\} = \frac{1}{m}\displaystyle\int_{\underline{0}}^{v}{\mathbf{1}} \end{array} \label{eq:122} \end{equation} the length of the path $c$ from $\underline{0}$ to $v$ divided by the total length of $c$. \\ Put for $p\ge 1$, \begin{equation} \delta_{p}(c)=\| R_{c}-H\|^{p}_{\ell_{p}(c)} = \sum\limits_{j=1}^{m-1} \left| \frac{j}{m} -\frac{y_j}{x_{j}+y_{j}}\right|^{p} \quad , \quad c=\left\{ c_{j}=(x_{j},y_{j}) \right\}. \label{eq:123} \end{equation} For $c=c_{R}=c_{R}^{(1,1)}=\left\{ (x,y)\in\mathbb{Q}^{+}, x+y\le R \right\}$, we have that the following estimates imply the Riemann Hypothesis, \begin{equation} \text{Franel \cite{F1924}:} \qquad \delta_{2}(c_{R})=O\left( \frac{\log R}{R} \right) \label{eq:124} \end{equation} \begin{equation} \text{Landau \cite{L1924}:} \qquad \delta_{1}(c_{R})= O\left( R^{1/2}\log R \right) \label{eq:125} \end{equation} To obtain this using an inductive procedure, one will need a good estimate of $\delta_{p}\left( \Phi^{n-1}c_{R} \right)$ in terms of $\delta_{p}\left( \Phi^{n} c_{R} \right)$. One can try to do this ``locally'', by dissecting $\Phi^{n}c_{R}$ into intervals. \underline{A partial $\lhd$-set}, or a $\lhd$-\underline{interval}, is a path $c=\left\{ c_{i} \right\}_{i=0}^{m}$ in the Farey Graph, $\det \begin{pmatrix} c_{i-1} \\ c_{i} \end{pmatrix} \equiv 1 $, from the \underline{initial-point} $c_0$ to the \underline{end-point} $c_{m}$ (and similarly one can define a \underline{partial} $\star$-\underline{set} and \underline{partial corona}).
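As a concrete numerical illustration of (\ref{eq:123})--(\ref{eq:125}), the following short Python sketch (a check only; the helper name is ours) lists $c_{R}=c_{R}^{(1,1)}$ in increasing order of $H$ and evaluates $\delta_{1}(c_{R})$ and $\delta_{2}(c_{R})$ for a few small values of $R$.
\begin{verbatim}
from math import gcd
from fractions import Fraction

def delta(R, p):
    # c_R = c_R^{(1,1)}: coprime pairs (x, y), x, y >= 1, x + y <= R,
    # listed in increasing order of H(x, y) = y / (x + y)
    pts = sorted(Fraction(y, x + y)
                 for x in range(1, R) for y in range(1, R - x + 1)
                 if gcd(x, y) == 1)
    m = len(pts) + 1
    return float(sum(abs(Fraction(j, m) - H) ** p
                     for j, H in enumerate(pts, start=1)))

for R in (10, 50, 100):
    print(R, delta(R, 1), delta(R, 2))
\end{verbatim}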
For such $\lhd$-interval $c=\left\{ c_{i}=(x_{i},y_{i}) \right\}_{i=0}^{m}$ we can define \begin{equation} \delta_{p}(c):= \sum\limits_{i} \left| \frac{y_{0}}{x_{0}+y_{0}}+\frac{i}{m}\left( \frac{y_{m}}{x_{m}+y_{m}}-\frac{y_{0}}{x_{0}+y_{0}} \right) - \frac{y_{i}}{x_{i}+y_{i}}\right|^{p} \label{eq:126} \end{equation} This agrees with (\ref{eq:123}) when $(x_{0},y_{0})=\underline{0}=(1,0)$ and $(x_{m},y_{m})=\underline{\infty}=(0,1)$. One can also demand that $\det \begin{pmatrix} c_{0} \\ c_{m} \end{pmatrix} = 1 $, so that \[\left( \frac{y_{m}}{x_{m}+y_{m}}-\frac{y_{0}}{x_{0}+y_{0}}\right)=\frac{1}{(x_{m}+y_{m})\cdot (x_0+y_0)},\] and $c=\left\{ c_i=\check{c}_{i} \begin{pmatrix} c_0 \\ c_m \end{pmatrix} \right\}$ where $\check{c}=\left\{ \check{c}_{i} \right\}$ is a usual $\lhd$-set (or corona). \\ E.g., for such a partial $\lhd$-set (or corona) $c=\left\{ c_{i} \right\}_{i=0}^{m}$ one has the associated partial $\lhd$-set (corona) $\tilde{c}:= c\cup \left\{ c_{i}+c_{i-1} \right\}_{i=1}^{m}$ with the same initial and end points, obtained by adding all mediants. There is an elementary estimation \begin{equation} \begin{array}[H]{l} \delta_{1}(\tilde{c})\le 2\cdot \delta_{1}(c)+\frac{1}{2} h(c) \\\\ h(c) = \frac{y_{m}}{x_{m}+y_{m}} - \frac{y_{0}}{x_{0}+y_{0}}\in [0,1] \qquad \text{the real length of $c$.} \end{array} \label{eq:127} \end{equation} Along the fins one can estimate $\delta_1$ using the Euler-Maclaurin formula. Also, if $c=\coprod\limits_{j} c_{j}$, where the end point of $c_{j-1}$ is the initial point of $c_{j}$, we have the elementary estimate \begin{equation} \begin{array}[H]{l} \left| \delta_{1}(c)- \sum\limits_{j} \delta_{1}(c_{j})\right| \le \frac{1}{2}\sum\limits_{j_{1}<j_{2}} \left| h(c_{j_{1}})M(c_{j_{2}})-h(c_{j_{2}})M(c_{j_{1}}) \right| \\\\ M(c) = \# c+1 = m\in \mathbb{N} \qquad \text{the degree or length of the path } \partial c. \end{array} \label{eq:128} \end{equation} \ \\ But it is important to note that $H(c_{R})\subseteq \left[ 0,1 \right]$ is \underline{Not} equidistributed, it is so only on average (\ref{eq:124}-\ref{eq:125}): the real distance between $H(v)=H(c_{i})$ and $H(c_{i\pm 1})$, for an ``old'' $v$, so $c_{i\pm 1}\in f_{v}^{\pm}$, is of the order $O\left( \frac{1}{|v|\cdot (R+|v_{\pm}|)} \right)$; while the real distance between the elements of $f^{+}_{v}$, or of $f_{v}^{-}$, is smaller: \[ \left| H(c_{i\pm 1})- H(c_{i\pm 2})\right| = O\left(\frac{1}{R\cdot (R+|v|)} \right) .\] E.g. for $v=\underline{0}=(1,0)=c_{0}$, we have $c_{1}=(R,1)$, $c_{2}=(R-1,1)$, and $|H(c_1)-H(c_0)|=\frac{1}{(R+1)}$, while $|H(c_2)-H(c_1)| = \frac{1}{R}-\frac{1}{R+1}= \frac{1}{R\cdot (R+1)}$. Thus it is important that the d.n.a. of $\Phi^n c_R$ adds extra points just around such old $v$-s, cf. Theorem \ref{thm:111}. \begin{remark} \label{10.13} Let $c(n)$ denote the corona of height $n$, with d.n.a. identically $0$, so that $c(n+1)$ is obtained from $c(n)$ just by adding all mediants, and $\# c(n)=2^n-1$, so $c(n) = \left\{(x_i,y_i)\right\}_{i=1}^{2^n-1}$ in increasing real order, and let \begin{equation} \label{eq:10.14} S_n = \sum_{i=1}^{2^n-1}\left|\frac{i}{2^n}-\frac{y_i}{x_i+y_i}\right|^2. \end{equation} The terms of the sum $S_n$ appear in all the even places of the sum $S_{n+1}$, so that \[S_1=0<S_2=\frac{2}{144}<S_3=\frac{668}{14400} < \dots < \dots \] is monotone increasing, and does not converge to $0$. Comparing this to (\ref{eq:124}), we see that it is the d.n.a. that is responsible for the uniform distribution of the rationals within the continuum.
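The first values are easy to check by machine; here is a minimal Python sketch (the helper names are ours) that builds $c(n)$ by repeated mediant insertion between $\underline{0}=(1,0)$ and $\underline{\infty}=(0,1)$ and evaluates (\ref{eq:10.14}).
\begin{verbatim}
from fractions import Fraction

def corona(n):
    # mediant construction with d.n.a. identically 0
    pts = [(1, 0), (0, 1)]
    for _ in range(n):
        new = [pts[0]]
        for a, b in zip(pts, pts[1:]):
            new.append((a[0] + b[0], a[1] + b[1]))  # mediant
            new.append(b)
        pts = new
    return pts[1:-1]  # the 2^n - 1 interior points, in increasing real order

def S(n):
    pts = corona(n)
    m = len(pts) + 1  # = 2^n
    return sum((Fraction(i, m) - Fraction(y, x + y)) ** 2
               for i, (x, y) in enumerate(pts, start=1))

print([str(S(n)) for n in (1, 2, 3)])  # ['0', '1/72', '167/3600']
\end{verbatim}
Note that $1/72=2/144$ and $167/3600=668/14400$.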
\end{remark} \end{document}
\begin{document} \title[$G_2$-manifolds from K3 surfaces with a $\mathbb{Z}^2_2$-action] {A construction of $G_2$-manifolds from K3 surfaces with a $\mathbb{Z}^2_2$-action} \author{Frank Reidegeld} \address{TU Dortmund University, Faculty for Mathematics, 44221 Dortmund, Germany} \email{[email protected]} \subjclass[2010]{53C29, 14J28} \begin{abstract} A product of a K3 surface $S$ and a flat 3-dimensional torus $T^3$ is a manifold with holonomy $SU(2)$. Since $SU(2)$ is a subgroup of $G_2$, $S\times T^3$ carries a torsion-free $G_2$-structure. We assume that $S$ admits an action of $\mathbb{Z}^2_2$ with certain properties. There are several possibilities to extend this action to $S\times T^3$. A recent result of Joyce and Karigiannis allows us to resolve the singularities of $(S\times T^3)/\mathbb{Z}^2_2$ such that we obtain smooth $G_2$-manifolds. We classify the quotients $(S\times T^3)/\mathbb{Z}^2_2$ under certain restrictions and compute the Betti numbers of the corresponding $G_2$-manifolds. Moreover, we study a class of quotients by a non-abelian group. Several of our examples have new values of $(b^2,b^3)$. \end{abstract} \maketitle \tableofcontents \section{Introduction} A $G_2$-structure is a 3-form $\phi$ on a 7-dimensional manifold $M$ whose stabilizer group is $G_2$ at each point. Any $G_2$-structure induces a Riemannian metric. $(M,\phi)$ is called a $G_2$-manifold if the holonomy group of the metric is $G_2$. Currently, $G_2$-manifolds are an active area of research in pure mathematics and in theoretical physics \cite{Acharya}. Constructing new examples of compact $G_2$-manifolds turns out to be surprisingly difficult. One of the reasons is that the equation for the holonomy reduction is highly non-linear. Another reason is that $G_2$-manifolds carry no complex structure and thus we cannot use techniques from complex geometry as in the case of Calabi-Yau manifolds, which have holonomy $SU(n)$. There are only three known methods for the construction of compact $G_2$-manifolds. The first one, which led to the first compact examples of $G_2$-manifolds, is due to Dominic Joyce \cite{Joyce1,Joyce}. We choose a discrete group $\Gamma$ that acts on a torus $T^7$ and preserves the flat $G_2$-structure. It is possible to define a closed $G_2$-structure $\phi$ with small $\|d\ast\phi\|$ on a resolution of $T^7/\Gamma$. An analytic argument shows that $\phi$ can be deformed such that $d\ast\phi=0$ and the holonomy of the metric thus is $G_2$ if the resolution is simply connected. The second construction is the twisted connected sum method, which has been proposed by Donaldson and was carried out in detail by Kovalev \cite{Ko}. Its starting point are two asymptotically cylindrical Calabi-Yau threefolds $W_1$ and $W_2$. Theorems that enable the construction of the $W_i$ can be found in \cite{HaskinsEtAl,Ko}. A $G_2$-manifold can be obtained by cutting off the cylindrical ends of the $W_i\times S^1$ and glueing together the truncated manifolds by an appropriate map. The first examples of $G_2$-manifolds that can be produced by this method can already be found in \cite{Ko}. In the meantime, several other classes of examples have been constructed \cite{Braun,CortiEtAl1,CortiEtAl2,KoLe}. Recently, a third method was developed by Dominic Joyce and Spiro Karigiannis \cite{JoyceKarigiannis}. Their idea is to divide a manifold $M$ with a torsion-free $G_2$-structure $\phi$ by an involution $\imath$. 
The quotient $M/\langle\imath\rangle$ has a singularity along a 3-dimensional submanifold $L$ that can be locally identified with $\mathbb{R}^3\times \mathbb{C}^2/\{\pm\text{Id}\}$. The next step is to cut out a neighbourhood of $L$ and glue in a family of Eguchi-Hanson spaces that are parameterized by $L$. After that, the authors obtain a smooth manifold that carries a metric whose holonomy is contained in $G_2$. It should be noted that the analytical methods that are needed to show the existence of a torsion-free $G_2$-structure are harder than in the two first constructions. One reason for this is that there is no canonical closed $G_2$-structure with small torsion on the family of Eguchi-Hanson spaces. Moreover, the method only works if a non-trivial condition is satisfied, namely that $L$ admits a closed and coclosed 1-form without zeroes. The result of \cite{JoyceKarigiannis} can be easily generalized to the case where $M/\langle\imath\rangle$ is replaced by an arbitrary $G_2$-orbifold with singularities of type $\mathbb{R}^3\times \mathbb{C}^2/\{\pm\text{Id}\}$. The first two methods start with a manifold whose holonomy group is smaller than $G_2$. The constructed $G_2$-manifold is in a certain sense nearby the manifold with smaller holonomy. In the first case, the starting point is a flat torus, which has trivial holonomy. The starting point of the twisted connected sum construction are the $W_i\times S^1$, which have holonomy $SU(3)$. Beside the trivial group and $SU(3)$ there is also the group $SU(2) \subseteq G_2$, which is also a possible holonomy group. Compact $7$-dimensional manifolds with holonomy $SU(2)$ are covered by $S\times T^3$, where $S$ is a K3 surface. This observation suggests the following construction of $G_2$-manifolds. Since the holonomy of $S\times T^3$ is a subgroup of $G_2$, it carries a torsion-free $G_2$-structure $\phi$. Let $\Gamma$ be a discrete group that acts on $S\times T^3$ and preserves $\phi$. The quotient $(S\times T^3)/\Gamma$ is an orbifold. By resolving the singularities, we should obtain a $G_2$-manifold. This idea has been already proposed by Joyce \cite{Joyce}. Moreover, he has shown that some of his resolutions of torus quotients are in fact examples of this method since K3 surfaces can be constructed as resolutions of $T^4/\mathbb{Z}_2$. As in \cite{JoyceKarigiannis}, the main obstacle to apply this method in full generality is to define a closed $G_2$-structure with small torsion on the resolved manifold. In the case $\Gamma=\mathbb{Z}_2$ we are in the situation of \cite{JoyceKarigiannis} and can work with their main result. Unfortunately, $(S\times T^3)/\mathbb{Z}_2$ has infinite fundamental group and therefore the holonomy of the manifold that we obtain is not the whole group $G_2$. By dividing $S\times T^3$ by a group that is isomorphic to $\mathbb{Z}^2_2$ and carefully applying the theorem of \cite{JoyceKarigiannis} twice it is finally possible to obtain $G_2$-manifolds. Some examples of $G_2$-manifolds that are resolutions of $(S\times T^3)/\mathbb{Z}^2_2$ can be found in \cite{Joyce,JoyceKarigiannis}. The aims of this paper are to find larger classes of examples and to check if $G_2$-manifolds with the same Betti numbers can be found in the literature. We built up our research on the results of \cite{ReidK3} on K3 surfaces with a $\mathbb{Z}^2_2$-action. It turns out that we have to refine our results in order to obtain classifications of our examples. 
We consider three different extensions of the $\mathbb{Z}_2^2$-action on $S$ to $S\times T^3$. Therefore, we obtain three different classes of examples. Each of them has distinct geometric properties. In the first case the action of one element of $\mathbb{Z}^2_2$ on $S\times T^3$ is free. This causes the holonomy of the manifold to be $SU(3)\rtimes\mathbb{Z}_2$ instead of $G_2$. In the second case our construction is equivalent to that of Kovalev and Lee \cite{KoLe}. Nevertheless, we find additional examples of $G_2$-manifolds that cannot be found in \cite{KoLe}. In the third case the second Betti number vanishes and $b^3$ takes only a few different values. Finally, we consider another class of examples where $\Gamma$ is a non-abelian group instead of $\mathbb{Z}^2_2$. In that case we have $b^2,b^3\neq 0$ and obtain several $G_2$-manifolds that cannot be found in the existing literature. This paper is organised as follows. In Section \ref{G2-Section} we introduce the basic facts about $G_2$-manifolds and sum up the main theorems of \cite{JoyceKarigiannis}. In Section \ref{K3-Section} we recall the results of \cite{ReidK3} on K3 surfaces and refine them. Our examples of $G_2$-manifolds are contained in Section \ref{K3T3-Section}. \section{$G_2$-manifolds and their construction methods} \label{G2-Section} \subsection{General facts} In this section, we introduce the most important facts about $G_2$-manifolds that will be needed later on. For a more thorough introduction, we refer the reader to \cite{Joyce}. \begin{Def} Let $x^1,\ldots,x^7$ be the standard coordinates of $\mathbb{R}^7$ and let $dx^{i_1\ldots i_k} := dx^{i_1}\wedge\ldots\wedge dx^{i_k}$. The 3-form \begin{equation} \label{Can3form} \phi_0 := dx^{123} + dx^{145} + dx^{167} + dx^{246} - dx^{257} - dx^{347} - dx^{356} \end{equation} is called the \emph{standard $G_2$-form on $\mathbb{R}^7$}. \end{Def} We define the group $G_2$ as the stabilizer of $\phi_0$ and mention some algebraic facts that are helpful to understand the inclusions $SU(2) \subseteq SU(3) \subseteq G_2$. \begin{DaL} \begin{enumerate} \item The group of all linear maps $f:\mathbb{R}^7\rightarrow \mathbb{R}^7$ with $f^{\ast}\phi_0=\phi_0$ is a connected Lie group whose Lie algebra is the compact real form of $\mathfrak{g}_2^{\mathbb{C}}$. We denote this group by $G_2$. \item Let $v\in\mathbb{R}^7\setminus\{0\}$. The group $\{f\in G_2|f(v) =v \}$ is isomorphic to $SU(3)$. \item A $3$-dimensional oriented subspace $V\subseteq\mathbb{R}^7$ is called \emph{associative} if $\phi_0|_V=\text{vol}$, where $\text{vol}$ is the positive volume form on $V$ with respect to the standard Euclidean metric. The group of all $f\in G_2$ that act as the identity on $V$ is isomorphic to $SU(2)$. \end{enumerate} \end{DaL} Next, we introduce the notion of a $G_2$-structure. \begin{Def} A \emph{$G_2$-structure} on a 7-dimensional manifold $M$ is a 3-form $\phi$ such that for all $p\in M$ there exists a bijective linear map $f:T_pM\rightarrow \mathbb{R}^7$ such that $\phi_p = f^{\ast}\phi_0$. \end{Def} Since $G_2\subseteq SO(7)$, any $7$-dimensional manifold with a $G_2$-structure $\phi$ carries a canonical metric $g$. Moreover, the following equation defines a volume form $\text{vol}$ on $M$: \begin{equation} (v\lrcorner\phi)\wedge(w\lrcorner\phi)\wedge\phi = 6 g(v,w)\: \text{vol}\:. \end{equation} The following proposition helps us to decide if a $G_2$-structure induces a metric with holonomy $G_2$.
\begin{Pro} Let $(M,\phi)$ be a manifold with a $G_2$-structure and let $g$ be the metric that is induced by $\phi$. The following statements are equivalent. \begin{enumerate} \item $\nabla^g \phi = 0$, where $\nabla^g$ is the Levi-Civita connection. \item $d\phi = d\ast\phi = 0$. \item $\text{Hol} \subseteq G_2$, where $\text{Hol}$ is the holonomy group of the Levi-Civita connection. \end{enumerate} If any of the above statements is true, $(M,g)$ is Ricci-flat. Conversely, let $(M,g)$ be a $7$-dimensional Riemannian manifold whose holonomy is a subgroup of $G_2$. If this is the case, $M$ carries a $G_2$-structure $\phi$ with $d\phi = d\ast\phi = 0$ whose induced metric is $g$. \end{Pro} \begin{Def} In the situation of the above proposition, $\phi$ is called a \emph{torsion-free $G_2$-structure}. If in addition $\text{Hol} = G_2$, $(M,\phi)$ is called a \emph{$G_2$-manifold}. \end{Def} If the underlying manifold is compact, it is particularly easy to decide if the holonomy is all of $G_2$ or just a subgroup. \begin{Le} Let $M$ be a compact manifold with a torsion-free $G_2$-structure. The holonomy of the induced metric is all of $G_2$ if and only if $\pi_1(M)$ is finite. \end{Le} Finally, we have to introduce a special kind of submanifolds in order to formulate the results of \cite{JoyceKarigiannis}. \begin{Def} Let $(M,\phi)$ be a $G_2$-manifold and let $L$ be a 3-dimensional oriented submanifold of $M$. $L$ carries a volume form $vol$ that is determined by the restriction of the induced metric to $L$ and its orientation. We call $L$ an \emph{associative submanifold of $M$} if for all $p\in L$ we have $\phi|_{T_pL}=vol_p$. \end{Def} Since $G_2$ is the automorphism group of the octonions $\mathbb{O}$, a $G_2$-structure yields an identification of each tangent space with the imaginary space of the octonions $\text{Im}(\mathbb{O})$. An associative submanifold is characterized by the condition that each of its tangent spaces is associative or equivalently that it can be identified with $\varphi(\text{Im}( \mathbb{H}))$ for a $\varphi\in G_2$. Associative submanifolds are a special case of calibrated submanifolds. For a detailed introduction to this subject we refer the reader to the papers by Harvey and Lawson \cite{HarveyLawson} and by McLean \cite{McLean}. \subsection{The construction of Joyce and Karigiannis} \label{JoyceKarigiannisSection} The main result of \cite{JoyceKarigiannis} can be stated as follows. \begin{Th} \label{Thm-Joyce-Karigiannis} (Theorem 6.4. in \cite{JoyceKarigiannis}) Let $(M,\phi)$ be a compact manifold with a torsion-free $G_2$-structure and let $\imath:M\rightarrow M$ be an involution that preserves $\phi$. We assume that $\imath$ is not the identity and has at least one fixed point. In this situation, the fixed point set of $\imath$ is a compact, not necessarily connected associative submanifold $L$. We assume that there exists a closed and coclosed 1-form $\lambda$ on $L$ without zeroes. The quotient $M/\langle \imath \rangle$ is a $G_2$-orbifold with an $A_1$-singularity along $L$, which means that all fibers of the normal bundle of $L$ can be identified with $\mathbb{C}^2/\{\pm\text{Id}\}$. With help of $\lambda$, it is possible to construct a bundle $\sigma:P \rightarrow L$ whose fibers are Eguchi-Hanson spaces. By cutting out a neighbourhood of $L\subseteq M/\langle \imath \rangle$ and glueing in $P$, we obtain a resolution $\pi: N \rightarrow M/\langle \imath \rangle$, where $N$ is a smooth manifold. 
The 5-dimensional manifold $\pi^{-1}(L)$ is a bundle over $L$ whose fibers are diffeomorphic to $S^2$. In this situation, $N$ carries a torsion-free $G_2$-structure. In particular, $N$ is a $G_2$-manifold if $\pi_1(N)$ is finite. \end{Th} \begin{Rem} \begin{enumerate} \item For the proof of the theorem it is not important that the $A_1$-singularities are obtained by dividing $M$ by an involution. In fact, we may replace $M/\langle \imath \rangle$ by an arbitrary $G_2$-orbifold with $A_1$-singularities. \item For reasons of brevity, we call $N$ the resolution of $M/\langle\imath\rangle$, although the process that is described in the above theorem is more complicated than the resolution of singularities in algebraic geometry. \item An Eguchi-Hanson space is the blow-up of $\mathbb{C}^2/\{\pm Id\}$ together with a hyper-K\"ahler metric that approaches the standard Hermitian metric on $\mathbb{C}^2/\{\pm Id\}$ at infinity. Since $\mathbb{C}^2$ can be identified with $\mathbb{H}$, we have a sphere of complex structures on $\mathbb{C}^2/\{\pm Id\}$. This sphere can be identified with $S^2\subseteq \text{Im}(\mathbb{H})$. We are free to blow up the singularity with respect to any of these complex structures. The value of the 1-form $\lambda$ at $p\in L$ is an element of $T^{\ast}_p L \setminus \{0\}$. Since $L$ is associative, the normalization of $\lambda_p$ to unit length can be thought of as an element of $S^2\subseteq \text{Im}( \mathbb{H})$, too. Thus it determines a complex structure on the fiber of the normal bundle. Therefore, it is possible to use $\lambda$ to define the fibers of $P$. The details of this construction are important for the proof of Theorem \ref{Thm-Joyce-Karigiannis}, but they do not influence the topology of the $G_2$-manifold. We refer the reader to \cite{JoyceKarigiannis} for further information. \item Although it is easy to decide if a closed and coclosed one-form $\lambda$ exists on $L$, it is in general not possible to decide if $\lambda$ has zeroes without information on the metric on $L$. Fortunately, the submanifolds $L$ that we encounter are Riemannian products of $S^1$ and a two-dimensional manifold. Thus we can choose $\lambda = d\theta$ where $\theta$ is the angle that parameterizes $S^1$. \end{enumerate} \end{Rem} The Betti numbers and the fundamental group of $N$ can be computed fairly easily. \begin{Co} \label{JoyceKarigiannisBetti} (cf. Section 6.5 in \cite{JoyceKarigiannis}) In the situation of the above two theorems we have \[ b^k(N) = b^k(M/\langle\imath\rangle) + b^{k-2}(L) \] for all $k\in\{0,\ldots,7\}$, where $b^{-2}(L)$ and $b^{-1}(L)$ are defined as $0$. Moreover, the fundamental groups of $N$ and $M/\langle \imath \rangle$ are isomorphic. \end{Co} If we replace $\lambda$ with $-\lambda$, the complex structure $I$ on $\mathbb{C}^2/\{\pm Id\}$ is replaced by $-I$. Since this does not change the blow-up map and the sign of $\lambda$ plays no role in the other parts of the proof of Theorem \ref{Thm-Joyce-Karigiannis}, it suffices that $\lambda$ is defined up to a sign. We make this notion more explicit. Let $\pi:Z\rightarrow L$ be a $\mathbb{Z}_2$-principal bundle over $L$. We can consider the bundles $\bigwedge^k T^{\ast} L \otimes_{\mathbb{Z}_2} Z$ over $L$. The fibers of these bundles are isomorphic to $\bigwedge^k T^{\ast} L$ and its sections are called \emph{$Z$-twisted $k$-forms}. 
If we have a local trivialization $Z|_U \cong U\times \mathbb{Z}_2$ over an open subset $U\subset L$, we can identify $Z$-twisted $k$-forms naturally with ordinary $k$-forms. The operators $d$ and $d^{\ast}$ induce operators on the bundles $\bigwedge^k T^{\ast} L \otimes_{\mathbb{Z}_2} Z$ and therefore it makes sense to talk about a closed and coclosed $Z$-twisted 1-form on $L$. Moreover, we can define the \emph{$Z$-twisted de Rham cohomology} as the cohomology of the complex $(\Gamma(\bigwedge^k T^{\ast} L \otimes_{\mathbb{Z}_2} Z)_{k\geq 0},d)$ and the \emph{$Z$-twisted Betti numbers $b^k(L,Z)$} as the dimensions of these cohomology groups. With this notation, there is the following generalization of Theorem \ref{Thm-Joyce-Karigiannis}. \begin{Co} \label{Thm-Joyce-Karigiannis-Twisted} Let $(M,\phi)$, $\imath$ and $L$ be as in Theorem \ref{Thm-Joyce-Karigiannis}. We assume that there exists a closed, coclosed, non-vanishing $Z$-twisted 1-form $\lambda$ on $L$ for a $\mathbb{Z}_2$-bundle $Z$ on $L$. In this situation, the singularities of $M/\langle \imath \rangle$ can be resolved such that the resolved manifold $N$ carries a torsion-free $G_2$-structure. Moreover, we have \[ b^k(N) = b^k(M/\langle\imath\rangle) + b^{k-2}(L,Z) \] for all $k\in\{0,\ldots,7\}$ and the fundamental groups of $N$ and $M/\langle \imath \rangle$ are isomorphic. \end{Co} \section{K3 surfaces with non-symplectic $\mathbb{Z}^2_2$-actions} \label{K3-Section} \subsection{K3 surfaces with non-symplectic involutions} We will see that any element of the group $\mathbb{Z}^2_2$ that acts on $S\times T^3$ acts on $S$ as a non-symplectic involution which is holomorphic with respect to one of the complex structures $I$, $J$ and $K$. Therefore, this section contains a brief introduction to K3 surfaces with non-symplectic involutions. A slightly more detailed account can be found in \cite{KoLe} or \cite{ReidK3}. For an in-depth introduction to the theory of K3 surfaces we refer the reader to \cite[Chapter VIII]{BHPV}, and for the details of the classification of non-symplectic involutions to the original papers of Nikulin \cite{Nikulin1,Nikulin2,Nikulin3}. A K3 surface is a compact, simply connected, complex surface with tri\-vial canonical bundle. The underlying real $4$-dimensional manifold of a K3 surface is of a fixed diffeomorphism type. Therefore, any K3 surface $S$ has the same topological invariants. The Hodge numbers of $S$ are determined by $h^{0,0}(S)= h^{2,0}(S)=1$, $h^{1,0}(S)=0$ and $h^{1,1}(S)=20$. $H^2(S,\mathbb{Z})$ together with the intersection form is an even, unimodular lattice with signature $(3,19)$. Up to isometries, the only lattice with these properties is the \emph{K3 lattice} \begin{equation*} L := 3 H \oplus 2(-E_8) \:, \end{equation*} where $H$ is the hyperbolic plane lattice with the quadratic form $q(x,y):=2xy$ and $-E_8$ is the root lattice of $E_8$ together with the negative of the usual bilinear form. We often write \begin{equation*} L = H_1 \oplus H_2 \oplus H_3 \oplus (-E_8)_1 \oplus (-E_8)_2 \end{equation*} in order to distinguish between the different summands. We choose a basis $(w_1,\ldots,w_{22})$ of $L$ such that $(w_{2j+1}, w_{2j+2})$ with $j=0,1,2$ is a basis of $H_{j+1}$ and \begin{equation*} w_{2j+1}\cdot w_{2j+1} = w_{2j+2}\cdot w_{2j+2} = 0\:, \quad w_{2j+1}\cdot w_{2j+2} = 1\:.
\end{equation*} Moreover, $(w_{7+8j},\ldots,w_{14+8j})$ with $j=0,1$ shall be a basis of $(-E_8)_{j+1}$ such that the matrix representation of the bilinear form with respect to $(w_{7+8j},\ldots,w_{14+8j})$ is the negative of the Cartan matrix of $E_8$, which means that $w_k\cdot w_k=-2$ for $k\in\{7, \ldots,22\}$ and for $k,l\in\{7,\ldots,22\}$ with $k\neq l$ we have $w_k\cdot w_l \in \{0,1\}$. We call $(w_1,\ldots,w_{22})$ the \emph{standard basis of $L$}. Any K3 surface $S$ admits a K\"ahler metric. Since $S$ has trivial canonical bundle, there exists a unique Ricci-flat K\"ahler metric in each K\"ahler class. The holonomy group $SU(2)$ is isomorphic to $Sp(1)$. Therefore, the Ricci-flat K\"ahler metrics are in fact hyper-K\"ahler. In principle, a hyper-K\"ahler structure on a K3 surface is determined by the cohomology classes $[\omega_1],[\omega_2],[\omega_3]\in H^2(S,\mathbb{R})$ of the three K\"ahler forms. In order to make this statement precise, we introduce the following notions. \begin{Def} \begin{enumerate} \item Let $S$ be a K3 surface. A lattice isometry $\phi:H^2(S,\mathbb{Z}) \rightarrow L$ is called a \emph{marking of} $S$. The pair $(S,\phi)$ is called a \emph{marked K3 surface}. \item A \emph{hyper-K\"ahler structure on a marked K3 surface} is a tuple\linebreak $(S,\phi,g,\omega_1,\omega_2,\omega_3)$, where $g$ is a hyper-K\"ahler metric and $\omega_i$ with $i\in\{1,2,3\}$ are the K\"ahler forms with respect to the complex structures $I_i$ that satisfy $I_1I_2I_3=-1$. We assume that $S$ has the orientation that makes $\omega_1 \wedge \omega_1$ positive. \item Two tuples $(S_1,\phi_1,g_1,\omega^1_1,\omega^1_2,\omega^1_3)$ and $(S_2,\phi_2,g_2,\omega^2_1,\omega^2_2,\omega^2_3)$ are \emph{isometric} if there exists a map $f:S_1\rightarrow S_2$ with $f^{\ast}g_2=g_1$, $f^{\ast}\omega^2_i = \omega^1_i$ for $i\in\{1,2,3\}$ and $\phi_1\circ f^{\ast} = \phi_2$. The \emph{moduli space of marked K3 surfaces with a hyper-K\"ahler structure} is the class of all tuples $(S,\phi,g,\omega_1,\omega_2,\omega_3)$ modulo isometries. \end{enumerate} \end{Def} We write $L_{\mathbb{R}}$ for $L\otimes\mathbb{R}$ and define \begin{equation*} \begin{aligned} \Omega:=\{ & (x,y,z)\in L_{\mathbb{R}}^3 | x^2 = y^2 = z^2>0,\: x\cdot y = x\cdot z = y\cdot z =0, \\ & \!\! \not\exists\: d\in L \:\:\text{with}\:\: d^2 = -2 \:\:\: \text{and} \:\:\: x\cdot d = y\cdot d = z\cdot d=0 \}\:. \\ \end{aligned} \end{equation*} The \emph{hyper-K\"ahler period map} $p$ that is defined by \[ p(S,\phi,g,\omega_1,\omega_2,\omega_3) = (\phi([\omega_1]), \phi([\omega_2]),\phi([\omega_3])) \] is a diffeomorphism between the hyper-K\"ahler moduli space and $\Omega$. This fact can also be found in \cite[Sec. 12.K]{Besse} and \cite[Sec. 7.3]{Joyce}. The reason that we exclude triples $(x,y,z)$ that are orthogonal to some $d\in L$ with $d^2=-2$ from $\Omega$ is that they correspond to K3 surfaces with singularities. The following lemma on isometries between K3 surfaces, which we have proven in \cite{ReidK3}, will be useful later on. \begin{Le} \label{IsomLem} Let $S_j$ with $j\in\{1,2\}$ be K3 surfaces together with hyper-K\"ahler metrics $g_j$ and K\"ahler forms $\omega^j_i$ with $i\in\{1,2,3\}$. Moreover, let $V_j\subseteq H^2(S_j,\mathbb{R})$ be the subspace that is spanned by the $[\omega^j_i]$, $i=1,2,3$. \begin{enumerate} \item Let $f:S_1\rightarrow S_2$ be an isometry. The pull-back $f^{\ast}:H^2(S_2,\mathbb{Z}) \rightarrow H^2(S_1,\mathbb{Z})$ is a lattice isometry. Its $\mathbb{R}$-linear extension maps $V_2$ to $V_1$.
\item \label{Isom} Let $\psi:H^2(S_2,\mathbb{Z}) \rightarrow H^2(S_1,\mathbb{Z})$ be a lattice isometry. We denote the maps $\psi\otimes\mathbb{K}$ with $\mathbb{K}\in\{\mathbb{R},\mathbb{C}\}$ by $\psi$, too. We assume that $\psi(V_2)=V_1$. Then there exists an isometry $f:S_1\rightarrow S_2$ such that $f^{\ast} = \psi$. \item Let $f:S\rightarrow S$ be an isometry that acts as the identity on $H^2(S,\mathbb{Z})$. Then, $f$ itself is the identity map. As a consequence, the isometry from (\ref{Isom}) is unique. \end{enumerate} \end{Le} A \emph{non-symplectic involution} of a K3 surface $S$ is a holomorphic involution $\rho:S\rightarrow S$ such that $\rho$ acts as $-1$ on $H^{2,0}(S)$. Any K3 surface with a non-symplectic involution admits a K\"ahler class that is invariant under $\rho$. Therefore, there exists a hyper-K\"ahler structure on $S$ such that \[ \rho^{\ast}\omega_1 = \omega_1\:, \qquad \rho^{\ast}\omega_2 = - \omega_2\:, \qquad \rho^{\ast}\omega_3 = -\omega_3\:. \] Let $(S,\phi)$ be a marked K3 surface and $\rho:S\rightarrow S$ be a non-symplectic involution. The \emph{fixed lattice of $\rho$} is defined as \begin{equation*} L^{\rho} := \{x\in L | (\phi\circ\rho^{\ast}\circ\phi^{-1})(x) = x \}\:. \end{equation*} $L^{\rho}$ is a non-degenerate sublattice of $L$ that is primitively embedded, which means that $L/L^{\rho}$ has no torsion. Its signature is $(1,r-1)$, where $r$ is the rank of $L^{\rho}$. A lattice with this kind of signature is called \emph{hyperbolic}. Moreover, $L^{\rho}$ is \emph{2-elementary} which means that $L^{\rho\ast}/L^{\rho}$ is isomorphic to a group of type $\mathbb{Z}_2^a$. The number $a\in\mathbb{N}_0$ is a second invariant of $L^{\rho}$. We define a third invariant $\delta$ by \begin{equation*} \delta := \begin{cases} 0 & \text{if $x^2\in\mathbb{Z}$ for all $x\in L^{\rho\ast}$} \\ 1 & otherwise \\ \end{cases} \end{equation*} These invariants can in fact be defined for any $2$-elementary lattice. If we assume that the lattice is in addition even and hyperbolic, there is at most one lattice with invariants $(r,a,\delta)$. Nikulin \cite{Nikulin3} has shown that the deformation classes of K3 surfaces with a non-symplectic involution can be classified in terms of the triples $(r,a,\delta)$. There exist $75$ triples that satisfy \begin{equation*} 1\leq r\leq 20\:,\quad 0\leq a\leq 11\quad\text{and}\quad r-a\geq 0\:. \end{equation*} A figure with a graphical representation of all possible values of $(r,a,\delta)$ can be found in \cite{KoLe, Nikulin3}. For the computation of the Betti numbers of our $G_2$-manifolds we need the following theorem about the fixed locus of a non-symplectic involution. \begin{Th} \label{FixedLocusTheorem} (cf. \cite{KoLe,Nikulin3}) Let $\rho:S\rightarrow S$ be a non-symplectic involution of a K3 surface and let $(r,a,\delta)$ be the invariants of its fixed lattice. The fixed locus $S^{\rho}$ of $\rho$ is a disjoint union of complex curves. \begin{enumerate} \item If $(r,a,\delta)=(10,10,0)$, $S^{\rho}$ is empty. \item If $(r,a,\delta)=(10,8,0)$, $S^{\rho}$ is the disjoint union of two elliptic curves. \item In the remaining cases, we have \begin{equation*} S^{\rho} = C_g \cup E_1 \cup \ldots \cup E_k\:, \end{equation*} where $C_g$ is a curve of genus $g=\frac{1}{2}(22 - r - a)$ and the $E_i$ are $k=\frac{1}{2}(r-a)$ rational curves. 
\end{enumerate} \end{Th} \subsection{Classification results} \label{NonSymplecticZ22} For our construction of $G_2$-manifolds we need K3 surfaces $S$ with a pair of commuting involutive isometries $(\rho^1,\rho^2)$ that satisfy the following equations: \begin{equation} \label{KahlerRelations1} \begin{aligned} & {\rho^1}^{\ast}\omega_1 = \omega_1\:, \qquad {\rho^1}^{\ast}\omega_2 = - \omega_2\:, \qquad {\rho^1}^{\ast}\omega_3 = -\omega_3 \\ & {\rho^2}^{\ast}\omega_1 = - \omega_1\:, \qquad {\rho^2}^{\ast}\omega_2 = \omega_2\:, \qquad {\rho^2}^{\ast}\omega_3 = - \omega_3 \end{aligned} \end{equation} $\rho^3=\rho^1\rho^2$ is a third involution that satisfies \[ {\rho^3}^{\ast}\omega_1 = - \omega_1\:, \qquad {\rho^3}^{\ast}\omega_2 = -\omega_2\:, \qquad {\rho^3}^{\ast}\omega_3 = \omega_3 \] The set $(\text{Id},\rho^1,\rho^2,\rho^3)$ is a group that is isomorphic to $\mathbb{Z}^2_2$. $\rho^i$ with $i=1,2,3$ preserves the hyper-K\"ahler metric and the symplectic form $\omega_i$. Therefore, it is a biholomorphic map with respect to the complex structure $I_i$. Moreover, it maps the holomorphic $(2,0)$-form with respect to $I_i$ to its negative. All in all, $\rho^i$ is a non-symplectic involution that is holomorphic with respect to $I_i$. For this reason, we call the triple $(S,\rho^1,\rho^2)$ \emph{a K3 surface with a non-symplectic $\mathbb{Z}^2_2$-action}. Later on, we extend the $\rho^i$ with help of maps $\alpha^i:T^3 \rightarrow T^3$ to involutions of $S\times T^3$. Since the pairs $(\rho^1\times \alpha^1,\rho^2\times \alpha^2)$ and $(\rho^2\times \alpha^1,\rho^1\times \alpha^2)$ yield different quotients $(S\times T^3)/\mathbb{Z}^2_2$, we consider $(\rho^1,\rho^2)$ and $(\rho^2,\rho^1)$ as different objects. We should also keep in mind that if $(\rho^1,\rho^2)$ generates a non-symplectic $\mathbb{Z}^2_2$-action, the same is true for all $(\rho^i,\rho^j)$ with $i,j\in\{1,2,3\}$ and $i\neq j$. In order to find a classification result for non-symplectic $\mathbb{Z}^2_2$-actions that can be proven easily, we restrict ourselves to a special kind of non-symplectic involutions. \begin{Def} \label{SimpleInvolution} Let $S$ be a K3 surface and let $\rho:S\rightarrow S$ be a non-symplectic involution. We call $\rho$ a \emph{simple non-symplectic involution} if there exists a marking $\phi:H^2(S,\mathbb{Z})\rightarrow L$ such that for all $i\in\{1,\ldots,22\}$ there exists a $j\in\{1,\ldots,22\}$ with $\rho(w_i)= \pm w_j$, where $\rho$ is an abbreviation for $\phi\circ\rho^{\ast}\circ\phi^{-1}$ and $(w_1,\ldots,w_{22})$ is the standard basis of $L$. \end{Def} In \cite{ReidK3} we classified non-symplectic $\mathbb{Z}^2_2$-actions on K3 surfaces in terms of the invariants $(r_i,a_i,\delta_i)$ of $\rho^i$, where $i\in\{1,2\}$. We made the assumptions that $\rho^1$ and $\rho^2$ are simple and that the markings from the Definition \ref{SimpleInvolution} are the same for $\rho^1$ and $\rho^2$. In Section \ref{The_third_case}, we will need information on the fixed loci of $\rho^1$, $\rho^2$ and $\rho^3$ in order to construct $G_2$-manifolds and to compute their Betti numbers. Since the invariants $(r_3,a_3,\delta_3)$ are not determined by $(r_1,a_1,\delta_1)$ and $(r_2,a_2,\delta_2)$, we need a refined version of our results from \cite{ReidK3}. Since we make much use of the methods that we have developed in \cite{ReidK3}, we recommend the reader to take a look at that paper. 
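Since the topology of the fixed loci of the $\rho^i$ will be needed below, it may be convenient to record Theorem \ref{FixedLocusTheorem} as a small Python helper (a bookkeeping sketch only; the function name is ours). It returns the list of the genera of the curves in $S^{\rho}$ from the invariants $(r,a)$; the invariant $\delta$ plays no role here.
\begin{verbatim}
def fixed_locus_genera(r, a):
    # genera of the curves in the fixed locus of a non-symplectic
    # involution with invariants (r, a), following the theorem above
    if (r, a) == (10, 10):
        return []            # empty fixed locus
    if (r, a) == (10, 8):
        return [1, 1]        # two disjoint elliptic curves
    g = (22 - r - a) // 2    # one curve C_g of genus g ...
    k = (r - a) // 2         # ... and k rational curves E_1, ..., E_k
    return [g] + [0] * k

print(fixed_locus_genera(1, 1))    # [10]: a single curve of genus 10
print(fixed_locus_genera(10, 8))   # [1, 1]
\end{verbatim}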
In \cite{ReidK3} we have shown that any simple non-symplectic involution $\rho$ maps an $H_i$ to an $H_j$ and an $(-E_8)_i$ to an $(-E_8)_j$. Since we have $\rho(w_i)= \pm w_j$ and each restriction $\rho|_{H_i}:H_i\rightarrow H_j$ has to be a lattice isometry, we can conclude after some calculations that each $\rho|_{H_i}$ is given by one of the following four matrices: \[ M_1 := \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ \end{pmatrix} \:, \quad M_2 := \begin{pmatrix} -1 & 0 \\ 0 & -1 \\ \end{pmatrix} \:, \quad M_3 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} \:, \quad M_4 := \begin{pmatrix} 0 & -1 \\ -1 & 0 \\ \end{pmatrix} \] Although the two matrices $M_3$ and $M_4$ are conjugate by a matrix in $GL(2,\mathbb{R})$, the eigenspace of $M_3$ to the eigenvalue $1$ is spanned by a positive vector while the eigenspace to the same eigenvalue of $M_4$ is spanned by a negative vector. Therefore, $M_3$ and $M_4$ should be considered as two distinct cases. We denote the restriction of $\rho$ to $3H$ by $\rho':3H\rightarrow 3H$. If $\rho'(H_i)=H_i$ for all $i\in\{1,2,3\}$, $\rho'$ is a block matrix with $3$ blocks $M_j$, $M_k$ and $M_l$ along the diagonal. We denote this diagonal block matrix by $\rho'_{jkl}$. Since the fixed lattice of $\rho$ has to be hyperbolic, one of the blocks is $M_1$ or $M_3$, which fix a positive vector, and the other ones have to be $M_2$ or $M_4$, which fix no positive vectors. Up to a permutation of the indices, $\rho'$ is one of the following \[ \rho'_{122}\:,\qquad \rho'_{124}\:,\qquad \rho'_{144}\:,\qquad \rho'_{322}\:,\qquad \rho'_{324}\:,\qquad \rho'_{344}\:. \] Let $L_1,\ldots,L_n$ be $2$-elementary lattices with invariants $(r_i,a_i,\delta_i)$. The invariants of the direct sum $L_1\oplus \ldots \oplus L_n$ are \[ (r_1+\ldots+r_n,a_1+\ldots+a_n,\max\{\delta_1,\ldots,\delta_n\}) \] We compute the fixed lattices of the $M_j$ and their invariants. After that, we obtain the invariants $(r',a',\delta')$ of the fixed lattices of the $\rho'_{jkl}$: \begin{center} \label{table_rho_jkl} \begin{tabular}{l|l|l} $\rho|_{3H}$ & Fixed lattice & $(r',a',\delta')$ \\ \hline $\rho'_{122}$ & $H$ & (2,0,0) \\ $\rho'_{124}$ & $H\oplus\mathbf{1}(-2)$ & (3,1,1) \\ $\rho'_{144}$ & $H\oplus\mathbf{1}(-2)\oplus\mathbf{1}(-2)$ & (4,2,1) \\ $\rho'_{322}$ & $\mathbf{1}(2)$ & (1,1,1) \\ $\rho'_{324}$ & $\mathbf{1}(2)\oplus\mathbf{1}(-2)$ & (2,2,1) \\ $\rho'_{344}$ & $\mathbf{1}(2)\oplus\mathbf{1}(-2)\oplus\mathbf{1}(-2)$ & (3,3,1) \\ \end{tabular} \end{center} In the above table, $\mathbf{1}$ denotes the 1-dimensional lattice that is generated by a single element of length $1$ and $L(k)$ with $k\in\mathbb{Z}$ denotes the lattice whose bilinear form is $k$ times the bilinear form of $L$. If there exist $i,j\in\{1,2,3\}$ with $i\neq j$ and $\rho'(H_i)=H_j$, the situation is more complicated. Since $\rho'$ is an involution, we have $\rho'(H_j)=H_i$ and the third sublattice of $3H$ is fixed. Without loss of generality, we assume that $\rho'(H_1)=H_2$, $\rho'(H_2)=H_1$ and $\rho'(H_3)=H_3$. Moreover, it follows from ${\rho'}^2=\text{Id}$ that the matrix representations of $\rho|_{H_1}$ and $\rho|_{H_2}$ have to be the same. This means that the matrix representation of $\rho'$ is given by \[ \hat{\rho}'_{km} := \begin{pmatrix} 0 & M_k & 0 \\ M_k & 0 & 0 \\ 0 & 0 & M_m \\ \end{pmatrix} \] Our next step is to determine the fixed lattice of $\hat{\rho}'_{km}$. We write the elements of $3H$ as $(v_1,v_2,v_3)^{\top}$ with $v_i\in H_i$. 
By a short calculation we see that the fixed lattice is given by \[ \left\{ \begin{pmatrix} v \\ M_k v \\ w \end{pmatrix} \middle| v\in H,\: M_m w = w \right\} \] The length of an element of the fixed lattice is \[ 2v^2 + w^2 \] The fixed lattice contains a positive vector, namely $(v,M_kv,0)^{\top}$ with $v^2>0$. Since the fixed lattice has to be hyperbolic, all nonzero vectors $w$ that are fixed by $M_m$ have to be negative. This implies that $m\in \{2,4\}$, while $k\in\{1,\ldots,4\}$ may be arbitrary. All in all, we have found $8$ additional possibilities for $\rho'$. If $m=2$, the fixed lattice of $\hat{\rho}'_{km}$ is isometric to $H(2)$ and has invariants $(2,2,0)$. If $m=4$, the fixed lattice is isometric to $H(2) \oplus \mathbf{1}(-2)$ and has invariants $(3,3,1)$. In the case $m=4$, $\hat{\rho}'_{km}$ is actually conjugate to $\rho'_{344}$, but in the case $m=2$ it is not conjugate to any $\rho'_{jkl}$ since the invariants $(2,2,0)$ do not appear in the table from page \pageref{table_rho_jkl}. We denote the restriction of $\rho$ to $2(-E_8)$ by $\rho'':2(-E_8) \rightarrow 2(-E_8)$. In \cite{ReidK3} we have shown that the restrictions $\rho|_{(-E_8)_i}:(-E_8)_i \rightarrow (-E_8)_j$ have to be plus or minus the identity. Therefore, $\rho''$ has to be one of the matrices below. \begin{align*} & \rho''_1 = \begin{pmatrix} I_8 & 0 \\ 0 & I_8 \\ \end{pmatrix} \quad \rho''_2 = \begin{pmatrix} I_8 & 0 \\ 0 & -I_8 \\ \end{pmatrix} \quad \rho''_3 = \begin{pmatrix} -I_8 & 0 \\ 0 & I_8 \\ \end{pmatrix} \\ & \rho''_4 = \begin{pmatrix} -I_8 & 0 \\ 0 & -I_8 \\ \end{pmatrix} \quad \rho''_5 = \begin{pmatrix} 0 & I_8 \\ I_8 & 0 \\ \end{pmatrix} \quad \rho''_6 = \begin{pmatrix} 0 & -I_8 \\ -I_8 & 0 \\ \end{pmatrix} \\ \end{align*} $\rho''_2$ is conjugate to $\rho''_3$ by a lattice isometry and the same is true for $\rho''_5$ and $\rho''_6$. No other pair of matrices from the above list is conjugate to each other. The invariants $(r'',a'',\delta'')$ of the fixed lattice of $\rho''$ can be found in the table below. \begin{center} \begin{tabular}{l|l|l} $\rho''$ & Fixed lattice & $(r'',a'',\delta'')$ \\ \hline $\rho''_1$ & $2(-E_8)$ & (16,0,0) \\ $\rho''_2$ & $-E_8$ & (8,0,0) \\ $\rho''_4$ & $\{0\}$ & (0,0,0) \\ $\rho''_5$ & $-E_8(2)$ & (8,8,0) \\ \end{tabular} \end{center} By writing $\rho$ as $\rho'\oplus\rho''$ and computing its invariants, we obtain our result from \cite{ReidK3}, which states that a non-symplectic involution is simple if and only if $(r,a,\delta)$ is an element of a list of $28$ triples. We prove a classification result for non-symplectic $\mathbb{Z}^2_2$-actions. For reasons of simplicity, we make the following assumptions. \begin{enumerate} \item The generators $\rho^i$ with $i=1,2$ of the group $\mathbb{Z}^2_2$ are both simple involutions. \item According to Definition \ref{SimpleInvolution}, there exist markings $\phi^i:H^2(S,\mathbb{Z})\rightarrow L$ such that $\phi^i\circ {\rho^i}^{\ast} \circ (\phi^i)^{-1}$ has the desired matrix representation. In this article, we always assume that $\phi^1=\phi^2$ although further examples of non-symplectic $\mathbb{Z}^2_2$-actions may exist. \item Let $L_i\subseteq L$ be the fixed lattice of $\rho^i$. We restrict ourselves to the case that $L_1\cap L_2 = \{0\}$. We have proven in \cite{ReidK3} that under this assumption we can find a smooth K3 surface with a non-symplectic $\mathbb{Z}^2_2$-action and fixed lattices $L_1$ and $L_2$ by choosing the three K\"ahler classes sufficiently generic.
\end{enumerate} The $\rho^i$ can be written as ${\rho^i}'\oplus{\rho^i}''$ with ${\rho^i}': 3H \rightarrow 3H$ and ${\rho^i}'':2(-E_8)\rightarrow 2(-E_8)$. We start with the easier part and classify the pairs $({\rho^1}'',{\rho^2}'')$. Since ${\rho^i}'' \in \{\rho''_1,\ldots,\rho''_6 \}$, we have to check for all $j_1,j_2\in \{1,\ldots,6\}$ if $\rho''_{j_1}$ and $\rho''_{j_2}$ commute and if the sublattice of all vectors that are invariant under both maps is trivial. If we count only those pairs $(\rho''_{j_1},\rho''_{j_2}) $ that cannot be obtained from each other by conjugating both maps $\rho''_{j_i}$ by an isometry of $2(-E_8)$, the following pairs $(j_1,j_2)$ remain. \[ (1,4)\:,\quad (2,3)\:,\quad (2,4)\:,\quad (4,1)\:,\quad (4,2)\:,\quad (4,4)\:,\quad (4,5)\:,\quad (5,4)\:,\quad (5,6) \] Our next step is to classify the pairs $({\rho^1}',{\rho^2}')$. We start with the case where ${\rho^1}'$ and ${\rho^2}'$ map each $H_k$ to itself. Since the four matrices $M_1,\ldots,M_4$ commute pairwise, any choice of the ${\rho^i}'$ yields commuting involutions. The fixed lattice of a non-symplectic involution is hyperbolic. Therefore, each ${\rho^i}'$ has to fix exactly one positive vector. Since we can permute the three spaces $H_1$, $H_2$ and $H_3$, we can assume without loss of generality that ${\rho^1}'|_{H_1}$ and ${\rho^2}'|_{H_2}$ preserve a positive vector. These two restrictions have to be $M_1$ or $M_3$. The restrictions to the other $H_k$ have to be $M_2$ or $M_4$. As before, we have to take care that there are no lattice vectors in $3H$ that are fixed by both ${\rho^i}'$. The set of all pairs $(mn)$ with the property that the intersection of the eigenspaces of $M_m$ and $M_n$ to the eigenvalue $1$ is trivial consists of: \[ (1,2) \quad (2,1)\quad (2,2)\quad (2,3)\quad (2,4)\quad (3,2)\quad (3,4)\quad (4,2)\quad (4,3) \] With help of the above list, we find the following possibilities for ${\rho^1}'$ and ${\rho^2}'$. A triple $(p,q,r)$ in the $i$th column of the table means that ${\rho^i}'=\rho'_{pqr}$. \begin{center} \begin{tabular}{ccccc} \begin{tabular}{l|l} ${\rho^1}'$ & ${\rho^2}'$ \\ \hline \hline (1,2,2) & (2,1,2) \\ (1,2,2) & (2,1,4) \\ (1,2,2) & (2,3,2) \\ \hline (1,2,2) & (2,3,4) \\ (1,2,4) & (2,1,2) \\ (1,2,4) & (2,3,2) \\ \hline (1,4,2) & (2,3,2) \\ (1,4,2) & (2,3,4) \\ (1,4,4) & (2,3,2) \\ \end{tabular} & $\qquad$ & \begin{tabular}{l|l} ${\rho^1}'$ & ${\rho^2}'$ \\ \hline \hline (3,2,2) & (2,1,2) \\ (3,2,2) & (2,1,4) \\ (3,2,2) & (2,3,2) \\ \hline (3,2,2) & (2,3,4) \\ (3,2,2) & (4,1,2) \\ (3,2,2) & (4,1,4) \\ \hline (3,2,2) & (4,3,2) \\ (3,2,2) & (4,3,4) \\ (3,2,4) & (2,1,2) \\ \end{tabular} & $\qquad$ & \begin{tabular}{l|l} ${\rho^1}'$ & ${\rho^2}'$ \\ \hline \hline (3,2,4) & (2,3,2) \\ (3,2,4) & (4,1,2) \\ (3,2,4) & (4,3,2) \\ \hline (3,4,2) & (2,3,2) \\ (3,4,2) & (2,3,4) \\ (3,4,2) & (4,3,2) \\ \hline (3,4,2) & (4,3,4) \\ (3,4,4) & (2,3,2) \\ (3,4,4) & (4,3,2) \\ \end{tabular} \\ \end{tabular} \end{center} \begin{Rem} \label{RemarkLatticeInvariants} In the above table, there are a few cases where the invariants of ${\rho^1}'$ and ${\rho^2}'$ are the same but the $\mathbb{Z}^2_2$-actions cannot be obtained from each other by conjugating with a lattice isometry. An example can be found in the 6th and 7th row of the table. The fixed lattice of ${\rho^2}'$ is in both cases $\mathbf{1}(2)$ which is embedded into $H_2$. 
The fixed lattice of ${\rho^1}'$ is $H_1\oplus \mathbf{1}(-2)$, but in the 6th line $\mathbf{1}(-2)$ is embedded into $H_3$ and in the 7th line it is embedded into $H_2$. In the 6th line $\mathbf{1}(-2)$ is embedded into a different $H_k$ as $\mathbf{1}(2)$ and in the 7th line they are both embedded into $H_2$. Therefore, the $\mathbb{Z}^2_2$-actions are inequivalent although the fixed lattices are isomorphic. Since it is not clear if we can obtain topologically different $G_2$-manifolds from those two pairs, we will consider them as two distinct cases. \end{Rem} Next, we assume that ${\rho^1}'$ interchanges two of the $H_k$ and ${\rho^2}'$ leaves all $H_k$ invariant. Without loss of generality ${\rho^1}'$ interchanges $H_1$ and $H_2$. We recall that ${\rho^1}'$ is a matrix of type \[ \hat{\rho}'_{km} = \begin{pmatrix} 0 & M_k & 0 \\ M_k & 0 & 0 \\ 0 & 0 & M_m \\ \end{pmatrix} \] with $k\in\{1,\ldots,4\}$ and $m\in\{2,4\}$. $\hat{\rho}'_{km}$ fixes exactly one positive vector and this vector is an element of $H_1\oplus H_2$. By a short calculation we see that a diagonal block matrix $\rho'_{pqr}$ commutes with $\hat{\rho}'_{km}$ if and only if the following three equations are satisfied \begin{eqnarray*} M_k M_q & = & M_p M_k \\ M_k M_p & = & M_q M_k \\ M_m M_r & = & M_r M_m \\ \end{eqnarray*} Since the matrices $M_1,\ldots,M_4$ commute pairwise, the third equation is automatically satisfied. The first two equations are equivalent to \[ M_q = M_k M_p M_k \] since $M_k^2=1$. Again we use the fact that $M_1,\ldots,M_4$ commute and conclude that $p=q$. A linear map $\rho'_{ppr}$ fixes exactly one positive vector if and only if $p\in\{2,4\}$ and $r\in\{1,3\}$. In all cases, the fixed vector is an element of $H_3$ and thus the intersection of the fixed lattices is trivial. All in all, there are $8$ possibilities for ${\rho^1}'$ and $4$ possibilities for ${\rho^2}'$. Therefore, we have found $32$ additional pairs $({\rho^1}',{\rho^2}')$. We conjugate both maps by the matrix \[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & M_k & 0 \\ 0 & 0 & 1 \\ \end{pmatrix} \] This conjugation transforms ${\rho^1}'$ into $\hat{\rho}'_{1m}$ and leaves ${\rho^2}'$ invariant. Therefore, only the following $8$ pairs remain: \[ \begin{tabular}{llll} $(\hat{\rho}'_{12},\rho'_{221})\:,\quad$ & $(\hat{\rho}'_{12},\rho'_{223})\:,\quad$ & $(\hat{\rho}'_{12},\rho'_{441})\:,\quad$ & $(\hat{\rho}'_{12},\rho'_{443})\:,$ \\[2mm] $(\hat{\rho}'_{14},\rho'_{221})\:,\quad$ & $(\hat{\rho}'_{14},\rho'_{223})\:,\quad$ & $(\hat{\rho}'_{14},\rho'_{441})\:,\quad$ & $(\hat{\rho}'_{14},\rho'_{443})$ \\ \end{tabular} \] In the case where ${\rho^1}'$ leaves all $H_k$ invariant and ${\rho^2}'$ interchanges two of these lattices, we find $8$ analogous pairs. Finally, we consider the case where both ${\rho^1}'$ and ${\rho^2}'$ interchange two summands of $3H$. Since ${\rho^1}'$ and ${\rho^2}'$ shall commute, this is only possible if both of them interchange the same pair. Therefore, we assume without loss of generality that ${\rho^1}'$ and ${\rho^2}'$ interchange $H_1$ and $H_2$. We have ${\rho^1}' = \hat{\rho}'_{km}$ and ${\rho^2}' = \hat{\rho}'_{ln}$ with $k,l\in \{1,\ldots,4\}$ and $m,n\in \{2,4\}$. It is easy to see that ${\rho^1}'$ and ${\rho^2}'$ commute, no matter what the values of $k,l,m$ and $n$ are. Therefore, we have found further $64$ pairs $({\rho^1}',{\rho^2}')$. By conjugating with the same matrix as in the previous case we can achieve that $k=1$, but $l,m$ and $n$ can still be chosen arbitrarily. This reduces the number of pairs to $16$. 
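The eigenspace criterion that underlies this classification is easy to check by machine. The following numpy sketch (the helper names are ours) reproduces the list of admissible pairs $(m,n)$ given above as well as the $27$ rows of the table of diagonal pairs $({\rho^1}',{\rho^2}')$.
\begin{verbatim}
import numpy as np
from itertools import product

# the involutive isometries M_1, ..., M_4 of H in the basis (w_{2j+1}, w_{2j+2})
M = {1: np.array([[1, 0], [0, 1]]),
     2: np.array([[-1, 0], [0, -1]]),
     3: np.array([[0, 1], [1, 0]]),
     4: np.array([[0, -1], [-1, 0]])}

def common_fixed_trivial(A, B):
    # True iff the eigenspaces of A and B to the eigenvalue 1 meet only in 0
    I = np.eye(A.shape[0], dtype=int)
    return np.linalg.matrix_rank(np.vstack([A - I, B - I])) == A.shape[0]

print([(m, n) for m, n in product(range(1, 5), repeat=2)
       if common_fixed_trivial(M[m], M[n])])
# [(1, 2), (2, 1), (2, 2), (2, 3), (2, 4), (3, 2), (3, 4), (4, 2), (4, 3)]

# rho^1' fixes a positive vector in H_1, rho^2' one in H_2; the blockwise
# common fixed lattices have to be trivial
count = sum(1 for p, q, r in product((1, 3), (2, 4), (2, 4))
              for pp, qq, rr in product((2, 4), (1, 3), (2, 4))
              if all(common_fixed_trivial(M[a], M[b])
                     for a, b in ((p, pp), (q, qq), (r, rr))))
print(count)  # 27
\end{verbatim}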
We obtain the complete list of all simple non-symplectic $\mathbb{Z}^2_2$-actions that satisfy our additional assumptions $\phi_1=\phi_2$ and $L_1\cap L_2 = \{0\}$ by combining any $({\rho^1}',{\rho^2}')$ with any $({\rho^1}'',{\rho^2}'')$. All in all, we find $(27 + 8 + 8 + 16)\cdot 9 = 531$ pairs. We denote the invariants of $\rho^i$ with $i\in\{1,2,3\}$ by $(r_i,a_i,\delta_i)$. Later on, we will see that the Betti numbers of our $G_2$-manifolds are independent of the $\delta_i$. Since in one of our cases the Betti numbers depend on the invariants of all $\rho^i$, we search for all tuples $(r_1,a_1|r_2,a_2|r_3,a_3)$ that can be obtained by our construction. We have $(r_i,a_i)=(r'_i + r''_i,a'_i+a''_i)$, where $(r'_i,a'_i)$ are the invariants of ${\rho^i}'$ and $(r''_i,a''_i)$ are the invariants of ${\rho^i}''$. We go through our lists of pairs $({\rho^1}',{\rho^2}')$ and $({\rho^1}'',{\rho^2}'')$, determine ${\rho^3}'$ and ${\rho^3}''$ and compute the invariants of the lattice involutions. After that, we have finally proven the following theorem. \begin{Th} \label{Theorem_Z22} Let $(\rho^1,\rho^2)$ be a pair of commuting non-symplectic involutions of a smooth K3 surface that are holomorphic with respect to complex structures $I_1$ and $I_2$ with $I_1I_2=-I_2I_1$. In addition, we assume that \begin{enumerate} \item the intersection of the fixed lattices of $\rho^1$ and $\rho^2$ is trivial, \item $\rho^1$ and $\rho^2$ are simple and \item the markings from Definition \ref{SimpleInvolution} are the same for $\rho^1$ and $\rho^2$. \end{enumerate} We denote $\rho^1\rho^2$ by $\rho^3$. The action of each $\rho^i$ on the K3 lattice $L$ splits into ${\rho^i}'\oplus {\rho^i}''$ with ${\rho^i}':3H\rightarrow 3H$ and ${\rho^i}'':2(-E_8) \rightarrow 2(-E_8)$. Moreover, the action of $\rho^1$ and $\rho^2$ is conjugate by a lattice isometry to one of $531$ actions that we have described earlier in this section. We denote the invariants of $\rho^i$ by $(r_i,a_i,\delta_i)$. The set of all possible tuples $(r_1,a_1|r_2,a_2|r_3,a_3)$ can be obtained by adding a tuple from the list of all invariants $(r'_1,a'_1|r'_2,a'_2|r'_3,a'_3)$ of the ${\rho^i}'$: \begin{center} \begin{tabular}{lll} $(1,1|1,1|4,2)$ & $(1,1|4,2|1,1)$ & $(4,2|1,1|1,1)$ \\ [2mm] $(1,1|2,0|3,1)$ & $(1,1|3,1|2,0)$ & $(2,0|1,1|3,1)$ \\ $(2,0|3,1|1,1)$ & $(3,1|1,1|2,0)$ & $(3,1|2,0|1,1)$ \\ [2mm] $(1,1|2,2|3,1)$ & $(1,1|3,1|2,2)$ & $(2,2|1,1|3,1)$ \\ $(2,2|3,1|1,1)$ & $(3,1|1,1|2,2)$ & $(3,1|2,2|1,1)$ \\ [2mm] $(1,1|2,2|3,3)$ & $(1,1|3,3|2,2)$ & $(2,2|1,1|3,3)$ \\ $(2,2|3,3|1,1)$ & $(3,3|1,1|2,2)$ & $(3,3|2,2|1,1)$ \\ [2mm] $(2,0|2,0|2,0)$ & & \\ [2mm] $(2,0|2,2|2,2)$ & $(2,2|2,0|2,2)$ & $(2,2|2,2|2,0)$ \\ [2mm] $(2,0|3,3|3,3)$ & $(3,3|2,0|3,3)$ & $(3,3|3,3|2,0)$ \\ [2mm] $(2,2|2,2|2,2)$ & & \\ [2mm] $(2,2|2,2|4,2)$ & $(2,2|4,2|2,2)$ & $(4,2|2,2|2,2)$ \\ [2mm] $(2,2|3,3|3,3)$ & $(3,3|2,2|3,3)$ & $(3,3|3,3|2,2)$ \\ [2mm] $(3,3|3,3|4,2)$ & $(3,3|4,2|3,3)$ & $(4,2|3,3|3,3)$ \\ \end{tabular} \end{center} to a tuple from the list of all invariants $(r''_1,a''_1|r''_2,a''_2|r''_3,a''_3)$ of the ${\rho^i}''$: \begin{center} \begin{tabular}{lll} $(0,0|0,0|16,0)$ & $(0,0|16,0|0,0)$ & $(16,0|0,0|0,0)$ \\ $(0,0|8,0|8,0)$ & $(8,0|0,0|8,0)$ & $(8,0|8,0|0,0)$ \\ $(0,0|8,8|8,8)$ & $(8,8|0,0|8,8)$ & $(8,8|8,8|0,0)$ \\ \end{tabular} \end{center} In particular, there are $38\cdot 9 = 342$ different tuples $(r_1,a_1|r_2,a_2|r_3,a_3)$. 
\end{Th} Another large class of non-symplectic $\mathbb{Z}^2_2$-actions is implicitly contained in the article of Kovalev and Lee \cite{KoLe} on twisted connected sums. Their construction of $G_2$-manifolds can be summed up as follows. \begin{enumerate} \item Let $S$ be a K3 surface and $\rho:S\rightarrow S$ be a non-symplectic involution. Moreover, let $\psi$ be a holomorphic involution of $\mathbb{CP}^1$. \item The quotient $Z:=(S\times \mathbb{CP}^1)/\langle\rho\times \psi\rangle$ is an orbifold with $A_1$-singularities along two copies of the fixed locus of $\rho$ since $\psi$ has two fixed points. \item We blow up the singularities of $Z$ and obtain a compact K\"ahler manifold $\overline{W}$. \item $\overline{W}$ is fibered by K3 surfaces. We remove a fiber from $\overline{W}$ and obtain a non-compact manifold $W$ that carries an asymptotically cylindrical Ricci-flat K\"ahler metric that approaches $S\times S^1\times (0,\infty)$. \item Let $W_1$ and $W_2$ be two asymptotically cylindrical Calabi-Yau manifolds that can be obtained by the above construction. $W_1\times S^1$ and $W_2\times S^1$ can be glued together to a $G_2$-manifold if a \emph{matching} between $S_1$ and $S_2$ exists, that is an isometry $f:S_1\rightarrow S_2$ with \[ f^{\ast}[\omega_1^2] = [\omega_2^1]\:,\qquad f^{\ast}[\omega_2^2] = [\omega_1^1]\:,\qquad f^{\ast}[\omega_3^2] = -[\omega_3^1]\:. \] In the above equation, $[\omega_j^i]$ denotes the cohomology class of the $j$th K\"ahler form on the K3 surface $S_i$. \end{enumerate} The first three steps describe the main idea of Kovalev and Lee \cite{KoLe}. The fourth step is a general result on asymptotically cylindrical Calabi-Yau manifolds that can be found in \cite{HaskinsEtAl,Ko} and the fifth step is the actual twisted connected sum construction that is developed in \cite{Ko}. The authors show with help of a theorem on lattice embeddings that a matching exists if the invariants $(r_i,a_i,\delta_i)$ of the non-symplectic involutions $\rho^i$ of $S_i$ satisfy $r_1+r_2\leq 11$ or $r_1+r_2+a_1+a_2<22$. The above construction yields a large number of $G_2$-manifolds. Further details including the Betti numbers of the $G_2$-manifolds can be found in \cite{KoLe}. Since $\rho^1$ and $f^{-1} \circ \rho^2 \circ f$ define a non-symplectic $\mathbb{Z}^2_2$-action on $S_1$, we immediately obtain the following theorem. \begin{Th} \label{KoLe-Theorem} Let $(r_i,a_i,\delta_i)\in \mathbb{N}\times\mathbb{N}_0\times \{0,1\}$ with $i=1,2$ be triples such that non-symplectic involutions of K3 surfaces with invariants $(r_i,a_i,\delta_i)$ exist. If \[ r_1+r_2\leq 11 \quad\text{or}\quad r_1+r_2+a_1+a_2<22 \] a K3 surface with a non-symplectic $\mathbb{Z}^2_2$-action $(S,\rho^1,\rho^2)$ exists such that $\rho^i$ has invariants $(r_i,a_i,\delta_i)$. \end{Th} \section{Examples of $G_2$-manifolds} \label{K3T3-Section} \subsection{The idea behind our construction} Let $S$ be a K3 surface with a hyper-K\"ahler structure and let $T^3 = \mathbb{R}^3/\Lambda$, where $\Lambda$ is a lattice of rank $3$, be a flat torus. The metric on the product $S\times T^3$ has holonomy $Sp(1)\subseteq G_2$ and $S\times T^3$ therefore carries a torsion-free $G_2$-structure, which we describe in detail. Let $x^1$, $x^2$ and $x^3$ be coordinates on $T^3$ such that $(\tfrac{\partial}{\partial x^1}, \tfrac{\partial}{\partial x^2},\tfrac{\partial}{\partial x^3})$ is an orthonormal frame and let $\omega_1,\omega_2,\omega_3$ be the three K\"ahler forms on $S$. 
The 3-form \begin{equation*} \phi:= \omega_1\wedge dx^1 + \omega_2\wedge dx^2 + \omega_3\wedge dx^3 + dx^1\wedge dx^2\wedge dx^3 \end{equation*} is a torsion-free $G_2$-structure, whose associated metric is the product metric on $S\times T^3$. Let $\Gamma$ be a finite group that acts on $S\times T^3$ and leaves $\phi$ invariant. The action of any $\gamma\in\Gamma$ can be written as a product of an isometry of $S$ and an isometry of $T^3$. Since $\phi$ contains the summand $dx^1\wedge dx^2\wedge dx^3$, $\gamma$ has to preserve the orientation of $T^3$. The action of $\gamma$ on $T^3$ can thus be written as \begin{equation*} x + \Lambda \mapsto (A^{\gamma} x + v^{\gamma}) + \Lambda\:, \end{equation*} where $v^{\gamma}\in\mathbb{R}^3$ and $A^{\gamma} = (A_{ij}^{\gamma})_{i,j=1,2,3}\in SO(3)$. Let $GL(\Lambda)$ be the subgroup of $GL(3,\mathbb{R})$ that maps $\Lambda$ to itself. $\gamma$ defines a well-defined, orientation preserving isometry of $T^3$ if and only if $A^{\gamma} \in SO(3)\cap GL(\Lambda)$. We assume that for any $\gamma \in\Gamma$ there exists an isometry of $S$ whose pull-back acts on the three K\"ahler forms as \begin{equation*} \omega_i \mapsto \sum_{j=1}^3 A_{ji}^{\gamma}\omega_j\:. \end{equation*} In this case the action of $\Gamma$ on $T^3$ can be extended to an action on $S\times T^3$ that leaves $\phi$ invariant. If $\Gamma$ acts freely on $S\times T^3$, the quotient $(S\times T^3)/\Gamma$ is a manifold whose holonomy group has $Sp(1)$ as identity component. In the following subsection, we search for non-free group actions on $S\times T^3$ such that we can resolve the singularities of $(S\times T^3)/\Gamma$ by Theorem \ref{Thm-Joyce-Karigiannis} and obtain smooth $G_2$-manifolds. \subsection{The choice of the group action} In order to motivate how the group $\Gamma$ acts on $S\times T^3$, we recall what is known about free group actions on $T^3$. A quotient of a torus by an isometric free group action is called a compact Euclidean space form. In dimension $3$, they were classified by Hantzsche and Wendt \cite{Hantzsche}. In the case where $\Gamma$ preserves the orientation there exist $6$ space forms. Only the last one, which has finite fundamental group, is relevant for our construction. In that case, the lattice $\Lambda$ can be chosen as $\mathbb{Z}^3$ and $\Gamma$ is isomorphic to $\mathbb{Z}_2^2$. The generators $\psi^1$ and $\psi^2$ of $\mathbb{Z}_2^2$ are given by \begin{equation} \begin{array}{rcl} \label{Generators} \psi^1((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (\tfrac{1}{2}+x^1, -x^2,\tfrac{1}{2} - x^3) + \mathbb{Z}^3 \\ \psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,\tfrac{1}{2} + x^2, -x^3) + \mathbb{Z}^3 \\ \end{array} \end{equation} We extend the maps $\psi^i$ with $i=1,2$ to maps $\rho^i\times \psi^i: S\times T^3 \rightarrow S\times T^3$ that leave $\phi$ invariant and generate a $\mathbb{Z}^2_2$-action. It follows from (\ref{Generators}) that the pull-backs of $\rho^1$ and $\rho^2$ have to act on the K\"ahler forms as in the equation (\ref{KahlerRelations1}) that we have studied in Section \ref{NonSymplecticZ22}. $\mathbb{Z}_2^2$ acts freely on $T^3$ and $(S\times T^3)/\mathbb{Z}_2^2$ thus is a smooth manifold with holonomy $Sp(1) \rtimes \mathbb{Z}_2^2$. Since this is not what we want, we modify the translation part of $\psi^1,\psi^2:T^3\rightarrow T^3$ such that the action of $\mathbb{Z}_2^2$ on $T^3$ is not free anymore, but the action of $\mathbb{Z}_2^2$ on $S$ still satisfies the relations (\ref{KahlerRelations1}). 
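All torus isometries appearing in this section have diagonal linear part with entries $\pm 1$, so their fixed-point sets on $T^3$ can be read off from a simple criterion: $x\mapsto Dx+v$ has a fixed point modulo $\mathbb{Z}^3$ if and only if $v_i\in\mathbb{Z}$ for every coordinate with $D_{ii}=+1$. As a minimal sketch, the following Python lines encode the generators (\ref{Generators}) in this way and confirm that the Hantzsche-Wendt action is free; the same routine can be reused for the modified translation parts considered below.
\begin{verbatim}
from fractions import Fraction as F

def compose(a, b):
    # composition a o b of affine maps x -> D x + v with diagonal D in {+1, -1}^3
    (Da, va), (Db, vb) = a, b
    return (tuple(d * e for d, e in zip(Da, Db)),
            tuple((d * w + u) % 1 for d, w, u in zip(Da, vb, va)))

def has_fixed_point(g):
    # x = D x + v is solvable mod Z^3 iff v_i is an integer whenever D_ii = +1
    D, v = g
    return all(vi % 1 == 0 for di, vi in zip(D, v) if di == 1)

psi1 = ((1, -1, -1), (F(1, 2), F(0), F(1, 2)))   # first generator of the Z_2^2-action
psi2 = ((-1, 1, -1), (F(0), F(1, 2), F(0)))      # second generator
psi3 = compose(psi1, psi2)

for name, g in [("psi1", psi1), ("psi2", psi2), ("psi3", psi3)]:
    print(name, "acts freely:", not has_fixed_point(g))   # True, True, True
\end{verbatim}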
All elements of $\mathbb{Z}_2^2$ are of order $2$ and we will see that the singularities of $(S\times T^3)/\mathbb{Z}_2^2$ look locally like $\mathbb{C}^2/\{\pm 1\} \times \mathbb{R}^3$ and thus can be resolved by Theorem \ref{Thm-Joyce-Karigiannis}. We consider three different kinds of group actions. In the first case we replace the translation part of $\psi^2$ by zero and obtain:
\begin{equation} \begin{array}{rcl} \label{K3T3Case1} \psi^1((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (\tfrac{1}{2}+x^1,-x^2,\tfrac{1}{2} - x^3) + \mathbb{Z}^3 \\ \psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,x^2,-x^3) + \mathbb{Z}^3 \\ \psi^3((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (\tfrac{1}{2}-x^1,-x^2,\tfrac{1}{2}+x^3) + \mathbb{Z}^3 \\ \end{array} \end{equation}
where $\psi^3:=\psi^1\psi^2$. In the second case we do the same for $\psi^1$ and have:
\begin{equation} \begin{array}{rcl} \psi^1((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (x^1,-x^2,-x^3) + \mathbb{Z}^3 \\ \psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,\tfrac{1}{2}+x^2,-x^3) + \mathbb{Z}^3 \\ \psi^3((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,\tfrac{1}{2}-x^2,x^3) + \mathbb{Z}^3 \\ \end{array} \end{equation}
For aesthetic reasons, we want the fixed locus of $\psi^3$ to be empty, but we still want to have the same distribution of the signs in front of the $x^i$. Therefore, we permute $\psi^2$ and $\psi^3$ as well as the second and third coordinate. We obtain:
\begin{equation} \begin{array}{rcl} \label{K3T3Case2} \psi^1((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (x^1,-x^2,-x^3) + \mathbb{Z}^3 \\ \psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,x^2,\tfrac{1}{2}-x^3) + \mathbb{Z}^3 \\ \psi^3((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,-x^2,\tfrac{1}{2}+x^3) + \mathbb{Z}^3 \\ \end{array} \end{equation}
In the third case, we set both translation parts to zero and obtain:
\begin{equation} \begin{array}{rcl} \label{K3T3Case3} \psi^1((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (x^1,-x^2,-x^3) + \mathbb{Z}^3 \\ \psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,x^2,-x^3) + \mathbb{Z}^3 \\ \psi^3((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,-x^2,x^3) + \mathbb{Z}^3 \\ \end{array} \end{equation}
We choose $\rho^1,\rho^2:S\rightarrow S$ as one of the pairs that we have found in Section \ref{NonSymplecticZ22}. We can combine any of the three actions on $T^3$ with any non-symplectic $\mathbb{Z}^2_2$-action on $S$ and obtain a large number of quotients $(S\times T^3)/ \mathbb{Z}_2^2$.

\subsection{The first case} \label{The_first_case}

We investigate the three cases separately and start with the first one where the action on $T^3$ is given by (\ref{K3T3Case1}). Our first step is to determine the fixed loci of all $\psi^i$ with $i=1,2,3$. Since $\psi^1$ maps $x^1$ to $x^1+\tfrac{1}{2}$, it has no fixed points. The same argument can be made for $\psi^3$. We denote the fixed locus of a map $f:X\rightarrow X$ by $\text{Fix}(f)$ and obtain:
\begin{equation} \begin{array}{rcl} \text{Fix}(\psi^1) & = & \emptyset \\ \text{Fix}(\psi^2) & = & \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{2}\epsilon_1,x^2,\tfrac{1}{2}\epsilon_2) + \mathbb{Z}^3 | x^2\in\mathbb{R}\} \\ \text{Fix}(\psi^3) & = & \emptyset \\ \end{array} \end{equation}
$\text{Fix}(\psi^2)$ consists of four circles and $\text{Fix}(\rho^2)$ is a disjoint union of complex curves. The connected components of $\text{Fix}(\rho^2\times\psi^2)$ are thus diffeomorphic to $S^1\times \Sigma$, where $\Sigma$ is a complex curve.
$\rho^1\times \psi^1$ maps one of these connected components to another one and the singular locus of $(S\times T^3)/\mathbb{Z}^2_2$ therefore consists of two copies of $\text{Fix}(\rho^2)\times S^1$. Since the differential of a non-symplectic involution at a fixed point can be written as $\text{diag}(1,-1)\in \mathbb{C}^{2\times 2}$, we can conclude that the fibers of the normal bundle of the singular locus are isomorphic to $\mathbb{C}^2/\{\pm \text{Id}\}$. The circle factor of the singular locus is parameterized by $x^2$ and $dx^2$ is thus a harmonic nowhere vanishing one-form on the singular locus. Therefore, all conditions of Theorem \ref{Thm-Joyce-Karigiannis} are satisfied and we obtain a smooth manifold $M$ with a torsion-free $G_2$-structure. We have to check if the fundamental group of $M$ is finite since this would guarantee that the holonomy of $M$ is the whole group $G_2$. We make use of a theorem of Armstrong \cite{Armstrong} that is cited below. \begin{Th} \label{Pi1Quotient} Let $G$ be a discontinuous group of homeomorphisms of a path connected, simply connected, locally compact metric space $X$, and let $H$ be the normal subgroup of $G$ generated by those elements which have fixed points. Then the fundamental group of the orbit space $X/G$ is isomorphic to the factor group $G/H$. \end{Th} In the case where $\text{Fix}(\rho^2)=\emptyset$ the singular locus of $(S\times T^3)/\mathbb{Z}^2_2$ is empty and $M$ is covered by $S\times T^3$. Since $T^3=\mathbb{R}^3/\mathbb{Z}^3$, $(S\times T^3)/\mathbb{Z}^2_2$ can be written as a quotient of $S\times \mathbb{R}^3$ by a semidirect product $\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2$. Therefore, we have $\pi_1(M) \cong \mathbb{Z}^3 \rtimes \mathbb{Z}^2_2$. Since this case is not particularly interesting, we assume from now on that $\text{Fix}(\rho^2)$ is not empty. A calculation shows that the elements of $\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2$ whose action on $S\times\mathbb{R}^3$ has fixed points are precisely the unit element and the $\alpha_2\beta$, where $\beta$ is a translation along a vector $\{(x,0,z)^{\top} | x,z\in\mathbb{Z}\}$. These elements generate a group that is isomorphic to $\mathbb{Z}^2 \rtimes \mathbb{Z}_2$. Therefore, we have $\pi_1((S\times T^3)/\mathbb{Z}^2_2) \cong \mathbb{Z}\rtimes \mathbb{Z}_2$. Because of Corollary \ref{JoyceKarigiannisBetti} we also have $\pi_1(M) \cong \mathbb{Z}\rtimes \mathbb{Z}_2$ and the holonomy of $M$ is not the whole group $G_2$. In fact, the manifolds that we obtain in this case are an example of \emph{barely $G_2$-manifolds} \cite{Grigorian,HarveyMoore}. A barely $G_2$-manifold is a quotient $(X\times S^1)/\mathbb{Z}_2$, where $X$ is a Calabi-Yau threefold and $\mathbb{Z}_2$ denotes a free group action that acts anti-holomorphically on $X$ and orientation-reversing on the circle $S^1$. The holonomy of a barely $G_2$-manifold is $SU(3)\rtimes\mathbb{Z}_2$, which acts irreducibly on the tangent space. In our case, $X$ can be constructed as follows. The product of $S$ with the complex structure $I_2$ and the torus $T^2$ with coordinates $x^1$ and $x^3$ is a complex manifold. The projection of $\rho^2\times\psi^2$ to $S\times T^2$ is a holomorphic involution that is non-symplectic on $(S,I_2)$. Since $\psi^2$ fixes $4$ points of $T^2$, the quotient $(S\times T^2)/\mathbb{Z}_2$ has $A_1$-singularities along $4$ copies of $\text{Fix}(\rho^2)$. These singularities can be blown up. Since $S\times T^2$ has trivial canonical bundle, the blow-up is a Calabi-Yau manifold. 
This construction is known in the literature as the Borcea-Voisin construction \cite{Borcea,Voisin}. The circle in the product $X\times S^1$ is parameterized by $x^2$ and the group $\mathbb{Z}_2$ by which we divide $X\times S^1$ is generated by the lift of $\rho^1\times\psi^1$ to the resolution $X\times S^1$ of $(S\times T^3)/\langle \rho^2 \times\psi^2\rangle$.

We compute the Betti numbers of $M$. Since $\psi^1$ has no fixed points, $M':=(S\times T^3)/\langle \rho^1\times\psi^1 \rangle$ is a smooth manifold. Moreover, $(S\times T^3)/\mathbb{Z}^2_2 = M'/\langle \rho^2\times\psi^2 \rangle$ and we can determine the Betti numbers of $M$ by Corollary \ref{JoyceKarigiannisBetti}. In order to do this, we need the Betti numbers of the singular set $L$ of $M'/\langle \rho^2\times\psi^2 \rangle$. We recall that in the generic case the fixed locus of $\rho^2$ consists of one complex curve of genus $\frac{1}{2}(22-r_2-a_2)$ and $\frac{1}{2}(r_2-a_2)$ rational curves. Therefore, we have
\begin{equation} \begin{array}{rcl} b^0(\text{Fix}(\rho^2)) & = & \frac{1}{2}(r_2-a_2) +1 \\ b^1(\text{Fix}(\rho^2)) & = & 22 - r_2 - a_2 \\ \end{array} \end{equation}
These equations still hold in the exceptional case $(r_2,a_2,\delta_2) = (10,8,0)$, but not in the case of an empty fixed locus, which we have already excluded. Since $L$ consists of two copies of $S^1\times \text{Fix}(\rho^2)$, we can conclude that
\begin{equation} \begin{array}{rcl} b^0(L) = 2b^0(\text{Fix}(\rho^2)) & = & r_2-a_2 +2 \\ b^1(L) = 2b^0(\text{Fix}(\rho^2)) + 2b^1(\text{Fix}(\rho^2)) & = & 46 - r_2 - 3a_2 \\ \end{array} \end{equation}
We also need the Betti numbers of $(S\times T^3)/\mathbb{Z}_2^2$. We compute them by counting the harmonic forms on $S\times T^3$ that are invariant under the $\mathbb{Z}^2_2$-action. The harmonic $k$-forms on $T^3$ are precisely the $dx^{i_1\ldots i_k}$. By taking a look at (\ref{K3T3Case1}) we see that none of them, except $1$ and $dx^{123}$, is preserved by the $\mathbb{Z}^2_2$-action. Since $\rho^1$ and $\rho^2$ are involutions, the eigenvalues of their action on $H^2(S,\mathbb{C})$ are $\pm 1$. For $\epsilon_1,\epsilon_2 \in \{1,-1\}$ we define
\[ V_{\epsilon_1,\epsilon_2} := \{x\in H^2(S,\mathbb{C}) | (\rho^1)^{\ast}x= \epsilon_1x,\: (\rho^2)^{\ast}x= \epsilon_2x \} \]
Since we always assume that $L_1\cap L_2 = \{0\}$, we have $V_{1,1}=\{0\}$ and
\[ \dim{V_{1,-1}} + \dim{V_{-1,1}} + \dim{V_{-1,-1}} = \dim{H^2(S,\mathbb{C})} = 22 \]
By a careful examination we see that there are no invariant harmonic $1$- or $2$-forms on $S\times T^3$ and that the invariant $3$-forms are precisely
\begin{itemize} \item $dx^{123}$, \item $dx^1\wedge \beta_1$ with $[\beta_1]\in V_{1,-1}$, \item $dx^2\wedge \beta_2$ with $[\beta_2]\in V_{-1,1}$, \item $dx^3\wedge \beta_3$ with $[\beta_3]\in V_{-1,-1}$. \end{itemize}
Thus, we have proven that the Betti numbers of the quotient $(S\times T^3)/\mathbb{Z}^2_2$ are $b^1=b^2=0$ and $b^3=23$. Therefore, it follows from Corollary \ref{JoyceKarigiannisBetti} that
\begin{equation} \begin{array}{rcl} b^1(M) & = & 0 \\ b^2(M) & = & b^0(L) = r_2-a_2 +2 \\ b^3(M) & = & 23 + b^1(L) = 69 - r_2 - 3a_2 \\ \end{array} \end{equation}
Finally, we create a table of the possible values of $(b^2,b^3)$. Since the minimal value of $r_1$ is $1$ and the minimal value of $r_1+a_1$ is $2$, the restrictions $r_1 + r_2\leq 11$ and $r_1+r_2+a_1+a_2 < 22$ from Theorem \ref{KoLe-Theorem} become $r_2\leq 10$ and $r_2+a_2 < 20$.
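The formulas above make the table in the following theorem a purely mechanical computation. As a short illustration, the Python sketch below evaluates them, together with the reduced restrictions just derived, for two sample invariants that occur in this paper, namely $(r_2,a_2)=(1,1)$ and $(10,10)$; both resulting pairs $(b^2,b^3)$ appear in the table.
\begin{verbatim}
def betti_case1(r2, a2):
    # b^2(M) = r_2 - a_2 + 2 and b^3(M) = 69 - r_2 - 3 a_2
    return r2 - a2 + 2, 69 - r2 - 3 * a2

def admissible(r2, a2):
    # reduced restrictions coming from the Kovalev-Lee matching condition
    return r2 <= 10 or r2 + a2 < 20

for r2, a2 in [(1, 1), (10, 10)]:
    print((r2, a2), admissible(r2, a2), betti_case1(r2, a2))
# (1, 1)   -> admissible, (b^2, b^3) = (2, 65)
# (10, 10) -> admissible, (b^2, b^3) = (2, 29)
\end{verbatim}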
We see that most pairs $(r_2,a_2)$ from Theorem \ref{Theorem_Z22} satisfy one of these restrictions except the $7$ pairs $(11,9)$, $(11,11)$, $(12,10)$, $(18,2)$, $(19,1)$, $(19,3)$ and $(20,2)$. Since there is a non-symplectic involution with invariants $(10,10,1)$, we have to include the pair $(r_2,a_2)=(10,10)$, too, although we have excluded the exceptional case $(10,10,0)$. We insert the allowed values of $(r_2,a_2)$ into the equations for $(b^2,b^3)$ and finally obtain the following theorem. \begin{Th} Let $\psi^i:T^3\rightarrow T^3$ with $i=1,2$ be the maps that we have defined in (\ref{K3T3Case1}). Moreover, let $\rho^i$ be the generators of one of the non-symplectic $\mathbb{Z}^2_2$-actions from Theorem \ref{Theorem_Z22} or \ref{KoLe-Theorem}. The maps $\rho^i\times \psi^i$ generate a group $\mathbb{Z}^2_2$ that acts on $S\times T^3$, where $S$ is an appropriate K3 surface. The quotient $(S\times T^3)/\mathbb{Z}^2_2$ is a $G_2$-orbifold with $A_1$-singularities. The singularities can be resolved and if the fixed locus of $\rho^2$ is non-empty, we obtain a manifold $M$ with fundamental group $\mathbb{Z}\rtimes\mathbb{Z}_2$ and holonomy $SU(3)\rtimes\mathbb{Z}_2$. The Betti numbers of $M$ depend on the choice of the $\rho^i$ and can be found in the table below. \begin{center} \begin{tabular}{l|l} $b^2$ & $b^3$ \\ \hline 2 & 25, 29, 33, 37, 41, 45, 49, 53, 57, 61, 65 \\ \hline 4 & 27, 31, 35, 39, 43, 47, 51, 55, 59, 63, 67 \\ \hline 6 & 37, 41, 45, 49, 53, 57 \\ \hline 8 & 39, 43, 47, 51, 55 \\ \hline 10 & 41, 45, 49, 53, 57 \\ \hline 12 & 43, 47, 51, 55, 59 \\ \hline 14 & 45, 49 \\ \hline 16 & 47 \\ \hline 18 & 41, 45, 49 \\ \hline 20 & 43, 47, 51 \\ \end{tabular} \end{center} \end{Th} \subsection{The second case} We proceed to the case where the $\psi^i$ are defined by (\ref{K3T3Case2}). The fixed loci of the $\rho^i\times\psi^i$ are given by \begin{equation} \begin{array}{rcl} \text{Fix}(\rho^1\times\psi^1) & := & \text{Fix}(\rho^1) \times \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(x^1,\tfrac{1}{2}\epsilon_1,\tfrac{1}{2}\epsilon_2) + \mathbb{Z}^3 | x^1\in\mathbb{R}\} \\ \text{Fix}(\rho^2\times\psi^2) & := & \text{Fix}(\rho^2) \times \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{2}\epsilon_1,x^2,\tfrac{1}{2}\epsilon_2 + \tfrac{1}{4}) + \mathbb{Z}^3 | x^2\in\mathbb{R}\} \\ \text{Fix}(\rho^3\times\psi^3) & := & \emptyset \\ \end{array} \end{equation} Since the last coordinate of $\text{Fix}(\psi^1)$ is a multiple of $\tfrac{1}{2}$ and the last coordinate of $\text{Fix}(\psi^2)$ is $\tfrac{1}{4}$ plus a multiple of $\tfrac{1}{2}$, the set of all points in $S\times T^3$ that are fixed by a $\rho^i\times\psi^i$ is the disjoint union \begin{equation} \label{FixedLoci} \text{Fix}(\rho^1\times\psi^1)\:\dot{\cup}\: \text{Fix}(\rho^2\times\psi^2) \end{equation} As in the previous subsection, the blow-up of $(S\times T^3)/\langle\rho^1\times\psi^1 \rangle$ along the singularities is a product $X\times S^1$ of a Calabi-Yau manifold of Borcea-Voisin type and a circle. $\rho^2\times\psi^2$ induces an involution $\imath$ of $X\times S^1$ that is anti-holomorphic on $X$. The difference to the previous case is that $\imath$ acts not freely on $X\times S^1$, at least if $\text{Fix}(\rho^2)\neq\emptyset$. Its fixed point set is diffeomorphic to $\text{Fix}(\rho^2\times\psi^2)/ \langle \rho^1\times\psi^1 \rangle$. $\psi^1$ maps each of the four circles in $\text{Fix}(\psi^2)$ to a distinct one. 
The singular locus of $(X\times S^1)/\langle\imath\rangle$ therefore consists of two copies of $\text{Fix}(\rho^2)\times S^1$. Each connected component of this set is the product of a complex curve and $S^1$. For the same reasons as in the first case we can apply Theorem \ref{Thm-Joyce-Karigiannis} and construct a smooth manifold $M$ with a torsion-free $G_2$-structure out of $(S\times T^3)/\mathbb{Z}^2_2$. The fundamental group can be computed by the same methods as in the previous case. In the case where the fixed loci of $\rho^1$ and $\rho^2$ are non-empty, the elements of $\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2$ that fix a vector in $\mathbb{R}^3$ are the unit element, the $\psi^1\circ \beta$, where $\beta(x)=x+v$ with $v\in\{(0,y,z)^{\top} | y,z\in\mathbb{Z}\}$ and the $\psi^2\circ \beta$ with $v\in\{(x,0,z)^{\top} | x,z\in\mathbb{Z}\}$. These elements generate all of $\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2$ and therefore we have \begin{equation*} \pi_1(M) \cong \pi_1((S\times T^3)/\mathbb{Z}^2_2) \cong (\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2)/(\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2) \cong \{1\} \end{equation*} If either $\text{Fix}(\rho^1)$ or $\text{Fix}(\rho^2)$ is empty, we obtain as in Subsection \ref{The_first_case} $\pi_1(M)\cong \mathbb{Z}\rtimes \mathbb{Z}_2$. $M$ can be written as $(X\times S^1)/\mathbb{Z}_2$, where the $\mathbb{Z}_2$-action is free but different from the one in Subsection \ref{The_first_case}. Since the Betti numbers of $(S\times T^3)/ \mathbb{Z}^2_2$ and the shape of the singular locus are the same as in the previous subsection, we obtain barely $G_2$-manifolds with the same Betti numbers as before. We focus on the case where $\text{Fix}(\rho^1)$ and $\text{Fix}(\rho^2)$ are non-empty. Since $M$ is simply connected, it has indeed holonomy $G_2$. The Betti numbers of $M$ can be computed with help of the equation for the Betti numbers of a blow-up and Corollary \ref{JoyceKarigiannisBetti}. We see that $\text{Fix}(\rho^1\times\psi^1)$ and $\text{Fix}(\rho^2\times\psi^2)$ contribute the same to the cohomology of $M$. We denote the singular set of $(S\times T^3)/\mathbb{Z}^2_2$ by $L$ and obtain \begin{equation*} \begin{array}{rcl} b^0(L) & = & b^0(\text{Fix}(\rho^1\times\psi^1)/\langle \rho^2\times\psi^2\rangle) + b^0(\text{Fix}(\rho^2\times\psi^2)/\langle \rho^1\times\psi^1\rangle) \\[1mm] & = & 2b^0(\text{Fix}(\rho^1)) + 2b^0(\text{Fix}(\rho^2))\\[1mm] & = & 4 + r_1 + r_2 - a_1 -a_2 \\[1mm] b^1(L) & = & b^1(\text{Fix}(\rho^1\times\psi^1)/\langle \rho^2\times\psi^2\rangle) + b^1(\text{Fix}(\rho^2\times\psi^2)/\langle \rho^1\times\psi^1\rangle) \\[1mm] & = & 2b^0(\text{Fix}(\rho^1)) + 2b^0(\text{Fix}(\rho^2)) + 2b^1(\text{Fix}(\rho^1)) + 2b^1(\text{Fix}(\rho^2))\\[1mm] & = & 92 - r_1 - r_2 - 3a_1 - 3a_2 \\ \end{array} \end{equation*} The quotient $(S\times T^3)/\mathbb{Z}^2_2$ has the same Betti numbers as in the previous case and we obtain: \begin{equation} \label{BettiCase2} \begin{array}{rcl} b^1(M) & = & 0 \\ b^2(M) & = & b^0(L) = 4 + r_1 + r_2 - a_1 -a_2 \\ b^3(M) & = & 23 + b^1(L) = 115 - r_1 - r_2 - 3a_1 - 3a_2 \\ \end{array} \end{equation} \begin{Ex} \label{JoyceKarigiannisExample1} A special case of the above construction can be found in Example 7.2. in \cite{JoyceKarigiannis}. The authors construct a K3 surface $X$ as a double cover of $\mathbb{CP}^2$ that is branched along a sextic curve $C$. The map $\alpha$ that permutes both branches is a non-symplectic involution with fixed locus $C$. 
Complex conjugation on $\mathbb{CP}^2$ induces a second involution $\beta$ of $X$ that has fixed locus $S^2$. $\alpha$ and $\beta$ commute and generate a non-symplectic $\mathbb{Z}^2_2$-action. The fixed locus of $\alpha\beta$ is empty. The authors extend the $\mathbb{Z}^2_2$-action on $X$ to $T^3\times X$ in the same way as we did in (\ref{K3T3Case2}). By resolving the singularities of $(T^3\times X)/\langle \alpha,\beta \rangle$, the authors obtain a $G_2$-manifold with Betti numbers $b^2 = 4$ and $b^3 = 67$. With the help of Theorem \ref{FixedLocusTheorem} and the list of all triples $(r,a,\delta)$ we can deduce that the invariants of $\alpha$ are $(1,1,1)$, the invariants of $\beta$ are $(11,11,1)$ and the invariants of $\alpha\beta$ are $(10,10,0)$. Moreover, the fixed lattices of $\alpha$ and $\beta$ are orthogonal to each other since the first one is spanned by the cohomology class of the curve $C$ and $\beta$ reverses the orientation of $C$. Therefore, all assumptions that we have made in this subsection are satisfied and we obtain the same Betti numbers as in \cite{JoyceKarigiannis}. \end{Ex}

\begin{Rem} Joyce and Karigiannis \cite{JoyceKarigiannis} observe that the above example is in fact a twisted connected sum. In order to see this, they choose the metric on $T^3$ as $(dx^1)^2 + (dx^2)^2 + R^2 (dx^3)^2$, where $R$ is large. After that, they divide by $\mathbb{Z}^2_2$ and resolve the singularities. Finally, they cut the quotient $(T^3\times X)/\mathbb{Z}^2_2$ along the middle of the circle in the $x^3$-direction. In the limit $R\rightarrow\infty$, they obtain two asymptotically cylindrical parts. The first one contains the fixed locus of $\alpha$ and the second one contains the fixed locus of $\beta$. These two parts are in fact the same as the ones we obtain from the Kovalev-Lee construction with the non-symplectic involutions $\alpha$ and $\beta$ as starting point. They are glued together such that the direct sum of their fixed lattices is primitively embedded into $L$. Of course, the above arguments can be made not only for the example from \cite{JoyceKarigiannis} but for any $\rho^i\times\psi^i$ such that $(\rho^1, \rho^2)$ is a non-symplectic $\mathbb{Z}^2_2$-action and the $\psi^i$ are as in (\ref{K3T3Case2}). Therefore, all examples that we obtain in this subsection are in fact twisted connected sums. This explains why our formulas for $b^2$ and $b^3$ are precisely the same as in Theorem 5.7.(c) of \cite{KoLe}. \end{Rem}

We compute the Betti numbers of our examples and check if we obtain $G_2$-manifolds with new values of $(b^2,b^3)$. We assume that the non-symplectic $\mathbb{Z}^2_2$-action on $S$ is generated by simple involutions since the $G_2$-manifolds that we obtain by Theorem \ref{KoLe-Theorem} and the $\psi^i$ from (\ref{K3T3Case2}) can already be found in \cite{KoLe}. The Betti numbers depend only on the sums $r_1 + r_2$ and $a_1 + a_2$. We take a look at the tables from Theorem \ref{Theorem_Z22} and see that $(r'_1+r'_2,a'_1+a'_2)$ is one of the following
\begin{align*} & (2,2)\quad (3,1)\quad (3,3)\quad (4,0) \\ & (4,2)\quad (4,4)\quad (5,1)\quad (5,3) \\ & (5,5)\quad (6,4)\quad (6,6)\quad (7,5) \\ \end{align*}
Analogously, the pair $(r''_1+r''_2,a''_1+a''_2)$ is an element of the following list:
\[ (0,0)\quad (8,0)\quad (8,8)\quad (16,0)\quad (16,16) \]
We obtain $60$ pairs $(r_1+r_2,a_1+a_2)$. Of these, $31$ satisfy the condition $r_1+r_2\leq 11$ or $r_1+r_2+a_1+a_2 < 22$. Since these cases have already been studied in \cite{KoLe}, we do not consider them further.
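The remaining bookkeeping can be done mechanically. The Python sketch below combines the two lists above, discards the $31$ sums that satisfy the Kovalev-Lee condition and evaluates the equations (\ref{BettiCase2}) for the rest; it reproduces the $29$ rows of the table in the theorem below.
\begin{verbatim}
primed = [(2, 2), (3, 1), (3, 3), (4, 0), (4, 2), (4, 4),
          (5, 1), (5, 3), (5, 5), (6, 4), (6, 6), (7, 5)]   # (r1'+r2', a1'+a2')
doubled = [(0, 0), (8, 0), (8, 8), (16, 0), (16, 16)]        # (r1''+r2'', a1''+a2'')

def already_known(r, a):
    # matching condition of the Kovalev-Lee construction
    return r <= 11 or r + a < 22

new_rows, old = [], 0
for rp, ap in primed:
    for rpp, app in doubled:
        r, a = rp + rpp, ap + app
        if already_known(r, a):
            old += 1
            continue
        new_rows.append((4 + r - a, 115 - r - 3 * a, r, a))  # (b^2, b^3, r1+r2, a1+a2)

print(len(primed) * len(doubled), old, len(new_rows))        # 60 31 29
for row in sorted(new_rows):
    print(row)
\end{verbatim}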
The Betti numbers of the remaining $29$ pairs can be calculated by the equations (\ref{BettiCase2}). All in all, we have proven the following theorem. \begin{Th} Let $\psi^i:T^3\rightarrow T^3$ with $i=1,2$ be the maps that we have defined in (\ref{K3T3Case2}). Moreover, let $\rho^i$ be the generators of one of the non-symplectic $\mathbb{Z}^2_2$-actions from Theorem \ref{Theorem_Z22} such that the fixed loci of the $\rho^i$ are non-empty. The maps $\rho^i\times \psi^i$ generate a group $\mathbb{Z}^2_2$ that acts on $S\times T^3$, where $S$ is an appropriate K3 surface. The quotient $(S\times T^3)/\mathbb{Z}^2_2$ is a $G_2$-orbifold with $A_1$-singularities. The singularities can be resolved and we obtain a simply connected manifold $M$ with holonomy $G_2$. $M$ is diffeomorphic to the twisted connected sum that can be obtained from the construction of Kovalev and Lee \cite{KoLe} with the non-symplectic involutions $\rho^1$ and $\rho^2$ as input. The above construction yields $G_2$-manifolds with $29$ distinct pairs of Betti numbers $(b^2,b^3)$ that are not included in Theorem 5.7.(c) in \cite{KoLe}. Their Betti numbers together with the invariants of the pairs of non-symplectic involutions can be found in the table below. \begin{center} \begin{tabular}{ccc} \begin{tabular}{r|r|r|r} $b^2$ & $b^3$ & $r_1+r_2$ & $a_1+a_2$ \\ \hline 4 & 27 & 22 & 22 \\ 4 & 31 & 21 & 21 \\ 4 & 35 & 20 & 20 \\ 4 & 39 & 19 & 19 \\ 4 & 43 & 18 & 18 \\ 4 & 59 & 14 & 14 \\ 4 & 63 & 13 & 13 \\ 4 & 67 & 12 & 12 \\ 6 & 29 & 23 & 21 \\ 6 & 33 & 22 & 20 \\ 6 & 37 & 21 & 19 \\ 6 & 41 & 20 & 18 \\ 6 & 45 & 19 & 17 \\ 6 & 61 & 15 & 13 \\ 6 & 65 & 14 & 12 \\ \end{tabular} & \quad & \begin{tabular}{r|r|r|r} $b^2$ & $b^3$ & $r_1+r_2$ & $a_1+a_2$ \\ \hline 6 & 69 & 13 & 11 \\ 6 & 73 & 12 & 10 \\ 8 & 43 & 21 & 17 \\ 8 & 47 & 20 & 16 \\ 8 & 75 & 13 & 9 \\ 20 & 75 & 22 & 6 \\ 20 & 79 & 21 & 5 \\ 20 & 83 & 20 & 4 \\ 20 & 87 & 19 & 3 \\ 22 & 77 & 23 & 5 \\ 22 & 81 & 22 & 4 \\ 22 & 85 & 21 & 3 \\ 22 & 89 & 20 & 2 \\ 24 & 91 & 21 & 1 \\ & & & \\ \end{tabular} \end{tabular} \end{center} \end{Th} \begin{Rem} The above theorem yields probably more than one diffeomorphism type of $G_2$-manifolds for some of the pairs $(b^2,b^3)$. This may happen for pairs of non-symplectic involutions with the same values of $r_1+r_2$ and $a_1+a_2$ but different invariants $(r_i,a_i,\delta_i)$. \end{Rem} In the literature, there exist several examples of $G_2$-manifolds whose Betti numbers are the same as one of the pairs $(b^2,b^3)$ in the above table. \begin{itemize} \item $(4,59)$, $(4,63)$, $(4,67)$, $(6,61)$, $(6,65)$, $(6,69)$, $(6,73)$, $(8,75)$, $(20,75)$, $(20,79)$, $(20,83)$, $(20,87)$ can be found in Table 1 in \cite{KoLe}. Those examples are twisted connected sums, too, but at least one of the parts that are glued together is not constructed with help of a non-symplectic involution. \item $(4,35)$, $(6,41)$, $(8,47)$, $(22,89)$ can be found in Section 6.1 in \cite{KoLe}. The corresponding $G_2$-manifolds are constructed from pairs of non-symplectic involutions as in this section. \item $(4, 43)$ can be found in Section 6.3 in \cite{KoLe} and is a twisted connected sum of another type. \item $(4,27)$ and $(6,33)$ can be found in Section 12.4 in Joyce \cite{Joyce}. \item $(4,39)$, $(6,45)$ can be found in Section 12.5 in \cite{Joyce}. \item $(8,43)$ can be found in Section 12.7 in \cite{Joyce}. 
\end{itemize} The remaining pairs \[ (4,31)\quad (6,29)\quad (6,37)\quad (22,77)\quad (22,81)\quad (22,85)\quad (24, 91) \] belong to new $G_2$-manifolds, or at least they do not appear in \cite{Joyce1,Joyce,KoLe}. \subsection{The third case} \label{The_third_case} Finally, we investigate the case where the maps $\psi^i$ are defined by (\ref{K3T3Case3}). The fixed loci of the $\rho^i\times\psi^i$ are: \begin{equation} \begin{array}{rcl} \text{Fix}(\rho^1\times\psi^1) & = & \text{Fix}(\rho^1) \times \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(x^1,\tfrac{1}{2}\epsilon_1,\tfrac{1}{2}\epsilon_2) + \mathbb{Z}^3 | x^1\in\mathbb{R}\} \\ \text{Fix}(\rho^2\times\psi^2) & = & \text{Fix}(\rho^2) \times \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{2}\epsilon_1,x^2,\tfrac{1}{2}\epsilon_2) + \mathbb{Z}^3 | x^2\in\mathbb{R}\} \\ \text{Fix}(\rho^3\times\psi^3) & = & \text{Fix}(\rho^3) \times \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{2}\epsilon_1,\tfrac{1}{2}\epsilon_2,x^3) + \mathbb{Z}^3 | x^3\in\mathbb{R}\} \\ \end{array} \end{equation} It may happen that $\rho^1$ and $\rho^2$ have a common fixed point $p\in S$. In this situation, $p$ is a fixed point of $\rho^3$, too. The points $(\tfrac{1}{2}\epsilon_1,\tfrac{1}{2}\epsilon_2,\tfrac{1}{2}\epsilon_3) + \mathbb{Z}^3\in T^3$ with $\epsilon_i\in \{0,1\}$ are fixed by all $\psi^i$. Therefore, $S\times T^3$ has points that are fixed by the whole group $\mathbb{Z}^2_2$. This means that the orbifold $(S\times T^3)/\mathbb{Z}^2_2$ has more complicated singularities of type $\mathbb{R}^7/\mathbb{Z}^2_2$ and we cannot apply Theorem \ref{Thm-Joyce-Karigiannis} directly. In principle, it is possible to resolve the singularities in two steps. The first step would be to blow up the singularities of $(S\times T^3)/\mathbb{Z}_2$, where $\mathbb{Z}_2$ is generated by $\rho^1\times\psi^1$. We obtain a smooth manifold $N$ and $\rho^2\times \psi^2$ can be lifted to an involution $\imath$ of $N$. It can be shown that the singular set of $N/\langle\imath\rangle$ consists of $A_1$-singularities along disjoint three-dimensional submanifolds. Since blowing up means in this context that the metric is perturbed, it is not a priori clear that there exist harmonic 1-forms without zeroes on the three-dimensional submanifolds. Although this is probably true, we would need an analytical argument to show that this is indeed the case. Since this investigation is beyond the scope of this paper, we assume from now on that the fixed locus of $\rho^1$ is empty. This ensures that $\rho^2$ and $\rho^3$ have no common fixed point. In order to construct explicit examples, we have to know that the fixed locus of $\rho^1$ is empty and we need some information on the fixed loci of $\rho^2$ and $\rho^3$, too. Theorem \ref{KoLe-Theorem} contains only the invariants of $\rho^1$ and $\rho^2$. Since it is not straightforward to determine the invariants of $\rho^1\rho^2$, we restrict ourselves in this subsection to pairs $(\rho^1,\rho^2)$ from Theorem \ref{Theorem_Z22}. We show how to resolve the singularities in our situation. The fixed locus of $\rho^2\times \psi^2$ consists of four copies of $\text{Fix}(\rho^2)\times S^1$, where $S^1$ has the coordinate $x^2$. Analogously, the fixed locus of $\rho^3 \times \psi^3$ consists of four copies of $\text{Fix}(\rho^3)\times S^1$, where $S^1$ has the coordinate $x^3$. Let $\Sigma \subseteq S$ be one of the connected components of $\text{Fix}(\rho^2)$. 
$\rho^1$ either maps $\Sigma$ to itself such that $\rho^1:\Sigma\to\Sigma$ has no fixed points or it maps $\Sigma$ to another connected component $\Sigma'$. $\rho^1$ does not preserve the orientation of $\Sigma$ since it is anti-holomorphic with respect to $I_2$. The map $\psi^1$ reverses the orientation of the circle factor. $\text{Fix}(\rho^2\times\psi^2)/\langle \rho^1\times \psi^1 \rangle$ thus is a quotient of $\text{Fix}(\rho^2\times\psi^2)$ by an involution that preserves the orientation. The same argument can be made for the action of $\rho^1\times \psi^1$ on $\text{Fix}(\rho^3\times \psi^3)$. All in all, the singular locus of $(S\times T^3)/\mathbb{Z}^2_2$ is the disjoint union of four copies of $(\text{Fix}(\rho^2)\times S^1)/\mathbb{Z}_2$ and four copies of $(\text{Fix}(\rho^3) \times S^1)/\mathbb{Z}_2$. Unfortunately, we cannot apply Theorem \ref{Thm-Joyce-Karigiannis} to resolve the singularities since the harmonic one-form $dx^2$ on $\text{Fix}(\rho^2) \times S^1$ is not projected to a well-defined 1-form on $(\text{Fix}(\rho^2)\times S^1)/\mathbb{Z}_2$. The reason for this is that $\mathbb{Z}_2$ acts as $x^2\mapsto -x^2$ on $S^1$. Nevertheless, $dx^2$ is projected to a $Z$-twisted 1-form and we can apply Corollary \ref{Thm-Joyce-Karigiannis-Twisted} to obtain a smooth $G_2$-manifold $M$. The $\mathbb{Z}_2$-principal bundle $Z$ can be defined simply as $\pi: \text{Fix}(\rho^2\times \psi^2) \rightarrow \text{Fix}(\rho^2\times \psi^2)/ \mathbb{Z}_2$. Again, the same argument can be made for $dx^3$. In order to obtain the Betti numbers of $M$ we need the $Z$-twisted Betti numbers of $L:=\text{Fix}(\rho^2\times\psi^2)/\langle \rho^1\times\psi^1 \rangle$. Let $p\in L$ be arbitrary and let $q\in \text{Fix}(\rho^2\times\psi^2)$ be a point with $\pi(q)=p$. The fiber $\pi^{-1}(p)$ of the $\mathbb{Z}_2$-principal bundle consists of $q$ and $f(q)$, where $f$ is the restriction of $\rho^1\times \psi^1$ to $\text{Fix}(\rho^2\times\psi^2)$. Therefore, the fiber of the bundle $\bigwedge^k T^{\ast} L \otimes_{\mathbb{Z}_2} Z$ over $p$ consists of all equivalence classes \[ \{(\alpha_p,q), (-\alpha_p,f(q)) \} \] with $\alpha_p\in \bigwedge^k T^{\ast}_p L$. This means that the sections of $\bigwedge^k T^{\ast} L \otimes_{\mathbb{Z}_2} Z$ correspond to the $k$-forms on $L$ that are invariant under $\alpha \mapsto -f^{\ast} \alpha$. The $Z$-twisted Betti numbers count the linearly independent harmonic $k$-forms with this property. The harmonic $0$-forms on $L$ are precisely the functions that are constant on each connected component. For any pair of distinct connected components that are mapped to each other by $f$ we find one function with our invariance property. Since $\text{Fix}(\rho^2\times \psi^2)$ consists of four copies of $\text{Fix}(\rho^2)\times S^1$, $b^0(L,Z)$ is two times the number of connected components of $\text{Fix}(\rho^2)$ that are not mapped to itself by $\rho^1$. We compute $b^1(L,Z)$. Let $\Sigma\subseteq \text{Fix}(\rho^2)$ be a complex curve of genus $g$. A basis of the space of harmonic $1$-forms on $\Sigma\times S^1$ is given by $dx^2$, $g$ holomorphic forms on $\Sigma$ and $g$ anti-holomorphic forms on $\Sigma$. We assume that $f(\Sigma) = \Sigma$. $dx^2$ corresponds to a $Z$-twisted $1$-form on $(\Sigma\times S^1)/\mathbb{Z}_2$. Since $f$ is anti-holomorphic on $\Sigma$, it maps the $g$ holomorphic forms to $g$ anti-holomorphic forms and vice versa. 
We conclude that the $(+1)-$ and $(-1)$-eigenspace of $-f^{\ast}$ acting on the harmonic $1$-forms are of the same dimension. All in all, it follows that $b^1((\Sigma\times S^1)/ \mathbb{Z}_2,Z)=g+1$. Next, we assume that $f$ maps $\Sigma$ to another complex curve $\Sigma'$. We can extend any harmonic $1$-form on $\Sigma\times S^1$ to $(\Sigma\cup \Sigma')\times S^1$ such that it is invariant under $-f^{\ast}$. Therefore, we obtain $b^1(((\Sigma \cup \Sigma') \times S^1)/ \mathbb{Z}_2,Z)=2g+1$. The fixed locus of $\rho^3$ yields an analogous contribution to the Betti numbers of $M$. From now on, we restrict ourselves to triples $(\rho^1,\rho^2,\rho^3)$ with the property that neither $\text{Fix}(\rho^2)$ nor $\text{Fix}(\rho^3)$ contains a pair of curves with the same genus. This ensures that $\rho^1$ maps each connected component of $\text{Fix}(\rho^2)$ or $\text{Fix}(\rho^3)$ to itself. Under this assumption, we have $b^0(L,Z)=0$ and $b^1((\Sigma\times S^1)/\mathbb{Z}_2,Z)=g+1$ for each complex curve $\Sigma$ of genus $g$ in $\text{Fix}(\rho^2)$ or $\text{Fix}(\rho^3)$. If $\rho^2$ is not of one of the two exceptional types, $\text{Fix}(\rho^2)$ consists of one curve of genus $\frac{1}{2}(22 - r_2 - a_2)$ and $\frac{1}{2}(r_2 - a_2)$ rational curves. We have to exclude the cases $r_2-a_2\geq 4$ and $r_2+a_2=22$ but $r_2\neq a_2$ from our considerations. In both cases, we would have two or more rational curves in the fixed locus. We have to exclude the exceptional case $(r_2,a_2,\delta_2)=(10,8,0)$, too, since in that case the fixed locus consists of two elliptic curves. We are now able to determine the Betti numbers of $M$. The fixed locus of $\psi^2$ contributes four copies of $(\text{Fix}(\rho^2)\times S^1)/ \mathbb{Z}_2$ to the singular locus. Therefore, we obtain if $\text{Fix}(\rho^2)\neq\emptyset$: \begin{equation*} \begin{aligned} b^0(L,Z) & = 0 \\ b^1(L,Z) & = 4\cdot \frac{22 - r_2 - a_2}{2} + 4 + 4\cdot\frac{r_2 - a_2}{2} = 48 - 4a_2 \\ \end{aligned} \end{equation*} Since $(S\times T^3)/\mathbb{Z}^2_2$ has as usual the Betti numbers $b^1=b^2=0$ and $b^3=23$, the Betti numbers of our $G_2$-manifold $M$ are given by \begin{equation} \begin{array}{rcl} b^1(M) & = & 0 \\ b^2(M) & = & 0 \\ b^3(M) & = & 119 - 4(a_2+a_3) \end{array} \end{equation} if the fixed loci of $\rho^2$ and $\rho^3$ are both non-empty. If in addition to the fixed locus of $\rho^1$ the fixed locus of $\rho^k$ with $k\in\{2,3\}$ is empty, we have $b^3(M)=71 - 4a_{5-k}$. It is impossible that all three fixed loci are empty. In that case, the rank of the fixed lattice of any $\rho^k:S \rightarrow S$ with $k\in\{1,2,3\}$ would be $10$. The direct sum of the fixed lattices would be of dimension $30$, which is more than the dimension of the K3 lattice. Let $\mathbb{Z}^3 \rtimes \mathbb{Z}^2_2$ be the group that is generated by translations with integer coefficients and the lifts of the $\rho^k\times \psi^k$ to isometries of $S\times \mathbb{R}^3$. The group that is generated by all elements of $\mathbb{Z}^3\rtimes\mathbb{Z}^2_2$ that have a fixed point is $\mathbb{Z}^3\rtimes\mathbb{Z}^2_2$ itself if $\rho^2$ and $\rho^3$ both have fixed points. Therefore, $M$ is simply connected and its holonomy is the whole group $G_2$. If only one fixed locus is non-empty, we obtain similarly as in the first case $\pi_1(M)=\mathbb{Z}\rtimes\mathbb{Z}_2$ and the holonomy is not the whole group $G_2$. 
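The simplification $b^1(L,Z)=48-4a_2$ and the resulting formula $b^3(M)=119-4(a_2+a_3)$ can also be checked symbolically. The following sympy sketch only restates the quantities derived above, namely one curve of genus $\frac{1}{2}(22-r-a)$ together with $\frac{1}{2}(r-a)$ rational curves, taken in four copies, with each component contributing $g+1$ twisted harmonic $1$-forms:
\begin{verbatim}
import sympy as sp

r2, a2, r3, a3 = sp.symbols('r2 a2 r3 a3')

def b1_twisted(r, a):
    # four copies of (Fix x S^1)/Z_2: one curve of genus (22 - r - a)/2 plus
    # (r - a)/2 rational curves, each component contributing g + 1
    return 4 * (sp.Rational(1, 2) * (22 - r - a) + 1 + sp.Rational(1, 2) * (r - a))

print(sp.expand(b1_twisted(r2, a2)))                              # 48 - 4*a2
print(sp.expand(23 + b1_twisted(r2, a2) + b1_twisted(r3, a3)))    # 119 - 4*a2 - 4*a3
\end{verbatim}
In particular, the result is independent of $r_2$ and $r_3$, as claimed.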
Examples of this type are in fact barely $G_2$-manifolds $(X\times S^1)/\mathbb{Z}_2$, where $X$ is a Calabi-Yau manifold of Borcea-Voisin type. More precisely, $X\times S^1$ is the blow-up of $(S\times T^3)/\langle \rho^k \times \psi^k \rangle$, where $\rho^k$ is the non-symplectic involution with fixed points. We compute the possible values of $b^3$ in the case where the only empty fixed locus is that of $\rho^1$. The invariants of $\rho^1$ are $(10,10,0)$, which implies that $r'_1=2$, $a'_1=2$, $r''_1=8$ and $a''_1=8$. We take a look at the table of Theorem \ref{Theorem_Z22} and see that we necessarily have $a_2' + a_3'\in \{2,4,6\}$ and $a_2'' + a_3''=8$. Moreover, we can obtain all $3$ values of $a_2' + a_3'$ without choosing $\rho^2$ or $\rho^3$ as an involution with invariants $(10,10,0)$ or violating any of the further restrictions that we have imposed on $\rho^2$ and $\rho^3$. Explicit examples are given by $(r''_2,a''_2)=(8,8)$, $(r''_3,a''_3)=(0,0)$ and
\[ ((r'_2,a'_2),(r'_3,a'_3))\in \{((3,1),(1,1)),((3,3),(1,1)),((3,3),(3,3))\} \]
Therefore, we have $a_2+a_3\in\{10,12,14\}$ and obtain $G_2$-manifolds with
\[ b^3 \in \{63,71,79\} \]
We turn to the case where the fixed loci of $\rho^1$ and $\rho^2$ are empty. In Section \ref{NonSymplecticZ22} we have seen that a simple non-symplectic involution with invariants $(10,10,0)$ is up to conjugation the map whose restriction to $2(-E_8)$ maps $(x,y)$ to $(y,x)$ and whose restriction to $3H$ can be written as
\[ \begin{pmatrix} & M_1 & \\ M_1 & & \\ & & M_2 \\ \end{pmatrix} \]
We assume without loss of generality that $\rho^1$ is precisely this map. $\rho^2$ has to commute with $\rho^1$, and its fixed lattice has to be isomorphic to $H(2) \oplus (-E_8)(2)$ and orthogonal to the fixed lattice of $\rho^1$. This is only possible if the restriction of $\rho^2$ to $2(-E_8)$ maps $(x,y)$ to $(-y,-x)$. The restriction of $\rho^2$ to $3H$ has to be of type
\[ \begin{pmatrix} & M_l & \\ M_l & & \\ & & M_2 \\ \end{pmatrix} \]
in order to satisfy all our conditions. The fixed lattice of $\rho^1$ is
\[ \{ (v,v,0)^{\top} \in 3H | v\in H \} \]
and the fixed lattice of $\rho^2$ is
\[ \{ (v,M_l v,0)^{\top} \in 3H | v\in H \} \]
The intersection of both lattices is trivial only if $l=2$. In this case, the fixed lattice of $\rho^3 = \rho^1\rho^2$ is $H_3$ and thus we have $(r_3,a_3,\delta_3)=(2,0,0)$. It follows that we obtain only one barely $G_2$-manifold, which has $b^3 = 71$. All in all, we have proven the following theorem.

\begin{Th} Let $\psi^i:T^3\rightarrow T^3$ with $i=1,2$ be the maps that we have defined in (\ref{K3T3Case3}). Moreover, let $\rho^i$ be the generators of one of the non-symplectic $\mathbb{Z}^2_2$-actions from Theorem \ref{Theorem_Z22} such that the fixed locus of $\rho^1$ is empty and the fixed loci of $\rho^2$ and $\rho^3=\rho^1\rho^2$ are non-empty. In addition, we assume that the fixed loci of $\rho^2$ and $\rho^3$ contain no pair of complex curves with the same genus. The maps $\rho^i\times \psi^i$ generate a group $\mathbb{Z}^2_2$ that acts on $S\times T^3$, where $S$ is an appropriate K3 surface. The quotient $(S\times T^3)/\mathbb{Z}^2_2$ is a $G_2$-orbifold with $A_1$-singularities. The singularities can be resolved and we obtain a simply connected manifold $M$ with holonomy $G_2$. The second Betti number of $M$ is always $0$ and the values of $b^3$ that can be obtained by our construction are $63$, $71$, and $79$. If instead the fixed locus of $\rho^2$ is also empty, $M$ is a barely $G_2$-manifold of a unique diffeomorphism type.
Its Betti numbers are $b^2=0$ and $b^3=71$. \end{Th} \begin{Rem} \begin{enumerate} \item $G_2$-manifolds with $b^2=0$ and the three values of $b^3$ from the above theorem can also be found in Table 1 in \cite{KoLe} and in the section "$G_2$-manifolds from pairs of smooth Fano 3-folds" in Corti et al. \cite{CortiEtAl2}. \item A set of invariants that is sufficient to determine the diffeomorphism type of a 2-connected 7-dimensional spin manifold can be found in \cite{CrowleyNordstroem}. Although it is outside of the scope of this paper, it would be interesting to examine if our $G_2$-manifolds are 2-connected, to determine their diffeomorphism type and to check if they coincide with the examples from \cite{CortiEtAl2}. \end{enumerate} \end{Rem} \begin{Ex} Let $X$ be the K3 surface with the two non-symplectic involutions $\alpha$ and $\beta$ from \cite{JoyceKarigiannis} that we have already considered in Example \ref{JoyceKarigiannisExample1}. Let $\alpha$ and $\beta$ act on $X\times T^3$ as in (\ref{K3T3Case3}). The resolution of the quotient $(X\times T^3)/\langle\alpha,\beta\rangle$ is studied in Example 7.3. in \cite{JoyceKarigiannis}. The authors obtain a smooth $G_2$-manifold with $b^2=0$ and $b^3=71$. We recall that the invariants of $\alpha$ are $(1,1,1)$, the invariants of $\beta$ are $(11,11,1)$ and that the fixed locus of $\alpha\beta$ is empty. By our formula we obtain the same value of $b^3$. \end{Ex} \subsection{Quotients by a non-abelian group} Finally, we study resolutions of quotients of $S\times T^3$ by a non-abelian group $\Gamma$. We define two maps $\psi^1,\psi^2: T^3\rightarrow T^3$ by \begin{equation} \label{K3T3Case4} \begin{array}{rcl} \psi^1((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (x^1 + \tfrac{1}{4},-x^2 + \tfrac{1}{4},-x^3) + \mathbb{Z}^3 \\ \psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) & := & (-x^1,x^2,-x^3) + \mathbb{Z}^3 \\ \end{array} \end{equation} Since \[ (\psi^1)^2((x^1,x^2,x^3) + \mathbb{Z}^3) = (x^1 + \tfrac{1}{2}, x^2, x^3) + \mathbb{Z}^3\:, \] we have $(\psi^1)^4 = (\psi^2)^2 = \text{Id}_{T^3}$. Moreover, we have \[ \psi^1\psi^2((x^1,x^2,x^3) + \mathbb{Z}^3) = (-x^1 + \tfrac{1}{4},-x^2 + \tfrac{1}{4},x^3) + \mathbb{Z}^3 \] and thus $(\psi^1\psi^2)^2= \text{Id}_{T^3}$. All in all, $\psi^1$ and $\psi^2$ generate a group $\Gamma$ that is isomorphic to the dihedral group $D_4$. We extend the $\psi^i$ to maps $\rho^i\times\psi^i:S\times T^3 \to S\times T^3$ by assuming that \begin{equation} \begin{aligned} & {\rho^1}^{\ast}\omega_1 = \omega_1\:, \qquad {\rho^1}^{\ast}\omega_2 = - \omega_2\:, \qquad {\rho^1}^{\ast}\omega_3 = -\omega_3 \\ & {\rho^2}^{\ast}\omega_1 = - \omega_1\:, \qquad {\rho^2}^{\ast}\omega_2 = \omega_2\:, \qquad {\rho^2}^{\ast}\omega_3 = - \omega_3 \end{aligned} \end{equation} We determine the fixed loci of all elements of $\Gamma$. For reasons of brevity, we denote these elements by \[ \gamma_{jk} := (\rho^1\times\psi^1)^j (\rho^2\times \psi^2)^k \] with $j\in\{0,1,2,3\}$ and $k\in\{0,1\}$. $(\psi^1)^j$ with $j\in\{1,2,3\}$ acts on the $x^1$-coordinate as a translation. Therefore, the fixed point set of $\gamma_{j0}$ is all of $S\times T^3$ if $j=0$ and empty if $j\neq 0$. 
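The relations $(\psi^1)^4=(\psi^2)^2=(\psi^1\psi^2)^2=\text{Id}_{T^3}$ and the statement that $(\psi^1)^j$ has no fixed points for $j\neq 0$ can be confirmed with the same elementary affine-map bookkeeping as before. The following self-contained Python sketch does this; the fixed-point criterion again uses only that the linear parts are diagonal with entries $\pm 1$:
\begin{verbatim}
from fractions import Fraction as F

identity = ((1, 1, 1), (F(0), F(0), F(0)))

def compose(a, b):
    # composition a o b of affine maps x -> D x + v on T^3 with diagonal D
    (Da, va), (Db, vb) = a, b
    return (tuple(d * e for d, e in zip(Da, Db)),
            tuple((d * w + u) % 1 for d, w, u in zip(Da, vb, va)))

def power(a, n):
    g = identity
    for _ in range(n):
        g = compose(a, g)
    return g

def has_fixed_point(g):
    # x = D x + v is solvable mod Z^3 iff v_i is an integer whenever D_ii = +1
    D, v = g
    return all(vi % 1 == 0 for di, vi in zip(D, v) if di == 1)

psi1 = ((1, -1, -1), (F(1, 4), F(1, 4), F(0)))   # the maps defined above
psi2 = ((-1, 1, -1), (F(0), F(0), F(0)))

assert power(psi1, 4) == identity and power(psi2, 2) == identity
assert power(compose(psi1, psi2), 2) == identity          # dihedral relations of D_4
assert all(not has_fixed_point(power(psi1, j)) for j in (1, 2, 3))
print("D_4 relations hold and (psi^1)^j is fixed-point free for j = 1, 2, 3")
\end{verbatim}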
By a short calculation we see that the fixed point sets of the remaining group elements are given by: \begin{equation} \begin{array}{rcl} \text{Fix}(\gamma_{01}) & = & \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{2}\epsilon_1,x^2,\tfrac{1}{2}\epsilon_2) + \mathbb{Z}^3 | x^2\in\mathbb{R}\} \times \text{Fix}(\rho^2) \\ \text{Fix}(\gamma_{11}) & = & \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{8} + \tfrac{1}{2}\epsilon_1,\tfrac{1}{8} + \tfrac{1}{2}\epsilon_2,x^3) + \mathbb{Z}^3 | x^3\in\mathbb{R}\} \times \text{Fix}(\rho^1\rho^2) \\ \text{Fix}(\gamma_{21}) & = & \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{1}{4} + \tfrac{1}{2}\epsilon_1,x^2, \tfrac{1}{2}\epsilon_2) + \mathbb{Z}^3 | x^2\in\mathbb{R}\} \times \text{Fix}(\rho^2) \\ \text{Fix}(\gamma_{31}) & = & \bigcup_{\epsilon_1,\epsilon_2\in \{0,1\}} \{(\tfrac{3}{8} + \tfrac{1}{2}\epsilon_1,\tfrac{1}{8} + \tfrac{1}{2}\epsilon_2,x^3) + \mathbb{Z}^3 | x^3\in\mathbb{R}\} \times \text{Fix}(\rho^1\rho^2) \\ \end{array} \end{equation} $\rho^1\times\psi^1$ maps $\text{Fix}(\gamma_{01})$ bijectively to $\text{Fix}(\gamma_{21})$ and $\text{Fix}(\gamma_{11})$ to $\text{Fix}(\gamma_{31})$. The singular locus of $(S\times T^3)/\Gamma$ therefore consists of one component that is diffeomorphic to $\text{Fix}(\gamma_{01})$ and another one that is diffeomorphic to $\text{Fix}(\gamma_{11})$. More precisely, it is the union of $4$ copies of $S^1\times \text{Fix}(\rho^2)$ and $4$ copies of $S^1\times \text{Fix}(\rho^1\rho^2)$. By taking a look at the first coordinate of $T^3$ we see that this union is disjoint. Since $\gamma_{01}$ and $\gamma_{11}$ are both of order $2$, all singularities of $(S\times T^3)/\Gamma$ are $A_1$-singularities. Therefore, we can apply Theorem \ref{Thm-Joyce-Karigiannis} to obtain a smooth $G_2$-manifold $M$. We determine the fundamental group of $M$. As before, we have a natural action of $\mathbb{Z}^3\rtimes\Gamma$ on $S\times \mathbb{R}^3$. We consider the following maps that are induced by this group action: \begin{equation} \begin{array}{rcl} (p,(x^1,x^2,x^3)) & \mapsto & (\rho^2(p),(-x^1,x^2,-x^3) + (\lambda_1,0,\lambda_2)) \\ (p,(x^1,x^2,x^3)) & \mapsto & (\rho^1\rho^2(p),(-x^1+\frac{1}{4},-x^2+\frac{1}{4},x^3) + (\lambda_1,\lambda_2,0)) \\ \end{array} \end{equation} where $p\in S$, $(x^1,x^2,x^3)\in\mathbb{R}^3$ and $\lambda_1,\lambda_2\in\mathbb{Z}$ are arbitrary. If $\rho^2$ and $\rho^1\rho^2$ have at least one fixed point, the above maps have a fixed point, too, and we can conclude with help of Theorem \ref{Pi1Quotient} that $M$ is simply connected and thus has holonomy $G_2$. If $\text{Fix}(\rho^2)$ or $\text{Fix}(\rho^1\rho^2)$ is empty, the fundamental group is infinite and the holonomy is smaller. We do not pursue this case further. Finally, we calculate the Betti numbers of $M$. By the same arguments as in the previous cases we see that $b^1((S\times T^3)/\Gamma) = b^2((S\times T^3)/\Gamma) = 0$ and $b^3((S\times T^3)/\Gamma) = 23$. We denote the invariants of $\rho^2$ by $(r_2,a_2,\delta_2)$ and the invariants of $\rho^3$ by $(r_3,a_3,\delta_3)$. 
The Betti numbers of the singular locus $L$ can be computed by the usual methods as follows: \begin{equation*} \begin{array}{rcl} b^0(L) & = & 4\cdot\left(\frac{r_2 - a_2}{2} + 1 + \frac{r_3 - a_3}{2} + 1\right) = 8 + 2(r_2 + r_3) - 2(a_2 + a_3) \\ b^1(L) & = & b^0(L) + 4\cdot\left((22 - r_2 - a_2) + (22 - r_3 - a_3) \right) \\ & = & 184 - 2(r_2 + r_3) - 6(a_2 + a_3) \\ \end{array} \end{equation*} The above equations yield the correct Betti numbers for $(r_k,a_k,\delta_k)=(10,8,0)$ with $k\in\{2,3\}$, too. We obtain the following Betti numbers for $M$: \begin{equation} \begin{array}{rcl} b^2(M) & = & 8 + 2(r_2 + r_3) - 2(a_2 + a_3) \\ b^3(M) & = & 207 - 2(r_2 + r_3) - 6(a_2 + a_3) \\ \end{array} \end{equation} The Betti numbers of $M$ depend only on $r_2+r_3$ and $a_2+a_3$. The set of all pairs $(r_2 + r_3,a_2 + a_3)$ from Theorem \ref{Theorem_Z22} and Theorem \ref{KoLe-Theorem} is the same as the set of all $(r_1 + r_2,a_1 + a_2)$ since $(\rho^2,\rho^3)$ generate the same non-symplectic $\mathbb{Z}^2_2$-action as $(\rho^1,\rho^2)$. We calculate the Betti numbers that we obtain from these pairs and finally have proven the following theorem. \begin{Th} Let $\psi^i:T^3\rightarrow T^3$ with $i=1,2$ be the maps that we have defined in (\ref{K3T3Case4}). Moreover, let $\rho^i$ be the generators of one of the non-symplectic $\mathbb{Z}^2_2$-actions from Theorem \ref{Theorem_Z22} or \ref{KoLe-Theorem}. The maps $\rho^i\times \psi^i$ generate a group that is isomorphic to the dihedral group $D_4$ and acts on $S\times T^3$, where $S$ is an appropriate K3 surface. The quotient $(S\times T^3)/D_4$ is a $G_2$-orbifold with $A_1$-singularities. The singularities can be resolved and we obtain a manifold $M$ whose holonomy is contained in $G_2$. If the fixed loci of $\rho^2$ and $\rho^1\rho^2$ are non-empty, $M$ is simply connected and its holonomy is $G_2$. The Betti numbers of $M$ depend on the choice of the $\rho^i$ and can be found in the table below. \begin{center} \begin{tabular}{l|l} $b^2$ & $b^3$ \\ \hline 8 & 31, 39, 47, 55, 63, 95, 103, 111, 119, 127, 135, 143, 151, 159, 167,\\ & 175, 183, 191 \\ \hline 12 & 35, 43, 51, 59, 67, 99, 107, 115, 123, 131, 139, 147, 155, 163, 171,\\ & 179, 187, 195 \\ \hline 16 & 63, 71, 127, 135, 143, 151, 159, 167, 175, 183, 191, 199 \\ \hline 20 & 139, 147, 155, 163, 171, 179 \\ \hline 24 & 79, 87, 95, 103, 111, 143, 151, 159, 167, 175 \\ \hline 28 & 83, 91, 99, 107, 115, 147, 155, 163, 171, 179 \\ \hline 32 & 111, 119, 151, 159, 167, 175, 183 \\ \hline 36 & 155, 163 \\ \hline 40 & 127, 135, 143, 151, 159 \\ \hline 44 & 131, 139, 147, 155, 163 \\ \hline 48 & 159, 167 \\ \end{tabular} \end{center} \end{Th} The above theorem yields $95$ different pairs $(b^2,b^3)$. Some of them can be found in the literature: \begin{itemize} \item $(8,47)$, $(8,63)$, $(8,95)$, $(8,103)$, $(8,111)$, $(8,119)$, $(8,151)$, $(8,159)$, $(12,59)$, $(12,67)$, $(12,99)$, $(12,107)$, $(12,115)$, $(12,123)$, $(12,155)$, $(12,163)$, $(16,71)$, $(20,155)$ and $(24,95)$ can be found in Kovalev, Lee \cite{KoLe}. \item Of the remaining pairs, $(8,31)$, $(8,39)$, $(8,55)$, $(8,127)$, $(8,135)$, $(12,43)$, $(12,51)$, $(12,131)$ and $(16,135)$ can be found in Joyce \cite{Joyce}. \end{itemize} The remaining $67$ pairs do not appear in \cite{Joyce1,Joyce,KoLe}. One pair of Betti numbers from the above theorem usually corresponds to several non-symplectic $\mathbb{Z}^2_2$-actions and thus probably to several diffeomorphism types $G_2$-manifolds. 
Although the pairs $(b^2,b^3)$ from the literature usually correspond to several $G_2$-manifolds, too, it is possible that our theorem yields further new $G_2$-manifolds. \end{document}
\begin{document} \title{Exact Diagonalization of Two Quantum Models for the Damped Harmonic Oscillator} \author{M.\ Rosenau da Costa\thanks{E-mail: [email protected]}, A.\ O.\ Caldeira\thanks{E-mail: [email protected]}, S.\ M.\ Dutra\thanks{Present address: Huygens Laboratory, University of Leiden, P.\ O.\ Box 9504, 2300 RA Leiden, The Netherlands. E-mail: [email protected]}, H.\ Westfahl, Jr. \thanks{Present address: Department of Physics of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080, USA. E-mail: [email protected]}\\ \emph{Instituto de F\'{\i}sica ``Gleb Wataghin''}\\ \emph{Universidade Estadual de Campinas, Unicamp} \\ \emph{Caixa Postal 6165, 13083-970 Campinas, S\~{a}o Paulo, Brazil}} \date{\today} \maketitle \begin{abstract} The damped harmonic oscillator is a workhorse for the study of dissipation in quantum mechanics. However, despite its simplicity, this system has given rise to some approximations whose validity and relation to more refined descriptions deserve a thorough investigation. In this work, we apply a method that allows us to diagonalize exactly the dissipative Hamiltonians that are frequently adopted in the literature. Using this method we derive the conditions of validity of the rotating-wave approximation (RWA) and show how this approximate description relates to more general ones. We also show that the existence of dissipative coherent states is intimately related to the RWA. Finally, through the evaluation of the dynamics of the damped oscillator, we notice an important property of the dissipative model that has not been properly accounted for in previous works; namely, the necessity of new constraints to the application of the factorizable initial conditions. \end{abstract} \section{Introduction} The study of dissipative systems and in particular of the Brownian motion has been pursued for a long time in the context of classical \cite{clas} and quantum mechanics \cite{AmB}. Although there has been a number of publications in this area there are some subtle points that have never been properly investigated in the literature. Among these we could mention three major ones; a careful investigation of the relation between different models \cite{Ford}, the existence of dissipative coherent states \cite{Agar,Wl1,Wl2,Ser} or the condition for the employment of factorizable initial conditions. These are exactly the issues we shall address in this paper. Usually the dissipation in the system is described as a consequence of its coupling to a reservoir. The properties of this dissipative systems are generally studied through the evaluation of the time evolution of its reduced density operator. This evolution is often described either by a generalization of the Feynman-Vernon approach \cite{AmB,AmA,Grab,Cris} or through master equations \cite{Agar,Wl1,Wl2,Ser,Pab,Zur,Zur0,Pab2,Zur2}. In this work the properties of the system will be studied through exact diagonalization of different Hamiltonians of the dissipative models. We will consider a damped harmonic oscillator. The usual models of dissipation consist of coupling the harmonic oscillator to a reservoir that is conveniently chosen as a group of N noninteracting oscillators. The coupling between the two systems is bilinear in the creation and destruction operators of quanta of energy. 
Then the Hamiltonian of the total system is given by \cite{Agar} \begin{equation} \hat{H}=\hat{H}_{Sis}+\hat{H}_{Res}+\hat{H}_{Int}, \label{0} \end{equation} where \begin{eqnarray} \hat{H}_{Sis} &=&\hbar \omega _{o}\hat{a}^{\dagger }\hat{a},\quad \hat{H} _{Res}=\hbar \sum_{j}\omega _{j}\hat{b}_{j}^{\dagger }\hat{b}_{j}, \label{1} \\ \hat{H}_{Int} &=&\hbar \left( \hat{a}^{\dagger }+\hat{a}\right) \sum_{j}\left( k_{j}\hat{b}_{j}+k_{j}^{*}\hat{b}_{j}^{\dagger }\right) , \label{2} \end{eqnarray} and we consider a harmonic oscillator with frequency $\omega _{o}$ (the system of interest) interacting with a bath of oscillators with frequencies $\omega _{j}$ through the coupling constants $k_{j}$. We will rewrite the Hamiltonian $\hat{H}$ in the limit of a continuous spectrum of excitations in the reservoir. Then we will diagonalize $\hat{H}$ and determine the time evolution of the operator $\hat{a}$ exactly. The analysis of $\hat{a}\left( t\right) $ will determine the conditions of validity of the rotating-wave approximation (RWA), which consists of neglecting the terms $k_{j}\hat{a}\hat{b}_{j}+k_{j}^{*}\hat{a}^{\dagger }\hat{b}_{j}^{\dagger }$ in (\ref{2}) and writing \begin{equation} \hat{H}_{Int}^{RWA}=\hbar \sum_{j}\left( k_{j}\hat{a}^{\dagger }\hat{b}_{j}+k_{j}^{*}\hat{a}\hat{b}_{j}^{\dagger }\right) . \label{3} \end{equation} Once this has been accomplished we will discuss the existence of dissipative coherent states. Some authors \cite{Agar,Wl1,Wl2,Ser} have stated that coherent states are special states that remain pure during their decay in dissipative systems. We will show that the existence of these dissipative coherent states is directly related to the RWA; they can only exist at zero temperature and in systems that meet the conditions required for the RWA. Once we have determined the evolution of the operator $\hat{a}\left( t\right) $ of the system we can determine the evolution of any of its observables. However, the dynamics of these observables will depend on the specific form adopted for the coupling constants $k_{j}$ as functions of the frequencies $\omega _{j}$. Our method holds for an arbitrary form of this function but, in order to compare our results with the Caldeira-Leggett model \cite{AmB}, we will specialize to the case in which it coincides with the one adopted there. Then, as in Refs. \cite{Grab,Cris}, we will determine the evolution of the mean value of the position operator $\left\langle \hat{q}\left( t\right) \right\rangle $ of the damped oscillator. The result of this calculation reveals the need to treat the initial time of the motion very carefully. We propose a simple initial condition that eliminates the initial transient that would appear in the evolution of $\left\langle \hat{q}\left( t\right) \right\rangle $, and we believe that it is enough to eliminate most of, or maybe all, the initial transients (in a certain time scale) which were noticed in this system in previous works \cite{Pab,Zur,Pab2}. The paper is organized as follows. In Sec. II we write the Hamiltonian (\ref{0}) in the limit of a continuous spectrum for the reservoir excitations and we diagonalize it exactly within and without the RWA. We compare the model given by the Hamiltonian (\ref{0}) with the dissipative model presented in \cite{AmB} in Sec. III. Here, we also determine the relation between the coupling function $\left| v\left( \omega \right) \right| ^{2}$, introduced in Sec. II, and the spectral function $J\left( \omega \right) $ introduced in \cite{AmA}. In Sec.
IV we analyze the relevance of the different terms that appear in the calculation of the evolution of the operator $\hat{a} \left( t\right) $ with relation to the intensity of the dissipation in the system. In Sec. V we show under which conditions the evolution of the operator $\hat{a}\left( t\right) $ is reduced to that given in the RWA. In Sec. VI we show that the existence of dissipative coherent states is only possible within the RWA. In Sec. VII we present the calculation of the evolution of the mean value of the position of the damped harmonic oscillator. In Sec. VIII we discussed the physical meaning of the initial condition proposed in Sec.VII. Finally, we discuss the main results and conclusions in Sec. IX. \section{Diagonalization of the Dissipative Hamiltonians} \subsection{Treating a Reservoir with Continuous Spectrum} We can rewrite the Hamiltonian (\ref{0}) considering a continuous spectrum of excitations in the reservoir by making use of the transformation between the discrete boson operators $\hat{b}_{j}$ and the continuous ones $\hat{b} _{\Omega }$ \cite{Coh} \begin{equation} \hat{b}_{j}=\sqrt{g\left( \Omega _{j}\right) }\int_{1/g\left( \Omega _{j}\right) }d\Omega \hat{b}_{\Omega }, \label{230} \end{equation} where $g\left( \Omega _{j}\right) d\Omega _{j}$ is the number of modes in the reservoir with frequencies between $\Omega _{j}$ and $\Omega _{j}+d\Omega _{j}$ and $\int_{1/g\left( \Omega _{j}\right) }d\Omega $ represents an integration in a band of width $1/g\left( \Omega _{j}\right) $ around $\Omega _{j}$. The operators $\hat{b}_{\Omega }$ then satisfy the commutation relation \begin{equation} \left[ \hat{b}_{\Omega },\hat{b}_{\tilde{\Omega}}^{\dagger }\right] =\delta \left( \Omega -\tilde{\Omega}\right) , \label{230b} \end{equation} and all other commutators vanish. Under the transformation (\ref{230}) we find \begin{equation} \hat{H}_{Int}=\hbar \left( \hat{a}^{\dagger }+\hat{a}\right) \int d\Omega \sqrt{g\left( \Omega \right) }\left[ k\left( \Omega \right) \hat{b}_{\Omega }+k^{*}\left( \Omega \right) \hat{b}_{\Omega }^{\dagger }\right] , \label{232} \end{equation} where we considered that $g\left( \Omega _{j}\right) $ and $k\left( \Omega _{j}\right) $ are constant inside the interval $1/g\left( \Omega _{j}\right) $ and that $\sum_{j}\int_{1/g\left( \Omega _{j}\right) }d\Omega $ is nothing but $\int d\Omega $, where this last integral covers the whole spectrum of excitations of the reservoir. Then the total Hamiltonian of our system is given by \begin{eqnarray} \hat{H} &=&\hbar \omega _{o}\hat{a}^{\dagger }\hat{a}+\hbar \int \Omega \hat{ b}_{\Omega }^{\dagger }\hat{b}_{\Omega }d\Omega \label{235} \\ &&\qquad \qquad +\hbar \left( \hat{a}^{\dagger }+\hat{a}\right) \int \left[ v\left( \Omega \right) \hat{b}_{\Omega }+v^{*}\left( \Omega \right) \hat{b} _{\Omega }^{\dagger }\right] d\Omega . \nonumber \end{eqnarray} where \begin{equation} v\left( \Omega \right) =\sqrt{g\left( \Omega \right) }k\left( \Omega \right) . 
\label{234} \end{equation} \subsection{The Hamiltonian within the Rotating Wave Approximation} We will now perform a canonical transformation and apply the procedure proposed by Fano \cite{Fn} in order to diagonalize the Hamiltonian of our global system in the RWA that is written as \begin{eqnarray} \hat{H}^{RWA} &=&\hbar \omega _{o}\hat{a}^{\dagger }\hat{a}+\hbar \int \Omega \hat{b}_{\Omega }^{\dagger }\hat{b}_{\Omega }d\Omega \label{237} \\ &&\qquad \qquad +\hbar \int \left[ v\left( \Omega \right) \hat{a}^{\dagger } \hat{b}_{\Omega }+v^{*}\left( \Omega \right) \hat{a}\hat{b}_{\Omega }^{\dagger }\right] d\Omega . \nonumber \end{eqnarray} The diagonalization procedure presented in the sequel is basically a review of the method presented in \cite{Bar1}. Our goal is to find an operator that satisfies the eigenoperator equation \begin{equation} \left[ \hat{A}_{\omega },\hat{H}^{RWA}\right] =\hbar \omega \hat{A}_{\omega }, \label{241} \end{equation} and therefore has its evolution trivially given by $\hat{A}_{\omega }\left( t\right) =\hat{A}_{\omega }e^{-i\omega t}$. The new operator $\hat{A}_{\omega }$ can be written in terms of the operator $\hat{a}$ of the system and of the operators $\hat{b}_{\Omega }$ of the reservoir in the form \begin{equation} \hat{A}_{\omega }=\alpha _{\omega }\hat{a}+\int d\Omega \beta _{\omega ,\Omega }\hat{b}_{\Omega }. \label{242} \end{equation} Substituting this expression for $\hat{A}_{\omega }$ as well as (\ref{237}) for $\hat{H}^{RWA}$ in (\ref{241}) and calculating the commutators we have \begin{equation} \begin{array}{l} \omega _{o}\alpha _{\omega }\hat{a}+\alpha _{\omega }\int d\Omega v\left( \Omega \right) \hat{b}_{\Omega }+\int d\Omega \Omega \beta _{\omega ,\Omega } \hat{b}_{\Omega } \\ \qquad \qquad +\int d\Omega v^{*}\left( \Omega \right) \beta _{\omega ,\Omega }\hat{a}=\omega \left( \alpha _{\omega }\hat{a}+\int d\Omega \beta _{\omega ,\Omega }\hat{b}_{\Omega }\right) . \end{array} \label{243} \end{equation} Now, taking the commutator of this expression with $\hat{a}^{\dagger }$ and $ \hat{b}_{\Omega }^{\dagger }$, we obtain \begin{eqnarray} \omega _{o}\alpha _{\omega }+\int d\Omega v^{*}\left( \Omega \right) \beta _{\omega ,\Omega } &=&\omega \alpha _{\omega }, \label{244} \\ v\left( \Omega \right) \alpha _{\omega }+\Omega \beta _{\omega ,\Omega } &=&\omega \beta _{\omega ,\Omega }, \label{245} \end{eqnarray} respectively. Imposing \begin{equation} \left[ \hat{A}_{\omega },\hat{A}_{\tilde{\omega}}^{\dagger }\right] =\delta \left( \omega -\tilde{\omega}\right) , \label{241b} \end{equation} we have \begin{equation} \alpha _{\omega }\alpha _{\tilde{\omega}}^{*}+\int d\Omega \beta _{\omega ,\Omega }\beta _{\tilde{\omega},\Omega }^{*}=\delta \left( \omega -\tilde{ \omega}\right) . \label{246} \end{equation} The system of equations (\ref{244}), (\ref{245}) and (\ref{246}) is identical to the one presented in \cite{Fn}. 
The solution is given by \begin{equation} \left| \alpha _{\omega }\right| ^{2}=\frac{\left| v\left( \omega \right) \right| ^{2}}{\left[ \omega -\omega _{o}-F\left( \omega \right) \right] ^{2}+\left[ \pi \left| v\left( \omega \right)\right| ^{2}\right] ^{2}}, \label{247} \end{equation} with an arbitrary phase of $\alpha _{\omega }$, and \begin{equation} \beta _{\omega ,\Omega }=\left[ {\mathcal P}\frac{1}{\omega -\Omega }+\frac{ \omega -\omega _{o}-F\left( \omega \right) }{\left| v\left( \omega \right) \right| ^{2}}\delta \left( \Omega -\omega \right) \right] v\left( \Omega \right) \alpha _{\omega }, \label{248} \end{equation} where \begin{equation} F\left( \omega \right) ={\mathcal P}\int \frac{\left| v\left( \Omega \right) \right| ^{2}}{\omega -\Omega }d\Omega , \label{249} \end{equation} and ${\mathcal P}$ denotes the principal part. We can calculate the evolution of the operator $\hat{a}$ of the system expressing it as function of the operators $\hat{A}_{\omega }$. We can write $\hat{a}$ as function of $\hat{A}_{\omega }$ in the following way \begin{equation} \hat{a}=\int d\omega f_{\omega }\hat{A}_{\omega }. \label{251} \end{equation} Taking the commutator $\left[ \hat{a},\hat{A}_{\omega }^{\dagger }\right] $, first using (\ref{242}) and then (\ref{251}), we obtain $f_{\omega }=\alpha _{\omega }^{*}$. Therefore the evolution of the operator $\hat{a}$ is given by \begin{equation} \hat{a}\left( t\right) =\int d\omega \alpha _{\omega }^{*}\hat{A}_{\omega }e^{-i\omega t}. \label{254} \end{equation} Substituting the expression for $\hat{A}_{\omega }$ in this equation and using (\ref{248}) we obtain \begin{eqnarray} \hat{a}\left( t\right) &=&\int d\omega \left| \alpha _{\omega }\right| ^{2}e^{-i\omega t}\hat{a} \nonumber \\ &&+\int d\Omega v\left( \Omega \right) \left\{ \int d\omega \left| \alpha _{\omega }\right| ^{2}{\mathcal P}\frac{1}{\omega -\Omega }e^{-i\omega t}\right. \label{255} \\ &&\qquad \left. +\frac{\left| \alpha _{\Omega }\right| ^{2}}{\left| v\left( \Omega \right) \right| ^{2}}\left[ \Omega -\omega _{o}-F\left( \Omega \right) \right] e^{-i\Omega t}\right\} \hat{b}_{\Omega }. \nonumber \end{eqnarray} \subsection{The Hamiltonian without the Rotating Wave Approximation} Now we will present the diagonalization of the Hamiltonian (\ref{235}) without the RWA. The procedure that we will present is similar to the one adopted in \cite{Bar2}. Again we want to find an operator $\hat{A}_{\omega }$ that satisfies (\ref {241}), with $\hat{H}$ in the place of $\hat{H}^{RWA}$, and (\ref{241b}). Then we write $\hat{A}_{\omega }$ in the form \begin{equation} \hat{A}_{\omega }=\alpha _{\omega }\hat{a}+\int d\Omega \beta _{\omega ,\Omega }\hat{b}_{\Omega }+\chi _{\omega }\hat{a}^{\dagger }+\int d\Omega \sigma _{\omega ,\Omega }\hat{b}_{\Omega }^{\dagger }. 
\label{272} \end{equation} Imposing (\ref{241}) and (\ref{241b}) we obtain (see Appendix A) \begin{eqnarray} \left| \alpha _{\omega }\right| ^{2} &=&\left( \frac{\omega +\omega _{o}}{ 2\omega _{o}}\right) ^{2}\frac{1}{\left| v\left( \omega \right) \right| ^{2}\left[ \pi ^{2}+z ^{2}\left( \omega \right) \right] }, \label{273} \\ \beta _{\omega ,\Omega } &=&\left[ {\mathcal P}\frac{1}{\omega -\Omega } +z\left( \omega \right) \delta \left( \omega -\Omega \right) \right] \frac{ 2\omega _{o}}{\omega +\omega _{o}}v\left( \Omega \right) \alpha _{\omega }, \label{274} \\ \chi _{\omega } &=&\frac{\omega -\omega _{o}}{\omega +\omega _{o}}\alpha _{\omega }, \label{275} \\ \sigma _{\omega ,\Omega } &=&\frac{1}{\omega +\Omega }\frac{2\omega _{o}}{ \omega +\omega _{o}}v^{*}\left( \Omega \right) \alpha _{\omega }, \label{276} \end{eqnarray} where \begin{equation} z\left( \omega \right) =\frac{\omega ^{2}-\omega _{o}^{2}-2\omega _{o}H\left( \omega \right) }{2\omega _{o}\left| v\left( \omega \right) \right| ^{2}} \label{277b} \end{equation} and \begin{equation} H\left( \omega \right) =F\left( \omega \right) -G\left( \omega \right) = {\mathcal P}\int \frac{\left| v\left( \Omega \right) \right| ^{2}}{\omega -\Omega }d\Omega -\int \frac{\left| v\left( \Omega \right) \right| ^{2}}{ \omega +\Omega }d\Omega . \label{278} \end{equation} We can express $\hat{a}$ as a function of $\hat{A}_{\omega }$ and $\hat{A} _{\omega }^{\dagger }$ in the following way \begin{equation} \hat{a}=\int d\omega \phi _{\omega }\hat{A}_{\omega }+\int d\omega \varphi _{\omega }\hat{A}_{\omega }^{\dagger }. \label{279} \end{equation} Now taking, again, the commutators $\left[ \hat{a},\hat{A}_{\omega }^{\dagger }\right] $ and $\left[ \hat{a},\hat{A}_{\omega }\right] $ we obtain $\phi _{\omega }=\alpha _{\omega }^{*}$ and $\varphi _{\omega }=-\chi _{\omega }$. 
Substituting the expression (\ref{272}) for $\hat{A}_{\omega }$ in (\ref{279}) the time evolution of the operator $a$ can be easily written as \begin{eqnarray} \hat{a}\left( t\right) &=&\int \frac{d\omega }{\pi }\left| L\left( \omega \right) \right| ^{2}\left\{ A\left( \omega \right) \cos \left( \omega t\right) \hat{a}-i\left[ B\left( \omega \right) \hat{a}+C\left( \omega \right) \hat{a}^{\dagger }\right] \sin \left( \omega t\right) \right\} \nonumber \\ &&\qquad \qquad \qquad \qquad+\int \frac{d\Omega }{\pi }B_{1}\left( \Omega ;t\right) \hat{b}_{\Omega }+\int \frac{d\Omega }{\pi }B_{2}\left( \Omega ;t\right) \hat{b}_{\Omega }^{\dagger }, \label{280} \end{eqnarray} where \begin{eqnarray} A\left( \omega \right) &=&2\omega ,\quad B\left( \omega \right) =\frac{ \omega ^{2}+\omega _{o}^{2}}{\omega _{o}},\quad C\left( \omega \right) = \frac{\omega ^{2}-\omega _{o}^{2}}{\omega _{o}}, \label{281} \\ B_{1}\left( \Omega ;t\right) &=&v\left( \Omega \right) \left\{ \left( \omega _{o}+\Omega \right) \left[ X\left( \Omega ;t\right) +Z\left( \Omega \right) e^{-i\Omega t}\right] -iY_{\left( +\right) }\left( \Omega ;t\right) \right\} , \label{281b} \\ B_{2}\left( \Omega ;t\right) &=&v^{*}\left( \Omega \right) \left\{ \left( \omega _{o}-\Omega \right) \left[ X\left( \Omega ;t\right) +Z\left( \Omega \right) e^{i\Omega t}\right] -iY_{\left( -\right) }\left( \Omega ;t\right) \right\} , \label{281d} \end{eqnarray} with \begin{eqnarray} X\left( \Omega ;t\right) &=&{\mathcal P}\int d\omega \frac{2\left| L\left( \omega \right) \right| ^{2}}{\omega ^{2}-\Omega ^{2}}\omega \cos \left( \omega t\right) , \label{281e} \\ Y_{\left( \pm \right) }\left( \Omega ;t\right) &=&{\mathcal P}\int d\omega \frac{2\left| L\left( \omega \right) \right| ^{2}}{\omega ^{2}-\Omega ^{2}} \left( \omega ^{2}\pm \omega _{o}\Omega \right) \sin \left( \omega t\right) , \label{281f} \\ Z\left( \Omega \right) &=&\frac{\left| L\left( \Omega \right) \right| ^{2}}{ \left| v\left( \Omega \right) \right| ^{2}}\left[ \frac{\Omega ^{2}-\omega _{o}^{2}}{2\omega _{o}}-H\left( \Omega \right) \right] , \label{281g} \end{eqnarray} \begin{equation} \left| L\left( \omega \right) \right| ^{2}=\frac{2\pi \omega _{o}\left| v\left( \omega \right) \right| ^{2}}{\left[ \omega ^{2}-\omega _{o}^{2}-2\omega _{o}H\left( \omega \right) \right] ^{2}+\left[ 2\pi \omega _{o}\left| v\left( \omega \right) \right| ^{2}\right] ^{2}}. \label{281h} \end{equation} \section{The Model of Coordinate-Coordinate Coupling} The expressions obtained for the evolution of the operator $\hat{a}\left( t\right) $, within or without the RWA, remained written in terms of the coupling function $\left| v\left( \omega \right) \right| ^{2}$. Therefore, the choice of the function $\left| v\left( \omega \right) \right| ^{2}$ will determine the dynamics of the damped oscillator. 
We will choose the function $\left| v\left( \omega \right) \right| ^{2}$ by comparing the dissipation model corresponding to the Hamiltonian (\ref{235}) to the one presented in \cite{AmB} that corresponds to the following Hamiltonian \begin{eqnarray} \hat{H} &=&\frac{\hat{p}^{2}}{2M}+V\left( \hat{q}\right) +\sum_{j}\left( \frac{\hat{p}_{j}^{2}}{2m_{j}}+\frac{m_{j}\omega _{j}^{2}}{2}\hat{q} _{j}^{2}\right) \label{301} \\ &&\qquad \qquad \qquad \qquad -\sum_{j}C_{j}\hat{q}_{j}\hat{q}+V_{R}\left( \hat{q}\right) , \nonumber \end{eqnarray} where the counter-term $V_{R}\left( \hat{q}\right) $, which cancels the additional contribution to $V\left( \hat{q}\right) $ due to the coupling of the system to the reservoir, is given by \begin{equation} V_{R}\left( \hat{q}\right) =\sum_{j}\frac{C_{j}^{2}}{2m_{j}\omega _{j}^{2}} \hat{q}^{2}. \label{302} \end{equation} The spectral function $J\left( \omega \right) $ is defined by \begin{equation} J\left( \omega \right) =\frac{\pi }{2}\sum_{j}\frac{C_{j}^{2}}{m_{j}\omega _{j}}\delta \left( \omega -\omega _{j}\right) =\frac{\pi }{2}\frac{g\left( \omega \right) C_{\omega }^{2}}{m_{\omega }\omega }, \label{303b} \end{equation} where we have taken the limit of a continuous spectrum and used $g\left( \omega \right) $ from (\ref{230}). For ohmic dissipation \begin{equation} J\left( \omega \right) =\left\{ \begin{array}{c} 2M\gamma \omega \quad if\quad \omega <\Omega _{c} \\ 0\quad if\quad \omega >\Omega _{c}, \end{array} \right. \label{304} \end{equation} where $\Omega _{c}$ is a cutoff frequency, much larger than the natural frequencies of the motion of the system of interest. But in our calculations we will conveniently use the Drude form \begin{equation} J\left( \omega \right) =\frac{2M\gamma \omega }{\left( 1+\omega ^{2}/\Omega _{c}^{2}\right) }. \label{305} \end{equation} We are treating a damped harmonic oscillator so $V\left( \hat{ q}\right) =1/2M\omega _{o}^{2}\hat{q}^{2}$. Applying the usual definitions of the operators $\hat{a}$ and $\hat{b}_{j}$, \begin{equation} \hat{a}=\sqrt{\frac{M\omega _{o}}{2\hbar }}\left( \hat{q}+\frac{i}{M\omega _{o}}\hat{p}\right) ,\quad \hat{b}_{j}=\sqrt{\frac{m_{j}\omega _{j}}{2\hbar } }\left( \hat{q}_{j}+\frac{i}{m_{j}\omega _{j}}\hat{p}_{j}\right) , \label{41} \end{equation} we can rewrite (\ref{301}), initially without the inclusion of the counter-term $V_{R}\left( \hat{q}\right) $, as \begin{eqnarray} \hat{H} &=&\hbar \omega _{o}\hat{a}^{\dagger }\hat{a}+\sum_{j}\hbar \omega _{j}\hat{b}_{j}^{\dagger }\hat{b}_{j} \label{42} \\ &&\quad -\frac{\hbar }{2}\sqrt{\frac{1}{M\omega _{o}}}\left( \hat{a}+\hat{a} ^{\dagger }\right) \sum_{j}\frac{C_{j}}{\sqrt{m_{j}\omega _{j}}}\left( \hat{b }_{j}+\hat{b}_{j}^{\dagger }\right) \nonumber \end{eqnarray} (measuring the energy of the system from the energy of the vacuum). Now we can use the transformation (\ref{230}) in order to consider a continuous spectrum for the excitations of the reservoir. The second term on the RHS of (\ref{42}) becomes \begin{equation} \hat{H}_{Res}=\hbar \int \omega \hat{b}_{\omega }^{\dagger }\hat{b}_{\omega }d\omega \label{422} \end{equation} and its last term can be written in the following way: \begin{equation} \hat{H}_{Int}=-\frac{\hbar }{2}\sqrt{\frac{1}{M\omega _{o}}}\left( \hat{a}+ \hat{a}^{\dagger }\right) \int d\omega \sqrt{\frac{g\left( \omega \right) }{ m_{\omega }\omega }}C_{\omega }\left( \hat{b}_{\omega }+\hat{b}_{\omega }^{\dagger }\right) . 
\label{43} \end{equation} Now comparing (\ref{42}-\ref{43}) with (\ref{235}) we see that both Hamiltonians will be equivalent if we employ \begin{equation} v\left( \omega \right) =-\frac{1}{2}\sqrt{\frac{g\left( \omega \right) }{ M\omega _{o}m_{\omega }\omega }}C_{\omega }. \label{44} \end{equation} Taking the square of (\ref{44}) and comparing it with (\ref{303b}) we obtain \begin{equation} v ^{2}\left( \omega \right)=\frac{1}{2\pi }\frac{J\left( \omega \right) }{ M\omega _{o}}. \label{46} \end{equation} Adopting the Drude form (\ref{305}) $\left| v\left( \omega \right) \right| ^{2}$ is given by \begin{equation} \left| v\left( \omega \right) \right| ^{2}=\frac{\gamma \omega }{\pi \omega _{o}}\frac{1}{\left( 1+\omega ^{2}/\Omega _{c}^{2}\right) }, \label{47} \end{equation} which is defined only for $\omega \geq 0$. Now that we have established the form of $\left| v\left( \omega \right) \right| ^{2}$ corresponding to the Caldeira-Leggett model \cite{AmB}, we can determine $H\left( \omega \right) $ through (\ref{278}). A simple calculation shows that $H\left( \omega \right) $ will be given by \begin{equation} H\left( \omega \right) =-\frac{\gamma \Omega _{c}}{\omega _{o}}\frac{1}{ \left( 1+\omega ^{2}/\Omega _{c}^{2}\right) }. \label{49} \end{equation} We can also diagonalize the Hamiltonian (\ref{301}) considering the inclusion of the counter-term $V_{R}\left( \hat{q}\right) $ (see Appendix A). The result is that all the equations (\ref{272}-\ref{281h}) will remain valid with the following substitution: whenever the function $H\left( \omega \right) $ appears it should be replaced by \begin{equation} H_{R}\left( \omega \right) =H\left( \omega \right) +\frac{\Delta \omega ^{2} }{2\omega _{o}}, \label{410} \end{equation} where the frequency shift $\Delta \omega ^{2}$ is defined as \cite{AmB} \begin{equation} \frac{\Delta \omega ^{2}}{2\omega _{o}}=\frac{1}{2\omega _{o}M} \sum_{j=1}^{N}\frac{C_{j}^{2}}{m_{j}\omega _{j}^{2}}=2\int d\omega \frac{ \left| v\left( \omega \right) \right| ^{2}}{\omega }=\frac{\gamma \Omega _{c}}{\omega _{o}}. \label{411} \end{equation} Whenever a function appears with the sub-index $_{R}$ it means that we are considering the introduction of the counter-term. The spectral function (\ref{305}) is appropriate to the description of the reservoir since we consider $\Omega _{c}\gg \omega _{o}, \gamma $. So, in order to simplify and also obtain the exact function associated to the ohmic dissipation we will take the limit $\Omega _{c}\rightarrow \infty $ in the expression for $\left| L\left( \omega \right) \right| _{R}^{2}$. To do so, first we consider the renormalized function $\left| L\left( \omega \right) \right| _{R}^{2}$ given by \begin{equation} \left| L\left( \omega \right) \right| _{R}^{2}=\frac{2\pi \omega _{o}\left| v\left( \omega \right) \right| ^{2}}{\left[ \omega ^{2}-\omega _{o}^{2}-2\omega _{o}H_{R}\left( \omega \right) \right] ^{2}+\left[ 2\pi \omega _{o}\left| v\left( \omega \right) \right| ^{2}\right] ^{2}}. \label{421} \end{equation} Once \begin{equation} \lim_{\Omega _{c}\rightarrow \infty }H_{R}\left( \omega \right) =0\text{,} \quad \text{and}\quad \lim_{\Omega _{c}\rightarrow \infty }\left| v\left( \omega \right) \right| ^{2}=\frac{\gamma \omega }{\pi \omega _{o}}\text{,} \end{equation} we obtain \begin{equation} \lim_{\Omega _{c}\rightarrow \infty }\left| L\left( \omega \right) \right| _{R}^{2}=\frac{2\gamma \omega }{\left( \omega ^{2}-\omega _{o}^{2}\right) ^{2}+\left( 2\gamma \omega \right) ^{2}}. 
\end{equation} Thus, we see that \begin{equation} \lim_{\Omega _{c}\rightarrow \infty }\left| L\left( \omega \right) \right| _{R}^{2}=M\chi ^{"}\left( \omega \right) , \end{equation} where $\chi ^{"}\left( \omega \right) $ is the imaginary part of the response function of a damped harmonic oscillator. In the limit $\gamma \ll \omega _{o}$ we can write \begin{eqnarray} \left| L\left( \omega \right) \right| _{R}^{2} &\simeq &\frac{2\omega _{o}\gamma }{\left[ 2\omega _{o}\left( \omega -\omega _{o}\right) \right] ^{2}+\left( 2\omega _{o}\gamma \right) ^{2}} \nonumber \\ &=&\frac{1}{2\omega _{o}}\frac{\gamma }{\left( \omega -\omega _{o}\right) ^{2}+\gamma ^{2}}, \end{eqnarray} that corresponds to a Lorentzian distribution of width $\gamma $. For the function $\left| L\left( \omega \right) \right| ^{2}$, without the renormalization, we have $H\left( \omega \ll \Omega _{c}\right) \simeq -\gamma \Omega _{c}/\omega _{o}$ and therefore for $\Omega _{c}\gg \omega _{o}, \gamma $ we obtain \begin{equation} \left| L\left( \omega \right) \right| ^{2}=\frac{2\gamma \omega }{\left( \omega ^{2}-\omega _{o}^{2}+2\gamma \Omega _{c}\right) ^{2}+\left( 2\gamma \omega \right) ^{2}}. \label{430} \end{equation} In this case we should have $\omega _{o}^{2}>2\gamma \Omega _{c}$, because, without the renormalization, we must have \cite{Bar2} \begin{equation} \omega _{o}^{2}>\left| \Delta \omega ^{2}\right| \end{equation} for the diagonalization to be consistent. \section{Analysis of the Evolution of \lowercase{$\hat{a}\left( t\right) $}} Now we can analyze in detail the time evolution of the operator $\hat{a}$ associated with the system. We will analyze each term of the expression for $\hat{a}\left( t\right) $ in eq.(\ref{280}). We will be interested in the relation between the degree of dissipation in our system and the importance of each one of those terms. Initially we will analyze the coefficients associated to the operators $\hat{ a}$ and $\hat{a}^{\dagger }$. The fastest and most efficient way to understand the behavior of each one of them is through graphs. The graphs in Fig.1(a) present the behavior of $\left| L\left( \omega \right) \right| _{R}^{2}$ for three values of $\gamma $: $\gamma _{1}=0.1\omega _{o}$, $ \gamma _{2}=\omega _{o}$ and $\gamma _{3}=10\omega _{o}$. We see that for $\gamma _{1} =0.1\omega _{o}$, $\left| L\left( \omega \right) \right| _{R}^{2}$ presents a narrow peak centered approximately about $\omega _{o}$ (we showed that in the limit $\gamma \ll \omega _{o}$ the function $\left| L\left( \omega \right) \right| _{R}^{2}$ tends to a Lorentzian centered at $\omega _{o}$ and with width $\gamma $). As $\gamma $ increases ($\gamma _{2}=\omega _{o}$) the function $\left| L\left( \omega \right) \right| _{R}^{2}$ broadens and becomes centered at progressively lower frequencies. For $\gamma $ still larger ($ \gamma _{3}=10\omega _{o}$) $\left| L\left( \omega \right) \right| _{R}^{2}$ narrows again, but its peak is about very low frequencies. The graphs in Fig.1(b) present the behavior of the functions $A\left( \omega \right) $, $B\left( \omega \right) $ and $C\left( \omega \right) $ that appear multiplying $\left| L\left( \omega \right) \right| _{R}^{2}$ in the different terms of the expression for $\hat{a}\left( t\right) $. 
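To make the behavior just described easy to reproduce, the following minimal numerical sketch (our own illustration, not part of the original work; the parameter values and plotting choices are arbitrary) evaluates $\left| L\left( \omega \right) \right| _{R}^{2}$ in the limit $\Omega _{c}\rightarrow \infty $, together with the products of $\left| L\left( \omega \right) \right| _{R}^{2}$ with the functions $A\left( \omega \right) $, $B\left( \omega \right) $ and $C\left( \omega \right) $ of (\ref{281}), for the three values of $\gamma $ used in Fig.~1. It also checks the consistency requirement $\hat{a}\left( 0\right) =\hat{a}$, namely that the coefficient of $\hat{a}$ in (\ref{280}) at $t=0$, $\left( 2/\pi \right) \int_{0}^{\infty }\omega \left| L\left( \omega \right) \right| _{R}^{2}\,d\omega $, equals one.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import quad

w0 = 1.0                                   # omega_o = 1 (arbitrary units)

def L2R(w, gamma):
    # |L(omega)|_R^2 of Eq. (421) in the limit Omega_c -> infinity
    return 2.0 * gamma * w / ((w**2 - w0**2)**2 + (2.0 * gamma * w)**2)

def sum_rule(gamma):
    # coefficient of the operator a in Eq. (280) at t = 0; must equal 1
    f = lambda w: 2.0 * w * L2R(w, gamma) / np.pi
    cut = 50.0 * max(gamma, w0)
    return (quad(f, 0.0, w0, limit=400)[0] + quad(f, w0, cut, limit=400)[0]
            + quad(f, cut, np.inf, limit=200)[0])

w = np.linspace(1.0e-3, 5.0, 4000)
A, B, C = 2.0 * w, (w**2 + w0**2) / w0, (w**2 - w0**2) / w0   # Eq. (281)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
for gamma in (0.1 * w0, 1.0 * w0, 10.0 * w0):
    print("gamma/w0 =", gamma, "   sum rule =", sum_rule(gamma))
    ax1.plot(w, L2R(w, gamma), label=r"$\gamma=%.1f\,\omega_o$" % gamma)
    ax2.plot(w, C * L2R(w, gamma), label=r"$C\,|L|_R^2$, $\gamma=%.1f\,\omega_o$" % gamma)
ax1.set_xlabel(r"$\omega/\omega_o$"); ax1.set_ylabel(r"$|L|_R^2$"); ax1.legend()
ax2.set_xlabel(r"$\omega/\omega_o$"); ax2.legend()
plt.tight_layout(); plt.show()
\end{verbatim}
The printed sum rule comes out numerically equal to one for all three values of $\gamma $, and the plots reproduce the qualitative features of Fig.~1 described above.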
Simultaneously observing Figs.~1(a) and 1(b) we conclude that when $\gamma \ll \omega _{o}$ the function $C\left( \omega \right) \left| L\left( \omega \right) \right| _{R}^{2}$ has a negligible amplitude compared to the functions $A\left( \omega \right) \left| L\left( \omega \right) \right| _{R}^{2}$ and $B\left( \omega \right) \left| L\left( \omega \right) \right| _{R}^{2}$, because in this case $\left| L\left( \omega \right) \right| _{R}^{2}$ is very sharp and centered at $\omega _{o}$ whereas $C\left( \omega _{o}\right) =0$. As $\gamma /\omega _{o}$ increases, $\left| L\left( \omega \right) \right| _{R}^{2}$ has its peak broadened and moved away from $\omega _{o}$. The function $C\left( \omega \right) \left| L\left( \omega \right) \right| _{R}^{2}$ becomes comparable to the others and, in the limit $\gamma \gg \omega _{o}$, it is of the same order as $B\left( \omega \right) \left| L\left( \omega \right) \right| _{R}^{2}$ whereas $A\left( \omega \right) \left| L\left( \omega \right) \right| _{R}^{2}$ becomes very small. It remains to analyze the coefficients $B_{1,R}\left( \Omega ;t\right) $ and $B_{2,R}\left( \Omega ;t\right) $ of $\hat{b}_{\Omega }$ and $\hat{b}_{\Omega }^{\dagger }$, respectively, in the expression (\ref{280}) for $\hat{a}\left( t\right) $. We know that in the limit $\gamma \ll \omega _{o}$ the function $\left| L\left( \omega \right) \right| _{R}^{2}$ tends to a Lorentzian centered at $\omega _{o}$ and with width $\gamma $. Therefore, the function $\left( \omega _{o}-\Omega \right) Z_{R}\left( \Omega \right) $ that appears in the expression (\ref{281d}) for $B_{2,R}\left( \Omega ;t\right) $ is, in this limit, negligible compared to the function $\left( \omega _{o}+\Omega \right) Z_{R}\left( \Omega \right) $ in the expression (\ref{281b}) for $B_{1,R}\left( \Omega ;t\right) $. The evaluation of $X_{R}\left( \Omega ;t\right) $ results in \begin{eqnarray} X_{R}\left( \Omega ;t\right) &=&\frac{-\pi}{\left( \Omega ^{2}-\omega ^{\prime 2}+\gamma ^{2}\right) ^{2}+\left( 2\gamma \omega ^{\prime }\right) ^{2}}\frac{d }{dt}\left\{ \left[ \frac{\Omega ^{2}-\omega ^{\prime 2}+\gamma ^{2}}{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) +2\gamma \cos \left( \omega ^{\prime }t\right) \right] e^{-\gamma t}\right\} \nonumber \\ &&\qquad \qquad \qquad \qquad \qquad \qquad \qquad-\frac{2\gamma \Omega \sin \left( \Omega t\right) }{\left( \Omega ^{2}-\omega _{o}^{2}\right) ^{2}+\left( 2\gamma \Omega \right) ^{2}}, \end{eqnarray} for $\gamma <\omega _{o}$, where $\omega ^{\prime }=\sqrt{\omega _{o}^{2}-\gamma ^{2}}$. We see that in the limit $\gamma \ll \omega _{o}$ the function $X_{R}\left( \Omega ;t\right) $ will also be very sharply peaked around $\omega _{o}$. Therefore, the function $\left( \omega _{o}-\Omega \right) X_{R}\left( \Omega ;t\right) $ in the expression (\ref{281d}) for $B_{2,R}\left( \Omega ;t\right) $ is also negligible compared to the function $\left( \omega _{o}+\Omega \right) X_{R}\left( \Omega ;t\right) $ in the expression (\ref{281b}) for $B_{1,R}\left( \Omega ;t\right) $. Similarly it can be shown that, in this limit, the function $Y_{\left( -\right) ,R}\left( \Omega ;t\right) $ is negligible in relation to the function $Y_{\left( +\right) ,R}\left( \Omega ;t\right) $. We conclude that in the limit $\gamma \ll \omega _{o}$ the coefficient $B_{2,R}\left( \Omega ;t\right) $ is negligible in comparison to the coefficient $B_{1,R}\left( \Omega ;t\right) $.
As the ratio $\gamma /\omega _{o}$ increases and the function $\left| L\left( \omega \right) \right| _{R}^{2}$ changes its shape, the coefficient $B_{2,R}\left( \Omega ;t\right) $ becomes comparable to $B_{1,R}\left( \Omega ;t\right) $. So far we have analyzed the relevance of the terms associated to $\hat{a} ^{\dagger }$ and $\hat{b}_{\Omega }^{\dagger }$ in the expression (\ref{280} ) for $\hat{a}\left( t\right) $ considering the inclusion of the counter-term $V_{R}\left( \hat{q}\right) $ in our model. We showed that these terms are negligible in the limit $\gamma \ll \omega _{o}$, but become important as the dissipation increases and the function $\left| L\left( \omega \right) \right| _{R}^{2}$ becomes broader and is no more centered at $ \omega _{o}$. Now if we had not considered the inclusion of the counter-term in the interaction Hamiltonian, we would have $\left| L\left( \omega \right) \right| ^{2}$ given by (\ref{430}) instead of $\left| L\left( \omega \right) \right| _{R}^{2}$. In this case, we see that the condition for the function $\left| L\left( \omega \right) \right| ^{2}$ to be centered very close to $\omega _{o}$ is that $2\gamma \Omega _{c}\ll \omega _{o}^{2}$ or \begin{equation} \frac{\gamma }{\omega _{o}}\ll \frac{\omega _{o}}{\Omega _{c}}\ \ \left( \ll 1\right) . \label{431} \end{equation} Therefore, the condition $\gamma /\omega _{o}\ll 1$ would not be enough for us to ignore the terms associated to $\hat{a}^{\dagger }$ and $\hat{b} _{\Omega }^{\dagger }$ in the expression for $\hat{a}\left( t\right) $. These terms can only be neglected if the condition (\ref{431}), which limits our system to a much weaker dissipation, is satisfied. We notice that a system subject to a weak dissipation ($\gamma \ll \omega _{o}$, in our case) does not guarantee that its frequency shift ($\Delta \omega ^{2}=2\gamma \Omega _{c}$) is also small. We will see later, in more detail, that for a system subject to very weak dissipation the damping coefficient $\gamma $ will be given by $\pi \left| v\left( \omega _{o}\right) \right| ^{2}$ and the frequency shift by $ H\left( \omega _{o}\right) $. Observing the expression (\ref{278}) for $ H\left( \omega \right) $ we clearly see that the relation between these functions depends on the form adopted for the function $\left| v\left( \omega \right) \right| ^{2}$. Therefore, $\pi \left| v\left( \omega _{o}\right) \right| ^{2}\ll \omega _{o}$ does not guarantee that we will have $H\left( \omega _{o}\right) \ll \omega _{o}$ (as we have seen to be the case for $\left| v\left( \omega \right) \right| ^{2}$ given by (\ref{47})), although this can happen for some functions $\left| v\left( \omega \right) \right| ^{2}$. \section{Reduction to the Model with the Rotating Wave Approximation} Now let us consider the situation in which the following conditions are satisfied \begin{eqnarray} \pi \left| v\left( \omega \right) \right| ^{2} &\ll &\omega _{o}\text{,\quad for\quad }\omega \sim \omega _{o}\text{,} \label{436} \\ H\left( \omega \right) &\ll &\omega _{o}\text{,\quad for\quad }\omega \sim \omega _{o}\text{.} \label{433} \end{eqnarray} Under these conditions the function $\left| L\left( \omega \right) \right| ^{2}$ will be a function well peaked around $\omega _{o}$. Therefore we can ignore the terms associated with $\hat{a}^{\dagger }$ and $\hat{b}_{\Omega }^{\dagger }$ in the expression for $\hat{a}\left( t\right) $. 
Even the expressions for the coefficients of $\hat{a}$ and $\hat{b}_{\Omega }$ can be approximated considering that $\left| L\left( \omega \right) \right| ^{2}$ will only be appreciable, in this case, for $\omega \simeq \omega _{o}$. We can write \begin{equation} A\left( \omega \right) \left| L\left( \omega \right) \right| ^{2}\simeq B\left( \omega \right) \left| L\left( \omega \right) \right| ^{2}\simeq 2\omega _{o}\left| L\left( \omega \right) \right| ^{2}, \end{equation} \begin{eqnarray} B_{\Omega }^{\left( 1\right) } &\simeq &v\left( \Omega \right) \left\{ \int d\omega 2\left| L\left( \omega \right) \right| ^{2}{\mathcal P}\frac{ \omega _{o}}{ \omega -\Omega }e^{-i\omega t}\right. \nonumber \\ &&\left. +2\omega _{o}\frac{\left| L\left( \Omega \right) \right| ^{2}}{ \left| v\left( \Omega \right) \right| ^{2}}\left[ \Omega -\omega _{o} -H\left( \Omega \right) \right] e^{-i\Omega t}\right\} \end{eqnarray} and finally \begin{eqnarray} \hat{a}\left( t\right) &=&\int d\omega \left| \tilde{\alpha}_{\omega }\right| ^{2}e^{-i\omega t}\hat{a} \nonumber \\ &&+\int d\Omega v\left( \Omega \right) \left\{ \int d\omega \left| \tilde{ \alpha}_{\omega }\right| ^{2}{\mathcal P}\frac{1}{\omega -\Omega }e^{-i\omega t}\right. \label{438} \\ &&\quad \left. +\frac{\left| \tilde{\alpha}_{\Omega }\right| ^{2}}{\left| v\left( \Omega \right) \right| ^{2}}\left[ \Omega -\omega _{o}-H\left( \Omega \right) \right] e^{-i\Omega t}\right\} \hat{b}_{\Omega }, \nonumber \end{eqnarray} where the function $\left| \tilde{\alpha}_{\omega }\right| ^{2}$ comes from the approximation of $\left| L\left( \omega \right) \right| ^{2}$ considering (\ref{436} - \ref{433}), \begin{equation} \frac{2\omega _{o}}{\pi }\left| L\left( \omega \right) \right| ^{2}\simeq \frac{\left| v\left( \omega \right) \right| ^{2}}{\left[ \omega -\omega _{o}-H\left( \omega \right) \right] ^{2}+\left[ \pi \left| v\left( \omega \right) \right| ^{2}\right] ^{2}}=\left| \tilde{\alpha}_{\omega }\right| ^{2}. \label{432} \end{equation} Now let us compare (\ref{438}-\ref{432}) with the expressions (\ref{247}) and (\ref{255}), previously obtained in the RWA. The only difference between these expressions is given by the presence of $H\left( \omega \right) $ instead of $F\left( \omega \right) $. Once $H\left( \omega \right) -F\left( \omega \right) =-G\left( \omega \right) $ we would have, for $\omega \sim \omega _{o}$, $H\left( \omega \right) \simeq F\left( \omega \right) $ if $ G\left( \omega \right) \ll F\left( \omega \right) $. There can be functions $ \left| v\left( \omega \right) \right| ^{2}$ that satisfy this requirement. However, most of the physically reasonable functions $\left| v\left( \omega \right) \right| ^{2}$ do not; for example, if $\left| v\left( \omega \right) \right| ^{2}$ is given by (\ref{47}) we have $G\left( \omega \right) /F\left( \omega \right) \simeq -1$ for $\omega \sim \omega _{o}$. In this case, $H\left( \omega _{o}\right) \simeq 2F\left( \omega_{o}\right) $ yielding twice the frequency shift given by the model within the RWA \cite{notabenne}. The same relation is found whenever $\left| v\left( \omega \right) \right| ^{2}$ extends to frequencies much larger than $\omega _{o}$ with nonnegligible values, for then \begin{equation} H\left( \omega _{o}\right) \simeq -2{\mathcal P}\int \frac{\left| v\left( \Omega \right) \right| ^{2}}{\Omega }d\Omega \simeq 2F\left( \omega _{o}\right) . \end{equation} This larger frequency shift can be easily understood through a perturbative analysis. 
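Before turning to that perturbative argument, the relation $G\left( \omega _{o}\right) /F\left( \omega _{o}\right) \simeq -1$, and hence $H\left( \omega _{o}\right) \simeq 2F\left( \omega _{o}\right) \simeq -\gamma \Omega _{c}/\omega _{o}$, for the Drude coupling (\ref{47}) can also be checked directly. The short numerical sketch below is our own illustration (the values $\omega _{o}=1$, $\gamma =0.01$ and $\Omega _{c}=100$ are arbitrary); it evaluates the principal-value integral (\ref{249}) and the difference (\ref{278}) and compares the result with (\ref{49}).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

w0, gamma, Wc = 1.0, 0.01, 100.0        # illustrative values, Omega_c >> omega_o
Lam = 50.0 * Wc                         # numerical cutoff of the frequency integrals

def v2(W):
    # Drude coupling |v(omega)|^2 of Eq. (47)
    return gamma * W / (np.pi * w0 * (1.0 + (W / Wc)**2))

def F(w):
    # Eq. (249): F(w) = P int_0^inf |v(W)|^2 / (w - W) dW
    val, _ = quad(v2, 0.0, Lam, weight='cauchy', wvar=w, limit=400)
    return -val                          # quad returns P int |v(W)|^2 / (W - w) dW

def G(w):
    val, _ = quad(lambda W: v2(W) / (w + W), 0.0, Lam, limit=400)
    return val

Fw, Gw = F(w0), G(w0)
Hw = Fw - Gw                             # Eq. (278)
print("F(w0) =", Fw, "  G(w0) =", Gw, "  G/F =", Gw / Fw)
print("H(w0) =", Hw, "  vs  -gamma*Wc/w0 =", -gamma * Wc / w0, "  and  2F(w0) =", 2.0 * Fw)
\end{verbatim}
With these values one indeed finds $G/F\simeq -1$ and $H\left( \omega _{o}\right) \simeq -\gamma \Omega _{c}/\omega _{o}\simeq 2F\left( \omega _{o}\right) $, up to small corrections controlled by $\omega _{o}/\Omega _{c}$.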
Let us consider a system described by (\ref{0}), (\ref{1}) and having $\hat{H}_{Int}$ within the RWA ( \ref{3}). It can be shown that, in second order, the perturbed levels of the oscillator remain equidistant with an apparent frequency $\omega _{o}+\Delta ^{RWA}\omega $, where \cite{Coh} \begin{equation} \Delta ^{RWA}\omega ={\mathcal P}\sum_{j}\frac{\left| k_{j}\right| ^{2}}{ \omega _{o}-\omega _{j}}\text{.} \label{d2} \end{equation} Taking the continuous limit and using (\ref{234}) we see that this expression is nothing but $F\left( \omega _{o}\right) $ which really represents the frequency shift in the weak dissipation limit. Now it is easy to show that if we consider $\hat{H}_{Int}$ given by (\ref{2}) without the RWA, we have in second order in the perturbation, \begin{equation} \Delta \omega ={\mathcal P}\sum_{j}\frac{\left| k_{j}\right| ^{2}}{\omega _{o}-\omega _{j}}-{\mathcal P}\sum_{j}\frac{\left| k_{j}\right| ^{2}}{\omega _{o}+\omega _{j}}. \end{equation} This expression, in the continuum limit, is nothing but $H\left( \omega _{o}\right) $. Therefore, we see that the substitution of $F\left( \omega _{o}\right) $ by $H\left( \omega _{o}\right) $ could already be foreseen by a simple perturbative theory. The same perturbative analysis can be used to understand why the counter-rotating term is not important in the calculation of the decay rate of the system in the weak dissipation limit. In first order, the decay rate of the system is given by the Fermi's golden rule for which only the terms of $\hat{H}_{Int}$ that directly conserve energy in the transition are relevant. This is not done by the counter-rotating terms. In fact, it is only done by the rotating terms that create or destroy energy quanta such that $\omega _{j}=\omega _{o}$. This is the reason for the dependence only on $\left| v\left( \omega _{o}\right) \right| ^{2}$ that appears in the very weak dissipation calculations. In a model that takes the counter-term into account we automatically have $H_{R}\left( \omega \right) =0$ and the expression (\ref{438}) can be substituted by \begin{eqnarray} \hat{a}\left( t\right) &=&\int d\omega \left| \alpha _{\omega }\right| _{R}^{2}e^{-i\omega t}\hat{a} \nonumber \\ &&+\int d\Omega v\left( \Omega \right) \left[ \int d\omega \left| \alpha _{\omega }\right| _{R}^{2}{\mathcal P}\frac{1}{\omega -\Omega }e^{-i\omega t}\right. \\ &&\qquad \left. +\frac{\left| \alpha _{\Omega }\right| _{R}^{2}}{\left| v\left( \Omega \right) \right| ^{2}}\left( \Omega -\omega _{o}\right) e^{-i\Omega t}\right] \hat{b}_{\Omega }, \nonumber \end{eqnarray} where \begin{equation} \frac{2\omega _{o}}{\pi }\left| L\left( \omega \right) \right| _{R}^{2}\simeq \frac{\left| v\left( \omega \right) \right| ^{2}}{\left( \omega -\omega _{o}\right) ^{2}+\left[ \pi \left| v\left( \omega \right) \right| ^{2}\right] ^{2}}=\left| \alpha _{\omega }\right| _{R}^{2}. \end{equation} Therefore, the RWA leads us to the correct results, with regard to the decay rate of the system $\left( \text{related to }\left| \alpha _{\omega }\right| _{R}^{2}\right) $, {\em if and only if the condition of weak dissipation (\ref {436}) is satisfied}. Regarding the frequency shift $\left( \text{associated to }F\left( \omega _{o}\right) \right) $, we see that its agreement with that given in the limit of weak dissipation, in a model without the counter-term, strongly depends on the function $\left| v\left( \omega \right) \right| ^{2}$ adopted. 
For functions $\left| v\left( \omega \right) \right| ^{2}$ that extend to frequencies much larger than $\omega _{o}$ we have twice the shift foreseen in the RWA. Besides, it is also necessary that the condition (\ref{433}) be satisfied in order to guarantee that this shift is much smaller than $\omega _{o}$ (and we can neglect the terms in $ \hat{a}^{\dagger }$ and $\hat{b}_{\Omega }^{\dagger }$ in the expression for $\hat{a}\left( t\right) $). In the case of ohmic dissipation the conditions (\ref{436}-\ref{433}) are reduced to \begin{equation} \gamma \ll \omega _{o}, \label{433b} \end{equation} once in this case \[ \pi \left| v\left( \omega _{o}\right) \right| ^{2}=\gamma \text{\quad and\quad }H_{R}\left( \omega \right) =0\text{,} \] in the limit $\Omega _{c}\rightarrow \infty $. \section{Evolution of a Coherent State} We showed that if our system satisfies the conditions of weak dissipation ( \ref{436}) and small frequency shift (\ref{433}) the evolution of the operator $\hat{a}\left( t\right) $ can be reduced to the expression given by (\ref{438}). Now we will suppose that initially our system is in a coherent state $\left| \alpha \right\rangle $ and that the reservoir is in the vacuum state $\left| 0\right\rangle $ corresponding to a reservoir at zero temperature. In this case we have \begin{equation} \hat{a}\left( t\right) \left| \alpha ,0\right\rangle =\int d\omega \left| \tilde{\alpha}_{\omega }\right| ^{2}e^{-i\omega t}\alpha \left| \alpha ,0\right\rangle . \label{257} \end{equation} Therefore, in this particular case, a coherent state stays as such during its evolution with eigenvalue $\alpha \left( t\right) $ given by \begin{equation} \alpha \left( t\right) =\alpha \int d\omega \left| \tilde{\alpha}_{\omega }\right| ^{2}e^{-i\omega t}. \label{439b} \end{equation} We can also calculate the evolution of the operator $\hat{b}_{\Omega }\left( t\right) $ of the reservoir. Then in the case of weak dissipation and small frequency shift we can show that the modes of the reservoir also evolve from the vacuum state to coherent states with eigenvalues given by \begin{eqnarray} \beta _{\Omega }^{\left( {\mathcal R}\right) }\left( t\right) &=&\alpha \left[ {\mathcal P} \int d\omega \frac{\left| \tilde{\alpha}_{\omega }\right| ^{2}}{\omega -\Omega }e^{-i\omega t}\right. \\ &&\qquad \qquad \left. +\frac{\Omega -\omega _{o}-H\left( \Omega \right) }{ \left| v\left( \Omega \right)\right| ^{2}}\left| \tilde{\alpha}_{\Omega }\right| ^{2}e^{-i\Omega t}\right] v^{*}\left( \Omega \right). \nonumber \end{eqnarray} Still under the conditions (\ref{436}-\ref{433}) we can further approximate $\left| \tilde{\alpha}_{\omega }\right| ^{2}$ by \begin{equation} \left| \tilde{\alpha}_{\omega }\right| ^{2}\simeq \frac{\pi \left| v\left( \omega _{o}\right) \right| ^{2}}{\left[ \omega -\omega _{o}-H\left( \omega _{o}\right) \right] ^{2}+\left[ \pi \left| v\left( \omega _{o}\right) \right| ^{2}\right] ^{2}} \label{435} \end{equation} and also extend the lower limit of the frequency integral in (\ref{439b}) to $-\infty $ introducing a negligible error. Then we have \begin{equation} \alpha \left( t\right) =\alpha e^{-i\left[ \omega _{o}+\Delta \omega \right] t}e^{-\pi \left| v\left( \omega _{o}\right) \right| ^{2}t},\ \text{where}\ \Delta \omega =H\left( \omega _{o}\right) \text{.} \label{441} \end{equation} In the case of ohmic dissipation with the inclusion of the counter-term we have \begin{equation} \alpha \left( t\right) =\alpha e^{-i\omega _{o}t}e^{-\gamma t}. 
\label{442} \end{equation} Now it is clear that when (\ref{436}-\ref{433}) are not satisfied making the terms associated to the operators $\hat{a}^{\dagger }$ and $\hat{ b}_{\Omega }^{\dagger }$ in the expression (\ref{280}) for $\hat{a}\left( t\right) $ no longer neglegible, $\left| \alpha ,0\right\rangle $ will not be an eigenstate of $ \hat{a}\left( t\right) $ because $\left| \alpha \right\rangle $ and $\left| 0\right\rangle $ are not eigenstates of $\hat{a}^{\dagger }$ and $\hat{b} _{\Omega }^{\dagger }$, respectively. Therefore, we see that an initial coherent state $\left| \alpha \right\rangle $, interacting with a reservoir even at temperature $T=0$, will not remain a coherent state during its decay unless we have a system subject to very weak dissipation. The previous works that emphasized the existence of dissipative coherent states \cite{Agar,Wl1,Wl2,Ser}, in models described by the $\hat{H}_{Int}$ (\ref{2}), were based on master equations obtained through a method that is appropriate only in the limit of weak dissipation. However, we saw that in this limit the corresponding model (\ref{2}) is reduced to the RWA model (\ref{3}) that really preserves the coherent states. We believe that the implicit assumption of weak dissipation is the reason why these authors have obtained the dissipative coherent states. Our result agrees with the one presented in \cite{Zur0} where it was shown that the model (\ref{301}) presents the coherent states as the initial states of the system that produce the least amount of entropy as time evolves. \section{Evolution of the Center of a Wave Packet} We can also study the evolution of the operator $\hat{q}$ associated to the position of the particle. Once the operators $\hat{q}$ and $\hat{p}$ are related to the operator $\hat{a}$ by (\ref{41}), we obtain from (\ref{280}) the following expression for $\hat{q}\left( t\right) $: \begin{equation} \hat{q}\left( t\right) ={\mathcal G}_{\mathcal S}\left( \hat{q},\hat{p};t\right) + {\mathcal F}_{\mathcal R}\left( \hat{q}_{\Omega },\hat{p}_{\Omega };t\right) , \label{445} \end{equation} where \begin{eqnarray} {\mathcal G}_{\mathcal S}\left( \hat{q},\hat{p};t\right) &=&\hat{q}\frac{d}{dt} {\mathcal L}\left( t\right) +\frac{\hat{p}}{M}{\mathcal L}\left( t\right) , \\ {\mathcal F}_{\mathcal R}\left( \hat{q}_{\Omega },\hat{p}_{\Omega };t\right) &=&2\omega _{o}\int \frac{d\Omega }{\pi }v\left( \Omega \right) \sqrt{\frac{ m_{\Omega }\Omega }{M\omega _{o}}}\left\{ \left[ \frac{d}{dt}W_{R}\left( \Omega ,t\right) +Z_{R}\left( \Omega \right) \cos \left( \Omega t\right) \right] \hat{q}_{\Omega }\right. \nonumber \\ &&\qquad \qquad \qquad \left. +\left[ \Omega W_{R}\left( \Omega ,t\right) +Z_{R}\left( \Omega \right) \sin \left( \Omega t\right) \right] \right\} \frac{\hat{p}_{\Omega }}{m_{\Omega }\Omega }, \label{531} \\ {\mathcal L}\left( t\right) &=&2\int \frac{d\omega }{\pi }\left| L\left( \omega \right) \right| _{R}^{2}\sin \left( \omega t\right) , \\ W_{R}\left( \Omega ,t\right) &=&{\mathcal P}\int d\omega \frac{2\left| L\left( \omega \right) \right| _{R}^{2}}{\omega ^{2}-\Omega ^{2}}\sin \left( \omega t\right) , \end{eqnarray} with $Z\left( \Omega \right) $ defined in (\ref{281g}). 
Now we suppose that the initial density operator of our global system can be written in the factorizable form \begin{equation} \rho _{T}=\rho _{\mathcal S}\otimes \rho _{\mathcal R}, \label{501} \end{equation} where $\rho _{\mathcal S}$ and $\rho _{R}$ are, respectively, the density operators of the system and reservoir when they are isolated. Then we have \begin{eqnarray} \left\langle \hat{q}\left( t\right) \right\rangle &=&Tr_{\mathcal S}\left[ {\mathcal G} _{\mathcal S}\left( \hat{q},\hat{p};t\right) \rho _{\mathcal S}\right] +Tr_{\mathcal R}\left[ {\mathcal F} _{\mathcal R}\left( \hat{q}_{\Omega },\hat{p}_{\Omega };t\right) \rho _{\mathcal R}\right] \nonumber \\ &=&{\mathcal G}_{\mathcal S}\left( \left\langle \hat{q}\right\rangle _{\mathcal S},\left\langle \hat{p}\right\rangle _{\mathcal S};t\right) +{\mathcal F}_{\mathcal R}\left( \left\langle \hat{q }_{\Omega }\right\rangle _{\mathcal R},\left\langle \hat{p}_{\Omega }\right\rangle _{\mathcal R};t\right) . \label{502} \end{eqnarray} Assuming that the initial state of the reservoir is such that \begin{equation} \left\langle \hat{q}_{j}\right\rangle _{\mathcal R}=\left\langle \hat{p} _{j}\right\rangle _{\mathcal R}=0, \label{503} \end{equation} which in the continuum limit corresponds to $\left\langle \hat{q}_{\Omega }\right\rangle _{\mathcal R}=\left\langle \hat{p}_{\Omega }\right\rangle _{\mathcal R}=0$, we obtain the following expression for $\left\langle \hat{q} \left( t\right) \right\rangle $: \begin{equation} \left\langle \hat{q}\left( t\right) \right\rangle =\left\langle \hat{q} \right\rangle _{\mathcal S}\frac{d}{dt}{\mathcal L}\left( t\right) +\frac{\left\langle \hat{p}\right\rangle _{\mathcal S}}{M}{\mathcal L}\left( t\right) , \label{447} \end{equation} where \begin{equation} {\mathcal L}\left( t\right) =\left\{ \begin{array}{l} \frac{1}{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) e^{-\gamma t}, \\ t.e^{-\gamma t}, \\ \frac{1}{\gamma _{2}-\gamma _{1}}e^{-\gamma _{1}t}+\frac{1}{\gamma _{1}-\gamma _{2}}e^{-\gamma _{2}t}, \end{array} \begin{array}{l} \text{for\qquad }\gamma <\omega _{o}, \\ \text{for\qquad }\gamma =\omega _{o}, \\ \text{for\qquad }\gamma >\omega _{o}, \end{array} \right. \label{506} \end{equation} with $\omega ^{\prime }=\sqrt{\omega _{o}^{2}-\gamma ^{2}}$ and $\gamma _{1,2}=\gamma \pm \sqrt{\gamma ^{2}-\omega _{o}^{2}}$. The expression (\ref {447}) was also obtained by Grabert and collaborators \cite{Grab}, by the method of functional integration. They affirmed that it would correspond to the classical trajectory of a damped harmonic oscillator. However, it is easy to see that this is not true. If the initial state of the system presents an initial average momentum $\left\langle \hat{p}\right\rangle _{\mathcal S}=p_{o}$ and an initial average position $\left\langle \hat{q}\right\rangle _{\mathcal S}=q_{o}$, then according to (\ref{447}) $ \left\langle \hat{q}\left( t\right) \right\rangle $ would evolve as \begin{eqnarray} \left\langle \hat{q}\left( t\right) \right\rangle &=&q_{o}\left[ \cos \left( \omega ^{\prime }t\right) -\frac{\gamma }{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) \right] e^{-\gamma t} \label{520} \\ &&\qquad \qquad \quad \qquad +\frac{p_{o}}{M\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) e^{-\gamma t}, \nonumber \end{eqnarray} for $\gamma <\omega _{o}$. 
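For completeness, we spell out the elementary step (this intermediate computation is ours) that leads from (\ref{447}) and (\ref{506}) to (\ref{520}) when $\gamma <\omega _{o}$:
\begin{equation*}
\frac{d}{dt}{\mathcal L}\left( t\right) =\frac{d}{dt}\left[ \frac{1}{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) e^{-\gamma t}\right] =\left[ \cos \left( \omega ^{\prime }t\right) -\frac{\gamma }{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) \right] e^{-\gamma t},
\end{equation*}
so that (\ref{447}) with $\left\langle \hat{q}\right\rangle _{\mathcal S}=q_{o}$ and $\left\langle \hat{p}\right\rangle _{\mathcal S}=p_{o}$ gives precisely (\ref{520}); note the minus sign in front of $\left( \gamma /\omega ^{\prime }\right) \sin \left( \omega ^{\prime }t\right) $, which is the source of the discrepancy discussed next.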
However the classical trajectory is known to be \begin{eqnarray} q\left( t\right) _{clas} &=&q_{o}\left[ \cos \left( \omega ^{\prime }t\right) +\frac{\gamma }{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) \right] e^{-\gamma t} \label{521} \\ &&\qquad \qquad \quad \qquad +\frac{p_{o}}{M\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) e^{-\gamma t}. \nonumber \end{eqnarray} Thus, we see that there is a phase difference between (\ref{520}) and (\ref {521}) if the oscillator has an initial displacement $q_{o}$. Let us now suppose that the initial state of the reservoir is such that \begin{equation} \left\langle \hat{q}_{j}\right\rangle _{\mathcal R}=\frac{C_{j}}{m_{j}\omega _{j}^{2}} \left\langle \hat{q}\right\rangle _{\mathcal S},\qquad \left\langle \hat{p} _{j}\right\rangle _{\mathcal R}=0. \label{523} \end{equation} We can write the expression (\ref{531}) in the discrete limit, replace (\ref {503}) by (\ref{523}) and return to the continuum limit. Then we obtain (see Appendix B) the following expression for $\left\langle \hat{q}\left( t\right) \right\rangle $: \begin{equation} \left\langle \hat{q}\left( t\right) \right\rangle =\left\langle \hat{q} \right\rangle _{\mathcal S}\left[ \frac{d}{dt}{\mathcal L}\left( t\right) +2\gamma {\mathcal L}\left( t\right) \right] +\frac{\left\langle \hat{p}\right\rangle _{\mathcal S}}{M}{\mathcal L}\left( t\right) . \label{524} \end{equation} In this case if the initial state of the system presents an initial average momentum $\left\langle \hat{p}\right\rangle _{\mathcal S}=p_{o}$ and an initial average position $\left\langle \hat{q}\right\rangle _{\mathcal S}=q_{o}$, (\ref{524}) becomes \begin{eqnarray} \left\langle \hat{q}\left( t\right) \right\rangle &=&q_{o}\left[ \cos \left( \omega ^{\prime }t\right) +\frac{\gamma }{\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) \right] e^{-\gamma t} \label{525} \\ &&\quad \qquad \qquad \qquad +\frac{p_{o}}{M\omega ^{\prime }}\sin \left( \omega ^{\prime }t\right) e^{-\gamma t}, \nonumber \end{eqnarray} for $\gamma <\omega _{o}$, that corresponds to the correct classical trajectory. Thus, we see that the classical evolution is not obtained with the initial condition (\ref{503}) but with the initial condition (\ref{523}). We can understand why this happens through the classical analysis of the model (\ref{305}) presented in the next section. \section{Classical Analysis and Discussion } In this section we will accomplish a classical analysis of the model used. Our objective is to obtain a physical intuition on the effect that causes the difference between the equations (\ref{520}) and (\ref{525}) and then on the meaning of the initial condition (\ref{523}). This procedure can be justified by the equivalence of the classical and quantum dynamics of this model \cite{Hab} . The Hamiltonian (\ref{301}) can be written as \cite{Hak}: \begin{equation} H=\frac{p^{2}}{2M}+V\left( q\right) +\sum_{j}\left[ \frac{p_{j}^{2}}{2m_{j}}+ \frac{m_{j}\omega _{j}^{2}}{2}\left( q_{j}-\frac{C_{j}}{m_{j}\omega _{j}^{2}} q\right) ^{2}\right] . \label{306} \end{equation} The equations of motion of this system are given by \begin{eqnarray} M\ddot{q}\left( t\right) +V^{\prime }\left( q\right) &=&\sum_{j}C_{j}\left[ q_{j}\left( t\right) -\frac{C_{j}}{m_{j}\omega _{j}^{2}}q\left( t\right) \right] , \label{307} \\ m_{j}\ddot{q}_{j}\left( t\right) +m_{j}\omega _{j}^{2}q_{j}\left( t\right) &=&C_{j}q\left( t\right) . 
\label{308} \end{eqnarray} If $q_{j}\left( 0\right) $ and $\dot{q}_{j}\left( 0\right) $ are the initial conditions the solution of the homogeneous part of (\ref{308}) will be \begin{equation} q_{j}^{H}\left( t\right) =q_{j}\left( 0\right) \cos \left( \omega _{j}t\right) +\frac{\dot{q}_{j}\left( 0\right) }{\omega _{j}}\sin \left( \omega _{j}t\right) . \label{309} \end{equation} The particular solution, considering the presence of the force $C_{j}q\left( t\right) $, can be obtained by taking the Fourier transform of (\ref{308}). Then we have \begin{equation} \begin{array}{l} q_{j}^{P}\left( t\right) =\frac{C_{j}}{m_{j}\omega _{j}}\int_{0}^{t}dt^{ \prime }q\left( t^{\prime }\right) \sin \left[ \omega _{j}\left( t-t^{\prime }\right) \right] \\ \qquad \,\,=\frac{C_{j}}{m_{j}\omega _{j}^{2}}\left\{ q\left( t\right) -q\left( 0\right) \cos \left( \omega _{j}t\right) \right. \\ \qquad \qquad \qquad \qquad \left. -\int_{0}^{t}dt^{\prime }\dot{q}\left( t^{\prime }\right) \cos \left[ \omega _{j}\left( t-t^{\prime }\right) \right] \right\} \end{array} \label{310} \end{equation} Using the definition of the spectral function $J\left( \omega \right) $ (\ref{303b}) and (\ref{304}) it can be shown that in the limit $\Omega _{c}\rightarrow \infty $ we have \begin{equation} \sum_{j}\frac{C_{j}^{2}}{m_{j}\omega _{j}^{2}}\int_{0}^{t}dt^{\prime }\cos \left[ \omega _{j}\left( t-t^{\prime }\right) \right]\dot{q}\left( t^{\prime }\right) =2M\gamma \dot{q}\left( t\right) . \label{312} \end{equation} Therefore, the general solution of (\ref{308}), $q_{j}\left( t\right) =q_{j}^{H}\left( t\right) +q_{j}^{P}\left( t\right) $, when substituted in ( \ref{307}) results in the following Langevin equation \[ M\ddot{q}\left( t\right) +V^{\prime }\left( q\right) +2M\gamma \dot{q}\left( t\right) =F\left( t\right) , \] where \begin{equation} F\left( t\right) =\sum_{j}C_{j}\tilde{q}_{j}\left( 0\right) \cos \left( \omega _{j}t\right) +\sum_{j}\frac{C_{j}}{\omega _{j}}\dot{q}_{j}\left( 0\right) \sin \left( \omega _{j}t\right) , \label{313} \end{equation} is the fluctuating force and we have redefined the position of the oscillators of the bath \cite{nota} \begin{equation} \tilde{q}_{j}\left( 0\right) =q_{j}\left( 0\right) -\frac{C_{j}}{m_{j}\omega _{j}^{2}}q\left( 0\right) . \label{314} \end{equation} Supposing that the bath is initially in thermodynamic equilibrium in relation to the coordinates $\tilde{q}_{j}\left( 0\right) $ we have, in the classical limit, \begin{eqnarray} \left\langle \tilde{q}_{j}\left( 0\right) \right\rangle &=&\left\langle \dot{q}_{j}\left( 0\right) \right\rangle =\left\langle \tilde{q}_{j}\left( 0\right) \dot{q}_{j^{\prime }}\left( 0\right) \right\rangle =0. \label{332} \\ \left\langle \tilde{q}_{j}\left( 0\right) \tilde{q}_{j^{\prime }}\left( 0\right) \right\rangle &=&\frac{kT}{m_{j}\omega _{j}^{2}}\delta _{jj^{\prime }},\ \left\langle \dot{q}_{j}\left( 0\right) \dot{q}_{j^{\prime }}\left( 0\right) \right\rangle =\frac{kT}{m_{j}}\delta _{jj^{\prime }}. \label{333} \end{eqnarray} The physical meaning of this initial condition written in terms of the relative coordinates $\tilde{q}_{j} $ has alread been analized by Zwanzig \cite{Zwan} some time ago. Using (\ref{332}-\ref{333}) and after some algebraic manipulations it is shown that $\left\langle F\left( t\right) \right\rangle =0$ and $ \left\langle F\left( t\right) F\left( t^{\prime }\right) \right\rangle \simeq 4M\gamma kT\delta \left( t-t^{\prime }\right) $ which correspond to the expressions that characterize the Brownian motion. 
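The role played by the bath initialization can be made concrete with a small classical simulation of the finite-$N$ model (\ref{307})--(\ref{308}). The sketch below is our own illustration (the parameters, the number of modes $N$ and the frequency grid are arbitrary choices, and the discretized ohmic bath only mimics true dissipation up to its recurrence time $\sim 2\pi N/\Omega _{c}$); it integrates the equations of motion once with the bath relaxed around the initial position of the particle, $q_{j}\left( 0\right) =C_{j}q\left( 0\right) /m_{j}\omega _{j}^{2}$ (the classical analogue of (\ref{523})), and once with $q_{j}\left( 0\right) =0$ (the classical analogue of (\ref{503})), comparing both trajectories with the damped motions (\ref{521}) and (\ref{520}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M, w0, gamma = 1.0, 1.0, 0.05           # illustrative parameters (not from the paper)
Wc, N, q0 = 30.0, 1200, 1.0             # cutoff, number of bath modes, initial displacement
wj = np.linspace(Wc / N, Wc, N)         # bath frequencies, uniform spacing
dw = wj[1] - wj[0]
mj = np.ones(N)
# discretization of J(w) = 2*M*gamma*w, Eq. (304):  C_j^2 = (2/pi) J(w_j) m_j w_j dw
Cj = np.sqrt((2.0 / np.pi) * 2.0 * M * gamma * wj * mj * wj * dw)

def rhs(t, y):
    q, p = y[0], y[1]
    qj, pj = y[2:2 + N], y[2 + N:]
    dp = -M * w0**2 * q + np.sum(Cj * (qj - Cj * q / (mj * wj**2)))   # Eq. (307)
    dpj = -mj * wj**2 * qj + Cj * q                                   # Eq. (308)
    return np.concatenate(([p / M, dp], pj / mj, dpj))

def run(relaxed_bath, T=40.0):
    qj0 = Cj * q0 / (mj * wj**2) if relaxed_bath else np.zeros(N)
    y0 = np.concatenate(([q0, 0.0], qj0, np.zeros(N)))
    t = np.linspace(0.0, T, 800)
    sol = solve_ivp(rhs, (0.0, T), y0, t_eval=t, rtol=1e-7, atol=1e-9)
    return t, sol.y[0]

t, q_relaxed = run(True)                # analogue of (523): expected to follow Eq. (521)
_, q_bare = run(False)                  # analogue of (503): dephased, follows Eq. (520)
wp = np.sqrt(w0**2 - gamma**2)
q_521 = q0 * (np.cos(wp * t) + gamma / wp * np.sin(wp * t)) * np.exp(-gamma * t)
q_520 = q0 * (np.cos(wp * t) - gamma / wp * np.sin(wp * t)) * np.exp(-gamma * t)
print("max |q_relaxed - (521)| =", np.max(np.abs(q_relaxed - q_521)))
print("max |q_bare    - (520)| =", np.max(np.abs(q_bare - q_520)))
\end{verbatim}
Within the accuracy of the discretized bath, the run with the relaxed initial condition follows the classical trajectory (\ref{521}), while the run with $q_{j}\left( 0\right) =0$ receives the initial kick discussed next and follows the dephased trajectory (\ref{520}).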
On the other hand, if we had adopted the initial condition \begin{equation} \left\langle q_{j}\left( 0\right) \right\rangle =\left\langle \dot{q} _{j}\left( 0\right) \right\rangle =\frac{1}{m_{j}}\left\langle p_{j}\left( 0\right) \right\rangle =0, \label{511} \end{equation} we would have \begin{eqnarray} \left\langle F\left( t\right) \right\rangle &=&-q\left( 0\right)\sum_{j}\frac{C_{j}^{2} }{m_{j}\omega _{j}^{2}}\cos \left( \omega _{j}t\right) \nonumber \\ &=&-4M\gamma q\left( 0\right)\frac{1}{\pi }\int_{0}^{\Omega _{c}}d\omega \cos \omega t=-4M\gamma q\left( 0\right)\delta \left( t\right) , \label{512} \end{eqnarray} where we have used (\ref{303b}), (\ref{304}) and taken the limit $\Omega _{c}\rightarrow \infty $. Therefore, we would not have $\left\langle F\left( t\right) \right\rangle =0$, but rather the presence of a delta force at $t=0$. Physically, what happens is that if the oscillators of the bath are not ``appropriately'' distributed around the particle when it is inserted in the bath (as in the initial condition (\ref{511})), these oscillators will ``pull'' the particle until they reach this ``appropriate'' distribution. This force will act on the particle during a time interval of the order of $1/\Omega _{c}$. Therefore, in the limit $\Omega _{c}\rightarrow \infty $ we will have a delta force that will cause a phase difference in the evolution of the system. This phase difference is the difference between (\ref{520}) and (\ref{521}), which is corrected in (\ref{525}) by the adoption of the initial condition (\ref{523}) (quantum analogue of (\ref{332})) instead of (\ref{503}) (quantum analogue of (\ref{511})). As far as we know, the need to use the initial condition (\ref{523}) in place of (\ref{503}) in the quantum treatment of this model has not been noticed in previous works. In Ref.~\cite{AmB} the authors make some approximations which are equivalent to regarding the initial time as $t=0^{+}$ $\left( t\sim 1/\Omega _{c}\right) $. So the initial conditions are established at this instant, although the coupling between particle and bath is switched on at $t=0$ and gives rise to a delta-type force at that moment. The inclusion of $t=0$ in propagator methods must be accompanied by the above-mentioned modification of the factorizable initial condition. However, it must be emphasized that we are not addressing here the question of the generalized initial condition \cite{Grab,Cris}. Actually, the point we have raised is clearly responsible for the disagreement between the expressions for $\left\langle \hat{q}\left( t\right) \right\rangle $ found in these references. In Ref.~\cite{Grab} the authors reproduced the dephased $\left\langle \hat{q}\left( t\right) \right\rangle $ (cf.\ eq.~(\ref{447}) above), whereas in Ref.~\cite{Cris} this time evolution is the correct one, as in (\ref{525}). The origin of the discrepancy is the use of $t=0$ or $t=0^{+}$ as the initial instant together with the factorizable initial condition. We would like to take advantage of this opportunity to correct a mistake that was made in Ref.~\cite{Cris}, of which one of us is a co-author. That article considers an initial condition of the system in which the bath of oscillators is in thermodynamic equilibrium with the particle at the position where it is placed in the bath.
In this case one obtains mean values of the position $\left\langle \hat{q}\left( t\right) \right\rangle $ and momentum $\left\langle \hat{p}\left( t\right) \right\rangle $ which depend on the temperature of the reservoir and do not exactly coincide with their classical counterparts. This disagreement was justified within a classical analysis of the model. In that analysis it was claimed that the classical initial condition equivalent to the proposed quantum initial state, which corresponds exactly to $\left\langle \tilde{q}_{j}\left( 0\right) \right\rangle =0$, would imply a classical solution of the model different from the trajectories of a damped harmonic oscillator. We saw in the present work that this is not true and therefore this argument cannot be used. We believe that the origin of the disagreement when adopting a non-factorizable initial condition is the impossibility of describing the evolution of the system through an independent sum of functions of the system and reservoir variables as in (\ref{502}). The quantum effects of the correlation between the variables of the system and reservoir prevent a direct comparison of the quantum mean values with the values obtained through the classical analysis of the model. Accordingly, it can be shown that the discrepancy vanishes in the classical limit $\left( kT\gg \hbar \omega _{o}\right) $. After we had made the above analysis we became aware that in previous works \cite{Pab,Zur} the authors had also noticed the existence of initial kicks and jolts in this system when the initial condition (\ref{501},\ref{503}) is used. In both works the existence of an initial kick, given by (\ref{512}) in the limit $ \Omega _{c}\rightarrow \infty $, is noticed (eqs. (3.2) and (45), respectively). However, the existence of this initial transient is considered a characteristic of the model to be taken into account. In our analysis we see that, although the existence of the kick given by (\ref{512}) is a real characteristic of the model when it is subject to the initial condition (\ref{501},\ref{503}), it is an undesirable feature that should be corrected. Fortunately, this correction can be made even with an improved factorizable initial condition, that is, (\ref{501},\ref{523}). The authors of \cite{Pab} and \cite{Zur} have recognized that the presence of initial jolts in their master equation coefficients generates certain non-physical effects, and so they suppose that these are due to the adoption of a factorizable initial condition. In \cite{Pab2} the evolution of the system is analyzed for a non-factorizable initial condition, similar to the one used in \cite{Grab} and \cite{Cris}, in which the initial position of the particle is defined by a measurement process in a state in thermodynamic equilibrium with the bath. However, the initial jolts in the time scale $1/\Omega _{c}$ still persist. We believe that this happens because this initial condition does not satisfy (\ref{523}) when the initial mean values of the position of the particle and of the oscillators in the bath are calculated. Thus, in this aspect, it is less general than the improved factorizable initial condition that we considered. Actually, the initial jolt, at least the one they attribute to the decoherence process (see, however, the discussion below), does not appear in the more general initial condition later adopted in \cite{Zur2}. The latter is an initial condition prepared by a dynamical process in a finite time $t_{p}$.
In this case we can consider that the condition (\ref{523}) will be satisfied, since $ t_{p}\gg 1/\Omega _{c}$. Indeed, in this situation $\left( t_{p}\gg 1/\Omega _{c}\right) $, it was shown that the initial jolt does not appear. Thus, we believe that the initial condition (\ref{523}) is enough to eliminate most, or perhaps all, of the initial transients that would appear in this system on the characteristic time scale $1/\Omega _{c}$. However, another initial transient in this system is also known. In \cite{Amb} it was shown that, for a factorizable initial state, the high temperature limit $\left( kT\gg \hbar \Omega _{c}\right) $ of the master equation presents an initial transient within the time scale of the internal decoherence of the initial wave packet. If applied to times shorter than this, it can lead to nonsensical results. We believe that this pathology can only be really corrected with the adoption of non-factorizable initial conditions. \section{Classical Analysis and Discussion: Conclusion} \section{Conclusion} In this paper we have applied the Fano diagonalization procedure to two Hamiltonians commonly used as models for dissipative systems in quantum optics and in condensed matter systems: the rotating wave and the coordinate-coordinate coupling models, respectively. By exactly diagonalizing these two models we have succeeded in showing how the RWA turns out to be the extremely underdamped limit of the more general coordinate-coordinate coupling model. We have also been able to analyze the role played by the counter-term in this limiting procedure from the latter to the RWA. We have shown, through the evaluation of the destruction operator $\hat{a}\left( t\right) $ of the system, that the RWA is a good approximation for (\ref{0}) if and only if the conditions (\ref{436}) and (\ref{433}) are satisfied. For certain choices of $\left| v\left( \omega \right) \right| ^{2}$, we have $H\left( \omega _{o}\right) \approx \pi \left| v\left( \omega _{o}\right) \right| ^{2}$ and the fulfillment of (\ref{436}) automatically implies (\ref{433}). However, for other choices, we can have $H\left( \omega _{o}\right) \gg \pi \left| v\left( \omega _{o}\right) \right| ^{2}$ and (\ref{433}) limits the validity of the approximation. Once these conditions are satisfied, we have shown that the time evolution of the system is identical to that determined within the RWA, with the exception of the frequency shift. We have found that this shift will be given by $H\left( \omega _{o}\right) $ instead of $F\left( \omega _{o}\right) $. As we have shown, these functions usually have the same order of magnitude, but they are not identical. For functions $\left| v\left( \omega \right) \right| ^{2}$ that extend to frequencies much larger than $\omega _{o}$ we have $H\left( \omega _{o}\right) \simeq 2F\left( \omega _{o}\right) $. The comparison of the Hamiltonian (\ref{235}) with the Hamiltonian of the coordinate-coordinate coupling model established the relation (\ref{46}) between the spectral function $J\left( \omega \right) $ of this model and the coupling function $\left| v\left( \omega \right) \right| ^{2}$. In the case of ohmic dissipation, and considering the inclusion of the counter-term, we find that $H_{R}\left(\omega \right) =0$ in the limit $\Omega _{c}\rightarrow \infty $. Then the only condition required for the RWA to be valid is \begin{equation} \gamma \ll \omega _{o}.
\label{53} \end{equation} As an application of this method, we have studied the existence of dissipative coherent states and concluded that such states can only exist within the RWA and when thermal fluctuations are negligible. When these conditions are not met, the initial state will, in the long run, become a statistical mixture. Finally, we have also addressed the question of the discrepancies in the time evolution of the observables of the system that arise when the factorizable initial conditions are not properly accounted for. We have shown how to deal with this problem by using the appropriate improved factorizable initial condition (\ref{523}) rather than (\ref{503}). \acknowledgments M. R. da C. acknowledges full support from CAPES. A. O. C. is grateful for partial support from the Conselho Nacional de Desenvolvimento Cient\'{i}fico e Tecnol\'{o}gico (CNPq) and H. W., Jr. kindly acknowledges full support from Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP). The research of S.\ M.\ Dutra at UNICAMP has also been made possible by a postdoctoral grant from FAPESP. \appendix \section{Diagonalization without the RWA} Here, the procedure used in the diagonalization of the Hamiltonian (\ref{235}) will be presented. We want to find the operator $\hat{A}_{\omega }$ that allows us to write (\ref{235}) in diagonal form. We write $\hat{A} _{\omega }$ in its general form (\ref{272}) and then we impose the commutation relation (\ref{241}) \begin{equation} \left[ \hat{A}_{\omega },\hat{H}\right] =\hbar \omega \hat{A}_{\omega }. \label{a3} \end{equation} Replacing (\ref{235}) and (\ref{272}) in (\ref{a3}) and taking the commutators of the resulting expression with $\hat{a}^{\dagger }$, $\hat{a}$, $\hat{b}_{\Omega }$ and $\hat{b}_{\Omega }^{\dagger }$, we have, respectively, \begin{eqnarray} \omega \alpha _{\omega } &=&\omega _{o}\alpha _{\omega }+\int \left[ \beta _{\omega ,\Omega }v^{*}\left( \Omega \right) -\sigma _{\omega ,\Omega }v\left( \Omega \right) \right] d\Omega , \label{a5} \\ \omega \chi _{\omega } &=&-\omega _{o}\chi _{\omega }+\int \left[ \beta _{\omega ,\Omega }v^{*}\left( \Omega \right) -\sigma _{\omega ,\Omega }v\left( \Omega \right) \right] d\Omega , \label{a6} \\ \omega \beta _{\omega ,\Omega } &=&\left( \alpha _{\omega }-\chi _{\omega }\right) v\left( \Omega \right) +\Omega \beta _{\omega ,\Omega }, \label{a7} \\ \omega \sigma _{\omega ,\Omega } &=&\left( \alpha _{\omega }-\chi _{\omega }\right) v^{*}\left( \Omega \right) -\Omega \sigma _{\omega ,\Omega }. \label{a8} \end{eqnarray} Subtracting (\ref{a5}) from (\ref{a6}), we have \begin{equation} \chi _{\omega }=\frac{\omega -\omega _{o}}{\omega +\omega _{o}}\alpha _{\omega }. \label{a9} \end{equation} Replacing (\ref{a9}) in (\ref{a7}), we obtain \begin{equation} \beta _{\omega ,\Omega }=\left[ {\mathcal P}\frac{1}{\omega -\Omega }+z\left( \omega \right) \delta \left( \omega -\Omega \right) \right] \frac{2\omega _{o}}{\omega +\omega _{o}}v\left( \Omega \right) \alpha _{\omega }, \label{a10} \end{equation} where $z\left( \omega \right) $ is a function to be determined. Similarly, substituting (\ref{a9}) in (\ref{a8}), we have \begin{equation} \sigma _{\omega ,\Omega }=\frac{1}{\omega +\Omega }\frac{2\omega _{o}}{ \omega +\omega _{o}}v^{*}\left( \Omega \right) \alpha _{\omega }. \label{a11} \end{equation} Now, substituting (\ref{a10}-\ref{a11}) in (\ref{a5}), we obtain $z\left( \omega \right) $ as given by (\ref{277b}). It remains to determine $\alpha _{\omega }$.
For this we impose the condition (\ref{241b}), which results in \begin{equation} \begin{array}{l} \alpha _{\omega }\alpha _{\tilde{\omega}}^{*}+\int d\Omega \beta _{\omega ,\Omega }\beta _{\tilde{\omega},\Omega }^{*}-\chi _{\omega }\chi _{\tilde{ \omega}}^{*} \\ \qquad \qquad \qquad \qquad -\int d\Omega \sigma _{\omega ,\Omega }\sigma _{ \tilde{\omega},\Omega }^{*}=\delta \left( \omega -\tilde{\omega}\right) . \end{array} \label{a15} \end{equation} Using (\ref{a9}) and (\ref{a11}) we obtain, respectively, \begin{equation} \alpha _{\omega }\alpha _{\tilde{\omega}}^{*}-\chi _{\omega }\chi _{\tilde{ \omega}}^{*}=\frac{2\omega _{o}\left( \omega +\tilde{\omega}\right) }{\left( \omega +\omega _{o}\right) \left( \tilde{\omega}+\omega _{o}\right) }\alpha _{\omega }\alpha _{\tilde{\omega}}^{*} \label{a16} \end{equation} and \begin{equation} \int d\Omega \sigma _{\omega ,\Omega }\sigma _{\tilde{\omega},\Omega }^{*}= \frac{\left( 2\omega _{o}\right) ^{2}}{\left( \omega +\omega _{o}\right) \left( \tilde{\omega}+\omega _{o}\right) }\frac{G\left( \tilde{\omega} \right) -G\left( \omega \right) }{\omega -\tilde{\omega}}\alpha _{\omega }\alpha _{\tilde{\omega}}^{*}, \label{a17} \end{equation} where $G\left( \omega \right) $ is given by (\ref{278}). Now, using (\ref{a10}), as well as the property \begin{equation} \frac{{\mathcal P}}{\omega -\omega ^{\prime }}\frac{{\mathcal P}}{\tilde{\omega }-\omega ^{\prime }}=\frac{1}{\omega -\tilde{\omega}}\left( \frac{{\mathcal P} }{\tilde{\omega}-\omega ^{\prime }}-\frac{{\mathcal P}}{\omega -\omega ^{\prime }}\right) +\pi ^{2}\delta \left( \omega -\tilde{\omega}\right) \delta \left[ \omega ^{\prime }-\frac{1}{2}\left( \omega +\tilde{\omega} \right) \right] , \end{equation} we obtain \begin{eqnarray} \int d\Omega \beta _{\omega ,\Omega }\beta _{\tilde{\omega},\Omega }^{*}&=& \frac{\left( 2\omega _{o}\right) ^{2}}{\left( \omega +\omega _{o}\right) \left( \tilde{\omega}+\omega _{o}\right) }\left\{ \frac{1}{\omega -\tilde{ \omega}}\left[ \frac{\tilde{\omega}^{2}-\omega ^{2}}{2\omega _{o}}+G\left( \tilde{\omega}\right) -G\left( \omega \right) \right] \right. \nonumber \\ &&\qquad \qquad \qquad \qquad \qquad \left.+\left[ \pi ^{2}+z^{2}\left( \omega \right) \right] \left| v\left( \omega \right) \right| ^{2}\delta \left( \omega -\tilde{\omega}\right) \right\} . \label{a18} \end{eqnarray} Then, substituting (\ref{a16}-\ref{a17}) and (\ref{a18}) in (\ref{a15}), we have \begin{equation} \alpha _{\omega }\alpha _{\tilde{\omega}}^{*}\frac{\left( 2\omega _{o}\right) ^{2}\left| v\left( \omega \right) \right| ^{2}}{\left( \omega +\omega _{o}\right) \left( \tilde{\omega}+\omega _{o}\right) }\left[ \pi ^{2}+z^{2}\left( \omega \right) \right] \delta \left( \omega -\tilde{\omega} \right) =\delta \left( \omega -\tilde{\omega}\right) \label{a19} \end{equation} and, therefore, we should have $\left| \alpha _{\omega }\right| ^{2}$ given by (\ref{273}). In the calculations presented above we supposed that $\left| v\left( \omega \right) \right| $ is a continuous function such that $\left| v\left( 0\right) \right| =0$. In this way we guarantee that $\int_{0}^{\infty }d\Omega f\left( \Omega \right) \left| v\left( \Omega \right) \right| ^{2}\delta \left( \Omega -\omega \right) =f\left( \omega \right) \left| v\left( \omega \right) \right| ^{2}$ for any nonsingular function $f\left( \omega \right) $ on the whole interval $\left( 0,\infty \right) $. We can also diagonalize the Hamiltonian (\ref{301}) considering the introduction of the counter-term $V_{R}\left( \hat{q}\right) $.
Rewriting it in terms of the operators $\hat{a}$ and $\hat{b}_{j}$, defined in (\ref{41} ), we have \begin{equation} \hat{H}=\hbar \omega _{o}\hat{a}^{\dagger }\hat{a}+\hbar \frac{\Delta \omega ^{2}}{4\omega _{o}}\left( \hat{a}+\hat{a}^{\dagger }\right) ^{2}+\sum_{j}\hbar \omega _{j}\hat{b}_{j}^{\dagger }\hat{b}_{j}-\frac{\hbar }{2} \sqrt{\frac{1}{M\omega _{o}}}\left( \hat{a}+\hat{a}^{\dagger }\right) \sum_{j}\frac{C_{j}}{\sqrt{m_{j}\omega _{j}}}\left( \hat{b}_{j}+\hat{b}_{j} ^{\dagger }\right) . \label{c2} \end{equation} Writing (\ref{c2}) in the continuum limit and following the same procedure as adopted above, we will see that the equations (\ref{a5}) and (\ref{a6}) will be substituted now by the equations \begin{eqnarray} \omega \alpha _{\omega } &=&\left( \omega _{o}+\frac{\Delta \omega ^{2}}{ 2\omega _{o}}\right) \alpha _{\omega }-\frac{\Delta \omega ^{2}}{2\omega _{o} }\chi _{\omega }+\int \left[ \beta _{\omega ,\Omega }v^{*}\left( \Omega \right) -\sigma _{\omega ,\Omega }v\left( \Omega \right) \right] d\Omega , \label{c3} \\ \omega \chi _{\omega } &=&-\left( \omega _{o}+\frac{\Delta \omega ^{2}}{ 2\omega _{o}}\right) \chi _{\omega }+\frac{\Delta \omega ^{2}}{2\omega _{o}} \alpha _{\omega }+\int \left[ \beta _{\omega ,\Omega }v^{*}\left( \Omega \right) -\sigma _{\omega ,\Omega }v\left( \Omega \right) \right] d\Omega , \label{c4} \end{eqnarray} respectively. The equations (\ref{a7}) and (\ref{a8}) will stay the same. Thus, it can be easily shown that all the other previous equations will not change with the only difference that the function $H\left( \omega \right) $ should be substituted by $H_{R}\left( \omega \right) $ given in (\ref{410}). \section{Calculation of \lowercase{$\uppercase{{\mathcal F}}_{\uppercase{\mathcal R}}\left( \left\langle \hat{q}_{j}\right\rangle _{\uppercase{\mathcal R}},\left\langle \hat{p}_{j}\right\rangle _{\uppercase{\mathcal R}};t\right) $}} The expression (\ref{531}) for ${\mathcal F}_{\mathcal R}\left( \hat{q}_{\Omega },\hat{ p}_{\Omega };t\right) $ can be written as \begin{equation} {\mathcal F}_{\mathcal R}\left( \hat{q}_{\Omega },\hat{p}_{\Omega };t\right) =2\omega _{o}\int \frac{d\Omega }{\pi }\sqrt{\frac{m_{\Omega }\Omega }{M\omega _{o}}} v\left( \Omega \right) \left[ {\mathcal J}\left( \Omega ;t\right) \hat{q} _{\Omega }+{\mathcal K}\left( \Omega ;t\right) \frac{\hat{p}_{\Omega }}{ m_{\Omega }\Omega }\right] , \label{81} \end{equation} where the expressions for ${\mathcal J}\left( \Omega ;t\right) $ and ${\mathcal K} \left( \Omega ;t\right) $ are obtained by direct comparison between (\ref{81} ) and (\ref{531}). Now we can substitute the expression (\ref{44}) for $ v\left( \Omega \right) $ in (\ref{81}) and write the expression obtained in the discrete limit \begin{eqnarray} {\mathcal F}_{\mathcal R}\left( \hat{q}_{\Omega },\hat{p}_{\Omega };t\right) &=&-\frac{1 }{M}\sum_{j}\frac{C_{\Omega _{j}}}{\pi }\left[ {\mathcal J}\left( \Omega _{j};t\right) \sqrt{g\left( \Omega _{j}\right) }\int_{1/g\left( \Omega _{j}\right) }d\Omega \hat{q}_{\Omega } \right. \nonumber \\ &&\qquad \qquad \qquad \qquad \left. +\frac{{\mathcal K}\left( \Omega _{j};t\right) }{m_{\Omega _{j}}\Omega _{j}}\sqrt{g\left( \Omega _{j}\right) } \int_{1/g\left( \Omega _{j}\right) }d\Omega \hat{p}_{\Omega }\right] . 
\label{83} \end{eqnarray} Recalling the relation (\ref{230}) between the discrete and continuous operators, we obtain \begin{equation} {\mathcal F}_{\mathcal R}\left( \hat{q}_{j},\hat{p}_{j};t\right) =-\frac{1}{M}\sum_{j} \frac{C_{j}}{\pi }\left[ {\mathcal J}\left( \Omega _{j};t\right) \hat{q}_{j}+ {\mathcal K}\left( \Omega _{j};t\right) \frac{\hat{p}_{j}}{m_{\Omega _{j}}\Omega _{j}}\right] . \label{84} \end{equation} Employing the initial condition (\ref{523}), we have \begin{eqnarray} {\mathcal F}_{\mathcal R}\left( \left\langle \hat{q}_{j}\right\rangle _{\mathcal R},\left\langle \hat{p}_{j}\right\rangle _{\mathcal R};t\right) &=&-\frac{1}{M\pi } \sum_{j}\frac{C_{j}^{2}}{m_{j}\Omega _{j}^{2}}\ {\mathcal J}\left( \Omega _{j};t\right) \left\langle \hat{q}\right\rangle _{\mathcal S} \nonumber \\ &=&{\mathcal H}\left( t\right) \left\langle \hat{q}\right\rangle _{\mathcal S}, \end{eqnarray} with \begin{equation} {\mathcal H}\left( t\right) =-4\omega _{o}\int \frac{d\Omega }{\pi }\frac{ \left| v\left( \Omega \right) \right| ^{2}}{\Omega }{\mathcal J}\left( \Omega ;t\right) , \end{equation} where we used again the relation (\ref{44}). Writing ${\mathcal H}\left( t\right) $ as \begin{equation} {\mathcal H}\left( t\right) =I_{1}\left( t\right) +I_{2}\left( t\right) , \label{8102} \end{equation} we have \begin{eqnarray} I_{1}\left( t\right) &=&-4\omega _{o}\int \frac{d\Omega }{\pi }\frac{\left| v\left( \Omega \right) \right| ^{2}}{\Omega }\frac{d}{dt}W_{R}\left( \Omega ,t\right) , \label{811} \\ I_{2}\left( t\right) &=&-4\omega _{o}\int \frac{d\Omega }{\pi }\frac{\left| v\left( \Omega \right) \right| ^{2}}{\Omega }Z_{R}\left( \Omega \right) \cos \left( \Omega t\right) . \end{eqnarray} The calculation of $I_{1}\left( t\right) $ is somewhat lengthy but straightforward and results in $I_{1}\left( t\right) =0$. So all that is left is \begin{equation} {\mathcal H}\left( t\right) =I_{2}\left( t\right) =4\gamma \int \frac{d\Omega }{\pi }\frac{\omega _{o}^{2}-\Omega ^{2}}{\left( \Omega ^{2}-\omega _{o}^{2}\right) ^{2}+\left( 2\gamma \Omega \right) ^{2}}\cos \left( \Omega t\right) . \label{813} \end{equation} The evaluation of this last integral can also be accomplished by the method of residues and yields \begin{equation} {\mathcal H}\left( t\right) =2\gamma {\mathcal L}\left( t\right) , \label{814} \end{equation} for $t>0$. \begin{figure} \caption{(a) Graph of $\left| L\left( \omega \right) \right| _{R}$.} \label{abs} \end{figure} \end{document}
\begin{document} \title[Auslander's Formula: Variations and Applications]{Auslander's Formula: Variations and Applications} \author[Asadollahi, Asadollahi, Hafezi, Vahed]{Javad Asadollahi, Najmeh Asadollahi, Rasool Hafezi and Razieh Vahed} \address{Department of Mathematics, University of Isfahan, P.O.Box: 81746-73441, Isfahan, Iran and School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O.Box: 19395-5746, Tehran, Iran } \email{[email protected], [email protected]} \address{Department of Mathematics, University of Isfahan, P.O.Box: 81746-73441, Isfahan, Iran} \email{[email protected]} \address{School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O.Box: 19395-5746, Tehran, Iran } \email{[email protected]} \address{Department of Mathematics, Khansar Faculty of Mathematics and Computer Science, Khansar, Iran and School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O.Box: 19395-5746, Tehran, Iran } \email{[email protected]} \subjclass[2010]{18E30, 16E35, 18E15} \keywords{Auslander's formula, functor category, recollement, derived category.} \thanks{This research was in part supported by a grant from IPM (No: 94130216)} \begin{abstract} According to Auslander's formula, one way of studying an abelian category $\mathbb{C} C$ is to study ${\rm{{mod\mbox{-}}}} \mathbb{C} C$, which has nicer homological properties than $\mathbb{C} C$, and then translate the results back to $\mathbb{C} C$. Recently, Krause gave a derived version of this formula and thus renewed the subject. This paper contains a detailed study of various versions of Auslander's formula, including the versions for all modules and for unbounded derived categories. We apply them to deduce some results concerning recollements of triangulated categories. \end{abstract} \maketitle \section{Introduction} Let $\mathbb{C} C$ be an abelian category. A contravariant functor $F$ from $\mathbb{C} C$ to the category of abelian groups $\mathcal{A}b$ is called finitely presented, or coherent \cite{As1}, if there exists an exact sequence $$ {\rm{Hom}}_{\mathbb{C} C}(-, X) \longrightarrow {\rm{Hom}}_{\mathbb{C} C}(-,Y) \longrightarrow F \longrightarrow 0$$ of functors. Let ${\rm{{mod\mbox{-}}}} \mathbb{C} C$ denote the category of all coherent functors. The systematic study of ${\rm{{mod\mbox{-}}}} \mathbb{C} C$ was initiated by Auslander \cite{As1}. He not only showed that ${\rm{{mod\mbox{-}}}} \mathbb{C} C$ is an abelian category of global dimension less than or equal to two, but also provided a nice connection between ${\rm{{mod\mbox{-}}}} \mathbb{C} C$ and $\mathbb{C} C$. This connection, which is known as the Auslander formula \cite{L,K}, suggests that one way of studying $\mathbb{C} C$ is to study ${\rm{{mod\mbox{-}}}} \mathbb{C} C$, which has nicer homological properties than $\mathbb{C} C$, and then translate the results back to $\mathbb{C} C$. In particular, if we let $\mathbb{C} C$ be ${\rm{{mod\mbox{-}}}} \Lambda$, where $\Lambda$ is an artin algebra, the Auslander formula translates to the equivalence $$\frac{{\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} \Lambda)}{\{F\mid F(\Lambda)=0\}} \simeq {\rm{{mod\mbox{-}}}} \Lambda$$ of abelian categories. As mentioned in \cite{L}, `a considerable part of Auslander's work on the representation theory of finite dimensional, or more general artin, algebras can be connected to this formula'.
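To fix ideas, let us recall briefly how this equivalence can be realized; the next two sentences are only an orienting sketch, standard and implicit in the references above. The equivalence is induced by evaluation at $\Lambda$: a coherent functor $F$ with presentation $$ {\rm{Hom}}_{\Lambda}(-, X) \stackrel{(-,f)}\longrightarrow {\rm{Hom}}_{\Lambda}(-,Y) \longrightarrow F \longrightarrow 0$$ is sent to $F(\Lambda)$, which is nothing but the cokernel of $f: X \longrightarrow Y$, so that $F(\Lambda)=0$ precisely when $f$ is an epimorphism. Conversely, every $M \in {\rm{{mod\mbox{-}}}} \Lambda$ arises in this way, for instance as $F(\Lambda)$ for the coherent functor $F$ presented by a projective presentation $P_1 \longrightarrow P_0 \longrightarrow M \longrightarrow 0$ of $M$.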
Recently, Krause \cite{K} established a derived version of Auslander's formula, showing that $\mathbb{D} ^{\bb}(\mathbb{C} C)$ is equivalent to a quotient of $\mathbb{D} ^{\bb}({\rm{{mod\mbox{-}}}}\mathbb{C} C)$. Also he gave a derived version of this formula for complexes of injective objects \cite[Sec. 4]{K}. This work can be considered as a continuation of \cite{K}. It contains a detailed study of various versions of Auslander's formula, including the versions for all modules and for unbounded derived categories. These will have some applications, in particular, provide two expressions of $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ as Verdier quotients. For the proof, we follow similar argument as in the proof of the classical case by Auslander, step by step. Let us be more precise on the structure of the paper. Section \ref{Section 2} is the preliminary section and contains a collection of known facts that we need throughout the paper. Section \ref{Section 3} is devoted to Auslander's formula and its variations, from large Mod to different derived versions, i.e. unbounded, bounded above and bounded, both for contravariant and also covariant functors. One of the key points is a fundamental four terms exact sequence, similar to what Auslander has proved to exist \cite[pp. 203-204]{As1}. Here we use special flat resolutions instead of projective resolutions in Auslander's work, to prove that such a sequence exists in our context, see Proposition \ref{Aus-ExtSeq}. Let $R$ be a right coherent ring. We extend the existence of the sequence to complexes of functors over ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ and apply it to present an unbounded derived version of Auslander's formula for ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$, i.e. $$\frac{\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} _0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}\simeq \mathbb{D} ({\rm{Mod\mbox{-}}} R),$$ where $\mathbb{D} _0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ is the thick subcategory of $\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ consisting of all complexes $\mathbf{X}$ such that $\mathbf{X}(R)$ is an acyclic complex. This equivalence restricts to triangle equivalences $$\frac{\mathbb{D} ^*({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} ^*_0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}\simeq \mathbb{D} ^*({\rm{Mod\mbox{-}}} R),$$ where $* \in \{-, \bb\}$ and $\mathbb{D} ^*_0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))= \mathbb{D} _0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \bigcap \mathbb{D} ^*({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. The argument works also to reprove Krause's result as well as its extension to unbounded derived categories, see Proposition \ref{Ext-Krause}. These are done in Subsection \ref{Contravariant functor}. A version of Auslander's formula for ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$, the category of covariant functors from ${\rm{{mod\mbox{-}}}} R$ to $\mathbb{C} A b$ can be found in Subsection \ref{Covariant functors}. In Section \ref{Section 4}, we apply our results to present two recollements and hence two descriptions of $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ as the Verdier quotients of homotopy categories. To this end, we consider the pure-exact structure on the category ${\rm{Mod\mbox{-}}} R$. The injective objects with respect to the pure-exact structure are called pure-injective $R$-modules. We denote the class of pure-injective $R$-modules by ${\rm P}{\lan I \ran}nj R$. 
Dually, we have the class ${\rm P}{\rm{Prj}\mbox{-}} R$ of all pure-projective $R$-modules. We show that the homotopy category $\mathbb{K} ({\rm P}{\lan I \ran}nj R)$ glues the homotopy categories $\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)$ of all acyclic complexes over pure-injective $R$-modules and the derived category $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$, i.e. there is a recollement \[\xymatrix@C=0.5cm{ \mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R) \ar[rrr]^{ } &&& \mathbb{K} ({\rm P}{\lan I \ran}nj R) \ar[rrr]^{} \ar@/^1.5pc/[lll]_{ } \ar@/_1.5pc/[lll]_{} &&& \mathbb{D} ({\rm{Mod\mbox{-}}} R).\ar@/^1.5pc/[lll]_{} \ar@/_1.5pc/[lll]_{} }\] Moreover, we show that similar recollement exists for $\mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R)$. There are some interesting consequences, among them an equivalence $$\mathbb{K} _{{\rm{ac}}}({\rm P} {\lan I \ran}nj R) \simeq \mathbb{K} _{{\rm{ac}}}({\rm P}{\rm{Prj}\mbox{-}} R),$$ of triangulated categories, where $\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)$, resp. $\mathbb{K} _{{\rm{ac}}}({\rm P}{\rm{Prj}\mbox{-}} R)$, denotes the homotopy category of all acyclic complexes of pure-injective, resp. pure-projective, $R$-modules. Throughout the paper $R$ denotes a right coherent ring, ${\rm{Mod\mbox{-}}} R$ denotes the category of all right $R$-modules and ${\rm{{mod\mbox{-}}}} R$ denotes the full subcategory of ${\rm{Mod\mbox{-}}} R$ consisting of all finitely presented modules. \\ \section{Preliminaries}\lambdabel{Section 2} In this section we collect some facts, that are needed throughout the paper. \s Let $\mathbb{C} A$ be an abelian category. We denote by $\mathbb{C} (\mathbb{C} A)$ the category of all complexes over $\mathbb{C} A$ and by $\mathbb{K} (\mathbb{C} A)$ the homotopy category of $\mathbb{C} A$. Moreover, $\mathbb{K} ^-(\mathbb{C} A)$, resp. $\mathbb{K} ^{\bb}(\mathbb{C} A)$, denote the full subcategory of $\mathbb{K} (\mathbb{C} A)$ consisting of all bounded above, resp. bounded, complexes. The derived category of $\mathbb{C} A$ will be denoted by $\mathbb{D} (\mathbb{C} A)$. Moreover, $\mathbb{D} ^{-}(\mathbb{C} A)$, resp. $\mathbb{D} ^{\bb}(\mathbb{C} A)$, denotes the full subcategory of $\mathbb{D} (\mathbb{C} A)$ consisting of all homologically bounded above, resp. homologically bounded, complexes. Let ${\rm{Prj}\mbox{-}}\mathbb{C} A$, resp. ${\lan I \ran}nj\mathbb{C} A$, denote the full subcategory of $\mathbb{C} A$ formed by all projective, resp. all injective, objects. In case $\mathbb{C} A= {\rm{Mod\mbox{-}}} R$, we abbreviate ${\rm{Prj}\mbox{-}} ({\rm{Mod\mbox{-}}} R)$ to ${\rm{Prj}\mbox{-}} R$ and set ${\rm{prj}\mbox{-}} R = {\rm{Prj}\mbox{-}} R \bigcap {\rm{{mod\mbox{-}}}} R$. Similarly ${\lan I \ran}nj R$ and ${\rm{inj}\mbox{-}} R$ will be defined.\\ \noindent {\bf Functor categories.} Let $\mathbb{C} C$ be an essentially small abelian category. The additive contravariant functors from $\mathbb{C} C$ to the category of abelian groups $\mathbb{C} A b$ together with the natural transformations between them form a category which is known as the functor category and is denoted either by $(\mathbb{C} C^{{\rm{op}}},\mathbb{C} A b)$ or ${\rm{Mod\mbox{-}}} \mathbb{C} C$. The category ${\rm{Mod\mbox{-}}} \mathbb{C} C$, sometimes, is called the category of modules over $\mathbb{C} C$. It is known that ${\rm{Mod\mbox{-}}}\mathbb{C} C$ is an abelian category. 
Similarly, all covariant functors and their natural transformations form an abelian category which is denoted by $(\mathbb{C} C, \mathbb{C} A b)$, or sometimes by ${\rm{Mod\mbox{-}}}\mathbb{C} C^{{\rm{op}}}$. It follows from the Yoneda lemma that for every object $C \in \mathbb{C} C$, the representable functor ${\rm{Hom}}_{\mathbb{C} C}(-,C)$ is a projective object of ${\rm{Mod\mbox{-}}} \mathbb{C} C$. Also, for every functor $F$ in ${\rm{Mod\mbox{-}}}\mathbb{C} C$ there is an epimorphism $\coprod_i {\rm{Hom}}_{\mathbb{C} C}(-,C_i) \longrightarrow F \longrightarrow 0$, where $C_i$ runs through all isomorphism classes of objects of $\mathbb{C} C$. Hence, the abelian category ${\rm{Mod\mbox{-}}} \mathbb{C} C$ has enough projective objects. A $\mathbb{C} C$-module $F$ is called finitely presented if there is an exact sequence $$ {\rm{Hom}}_{\mathbb{C} C}(-,C_1) \longrightarrow {\rm{Hom}}_{\mathbb{C} C}(-, C_0) \longrightarrow F \longrightarrow 0$$ of $\mathbb{C} C$-modules, where $C_1 , C_0 \in \mathbb{C} C$. The category of all finitely presented $\mathbb{C} C$-modules is an abelian category \cite[Chapter III, \S 2]{Au2} and will be denoted by ${\rm{{mod\mbox{-}}}} \mathbb{C} C$.\\ \s \lambdabel{ProjObj} Recall that a short exact sequence $$0 \longrightarrow M' \longrightarrow M \longrightarrow M''\longrightarrow 0$$ of $R$-modules is called pure-exact if for every $N \in {\rm{{mod\mbox{-}}}} R$, the induced sequence $$ 0 \longrightarrow {\rm{Hom}}_R(N, M') \longrightarrow {\rm{Hom}}_R(N,M) \longrightarrow {\rm{Hom}}_R(N,M'') \longrightarrow 0$$ is exact. Consider the pure-exact structure on the category ${\rm{Mod\mbox{-}}} R$. An $R$-module $M$ is called pure-projective if it is a projective object with respect to this exact structure. Warfield \cite{Wa} showed that pure-projective modules are precisely the direct summands of direct sums of finitely presented modules, see also \cite[33.6]{Wi}. We denote by ${\rm P}{\rm{Prj}\mbox{-}} R$ the full subcategory of ${\rm{Mod\mbox{-}}} R$ consisting of all pure-projective modules. The subcategory ${\rm P}{\lan I \ran}nj R$ of ${\rm{Mod\mbox{-}}} R$ consisting of all pure-injective modules is defined dually. It is known that a functor $P$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ is projective if and only if $P \cong {\rm{Hom}}_R(-,M)$ for some pure-projective $R$-module $M$, see e.g. \cite[Theorem B.10]{JL}.\\ \noindent {\bf Recollements of abelian categories.} A subcategory $\mathbb{C} C$ of an abelian category $\mathbb{C} A$ is called a Serre subcategory if for every short exact sequence $0 \rightarrow C_1 \rightarrow C_2 \rightarrow C_3 \rightarrow 0$ in $\mathbb{C} A$, $C_2 \in \mathbb{C} C$ if and only if $C_1, C_3 \in \mathbb{C} C$. For a Serre subcategory $\mathbb{C} C$ of $\mathbb{C} A$, Gabriel \cite{Gab} constructed an abelian category $\mathbb{C} A/\mathbb{C} C$ with the same objects as in $\mathbb{C} A$ and morphism sets \[ {\rm{Hom}}_{\mathbb{C} A/\mathbb{C} C}(X, Y) = \underset{ X',Y'}{\underrightarrow{\rm lim}} {\rm{Hom}}_{\mathbb{C} A}(X', Y/Y'),\] where $X'$, resp. $Y'$, runs through all subobjects of $X$, resp. $Y$, such that $X/X'$, resp. $Y'$, lies in $\mathbb{C} C$. Associated to a Serre subcategory $\mathbb{C} C$ of $\mathbb{C} A$, there is an exact and dense quotient functor $Q: \mathbb{C} A \longrightarrow \mathbb{C} A/\mathbb{C} C$. A Serre subcategory $\mathbb{C} C$ is called localizing, resp. colocalizing, if $Q$ possesses a right, resp. left, adjoint.
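A standard example, recalled here only for orientation and not used in the sequel: in the category $\mathbb{C} A b$ of abelian groups, the torsion groups form a Serre subcategory which is moreover localizing; the quotient functor can be identified with $A \mapsto A \otimes_{\mathbb{Z} } \mathbb{Q} $, its right adjoint with the restriction of scalars along $\mathbb{Z} \rightarrow \mathbb{Q} $, and the resulting quotient category is equivalent to the category of vector spaces over $\mathbb{Q} $.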
Let $\mathbb{C} B$ be another abelian category and $F: \mathbb{C} A \longrightarrow \mathbb{C} B$ be an additive functor. We set $ {\rm{Im}} F= \{ B \in \mathbb{C} B \mid B \cong F(A), \text{ for some } A \in \mathbb{C} A\}$ and $\mathbb{K} er F= \{ A \in \mathbb{C} A \mid F(A)=0\}.$ \begin{definition}\lambdabel{Def-Rec} Let $\mathbb{C} A'$, $\mathbb{C} A$ and $\mathbb{C} A''$ be abelian categories. A recollement \cite{BBD} of $\mathbb{C} A$ with respect to $\mathbb{C} A'$ and $\mathbb{C} A''$ is a diagram \[\xymatrix{\mathbb{C} A'\ar[rr]^{i_*=i_!} && \mathbb{C} A \ar[rr]^{j^*=j^!} \ar@/^1pc/[ll]_{i^!} \ar@/_1pc/[ll]_{i^*} && \mathbb{C} A'' \ar@/^1pc/[ll]_{j_*} \ar@/_1pc/[ll]_{j_!} }\] of additive functors satisfying the following conditions: \begin{itemize} \item[$(i)$] $(i^*,i_*)$, $(i_!,i^!)$, $(j_!, j^!)$ and $(j^*,j_*)$ are adjoint pairs. \item[$(ii)$] $i_*$, $j_*$ and $j_!$ are fully faithful. \item[$(iii)$] ${\rm{Im}} i_*= \mathbb{K} er j^*$. \end{itemize} \end{definition} A sequence of abelian categories is called a localization sequence if the lower two rows of a recollement exist and the functors appearing in these two rows, i.e. $i_*, i^!, j^!$ and $j_*$, satisfy all the conditions in the definition above which involve only these functors. Similarly, one can define a colocalization sequence of abelian categories via the upper two rows.\\ For the proof of the following facts see e.g. \cite{FP} and \cite{Gab}. \begin{remark}\lambdabel{Properties} Consider the recollement of Definition \ref{Def-Rec}. Then the functors $i_*$ and $j^*$ are exact and $i_*$ induces an equivalence between $\mathbb{C} A'$ and the Serre subcategory ${\rm{Im}} i_* = \mathbb{K} er j^*$ of $\mathbb{C} A$. In particular, $\mathbb{C} A'$ can be considered as a Serre subcategory of $\mathbb{C} A$. Furthermore, since the exact functor $j^*$ has a fully faithful right, resp. left, adjoint, $\mathbb{C} A'$ is a localizing, resp. colocalizing, subcategory of $\mathbb{C} A$ and there exists an equivalence $\mathbb{C} A'' \simeq \mathbb{C} A/\mathbb{C} A'$. \end{remark} \noindent {\bf Recollements of triangulated categories and stable $t$-structures.} Let $\mathbb{C} T$, $\mathbb{C} T'$ and $\mathbb{C} T''$ be triangulated categories. \begin{definition} A recollement of $\mathbb{C} T$ relative to $\mathbb{C} T'$ and $\mathbb{C} T''$ is defined by six triangulated functors as follows \[\xymatrix{\mathbb{C} T'\ar[rr]^{i_*=i_!} && \mathbb{C} T \ar[rr]^{j^*=j^!} \ar@/^1pc/[ll]_{i^!} \ar@/_1pc/[ll]_{i^*} && \mathbb{C} T'' \ar@/^1pc/[ll]_{j_*} \ar@/_1pc/[ll]_{j_!} }\] satisfying the following conditions: \begin{itemize} \item[$(i)$] $(i^*,i_*)$, $(i_!,i^!)$, $(j_!, j^!)$ and $(j^*,j_*)$ are adjoint pairs. \item[$(ii)$] $i^!j_*=0$, and hence $j^!i_!=0$ and $i^*j_!=0$. \item[$(iii)$] $i_*$, $j_*$ and $j_!$ are fully faithful. \item[$(iv)$] For any object $T \in \mathbb{C} T$, there exist the following triangles \[i_!i^!(T) \rightarrow T \rightarrow j_*j^*(T) \rightsquigarrow \ \ \ \text{and} \ \ \ j_!j^!(T) \rightarrow T \rightarrow i_*i^*(T) \rightsquigarrow\] in $\mathbb{C} T$. \end{itemize} \end{definition} Similar to the case of abelian categories, one can define a localization and a colocalization sequence of triangulated categories. \begin{definition} A pair $(\mathbb{C} U, \mathbb{C} V)$ of full subcategories of a triangulated category $\mathbb{C} T$ is called a stable $t$-structure in $\mathbb{C} T$ if the following conditions are satisfied.
\begin{itemize} \item[$(i)$] $\mathcal{U}=\Sigma \mathcal{U}$ and $\mathcal{V}= \Sigma \mathcal{V}$. \item[$(ii)$] ${\rm{Hom}}_{\mathbb{C} T}(\mathcal{U}, \mathcal{V})=0$. \item[$(iii)$] For each $X \in \mathbb{C} T$, there is a triangle $U \rightarrow X \rightarrow V \rightsquigarrow$ with $U \in \mathcal{U}$ and $V \in \mathcal{V}$. \end{itemize} \end{definition} The following result establishes a close relation between recollements of triangulated categories and stable $t$-structures, see \cite[Proposition 2.6]{Mi}. \begin{proposition} \lambdabel{Miyachi} Let $\mathbb{C} T$ be a triangulated category. Let $(\mathbb{C} U,\mathbb{C} V)$ and $(\mathbb{C} V,\mathbb{C} W)$ be stable $t$-structures in $\mathbb{C} T$. Then there is a recollement \[\xymatrix@C-0.5pc@R-0.5pc{ \mathbb{C} V \ar[rr]^{i_*} && \mathbb{C} T \ar[rr]^{j^*} \ar@/^1pc/[ll]_{i^!} \ar@/_1pc/[ll]_{i^*} && \mathbb{C} T/\mathbb{C} V \ar@/^1pc/[ll]_{j_*} \ar@/_1pc/[ll]_{j_!} }\] in which $i_*: \mathbb{C} V \longrightarrow \mathbb{C} T$ is the inclusion functor, ${\rm{Im}} j_!=\mathbb{C} U$ and ${\rm{Im}} j_* =\mathbb{C} W$. \end{proposition} We also need the following result of Miyachi. \begin{proposition}\lambdabel{Miyachi2} \cite{Mi} Let $\mathbb{C} T$ be a triangulated category. Then the following statements hold true. \begin{itemize} \item [$(i)$] Let $(\mathbb{C} U,\mathbb{C} V)$ be a stable $t$-structure in $\mathbb{C} T$. Then the inclusion functor $i_*: \mathbb{C} U \longrightarrow \mathbb{C} T$, resp.\ $j_*: \mathbb{C} V \longrightarrow \mathbb{C} T$, admits a right adjoint $i^!: \mathbb{C} T \longrightarrow \mathbb{C} U$, resp.\ a left adjoint $j^*: \mathbb{C} T \longrightarrow \mathbb{C} V$. Moreover, the functor $i^!$, resp.\ $j^*$, induces a triangle equivalence $\mathbb{C} T/\mathbb{C} V \simeq \mathbb{C} U$, resp.\ $\mathbb{C} T/\mathbb{C} U \simeq \mathbb{C} V$. \item [$(ii)$] If the inclusion functor $i_*: \mathbb{C} T' \longrightarrow \mathbb{C} T$ has a left adjoint $i^*: \mathbb{C} T \longrightarrow \mathbb{C} T'$, then $(\mathbb{K} er i^*, {\rm{Im}} i_*)$ is a stable $t$-structure in $\mathbb{C} T$. \item [$(iii)$] If the inclusion functor $i_*: \mathbb{C} T' \longrightarrow \mathbb{C} T$ has a right adjoint $i^*: \mathbb{C} T \longrightarrow \mathbb{C} T'$, then $({\rm{Im}} i_*, \mathbb{K} er i^*)$ is a stable $t$-structure in $\mathbb{C} T$. \end{itemize} \end{proposition} \noindent {\bf Cotorsion theory.} A pair $(\mathbb{C} X, \mathbb{C} Y)$ of classes of objects of an abelian category $\mathbb{C} A$ is called a cotorsion theory if $\mathbb{C} X^\perp=\mathbb{C} Y$ and $\mathbb{C} X = {}^\perp\mathbb{C} Y$, where the left and right orthogonals are taken with respect to ${\rm{Ext}}^1_{\mathbb{C} A}$. So, for example, \[\mathbb{C} X^\perp:=\{A \in \mathbb{C} A \ \mid \ {\rm{Ext}}^1_{\mathbb{C} A}(X,A)=0, \ {\rm{for \ all}} \ X \in \mathbb{C} X \}.\] A cotorsion theory $(\mathbb{C} X, \mathbb{C} Y)$ is called complete if for every $A \in \mathbb{C} A$ there exist exact sequences $0 \rightarrow Y \rightarrow X \rightarrow A \rightarrow 0$ and $0 \rightarrow A \rightarrow Y' \rightarrow X' \rightarrow 0$, with $X, X'\in \mathbb{C} X$ and $Y, Y' \in \mathbb{C} Y$. A cotorsion theory $(\mathbb{C} X, \mathbb{C} Y)$ is said to be cogenerated by a set if there is a set $\mathbb{C} S$ of objects of $\mathbb{C} A$ such that $\mathbb{C} S^\perp = \mathbb{C} Y$. If a cotorsion theory is cogenerated by a set, then it is complete, see \cite[Theorem 10]{ET}.
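Let us also record, only as a reminder and with no claim of novelty, the standard link between completeness and special precovers: if $(\mathbb{C} X, \mathbb{C} Y)$ is a complete cotorsion theory and $0 \rightarrow Y \rightarrow X \stackrel{\pi}\rightarrow A \rightarrow 0$ is an exact sequence with $X \in \mathbb{C} X$ and $Y \in \mathbb{C} Y$, then for every $X' \in \mathbb{C} X$ the sequence $$ {\rm{Hom}}_{\mathbb{C} A}(X', X) \longrightarrow {\rm{Hom}}_{\mathbb{C} A}(X', A) \longrightarrow {\rm{Ext}}^1_{\mathbb{C} A}(X', Y)=0$$ is exact, so every morphism $X' \rightarrow A$ factors through $\pi$; since, moreover, the kernel of $\pi$ belongs to $\mathbb{C} X^{\perp}$, the morphism $\pi$ is a special $\mathbb{C} X$-precover of $A$ (for the class of flat functors, this is the notion of special flat precover that appears in Section \ref{Section 3}).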
\section{Deriving Auslander's formula}\lambdabel{Section 3} In this section, we prove a version of Auslander's formula for ${\rm{Mod\mbox{-}}} R$ and also provide an unbounded derived version of Auslander's formula for it, i.e. we prove that $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ is equivalent to a quotient of $\mathbb{D} ({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R))$. Note that in this section $R$ is always a right coherent ring. \subsection{Contravariant functors}\lambdabel{Contravariant functor} Recall that a functor $F \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ is called flat if every morphism $f: E \longrightarrow F$, where $E$ is a finitely presented functor, factors through a representable functor $P$. It is known that an object $F$ of ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ is flat if and only if $F \cong {\rm{Hom}}_R(-, M)$, for some $R$-module $M$, see \cite[Theorem B.10]{JL}. Let $\mathbb{C} F({\rm{{mod\mbox{-}}}} R)$ denote the full subcategory of ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ consisting of all flat objects. It is known \cite[Theorem B.11]{JL} that the functor $$U: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$$ given by $U(M) = {\rm{Hom}}_R(-, M)|_{{\rm{{mod\mbox{-}}}} R}$ is fully faithful and induces an equivalence ${\rm{Mod\mbox{-}}} R \simeq \mathbb{C} F({\rm{{mod\mbox{-}}}} R)$ of categories. For simplicity, throughout we write $(-, M)$, resp. $(-, f)$, instead of ${\rm{Hom}}_R(-, M)|_{{\rm{{mod\mbox{-}}}} R}$, resp. ${\rm{Hom}}_R(-, f)$, where $M$ is an $R$-module and $f$ is an $R$-homomorphism. Let $F$ be an object in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$. Consider the following projective presentation of $F$ $$ (-,M_1) \stackrel{(-,d_1)}\longrightarrow (-,M_0) \longrightarrow F \longrightarrow 0, $$ where $M_1$ and $M_0$ belong to ${\rm P}{\rm{Prj}\mbox{-}} R$. Let $M_2 \stackrel{d_2} \longrightarrow M_1$ be the kernel of $d_1$. Then there is a flat resolution $$ 0 \longrightarrow (-,M_2) \stackrel{(-,d_2)} \longrightarrow (-,M_1) \stackrel{(-,d_1)} \longrightarrow (-,M_0) \longrightarrow F \longrightarrow 0$$ of $F$. Hence, every functor in ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ has a flat resolution of length at most 2. By definition, for a functor $F \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$, a flat precover is a morphism $\pi:(-, M) \longrightarrow F$ such that $M \in {\rm{Mod\mbox{-}}} R$ and ${\rm{Hom}} (- , (-,M)) \longrightarrow {\rm{Hom}} (-, F)$ is surjective on $\mathbb{C} F({\rm{{mod\mbox{-}}}} R)$. If, moreover, the kernel of $\pi$ belongs to $\mathbb{C} F({\rm{{mod\mbox{-}}}} R)^\perp$, then $\pi: (-,M) \longrightarrow F$ is called a special flat precover, where orthogonal is taken with respect to the functor ${\rm{Ext}}^1$. \begin{sremark} A flat resolution $$ 0 \longrightarrow (-,M_2) \stackrel{(-,d_2)} \longrightarrow (-,M_1) \stackrel{(-,d_1)} \longrightarrow (-,M_0) \stackrel{\varepsilon}\longrightarrow F \longrightarrow 0$$ of $F$ is called special, if both morphisms $(-,M_0) \stackrel{\varepsilon}\longrightarrow F$ and $(-,M_1) \stackrel{(-,d_1)} \longrightarrow \mathbb{K} er \varepsilon$ are special flat precovers. It is shown by Herzog \cite[Proposition 7]{H} that every functor in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ admits a special flat resolution of length at most 2. \end{sremark} To prepare the ground for our main result, we follow, step by step, Auslander's argument in Sections 2 and 3 of \cite{As1}. 
Since the techniques are similar, we just explain the outline. The details are then straightforward and can be found in \cite{As1}.\\ Consider the embedding $U : {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$. It induces a functor $$U_{\centerdot}: ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R) , {\rm{Mod\mbox{-}}} R) \longrightarrow ({\rm{Mod\mbox{-}}} R, {\rm{Mod\mbox{-}}} R).$$ Let $F $ be an object in $({\rm{Mod\mbox{-}}} R , {\rm{Mod\mbox{-}}} R)$. Then for a functor $G \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$, set $$ (U^{\centerdot}F)(G):= \mathbb{C} oker (F(M_1) \stackrel{F(d_1)} \longrightarrow F(M_0)),$$ where $$0 \longrightarrow (-,M_2) \stackrel{(-, d_2)}\longrightarrow (-, M_1) \stackrel{(-, d_1)} \longrightarrow (-, M_0) \longrightarrow G \longrightarrow 0$$ is a special flat resolution of $G$. Moreover, a map $g: G \longrightarrow G'$ in ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ can be lifted to a map of their special flat resolutions. This, in turn, induces a map $U^{\centerdot}F(g): U^{\centerdot}F(G) \longrightarrow U^{\centerdot}F(G')$. Known techniques in homological algebra guarantee that the definitions of $(U^{\centerdot}F)(G)$ and $(U^{\centerdot}F)(g)$ are independent of the choice of special flat resolutions of $G$ and $G'$. In fact $U^{\centerdot}F$ is a functor. It is straightforward to check that $(U^{\centerdot}, U_{\centerdot})$ is an adjoint pair and, similar to Proposition 2.1 of \cite{As1}, we have the following. We skip the details of the proof. \begin{sproposition}\lambdabel{leftaadjoint} Consider the adjoint pair $(U^{\centerdot}, U_{\centerdot})$. For a functor $F \in ({\rm{Mod\mbox{-}}} R, {\rm{Mod\mbox{-}}} R)$, $U_{\centerdot} U^{\centerdot} F=F$. Moreover, $U^{\centerdot}F$ is an exact functor if $F$ is so. \end{sproposition} Let $i: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}} R$ be the identity functor and consider the functor $$\nu=U^{\centerdot}i: {\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R) \longrightarrow {\rm{Mod\mbox{-}}} R.$$ It follows from the above properties that if $$ 0 \longrightarrow (-,M_2) \stackrel{(-,d_2)} \longrightarrow (-,M_1) \stackrel{(-,d_1)} \longrightarrow (-,M_0) \longrightarrow F \longrightarrow 0$$ is a special flat resolution of $F$ in ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$, then $\nu(F)= \mathbb{C} oker(M_1 \stackrel{d_1} \longrightarrow M_0)$. \sss Let ${\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$ be the full subcategory of ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ consisting of all functors $F$ such that $\nu(F)=0$. Note that since $i$ is an exact functor, $\nu$ is exact. So, ${\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$ is a Serre subcategory of ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$, see Sec. 3 of \cite{As1}. \begin{sremark} A functor $F \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ is called effaceable if $F(P)=0$, for every finitely generated projective $R$-module $P$, see \cite[p. 141]{G}. Also, by definition of the functor $U^{\centerdot}$, a functor $F \in {\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ belongs to ${\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$ if and only if $F(R)=0$, or equivalently $F(P)=0$, for every finitely generated projective $R$-module $P$. This fact identifies ${\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$ with the effaceable functors in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$. \end{sremark} Let $F \in {\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$.
If we consider a special flat resolution of $F$, an argument similar to that in \cite[pp. 203-204]{As1} can now be applied to prove the following result. Since this sequence plays a central role in the paper, we include a brief proof. \begin{sproposition}\lambdabel{Aus-ExtSeq} For each functor $F $ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ there is an exact sequence $$ 0 \longrightarrow F_0 \longrightarrow F \longrightarrow ( -, \nu(F)) \longrightarrow F_1 \longrightarrow 0$$ such that $F_0, F_1 \in {\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$. \end{sproposition} \begin{proof} Consider a special flat resolution $$ 0 \longrightarrow (-,M_2) \stackrel{(-,d_2)} \longrightarrow (-,M_1) \stackrel{(-,d_1)} \longrightarrow (-,M_0) \longrightarrow F \longrightarrow 0$$ of $F$. By definition $\nu(F)= \mathbb{C} oker (M_1 \stackrel{d_1} \longrightarrow M_0)$. If we set $M := \mathbb{C} oker (M_2 \stackrel{d_2} \longrightarrow M_1)$, then there are short exact sequences \[ 0 \longrightarrow M_2 \longrightarrow M_1 \longrightarrow M \longrightarrow 0 \ \ \text{and} \ \ 0 \longrightarrow M \longrightarrow M_0 \longrightarrow \nu(F) \longrightarrow 0.\] So there is the following commutative diagram \[ \xymatrix@C-0.5pc@R-0.5pc{ &&& 0 \ar[d] &0 \ar@{.>}[d] & \\ 0 \ar[r] & (-, M_2) \ar[r] \ar@{=}[d] & (-,M_1) \ar[r] \ar@{=}[d] & (-,M) \ar[r] \ar[d] & F_0 \ar@{.>}[d] \ar[r] & 0 \\ 0\ar[r] & (-,M_2) \ar[r] & (-, M_1) \ar[r] & (-,M_0)\ar[d] \ar[r] & F \ar[r] \ar@{.>}[dl] & 0 \\ &&& (-, \nu(F)) \ar[d] & & \\ &&& F_1 \ar[d] &&\\ &&& 0 && }\] In the above diagram $F_0$, resp. $F_1$, is a cokernel of $((-, M_1) \longrightarrow (-,M))$, resp. $((-,M_0) \longrightarrow (-,\nu(F)))$, and so belongs to ${\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$. So, by the Snake Lemma, there is a unique exact sequence, identified by the dashed arrows, $$0 \longrightarrow F_0 \longrightarrow F \longrightarrow (-, \nu(F)) \longrightarrow F_1 \longrightarrow 0$$ making the above diagram commutative. This completes the proof. \end{proof} \begin{sremark}\lambdabel{Hom&Ext} Let $F$ be a functor in ${\rm{Mod\mbox{-}}}d ({\rm{{mod\mbox{-}}}} R)$. Then, similar to Proposition 3.2 of \cite{As1}, we can deduce that ${\rm{Ext}}^i(F, (-,M))=0$, for $i=0,1$ and for all $M \in {\rm{Mod\mbox{-}}} R$. We do not include the proof, as it follows by a similar argument. \end{sremark} As a direct consequence of Proposition \ref{Aus-ExtSeq} and Remark \ref{Hom&Ext}, we have the following theorem. It can be proven using Proposition II.2 of \cite{Gab}, but by a completely different approach. \begin{stheorem}(Compare \cite[Theorem 2.3]{K})\lambdabel{Loc-Seq} There exists a localization sequence \[\xymatrix@C=0.5cm{ {\rm{Mod\mbox{-}}}d ({\rm{{mod\mbox{-}}}} R) \ar[rrr] &&& {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R) \ar[rrr]^{\nu} \ar@/^1.5pc/[lll] &&& {\rm{Mod\mbox{-}}} R \ar@/^1.5pc/[lll]^{U} }\] of abelian categories. In particular, ${\rm{Mod\mbox{-}}} R \simeq \frac{{\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)}{{\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)}$. \end{stheorem} \begin{proof} Let $F$ be a functor in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$. Then by Proposition \ref{Aus-ExtSeq} there is an exact sequence $$ 0 \longrightarrow F_0 \longrightarrow F \longrightarrow (-, \nu(F)) \longrightarrow F_1 \longrightarrow 0$$ with $F_0 , F_1 \in {\rm{Mod\mbox{-}}}d ({\rm{{mod\mbox{-}}}} R)$. In view of Remark \ref{Hom&Ext}, ${\rm{Ext}}^i (F_j, (-,M)) =0$, for $i,j \in \{0, 1\}$ and for every module $M \in {\rm{Mod\mbox{-}}} R$.
This fact, in view of the full faithfulness of the functor $U$, implies the following natural isomorphisms \[\begin{array}{ll} {\rm{Hom}}(F, (-,M)) & \cong {\rm{Hom}}((-, \nu(F)), (-,M))\\ & \cong {\rm{Hom}}_R(\nu(F), M). \end{array}\] Thus $\nu: {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R) \longrightarrow {\rm{Mod\mbox{-}}} R$ provides a left adjoint of $U: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$. Hence the existence of the desired localization sequence follows from Lemma 2.1 of \cite{K97}. \end{proof} \sss Let $\mathbb{K} ^*_{R \mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ denote the full subcategory of $\mathbb{K} ^*({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R))$ consisting of all complexes $(\mathbf{X},d^i)$ such that $$\mathbf{X}(R): \ \cdots \longrightarrow X^{i-1}(R) \stackrel{d^{i-1}(R)} \longrightarrow X^i(R) \stackrel{d^i(R)} \longrightarrow X^{i+1}(R) \longrightarrow \cdots$$ is an acyclic complex of abelian groups, where $* \in \{\text{blank}, -, \bb\}$. Note that if $\mathbf{X}$ is a complex in $\mathbb{K} ^*_{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$, then $\mathbf{X}(P)$ is acyclic for all $P \in {\rm{prj}\mbox{-}} R$. It can be easily checked that $\mathbb{K} ^*_{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ is a thick subcategory of $\mathbb{K} ^*({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R))$. So we can form the quotient category $$\mathbb{K} ^*({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))/ \mathbb{K} ^*_{R \mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)),$$ which we denote by $\mathbb{D} ^*_{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$.\\ In our next theorem we show that there is a triangle equivalence between $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ and $\mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. This fact has a short proof using a `ring with several objects' version of Corollary 3.6 of \cite{AHV}. There it is proved that for an artin algebra $\Lambda$ and a $\Lambda$-module $M \in {\rm{{mod\mbox{-}}}}\Lambda$, there is an equivalence of triangulated categories $\mathbb{D} ^{\bb}_M({\rm{Mod\mbox{-}}}\Lambda)\simeq \mathbb{D} ^{\bb}({\rm{Mod\mbox{-}}}{\rm{End}}_{\Lambda}(M))$. It can be generalized to functor categories without serious difficulties. So, in view of this fact, deriving the equivalence of Theorem \ref{Loc-Seq} implies the equivalence mentioned above. Here we present a constructive proof, introducing the equivalence map as we need its exact definition in our next results. To this end, we need the following lemma that provides a complex version of Proposition \ref{Aus-ExtSeq} and Remark \ref{Hom&Ext}. \begin{slemma}\lambdabel{ExSeqCom} \lambdabel{Hom0} $(i)$ Let $\mathbf{X} \in \mathbb{C} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. There exists an exact sequence $$ 0 \longrightarrow \mathbf{X}_0 \longrightarrow \mathbf{X} \longrightarrow (-, \nu(\mathbf{X})) \longrightarrow \mathbf{X}_1 \longrightarrow 0,$$ where $\mathbf{X}_0 $ and $\mathbf{X}_1$ are complexes over ${\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$ and $\nu(\mathbf{X})$ is a complex over ${\rm{Mod\mbox{-}}} R$ whose $i$-th degree is $\nu(X^i)$.\\ \noindent $(ii)$ Let ${\bf F} \in {\rm{Mod\mbox{-}}}d({\rm{{mod\mbox{-}}}} R)$. Then ${\rm{Ext}}^i({\bf F},(-, {\bf C}))=0$, for $i=0, 1$ and for every ${\bf C} \in \mathbb{C} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$.
\end{slemma} \begin{proof} $(i)$ Let $(\mathbf{X},d^i)$ be a complex over ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$. By Proposition \ref{Aus-ExtSeq}, for every $i \in \mathbb{Z} $, there is an exact sequence $$ 0 \longrightarrow X^i_0 \longrightarrow X^i \longrightarrow ( -, \nu(X^i)) \longrightarrow X^i_1 \longrightarrow 0,$$ such that $X^i_0$ and $X^i_1$ belong to ${\rm{Mod\mbox{-}}}d ({\rm{{mod\mbox{-}}}} R)$. In view of Remark \ref{Hom&Ext}, for $j=0, 1$, $${\rm{Ext}}^j_R(X_0^i, (-, \nu(X^{i+1})))=0={\rm{Ext}}^j_R(X_1^i, (-, \nu(X^{i+1}))),$$ for all $i \in \mathbb{Z} $. Thus, for every $i \in \mathbb{Z} $ there is a unique morphism $\overline{d^i}: \nu(X^i) \longrightarrow \nu(X^{i+1})$ making the following diagram commutative \[\xymatrix{ X^i \ar[r] \ar[d]^{d^i} & (-, \nu(X^i)) \ar[d]^{(-,\overline{d^i})} \\ X^{i+1} \ar[r] & (-, \nu(X^{i+1})).}\] Moreover, there exist unique morphisms $d_0^i: X_0^i \longrightarrow X_0^{i+1}$ and $d_1^i: X_1^i \longrightarrow X_1^{i+1}$ such that the following diagram \[\xymatrix{ 0 \ar[r] & X_0^i \ar[r] \ar[d]^{d_0^i} & X^i \ar[r] \ar[d]^{d^i} & (-, \nu(X^i)) \ar[r] \ar[d]^{(-,\overline{d^i})} & X_1^i \ar[r] \ar[d]^{d_1^i} & 0 \\ 0 \ar[r] & X_0^{i+1} \ar[r] & X^{i+1} \ar[r] & (-, \nu(X^{i+1})) \ar[r] & X_1^{i+1} \ar[r] & 0}\] is commutative. The uniqueness of $d^i_0$, $d^i_1$ and $\overline{d^i}$ yields the existence of complexes $$\mathbf{X}_0 : \cdots \longrightarrow X^{i-1}_0 \stackrel{d_0^{i-1}} \longrightarrow X^i_0 \stackrel{d^i_0} \longrightarrow X^{i+1}_0 \longrightarrow \cdots, \ \ \ \ \ \ \ $$ $$\mathbf{X}_1 : \cdots \longrightarrow X^{i-1}_1 \stackrel{d_1^{i-1}} \longrightarrow X^i_1 \stackrel{d^i_1} \longrightarrow X^{i+1}_1 \longrightarrow \cdots, \ \ \text{and}$$ $$\nu(\mathbf{X}) : \cdots \longrightarrow \nu(X^{i-1}) \stackrel{\overline{d^{i-1}}} \longrightarrow \nu(X^i) \stackrel{\overline{d^i}} \longrightarrow \nu(X^{i+1}) \longrightarrow \cdots$$ which fit into the following exact sequence of complexes $$ 0 \longrightarrow \mathbf{X}_0 \longrightarrow \mathbf{X} \longrightarrow (-, \nu(\mathbf{X})) \longrightarrow \mathbf{X}_1 \longrightarrow 0.$$ This is the desired exact sequence. $(ii)$ Note that $\mathbb{C} ({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R))$ is an abelian category with enough projectives \cite[Theorem 10.43]{Rot}. Moreover, by Theorem 10.42 of \cite{Rot}, projective complexes are split exact complexes of projectives. Since $U: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ is full and faithful, in view of \ref{ProjObj}, every projective complex $\mathbf{P}$ is isomorphic to a complex of the form $$ (-, {\bf M}): \ \ \cdots \longrightarrow (-, M^{i-1}) \longrightarrow (-,M^i) \longrightarrow (-,M^{i+1}) \longrightarrow \cdots, $$ where for all $i$, $M^i$ is a pure-projective $R$-module. Since $(-, {\bf C})$ is left exact, to prove the result, it is enough to show that ${\rm{Hom}}({\bf F}, (-, {\bf C}))=0$, see Proposition 3.2 of \cite{As1}. To see this, assume that $\tau$ is a map in ${\rm{Hom}}({\bf F}, (-, {\bf C}))$ and choose an epimorphism $\pi: (-, {\bf M}) \longrightarrow {\bf F}$ from a projective complex, as above. Then we have a map $\tau \circ \pi: (-, {\bf M}) \longrightarrow (-, {\bf C})$ of complexes over ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$. The fully faithful functor $U: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ can be naturally extended to the full and faithful functor $\mathbb{C} (U): \mathbb{C} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{C} ( {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$.
Thus, $\tau \circ \pi = (-, f)$, where $f: {\bf M} \longrightarrow {\bf C}$ is a chain map of complexes. Since ${\bf F}$ is a complex over ${\rm{Mod\mbox{-}}}d ({\rm{{mod\mbox{-}}}} R)$, $${\bf F}(R): \ \cdots \longrightarrow F^{i-1}(R) \longrightarrow F^i(R) \longrightarrow F^{i+1}(R) \longrightarrow \cdots$$ is a zero complex. Evaluating at $R$, we therefore get $f = \tau(R) \circ \pi(R) = 0$, so that $\tau \circ \pi = (-,f) = 0$; since $\pi$ is an epimorphism, $\tau$ vanishes in $\mathbb{C} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. \end{proof} \begin{stheorem}\lambdabel{Thm1} There exists an equivalence $$\mathbb{D} ({\rm{Mod\mbox{-}}} R) \simeq \mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$$ of triangulated categories, where, as in our setup, $R$ is a right coherent ring. \end{stheorem} \begin{proof} Define the functor $\eta: \mathbb{D} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ by the assignment $\eta(\mathbf{X})=(-,\mathbf{X})$. Also, $\eta$ maps every roof $\xymatrix{\mathbf{X} & \ar[l]_s\mathbf{Z} \ar[r]^f & \mathbf{Y} }$ in $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ to the roof $$\xymatrix{ (-,\mathbf{X}) & \ar[l]_{(-,s)} (-,\mathbf{Z}) \ar[r]^{(-,f)}& (-,\mathbf{Y}) }$$ in $\mathbb{D} _R({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. Observe that since ${\rm cone}(s)$ belongs to $\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} R)$, ${\rm cone}((-,s))$ belongs to $\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. We claim that $\eta$ is faithful, full, dense and also a triangle functor. Let $\xymatrix{\mathbf{X} & \ar[r]^f\mathbf{Z}\ar[l]_s & \mathbf{Y} }$ be a roof in $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ such that the induced roof $\xymatrix{(-, \mathbf{X}) & (-, \mathbf{Z}) \ar[l]_{(-,s)} \ar[r]^{(-,f)}& (-,\mathbf{Y}) }$ is zero in $\mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. So, there is a morphism $T: {\bf F} \longrightarrow (-,\mathbf{Z}) $ such that ${\rm cone}(T) \in \mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ and $ (-, f) \circ T =0$ in $\mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. In view of Lemma \ref{ExSeqCom} there exists an exact sequence \[\xymatrix{ 0 \ar[r] & {\bf F}_0 \ar[r]& {\bf F} \ar[r]^{W \ \ \ \ \ \ } & (-, \nu({\bf F})) \ar[r] & {\bf F}_1\ar[r] &0,}\] such that ${\bf F}_0(R)=0 ={\bf F}_1(R)$ and $\nu({\bf F})$ is a complex whose $i$-th degree is $\nu(F^i)$. Since $W(R): {\bf F}(R) \longrightarrow \nu({\bf F})$ is an isomorphism, there is a map $\nu({\bf F}) \stackrel{W(R)^{-1}} \longrightarrow {\bf F}(R) \stackrel{T(R)} \longrightarrow \mathbf{Z}$ with acyclic cone, such that $f \circ T(R) \circ W(R)^{-1}=0$ in $\mathbb{K} ({\rm{Mod\mbox{-}}} R)$. Hence, the roof $\xymatrix{\mathbf{X} & \mathbf{Z} \ar[l]_s\ar[r]^f & \mathbf{Y} }$ is zero in $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$. Thus, $\eta$ is faithful. To see that $\eta$ is full, let $$\xymatrix{(-,\mathbf{X}) & {\bf H} \ar[r]^f \ar[l]_s & (-, \mathbf{Y}) }$$ be a roof in $\mathbb{D} _R({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. By Lemma \ref{ExSeqCom}, there exists an exact sequence $$ 0 \longrightarrow {\bf H}_0 \longrightarrow {\bf H} \stackrel{\varphi}\longrightarrow (-,\nu({\bf H})) \longrightarrow {\bf H}_1 \longrightarrow 0,$$ where ${\bf H}_0$ and ${\bf H}_1$ are complexes over ${\rm{Mod\mbox{-}}}d ({\rm{{mod\mbox{-}}}} R)$ and $\nu({\bf H})$ is a complex whose $i$-th degree is $\nu(H^i)$.
By Lemma \ref{Hom0}, ${\rm{Ext}}^j({\bf H}_0 , (-, {\bf C})) =0 ={\rm{Ext}}^j({\bf H}_1, (-,{\bf C}))$ for $j=0, 1$ and every complex ${\bf C}$ over ${\rm{Mod\mbox{-}}} R$. So, by applying the Hom functors ${\rm{Hom}}(-, (-, \mathbf{X}))$ and ${\rm{Hom}} (-, (-,\mathbf{Y}))$ to the above exact sequence, respectively, we have isomorphisms $$ {\rm{Hom}}({\bf H}, (-, \mathbf{X})) \cong {\rm{Hom}}((-, \nu({\bf H})), (-, \mathbf{X})) \ \ \text{and}$$ $$ {\rm{Hom}}({\bf H}, (-, \mathbf{Y})) \cong {\rm{Hom}} ((-, \nu({\bf H})), (-,\mathbf{Y})).$$ Therefore, there are morphisms $\varphi_{\mathbf{X}}: (-, \nu({\bf H})) \longrightarrow (-, \mathbf{X})$ and $\varphi_{\mathbf{Y}}: (-, \nu({\bf H})) \longrightarrow (-,\mathbf{Y})$ such that $\varphi_{\mathbf{X}} \circ \varphi=s$ and $\varphi_{\mathbf{Y}}\circ \varphi=f$. Note that since ${\rm cone}(s) \in \mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ and $\varphi(R)$ is an isomorphism, ${\rm cone}(\varphi_{\mathbf{X}})$ belongs to $\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. Now, the commutative diagram \[\xymatrix@C-0.5pc@R-0.5pc{ & & {\bf H} \ar[dr]^{\varphi} \ar[dl]_{\rm id}& & \\ & {\bf H} \ar[dl]_s \ar[drrr]^{ f \ \ } && (-, \nu({\bf H}))\ar[dr]^{\varphi_{\mathbf{Y}}} \ar[dlll]_{\varphi_{\mathbf{X}}} & \\ (-, \mathbf{X}) & & & & (-,\mathbf{Y}) }\] implies that the roof $\xymatrix{ (-,\mathbf{X}) & {\bf H} \ar[r]^f \ar[l]_{\ \ \ s } & (-, \mathbf{Y}) }$ is equivalent to the roof $$\xymatrix{(-,\mathbf{X}) & (-, \nu({\bf H})) \ar[l]_{\varphi_{\mathbf{X}}} \ar[r]^{\varphi_{\mathbf{Y}}}& (-, \mathbf{Y}) } $$ in $\mathbb{D} _R({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. Moreover, let $\mathbf{X}$ be a complex in $\mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. Then the exact sequence $$0 \longrightarrow \mathbf{X}_0 \longrightarrow \mathbf{X} \longrightarrow (-, \nu(\mathbf{X})) \longrightarrow \mathbf{X}_1 \longrightarrow 0$$ of Lemma \ref{ExSeqCom} implies that $\mathbf{X}$ is isomorphic to $(-, \nu(\mathbf{X}))$ in $\mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ and so $\eta$ is dense. Finally, it is obvious that $\eta$ is a triangle functor. \end{proof} Set $\mathbb{D} _0^* ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)):= \frac{\mathbb{K} ^*_{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} ^*_{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}$ and $\mathbb{D} _0^* ({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R)):= \frac{\mathbb{K} ^*_{R \mbox{-} {\rm{ac}}}({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} ^*_{{\rm{ac}}}({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}$, where $* \in \{ {\rm blank}, -, \bb\}$. It follows from \cite[Corollaire 4-3]{V2} that there are the following equivalences of triangulated categories $$ \mathbb{D} ^*_{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \simeq \frac{\mathbb{D} ^*({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} _0^* ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))},$$ $$ \mathbb{D} ^*_{R}({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R)) \simeq \frac{\mathbb{D} ^*({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} _0^* ({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}.$$ The following corollary is a derived version of Auslander's formula for ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$.
\begin{scorollary} There is a commutative diagram \[ \xymatrix{ \mathbb{D} ({\rm{Mod\mbox{-}}} R) \ar[rr]^{\stackrel{\eta}\sim} && \frac{\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} _0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))} \\ \mathbb{D} ^-({\rm{Mod\mbox{-}}} R)\ar@{^(->}[u] \ar[rr]^{\sim} && \frac{\mathbb{D} ^-({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} ^-_0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))} \ar@{^(->}[u]\\ \mathbb{D} ^{\bb}({\rm{Mod\mbox{-}}} R) \ar[rr]^{\sim} \ar@{^(->}[u] && \frac{\mathbb{D} ^{\bb}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} ^{\bb}_0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))} \ar@{^(->}[u] }\] of triangulated categories whose rows are triangle equivalences. \end{scorollary} \begin{proof} The equivalence of the first row proved in Theorem \ref{Thm1}, while the second and third equivalences follow directly from the definition of the functor $\eta: \mathbb{D} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. \end{proof} The same method as in the proof of Lemma \ref{ExSeqCom} and Theorem \ref{Thm1} can be applied to get the following result. The equivalence of the bottom row has already been proved by Krause \cite[Corollary 3.2]{K}, but using a different approach. \begin{sproposition}\lambdabel{Ext-Krause} There is a commutative diagram \[ \xymatrix{ \mathbb{D} ({\rm{{mod\mbox{-}}}} R) \ar[rr]^{\sim} && \frac{\mathbb{D} ({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} _0({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))} \\ \mathbb{D} ^-({\rm{{mod\mbox{-}}}} R)\ar@{^(->}[u] \ar[rr]^{\sim} && \frac{\mathbb{D} ^-({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} ^-_0({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}\ar@{^(->}[u]\\ \mathbb{D} ^{\bb}({\rm{{mod\mbox{-}}}} R) \ar[rr]^{\sim} \ar@{^(->}[u] && \frac{\mathbb{D} ^{\bb}({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{D} ^{\bb}_0({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R))} \ar@{^(->}[u] }\] of triangulated categories whose rows are triangle equivalences. \end{sproposition} \subsection{Covariant functors}\lambdabel{Covariant functors} Gruson and Jensen \cite{GJ} characterized the injective objects of ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ as the functors isomorphic to those of the form $- \otimes_R M$, where $M$ is a pure-injective $R^{{\rm{op}}}$-module. Moreover, it is known \cite[Theorem B.16]{JL} that a covariant functor $F$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ is fp-injective if and only if $F \cong -\otimes_R M$, for some (left) $R$-module $M$. Recall that a functor $F$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ is fp-injective, if ${\rm{Ext}}^1_R(G,F)=0$, for all finitely presented functors $G$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$. Let ${\rm fp}\mbox{-} \mathbb{C} I(R)^{{\rm{op}}}$ denote the full subcategory of ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ consisting of fp-injective objects. In fact, ${\rm fp}\mbox{-} \mathbb{C} I(R)^{{\rm{op}}} = ({\rm{{mod\mbox{-}}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}})^\perp$. Let $F$ be an object in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$. Then there is an injective copresentation $$0 \longrightarrow F \longrightarrow - \otimes_R M \stackrel{\varphi}\longrightarrow -\otimes_R N$$ of $F$. 
Since the functor $v: {\rm{Mod\mbox{-}}} R^{{\rm{op}}} \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ defined by $v(M)= -\otimes_R M$ is full and faithful \cite[Theorem B.16]{JL}, there is a morphism $f: M \longrightarrow N$ of left $R$-modules such that $\varphi=- \otimes_R f$. If we set $L:= {\rm{Coker}} f$, then the above injective copresentation of $F$ can be completed to the following coresolution of $F$ by fp-injective objects $$ 0 \longrightarrow F \longrightarrow -\otimes_R M \stackrel{-\otimes f} \longrightarrow - \otimes_R N \longrightarrow - \otimes_R L \longrightarrow 0.$$ Hence, every functor $F$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ admits an fp-injective coresolution of length at most 2. On the other hand, we have the cotorsion theory $({}^\perp {\rm fp}\mbox{-} \mathbb{C} I(R)^{{\rm{op}}} , {\rm fp}\mbox{-} \mathbb{C} I(R)^{{\rm{op}}})$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$. Since ${\rm{{mod\mbox{-}}}} R$ is an essentially small category, this cotorsion theory is cogenerated by a set. So it is complete, and hence for every functor $F \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$, there exists a short exact sequence $$ 0 \longrightarrow F \longrightarrow - \otimes_R M \longrightarrow C \longrightarrow 0,$$ where $M \in {\rm{Mod\mbox{-}}} R^{{\rm{op}}}$ and $C \in {}^\perp {\rm fp}\mbox{-} \mathbb{C} I(R)^{{\rm{op}}}$. An fp-injective coresolution $$ 0 \longrightarrow F \longrightarrow -\otimes_R M \stackrel{-\otimes f}\longrightarrow -\otimes_R N \longrightarrow -\otimes_R L \longrightarrow 0 $$ of $F$ is called special if the image of $- \otimes f$ and the functor $-\otimes_R L$ belong to ${}^\perp {\rm fp}\mbox{-} \mathbb{C} I(R)^{{\rm{op}}}$. It follows from the above argument that each functor $F \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ admits a special fp-injective coresolution of length at most 2. Using this fact, in this subsection we provide a derived version of Auslander's formula for the category ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$. The argument is similar, or rather dual, to the one we applied in the proof of Theorem \ref{Loc-Seq}. In fact, one should follow the steps below. We only give a sketch of the proof of the first step. \\ \noindent {\bf Step $1$}. The functor $$v_{\centerdot}: ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}, {\rm{Mod\mbox{-}}} R^{{\rm{op}}}) \longrightarrow ({\rm{Mod\mbox{-}}} R^{{\rm{op}}}, {\rm{Mod\mbox{-}}} R^{{\rm{op}}}),$$ that is induced by the embedding $v: {\rm{Mod\mbox{-}}} R^{{\rm{op}}} \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$, admits a right adjoint $v^{\centerdot}$ such that for all functors $F \in ({\rm{Mod\mbox{-}}} R^{{\rm{op}}}, {\rm{Mod\mbox{-}}} R^{{\rm{op}}})$, $v_{\centerdot}v^{\centerdot}F=F$. Moreover, if $F$ is exact, then $v^{\centerdot}F$ is an exact functor. Set $\vartheta:=v^{\centerdot}i: {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \longrightarrow {\rm{Mod\mbox{-}}} R^{{\rm{op}}}$, where $i: {\rm{Mod\mbox{-}}} R^{{\rm{op}}} \longrightarrow {\rm{Mod\mbox{-}}} R^{{\rm{op}}}$ is the identity functor.\\ \noindent {\bf Sketch of the proof of Step $1$.} Let $F:{\rm{Mod\mbox{-}}} R^{{\rm{op}}} \longrightarrow {\rm{Mod\mbox{-}}} R^{{\rm{op}}}$ be a covariant functor.
Define $$ v^{\centerdot}F(T):= {\rm{Ker}} (F(M^0) \stackrel{F(d^0)} \longrightarrow F(M^1)),$$ where $0 \longrightarrow T \longrightarrow - \otimes_R M^0 \stackrel{-\otimes d^0}\longrightarrow -\otimes_RM^1 \stackrel{-\otimes d^1}\longrightarrow -\otimes_RM^2 \longrightarrow 0 $ is a special fp-injective coresolution of $T$. Also, if there is a morphism $f: T \longrightarrow T'$ of functors, it can be lifted to a morphism of their special fp-injective coresolutions and so we have a morphism $v^{\centerdot}F(f): v^{\centerdot}F(T) \longrightarrow v^{\centerdot}F(T')$. Note that for $T = -\otimes_R M$ one may take the special coresolution $0 \longrightarrow T \stackrel{\rm id}\longrightarrow -\otimes_R M \longrightarrow 0 \longrightarrow 0 \longrightarrow 0$, so that $v^{\centerdot}F(-\otimes_R M) \cong F(M)$; this explains the equality $v_{\centerdot}v^{\centerdot}F=F$. Now, one can easily see that $v^\centerdot$ is the right adjoint of $v_\centerdot$.\\ \noindent {\bf Step $2$}. Let ${\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ be the full subcategory of ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ formed by all functors $F$ with the property that $\vartheta(F)=0$. Since $\vartheta$ is an exact functor, ${\rm Mod}^0 \mbox{-}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ is a Serre subcategory of ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$. Note that, by definition of the functor $v^{\centerdot}$, a functor $F \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ belongs to ${\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ if and only if $F(R)=0$.\\ \noindent {\bf Step $3$}. For each functor $F$ in ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ there is a unique exact sequence $$0 \longrightarrow F_0 \longrightarrow - \otimes_R \vartheta(F) \longrightarrow F \longrightarrow F_1 \longrightarrow 0$$ such that $F_0, F_1 \in {\rm Mod}^0 \mbox{-}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$.\\ \noindent {\bf Step $4$}. ${\rm{Hom}}(- \otimes_R M, F)=0$, for all $M \in {\rm{Mod\mbox{-}}} R^{{\rm{op}}}$ and $F \in {\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$.\\ Based on the above facts, and similarly to the proof of Theorem \ref{Loc-Seq}, one deduces that $(v,\vartheta)$ is an adjoint pair. So, we have the following colocalization sequence of abelian categories \[ \xymatrix@C=0.5cm@R=0.5cm{ {\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \ar[rrr] &&& {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \ar[rrr]^{\vartheta} \ar@/_1.5pc/[lll] &&& {\rm{Mod\mbox{-}}} R^{{\rm{op}}}. \ar@/_1.5pc/[lll]_{v} }\] This, in turn, implies that there is an equivalence ${\rm{Mod\mbox{-}}} R^{{\rm{op}}}\simeq \frac{{\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}{{\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}$ of abelian categories.\\ On the other hand, since ${\rm Mod}^0 \mbox{-}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ is closed under direct sums, by \cite[Theorem 15.11]{F}, ${\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}$ is localizing, that is, the quotient functor $q: {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \longrightarrow \frac{{\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}{{\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}$ admits a right adjoint. Thus there exists the following localization sequence of abelian categories \[ \xymatrix@C=0.5cm@R=0.5cm{ {\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \ar[rrr] &&& {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \ar[rrr] \ar@/^1.5pc/[lll] &&& \frac{{\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}{{\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}. \ar@/^1.5pc/[lll] }\] So we have proved the following result. \begin{sproposition}\lambdabel{Prop-cov} Let $R$ be a right coherent ring.
Then there is a recollement \[ \xymatrix@C=0.5cm@R=0.5cm{ {\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \ar[rrr] &&& {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}} \ar[rrr]^{\vartheta} \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] &&& {\rm{Mod\mbox{-}}} R^{{\rm{op}}} \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll]_{v} }\] of abelian categories. In particular, ${\rm{Mod\mbox{-}}} R^{{\rm{op}}}\simeq \frac{{\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}{{\rm Mod}^0 \mbox{-} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}}}$. \end{sproposition} Set $\mathbb{D} ^0({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}})$ to be the quotient category $\frac{\mathbb{K} _{R\mbox{-} \rm ac}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}})}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}})}$. We also may follow the same argument, as in the case for contravariant functors and prove the following result. So we do not include a proof. \begin{stheorem} Let $R$ be a right coherent ring. Then there exists the following equivalence $$ \mathbb{D} ({\rm{Mod\mbox{-}}} R^{{\rm{op}}}) \simeq \frac{\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}})}{\mathbb{D} ^0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)^{{\rm{op}}})}$$ of triangulated categories. \end{stheorem} \section{Recollements involving $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$}\lambdabel{Section 4} Let $R$ be a right coherent ring. As applications of our results in Section \ref{Section 3}, here we provide recollements of homotopy category of pure-projective and homotopy category of pure-injective $R$-modules. These recollements are mixing together the pure exact structure with the usual exact structure. \s A complex $\mathbf{X} \in \mathbb{K} ({\rm{Mod\mbox{-}}} R)$ is called pure-exact if for every module $M \in {\rm{{mod\mbox{-}}}} R$, the induced complex ${\rm{Hom}}(M, \mathbf{X})$ is exact. Let $\mathbb{K} _{\rm pac}({\rm{Mod\mbox{-}}} R)$ denote the full subcategory of $\mathbb{K} ({\rm{Mod\mbox{-}}} R)$ formed by all pure-exact complexes. The pure derived category $\mathbb{D} _{\rm pur}({\rm{Mod\mbox{-}}} R)$, is the derived category with respect to the pure exact structure and so is the Verdier quotient $\mathbb{K} ({\rm{Mod\mbox{-}}} R) / \mathbb{K} _{\rm pac}({\rm{Mod\mbox{-}}} R)$. In \cite{K12}, Krause studied this category and proved that $\mathbb{D} _{\rm pur}({\rm{Mod\mbox{-}}} R)$ is compactly generated with $\mathbb{D} _{\rm pur}({\rm{Mod\mbox{-}}} R)^{\rm c} \simeq \mathbb{K} ^{\bb}({\rm{{mod\mbox{-}}}} R)$. Moreover, he \cite[Corollary 6]{K12} proved that for a ring $R$, there exists a triangle equivalence $$\mathbb{D} _{\rm pur}({\rm{Mod\mbox{-}}} R) \simeq \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)).$$ \begin{remark} \lambdabel{Cot-F} \begin{itemize} \item [$(i)$] A functor $C \in {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ is called cotorsion if ${\rm{Ext}}^1(F,C)=0$, for all flat functors $F \in \mathbb{C} F({\rm{{mod\mbox{-}}}} R)$. It is proved in \cite[Theorem 4]{H} that a flat functor $(-,M)$ in ${\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)$ is cotorsion if and only if $M$ is a pure-injective module. 
Hence, the fully faithful functor $U: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ induces an equivalence ${\rm P}{\lan I \ran}nj R \simeq {\rm Cot}\mbox{-} \mathbb{C} F({\rm{{mod\mbox{-}}}} R)$, where ${\rm Cot\mbox{-}\mathbb{C} F}({\rm{{mod\mbox{-}}}} R)$ denotes the full subcategory of $\mathbb{C} F({\rm{{mod\mbox{-}}}} R)$ consisting of all cotorsion-flat functors. Moreover, the functor $U$ can be extended to the full and faithful functor $\mathbb{K} (U): \mathbb{K} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ of triangulated categories. The above argument implies the following equivalence of triangulated categories $$\mathbb{K} ({\rm P}{\lan I \ran}nj R) \stackrel{\stackrel{\mathbb{K} (U)}\sim} \longrightarrow \mathbb{K} ({\rm Cot}\mbox{-} \mathbb{C} F({\rm{{mod\mbox{-}}}} R)).$$ \item [$(ii)$] As it is mentioned in \ref{ProjObj}, a functor $P$ in ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ is projective if and only if $P \cong (-,M)$, for some pure-projective $R$-module $M$. So, there is an equivalence ${\rm P}{\rm{Prj}\mbox{-}} R \simeq \mathbb{C} P({\rm{{mod\mbox{-}}}} R)$ induced by the functor $U$. Furthermore, the full and faithful functor $\mathbb{K} (U): \mathbb{K} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ restricts to an equivalence $$ \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R) \simeq \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R))$$ of triangulated categories. \item [(iii)] Let $\mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$ be the full triangulated subcategory of $\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$ consisting of acyclic complexes of flat functors. We have a triangle equivalence $\mathbb{K} _{{\rm pac}}({\rm{Mod\mbox{-}}} R) \simeq \mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$ via the functor $\mathbb{K} (U)$. \end{itemize} \end{remark} \begin{lemma}\lambdabel{(P,F)} The pair $$(\mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R)), \mathbb{K} _{\rm ac}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R)))$$ is a stable $t$-structure in $\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$. In particular, there is an equivalence $$ \frac {\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{\rm ac}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}\simeq \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R))$$ of triangulated categories. \end{lemma} \begin{proof} In view of Theorem 5.4 of \cite{St1}, there exists the complete cotorsion theory $$(\mathbb{C} ({\rm P}{\rm{Prj}\mbox{-}} R), \mathbb{C} _{{\rm pac}}({\rm{Mod\mbox{-}}} R))$$ in $\mathbb{C} ({\rm{Mod\mbox{-}}} R).$ Hence, by \cite[Theorem 3.5]{BEIJR}, the inclusion functor $\iota: \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R) \longrightarrow \mathbb{K} ({\rm{Mod\mbox{-}}} R)$ has a right adjoint $\iota_*: \mathbb{K} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R)$, which is defined as follows. Since the above cotorsion theory is complete, for each complex $\mathbf{X} \in \mathbb{K} ({\rm{Mod\mbox{-}}} R)$, there is a short exact sequence $$ 0\longrightarrow {\bf D} \longrightarrow {\bf C} \longrightarrow \mathbf{X} \longrightarrow 0$$ with ${\bf D} \in \mathbb{C} _{{\rm pac}}({\rm{Mod\mbox{-}}} R)$ and ${\bf C} \in \mathbb{C} ({\rm P}{\rm{Prj}\mbox{-}} R)$. Then, $\iota_*(\mathbf{X})$ is defined to be the complex ${\bf C}$. 
It follows directly from definition that the kernel of $\iota_*$ is the homotopy category $\mathbb{K} _{{\rm pac}}({\rm{Mod\mbox{-}}} R)$. So, Proposition \ref{Miyachi2} (iii) yields the stable $t$-structure $(\mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R), \mathbb{K} _{{\rm pac}}({\rm{Mod\mbox{-}}} R))$ in $\mathbb{K} ({\rm{Mod\mbox{-}}} R)$. Therefore, we have the stable $t$-structure $$(\mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R)) , \mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R)))$$ in $\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$, see Remark \ref{Cot-F}(ii)-(iii). Let $\beta: \mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R))$ be a triangle functor that makes the following diagram commutative \[\xymatrix{ \mathbb{K} ({\rm{Mod\mbox{-}}} R) \ar[r]^{\iota_*} \ar[d]^{\wr}_{\mathbb{K} (U)} & \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R)\ar[d]_{\wr}^{\mathbb{K} (U)} \\ \mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R)) \ar[r]^\beta & \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R)).}\] It can be easily checked, using the adjoint pair $(\iota , \iota_*)$, that $\beta$ is the right adjoint of the inclusion functor $\mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$. Now, it follows from Proposition \ref{Miyachi2}(i) that the functor $\beta$ induces an equivalence $$ \frac {\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{\rm ac}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}\simeq \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R))$$ of triangulated categories. \end{proof} We also need the following parallel result. \begin{lemma}\lambdabel{(F,C)} The pair $$(\mathbb{K} _{\rm ac}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R)), \mathbb{K} ({\rm Cot\mbox{-}\mathbb{C} F}({\rm{{mod\mbox{-}}}} R)))$$ is a stable $t$-structure in $\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$. In particular, there is an equivalence $$ \frac{\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{\rm ac}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))} \simeq \mathbb{K} (\mathbb{C} CF({\rm{{mod\mbox{-}}}} R)).$$ of triangulated categories. \end{lemma} \begin{proof} The proof is similar to the proof of the above lemma. Just note that by Theorem 5.4 of \cite{St1}, we have the following complete cotorsion theory $$(\mathbb{C} _{{\rm pac}}({\rm{Mod\mbox{-}}} R), \mathbb{C} ({\rm P}{\lan I \ran}nj R))$$ in $\mathbb{C} ({\rm{Mod\mbox{-}}} R)$. So, \cite[Theorem 3.5]{BEIJR} comes to play and implies that the inclusion functor $\iota: \mathbb{K} ({\rm P}{\lan I \ran}nj R) \longrightarrow \mathbb{K} ({\rm{Mod\mbox{-}}} R)$ possess a left adjoint $\iota^*: \mathbb{K} ({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{K} ({\rm P}{\lan I \ran}nj R)$. The rest of the proof is similar. so we leave it as an easy exercise. \end{proof} Next proposition follows from Corollary 5.8 of \cite{St1}. The feature of the proof presented here is that we explicitly discuss the structure of the equivalences and will apply this structure to the forthcoming results. We preface the proposition with a remark. \begin{remark} In view of \cite[Theorem 4.2]{St2}, there exists the complete cotorsion pair $$(\mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R)), \mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))^\perp)$$ in $\mathbb{C} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. 
Hence, by Theorem 3.5 of \cite{BEIJR}, there is a right adjoint $$\psi: \mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$$ of the inclusion functor $\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. The functor $\psi$ is defined as follows. Let $\mathbf{X}$ be a complex in $\mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. Then there is a short exact sequence $$ 0 \rightarrow {\bf C} \rightarrow {\bf F} \rightarrow \mathbf{X} \rightarrow 0$$ where ${\bf F} \in \mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$ and ${\bf C} \in \mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))^\perp$. Set $\psi(\mathbf{X})= {\bf F}$. It follows from definition that $\mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))^{\perp} \subseteq \mathbb{C} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$. Let $\mathbf{X} \in \mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ be an acyclic complex and consider the corresponding short exact sequence $ 0 \rightarrow {\bf C} \rightarrow {\bf F} \rightarrow \mathbf{X} \rightarrow 0$, with ${\bf F} \in \mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$ and ${\bf C} \in \mathbb{C} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))^\perp$. Since ${\bf C}$ is acyclic, ${\bf F}$ is acyclic as well. So, $\psi$ maps every acyclic complex in $\mathbb{K} ({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R))$ to an acyclic, and hence pure-exact, complex in $\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$. Therefore, $\psi$ induces a triangle functor $$\bar{\psi}: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \longrightarrow \frac{\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}.$$ Moreover, the fully faithful functor $U: {\rm{Mod\mbox{-}}} R \longrightarrow {\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ yields the following equivalence of triangulated categories $$\chi: \mathbb{D} _{\rm pur}({\rm{Mod\mbox{-}}} R) \longrightarrow \frac{\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}.$$ It is proved in \cite[Corollary 4.8]{K12} that the functor $U$ induces an equivalence $$\eta: \mathbb{D} _{\rm pur}({\rm{Mod\mbox{-}}} R) \longrightarrow \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$$ of triangulated categories. It can be easily checked that $\chi^{-1} \circ \bar{\psi}$ is the quasi-inverse of $\eta$. So $\bar{\psi}$ is an equivalence of triangulated categories. \end{remark} \begin{proposition}\lambdabel{isos} Let $R$ be a right coherent ring. 
Then there are the following triangle equivalences \begin{itemize} \item [$(i)$] $\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \stackrel{\Psi} \longrightarrow \mathbb{K} ({\rm P}{\lan I \ran}nj R),$ \item [$(ii)$] $\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \stackrel{\Phi}\longrightarrow \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R).$ \end{itemize} \end{proposition} \begin{proof} $(i)$ By Lemma \ref{(F,C)} there is an equivalence $$ \xi : \frac{\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{\rm ac}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))} \longrightarrow \mathbb{K} (\mathbb{C} CF({\rm{{mod\mbox{-}}}} R))$$ given by $\xi(\mathbf{X}) = {\bf C}$, where ${\bf C}$ fits into a short exact sequence $$0 \rightarrow \mathbf{X} \rightarrow {\bf C} \rightarrow {\bf F} \rightarrow 0$$ with ${\bf F} \in \mathbb{C} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))$. Combine the above equivalence together with the equivalence $$\mathbb{K} (U)^{-1}: \mathbb{K} (\mathbb{C} CF({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R)),$$ of part $(i)$ of Remark \ref{Cot-F}, we gain the following triangle equivalence $$ \Psi: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \stackrel{\bar{\psi}}\longrightarrow \frac{\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))} \stackrel{\xi}\longrightarrow \mathbb{K} (\mathbb{C} CF({\rm{{mod\mbox{-}}}} R)) \stackrel{\mathbb{K} (U)^{-1}} \longrightarrow \mathbb{K} ({\rm P}{\lan I \ran}nj R),$$ where $\bar{\psi}$ is the equivalence introduced in the above remark. $(ii)$ This is similar to the proof of part $(i)$. One should apply Lemma \ref{(P,F)}, equivalence of Part $(ii)$ of Remark \ref{Cot-F}, and the above remark to get the following sequence of the equivalences $$\Phi: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \stackrel{\bar{\psi}} \longrightarrow \frac{\mathbb{K} (\mathbb{C} F({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}(\mathbb{C} F({\rm{{mod\mbox{-}}}} R))} \stackrel{\beta} \longrightarrow \mathbb{K} (\mathbb{C} P({\rm{{mod\mbox{-}}}} R)) \stackrel{\mathbb{K} (U)^{-1}}\longrightarrow \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R)$$ of triangulated categories. \end{proof} \begin{theorem}\lambdabel{RecDer} Let $R$ be a right coherent ring. Then the following statements hold true. \begin{itemize} \item [$(i)$] The equivalence $\Psi: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} ({\rm P}{\lan I \ran}nj R)$ induces the following commutative diagram of recollements \[ \xymatrix@C=0.3cm@R=0.4cm{\mathbb{D} _0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \ar[dd]^{\Psi|}\ar[rrr] &&& \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \ar[rrr] \ar[dd]^{\Psi} \ar@/_1.5pc/[lll] \ar@/^1.5pc/[lll] &&& \mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \ar[dd] \ar@/_1.5pc/[lll] \ar@/^1.5pc/[lll] \\ \\ \mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)\ar[rrr] &&& \mathbb{K} ({\rm P}{\lan I \ran}nj R) \ar[rrr] \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] &&& \frac{\mathbb{K} ({\rm P}{\lan I \ran}nj R)}{\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)}, \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] }\] \\whose vertical functors are triangle equivalences. 
In particular, there is a recollement \[ \xymatrix@C=0.5cm@R=0.5cm{ \mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)\ar[rrr] &&& \mathbb{K} ({\rm P}{\lan I \ran}nj R) \ar[rrr] \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] &&& \mathbb{D} ({\rm{Mod\mbox{-}}} R), \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] }\] \\of triangulated categories. \item [$(ii)$] The equivalence $\Phi: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R)$ induces the following commutative diagram of recollements \[ \xymatrix@C=0.3cm@R=0.4cm{\mathbb{D} _0({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \ar[dd]^{\Phi|}\ar[rrr] &&& \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \ar[rrr] \ar[dd]^{\Phi} \ar@/_1.5pc/[lll] \ar@/^1.5pc/[lll] &&& \mathbb{D} _{R}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \ar[dd] \ar@/_1.5pc/[lll] \ar@/^1.5pc/[lll] \\ \\ \mathbb{K} _{{\rm{ac}}}({\rm P}{\rm{Prj}\mbox{-}} R)\ar[rrr] &&& \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R) \ar[rrr] \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] &&& \frac{\mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R)}{\mathbb{K} _{{\rm{ac}}}({\rm P}{\rm{Prj}\mbox{-}} R)}, \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] }\] \\whose vertical functors are triangle equivalences. In particular, there is a recollement \[ \xymatrix@C=0.5cm@R=0.5cm{ \mathbb{K} _{{\rm{ac}}}({\rm P}{\rm{Prj}\mbox{-}} R)\ar[rrr] &&& \mathbb{K} ({\rm P}{\rm{Prj}\mbox{-}} R) \ar[rrr] \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] &&& \mathbb{D} ({\rm{Mod\mbox{-}}} R), \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] }\] \\of triangulated categories. \end{itemize} \end{theorem} \begin{proof} There exist stable $t$-structures $$(Q({}^\perp \mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))), \frac{\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}) \ \text{and}$$ $$ (\frac{\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}, Q(\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))^\perp) ) \ \ \ \ $$ in $\frac{\mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}$, where $Q: \mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \longrightarrow \frac{\mathbb{K} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}$ is the canonical functor. Set $ \ \mathbb{C} U:= \Psi(Q({}^\perp \mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))))$, $\ \ \mathbb{C} V:= \Psi( \frac{\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}) \ $ \ and \ \ $ \ \ \ \mathbb{C} W:= \Psi (Q(\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))^\perp))$. Since $\Psi$ is an equivalence, we have stable $t$-structures $(\mathbb{C} U, \mathbb{C} V)$ and $(\mathbb{C} V, \mathbb{C} W)$ in $\mathbb{K} ({\rm P}{\lan I \ran}nj R)$. By definition, the functor $\Psi$ sends every complex $\mathbf{X}$ with the property that $\mathbf{X}(R)$ is acyclic to an acyclic complex of pure-injective $R$-modules. 
So, the equivalence $$\Psi: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))\longrightarrow \mathbb{K} ({\rm P}{\lan I \ran}nj R)$$ induces an equivalence $\Psi: \frac{\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))} \longrightarrow \mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)$ and so $$\Psi(\frac{\mathbb{K} _{R\mbox{-} {\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}{\mathbb{K} _{{\rm{ac}}}({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))}) = \mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R).$$ Now, by Corollary 1.13 of \cite{IKM}, we have the desired commutative diagram of recollements. For the second part, note that by Theorem \ref{Thm1}, $\mathbb{D} _{R}({\rm{Mod\mbox{-}}}({\rm{{mod\mbox{-}}}} R)) \simeq \mathbb{D} ({\rm{Mod\mbox{-}}} R)$ as triangulated categories. Thus, $\frac{\mathbb{K} ({\rm P}{\lan I \ran}nj R)}{\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)} \simeq \mathbb{D} ({\rm{Mod\mbox{-}}} R)$ and we have the desired recollement. The same argument works to prove $(ii)$. \end{proof} As a direct consequence of the above theorem, we have the following results. \begin{corollary} For a right coherent ring $R$, there is an equivalence $$\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R) \simeq \mathbb{K} _{{\rm{ac}}}({\rm P}{\rm{Prj}\mbox{-}} R)$$ of triangulated categories. \end{corollary} We need the following lemma for the proof of the next result. \begin{lemma}\lambdabel{Murfet}\cite[Corollary 2.10]{Mu} Suppose that there is the following recollement of triangulated categories, with $\mathbb{C} T$ compactly generated \[\xymatrix{\mathbb{C} T'\ar[rr] && \mathbb{C} T \ar[rr] \ar@/^1pc/[ll]\ar@/_1pc/[ll]&& \mathbb{C} T'' \ar@/^1pc/[ll] \ar@/_1pc/[ll] }\] Then $\mathbb{C} T'$ is compactly generated, and if $\mathbb{C} T''$ is also compactly generated, then there is a triangle equivalence up to direct summands $\frac{\mathbb{C} T^{\rm c}}{\mathbb{C} T''^{\rm c}} \stackrel{\sim }\longrightarrow \mathbb{C} T'^{\rm c}$. \end{lemma} \begin{corollary}\lambdabel{ComObj} Let $R$ be a right coherent ring. Then the homotopy category $\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)$ is compactly generated and there exists the following triangle equivalence up to direct summands \[ \frac{\mathbb{K} ^{\bb}({\rm{{mod\mbox{-}}}} R)}{\mathbb{K} ^{\bb}({\rm{prj}\mbox{-}} R)}\stackrel{\sim} \longrightarrow \mathbb{K} ^{\rm c}_{{\rm{ac}}}({\rm P}{\lan I \ran}nj R).\] \end{corollary} \begin{proof} By Proposition \ref{isos}, there is an equivalence $\Psi: \mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)) \longrightarrow \mathbb{K} ({\rm P}{\lan I \ran}nj R)$. View ${\rm{{mod\mbox{-}}}} R$ as a ring with several objects. The same argument as in the ring case implies that $\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ is compactly generated and $$\mathbb{D} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))^{\rm c} \simeq \mathbb{K} ^{\bb}({\rm{prj}\mbox{-}} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))),$$ where ${\rm{prj}\mbox{-}} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))$ denotes the full subcategory of ${\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R)$ consisting of finitely generated projective functors. Also, the Yoneda functor yields the equivalence $$\mathbb{K} ^{\bb}({\rm{prj}\mbox{-}} ({\rm{Mod\mbox{-}}} ({\rm{{mod\mbox{-}}}} R))) \simeq \mathbb{K} ^{\bb}({\rm{{mod\mbox{-}}}} R)$$ of triangulated categories.
Since $\Psi$ preserves direct sums, $\mathbb{K} ({\rm P}{\lan I \ran}nj R)$ is also compactly generated and $\mathbb{K} ^{\rm c}({\rm P}{\lan I \ran}nj R) \simeq \mathbb{K} ^{\bb}({\rm{{mod\mbox{-}}}} R)$. Moreover, Theorem \ref{RecDer} gives the following recollement of triangulated categories \[ \xymatrix@C=0.5cm@R=0.5cm{ \mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)\ar[rrr] &&& \mathbb{K} ({\rm P}{\lan I \ran}nj R) \ar[rrr] \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] &&& \mathbb{D} ({\rm{Mod\mbox{-}}} R). \ar@/^1.5pc/[lll] \ar@/_1.5pc/[lll] }\] It is known that $\mathbb{D} ({\rm{Mod\mbox{-}}} R)$ is compactly generated and $\mathbb{D} ^{\rm c}({\rm{Mod\mbox{-}}} R) \simeq \mathbb{K} ^{\bb}({\rm{prj}\mbox{-}} R)$. Thus, we can apply Lemma \ref{Murfet} to get that $\mathbb{K} _{{\rm{ac}}}({\rm P}{\lan I \ran}nj R)$ is compactly generated and there is a triangle equivalence up to direct summands \[ \frac{\mathbb{K} ^{\bb}({\rm{{mod\mbox{-}}}} R)}{\mathbb{K} ^{\bb}({\rm{prj}\mbox{-}} R)}\stackrel{\sim} \longrightarrow \mathbb{K} ^{\rm c}_{{\rm{ac}}}({\rm P}{\lan I \ran}nj R).\] \end{proof} \section*{Acknowledgments} The authors thank the Center of Excellence for Mathematics (University of Isfahan). Part of this work was carried out at IHES, Paris, France, while the last author was visiting there, and she would like to thank IHES for its support and excellent atmosphere. This work was partially supported by a grant from the Simons Foundation. \end{document}
\begin{document} \title{Federated Learning with a Sampling Algorithm\ under Isoperimetry} \begin{abstract} Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm across several devices, who own the training data. These techniques critically rely on reducing the communication cost---the main bottleneck---between the devices and a central server. Federated learning algorithms usually take an optimization approach: they are algorithms for minimizing the training loss subject to communication (and other) constraints. In this work, we instead take a Bayesian approach for the training task, and propose a communication-efficient variant of the Langevin algorithm to sample \textit{a posteriori}. The latter approach is more robust and provides more knowledge of the \textit{a posteriori} distribution than its optimization counterpart. We analyze our algorithm without assuming that the target distribution is strongly log-concave. Instead, we assume the weaker log Sobolev inequality, which allows for nonconvexity. \end{abstract} \section{Introduction} Federated learning uses a set of techniques to efficiently distribute the training of a machine learning algorithm across several devices, who own the training data. These techniques critically rely on reducing the communication cost---the main bottleneck---between the devices and a central server~\cite{konevcny2016federated,mcmahan2017communication}. Indeed, classical training algorithms such as the Stochastic Gradient Descent (SGD) can formally be distributed between the server and the devices. But this requires communications between the devices and the server, raising privacy and communication concerns. Therefore, most federated learning algorithms rely on compressing the messages that are exchanged between the server and the devices. For instance, variants SGD involving compression~\cite{karimireddy2020scaffold,horvath2019stochastic,haddadpour2021federated,das2020faster,Gorbunov2021} have successfully been applied in several engineering setups~\cite{kairouz2021advances,dimitriadis2022flute}. More generally, most training algorithms in federated learning can be seen as communication efficient optimization algorithms minimizing the training loss $F$. This approach can provide an approximate minimizer, or a maximum \textit{a posteriori} estimator if we use the Bayesian language. However, this approach fails to provide sufficient knowledge of the \textit{a posteriori} distribution $\exp(-F)$ if we want to perform some Bayesian computation, such as computing confidence intervals. For Bayesian computation, samples from the \textit{a posteriori} distribution $\exp(-F)$ are preferred. In this case, the training task can be formalized as sampling from $\exp(-F)$, rather than minimizing $F$. Exploiting the connections between optimization and sampling~\cite{wibisono2018sampling,durmus2019analysis}, we take a Bayesian approach to federated learning, similarly to~\cite{Vono2021,deng2021convergence,el2021federated}. We introduce a communication efficient variant of the Langevin algorithm, a widely used algorithm to sample from a target distribution whose density is proportional to $\exp(-F)$, where $F$ is a smooth nonconvex function. We study the complexity of our communication efficient algorithm to sample $\exp(-F)$. \subsection{Related Work} Most approaches to train federated learning algorithms rely on minimizing the training loss with a communication efficient variant of SGD. 
Several training algorithms have been proposed, but we mention~\cite{karimireddy2020scaffold,horvath2019stochastic,haddadpour2021federated,das2020faster,Gorbunov2021} because these papers contain the state of the art results in terms of minimization of the training loss. In this work we shall specifically use the MARINA gradient estimator introduced in~\cite{Gorbunov2021} and inspired from~\cite{Nguyen2017}. The literature on sampling with Langevin algorithm is also large. In recent years, the machine learning research community has specifically been interested in the complexity of Langevin algorithm~\cite{dalalyan2017theoretical,durmus2017non}. In the case where $F$ is not convex, recent works include~\cite{vempala2019rapid,Wibisono2019,ma2019there,chewi2021} which use an isoperimetric-type inequality (\cite[Chapter 21]{villani2008}) such as the log Sobolev inequality to prove convergence in Kullback-Leibler divergence. Other works on Langevin in the nonconvex case include \cite{balasubramanian2022towards,cheng2018sharp,majka2020nonasymptotic,mattingly2002ergodicity}. The closest papers to our work are~\cite{Vono2021,deng2021convergence}. Like our work, these papers are theoretical and study the convergence of an efficient variant of Langevin algorithm for federated learning. The key difference between these papers and our work is that they assume $F$ strongly convex, whereas we only assume that the target distribution $\pi \propto \exp(-F)$ satisfies the log Sobolev inequality (LSI). The strong convexity of $F$ implies LSI~\cite[Chapter 21]{villani2008}, but LSI allows $F$ to be nonconvex. We compare our complexity results to their in Table~\ref{tab:complexity}, Our contributions are summarized below. \begin{table}[ht] \centering \begin{tabular}{ |c|c|c|c| } \hline & & & \\ Paper & Assumption & Criterion & Complexity \\ & & & \\ \hline & & & \\ \cite{Vono2021} & Strong convexity & $W_2(\rho_K,\pi)< \varepsilon$ & $K = \tilde{\mathcal O} \left(\frac{d}{\varepsilon^2}\right)$ \\ & & & \\ \hline & & & \\ \cite{deng2021convergence} & Strong convexity & $W_2(\rho_K,\pi)< \varepsilon$ & $K = \tilde{\mathcal O} \left(\frac{d}{\varepsilon^2}\right)$ \\ & & & \\ \hline & & & \\ \textbf{This paper} & \textbf{log Sobolev inequality} & $W_2(\rho_K,\pi)< \varepsilon$ & $K = \tilde{\mathcal O} \left(\frac{d}{\varepsilon^2}\right)$ \\ & & & \\ \hline \end{tabular} \\ \caption{Sufficient number of iterations of our algorithm and concurrent algorithms to achieve $\varepsilon$ accuracy in 2-Wasserstein distance in dimension $d$. Strong convexity implies log Sobolev inequality.} \label{tab:complexity} \end{table} \subsection{Contributions} We consider the problem of sampling from $\pi \propto \exp(-F)$, where $\pi$ satisfies LSI, in a federated learning setup. We make the following contributions: \begin{itemize} \item We propose a communication efficient variant of Langevin algorithm, called Langevin-MARINA, that can be distributively implemented between the central server and the devices. Langevin-MARINA relies on compressed communications. \item We analyze the complexity of our sampling algorithm in terms of the Kullback Leibler divergence, the Total Variation distance and the 2-Wasserstein distance. Our approach relies on viewing our sampling problem as an optimization problem over a space of probability measures, and allows $F$ to be nonconvex. 
\item Our sampling algorithm, Langevin-MARINA, is inspired from an optimization algorithm called MARINA~\cite{Gorbunov2021} for which we give a new convergence proof in the Appendix. This new proof draws connections between optimization (MARINA) and sampling (Langevin-MARINA). \end{itemize} \subsection{Paper organization} In Section~\ref{sec:background}, we review some background material on sampling and optimization. Next, we introduce our federated learning setup in Section~\ref{sec:fl}. In Section~\ref{sec:main} we give our main algorithm, Langevin-MARINA, and our main complexity results. We conclude in Section~\ref{sec:ccl}. The proofs, including the new proofs of existing results for MARINA, are deferred to the Appendix. \section{Preliminaries}\label{sec:background} \subsection{Mathematical problem} Throughout this paper, we consider a nonconvex function $F: {\mathbb R}^d \to {\mathbb R}$ which is assumed to be $L$-smooth, i.e., differentiable with an $L$-Lipschitz continuous gradient: $\norm{\nabla F(x)-\nabla F(y)}\leq L\norm{x-y}$. As often in machine learning, $F$ can be seen as a training loss and takes the form of a finite sum \begin{equation} \label{eq:finitesum} F = \sum_{i = 1}^n F_i, \end{equation} where each $F_i$ can be seen as the loss associated to the dataset stored in the device $i$. Assuming that $\int \exp(-F(x))dx \in (0,\infty)$, we denote by \begin{equation} \label{eq:sampling} \pi \propto \exp(-F), \end{equation} the probability distribution whose density is proportional to $\exp(-F)$. We take a Bayesian approach: instead of minimizing $F$, our goal is to generate random samples from $\pi$, which can be seen as the posterior distribution of some Bayesian model. \subsection{Sampling and Optimal Transport} We denote by ${\mathcal P}_2({\mathbb R}^d)$ the set of Borel measures $\sigma$ on ${\mathbb R}^d$ with finite second moment, that is $\int\normsq{x}d\sigma(x)<+\infty$. For every $\sigma, \nu\in{\mathcal P}_2({\mathbb R}^d)$, ${\textbf G}amma(\sigma,\nu)$ is the set of all the coupling measures between $\sigma$ and $\nu$ on ${\mathbb R}^{d}\times{\mathbb R}^d$, that is $\gamma\in{\textbf G}amma(\sigma,\nu)$ if and only if $\sigma(dx)= \gamma(dx,{\mathbb R}^d)$ and $\nu(dy)= \gamma({\mathbb R}^d,dy)$. The Wasserstein distance between $\sigma$ and $\nu$ is defined by \begin{equation} \label{eq:wstdis} W_2(\sigma,\nu)=\sqrt{\inf_{\gamma\in{\textbf G}amma(\sigma,\nu)}\int \normsq{x-y}\gamma(dx,dy)}. \end{equation} The Wasserstein distance $W_2(\cdot,\cdot)$ is a metric on ${\mathcal P}_2({\mathbb R}^d)$ and the metric space $\left({\mathcal P}_2({\mathbb R}^d),W_2\right)$ is called the Wasserstein space, see \cite{ambrosio2008gradient}. The Kullback-Leibler~(KL) divergence w.r.t. $\pi$ can be seen as the map from the Wasserstein space to $(0,\infty]$ defined for every $\sigma \in {\mathcal P}_2({\mathbb R}^d)$ by \begin{equation} \label{eq:KLdiv} \KL{\sigma}:=\begin{cases} \int\log(\frac{\sigma}{\pi})(x)d\sigma(x) & \text{if $\sigma \ll \pi$} \\ \infty & \text{else}, \end{cases} \end{equation} where $\sigma \ll \pi$ means that $\sigma$ is absolutely continuous w.r.t. $\pi$ and $\frac{\sigma}{\pi}$ denotes the density of $\sigma$ w.r.t. $\pi$. The KL divergence is always nonnegative and it is equal to zero if and only if $\sigma = \pi$. Therefore, assuming that $\pi \in {\mathcal P}_2({\mathbb R}^d)$, $\pi$ can be seen as the solution to the optimization problem \begin{equation} \label{eq:optim-sampling} \min_{\sigma \in {\mathcal P}_2({\mathbb R}^d)} \KL{\sigma}. 
\end{equation} Viewing $\pi$ as a minimizer of the KL divergence is the cornerstone of our approach. Indeed, we shall view the proposed algorithm as an optimization algorithm to solve Problem~\eqref{eq:optim-sampling}. In particular, following~\cite{wibisono2018sampling,durmus2019analysis}, we shall view the Langevin algorithm as a first order algorithm over the Wasserstein space. In this perspective, the relative Fisher information, defined as \begin{equation} \label{eq:fishinfor} {\mathcal F}S{\sigma}:=\begin{cases} \int \normsq{\nabla \log(\frac{\sigma}{\pi})(x)}d\sigma(x)&\text{if $\sigma \ll \pi$} \\ \infty, & \text{else}, \end{cases} \end{equation} will play the role of the squared norm of the gradient of the objective function. Indeed, by defining a differential structure over the Wasserstein space~\cite{ambrosio2008gradient}, the relative Fisher information is the squared norm of the gradient of the KL divergence. The analysis of nonconvex optimization algorithms minimizing $F$ often relies on a gradient domination condition relating the squared norm of the gradient to the function values, such as the following Lojasiewicz condition~\cite{blanchet2018family,karimi2016linear} \begin{equation} \label{eq:PL} F(x)- \min F \leq \frac{1}{2\mu}\|\nabla F(x)\|^{2}. \end{equation} The latter condition has a well-known analogue over the Wasserstein space for the KL divergence, called the logarithmic Sobolev inequality. \begin{assumption}[Logarithmic Sobolev Inequality~(LSI)] \label{def:LSI} The distribution $\pi$ satisfies the logarithmic Sobolev inequality with constant $\mu$: for all $\sigma\in{\mathcal P}_2({\mathbb R}^d)$, \begin{equation} \label{eq:LSIinequ} \KL{\sigma}\leq\frac{1}{2\mu}{\mathcal F}S{\sigma}. \end{equation} \end{assumption} The log Sobolev inequality has been studied in the optimal transport community~\cite[Chapter 21]{villani2008} and holds for several target distributions $\pi$. First, if $F$ is $\mu$-strongly convex (we say that $\pi$ is strongly log-concave), then LSI holds with constant $\mu$. Besides, LSI is preserved under bounded perturbations and Lipschitz mappings~\cite[Lemma 16, 19]{vempala2019rapid}. Therefore, small perturbations of strongly log-concave distributions satisfy LSI. For instance, if we add to a Gaussian distribution another Gaussian distribution with a small weight, then the resulting mixture of Gaussian distributions is not log-concave but it satisfies LSI. One can also find compactly supported examples, see~\cite[Introduction]{Wibisono2019}. LSI is a condition that has been used in the analysis of the Langevin algorithm \begin{equation} \label{eq:langevin} x_{k+1}=x_k-h\nabla F(x_k)+\sqrt{2h}Z_{k+1}, \end{equation} where $h>0$ is a step size and $(Z_k)$ is a sequence of i.i.d.\ standard Gaussian vectors over ${\mathbb R}^d$. The Langevin algorithm can be seen as a gradient descent algorithm for $F$ to which Gaussian noise is added at each step. For instance, \citep{Wibisono2019} showed that the distribution $\rho_k$ of $x_k$ converges rapidly towards the target distribution $\pi$ in terms of the KL divergence under LSI. Other metrics have also been considered, such as the Total Variation distance \begin{equation} \label{eq:TV} \|\sigma-\pi\|_{TV} = \sup_{A \in B({\mathbb R}^d)}|\sigma(A) - \pi(A)|, \end{equation} where $B({\mathbb R}^d)$ denotes the Borel sigma field of ${\mathbb R}^d$.
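For concreteness, the Langevin iteration~\eqref{eq:langevin} can be implemented in a few lines; the following Python sketch, with an illustrative standard Gaussian target of our own choosing, is only meant to fix ideas. Roughly speaking, the algorithm proposed in Section~\ref{sec:main} will replace the exact gradient $\nabla F(x_k)$ in this iteration by a communication-efficient estimator.
\begin{verbatim}
import numpy as np

def langevin(grad_F, x0, h, K, seed=0):
    # Unadjusted Langevin algorithm:
    # x_{k+1} = x_k - h * grad_F(x_k) + sqrt(2h) * Z_{k+1}.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = [x.copy()]
    for _ in range(K):
        z = rng.standard_normal(x.shape[0])   # Z_{k+1} ~ N(0, I_d)
        x = x - h * grad_F(x) + np.sqrt(2.0 * h) * z
        samples.append(x.copy())
    return np.array(samples)

# Toy example: F(x) = ||x||^2 / 2, so that pi = N(0, I_d).
chain = langevin(grad_F=lambda x: x, x0=np.zeros(2), h=0.05, K=20000)
print(chain.mean(axis=0), chain.var(axis=0))  # approximately 0 and 1
\end{verbatim}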
\section{Federated learning} \label{sec:fl} \subsection{Example of a federated learning algorithm: MARINA} We now describe our federated learning setup. A central server and a number of devices are required to run a training algorithm in a distributed manner. Each device $i \in \{1,\ldots,n\}$ owns a dataset. A meta federated learning algorithm is as follows: at each iteration, each device performs some local computation (for instance, a gradient computation) using its own dataset, then each device sends the result of that computation to the central server. The central server aggregates the results and sends that aggregation back to the devices. Algorithm~\ref{alg:capmarina} provides an example of a federated learning algorithm. \begin{algorithm}[t!] \caption{\algname{MARINA}~\cite{Gorbunov2021}}\label{alg:capmarina} \begin{algorithmic}[1] \State {\bfseries Input:} Starting point $x_0$, step-size $h$, number of iterations $K$ {\mathcal F}or{$i=1,2,\cdots,n$ in parallel} \State Device $i$ computes MARINA estimator $g_{0}^i$ \State Device $i$ uploads $g_0^i$ to the central server {{\mathbb E}}ndFor \State Server aggregates $g_0=\frac{1}{n}\sum_{i=1}^ng_0^i$ {\mathcal F}or {$k=0,1,2,\cdots,K-1$} \State Server broadcasts $g_k$ to all devices $i$ {\mathcal F}or{$i=1,2,\cdots,n$ in parallel} \State Device $i$ performs $x_{k+1}=x_k-h g_k$ \State Device $i$ computes MARINA estimator $g_{k+1}^i$ \State Device $i$ uploads $g_{k+1}^i$ to the central server {{\mathbb E}}ndFor \State Server aggregates $g_{k+1}=\frac{1}{n}\sum_{i=1}^ng_{k+1}^i$ {{\mathbb E}}ndFor \State {\bfseries Return:} $\hat{x}^K$ chosen uniformly at random from $\left(x_k\right)_{k=0}^K$ or the last point $x_K$ \end{algorithmic} \end{algorithm} This algorithm, called MARINA, was introduced and analyzed in~\cite{Gorbunov2021} under the assumption that the sequence $(g_k)$ is a MARINA estimator of the gradient, as defined below. \begin{definition} \label{def:marina} Given a sequence $(x_k)_k$ produced by an algorithm, a MARINA estimator of the gradient is a random sequence $(g_k)_k$ satisfying ${{\mathbb E}}xp{g_k} = {{\mathbb E}}xp{\nabla F(x_k)}$ and \begin{equation} \label{eq:LAboundg111} {\textbf G}_{k+1}\leq (1-p){\textbf G}_k+(1-p)L^2\alpha{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+\theta, \end{equation} where ${\textbf G}_k = {{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}}$ and $0 < p \leq 1$, $\alpha,\theta \geq 0$. \end{definition} MARINA iterations can be rewritten as \begin{equation} \label{eq:MARINAit} x_{k+1} = x_k - h g_{k}. \end{equation} Since ${{\mathbb E}}xp{g_k} = {{\mathbb E}}xp{\nabla F(x_k)}$, MARINA can be seen as an SGD method distributed between the central server and the devices to minimize the nonconvex function $F$. Recall that in federated learning, the main bottleneck is the communication cost between the server and the devices. The key aspect of MARINA is that it achieves the state of the art in terms of communication complexity among nonconvex optimization algorithms. The reason is the following: there are practical examples of MARINA estimators $(g_k)$ that rely on a compression step before the communication and are therefore cheap to communicate\footnote{Note that we did not specify how $g_{k}^i$ is computed in Algorithm~\ref{alg:capmarina}. For the moment we only require $g_k$ to be a MARINA estimator.}. \subsection{Compression operator} MARINA estimators are typically cheaper to communicate than the full gradient because they involve a compression step, which makes the communication light. A compression operator is a random map from ${\mathbb R}^d$ to ${\mathbb R}^d$ with the following properties.
\begin{definition}[Compression] \label{def:compression} A stochastic mapping $\mathcal{Q}: \mathbb{R}^{d} \rightarrow \mathbb{R}^{d}$ is a compression operator if there exists $\omega>0$ such that for any $x \in \mathbb{R}^{d}$, \begin{equation} \label{eq:sgskjf} {{\mathbb E}}xp{\mathcal{Q}(x)}=x, \quad {{\mathbb E}}xp{\|\mathcal{Q}(x)-x\|^{2}} \leq \omega\|x\|^{2}. \end{equation} \end{definition} The first equation states that $\mathcal{Q}$ is unbiased and the second equation states that the variance of the compression operator has quadratic growth. Many compression operators satisfy \eqref{eq:sgskjf}; explicit examples based on quantization and/or sparsification are given in~\cite{alistarh2017qsgd,horvath2019natural}. In general, compressed quantities are cheap to communicate. We now fix a compression operator and give examples of gradient estimators involving compression, which are provably MARINA estimators. \subsection{\algname{MARINA} gradient estimators}\label{subsec:11} In all the examples presented below, $g_k:=\frac{1}{n}\sum_{i=1}^n g_k^i$, where $g_k^i$ is meant to approximate $\nabla F_i(x_k)$, i.e., the gradient at device $i$, at step $k$. We consider any random sequence $(x_k)$ generated by a training algorithm. \subsubsection{Vanilla MARINA gradient estimator} The vanilla MARINA gradient estimator is defined as the average $g_k:=\frac{1}{n}\sum_{i=1}^n g_k^i$, where for every $i \in \{1,\dots,n\}$, $g_0^i=\nabla F_i(x_0)$ and \begin{equation} \label{eq:gradientestimator} g^i_{k+1}:=\begin{cases} \nabla F_i(x_{k+1})& \text{with probability }~ p>0~\\ g_k+\mathcal{Q}\left(\nabla F_i(x_{k+1})-\nabla F_i(x_k)\right)& \text{with probability }~1-p \end{cases}. \end{equation} Node $i$ randomly (and independently of $\mathcal{Q}, g_k, x_{k+1}$) computes and uploads to the server the gradient of $F_i$ at point $x_{k+1}$ with probability $p$. Otherwise, node $i$ computes and uploads the compressed difference of the gradients of $F_i$ between points $x_{k+1}$ and $x_{k}$: $\mathcal{Q}\left(\nabla F_i(x_{k+1})-\nabla F_i(x_k)\right)$. Note that the server already knows $g_k$ from the previous iteration, so there is no need to send $g_k$ to the server. On average, device $i$ computes a full gradient every $1/p$ steps ($p$ is very small, see Remark~\ref{rk} below). \begin{proposition} \label{prop:vanilla} Assume that $F_i$ is $L_i$-smooth for all $i\in \{1,\ldots,n\}$, i.e., \begin{equation}\label{eq:Lismooth} \left\|\nabla F_{i}(x)-\nabla F_{i}(y)\right\| \leq L_{i}\|x-y\|,\quad \forall x, y \in \mathbb{R}^{d}, \forall i\in \{1,\ldots,n\}. \end{equation} Then, $g_k = \frac{1}{n}\sum_{i=1}^n g_k^i$, where $g_k^i$ is defined by~\eqref{eq:gradientestimator}, is a MARINA estimator in the sense of Definition~\ref{def:marina}, with $\alpha=\frac{\omega\sum_{i=1}^n L_i^2}{n^2L^2}$ and $\theta=0$. \end{proposition} This proposition means that the MARINA estimator reduces the variance induced by the compression operator. Note that, if $F_i$ is $L_i$-smooth for all $i$, then $L \leq \frac{1}{n} \sum_{i=1}^n L_i$. \subsubsection{Finite sum case} Consider the finite sum case where each $F_i$ is a sum over the data points stored on device $i$: $F_i = \sum_{j = 1}^N F_{ij}$, where $F_{ij}$ is the loss associated to the data point $j$ on device $i$. The finite sum MARINA gradient estimator is analogous to the vanilla estimator, but with subsampling.
It is defined as the average $g_k:=\frac{1}{n}\sum_{i=1}^n g_k^i$, where for every $i \in \{1,\dots,n\}$, $g_0^i=\nabla F_i(x_0)$ and \begin{equation} \label{eq:gradientestimator222} g^i_{k+1}:=\begin{cases} \nabla F_i(x_{k+1})& \text{with probability }~ p>0~\\ g_k+\mathcal{Q}\left(\frac{1}{b'}\sum_{j\in I_{i,k}^{\prime}}\left(\nabla F_{ij}(x_{k+1})-\nabla F_{ij}(x_k)\right)\right)& \text{with probability }~1-p \end{cases}, \end{equation} where $b'$ is the minibatch size and $I_{i, k}^{\prime}$ is the set of indices in the minibatch, $\left|I_{i, k}^{\prime}\right|=b^{\prime}$. Here, $(I_{i, k}^{\prime})_{i,k}$ are i.i.d.\ random sets consisting of $b'$ i.i.d.\ samples from the uniform distribution over $\{1,\ldots,N\}$. \begin{proposition} \label{prop:finitesum} Assume that $F_{ij}$ is $L_{ij}$-smooth for all $i \in \{1,\ldots,n\}$ and all $j \in \{1,\ldots,N\}$. Then, $g_k = \frac{1}{n}\sum_{i=1}^n g_k^i$, where $g_k^i$ is defined by~\eqref{eq:gradientestimator222}, is a MARINA estimator in the sense of Definition~\ref{def:marina}, with \[\alpha=\frac{\omega\sum_{i=1}^nL_i^2+(1+\omega)\frac{\sum_{i=1}^n\mathcal{L}^2_i}{b'}}{n^2L^2},\] where $\mathcal{L}_{i} = \max _{j \in[N]} L_{i j}$ and $\theta=0$. \end{proposition} This proposition means that the MARINA estimator reduces both the variance induced by the compression operator and the variance induced by the subsampling. \subsubsection{Online case} \label{sec:online} Consider the online case where each $F_i$ is an expectation over the randomness of a stream of data arriving online: $F_{i}(x)=\mathbb{E}_{\xi_{i} \sim \mathcal{D}_{i}}\left[F_{\xi_{i}}(x)\right]$, where $F_{\xi_{i}}$ is the loss associated to a data point $\xi_i$ on device $i$. The online MARINA gradient estimator is analogous to the finite sum estimator, but with subsampling from $\mathcal{D}_{i}$ instead of the uniform distribution over $\{1,\ldots,N\}$. Moreover, the full gradient $\nabla F_i$ is never computed (because it is intractable in online learning). The online MARINA gradient estimator is defined as the average $g_k:=\frac{1}{n}\sum_{i=1}^n g_k^i$, where for every $i \in \{1,\dots,n\}$, $g_0^i=\frac{1}{b} \sum_{\xi \in I_{i, 0}} \nabla F_{\xi}\left(x_{0}\right)$ and \begin{equation} \label{eq:gradientestimator333} g^i_{k+1}:=\begin{cases} \frac{1}{b} \sum_{\xi \in I_{i, k}} \nabla F_{\xi}\left(x_{k+1}\right)& \text{with prob.}~ p>0~\\ g_k+\mathcal{Q}\left(\frac{1}{b^{\prime}} \sum_{\xi \in I_{i, k}^{\prime}}\left(\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F_{\xi}\left(x_{k}\right)\right)\right)& \text{with prob.} ~1-p \end{cases}, \end{equation} where $I_{i, k}^{\prime}$ and $I_{i, k}$ are the sets of indices in a minibatch, $\left|I_{i, k}^{\prime}\right|=b^{\prime}$ and $\left|I_{i, k}\right|=b$. Here, $(I_{i, k})_{i,k}$ (resp. $(I_{i, k}^{\prime})_{i,k}$) are i.i.d.\ random sets consisting of $b$ (resp. $b'$) i.i.d.\ samples from $\mathcal{D}_{i}$. Besides, in the online case only, we make a bounded variance assumption. \begin{proposition} \label{prop:online} Assume that for all $i \in\{1,\ldots,n\}$ there exists $\sigma_{i} \geq 0$ such that for all $x \in \mathbb{R}^{d}$, \begin{align*} &\mathbb{E}_{\xi_{i} \sim \mathcal{D}_{i}}\left[\nabla F_{\xi_{i}}(x)\right] =\nabla F_{i}(x), \\ &\mathbb{E}_{\xi_{i} \sim \mathcal{D}_{i}}\left[\left\|\nabla F_{\xi_{i}}(x)-\nabla F_{i}(x)\right\|^{2}\right] \leq \sigma_{i}^{2}. \end{align*} Moreover, assume that $F_{\xi_i}$ is $L_{\xi_i}$-smooth $\mathcal D_i$-almost surely.
Then, $g_k = \frac{1}{n}\sum_{i=1}^n g_k^i$, where $g_k^i$ is defined by~\eqref{eq:gradientestimator333} is a MARINA estimator in the sense of Definition~\ref{def:marina}, with \[\alpha=\frac{\omega\sum_{i=1}^nL_i^2+(1+\omega)\frac{\sum_{i=1}^n\mathcal{L}^2_i}{b'}}{n^2L^2},\] where $\mathcal{L}_{i}$ is the $L^\infty$ norm of $L_{\xi_i}$ (where $\xi_i \sim \mathcal D_i$) and $\theta=\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}$. \end{proposition} Note that in the online case $\theta \neq 0$ in general, because the online MARINA estimator does not reduce to zero the variance induced by the subsampling. That is because the loss $F_i$ is an expectation (and not a finite sum \textit{a priori}), for which variance reduction to zero is impossible in general. However, $\theta$ can be made small by taking a large minibatch size $b$ or a small probability $p$. Moreover, in the online case, ${\textbf G}_0 \neq 0$ unlike in the two other cases. \begin{remark} \label{rk} Typically, $p$ is very small, for instance, \cite{Gorbunov2021} chooses $p=\zeta_{\mathcal{Q}}/d$ in \eqref{eq:gradientestimator}, $p=\min\left\{\zeta_{\mathcal{Q}}/d,b'/(N+b')\right\}$ in \eqref{eq:gradientestimator222} and $p=\min\left\{\zeta_{\mathcal{Q}}/d,b'/(b+b')\right\}$ in \eqref{eq:gradientestimator333}, where $\zeta_{\mathcal{Q}}=\sup _{x \in \mathbb{R}^{d}} \mathbb{E}\left[\|\mathcal{Q}(x)\|_{0}\right]$ and $\|y\|_{0}$ is the number of non-zero components of $y \in \mathbb{R}^{d}$. Therefore, with high probability $1-p$, the compressed difference $\mathcal{Q}\left(\nabla F_i(x_{k+1})-\nabla F_i(x_k)\right)$ is sent to the server. Sending compressed quantities has a low communication cost~\cite{alistarh2017qsgd,horvath2019natural}. \end{remark} \begin{remark} Propositions~\ref{prop:vanilla}, \ref{prop:finitesum} and~\ref{prop:online} can be found in~\cite[Equation 21, 33, 46]{Gorbunov2021} in the case where $(x_k)$ is a stochastic gradient descent algorithm (i.e., $x_{k+1} = x_k - h g^k$). In the general case where $(x_k)$ is produced by any algorithm, we found out that the proofs of these Propositions are the same, but we reproduce these proofs in the Appendix for the sake of completeness. \end{remark} \section{Langevin-MARINA}\label{sec:main} In this section, we give our main algorithm, Langevin-MARINA, and study its convergence for sampling from $\pi \propto \exp(-F)$. We prove convergence bounds in KL divergence, 2-Wasserstein distance and Total Variation distance under LSI. Our main algorithm, Langevin-MARINA can be seen as a Langevin variant of MARINA which adds a Gaussian noise at each step of MARINA (Algorithm~\ref{alg:capmarina}). Our motivation is to obtain a Langevin algorithm whose communication complexity is similar to that of MARINA. Alternatively, one can see Langevin-MARINA as a MARINA variant of Langevin algorithm which uses a MARINA estimator of the gradient $g_k=\frac{1}{n}\sum_{i=1}^n g_k^i$ instead of the full gradient $\nabla F(x_k)$ in the Langevin algorithm: \begin{equation} \label{eq:LMARINAit} x_{k+1} = x_k - h g_{k} + \sqrt{2h}Z_{k+1}. \end{equation} Langevin-MARINA is presented in Algorithm~\ref{alg:federatedsampling}. \begin{algorithm}[h!] 
\caption{Langevin-MARINA (proposed algorithm)}\label{alg:federatedsampling} \begin{algorithmic}[1] \State {\bfseries Input:} Starting point $x_0\sim\rho_0$, step-size $h$, number of iterations $K$ {\mathcal F}or{$i=1,2,\cdots,n$ in parallel} \State Device $i$ computes MARINA estimator $g_{0}^i$ \State Device $i$ uploads $g_0^i$ to the central server {{\mathbb E}}ndFor \State Server aggregates $g_0=\frac{1}{n}\sum_{i=1}^ng_0^i$ {\mathcal F}or {$k=0,1,2,\cdots,K-1$} \State Server broadcasts $g_k$ to all devices $i$ \State Server draws a Gaussian vector $Z_{k+1}\sim\mathcal{N}(0,I_{d})$ \State Server performs $x_{k+1}=x_k-hg_k+\sqrt{2h}Z_{k+1}$ \State Server broadcasts $ x_{k+1}$ to all devices $i$ {\mathcal F}or{$i=1,2,\cdots,n$ in parallel} \State Device $i$ computes MARINA estimator $g_{k+1}^i$ \State Device $i$ uploads $g^i_{k+1}$ to the central server {{\mathbb E}}ndFor \State Server aggregates $g_{k+1}=\frac{1}{n}\sum_{i=1}^ng_{k+1}^i$ {{\mathbb E}}ndFor \State {\bfseries Return:} $x_K$ \end{algorithmic} \end{algorithm} Compared to MARINA where $n$ equivalent stochastic gradient descent steps are performed by the devices, here the Langevin step is performed only once, by the server. This allows for computation savings ($n$ times less computations) at the cost of more communication: the iterates $x_k$ need to be broadcast by the server to the devices. Therefore, the communication complexity of the sampling algorithm Langevin-MARINA is higher than the communication complexity of the optimization algorithm MARINA. However, compared to the concurrent sampling algorithm of~\cite{Vono2021}, the communication complexity per iteration of Langevin-MARINA is equivalent. Comparing the communication complexity of Langevin-MARINA to that of FA-LD~\cite{deng2021convergence} is more difficult because FA-LD makes communication savings by performing communication rounds only after a number $T$ of local updates, instead of using a compression operator after each local update like Langevin-MARINA. We first prove the convergence of Langevin-MARINA in KL-divergence. \begin{theorem} \label{thm:federatedsampling} Assume that LSI (Assumption~\ref{def:LSI}) holds and that $g_k = \frac{1}{n}\sum_{i=1}^n g_k^i$ is a MARINA estimator in the sense of Definition~\ref{def:marina}. If \begin{equation}\label{eq:sdksjfsk} 0<h\leq \min \left\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu} \right\}, \end{equation} then \begin{equation}\label{eq:KLre} \KL{\rho_{K}}\leq e^{-\mu K h}\Psi_3+\frac{1-e^{-K\mu h}}{\mu}\tau, \end{equation} where $\Psi_3=\KL{\rho_0}+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{0}, \tau=\left(2L^2+C(1-p)L^2\alpha\right)\left(8Lh^2d+4dh\right)+C\theta, C=\frac{8L^2h^2\beta+2\beta}{1-(1-p)\left(4L^2h^2\alpha+1\right)\beta}, \beta=e^{\mu h}$ and $\rho_k$ is the distribution of $x_k$ for every $k$. In particular, if $\theta=0$, set $h=\mathcal{O}\left(\frac{\mu p\varepsilon}{L^2(1+\alpha) d}\right)$, $K=\Omega\left(\frac{L^2(1+\alpha) d}{\mu^2 p\varepsilon}\log \left(\frac{\Psi_3}{\varepsilon}\right)\right)$ then $\KL{\rho_{K}} \leq \varepsilon$. If $\theta\neq 0$, there will always be an extra residual term $\frac{1-e^{-K\mu h}}{\mu}C\theta$ in the right hand side of \eqref{eq:KLre} which cannot be diminished to 0 by setting $K$ and $h$. 
However, in the online case (Section~\ref{sec:online}), where $\theta=\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}$, we can set $b=\Omega\left(\frac{\sum_{i=1}^{n} \sigma_{i}^{2}}{\mu n^{2} \varepsilon}\right)$ to make the residual term small: $\frac{1-e^{-K\mu h}}{\mu}C\theta={\mathcal O}(\varepsilon)$. \end{theorem} From this theorem, we can obtain complexity results in Total Variation distance and 2-Wasserstein distance. Indeed, Pinsker's inequality states that \begin{equation} \|\sigma-\pi\|_{TV}\leq\sqrt{\frac{1}{2}\KL{\sigma}},\quad\forall \sigma\in\mathcal{P}_2({\mathbb R}^d), \end{equation} and LSI implies Talagrand's $T_2$ inequality \begin{equation} W_2^2(\sigma,\pi)\leq \frac{2}{\mu}\KL{\sigma},\quad\forall\sigma\in\mathcal{P}_2({\mathbb R}^d), \end{equation} see \cite[Chapter 21]{villani2008}. \begin{corollary}\label{cor:12} Let assumptions and parameters be as in Theorem \ref{thm:federatedsampling}. Then, \begin{equation}\label{eq:totrm} \|\rho_K-\pi\|^2_{TV}\leq \frac{1}{2}\left(e^{-\mu K h}\Psi_3+\frac{1-e^{-K\mu h}}{\mu}\tau\right), \end{equation} and \begin{equation}\label{eq:cor-W2} W^2_2(\rho_K,\pi)\leq \frac{2}{\mu}\left(e^{-\mu K h}\Psi_3+\frac{1-e^{-K\mu h}}{\mu}\tau\right). \end{equation} \end{corollary} In particular, if $\theta = 0$, set $h=\mathcal{O}\left(\frac{\mu p\varepsilon^2}{L^2(1+\alpha) d}\right)$, $K=\Omega\left(\frac{L^2(1+\alpha) d}{\mu^2 p\varepsilon^2}\log\left(\frac{\Psi_3}{\varepsilon^2}\right)\right)$, then $\|\rho_K-\pi\|_{TV}\leq \varepsilon$. If $\theta\neq 0$, there will always be an extra residual term $\frac{1-e^{-K\mu h}}{\mu}C\theta$ in the right hand side of \eqref{eq:totrm} which cannot be diminished to 0 by setting $K$ and $h$. However, in the online case (Section~\ref{sec:online}), where $\theta=\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}$, we can set $b=\Omega\left(\frac{\sum_{i=1}^{n} \sigma_{i}^{2}}{\mu n^{2} \varepsilon^2}\right)$ to make the residual term small: $\frac{1-e^{-K\mu h}}{\mu}C\theta={\mathcal O}(\varepsilon^2)$. Finally, if $\theta=0$, set $h=\mathcal{O}\left(\frac{\mu^2 p\varepsilon^2}{L^2(1+\alpha) d}\right)$, $K=\Omega\left(\frac{L^2(1+\alpha) d}{\mu^3 p\varepsilon^2}\log\left(\frac{\Psi_3}{\mu\varepsilon^2}\right)\right)$, then $W_2(\rho_K,\pi)\leq\varepsilon$. If $\theta\neq 0$, there will always be an extra residual term $\frac{1-e^{-K\mu h}}{\mu^2}C\theta$ in the right hand side of \eqref{eq:cor-W2} which cannot be diminished to 0 by setting $K$ and $h$. However, in the online case, where $\theta=\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}$, we can set $b=\Omega\left(\frac{\sum_{i=1}^{n} \sigma_{i}^{2}}{\mu^2 n^{2} \varepsilon^2}\right)$ to make the residual term small: $\frac{1-e^{-K\mu h}}{\mu^2}C\theta={\mathcal O}(\varepsilon^2)$. We can compare our results to~\cite{Vono2021,deng2021convergence}, which also obtain bounds in 2-Wasserstein distance under the stronger assumption that $F$ is $\mu$-strongly convex, see Table~\ref{tab:complexity}. In \cite{Vono2021}, they need $h={\mathcal O}\left(\frac{\varepsilon^2}{dl}\right)$ and $K=\Omega\left(\frac{dl}{\varepsilon^2}\log\left(\frac{W_2^2(\rho_0,\pi)}{\varepsilon^2}\right)\right)$, where they compute the full gradient every $l$ steps (we do not include the dependence on $\mu,L$ in their result for the sake of simplicity). One can think of $l$ as $1/p$.
In \cite{deng2021convergence}, they require $h={\mathcal O}\left(\frac{\mu^2\varepsilon^2}{dL^2\left(T^2\mu+L\right)}\right)$ and $K=\Omega\left(\frac{dL^2\left(T^2\mu+L\right)}{\mu^3\varepsilon^2}\log\left(\frac{d}{\varepsilon^2}\right)\right)$ to get $W_2(\rho_K,\pi)\leq\varepsilon$, where $T$ denotes the number of local updates. \section{Conclusion} \label{sec:ccl} We introduced a communication-efficient variant of the Langevin algorithm for federated learning called Langevin-MARINA. We studied the complexity of this algorithm in terms of KL divergence, Total Variation distance and 2-Wasserstein distance. Unlike existing works on sampling for federated learning, we only require the target distribution to satisfy LSI, which allows the target distribution not to be log-concave. Langevin-MARINA is inspired by an optimization algorithm for federated learning called MARINA. MARINA achieves the state of the art in communication complexity among nonconvex optimization algorithms. However, Langevin-MARINA requires more communication rounds than MARINA. The fundamental reason for this is that Langevin-MARINA needs to communicate the Gaussian noise. To solve this issue, one approach is to design a Langevin algorithm allowing for the compression of the Gaussian noise. Another approach is to use the same random seed on each device in order to ensure that they all generate the same $Z_{k+1}$ at step $k$; this approach does not require the communication of the Gaussian noise. We leave this question for future work. \appendix \tableofcontents \section{New proofs of existing optimization results for MARINA (Algorithm~\ref{alg:capmarina})} The goal of this section is to show that the approach that we used to study Langevin-MARINA can also be used to study MARINA. In particular, we provide a new analysis of MARINA (Algorithm~\ref{alg:capmarina}) establishing the convergence of the whole continuous trajectories $(x_t)_{t\geq 0}$ generated by the algorithm, unlike~\cite{Gorbunov2021}, which focuses on the discrete iterates~$(x_k)_{k=0}^{K}$. Compared to~\cite{Gorbunov2021}, we obtain the same convergence rate, but with a different method. The key difference with the analysis of Langevin-MARINA is that MARINA is an optimization algorithm in the Euclidean space ${\mathbb R}^d$, so the underlying space in the convergence proofs of MARINA is ${\mathbb R}^d$. On the contrary, we viewed Langevin-MARINA as an optimization algorithm in the Wasserstein space, so the underlying space in the convergence proofs of Langevin-MARINA is the Wasserstein space. In this section, we focus on the minimization, by MARINA (Algorithm~\ref{alg:capmarina}), of the empirical risk: \begin{equation}\label{eq:aaaattt} \min_{x\in{\mathbb R}^d} F(x) = \sum_{i=1}^n F_i(x). \end{equation} One can see MARINA as a variant of the gradient descent algorithm which uses a MARINA estimator of the gradient, $g_k=\frac{1}{n}\sum_{i=1}^n g_k^i$, instead of the full gradient $\nabla F(x_k)$: \begin{equation} \label{eq:MARINA_formal} x_{k+1} = x_k - h g_{k}. \end{equation} MARINA is presented in Algorithm~\ref{alg:capmarina}. We now provide the convergence results of MARINA; the proofs are provided later in the Appendix. \begin{theorem} \label{thm:mainthmop} Let $\left(g_k\right)_{k=0}^{K-1}$ be a MARINA estimator of the gradient.
If the step-size satisfies \begin{equation} \label{eq:stepsizeop} 0<h\leq\frac{1}{10L}\sqrt{\frac{p}{1+\alpha}}, \end{equation} then \begin{equation} \label{eq:case1sol2opt} \frac{1}{Kh}\int_0^{Kh}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}dt\leq\frac{2\left(\Psi_1-F(x^*)\right)}{Kh}+2C\theta, \end{equation} where $\Psi_1=F(x_0)+hC{\textbf G}_{0}$, $C=\frac{8L^2h^2+2}{1-(1-p)\left(4L^2h^2\alpha+1\right)}$, and $x_t$ is the piecewise linear interpolation of the iterates, $x_t:=x_{\lfloor\frac{t}{h}\rfloor h}+\frac{t-\lfloor\frac{t}{h}\rfloor h}{h}\left(x_{\lceil\frac{t}{h}\rceil h}-x_{\lfloor\frac{t}{h}\rfloor h}\right)$ for $t\in[0,Kh]$. \end{theorem} Let $h=\frac{1}{10L}\sqrt{\frac{p}{1+\alpha}}$ (hence $C=\mathcal{O}\left(\frac{1}{p}\right)$) and $\hat{x}_t:= x_T$, where $T$ is a uniform random variable over $[0,Kh]$ independent of $(x_t)$. Then, ${{\mathbb E}}xp{\normsq{\nabla F(\hat{x}_t)}} = \frac{1}{Kh}\int_0^{Kh}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}dt$. To achieve ${{\mathbb E}}xp{\normsq{\nabla F(\hat{x}_t)}}\leq\varepsilon^2$ when $\theta=0$ (for instance for the MARINA estimators \eqref{eq:gradientestimator} and \eqref{eq:gradientestimator222}), $\Omega \left(\frac{\Psi_1-F(x^*)}{\varepsilon^2}\sqrt{\frac{1+\alpha}{p}}L\right)$ iterations suffice. If $\theta \neq 0$ (for instance for the MARINA estimator~\eqref{eq:gradientestimator333}) and if we choose the batch-size $b=\Omega \left(\frac{\sum_{i=1}^n\sigma_i^2}{n^2\varepsilon^2} \right)$, then $2C\theta = \mathcal{O}(\varepsilon^2)$, so $\Omega \left(\frac{\Psi_1-F(x^*)}{\varepsilon^2}\sqrt{\frac{1+\alpha}{p}}L \right)$ iterations suffice. If $F$ further satisfies the Lojasiewicz condition~\eqref{eq:PL}, we can obtain the following stronger result. \begin{theorem} \label{thm:mainthmopawa} Let $\left(g_k\right)_{k=0}^{K-1}$ be a MARINA estimator of the gradient. Assume that the gradient domination condition~\eqref{eq:PL} holds. If the step-size satisfies \begin{equation} \label{eq:stepsizeoppl} 0<h\leq\min \left\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu} \right\}, \end{equation} then \begin{equation} \label{eq:case1sol2optpl} {{\mathbb E}}xp{F(x_{K})}-F(x^*)\leq e^{-\mu Kh}\left(\Psi_2-F(x^*)\right)+\frac{1-e^{-K\mu h}}{\mu}C\theta, \end{equation} where $\Psi_2=F(x_{0})+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{0}$, $C=\frac{8L^2h^2{\beta}+2{\beta}}{1-(1-p)\left(4L^2h^2\alpha+1\right){\beta}}$, and $\beta=e^{\mu h}$. \end{theorem} Let $h=\min\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu}\}$ (hence $C=\mathcal{O}\left(\frac{1}{p}\right)$). To achieve ${{\mathbb E}}xp{F(x_{K})}-F(x^*)\leq\varepsilon$ when $\theta=0$ (for instance for the MARINA estimators \eqref{eq:gradientestimator} and \eqref{eq:gradientestimator222}), $\Omega\left(\max\{\sqrt{\frac{1+\alpha}{p}}\frac{L}{\mu},\frac{1}{p}\}\log\left(\frac{\Psi_2-F(x^*)}{\varepsilon}\right)\right)$ iterations suffice. If $\theta \neq 0$ (for instance for the MARINA estimator~\eqref{eq:gradientestimator333}) and if we choose the batch-size $b=\Omega\left(\frac{\sum_{i=1}^n\sigma_i^2}{\mu n^2\varepsilon}\right)$, then the term $\frac{1-e^{-K\mu h}}{\mu}C\theta$ is of order $\mathcal{O}(\varepsilon)$, so $\Omega\left(\max\{\sqrt{\frac{1+\alpha}{p}}\frac{L}{\mu},\frac{1}{p}\}\log\left(\frac{\Psi_2-F(x^*)}{\varepsilon}\right)\right)$ iterations suffice. Since we recover the convergence rates of~\cite{Gorbunov2021}, we refer to the latter paper for a discussion of these results.
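Before turning to the proofs, we give a minimal Python sketch of Algorithm~\ref{alg:capmarina} instantiated with a rand-$k$ sparsification compressor and the vanilla estimator~\eqref{eq:gradientestimator}; it is only illustrative (the choice of compressor, the shared Bernoulli coin and all names are our own choices for the sketch) and is not the reference implementation of~\cite{Gorbunov2021}. The commented lines indicate how adding Gaussian noise to the update turns it into Langevin-MARINA (Algorithm~\ref{alg:federatedsampling}).
\begin{verbatim}
import numpy as np

def rand_k(x, k, rng):
    # Rand-k sparsification: keep k random coordinates, rescale by d/k.
    # Unbiased, with E||Q(x)-x||^2 <= (d/k - 1)||x||^2, i.e. omega = d/k - 1.
    d = x.size
    idx = rng.choice(d, size=k, replace=False)
    q = np.zeros_like(x)
    q[idx] = x[idx] * (d / k)
    return q

def marina(grads, x0, h, p, k, K, rng, langevin=False):
    # grads[i](x) returns grad F_i(x).  Sketch of Algorithm 1 with the
    # vanilla estimator; langevin=True adds the Gaussian noise of the
    # Langevin-MARINA update (Algorithm 2).
    n, d = len(grads), x0.size
    x = x0.copy()
    g_i = [grads[i](x) for i in range(n)]        # g_0^i = grad F_i(x_0)
    g = sum(g_i) / n                             # server aggregation
    for _ in range(K):
        x_new = x - h * g
        if langevin:                             # Langevin-MARINA variant
            x_new = x_new + np.sqrt(2 * h) * rng.standard_normal(d)
        full_round = rng.random() < p            # rare uncompressed round
        for i in range(n):
            if full_round:
                g_i[i] = grads[i](x_new)
            else:                                # compressed gradient difference
                g_i[i] = g + rand_k(grads[i](x_new) - grads[i](x), k, rng)
        g = sum(g_i) / n
        x = x_new
    return x
\end{verbatim}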
\section{Proofs} \subsection{Proof of {\mathcal C}ref{prop:vanilla}} For gradient estimator \ref{eq:gradientestimator}, we have \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &=(1-p){{\mathbb E}}xp{\normsq{g_{k}+\frac{1}{n}\sum_{i=1}^n\mathcal{Q}(\nabla F_{i}(x_{k+1})-\nabla F_i(x_k))-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &=(1-p){{\mathbb E}}xp{\normsq{\frac{1}{n}\sum_{i=1}^n\mathcal{Q}(\nabla F_{i}(x_{k+1})-\nabla F_i(x_k))-\nabla F(x_{k+1})+\nabla F(x_k)}\mid x_{k+1},x_k}\\ &\quad +(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}. \end{aligned} \end{equation} Since $\mathcal{Q}\left(\nabla F_{1}\left(x_{k+1}\right)-\nabla F_{1}\left(x_{k}\right)\right), \ldots, \mathcal{Q}\left(\nabla F_{n}\left(x_{k+1}\right)-\nabla F_{n}\left(x_{k}\right)\right)$ are independent random vectors, now we have \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &=(1-p){{\mathbb E}}xp{\normsq{\frac{1}{n}\sum_{i=1}^n\left(\mathcal{Q}(\nabla F_{i}(x_{k+1})-\nabla F_i(x_k))-\nabla F_i(x_{k+1})+\nabla F_i(x_k)\right)}\mid x_{k+1},x_k}\\ &\quad +(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n {{\mathbb E}}xp{\normsq{\mathcal{Q}(\nabla F_{i}(x_{k+1})-\nabla F_i(x_k))-\nabla F_i(x_{k+1})+\nabla F_i(x_k)}\mid x_{k+1},x_k}\\ &\quad+ (1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &\leq \frac{(1-p)\omega}{n^2}\sum_{i=1}^n {{\mathbb E}}xp{\normsq{\nabla F_i(x_{k+1})-\nabla F_i(x_k)}\mid x_{k+1},x_k}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}. \end{aligned} \end{equation} Use {\mathcal C}ref{eq:Lismooth} and the tower property, we obtain \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}}\\ &= {{\mathbb E}}xp{{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}}\\ &\leq \frac{(1-p)\omega}{n^2}\sum_{i=1}^n L_i^2 {{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}}\\ &=(1-p)L^2\frac{\omega\sum_{i=1}^n L_i^2}{n^2L^2} {{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}}, \end{aligned} \end{equation} so $\alpha=\frac{\omega\sum_{i=1}^n L_i^2}{n^2L^2},\theta=0$. \subsection{Proof of {\mathcal C}ref{prop:finitesum}} For gradient estimator \ref{eq:gradientestimator222}~(finite sum case, that is for each $i\in [n]$, $F_i:=\frac{1}{N}\sum_{j=1}^NF_{ij}$), we have \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &=(1-p){{\mathbb E}}xp{\normsq{g_k+\frac{1}{n}\sum_{i=1}^n\mathcal{Q}\left(\frac{1}{b'}\sum_{j\in I_{i,k}'}\left(\nabla F_{ij}(x_{k+1})-\nabla F_{ij}(x_k)\right)\right)-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &=(1-p){{\mathbb E}}xp{\normsq{\frac{1}{n}\sum_{i=1}^n\mathcal{Q}\left(\frac{1}{b'}\sum_{j\in I_{i,k}'}\left(\nabla F_{ij}(x_{k+1})-\nabla F_{ij}(x_k)\right)\right)-\nabla F(x_{k+1})+\nabla F(x_k)}\mid x_{k+1},x_k}\\ &\quad +(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}. \end{aligned} \end{equation} Next, we use the notation: $\widetilde{{{D}}elta}_{i}^{k}=\frac{1}{b^{\prime}} \sum_{j \in I_{i, k}^{\prime}}\left(\nabla F_{i j}\left(x_{k+1}\right)-\nabla F_{i j}\left(x_{k}\right)\right)$ and ${{D}}elta_{i}^{k}=\nabla F_{i}\left(x_{k+1}\right)-\nabla F_{i}\left(x_{k}\right)$. They satisfy ${{\mathbb E}}\left[\widetilde{{{D}}elta}_{i}^{k} \mid x_{k+1}, x_{k}\right]={{D}}elta_{i}^{k}$ for all $i \in[n]$. 
Moreover, $\mathcal{Q}\left(\tilde{{{D}}elta}_{1}^{k}\right), \ldots, \mathcal{Q}\left(\tilde{{{D}}elta}_{n}^{k}\right)$ are independent random vectors, now we have \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &=(1-p){{\mathbb E}}xp{\normsq{\frac{1}{n}\sum_{i=1}^n\left(\mathcal{Q}(\widetilde{{{D}}elta}_{i}^{k})-{{D}}elta_i^k\right)}\mid x_{k+1},x_k}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n{{\mathbb E}}xp{\normsq{\mathcal{Q}(\widetilde{{{D}}elta}_{i}^{k})-\widetilde{{{D}}elta}_{i}^{k}+\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_{i}^k}\mid x_{k+1},x_k}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left({{\mathbb E}}xp{\normsq{\mathcal{Q}(\widetilde{{{D}}elta}_{i}^{k})-\widetilde{{{D}}elta}_{i}^{k}}\mid x_{k+1},x_k}+{{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_{i}^k}\mid x_{k+1},x_k}\right)+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left(\omega {{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}}\mid x_{k+1},x_k}+{{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}\right)+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left(\omega {{\mathbb E}}xp{\normsq{{{{D}}elta}_{i}^{k}}\mid x_{k+1},x_k}+(1+\omega){{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}\right)+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}. \end{aligned} \end{equation} Next we need to calculate ${{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}$. For convenience, we will denote $a_{ij}:=\nabla F_{ij}(x_{k+1})-\nabla F_{ij}(x_k)$ and $a_i:=\nabla F_i(x_{k+1})-\nabla F_i(x_k)$. Define \begin{equation} \chi_{s}=\begin{cases} 1& \text{with prob.} \frac{1}{N}\\ 2&\text{with prob.} \frac{1}{N}\\ \quad &\vdots\\ N&\text{with prob.}\frac{1}{N} \end{cases}, \end{equation} $\{\chi_{s}\}_{s=1}^{b'}$ independent with each other. 
Let $I_{i,k}'=\bigcup_{s=1}^{b'}\chi_s$, so \begin{equation} \begin{aligned} \widetilde{{{D}}elta}_{i}^{k}&=\frac{1}{b^{\prime}} \sum_{j \in I_{i, k}^{\prime}}\left(\nabla F_{i j}\left(x_{k+1}\right)-\nabla F_{i j}\left(x_{k}\right)\right)\\ &=\frac{1}{b'}\sum_{s=1}^{b'}\sum_{j=1}^N1_{\chi_s=j}a_{ij}, \end{aligned} \end{equation} then \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}}-{{D}}elta_{i}^k\mid x_{k+1},x_k}\\ &= {{\mathbb E}}xp{\normsq{\frac{1}{b'}\sum_{s=1}^{b'}\sum_{j=1}^N1_{\chi_s=j}(a_{ij}-a_i)}\mid x_{k+1},x_k}\\ &=\frac{1}{b'^2}\left(\sum_{s=1}^{b'}{{\mathbb E}}xp{\normsq{\sum_{j=1}^N1_{\chi_s=j}(a_{ij}-a_i)}\mid x_{k+1},x_k}\right.\\ &\left.\quad +\sum_{s\neq s'}{{\mathbb E}}xp{\inner{\sum_{j=1}^N1_{\chi_s=j}(a_{ij}-a_i)}{\sum_{j=1}^N1_{\chi_s'=j}(a_{ij}-a_i)}\mid x_{k+1},x_k}\right)\\ &=\frac{1}{b'^2}\left(\sum_{s=1}^{b'}{{\mathbb E}}xp{\normsq{\sum_{j=1}^N1_{\chi_s=j}(a_{ij}-a_i)}\mid x_{k+1},x_k}\right.\\ &\left.\quad+\sum_{s\neq s'}{\inner{{{\mathbb E}}xp{\sum_{j=1}^N1_{\chi_s=j}(a_{ij}-a_i)\mid x_{k+1},x_k}}{{{\mathbb E}}xp{\sum_{j=1}^N1_{\chi_s'=j}(a_{ij}-a_i)}\mid x_{k+1},x_k}}\right)\\ &=\frac{1}{b'}\left({{\mathbb E}}xp{\normsq{\sum_{j=1}^N1_{\chi_{1}=j}a_{ij}}\mid x_{k+1},x_k}-\normsq{a_i}\right)\\ &\leq \frac{1}{b'}{{\mathbb E}}xp{\normsq{\sum_{j=1}^N1_{\chi_{1}=j}a_{ij}}\mid x_{k+1},x_k}\\ &=\frac{1}{b'}\left(\sum_{j=1}^N{{\mathbb E}}xp{\normsq{1_{\chi_{1}=j}}}\normsq{a_{ij}}+\sum_{j\neq j'}{{\mathbb E}}xp{\inner{1_{\chi_{1}=j}}{1_{\chi_{1}=j'}}}\inner{a_{ij}}{a_{ij'}}\right)\\ &=\frac{1}{b'}\frac{1}{N}\sum_{j=1}^N\normsq{a_{ij}}\\ &\leq \frac{\mathcal{L}_i^2}{b'}\normsq{x_{k+1}-x_k}. \end{aligned} \end{equation} Use {\mathcal C}ref{eq:Lismooth} and the tower property, we get \begin{equation}\label{eq:lspppp} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}}\\ &= {{\mathbb E}}xp{{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left(\omega L_i^2+\frac{(1+\omega)\mathcal{L}_i^2}{b'}\right){{\mathbb E}}xp{\normsq{x_{k-1}-x_k}}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}}\\ &=(1-p)L^2\frac{\omega\sum_{i=1}^nL_i^2+(1+\omega)\frac{\sum_{i=1}^n\mathcal{L}^2_i}{b'}}{n^2L^2}{{\mathbb E}}xp{\normsq{x_{k-1}-x_k}} +(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}}, \end{aligned} \end{equation} so $\alpha=\frac{\omega\sum_{i=1}^nL_i^2+(1+\omega)\frac{\sum_{i=1}^n\mathcal{L}^2_i}{b'}}{n^2L^2},\theta=0$. 
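As a quick numerical sanity check of the subsampling step above, the short Python snippet below (purely illustrative, with synthetic vectors standing in for the gradient differences $a_{ij}$) verifies the sampling-with-replacement identity used right before the final bound, namely that the minibatch mean of $b'$ i.i.d.\ uniform draws from $\{a_{i1},\ldots,a_{iN}\}$ deviates from $a_i$ by $\frac{1}{b'}\left(\frac{1}{N}\sum_{j=1}^N\normsq{a_{ij}}-\normsq{a_i}\right)$ in expected squared norm.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, b_prime, d, trials = 50, 8, 10, 100_000

# Synthetic stand-ins for the gradient differences a_{ij}; a_i is their mean.
a = rng.standard_normal((N, d))
a_mean = a.mean(axis=0)

# Monte Carlo estimate of E|| minibatch mean - a_i ||^2.
errs = np.empty(trials)
for t in range(trials):
    batch = rng.integers(0, N, size=b_prime)   # b' i.i.d. uniform indices
    errs[t] = np.sum((a[batch].mean(axis=0) - a_mean) ** 2)

exact = (np.mean(np.sum(a ** 2, axis=1)) - np.sum(a_mean ** 2)) / b_prime
print(errs.mean(), exact)  # the two values agree up to Monte Carlo error
\end{verbatim}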
\subsection{Proof of {\mathcal C}ref{prop:online}} For gradient estimator \ref{eq:gradientestimator333}~(online case, that is for each $i\in [n]$, $F_i:=\mathbb{E}_{\xi_{i}\sim\mathcal{D}_i}\left[F_{\xi_i}\right]$), we first have \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\left[\left\|g_{k+1}-\nabla F\left(x_{k+1}\right)\right\|^{2}\mid x_{k+1},x_k\right] }\\ &=(1-p) {{\mathbb E}}xp{\left[\left\|g_{k}+\frac{1}{n} \sum_{i=1}^{n} \mathcal{Q}\left(\frac{1}{b^{\prime}} \sum_{\xi \in I_{i, k}^{\prime}}\left(\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F_{\xi}\left(x_{k}\right)\right)\right)-\nabla F\left(x_{k+1}\right)\right\|^{2}\mid x_{k+1},x_k\right]}\\ &\quad+\frac{p}{n^{2} b^{2}} {{\mathbb E}}xp{\left[\left\|\sum_{i=1}^{n} \sum_{\xi \in I_{i, k}}\left(\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F\left(x_{k+1}\right)\right)\right\|^{2}\mid x_{k+1}\right]}\\ &{=}(1-p){{\mathbb E}}xp{\left[\left\|\frac{1}{n} \sum_{i=1}^{n} \mathcal{Q}\left(\frac{1}{b^{\prime}} \sum_{\xi \in I_{i, k}^{\prime}}\left(\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F_{\xi}\left(x_{k}\right)\right)\right)-\nabla F\left(x_{k+1}\right)+\nabla F\left(x_{k}\right)\right\|^{2}\mid x_{k+1},x_k\right]}\\ &\quad+(1-p) {{\mathbb E}}xp{\left[\left\|g_{k}-\nabla F\left(x_{k}\right)\right\|^{2}\mid x_k\right]}+\frac{p}{n^{2} b^{2}} \sum_{i=1}^{n} \sum_{\xi \in I_{i, k}} {{\mathbb E}}xp{\left[\left\|\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F\left(x_{k+1}\right)\right\|^{2}\mid x_{k+1}\right]}\\ &\leq(1-p){{\mathbb E}}xp{\left[\left\|\frac{1}{n} \sum_{i=1}^{n} \mathcal{Q}\left(\frac{1}{b^{\prime}} \sum_{\xi \in I_{i, k}^{\prime}}\left(\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F_{\xi}\left(x_{k}\right)\right)\right)-\nabla F\left(x_{k+1}\right)+\nabla F\left(x_{k}\right)\right\|^{2}\mid x_{k+1},x_k\right]}\\ &\quad+(1-p) {{\mathbb E}}xp{\left[\left\|g_{k}-\nabla F\left(x_{k}\right)\right\|^{2}\mid x_k\right]}+\frac{p\sum_{i=1}^n\sigma_{i}^2}{n^{2} b}, \end{aligned} \end{equation} here $I'_{i,k}$ consists of $b'$ elements i.i.d. sampled from distribution $\mathcal{D}_i$. In the following, we use the notation: $\widetilde{{{D}}elta}_{i}^{k}=\frac{1}{b^{\prime}} \sum_{\xi \in I_{i, k}^{\prime}}\left(\nabla F_{\xi}\left(x_{k+1}\right)-\nabla F_{\xi}\left(x_{k}\right)\right)$ and ${{D}}elta_{i}^{k}=\nabla F_{i}\left(x_{k+1}\right)-\nabla F_{i}\left(x_{k}\right)$. They satisfy ${{\mathbb E}}\left[\widetilde{{{D}}elta}_{i}^{k} \mid x_{k+1}, x_{k}\right]={{D}}elta_{i}^{k}$ for all $i \in[n]$. 
Moreover, $\mathcal{Q}\left(\tilde{{{D}}elta}_{1}^{k}\right), \ldots, \mathcal{Q}\left(\tilde{{{D}}elta}_{n}^{k}\right)$ are independent random vectors, then we have \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}\\ &\leq(1-p){{\mathbb E}}xp{\normsq{\frac{1}{n}\sum_{i=1}^n\left(\mathcal{Q}(\widetilde{{{D}}elta}_{i}^{k})-{{D}}elta_i^k\right)}\mid x_{k+1},x_k}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}+\frac{p\sum_{i=1}^n\sigma_{i}^2}{n^{2} b}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n{{\mathbb E}}xp{\normsq{\mathcal{Q}(\widetilde{{{D}}elta}_{i}^{k})-\widetilde{{{D}}elta}_{i}^{k}+\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_{i}^k}\mid x_{k+1},x_k}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}+\frac{p\sum_{i=1}^n\sigma_{i}^2}{n^{2} b}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left({{\mathbb E}}xp{\normsq{\mathcal{Q}(\widetilde{{{D}}elta}_{i}^{k})-\widetilde{{{D}}elta}_{i}^{k}}\mid x_{k+1},x_k}+{{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_{i}^k}\mid x_{k+1},x_k}\right)+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &\quad +\frac{p\sum_{i=1}^n\sigma_{i}^2}{n^{2} b}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left(\omega {{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}}\mid x_{k+1},x_k}+{{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}\right)+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &\quad+\frac{p\sum_{i=1}^n\sigma_{i}^2}{n^{2} b}\\ &=\frac{1-p}{n^2}\sum_{i=1}^n\left(\omega {{\mathbb E}}xp{\normsq{{{{D}}elta}_{i}^{k}}\mid x_{k+1},x_k}+(1+\omega){{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}\right)+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}\mid x_k}\\ &\quad+\frac{p\sum_{i=1}^n\sigma_{i}^2}{n^{2} b}. \end{aligned} \end{equation} Now we need to calculate ${{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}$. Let $\xi_{i,s}^k\sim\mathcal{D}_i,s=1,2,\ldots,b'$ be $b'$ i.i.d. random variables, $I_{i,k}^{\prime}:=\bigcup_{s=1}^{b'}\xi_{i,s}^k$ and denote $a_{\xi_{i,s}^k}:=\nabla F_{\xi_{i,s}^k}(x_{k+1})-\nabla F_{\xi_{i,s}^k}(x_k),a_i:=\nabla F_i(x_{k+1})-\nabla F_i(x_k)$, then \begin{equation} \begin{aligned} \widetilde{{{D}}elta}_i^k&=\frac{1}{b'}\sum_{\xi \in I_{i, k}^{\prime}}a_{\xi}=\frac{1}{b'}\sum_{k=1}^{b'}a_{\xi_{i,s}^k}. \end{aligned} \end{equation} So \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{\widetilde{{{D}}elta}_{i}^{k}-{{D}}elta_i^k}\mid x_{k+1},x_k}\\ &={{\mathbb E}}xp{\normsq{\frac{1}{b'}\sum_{k=1}^{b'}\left(a_{\xi_{i,s}^k}-a_i\right)}\mid x_{k+1},x_k}\\ &=\frac{1}{b'^2}\sum_{k=1}^{b'}{{\mathbb E}}xp{\normsq{a_{\xi_{i,s}^k}-a_i}\mid x_{k+1},x_k}+\frac{1}{b'^2}\sum_{k\neq k'}{{\mathbb E}}xp{\inner{a_{\xi_{i,s}^k}-a_i}{a_{\chi_{i,k'}}-a_i}\mid x_{k+1},x_k}\\ &=\frac{1}{b'^2}\sum_{k=1}^{b'}{{\mathbb E}}xp{\normsq{a_{\xi_{i,s}^k}-a_i}\mid x_{k+1},x_k}\\ &=\frac{1}{b'^2}\sum_{k=1}^{b'}{{\mathbb E}}xp{\normsq{a_{\xi_{i,s}^k}-a_i}\mid x_{k+1},x_k}\\ &=\frac{1}{b'}\left({{\mathbb E}}xp{\normsq{a_{\xi_{i,1}^k}}\mid x_{k+1},x_k}-\normsq{a_i}\right)\\ &\leq\frac{1}{b'}{{\mathbb E}}xp{\normsq{a_{\xi_{i,1}^k}}\mid x_{k+1},x_k}\\ &\leq \frac{\mathcal{L}_i^2}{b'}\normsq{x_{k+1}-x_k}. 
\end{aligned} \end{equation} Combining this with the tower property, we finally get \begin{equation} \begin{aligned} &{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}}\\ &= {{\mathbb E}}xp{{{\mathbb E}}xp{\normsq{g_{k+1}-\nabla F(x_{k+1})}\mid x_{k+1},x_k}}\\ &\leq\frac{1-p}{n^2}\sum_{i=1}^n\left(\omega L_i^2+\frac{(1+\omega)\mathcal{L}_i^2}{b'}\right){{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}} +\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}\\ &=(1-p)L^2\frac{\omega\sum_{i=1}^nL_i^2+(1+\omega)\frac{\sum_{i=1}^n\mathcal{L}^2_i}{b'}}{n^2L^2}{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}} +(1-p){{\mathbb E}}xp{\normsq{g_k-\nabla F(x_k)}}+\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}, \end{aligned} \end{equation} so $\alpha=\frac{\omega\sum_{i=1}^nL_i^2+(1+\omega)\frac{\sum_{i=1}^n\mathcal{L}^2_i}{b'}}{n^2L^2}$ and $\theta=\frac{p\sum_{i=1}^n\sigma_i^2}{n^2b}$. \subsection{Proof of Theorem \ref{thm:federatedsampling}}\label{sec:b5} The following lemma is an integral form of the Gr{\"o}nwall inequality from \cite[Chapter II]{amann2011ordinary}, which plays an important role in the proofs of Theorems \ref{thm:federatedsampling} and \ref{thm:mainthmopawa}. \begin{lemma}[Gr{\"o}nwall Inequality]\label{lem:gronwalllll} Assume $\phi, B:[0, T] \rightarrow \mathbb{R}$ are bounded non-negative measurable functions and $C:[0, T] \rightarrow \mathbb{R}$ is a non-negative integrable function with the property that \begin{equation} \label{eq:grrrr1} \phi(t) \leq B(t)+\int_{0}^{t} C(\tau) \phi(\tau) d \tau \quad \text { for all } t \in[0, T]. \end{equation} Then \begin{equation} \label{eq:grrrr2} \phi(t) \leq B(t)+\int_{0}^{t} B(s) C(s) \exp \left(\int_{s}^{t} C(\tau) d \tau\right) d s \quad \text { for all } t \in[0, T]. \end{equation} \end{lemma} We also need the following lemma from \cite[Lemma 16]{chewi2021}. \begin{lemma} \label{lem:Chew} Assume that $\nabla F$ is $L$-Lipschitz. For any probability measure $\mu$, it holds that \begin{equation} \label{eq:chew} \mathbb{E}_{\mu}\left[\|\nabla F\|^{2}\right] \leq \mathbb{E}_{\mu}\left[\left\|\nabla \log( \frac{\mu}{ \pi})\right\|^{2}\right]+2 d L ={\mathcal F}S{\mu}+2dL. \end{equation} \end{lemma} Recall the definitions of the KL divergence and of the Fisher information: \begin{equation} \KL{\rho_t}:=\int_{{\mathbb R}^d}\log(\frac{\rho_t}{\pi})(x)d\rho_t,\quad{\mathcal F}S{\rho_t}:=\int_{{\mathbb R}^d}\normsq{\nabla \log(\frac{\rho_t}{\pi})}d\rho_t. \end{equation} We follow the proof of \cite[Lemma 3]{vempala2019rapid}. Consider the following SDE \begin{equation} dx_t=-f_{\xi}(x_0)dt+\sqrt{2}dB_t, \end{equation} where $f_{\cdot}(\cdot): {\mathbb R}^d\times{\mathcal X}i\to{\mathbb R}^d$ and $\left({\mathcal X}i,\mathcal{F},\rho\right)$ is some probability space. Let $\rho_{0t}(x_0,{\xi},x_t)$ denote the joint distribution of $\left(x_0,\xi,x_t\right)$, which we write in terms of the conditionals and marginals as $$ \rho_{0 t}\left(x_{0},\xi, x_{t}\right)=\rho_{0}\left(x_{0},\xi\right) \rho_{t \mid 0}\left(x_{t} \mid x_{0},\xi\right)=\rho_{t}\left(x_{t}\right) \rho_{0 \mid t}\left(x_{0},\xi \mid x_{t}\right) .
$$ Conditioning on $\left(x_{0},\xi\right)$, the drift vector field $f_{\xi}(x_0)$ is a constant, so the Fokker-Planck formula for the conditional density $\rho_{t \mid 0}\left(x_{t} \mid x_{0}\right)$ is $$ \frac{\partial \rho_{t \mid 0}\left(x_{t} \mid x_{0},\xi\right)}{\partial t}=\nabla \cdot\left(\rho_{t \mid 0}\left(x_{t} \mid x_{0},\xi\right) f_{\xi}\left(x_{0}\right)\right)+{{D}}elta \rho_{t \mid 0}\left(x_{t} \mid x_{0},\xi \right) $$ To derive the evolution of $\rho_{t}$, we take expectation over $\left(x_{0},\xi\right) \sim \rho_{0}$, we obtain \begin{equation} \begin{aligned} \frac{\partial \rho_{t}(x)}{\partial t} &=\int_{\mathbb{R}^{d}\times{\mathcal X}i} \frac{\partial \rho_{t \mid 0}\left(x \mid x_{0},\xi\right)}{\partial t} \rho_{0}\left(x_{0},\xi\right) d x_{0}d\xi\\ &=\int_{\mathbb{R}^{d}\times{\mathcal X}i}\left(\nabla \cdot\left(\rho_{t \mid 0}\left(x_{t} \mid x_{0},\xi\right) f_{\xi}\left(x_{0}\right)\right)+{{D}}elta \rho_{t \mid 0}\left(x_{t} \mid x_{0},\xi \right)\right) \rho_{0}\left(x_{0},\xi\right) d x_{0}d\xi \\ &=\int_{\mathbb{R}^{d}\times{\mathcal X}i}\left(\nabla \cdot\left(\rho_{0t}\left(x, x_{0},\xi\right) f_{\xi}\left(x_{0}\right)\right)+{{D}}elta \rho_{0t}\left(x, x_{0},\xi\right)\right) d x_{0}d\xi \\ &=\nabla \cdot\left(\rho_{t}(x) \int_{\mathbb{R}^{d}\times{\mathcal X}i} \rho_{0 \mid t}\left(x_{0} ,\xi\mid x\right) f_{\xi}\left(x_{0}\right) d x_{0}d\xi\right)+{{D}}elta \rho_{t}(x) \\ &=\nabla \cdot\left(\rho_{t}(x) \mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]\right)+{{D}}elta \rho_{t}(x), \end{aligned} \end{equation} so we have \begin{equation} \label{eq:tytytyti} \begin{aligned} \dif{\KL{\rho_t}}&=\int_{{\mathbb R}^d}\left(\nabla \cdot\left(\rho_{t}(x) \mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]\right)+{{D}}elta \rho_{t}(x)\right)\log(\frac{\rho_t}{\pi})(x)dx\\ &=-\int_{{\mathbb R}^d}\inner{\mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]+\nabla\log(\rho_t)(x)}{\nabla\log(\frac{\rho_t}{\pi})(x)}\rho_t(x)dx\\ &=-\int_{{\mathbb R}^d}\inner{\nabla\log(\frac{\rho_t}{\pi})(x)-\nabla\log(\frac{\rho_t}{\pi})(x)+\mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]+\nabla\log(\rho_t)(x)}{\nabla\log(\frac{\rho_t}{\pi})(x)}\rho_t(x)dx\\ &=-\int_{{\mathbb R}^d}\inner{\nabla\log(\frac{\rho_t}{\pi})(x)+\mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]-\nabla F(x)}{\nabla\log(\frac{\rho_t}{\pi})(x)}\rho_t(x)dx\\ &=-{\mathcal F}S{\rho_t}-\int_{{\mathbb R}^d}\inner{\mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]-\nabla F(x)}{\nabla\log(\frac{\rho_t}{\pi})(x)}\rho_t(x)dx\\ &\leq -{\mathcal F}S{\rho_t}+\frac{1}{4}{\mathcal F}S{\rho_t}+\int_{{\mathbb R}^d}\inner{\mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]-\nabla F(x)}{\mathbb{E}_{\rho_{0 \mid t}}\left[f_{\xi}\left(x_{0}\right) \mid x_{t}=x\right]-\nabla F(x)}\rho_t(x)dx\\ &\leq-\frac{3}{4}{\mathcal F}S{\rho_t}+{{\mathbb E}}xp{\normsq{\mathbb{E}\left[f_{\xi}(x_0)-\nabla F(x_t)\mid x_t\right]}}\\ &\leq-\frac{3}{4}{\mathcal F}S{\rho_t}+{{\mathbb E}}xp{\mathbb{E}\left[\normsq{f_{\xi}(x_0)-\nabla F(x_t)}\mid x_t\right]}\\ &= -\frac{3}{4}{\mathcal F}S{\rho_t}+{{\mathbb E}}xp{\normsq{f_{\xi}(x_0)-\nabla F(x_t)}}. 
\end{aligned} \end{equation} If we replace $f_{\xi}(x_0)$ by $g_k(x_k)$ in \eqref{eq:tytytyti}, we will have \begin{equation} \label{eq:KLM} \begin{aligned} \dif{\KL{\rho_t}}&\leq -\frac{3}{4}{\mathcal F}S{\rho_t}+{{\mathbb E}}xp{\normsq{\nabla F(x_t)-g_k}}\\ &\leq -\frac{3}{4}{\mathcal F}S{\rho_t}+2{{\mathbb E}}xp{\normsq{\nabla F(x_t)-\nabla F(x_k)}}+2{{\mathbb E}}xp{\normsq{\nabla F(x_k)-g_k}}\\ &=-\frac{3}{4}{\mathcal F}S{\rho_t}+2\underbrace{{{\mathbb E}}xp{\normsq{\nabla F(x_t)-\nabla F(x_k)}}}_{A}+2{\textbf G}_k. \end{aligned} \end{equation} We bound term $A$, denote $\mathcal{F}^k_t$ the filtration generated by $\{B_{s}\}_{s=0}^{kh+t}$, then \begin{equation} \label{eq:temA} A=\mathbb{E}_{\rho_t}\left[\mathbb{E}\left[\normsq{\nabla F(x_t)-\nabla F(x_k)}\mid\mathcal{F}^k_t\right]\right], \end{equation} next we estimate the inner expectation, \begin{equation} \label{eq:innerexp} \begin{aligned} \mathbb{E}\left[\normsq{\nabla F(x_t)-\nabla F(x_k)}\mid\mathcal{F}^k_t\right]&\leq L^2\mathbb{E}\left[\normsq{ x_t-x_k}\mid\mathcal{F}^k_t\right]\\ &=L^2\mathbb{E}\left[\normsq{tg_k+\sqrt{2}\left(B_{kh+t}-B_{kh}\right)}\mid\mathcal{F}^k_t\right]\\ &=L^2t^2\normsq{g_k}+2L^2dt\\ &\leq L^2h^2\normsq{g_k}+2L^2dh\\ &= L^2\mathbb{E}\left[\normsq{ x_{k+1}-x_k}\mid\mathcal{F}^k_h\right], \end{aligned} \end{equation} from \eqref{eq:innerexp} and \eqref{eq:KLM}, we finally have \begin{equation} \label{eq:finahave} \dif{\KL{\rho_t}}\leq -\frac{3}{4}{\mathcal F}S{\rho_t}+2L^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2{\textbf G}_k. \end{equation} Next, we use Lemma \ref{lem:Chew} to bound ${{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}$, \begin{equation} \label{eq:bounddif1} \begin{aligned} {{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}&=h^2{{\mathbb E}}xp{\normsq{g_k}}+2dh\\ &\leq 2h^2\left({{\mathbb E}}xp{\normsq{\nabla F(x_k)-g_k}}+{{\mathbb E}}xp{\normsq{\nabla F(x_k)}}\right)+2dh\\ & =2h^2 {{\mathbb E}}xp{\normsq{\nabla F(x_k)}}+2h^2{\textbf G}_k+2dh\\ &\leq 4h^2\left({{\mathbb E}}xp{\norm{\nabla F(x_t)}}+{{\mathbb E}}xp{\normsq{\nabla F(x_t)-\nabla F(x_k)}}\right)+2h^2{\textbf G}_k+2dh\\ &\leq 4h^2{{\mathbb E}}xp{\norm{\nabla F(x_t)}}+4L^2h^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2h^2{\textbf G}_k+2dh, \end{aligned} \end{equation} so let $h\leq\frac{1}{2\sqrt{2}L}$, we have \begin{equation} \label{eq:kk+1} {{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}\leq 8h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+4h^2{\textbf G}_k+4dh. 
\end{equation} Add $C{\textbf G}_{k+1}$ to both sides of inequality \eqref{eq:finahave}, $C$ is some constant to be determined later, then use {\mathcal C}ref{def:marina}, the right hand side of \eqref{eq:finahave} will be \begin{equation} \label{eq:Kfinahave} \begin{aligned} RHS&=-\frac{3}{4}{\mathcal F}S{\rho_t}+2L^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2{\textbf G}_k+C{\textbf G}_{k+1}\\ &\leq-\frac{3}{4}{\mathcal F}S{\rho_t}+2L^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2{\textbf G}_k+C\left((1-p){\textbf G}_k+(1-p)L^2\alpha{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+\theta\right)\\ &=-\frac{3}{4}{\mathcal F}S{\rho_t}+\left(2L^2+C(1-p)L^2\alpha\right){{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+\left(2+C(1-p)\right){\textbf G}_k+C\theta\\ &\overset{\eqref{eq:kk+1}}{\leq}-\frac{3}{4}{\mathcal F}S{\rho_t}+\left(2L^2+C(1-p)L^2\alpha\right)\left(8h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+4h^2{\textbf G}_k+4dh\right)\\ &\quad+\left(2+C(1-p)\right){\textbf G}_k+C\theta\\ &\overset{\text{Lemma}~\ref{lem:Chew}}{\leq}-\frac{3}{4}{\mathcal F}S{\rho_t}+\left(2L^2+C(1-p)L^2\alpha\right)\left(8h^2\left({\mathcal F}S{\mu}+2nL\right)+4h^2{\textbf G}_k+4dh\right)\\ &\quad+\left(2+C(1-p)\right){\textbf G}_k+C\theta\\ &=-\left(\frac{3}{4}-8h^2\left(2L^2+C(1-p)L^2\alpha\right)\right){\mathcal F}S{\rho_t}+\left(8L^2h^2+C(1-p)\left(4L^2h^2\alpha+1\right)+2\right){\textbf G}_k\\ &\quad+\underbrace{\left(2L^2+C(1-p)L^2\alpha\right)\left(8Lh^2d+4dh\right)+C\theta}_{\text{denote as} ~\tau}\\ &=-\left(\frac{3}{4}-8h^2\left(2L^2+C(1-p)L^2\alpha\right)\right){\mathcal F}S{\rho_t}+\left(8L^2h^2+C(1-p)\left(4L^2h^2\alpha+1\right)+2\right){\textbf G}_k+\tau. \end{aligned} \end{equation} Choose parameter $\beta$~($\beta=1$ or $\beta=e^{\mu h}$) and let $C=\left(8L^2h^2+C(1-p)\left(4L^2h^2\alpha+1\right)+2\right)\beta$, solve this, we get \begin{equation} \label{eq:newC} C=\frac{8L^2h^2\beta+2\beta}{1-(1-p)\left(4L^2h^2\alpha+1\right)\beta}. \end{equation} To make sure $C\geq 0$, we should require $h\leq \frac{1}{2L}\sqrt{\frac{p}{(1-p)\alpha}}$ when $\beta=1$, the case $\beta=e^{\mu h}$ is a bit complicated: when $h$ small (for example $h\leq\frac{1}{\mu}$). we have $\beta=e^{\mu h}\leq 1+2\mu h$, insert this into the denominator of $C$ and make sure it positive, that is \begin{equation} \label{re:eq} 1-(1-p)\left(4L^2\alpha h^2+1\right)\left(1+2\mu h\right)>0, \end{equation} which is equivalent to \begin{equation} \label{re:ssdsw} \underbrace{\frac{1-p}{p}8L^2\mu\alpha h^3}_{I}+\underbrace{\frac{1-p}{p}4L^2\alpha h^2}_{II}+\underbrace{\frac{1-p}{p}2\mu h}_{III}<1, \end{equation} one simple solution for \eqref{re:ssdsw} is to let $I<\frac{1}{3},II<\frac{1}{3},III<\frac{1}{3}$, which is \begin{equation}\label{re:ree} h< \min\{(\frac{p}{24L^2\mu\alpha(1-p)})^{1/3},(\frac{p}{12L^2\alpha (1-p)})^{1/2},\frac{p}{6\mu(1-p)}\}. \end{equation} Insert \eqref{eq:newC} into the parameter before ${{\mathbb E}}xp{\normsq{\nabla F(x_t)}}$ and require \begin{equation} \label{eq:1/22} 8h^2\left(2L^2+C(1-p)L^2\alpha\right)=8h^2\left(2L^2+\frac{8L^2h^2\beta+2\beta}{1-(1-p)\left(4L^2h^2\alpha+1\right)\beta}(1-p)L^2\alpha\right)\leq\frac{1}{4}, \end{equation} solve this we get \begin{equation}\label{eq:hbounddddd} h\leq\frac{1}{2L}\sqrt{\frac{1-(1-p)\beta}{16+(1-p)(17\alpha-16)\beta}}. 
\end{equation} If $\beta=1$, we need \begin{equation} \label{eq:hbounuuu} {h\leq\frac{1}{10L}\sqrt{\frac{p}{1+\alpha}}}\leq\frac{1}{2L}\sqrt{\frac{p}{17\alpha(1-p)}}\leq\frac{1}{2L}\min\{\sqrt{\frac{p}{16p+17\alpha(1-p)}},\sqrt{\frac{p}{\alpha(1-p)}}\} \end{equation} to guarantee $C\geq0$ and \eqref{eq:hbounddddd}. The $\beta=e^{\mu h}$ case is complicated: from \eqref{re:ree}, we know $h< \frac{p}{6\mu(1-p)}$, so $\beta=e^{\mu h}\leq 1+2\mu h\leq 1+\frac{p}{3(1-p)}$, insert this upper bound of $\beta$ into \eqref{eq:hbounddddd}, we get a lower bound of the right hand side of \eqref{eq:hbounddddd}, that is \begin{equation} \label{re:rere} \begin{aligned} \frac{1}{2L}\sqrt{\frac{2p}{17\alpha(3-2p)+32p}}&=\frac{1}{2L}\sqrt{\frac{1-(1-p)(1+\frac{p}{3(1-p)})}{16+(1-p)(17\alpha-16)(1+\frac{p}{3(1-p)})}}\\ &\leq\frac{1}{2L}\sqrt{\frac{1-(1-p)\beta}{16+(1-p)(17\alpha-16)\beta}}, \end{aligned} \end{equation} So we need {\begin{equation} \label{eq:reree} h<\min\{\frac{1}{2L}\sqrt{\frac{2p}{17\alpha(3-2p)+32p}},(\frac{p}{24L^2\mu\alpha(1-p)})^{1/3},(\frac{p}{12L^2\alpha (1-p)})^{1/2},\frac{p}{6\mu(1-p)}\}. \end{equation}} We can further simplify \eqref{eq:reree}: the first and third term in \eqref{eq:reree} will be greater than $\underbrace{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}}}_{a}$, the fourth term is greater than $\underbrace{\frac{p}{6\mu}}_{b}$ and $\min\{a,b\}$ is a lower bound of the second term in \eqref{eq:reree}, since $\min\{a,b\}\leq a^{2/3}b^{1/3}=(\frac{p^2}{1176L^2\mu(1+\alpha)})^{1/3}\leq (\frac{p}{24L^2\mu\alpha(1-p)})^{1/3}$. So finally when $\beta=e^{\mu h}$, we need {\begin{equation} \label{eq:rererererer} h\leq\min\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu}\} \end{equation}} to guarantee $C\geq0$ and \eqref{eq:hbounddddd}. Once we have $C\geq 0$ and \eqref{eq:1/22}, then \begin{equation} \label{eq:finnnnn} \dif{\KL{\rho_t}}+C{\textbf G}_{k+1}\leq -\frac{1}{2}{\mathcal F}S{\rho_t}+\beta^{-1}C{\textbf G}_k+C\tau. \end{equation} \textbf{Case I.} $\beta=1$, integrate \eqref{eq:finnnnn}, we have then \begin{equation} \label{eq:nonconvexLA} \int_{kh}^{(k+1)h}{\mathcal F}S{\rho_t}dt\leq 2\left(\KL{\rho_{kh}}+C{\textbf G}_kh-\left(\KL{\rho_{(k+1)h}}+C{\textbf G}_{k+1}h\right)\right)+C\tau h, \end{equation} use \eqref{eq:nonconvexLA} for $k=0,1,2,\cdots,K$, we finally have { \begin{equation} \label{eq:nonconvexLAf} \begin{aligned} \frac{1}{Kh}\int_{0}^{Kh}{\mathcal F}S{\rho_t}dt&\leq \frac{2\left(\KL{\rho_{0}}+C{\textbf G}_0h-\left(\KL{\rho_{Kh}}+C{\textbf G}_{K}h\right)\right)}{Kh}+\tau\\ &\leq \frac{2(\KL{\rho_0}+hC{\textbf G}_0)}{Kh}+\tau . \end{aligned} \end{equation}} \textbf{Case II.} Suppose $\pi$ satisfies LSI with parameter $\mu$, that is \begin{equation} \label{eq:LSI} \KL{\nu}\leq \frac{1}{2\mu}{\mathcal F}S{\nu}, \end{equation} so with LSI, we have from \eqref{eq:finnnnn} that \begin{equation} \label{eq:LSILA} \dif{\KL{\rho_t}}+C{\textbf G}_{k+1}\leq-\mu\KL{\rho_t}+\beta^{-1}C{\textbf G}_k+\tau. 
\end{equation} Changing \eqref{eq:LSILA} into its equivalent integral form, it satisfies \eqref{eq:grrrr1} with $\phi(t)=\KL{\rho_t},~B(t)=\left(\beta^{-1}C{\textbf G}_k-C{\textbf G}_{k+1}+\tau\right)t+\KL{\rho_{kh}},~C(t)=-\mu$, so by \eqref{eq:grrrr2}, we have \begin{equation} \label{eq;KLKLKL} \KL{\rho_t}\leq e^{-\mu t}\KL{\rho_{kh}}+\frac{1-e^{-\mu t}}{\mu}\left(\beta^{-1}C{\textbf G}_k-C{\textbf G}_{k+1}+\tau\right). \end{equation} Letting $t=h$ and $\beta=e^{\mu h}$, we have \begin{equation} \label{eq:henhao} \begin{aligned} \KL{\rho_{(k+1)h}}+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{k+1}&\leq e^{-\mu h}\left(\KL{\rho_{kh}}+e^{\mu h}\frac{1-e^{-\mu h}}{\mu}\beta^{-1}C{\textbf G}_k\right)+\frac{1-e^{-\mu h}}{\mu}\tau\\ &=e^{-\mu h}\left(\KL{\rho_{kh}}+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{k}\right)+\frac{1-e^{-\mu h}}{\mu}\tau. \end{aligned} \end{equation} Using \eqref{eq:henhao} for $k=0,1,2,\cdots,K-1$, we finally have { \begin{equation} \label{eq:fina666} \begin{aligned} \KL{\rho_{Kh}}&\leq\KL{\rho_{Kh}}+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{K}\\ &\leq e^{-\mu Kh}\left(\KL{\rho_0}+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{0}\right)+\frac{1-e^{-K\mu h}}{\mu}\tau, \end{aligned} \end{equation}} which proves {\mathcal C}ref{thm:federatedsampling}. \subsection{Complexity of Langevin-MARINA} If $h\leq\min\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu}\}$, we will have $C=\mathcal{O}(\frac{1}{p})$. To achieve $\KL{\rho_{K}}\leq e^{-\mu Kh}\Psi_3+\frac{1-e^{-\mu Kh}}{\mu}\tau=\mathcal{O}(\varepsilon)$, we need to bound the residual term $\frac{1-e^{-K\mu h}}{\mu}\tau$ and the contraction term $e^{-\mu K h}\Psi_3$ separately. When $\theta=0$, $h=\mathcal{O}\left(\frac{1}{\frac{4L^2(1+\alpha) d}{\mu p\varepsilon}+\sqrt{\frac{8L^3(1+\alpha) d}{\mu p\varepsilon}}}\right)=\mathcal{O}\left(\frac{\mu p\varepsilon}{L^2(1+\alpha) d}\right)$ is enough to make $\frac{1-e^{-K\mu h}}{\mu}\tau={\mathcal O}(\varepsilon)$; when $\theta\neq 0$, we further need $b=\Omega(\frac{\sum_{i=1}^n\sigma_i^2}{\mu n^2\varepsilon})$ to make the remaining term $\frac{1-e^{-K\mu h}}{\mu}C\theta={\mathcal O}(\varepsilon)$. For the contraction term, we need $K=\Omega\left(\frac{L^2(1+\alpha) d}{\mu^2 p\varepsilon}\log(\frac{\Psi_3}{\varepsilon})\right)$ (here we assume $\frac{\mu p\varepsilon}{L^2(1+\alpha) d}\ll \min \left\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu} \right\}$) to make $e^{-\mu K h}\Psi_3=\mathcal{O}(\varepsilon)$. So for all gradient estimators, we need $K=\Omega\left(\frac{L^2(1+\alpha) d}{\mu^2 p\varepsilon}\log(\frac{\Psi_3}{\varepsilon})\right)$ and $h=\mathcal{O}\left(\frac{\mu p\varepsilon}{L^2(1+\alpha) d}\right)$ to guarantee $\KL{\rho_{K}}=\mathcal{O}(\varepsilon)$; for the algorithm based on \eqref{eq:gradientestimator333}, we further need to assume $b=\Omega(\frac{\sum_{i=1}^n\sigma_i^2}{\mu n^2\varepsilon })$. The analysis of the complexity under the Total Variation distance and the $2$-Wasserstein distance is similar. We want $\normsq{\rho_{K}-\pi}_{TV}\leq {\mathcal O}(\varepsilon^2)$ and $W_2^2(\rho_{K},\pi)\leq {\mathcal O}(\varepsilon^2)$; by {\mathcal C}ref{cor:12}, we only need to guarantee $e^{-\mu Kh}\Psi_3+\frac{1-e^{-\mu Kh}}{\mu}\tau={\mathcal O}(\varepsilon^2)$ and $e^{-\mu Kh}\Psi_3+\frac{1-e^{-\mu Kh}}{\mu}\tau={\mathcal O} (\mu\varepsilon^2)$ respectively.
So we only need to replace the $\varepsilon$ in $K,h,b$ in the above by $\varepsilon^2$ and $\mu\varepsilon^2$, then we will have $\norm{\rho_{K}-\pi}_{TV}={\mathcal O}(\varepsilon)$ and $W_2(\rho_{K},\pi)={\mathcal O}( \varepsilon)$. \subsection{Proofs of Theorem \ref{thm:mainthmop} and \ref{thm:mainthmopawa}}\label{sec:1and2} Define two flows, \begin{equation} \label{eq:exflow11} {x}_t={x}_k-tg_k, \end{equation} \begin{equation} \label{eq:exflow21} {x}_t^s=\begin{cases} {x}_t & s< t\\ {x}_t-\int_t^s\nabla F({x}_t^l)dl & s\geq t \end{cases}, \end{equation} where $g_k$ is the gradient estimator of \eqref{eq:gradientestimator}, \eqref{eq:gradientestimator222} or \eqref{eq:gradientestimator333}, $x_0=x_k, x_h=x_{k+1}$, note ${x}_t^s$ is continuous with respect to $s$ and when $s=t$, ${x}_t^s={x}_t$. Then follow the same procedure as in the last section, we have \begin{equation} \label{eq:Marina1} \begin{aligned} \dif{F(x_t)}&=\frac{dF({x}^s_t)}{ds}\mid_{s=t}+\left(\dif{F({x}_t)}-\frac{dF({x}^s_t)}{ds}\mid_{s=t}\right)\\ &=-\normsq{\nabla F(x_t)}+\langle\nabla F({x}_t),\nabla F({x}_t)-g_k\rangle\\ &\leq -\normsq{\nabla F({x}_t)}+\frac{1}{4}\normsq{\nabla F({x}_t)}+\normsq{\nabla F({x}_t)-g_k}\\ &\leq-\frac{3}{4}\normsq{\nabla F(x_t)}+\normsq{\nabla F(x_t)-g_k}\\ &\leq -\frac{3}{4}\normsq{\nabla F(x_t)}+2\normsq{\nabla F(x_t)-\nabla F(x_k)}+2\normsq{\nabla F(x_k)-g_k}\\ &\leq -\frac{3}{4}\normsq{\nabla F(x_t)}+2L^2\normsq{x_{k+1}-x_k}+2\normsq{\nabla F(x_k)-g_k}, \end{aligned} \end{equation} since $\normsq{\nabla F(x_t)-\nabla F(x_k)}\leq L^2t^2\normsq{g_k}\leq L^2h^2\normsq{g_k}=L^2\norm{x_{k+1}-x_k}$. We define ${\textbf G}_k:={{\mathbb E}}xp{\normsq{\nabla F(x_k)-g_k}}$, then take expectation on both sides of \eqref{eq:Marina1}, we have \begin{equation} \label{eq:Marinannna} \dif{{{\mathbb E}}xp{F(x_t)}}\leq -\frac{3}{4}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+2L^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2{\textbf G}_k. \end{equation} Next, we bound ${{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}$, \begin{equation} \label{eq:b1} \begin{aligned} {{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}&=h^2{{\mathbb E}}xp{\normsq{g_k}}\\ &\leq 2h^2{{\mathbb E}}xp{\normsq{\nabla F(x_k)}}+2h^2{{\mathbb E}}xp{\normsq{\nabla F(x_k)-g_k}}\\ &\leq 4h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+4h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)-\nabla F(x_k)}}+2h^2{\textbf G}_k\\ &\leq 4h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+4L^2h^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2h^2{\textbf G}_k, \end{aligned} \end{equation} which is equivalent to \begin{equation} \label{eq:bb1} \left(1-4L^2h^2\right){{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}\leq 4h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+2h^2{\textbf G}_k. \end{equation} If we let $h\leq\frac{1}{2\sqrt{2}L}$, then based on \eqref{eq:bb1}, we have \begin{equation} \label{eq:b2} {{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}\leq 8h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+4h^2{\textbf G}_k. \end{equation} Add $C{\textbf G}_{k+1}$ to both sides of \eqref{eq:Marinannna}, $C$ is some parameter to be determined later. 
Then we calculate the right hand side based on ~{\mathcal C}ref{def:marina}, \begin{equation} \label{eq:rhs} \begin{aligned} RHS&= -\frac{3}{4}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+2L^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2{\textbf G}_k+C{\textbf G}_{k+1}\\ &\leq-\frac{3}{4}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+2L^2{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+2{\textbf G}_k+C\left((1-p){\textbf G}_k+(1-p)L^2\alpha{{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+\theta\right)\\ &\leq -\frac{3}{4}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+\left(2L^2+C(1-p)L^2\alpha\right){{\mathbb E}}xp{\normsq{x_{k+1}-x_k}}+\left(2+C(1-p)\right){\textbf G}_k+C\theta\\ &\leq -\frac{3}{4}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+\left(2L^2+C(1-p)L^2\alpha\right)\left(8h^2{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+4h^2{\textbf G}_k\right)+\left(2+C(1-p)\right){\textbf G}_k+C\theta\\ &\leq -\left(\frac{3}{4}-8h^2\left(2L^2+C(1-p)L^2\alpha\right)\right){{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+\left(8L^2h^2+C(1-p)\left(4L^2h^2\alpha+1\right)+2\right){\textbf G}_k+C\theta. \end{aligned} \end{equation} Choose $\beta$~(we will set $\beta=1$~or~$\beta=e^{\mu h}$~in the later)~ and let $C=\left(8L^2h^2+C(1-p)\left(4L^2h^2\alpha+1\right)+2\right)\beta$, solve this equation, we get \begin{equation} \label{eq:C} C=\frac{8L^2h^2\beta+2\beta}{1-(1-p)\left(4L^2h^2\alpha+1\right)\beta}. \end{equation} We need \begin{equation}\label{eq:sfskfj} C\geq0,\quad \frac{3}{4}-8h^2\left(2L^2+C(1-p)L^2\alpha\right)\geq \frac{1}{2}. \end{equation} By similar analysis as in {\mathcal C}ref{sec:b5}, we need require {$h\leq \frac{1}{10L}\sqrt{\frac{p}{1+\alpha}}$} when $\beta=1$ and {$h\leq\min\{\frac{1}{14L}\sqrt{\frac{p}{1+\alpha}},\frac{p}{6\mu}\}$} when $\beta=e^{\mu h}$ to guarantee \eqref{eq:sfskfj}. Once we have \eqref{eq:sfskfj}, then we finally have by \eqref{eq:rhs} \begin{equation} \label{eq:final1} \dif{{{\mathbb E}}xp{F(x_t)}}\leq-\frac{1}{2}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+\beta^{-1}CG_{k}- C{\textbf G}_{k+1}+C\theta. \end{equation} \textbf{Case I.} Let $\beta=1$, then we have \begin{equation} \label{eq:case1} \dif{{{\mathbb E}}xp{F(x_t)}}+C{\textbf G}_{k+1}\leq -\frac{1}{2}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}+C{\textbf G}_k+C\theta, \end{equation} where $C$ is defined in \eqref{eq:C}, integrating both sides from $0$ to $h$, we have \begin{equation} \label{case1sol} {{\mathbb E}}xp{F(x_{k+1})}-{{\mathbb E}}xp{F(x_k)}+hC{\textbf G}_{k+1}\leq -\frac{1}{2}\int_{0}^{h}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}dt+hC{\textbf G}_k+hC\theta, \end{equation} which is equivalent to \begin{equation} \label{eq:casesol1} \int_{0}^{h}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}dt\leq 2\left(\left({{\mathbb E}}xp{F(x_k)}+hC{\textbf G}_k\right)-\left({{\mathbb E}}xp{F(x_{k+1}}+hC{\textbf G}_{k+1})\right)\right)+2hC\theta. \end{equation} If we first do the same procedure as above for $k=0,1,2,\cdots, K-1$ then take summation from both sides, we will have { \begin{equation} \label{eq:case1sol2} \begin{aligned} \frac{1}{Kh}\int_0^{Kh}{{\mathbb E}}xp{\normsq{\nabla F(x_t)}}dt&\leq\frac{2\left(\left({{\mathbb E}}xp{F(x_0)}+hC{\textbf G}_0\right)-\left({{\mathbb E}}xp{F(x_{K})}+hC{\textbf G}_{K}\right)\right)}{Kh}+2C\theta\\ &\leq \frac{2\left(F(x_0)+hC{\textbf G}_0-F(x^*)\right)}{Kh}+2C\theta, \end{aligned} \end{equation} } this proves {\mathcal C}ref{thm:mainthmop}. \textbf{Case II.}Suppose Lojasiewicz condition holds, that is \begin{equation} \label{eq:PL1111111} \|\nabla F(x)\|^{2} \geq 2 \mu\left(F(x)-\min F\right),\quad\forall x \in \mathbb{R}^{d} . 
\end{equation} Combine \eqref{eq:final1} and \eqref{eq:PL1111111}, we have \begin{equation} \label{eq:final2121212} \dif{\left({{\mathbb E}}xp{F(x_t)}-F(x^*)\right)}\leq-\mu \left({{\mathbb E}}xp{F(x_t)}-F(x^*)\right)+\beta^{-1}CG_{k}- C{\textbf G}_{k+1}+C\theta, \end{equation} which is equivalent to the integral form \begin{equation} \label{eq:final2} {{\mathbb E}}xp{F(x_t)}-F(x^*)\leq \left(\beta^{-1}C{\textbf G}_k- C{\textbf G}_{k+1}+C\theta\right)t+{{\mathbb E}}xp{F(x_k)}-F(x^*)+\int_0^t -\mu\left({{\mathbb E}}xp{F(x_{\tau})}-F(x^*)\right)d\tau,\quad t\in [0,h]. \end{equation} Now we use Gr{\"o}nwall inequality \ref{lem:gronwalllll}, note \eqref{eq:final2} satisfies \eqref{eq:grrrr1} with $\phi(t)=F(x_t)-F(x^*), B(t)=\left(\beta^{-1}C{\textbf G}_k- C{\textbf G}_{k+1}+C\theta\right)t+F(x_k)-F(x^*),C(t)=-\mu$, then by \eqref{eq:grrrr2}, we have \begin{equation} \label{eq:fina4} {{\mathbb E}}xp{F(x_t)}-F(x^*)\leq e^{-\mu t}\left({{\mathbb E}}xp{F(x_k)}-F(x^*)\right)+\frac{1-e^{-\mu t}}{\mu}\left(\beta^{-1}C{\textbf G}_{k}- C{\textbf G}_{k+1}+C\theta\right), \end{equation} let $t=h$ and $\beta=e^{\mu h}$, then we have \begin{equation} \label{eq:fina5} \begin{aligned} {{\mathbb E}}xp{F(x_{k+1})}-F(x^*)+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{k+1}&\leq e^{-\mu h}\left({{\mathbb E}}xp{F(x_k)}-F(x^*)+e^{\mu h}\frac{1-e^{-\mu h}}{\mu}\beta^{-1}C{\textbf G}_k\right)+\frac{1-e^{-\mu h}}{\mu}C\theta\\ &=e^{-\mu h}\left({{\mathbb E}}xp{F(x_{k})}-F(x^*)+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{k}\right)+\frac{1-e^{-\mu h}}{\mu}C\theta, \end{aligned} \end{equation} use \eqref{eq:fina5} for $k=0,1,2,\cdots,K-1$, we have finally { \begin{equation} \label{eq:fina6} \begin{aligned} {{\mathbb E}}xp{F(x_{K})}-F(x^*)&\leq{{\mathbb E}}xp{F(x_{K})}-F(x^*)+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{K}\\ &\leq e^{-\mu Kh}\left({{\mathbb E}}xp{F(x_{0})}-F(x^*)+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{0}\right)+\frac{1-e^{-K\mu h}}{\mu}C\theta\\ &\leq e^{-\mu K h}\left(F(x_0)+\frac{1-e^{-\mu h}}{\mu}C{\textbf G}_{0}-F(x^*)\right)+\frac{1-e^{-K\mu h}}{\mu}C\theta, \end{aligned} \end{equation}} this proves {\mathcal C}ref{thm:mainthmopawa}. \end{document}
\begin{document} \title{Spectral gap and stability for groups and non-local games} \date{\today} \keywords{} \author[M.~De~La~Salle]{Mikael De La Salle} \address{Université de Lyon, CNRS, France} \thanks{MdlS was funded by the ANR grants AGIRA ANR-16-CE40-0022 and Noncommutative analysis on groups and quantum groups ANR-19-CE40-0002-01} \email{[email protected]} \begin{abstract} The word stable is used to describe a situation when mathematical objects that almost satisfy an equation are close to objects satisfying it exactly. We study operator-algebraic forms of stability for unitary representations of groups and quantum synchronous strategies for non-local games. We observe in particular that simple spectral gap estimates can lead to strong quantitative forms of stability. For example, we prove that the direct product of two (flexibly) Hilbert-Schmidt stable groups is again (flexibly) Hilbert-Schmidt stable, provided that one of them has Kazhdan's property (T). We also provide a simple form and simple analysis of a non-local game with few questions, with the property that synchronous strategies with large value are close to perfect strategies involving large Pauli matrices. This simplifies one of the steps (the question reduction) in the recent announced resolution of Connes' embedding problem by Ji, Natarajan, Vidick, Wright and Yuen. \end{abstract} \maketitle The aim of this note is to present, in Theorem~\ref{thm:mainTheorem}, the construction of a $2$-player non-local game with $O(N^2)$ questions and $2^N$ answers where any quantum synchronous strategy with value $1-\varepsilon$ is $O(\varepsilon)$-close to a synchronous startegy involving Pauli matrices of size $2^N$ (see Section~\ref{section:games} for the precise definitions). This can be used to simplify the analysis of the Pauli basis test from \cite{MIPRE}, generalize it and provide better quantitative bounds. The proof does not use a specific quantum soundness property of any specific code, and the result is the particular case (applied to asymptotically good codes) of a general construction that takes as input any linear error-correcting code, see Example~\ref{ex:game_from_code}. The main originality in this work is Theorem~\ref{thm:almost_commutation}, which is a very easy consequence of a simple spectral gap argument (Lemma~\ref{lemma:spectral_gap_commutator}). As in previous work of Vidick \cite{vidickPauliBraiding}, the proof also relies on a general form of stability in average for representations of finite groups, but there is a new input in Lemma~\ref{lem:GowersHatami_subgroup}. It turns out that the same idea has a consequence about Hilbert-Schmidt stability that was apparently not known, although similar ideas are also present in the work of Ioana \cite{MR4134896}. See \S~\ref{sec:stability} for the terminology and a more precise and quantitative statement, that is also relevant for finite groups. \begin{theorem}\label{thm:stability_direct_product_simple} Let $G$ and $H$ be two countable groups, one of which has property (T). If $G$ and $H$ are both Hilbert-Schmidt stable, then $G\times H$ also. If $G$ and $H$ are both Hilbert-Schmidt flexibly stable, then $G\times H$ also. \end{theorem} The property (T) assumption is crucial, as Ioana recently proved \cite{Ioana2} that the direct product of two finitely generated non-abelian free groups is not flexibly Hilbert-Schmidt stable, whereas free groups are obviously Hilbert-Schmidt stable. 
In this note, we start from scratch and prove every statement that we need (we sometimes refer to \cite{orthonormalisation}, but the results that rely on it are not used in the proof of the main Theorem). So a significant part of the note consists of known facts, either classical or borrowed from \cite{MIPRE}. The paper is organized as follows. In Section~\ref{sec:preliminaries}, we present some preliminary facts on Fourier transform for abelian groups, spectral gaps and flexible stability for finite groups. In particular, we prove some variants of results of Gowers and Hatami \cite{MR3733361}. In Section~\ref{sec:main} we prove the main result in terms of stability for direct products of finite groups. In Section~\ref{section:games}, we translate this result in terms of non-local games. Section~\ref{sec:stability}, completely independent from the rest of the paper (except for the use of Lemma~\ref{lemma:spectral_gap_commutator}), is devoted to the proof of Theorem~\ref{thm:stability_direct_product_simple}. \section{Preliminaries}\label{sec:preliminaries} \subsection{Matrix and von Neumann algebras notation} A von Neumann algebra is a self-adjoint subalgebra of the algebra $B(\mathcal{H})$ of bounded operators on a complex Hilbert space $\mathcal{H}$ that is equal to its bicommutant. Here the commutant of a subset $A\subset B(\mathcal{H})$ is the algebra $A':=\{X\in B(\mathcal{H})\mid XY=YX \forall Y \in A\}$, and its bicommutant is the commutant of its commutant. Every von Neumann algebra is uniquely a dual space, which allows us to talk about its weak-* topology. A state on a von Neumann algebra $\mathcal{M}$ is a linear form $\mathcal{M} \to \mathbf{C}$ that is positive, meaning that $\tau(x^*x)\geq 0$ for every $x\in \mathcal{M}$, and normalized by $\tau(1)=1$. It is called faithful if $\tau(x^*x)=0$ holds only if $x=0$. It is called normal if it is continuous for the weak-* topology, and it is called a trace if $\tau(ab)=\tau(ba)$. In the whole note, we denote by $(\mathcal{M},\tau)$ a von Neumann algebra with a normal faithful tracial state. The main case of interest in $\mathcal{M} = M_n(\mathbf{C})$ for large $n$ and $\tau$ is the normalized trace $\tr = \frac{1}{n}\Tr$. The reader can safely assume throughout the paper that we are in this situation. We will denote $\|x\|_2 = \big(\tau(x^*x)\big)^{\frac 1 2}$ for the $L_2$ norm on $\mathcal{M}$. The completion of $\mathcal{M}$ for this norm is denoted $L_2(\mathcal{M},\tau)$. We also use the notation $\|x\|_q = \big(\tau( (x^*x)^{q/2})\big)^{\frac 1 q}$ for the $L_q$ norm and $1\leq q<\infty$, and $\|x\|_\infty$ for the operator norm. If $(\mathcal{M},\tau)$ is a von Neumann algebra with a faithful normal tracial state and if $\mathcal{N} \subset\mathcal{M}$ is an inclusion of von Neumann algebras, we obtain that $L_2(\mathcal{N},\tau)\subset L_2(\mathcal{M},\tau)$. The orthogonal projection $L_2(\mathcal{M},\tau) \to L_2(\mathcal{N},\tau)$ turns out to map $\mathcal{M}$ into $\mathcal{N}$, and the corresponding map is called the conditional expectation from $\mathcal{M}$ to $\mathcal{N}$. We also denote $\mathcal{M}_\infty = \mathcal{M} \overline{\otimes} B(\ell_2)$ with (infinite) trace $\tau_\infty:=\tau \otimes \Tr$. We often identify $\mathcal{M}$ with $\mathcal{M} \otimes e_{1,1} \subset \mathcal{M}_\infty$ (with a different unit $1_\mathcal{M} = 1 \otimes e_{1,1}$ than $1_{\mathcal{M}_\infty}$), and often use the same letter $\tau$ to denote the amplified trace $\tau_\infty$. 
We also use the notation $\|x\|_2 = \big(\tau_\infty(x^*x)\big)^{\frac 1 2}$ for $x \in \mathcal{M}_\infty$, but in this situation the norm can take the value $\infty$. A PVM (positive valued measure), on a Hilbert space $\mathcal{H}$ and indexed by a finite set $I$ is a family $(P_i)_{i\in I}$ of self-adjoint projections such that $\sum_i P_i=1$. We talk about PVMs in $\mathcal{M}$ if $\mathcal{M}\subset B(\mathcal{H})$ and $(P_i)_{i \in I}$ is a PVM on $\mathcal{H}$ with $P_i \in \mathcal{M}$ for every $i$. \subsection{Reminders on Fourier transform} Groups appearing in this paper will be denoted multiplicatively. We will often denote by $G$ arbitrary groups and by $A$ abelian groups, except in the last section where the letter $A$ will be reserved to answers. If a group $G$ is finite, we will denote $\mathbf{P}_G$ the uniform probability measure on $G$ and $\mathbf{E}_g f(g)$ the integration with respect to it. When several groups enter the picture and precisions are needed, we sometimes write $\mathbf{E}_{g \in G} f(g)$. We will write $L_2(G)$ and $\ell_2(G)$ for the functions $G \to \mathbf{C}$ with norms \[\|f\|_{L_2} = \left(\mathbf{E}_g |f(g)|^2\right)^{\frac 1 2}\textrm{ and }\|f\|_{\ell_2} = \left(\sum_{g \in G} |f(g)|^2\right)^{\frac 1 2}\] respectively. Of course, $L_2(G)$ will only make sense for finite groups. Let $A$ be a finite abelian group. Recall that a character on $A$ is a group homomorphism $A \to \{z \in \mathbf{C} \mid |z|=1\}$. The set of all characters of $A$, denoted $\hat A$, is a finite group for the operation of pointwise multiplication called the (Pontryagin) dual of $A$. Moreover, the dual of the dual of $A$ identifies naturally with $A$. The Fourier transform, which implements an isometry between $L_2(A)$ and $\ell_2(\hat A)$, also implements a correspondence between unitary representations of $A$ and PVMs on $\hat A$. \begin{lemma}\label{lem:Fourier_abelian_group} For every Hilbert space $\mathcal{H}$, the following maps, inverse of each other, are bijections between unitary representations of $A$ on $\mathcal{H}$ and PVMs on $\mathcal{H}$ indexed by $\hat A$~: \[ U \mapsto \left(P_\chi = \mathbf{E}_a \overline{\chi}(a) U(a)\right)_{\chi \in \hat A},\] \[ (P_\chi)_{\chi \in \hat A} \mapsto \left( U: a \mapsto \sum_\chi \chi(a) P_\chi\right).\] \end{lemma} \begin{proof} This is well-known and follows from the orthogonality of characters. \end{proof} \subsection{Spectral gap preliminaries}\label{sec:spectral_gap} Let $G$ be a countable group (for example a finite group). If $\mu$ is a symmetric probability measure on $G$ with generating support, then for every unitary representation $(\pi,\mathcal{H})$, $\pi(\mu):=\sum_g \mu(g)\pi(g)$ is a self-adjoint operator of norm $\leq 1$, whose eigenspace for the eigenvalue $1$ is the space of invariant vectors $\mathcal{H}^\pi=\{\xi\in\mathcal{H} \mid \forall g\in G, \pi(g)\xi=\xi\}$ (this uses that the support of $\mu$ generates the whole group $G$). We say that $\mu$ has spectral gap is there is a real number $\kappa$ such that for every unitary representation $(\pi,\mathcal{H})$ of $G$, the spectrum of $\pi(\mu)$ in contained in $[-1,1-\frac{1}{\kappa}]\cup\{1\}$, or equivalently is the restriction of $\pi(\mu)$ to the orthogonal of $\mathcal{H}^\pi$ has spectrum contained in $[-1,1-\frac{1}{\kappa}]$. The smallest such $\kappa$ is denoted by $\kappa(\mu)$, it is the inverse of the spectral gap. If $\mu$ does not have spectral gap, we set $\kappa(\mu)=\infty$. 
The reason for this parametrization of the spectral gap is because it is proportional to the constant appearing in the Poincar\'e inequalities, and is related to the Kazhdan constants. Specifically, $\kappa(\mu)$ is the smallest real number such that, for every unitary representation $(\pi,\mathcal{H})$ of $G$, and every vector $\xi \in \mathcal{H}$, \begin{equation}\label{eq:Poincare_constant} \|\xi - P_{\mathcal{H}^\pi} \xi\|^2 \leq \frac\kappa 2 \int_G \|\pi(g) \xi-\xi\|_2^2d\mu(g). \end{equation} Here $P_{\mathcal{H}^\pi}$ is the orthogonal projection on $\mathcal{H}^\pi$. Let us justify this characterization of $\kappa$. Indeed, by expanding the square norms, \eqref{eq:Poincare_constant} is equivalent to the inequality \[ \|\xi\|^2 \leq \kappa (\|\xi\|^2 - \langle\pi(\mu)\xi,\xi\rangle) \forall \xi \in (\mathcal{H}^\pi)^\perp,\] or in other words (Rayleigh quotients) the spectrum of $\pi(\mu)$ restricted to $(\mathcal{H}^\pi)^\perp = \ker(\pi(\mu)-1)^\perp$ consists of real numbers $\lambda$ satisfying $1\leq \kappa(1-\lambda)$, that is $\lambda \leq 1-\frac{1}{\kappa}$. For a given group $G$, $\mu$ has spectral gap if and only if $G$ has Kazhdan's property (T). In particular, the property that $\kappa(\mu)$ is finite does not depend on $\mu$. But it will be useful, in particular for finite groups, to study measures with good spectral gap. The extreme case is when $\mu =\mathbf{P}_G$ is the uniform probability on $G$. In that case, $\pi(\mathbf{P}_G)$ is the orthogonal projection on $\mathcal{H}^\pi$ and therefore $\kappa(\mu) =1$ (if $G$ is not the group $\{0\}$). If $\mu$ is a not-necessarily symmetric probability measure on $G$ with support still generating, we define $\kappa(\mu)$ as $\kappa(\nu)$ where $\nu$ is the symmetric measure $\nu(g) = \frac{1}{2}(\mu(g)+\mu(g^{-1})$. It is also the smallest constant such that \eqref{eq:Poincare_constant} holds. \subsection{Probability measures with spectral gap and small support} If $A$ is a finite abelian group and $\mu$ is a probability measure on $A$, denote by $\hat \mu:\hat A \to \mathbf{C}$ its Fourier transform $\hat \mu(\chi) = \int \chi(a) d\mu(a)$. By Fourier transform, the spectral gap constant $\kappa(\mu)$ is simply expressed in terms of $\hat \mu$ by \begin{equation}\label{eq:spectral_gap_abelian}\kappa(\mu) = \max_{\chi \in \hat A \setminus \{1\}} \frac{1}{1-\mathbf{R}e\hat \mu(\chi)}. \end{equation} In the particular case of $A=(\mathbf{Z}/2\mathbf{Z})^N$ and of uniform measures on finite sets, the spectral gap can be expressed in the language of error-correcting codes, and measures with small spectral gap constant (respectively small support) are the same as linear codes with large distance (respectively large dimension). We recall that in the next example, and refer for example to \cite{MR0465509} or \cite{MR1996953} for the vocabulary. I thank Jason Gaitonde for pointing out to me on Mathoverflow this connection with error-correcting codes \cite{414657}. \begin{example}\label{ex:code_spectral_gap} Let $A=(\mathbf{Z}/2\mathbf{Z})^N$ and let $\mu$ be the uniform probability measure on a generating family $a_1,\dots,a_K$. Define $C\subset (\mathbf{Z}/2\mathbf{Z})^K$ the subgroup generated by $b_1,\dots,b_N$ where $b_j(i) = a_i(j)$. Then \begin{equation}\label{eq:relation_kappa_distance_binary} \kappa(\mu) = \frac{1}{2 \min\{ d_H(x,0) \mid x \in C\setminus\{0\}\}}. 
\end{equation} Here $d_H$ is the normalized Hamming distance on $(\mathbf{Z}/2\mathbf{Z})^K$ \[ d_H(x,y) = \frac{1}{K}\sum_{i=1}^K 1_{x(i) \neq y(i)}.\] In the vocabulary of error correcting codes, $C$ is a $[K,N,\frac{2}{\kappa(\mu)}K]$-linear binary code. Here $K$ is the \emph{length}, $N$ is the \emph{dimension} and $\frac{2}{\kappa(\mu)}K$ is the \emph{distance} of the code (and $\frac 2{\kappa(\mu)}$ is the \emph{relative distance}). Conversely, if $C$ is a binary linear code of length $K$, dimension $N$ and distance $d$, any choice of a basis $b_1,\dots,b_N$ of $C$ gives rise to a subset of cardinality $K$ of $(\mathbf{Z}/2\mathbf{Z})^N$ with spectral gap constant $2K/d$. More generally, let $q$ be a prime power and $C$ be a $[K,N,d]_q$-code, that is $C\subset \mathbf{F}_q^K$ is a linear subspace of dimension $N$ and distance $d = \min_{x \in C\setminus \{0\}} \#\{i\leq K\mid x_i \neq 0\}$. Any choice of a basis $b_1,\dots,b_N$ of $C$ allows us to define a subset of cardinality $(q-1)K$ of $\widehat{\mathbf{F}_q^N}$ \[ \left\{ y \in \mathbf{F}_q^N \mapsto \chi(\sum_j y_j b_j(i) ) \mid \chi \in \widehat{\mathbf{F}_q}\setminus\{1\}, 1\leq i\leq K\right\}.\] The uniform probability measure $\mu$ on this finite set has spectral gap constant \begin{equation}\label{eq:relation_kappa_distance_code}\kappa(\mu) = \frac{q-1}{q} \frac{K}{d}. \end{equation} \end{example} \begin{proof} In the construction of the binary code from the family $a_1,\dots,a_K$, the assumption that $a_1,\dots,a_K$ generates $(\mathbf{Z}/2\mathbf{Z})^N$ is equivalent to $b_1,\dots,b_N$ being linearly free. So \eqref{eq:relation_kappa_distance_binary} is a particular case of \eqref{eq:relation_kappa_distance_code} for $q=2$, and all we have to do is justify \eqref{eq:relation_kappa_distance_code}. By \eqref{eq:spectral_gap_abelian}, we have \begin{align*} \frac{1}{\kappa(\mu)}& =\min_{y \in \mathbf{F}_q^N\setminus\{0\}}1-\frac{1}{(q-1)K} \sum_{\chi \in\hat{\mathbf{F}_q}\setminus\{0\}, i\leq K} \chi(\sum_j y_j b_j(i))\\ & =\min_{y \in \mathbf{F}_q^N\setminus\{0\}}\frac{q}{q-1}(1-\frac{1}{K} \sum_{i\leq K} 1_{\sum_j y_j b_j(i)=0})\\ & =\frac{q}{q-1}\frac{d}{K}. \end{align*} The second inequality is because, for every $z \in\mathbf{F}_q$, \[\frac{1}{q-1} \sum_{\chi \in\hat{\mathbf{F}_q}\setminus\{0\}} \chi(z) = \frac{1}{q-1}(-1+\sum_{\chi \in\hat{\mathbf{F}_q}}\chi(z)) = -\frac{1}{q-1} +\frac{q}{q-1} 1_{z=0}.\qedhere \] \end{proof} The following is a a result by Alon and Roichman \cite{MR1262979}, see also \cite{MR2097328} for a simple proof. By the previous example, it generalizes to arbitrary finite groups the classical fact that there exist \emph{asymptotically good binary linear codes}, that is linear codes with both dimension and distance proportional to the length. \begin{proposition}\cite{MR1262979}\label{prop:measure_with_small_support_and_small_Fourier_transform_nonabelian} There is a constant $C$ such that, for every finite group $G$, there is a subset $F \subset G$ of size $K\leq C \log |G|$ such that $\mu$, the uniform probability measure on $F$, has spectral gap constant $\kappa(\mu)\leq 2$. \end{proposition} In this proposition, the logarithmic dependance between $G$ and $K$ is clearly optimal (if $G=(\mathbf{Z}/2\mathbf{Z})^N$, $F$ has to contain a basis). But for interesting groups, (for example finite simple groups \cite{MR2221038}), much stronger results are known, and there exist measures of bounded support with uniform spectral gap. 
\subsection{Average Gowers-Hatami theorem} Some of the results in this subsection are probably well-known (except for Lemma~\ref{lem:GowersHatami_subgroup} that seems to be new), but we do not know whether they appear explicitely in the litterature. A form of the following theorem appears for example in \cite{vidickPauliBraiding}. The case $p=\infty$ is contained in \cite{MR3867328}, which was itself a generalization of \cite{MR3733361}. For unitary groups replaced by permutation groups, similar results also appear in \cite{beckerChapman}. The results of \cite{MR3867328} and \cite{beckerChapman} are also valid for amenable discrete groups. Following the terminology in \cite{MR3867328}, we call a unitarily invariant semi-norm on $\mathcal{M}_\infty$ any semi-norm that is defined on an ideal of $\mathcal{M}_\infty$ and that is invariant by left and right multiplication by unitaries, and that is by convention put to be equal to $\infty$ outside of this ideal. Later, we will only use the $2$-norm. \begin{theorem}\label{thm:averageGowersHatami} Let $G$ be a finite group, $1 \leq p < \infty$ and $\varphi:G \to \mathcal{U}(\mathcal{M})$ be a map. Let $\|\cdot\|$ be a unitarily invariant semi-norm on $\mathcal{M}_\infty$. Assume that \[ \left(\mathbf{E}_{g,h \in G} \|\varphi(gh)-\varphi(g)\varphi(h)\|^p\right)^{\frac 1 p} \leq \varepsilon.\] Then there is a projection $P \in \mathcal{M}_\infty$, a unitary representation $\pi: G \to \mathcal{U}(P\mathcal{M}_\infty P)$ and an isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ such that \begin{equation}\label{eq:varphi_close_to_rep}\left(\mathbf{E}_g \|\varphi(g)-w^* \pi(g) w\|^p\right)^{\frac 1 p}\leq 13\varepsilon, \end{equation} \begin{equation}\label{eq:P_of_trace_almost_1} \|P-ww^*\| \leq 4\varepsilon. \end{equation} \end{theorem} For example, if $\|\cdot\|$ is the norm in $L_q(\mathcal{M},\tau)$, \eqref{eq:P_of_trace_almost_1} is equivalent to $\tau(P) \leq 1+(4\varepsilon)^q$. This theorem follows by the same proof as in \cite{MR3867328}. We provide a detailed proof for completeness. We first prove an intermediary statement, with better constants but the isometry $w$ replaced by a contraction. \begin{lemma}\label{lemma:averageGowersHatami_non_isometry} Let $G$ be a finite group, $1 \leq p<\infty$ and $\varphi:G \to \mathcal{U}(\mathcal{M})$ be a map. Let $\|\cdot\|$ be a unitarily invariant semi-norm on $\mathcal{M}_\infty$. Assume that \[ \left(\mathbf{E}_{g,h \in G} \|\varphi(gh)-\varphi(g)\varphi(h)\|^p\right)^{\frac 1 p} \leq \varepsilon.\] Then there is a projection $P \in \mathcal{M}_\infty$, a unitary representation $\pi : G \to \mathcal{U}(P\mathcal{M}_\infty P)$ and an element $X \in P \mathcal{M}_\infty 1_\mathcal{M}$ of operator norm $\|X\|_\infty\leq 1$ such that \begin{equation}\label{eq:varphi_close_to_repX} \left(\mathbf{E}_g \|\varphi(g)-X^* \pi(g) X\|^p\right)^{\frac 1 2} \leq 5 \varepsilon, \end{equation} \begin{equation}\label{eq:XstarX_close_to1} \|1_\mathcal{M}-X^*X\| \leq 4 \varepsilon, \end{equation} and \begin{equation}\label{eq:XXstar_close_toP} \|P-XX^*\| \leq 4 \varepsilon, \end{equation} \end{lemma} \begin{proof} Let $\mathcal{H}$ be the Hilbert space on which $\mathcal{M}$ is realized. Define $V \colon \mathcal{H} \to L_2(G,\mathcal{H})$ by $(V\xi)(g) = \varphi(g^{-1})\xi$. It is clear that $V x= (1_{L_2(G)}\otimes x) V$ for every $x \in \mathcal{M}'$, so (if we identify $L_2(G)$ with a subspace of $\ell_2$ with orthogonal projection $Q$), $V$ is an isometry in $Q \mathcal{M}_\infty 1_\mathcal{M}$. 
Its adjoint is $V^*f = \mathbf{E}_g \varphi(g^{-1})^* f(g)$, so that \[ V^* (\lambda(g)\otimes 1_\mathcal{H}) V = \mathbf{E}_h \varphi(h)^* \varphi(h g)=: \tilde \varphi(g).\] Observe that \[ \mathbf{E}_g \|\varphi(g) - \tilde\varphi(g)\|^p \leq \mathbf{E}_{g,h} \|\varphi(g) - \varphi(h)^* \varphi(hg)\|^p \leq \varepsilon^p.\] We are almost done, except that $\tau_\infty(Q) = |G|$. If $VV^*$ did commute with the representation $\lambda$, we would be done with $X=V$ and $P=VV^*$. In general, we define $X=PV$ and $\pi$ to be the restriction of $\lambda \otimes 1_{\mathcal{H}}$ on the image of $P$, where $P$ is the spectral projection $\chi_{[1/2,1]}(A)$, for $A$ the conditional expectation of $VV^*$ on $\mathcal{M} \otimes \lambda(G)'$, that is \[ A:= \mathbf{E}_g \lambda(g) VV^* \lambda(g^{-1}).\] It remains to justify \eqref{eq:varphi_close_to_repX}, \eqref{eq:XstarX_close_to1} and \eqref{eq:XXstar_close_toP}. To do so we first observe that, writing $1-\tilde{\varphi}(g)^* \tilde\varphi(g) = \varphi(g)^*(\varphi(g) - \tilde{\varphi}(g)) + (\varphi(g) - \tilde{\varphi}(g))^* \tilde{\varphi}(g)$, we can bound by H\"older's inequality \begin{equation}\label{eq:tildevarphi_close_to_1} \mathbf{E}_g \|1-\tilde{\varphi}(g)^* \tilde\varphi(g)\| \leq 2\mathbf{E}_g \|\varphi(g) - \tilde{\varphi}(g)\| \leq 2{\varepsilon}. \end{equation} We deduce \[ \| 1-V^*A V\| = \|\mathbf{E}_g 1-\tilde\varphi (g)^*\tilde\varphi(g)\| \leq 2{\varepsilon}.\] Therefore (using that $\lambda(g)$ commutes with $\sqrt{1-A}$) \begin{align*} \|A - A^2\| &= \| \sqrt{1-A} \mathbf{E}_g \lambda(g) VV^* \lambda(g) \sqrt{1-A}\|\\ & \leq \| \sqrt{1-A} V V^* \sqrt{1-A}\| = \|V^*(1-A) V\| \leq 2{\varepsilon}. \end{align*} We now prove \eqref{eq:XstarX_close_to1}. Using $(1-P) \leq 2(1-A)$, we have \[ \|1_\mathcal{M} - X^*X\| = \|V^*(1-P)V\| \leq 2 \|V^* (1-A) V\| \leq 4 {\varepsilon}.\] We can now turn to \eqref{eq:varphi_close_to_repX}. By the triangle inequality, \begin{multline*} \left(\mathbf{E}_g \|\varphi(g)-X^* \pi(g) X\|^p\right)^{\frac 1 p} \\\leq \left(\mathbf{E}_g \|\varphi(g)-\widetilde{\varphi}(g)\|^p\right)^{\frac 1 p} + \left(\mathbf{E}_g \|\widetilde \varphi(g)-X^* \pi(g) X\|^p\right)^{\frac 1 p}. \end{multline*} The first term is $\leq \varepsilon$. The second term is (by \cite[Corollary 2.8]{MR3867328}) \begin{align*} \left(\mathbf{E}_g \|V^*(1-P) \lambda(g)V\|^p\right)^{\frac 1 p} & \leq \|V^*(1-P)V\| \leq 4\varepsilon. \end{align*} The last inequality is the already proven \eqref{eq:XstarX_close_to1}. Finally, we prove \eqref{eq:XXstar_close_toP}. Using that $P \leq 2A$, \begin{align*} \|P - XX^*\| &= \|P(1-VV^*)P\| \\&= \|(1-VV^*)P(1-VV^*)\|\\ &\leq 2 \|(1-VV^*) A (1-VV^*)\|. \end{align*} By the triangle inequality, this is less than \begin{multline*} 2 \mathbf{E}_g \|(1-VV^*) \lambda(g) VV^* \lambda(g)^* (1-VV^*)\|\\ = 2 \mathbf{E}_g \|V^* \lambda(g)^* (1-VV^*) \lambda(g) V\| = 2 \mathbf{E}_g \|1-\tilde{\varphi}(g)^* \tilde\varphi(g)\| \end{multline*} which is less than $4\varepsilon$ by \eqref{eq:tildevarphi_close_to_1}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:averageGowersHatami}] Let $X,P,\pi$ be given by Lemma~\ref{lemma:averageGowersHatami_non_isometry}. Write $X = w_0 |X|$ the polar decomposition of $X$. We can find a partial isometry such $w_1 \in \mathcal{M}_\infty$ such that $w_0^* w_0 +w_1 ^* w_1=1_\mathcal{M}$ and $w_1 w_1^*$ is orthogonal to $P$. 
Define $P'=P+w_1 w_1^*$, and extend $\pi$ to a unitary representation on $P'\mathcal{M}_\infty P'$ by declaring that $\pi(g)$ is the identity on $w_1 w_1^*$. If we define $w=w_0+w_1$, $w$ is then an isometry in $P' \mathcal{M}_\infty 1_\mathcal{M}$ such that $X=w |X|$. We can then bound \[ \| w-X\| = \|1_\mathcal{M} - |X|\| \leq \|1_\mathcal{M} - X^*X\| \leq 4\varepsilon,\] where the first inequality is because $1-t \leq 1-t^2$ for every $0 \leq t \leq 1$, and the second inequality is \eqref{eq:XstarX_close_to1}. Moreover, by \eqref{eq:XXstar_close_toP} we have \[ \|P'-ww^*\| = \|P-w_0 w_0^*\| \leq \|P-X X^*\| \leq 4\varepsilon.\] Therefore, we obtain \[ \left(\mathbf{E}_g \|\varphi(g)-w^* \pi(g) w\|^p\right)^{\frac 1 p} \leq \left(\mathbf{E}_g \|\varphi(g)-X^* \pi(g) X\|^p\right)^{\frac 1 p} + 2 \|X-w\|_2.\] This is less than $13 \sqrt{\varepsilon}$ by \eqref{eq:varphi_close_to_repX}. \end{proof} The next lemma shows that, if $\varphi$ is assumed to behave well on a subgroup, then $\varphi$ and $w^* \pi w$ are close on this subgroup. The lemma is clearly false without the additional assumption \eqref{eq:phi_H_left_equivariant} or \eqref{eq:phi_H_right_equivariant} : if $H$ has very small index, perturbing arbitrarily $\varphi$ on $H$ will not affect much the hypothesis nor the conclusion of Theorem~\ref{thm:averageGowersHatami}, but the conclusion of the lemma cannot hold. \begin{lemma}\label{lem:GowersHatami_subgroup} Let $G$, $\varphi$, $\varepsilon$, $P$, $\pi$ and $w$ be as in the hypothesis and conclusion of Theorem~\ref{thm:averageGowersHatami}. Let $H<G$ is a subgroup, and assume either \begin{equation}\label{eq:phi_H_left_equivariant} \forall h\in H, \forall g\in G,\ \varphi(hg) = \varphi(h) \varphi(g) \end{equation} or \begin{equation}\label{eq:phi_H_right_equivariant}\forall h\in H, \forall g\in G,\ \varphi(gh) = \varphi(g) \varphi(h). \end{equation} Then \[ \left(\mathbf{E}_{h \in H} \|\varphi(h)-w^* \pi(h) w\|^p\right)^{\frac 1 p}< 38 \varepsilon.\] \end{lemma} An inspection of the proof reveals that the statement holds (with $38$ replaced by another constant) if \eqref{eq:phi_H_left_equivariant} is replaced by the weaker hypothesis that \[\left(\mathbf{E}_{h\in H,g \in G} \|\varphi(hg) - \varphi(h) \varphi(g)\|^p\right)^{\frac 1 p} + \left(\mathbf{E}_{h \in H, h'\in H} \|\varphi(hh') - \varphi(h) \varphi(h')\|^p\right)^{\frac 1 p} = O(\varepsilon). \] \begin{proof} Assume \eqref{eq:phi_H_left_equivariant}. The case when we assume \eqref{eq:phi_H_right_equivariant} is proved in the same way and left to the reader. Define $\psi(g) = w^* \pi(g) w$. This is not a representation of $G$, but almost: for every $g_1,g_2 \in G$, \begin{multline}\label{eq:psi_almost_rep} \| \psi(g_1g_2) - \psi(g_1)\psi(g_2)\| \\= \| w^* \pi(g_1) (P-ww^*) \pi(g_2) w\| \leq \|P-ww^*\| \leq 4\varepsilon. \end{multline} the last inequality is the assumption~\eqref{eq:P_of_trace_almost_1}. So it follows from \eqref{eq:phi_H_left_equivariant}, \eqref{eq:varphi_close_to_rep} and the triangle inequality that \begin{multline*} (\mathbf{E}_{g \in G} \mathbf{E}_{h \in H} \|\varphi(h)\varphi(g) - \psi(h) \psi(g)\|^p)^{\frac 1 p} \\\leq 4\varepsilon + (\mathbf{E}_{g \in G} \mathbf{E}_{h \in H} \|\varphi(hg) - \psi(hg)\|^p)^{\frac 1 p} \leq 17 {\varepsilon}. \end{multline*} The last inequality is because, for fixed $h \in H$, $hg$ is uniformly distributed in $G$ when $g$ is. 
In particular, there is $X$ of operator norm $\leq 1$ (namely $X=\psi(g) \varphi(g)^{-1}$ for some $g$) such that \begin{equation}\label{eq:diff_phi_psiX} (\mathbf{E}_{h \in H} \|\varphi(h) - \psi(h) X\|^p)^{\frac 1 p} \leq 17 {\varepsilon}. \end{equation} We deduce \begin{align*} (\mathbf{E}_h \|\varphi(h) - \psi(h)\|^p)^{\frac 1 p} & = (\mathbf{E}_{h_1,h_2} \|\varphi(h_1)\varphi(h_2) - \psi(h_1) \varphi(h_2)\|^p)^{\frac 1 p} \\ & \leq 17{\varepsilon}+ (\mathbf{E}_{h_1,h_2} \|\varphi(h_1h_2) - \psi(h_1) \psi(h_2) X\|^p)^{\frac 1 p}\\ &\leq 21{\varepsilon}+ (\mathbf{E}_{h_1,h_2} \|\varphi(h_1h_2) - \psi(h_1h_2)X\|^p)^{\frac 1 2}\\ &\leq 38{\varepsilon}. \end{align*} The first inequality is the unitary invariance of the $\|\cdot\|$-norm, the second is \eqref{eq:phi_H_left_equivariant} and \eqref{eq:diff_phi_psiX}, the third is \eqref{eq:psi_almost_rep} and the last is \eqref{eq:diff_phi_psiX} again. This concludes the proof of the lemma. \end{proof} We will use the following consequences. For simplicity we restrict to the $2$-norm. \begin{corollary}\label{cor:averageGowersHatami_products} Let $A,B$ be two finite groups and $U:A \to \mathcal{U}(\mathcal{M})$ and $V : B \to \mathcal{U}(\mathcal{M})$ be two group homomorphisms. If they satisfy \[ \mathbf{E}_{a \in A,b \in B} \|[U(a),V(b)]\|_2^2 \leq \varepsilon,\] then there is a projection $P \in \mathcal{M}_\infty$, unitary representation $\tilde{U}: A \to \mathcal{U}(P\mathcal{M}_\infty P)$ and $\tilde{V}: B \to \mathcal{U}(P\mathcal{M}_\infty P)$ with commuting ranges, and an isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ such that \[\mathbf{E}_{a \in A} \|U(a)-w^* \tilde U(a) w\|_2^2<1444\varepsilon,\] \[\mathbf{E}_{b \in B} \|U(b)-w^* \tilde V(b) w\|_2^2<1444\varepsilon,\] \[ \tau_\infty(P)\leq 1+16\varepsilon.\] \end{corollary} \begin{proof} Consider $G=A \times B$ and define $\varphi:A \times B \to \mathcal{U}(\mathcal{M})$ by $\varphi(a,b) = U(a) V(b)$. Then for $g=(a,b)$ and $h=(a',b')$, by unitary invariance of the $\|\cdot\|_2$-norm \begin{align*} \|\varphi(gh) - \varphi(g) \varphi(h)\|_2 &= \| U(aa') V(bb') - U(a) V(b) U(a') V(b')\|_2\\ &= \|[U(a'),V(b)]\|_2, \end{align*} so the assumption is exactly that \[ \mathbf{E}_g \|\varphi(gh) - \varphi(g) \varphi(h)\|_2^2 \leq \varepsilon.\] By applying Theorem~\ref{thm:averageGowersHatami} for the norm on $L_2(\mathcal{M}_\infty,\tau_\infty)$ and $p=2$, we obtain a projection $P$ and, an isometry $w$ and a representation $\pi:A \times B \to \mathcal{U}(\mathcal{M})$ satisfying \[ \tau(P) \leq 1+16 \varepsilon\] and \begin{equation}\label{eq:phi_close_to_rho} \mathbf{E}_g \|\varphi(g) - w^* \pi(g) w\|_2^2 \leq 169 \varepsilon.\end{equation} If we define $\tilde{U}(a) = \pi(a,1)$ and $\tilde{V}(b) = \pi(1,b)$, these are unitary representations with commuting ranges. Moreover, since $\varphi(a g) =\varphi(a) \varphi(g)$ for every $a \in A$, $g \in G$, and similarly for $B$ on the right, Lemma~\ref{lem:GowersHatami_subgroup} concludes the proof. 
\end{proof} \begin{corollary}\label{cor:averageGowersHatami_central_ext_products} Let $A,B$ be two finite groups and $\gamma:A\times B \to \{-1,1\}$ a map such that $\gamma(a_1 a_2,b_1b_2) = \prod_{i,j} \gamma(a_i b_j)$.\footnote{This is the same as $\gamma$ being a $2$-cocycle on $A \times B$ whose restriction to $A$ and $B$ is zero, or a group homomorphism $A \to \mathrm{hom}(B,\{-1,1\})$, or a group homomorphism $B \to \mathrm{hom}(A,\{-1,1\})$.} Let $U:A \to \mathcal{U}(\mathcal{M})$ and $V : B \to \mathcal{U}(\mathcal{M})$ be two group homomorphisms. Assume that they satisfy \[ \mathbf{E}_{a \in A,b \in B} \|U(a)V(b) - \gamma(a,b) V(b) U(a)\|_2^2 \leq \varepsilon.\] There is a projection $P \in \mathcal{M}_\infty$, unitary representations $\tilde U:A \to \mathcal{U}(P\mathcal{M}_\infty P)$ and $\tilde V:B \to \mathcal{U}(P\mathcal{M}_\infty P)$ and a partial isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ such that \[ \tilde U(a) \tilde V(b) = \gamma(a,b) \tilde V(b) \tilde U(a) \forall a\in A, b \in B,\] \[\mathbf{E}_{a \in A} \|U(a)-w^* \tilde U(a) w\|_2^2< 30000 \varepsilon,\] \[\mathbf{E}_{b \in B} \|U(b)-w^* \tilde V(b) w\|_2^2< 30000 \varepsilon,\] \[ \tau_\infty(P)\leq 1+16\varepsilon.\] \end{corollary} \begin{proof} Consider $G$, the central extension of $A\times B$ by $\{-1,1\}$ given by $\gamma$. This is a Weyl-Heisenberg-type group, that is the set $A\times B \times \mathbf{Z}/2\mathbf{Z}$ for the group operation \[ (a,b,z)(a',b',z') = (aa',bb',\gamma(b,a')zz').\] We shall identify $A$, $B$ and $\{-1,1\}$ with the subgroups $\{(a,1,1)\mid a \in A\}$, $\{(1,b,1)\mid b \in b\}$ and $\{(1,1,1),(1,1,-1)\}$. Define $\varphi:G \to \mathcal{U}(\mathcal{M})$ by \[\varphi(a,b,z) = z U(a) V(b).\] Clearly, it satisfies \[ \varphi(a g) = \varphi(a) \varphi(g), \varphi(gb) = \varphi(g) \varphi(b), \varphi(zg) = \varphi(z) \varphi(g)\] for every $g \in G$, $a \in A$, $b \in B$, $z \in \{-1,1\}$. Moreover our assumption is equivalent to \[ \mathbf{E}_g \|\varphi(gh) - \varphi(g) \varphi(h)\|_2^2 \leq \varepsilon.\] We can apply Theorem~\ref{thm:averageGowersHatami}for the $\|\cdot\|_2$-norm and $p=2$ to obtain a projection $P$, a representation $\pi:G \to \mathcal{U}(P\mathcal{M}_\infty P)$ and an isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ as in Theorem~\ref{thm:averageGowersHatami}. Also, by Lemma~\ref{lem:GowersHatami_subgroup} we know that, for $H=A,B$ or $\{-1,1\}$. \begin{equation}\label{eq:varphi_close_to_rho_on_subgroups} \mathbf{E}_{h\in H} \|\varphi(h) - w^* \pi(h) w\|_2^2 \leq 1444 \varepsilon. \end{equation} We are almost at the desired conclusion, except that we do not a priori have the desired (anti)-commutation relation, but only (denoting $Z = \pi(1,1,-1)$) \[ \pi(a) \pi (b) = Z \pi(b) \pi(a)\textrm{ when }\gamma(a,b)=-1.\] So we would be done if $Z$ was equal to $-P$. This is almost the case. Indeed, taking $H = \{-1,1\}$ in \eqref{eq:varphi_close_to_rho_on_subgroups} we \[ \|1_\mathcal{M} + w^* Z w\|_2^2 \leq 2888\varepsilon\] and we deduce \[ \| P + Z\|_2 \leq \|ww^* (P+Z)ww^*\|_2 + 2\|P-ww^*\|_2 \leq 38\sqrt{2\varepsilon}+8\sqrt{\varepsilon}.\] But $Z$ is an order $2$ unitary in $P \mathcal{M}_\infty P$, so it can be written as $P-2Q$ for a projection $Q \leq P$, which commutes with the range of $\pi$ because $(1,1,-1)$ belongs to the center of $G$. And we have just proved that $\|P-Q\|_2 \leq (19\sqrt{2}+4)\sqrt \varepsilon$. 
So let us replace $P$ by $Q$, $\pi$ by $g \mapsto Q \pi(g)$ and $w$ by $w'$, the partial isometry in the polar decomposition of $Qw$. We claim that \[ \|w-w'\|_2 \leq \|w - Qw\|_2 + \|w'-Qw\|_2\leq 2 \|P-Q\|_2.\] Indeed, the first term is $\|Pw-Qw\|_2 \leq \|P-Q\|_2$, and the second term is \[ \|(w')^*w' - |Qw|\|_2 \leq \|1_\mathcal{M} - |Qw|\|_2 \leq \|1_\mathcal{M} -|Qw|^2\| = \|w^*(P-Q)w\|_2 \leq \|P-Q\|_2.\] So we deduce for $H=A$ or $H=B$ \begin{align*} \left(\mathbf{E}_{h \in H} \|\varphi(h) - (w')^* \pi(h) w'\|_2^2\right)^{\frac 1 2} &\leq \left(\mathbf{E}_a \|\varphi(h) - w^* \pi(h) w\|_2^2\right)^{\frac 1 2} + 2\|w-w'\|_2\\ & \leq (38+4(19\sqrt{2}+4))\sqrt{\varepsilon}. \end{align*} The last inequality is \eqref{eq:varphi_close_to_rho_on_subgroups}. This proves the corollary. \end{proof} \section{The main result}\label{sec:main} The main new contribution in this note is the following result. It is expressed in term of the spectral gaps $\kappa(\cdot)$ defined in subsection~\ref{sec:spectral_gap}. \begin{theorem}\label{thm:almost_commutation} Let $A,B$ be two finite groups equipped with probability measures $\mu,\nu$. Let $U:A \to \mathcal{U}(\mathcal{M})$ and $V:B \to \mathcal{U}(\mathcal{M})$ be two homomorphisms. Then \[ \mathbf{E}_{a \in A,b\in B} \| [U(a),V(b)]\|_2^2 \leq \kappa(\mu)\kappa(\nu) \int \| [U(a),V(b)]\|_2^2 d\mu(a) d\nu(b).\] \end{theorem} We first prove a result with only one group entering the picture. \begin{lemma}\label{lemma:spectral_gap_commutator} Let $G$ be a group, $\mu$ a probability measure on $G$ with generating support, and $U\colon G\to \mathcal{U}(\mathcal{M},\tau)$ be a group homomorphism. Define $\mathcal{N}=\mathcal{M} \cap U(G)'$ and $E_\mathcal{N}:\mathcal{M}\to\mathcal{N}$ the conditional expectation. For every $V \in \mathcal{M}$, \begin{equation}\label{eq:spectral_gap_and_commutators} \| V - E_\mathcal{N}(V)\|_2^2 \leq \frac{\kappa(\mu)}{2} \int \|[U(g),V]\|_2^2 d\mu(g). \end{equation} If $G$ is finite, \[ \mathbf{E}_{g \in G} \|[U(g),V]\|_2^2 \leq \kappa(\mu) \int \|[U(g),V]\|_2^2 d\mu(g).\] \end{lemma} \begin{proof} The formula \[ \pi(a) X = U(a) X U(a)^*\] defines a unitary representation of $\pi$ on the Hilbert space $L_2(\mathcal{M},\tau)$. Here we use in an essential way that $\tau$ is trace. Moreover, the space of invariant vectors $L_2(\mathcal{M},\tau)^\pi$ coincides with $L_2(\mathcal{N},\tau)$, and the orthogonal projection on $L_2(\mathcal{M},\tau)^\pi$ is just $E_\mathcal{N}$. Therefore, \eqref{eq:spectral_gap_and_commutators} is exactly the Poincar\'e inequality \eqref{eq:Poincare_constant}. If $G$ is finite, $E_\mathcal{N}$ is given by $V\mapsto \mathbf{E}_{g\in G} U(g) V U(g)^*$, so \begin{align*} \mathbf{E}_{g \in G} \|[U(g),V]\|_2^2 &= 2-2\tau(\mathbf{E}_{g \in G} U(g)^* V U(g)V^*)\\& = 2-2\|E_\mathcal{N}(V)\|_2^2 = 2\|V-E_\mathcal{N}(V)\|_2^2. \end{align*} So the second inequality is a particular case of \eqref{eq:spectral_gap_and_commutators}. \end{proof} \begin{proof}[Proof of Theorem~\ref{sec:main}] Successive applications of Lemma~\ref{lemma:spectral_gap_commutator} for $A$ and for $B$ give \begin{align*}\mathbf{E}_{a,b} \|[U(a),V(b)]\|_2^2 & \leq \mathbf{E}_b \kappa(\mu) \int_A \|[U(a),V(b)]\|_2^2 d\mu(a)\\ & = \kappa(\mu) \int_A \mathbf{E}_b \|[U(a),V(b)]\|_2^2 d\mu(a)\\ & \leq \kappa(\mu) \int_A \kappa(\nu) \int_B \|[U(a),V(b)]\|_2^2 d\nu(b) d\mu(a). \end{align*} This proves the theorem. \end{proof} \subsection{Consequences} We list four consequences of this theorem. 
Only the last one (Corollary~\ref{thm:almost_anticommutation_2group}) will be used in the next section. \begin{corollary} Let $A,B,U,V,\mu,\nu$ be as in Theorem~\ref{thm:almost_commutation}. If \[ \int \| [U(a),V(b)]\|_2^2 d\mu(a) d\nu(b) \leq \varepsilon,\] then there is a projection $P \in \mathcal{M}_\infty$, unitary representations $\tilde{U}: A \to \mathcal{U}(P\mathcal{M}_\infty P)$ and $\tilde{V}: B \to \mathcal{U}(P\mathcal{M}_\infty P)$ with commuting ranges, and an isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ such that \[\mathbf{E}_{a \in A} \|U(a)-w^* \tilde U(a) w\|_2^2\lesssim \kappa(\mu)\kappa(\nu)\varepsilon,\] \[\mathbf{E}_{b \in B} \|U(b)-w^* \tilde V(b) w\|_2^2\lesssim \kappa(\mu)\kappa(\nu)\varepsilon,\] \[ \tau_\infty(P)\leq 1+ 16\kappa(\mu)\kappa(\nu) \varepsilon.\] \end{corollary} \begin{proof} This follows from Theorem~\ref{thm:almost_commutation} and Corollary~\ref{cor:averageGowersHatami_products}.\end{proof} For abelian groups, we have a stronger conclusion. \begin{corollary}\label{thm:almost_commutationbis} Let $A,B,U,V,\mu,\nu$ be as in Theorem~\ref{thm:almost_commutation}, with $A$ and $B$ abelian. If \[ \int \|[U(a),V(b)]\|_2^2 d\mu(a) d\nu(b) \leq \varepsilon,\] then there exists a unitary representation $V_1:B \to \mathcal{U}(\mathcal{M})$ such that \[ [U(a),V_1(b)] = 0 \ \ \forall a \in A,b\in B\] and \[ \mathbf{E}_b \|V(b) - V_1(b)\|^2 \leq 10\kappa(\mu) \kappa(\nu) \varepsilon.\] \end{corollary} \begin{proof} By Theorem~\ref{thm:almost_commutation}, the hypothesis implies that \begin{equation}\label{eq:U_V_almost_commute} \mathbf{E}_{a,b} \| [U(a),V(b)]\|_2^2 \leq \kappa(\mu) \kappa(\nu) \varepsilon. \end{equation} Therefore, the result follows from \cite[Corollary 5]{orthonormalisation}. Observe that \cite[Corollary 5]{orthonormalisation} is only stated for $A=\mathbf{Z}/n\mathbf{Z}$ and $B=\mathbf{Z}/m\mathbf{Z}$, but its proof is valid for any finite abelian groups. Indeed, by Lemma~\ref{lem:Fourier_abelian_group}, \eqref{eq:U_V_almost_commute} can be translated to the PVMs corresponding to $U$ and $V$ being close, so \cite[Theorem 4]{orthonormalisation} applies. \end{proof} \begin{corollary}\label{thm:almost_anticommutation} Let $A$ be a finite abelian group, and $\mu,\nu$ be probability measures on $A$ and $\hat A$ respectively. Let $U:A \to \mathcal{U}(\mathcal{M})$ and $V:\hat A \to \mathcal{U}(\mathcal{M})$ be two homomorphisms satisfying \[ \int \| U(a)V(\chi) -\chi(a) V(\chi) U(a) \|_2^2 d\mu(a) d\nu(\chi) \leq \varepsilon.\] Then they satisfy \[ \mathbf{E}_{a,\chi} \| U(a)V(\chi) -\chi(a) V(\chi) U(a) \|_2^2 \leq \kappa(\mu)\kappa(\nu)\varepsilon.\] \end{corollary} \begin{proof} Consider, on $\ell_2(A)$, the operators (called Pauli matrices) $\lambda(a)$ of translation operators, and $M(\chi)$ of multiplication by $\chi$: \[(\lambda(a) f)(a') = f(a^{-1}a'), (M(\chi) f)(a') = \chi(a') f(a').\] They satisfy $\lambda(a) M(\chi) = \overline{\chi}(a) M(\chi) \lambda(a)$. Therefore, on $\mathcal{M} \otimes M_A(\mathbf{C})$ with the trace $\tau \otimes \tr$, if we define \[ \tilde{U}(a) = U(a) \otimes \lambda(a), \tilde{V}(\chi) = V(\chi) \otimes \mathcal{M}(\chi),\] we have \[ \tilde U(a)\tilde V(\chi) -\tilde V(\chi) \tilde U(a) = (U(a)V(\chi) -\chi(a) V(\chi) U(a))\otimes \lambda(a) M(\chi).\] So we conclude by Theorem~\ref{thm:almost_commutation}. \end{proof} \begin{corollary}\label{thm:almost_anticommutation_2group} Assume, in addition to the hypotheses of Corollary~\ref{thm:almost_anticommutation}, that $A$ is a $2$-group. 
Then there is a projection $P \in \mathcal{M}_\infty$, a partial isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$, a pair of representations $U_1:A \to \mathcal{U}(P\mathcal{M}_\infty P)$ and $V_1:\hat A \to \mathcal{U}(P\mathcal{M}_\infty P)$ that satisfy \[ U_1(a) V_1(\chi) = \chi(a) V_1(\chi) U_1(a)\forall a \in A,\chi \in \hat A,\] \[ \mathbf{E}_a \| U(a) - w^* U_1(a)w\|_2^2 \lesssim \kappa(\mu)\kappa(\nu) \varepsilon,\] \[ \mathbf{E}_b \| V(b) - w^* V_1(b)w\|_2^2 \lesssim \kappa(\mu)\kappa(\nu) \varepsilon,\] \[ \|1-w^*w\|_2^2 \lesssim \kappa(\mu)\kappa(\nu) \varepsilon,\] \[ \|P-w^*w\|_2^2 \lesssim \kappa(\mu)\kappa(\nu) \varepsilon.\] \end{corollary} \begin{proof} Combine Corollary~\ref{thm:almost_anticommutation} with Corollary~\ref{cor:averageGowersHatami_central_ext_products}. \end{proof} \section{Translation to games and strategies}\label{section:games} In this section, we translate the results of the previous section (in particular Corollary~\ref{thm:almost_anticommutation_2group}) in terms of (two-player one-round) games. This translation is adapter from a construction by Natarajan and Vidick \cite{MR3678246}, which was also adapted in similar fashion in \cite{MIPRE} (see \S~\ref{subsection:comparison} for a comparison). Since we only consider synchronous strategies, we adopt the following slightly unconventional definition of a game, but that is equivalent to the standard notion of a synchronous game for our purposes. \begin{definition}\label{def:game} A game is a tuple $G=(X,\mu,A,D)$ where $X$ is a finite set, $\mu$ is a probability distribution on $X\times X$, $A=(A(x))_{x \in X}$ is a collection of finite sets (the possible answers to question $x$) and $D$ is a $\{0,1\}$-valued function defined on $\{(x,y,a,b)\mid (x,y) \in \mathrm{supp}(\mu), a \in A(x), b \in A(y)\}$ and that is required to be symmetric (that is $D(x,y,a,b)=D(y,x,b,a)$ whenever both terms are defined). \end{definition} By abuse, we will also write by the same symbol $\mu$ the probability measure on $X$ given by \[\mu(x) = \frac{1}{2} \sum_{y \in X} \mu(x,y)+\mu(y,x).\] If $G=(X,\mu,A,D)$ is a game, a synchronous strategy in $(\mathcal{M},\tau)$ is, for every $x \in X$, a PVM $(P^x_a)_{a \in A(x)}$ in $\mathcal{M}$. The value of the game on this strategy is $\val_G(P):=\int \sum_{a \in A(x), b \in A(y)} D(x,y,a,b) \tau(P^x_a P^y_b) d\mu(x,y)$. \begin{remark} In the standard definition of a synchronous game, $\mu$ and $D$ are both required to be symmetric, and it is moreover required that $D(x,x,a,b)=0$ whenever $a \neq b$. This last requirement is not important for us because the value of a synchronous strategy does not involve such values of $D$ (because $\tau(P^x_a P^x_b)=0$ if $a\neq b$). Moreover, a game as defined in Definition~\ref{def:game} can be turned a symmetric game as in the standard definition by replacing $\mu$ by $\frac{1}{2}(\mu + \tilde{\mu})$, where $\tilde{\mu}(x,y)=\mu(y,x)$ and extending $D$ by symmetry. But this is not very important, as the value of any synchronous strategy for the original game and the symmetrized game agree. \end{remark} \begin{definition} We say that a synchronous strategy $(Q^x_a)_{a \in A(x)}^{x \in X}$ in $(\mathcal{N},\tau')$ is $\varepsilon$-close to another synchronous strategy $(P^x_a)_{a \in A(x)}^{x \in X}$ in $(\mathcal{M},\tau)$ if~: \begin{itemize} \item There is a projection $P \in \mathcal{M}_\infty$ of finite trace such that $\mathcal{N} = P \mathcal{M}_\infty P$ with trace $\tau' = \frac{1}{\tau_\infty(P)} \tau$. 
\item There is a partial isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ such that \[ \tau(1-w^*w) \leq \varepsilon, \tau'(P-w w^*)\leq \varepsilon,\] \item $\mathbf{E}_x \sum_{a \in A(x)} \|P^x_a - w^* Q^x_a w\|_2^2\leq \varepsilon$. \end{itemize} \end{definition} The following straightforward lemma illustrates that this notion of strategies being close is compatible with the closeness of unitary representations that was considered in the previous sections. \begin{lemma}\label{lem:close_strategies_unitaries} Let $(\mathcal{M},\tau)$ be a tracial von Neumann algebra, $P \in \mathcal{M}_\infty$ be a projection and $w \in P\mathcal{M}_\infty 1_\mathcal{M}$. Let $H$ be a finite abelian group, $U:H\to \mathcal{U}(\mathcal{M})$ and $V:H \to\mathcal{U}(P\mathcal{M}_\infty P)$ be two unitary representations, with corresponding PVMs $(P_\chi)_{\chi \in \hat H}$ and $(Q_\chi)_{\chi \in \hat H}$ as in Lemma~\ref{lem:Fourier_abelian_group}. Then \[ \mathbf{E}_{h\in H} \|U(h) - w^* V(h) w\|_2^2 = \sum_{\chi \in \hat H} \|P_\chi - w^* Q_\chi w\|_2^2.\] \end{lemma} \begin{proof} The left-hand side is \[ \mathbf{E}_{h\in H} \|\sum_\chi \chi(h) (P_\chi - w^* Q_\chi w)\|_2^2,\] so the lemma is just the orthogonality of characters.\end{proof} We start with two important and well-known examples. I include proofs for completeness, but there is nothing original here. \subsection{The commutation game} Given two sets $A_1,A_2$, the \emph{commutation game} on $A_1,A_2$ is the game where \begin{itemize} \item $X=\{x_1,x_2,y\}$, $A(x_i) = A_i$ and $A(y) = A_1 \times A_2$, \item $\mu = \frac 1 2 ( \delta_{(x_1,y)}+ \delta_{(x_2,y)})$, \item $D(x_1,y,a,(a',b')) = 1_{a=a'}$ and $D(x_2,y,b,(a',b')) = 1_{b=b'}$. \end{itemize} The important feature of this game is that strategies with large value have to be almost commuting, as shown by the following lemma. For later use, we denote $G_{com}= (X_{com},A_{com},\mu_{com},D_{com})$ the commutation game on $A_1=A_2=\{-1,1\}$, and $x_{com,1}=x_1$ and $x_{com,2}=x_2$. \begin{lemma}\label{lem:commutation_game} If a synchronous strategy achieves the value $1-\varepsilon$ on the commutation game of $A_1,A_2$, then its restriction $(p_a)_{a \in A_1}$ and $(q_b)_{b \in A_2}$ to $x_1$ and $x_2$ respectively satisfies \begin{equation}\label{eq:commutation_projection}\sum_{a\in A_1,b \in A_2} \|[p_a,q_b]^2\|_2^2 \leq 16 \varepsilon. \end{equation} If $A_1=A_2=\{-1,1\}$ then \begin{equation}\label{eq:commutation_unitary} \|[p_1-p_{-1},q_1-q_{-1}]\|_2^2 \leq 64 \varepsilon. \end{equation} \end{lemma} \begin{proof} The assumption means that there is a PVM $(r_{a,b})_{(a,b)\in A_1 \times A_2}$ such that \[ \frac 1 2 \sum_{a,b} \tau( p_a r_{a,b}) + \tau(q_b r_{a,b}) \geq 1-\varepsilon.\] Define $p'_a= \sum_b r_{a,b}$ and $q'_b = \sum_a r_{a,b}$. The previous inequality can be equivalently written as $\eta_1+\eta_2 \leq 4\varepsilon$, where \[ \eta_1 = \sum_a \|p_a - p'_a\|_2^2\] and \[ \eta_2 = \sum_b \|q_b - q'_b\|_2^2.\] We can bound $(\sum_{a,b} \|[p_a,q_b]^2\|_2^2)^{\frac 1 2}$ by \[ (\sum_{a,b} \|[p_a-p'_a,q_b]^2\|_2^2)^{\frac 1 2} + (\sum_{a,b} \|[p'_a,q_b-q'_b]^2\|_2^2)^{\frac 1 2} + (\sum_{a,b} \|[p'_a,q'_b]^2\|_2^2)^{\frac 1 2}.\] Using the easy bound \[ \sum_b \|[x,q_b]\|_2^2 \leq 2 \|x\|_2^2\] valid for every $x \in \mathcal{M}$, we can bound the first term by $\sqrt{2 \eta_1}$ and similarly the second term by $\sqrt{2 \eta_2}$. The last term vanishes because $r$ is a PVM. 
We deduce \[ \sum_{a,b} \|[p_a,q_b]^2\|_2^2 \leq (\sqrt{2\eta_1}+\sqrt{2\eta_2})^2 \leq 4(\eta_1+\eta_2).\] The inequality \eqref{eq:commutation_projection} follows because $\eta_1+\eta_2 \leq 4\varepsilon$. When $A_1=A_{-1}=\{1,-1\}$, we have $p_1 - p_{-1} = 2p_1-1 =1-2p_{-1}$ and similarly for $q$, so that $\|[p_1-p_{-1},q_1-q_{-1}]\|_2 = 4\|[p_a,p_b]\|_2$ for every $a,b \in \{1,-1\}$. Therefore, \eqref{eq:commutation_unitary} is immediate from \eqref{eq:commutation_projection}. \end{proof} \subsection{The anticommutation game} The anticommutation game (or magic square game) is a game with $|X|=15$ with two specific questions $x_1,x_2$ with answers $A(x_1) = A(x_2)=\{-1,1\}$. For later use, we denote $G_{anticom}= (X_{anticom},A_{anticom},\mu_{anticom},D_{anticom})$ the anticommutation game. The specific question will also be denote $x_{anticom,j}$. What will be important will not be the precise definition of the game, but that synchronous strategies with large values forces some anti-commutation~: \begin{lemma}\cite{MagicSquare}\label{lem:magic-square} If $(p_{-1},p_1)$ and $(q_{-1},q_1)$ are the restriction to $x_1$ and $x_2$ of a synchronous strategy for the anticommutation game with value $1-\varepsilon$, then \[ \|(p_1-p_{-1})(q_1-q_{-1})+ (q_1-q_{-1})(p_1-p_{-1})\|_2^2 \leq 432 \varepsilon.\] \end{lemma} We recall the proof for completeness. To prove the lemma, we need to give the definition of the game. Its set of questions is $X = C \cup L$ where $C=\{1,2,3\}^2$ is a $3\times 3$ square and $L$ is the set of all horizontal or vertical lines in the square. The specific points are $x_1=(1,1)$ and $x_2 = (2,2)$. Define $\alpha(\ell)=1$ for every line except the last vertical line, for which $\alpha(\ell)=-1$. For $c\in C$, define $A(c) = \{-1,1\}$, and for a line $\ell$, define $A(\ell) \subset \prod_{c \in \ell} \{-1,1\}$ by \[A(\ell) = \{(b_c)_c \in \prod_{c\in \ell} b_c=\alpha(\ell)\}. \] The distribution $\mu$ is the uniform distribution on $\{(c,\ell) \mid c \in \ell\}$. And $D(c,\ell,a,b) =1_{a=b_c}$. \begin{proof}[Proof of Lemma~\ref{lem:magic-square}] Let $(p^c_{-1},p^c_1)_{c \in C}$ and $(p^\ell_b)_{b \in A(\ell)}$ be the synchronous strategy with value $\geq 1-\varepsilon$, so that $(p_{-1},p_1) =(p^{1,1}_{-1},p^{1,1}_1)$ and $(q_{-1},q_1) =(p^{2,2}_{-1},p^{2,2}_1)$. For every $c \in C$, define $U^{c} = p^{c}_{-1}-p^c_1$. For every $\ell \in L$ and every $c\in \ell$, define $U^\ell(c) = \sum_{b \in A(\ell)} b_c p^\ell_b$. Define \[ \eta_{c,\ell} = \|U^c - U^\ell(c)\|_2 = 2\sqrt{ 1-\sum_{b \in A(\ell)} \tau(p^c_{b_c} p^\ell_b)}.\] So by the assumption that the strategy has value $\geq 1-\varepsilon$, we obtain \[ \frac{1}{18} \sum_{\ell} \sum_{c \in \ell} \eta_{c,\ell}^2 =\int \eta_{c,\ell}^2 d\mu(c,\ell) \leq 4\varepsilon.\] Therefore, if for $\ell \in L$ we denote $\eta_\ell = (\sum_{c \in \ell} \eta_{c,\ell}^2)^{\frac 1 2}$, we obtain \[ \sum_\ell \eta_\ell^2 \leq 24 \varepsilon.\] The $U^c$ and $U^\ell_c$ are all self-adjoint unitaries. Observe that if $c,c',c''$ are the points in $\ell$, then $U^\ell(c) = \alpha(\ell) U^{\ell}(c') U^{\ell}(c'')$. Therefore, we obtain \[ \|U^c - \alpha(\ell) U^{c'} U^{c''}\|_2 \leq \eta_{c,\ell} + \eta_{c',\ell} +\eta_{c'',\ell} \leq \sqrt{3}\eta_\ell.\] In the following, we denote by $hi$ the $i$-th horizontal line, and $vj$ the $j$-th vertical line. In the following, we write $M\simeq_\delta N$ if $\|M-N\|_2 \leq \sqrt{3}\delta$. 
We therefore have \begin{align*} U^{11}U^{22} &\simeq_{\eta_{h1}+\eta_{v2}} U^{13}U^{12} U^{12}U^{32} = U^{13}U^{32}\\ & \simeq_{\eta_{v3}+\eta_{h3}} -U^{23}U^{33}U^{33}U^{31} = -U^{23}U^{31}\\ & \simeq_{\eta_{h2}+\eta_{v1}} - U^{22}U^{21} U^{21}U^{11} = - U^{22}U^{11}. \end{align*} So we deduce \[ \|U^{11}U^{22} + U^{22}U^{11}\|_2 \leq \sum_\ell \sqrt{3} \eta_\ell \leq \sqrt{18 \sum_{\ell} \eta_\ell^2} .\] The lemma follows, because we have already justified that $\sum_\ell \eta_\ell^2 \leq 24 \varepsilon$, and $18 \cdot 24 = 432$. \end{proof} \subsection{Pauli matrices}\label{subsection:Pauli} Let $A$ be a finite group of exponent $2$, that is a group isomorphic to $(\mathbf{Z}/2\mathbf{Z})^N$ for some integer $N$. In the proof of Corollary~\ref{thm:almost_anticommutation}, we considered two unitary representations $a \in A \mapsto \lambda(a)$ and $\chi \in \hat A \mapsto M(\chi)$ on $B(\ell_2(A))$, called the Pauli representations. By Fourier transform (Lemma~\ref{lem:Fourier_abelian_group}), these representations correspond to PVMs $(\tau^X_\chi)_{\chi \in \hat A}$ and $(\tau^Z_a)_{a \in A}$, that we will call the Pauli PVMs. If (by fixing a basis) we choose an isomorphism between $A$ and $(\mathbf{Z}/2\mathbf{Z})^N$ and identify accordingly $\hat A$ with $(\mathbf{Z}/2\mathbf{Z})^N$ for the duality $\langle a ,b\rangle = (-1)^{\sum_{i=1}^N a_i b_i}$ and $\ell_2(A)$ with $\otimes_{i=1}^N \mathbf{C}^2$, then we have \[ \tau^X_a = \otimes_{i=1}^N \tau^X_{a_i} \textrm{ where } \tau^X_{0} = \begin{pmatrix} 1/2&1/2\\1/2&1/2 \end{pmatrix}, \tau^X_{1} = \begin{pmatrix} 1/2&-1/2\\-1/2&1/2 \end{pmatrix}\] and \[ \tau^Z_a = \otimes_{i=1}^N \tau^Z_{a_i} \textrm{ where } \tau^Z_{0} = \begin{pmatrix} 1&0\\0&0 \end{pmatrix}, \tau^X_{1} = \begin{pmatrix} 0&0\\0&1 \end{pmatrix}.\] We have the following classical fact. \begin{lemma}\label{lem:Pauli_matrices} Let $U:A\to \mathcal{U}(\mathcal{M})$ and $V:\hat A\to\mathcal{U}(\mathcal{M})$ be two unitary representations with values in a tracial von Neumann algebra satisfying $U(a) V(\chi) = \chi(a) V(\chi) U(a)$ for every $a\in A,\chi \in\hat A$. Then there is a tracial von Neumann algebra $(\mathcal{N},\tau')$ such that $(\mathcal{M} ,\tau) = (B(\ell_2(A))\otimes \mathcal{N},\tr\otimes \tau')$, \[ \mathbf{E}_a \overline{\chi(a)}U(a) =\tau^X_\chi \otimes 1_\mathcal{N} \textrm{ for all }\chi \in\hat A\] and \[\mathbf{E}_\chi \overline{\chi(a)} V(\chi) = \tau^Z_a\otimes 1_\mathcal{N} \textrm{ for all }a \in A.\] \end{lemma} \begin{proof} This is a result about the representation theory of the Weyl-Heisenberg group introduced in Corollary~\ref{cor:averageGowersHatami_central_ext_products}. Indeed, it is well-known that its irreducible representations are of two kinds~: those that are trivial on the center and one-dimensional (there are $|A|^2$ of them, corresponding to characters of the abelian group $A\times \hat A$), and a unique representation $\pi_0$ that is non trivial on the center, of dimension $|A|$, given by the pair $(\lambda,V)$. Therefore, if $U$ and $V$ are in the lemma, they give rise to a unitary representation $\pi$ of the Weyl-Heisenberg group such that $\pi(Z)=-1_\mathcal{M}$ (where $Z$ is the non-trivial central element), so by Peter-Weyl $\pi$ is of the form $\pi_0\otimes 1$, and the lemma follows. \end{proof} \subsection{Combining the two games} We now explain how, adapting the construction from \cite{MR3678246}, we can combine the commutation and anticommutation games to obtain the desired game. 
Let $H$ be a finite abelian group of exponent $2$ (that is a group isomorphic to $(\mathbf{Z}/2\mathbf{Z})^N$ for some integer $N$). Let $(\Omega,\mathbf{P})$ be a (finite) probability space with two independent random variable $\alpha:\Omega \to H$ $\beta:\Omega \to \hat H$. Define a partition $\Omega= \Omega_+ \cup \Omega_-$ by \[\Omega_+ = \{\omega \in \Omega \mid \langle \beta(\omega),\alpha(\omega)\rangle = 1\}\] and \[\Omega_- = \{\omega \in \Omega \mid \langle \beta(\omega),\alpha(\omega)\rangle = -1.\}\] This data allows us to define a game $(\mathcal{X},\mu,A,D)$ as follows, inspired by the Pauli basis test in \cite{MIPRE}. \begin{itemize} \item $\mathcal{X} = \{PX,PZ\}\cup (X_{com} \times \Omega_+) \cup (X_{anticom} \times \Omega_-)$ \item $A(PX) = \hat H$, $A(PZ) = H$, for $x \in X_{com}$, $A(x,\omega) = A_{com}(x)$, and for $x \in X_{anticom}$, $A(x,\omega)=A_{anticom}(x)$. \item $\mu$ is the law of $(x,y) \in \mathcal{X}$ generated as follows~: generate independantly $i$ uniformly in $\{1,2,3\}$, $\omega \in \Omega$, $(x_c,y_c)$ according to $\mu_{com}$ and $(x_a,y_a)$ according to $\mu_{anticom}$. Define \[ (x_0,y_0) =\begin{cases} (x_c,y_c) &\textrm{if }\omega \in \Omega_+\\ (x_a,y_a)&\textrm{otherwise (if }\omega \in \Omega_-). \end{cases}\] Define $(x,y)$ as \begin{equation}\label{eq:def_mu} (x,y) =\begin{cases} (PX,(x_{com,1},\omega)) &\textrm{if $i=1$ and $\omega \in \Omega_+$}\\ (PX,(x_{anticom,1},\omega)) &\textrm{if $i=1$ and $\omega \in \Omega_-$}\\ ((x_0,\omega),(y_0,\omega))&\textrm{if $i=2$}\\ (PZ,(x_{com,2},\omega)) &\textrm{if $i=3$ and $\omega \in \Omega_+$}\\ (PZ,(x_{anticom,2},\omega)) &\textrm{if $i=3$ and $\omega \in \Omega_-$}\\ \end{cases}.\end{equation} \item The decision function $D$ is given by: if $x=x_{com,1}$ and $\omega \in \Omega_+$, or if $x=x_{anticom,1}$ if $\omega \in \Omega_-$, \[D(PX,(x,\omega),\chi,\varepsilon) = 1_{\langle \chi,\alpha(\omega)\rangle = \varepsilon}.\]If $x=x_{com,2}$ and $\omega \in \Omega_+$ or $x=x_{anticom,2}$ and $\omega \in \Omega_-$, \[D(PZ,(x,\omega),h,\varepsilon) = 1_{\langle \beta(\omega),h\rangle = \varepsilon},\] and in the remaining cases of the support of $\mu$, \[D((x,\omega),(y,\omega),a,b) = \begin{cases} D_{com}(x,y,a,b)&\textrm{if }\omega \in \Omega_+\\ D_{anticom}(x,y,a,b)&\textrm{if }\omega \in \Omega_-. \end{cases}.\] \end{itemize} \begin{proposition}\label{prop:construction_of_game} Assume that \[ \forall 1\neq \chi \in \hat H, \mathbf{E}_\omega \langle \chi, \alpha(\omega)\rangle \leq 1-\frac 1 c\] and \[ \forall 1\neq h \in H, \mathbf{E}_\omega \langle \beta(\omega),h\rangle \leq 1-\frac 1 {c'}.\] If the previous game achieves a value $\geq 1-\varepsilon$ on a synchronous strategy, then its restriction $(p^{PX}_\chi)_{\chi \in \hat H}$ and $(p^{PZ}_h)_{h \in H}$ satisfies \[ \frac{1}{|H|^2} \sum_{h\in H,\chi \in \hat H}\|U^{PX}(h) U^{PZ}(\chi) - \chi(h) U^{PZ}(\chi) U^{PX}(h)\|_2^2 \leq 1320 cc' \varepsilon, \] where $U^{PX}$ is the unitary representation of $H$ corresponding to the PVM $p^{PX}$, and $V^{PZ}$ is the unitary representation of $\hat H$ corresponding to the PVM $p^{PZ}$. \end{proposition} \begin{proof} Denote by $1-\varepsilon_i$ the value of this strategy when the questions asked are conditionned to case $i$ in \eqref{eq:def_mu}. By definition, we then have \begin{equation}\label{eq:sumepsiloni} \varepsilon_1+\varepsilon_2+\varepsilon_3\leq 3 \varepsilon. 
\end{equation} For every $\omega$, and $j=\{1,2\}$ let $(p^{j,\omega}_{-1},p^{j,\omega}_{1})$ we the PVM corresponding to the question $(x_{com,j},\omega)$ if $\omega \in \Omega_+$ and to $(x_{anticom,j},\omega)$ otherwise. These are restrictions to $\{x_1,x_2\}$ of a strategy for the commutation or anticommutation game (depending on whether $\omega \in \Omega_+$ or $\Omega_-$) with value $1-\varepsilon_2(\omega)$, where $\mathbf{E}_\omega \varepsilon_2(\omega) = \varepsilon_2$. So, if we define \[U^{\omega} = p^{1,\omega}_{1}-p^{1,\omega}_{-1},\ V^{\omega} = p^{2,\omega}_{1}-p^{2,\omega}_{-1},\] we know from the properties of the commutation and anticommutation game, that \[ \mathbf{E}_\omega \|U^{\omega} V^{\omega} - \langle \beta(\omega), \alpha(\omega)\rangle V^{\omega} U^{\omega}\|_2^2 \leq 432 \varepsilon_2.\] (and in fact, $432$ can be replaced by $64$ on $\Omega_+$, but this is of no use for us). Now by definition of $\varepsilon_1$, we have \[ \mathbf{E}_\omega \sum_{\chi \in \hat H} \tau(p^{PX}_\chi p^{1,\omega}_{\langle \chi,\alpha(\omega)}) = \varepsilon_1,\] or equivalently \[ \mathbf{E}_\omega \| U^{PX}(\alpha(\omega)) - U^\omega\|_2^2 = 4\varepsilon_1.\] In the same way, \[ \mathbf{E}_\omega \| V^{PZ}(\beta(\omega)) - V^\omega\|_2^2 = 4\varepsilon_3.\] Putting everything together, we obtain \begin{multline*} \mathbf{E}_\omega \|U^{PX}(\alpha(\omega)) V^{PZ}(\beta(\omega)) - \langle \beta(\omega),\alpha(\omega)\rangle V^{PZ}(\beta(\omega)) U^{PX}(\alpha(\omega))\|_2^2\\ \leq (\sqrt{4\varepsilon_1} + \sqrt{432 \varepsilon_2}+\sqrt{4\varepsilon_3})^2)^2. \end{multline*} This is less than $440(\varepsilon_1+\varepsilon_2+\varepsilon_3) \leq 1320\varepsilon$ by Cauchy-Schwarz and \eqref{eq:sumepsiloni}. If $\mu$ is the law of $\alpha(\omega)$ and $\nu$ is the law of $\beta(\omega)$, by the assumption that $\alpha$ and $\beta$ are independent, we can write this as \[ \int \|U^{PX}(h) V^{PZ}(\chi) - \langle \chi,h\rangle V^{PZ}(\chi) U^{PX}(h)\|_2^2 d\mu(\chi) d\nu(h) \leq 1320 \varepsilon.\] We conclude by Corollary~\ref{thm:almost_anticommutation}. \end{proof} The next corollary is expressed in terms of the Pauli PVMs introduced in subsection \ref{subsection:Pauli}. \begin{corollary}\label{cor:pauli_from_good_strategy} Under the same assumption on $\alpha,\beta$ as in Proposition~\ref{prop:construction_of_game}, then any synchronous strategy with value $1-\varepsilon$ to $G_N$ is $O(cc'\varepsilon)$-close to a strategy on an algebra of the form $(M_{2^N}(\mathbf{C})\otimes \mathcal{N},\mathrm{\tr}\otimes \tau')$ where $P^{PX}_\chi = \tau^X_\chi\otimes 1_\mathcal{N}$ and $P^{PZ}_h = \tau^Z_h\otimes 1_\mathcal{N}$ for all $\chi \in\hat H, h\in H$. \end{corollary} \begin{proof}By the conclusion of Proposition~\ref{prop:construction_of_game} and Corollary~\ref{thm:almost_anticommutation_2group}, the representations $U^{PX},U^{PZ}$ are $O(cc'\varepsilon)$-close to a pair of representations $U_1,V_1$ satisfying \[ U_1(h) V_1(\chi) = \chi(h) V_1(\chi) U_1(h)\forall h \in H,\chi \in \hat H.\] By Lemma~\ref{lem:Pauli_matrices} $U_1$ and $V_1$ have the desired forms, and by Lemma~\ref{lem:close_strategies_unitaries} the closeness of the unitary representations is equivalent to the closeness of the strategies. \end{proof} \begin{example}\label{ex:game_from_code} Let $C,C'$ be two linear binary codes of the same dimension $N$, say with parameters $[k,N,d]$ and $[k',N,d']$. 
By Example~\ref{ex:code_spectral_gap}, any choice of a basis for $C$ gives rise to a probability measure on $(\mathbf{Z}/2\mathbf{Z})^N$ that is uniform on a subset of cardinality $k$, and its spectral gap constant is $\kappa = k/2d$. Similarly, any choice of a basis for $C'$ produces a probability measure on $(\mathbf{Z}/2\mathbf{Z})^N$ uniform on a subset of size $k'$ and with $\kappa = k'/2d'$. Let us consider $\Omega = \mathrm{supp}(\mu)\times \mathrm{supp}(\mu')$ with its uniform probability measure, and $\alpha,\beta:\Omega\to (\mathbf{Z}/2\mathbf{Z})^N$ the two coordinate projections. If we identify $(\mathbf{Z}/2\mathbf{Z})^N$ with its Pontryagin dual for the duality $\langle a,b\rangle=(-1)^{\sum_i a_i b_i}$, the previous construction therefore gives rise to a two-player non-local game $G(C)=(X_{C,C'},\mu,A_{C,C'},D_{C,C'})$ with $|X_{C,C'}|=O(kk')$, $A_{C,C'}=(\mathbf{Z}/2\mathbf{Z})^N$ and two particular questions $PX$ and $PZ$ with the following properties~ \begin{itemize} \item $\mu(PX)=\mu(PZ)=\frac 1 3$, \item Any synchronous strategy with value $1-\varepsilon$ to $G(C)$ is $O(\varepsilon\frac{kk'}{dd'})$-close to a strategy on an algebra of the form $(M_{2^N}(\mathbf{C})\otimes \mathcal{N},\mathrm{\tr}\otimes \tau')$ where $P^{PX}_a = \tau^X_a\otimes 1_\mathcal{N}$ and $P^{PZ}_b = \tau^Z_b\otimes 1_\mathcal{N}$ for all $a,b\in (\mathbf{Z}/2\mathbf{Z})^N$. \end{itemize} \end{example} The existence of asymptotically good codes (Proposition~\ref{prop:measure_with_small_support_and_small_Fourier_transform_nonabelian}) implies in particular the following Theorem. \begin{theorem}\label{thm:mainTheorem} For every $N$, there is a game $G_N$ with $|X|\leq C N^2$, $|A|=2^N$ and with two specific questions $PX,PZ \in X$ satisfyin $\mu(PX)=\mu(PZ) = \frac 1 3$, with answer sets $A(PX)=(\mathbf{Z}/2\mathbf{Z})^N = A(PZ)=(\mathbf{Z}/2\mathbf{Z})^N$ such that any synchronous strategy with value $1-\varepsilon$ to $G_N$ is $O(\varepsilon)$-close to a strategy on an algebra of the form $(M_{2^N}(\mathbf{C})\otimes \mathcal{N},\mathrm{\tr}\otimes \tau')$ where $P^{PX}_a = \tau^X_a\otimes 1_\mathcal{N}$ and $P^{PZ}_b = \tau^Z_b\otimes 1_\mathcal{N}$ for all $a,b\in (\mathbf{Z}/2\mathbf{Z})^N$. \end{theorem} Moreover, the $G_N$ can be made \emph{explicit}, replacing the existencial argument of asymptotically good codes from Proposition~\ref{prop:measure_with_small_support_and_small_Fourier_transform_nonabelian} by explicit constructions, for examples such as Justesen codes \cite{MR0465509} or expander codes \cite{MR2494807}, see also \cite{MR1996953}. \begin{proof} Combine Proposition~\ref{prop:measure_with_small_support_and_small_Fourier_transform_nonabelian} and Corollary~\ref{cor:pauli_from_good_strategy}. \end{proof} \subsection{Final comment}\label{subsection:comparison} Let $q=2^k$ be a power of $2$ with $k$ odd. This guarantees that the Pontryagin dual of $\mathbf{F}_q$ identifies with $\mathbf{F}_q$ for the duality bracket $\langle x,y\rangle = (-1)^{Tr(xy)}$ where we identify an element of $\mathbf{F}_q$ (here $xy$) with the $\mathbf{F}_2$-linear map of multiplication on $\mathbf{F}_q$, seen as an $\mathbf{F}_2$ vector space, and so $Tr(xy) \in \mathbf{F}_2$ is the trace of this $\mathbf{F}_2$-linear operator. Identify the Pontryagin dual of $\mathbf{F}_q^N$ with itself accordingly. Let $m$ be an integer. 
Consider the Reed-Muller code $C$, the set of all polynomials of individual degree $\leq 1$ in $m$ variables, seen as a subspace of the space $\mathbf{F}_q^{\mathbf{F}_q^m}$ of all functions $\mathbf{F}_q^m\to \mathbf{F}_q$. By the Schwarz-Zippel Lemma, any nonzero such polynomial has at most $mq^{m-1}$ zeros, so it is a $[q^m,2^m,\leq q^m(1-m/q)]_q$-code. By Example~\ref{ex:code_spectral_gap}, this code therefore gives rise to a probability measure $\mu$ on $\mathbf{F}_q^{2^m}$ that is uniformly supported on a set of size $q^{m+1}$, and such that $\kappa(\mu) \leq \frac{q-1}{q-m}$. In particular, as soon as $q \geq 2m$, we obtain $\kappa(\mu) \leq 2$. If we define a game from two copies of $C$ as in Example~\ref{ex:game_from_code}, we therefore obtain a game with $|X| = O(2^{2k(m+1)})$, $|A| = 2^{k2^m}$ and that satisfies the same conclusion as in Theorem~\ref{thm:mainTheorem}. In this example, the dependance between the number of questions $O(2^{2k(m+1)})$ and of answers $O(2^{k 2^m})$ is not as good as in Theorem~\ref{thm:mainTheorem}. In \cite[Section 7.3]{MIPRE} a game called the Pauli basis test is studied, depending on the same parameters $k$ and $m$ and an additional parameter $d$. We will not recall the precise description of the Pauli basis test here, but it is closely related to the game previously defined for this value of $H,\alpha,\beta$. In particular, it is not difficult to show that, for every $d\geq 1$, the Pauli basis test contains the game above, and therefore a conclusion similar to Theorem~\ref{thm:mainTheorem} holds for the Pauli basis test as soon as $q \geq 2m$. This is significantly better than in \cite[Theorem 7.14]{MIPRE}, where this statement is proved for the Pauli basis test, but with $O(\varepsilon)$ replaced by $a(md)^a(\varepsilon^b + q^{-b}+2^{-bmd})$ for some constants $a,b,c$. Therefore, Theorem~\ref{thm:mainTheorem} and the more general construction in Example~\ref{ex:game_from_code} can be seen as both an improvement, generalization and simplification of \cite[Theorem 7.14]{MIPRE}. However, it should be noted that this simplification does not remove all the dependances to the difficult result from \cite{quantumsoundness}, as this result is still used in the answer reduction (or PCP) part of \cite{MIPRE}. The main difference between the results in this note and \cite{quantumsoundness} is that in \cite{quantumsoundness}, the number of answers is much smaller. \section{Stability}\label{sec:stability} \begin{definition} Let $G$ be a countable group and $\mu$ a probability measure with generating support. A map $\varphi:G\to \mathcal{U}(\mathcal{M})$ is said to be an $(\varepsilon,\mu)$-almost homomorphism if $\iint \| \varphi(gh) - \varphi(g) \varphi(h)\|_2^2d\mu(g) d\mu(h) \leq \varepsilon$. \end{definition} Let $\mathcal{C}$ be a class of von Neumann algebras equipped with normal tracial states. \begin{definition}\label{def:stability} We say that $(G,\mu)$ is $\mathcal{C}$-stable if there is a non-decreasing function $\delta:[0,4]\to [0,4]$ such that $\lim_{t \to 0} \delta(t)= 0$ such that, for every $(\mathcal{M},\tau)$ in $\mathcal{C}$ and every $(\varepsilon,\mu)$-almost homomorphism $\varphi: G \to \mathcal{U}(\mathcal{M})$, there is a group homomorphism $\pi:G \to \mathcal{U}(\mathcal{M})$ satisfying \[ \int \|\varphi(g) - \pi(g)\|_2^2 d\mu(g) \leq \delta(\varepsilon).\] Such a function $\delta$ satisfying moreover that $\delta$ is concave and $\delta(4)= 4$ will be called a modulus of $\mathcal{C}$-stability. 
\end{definition} \begin{remark} If $(G,\mu)$ is $\mathcal{C}$-stable, then it admits a modulus of $\mathcal{C}$-stability (by replacing $\delta$ by the smallest function greater than $\delta$, concave and taking the value $4$ at $4$). The concavity requirement for $\delta$ is here to make statements such as Theorem~\ref{thm:stability_direct_product} cleaner. It is also very natural, and in fact if $\mathcal{C}$ is stable by direct sums then the best $\delta$ is necessarily concave. Indeed, if $\varphi_1$ and $\varphi_2$ are respectively $(\varepsilon_1,\mu)$ and $(\varepsilon_2,\mu)$ almost representations with values in $\mathcal{U}(\mathcal{M}_1)$ and $\mathcal{U}(\mathcal{M}_2)$, and $\lambda \in [0,1]$, then defining $\mathcal{M} = \mathcal{M}_1\oplus \mathcal{M}_2$ with trace $\tau(x_1,x_2) = \lambda \tau_1(x_1) + (1-\lambda)\tau_2(x_2)$, the pair $(\varphi_1,\varphi_2)$ defines a $(\lambda\varepsilon_1+(1-\lambda)\varepsilon_2,\mu)$ almost representation $\varphi$ on $\mathcal{M}$. Moreover, a unitary representation $\pi:G\to \mathcal{U}(\mathcal{M})$ is the same as a pair of unitary representations $\pi_i:G\to \mathcal{U}(\mathcal{M}_i)$, with \[ \int \|\varphi(g) - \pi(g)\|_2^2 d\mu(g) =\lambda \int \|\varphi_1(g) - \pi_1(g)\|_2^2 d\mu(g) + (1-\lambda) \int \|\varphi_2(g) - \pi_2(g)\|_2^2 d\mu(g).\] Taking the supremum over all $(\varepsilon_1,\mu)$ and $(\varepsilon_2,\mu)$ almost representations, we obtain \[ \lambda\delta(\varepsilon_1) + (1-\lambda)\delta(\varepsilon_2) \leq \delta(\lambda\varepsilon_1+(1-\lambda)\varepsilon_2).\] That is, $\delta$ is concave. \end{remark} \begin{definition}\label{def:flexible_stability} We say that $(G,\mu)$ is $\mathcal{C}$ flexibly stable if there is a non-decreasing function $\delta:[0,4]\to [0,4]$ such that $\lim_{t \to 0} \delta(t) = 0$ such that, for every $(\mathcal{M},\tau)$ in $\mathcal{C}$ and every $(\varepsilon,\mu)$-almost homomorphism $\varphi: G \to \mathcal{U}(\mathcal{M})$, there is a projection $P \in \mathcal{M}_\infty$, a group homomorphism $\pi:G \to \mathcal{U}(\mathcal{M})$ and an isometry $w \in P \mathcal{M}_\infty 1_\mathcal{M}$ satisfying \[\max\left(\tau(P)-1, \int \|\varphi(g) - w^*\pi(g)w\|_2^2 d\mu(g)\right) \leq \delta(\varepsilon).\] Such a function $\delta$ satisfying moreover that $\delta$ is concave and $\delta(4)=4$ will be called a modulus of $\mathcal{C}$-flexible stability. \end{definition} For example, Theorem~\ref{thm:averageGowersHatami} says that for every finite group and any $\mathcal{C}$, $(G,\mathbf{P}_G)$ is $\mathcal{C}$ flexibly stable with modulus $\delta(t) = \min(4,169 t)$. This is an adaptation of the standard notions of Hilbert-Schmidt stability and Hilbert-Schmidt flexible stability, see for example \cite{Ioana2}. Let $\mathcal{C}_{\mathrm{fin}}$ denote the class of all finite-dimensional von Neumann algebras with tracial states. \begin{lemma} A countable group $G$ is Hilbert-Schmidt (flexibly) stable if and only if there is a probability measure $\mu$ on $G$ with generating support such that $(G,\mu)$ is $\mathcal{C}_{\mathrm{fin}}$ (flexibly) stable. If $G$ is finitely presented, $\mu$ can moreover be taken to be of finite support. \end{lemma} \begin{proof} The lemma is immediate if $\mathcal{C}_{\mathrm{fin}}$ is replaced by the set $\{M_n\} = \{(M_n(\mathbf{C}),\tr) \mid n\geq 1.\}$. So the whole point of the lemma is to show that if $(G,\mu)$ is $\{M_n\}$-stable, then $(G,\mu)$ is $\mathcal{C}_{\mathrm{fin}}$-stable, and similarly for flexible stability. 
Assume that $(G,\mu)$ is $\{M_n\}$-stable, and let $\delta$ be a modulus. We shall prove that $(G,\mu)$ is $\mathcal{C}_{\mathrm{fin}}$-stable with the same modulus. Let $(\mathcal{M},\tau) \in \mathcal{C}_{\mathrm{fin}}$ and $\varphi:G\to \mathcal{U}(\mathcal{M})$ be an $(\varepsilon,\mu)$-almost homomorphism. We can decompose $\mathcal{M}$ into a finite direct sum of matrix algebras $\mathcal{M} = \oplus_{i=1}^k M_{n_i}(\mathbf{C})$ with trace $\tau = \sum_i \lambda_i \tr_{n_i}$ for positive numbers $\lambda_i$ summing to $1$. Then $\varphi(g) = (\varphi_i(g))_i$ where $\varphi_i$ is an $(\varepsilon_i,\mu)$-almost homomorphism for some numbers $\varepsilon_i$ satisfying \[\sum\nolimits_i \lambda_i\varepsilon_i\leq\varepsilon.\] Therefore, by $\{M_n\}$-stability, there is homomorphism $\pi_i \colon G\to\mathcal{U}(n_i)$ such that $\int\|\varphi_i(g) - \pi_i(g)\|_2^2d\mu(g) \leq \delta(\varepsilon_i)$. If we define a homomorphism $\pi:G\to \mathcal{U}(\mathcal{M})$ by $\pi(g) = (\pi_i(g))_{g\in G}$, we obtain \[ \int \|\varphi(g) - \pi(g)\|_2^2 d\mu(g) = \sum_{i=1}^k\lambda_i \int\|\varphi_i(g) - \pi_i(g)\|_2^2d\mu(g) \leq \delta(\varepsilon)\] by concavity of $\delta$. The argument is identical for flexible stability, and left to the reader. \end{proof} Assume that $\mathcal{C}$ is class of von Neumann algebras closed by taking subalgebras. \begin{theorem}\label{thm:stability_direct_product} Direct product of $\mathcal{C}$-stable groups are $\mathcal{C}$-stable. More precisely, if $(G_1,\mu_1)$ is $\mathcal{C}$-stable with modulus $\delta_1$ and $(G_2,\mu_2)$ is $\mathcal{C}$-stable with modulus $\delta_2$, then $(G_1 \times G_2,\mu)$ is $\mathcal{C}$-stable with modulus $\delta(\varepsilon) \lesssim \delta_2(\kappa(\mu_1) \delta_2(\varepsilon))$, where $\mu$ is the probability measure \[\mu(x,y) = \frac{1}{2}(\mu(x) 1_{y=1}+\mu(y) 1_{x=1}).\] \end{theorem} The proof will use the following simple lemmas. \begin{lemma}\label{eq:almost-unitary_close_to_unitary} Let $(\mathcal{M},\tau)$ be a tracial von Neumann algebra and $\mathcal{N}\subset\mathcal{M}$ be a subalgebra, and let $E_\mathcal{N}:\mathcal{M} \to \mathcal{N}$ be the conditional expectation. For every unitary $V \in \mathcal{U}(\mathcal{M})$, there is a unitary $\tilde{V}\in\mathcal{U}(\mathcal{N})$ such that $\|V-\tilde{V}\|_2 \leq \sqrt{2} \|V-E_\mathcal{N}(V)\|_2$. \end{lemma} The $\sqrt{2}$ is optimal, for example when $E_\mathcal{N}(V)=0$. \begin{proof} Let $X = E_\mathcal{N}(V)$. Since $\mathcal{N}$ is finite, we can write $X=\tilde{V}|X|$ where $\tilde{V} \in\mathcal{U}(\mathcal{N})$ and $|X|=(X^*X)^{\frac 1 2}$. By the duality between $L_1(\mathcal{M},\tau)$ and $\mathcal{M}$, we have $\mathbf{R}e \tau(V^*X) \leq\tau(|X|)$, or equivalently $\|\tilde{V}-X\|_2 \leq \|V-X\|_2$. If we decompose $V-\tilde{V} = V-X + X-\tilde{V}$, the terms $V-X$ and $X-\tilde{V}$ are orthogonal, and therefore \[\|V-\tilde{V}\|_2^2 = \|V-X\|^2 + \|\tilde{V}-X\|^2 \leq 2\|V-X\|_2^2.\] This proves the lemma. \end{proof} \begin{lemma}\label{eq:norm_of_conditional_exp} Let $\mathcal{N}\subset\mathcal{M}$ be a von Neumann sulalgebra, and $E_\mathcal{N}:\mathcal{M}\to\mathcal{N}$ be the trace-preserving conditional expectation. For every $\xi \in L_2(\mathcal{M},\tau)$, \[ \|\xi - E_\mathcal{N}(\xi)\|_2 = \sup\{\tau(\xi \eta) \mid \eta \in L_2(\mathcal{M},\tau), E_\mathcal{N}(\eta)=0,\|\eta\|_2 = 1\}.\] \end{lemma} \begin{proof} $(1-E_\mathcal{N})$ is the orthogonal projection on $\{\eta \in L_2(\mathcal{M},\tau) \mid E_\mathcal{N}(\eta)=0\}$. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:stability_direct_product}] Let $\varphi : G_1\times G_2\to\mathcal{U}(\mathcal{M})$ be a $(\varepsilon,\mu)$-almost homomorphism. The idea is simple: we first use the stability of $G_1$ to say that the restriction of $\pi$ to $G_1$ is close to a representation. Then by property (T), we will deduce that the restriction of $\pi$ to $G_2$ is close to an almost homomorphism from $G_2$ to the commutant of $G_1$, and therefore by stability of $G_2$ to an actual homorphism with values in the commutant of $G_1$. Here are the details. If we denote \[\varepsilon_{i,j} = \iint_{G_i\times G_j} \| \varphi(gh) - \varphi(g) \varphi(h)\|_2^2d\mu_i(g) d\mu_j(h),\] we then have $\varepsilon_{1,1}+\varepsilon_{2,2} + \varepsilon_{1,2}+\varepsilon_{2,1}\leq 4\varepsilon$. The restriction of $\varphi$ to $G_1$ is an $(\varepsilon_{1,1},\mu_1)$-almost representation of $G_1$, so there is a unitary representation $\pi_1:G_1\to\mathcal{U}(\mathcal{M})$ such that \[\int \|\varphi(g)-\pi_1(g)\|_2^2 d\mu_1(g) \leq \delta_1(\varepsilon_{1,1}).\] Denote $\mathcal{N}=\mathcal{M} \cap \tilde{\pi}_1(G_1)'$, and $E:\mathcal{M}\to\mathcal{N}$ the conditional expectation. Since $\mathcal{C}$ is assumed to be stable by subalgebras, we have $\mathcal{N}\in\mathcal{C}$. Define, for every $h \in G_2$, \[\eta(h) = \|\varphi(h) -E(\varphi(h))\|_2.\] We first establish an upper bound for the norm of $\eta$ in $L_2(G_2,\mu_2)$. This is where we use that $G_1$ has property (T). By Lemma~\ref{lemma:spectral_gap_commutator}, we know that \[ \eta(h)^2 \leq \kappa(\mu_1) \int_{G_1} \|[\varphi(h),\pi_1(g)]\|_2^2 d\mu_1(g).\] We can bound \[\|[\varphi(h),\pi_1(g)\|_2 \leq 2\|\varphi(g)-\pi_1(g)\|_2 + \|\varphi(h)\varphi(g) -\varphi(hg)\|_2 + \|\varphi(g) \varphi(h) - \varphi(gh)\|_2,\] So by the triangle inequality in $L_2(\mu_1\times \mu_2)$, we obtain \[ \|\eta\|_{L_2(\mu_2)} \leq \sqrt{\kappa(\mu_1)/2}(2 \sqrt{\delta_1(\varepsilon_{1,1})} + \sqrt{\varepsilon_{1,2}} + \sqrt{\varepsilon_{2,1}}).\] By the standing assumptions on $\delta_1$, and the Cauchy-Schwarz inequality, we obtain the following inequality that we will use shortly \begin{equation}\label{eq:L2norm_of_eta} \int\eta(h)^2 d\mu_2(h) \leq 12 \kappa(\mu_1) \delta_1(\varepsilon).\end{equation} By Lemma~\ref{eq:almost-unitary_close_to_unitary}, for every $h \in G_2$ there is a unitary $V(h) \in \mathcal{U}(\mathcal{N})$ such that $\|\varphi(h) - V(h)\|_2\leq \sqrt{2}\eta(h)$. Our goal is to show that $V$ is an almost-homomorphism, to then apply flexible stability for $G_2$. By the triangle inequality, \[ \left(\int \|V(gh)-V(g)V(h)\|_2^2 d\mu_2(g) d\mu_2(h)\right)^{\frac 1 2}\] is bounded above by \[ \sqrt{\varepsilon_{2,2}} + 2\sqrt{2} \|\eta\|_{L_2(\mu_2)} + \sqrt{2}\|\eta\|_{L_2(\mu_2\ast\mu_2)}.\] Decompose $\varphi(gh) = a+b+c$ with $a= \varphi(gh)-\varphi(g)\varphi(h)$, $b= \varphi(g) (\varphi(h) - E_\mathcal{N}(\varphi(h)))$ and $c=\varphi(g) E_\mathcal{N}(\varphi(h))$. Using Lemma~\ref{eq:norm_of_conditional_exp}, and writing $\sup_\eta$ for the supremum over all unit vectors in the orthogonal of $L_2(\mathcal{N},\tau)$ in $L_2(\mathcal{M},\tau)$, we have \begin{align*} \eta(gh) & = \sup_\eta \tau(a\eta) + \tau(b\eta)+ \tau(c\eta)\\ & \leq \|a\|_2 + \|b\|_2 +\|c - E_\mathcal{N}(c)\|_2\\ & \leq \|\varphi(gh) - \varphi(g) \varphi(h)\|_2 + \eta(h)+\eta(g). 
\end{align*} As a consequence, we have $\|\eta\|_{L_2(\mu_2\ast\mu_2)} \leq \sqrt{\varepsilon_{2,2}} + 2\|\eta\|_{L_2(\mu_2)}$, and we deduce \[ \left(\int \|V(gh)-V(g)V(h)\|_2^2 d\mu_2(g) d\mu_2(h)\right)^{\frac 1 2} \leq (1+\sqrt{2})\sqrt{\varepsilon_{2,2}} + 4\sqrt{2} \|\eta\|_{L_2(\mu_2)}.\] By \eqref{eq:L2norm_of_eta}, this last quantity is less than $\sqrt{C\kappa(\mu_1) \delta_1(\varepsilon)}$ for some universal constant $C$. The theorem follows easily. Indeed, by stability for $G_2$, we deduce that there is a unitary representation $\pi_2:G\to\mathcal{U}(\mathcal{N})$ such that \[ \int \|V(g) - \pi_2(g)\|_2^2d\mu_2(g) \leq \delta_2(C\kappa(\mu_1)\delta_1(\varepsilon)),\] and therefore \[ \int \|\varphi(g)- \pi_2(g)\|_2^2d\mu_2(g) \leq 3(\|\eta\|_{L_2(\mu_2)}^2 + \delta_2(C\kappa(\mu_1)\delta_1(\varepsilon))). \] By definition of $\mathcal{N}$, $\pi_2(g)$ commutes with $\pi_1(G_1)$, so the pair $({\pi}_1,{\pi}_2)$ gives rise to a unitary representation ${\pi} : G_1\times G_2\to \mathcal{U}(\mathcal{M})$ by ${\pi}(g,h)={\pi}_1(g){\pi}_2(h)$. We have \begin{align*} \int \|\varphi(g) - {\pi}(g)\|_2^2 d\mu(g) &\leq \frac{1}{2}(\delta_1(\varepsilon_{1,1}) + 3\|\eta\|_{L_2(\mu_2)}^2 + 3\delta_2(C\kappa(\mu_1)\delta_1(\varepsilon))). \end{align*} This is $\lesssim \delta_2(\kappa(\mu_1)\delta_1(\varepsilon))$ by \eqref{eq:L2norm_of_eta} and the standing assumption that the moduli $\delta_i$ are concave and satisfy $\delta_i(t) \geq t$. \end{proof} \begin{theorem}\label{thm:flexible_stability_direct_product} Direct product of $\mathcal{C}$-flexibly stable groups are $\mathcal{C}$-flexibly stable. More precisely, if $(G_1,\mu_1)$ is $\mathcal{C}$-flexibly stable with modulus $\delta_1$ and $(G_2,\mu_2)$ is $\mathcal{C}$-flexibly stable with modulus $\delta_2$, then $(G_1 \times G_2,\frac{1}{2}(\mu_1 \otimes \delta_1 + \delta_1 \otimes \mu_2))$ is $\mathcal{C}$-flexibly stable with modulus $\delta(\varepsilon) \lesssim \delta_2(\kappa(\mu_1) \delta_2(\varepsilon))$. \end{theorem} \begin{proof} This is proved in the same way as for stability. The only difference is that the von Neumann algebras change after each use of flexible stability. First, when flexible stability is used for $G_1$, a representation $\tilde{\pi}_1$ of $G_1$ is constructed in a small dilation $\mathcal{M}_1$ of $\mathcal{M}$. The almost representation of $G_2$ is then defined with values in $\mathcal{U}(\mathcal{M}_1)$ as $w_1 \pi(g) w_1^* +P_1 - w_1w_1^*$. Similarly, when flexible stability is used for $G_2$ to construct a representation $\tilde{\pi}_2$ in a small dilation $\mathcal{M}_2$ of $\mathcal{M}_1$, the representation $\tilde{\pi}_1$ of $G_1$ is then defined with values in $\mathcal{U}(\mathcal{M}_2)$ as $w_2 \tilde{\pi}_1(g) w_2^* +P_2 - w_2w_2^*$. \end{proof} One way of expressing the crucial step in the above proofs is as follows~: by (the baby case of) von Neumann's bicommutant theorem, the centralizer of a subgroup of $\mathcal{U}(n)$ decomposes as a direct product of smaller unitary groups. This is not true for permutation groups, and therefore the proof of the preceding theorems does not apply verbatim for permutation stability or permutation flexible stability. The following question is however natural. A positive answer would pleasingly complement Ioana's results \cite{MR4134896}. \begin{question} Is it true that the direct product of two (flexibly) permutation stable groups is (flexibly) permutation stable, provided that one of them has property ($\tau$)? \end{question} \end{document}
\begin{document} \maketitle \begin{abstract} We give an easy optimal bound for the dimension of the subspaces generated by the best Diophantine approximations. \end{abstract} \section{Completely irrational subspaces.} Let $m,n$ be positive integers and $d=m+n$. We consider Euclidean space $\mathbb{R}^d$ with coordinates $$ \pmb{z} = (\pmb{x},\pmb{y})^\top,\,\,\, \pmb{x} = (x_1,...,x_m)^\top \in \mathbb{R}^n,\,\,\, \pmb{y} = (y_1,...,y_n)^\top \in \mathbb{R}^n $$ (for any martix $W$ by $W^\top$ we denote the matrix transposed to $W$). Linear subspace $\mathcal{R} \subset \mathbb{R}^d$ of dimension $ r = {\rm dim}\, \mathcal{R}$ is called {\it rational} if the intersection $\mathcal{R}\cap \mathbb{Z}^d$ is a $r$-dimensional sublattice in $\mathbb{Z}^d$. By $ H(\mathcal{R})$ we denote the {\it height} of the rational subspace $\mathcal{R}$, that is the fundamental volume of $r$-dimensional lattice $\mathcal{R}\cap \mathbb{Z}^d$. We define $m$-dimensional linear subspace $\mathcal{L}$ in $\mathbb{R}^d$ to be {\it completely irrational} if for any rational subspace $\mathcal{R}\subset\mathbb{R}^d$ of dimension ${\rm dim }\, \mathcal{R}= n$ we have $\mathcal{L}\cap \mathcal{R} =\{ \pmb{0}\}$, that is $\mathcal{L}$ has no non-trivial intersections with rational subspaces of dimension $\le r$. Let $$\Theta =\left( \begin{array}{ccc} \theta_{1,1}&\cdots&\theta_{1,m}\cr \theta_{2,1}&\cdots&\theta_{2,m}\cr \cdots &\cdots &\cdots \cr \theta_{n,1}&\cdots&\theta_{n,m} \end{array} \right)\,\,\, $$ be $m\times n$ real matrix. In the present paper we will consider only $m$-dimensional subspaces of the form \begin{equation}\label{ll} \mathcal{L}_\Theta = \{ \pmb{z} \in \mathbb{R}^d: \,\,\,\, \pmb{y} = \Theta \pmb{x}\}. \end{equation} In the case of completely irrational subspace $\mathcal{L}_\Theta$ we will also call matrix $\Theta$ to be completely irrational. \section{ Best approximation vectors.} We use notation $|\pmb{\xi}| = \max_{1\le j \le k} |\xi_j|$ for the sup-norm of the vector $\pmb{\xi}=(\xi_1,...,\xi_k)^\top\in \mathbb{R}^k$ and $||\pmb{\xi}|| = \min_{\pmb{a}\in \mathbb{Z}^k}|\pmb{\xi}-\pmb{a}|$ for the distance to the nearest integer point in sup-norm. An integer vector $\pmb{x} $ is called a best approximation for the matrix $\Theta$ if $$ ||\Theta \pmb{x}|| =|\Theta\pmb{x} - \pmb{y}| <\min_{\pmb{x}'} ||\Theta \pmb{x}'||, $$ where the minimum is taken over all non-zero integer points $\pmb{x}'$ with $|\pmb{x}'|\le |\pmb{x}|$, $ \pmb{x}'\neq \pm\pmb{x}$, and $ \pmb{y} = (y_1,...,y_n)^\top\in \mathbb{Z}^n$ is just the integer point at which the distance to the nearest integer point is attached. For a best approximation $\pmb{x}$ satisfying this definition the point $-\pmb{x}$ will also be a best approximation. We will also consider the extended vector of the best approximation $\pmb{z} = (\pmb{x},\pmb{y})^\top$. We should note that in general it may happen that for two integer points $\pmb{x}'$ and $\pmb{x}''$ with the same value $|\pmb{x}'|=|\pmb{x}''| $ one has $ ||\Theta \pmb{x}'|| = ||\Theta \pmb{x}''||$. So, in the general situation we cannot define uniquely a sequence of the best approximation vectors $\pm \pmb{x}_\nu\in \mathbb{Z}^m, \nu=1,2,3,...$ to satisfy \begin{equation}\label{1} |\pmb{x}_1|< |\pmb{x}_2|<...< |\pmb{x}_\nu|< |\pmb{x}_{\nu+1}|<....$$$$ ||\Theta \pmb{x}_1|| >||\Theta \pmb{x}_2||>... > ||\Theta \pmb{x}_\nu||>... >||\Theta \pmb{x}_{\nu+1}||>... \, . 
\end{equation} Also in the general situation we cannot define uniquely vectors $\pmb{y}_\nu\in \mathbb{Z}^n$ to satisfy $ ||\Theta \pmb{x}_\nu|| =|\Theta\pmb{x}_\nu - \pmb{y}_\nu| $. However the corresponding values of norms $ |\pmb{x}_\nu|$ and $ ||\Theta \pmb{x}_\nu||$ satisfying (\ref{1}) are well defined. We define matrix $\Theta$ to be {\it good} if the unique sequence of best approximation vectors $\pm\{\pmb{z}_\nu\}$ satisfying (\ref{1}) is well defined. In this note we consider only good matrices. \vskip+0.3cm Next, we should note that parallelepiped \begin{equation}\label{pi} \Pi_\nu =\{ \pmb{z}= (\pmb{x},\pmb{y})^\top \in \mathbb{R}^d:\,\,\,\,\, |\pmb{x}|\le |\pmb{x}_{\nu+1}|,\,\,\,\,\, |\Theta\pmb{x} - \pmb{y}|\le ||\Theta\pmb{x}_\nu || \,\} \end{equation} has no non-zero integer points inside and so from Minkowski convex body theorem one has \begin{equation}\label{ppp} ||\Theta \pmb{x}_\nu|| \le |\pmb{x}_{\nu+1}|^{-\frac{m}{n}}. \end{equation} Also we would like to recall the definition of irrationality measure function $$ \psi_\Theta (t) = \min_{\pmb{x}\in \mathbb{Z}^m: \, 0< |\pmb{x}|\le t }\,\, ||\Theta\pmb{x}||, $$ which is a piecewise constant function which is not continuous just in the points $t_\nu = |\pmb{x}_\nu|$, as well as a general statement by Jarn\'{\i}k \cite{J59} which claims the following. \vskip+0.3cm (a) For any integers $ m \ge 2, n\ge 1$ and for any function $ \psi (t)$ decreasing to zero as $t\to \infty$ there exist $n\times m$ matrices $\Theta$ with algebraically independent elements $\theta_{j,i} , 1\le i \le m, 1\le j \le n$ such that \begin{equation}\label{z1} \psi_\Theta (t) \le \psi (t) \,\,\,\,\,\text{for all}\,\,\,t \,\,\,\text{large enough}. \end{equation} (b) For $ m =1$ and for any $n\ge 2$ the following holds. Suppose that function $ \psi (t)$ decreas to zero as $t\to \infty$ but $\lim_{t\to \infty} t\cdot \psi(t) = +\infty$. Then there exist $n\times 1$ matrices (vectors) $\Theta$ with algebraically independent elements $\theta_{j,1}, 1\le j \le n$ such that (\ref{z1}) holds. \vskip+0.3cm In particular, Jarn\'{\i}k's result means that matrices $\Theta$ satisfying (\ref{z1}) can be chosen to be completely irrational and good. \section{ Dimension of subspaces of best approximations.} For a good matrix $\Theta$ we define the value $$ R(\Theta) = \min \{ s\in \mathbb{Z}_+:\,\,\, \text{there exists a linear subspace }\,\,\mathcal {R} \subset \mathbb{R}^d\,\, \text{of dimension} $$ $$ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\rm dim}\, \mathcal{R} =s \,\,\, \text{and}\,\, \nu_0 \in \mathbb{Z}_+ \,\,\, \text{such that }\,\,\, \pmb{z}_\nu \in \mathcal{R}\,\,\,\text{for all}\,\,\, \nu\ge \nu_0\} $$ which will be the main object of our study. It is clear that for any $m,n\ge 1$ for any good matrix $\Theta$ one has $ R(\Theta) \ge 2$. In the next theorem we collect together all the results about all possible values of $R(\Theta)$ for completely irrational $\Theta$. \vskip+0.3cm {\bf Theorem 1.} {\it Suppose $\Theta$ to be a completely irrational good matrix. 
Then \vskip+0.2cm {\rm (1)} if $ m=1$ for any $n$ one has $ R(\Theta) = d= n+1$; {\rm (2a)} if $n=1$ and $ m \ge 2$ one has $ R(\Theta) \ge 3$; {\rm (2b)} if $n=1$ for any $ m \ge 2$ there exist completely irrational good $\Theta$ with $ R(\Theta) = 3$; {\rm (3a)} if $n,m\ge 2 $ and $ n\le m $ one has $ R(\Theta) \ge n+2$; {\rm (3b)} if $n,m\ge 2 $ and $ n\le m $ there exist completely irrational good $\Theta$ with $ R(\Theta) = n+2$; {\rm (4a)} if $n,m\ge 2 $ and $ n> m $ one has $ R(\Theta) \ge n+1$; {\rm (4b)} if $n,m\ge 2 $ and $ n> m $ there exist completely irrational good $\Theta$ with $ R(\Theta) = n+1$. } \vskip+0.3cm Statements (1), (2a), (2b) are well known and discussed in surveys \cite{Che,Msing}. We should note that statement (2a) goes back to Jarn\'{\i}k \cite{Jsze,Jche} and was used by Davenport and Schmidt \cite{DS}, Lemma 5. Various examples of degeneracy of dimension from \cite{Msing} deal with the situation when $\Theta$ is not a completely irrational matrix. Statement (2b) was first proven in \cite{MDAN}. In \cite{Neck} (Theorem 3.5) it is shown that for an $m$-dimensional completely irrational subspace the inequality $ R(\Theta) \ge n+1$ is valid (see also Corollary 2 of Theorem 9 from \cite{Msing}), and so (4a) follows from this fact. Statements (3a), (3b), (4b) has never been documented. We give a proof of statement (3a) in Section 4 below. Statement (4b) is proven in Section 5. The proof of statement (3b) is quite similar and we give a sketched proof in Section 6. \vskip+0.3cm We would like to formulate here some additional comments concerning cases (3) and (4). To do this we need to recall the definition of {\it uniform Diophantine exponent} $$\hat{\omega} (\Theta) = \sup\{ \gamma \in \mathbb{R}:\,\,\, \limsup_{t\to \infty }\,\, t^\gamma \cdot \psi_\Theta ( t) =\limsup_{\nu \to \infty}\,\, |\pmb{x}_{\nu+1}|^\gamma \cdot ||\Theta\pmb{x}_\nu ||<+ \infty\}. $$ It is well known that $\hat{\omega} (\Theta)$ satisfies $$ \frac{m}{n} \le \hat{\omega}(\Theta) \le \begin{cases} 1,\,\,\,\,\, \,\,\,\,\, m=1\cr \infty,\,\,\,\,\, \,\,m \ge 2 \end{cases} $$ and may attain any value in these intervals. \vskip+1cm {\bf Remark 2.} (a) Suppose that $ n> m\ge 2$ and $ R(\Theta) = n+1$. Then $ \frac{m}{n} \le \hat{\omega} (\Theta)\le 1$. Moreover, $\hat{\omega} (\Theta)$ may attain any value in this interval; (b) Suppose that $ 2\le n\le m$ and $ R(\Theta) = n+2$. Then, of course, inequality $ \frac{m}{n} \le \hat{\omega} (\Theta)\le +\infty $ holds, and moreover, $\hat{\omega} (\Theta)$ may attain any value in this interval. (c) In particular, there exist matrices $\Theta$ with degenerate dimension of subspaces of best approximation vectors and with $\hat{\omega} (\Theta)=\frac{m}{n}$. \vskip+0.3cm Remark 2 follows form the fact that irrationality measure functions $\psi_\Theta (t)$ and $\psi_\Theta (t)$ eventually coincide (see Lemma 5 below) and the irrationality measure function defines the uniform exponent. \vskip+0.3cm Quite recently Schleischitz \cite{Sch} studied various properties of the sets of $\Theta$ for which $R(\Theta) <d =m+1$ in the case $n=1$. In particular he give various bounds for Hausdorff dimension of the sets under the consideration. The analysis of the subspaces of best approximations and their dimensions turned out to be important to some other studies related to Diophantine exponents, see \cite{Sch0}. In particular, in \cite{Sch0} it is shown that for the {\it ordinary Diophantine exponent} $${\omega} (\Theta) \! = \! 
\sup\{ \gamma \in \mathbb{R}:\liminf_{t\to \infty } t^\gamma \cdot \psi_\Theta ( t) =\limsup_{\nu \to \infty}\, |\pmb{x}_{\nu+1}|^\gamma \cdot ||\Theta\pmb{x}_\nu ||<+ \infty\}\! \ge \!\hat{\omega}(\Theta) $$ for $m\times n $ matrix $\Theta$ under the condition $ r= R(\Theta) <d$ one has non-trivial lower bounds of the form $$ {\omega} (\Theta) \ge G(m,n,r) \cdot \frac{m}{n},\,\,\,\,\text{where}\,\,\,\,\, G(m,n,r) >1. $$ This happens due to existence of a non-trivial lower bound for the ratio $\frac{\omega(\Theta)}{\hat{\omega}(\Theta)}$. \vskip+0.3cm If we suppose that $\Theta$ is not necessary completely irrational but good, certain existence results were obtained in \cite{Msing}, Section 2. In particular (Theorems 7, 10, 13, 14), it was shown that (1) for any $m\ge 2$ and $ n >m$ there exist good matrices $\Theta$ with entries $\theta_{i,j}$ linearly independent together with 1 over $\mathbb{Q}$ such that $ R(\Theta) = 2$; (2) for any $ m>n$ for a good matrix $\Theta$ one has $ R(\Theta) \ge 3$; (3) for any $m\ge 2$ and arbitrary $ n\ge 1$ there exist good matrices $\Theta$ with entries $\theta_{i,j}$ linearly independent together with 1 over $\mathbb{Q}$ such that $ R(\Theta) = 3$. \vskip+0.3cm We finish this section wth the following remark. In the case $m=n=2, d=4$ we do not know if there exists good matrix $\Theta$ with $R(\Theta)=2$. By our Theorem 1 (3a) we see that this is not possible for a completely irrational matrix $\Theta$, as in such a case one has $R(\Theta) = 4$ (see also Corollary 3.7 from \cite{Neck}). In \cite{Msing} it is shown that $R(\Theta)=3$ is not possible for good $2\times 2$ matrices. \section{ Proof of the statement (3a).} We assume that $\mathcal{L}$ defined in (\ref{ll}) is a completely irrational subspace. As $R(\Theta)\ge n+1$ (see Theorem 3.5 from \cite{Neck}) it is enough to prove that $R(\Theta)= n+1$ does not hold. Suppose that $\mathcal{R} \subset \mathbb{R}^d, {\rm dim}\, \mathcal{R} = n+1$ be the rational subspace of the smallest dimension form the definition of $R(\Theta)$. We consider the intersection $ \ell = \mathcal{R}\cap \mathcal{L}$. We claim that $\ell$ is a one-dimensional linear subspace. Indeed, if ${\dim}\, \ell \ge 2$ then the union $\mathcal{R}\cup \mathcal{L} $ is contained in a $(d-1)$-dimensional subspace $\mathcal{W}$ of $\mathbb{R}^d$. Then for any $n$-dimensional (rational) subspace $ \mathcal{R}'\subset \mathcal{R}$, both $n$-dimensional subspace $\mathcal{R}'$ and $m$-dimensional subspace $\mathcal{L}$ lie in $(m+n-1)$-dimensional subspace $\mathcal{W}$. So for rational subspace $\mathcal{R}'$ we have $\mathcal{R}'\cap \mathcal{L} \neq \{ \pmb{0}\}$ and this is not possible by the definition of completely irrational subspace. As $ n\le m$ from (\ref{ppp}) we see that \begin{equation}\label{pp} ||\Theta \pmb{x}_\nu|| \cdot |\pmb{x}_{\nu+1}|\le 1. \end{equation} We deal with two successive best approximations $ \pmb{z}_j = (\pmb{x}_j,\pmb{y}_j)^\top\in \mathcal{R}, j = \nu, \nu+1$. We prove that \begin{equation}\label{pp} \lim_{\nu \to \infty } ||\Theta \pmb{x}_\nu|| \cdot |\pmb{x}_{\nu+1}| =\infty \end{equation} and this will contradict to (\ref{pp}). Let ${\rm dist}\, (\mathcal{A},\mathcal{B}) $ denotes the Euclidean distance between the sets $\mathcal{A},\mathcal{B}\subset \mathbb{R}^d$ and ${\rm angle}\, (\pmb{z}',\pmb{z}'')$ denote the angle between non-zero vectors in $\mathbb{R}^d$. 
We consider the orthogonal complement $\ell^\perp$ which is a $(d-1)$-dimensional subspace and subspaces $\frak{R} = \mathcal{R}\cap \ell^\perp, \frak{L} = \mathcal{L}\cap \ell^\perp$. By compactness argument we see that $$ \delta (\mathcal{R},\mathcal{L} ) = \min_{\pmb{z}' \in \frak{R},\pmb{z}''\in \frak{L}} \, {\rm angle}\, (\pmb{z}',\pmb{z}'') >0. $$ It is clear that for any $\pmb{z} = (\pmb{x},\pmb{y})^\top \in \mathcal{R}$ one has \begin{equation}\label{qq} {\rm dist}\, (\pmb{z}, \ell) \le \frac{{\rm dist}\, (\pmb{z},\mathcal{L})}{\sin \delta (\mathcal{R},\mathcal{L} )} \le C_{\mathcal{L},\mathcal{R} }\, |\Theta\pmb{x}-\pmb{y}| \end{equation} with some positive constant $C_{\mathcal{L},\mathcal{R} }$ and \begin{equation}\label{qqq} {\rm dist}\, (\pmb{z}_\nu, \ell) \to 0,\,\,\,\,\,\nu \to \infty. \end{equation} Now we consider two-dimensional subspace $\pi_\nu =\langle \pmb{z}_\nu, \pmb{z}_{\nu+1}\rangle_{\mathbb{R}}$ and two-dimensional lattice $\Lambda_\nu = \langle \pmb{z}_\nu, \pmb{z}_{\nu+1}\rangle_{\mathbb{Z}}\subset \pi_\nu $. Let $\Delta_\nu $ denotes covolume (the area of the fundamental parallelogramm) of $\Lambda_\nu$. As the number of two-dimensional integer sublattices of $\mathbb{Z}^d$ with bounded covolume is finite and $\ell \cap \pi_\nu = \{\pmb{0}\} \,\, \forall\, \nu$, from (\ref{qqq}) we see that \begin{equation}\label{qqqq} \Delta_\nu \to \infty, \nu \to \infty. \end{equation} As $ \pmb{z}_\nu, \pmb{z}_{\nu+1} \in \Pi_\nu \cap\mathbb{R} , \ell \subset \Pi_\nu \cap \mathbb{R}$ from (\ref{qq}) we see that $$ \pm \pmb{z}_\nu, \pm\pmb{z}_{\nu+1} \in \pi_\nu\cap \Omega, $$ {where} $$ \Omega = \{ \pmb{z}\in \mathcal{R}: \,\,|\pmb{z}|\le C_1(\Theta)|\pmb{x}_{\nu+1}|, \,\,\,{\rm dist}\, (\pmb{z}, \ell) \le C_2 (\Theta, \mathcal{R}) ||\Theta\pmb{x}_\nu|| \} $$ with some positive $ C_1(\Theta), C_2 (\Theta, \mathcal{R})$ and so \begin{equation}\label{co} \Delta_\nu \le \,\text{area of}\, \,\, \pi_\nu\cap \Omega \le C_3 (\Theta, \mathcal{R})\, |\pmb{x}_{\nu+1}|\cdot ||\Theta\pmb{x}_\nu||, \,\,\, C_3 (\Theta, \mathcal{R}). \end{equation} From (\ref{qqqq}) and (\ref{co}) we get (\ref{pp}).$\Box$. \section{ Proof of the statement (4b).} We consider vector $\Theta^{*} = (\theta^*_1,...,\theta_n^*)^\top \in \mathbb{R}^n$ which consist of algebraically independent components $\theta^*_j$. Let $$ \psi_{\Theta^*} (t) = \min_{x\in \mathbb{Z}_+:\, x\le t} ||\Theta^* x|| = \min_{x\in \mathbb{Z}_+:\, x\le t} \max_{1\le j \le n} ||\theta_j^* x|| $$ be the irrationality measure function for simultaneous approximation with vector $\Theta^1$ and $( x_\nu,y_{1,\nu},...,y_{n,\nu}) \in \mathbb{Z}^{n+1}, \nu =1,2,3,...$ be the sequence of all best approximations to $\Theta^*$. It is clear that $ R(\Theta^*) = n+1$. Denote $$ \xi_\nu = \psi_{\Theta^*} (x_\nu) = ||\Theta^*x_\nu||. $$ We consider $m\times n$ matrices $\Theta$ for the form \begin{equation}\label{mma} \Theta =(\Theta^*|\Theta^2),\, \text{where}\, \Theta^2 =\left( \begin{array}{ccc} \theta_{1,2}&\cdots&\theta_{1,m}\cr \theta_{2,2}&\cdots&\theta_{2,m}\cr \cdots &\cdots &\cdots \cr \theta_{n,2}&\cdots&\theta_{n,m} \end{array} \right)\, \text{is a }\, (m-1)\times n\, \text{matrix}. \end{equation} We identify the set of all matrices $\Theta^2$ with $\mathbb{R}^N, N = (m-1)n$ and consider Lebesgue measure on $\mathbb{R}^N$. We consider the sequence of integer vectors \begin{equation}\label{ss} (x_\nu,\underbrace{0,...,0}_{(m-1)\,\,\text{times}}; y_{1,\nu},...,y_{n,\nu}), \,\,\, \nu =1,2,3,... \, . 
\end{equation} \vskip+0.3cm {\bf Lemma 3.}\,{\it Assume that the series \begin{equation}\label{ser} \sum_{\nu=1}^\infty x_{\nu+1}^{m} \xi_\nu^n \end{equation} converges. Then for almost all matrices $\Theta^2$ the sequence of the extended best approximation vectors for matrix $\Theta$ defined in (\ref{mma}) differs from the sequence (\ref{ss}) by at most finite number of elements. In particular, for almost all $\Theta^2$ there exists $t_0$ such that \begin{equation}\label{eeee} \psi_\Theta (t) = \psi_{\Theta^*} (t),\,\,\,\,\,\, \forall\, t \ge t_0. \end{equation} } \vskip+0.3cm Statement (4b) of Theorem 3 follows immediately from Lemma 5. Indeed, we see that $R(\Theta) = R(\Theta^*) = n+1$. Assuming $ n>m$, take $ \gamma \in \left(\frac{m}{n},1\right)$. Then by Jarn\'{\i}k's result cited in Section 2, part (b) applied with $ \psi(t) = t^{-\gamma}$, we get $\Theta^*$ with algebraically independent elements and $ \psi_{\Theta^*} (t) \le t^{-\gamma}$. This means that for $\nu$ large enough one has \begin{equation}\label{ppr} x_{\nu+1}^m\xi_{\nu}^{n} \le x_{\nu+1}^{\gamma_1},\,\,\,\, \gamma_1 = m-n\gamma <0, \end{equation} and the series (\ref{ser}) converges as $x_\nu$ grow exponentially (see Lemma 1 from \cite{BL}). Now Lemma 5 ensures the existence of completely irrational extended matrix $\Theta =(\Theta^*|\Theta^2)$ satisfying the desired properties. \vskip+0.3cm Proof of Lemma 3. For vector $\pmb{x} = (x_1,...,x_m)\in \mathbb{Z}^m$ we define a shortened vector $\underline{\pmb{x}}= (x_2,...,x_m)$. To show that sequence (\ref{ss}) eventually coincides with the sequence of the best approximations for matrix $\Theta$ with entries $\theta_{i,j} \in [0,1]$ it is enough to show that there exists $\nu_0$ such that for all $\nu \ge \nu_0$ and for all integer vectors \begin{equation}\label{222} (\pmb{x}, \pmb{y}) = (x_1,x_2,...,x_m; y_1,...,y_n), \in \mathbb{Z}^{d} $$$$ \text{with} \,\,\,\, 0\neq |\underline{\pmb{x}}|\le |\pmb{x}|\le x_{\nu+1} , \,\,\, \max_{1\le j\le n} |y_j - x_1\theta_j^*| \le m |\underline{\pmb{x}}| \end{equation} one has $$ |\Theta \pmb{x} - \pmb{y}| > \xi_\nu $$ (we should note that for fixed $\pmb{x}$ the number of integer vectors $\pmb{y}\in \mathbb{Z}^n$ satisfying the last inequality in (\ref{222}) is $\ll_m |\underline{\pmb{x}}|^n$). This condition may be rewritten as follows. Define the sets $$ \Omega_{j,\nu} ( \pmb{x} , y_j) = \{ ( \theta_{j,2},...,\theta_{j,m}) \in [0,1]^{m-1}:\, |\theta_{j,2}x_2+...+\theta_{j,m}x_m + (\theta_{1}^* x_1-y_j)|\le \xi_\nu\}. $$ As in our consideration $|\underline{\pmb{x}}|\neq 0$, these sets $ \Omega_{j,\nu} ( \pmb{x} , y_j) $ are intersections of strips of thickness $\frac{\xi_\nu}{|\underline{\pmb{x}}|}$ around hyperplanes $$ \{ ( w_{j,2},...,w_{j,m}) \in \mathbb{R}^{m-1}:\, w_2x_2+...+w_mx_m + (\theta_{1}^* x_1-y_j)= 0\} $$ with the unit cube $[0,1]^{m-1}$. Consider $$ \Omega_\nu( \pmb{x} , \pmb{y}) = \Omega_{1,\nu} ( \pmb{x} , y_1)\times \cdots \times \Omega_{n,\nu}( \pmb{x} , y_n) \subset [0,1]^N $$ and define $$ \frak{W}_\nu = \bigcup_{\pmb{x} } \bigcup_{\pmb{y}} \Omega_\nu( \pmb{x} , \pmb{y}) $$ where the union is taken over all vectors $\pmb{x},\pmb{y}$ satisfying (\ref{222}). Now the condition \begin{equation}\label{main} \exists\, \nu_0:\,\,\, \forall \, \nu \ge \nu_0 \,\,\,\,\, \text{one has} \,\,\,\,\, \Theta^2 \not \in \frak{W}_\nu \end{equation} ensures that the sequence (\ref{ss}) eventually coincides with the sequence of the best approximations for matrix $\Theta$. 
For the $(m-1)$-dimensional measure of $ \Omega_j ( \pmb{x} , y_j)$ we have an upper bound \begin{equation}\label{bou} \mu_{m-1} \left( \Omega_j ( \pmb{x} , y_j)\right) \ll_m \frac{\xi_\nu}{|\underline{\pmb{x}}|}, \end{equation} and so for the measure of $\frak{W}_\nu$ we get $$ \mu_N (\frak{W}_\nu) \ll_{m} \sum_{\pmb{x}} \sum_{\pmb{y}} \left(\frac{\xi_\nu}{|\underline{\pmb{x}}|}\right)^n\ll_{m,n} \xi_\nu^n \sum_{\pmb{x}} 1\ll_{m,n} \xi_\nu^n\cdot{x}_{\nu+1}^m. $$ Now everything follows from Borel-Cantelli lemma.$\Box$ \section{ Sketched proof of the statement (3b).} The proof of statement (3b) is similar to those of (4b). We give a sketch of a proof and left the details to the reader. We may assume $ m\ge 3$. One should consider matrices \begin{equation}\label{mma1} \Theta =(\Theta^*|\Theta^2),\,\,\,\, \text{where}\,\,\,\, \Theta^* =\left( \begin{array}{cc} \theta_{1,1}^*&\theta_{1,2}^*\cr \theta_{2,1}^*&\theta_{2,2}^*\cr \cdots &\cdots \cr \theta_{n,1}^*&\theta_{n,2}^* \end{array} \right), \,\,\,\, \Theta^2 =\left( \begin{array}{ccc} \theta_{1,3}&\cdots&\theta_{1,m}\cr \theta_{2,3}&\cdots&\theta_{2,m}\cr \cdots &\cdots &\cdots \cr \theta_{n,3}&\cdots&\theta_{n,m} \end{array} \right). \end{equation} Then we suppose $\Theta^*$ to be completely irrational. By Theorem 1, statement (3a) we see that $R(\Theta^*) = n+2$. Note that now $$ \psi_{\Theta^*} (t) = \min_{\pmb{x}\in \mathbb{Z}^2:0< |\pmb{x}|\le t} ||\Theta^* \pmb{x}|| = \min_{x_1,x_2\in \mathbb{Z}:\,0<\max(|x_1|,|x_2|) \le t} \,\,\, \max_{1\le j \le n}\,\, ||\theta_{j,1}^* x_1+\theta_{j,2}^*x_2||, $$ and for the best approximation vectors $ \pmb{z}_\nu = (x_{1,\nu},x_{2,\nu}; y_{1,\nu},...,y_{n,\nu})\in \mathbb{Z}^{n+2}$ for $\Theta^*$ we consider \begin{equation}\label{dde} \xi_\nu = \psi_{\Theta^*} (\pmb{x}_\nu),\,\,\, \pmb{x}_\nu = (x_{1,\nu},x_{2\nu}). \end{equation} Instead of Lemma 3 we need the following \vskip+0.3cm {\bf Lemma 4.}\,{\it Assume that the series (\ref{ser}) for $\xi_\nu$ defined in (\ref{dde}) converges. Then for almost all matrices $\Theta^2$ the sequence of the extended best approximation vectors for matrix $\Theta$ defined in (\ref{mma1}) differs from the sequence $$ (x_{1,\nu},x_{2,\nu},\underbrace{0,...,0}_{(m-2)\,\,\text{times}}; y_{1,\nu},...,y_{n,\nu}) \in \mathbb{Z}^{n+m}, \,\,\, \nu =1,2,3,... $$ associated to the sequence $\pmb{z}_\nu \in \mathbb{Z}^{n+2}$ of the best approximations for matrix $\Theta^*$ by at most finite number of elements, and in particular (\ref{eeee}) holds. } \vskip+0.3cm The proof is quite similar to the proof of Lemma 5. It also uses Borel-Cantelli argument. \vskip+0.3cm To finish the proof of statement (3b) of Theorem 1 one needs to apply Jarn\'{\i}k's result cited in Section 2, part (b) applied with $ \psi(t) = t^{-\gamma}$, $\frac{m}{n} < \gamma < +\infty$. So one gets $\Theta^*$ with algebraically independent elements and $ \psi_{\Theta^*} (t) \le t^{-\gamma}$. Now again for $\nu$ large enough one has (\ref{ppr}) the convergence of the series (\ref{ser}) follows from the exponential growth o $|\pmb{x}_\nu|$. \vskip+0.3cm {\bf Acknowledgements}. This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Program, Grant agreement no. 754475 \vskip+1cm \end{document}
\begin{document} \title[No Measure of Maximal Entropy]{A continuous, piecewise affine surface map with\\ no measure of maximal entropy} \begin{abstract} It is known that piecewise affine surface homeomorphisms always have measures of maximal entropy. This is easily seen to fail in the discontinuous case. Here we describe a piecewise affine, globally continuous surface map with no measure of maximal entropy. \end{abstract} \author{J\'er\^ome Buzzi} \address{Jerome Buzzi\\C.N.R.S. \& D\'epartement de Math\'ematique\\ Universit\'e Paris-Sud\\91405 Orsay Cedex\\France} \email{[email protected]} \urladdr{www.jeromebuzzi.com} \keywords{Dynamical systems; ergodic theory; entropy; maximal entropy; piecewise affine maps.} \maketitle \section{Introduction} \subsection{Measures of Maximal Entropy} The complexity of the orbit structure of a dynamical system is reflected by its \emph{topological entropy}. Entropy can also be defined at the level of invariant probability measures. These two levels are related by the variational principle, i.e., for any continuous self-map of a compact metric space: $$ h_\top(f) = \sup_\mu h(f,\mu). $$ This brings to the fore \emph{maximal entropy measures}, i.e., those having ``full complexity'' in the following sense: $$ h(f,\mu)=h_\top(f). $$ Such measures may fail to exist, e.g., for any $r<\infty$, there are $C^r$ smooth interval maps with non-zero topological entropy and no maximal entropy measures \cite{BuzziSIM,RuetteEx}. However, building on Yomdin's theory \cite{Yomdin} of smooth mappings, Newhouse has shown the following: \begin{theorem}[Newhouse \cite{Newhouse}] If $f$ is a $C^\infty$ self-map of a compact manifold, then $\mu\mapsto h(f,\mu)$ is upper semicontinuous as a function on the compact set of invariant probability measures endowed with the weak star topology. In particular, there exists a maximal entropy measure. \end{theorem} \subsection{Piecewise Affine Transformations} This does not apply to the following simple class of transformations, even under the assumption of global continuity: \begin{definition} A map $T:M\to M$ is said to be {\bf piecewise affine} if \begin{enumerate} \item $M$ admits an affine atlas (i.e., a set of charts whose changes of coordinates are affine diffeomorphisms); \item there exists a finite partition of $M$ whose elements $A$ satisfy: (i) $A$, resp. $TA$, is contained in the domain of a chart $\chi$, resp. $\chi'$, of the affine atlas; (ii) $\chi'\circ T\circ\chi^{-1}|A$ is the restriction of an affine map to some open subset of some $\mathbb R^d$. \end{enumerate} \end{definition} However, Newhouse observed \cite{NewhousePerso,BuzziPWAH} that the above property nevertheless holds for \emph{piecewise affine surface homeomorphisms}. This follows from the sub-exponential rate at which discontinuities can accumulate when one iterates the map: the \emph{multiplicity entropy} introduced in \cite{BuzziAffine} is zero for such maps. Additionally, the set of maximal entropy measures of such transformations has been shown \cite{BuzziPWAH} to be a finite-dimensional simplex whenever $h_\top(f)\ne0$. It is easy to see that the {\bf finiteness property fails for piecewise affine continuous maps}. Indeed, it is enough to consider the direct product of the identity on some interval with a piecewise affine, globally continuous interval map with nonzero entropy.
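For concreteness, here is one such product (the explicit choice of the tent map below is ours and serves only as an illustration): take $$ F(x,y):=\bigl(x,\,g(y)\bigr) \quad\text{on } [0,1]^2, \qquad g(y):=1-|2y-1|, $$ which is globally continuous and affine on the two pieces $\{y\leq 1/2\}$ and $\{y\geq 1/2\}$. Then $h_\top(F)=h_\top(g)=\log 2$, and for every $x_0\in[0,1]$ the product $\delta_{x_0}\otimes\mu_g$ of the Dirac mass at $x_0$ with the measure of maximal entropy $\mu_g$ of $g$ is an ergodic measure of maximal entropy for $F$; hence the set of maximal entropy measures of $F$ is not a finite-dimensional simplex.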
\subsection{Main Result} In this note we show that {\bf existence also fails} to hold generally for such maps: \begin{theorem} There exists a piecewise affine, globally continuous map $T$ of $[0,1]^2$ satisfying $ h_\top(T)=\sup_\mu h(T,\mu)=\log2 $ with no maximal entropy measure, i.e., no measure achieving this supremum. \end{theorem} This map will be explicitly described. Taking the direct product of the above map with the identity of a cube of the proper dimension, one immediately obtains the following: \begin{corollary} For any integer $d\geq2$, there exists a piecewise affine, globally continuous map $T$ of $[0,1]^d$ satisfying $ h_\top(T)=\sup_\mu h(T,\mu)=\log2 $ with no maximal entropy measure, i.e., no measure achieving this supremum. \end{corollary} \subsection{Comments} We can compare globally continuous, piecewise affine maps to related classes for which this problem has been studied. On the one hand, existence is known to fail if one removes: \begin{itemize} \item either the continuity assumption. We refer to \cite{BuzziAffine,KruglikovRypdal1,KruglikovRypdal2} for examples and more discussion, including failure of the variational principle. \item or the affine assumption. It is easy to construct a globally continuous, piecewise quadratic map without a measure of maximal entropy \cite{BuzziPWAH}. This construction is derived from examples of $C^\infty$ maps at which the topological entropy fails to be lower semi-continuous. \end{itemize} On the other hand, a classical theorem of Hofbauer \cite{Hofbauer} asserts that existence and finiteness hold for piecewise affine interval maps (in fact, piecewise monotone maps) with non zero entropy. \section{General Description of the Map}\label{sec:map-desc} \subsection{Key Properties} The map $T$ is defined on a parallelogram $Q:=ENWS$ (we write $X_1\dots X_k$ for the compact polygon with sides $[X_1X_2],\dots,[X_kX_{1}]$). Define the vertical cone $$ \mathcal C^s:=\left\{\left(\begin{matrix}x\\y\end{matrix}\right):|x|\leq 2|y|\right\}. $$ We say that $\mathcal C^s$ is \emph{stable} for some map, if the differential at any point of the inverse of that map (where this differential exists) sends $\mathcal C^s$ into itself. Let us describe the key properties of the map $T$ (see Fig. \ref{fig:scheme}): \begin{enumerate} \item\label{key-NS} the poles $N$ and $S$ are fixed points; \item\label{key-half} all points in the top half $NWE$ (the grey triangle in Fig. \ref{fig:scheme}) are attracted to $N$; \item\label{key-markov} each one of the red triangles $ABS$ and $CDS$ is mapped to the large triangle $ADS$; \item\label{key-ABS} $T|ABS$ divides the $y$-coordinate by a factor $\leq 2$; \item\label{key-CDS} $T|CDS$ multiplies the $y$-coordinate by a factor of $2$; \item\label{key-cone} $\mathcal C^s$ is stable for the map on $ABS\cup CDS$; \item\label{key-transverse} on $ABS\cup CDS$, the map preserves the horizontal and expands horizontal vectors by a factor $\geq 4$; \item\label{key-folding} the middle, green-blue part $BCS$ is mapped into the blue, right side $DES$, except for a part which is mapped into $NEW$ after one or two iterations of $T$; \item\label{key-right} $DES$ is mapped into the purple left side $WAS$ and $NWE$; \item\label{key-left} all orbits starting in $WAS$ eventually converge to fixed points. 
\end{enumerate} To be precise, we must state the following corrections, involving the exact partition defined below: Properties (\ref{key-markov})-(\ref{key-transverse}) do not hold on the top parts $ABA^tB^t$ and $CDC^cD^c$, which are mapped into $NEW$. The contraction is exactly by a factor of $2$ on the lower part, $A^cB^cS$, of $ABS$. \begin{figure} \caption{Main zones for the dynamics of the example $T$.} \label{fig:scheme} \end{figure} \subsection{Precise definition of the map} The above is realized as a piecewise affine, globally continuous map $T:Q\to Q$. The parallelogram $Q=ENWS$ is partitioned into 26 triangles, on each of which the map is affine. The $x$-coordinates of the vertices of this partition which lie on the diagonal $y=y^1$ are given in Table \ref{tab:key-vert}. The top and bottom points of $Q$ are $N(0,2)$ and $S(0,0)$. The other vertices of the partition are obtained from those on $y=y^1$ by homotheties centered at $S$ or $N$ yielding points on $y=y^u,y^t,y^c,y^b$ with values in Table \ref{tab:key-lines}. We denote by $A^t$ the vertex obtained from $A$ on the line $y=y^t$, etc. The resulting partition and vertices are depicted in Fig. \ref{fig:partition}. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Name & W & A & B & O & C & D & E\\ \hline $x$ & -1.5 & -1 & -0.9 & 0 & 0.9 & 1 & 1.5\\ \hline \end{tabular} \medbreak \caption{$x$-coordinates of the vertices on the line $y=y^1$.}\label{tab:key-vert} \end{table} \begin{table} \begin{tabular}{|*{6}{c|}} \hline Name & $y^1$ & $y^t$ & $y^c$ & $y^b$ & $y^u$ \\ \hline $y$ & 1 & 0.8 & 0.5 & 0.25 & 1.5\\ \hline \end{tabular} \medbreak \caption{$y$-coordinates of the horizontal lines used to define $T$.}\label{tab:key-lines} \end{table} \begin{figure} \caption{Partition and special points for the map $T$.} \label{fig:partition} \end{figure} The piecewise affine map is finally defined by the images of the vertices of the partition, as given in Table \ref{tab:map-points}. \begin{table} \begin{tabular}{|*{8}{c|}} \hline On $y=y^1$ & $W$ & $A$ & $B$ & $O$ & $C$ & $D$ & $E$ \\ \hline Image & $W^u$ & $A^u$ & $D^u$ & $E^u$ & $D^u$ & $A^u$ & $W^u$\\ \hline \hline On $y=y^t$ & $W^t$ & $A^t$ & $B^t$ & $O^t$ & --- & --- & --- \\ \hline Image & $W$ & $A$ & $D$ & $E$ & ---& --- & --- \\ \hline \hline On $y=y^c$ & $W^c$ & $A^c$ & $B^c$ & $O^c$ & $C^c$ & $D^c$ & $E^c$ \\ \hline Image & $W^c$ & $A^b$ & $D^b$ & $E^c$ & $D$ & $A$ & $W$\\ \hline \end{tabular} \medbreak \caption{Mapping of the points. Note: the remaining vertices, $S$ and $N$, are fixed points.}\label{tab:map-points} \end{table} \subsection{Proof of the Key Properties}\label{sec:map-check} Key properties (\ref{key-NS}) and (\ref{key-half}) are immediate from $T(S)=S$, $T(N)=N$ and the fact that $T|NWO$ and $T|NEO$ are affine maps with $W,O,E$ mapped strictly above the line $(WE)$. A direct computation shows that the preimage of $NWE$ is the union of $NWE$ with $WW^tOO^t$, $C^cE^cCE$ and some upper part of $OO^cCC^c$ (see Fig. \ref{fig:preimage-top}). Using this, we now verify key properties (\ref{key-markov})-(\ref{key-transverse}). \begin{figure} \caption{$T^{-1}(NEW)$.} \label{fig:preimage-top} \end{figure} Key property (\ref{key-markov}) follows by inspection, see Fig. \ref{fig:strips}. Looking at Table \ref{tab:mat} we see that $T'|A^tB^tS$ multiplies the $y$-coordinate by $0.5$ or $2.5$ (exactly by $0.5$ on $A^cB^cS$), proving (\ref{key-ABS}). (\ref{key-CDS}) is checked in the same way. \begin{figure} \caption{On the left: the triangles $A^tB^tS$ and $C^cD^cS$.
On the right: their (identical) image. The bold lines on the right are the images of those on the left.} \label{fig:strips} \end{figure} We are going to prove that $\mathcal C^s$ is stable for $T|A^tB^tS$ and $T|C^cD^cS$. We have to check this property for the $4$ affine maps involved, using the following: \begin{lemma}\label{lem:cone-linear} Let $A=\left(\begin{matrix} a & b\\ c& d\end{matrix}\right)$. The following is a sufficient condition for the invariance $A^{-1}(K_C)\subset K_C$ where $K_C:=\{\left(\begin{matrix} x\\ y\end{matrix}\right):|x|\leq C|y|\}$: $$ \gamma_1:=C|c/a|<1 \text{ and } \gamma_2:=\frac{|d|+|b|C^{-1}}{|a|-C|c|} \leq 1. $$ \end{lemma} \begin{proof} Let $\left(\begin{matrix} \xi'\\ \eta'\end{matrix}\right)=A\left(\begin{matrix} \xi\\ \eta \end{matrix}\right)$ with $|\xi'|\leq C|\eta'|$. We have $$ \xi=a^{-1}\xi'-a^{-1}b\eta \text{ and }\eta'=c\xi+d\eta. $$ Hence, abbreviating $|a|,|b|,\dots$ to $a,b,\dots$, $$ |\xi| \leq a^{-1}|\xi'|+a^{-1}b|\eta| \leq C a^{-1}(c|\xi|+d|\eta|)+a^{-1}b|\eta|. $$ Thus, $$ (1-a^{-1}cC)|\xi| \leq (a^{-1}dC+a^{-1}b) |\eta|. $$ If the first factor is positive this is equivalent to $$ |\xi|\leq \frac{a^{-1}dC+a^{-1}b}{1-a^{-1}cC}\,|\eta|. $$ The Lemma follows. \end{proof} \begin{table} \begin{tabular}{|*{5}{c|}} \hline Triangle & $A^cB^cS$ & $A^tB^tA^c$ & $A^cB^tB^c$ & $C^cD^cS$ \\ \hline Matrix & $\left(\begin{smallmatrix}10 & 9.5\\ 0 & 0.5\end{smallmatrix}\right)$ & $\left(\begin{smallmatrix}25 & 22.5\\ 0 & 2.5\end{smallmatrix}\right)$ & $\left(\begin{smallmatrix}10 & 11.5\\ 0 & 2.5\end{smallmatrix}\right)$ & $\left(\begin{smallmatrix}-40 & 38\\ 0 & 2\end{smallmatrix}\right)$\\ \hline $\gamma_1$& $0$ & $0$ & $0$ & $0$ \\ \hline $\gamma_2$& $0.525$ & $0.55$ & $0.825$& $0.525$\\ \hline \end{tabular} \medbreak \caption{Stability of the cone.}\label{tab:mat} \end{table} The matrices of the linear parts of the $4$ affine maps are listed in Table \ref{tab:mat}, together with the quantities $\gamma_1,\gamma_2$ defined above for each of them. Thus (\ref{key-cone}) holds. Property (\ref{key-transverse}) also follows immediately from the matrices in Table \ref{tab:mat} (the left lower entry is zero and the left upper entry is bigger than $4$ in absolute value). To prove that the ``folding zone'' $BCS$ is mapped into $DES$ (to the right of $CDS$), except for the part that ends up in $NEW$ in one or two iterations, we decompose it into its left half $BOS$ and right half $OSC$ (see Fig. \ref{fig:middle}). The images are given in Fig. \ref{fig:middle-image}. We see that $T(BOS)\subset DES\cup NEW$ and $T(OSC)\subset DES \cup NEW\cup\Delta$ where $\Delta\subset OO^cCC^c$ is itself mapped into $NEW$ according to Fig. \ref{fig:preimage-top}. This proves property (\ref{key-folding}). \begin{figure} \caption{The two halves of the folding zone: $BOS$ on the left and $OSC$ on the right.} \label{fig:middle} \end{figure} \begin{figure} \caption{The images of the two halves of the folding zone: $T(BOS)$ on the left and $T(OSC)$ on the right. The bold lines are the images of those in Fig. \ref{fig:middle}.} \label{fig:middle-image} \end{figure} Similarly one checks that $DES$ and $WAS$ (see Fig. \ref{fig:left-right}) are both mapped to subsets of $WAS \cup NEW$ (see Fig. \ref{fig:image-left-right}). This establishes property (\ref{key-right}) and prepares the proof of (\ref{key-left}). \begin{figure} \caption{The two triangles $(WAS)$ on the left and $(DES)$ on the right.} \label{fig:left-right} \end{figure} \begin{figure} \caption{The images of $(WAS)$ on the left and $(DES)$ on the right. The bold lines are the images of those in Fig.
\ref{fig:left-right}.} \label{fig:image-left-right} \end{figure} \subsection{Orbits not contained in $A^tB^tS\cup C^cD^cS$} By the above remarks, such orbits must either eventually land in $NEW$, in which case they converge to the fixed point $N$, or enter $WAS$ and stay there forever. Let us show that an orbit which is confined to $WAS$ converges to a fixed point. $WAS$ is the union of $5$ triangles on each of which the map is affine: the triangles numbered $3,4,5,6$ and $21$ on Fig. \ref{fig:partition}. As pictured in Fig. \ref{fig:preimage-top}, the triangles $3$ and $4$ are mapped into $NEW$ so our orbit cannot enter them. As can be seen in Fig. \ref{fig:image-left-right}, the triangle $21$ (i.e., $W^cA^cS$) is mapped into itself. Moreover the segment $[W^cS]$ is made of fixed points, while the transverse direction is contracted (as is, e.g., the segment $[A^cS]$). It follows that all points in this triangle eventually converge to some fixed point in $[W^cS]$. Triangle $6$, i.e., $W^tW^cA^t$, is mapped to $WW^cA$ which is contained in $W^tW^cA^t\cup T^{-1}NEW$. But $$ T'(x,y) = \left(\begin{matrix} \frac54 & \frac58\\ 0 & \frac53\end{matrix}\right) $$ whose eigenvalues are $5/3$ and $5/4$: it is expanding. Thus all points in triangle 6, except the fixed point $W^c$, are eventually mapped into $NEW$ under positive iteration. We consider triangle $5$, i.e., $W^cA^cA^t$. Using Fig. \ref{fig:image-left-right}, observe that points that exit this triangle cannot re-enter it. Hence it is enough to analyze orbits that stay in $W^cA^cA^t$ forever. However, on that triangle, $$ T'(x,y) = \left(\begin{matrix} 2 & -\frac12\\ -1 & \frac32\end{matrix}\right) $$ with eigenvalues $2.5$ and $1$, the latter with eigenvector $\left(\begin{matrix} 1\\ 2\end{matrix}\right)$. As $T(W^c)=W^c$, this gives a segment of fixed points for the affine map and everything else is eventually mapped outside of the triangle. This completes the proof of property (\ref{key-left}) and therefore of all the key properties. \section{Proof of the Theorem} In this section, we deduce the Main Theorem from the key properties (\ref{key-NS})-(\ref{key-left}). First, let us note that the only aperiodic ergodic invariant measures are carried by the compact invariant set $$ K := \bigcap_{n\geq0} T^{-n}(ABS\cup CDS). $$ Indeed, by key properties (\ref{key-half}), (\ref{key-folding}), (\ref{key-right}) and (\ref{key-left}) and the additional remark below them, all orbits which do not stay in $ABS\cup CDS$ forever converge to fixed points. We set $K_0:=K\setminus\{S\}$ and denote the $2$-shift by $\sigma:\Sigma\to\Sigma$ with $\Sigma:=\{0,1\}^{\mathbb N}$. The key fact is: \begin{lemma} $(T,K_0)$ is an entropy-preserving extension of a subset of the $2$-shift according to $$ \gamma:x\in K_0\longmapsto (\eps_n)_{n\geq0}\in\Sigma \text{ where }\eps_n=0\iff T^n(x)\in ABS. $$ More precisely, $h(T,\mu)=h(\sigma,\gamma_*\mu)$ for any invariant probability measure $\mu$ of $(T,K_0)$. \end{lemma} \begin{proof} $\gamma$ is clearly continuous and satisfies $\gamma\circ T=\sigma\circ\gamma$ on $K_0$. We claim that $\gamma^{-1}\gamma(x)$ is a $C^1$ curve starting from $S$, containing $x$ and whose tangent is everywhere contained in $\mathcal C^s$ (we call such curves \emph{vertical}). Indeed, define, for any $n\geq0$, $C_n(x):=\{y\in Q:\forall\, 0\leq k\leq n\; T^ky\in\gamma_k(x)\}$ where $\gamma_k(x)\in\{ABS,CDS\}$ is characterized by $T^k(x)\in\gamma_k(x)$. Note that $T(ABS),T(CDS)\supset ABS\cup CDS$. Hence $T^nC_n(x)=ABS$ or $CDS$.
According to (\ref{key-cone}), $\mathcal C^s$ is stable by $T|ABS$ and $T|CDS$, and it contains the vertical boundary lines of these two triangles (their equations are $x=\kappa y$ with $|\kappa|\in\{0.9,1\}$, see Table \ref{tab:key-vert}). Thus $C_n(x)$ is bounded by two vertical curves and some graph $y=\phi(x)$. Moreover $(T^n|C_n(x))^{-1}$ is strongly contracting horizontally by (\ref{key-transverse}). Therefore $\bigcap_{n\geq0} C_n(x)$ is a vertical curve containing $x$. To conclude, observe that this vertical curve is $\gamma^{-1}\gamma(x)$ and that the restriction of $T^n$ to this set for any $n\geq0$ is a homeomorphism. Hence $h_\top(T, \gamma^{-1}\gamma(x))=0$. It follows from a result of Bowen \cite{Bowen} that $\gamma$ preserves the entropy (the requirement of compactness can be fulfilled by replacing $K_0$ with its image under $(x,y)\mapsto(x/y,y)$, compactified by the addition of $\Sigma$). \end{proof} \begin{lemma} $T$ satisfies $h_\top(T)=\log 2$. \end{lemma} \begin{proof} As a continuous map, $T$ satisfies $h_\top(T)=\sup_\mu h(T,\mu)$. The above lemma shows that one can restrict this supremum to invariant measures carried by $K_0$ and that these measures have entropy at most $\log 2$, proving $h_\top(T)\leq\log 2$. For the reverse inequality, we use that $T|A^cB^cS$ and $T|C^cD^cS$ are linear, multiplying the $y$-coordinate by $1/2$ and $2$ respectively. It follows that if $(x_k,y_k):=T^k(x_0,y_0)$ belongs to the union of these two small triangles near $S$ for $0\leq k<n$, then: $$ \log y_n = \log y_0 + \log 2 \cdot \sum_{k=0}^{n-1}\operatorname{sign}(x_k). $$ Let $M$ be a large integer. Let $$ \Sigma_M:=\biggl\{\alpha\in\Sigma:\forall p<q\; \bigl|\sum_{k=p}^q(\alpha_k-\frac12)\bigr| \leq M\biggr\}. $$ It is clearly compact and invariant, i.e., a subshift. We claim that $\lim_{M\to\infty} h_\top(\Sigma_M)=\log 2$. Indeed, for $N\geq1$ let $B(N):=\{A\in\{0,1\}^{2N}:\sum_{k=0}^{2N-1}A_k=N\}$ and $$ \Sigma'_N:=\bigcup_{k=0}^{2N-1}\sigma^k\bigl\{A^1A^2\dots:A^n\in B(N)\ \text{for all } n\geq1\bigr\}. $$ This is a subshift of $\Sigma_{2N}$ with entropy $\log\#B(N)/2N$, which converges to $\log 2$ as $N\to\infty$. The claim is proven. Let $$ X_M:=\{(\alpha,s)\in\Sigma_M\times\{-M,\dots,M\}:\forall p\in\NN\; \bigl|\frac s2+\sum_{k=0}^p(\alpha_k-1/2)\bigr|\leq \frac M2\} $$ and define $F_M:X_M\to X_M$ by $F_M(\alpha,s)=(\sigma(\alpha),s+(\alpha_0-1/2))$. It is easy to check that this is a well-defined, finite-to-one topological extension of $\Sigma_M$. Also it can be embedded into $K_0\cap\{2^{-M}y^c\leq y\leq y^c\}$ by the map $\iota_M$ defined by: $$ \iota_M(\alpha,s) = (y(s)x(\alpha),y(s)) $$ where $$ x(\alpha):=-\frac{19}{20}\sum_{n\geq0} \frac{\sigma_0\dots\sigma_n}{20^n} \text{ and }y(s):=y^c2^{s-M/2}. $$ Embedding a measure of maximal entropy of $F_M$ into $T$ through $\iota_M$ and letting $M\to\infty$ shows that $h_\top(T)=\log2$. \end{proof} \begin{proposition} $T$ has no invariant probability measure with entropy $\log 2$. \end{proposition} \begin{proof} We proceed by contradiction, assuming the existence of such a measure, say $\mu$. By the previous lemma, it is supported by $K_0$. Hence $\gamma_*\mu$ is an invariant probability measure of the full shift with entropy $\log 2$. It must be the $(\frac12,\frac12)$-Bernoulli measure; in particular, since the associated symmetric random walk is recurrent, the partial sums of $(\eps_k-\frac12)_{k\geq0}$ are $\gamma_*\mu$-almost surely unbounded above. Thus, noting as above $(x_k,y_k):=T^k(x_0,y_0)$, we have: $$ \text{for $\mu$-almost every } (x_0,y_0):\qquad \sup_{n\geq0} \sum_{k=0}^n \operatorname{sign}(x_k)=\infty.
$$ Now, $$ \log y_n = \log y_0 + \sum_{k=0}^{n-1} \log\frac{y_{k+1}}{y_k} $$ and $y_{k+1}\geq 2^{\operatorname{sign}(x_k)}y_k$ by key properties (\ref{key-ABS})-(\ref{key-CDS}). Hence, $\sup_{n\geq0} y_n=\infty$, a contradiction. \end{proof} \end{document} \section{Properties of the Map}\label{sec:map-check} \section{Semi-local dynamics} \subsection{Definition} We fix three numbers $\kappa,L,y^c$ such that \begin{gather} L>1 \label{eq:ass-L}\\ 1/L<y^c<1 \label{eq:ass-yc}\\ \kappa > (1+1/(1-y^c))L \label{eq:ass-kappa} \end{gather} We then set: $y^t=1$ and $y^b=y^c/L$. We then define the reference points: the south pole $S(0,0)$ and, on the top line: $$ A^t(-y^t,y^t),B^t((-1+2/\kappa)y^t,y^t),C^t((1-2/\kappa)y^t,y^t),D^t(y^t,y^t) $$ The central line $A,B,C,D$, resp. the bottom line $A^b,B^b,C^b,D^b$, is the image of the above by the homothety with center $S$ and ratio $y^c/y^t$, resp. $y^b/y^t$. We analyze the map in the contracting and expanding strips. The \emph{expanding strip} is the triangle $CDS$ and $$ T|CDS \text{ is affine with } T(C)=L\cdot D,\; T(D)=L\cdot A\text{ and } T(S)=S. $$ Thus, $$ T(x,y)=L\cdot(-\kappa x+(\kappa-1)y,y). $$ The \emph{contracting strip} is the triangle $A^tB^tS$. On the lower part we have simply: $$ T|ABS \text{ is affine with } T(S)=S,\; T(A)=A^b\text{ and } T(B)=D^b $$ In other words, $$ T(x,y)=L^{-1}\cdot (\kappa x+(\kappa-1)y,y) $$ On the top line $[-1,-1+2/\kappa]\times\{y^t\}$, we have $$ T(x,y^t)=(\kappa x+(\kappa-1)y^t,y^t), $$ so $T(A^t)=A^t$ and $T(B^t)=D^t$. To join these two pieces, we are going to define $T|A^tB^tBA$ by approximating the following quadratic map: $$ Q(x,y)=L_y^{-1}\cdot(\kappa x+(\kappa-1)y,y) \quad \text{where }L_y^{-1}=L^{-1}+\frac{y-y^c}{y^t-y^c}(1-L^{-1}). $$ \subsection{Vertical cone preserved by the inverse} We claim that there exists a constant $0<C<\infty$ such that the vertical cone: $$ C^s := \left\{\left(\begin{matrix} \xi\\ \eta\end{matrix}\right): |\xi|\leq C|\eta| \right\} $$ is preserved by the differential of the inverse branches $(T|A^0B^0S)^{-1}$ and $(T|CDS)^{-1}$. \subsubsection{Preservation under the expanding strip map}\label{sec:cone-exp-strip} For $(x,y)\in CDS$, $$ T(x,y)=L\cdot (-\kappa x+(\kappa-1)y,y) $$ Lemma \ref{lem:cone-linear} show that the invariance holds as soon as $C\geq 1$. \subsubsection{Preservation under the quadratic map} We have: $$ Q'(x,y) = L_y^{-1}\cdot\left(\begin{matrix} \kappa & \kappa-1 \\ 0 & 1 \end{matrix}\right) +\frac{1-L^{-1}}{y^t-y^c}\cdot\left(\begin{matrix} 0 & \kappa x+(\kappa-1)y \\ 0 & y \end{matrix}\right) $$ Hence if $\left(\begin{matrix} \xi'\\ \eta'\end{matrix}\right)$ and $\left(\begin{matrix} \xi\\ \eta\end{matrix}\right)$ are as above, $$ \eta'=A(y)\eta \text{ and } \xi'=L_y^{-1}\kappa\xi+B(x,y)\eta $$ where $A(y)=L_y^{-1}+(1-L^{-1})y/(y^t-y^c)$ and $$ B(x,y)=L_y^{-1}(\kappa-1)+(1-L^{-1})(\kappa x+(\kappa-1)y)/(y^t-y^c)). $$ Thus $$ \eta=A(y)^{-1}\eta' \text{ and }\xi=L_y\kappa^{-1}\xi'-L_y\kappa^{-1}B(x,y)\eta $$ Hence, using the assumption $|\xi'|\leq C|\eta'|$, $$ |\xi| \leq L_y\kappa^{-1}(C|\eta'|+B(x,y)|\eta|) = L_y\kappa^{-1}(A(y)C+B(x,y))|\eta| $$ and this is less than $C|\eta|$ if and only if $$ C\geq \frac{L_y\kappa^{-1}}{1-A(y)L_y\kappa^{-1}} B(x,y) $$ Indeed, $L_y\leq L_{y^c}=L$ and $A(y)\leq A(y^t)=1+y^t/(y^t-y^c)$, so: $$ A(y)L_y\kappa^{-1}\leq A(y^t)L_{y^c}\kappa^{-1}=(1+y^t/(y^t-y^c))L\kappa^{-1}<1, $$ using eq. (\ref{eq:ass-kappa}). Moreover, $$ B(x,y)\leq \kappa-1+\frac{2\kappa-1}{y^t-y^c}. 
$$ Finally $|\xi'|\leq C|\eta'|\implies|\xi|\leq C|\eta|$ will follow if \begin{equation}\label{eq:cone} C \geq \frac{1}{L^{-1}\kappa-(1+y^t/(y^t-y^c))} (\kappa-1+(2\kappa-1)/(y^t-y^c)) \end{equation} \subsection{Piecewise Affine Approximation} To get our piecewise affine example, we replace the quadratic piece $Q|A^tB^tBA$ by a family of piecewise affine pieces $\{T|\Delta^\pm_{ij}\}_{\pm\in\{+,-\},1\leq i\leq M,1\leq j\leq N}$ ($N,M\geq1$ are integer parameters $N,M\geq1$). Let us define these pieces. First, we define gridpoints. For $0\leq i\leq M$, $0\leq j\leq N$: $$ p_{ij}=(1-\frac{j}{N})A+\frac{j}{N}B +\frac{i}{M}(1-\frac{j}{N})(A^t-A)+\frac{ij}{MN}(B^t-B) $$ See Fig. X. For $1\leq i\leq M$, $1\leq j\leq N$, we define $\Delta^+_{ij}$, resp. $\Delta^-_{ij}$ as the convex closure of $p_{(i-1)(j-1)},p_{i(j-1)},p_{ij}$, resp. $p_{(i-1)(j-1)},p_{i(j-1)},p_{ij}$ (see also Fig. X). We define $T|\Delta^\pm_{ij}$ to be the affine map that coincides with $Q$ on the three vertices of $\Delta^\pm_{ij}$ so that $T$ is a globally continuous, piecewise affine map satisfying the properties described in Sec. \ref{sec:map-prop}. \begin{lemma} Let $$ L=2,\; y^c=\frac34,\; \kappa=20 \text{ and } C=100 $$ and $M=N=$???. Then the vertical cone $C^s$ defined by $C$ is preserved by the inverse of the differentials on the contracting and expanding strips of the piecewise affine map $T=T_{L,y^c,\kappa}$. \end{lemma} We note that the above values satisfy the conditions (\ref{eq:ass-L}),(\ref{eq:ass-yc}), and (\ref{eq:ass-kappa}). \begin{proof} As $C\geq1$, the expanding strip has the property by Sec. \ref{sec:cone-exp-strip}. (\ref{eq:cone}) is easily checked ($C\geq 39$ is enough). However we must check this for the piecewise affine approximation. We do it by applying Lemma \ref{lem:cone-linear} for each of the $2(N+1)(M+1)$ affine maps. We compute the quantities $\gamma_1,\gamma_2$ defined in that lemma, for all the affine pieces in the contracting pieces. TABLE... Triangle 22: $T'=\left(\begin{matrix} 10 & 9.5\\ 0 & 0.5\end{matrix}\right)$. Hence $y'=y/2$ and $\gamma_1=0$, $\gamma_2=0.525$ Triangle 9: $T'=\left(\begin{matrix} 25 & 22.5\\ 0 & 2.5\end{matrix}\right)$. Hence $y'>y/2$ and $\gamma_1=0$, $\gamma_2=0.55$ Triangle 10: $T'=\left(\begin{matrix} 10 & 11.5\\ 0 & 2.5\end{matrix}\right)$. Hence $y'>y/2$ and $\gamma_1=0$, $\gamma_2=0.825$ Triangle 25: $T'=\left(\begin{matrix} -40 & 38\\ 0 & 2\end{matrix}\right)$. Hence $y'=2y$ and $\gamma_1=0$, $\gamma_2=0.525$ \end{proof} \end{document}
\begin{document} \title{Initial self-embeddings of models of set theory} \author[1]{Ali Enayat} \affil[1]{University of Gothenburg, Gothenburg, Sweden\newline \texttt{[email protected]}} \author[2]{Zachiri McKenzie} \affil[2]{Department of Philosophy, Zhejiang University, Hangzhou, P. R. China\newline \texttt{[email protected]}} \maketitle \begin{abstract} By a classical theorem of Harvey Friedman (1973), every countable nonstandard model $\mathcal{M}$ of a sufficiently strong fragment of ZF has a proper rank-initial self-embedding $j$, i.e., $j$ is a self-embedding of $\mathcal{M}$ such that $j[\mathcal{M}]\subsetneq\mathcal{M}$, and the ordinal rank of each member of $j[\mathcal{M}]$ is less than the ordinal rank of each element of $\mathcal{M}\setminus j[\mathcal{M}]$. Here we investigate the larger family of proper \textit{initial-embeddings} $j$ of models $\mathcal{M}$ of fragments of set theory, where the image of $j$ is a transitive submodel of $\mathcal{M}$. Our results include the following three theorems. In what follows, $\mathrm{ZF}^-$ is $\mathrm{ZF}$ without the power set axiom; $\mathrm{WO}$ is the axiom stating that every set can be well-ordered; $\mathrm{WF}(\mathcal{M})$ is the well-founded part of $\mathcal{M}$; and $\Pi^1_\infty\mathrm{-DC}_\alpha$ is the full scheme of dependent choice of length $\alpha$. \noindent \textbf{Theorem A.} \textit{There is an} $\omega$-\textit{standard countable nonstandard model} $\mathcal{M}$ \textit{of} $\mathrm{ZF}^-+\mathrm{WO}$ \textit{that carries no initial self-embedding} $j:\mathcal{M} \longrightarrow \mathcal{M}$ \textit{other than the identity embedding.} \noindent \textbf{Theorem B.} \textit{Every countable $\omega$-nonstandard model $\mathcal{M}$ of} $\mathrm{ZF}$ \textit{is isomorphic to a transitive submodel of the hereditarily countable sets of its own constructible universe} $L^{\mathcal{M}}$. \noindent \textbf{Theorem C.} \textit{The following three conditions are equivalent for a countable nonstandard model} $\mathcal{M}$ \textit{of} $\mathrm{ZF}^{-}+\mathrm{WO}+\forall \alpha\ \Pi^1_\infty\mathrm{-DC}_\alpha$. \begin{itemize} \item[(I)] \textit{There is a cardinal in} $\mathcal{M}$ \textit{that is a strict upper bound for the cardinality of each member of} $\mathrm{WF}(\mathcal{M})$. \item[(II)] $\mathrm{WF}(\mathcal{M})$ \textit{satisfies the powerset axiom}. \item[(III)] \textit{For all} $n \in \omega$ \textit{and for all} $b \in M$, \textit{there exists a proper initial self-embedding} $j: \mathcal{M} \longrightarrow \mathcal{M}$ \textit{such that} $b \in \mathrm{rng}(j)$ \textit{and} $j[\mathcal{M}] \prec_n \mathcal{M}$. \end{itemize} \end{abstract} \xdef\@thefnmark{}\@footnotetext{\textit{Key Words}. Self-embedding, initial embedding, nonstandard, model of set theory.} \xdef\@thefnmark{}\@footnotetext {\textit{2010 Mathematical Subject Classification}. Primary: 03F30; Secondary: 03H9.} \pagebreak \tableofcontents \section[Introduction]{Introduction}By a classical theorem of Friedman \cite{fri73}, every countable nonstandard model $\mathcal{M}$ of ZF admits a \textit{proper rank-initial self-embedding}, i.e., an embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}]\subsetneq \mathcal{M}$ and the ordinal rank of each member of $\mathcal{M\setminus }j[\mathcal{M}]$ (as computed in $\mathcal{M}$) exceeds the ordinal rank of each member of $j[\mathcal{M}]$ (some authors refer to this situation by saying that $\mathcal{M}$ is a \textit{top} extension of $j[\mathcal{M}]$). 
Friedman's work on rank-initial self-embeddings was refined by Ressayre \cite{res87}, who constructed proper rank-initial self-embeddings of models of set theory that pointwise fix any prescribed rank-initial segment of the ambient model determined by an ordinal of the model; and more recently by Gorbow \cite{gor18}, who vastly extended Ressayre's work by carrying out a systematic study of the structure of \textit{fixed point sets} of rank-initial self-embeddings of models of set theory. In another direction, Hamkins \cite{ham13} investigated the family of embeddings $j: \mathcal{M} \longrightarrow \mathcal{N}$, where $\mathcal{M}$ and $\mathcal{N}$ are models of set theory, for which $j[\mathcal{M}]$ is merely required to be a \textit{submodel} of $\mathcal{N}$. The main result of Hamkins' paper shows that, surprisingly, every countable model $\mathcal{M}$ of a sufficiently strong fragment of ZF is embeddable as a submodel of its own constructible universe $L^\mathcal{M}$. Here we investigate a family of self-embeddings that is wider than the family of rank-initial embeddings, but narrower than the family considered by Hamkins. More specifically, we study \textit{initial} self-embeddings, i.e., embeddings $j:\mathcal{M}\longrightarrow \mathcal{M}$ such that no member of $j[\mathcal{M}]$ gains a new member in the passage from $j[\mathcal{M}]$ to $\mathcal{M}$ (some authors refer to this situation by saying that $\mathcal{M}$ is an \textit{end} extension of $j[\mathcal{M}]$, or that $j[\mathcal{M}]$ is a \textit{transitive} submodel of $\mathcal{M}$). Theorems A, B, and C of the abstract represent the highlights of our results. Theorem A is presented as Theorem \ref{Th:ModelOfZFCminusWithoutSelfEmbedding}; it shows that in contrast with Friedman's aforementioned self-embedding theorem, the theory $\mathrm{ZF^-}$ has countable nonstandard models with no proper initial self-embeddings. Theorem B is presented as Theorem \ref{Th:MainSelfEmbeddingResultOmegaNonStandard2}; it demonstrates that for $\omega$-nonstandard models, Hamkins' aforementioned theorem can be refined so as to yield a proper \textit{initial} embedding. Finally, Theorem C, which is presented as Theorem \ref{Th:IsomorphicElementarySubmodelsResult}, gives necessary and sufficient conditions for the existence of proper initial self-embeddings whose images are $\Sigma_n$-elementary in the ambient model; these conditions reveal the subtle relationship between the existence of initial self-embeddings of a model $\mathcal{M}$ of set theory and the way in which the well-founded part of $\mathcal{M}$ ``sits'' in $\mathcal{M}$. \section[Preliminaries]{Preliminaries} \label{Sec:Background} Throughout this paper $\mathcal{L}$ will denote the usual language of set theory whose only nonlogical symbol is the membership relation. Structures will usually be denoted using upper-case calligraphic Roman letters ($\mathcal{M}, \mathcal{N}$, etc.) and the corresponding plain font letter ($M, N$, etc.) will be used to denote the underlying set of that structure. If $\mathcal{M}$ is an $\mathcal{L}^\prime$-structure where $\mathcal{L}^\prime \supseteq \mathcal{L}$ and $a \in M$, then we will use $a^*$ to denote the set $\{x \in M \mid \mathcal{M} \models (x \in a)\}$, where the background model, $\mathcal{M}$, used in the definition of $a^*$ will be clear from the context.
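As a quick illustration of this notation (the example is ours): if $a=\omega^{\mathcal{M}}$ is the element that $\mathcal{M}$ believes to be the set of natural numbers, then $a^*=\{x \in M \mid \mathcal{M} \models (x \in \omega)\}$; when the natural numbers of $\mathcal{M}$ are nonstandard, $a^*$ contains nonstandard elements, and so $a^*$ must be carefully distinguished from the (external) collection of standard natural numbers of $\mathcal{M}$.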
In addition to the L\'{e}vy classes of $\mathcal{L}$-formulae $\Delta_0= \Sigma_0=\Pi_0$, $\Sigma_1$, $\Pi_1$, etc., we will also have cause to consider the Takahashi classes $\Delta_0^\mathcal{P}$, $\Sigma_1^\mathcal{P}$, $\Pi_1^\mathcal{P}$, etc. $\Delta_0^\mathcal{P}$ is the smallest class of $\mathcal{L}$-formulae that contains all atomic formulae, is closed under the propositional connectives of first-order logic, and is closed under quantification of the form $\mathcal{Q} x \in y$ and $\mathcal{Q} x \subseteq y$ where $x$ and $y$ are distinct variables, and $\mathcal{Q}$ is $\exists$ or $\forall$. The classes $\Sigma_1^\mathcal{P}, \Pi_1^\mathcal{P}$, etc. are defined inductively from the class $\Delta_0^\mathcal{P}$ in the same way that the classes $\Sigma_1, \Pi_1$, etc. are defined from $\Delta_0$. If $\Gamma$ is a collection of formulae and $T$ is a theory, then we will write $\Gamma^T$ for the collection of formulae that are $T$-provably equivalent to a formula in $\Gamma$. If $T$ is an $\mathcal{L}$-theory, then $\Delta_n^T$ is the collection of all $\mathcal{L}$-formulae that are $T$-provably equivalent to both a $\Sigma_n$-formula and a $\Pi_n$-formula. Similarly, $(\Delta_n^\mathcal{P})^T$ is the collection of all $\mathcal{L}$-formulae that are $T$-provably equivalent to both a $\Sigma_n^\mathcal{P}$-formula and a $\Pi_n^\mathcal{P}$-formula. Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. We write $\mathcal{M} \equiv \mathcal{N}$ to indicate that $\mathcal{M}$ and $\mathcal{N}$ satisfy the same $\mathcal{L}$-sentences; and write $\mathcal{M} \subseteq \mathcal{N}$ to indicate that $\mathcal{M}$ is a substructure (also referred to as a submodel) of $\mathcal{N}$. If $\Gamma$ is a class of $\mathcal{L}$-formulae, then we will write $\mathcal{M} \prec_\Gamma \mathcal{N}$ if $\mathcal{M} \subseteq \mathcal{N}$ and for every finite tuple $\vec{a} \in M$, $\vec{a}$ satisfies the same $\Gamma$-formulae in both $\mathcal{M}$ and $\mathcal{N}$. In the case that $\Gamma$ is $\Pi_\infty$ (i.e., all $\mathcal{L}$-formulae) or $\Gamma$ is $\Sigma_n$, we will abbreviate this notation by writing $\mathcal{M} \prec \mathcal{N}$ and $\mathcal{M} \prec_n \mathcal{N}$ respectively. If $\mathcal{M} \subseteq \mathcal{N}$ and for all $x \in M$ and $y \in N$, $$\textrm{if } \mathcal{N} \models (y \in x) \textrm{ then } y \in M,$$ then we say that $\mathcal{N}$ is an \emph{end-extension} of $\mathcal{M}$ (equivalently: $\mathcal{M}$ is an \emph{initial submodel} of $\mathcal{N}$, or $\mathcal{M}$ is a \emph{transitive submodel} of $\mathcal{N}$) and write $\mathcal{M} \subseteq_e \mathcal{N}$. It is well-known that if $\mathcal{M} \subseteq_e \mathcal{N}$, then $\mathcal{M} \prec_0 \mathcal{N}$. The following is a slight generalisation of the notion of a powerset preserving end-extension that was first studied by Forster and Kaye in \cite{fk91}. \begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. We say that $\mathcal{N}$ is a \textbf{powerset preserving end-extension} of $\mathcal{M}$, and write $\mathcal{M} \subseteq_e^\mathcal{P} \mathcal{N}$ if \begin{itemize} \item[(i)] $\mathcal{M} \subseteq_e \mathcal{N}$, and \item[(ii)] for all $x \in N$ and for all $y \in M$, if $\mathcal{N} \models (x \subseteq y)$, then $x \in M$. \end{itemize} \end{Definitions1} Just as end-extensions preserve $\Delta_0$-properties, powerset preserving end-extensions preserve $\Delta_0^\mathcal{P}$-properties.
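A standard example may help to see what is gained here (we include it for the reader's convenience): the statement ``$y$ is the power set of $x$'' can be written as $$ (\forall z \in y)\, (z \subseteq x) \;\land\; (\forall z \subseteq x)\, (z \in y), $$ which is $\Delta_0^\mathcal{P}$ but not, in general, equivalent to a $\Delta_0$-formula. Accordingly, while an end-extension of $\mathcal{M}$ may add new subsets of an element of $M$, a powerset preserving end-extension cannot, and therefore it preserves statements such as ``$y = \mathcal{P}(x)$''.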
The following is a slight modification of a result proved in \cite{fk91}: \begin{Lemma1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures that satisfy Extensionality. If $\mathcal{M} \subseteq_e^\mathcal{P} \mathcal{N}$, then $\mathcal{M} \prec_{\Delta_0^\mathcal{P}} \mathcal{N}$. \Square \end{Lemma1} \begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. We say that $\mathcal{N}$ is a \textbf{topless powerset preserving end-extension} of $\mathcal{M}$, and write $\mathcal{M} \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{N}$ if \begin{itemize} \item[(i)] $\mathcal{M} \subsetneq_e^\mathcal{P} \mathcal{N}$, and \item[(ii)] if $c \in N$ and $c^* \subseteq M$, then $c \in M$. \end{itemize} \end{Definitions1} Let $\Gamma$ be a class of $\mathcal{L}$-formulae. The following define the restriction of some commonly encountered axiom and theorem schemes of $\mathrm{ZFC}$ to formulae in the class $\Gamma$: \begin{itemize} \item[]($\Gamma$-Separation) For all $\phi(x, \vec{z}) \in \Gamma$, $$\forall \vec{z} \forall w \exists y \forall x(x \in y \iff (x \in w) \land \phi(x, \vec{z})).$$ \item[]($\Gamma$-Collection) For all $\phi(x, y, \vec{z}) \in \Gamma$, $$\forall \vec{z} \forall w((\forall x \in w) \exists y \phi(x, y, \vec{z}) \Rightarrow \exists c (\forall x \in w)(\exists y \in c) \phi(x, y, \vec{z})).$$ \item[](Strong $\Gamma$-Collection) For all $\phi(x, y, \vec{z}) \in \Gamma$, $$\forall \vec{z} \forall w \exists C (\forall x \in w)(\exists y \phi(x, y, \vec{z}) \Rightarrow (\exists y \in C) \phi(x, y, \vec{z})).$$ \item[]($\Gamma$-Foundation) For all $\phi(x, \vec{z}) \in \Gamma$, $$\forall \vec{z}(\exists x \phi(x, \vec{z}) \Rightarrow \exists y(\phi(y, \vec{z}) \land (\forall x \in y) \neg \phi(x, \vec{z}))).$$ If $\Gamma= \{x \in z\}$ then we will refer to $\Gamma$-Foundation as \emph{Set Foundation}. \end{itemize} We will use $\bigcup x \subseteq x$ to abbreviate the $\Delta_0$-formula that says ``$x$ is transitive", i.e., $(\forall y \in x)(\forall z \in y)(z \in x)$. We will also make reference to the axiom of transitive containment ($\mathrm{TCo}$), Zermelo's well-ordering principle ($\mathrm{WO}$), Axiom H and for all $n \in \omega$, the axiom scheme of $\Delta_n$-Separation: \begin{itemize} \item[]($\mathrm{TCo}$) $$\forall x \exists y \left(\bigcup y \subseteq y \land x \subseteq y \right).$$ \item[]($\mathrm{WO}$) $$\forall x \exists r (r \textrm{ is a well-ordering of } x).$$ \item[](Axiom H) $$\forall u \exists t \left(\bigcup t \subseteq t \land \forall z(\bigcup z \subseteq z \land |z| \leq |u| \Rightarrow z \subseteq t) \right).$$ \item[]($\Delta_n$-separation) For all $\Sigma_n$-formulae $\phi(x, \vec{z})$ and $\psi(x, \vec{z})$, $$\forall \vec{z} \ \forall w(\forall x(\phi(x, \vec{z}) \iff \neg \psi(x, \vec{z})) \Rightarrow \exists y \forall x(x \in y \iff (x \in w) \land \phi(x, \vec{z}))).$$ \end{itemize} For $\alpha$ an ordinal, the {\it $\alpha$-dependent choice scheme} ($\Pi^1_\infty\mathrm{-DC}_\alpha$) is the natural class version of L\'{e}vy's axiom $\mathrm{DC}_\alpha$ \cite{lev64} that generalises Tarski's Dependent Choice Principle by facilitating $\alpha$-sequences of dependent choices. 
\begin{itemize} \item[] ($\Pi^1_\infty\mathrm{-DC}_\alpha$) For all $\mathcal{L}$-formulae $\phi(x, y, \vec{z})$, $$\forall \vec{z} \left(\begin{array}{c} \forall g(\forall \gamma \in \alpha)((g \textrm{ is a function})\land (\mathrm{dom}(g)= \gamma) \Rightarrow \exists y \phi(g, y, \vec{z})) \Rightarrow\\ \exists f ( (f \textrm{ is a function})\land (\mathrm{dom}(f)= \alpha) \land (\forall \beta \in \alpha)\phi(f \upharpoonright \beta, f(\beta), \vec{z})) \end{array}\right).$$ \end{itemize} We will have cause to consider the following subsystems of $\mathrm{ZFC}$: \begin{itemize} \item $\mathbf{M}^-$ is the $\mathcal{L}$-theory with axioms: Extensionality, Emptyset, Pair, Union, Infinity, $\mathrm{TCo}$, $\Delta_0$-Separation and Set Foundation. \item $\mathbf{M}$ is obtained from $\mathbf{M}^-$ by adding the powerset axiom. \item $\mathrm{Mac}$ is obtained from $\mathbf{M}$ by adding the axiom of choice. \item $\mathrm{KPI}$ is obtained from $\mathbf{M}^-$ by adding $\Delta_0$-Collection and $\Pi_1$-Foundation. \item $\mathrm{KP}$ is obtained from $\mathrm{KPI}$ by removing the axiom of infinity. \item $\mathrm{KP}^\mathcal{P}$ is obtained from $\mathbf{M}$ by adding $\Delta_0^\mathcal{P}$-Collection and $\Pi_1^\mathcal{P}$-Foundation. \item $\mathrm{MOST}$ is obtained from $\mathrm{Mac}$ by adding $\Sigma_1$-Separation and $\Delta_0$-Collection. \item $\mathrm{ZF}^-$ is obtained by adding $\Pi_\infty$-Collection to $\mathrm{KPI}$. \end{itemize} The theories $\mathbf{M}$, $\mathrm{KPI}$, $\mathrm{KP}$ and $\mathrm{KP}^\mathcal{P}$ are studied in \cite{mat01}. In contrast with the version of Kripke-Platek Set Theory studied in \cite{fri73, bar75}, which includes $\Pi_\infty$-Foundation, we follow \cite{mat01}, by only including $\Pi_1$-Foundation in the theories $\mathrm{KP}$ and $\mathrm{KPI}$, and only including $\Pi_1^\mathcal{P}$-Foundation in the theory $\mathrm{KP}^\mathcal{P}$. The theory $\mathrm{KPI}$, as defined here, plays a key role in \cite{flw16}, where it is referred to as $\mathrm{KP}^- + \mathrm{infinity} + \Pi_1$-Foundation. The results of \cite{zar96} and, more recently, \cite{ght16} highlight the importance of axiomatising $\mathrm{ZF}^-+\mathrm{WO}$ using the collection scheme ($\Pi_\infty$-Collection) instead of the replacement scheme. The strength of Zermelo's well-order principle $\mathrm{WO}$ in the $\mathrm{ZF}^-$ context is revealed in \cite{zar82}, which shows that, in the absence of the powerset axiom, the statement that {\it every set of nonempty sets has a choice function} does not imply $\mathrm{WO}$.\footnote{Zarach credits Z. Szczepaniak with first finding a model of $\mathrm{ZF}^-$ in which every set of nonempty sets has a choice function, but in which $\mathrm{WO}$ fails.} Let $\mathrm{ZF}^-+ \mathrm{GWO}(R)$ be the extension of $\mathrm{ZF}^-+\mathrm{WO}$ obtained as follows: introduce a new binary relation symbol, $R$, to the language of set theory, and then add an axiom asserting that $R$ is a bijection between the universe and the class of ordinals (a global well-order), and also extend the schemes of separation and collection so as to ensure that formulae mentioning $R$ can be used. As a consequence of a result of Flanagan \cite[Theorem 7.1]{fla75}, $\mathrm{ZF}^-+ \mathrm{GWO}(R)$ is a conservative extension of the theory $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha \ \Pi^1_\infty\mathrm{-DC}_\alpha$. Recent work of S. 
Friedman, Gitman and Kanovei \cite{fgk19} shows that $\Pi^1_\infty\mathrm{-DC}_{\omega}$ is independent of $\mathrm{ZF}^-+\mathrm{WO}$. Next we record the following useful relationships between fragments of Collection, Separation and Foundation over the base theory $\mathbf{M}^-$. \begin{Lemma1} \label{basicimplications} Let $\Gamma$ be a class of $\mathcal{L}$-formulae, and $n \in \omega$. \begin{enumerate} \item In the presence of $\mathbf{M}^-$, $\Pi_n\textrm{-Separation}$ is equivalent to $\Sigma_n\textrm{-Separation}.$ \item $\mathbf{M}^-+\Gamma\textrm{-Separation} \vdash \Gamma\textrm{-Foundation}.$ \item $\mathbf{M}^{-}+\Pi_n\textrm{-Collection} \vdash \Sigma_{n+1}\textrm{-Collection}.$ \item \cite[Lemma 4.13]{flw16} $\mathbf{M}^-+\Pi_n\textrm{-Collection} \vdash \Delta_{n+1}\textrm{-Separation}$. \item \cite[Lemma 2.5]{mck19} In the presence of $\mathbf{M}^-$, $\Pi_n\textrm{-Collection}+\Sigma_{n+1}\textrm{-Separation}$ is equivalent to $\textrm{Strong }\Pi_n\textrm{-Collection}$. \end{enumerate} \end{Lemma1} As indicated by the following well-known result, over the theory $\mathbf{M}^-$, $\Pi_n$-Collection implies that the classes $\Pi_{n+1}$ and $\Sigma_{n+1}$ are essentially closed under bounded quantification (part (3) of Lemma \ref{basicimplications} is used in the proof). \begin{Lemma1} Let $\phi(x, \vec{z})$ be a $\Sigma_{n+1}$-formula and let $\psi(x, \vec{z})$ be a $\Pi_{n+1}$-formula. The theory $\mathbf{M}^-+\Pi_n\textrm{-Collection}$ proves that $(\forall x \in y) \phi(x, \vec{z})$ is equivalent to a $\Sigma_{n+1}$-formula and $(\exists x \in y)\psi(x, \vec{z})$ is equivalent to a $\Pi_{n+1}$-formula. \end{Lemma1} \begin{Definitions1} A transitive set $M$ is said to be \textbf{admissible} if $\langle M, \in\rangle \models \mathrm{KP}$. \end{Definitions1} The theory $\mathrm{KPI}$ and its variants that include the scheme of full class foundation have been widely studied \cite{fri73, bar75, mat01, flw16}. One appealing feature of this theory is the fact that it is strong enough to carry out many of the fundamental set-theoretic constructions such as defining set-theoretic rank, proving the existence of transitive closures, defining satisfaction and constructing G\"{o}del's $L$ hierarchy. \begin{itemize} \item For all sets $X$, we use $\mathrm{TC}(X)$ to denote the $\subseteq$-least transitive set with $X$ as a subset. The theory $\mathrm{KPI}$ proves that the function $X \mapsto \mathrm{TC}(X)$ is total. Moreover, the proof of \cite[Proposition 1.29]{mat01} shows that the formulae ``$x = \mathrm{TC}(y)$" and ``$x \in \mathrm{TC}(y)$" with free variables $x$ and $y$ are $\Delta_1^\mathrm{KP}$, and ``$x = \mathrm{TC}(y)$" is also $\Delta_0^\mathcal{P}$. \item The theory $\mathrm{KP}$ is capable of defining and proving the totality of the rank function $\rho$ satisfying $$\rho(a)= \sup \left\{ \rho(b)+1: b\in a \right\}.$$ The formula ``$\rho(x)=y$" with free variables $x$ and $y$ is $\Delta_1^{\mathrm{KP}}$ \cite[Theorem 1.5]{fri73}. \item As verified in \cite[Section III.1]{bar75}, satisfaction in set structures is definable in $\mathrm{KPI}$. 
In particular, if $\mathcal{N}$ is a set structure in a model $\mathcal{M}$ of $\mathrm{KPI}$, $\mathcal{L}(\mathcal{N})$ is the language of $\mathcal{N}$, $\vec{a}$ is an $\mathcal{M}$-finite sequence of members of $\mathcal{N}$, and $\phi$ is an $\mathcal{L}(\mathcal{N})$-formula in the sense of $\mathcal{M}$ whose arity agrees with the length of $\vec{a}$, then ``$\mathcal{N} \models \phi[\vec{v}/\vec{a}]$'' is definable in $\mathcal{M}$ by a formula that is $\Delta_1^{\mathrm{KPI}}$. \item As shown in \cite[Chapter II]{bar75}, the theory $\mathrm{KPI}$ is capable of constructing the levels of G\"{o}del's $L$ hierarchy. The following operation can be defined using a formula for satisfaction for set structures in $\mathrm{KPI}$: for all sets $X$, $$\mathrm{Def}(X)= \{Y \subseteq X \mid Y \textrm{ is a definable subclass of } \langle X, \in \rangle\}.$$ The levels of the $L$ hierarchy are then recursively defined by: $$L_0= \emptyset \textrm{ and } L_\alpha= \bigcup_{\beta < \alpha} L_\beta \textrm{ if }\alpha \textrm{ is a limit ordinal},$$ $$L_{\alpha+1}= L_\alpha \cup \mathrm{Def}(L_\alpha),$$ $$L= \bigcup_{\alpha \in \mathrm{Ord}} L_\alpha.$$ The function $\alpha \mapsto L_\alpha$ is total and $\Delta_1^{\mathrm{KP}}$. As usual, we will use $V=L$ to abbreviate the axiom that says that every set is a member of some $L_\alpha$, i.e., $\forall x \exists \alpha (x \in L_\alpha)$. \end{itemize} The fact that $\mathrm{KPI}$ can express satisfaction in set structures can be used, in this theory, to express satisfaction for $\Delta_0$-formulae in the universe via the definition below. \begin{Definitions1} \label{Df:Delta0Satisfaction} The formula $\mathrm{Sat}_{\Delta_0}(q, x)$ is defined as $$\begin{array}{c} (q \in \omega) \land (q= \ulcorner \phi(v_1, \ldots, v_m) \urcorner \textrm{ where } \phi \textrm{ is } \Delta_0) \land (x= \langle x_1, \ldots, x_m \rangle) \land\\ \exists N \left( \bigcup N \subseteq N \land (x_1, \ldots, x_m \in N) \land (\langle N, \in \rangle \models \phi[x_1, \ldots, x_m]) \right) \end{array}.$$ \end{Definitions1} The absoluteness of $\Delta_0$ properties between transitive structures and the universe, and the availability of $\mathrm{TCo}$ in $\mathrm{KPI}$ imply that the formula $\mathrm{Sat}_{\Delta_0}$ is equivalent, in the theory $\mathrm{KPI}$, to the formula $$\begin{array}{c} (q \in \omega) \land (q= \ulcorner \phi(v_1, \ldots, v_m) \urcorner \textrm{ where } \phi \textrm{ is } \Delta_0) \land (x= \langle x_1, \ldots, x_m \rangle) \land\\ \forall N \left( \bigcup N \subseteq N \land (x_1, \ldots, x_m \in N) \Rightarrow (\langle N, \in \rangle \models \phi[x_1, \ldots, x_m]) \right) \end{array}.$$ Therefore, the fact that ``$\langle N, \in \rangle \models \phi[\cdots]$'' is $\Delta_1^{\mathrm{KPI}}$ implies that $\mathrm{Sat}_{\Delta_0}(q, x)$ is also $\Delta_1^{\mathrm{KPI}}$, and $\mathrm{Sat}_{\Delta_0}(q, x)$ expresses satisfaction for $\Delta_0$-formulae in the theory $\mathrm{KPI}$. We can now inductively define formulae $\mathrm{Sat}_{\Sigma_n}(q, x)$ and $\mathrm{Sat}_{\Pi_n}(q, x)$ that express satisfaction for formulae in the classes $\Sigma_n$ and $\Pi_n$. \begin{Definitions1} The formulae $\mathrm{Sat}_{\Sigma_n}(q, x)$ and $\mathrm{Sat}_{\Pi_n}(q, x)$ are defined recursively for $n>0$.
$\mathrm{Sat}_{\Sigma_{n+1}}(q, x)$ is defined as the formula $$\exists \vec{y} \exists k \exists b \left( \begin{array}{c} (q= \ulcorner\exists \vec{u} \phi(\vec{u}, v_1, \ldots, v_l)\urcorner \textrm{ where } \phi \textrm{ is } \Pi_n)\land (x= \langle x_1, \ldots, x_l \rangle)\\ \land (b= \langle \vec{y}, x_1, \ldots, x_l \rangle) \land (k= \ulcorner \phi(\vec{u}, v_1, \ldots, v_l) \urcorner) \land \mathrm{Sat}_{\Pi_n}(k, b) \end{array}\right);$$ and $\mathrm{Sat}_{\Pi_{n+1}}(q, x)$ is defined as the formula $$\forall \vec{y} \forall k \forall b \left( \begin{array}{c} (q= \ulcorner\forall \vec{u} \phi(\vec{u}, v_1, \ldots, v_l) \urcorner \textrm{ where } \phi \textrm{ is } \Sigma_n)\land (x= \langle x_1, \ldots, x_l \rangle)\\ \land ((b= \langle \vec{y}, x_1, \ldots, x_l \rangle) \land (k= \ulcorner\phi(\vec{u}, v_1, \ldots, v_l)\urcorner) \Rightarrow \mathrm{Sat}_{\Sigma_n}(k, b)) \end{array}\right).$$ \end{Definitions1} \begin{Theorems1} \label{Complexityofpartialsat} Suppose $n \in \omega$ and $m=\max \{ 1, n \}$. The formula $\mathrm{Sat}_{\Sigma_n}(q, x)$ (respectively $\mathrm{Sat}_{\Pi_n}(q, x)$) is $\Sigma_n^{\mathrm{KPI}}$ ($\Pi_n^{\mathrm{KPI}}$, respectively). Moreover, $\mathrm{Sat}_{\Sigma_n}(q, x)$ (respectively $\mathrm{Sat}_{\Pi_n}(q, x)$) expresses satisfaction for $\Sigma_n$-formulae ($\Pi_n$-formulae, respectively) in the theory $\mathrm{KPI}$, i.e., if $\mathcal{M} \models\mathrm{KPI}$, $\phi(v_1,\ldots,v_k)$ is a $\Sigma_n$-formula, and $x_1,\ldots,x_k$ are in $M$, then for $q = \ulcorner \phi( v_1, \ldots, v_k) \urcorner$, $\mathcal{M}$ satisfies the universal generalization of the following formula: $$ x= \langle x_1, \ldots,x_k \rangle \Rightarrow \left( \phi(x_1,\ldots,x_k) \leftrightarrow \mathrm{Sat}_{\Sigma_{n}}(q, x) \right).$$ \end{Theorems1} The following result appears in \cite[Theorem 3.8]{flw16}. \begin{Lemma1} \label{Th:SchroderBernsteinInKP} (Friedman, Li, Wong) The theory $\mathrm{KP}$ proves the Schr\"{o}der-Bernstein Theorem, i.e., $\mathrm{KP}$ proves that if $A$ and $B$ are sets such that $|A| \leq |B|$ and $|B| \leq |A|$, then $|A|=|B|$. \end{Lemma1} The following theorem highlights the important fact that the $\Sigma_1^\mathcal{P}$-Recursion Theorem is provable in the theory $\mathrm{KP}^\mathcal{P}$ \cite[Theorem 6.26]{mat01}. \begin{Theorems1} \label{Th:Sigma1PRecursionTheorem} ($\mathrm{KP}^\mathcal{P}$) Let $G$ be a $\Sigma_1^\mathcal{P}$-definable class. If $G$ is a total function, then there exists a $\Sigma_1^\mathcal{P}$-definable total class function $F$ such that for all $x$, $F(x)= G(F\upharpoonright x)$. \end{Theorems1} \begin{Definitions1} \label{V_alpha} We write ``$V_\alpha$ exists'' as an abbreviation for the sentence expressing that $\alpha$ is an ordinal, and there is a function $f$ whose domain is $\alpha+1$ that satisfies the following conditions (1) through (3) below. \begin{enumerate} \item $f(0)=\emptyset$. \item $\forall \beta<\alpha \left((\beta \textrm{ is a limit ordinal}) \Rightarrow f(\beta)= \bigcup_{\xi < \beta} f(\xi) \right)$. \item $(\forall \beta \in \mathrm{dom}(f))(\forall y )(y \in f(\beta+1) \iff y \subseteq f(\beta))$. \end{enumerate} \end{Definitions1} Note that under Definition \ref{V_alpha}, if $V_\alpha$ exists, then $V_\beta$ exists for all $\beta<\alpha$. The following consequence of the $\Sigma_1^\mathcal{P}$-Recursion Theorem is Proposition 6.28 of \cite{mat01}. \begin{Coroll1} \label{Th:RanksInKPP} The theory $\mathrm{KP}^\mathcal{P}$ proves that for all ordinals $\alpha$, $V_\alpha$ exists. 
Note that in particular, this theory proves that for all ordinals $\alpha$, there is a function $f$ with domain $\alpha$ such that for all $\beta \in \alpha$, $f(\beta) = V_\beta$. \Square \end{Coroll1} Section 3 of \cite{mat01} contains the verification of the following lemma. \begin{Lemma1} \label{Th:MOSTisMacPlusH} $\mathrm{MOST}$ is the theory $\mathrm{Mac}+\textrm{Axiom }\mathrm{H}$. \end{Lemma1} We also record the following consequence of $\mathrm{MOST}$ that are proved in \cite[Section 3]{mat01}: \begin{Lemma1} \label{Th:ConsequencesOfMOST} The theory $\mathrm{MOST}$ proves \begin{itemize} \item[(i)] every well-ordering is isomorphic to an ordinal, \item[(ii)] every well-founded extensional relation is isomorphic to a transitive set, \item[(iii)] for all cardinals $\kappa$, $\kappa^+$ exists, and \item[(iv)] for all cardinals $\kappa$, the set $H_\kappa= \{x \mid |\mathrm{TC}(x)| < \kappa \}$ exists. \end{itemize} \end{Lemma1} The following result is \cite[Lemma 3.3]{ekm18} combined with the refinement of a theorem due to Takahashi proved in \cite[Proposition Scheme 6.12]{mat01}: \begin{Lemma1} \label{Th:HCutsSatisfyPi1Collection} If $\mathcal{M}, \mathcal{N} \models \mathrm{MOST}$ and $\mathcal{N} \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{M}$, then $\mathcal{N} \models \Pi_1\textrm{-Collection}$. \end{Lemma1} We next recall a remarkable absoluteness phenomenon unveiled by L\'{e}vy \cite{lev64}, which shows that, provably in ZF, $H^{L}_{\aleph_1}$ (i.e., the collection of sets that are hereditarily countable, as computed in the constructible universe) is a $\Sigma_1$-elementary submodel of the universe of sets.\footnote{The proof of this result relies heavily on the venerable Shoenfield Absoluteness Theorem. The original proof by L\'{e}vy of Theorem \ref{Levy-Shoenfield} presented this result as a theorem of ZF + DC (where DC here is axiom of dependent choice of length $\omega$). As pointed out by Kunen, DC can be eliminated by a forcing-and-absoluteness stratagem (see page 55 of \cite{bar71}). Later Barwise and Fischer gave a direct forcing-free proof in ZF \cite{barwise-fischer70}.} \begin{Theorems1} \label{Levy-Shoenfield} (L\'{e}vy-Shoenfield Absoluteness) Let $\theta(x,y)$ be a $\Sigma_1^{\mathrm{ZF}}$-formula with no free variables except $x$ and $y$, then the universal generalization of the following formula is provable in $\mathrm{ZF}$ $$ \left(y \in H^{L}_{\aleph_1} \land \exists x\ \theta(x,y) \right)\Rightarrow \exists x(x\in H^{L}_{\aleph_1} \land \theta(x,y)).$$ \end{Theorems1} The L\'{e}vy-Shoenfield Absoluteness Theorem readily implies the following corollary that shows that the $\Sigma_1$-theory of every model of ZF coincides with the $\Sigma_1$-theory of $H_{\aleph_1}$ of the constructible universe of the same model. \begin{Coroll1} \label{cor. of Levy-Shoenfield} Let $\delta(\vec{x})$ be a $\Delta_0^{\mathrm{ZF}}$-formula, and $\mathcal{M}$ be a model of $\mathrm{ZF}$. Then $$\mathcal{M} \models \exists \vec{x}\ \delta(\vec{x}) \Longleftrightarrow H^{L^{\mathcal{M}}}_{\aleph_1} \models \exists \vec{x}\ \delta(\vec{x}).$$ \end{Coroll1} Any model of $\mathrm{KPI}$ comes equipped with its well-founded part that consists of all sets in this structure whose rank is a standard ordinal, as indicated by the following definition. \begin{Definitions1} \label{Df:StandardPart} Let $\mathcal{M} \models \mathrm{KP}$. 
The \textbf{well-founded part} or \textbf{standard part} of $\mathcal{M}$, denoted $\mathrm{WF}(\mathcal{M})$, is the substructure of $\mathcal{M}$ with underlying set $$\mathrm{WF}(M)= \{x \in M \mid \neg \exists f(f: \omega \longrightarrow M\land(\forall n \in \omega)(f(0)= x \land f(n+1)\in^\mathcal{M} f(n)))\}.$$ If $\mathrm{WF}(\mathcal{M})\neq \mathcal{M}$, then we say that $\mathcal{M}$ is {\bf nonstandard}. The \textbf{standard ordinals} of $\mathcal{M}$, denoted $\mathrm{o}(\mathcal{M})$, is the substructure of $\mathcal{M}$ with underlying set $\mathrm{o}(M)= \mathrm{WF}(M)\cap \mathrm{Ord}^\mathcal{M}$. If $\omega^\mathcal{M} \in \mathrm{o}(M)$, then we say that $\mathcal{M}$ is $\mathbf{\omega}$\textbf{-standard}; otherwise $\mathcal{M}$ is said to be $\mathbf{\omega}$\textbf{-nonstandard}. Mostowski's Collapsing Lemma ensures that both $\mathrm{o}(\mathcal{M})$ and $\mathrm{WF}(\mathcal{M})$ are isomorphic to transitive sets. In particular, $\mathrm{o}(\mathcal{M})$ is isomorphic to an ordinal that is called the \textbf {standard ordinal} of $\mathcal{M}$. \end{Definitions1} The following definition generalises the notion of standard system that plays an important role in the study of models of arithmetic. \begin{Definitions1} Let $\mathcal{M} \models \mathrm{KPI}$. The \textbf{standard system} of $\mathcal{M}$ is the set $$\mathrm{SSy}(\mathcal{M})= \{y^* \cap \mathrm{WF}(\mathcal{M}) \mid y \in M\}.$$ If $A \in \mathrm{SSy}(\mathcal{M})$ and $y \in M$ is such that $A= y^* \cap \mathrm{WF}(\mathcal{M})$, then we say that $y$ {\bf codes} $A$. \end{Definitions1} \begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. An \textbf{embedding} of $\mathcal{M}$ into $\mathcal{N}$ is an injection $j: M \longrightarrow N$ such that for all $x, y \in M$, $$\mathcal{M} \models x \in y \textrm{ if and only if } \mathcal{N} \models j(x) \in j(y).$$ Note that we will often write $j: \mathcal{M} \longrightarrow \mathcal{N}$ to indicate that $j$ is an embedding of $\mathcal{M}$ into $\mathcal{N}$. If $j: \mathcal{M} \longrightarrow \mathcal{N}$ is an embedding of $\mathcal{M}$ into $\mathcal{N}$, then we write $j[\mathcal{M}]$ for the substructure of $\mathcal{N}$ whose underlying set is $\mathrm{rng}(j)$. \end{Definitions1} \begin{Definitions1} Let $\mathcal{M}$ be an $\mathcal{L}$-structure and let $j: \mathcal{M} \longrightarrow \mathcal{M}$ be an embedding of $\mathcal{M}$ into $\mathcal{M}$. The \textbf{fixed point set} of $j$ is the set $\mathrm{Fix}(j)= \{ x \in M \mid j(x)=x\}$. \end{Definitions1} \begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures. Let $j: \mathcal{M} \longrightarrow \mathcal{N}$ be an embedding of the structure $\mathcal{M}$ into $\mathcal{N}$. We say that $j$ is an \textbf {initial embedding} if $j[\mathcal{M}] \subseteq_e \mathcal{N}$. We say that $j$ is a \textbf {$\mathcal{P}$-initial embedding} if $j[\mathcal{M}] \subseteq_e^\mathcal{P} \mathcal{N}$. If $j: \mathcal{M} \longrightarrow \mathcal{M}$ is a ($\mathcal{P}$-) initial embedding with $j[\mathcal{M}]\neq \mathcal{M}$, then we say that $j$ is a \textbf{proper ($\mathcal{P}$-) initial self-embedding} of $\mathcal{M}$. \end{Definitions1} Next, we take advantage of the rank function available in $\mathrm{KP}$ to define the notion of rank extension, and the notion of rank-initial embedding. 
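Before introducing these notions, it may help to record a sketch of a standard observation that is used implicitly in several arguments below: if $j: \mathcal{M} \longrightarrow \mathcal{N}$ is an initial embedding and $\delta(\vec{x})$ is a $\Delta_0$-formula, then for all $\vec{a} \in M$, $$\mathcal{M} \models \delta(\vec{a}) \textrm{ if and only if } \mathcal{N} \models \delta(j(\vec{a})),$$ since $j$ is an isomorphism between $\mathcal{M}$ and $j[\mathcal{M}]$, $j[\mathcal{M}] \subseteq_e \mathcal{N}$, and $\Delta_0$-formulae are absolute between a structure and its end-extensions. In particular, $\Sigma_1$-formulae are preserved upward by initial embeddings, and properties expressible by $\Delta_0$-formulae (such as being an ordinal) are preserved in both directions.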
\begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures with $\mathcal{M} \subseteq_e \mathcal{N}$ and $\mathcal{N} \models \mathrm{KP}$. We say that $\mathcal{N}$ is a \textbf{rank extension} of $\mathcal{M}$ if for all $\alpha \in \mathrm{Ord}^\mathcal{M}$, if $x \in N$ and $\mathcal{N} \models \rho(x)=\alpha$, then $x \in M$. \end{Definitions1} \begin{Definitions1} Let $\mathcal{M}$ and $\mathcal{N}$ be $\mathcal{L}$-structures that satisfy $\mathrm{KP}$. Let $j: \mathcal{M} \longrightarrow \mathcal{N}$ be an embedding of the structure $\mathcal{M}$ into $\mathcal{N}$. We say $j$ is a \textbf{rank-initial embedding} if $j$ is an initial embedding and $\mathcal{N}$ is a rank extension of $j[\mathcal{M}]$. If $j: \mathcal{M} \longrightarrow \mathcal{M}$ is a rank-initial embedding with $j[\mathcal{M}]\neq \mathcal{M}$, then we say that $j$ is a \textbf{proper rank-initial self-embedding} of $\mathcal{M}$. \end{Definitions1} Note that a rank-initial embedding $j: \mathcal{M} \longrightarrow \mathcal{N}$, where $\mathcal{M}, \mathcal{N} \models \mathrm{KP}$, is also $\mathcal{P}$-initial. The following result of Gorbow \cite[Corollary 4.6.12]{gor18} shows that if the source and target model of a $\mathcal{P}$-initial embedding both satisfy $\mathrm{KP}^\mathcal{P}$, then this embedding is also rank-initial. \begin{Lemma1} \label{Th:PInitialImpliesRankInitialKPP} Let $\mathcal{M}$ and $\mathcal{N}$ be models of $\mathrm{KP}^\mathcal{P}$. If $j: \mathcal{M} \longrightarrow \mathcal{N}$ is a $\mathcal{P}$-initial embedding, then $j$ is a rank-initial embedding. \end{Lemma1} Note that, in any model of $\mathrm{ZFC}$, $L$ is a powerset preserving end-extension of $H_{\aleph_\omega}^L$ and $H_{\aleph_\omega}^L$ satisfies $\mathrm{MOST}+\Pi_\infty\textrm{-Separation}$. This example shows that the assumption that $\mathcal{M}$ satisfies $\Delta_0^\mathcal{P}\textrm{-Collection}$ in Lemma \ref{Th:PInitialImpliesRankInitialKPP} cannot be relaxed to $\Delta_0\textrm{-Collection}$, even in the presence of the full scheme of separation. H. Friedman's seminal paper \cite{fri73} pioneered the study of rank-initial self-embeddings of models of $\mathrm{KP}^\mathcal{P}+\Pi_\infty\textrm{-Foundation}$. His work was refined and extended by Ressayre \cite{res87}, and more recently by Gorbow \cite{gor18}. The following theorem of Gorbow guarantees the existence of proper rank-initial self-embeddings of countable nonstandard models of an extension of $\mathrm{KP}^\mathcal{P}$. Gorbow's theorem refines \cite[Theorem 4.3]{fri73}, and is a consequence of results proved in \cite[Section 5.2]{gor18}. \begin{Theorems1} (Gorbow) \label{Th_Gorbow} Every countable nonstandard model $\mathcal{M}$ of $\mathrm{KP}^\mathcal{P}+\Sigma_1^\mathcal{P}\textrm{-Separation}$ has a proper rank-initial self-embedding. Moreover, given any $\alpha \in \mathrm{Ord}^\mathcal{M}$ there exists a proper rank-initial self-embedding $j$ of $\mathcal{M}$ that fixes every element of $(\mathrm{V}_{\alpha}^\mathcal{M})^*$. \end{Theorems1} We also note the following self-embedding theorem that is readily obtained by putting \cite[Theorem 5.6]{ekm18} together with \cite[Proposition Scheme 6.12]{mat01}. \begin{Theorems1} \label{Th:PInitialSelfEmbeddingOfRecursivelySaturatedModels} Every countable recursively saturated model of $\mathrm{MOST}+\Pi_1\textrm{-Collection}$ has a proper $\mathcal{P}$-initial self-embedding.
\end{Theorems1} \section[The well-founded part]{The well-founded part} In this section we present results about well-founded parts of models of set theory that are relevant to the proofs of the main results of this paper. H. Friedman \cite{fri73} systematically studied the structure of the well-founded part of a nonstandard model of Kripke-Platek Set Theory, and \cite[Theorem 2.1]{fri73} showed that such a well-founded part must be isomorphic to an admissible set. This result is also a consequence of \cite[Lemma 8.4]{bar75}. As we mentioned earlier, the versions of Kripke-Platek Set Theory studied in \cite{bar75} and \cite{fri73} include $\Pi_\infty$-Foundation. An examination of these proofs reveals that the well-founded part of a model of $\mathrm{KPI}+\Sigma_1\textrm{-Foundation}$ is isomorphic to an admissible set. Before proving this, we will first verify in the lemma below that any nonstandard model of $\mathrm{KPI}$ is a topless powerset preserving end-extension of its well-founded part. \begin{Lemma1} \label{Th:WellfoundedPartTopless} Let $\mathcal{M}$ be a nonstandard model of $\mathrm{KPI}$. Then $$\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{M}.$$ \end{Lemma1} \begin{proof} It follows immediately from Definition \ref{Df:StandardPart} that $\mathrm{WF}(\mathcal{M}) \subseteq_{e}^\mathcal{P} \mathcal{M}$. The fact that $\mathcal{M}$ is nonstandard immediately means that $M \neq \mathrm{WF}(M)$. Let $c \in M$ with $c^* \subseteq \mathrm{WF}(M)$. Suppose, for a contradiction, that $f: \omega \longrightarrow M$ witnesses the fact that $c \notin \mathrm{WF}(M)$. But, $f(0)= c$ and $\mathcal{M} \models (f(1) \in c)$, so $f(1) \in \mathrm{WF}(M)$. Define $g: \omega \longrightarrow M$ by: for all $n \in \omega$ with $n \geq 1$, $g(n-1)=f(n)$. Now, $g$ witnesses the fact that $f(1) \notin \mathrm{WF}(M)$, which is a contradiction. This shows that $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{M}$. \Square \end{proof} \begin{Lemma1} \label{Th:StandardPartClosedUnderRank} Let $\mathcal{M}$ be a nonstandard model of $\mathrm{KPI}$. Then $\mathrm{WF}(\mathcal{M})$ satisfies Extensionality, Emptyset, Pair, Union, $\Delta_0$-Separation and $\Pi_\infty$-Foundation. Moreover, for all $x \in M$, $$x \in \mathrm{WF}(M) \textrm{ if and only if } \rho^\mathcal{M}(x) \in \mathrm{WF}(M).$$ \end{Lemma1} \begin{proof} The fact that $\mathrm{WF}(\mathcal{M}) \subseteq_e^\mathcal{P} \mathcal{M}$ implies that $\mathrm{WF}(\mathcal{M})$ satisfies Extensionality, Emptyset, Pair, Union, and $\Delta_0$-Separation. The fact that $\mathrm{WF}(\mathcal{M})$ is well-founded ensures that $\Pi_\infty$-Foundation holds in $\mathrm{WF}(\mathcal{M})$. Now, to prove the last statement, consider $$a= \{x \in \mathrm{WF}(M) \mid \rho^\mathcal{M}(x) \notin \mathrm{WF}(M) \}.$$ Suppose that $a \neq \emptyset$ and let $z \in a$ be an $\in^\mathcal{M}$-minimal member of $a$. Working inside $\mathcal{M}$, consider $$b= \{ \rho(y)+1 \mid y \in z\}.$$ Next we observe that $b$ is a set and $b^* \subseteq \mathrm{WF}(M)$, so $b \in \mathrm{WF}(M)$. It now follows that $\rho^\mathcal{M}(z)= (\sup b)^\mathcal{M} \in \mathrm{WF}(M)$, which is a contradiction. This shows that if $x \in \mathrm{WF}(M)$, then $\rho^\mathcal{M}(x) \in \mathrm{WF}(M)$. Conversely, consider $$c= \{\rho^\mathcal{M}(x) \mid x \notin \mathrm{WF}(M)\}.$$ Suppose, for a contradiction, that $c \cap \mathrm{WF}(M)\neq \emptyset$.
Let $\alpha$ be the least element of $c \cap \mathrm{WF}(M)$ and let $z \in M$ with $z \notin \mathrm{WF}(M)$ be such that $\mathcal{M} \models \rho(z)= \alpha$. Since every element of $z^*$ has rank below $\alpha$, the minimality of $\alpha$ ensures that $z^* \subseteq \mathrm{WF}(M)$. Since $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{M}$, this implies that $z \in \mathrm{WF}(M)$, which is a contradiction. \Square \end{proof} We next verify that under the additional assumption that the nonstandard model of $\mathrm{KPI}$ satisfies $\Sigma_1$-Foundation, the well-founded part also satisfies $\Delta_0$-Collection. \begin{Theorems1} \label{Th:WellFoundedPartIsAdmissible} Let $\mathcal{M}$ be a nonstandard model of $\mathrm{KPI}+\Sigma_1\textrm{-Foundation}$. Then $\mathrm{WF}(\mathcal{M})$ is isomorphic to an admissible set. \end{Theorems1} \begin{proof} We need to show that $\mathrm{WF}(\mathcal{M})$ satisfies all of the axioms of $\mathrm{KP}$. By Lemma \ref{Th:StandardPartClosedUnderRank}, we are left to verify that $\mathrm{WF}(\mathcal{M})$ satisfies $\Delta_0$-Collection. Let $\phi(x, y, \vec{z})$ be a $\Delta_0$-formula and let $a, \vec{b} \in \mathrm{WF}(M)$ be such that $$\mathrm{WF}(\mathcal{M}) \models (\forall x \in a) \exists y \phi(x, y, \vec{b}).$$ Since $\mathrm{WF}(\mathcal{M}) \prec_{\Delta_0^\mathcal{P}} \mathcal{M}$, $$\mathcal{M} \models (\forall x \in a) \exists y \phi(x, y, \vec{b}).$$ Consider $\theta(\gamma, \vec{z}, w)$ defined by $$(\forall x \in w)(\exists \alpha \in \gamma)\exists y (\phi(x, y, \vec{z})\land \rho(y)= \alpha).$$ Recall that $\Sigma_1$-Collection in $\mathcal{M}$ implies that $\theta(\gamma, \vec{z}, w)$ is equivalent to a $\Sigma_1$-formula. Therefore, using $\Sigma_1$-Foundation, let $\delta \in M$ be the least element of $$A= \{ \beta \in \mathrm{Ord}^\mathcal{M} \mid \mathcal{M} \models \theta(\beta, \vec{b}, a)\}.$$ Now, every nonstandard $\mathcal{M}$-ordinal is an element of $A$ and so $\delta \in \mathrm{WF}(M)$. Let $\psi(x, z, \gamma, \vec{w})$ be the $\Sigma_1$-formula $$\exists y \exists \alpha(z= \langle y, \alpha\rangle \land (\alpha \in \gamma)\land \rho(y)= \alpha \land \phi(x, y, \vec{w})).$$ Therefore, $$\mathcal{M} \models (\forall x \in a) \exists z \psi(x, z, \delta, \vec{b}).$$ Work inside $\mathcal{M}$. By $\Sigma_1$-Collection, there exists $d$ such that $(\forall x \in a) (\exists z\in d) \psi(x, z, \delta, \vec{b})$. Using $\Delta_1$-Separation, we may assume that every element of $d$ is an ordered pair $\langle y, \alpha \rangle$ such that $\alpha \in \delta$ and $\rho(y)= \alpha$. Let $c= \mathrm{dom}(d)$. Note that by Lemma \ref{Th:StandardPartClosedUnderRank}, $c^* \subseteq \mathrm{WF}(M)$ and so $c \in \mathrm{WF}(M)$. Hence $$\mathrm{WF}(\mathcal{M}) \models (\forall x \in a) (\exists y \in c) \phi(x, y, \vec{b}),$$ and therefore $\Delta_0$-Collection holds in $\mathrm{WF}(\mathcal{M})$. So, the Mostowski collapse of $\mathrm{WF}(\mathcal{M})$ witnesses the fact that $\mathrm{WF}(\mathcal{M})$ is isomorphic to an admissible set. \Square \end{proof} In Definitions \ref{Df:CUnbounded} and \ref{contained-def}, we introduce two relationships between models of set theory and their well-founded parts that will be shown to be linked to the existence of proper initial self-embeddings in Sections \ref{Sec:ModelZFCMinusWithNoSelfEmbedding} and \ref{Sec:ModelsWithSelfEmbeddings}. \begin{Definitions1} \label{Df:CUnbounded} Let $\mathcal{M} \models \mathrm{KPI}$. (a) The well-founded part of $\mathcal{M}$ is \textbf{c-bounded} in $\mathcal{M}$, where ``c'' stands for ``cardinalitywise'', if there is some $x \in M$ such that for all $w \in \mathrm{WF}(M)$, $\mathcal{M} \models |x| > |w|$.
(b) The well-founded part of $\mathcal{M}$ is \textbf{c-unbounded} in $\mathcal{M}$ if the well-founded part of $\mathcal{M}$ is not c-bounded in $\mathcal{M}$, i.e., if for all $x \in M$, there exists $w\in \mathrm{WF}(M)$ such that $\mathcal{M} \models |x| \leq |w|$. \end{Definitions1} In Section \ref{Sec:ModelZFCMinusWithNoSelfEmbedding} we will see that the well-founded part being c-unbounded prevents a model of $\mathrm{KPI}$ from admitting a proper initial self-embedding. In contrast, the following condition will be used in Section \ref{Sec:ModelsWithSelfEmbeddings} to show that nonstandard models of certain extensions of $\mathrm{KPI}$ are guaranteed to admit proper initial self-embeddings. \begin{Definitions1} \label{contained-def} Let $\mathcal{M} \models \mathrm{KPI}$. We say that the \textbf{well-founded part of $\mathcal{M}$ is contained} if there exists $c \in M$ such that $\mathrm{WF}(M) \subseteq c^*$. \end{Definitions1} The next result shows that if the well-founded part of an $\omega$-standard model is contained, then Theorem \ref{Th:WellFoundedPartIsAdmissible} can be extended to show that the well-founded part satisfies all of the axioms of $\mathrm{KP}^\mathcal{P}$. \begin{Theorems1} \label{Th:ContainedStandardPartsSatisfyKPP} Let $\mathcal{M} \models \mathrm{KPI}$ be $\omega$-standard. If the well-founded part of $\mathcal{M}$ is contained, then $\mathrm{WF}(\mathcal{M})\models \mathrm{KP}^\mathcal{P}$. \end{Theorems1} \begin{proof} Suppose that the well-founded part of $\mathcal{M}$ is contained. Let $c \in M$ be such that $\mathrm{WF}(M) \subseteq c^*$. Since $\omega \in \mathrm{WF}(M)$, $\mathrm{WF}(\mathcal{M})$ satisfies the axiom of infinity. By Lemma \ref{Th:StandardPartClosedUnderRank}, we are left to verify that Powerset and $\Delta_0^{\mathcal{P}}$-Collection hold in $\mathrm{WF}(\mathcal{M})$. To see that Powerset holds, let $x \in \mathrm{WF}(M)$. It follows from Lemma \ref{Th:StandardPartClosedUnderRank} that if $y \in M$ with $\mathcal{M} \models y \subseteq x$, then $y \in \mathrm{WF}(M)$. Note that $\Delta_0$-Separation in $\mathcal{M}$ ensures that $$A= \{ y \in c \mid y \subseteq x\}$$ is a set. Moreover, $\mathcal{M} \models (A= \mathcal{P}(x))$. Now, $A^* \subseteq \mathrm{WF}(M)$ and so, by Lemma \ref{Th:WellfoundedPartTopless}, $A \in \mathrm{WF}(M)$. Therefore, $\mathrm{WF}(\mathcal{M})\models \textrm{Powerset}$. We are left to verify $\Delta_0^\mathcal{P}$-Collection. Let $\phi(x, y, \vec{z})$ be a $\Delta_0^\mathcal{P}$-formula. Let $a, \vec{b} \in \mathrm{WF}(M)$ be such that $$\mathrm{WF}(\mathcal{M}) \models (\forall x \in a) \exists y \phi(x, y, \vec{b}).$$ Since $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{M}$, $$\mathcal{M}\models (\forall x \in a) (\exists y \in c) \phi(x, y, \vec{b}).$$ Let $\phi^c(x, y, \vec{z})$ be the $\Delta_0$-formula obtained by restricting all of the quantifiers in $\phi(x, y, \vec{z})$ to elements of $c$. Since $\mathrm{WF}(M) \subseteq c^*$ and $\mathrm{WF}(\mathcal{M}) \subseteq_e^\mathcal{P} \mathcal{M}$, for all $x, y, \vec{z} \in \mathrm{WF}(M)$, $$\mathrm{WF}(\mathcal{M}) \models \phi(x, y, \vec{z}) \textrm{ if and only if } \mathcal{M} \models \phi^c(x, y, \vec{z}).$$ Work inside $\mathcal{M}$. Let $$B= \{y \in c \mid (\exists x \in a)(\phi^c(x, y, \vec{b}) \land (\forall w \in c)(\phi^c(x, w, \vec{b}) \Rightarrow \rho(y) \leq \rho(w)))\}.$$ Now, $\Delta_1$-Separation ensures that $B$ is a set.
Lemma \ref{Th:StandardPartClosedUnderRank} implies that $B^* \subseteq \mathrm{WF}(M)$. Therefore, since $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^\mathcal{P} \mathcal{M}$, $B \in \mathrm{WF}(M)$. This shows that $$\mathrm{WF}(\mathcal{M}) \models (\forall x \in a)(\exists y \in B) \phi(x, y, \vec{b}),$$ and $\mathrm{WF}(\mathcal{M})\models \mathrm{KP}^{\mathcal{P}}$. \Square \end{proof} Note that if $\mathcal{M}$ is a model of $\mathrm{KPI}$ that is not $\omega$-standard, then $\mathrm{WF}(\mathcal{M})$ is isomorphic to $V_\omega$ (the hereditarily finite sets of the metatheory). This observation combined with Theorem \ref{Th:ContainedStandardPartsSatisfyKPP} and Corollary \ref{Th:RanksInKPP} makes it clear that if the well-founded part of a model of $\mathrm{KPI}$ is contained, then the well-founded part of that model has access to sequences enumerating the $V_\alpha$s (as in Definition \ref{V_alpha}), thereby yielding the corollary below. \begin{Coroll1} \label{Th:StandardPartHasRanks} Let $\mathcal{M} \models \mathrm{KPI}$. If the well-founded part of $\mathcal{M}$ is contained, then the sentence expressing ``$V_{\alpha}$ exists for all ordinals $\alpha$'' holds in $\mathrm{WF}(\mathcal{M})$. \end{Coroll1} We can now see that if the well-founded part of a model of $\mathrm{KPI}$ is contained, then the well-founded part is c-bounded in that model. \begin{Theorems1} \label{Th:ContainedImpliesBoundedOverKPI} Let $\mathcal{M} \models \mathrm{KPI}$. If the well-founded part of $\mathcal{M}$ is contained, then the well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$. \end{Theorems1} \begin{proof} Assume that the well-founded part of $\mathcal{M}$ is contained. Let $C \in M$ be such that $\mathrm{WF}(M) \subseteq C^*$. Suppose, for a contradiction, that the well-founded part of $\mathcal{M}$ is c-unbounded in $\mathcal{M}$. Let $X \in \mathrm{WF}(M)$ be such that $\mathcal{M} \models |C| \leq |X|$. It follows from Theorem \ref{Th:ContainedStandardPartsSatisfyKPP} (or, when $\mathcal{M}$ is $\omega$-nonstandard, from the fact that $\mathrm{WF}(\mathcal{M})$ is isomorphic to $V_\omega$) that there exists $Y \in \mathrm{WF}(M)$ such that $\mathcal{M} \models Y= \mathcal{P}(X)$. Work inside $\mathcal{M}$. By Cantor's Theorem, $|X| < |Y|$. Note that the usual proof of Cantor's theorem can be carried out in $\mathrm{KP}$ since it uses $\Delta_1$-Separation, which is a theorem of $\mathrm{KP}$. But, $|Y| \leq |C| \leq |X|$, which is a contradiction. Therefore the well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$. \Square \end{proof} The next lemma shows that in the special case when a countable model $\mathcal{M}$ of $\mathrm{KPI}$ is $\omega$-nonstandard, the axioms of $\mathrm{KPI}$ are sufficient to ensure that the well-founded part is contained (note that the well-founded part of an $\omega$-nonstandard model is isomorphic to the hereditarily finite sets). \begin{Lemma1} \label{Th:OmegaNonstandardModelsHaveContainedStandardPart} If $\mathcal{M} \models \mathrm{KPI}$ is $\omega$-nonstandard, then the well-founded part of $\mathcal{M}$ is contained. \end{Lemma1} \begin{proof} If $\mathcal{M} \models \mathrm{KPI}$ is $\omega$-nonstandard, then $\mathrm{WF}(\mathcal{M})$ is isomorphic to $V_\omega$ and $\mathrm{WF}(M) \subseteq (L_\omega^\mathcal{M})^*$. \Square \end{proof} Similarly, if $\mathcal{M}$ is nonstandard and satisfies all of the axioms of $\mathrm{KP}^\mathcal{P}$, then the well-founded part of $\mathcal{M}$ is contained.
\begin{Lemma1} \label{Th:WellFoundedPartsOfKPPContained} If $\mathcal{M} \models \mathrm{KP}^\mathcal{P}$ is nonstandard, then the well-founded part of $\mathcal{M}$ is contained. \end{Lemma1} \begin{proof} Let $\mathcal{M}= \langle M, \in^\mathcal{M} \rangle$ be a nonstandard model of $\mathrm{KP}^\mathcal{P}$. Let $x \in M$ be such that $x \notin \mathrm{WF}(M)$. Let $\alpha= \rho^\mathcal{M}(x)$. By Lemma \ref{Th:StandardPartClosedUnderRank}, $\alpha \in M \backslash \mathrm{WF}(M)$ and $\mathrm{WF}(M) \subseteq (V_\alpha^\mathcal{M})^*$.\Square \end{proof} Over the theory $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha\ \Pi_\infty^1-\mathrm{DC}_\alpha$, we get a converse to Theorem \ref{Th:ContainedImpliesBoundedOverKPI}. \begin{Lemma1} \label{Th:CBoundedImpliesPowersetInWellFoundedPart} Let $\mathcal{M}$ be a model of $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha\ \Pi_\infty^1-\mathrm{DC}_\alpha$. If the well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$, then $$\mathrm{WF}(\mathcal{M}) \models \mathrm{Powerset}.$$ \end{Lemma1} \begin{proof} Suppose that the well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$. Using the axiom WO and transitive collapse inside $\mathcal{M}$, let $\kappa \in M$ be such that $\mathcal{M} \models (\kappa \textrm{ is a cardinal})$ and for all $X \in \mathrm{WF}(M)$, $\mathcal{M} \models |X| < \kappa$. Since $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^{\mathcal{P}} \mathcal{M}$, to see that $\mathrm{WF}(\mathcal{M}) \models \mathrm{Powerset}$ it is sufficient to show that for all $X \in \mathrm{WF}(M)$, $\mathcal{M}$ thinks that $\mathcal{P}(X)$ exists. Suppose, for a contradiction, that $Y \in \mathrm{WF}(M)$ is such that $\mathcal{M}$ believes that the powerset of $Y$ does not exist. Note that if $Z \in M$ with $\mathcal{M} \models Z \subseteq Y$, then $Z \in \mathrm{WF}(M)$. Consider the formula $\phi(f, y, Y, \kappa)$ defined by $$(\exists \alpha \in \kappa)(f \textrm{ is a function with domain } \alpha) \Rightarrow (y \subseteq Y) \land (y \notin \mathrm{rng}(f)).$$ Now, suppose that there exists $f \in M$ such that $\mathcal{M} \models \forall y \neg \phi(f, y, Y, \kappa)$. It follows that $$\mathcal{M} \models (f \textrm{ is a function}) \land (\mathrm{rng}(f)= \mathcal{P}(Y)),$$ which contradicts the fact that the powerset of $Y$ does not exist. Therefore $$\mathcal{M} \models \forall f \exists y \phi(f, y, Y, \kappa),$$ and so, by $\Pi^1_\infty\mathrm{-DC}_\kappa$, there exists $f \in M$ such that $$\mathcal{M} \models (f \textrm{ is a function}) \land (\mathrm{dom}(f)= \kappa) \land (\forall \alpha \in \kappa)\phi(f\upharpoonright \alpha, f(\alpha), Y, \kappa).$$ Work inside $\mathcal{M}$. If $\alpha \in \kappa$ is least such that $f(\alpha) \nsubseteq Y$, then $\phi(f \upharpoonright \alpha, f(\alpha), Y, \kappa)$ does not hold. Therefore for all $y \in \mathrm{rng}(f)$, $y \subseteq Y$. If $\alpha \in \kappa$ is least such that there exists $\beta \in \alpha$ with $f(\alpha) = f(\beta)$, then $\phi(f \upharpoonright \alpha, f(\alpha), Y, \kappa)$ does not hold. This shows that $f$ is injective. Therefore, we have $\mathrm{rng}(f)^* \subseteq \mathrm{WF}(M)$ and $\mathcal{M} \models \kappa \leq |\mathrm{rng}(f)|$. And, since $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^{\mathcal{P}} \mathcal{M}$, $\mathrm{rng}(f) \in \mathrm{WF}(M)$, and the fact that $\mathcal{M} \models \kappa \leq |\mathrm{rng}(f)|$ contradicts our choice of $\kappa$. This shows that $\mathrm{WF}(\mathcal{M})$ satisfies the powerset axiom. 
\end{proof} \begin{Lemma1} \label{Th:PowersetInWellFoundedPartImpliesContained} Let $\mathcal{M}$ be a model of $\mathrm{ZF}^-+\mathrm{WO}$. If $\mathcal{M}$ is nonstandard and $$\mathrm{WF}(\mathcal{M}) \models \mathrm{Powerset},$$ then the well-founded part of $\mathcal{M}$ is contained. \end{Lemma1} \begin{proof} Suppose that $\mathcal{M}$ is nonstandard and $$\mathrm{WF}(\mathcal{M}) \models \mathrm{Powerset}.$$ Consider the formula $\phi(f, \alpha)$ that expresses that $f$ is a function with domain $\alpha$ such that $f(\beta)=V_\beta$ for all $\beta<\alpha$. Suppose that the class $\{ \alpha \in \mathrm{Ord}^\mathcal{M} \mid \mathcal{M} \models \neg \exists f \phi(f, \alpha)\}$ is nonempty and, using foundation, let $\xi \in \mathrm{Ord}^\mathcal{M}$ be the least element of this class. We claim that $\xi \notin \mathrm{o}(M)$. Suppose, for a contradiction, that $\xi \in \mathrm{o}(M)$. Since $\mathrm{WF}(\mathcal{M}) \models \mathrm{Powerset}$, $\xi$ is not a successor ordinal and therefore must be a limit ordinal. Work inside $\mathcal{M}$. Using Collection and Separation in $\mathcal{M}$, the class $$A= \{ f \mid (\exists \alpha \in \xi) \phi(f, \alpha)\}$$ is a set. Now, $\bigcup A$ is a function that satisfies $\phi\left(\bigcup A, \xi\right)$, which contradicts our choice of $\xi$. This shows that if $\{ \alpha \in \mathrm{Ord}^\mathcal{M} \mid \mathcal{M} \models \neg \exists f \phi(f, \alpha)\}$ is nonempty, then its least element cannot be standard. Therefore, since $\mathcal{M}$ is nonstandard, there exists $f \in M$ and $\gamma \in \mathrm{Ord}^\mathcal{M} \backslash \mathrm{o}(M)$ such that $\mathcal{M} \models \phi(f, \gamma)$. Moreover, since $\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^{\mathcal{P}} \mathcal{M}$, there exists $\nu \in \gamma^*\backslash \mathrm{o}(M)$. A standard induction argument inside $\mathcal{M}$ shows that $$\mathcal{M} \models \forall x( x \in f(\nu) \iff \rho(x)< \nu).$$ Therefore, by Lemma \ref{Th:StandardPartClosedUnderRank} and the fact that $\nu$ is nonstandard, $\mathrm{WF}(M) \subseteq f(\nu)^*$. This shows that the well-founded part of $\mathcal{M}$ is contained. \Square \end{proof} \begin{Theorems1} \label{Th:CBoundedImpliesContainedOverDependentChoices} Let $\mathcal{M}$ be a model of $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha\ \Pi_\infty^1-\mathrm{DC}_\alpha$. If the well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$, then the well-founded part of $\mathcal{M}$ is contained. \end{Theorems1} \begin{proof} Suppose that the well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$. Therefore $\mathcal{M}$ is nonstandard and, by Lemma \ref{Th:CBoundedImpliesPowersetInWellFoundedPart}, $$\mathrm{WF}(\mathcal{M})\models \mathrm{Powerset}.$$ The fact that the well-founded part of $\mathcal{M}$ is contained now follows from Lemma \ref{Th:PowersetInWellFoundedPartImpliesContained}. \Square \end{proof} \section[Obstructing initial self-embeddings]{Obstructing initial self-embeddings} \label{Sec:ModelZFCMinusWithNoSelfEmbedding} In this section we establish the first main result of the paper (Theorems \ref{Th:ModelOfZFCminusWithoutSelfEmbedding}) on the existence of countable nonstandard models of $\mathrm{ZF}^-+\mathrm{WO}$ with no nontrivial initial self-embeddings. Furthermore, in Theorem \ref{Th:Uncountablemodelswithnoselfembeddings} we exhibit nonstandard \textit{uncountable} models of ZF with no proper initial self-embeddings. 
We begin with verifying that an initial embedding of a model of $\mathrm{KPI}$ must fix the well-founded part of this model. This result will allow us to show that models of $\mathrm{KPI}$ in which the well-founded part is c-unbounded (in the sense of Definition \ref{Df:CUnbounded}) do not admit proper initial self-embeddings. \begin{Lemma1} \label{Th:InitialSelfEmbeddingsIdentityOnStandardPart} Let $\mathcal{M} \models \mathrm{KPI}$. If $j: \mathcal{M} \longrightarrow \mathcal{M}$ is an initial self-embedding, then $j$ is the identity on $\mathrm{WF}(\mathcal{M})$. \end{Lemma1} \begin{proof} Let $j: \mathcal{M} \longrightarrow \mathcal{M}$ be an initial self-embedding. Suppose that $j$ is not the identity on $\mathrm{WF}(\mathcal{M})$ and let $x \in\mathrm{WF}(\mathcal{M})$ be $\in^\mathcal{M}$-minimal such that $j(x)\neq x$. Now, if $z \in M$ with $\mathcal{M} \models z \in x$ and $\mathcal{M} \models z \notin j(x)$, then $j(z) = z$ and $\mathcal{M} \models (z \in x) \land (j(z) \notin j(x))$, which is a contradiction. Similarly, if $z \in M$ with $\mathcal{M} \models z \notin x$ and $\mathcal{M} \models z \in j(x)$, then $j^{-1}(z)\neq z$ and $\mathcal{M} \models j^{-1}(z) \in x$, which contradicts the $\in^{\mathcal{M}}$-minimality of $x$ among the elements of $\mathrm{WF}(M)$ moved by $j$. \Square \end{proof} \begin{Coroll1} Let $\mathcal{M} \models \mathrm{KPI}$. If $j: \mathcal{M} \longrightarrow \mathcal{M}$ is a proper initial self-embedding, then $$\mathrm{WF}(\mathcal{M}) \subseteq_{\mathrm{topless}}^\mathcal{P} j[\mathcal{M}].$$ \end{Coroll1} The next lemma shows that in addition to containing the well-founded part, the fixed point set $\mathrm{Fix}(j)$ of a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ also contains all points that are $\Sigma_1$-definable in $\mathcal{M}$ from points in $\mathrm{Fix}(j)$. \begin{Lemma1} \label{Th:Sigma1DefinablePointsAreFixed} Let $\mathcal{M} \models \mathrm{KPI}$. Let $j: \mathcal{M} \longrightarrow \mathcal{M}$ be an initial self-embedding. If $x \in M$ is definable in $\mathcal{M}$ by a $\Sigma_1$-formula with parameters from $\mathrm{Fix}(j)$, then $x \in \mathrm{Fix}(j)$. \end{Lemma1} \begin{proof} Suppose that $\phi(z, \vec{y})$ is a $\Sigma_1$-formula, $\vec{a} \in \mathrm{Fix}(j)$ and $x \in M$ is the unique element of $M$ such that $$\mathcal{M} \models \phi(x, \vec{a}).$$ Therefore, since $\vec{a} \in \mathrm{Fix}(j)$, $$j[\mathcal{M}] \models \phi(j(x), \vec{a}).$$ And, since $j[\mathcal{M}] \subseteq_e \mathcal{M}$ and $\phi$ is a $\Sigma_1$-formula, $$\mathcal{M} \models \phi(j(x), \vec{a}).$$ By the uniqueness of $x$, $j(x)= x$ and $x \in \mathrm{Fix}(j)$. \Square \end{proof} In particular, if $j: \mathcal{M} \longrightarrow \mathcal{M}$ is a proper initial self-embedding and $x \in M$ is a point that is $\Sigma_1$-definable in $\mathcal{M}$ from points in the well-founded part of $\mathcal{M}$, then $x$ must be fixed by $j$. This observation allows us to show that if the well-founded part of a nonstandard model of $\mathrm{KPI}$ is c-unbounded, then that model admits no proper initial self-embedding. \begin{Theorems1} \label{Th:NoSelfEmbeddingWhenStandardPartDense} Let $\mathcal{M} \models \mathrm{KPI}$. If the well-founded part of $\mathcal{M}$ is c-unbounded in $\mathcal{M}$, and $j:\mathcal{M} \longrightarrow \mathcal{M}$ is an initial self-embedding, then $j$ is the identity embedding.
\end{Theorems1} \begin{proof} Assume that the well-founded part of $\mathcal{M}$ is c-unbounded in $\mathcal{M}$ and suppose that $j: \mathcal{M} \longrightarrow \mathcal{M}$ is an initial self-embedding. We will show that $j$ must be the identity function. Let $x \in M$. Let $y \in M$ be such that $$\mathcal{M} \models y= \mathrm{TC}^\mathcal{M}(\{x\}).$$ Let $X \in \mathrm{WF}(M)$ be such that $$\mathcal{M} \models |y| \leq |X|.$$ Work inside $\mathcal{M}$. Let $f: y \longrightarrow X$ be injective. Let $X^\prime= \mathrm{rng}(f)$. Consider $$Y= \{\langle u, v\rangle \in X \times X \mid (u, v \in \mathrm{rng}(f))\land (f^{-1}(u) \in f^{-1}(v))\}.$$ Now, $Y, X^\prime \in \mathrm{WF}(M)$. Consider $\phi(z, W, Z)$ defined by $$\exists w\exists f \left(\begin{array}{c} (f:w \longrightarrow W \textrm{ is a bijection})\land (\forall u, v \in w)(u \in v \iff \langle f(u), f(v)\rangle \in Z)\\ \land (Z\subseteq W \times W)\land \bigcup w \subseteq w \land (z \in w)\land (\forall u \in w)(z \notin u) \end{array}\right).$$ Note that $\phi(z, W, Z)$ is a $\Sigma_1$-formula and $x$ is the unique point in $M$ such that $$\mathcal{M} \models \phi(x, X^\prime, Y).$$ Therefore, by Lemmas \ref{Th:InitialSelfEmbeddingsIdentityOnStandardPart} and \ref{Th:Sigma1DefinablePointsAreFixed}, $j(x)=x$. Since $x \in M$ was arbitrary, this shows that $j$ is the identity embedding, as desired. \Square \end{proof} This allows us to show that there are nonstandard $\omega$-standard models of $\mathrm{ZF}^-+\mathrm{WO}$ that are not isomorphic to a transitive subclass of themselves. To build such a model we will employ the following consequence of \cite[Theorem 2.2]{fri73}. \begin{Lemma1} \label{Th:NonstandardModelWithSameStandardOrdinals} Let $T$ be a recursive $\mathcal{L}$-theory. If $A$ is a countable admissible set such that $\langle A, \in \rangle \models T$, then there exists a nonstandard $\mathcal{L}$-structure $\mathcal{M}$ such that $\mathcal{M} \models \mathrm{KP}+ T$ and $\mathrm{o}(M)= A\cap \mathrm{Ord}$. \end{Lemma1} \begin{Theorems1} \label{Th:ModelOfZFCminusWithoutSelfEmbedding} There exists a countable nonstandard $\omega$-standard model $\mathcal{M} \models \mathrm{ZF}^-+\mathrm{WO}$ such that there is no initial self-embedding $j:\mathcal{M} \longrightarrow \mathcal{M}$ other than the identity embedding. \end{Theorems1} \begin{proof} Note that $\langle H_{\aleph_1}, \in \rangle \models \mathrm{ZF}^-+\mathrm{WO}+\forall x (|x| \leq \aleph_0)$. Therefore, by the Downwards L\"{o}wenheim-Skolem Theorem and the Mostowski Collapsing Lemma, there exists a countable admissible set $A$ such that $\omega \in A$ and $\langle A, \in \rangle \equiv \langle H_{\aleph_1}, \in \rangle$. So, by Lemma \ref{Th:NonstandardModelWithSameStandardOrdinals} and the Downwards L\"{o}wenheim-Skolem Theorem, there exists a countable $\mathcal{L}$-structure $\mathcal{M}$ such that $\mathcal{M} \models \mathrm{ZF}^-+\mathrm{WO}+\forall x (|x| \leq \aleph_0)$, $\mathcal{M}$ is nonstandard and $\mathrm{o}(M)= A \cap \mathrm{Ord}$. Since $\omega \in A$ and $\mathcal{M} \models \forall x (|x| \leq \aleph_0)$, the well-founded part of $\mathcal{M}$ is c-unbounded in $\mathcal{M}$ and so, by Theorem \ref{Th:NoSelfEmbeddingWhenStandardPartDense}, every initial self-embedding $j:\mathcal{M} \longrightarrow \mathcal{M}$ is the identity embedding. \Square \end{proof} We conclude this section by exhibiting uncountable nonstandard models of $\mathrm{ZF}$ that carry no proper initial self-embeddings.
Before doing so, let us note that it is well-known that every consistent extension of $\mathrm{ZF}$ has a model of cardinality $\aleph_1$ that carries no proper \textit{rank-initial} self-embedding. To see this, recall that by a classical result due to Keisler and Morley (first established in \cite{Keisler-Morley}, and exposited as Theorem 2.2.18 of \cite{Chang-Keisler}) every countable model of $\mathrm{ZF}$ has a proper elementary end-extension. It is easy to see that an elementary end-extension of a model of $\mathrm{ZF}$ is a rank extension. Now if $T$ is a consistent extension of $\mathrm{ZF}$, we can readily build a \textit{countable nonstandard} model of $T$ and use the Keisler-Morley theorem $\aleph_1$-times (while taking unions at limit ordinals) to build a so-called $\aleph_1$-like model of $T$, i.e., a model $\mathcal{M}$ of power $\aleph_1$ such that $a^*$ is finite or countable for each $a \in M$. It is evident that $\mathcal{M}$ is nonstandard. Moreover, $\mathcal{M}$ carries no proper rank-initial self-embedding $j$ since any such embedding $j$ would have to have the property that $j[\mathcal{M}]$ is a submodel of some structure of the form $V^{\mathcal{M}}_{\alpha}$ for some ``ordinal'' $\alpha$ of $\mathcal{M}$, which is impossible, since $(V^{\mathcal{M}}_{\alpha})^*$ is countable thanks to the fact that $\mathcal{M}$ is $\aleph_1$-like. \begin{Theorems1} \label{Th:Uncountablemodelswithnoselfembeddings} Every consistent extension of $\mathrm{ZF}+ V = L$ has a nonstandard model of power $\aleph_1$ that carries no proper initial self-embedding. \end{Theorems1} \begin{proof} Let $T$ be a consistent extension of $\mathrm{ZF} + V = L$, and $\mathcal{M}$ be a nonstandard $\aleph_1$-like model of $T$. Recall that, provably in $\mathrm{ZF} + V = L$, there is a $\Sigma_1$-formula $\sigma(x,y)$ that describes the graph of a bijection $f$ between the class $V$ of sets and the class $\mathrm{Ord}$ of ordinals; see, e.g., the proof of Lemma 13.19 of \cite{Jechbook}. Suppose $j$ is an initial embedding of $\mathcal{M}$. We will show that $j$ is not a proper embedding by verifying that every element $m$ of $\mathcal{M}$ is in the image of $j$. Being an ordinal is a $\Delta_0$-property and thus preserved by $j$, so $j[\mathrm{Ord}^\mathcal{M}] \subseteq \mathrm{Ord}^\mathcal{M}$. Since $\mathcal{M}$ is $\aleph_1$-like, $j[\mathrm{Ord}^\mathcal{M}]$ must be cofinal in $\mathrm{Ord}^\mathcal{M}$ (otherwise $j$ would inject the uncountable set $\mathrm{Ord}^\mathcal{M}$ into $\alpha^*$ for some $\alpha \in \mathrm{Ord}^\mathcal{M}$, which is impossible since $\alpha^*$ is countable). But, $j$ is initial, so $j[\mathrm{Ord}^\mathcal{M}]= \mathrm{Ord}^\mathcal{M}$. To show that $j[\mathcal{M}] = \mathcal{M}$, suppose $m \in M$. Then there is a unique $\alpha \in \mathrm{Ord}^{\mathcal{M}}$ such that $\mathcal{M} \models \sigma(m, \alpha)$. Since every ordinal of $\mathcal{M}$ is in $j[\mathrm{Ord}^{\mathcal{M}}]$, there is some $\beta \in \mathrm{Ord}^{\mathcal{M}}$ such that $j(\beta) = \alpha$. Let $m_0$ be the unique element of $\mathcal{M}$ such that $\mathcal{M} \models \sigma(m_0, \beta)$. Then since $j$ is an embedding, $j[\mathcal{M}] \models \sigma(j(m_0), j(\beta))$, and by the choice of $\beta$, $j[\mathcal{M}] \models \sigma(j(m_0), \alpha)$, which coupled with the fact that $\sigma$ is a $\Sigma_1$-formula, yields $\mathcal{M} \models \sigma(j(m_0), \alpha)$. So in light of the fact that $\sigma$ within $\mathcal{M}$ defines the graph of a bijection $f$ between $V$ and $\mathrm{Ord}$, and $f(m)=\alpha$ (by the choice of $\alpha$), we can conclude that $j(m_0)=m$, thereby showing that $j[\mathcal{M}] = \mathcal{M}$.
\Square \end{proof} \section[Constructing initial self-embeddings]{Constructing initial self-embeddings} \label{Sec:ModelsWithSelfEmbeddings} In the previous section we saw that if the well-founded part of a model $\mathcal{M}$ of $\mathrm{KPI}$ is c-unbounded (in the sense of Definition \ref{Df:CUnbounded}) in $\mathcal{M}$, then there is no proper initial self-embedding of $\mathcal{M}$. In this section we prove an adaption of H. Friedman's Self-embedding Theorem \cite[Theorem 4.1]{fri73} that ensures the existence of proper initial self-embeddings of models of extensions of $\mathrm{KPI}$ with contained well-founded parts.\footnote{It is known that $\mathrm{KP+\lnot I}$ + $\Sigma_1$-separation is bi-interpretable with the fragment $\mathrm{I}\Sigma_1$ of PA (Peano arithmetic), where $\mathrm{KP+\lnot I}$ is KP plus the negation the axiom of infinity; the proof is implicit in the proof of the main result of Kaye and Wong \cite{kw07}. Moreover, the bi-interpretation at work makes it clear that the study of initial self-embeddings of models of $\mathrm{KP+\lnot I}$ + $\Sigma_1$-Separation boils down to the study of initial self-embeddings of models of $\mathrm{I}\Sigma_1$. The interested reader can consult \cite{BahEna18} for a systematic study of initial self-embeddings of $\mathrm{I}\Sigma_1$.} We now turn to the investigation of conditions under which models of $\mathrm{KPI}$ with contained well-founded parts admit proper initial self-embeddings. We begin with the verification that $\Delta_1$-Separation ensures that $\Sigma_0$-types with parameters from the well-founded part that are realised are coded in the standard system; and that for $n>0$, $\Sigma_n$-Separation is sufficient to ensure the corresponding condition for $\Sigma_n$-types. \begin{Lemma1} \label{Th:Sigma1TypesWithParametersAreCoded} \footnote{We are grateful to Kameryn Williams for spotting an unnecessary assumption in an earlier version of this Lemma \ref{Th:Sigma1TypesWithParametersAreCoded}} Suppose $n \in \omega$ and $\mathcal{M}$ is a model of $\mathrm{KPI}+\Sigma_{n}\textrm{-Separation}$ such that the well-founded part of $\mathcal{M}$ is contained. If $\vec{a} \in M$, then $$\{\langle \ulcorner \phi(x, \vec{y}) \urcorner, b\rangle \mid \phi \textrm{ is } \Sigma_{n}, \ b \in \mathrm{WF}(M) \textrm{ and } \mathcal{M} \models \phi(b, \vec{a}) \} \in \mathrm{SSy}(\mathcal{M}).$$ \end{Lemma1} \begin{proof} Let $C \in M$ be such that $\mathrm{WF}(M) \subseteq C^*$. Let $a_1, \ldots, a_l \in M$. Work inside $\mathcal{M}$. Consider $$D= \{\langle q, b\rangle \in C \mid (q \in \omega) \land (a= \langle b, a_1, \ldots, a_l \rangle) \land \mathrm{Sat}_{\Sigma_n}(q, a) \}.$$ Thanks to Theorem \ref{Complexityofpartialsat}, $\Sigma_n$-Separation ($\Delta_1$-Separation when $n=0$) ensures that $D$ is a set in $\mathcal{M}$. It is clear that $D$ codes $$\{\langle \ulcorner \phi(x, \vec{y}) \urcorner, b\rangle \mid \phi \textrm{ is } \Sigma_n, b \in \mathrm{WF}(M) \textrm{ and } \mathcal{M} \models \phi(b, \vec{a}) \}.$$ \Square \end{proof} As verified in the next lemma, in the special case when the model is not $\omega$-standard, in Lemma \ref{Th:Sigma1TypesWithParametersAreCoded} the assumption that the well-founded part is contained can be dropped and the assumption that $\Sigma_n$-Separation holds can be replaced by a fragment of the collection scheme coupled with a fragment foundation scheme. 
\begin{Lemma1} \label{Th:SigmaTypesOfOmegaNonstandardModelsAreCoded} Suppose that $n \in \omega$, $m= \max\{1, n\}$, and $\mathcal{M}$ is an $\omega$-nonstandard model of $ \mathrm{KPI}+ \Pi_{m-1}\textrm{-Collection}+\Pi_{n+1}\textrm{-Foundation}$. If $\vec{a} \in M$, then $$\{\langle \ulcorner \phi(x, \vec{y}) \urcorner, b\rangle \mid \phi \textrm{ is } \Sigma_n, b \in \mathrm{WF}(M) \textrm{ and } \mathcal{M} \models \phi(b, \vec{a}) \} \in \mathrm{SSy}(\mathcal{M}).$$ \end{Lemma1} \begin{proof} By Lemma \ref{Th:OmegaNonstandardModelsHaveContainedStandardPart}, the well-founded part of $\mathcal{M}$ is contained. Let $C \in M$ be such that $\mathrm{WF}(M) \subseteq C^*$. Let $a_1, \ldots, a_m \in M$. Let $\psi_1(\alpha, f,C)$ be the formula: $$\left(\begin{array}{c} \alpha \in \omega \land \mathrm{dom}(f)= \alpha+1 \land f(0)= \emptyset \ \land \\ (\forall \beta \in \mathrm{dom}(f))(\forall y \in C)(y \in f(\beta+1) \iff y \subseteq f(\beta)) \end{array}\right).$$ \noindent Consider the formula $\theta(\alpha, \omega, a_1, \ldots, a_m)$ defined by: $$\exists v (\exists f \in C) \left( \psi_1(\alpha, f,C) \land \psi_2(\alpha, f,C,v,a_1,\ldots,a_m)\right),$$ where $\psi_2(\alpha, f,C,v, a_1,\ldots,a_m) $ is: $$\left(\begin{array}{c} (\forall \langle q, b\rangle \in C)\left(\begin{array}{c} \langle q, b \rangle \in v \cap f(\alpha) \iff\\ (\langle q, b\rangle \in f(\alpha))\land \mathrm{Sat}_{\Sigma_n}(q, \langle b, a_1, \ldots, a_m \rangle) \end{array}\right) \end{array}\right).$$ \noindent By $\Pi_{m-1}$-Collection, the formula $\theta(\alpha, \omega, a_1, \ldots, a_m)$ is equivalent to a $\Sigma_{n+1}$-formula. Now, since $\mathrm{WF}(\mathcal{M})$ is isomorphic to $V_\omega$, for all $\alpha \in \mathrm{o}(M)$, $$\mathcal{M} \models \theta(\alpha, \omega, a_1, \ldots, a_m).$$ Therefore, by $\Pi_{n+1}$-Foundation, there exists $\gamma \in (\omega^\mathcal{M})^* \backslash \mathrm{WF}(M)$ such that $$\mathcal{M} \models \theta(\gamma, \omega, a_1, \ldots, a_m).$$ \noindent Let $v \in M$ be such that $$\mathcal{M} \models (\exists f \in C) \left( \psi_1(\gamma, f,C) \land \psi_2(\gamma, f,C,v,a_1,\ldots,a_m)\right).$$ \noindent Since $\gamma \in (\omega^\mathcal{M})^*$ is nonstandard, it follows that $v$ codes $$\{\langle \ulcorner \phi(x, \vec{y}) \urcorner, b\rangle \mid \phi \textrm{ is } \Sigma_n, b \in \mathrm{WF}(M) \textrm{ and } \mathcal{M} \models \phi(b, \vec{a}) \} \in \mathrm{SSy}(\mathcal{M}).$$ \Square \end{proof} Lemma \ref{Th:Sigma1TypesWithParametersAreCoded} allows us to prove the following theorem that gives a sufficient condition for nonstandard models of extensions of $\mathrm{KPI}$ to admit proper initial self-embeddings. \begin{Theorems1} \label{Th:MainSelfEmbeddingResult2} Let $p \in \omega$, $\mathcal{M}$ be a countable model of $\mathrm{KPI}+\Sigma_{p+1}\textrm{-Separation}+\Pi_p\textrm{-Collection}$, and let $b, B \in M$ and $c \in B^*$ with the following properties: \begin{itemize} \item[(I)] $\mathcal{M} \models \bigcup B \subseteq B$. \item[(II)] $\mathrm{WF}(M) \subseteq B^*$. \item[(III)] for all $\Pi_p$-formulae $\phi(\vec{x}, y, z)$ and for all $a \in \mathrm{WF}(M)$, $$\textrm{if } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, a, b), \textrm{ then } \mathcal{M} \models (\exists \vec{x} \in B)\phi(\vec{x}, a, c).$$ \end{itemize} Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}] \subseteq_e B^*$, $j(b)=c$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. 
\end{Theorems1} \begin{proof} It follows from (II) that $B \in M$ witnesses the fact that the well-founded part of $\mathcal{M}$ is contained. Let $\langle d_i \mid i \in \omega\rangle$ be an enumeration of $M$ such that $d_0= b$. Let $\langle e_i \mid i \in \omega\rangle$ be an enumeration of $B^*$ in which every element of $B^*$ appears infinitely often. We will construct an initial embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ by constructing sequences $\langle u_i \mid i \in \omega\rangle$ of elements of $M$ and $\langle v_i \mid i \in \omega \rangle$ of elements of $B^*$ and defining $j(u_i)= v_i$ for all $i \in \omega$. After stage $n \in \omega$, we will have chosen $u_0, \ldots, u_{n} \in M$ and $v_0, \ldots, v_{n} \in B^*$ and maintained \begin{itemize} \item[]($\dagger_n$) for all $\Pi_{p}$-formulae, $\phi(\vec{x}, z, y_0, \ldots, y_{n})$, and for all $a \in \mathrm{WF}(M)$, $$\textrm{if } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, a, u_0, \ldots, u_{n}), \textrm{ then } \mathcal{M} \models (\exists \vec{x} \in B)\phi(\vec{x}, a, v_0, \ldots, v_{n}).$$ \end{itemize} At stage $0$, let $u_0= b$ and let $v_0= c$. By (III), this choice of $u_0$ and $v_0$ satisfy ($\dagger_0$). Let $n \in \omega$ with $n \geq 1$. Assume that we have chosen $u_0, \ldots, u_{n-1} \in M$ and $v_0, \ldots, v_{n-1} \in B^*$ and that ($\dagger_{n-1}$) holds.\\ {\bf Case $n=2k+1$ for $k \in \omega$:} This step will ensure that the embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ is initial. If $$\mathcal{M}\models e_k \notin \mathrm{TC}(\{v_0, \ldots, v_{n-1}\}),$$ then let $u_n= u_0$ and $v_n= v_0$. This choice of $u_n$ and $v_n$ ensure that $u_0, \ldots, u_n \in M$ and $v_0, \ldots, v_n \in B^*$ satisfy ($\dagger_{n}$). If $$\mathcal{M}\models e_k \in \mathrm{TC}(\{v_0, \ldots, v_{n-1}\}),$$ then let $v_n= e_k$ and we need to choose $u_n$ to satisfy ($\dagger_{n}$). By Lemma \ref{Th:Sigma1TypesWithParametersAreCoded}, $\Sigma_{p+1}$-separation, and $\Pi_p$-collection, there exists $D \in M$ that codes the class $$\{ \langle \ulcorner\phi(\vec{x}, z, y_0, \ldots, y_{n}) \urcorner, a \rangle \mid a \in \mathrm{WF}, \phi \textrm{ is } \Sigma_{p} \textrm{ and } \mathcal{M} \models (\forall \vec{x} \in B)\phi(\vec{x}, a, v_0, \ldots, v_n)\} \in \mathrm{SSy}(\mathcal{M}).$$ By Corollary \ref{Th:StandardPartHasRanks}, the well-founded part of $\mathcal{M}$ believes that ranks exist. For all $\alpha \in \mathrm{o}(M)$, let $D_\alpha \in M$ be such that $$\mathcal{M} \models D_\alpha= D \cap V_\alpha.$$ Note that for all $\alpha \in \mathrm{o}(M)$, $D_\alpha \in \mathrm{WF}(M)$. We have that for all $\alpha \in \mathrm{o}(M)$, \begin{equation} \label{eq:BackStepOfEmeddingTheoremClaim3} \mathcal{M} \models \exists v((v \in \mathrm{TC}(\{v_0, \ldots, v_{n-1}\}) \land (\forall \langle m, a \rangle \in D_\alpha) (\forall \vec{x} \in B) \mathrm{Sat}_{\Sigma_{p}}(m, \langle \vec{x}, a, v_0, \ldots, v_{n-1}, v \rangle)). \end{equation} {\bf Claim:} For all $\alpha \in \mathrm{o}(M)$, \begin{equation} \label{eq:BackStepOfEmeddingTheoremClaim1} \mathcal{M} \models (\exists v \in \mathrm{TC}(\{u_0, \ldots, u_{n-1}\}))(\forall \langle m, a\rangle \in D_\alpha)\forall \vec{x}\ \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, u_0, \ldots, u_{n-1}, v\rangle). 
\end{equation} To prove this claim, suppose not, and let $\alpha \in \mathrm{o}(M)$ be such that $$\mathcal{M} \models (\forall v \in \mathrm{TC}(\{u_0, \ldots, u_{n-1}\}))(\exists \langle m, a\rangle \in D_\alpha) \exists \vec{x}\ \neg \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, u_0, \ldots, u_{n-1}, v \rangle).$$ By $\Pi_p$-collection, \begin{equation} \label{eq:BackStepOfEmeddingTheoremClaim2} \mathcal{M} \models \exists C (\forall v \in \mathrm{TC}(\{u_0, \ldots, u_{n-1}\}))(\exists \langle m, a\rangle \in D_\alpha) (\exists \vec{x} \in C)\ \neg \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, u_0, \ldots, u_{n-1}, v \rangle). \end{equation} Now, $\Pi_p$-Collection implies that (\ref{eq:BackStepOfEmeddingTheoremClaim2}) is equivalent to a $\Sigma_{p+1}$-formula. Therefore, by ($\dagger_{n-1}$), $$\mathcal{M} \models (\exists C \in B)(\forall v \in \mathrm{TC}(\{v_0, \ldots, v_{n-1}\}))(\exists \langle m, a\rangle \in D_\alpha) (\exists \vec{x} \in C)\ \neg \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, v_0, \ldots, v_{n-1}, v \rangle).$$ But then $$\mathcal{M} \models (\forall v \in \mathrm{TC}(\{v_0, \ldots, v_{n-1}\}))(\exists \langle m, a\rangle \in D_\alpha) (\exists \vec{x} \in B)\ \neg \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, v_0, \ldots, v_{n-1}, v \rangle),$$ which contradicts (\ref{eq:BackStepOfEmeddingTheoremClaim3}). This proves the claim.\\ \noindent Consider the formula $\theta(\alpha, D, B, u_0, \ldots, u_{n-1})$ defined by: $$ (\exists f \in B) \left( \psi_1(\alpha,f,B) \land \psi_2(\alpha,f, D, u_0, \ldots, u_{n-1}) \right),$$ where $\psi_1(\alpha,f,B)$ is: $$\left(\begin{array}{c} (\alpha \textrm{ is an ordinal}) \land \mathrm{dom}(f)= \alpha \land f(0)= \emptyset \ \land \\ (\forall \beta \in \mathrm{dom}(f))\left((\beta \textrm{ is a limit ordinal}) \Rightarrow f(\beta)= \bigcup_{\xi < \beta} f(\xi) \right) \land\\ (\forall \beta \in \mathrm{dom}(f))(\forall y \in B)(y \in f(\beta+1) \iff y \subseteq f(\beta)) \end{array}\right),$$ and $\psi_2(\alpha,f, D, u_0, \ldots, u_{n-1})$ is: $$ (\exists v \in \mathrm{TC}(\{u_0, \ldots, u_{n-1}\}))(\forall \langle m, a\rangle \in D \cap f(\alpha))\forall \vec{x}\ \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, u_0, \ldots, u_{n-1}, v\rangle).$$ \noindent Now, for all $\alpha \in \mathrm{o}(M)$, $$\mathcal{M} \models \theta(\alpha, D, B, u_0, \ldots, u_{n-1}).$$ And $\Pi_p$-Collection implies that $\theta(\alpha, D, B, u_0, \ldots, u_{n-1})$ is equivalent to a $\Pi_{p+1}$-formula. Therefore, by $\Sigma_{p+1}$-Foundation, there exists $\gamma \in \mathrm{Ord}^\mathcal{M} \backslash \mathrm{o}(M)$ and $f \in M$ such that $$\mathcal{M} \models \psi_1(\gamma,f,B).$$ \noindent and $$\mathcal{M} \models \psi_2(\gamma, f,D,u_0,\ldots,u_{n-1}).$$ \noindent Let $v \in M$ be such that $$\mathcal{M} \models v \in \mathrm{TC}(\{u_0, \ldots, u_{n-1}\}),$$ and $$\mathcal{M} \models (\forall \langle m, a\rangle \in D \cap f(\gamma))\forall \vec{x}\ \mathrm{Sat}_{\Sigma_p}(m, \langle \vec{x}, a, u_0, \ldots, u_{n-1}, v\rangle).$$ Let $u_n= v$. This choice of $u_n$ ensures that $u_0, \ldots, u_n \in M$ and $v_0, \ldots, v_n \in B^*$ satisfy ($\dagger_{n}$).\\ \\ {\bf Case $n=2k$ for $k \in \omega$:} Let $u_n= d_k$. This choice will ensure that the domain of $j$ is all of $M$. 
By Lemma \ref{Th:Sigma1TypesWithParametersAreCoded}, there exists $A \in M$ that codes the class $$\{ \langle\ulcorner \phi(\vec{x}, z, y_0, \ldots, y_{n})\urcorner, a\rangle \mid a \in \mathrm{WF}(M), \phi \textrm{ is } \Pi_{p} \textrm{ and } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, a, u_0, \ldots, u_{n})\} \in \mathrm{SSy}(\mathcal{M}).$$ Now, by Corollary \ref{Th:StandardPartHasRanks}, the well-founded part of $\mathcal{M}$ believes that ranks exist. For all $\alpha \in \mathrm{o}(M)$, let $A_\alpha \in M$ be such that $$\mathcal{M} \models A_\alpha= V_\alpha \cap A.$$ Note that for all $\alpha \in \mathrm{o}(M)$, $A_\alpha \in \mathrm{WF}(M)$. We have that for all $\alpha \in \mathrm{o}(M)$, $$\mathcal{M} \models (\forall \langle m, a \rangle \in A_\alpha) \exists \vec{x} \ \mathrm{Sat}_{\Pi_p}(m, \langle \vec{x}, a, u_0, \ldots, u_n\rangle).$$ So, for all $\alpha \in \mathrm{o}(M)$, $$\mathcal{M}\models \exists v (\forall \langle m, a\rangle \in A_\alpha) \exists \vec{x} \ \mathrm{Sat}_{\Pi_p}(m, \langle \vec{x}, a, u_0, \ldots, u_{n-1}, v \rangle),$$ and, using $\Pi_p$-Collection, this formula is equivalent to a $\Sigma_{p+1}$-formula with parameters $A_\alpha \in \mathrm{WF}(M)$ and $u_0, \ldots, u_{n-1}$. Therefore, by ($\dagger_{n-1}$) and (I), for all $\alpha \in \mathrm{o}(M)$, $$\mathcal{M} \models (\exists v \in B)(\forall \langle m, a\rangle \in A_\alpha) (\exists \vec{x} \in B) \mathrm{Sat}_{\Pi_p}(m, \langle \vec{x}, a, v_0, \ldots, v_{n-1}, v \rangle).$$ \noindent Consider the formula $\theta(\alpha, A, B, v_0, \ldots, v_{n-1})$ defined by $$(\exists v, f \in B)\left(\psi_1(\alpha,f,B) \land \psi_2(\alpha,f,A,B,v_0,\ldots,v_{n-1},v) \right),$$ where $\psi_1(\alpha,f,B)$ is as in the proof of Case $n=2k+1$, and $\psi_2(\alpha,f,A,B,v_0,\ldots,v_{n-1},v)$ is: $$(\forall \langle m, a \rangle \in A \cap f(\alpha))((\exists \vec{x} \in B) \mathrm{Sat}_{\Pi_p}(m, \langle\vec{x}, a, v_0, \ldots, v_{n-1}, v \rangle)).$$ \noindent Note that $\theta(\alpha, A, B, v_0, \ldots, v_{n-1})$ is equivalent to a $\Pi_{p}$-formula and for all $\alpha \in \mathrm{o}(M)$, $$\mathcal{M} \models \theta(\alpha, A, B, v_0, \ldots, v_{n-1}).$$ Therefore, by $\Sigma_{p}$-Foundation, there exists $\gamma \in \mathrm{Ord}^\mathcal{M} \backslash \mathrm{o}(M)$ such that: $$\mathcal{M} \models \theta(\gamma, A, B, v_0, \ldots, v_{n-1}).$$ Let $f, v \in B^*$ be such that $$\mathcal{M} \models \left( \psi_1(\gamma,f,B) \land \psi_2(\gamma,f,A,B,v_0,\ldots,v_{n-1},v) \right),$$ \noindent and let $v_n= v$. Therefore $$\mathcal{M} \models (\forall \langle m, a \rangle \in A \cap f(\gamma))((\exists \vec{x} \in B) \mathrm{Sat}_{\Pi_p}(m, \langle \vec{x}, a, v_0, \ldots, v_n \rangle)),$$ and this choice of $v_n$ ensures that $u_0, \ldots, u_n \in M$ and $v_0, \ldots, v_n \in B^*$ satisfy ($\dagger_{n}$). This completes the case where $n=2k$ and shows that we can construct sequences $\langle u_i \mid i \in \omega\rangle$ and $\langle v_i \mid i \in \omega \rangle$ while maintaining the conditions ($\dagger_n$) at each stage of the construction. Now, define $j: \mathcal{M} \longrightarrow \mathcal{M}$ by: for all $i \in \omega$, $j(u_i)= v_i$. Our ``back-and-forth'' construction ensures that $j$ is a proper initial self-embedding with $j[\mathcal{M}] \subseteq_e B^*$, $j(b)= c$ and $j[\mathcal{M}] \prec_p \mathcal{M}$.
\Square \end{proof} In the proof of Theorem \ref{Th:MainSelfEmbeddingResult2}, the only use of $\Sigma_{p+1}$-Separation is to prove $\Sigma_{p+1}$- and $\Pi_{p+1}$-Foundation, and to satisfy the assumptions of Lemma \ref{Th:Sigma1TypesWithParametersAreCoded}. Therefore, in the special case where the model involved is $\omega$-nonstandard, we can replace Lemma \ref{Th:Sigma1TypesWithParametersAreCoded} with Lemma \ref{Th:SigmaTypesOfOmegaNonstandardModelsAreCoded} to obtain the following simplified variant of Theorem \ref{Th:MainSelfEmbeddingResult2}. \begin{Theorems1} \label{Th:MainSelfEmbeddingResultOmegaNonStandard2} Let $p \in \omega$, $\mathcal{M}$ be a countable $\omega$-nonstandard model of $\mathrm{KPI}+\Pi_p\textrm{-Collection}+\Pi_{p+2}\textrm{-Foundation}$, and let $b, B \in M$ and $c \in B^*$ with the following properties: \begin{itemize} \item[(I)] $\mathcal{M} \models \bigcup B \subseteq B$, \item[(II)] $\mathrm{WF}(M) \subseteq B^*$, and \item[(III)] for all $\Pi_p$-formulae $\phi(\vec{x}, z)$, $$\textrm{if } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, b), \textrm{ then } \mathcal{M} \models (\exists \vec{x} \in B) \phi(\vec{x}, c).$$ \end{itemize} Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}] \subseteq_e B^*$, $j(b)= c$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. \end{Theorems1} Equipped with Theorems \ref{Th:MainSelfEmbeddingResult2} and \ref{Th:MainSelfEmbeddingResultOmegaNonStandard2}, we are now able to demonstrate in Theorem \ref {Th:ExistenceOfElementarySelfEmbeddingsForKPI} and Corollary \ref{Th:ElementarySelfEmbeddingsForZFCminus} that a variety of nonstandard models of $\mathrm{KPI}$ are isomorphic to $\Sigma_n$-elementary transitive substructures of themselves. \begin{Theorems1} \label{Th:ExistenceOfElementarySelfEmbeddingsForKPI} Let $p \in \omega$, $\mathcal{M}$ be a countable nonstandard model of $\mathrm{KPI}+\Sigma_{p+1}\textrm{-Separation}+\Pi_p\textrm{-Collection}$ such that the well-founded part of $\mathcal{M}$ is contained, and let $b \in M$. Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $b \in \mathrm{rng}(j)$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. \end{Theorems1} \begin{proof} Let $C \in M$ be such that $\mathrm{WF}(M) \subseteq C^*$. Work inside $\mathcal{M}$. Consider the formula $\theta(x, y, b)$ defined by $$x= \langle m, a \rangle \land (m \in \omega) \land y= \langle y_1, \ldots, y_k\rangle \land \mathrm{Sat}_{\Pi_p}(m, \langle y_1, \ldots, y_k, a, b\rangle).$$ Note that if $p \geq 1$, then $\theta(x, y, b)$ is equivalent to a $\Pi_p$-formula, and if $p=0$, then $\theta(x, y, b)$ is equivalent to a $\Sigma_1$-formula. By Lemma \ref{basicimplications}, Strong $\Pi_p$-Collection holds in $\mathcal{M}$. Therefore, there exists a set $D$ such that $$(\forall x \in C)(\exists y \theta(x, y) \Rightarrow (\exists y \in D)\theta(x, y)).$$ Let $B= \mathrm{TC}(D)$. 
Now, $B \in M$ is a transitive set in $\mathcal{M}$ with $\mathrm{WF}(M) \subseteq B^*$, and for all $\Pi_p$-formulae $\phi(\vec{x}, y, z)$ and for all $a \in \mathrm{WF}(M)$, $$\textrm{if } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, a, b), \textrm{ then } \mathcal{M} \models (\exists \vec{x} \in B) \phi(\vec{x}, a, b).$$ Therefore, by Theorem \ref{Th:MainSelfEmbeddingResult2}, there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}] \subseteq_e B^*$, $j(b)=b$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. \Square \end{proof} \begin{Coroll1} \label{Th:ElementarySelfEmbeddingsForZFCminus} Let $\mathcal{M}$ be a countable nonstandard model of $\mathrm{ZF}^-+\mathrm{WO}$ such that the well-founded part of $\mathcal{M}$ is contained. Then for all $p \in \omega$ and for all $b \in M$, there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $b \in \mathrm{rng}(j)$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. \end{Coroll1} Theorem \ref{Th:ExistenceOfElementarySelfEmbeddingsForKPI} also yields the following results that provide two different sufficient conditions for models of $\mathrm{KPI}+\Sigma_{1}\textrm{-Separation}$ to admit proper initial self-embeddings. \begin{Coroll1} \label{Th:SelfEmbeddingOfExtensionOfKPI1} Let $\mathcal{M}$ be a countable nonstandard model of $\mathrm{KPI}+\Sigma_{1}\textrm{-Separation}$ such that the well-founded part of $\mathcal{M}$ is contained and let $b \in M$. Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $b \in \mathrm{rng}(j)$. \end{Coroll1} \begin{Coroll1} \label{Th:NonOmegaModelsOfKPISelfEmeddingResult} Let $\mathcal{M}$ be a countable $\omega$-nonstandard model of $\mathrm{KPI}+\Sigma_{1}\textrm{-Separation}$ and let $b \in M$. Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ with $b \in \mathrm{rng}(j)$. \end{Coroll1} These results allow us to give an example of a countable $\omega$-nonstandard model of $\mathrm{MOST}+\Pi_1\textrm{-Collection}$ that admits a proper initial self-embedding, but no proper $\mathcal{P}$-initial self-embedding. \begin{Examp1} \label{ex:InitialButNoPInitial} \normalfont Let $\mathcal{M}$ be a countable model of $\mathrm{ZF}+V=L$ that is not $\omega$-standard. Let $\mathcal{N}$ be the substructure of $\mathcal{M}$ with underlying set $$N= \bigcup_{n \in \omega} (H_{\aleph_n}^{\mathcal{M}})^*.$$ The fact that $\mathcal{N} \models \mathrm{Mac}$ follows immediately from the fact that $\mathcal{N} \models \textrm{Powerset}$ and $\mathcal{N} \subseteq_e^\mathcal{P} \mathcal{M}$. Since $\mathcal{M}$ satisfies the Generalised Continuum Hypothesis, $\mathcal{N} \models \textrm{Axiom }\mathrm{H}$. Therefore, by Lemma \ref{Th:MOSTisMacPlusH}, $\mathcal{N} \models \mathrm{MOST}$. It follows from Lemma \ref{Th:HCutsSatisfyPi1Collection} that $\mathcal{N} \models \mathrm{MOST}+\Pi_1\textrm{-Collection}$. By Corollary \ref{Th:NonOmegaModelsOfKPISelfEmeddingResult}, $\mathcal{N}$ admits a proper initial self-embedding. Now, suppose, towards a contradiction, that $j: \mathcal{N} \longrightarrow \mathcal{N}$ is a proper $\mathcal{P}$-initial self-embedding. Since $j[\mathcal{N}] \subseteq_e^\mathcal{P} \mathcal{N}$, cardinals are preserved between $j[\mathcal{N}]$ and $\mathcal{N}$, so $j[\mathcal{N}]$ must contain $(H_{\aleph_n}^{\mathcal{N}})^*$ for every $n \in \omega$, and therefore $j[\mathcal{N}]= \mathcal{N}$, contradicting the fact that $j$ is proper.
\end{Examp1} We are also able to find an example of a countable $\omega$-nonstandard model of $\mathrm{MOST}+\Pi_1\textrm{-Collection}$ that admits a proper $\mathcal{P}$-initial self-embedding, but no proper rank-initial self-embedding. \begin{Examp1} \normalfont Let $\mathcal{N}= \langle N, \in^\mathcal{M} \rangle$ be the countable model of $\mathrm{MOST}+\Pi_1\textrm{-Collection}$ described in Example \ref{ex:InitialButNoPInitial}. Note that $\mathcal{N}$ satisfies the Generalised Continuum Hypothesis and the infinite cardinals of $\mathcal{N}$ are exactly $\aleph_n$ for each standard natural number $n$. In particular, for all $n \in \omega$, $$\mathcal{N}\models (|\mathcal{P}^n(V_\omega)| = \aleph_n) \textrm{, where } \mathcal{P}^n(V_\omega) \textrm{ denotes the result of applying the powerset operation to } V_\omega\ n\textrm{ times.}$$ It follows that $\mathcal{N}$ satisfies \begin{itemize} \item[] ($\dagger$) For all cardinals $\kappa$, there exists a set $X$ with cardinality $\kappa$ and countable rank. \end{itemize} Therefore, $\mathcal{N}$ shows that the theory $\mathrm{MOST}+\Pi_1\textrm{-Collection}+(\dagger)$ is consistent. Now, let $\mathcal{K}= \langle K, \in^\mathcal{K}\rangle$ be a recursively saturated model of $\mathrm{MOST}+\Pi_1\textrm{-Collection}+(\dagger)$. By Theorem \ref{Th:PInitialSelfEmbeddingOfRecursivelySaturatedModels}, $\mathcal{K}$ has a proper $\mathcal{P}$-initial self-embedding. Now, suppose $j: \mathcal{K} \longrightarrow \mathcal{K}$ is a proper $\mathcal{P}$-initial self-embedding. Since a bijection between an ordinal $\kappa$ and $\alpha \in \kappa$ is a subset of $\kappa \times \kappa$, for all $\kappa \in \mathrm{rng}(j)$, $\kappa$ is a cardinal according to $\mathcal{K}$ if and only if $\kappa$ is a cardinal according to $j[\mathcal{K}]$. Similarly, if $R \in K$ and $\kappa \in \mathrm{rng}(j)$ is a cardinal of $\mathcal{K}$ such that $$\mathcal{K} \models (R \subseteq \kappa \times \kappa)\land (R \textrm{ is a well-founded extensional relation with a maximal element}),$$ then $R \in \mathrm{rng}(j)$ and $$j[\mathcal{K}] \models (R \textrm{ is a well-founded extensional relation with a maximal element}).$$ Therefore, by Lemma \ref{Th:ConsequencesOfMOST}, if $\kappa \in \mathrm{rng}(j)$ is a cardinal, then $(\kappa^+)^\mathcal{K} \in \mathrm{rng}(j)$ and $H_{\kappa^+}^{j[\mathcal{K}]}= H_{\kappa^+}^\mathcal{K}$. Now, since $j$ is proper, let $x \in K \backslash \mathrm{rng}(j)$. Let $\kappa \in K$ be such that $\mathcal{K} \models (|\mathrm{TC}(\{x\})|=\kappa)$. By the observations that we have just made, $\kappa \notin \mathrm{rng}(j)$ and for all $y \in K$, if $\mathcal{K} \models (|\mathrm{TC}(\{y\})|\geq \kappa)$, then $y \notin \mathrm{rng}(j)$. Therefore, since $\mathcal{K} \models (\dagger)$, there exists a set $y \in K$ with countable rank in $\mathcal{K}$ such that $y \notin \mathrm{rng}(j)$. This shows that $j$ is not a proper rank-initial self-embedding. \end{Examp1} Theorem \ref{Th:ExistenceOfElementarySelfEmbeddingsForKPI} combined with Lemma \ref{Th:WellFoundedPartsOfKPPContained} also yields the following result that shows that every countable nonstandard model of $\mathrm{KP}^\mathcal{P}+\Sigma_1\textrm{-Separation}$ admits a proper initial self-embedding. \begin{Coroll1} \label{Th:FriedmanStyleExistenceTheorem} Let $\mathcal{M}$ be a countable nonstandard model of $\mathrm{KP}^\mathcal{P}+\Sigma_1\textrm{-Separation}$ and let $b \in M$.
Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ with $b \in \mathrm{rng}(j)$. \end{Coroll1} Note that the theory $\mathrm{MOST}+\Pi_1\textrm{-Collection}+\Pi_1^\mathcal{P}\textrm{-Foundation}$ is obtained from $\mathrm{KP}^\mathcal{P}+\Sigma_1\textrm{-Separation}$ by adding the Axiom of Choice. The following example shows that the assumptions of Corollary \ref{Th:FriedmanStyleExistenceTheorem} cannot be weakened to saying that $\mathcal{M}$ is a nonstandard model of $\mathrm{MOST}+\Pi_1\textrm{-Collection}$. \begin{Examp1} \normalfont Let $\mathcal{M}$ be a countable model of $\mathrm{ZF}+V=L$ that is $\omega$-standard but has a nonstandard ordinal that is countable according to $\mathcal{M}$. Note that such a model can be obtained from the assumption that there exists a transitive model of $\mathrm{ZF}+V=L$ using \cite[Theorem 2.4]{Keisler-Morley} or, from the same assumption, using the Barwise Compactness Theorem as in the proof of \cite[Theorem 4.5]{mck15}. Let $\mathcal{N}$ be the substructure of $\mathcal{M}$ with underlying set $$N= \bigcup_{\alpha \in \mathrm{o}(\mathcal{M})} (H_{\aleph_\alpha}^\mathcal{M})^*.$$ Using the same reasoning that was used in Example \ref{ex:InitialButNoPInitial}, $$\mathcal{N} \models \mathrm{MOST}+\Pi_1\textrm{-Collection}.$$ Moreover, $\mathcal{N}$ is nonstandard. Since $\mathcal{M}$ satisfies the Generalised Continuum Hypothesis, a straightforward induction argument inside $\mathcal{M}$ shows that $$\mathcal{M} \models \forall \alpha(\alpha \textrm{ is an ordinal} \Rightarrow |V_{\omega+\alpha}|= \aleph_\alpha).$$ Therefore, $$\mathrm{WF}(N)= \bigcup_{\alpha \in \mathrm{o}(\mathcal{N})} (V_{\omega+\alpha}^\mathcal{M})^*$$ and the well-founded part of $\mathcal{N}$ is c-unbounded in $\mathcal{N}$. So, by Theorem \ref{Th:NoSelfEmbeddingWhenStandardPartDense}, $\mathcal{N}$ admits no proper initial self-embedding. \end{Examp1} We can also use Theorem \ref{Th:MainSelfEmbeddingResultOmegaNonStandard2} to prove the following variant of Theorem \ref{Th:ExistenceOfElementarySelfEmbeddingsForKPI} for models of extensions of $\mathrm{KPI}$ that are not $\omega$-standard. \begin{Theorems1} Let $p \in \omega$, $\mathcal{M}$ be a countable $\omega$-nonstandard model of $\mathrm{KPI}+\Pi_p\textrm{-Collection}+\Pi_{p+2}\textrm{-Foundation}$, and let $b \in M$. Then there exists a proper initial self-embedding $j:\mathcal{M} \longrightarrow \mathcal{M}$ such that $b \in \mathrm{rng}(j)$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. \end{Theorems1} \begin{proof} Consider $\theta(C, n, b, \omega)$ defined by $$(n \in \omega) \land (\forall m \in n)(\exists \vec{x}\ \mathrm{Sat}_{\Pi_p}(m, \langle \vec{x}, b \rangle) \Rightarrow (\exists \vec{x} \in C) \mathrm{Sat}_{\Pi_p}(m, \langle \vec{x}, b\rangle)).$$ Note that $\Pi_p$-Collection implies that $\theta(C, n, b, \omega)$ is equivalent to a $\Pi_{p+1}$-formula. Moreover, if $n \in \omega$, then there exists a finite set $C$ such that $\theta(C, n, b, \omega)$ holds. Therefore, for all $n \in \omega$, $$\mathcal{M} \models \exists C \ \theta(C, n, b, \omega).$$ So, by $\Pi_{p+2}$-Foundation, there exists a nonstandard $k \in (\omega^\mathcal{M})^*$ such that $$\mathcal{M}\models \exists C \ \theta(C, k, b, \omega).$$ Let $C \in M$ be such that $$\mathcal{M}\models \theta(C, k, b, \omega).$$ Finally, working inside $\mathcal{M}$, let $B= \mathrm{TC}(C\cup V_\omega)$.
Therefore $\mathcal{M} \models \bigcup B \subseteq B$, $\mathrm{WF}(M) \subseteq B^*$ and for all $\Pi_p$-formulae $\phi(\vec{x}, z)$, $$\textrm{if } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, b), \textrm{ then } \mathcal{M} \models (\exists \vec{x} \in B) \phi(\vec{x}, b).$$ Therefore, by Theorem \ref{Th:MainSelfEmbeddingResultOmegaNonStandard2}, there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}] \subseteq_e B^*$, $j(b)=b$ and $j[\mathcal{M}] \prec_p \mathcal{M}$. \Square \end{proof} \begin{Coroll1} \label{Th:SelfEmbeddingOfExtensionKPI2} Let $\mathcal{M}$ be a countable $\omega$-nonstandard model of $\mathrm{KPI}+\Pi_2\textrm{-Foundation}$, and let $b \in M$. Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $b \in \mathrm{rng}(j)$. \end{Coroll1} Note that Corollaries \ref{Th:NonOmegaModelsOfKPISelfEmeddingResult} and \ref{Th:SelfEmbeddingOfExtensionKPI2} give two distinct extensions of $\mathrm{KPI}$ such that every countable $\omega$-nonstandard model of these extensions is isomorphic to a transitive proper initial segment of itself. We will next use Theorem \ref{Th:MainSelfEmbeddingResultOmegaNonStandard2} together with Corollary \ref{cor. of Levy-Shoenfield} to verify the surprising result that every countable model of $\mathrm{ZF}$ that is $\omega$-nonstandard is isomorphic to a transitive substructure of the hereditarily countable sets of its own $L$. \begin{Theorems1} \label{Th:InitialEmbeddingIntoL} Let $\mathcal{M}$ be a countable $\omega$-nonstandard model of $\mathrm{ZF}$. Then there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}] \subseteq_e (H_{\kappa}^{L^\mathcal{M}})^*$, where $\kappa= (\aleph_1^L)^\mathcal{M}$. \end{Theorems1} \begin{proof} Let $\kappa= (\aleph_1^L)^\mathcal{M}$. Now, let $B= H_\kappa^{L^\mathcal{M}}$. It is clear that $\mathcal{M} \models \bigcup B \subseteq B$ and $\mathrm{WF}(M) \subseteq B^*$. Note that $\emptyset \in B^* \cap M$. By Corollary \ref{cor. of Levy-Shoenfield}, for all $\Delta_0$-formulae $\phi(\vec{x}, z)$, $$\textrm{if } \mathcal{M} \models \exists \vec{x} \phi(\vec{x}, \emptyset), \textrm{ then } \mathcal{M} \models (\exists \vec{x} \in B) \phi(\vec{x}, \emptyset).$$ So, by Theorem \ref{Th:MainSelfEmbeddingResultOmegaNonStandard2}, there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $j[\mathcal{M}] \subseteq_e B^*$. \Square \end{proof} Hamkins \cite{ham13} showed that if $\mathcal{M}$ is a countable model of $\mathrm{ZF}$, then there exists an embedding of $\mathcal{M}$ into its own $L$. However, the embeddings produced in \cite{ham13} are not required to be initial embeddings. Theorem \ref{Th:InitialEmbeddingIntoL} shows that under the condition that $\mathcal{M}$ is a countable $\omega$-nonstandard model of $\mathrm{ZF}$, there exists an embedding of $\mathcal{M}$ into its own $L$ that is also initial. Question 35 of \cite{ham13} asks whether every countable model of set theory can be embedded into its own $L$ by an embedding that preserves ordinals. Since initial embeddings preserve ordinals, Theorem \ref{Th:InitialEmbeddingIntoL} provides a positive answer to this question when $\mathcal{M}$ is a countable $\omega$-nonstandard model of $\mathrm{ZF}$.
Theorem \ref{Th:InitialEmbeddingIntoL} immediately implies the corollary below that shows that every countable model of $\mathrm{ZF}$ that is not $\omega$-standard can be end-extended to a model of $\mathrm{ZFC}+V=L$. \begin{Coroll1} \label{Th:SpecialCaseOfBarwiseResult} Let $\mathcal{M}$ be a countable $\omega$-nonstandard model of $\mathrm{ZF}$. Then there exist structures $\mathcal{N}_1$ and $\mathcal{N}_2$ such that \begin{itemize} \item[(I)] $\mathcal{M} \subseteq_e \mathcal{N}_1 \subseteq_e \mathcal{N}_2$, \item[(II)] $\mathcal{N}_2 \models \mathrm{ZFC}+V=L$, and \item[(III)] $\mathcal{N}_1= \langle (H_{\aleph_1}^{\mathcal{N}_2})^*, \in^{\mathcal{N}_2} \rangle$. \end{itemize} \end{Coroll1} Corollary \ref{Th:SpecialCaseOfBarwiseResult} is a special case of \cite[Theorem 3.1]{bar71}, which shows that Corollary \ref{Th:SpecialCaseOfBarwiseResult} holds for all countable models of $\mathrm{ZF}$. Barwise used methods from infinitary logic; Hamkins has recently formulated a purely set-theoretic proof of the same result \cite{ham18}. We now turn to applying Theorem \ref{Th:MainSelfEmbeddingResult2} to finding transitive partially elementary substructures of nonstandard models of $\mathrm{ZF}^-+\mathrm{WO}$. Despite the failure of reflection in $\mathrm{ZF}^-+\mathrm{WO}$ \cite{fgk19}, Quinsey \cite[Corollary 6.9]{qui80} employed indicators and methods from infinitary logic to show the following: \begin{Theorems1} \label{Th:QuinseyResult} (Quinsey) Let $n \in \omega$. If $\mathcal{M} \models \mathrm{ZF}^-$, then there exists $\mathcal{N} \subseteq_e \mathcal{M}$ such that $N \neq M$, $\mathcal{N} \prec_n \mathcal{M}$ and $\mathcal{N} \models \mathrm{ZF}^-$. \end{Theorems1} Extensions of H. Friedman's self-embedding result \cite{fri73} proved by Gorbow \cite{gor18} show that if the nonstandard model $\mathcal{M}$ in Theorem \ref{Th:QuinseyResult} is countable and satisfies $\mathrm{ZF}$, then the conclusions of Theorem \ref{Th:QuinseyResult} can be strengthened to require that $\mathcal{N} \subseteq_e^\mathcal{P} \mathcal{M}$ and $\mathcal{N} \cong \mathcal{M}$. In light of this, it is natural to ask under what circumstances the conclusion of Theorem \ref{Th:QuinseyResult} can be strengthened to require that the $\Sigma_n$-elementary submodel be isomorphic to the original nonstandard model. Theorem \ref{Th:ModelOfZFCminusWithoutSelfEmbedding} shows that such a strengthening of Theorem \ref{Th:QuinseyResult} does not hold in general, even when $n=0$ and the model is countable. However, using Theorem \ref{Th:MainSelfEmbeddingResult2}, we can show that the countable nonstandard models of $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha\ \Pi^1_\infty\mathrm{-DC}_\alpha$ for which this strengthening of Quinsey's result holds are exactly the models in which the well-founded part is c-bounded. Our final result below shows that the c-boundedness of the well-founded part of a countable nonstandard model $\mathcal{M}$ of $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha\ \Pi^1_\infty\mathrm{-DC}_\alpha$ is necessary and sufficient for $\mathcal{M}$ to admit a proper initial self-embedding. \begin{Theorems1} \label{Th:IsomorphicElementarySubmodelsResult} Let $\mathcal{M}$ be a countable nonstandard model of $\mathrm{ZF}^-+\mathrm{WO}+\forall \alpha\ \Pi^1_\infty\mathrm{-DC}_\alpha$.
Then the following are equivalent: \begin{itemize} \item[(I)] The well-founded part of $\mathcal{M}$ is c-bounded in $\mathcal{M}$, \item[(II)] $\mathrm{WF}(\mathcal{M}) \models \mathrm{Powerset}$, \item[(III)] For all $n \in \omega$ and for all $b \in M$, there exists a proper initial self-embedding $j: \mathcal{M} \longrightarrow \mathcal{M}$ such that $b \in \mathrm{rng}(j)$ and $j[\mathcal{M}] \prec_n \mathcal{M}$. \end{itemize} \end{Theorems1} \begin{proof} (I)$\Rightarrow$(II) is Lemma \ref{Th:CBoundedImpliesPowersetInWellFoundedPart}. To see that (II)$\Rightarrow$(III), assume that (II) holds and note that Lemma \ref{Th:PowersetInWellFoundedPartImpliesContained} implies that the standard part of $\mathcal{M}$ is contained. Therefore, by Corollary \ref{Th:ElementarySelfEmbeddingsForZFCminus}, (III) holds. Finally, (III)$\Rightarrow$(I) is the contrapositive of Theorem \ref{Th:NoSelfEmbeddingWhenStandardPartDense}. \Square \end{proof} \section[Questions]{Questions} \label{Sec:questions} \noindent \textbf{Question 6.1} \textit{Can Theorem} \ref{Th:ExistenceOfElementarySelfEmbeddingsForKPI} \textit{be strengthened by adding the requirement to the conclusion of the theorem that the self-embedding} $j$ \textit{fixes every member of} $b^*$? \begin{itemize} \item The above question is motivated by the ``moreover'' clause of Theorem \ref{Th_Gorbow}. \end{itemize} \noindent \textbf{Question 6.2} \textit{Does every countable model of} $\mathrm{KPI}$ \textit{that is not} $\omega$-\textit{standard admit a proper initial self-embedding}? \begin{itemize} \item The above question is prompted by Corollaries \ref{Th:SelfEmbeddingOfExtensionOfKPI1} and \ref{Th:SelfEmbeddingOfExtensionKPI2}, which provide sufficient conditions for a countable model of $\mathrm{KPI}$ that is not $\omega$-standard to admit a proper initial self-embedding. \end{itemize} \noindent \textbf{Question 6.3} \textit{Is there a countable model} $\mathcal{M}$ \textit{of} $\mathrm{ZF}^-+\mathrm{WO}$ \textit{such that the well-founded part of} $\mathcal{M}$ \textit{is c-bounded and} $\mathcal{M}$ \textit{does not admit any proper initial self-embedding?} \begin{itemize} \item The key role played by the scheme $\forall \alpha\ \Pi^1_\infty\mathrm{-DC}_\alpha$ in the proof of Theorem \ref{Th:IsomorphicElementarySubmodelsResult} suggests a positive answer to the above question. \end{itemize} \end{document}
\begin{document} \maketitle \begin{abstract} A large class of real $3$-dimensional nilpotent polynomial vector fields of arbitrary degree is considered. The aim of this work is to present general properties of the discrete and continuous dynamical systems induced by these vector fields. In the discrete case, it is proved that each dynamical system has a unique fixed point and no $2$-cycles. Moreover, either the fixed point is a global attractor or there exists a $3$-cycle which is not a repeller. In the continuous setting, it is proved that each dynamical system is polynomially integrable. In addition, for a subclass of the considered vector fields, the system is polynomially completely integrable. Furthermore, for a family of low-degree vector fields, a more precise description of the global dynamics of the trajectories of the induced dynamical system is provided. In particular, the existence of an invariant surface foliated by periodic orbits is proved. Finally, some remarks and open questions, motivated by our results, the Markus--Yamabe Conjecture and the problem of planar limit cycles, are given. \end{abstract} \section{Introduction} The study of \emph{nilpotent polynomial vector fields}, i.e. polynomial vector fields $\displaystyle F \colon \mathbb{K}^n \longrightarrow \mathbb{K}^n$ whose Jacobian matrix $JF$ is nilpotent, where $\mathbb{K}$ is a field of characteristic zero, is closely related to the Jacobian Conjecture~\cite{Ess2000,EKC2021}. Indeed, the seminal works of A.V. Yagzhev~\cite{Yag} and H. Bass et al. \cite{BCW} prove that, in order to study the Jacobian Conjecture, it is sufficient to focus on polynomial vector fields of the form $I+F$, where $I$ is the identity and $F$ is nilpotent and homogeneous of degree three. Another motivation for studying nilpotent polynomial vector fields arises from dynamics. Recall that each real vector field $F \displaystyle \colon \mathbb{R}^n \longrightarrow \mathbb{R}^n$ induces a discrete dynamical system defined by the iteration of $F$ and a continuous dynamical system defined by the flow generated by the differential system associated with $F$. For this induced continuous (resp. discrete) dynamical system, L. Markus and H. Yamabe \cite{MY} (resp. J. LaSalle \cite{LaSalle}) established the continuous (resp. discrete) global stability conjecture, which is true for $n \leq 2$ with $F$ polynomial, but admits counterexamples for $n\geq 3$. Such counterexamples have the form $\lambda I + F$ where $\lambda < 0 \, (\textrm{resp. } |\lambda| < 1)$ and $F$ is a nilpotent polynomial vector field; see \cite{CEGMH,CGM}. Therefore, nilpotent polynomial vector fields play a fundamental role both in settling the Jacobian Conjecture and in the construction of examples and counterexamples to the Markus--Yamabe and LaSalle conjectures. Furthermore, the characterization and understanding of this kind of vector field in any dimension and any degree, even in the inhomogeneous case, goes beyond these conjectures and represents a challenging open problem in itself. The characterization of nilpotent polynomial vector fields is well-known in dimension two \cite[p. 148]{Ess2000}. In dimension three, it depends on the linear dependence of the components of $F$ over $\mathbb{K}$. When the components are linearly dependent, it is given in \cite[Corollary 1.1]{ChE} (the aforementioned counterexamples to both the Markus--Yamabe and LaSalle conjectures have linearly dependent components).
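As a side remark for readers who wish to experiment with such vector fields, nilpotency of a Jacobian matrix is straightforward to test with a computer algebra system. The following minimal Python (sympy) sketch does this for a toy polynomial map chosen by us purely for illustration (it is not one of the vector fields appearing in the references above); for a map of $\mathbb{R}^3$, it suffices to check that $JF^{3}$ vanishes identically.
\begin{verbatim}
# Minimal sympy sketch: testing nilpotency of the Jacobian of a polynomial
# map.  The map below is a toy example chosen only for illustration.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([y + z**2, z, 0])      # sample polynomial vector field on R^3

JF = F.jacobian([x, y, z])           # strictly upper triangular here
print((JF**3).applyfunc(sp.expand))  # zero matrix, hence JF is nilpotent
\end{verbatim}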
When the components are linearly independent, the first steps towards such a characterization were taken by M. Chamberland and A. van den Essen in \cite{ChE}. In particular, they prove in \cite[Theorem 2.1]{ChE} that any polynomial vector field $F(x,y,z)=(u(x,y), v(x,y,z), h(u(x,y)))$ is nilpotent if and only if \begin{displaymath} F(x,y,z) = \left(g(y+b(x)),v_1 z - (b_1 + 2 v_1 \alpha x) g(y+b(x)),\alpha (g(y+b(x)))^2\right) , \end{displaymath} where $ b(x) = b_1 x + v_1 \alpha x^2 $, $v_1 \alpha \neq 0 $ and $ g \in \mathbb{K}[t]$, with $\deg g \geq 1$ and $ g(0) = 0$. Later, Chamberland considered the above form of $F$ with the particular parameters $b_1 = 0$, $v_1 = 1$ and $\alpha = -1$. He showed in \cite{Cha} that the discrete dynamical system induced by such a particular vector field has a unique fixed point, there are no $2$-cycles, and under a suitable condition on the function $g$ there exists a $3$-cycle, which shows that nilpotent polynomial vector fields can induce rich dynamics. Another step towards the characterization of nilpotent polynomial vector fields in dimension three was taken by D. Yan and G. Tang in \cite{Dan1}, where they generalize the results of \cite{ChE}. This characterization problem has been pursued by several authors; see for instance \cite{Dan2} and references therein. One of the most recent and general results on this issue is \cite[Theorem 1]{CaVa}, which gives the characterization of all nilpotent polynomial vector fields $\displaystyle F \colon \mathbb{K}^n \longrightarrow \mathbb{K}^n$ of the form \begin{equation*} F(x_1,x_2,\ldots,x_n) = (F_1(x_1,x_2),F_2(x_1,x_2,x_3), \ldots, F_{n-1}(x_1,x_2,x_{n}),F_n(x_1,x_2)). \end{equation*} In the three-dimensional real case, after renaming the variables $x_1,x_2,x_3$ as $x,y,z$, such a characterization reads as follows. The polynomial vector field \begin{equation} \label{campo-vectorial-3d-corto} F \colon \mathbb{R}^3 \longrightarrow \mathbb{R}^3,\; (x,y,z)\longmapsto (F_1(x,y),F_2(x,y,z),F_3(x,y)), \end{equation} is nilpotent if and only if \begin{equation} \label{campo-vectorial-3d} \begin{aligned} F_1(x,y) &= P_1\big(y + A_1(x)\big),\\[0.5pt] F_2(x,y,z) &= P_2\Big(z + \frac{1}{d_2 p_{d_2}} A_2(x)\Big)-A_1'(x)F_1(x,y),\\[0.5pt] F_3(x,y) &= -\frac{1}{d_{2} p_{d_2}} \left[-\frac{1}{2} {A_1''(x)} \big(F_1(x,y)\big)^{2}+A_{2}'(x) F_1(x,y)\right]+A_3, \end{aligned} \end{equation} where \begin{equation} \label{cond-Pols-P-3d} \left \{ \begin{aligned} & P_i \in \mathbb{R}[s],\; d_i:=\deg P_i \geq 1,\; p_{d_i}:= \mbox{the leading coefficient of $P_i$},\\ & A_1(x)=a_{10}+a_{11}x+a_{12}x^2, \,\, A_2(x)=a_{20}+a_{21}x, \: A_3\in \mathbb{R}.\\ &\textrm{If } d_2 > 1 \textrm{, then } A_1''(x)\equiv 0. \end{aligned} \right . \end{equation} In this work, inspired by Chamberland's article \cite{Cha}, we will analyze the discrete and continuous dynamics induced by the nilpotent polynomial vector fields \eqref{campo-vectorial-3d-corto}, whose components are given in \eqref{campo-vectorial-3d}. More precisely, on the one hand, we will study the discrete dynamical system $(\mathbb{R}^3,\mathbb{N}_0,F)$, where the dynamics is given by \begin{equation} \label{sistema-discreto-3d} \begin{aligned} {x}_{k+1} &= F_1\left(x_{k},y_{k}\right), \\[0.5pt] {y}_{k+1} &= F_2\left(x_{k},y_{k},z_{k}\right), \\[0.5pt] {z}_{k+1} &= F_3\left(x_{k},y_{k}\right).
\end{aligned} \end{equation} On the other hand, we will study the continuous dynamical system $(\mathbb{R}^3,\mathbb{R},\Phi)$, where $\Phi$ is the flow generated by the differential system \begin{equation} \label{sistema-diferencial-3d} \begin{aligned} \dot{x} &= F_1(x,y),\\[0.5pt] \dot{y} &= F_2(x,y,z),\\[0.5pt] \dot{z} &= F_3(x,y). \end{aligned} \end{equation} Concerning the discrete dynamics, our main result is the following. \begin{theorem} \label{Teorema-1} Each system \eqref{sistema-discreto-3d} has a unique fixed point and there are no $2$-cycles. In addition, \begin{enumerate} \item if $\deg A_1(x)=1$, then the fixed point is a global attractor, which is reached from any initial point after three iterations; \item if $\deg A_1(x)=2$, then the system has a $3$-cycle which is not a repeller. \end{enumerate} \end{theorem} Although the assertions of this result are essentially the same as in the work of Chamberland \cite[Theorem 3.1]{Cha}, we emphasize that the above theorem is a generalization. Indeed, the family of nilpotent vector fields of the form \eqref{campo-vectorial-3d-corto} is wider than the one studied in \cite{Cha}. For instance, the polynomial $P_2(s)$ in \eqref{campo-vectorial-3d} is of arbitrary degree, while in Chamberland's paper it is linear. Regarding the continuous dynamics, our main result is as follows. \begin{theorem} \label{Teorema-2} Each differential system \eqref{sistema-diferencial-3d} is polynomially integrable. In addition, if $\deg A_1(x)=1$, then differential system \eqref{sistema-diferencial-3d} is polynomially completely integrable. \end{theorem} This result gives valuable information to describe and comprehend the long-term behavior of the trajectories of each differential system \eqref{sistema-diferencial-3d}. In particular, it says that the dynamics of the system occurs on the algebraic surfaces defined by the level sets of the polynomial first integral guaranteed by the theorem. Thus, the topology of these surfaces plays an important role in the kind of orbits that they can support. For instance, if they are simply connected surfaces and do not contain any singularity of the system, then they cannot support periodic orbits of the differential system. In order to get more precise information on the dynamics of the trajectories of these continuous dynamical systems, we will study some particular cases according to the degrees of $P_1(s)$ and $P_2(s)$ in \eqref{cond-Pols-P-3d}. Thus, we have the following result. \begin{proposition} \label{resultados-caso-d1=d2=1} Assume that $\deg P_1(s)= \deg P_2(s)=1$ in system \eqref{sistema-diferencial-3d}. \begin{enumerate} \item If $\deg A_1(x)=1$, then each nontrivial trajectory of system \eqref{sistema-diferencial-3d} goes to infinity in forward and backward time. \item If $\deg A_1(x)=2$ and we define $\mu:=A_3\,a_{12}\,p_{d_2}\,p_{d_1}^2$, then \begin{enumerate} \item each trajectory of \eqref{sistema-diferencial-3d} goes to infinity in forward and backward time if $\mu>0$, \item there exists a unique cuspidal invariant surface $\mathcal{S}_{0}$ of \eqref{sistema-diferencial-3d} and each trajectory of \eqref{sistema-diferencial-3d} in $\mathbb{R}^3 \backslash \mathcal{S}_{0}$ goes to infinity in forward and backward time if $\mu=0$, \item there exists a unique isochronous periodic surface $\mathcal{S}_{\mu}$ of \eqref{sistema-diferencial-3d} and each trajectory of \eqref{sistema-diferencial-3d} in $\mathbb{R}^3 \backslash \mathcal{S}_{\mu}$ goes to infinity in forward and backward time if $\mu<0$.
\end{enumerate} \end{enumerate} \end{proposition} The properties in statement $2)$ of this proposition reveal an interesting and surprising analogy with the Bogdanov--Takens bifurcation. Indeed, in such a bifurcation, we can choose a $1$-parameter curve in such a way that the corresponding system has no singularities for positive values of the parameter, a cusp singularity if the parameter is zero, and a unique periodic orbit (limit cycle) for negative values of the parameter. See \cite[p. 324]{Kus}. The paper is organized as follows. In Section 2 we will simplify the expression of dynamical systems \eqref{sistema-discreto-3d} and \eqref{sistema-diferencial-3d} through polynomial automorphisms. We use the simplified expressions to analyze the discrete dynamics in Section 3 and the continuous dynamics in Section 4. Some concluding remarks, questions and comments are given in Section 5. \section{Simpler conjugated systems} \label{Seccion-2} The main idea to prove our results is the use of polynomial automorphisms of $\mathbb{R}^3$ to transform the original dynamical systems into new ones with simpler expressions. The transformed dynamical systems are easily analyzed. Concretely, the polynomial map \begin{equation} \label{cambio-variables-3d-normal} \begin{tikzcd} (u,v,w) \arrow{r}[]{\Psi} & \displaystyle \bigg(u, v - A_1(u), w - \frac{1}{d_2 p_{d_2}} A_2(u)\bigg)=(x,y,z) \end{tikzcd} \end{equation} is a polynomial automorphism of $\displaystyle \mathbb{R}^3$, whose inverse is \begin{equation} \label{cambio-variables-3d-inversa} \begin{tikzcd} (x,y,z) \arrow{r}[]{\Psi^{-1}} & \displaystyle \bigg(x, y + A_1(x), z + \frac{1}{d_2 p_{d_2}} A_2(x)\bigg)=(u,v,w). \end{tikzcd} \end{equation} If we define $\displaystyle G(u,v,w) := (\Psi^{-1} \circ F \circ \Psi)(u,v,w)$, then $\displaystyle (\mathbb{R}^3,\mathbb{N}_0,F)$ and $\displaystyle (\mathbb{R}^3,\mathbb{N}_0,G)$ are conjugated. Explicitly, by using equations \eqref{campo-vectorial-3d}, \eqref{cond-Pols-P-3d}, \eqref{cambio-variables-3d-normal}, and \eqref{cambio-variables-3d-inversa}, the discrete dynamical system \eqref{sistema-discreto-3d} is conjugated to the system \begin{equation} \label{sistema-discreto-3d-conjugado} \begin{aligned} {u}_{k+1} &= P_1\left(v_{k}\right), \\ {v}_{k+1} &= P_2\left(w_{k}\right)+a_{12}P_1\left(v_{k}\right)\Big(P_1\left(v_{k}\right)-2u_{k}\Big)+a_{10}, \\ {w}_{k+1} &= \frac{a_{12}}{d_2 p_{d_2}} \big(P_1(v_k)\big)^2 + A_3 + \frac{a_{20}}{d_2 p_{d_2}}. \end{aligned} \end{equation} Analogously, by using \eqref{cambio-variables-3d-inversa} as a change of coordinates, together with equations \eqref{campo-vectorial-3d} and \eqref{cond-Pols-P-3d}, the differential system \eqref{sistema-diferencial-3d} becomes \begin{equation} \label{sistema-diferencial-transformado-3d} \begin{aligned} \dot{u} &= P_1(v),\\[0.5pt] \dot{v} &= P_2(w),\\[0.5pt] \dot{w} &= \frac{a_{12}}{d_{2} p_{d_2}} \big(P_1(v)\big)^2+A_3. \end{aligned} \end{equation} Thus, the continuous dynamical systems associated with differential systems $\eqref{sistema-diferencial-3d}$ and \eqref{sistema-diferencial-transformado-3d} are conjugated. \section{Discrete dynamics} In this section, we will prove the general properties of the discrete dynamical system \eqref{sistema-discreto-3d} stated in Theorem~\ref{Teorema-1}. \begin{proof}[Proof of Theorem \ref{Teorema-1}] From the previous section, we know that the discrete dynamical systems \eqref{sistema-discreto-3d} and \eqref{sistema-discreto-3d-conjugado} are conjugated.
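This conjugation is also easy to confirm with a computer algebra system. The following minimal Python (sympy) sketch, in which $P_1$ and $P_2$ are illustrative choices of ours (with $d_2=1$) and the coefficients $a_{10},a_{11},a_{12},a_{20},a_{21},A_3$ are kept symbolic, checks the identity $G=\Psi^{-1}\circ F\circ\Psi$ coordinate by coordinate against \eqref{sistema-discreto-3d-conjugado}; it is only a sanity check and is not needed in what follows.
\begin{verbatim}
# Minimal sympy sketch: verifying that Psi^{-1} o F o Psi coincides with the
# conjugated map, for one illustrative choice of P1 and P2 (d2 = 1).
import sympy as sp

u, v, w = sp.symbols('u v w')
a10, a11, a12, a20, a21, A3 = sp.symbols('a10 a11 a12 a20 a21 A3')

P1 = lambda s: s**2                  # illustrative P1, d1 = 2
P2 = lambda s: s                     # illustrative P2, d2 = 1, p_{d_2} = 1
c = 1                                # c = d2 * p_{d_2}
A1  = lambda t: a10 + a11*t + a12*t**2
A1p = lambda t: a11 + 2*a12*t        # A1'
A2  = lambda t: a20 + a21*t

def F(x, y, z):                      # components of the nilpotent field
    F1 = P1(y + A1(x))
    F2 = P2(z + A2(x)/c) - A1p(x)*F1
    F3 = -(-sp.Rational(1, 2)*(2*a12)*F1**2 + a21*F1)/c + A3
    return (F1, F2, F3)

Psi     = lambda u, v, w: (u, v - A1(u), w - A2(u)/c)
Psi_inv = lambda x, y, z: (x, y + A1(x), z + A2(x)/c)

G = Psi_inv(*F(*Psi(u, v, w)))       # G = Psi^{-1} o F o Psi
claimed = (P1(v),
           P2(w) + a12*P1(v)*(P1(v) - 2*u) + a10,
           a12*P1(v)**2/c + A3 + a20/c)
print([sp.expand(g - h) for g, h in zip(G, claimed)])   # expect [0, 0, 0]
\end{verbatim}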
Hence, we will use system \eqref{sistema-discreto-3d-conjugado} to give the proof of the theorem. The general part of the result will be proved by considering two cases: $d_2>1$ and $d_2=1$. In addition, the proof of Statements $1)$ and $2)$ will be provided in the first and second cases, respectively. {\it Case 1: $d_2>1$}. Taking into account \eqref{cond-Pols-P-3d}, $a_{12}=0$. So system \eqref{sistema-discreto-3d-conjugado} is simplified. Thus, its fixed points are the solutions to the algebraic system $$ u= P_1\left(v\right), \quad v = P_2\left(w\right)+a_{10}, \quad w = A_3 + \frac{a_{20}}{d_2 p_{d_2}}, $$ which has the unique solution $$ (u_0,v_0,w_0)=\left(P_1\left(v_0\right),P_2\left(w_0\right)+a_{10},A_3 + \frac{a_{20}}{d_2 p_{d_2}}\right). $$ Moreover, if $(u,v,w)$ were part of a $2$-cycle, then $$ u = P_1\big(P_2\left(w\right)+a_{10}\big), \quad v = P_2\Big(A_3 + \frac{a_{20}}{d_2 p_{d_2}}\Big)+a_{10}, \quad {w} = A_3 + \frac{a_{20}}{d_2 p_{d_2}}. $$ Hence $(u,v,w)=(u_0,v_0,w_0)$, the fixed point, so there are no $2$-cycles in this case. Moreover, the third iterate $(u_3,v_3,w_3)$ of an arbitrary point $(u,v,w)$ is $$ (u_{3},v_3,w_3)= \bigg( P_1\Big(P_2\Big(A_3 +\frac{a_{20}}{d_2 p_{d_2}}\Big)+a_{10}\Big), P_2\Big(A_3 +\frac{a_{20}}{d_2 p_{d_2}}\Big)+a_{10}, A_3 +\frac{a_{20}}{d_2 p_{d_2}} \bigg), $$ which clearly does not depend on the coordinates of the initial point $(u,v,w)$. The point $(u_3,v_3,w_3)$ is precisely the fixed point $(u_0,v_0,w_0)$. This last argument proves Statement 1), because the only assumption used above is $a_{12}=0$, which holds whenever $\deg A_1(x)=1$. {\it Case 2: $d_2=1$}. We assume $P_2(s)=p_{21}s+p_{20}$, and define $\alpha:=p_{20}+a_{10}$. Moreover, we can assume $\deg A_1(x)=2$ because otherwise we are in the situation of the previous case. The polynomial map \begin{equation*} \begin{tikzcd} (X,Y,Z) \arrow{r}[]{\Psi_2} & \displaystyle \bigg( X+P_1(Y),\, Y, \, \frac{Z+a_{12}P_1(Y)(P_1(Y)+2X)-\alpha}{p_{21}} \bigg) =(u,v,w) \end{tikzcd} \end{equation*} is a polynomial automorphism of $\mathbb{R}^3$ that gives \begin{displaymath} (\Psi_2^{-1}\circ G \circ \Psi_2)(X,Y,Z)= \Big(P_1(Y)-P_1(Z),\, Z,\, a_{12}\big(P_1(Y)-P_1(Z)\big)^2+\nu\Big), \end{displaymath} where $\nu:=p_{21}A_3+a_{20}+\alpha$. Thus, system \eqref{sistema-discreto-3d} is also conjugated to the system \begin{equation} \label{sistema-discreto-3d-conjugado-2} \begin{aligned} {X}_{k+1} &= P_1(Y_k)-P_1(Z_k), \\[0.2pc] {Y}_{k+1} &= Z_k, \\[0.0pc] {Z}_{k+1} &= a_{12}\big(P_1(Y_k)-P_1(Z_k)\big)^2+\nu. \end{aligned} \end{equation} From the second equation in \eqref{sistema-discreto-3d-conjugado-2} it follows that this last discrete dynamical system has a unique fixed point at $(X_0,Y_0,Z_0)=(0,\nu,\nu)$. Moreover, if $(X,Y,Z)$ were part of a $2$-cycle, then $$ {X} = P_1(Z)-P_1(Y), \quad {Y} = a_{12}X^2+\nu, \quad {Z} = a_{12}X^2+\nu. $$ Hence $(X,Y,Z)=(0,\nu,\nu)$, so there are no $2$-cycles. This completes the proof of the general part of the theorem. To finish, we will prove Statement~2). We claim that if the equation \begin{equation} \label{condicion-de-3-ciclo} a_{12}\big(P_1(s)-P_1(\nu)\big)^2+\nu=s \end{equation} has a real solution $s_0\neq \nu$, then the point $(0,s_0,\nu)$ is part of a $3$-cycle of the discrete dynamical system \eqref{sistema-discreto-3d-conjugado-2}.
Indeed, if \eqref{condicion-de-3-ciclo} holds for $s_0\neq \nu$, then $$ (0,s_0,\nu) \longrightarrow \big(P_1(s_0)-P_1(\nu),\nu,s_0\big) \longrightarrow \big(P_1(\nu)-P_1(s_0),s_0,s_0\big) \longrightarrow (0,s_0,\nu), $$ which is a $3$-cycle because none of these points is the fixed point. Looking for a solution of equation \eqref{condicion-de-3-ciclo} is equivalent to looking for a zero of the polynomial \begin{equation}\label{h} h(s)=a_{12}\big(P_1(s)-P_1(\nu)\big)^2+\nu-s. \end{equation} Since $h(s)$ has even degree and a simple zero at $s=\nu$ (because $h(\nu) = 0$ and $h'(\nu) = -1$), it must have another real zero. Therefore, our claim follows. The linearization of \eqref{sistema-discreto-3d-conjugado-2} at an arbitrary point $(X,Y,Z)$ is the matrix $$ \begin{pmatrix} 0 & P_1'(Y) & -P_1'(Z) \\ 0 & 0 & 1 \\ 0 & 2\,a_{12}(P_1(Y)-P_1(Z))P_1'(Y) & - 2\,a_{12}(P_1(Y)-P_1(Z))P_1'(Z) \end{pmatrix}. $$ By evaluating this matrix at each one of the three points of the $3$-cycle and computing their product we obtain the linearization of the third iteration of \eqref{sistema-discreto-3d-conjugado-2} at the point $(0,s_0,\nu),$ which after using that $(P_1(s_0)-P_1(\nu))^2 = (s_0 - \nu)/a_{12}$ can be written as $$ \begin{pmatrix} 0 & * & * \\ 0 & L_{22} & * \\ 0 & 0 & 0 \end{pmatrix} $$ where $L_{22} := 4 a_{12}^2 \big( P_{1}(s_{0}) - P_{1}(\nu) \big)^2 (P_1^{'}(s_0))^2.$ Therefore, the $3$-cycle is an attractor when $|L_{22}|<1$ and it is a saddle when $|L_{22}|>1$. \end{proof} \begin{remark} When $P_1(s)$ is linear, the equation $h(s)=0$, with $h$ as in \eqref{h}, has only one solution $s_0$ different from $\nu$, and $L_{22} = 4.$ Hence, in this case the $3$-cycle of \eqref{sistema-discreto-3d-conjugado-2} is not an attractor. We can find suitable $a_{12}, \nu$ and $P_1(s)$ of degree two such that \eqref{sistema-discreto-3d-conjugado-2} has a $3$-cycle which is an attractor. \end{remark} \begin{remark} When $P_1(s)$ is linear, we compute, by using computational software, the Gr\"{o}bner basis of the components of $(\Psi_2^{-1}\circ G \circ \Psi_2)^5(X,Y,Z)-(X,Y,Z)$. We obtain three polynomials: one of them depends only on $Z$, has even degree and admits $\nu$ as a root, while the other two are linear in $X$ and $Y$. Hence, \eqref{sistema-discreto-3d-conjugado-2} has a $5$-cycle. \end{remark} \section{Continuous dynamics} In this section, we will prove the general properties of the continuous dynamical system \eqref{sistema-diferencial-3d}. Before that, we need to recall some concepts related to the assertions of Theorem~\ref{Teorema-2} and Proposition~\ref{resultados-caso-d1=d2=1}. A non-constant function $H \colon \mathbb{R}^3 \longrightarrow \mathbb{R}$ is a $C^r$ \emph{first integral} for differential system \eqref{sistema-diferencial-3d} if the equation \begin{equation*} \langle F, \nabla H\rangle = F_1H_x+F_2H_y+F_3H_z=0 \end{equation*} holds on the whole $ \mathbb{R}^3$ and $H$ is of class $C^r$, with $r=1,2,\ldots,\infty,\omega$, where $C^{\omega}$ stands for analytic functions. In addition, if $H$ is a polynomial function, then we have a \emph{polynomial first integral}. Two $C^r$ functions $H_1(x,y,z)$ and $H_2(x,y,z)$ are \emph{functionally independent} in $\mathbb{R}^3$ if their gradients, $\nabla H_1$ and $\nabla H_2$, are linearly independent in a full Lebesgue measure subset of $\mathbb{R}^3$. Then, by definition, differential system \eqref{sistema-diferencial-3d} is $C^r$ (\emph{polynomially}) \emph{integrable} if it has a $C^r$ (polynomial) first integral in $\mathbb{R}^3$.
Furthermore, it is $C^r$ (\emph{polynomially}) \emph{completely integrable} if it has two functionally independent $C^r$ (polynomial) first integrals in $\mathbb{R}^3$. \begin{definition} \label{iso} A \emph{periodic surface of system \eqref{sistema-diferencial-3d}} is a surface $S\subset \mathbb{R}^3$ which is foliated by periodic orbits of the system. A periodic surface $S$ of system \eqref{sistema-diferencial-3d} is \emph{isochronous} when all its periodic orbits have the same period. \end{definition} \begin{proof}[Proof of Theorem \ref{Teorema-2}] From Section \ref{Seccion-2}, we know that differential systems \eqref{sistema-diferencial-3d} and \eqref{sistema-diferencial-transformado-3d} are polynomially conjugated through the change of coordinates \eqref{cambio-variables-3d-inversa}. Moreover, the last two equations in \eqref{sistema-diferencial-transformado-3d} form a planar Hamiltonian system, whose Hamiltonian function is $$ G(v,w):=\int P_2(w)\, dw - \frac{a_{12}}{d_{2} p_{d_2}}\int(P_1(v))^2\, dv -A_3v. $$ Then, by extending this function to $\mathbb{R}^3$, that is, by defining the polynomial function \begin{equation} \label{integral-primera-general} H(u,v,w):=\int P_2(w)\, dw - \frac{a_{12}}{d_{2} p_{d_2}}\int(P_1(v))^2\, dv -A_3v, \end{equation} we have $$ H_{u}=0,\quad H_{v}=-\frac{a_{12}}{d_{2} p_{d_2}}(P_1(v))^2-A_3\quad \mbox{and}\quad H_{w}=P_2(w). $$ Thus, $$ P_1(v)\, H_{u}+P_2(w)\,H_{v}+\left( \frac{a_{12}}{d_{2} p_{d_2}} \big(P_1(v)\big)^2+A_3\right) H_{w}= 0, \quad \forall \; (u,v,w)\in \mathbb{R}^3. $$ Hence, $H$ is a polynomial first integral of system \eqref{sistema-diferencial-transformado-3d}. Since the change of variables \eqref{cambio-variables-3d-inversa} is polynomial, differential system \eqref{sistema-diferencial-3d} also has a polynomial first integral. We now prove the second part of the theorem. Since $\deg A_1(x)=1$, $a_{12}=0$. Then, system \eqref{sistema-diferencial-transformado-3d} reduces to \begin{equation} \label{sistema-diferencial-transformado-3d-a12-cero} \begin{aligned} \dot{u} &= P_1(v),\\[0.5pt] \dot{v} &= P_2(w),\\[0.5pt] \dot{w} &= A_3. \end{aligned} \end{equation} We have already proved that system \eqref{sistema-diferencial-transformado-3d} has a polynomial first integral; we will now show the existence of an additional polynomial first integral of the system. If $A_3=0$, then \eqref{sistema-diferencial-transformado-3d-a12-cero} admits the two functionally independent polynomial first integrals $$ H_1(u,v,w)=w \quad \mbox{and} \quad H_2(u,v,w)=\int P_1(v)\,dv-uP_2(w). $$ If $A_3 \neq 0$, then \eqref{sistema-diferencial-transformado-3d-a12-cero} admits the two functionally independent polynomial first integrals $$ H_1(u,v,w)=\int P_2(w)\,dw-A_3v $$ and $$ H_2(u,v,w)=A_3^{d_1+1}u-\sum_{j=0}^{d_1}(-1)^jA_3^{d_1-j}\left(\frac{d^j}{dv^j} P_1(v)\right)\,\xi_j(w), $$ where $\xi_0(w)=w$ and $\xi_j(w)=\int P_2(w)\xi_{j-1}(w)\,dw$ for $j=1,2,\ldots,d_1$. In both previous cases $H_1(u,v,w)$ is the reduction of the polynomial first integral \eqref{integral-primera-general}. Thus, system \eqref{sistema-diferencial-transformado-3d-a12-cero} is polynomially completely integrable. Therefore, system \eqref{sistema-diferencial-3d}, with $\deg A_1(x)=1$, is also polynomially completely integrable because it and system \eqref{sistema-diferencial-transformado-3d-a12-cero} are equivalent after a polynomial change of coordinates.
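Before closing the proof, we note that the recursive construction of $H_2$ above can be confirmed symbolically. The following minimal Python (sympy) sketch, with an illustrative choice of $P_1$ and $P_2$ made by us and with $A_3$ kept symbolic, checks that the derivatives of $H_1$ and $H_2$ along the vector field of \eqref{sistema-diferencial-transformado-3d-a12-cero} vanish identically; it is only a sanity check and is not part of the argument.
\begin{verbatim}
# Minimal sympy sketch: checking that H1 and H2 are first integrals of the
# reduced system u' = P1(v), v' = P2(w), w' = A3 (case A3 != 0), for one
# illustrative choice of P1 and P2.
import sympy as sp

u, v, w, A3 = sp.symbols('u v w A3')

P1 = v**3 - 2*v                      # illustrative P1(v), d1 = 3
P2 = w**2 + 1                        # illustrative P2(w), d2 = 2
d1 = sp.degree(P1, v)

field = (P1, P2, A3)                 # right-hand side of the reduced system

H1 = sp.integrate(P2, w) - A3*v

xi = [w]                             # xi_0 = w,  xi_j = int P2*xi_{j-1} dw
for j in range(1, d1 + 1):
    xi.append(sp.integrate(P2*xi[-1], w))

H2 = A3**(d1 + 1)*u - sum((-1)**j * A3**(d1 - j) * sp.diff(P1, v, j) * xi[j]
                          for j in range(d1 + 1))

def lie_derivative(H):               # <field, grad H>
    return sum(f*sp.diff(H, s) for f, s in zip(field, (u, v, w)))

print(sp.simplify(lie_derivative(H1)), sp.simplify(lie_derivative(H2)))  # 0 0
\end{verbatim}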
\end{proof} \subsection{Case $d_1=d_2=1$} \begin{proof}[Proof of Proposition~\ref{resultados-caso-d1=d2=1}] Recall that differential systems \eqref{sistema-diferencial-3d} and \eqref{sistema-diferencial-transformado-3d} are equivalent under the polynomial change of coordinates \eqref{cambio-variables-3d-inversa}. Statement $1)$. Since $\deg A_1(x)=1$, $a_{12}=0$. The linear change of coordinates $$ X=\frac{1}{p_{d_1}p_{d_2}}\,u,\; Y=\frac{1}{p_{d_1}p_{d_2}}\,P_1(v),\; Z=\frac{1}{p_{d_2}}\,P_2(w) $$ transforms the differential system \eqref{sistema-diferencial-transformado-3d}, with $a_{12}=0$, into the differential system \begin{equation*} \begin{aligned} \dot{X} &= Y,\\[0.5pt] \dot{Y} &= Z,\\[0.5pt] \dot{Z} &= A_3, \end{aligned} \end{equation*} which can be solved explicitly. Indeed, the trajectory $\phi_t(X_0,Y_0,Z_0)$ of the system passing through the point $(X_0,Y_0,Z_0)$ has the components: $$ X(t)=\dfrac{A_{{3}}}{6}\,{t}^{3}+\frac{Z_{{0}}}{2}\,{t}^{2}+Y_{{0}}\,t+X_{{0}},\quad Y(t)=\dfrac{A_{{3}}}{2}\,{t}^{2}+Z_{{0}}\,t+Y_{{0}},\quad Z(t)=A_{{3}}\,{t}+Z_{{0}}. $$ So if $(X_0,Y_0,Z_0)$ is not a singularity, then the nontrivial trajectory $\phi_t(X_0,Y_0,Z_0)$ escapes to infinity in forward and backward time. Statement $2)$. Since $\deg A_1(x)=2$, $a_{12} \neq 0$. The linear change of coordinates $$ X=(a_{12}\,p_{d_1})\,u,\; Y=(a_{12}\,p_{d_1})\,P_1(v),\; Z=(a_{12}\,p_{d_1}^2)\,P_2(w) $$ transforms the differential system \eqref{sistema-diferencial-transformado-3d}, with $a_{12} \neq 0$, into the differential system \begin{equation} \label{sistema-diferencial-transformado-3d-lineal-a12-nocero} \begin{aligned} \dot{X} &= Y,\\[0.5pt] \dot{Y} &= Z,\\[0.5pt] \dot{Z} &= Y^2+\mu, \end{aligned} \end{equation} where $\mu=A_3\,a_{12}\,p_{d_2}\,p_{d_1}^2$ (as defined in Proposition \ref{resultados-caso-d1=d2=1}). Moreover, the first integral \eqref{integral-primera-general} for system \eqref{sistema-diferencial-transformado-3d} becomes $$ H(X,Y,Z)=-\mu\,Y+\frac{Z^2}{2}-\frac{Y^3}{3}, $$ which is a first integral for system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero}. Thus, a trajectory of the system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero} is contained in a level surface $H^{-1}(c)\subset \mathbb{R}^3$ of $H$, with $c \in \mathbb{R}$. Since $H$ does not depend on $X$, $H^{-1}(c)$ has the form $$ H^{-1}(c)=\mathbb{R} \times G^{-1}(c), $$ where $G(Y, Z)=-\mu\,Y+Z^2/2-Y^3/3$. Moreover, the last two equations in \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero} form the planar Hamiltonian system associated with $G(Y, Z)$. We will give the proof of $a)$, $b)$ and $c)$ of the statement in three cases: $\mu>0$, $\mu=0$ and $\mu<0$. {\it Case 1: $\mu>0$}. In this case, $G(Y, Z)$ does not have any singular point in the $YZ$-plane. Thus, $G^{-1}(c)$ is homeomorphic to $\mathbb{R}$ for any $c\in \mathbb{R}$. In addition, system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero} has no singularities in $\mathbb{R}^3$, so each $H^{-1}(c)$ is a simply connected surface without any singularity of the system. Therefore, each trajectory goes to infinity in forward and backward time. {\it Case 2: $\mu=0$}. Here $G(Y, Z)$ has the origin as its unique singularity in the $YZ$-plane. In fact, $(0,0)$ is a cusp singularity of $G(Y, Z)$. Since $G(0,0)=0$, $G^{-1}(0)$ is a cuspidal cubic curve. Hence, $G^{-1}(c)$ is homeomorphic to $\mathbb{R}$ for any $c\neq 0$.
In addition, since all the singularities of \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero} are of the form $(X,0,0)$, they are contained in the cuspidal invariant (singular) surface $\mathcal{S}_{0}:=H^{-1}(0)=\mathbb{R} \times G^{-1}(0)$. This implies that $H^{-1}(c)$, with $c \neq 0$, is a simply connected surface without any singularity of the system. Hence, all trajectories in $\mathbb{R}^3 \backslash \mathcal{S}_{0}$ have to escape to infinity in forward and backward time. {\it Case 3: $\mu<0$}. We write $\mu=-\beta^2$, with $\beta>0$. Then, by using the change of coordinates $ X=\sqrt{\beta}\,x,\; Y=\beta(y-1),\; Z={\beta}^{3/2}\,z $ and the linear change of time $\tau=\sqrt{\beta}\, t$, the differential system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero}, with $\mu=-\beta^2$, is transformed into the differential system \begin{equation} \label{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} \begin{aligned} x' &= y-1,\\[0.5pt] y' &= z,\\[0.5pt] z' &= y(y-2), \end{aligned} \end{equation} where the prime denotes the derivative with respect to the new time variable $\tau$. Thus, to complete the proof of this case, we will demonstrate that system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} has a unique isochronous periodic surface $\mathcal{S}^{*}$ and that all its trajectories in $\mathbb{R}^3 \backslash \mathcal{S}^{*}$ go to infinity in forward and backward time. The differential system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} does not have any singularity in the whole $\mathbb{R}^3$ and it has the polynomial first integral $$ H(x,y,z)=(6y^2+3z^2-2y^3)/6. $$ Since this first integral does not depend on $x$, $ H^{-1}(c)=\mathbb{R} \times G^{-1}(c), $ where $G(y,z)=(6y^2+3z^2-2y^3)/6$. The last two equations in \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} form, in the $yz$-plane, the planar Hamiltonian system associated with $G(y,z)$, whose singularities are $(0,0)$ and $(2,0)$. A simple computation shows that they are a center and a saddle, respectively. Thus, this Hamiltonian system has a period annulus $\mathscr{P}$ surrounding the center $(0,0)$ and bounded by the homoclinic loop $\Gamma$ that joins the stable and the unstable manifolds of the saddle point $(2,0)$. Since $G(0,0)=0$ and $G(2,0)=4/3$, for all $c \in (0,4/3)$ the level curve $G^{-1}(c)$ has a connected component $\gamma_c$ homeomorphic to the unit circle $\mathbb{S}^1$ that forms part of $\mathscr{P}$ and the level surface $H^{-1}(c)$ has a connected component $ \mathcal{S}_c$ homeomorphic to the cylinder $\mathbb{R} \times \mathbb{S}^1$. See Figure \ref{Fig1}. The straight lines $L_{0}:=\mathbb{R} \times \{(0,0)\}$ and $L_{2}:=\mathbb{R} \times \{(2,0)\}$ are invariant under the flow of \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3}. Thus, as trajectories, they go to infinity in forward and backward time. Moreover, a straightforward analysis on the topology of $G^{-1}(c)$ implies that for any $c\in \mathbb{R}$, $$ H^{-1}(c)\cap \left(\mathbb{R}^3 \backslash \big(\cup_{c \in (0,4/3)} \mathcal{S}_c \cup L_0\cup L_2\big)\right) $$ is formed only by disjoint simply connected surfaces.
Hence, $i)$ only the invariant surfaces $\mathcal{S}_c$, with $c\in(0,4/3)$, could support periodic orbits and $ii)$ any trajectory of system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} in $\displaystyle \mathbb{R}^3 \backslash \cup_{c \in (0,4/3)} \mathcal{S}_c$ goes to infinity in forward and backward time. It remains to prove the existence of only one surface $\mathcal{S}^{*}=\mathcal{S}_{c^{*}}$, with $c^{*} \in (0,4/3)$, that is foliated by periodic orbits of the same period. \begin{figure}[h] \centering \caption{Phase portrait of the planar Hamiltonian system associated with $G(y,z)$: the period annulus $\mathscr{P}$ surrounding the center $(0,0)$, bounded by the homoclinic loop $\Gamma$ through the saddle $(2,0)$.}\label{Fig1} \end{figure} In the $yz-$plane, the intersection of the period annulus $\mathscr{P}$ with the positive $z$-axis is the line segment $\sigma^{+}:=\{(0,z) \; | \; 0<z<\sqrt{8/3}\}$, which is a transversal section for the flow of the Hamiltonian system associated with $G(y, z)$. The dot product of the vector $(0,1,0)$, which is orthogonal to $\Sigma^{+}:=\mathbb{R} \times \sigma^{+}$, and the vector field $\mathcal{X}$ associated with \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} has a definite sign on $\Sigma^{+}$: $\langle (0,1,0),\mathcal{X}\rangle=z>0$. Hence, $\Sigma^{+}$ is a $2$-dimensional transversal section for the flow of system~\eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3}. As usual, we can use the energy level $c$ of $H$ to get the parametrization $$ \mathbb{R} \times (0, 4/3) \longrightarrow \Sigma^+, \, (x,c) \longmapsto (x,0, \sqrt{2c}), $$ of the transversal section $\Sigma^{+}.$ In other words, the points in $\Sigma^{+}$ can be described by the two coordinates $(x,c)$. Let $\phi_{\tau}(x,c)$ be the trajectory of system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} passing through $(x,c)\in \Sigma^{+}$. Since the right-hand side of the system does not depend on $x$, $\phi_{\tau}(x,c)$ has the form $ \big(x_{c}(\tau),\varphi_{\tau}(0,c)\big), $ where $\varphi_{\tau}(0,c)=(y_{c}(\tau),z_{c}(\tau))$ is the trajectory of the Hamiltonian system associated with $G(y,z)$, passing through the point $(0,c)\in \sigma^{+}$ at time $\tau=0$. Thus, there exists a well-defined Poincar\'e first return map $$ \begin{array}{rcl} \mathcal{P} \colon \Sigma^{+} & \longrightarrow & \Sigma^{+}\\ (x,c) & \longmapsto &\phi_{\tau(x,c)}(x,c), \end{array} $$ where $\tau(x,c)$ is the time of first return of the point $(x,c)$ to $\Sigma^{+}$. Since each trajectory $\phi_{\tau}(x,c)$ of the system starting in the region $\mathbb{R} \times \mathscr{P} \subset \mathbb{R}^3$ is contained in the surface $\mathcal{S}_c$ and $\Sigma^{+}\cap \mathcal{S}_c= \{(x,c) \;|\; x \in \mathbb{R}\}$, the $c$-coordinate of $\mathcal{P}(x,c)$ remains invariant. Thus, $\mathcal{P}(x,c)=\phi_{\tau(x,c)}(x,c)=(x_c(\tau(x,c)),c)$, which implies that the fixed points of $\mathcal{P}$ are in correspondence with the zeros of the displacement function $$ L(x,c):=x_c(\tau(x,c))-x_c(0). $$ Since the right-hand side of the system (\ref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3}) does not depend on $x$, the time of first return $\tau(x,c)$ does not either, that is, $\tau(x,c)=\tau(0,c)$. Thus, if $L(0,c^{*})=0$, then $L(x,c^*)=0$ for all $x\in \mathbb{R}$, whence $\mathcal{S}_{c^{*}}$ will be an isochronous (periodic) surface, according to Definition \ref{iso}. Hence, it is enough to study the function $$ L(0,c)=x_c(\tau(0,c))-x_c(0), \quad \mbox{with } x_c(0)=0.
$$ To complete the proof, we will prove that $L(0,c)<0$ for $0<c \leq 2/3$, $L(0,c)>0$ for $c<4/3$ sufficiently close to $4/3$, and $L(0,c)$ is a monotonically increasing function in $\big(2/3,4/3\big)$, which implies the existence of a unique $c^{*}\in (0, 4/3)$ such that $L(0,c^{*})=0$. This will prove the uniqueness of the isochronous surface $\mathcal{S}_{c^{*}}$. The proof of these assertions is analogous to the proof of the uniqueness of the limit cycle in the van der Pol differential system given in \cite[Sec 12.3]{HSD}. Hence, we will give the main ideas to prove the properties of $L(0,c)$ and we leave the details to the reader. From the fundamental theorem of calculus and the first equation in \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} we have $$ L(0,c)=x_c(\tau(0,c))-x_c(0)=\int_{0}^{\tau(0,c)}x_c'(\tau)\,d \tau= \int_{0}^{\tau(0,c)}(y_c(\tau)-1)\,d \tau. $$ In Figure \ref{Fig2} we show a sketch of the graph of $y_c(\tau)-1$, whose shape follows easily from $a)$ in Figure \ref{Fig1} and the fact that $G(1,0) = 2/3$. Hence, for $0<c \leq 2/3$, $y_c(\tau)-1<0$ for almost all $\tau$, and thus $ L(0,c)<0 $. \begin{figure}[h] \centering \caption{Sketch of the graph of $y_c(\tau)-1$.}\label{Fig2} \end{figure} For $2/3 < c < 4/3,$ we rewrite the displacement function as $$L(0,c) = \int_{\gamma_c} (y-1).$$ Following \cite[p.269]{HSD}, we divide the curve $\gamma_c$ into four curves $\gamma^1_c, \gamma^2_c, \gamma^3_c, \gamma^4_c$ as shown in Figure~\ref{Fig3}. Let $y_0:= 1- \sqrt{3}$ be the intersection of $\gamma_{2/3}$ with the negative $y-$axis. \begin{figure}[h] \centering \caption{Decomposition of the level curve $\gamma_c$ into the four arcs $\gamma^1_c$, $\gamma^2_c$, $\gamma^3_c$ and $\gamma^4_c$.}\label{Fig3} \end{figure} The curves $\gamma^1_c$ and $\gamma^3_c$ are defined for $y_0 \leq y \leq 1,$ while the curves $\gamma^2_c$ and $\gamma^4_c$ are defined for $z_1 \leq z \leq z_2$ (of course, $z_1$ and $z_2$ depend on $c$). Then $$L(0,c) = L_1(c)+L_2(c)+L_3(c)+L_4(c),$$ where $$L_i(c):= \int_{\gamma^i_{c}} (y-1), \quad i=1,2,3,4.$$ We can perform an analysis analogous to that in \cite[p.270]{HSD} to prove the following properties: $L_1(c)$ and $L_3(c)$ are negative monotonically increasing functions which are bounded in the interval $(2/3, 4/3)$; $L_2(c)$ is a positive monotonically increasing function which goes to infinity as $c$ goes to $4/3$; $L_4(c)$ is a negative decreasing function which is bounded and whose derivative goes to zero as $c$ goes to $4/3.$ The properties of $L_2(c)$ and $L_4(c)$ imply that $L_2(c) + L_4(c)$ has a unique zero and goes to infinity as $c$ goes to $4/3.$ In addition, the properties of $L_1(c)$ and $L_3(c)$ imply that $L(0,c)$ has a unique zero $c^*$ in $(2/3, 4/3).$ Furthermore, a more accurate analysis proves that $L(0,c)$ is a monotonically increasing function in $(2/3, 4/3).$ \end{proof} Figure \ref{Fig4} shows part of the phase portrait of \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3} close to the isochronous surface $\mathcal{S}_{c^{*}}$, where $\sqrt{2c^{*}}\in (1.6305,1.6310)$, that is, $c^{*}\approx 1.33$. More precisely, it gives part of the level surfaces of the first integral and the trajectories with initial conditions $(0,0,1.6323)$, in magenta, $(0,0,1.54919)$, in red, and $(0,0,1.6308)$, $(2.7,0,1.6308)$, $(-4,0,1.6308)$ in blue. The magenta trajectory advances in the positive direction of the $x$-axis, the red trajectory advances in the negative direction of the $x$-axis and the blue ones are (approximately) periodic, with the same period.
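The numerical value quoted above for the isochronous level can be reproduced independently. The following minimal Python (NumPy/SciPy) sketch, which is entirely ours and is not the computation used to produce Figure \ref{Fig4}, integrates system \eqref{sistema-diferencial-transformado-3d-lineal-a12-nocero-redu3}, evaluates the displacement function $L(0,c)$ through the first return to the section $\Sigma^{+}$, and locates its zero; the printed value of $\sqrt{2c^{*}}$ should be compared with the interval $(1.6305,1.6310)$ mentioned above.
\begin{verbatim}
# Minimal numerical sketch (ours): locating the isochronous level c* of the
# reduced system x' = y-1, y' = z, z' = y(y-2) via the displacement L(0,c).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, s):
    x, y, z = s
    return [y - 1.0, z, y*(y - 2.0)]

def crossing(t, s):              # section y = 0, crossed with y increasing
    return s[1]
crossing.direction = 1.0

def displacement(c, t_max=200.0):
    """L(0,c): x-coordinate at the first return to Sigma^+ from (0,0,sqrt(2c))."""
    sol = solve_ivp(rhs, (0.0, t_max), [0.0, 0.0, np.sqrt(2.0*c)],
                    events=crossing, dense_output=True,
                    rtol=1e-10, atol=1e-12)
    returns = sol.t_events[0]
    t_ret = returns[returns > 1e-6][0]      # discard the crossing at t = 0
    return sol.sol(t_ret)[0]

for c in (0.8, 1.0, 1.2, 1.3, 1.332):       # sign change of L(0,c)
    print(f"L(0,{c}) = {displacement(c):+.5f}")

c_star = brentq(displacement, 1.2, 1.332, xtol=1e-6)
print("c* ~", c_star, " sqrt(2 c*) ~", np.sqrt(2.0*c_star))
\end{verbatim}
The cut-off time and the tolerances are ad hoc choices; as $c$ approaches $4/3$ the return time blows up, so values of $c$ too close to $4/3$ should be avoided in this naive scheme.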
\begin{center}
\begin{figure}
\caption{}
\label{Fig4}
\end{figure}
\end{center}

\section{Concluding remarks}

From the previous sections we can observe that the discrete and continuous dynamics of the nilpotent polynomial vector fields \eqref{campo-vectorial-3d-corto} share some similarities. For instance, from \eqref{sistema-discreto-3d-conjugado} it follows that the surface
$$
w=\frac{a_{12}}{d_2 p_{d_2}} u^2 + A_3 + \frac{a_{20}}{d_2 p_{d_2}}
$$
is reached from any initial point after one iteration. Thus, this surface contains the long-term dynamics of the system $(\mathbb{R}^3,\mathbb{N}_0,G)$, which is conjugate to \eqref{sistema-discreto-3d}. Similarly, since system \eqref{sistema-diferencial-3d} is polynomially integrable, its dynamics evolves on the algebraic level surfaces of the polynomial first integral. Hence, this reduction of the dynamics by one dimension is a similarity shared, in general, by the discrete and continuous dynamical systems mentioned above.

Another similarity is that, in the case $\deg A_1(x)=1$, the discrete dynamical system \eqref{sistema-discreto-3d} and the continuous dynamical system \eqref{sistema-diferencial-3d} are completely understood. Indeed, by Theorem \ref{Teorema-1} the discrete system has a global attractor, and by Theorem \ref{Teorema-2} the continuous system is polynomially completely integrable. From conditions \eqref{cond-Pols-P-3d} we know that $\deg A_1(x)=1$ if $d_2 > 1$. Hence, according to the previous paragraph, the discrete and continuous dynamics of \eqref{campo-vectorial-3d-corto} will be more interesting for $d_1\geq 1$, $d_2=1$ and $\deg A_1(x)=2$.

The system \eqref{sistema-discreto-3d} with $d_2=1$ has been studied in Statement 2) of Theorem~\ref{Teorema-1}. We have shown that in this case there exists a unique fixed point, there are no $2$-cycles, and there exists at least one $3$-cycle. This raises the following questions:
\begin{itemize}
\item How can the existence of $m$-cycles with $m \geq 4$ be discerned analytically?
\item Could the $3$-cycle of Theorem \ref{Teorema-1} be unique and an attractor?
\end{itemize}
The system \eqref{sistema-diferencial-3d} with $d_1=d_2=1$ has been analyzed in Proposition \ref{resultados-caso-d1=d2=1}. We showed that, when the associated planar Hamiltonian system has only one period annulus, the system has only one isochronous periodic surface $\mathcal{S}_{\mu}$. Concerning this result, a natural question arises:
\begin{itemize}
\item Does any periodic orbit in $\mathcal{S}_{\mu}$ persist under the perturbation $\lambda I$ with $\lambda < 0$?
\end{itemize}
A positive answer to this question would give an affirmative response to Problem 19 in \cite{GasullProblemas}, which is related to the Markus--Yamabe Conjecture. We note that for $d_1>1$ the planar Hamiltonian system associated with system \eqref{sistema-diferencial-3d} can have several period annuli. For instance, by taking $P_1(s)=s^2-s-3$, $P_2(s)=s$, $a_{12}=1$ and $A_3=-6$, the system \eqref{sistema-diferencial-3d} has two period annuli. Hence, we can ask:
\begin{itemize}
\item How many periodic surfaces can system \eqref{sistema-diferencial-3d} have for $d_1\geq 1$ and $d_2=1$?
\end{itemize}
This question is in some sense analogous to the problem of the number of limit cycles of planar polynomial vector fields; see \cite{Ily02,RP2009} and the references therein.

In this work, we have focused on the discrete and continuous dynamics of nilpotent polynomial vector fields in dimension three.
However, we believe that the techniques used in the present work are also useful for an analogous study in higher dimensions. Recall that \cite{CaVa} provides a characterization of a wide class of nilpotent polynomial vector fields in any dimension. Of course, generalizing or extending the results presented here is not simple. For example, in dimension four there are six different families of nilpotent polynomial vector fields to be analyzed. We expect that some of these families may exhibit behaviors different from those obtained here.
\end{document}
\begin{document} \preprint{APS/123-QED} \title{Local unambiguous discrimination of symmetric ternary states} \affiliation{ Quantum Information Science Research Center, Quantum ICT Research Institute, Tamagawa University, Machida, Tokyo 194-8610, Japan } \affiliation{ School of Information Science and Technology, Aichi Prefectural University, Nagakute, Aichi 480-1198, Japan } \author{Kenji Nakahira} \affiliation{ Quantum Information Science Research Center, Quantum ICT Research Institute, Tamagawa University, Machida, Tokyo 194-8610, Japan } \author{Kentaro Kato} \affiliation{ Quantum Information Science Research Center, Quantum ICT Research Institute, Tamagawa University, Machida, Tokyo 194-8610, Japan } \author{Tsuyoshi \surname{Sasaki Usuda}} \affiliation{ School of Information Science and Technology, Aichi Prefectural University, Nagakute, Aichi 480-1198, Japan } \affiliation{ Quantum Information Science Research Center, Quantum ICT Research Institute, Tamagawa University, Machida, Tokyo 194-8610, Japan } \date{\today} \begin{abstract} We investigate unambiguous discrimination between given quantum states with a sequential measurement, which is restricted to local measurements and one-way classical communication. If the given states are binary or those each of whose individual systems is two-dimensional, then it is in some cases known whether a sequential measurement achieves a globally optimal unambiguous measurement. In contrast, for more than two states each of whose individual systems is more than two-dimensional, the problem becomes extremely complicated. This paper focuses on symmetric ternary pure states each of whose individual systems is three-dimensional, which include phase shift keyed (PSK) optical coherent states and a lifted version of ``double trine'' states. We provide a necessary and sufficient condition for an optimal sequential measurement to be globally optimal for the bipartite case. A sufficient condition of global optimality for multipartite states is also presented. One can easily judge whether these conditions hold for given states. Some examples are given, which demonstrate that, despite the restriction to local measurements and one-way classical communication, a sequential measurement can be globally optimal in quite a few cases. \end{abstract} \pacs{03.67.Hk} \maketitle \section{Introduction} Discrimination between quantum states as accurately as possible is a fundamental issue in quantum information theory. It is a well-known property of quantum theory that perfect discrimination among nonorthogonal quantum states is impossible. Then, given a finite set of nonorthogonal quantum states, we need to find an optimal measurement with respect to a reasonable criterion. Unambiguous discrimination is one of the most common strategies to distinguish between quantum states \cite{Iva-1987,Die-1988,Per-1988}. An unambiguous measurement achieves error-free (i.e., unambiguous) discrimination at the expense of allowing for a certain rate of inconclusive results. Finding an unambiguous measurement that maximizes the average success probability for various quantum states has been widely investigated (e.g., \cite{Jae-Shi-1995,Ray-Lut-Enk-2003,Eld-Sto-Has-2004,Fen-Dua-Yin-2004,Jaf-Rez-Kar-Ami-2008,Pan-Wu-2009,Kle-Kam-Bru-2010,Sug-Has-Hor-Hay-2010,Ber-Fut-Fel-2012}). When given quantum states are shared between two or more systems, measurement strategies can be classified into two types: global and local. 
A local measurement is performed by a series of individual measurements on the subsystems combined with classical communication. In particular, sequential measurements, in which the classical communication is one-way only, have been widely investigated under several optimality criteria (e.g., \cite{Bro-Mei-1996,Ban-Yam-Hir-1997,Vir-Sac-Ple-Mar-2001,Aci-Bag-Bai-Mas-2005,Owa-Hay-2008,Ass-Poz-Pie-2011,Nak-Usu-2012-receiver,Nak-Usu-2016-LOCC,Ros-Mar-Gio-2017-capacity,Cro-Bar-Wei-2017}). Although the performance of an optimal sequential measurement is often strictly less than that of an optimal global measurement even if given states are not entangled, a sequential measurement has the advantage of being relatively easy to implement with current technology. As an example of a realizable sequential measurement for optical coherent states, a receiver based on a combination of a photon detector and a feedback circuit, which we call a Dolinar-like receiver, has been proposed \cite{Dol-1973} and experimentally demonstrated \cite{Coo-Mar-Ger-2007}. Also, unambiguous discrimination using Dolinar-like receivers has been studied \cite{Ban-1999,Enk-2002,Bec-Fan-Mig-2013}. Several studies on optimal unambiguous sequential measurements have also been carried out \cite{Che-Yan-2002,Ji-Cao-Yin-2005,Chi-Dua-Hsi-2014,Sen-Mar-Mun-2018}. For binary pure states with any prior probabilities, it has been shown that an optimal unambiguous sequential measurement can achieve the performance of an optimal global measurement \cite{Che-Yan-2002,Ji-Cao-Yin-2005}. For short, we say that a sequential measurement can be globally optimal. As for more than two states, in the case in which each of the individual systems is two-dimensional, whether a sequential measurement can be globally optimal has been clarified for several cases \cite{Chi-Dua-Hsi-2014,Sen-Mar-Mun-2018}. However, in the case in which individual systems are more than two-dimensional, the problem becomes extremely complicated. Due to the restriction of local measurements and one-way classical communication, it would not be surprising if a sequential measurement cannot be globally optimal except for some special cases. It is worth mentioning that, according to Ref.~\cite{Nak-Kat-Usu-2018-Dolinar}, in the case of a minimum-error measurement, which maximizes the average success probability but sometimes returns an incorrect answer, an optimal sequential measurement does not seem to be globally optimal for any ternary phase shift keyed (PSK) optical coherent states. In this paper, we focus on symmetric ternary pure states each of whose individual systems is three-dimensional. These states include PSK optical coherent states and a lifted version of ``double trine'' states \cite{Sho-2002}. We provide a necessary and sufficient condition that a sequential measurement can be globally optimal for the bipartite case, using which one can easily judge whether global optimality is achieved by a sequential measurement for given states. We use the convex optimization approach reported in Ref.~\cite{Nak-Kat-Usu-2018-seq_gen} to derive the condition. We also give a sufficient condition of global optimality for the multipartite case. Some examples of symmetric ternary pure states are presented, which show that a sequential measurement can be globally optimal in quite a few cases. 
One of the examples shows that the problem of whether a sequential measurement for bipartite ternary PSK optical coherent states can be globally optimal is completely solved analytically, while its minimum-error measurement version has been solved only numerically \cite{Nak-Kat-Usu-2018-Dolinar}. Moreover, we show that a Dolinar-like receiver for any ternary PSK optical coherent states cannot be globally optimal unambiguous measurement. The paper is organized as follows. In Sec.~\ref{sec:optimization}, we formulate the problem of finding an optimal unambiguous measurement and its sequential-measurement version as convex programming problems. In Sec.~\ref{sec:sym3}, we present our main theorem. Using this theorem, we derive a necessary and sufficient condition for an optimal sequential measurement for bipartite symmetric ternary pure states to be globally optimal. A sufficient condition of global optimality for multipartite symmetric ternary pure states is also derived. In Sec.~\ref{sec:sym3_optLambda}, we prove the main theorem. Finally, we provide some examples to demonstrate the usefulness of our results in Sec.~\ref{sec:example}. \section{Optimal unambiguous sequential measurements} \label{sec:optimization} In this section, we first provide an optimization problem of finding optimal unambiguous measurements. Then, we discuss a sequential-measurement version of the optimization problem. We also provide a necessary and sufficient condition for an optimal sequential measurement to be globally optimal. Note that this condition is quite general but requires extra effort to decide whether global optimality is achieved by a sequential measurement for given quantum states. In Sec.~\ref{sec:sym3}, we will use this condition to derive a formula that is directly applicable to symmetric ternary pure states. \subsection{Problem of finding optimal unambiguous measurements} We here consider unambiguous measurements without restriction to sequential measurements. Consider a quantum system prepared in one of $R$ quantum states represented by density operators $\{ \tilde{\rho}_r \}_{r \in \mathcal{I}_R}$ on a complex Hilbert space $\mathcal{H}$, where $\mathcal{I}_R \coloneqq \{ 0, 1, \cdots, R-1 \}$. The density operator $\tilde{\rho}_r$ satisfies $\tilde{\rho}_r {(g)}e 0$ and ${\rm Tr} ~\tilde{\rho}_r = 1$, where $\hat{a}t{A} {(g)}e 0$ denotes that $\hat{a}t{A}$ is positive semidefinite (similarly, $\hat{a}t{A} {(g)}e \hat{a}t{B}$ denotes $\hat{a}t{A} - \hat{a}t{B} {(g)}e 0$). To unambiguously discriminate the $R$ states, we can consider a measurement represented by a positive-operator-valued measure (POVM), $\hat{a}t{P}i \coloneqq \{ \hat{a}t{P}i_r \}_{r=0}^R$, consisting of $R + 1$ detection operators, on $\mathcal{H}$, where $\hat{a}t{P}i_r$ satisfies $\hat{a}t{P}i_r {(g)}e 0$ and $\sum_{r=0}^R \hat{a}t{P}i_r = \hat{1}$ ($\hat{1}$ is the identity operator on $\mathcal{H}$). The detection operator $\hat{a}t{P}i_r$ with $r < R$ corresponds to the identification of the state $\tilde{\rho}_r$, while $\hat{a}t{P}i_R$ corresponds to the inconclusive answer. Any unambiguous measurement $\hat{a}t{P}i$ satisfies ${\rm Tr}(\tilde{\rho}_r \hat{a}t{P}i_k) = 0$ for any $k \in \mathcal{I}_R \backslash \{r\}$, where $\backslash$ denotes set difference. 
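The constraints above completely determine the feasible set of unambiguous POVMs, so the search for the best one can be carried out numerically as a semidefinite program; this is the constrained counterpart of Problem~${\rm P_G}$ introduced below, with the same optimal value. The following sketch is only an illustration of this formulation; it assumes that the Python packages \texttt{numpy} and \texttt{cvxpy} are available and takes the states as explicit density matrices.
\begin{verbatim}
# Sketch: optimal unambiguous discrimination as a semidefinite program.
#   maximize  sum_r xi_r Tr(rho_r Pi_r)
#   s.t.      Pi_r >= 0,  I - sum_r Pi_r >= 0   (inconclusive operator Pi_R),
#             Tr(rho_k Pi_r) = 0 for all k != r (no misidentification).
import numpy as np
import cvxpy as cp

def optimal_unambiguous(rhos, priors):
    R, d = len(rhos), rhos[0].shape[0]
    Pi = [cp.Variable((d, d), hermitian=True) for _ in range(R)]
    cons = [P >> 0 for P in Pi] + [np.eye(d) - sum(Pi) >> 0]
    cons += [cp.real(cp.trace(rhos[k] @ Pi[r])) == 0
             for r in range(R) for k in range(R) if k != r]
    obj = cp.Maximize(cp.real(sum(priors[r] * cp.trace(rhos[r] @ Pi[r])
                                  for r in range(R))))
    prob = cp.Problem(obj, cons)
    prob.solve()
    return prob.value, [P.value for P in Pi]

# Two equiprobable pure states: the optimum should equal 1 - |<psi0|psi1>|.
psi0 = np.array([1.0, 0.0])
psi1 = np.array([np.cos(0.4), np.sin(0.4)])
val, _ = optimal_unambiguous([np.outer(psi0, psi0.conj()),
                              np.outer(psi1, psi1.conj())], [0.5, 0.5])
print(val, 1.0 - abs(np.vdot(psi0, psi1)))
\end{verbatim}
The final lines check the sketch against the known optimal value $1-|\braket{\psi_0|\psi_1}|$ for two equiprobable pure states.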
Given possible states $\{ \tilde{\rho}_r \}$ and their prior probabilities $\{ \xi_r \}$, we want to find an unambiguous measurement maximizing the average success probability, which we call an optimal unambiguous measurement or just an optimal measurement for short. Reference~\cite{Eld-Sto-Has-2004} shows that the problem of finding an optimal measurement can be formulated as a semidefinite programming problem, which is a special case of a convex programming problem. For analytical convenience, instead of the formulation of Ref.~\cite{Eld-Sto-Has-2004}, we consider the following semidefinite programming problem: \begin{eqnarray} \begin{array}{lll} {\rm {\rm P_G}:} & {\rm maximize} & \displaystyle P(\hat{a}t{P}i) \coloneqq \lim_{\lambda \to \infty} \sum_{r=0}^{R-1} {\rm Tr} [ (\hat{r}ho_r - \lambda \hat{n}u_r) \hat{a}t{P}i_r ] \\ & {\rm subject~to} & \hat{a}t{P}i : {\rm POVM}, \end{array} \label{eq:PG} \end{eqnarray} where $\hat{r}ho_r \coloneqq \xi_r \tilde{\rho}_r$ and $\hat{n}u_r \coloneqq \sum_{k \in \mathcal{I}_R \backslash \{ r \}} \hat{r}ho_k$. Since $P(\hat{a}t{P}i) = -\infty$ holds if there exists $r \in \mathcal{I}_R$ such that ${\rm Tr}(\hat{n}u_r \hat{a}t{P}i_r) \neq 0$ (i.e., $\hat{a}t{P}i$ is not an unambiguous measurement), any optimal solution to Problem~${\rm P_G}$ is guaranteed to be an unambiguous measurement. The optimal value, which is the average success probability of an optimal measurement, is larger than zero if and only if at least one of the operators $\hat{r}ho_r$ has a nonzero overlap with the kernels of $\hat{n}u_r$ \cite{Rud-Spe-Tur-2003}. The dual problem to Problem~${\rm P_G}$ can be written as \footnote{One can obtain this problem from Eq.~(12) in Ref.~\cite{Nak-Kat-Usu-2015-general} with $M = R + 1$, $J = 0$, $\hat{c}_m = \hat{r}ho_m - \lambda \hat{n}u_m$ $~(m < R)$, $\hat{c}_R = 0$, and $\lambda \to \infty$.} \begin{eqnarray} \begin{array}{lll} {\rm {\rm DP_G}:} & {\rm minimize} & {\rm Tr}~\hat{a}t{Z}G \\ & {\rm subject~to} & \displaystyle \hat{a}t{Z}G {(g)}e \lim_{\lambda \to \infty} \hat{r}ho_r - \lambda \hat{n}u_r ~ (\forall r \in \mathcal{I}_R), \end{array} \label{eq:DPG} \end{eqnarray} where $\hat{a}t{Z}G$ is a positive semidefinite operator on $\mathcal{H}$. The optimal values of Problems~${\rm P_G}$ and ${\rm DP_G}$ are the same. \subsection{Problem of finding optimal unambiguous sequential measurements} Now, let us assume that $\mathcal{H}$ is a bipartite Hilbert space, $\mathcal{H} = \mathcal{H}A \otimes \mathcal{H}B$, and let us restrict our attention to a sequential measurement from Alice to Bob. In a sequential measurement, Alice performs a measurement on $\mathcal{H}A$ and communicates her result to Bob. Then, he performs a measurement on $\mathcal{H}B$, which can depend on Alice's outcomes, and obtains the final measurement result. This sequential measurement can be considered from a different point of view \cite{Nak-Kat-Usu-2018-Dolinar}. Let $\omega$ be an index associated with Bob's measurement $\hat{a}t{B}w \coloneqq \{ \hat{a}t{B}w_r \}_{r=0}^R$, and $\Omega$ be the entire set of indices $\omega$. Alice performs a measurement, $\hat{a}t{A} \coloneqq \{ \hat{a}t{A}(\omega) \}_{\omega \in \Omega}$, with continuous outcomes, and sends the result $\omega \in \Omega$ to Bob. Then, he performs the corresponding measurement $\hat{a}t{B}w$, which is uniquely determined by the result $\omega$. 
This sequential measurement is denoted as $\hat{a}t{P}iA \coloneqq \{ \hat{a}t{P}iA_r \}_{r=0}^R$ with \begin{eqnarray} \hat{a}t{P}iA_r &\coloneqq& \int_{\Omega} \hat{a}t{A}dw \otimes \hat{a}t{B}w_r, \label{eq:PiA} \end{eqnarray} which is uniquely determined by Alice's POVM $\hat{a}t{A}$. The problem of finding an unambiguous sequential measurement maximizing the average success probability, which we call an optimal unambiguous sequential measurement or just an optimal sequential measurement, can be formulated as the following optimization problem: \begin{eqnarray} \begin{array}{lll} {\rm P:} & {\rm maximize} & \displaystyle P[\hat{a}t{P}iA] \\ & {\rm subject~to} & \hat{a}t{A} \in \mMA \end{array} \label{eq:P} \end{eqnarray} with Alice's POVM $\hat{a}t{A}$, where $\mMA$ is the entire set of Alice's continuous measurements $\{ \hat{a}t{A}(\omega) \}_{\omega \in \Omega}$. Compared to Problem~${\rm P_G}$, this problem restricts $\hat{a}t{P}i$ to the form $\hat{a}t{P}i = \hat{a}t{P}iA$. We can easily see that this problem is a convex programming problem and obtain the following dual problem \cite{Nak-Kat-Usu-2018-seq_gen}: \begin{eqnarray} \begin{array}{lll} {\rm DP:} & {\rm minimize} & {\rm Tr}~\hat{a}t{X} \\ & {\rm subject~to} & \displaystyle \hat{a}t{G}amma(\omega; \hat{a}t{X}) {(g)}e 0 ~ (\forall \omega \in \Omega) \end{array} \label{eq:DP} \end{eqnarray} with a Hermitian operator $\hat{a}t{X}$, where \begin{eqnarray} \hat{a}t{G}amma(\omega; \hat{a}t{X}) &\coloneqq& \hat{a}t{X} - \lim_{\lambda \to \infty} \sum_{r=0}^{R-1} {\rm Tr}B \left[ (\hat{r}ho_r - \lambda \hat{n}u_r) \hat{a}t{B}w_r \right]. \label{eq:hGw} \end{eqnarray} ${\rm Tr}B$ is the partial trace over $\mathcal{H}B$. The optimal values of Problems~P and DP are also the same. \subsection{Condition for sequential measurement to be globally optimal} Let $\hat{a}t{Z}G^\star$ be an optimal solution to Problem~${\rm P_G}$ and $\hat{a}t{X}Gopt \coloneqq {\rm Tr}B~\hat{a}t{Z}G^\star$. Also, let $\hat{a}t{G}opt(\omega) \coloneqq \hat{a}t{G}amma(\omega; \hat{a}t{X}Gopt)$. We now want to know whether a sequential measurement can be globally optimal, i.e., whether an optimal solution to Problem~P is also optimal to Problem~${\rm P_G}$. To this end, we utilize the following remark: \begin{remark} \label{remark:opt} A sequential measurement $\hat{a}t{P}iA$ $~(\hat{a}t{A} \in \mMA)$ is an optimal unambiguous measurement if and only if it satisfies \begin{eqnarray} \hat{a}t{G}opt(\omega) \hat{a}t{A}(\omega) &=& 0, ~~ \forall \omega \in \Omega. \label{eq:cond} \end{eqnarray} \end{remark} \begin{proof} Assume that $\hat{a}t{X}Gopt$ is a feasible solution to Problem~DP, i.e., $\hat{a}t{G}opt(\omega) {(g)}e 0$ holds for any $\omega \in \Omega$. It is known that $\hat{a}t{P}iA$ and $\hat{a}t{X}$ are respectively optimal solutions to Problems~P and DP if and only if $\hat{a}t{G}amma(\omega; \hat{a}t{X}) {(g)}e 0$ and $\hat{a}t{G}amma(\omega; \hat{a}t{X}) \hat{a}t{A}(\omega) = 0$ hold for any $\omega \in \Omega$ (see Theorem~2 of Ref.~\cite{Nak-Kat-Usu-2018-seq_gen} \footnote{We here consider the case of $M = R + 1$, $J = 0$, $\hat{c}_m = \hat{r}ho_m - \lambda \hat{n}u_m$ $~(m < R)$, $\hat{c}_R = 0$, and $\lambda \to \infty$.}). Thus, $\hat{a}t{P}iA$ and $\hat{a}t{X}Gopt$ are respectively optimal solutions to Problems~P and DP if and only if Eq.~\eqref{eq:cond} holds. 
If Eq.~\eqref{eq:cond} holds, then, since $P[\hat{a}t{P}iA] = {\rm Tr}~\hat{a}t{X}Gopt = {\rm Tr}~\hat{a}t{Z}G^\star$ is equal to the optimal value of Problem~${\rm P_G}$, $\hat{a}t{P}iA$ is globally optimal. Therefore, to prove this remark, it suffices to show that $\hat{a}t{X}Gopt$ is a feasible solution to Problem~DP. Multiplying $[ \hat{a}t{B}w_r ]^{1/2}$ on both sides of the constraint of Problem~${\rm DP_G}$ and taking the partial trace over $\mathcal{H}B$ gives \begin{eqnarray} {\rm Tr}B \left[ \hat{a}t{Z}G^\star \hat{a}t{B}w_r \right] &{(g)}e& \lim_{\lambda \to \infty} {\rm Tr}B \left[ (\hat{r}ho_r - \lambda \hat{n}u_r) \hat{a}t{B}w_r \right]. \end{eqnarray} Therefore, we have \begin{eqnarray} \sum_{r=0}^{R-1} {\rm Tr}B \left[ \hat{a}t{Z}G^\star \hat{a}t{B}w_r \right] &{(g)}e& \lim_{\lambda \to \infty} \sum_{r=0}^{R-1} {\rm Tr}B \left[ (\hat{r}ho_r - \lambda \hat{n}u_r) \hat{a}t{B}w_r \right]. \end{eqnarray} Also, from $\hat{a}t{X}Gopt = {\rm Tr}B~\hat{a}t{Z}G^\star$, we have \begin{eqnarray} \hat{a}t{X}Gopt &=& \sum_{r=0}^R {\rm Tr}B \left[ \hat{a}t{Z}G^\star \hat{a}t{B}w_r \right] {(g)}e \sum_{r=0}^{R-1} {\rm Tr}B \left[ \hat{a}t{Z}G^\star \hat{a}t{B}w_r \right]. \end{eqnarray} From these equations and Eq.~\eqref{eq:hGw}, $\hat{a}t{G}opt(\omega) {(g)}e 0$ holds for any $\omega \in \Omega$, and thus $\hat{a}t{X}Gopt$ is a feasible solution to Problem~DP. \hspace*{0pt} $\blacksquare$ \end{proof} We will further investigate Alice's POVM $\hat{a}t{A}$ satisfying Eq.~\eqref{eq:cond}. Let \begin{eqnarray} \mathcal{K}_\omega &\coloneqq& {\rm Ker}~\sum_{r=0}^{R-1} {\rm Tr}B \left[ \hat{n}u_r \hat{a}t{B}w_r \right]. \label{eq:mK_omega} \end{eqnarray} Let us consider $\ket{{(g)}amma} \in {\rm supp}~\hat{a}t{A}(\omega)$. Suppose that Eq.~\eqref{eq:cond} holds; then, from Eqs.~\eqref{eq:hGw} and \eqref{eq:mK_omega}, $\ket{{(g)}amma} \in \mathcal{K}_\omega$ and $\hat{a}t{P}_\omega \left[ \hat{a}t{X}Gopt - \sum_{r=0}^{R-1} {\rm Tr}B [\hat{r}ho_r \hat{a}t{B}w_r] \right] \ket{{(g)}amma} = 0$ hold, where $\hat{a}t{P}_\omega$ is the projection operator onto $\mathcal{K}_\omega$. Conversely, if these two equations hold for any $\ket{{(g)}amma} \in {\rm supp}~\hat{a}t{A}(\omega)$, then Eq.~\eqref{eq:cond} holds. Therefore, Eq.~\eqref{eq:cond} is equivalent to the following equations: \begin{eqnarray} {\rm supp}~\hat{a}t{A}(\omega) &\subseteq& \mathcal{K}_\omega, \nonumber \\ \hat{a}t{P}_\omega \left[ \hat{a}t{X}Gopt - \sum_{r=0}^{R-1} {\rm Tr}B \left[ \hat{r}ho_r \hat{a}t{B}w_r \right] \right] \hat{a}t{A}(\omega) &=& 0. \label{eq:cond2} \end{eqnarray} Let us consider the case in which each state $\hat{r}ho_r$ is separable, i.e., it is in the form of \begin{eqnarray} \hat{r}ho_r &=& \xi_r \hat{a}_r \otimes \hat{b}_r, \label{eq:separable} \end{eqnarray} where $\hat{a}_r$ and $\hat{b}_r$ are respectively density operators on $\mathcal{H}A$ and $\mathcal{H}B$. Then, Eq.~\eqref{eq:hGw} reduces to \begin{eqnarray} \hat{a}t{G}amma(\omega; \hat{a}t{X}) &=& \hat{a}t{X} - \sum_{r=0}^{R-1} p^\w_r \xi_r \hat{a}_r + \lim_{\lambda \to \infty} \lambda \sum_{r=0}^{R-1} e^\w_r \xi_r \hat{a}_r, \label{eq:hGw2} \end{eqnarray} where $p^\w_r \coloneqq {\rm Tr} [ \hat{b}_r \hat{a}t{B}w_r ]$ is the probability of Bob correctly identifying the state $\hat{b}_r$ and $e^\w_r \coloneqq \sum_{k \in \mathcal{I}_R \backslash \{ r \}} {\rm Tr} [ \hat{b}_r \hat{a}t{B}w_k ]$ is the probability of Bob misidentifying the state $\hat{b}_r$. 
Also, it follows from $\mathcal{K}_\omega = {\rm Ker}~\sum_{r=0}^{R-1} e^\w_r \xi_r \hat{a}_r$ that the first line of Eq.~\eqref{eq:cond2} can be expressed as
\begin{eqnarray}
{\rm Tr}[\hat{a}_r \hat{a}t{A}(\omega)] &=& 0, ~~ \forall r \not\in {T^\w},
\label{eq:suppA}
\end{eqnarray}
where ${T^\w}$ is the entire set of indices $r \in \mathcal{I}_R$ such that Bob's measurement never gives incorrect results, i.e.,
\begin{eqnarray}
{T^\w} &\coloneqq& \left\{ r \in \mathcal{I}_R : e^\w_r = 0 \right\}.
\label{eq:Tw}
\end{eqnarray}
Equation~\eqref{eq:suppA} implies that, for any $r \in \mathcal{I}_R$ and $\omega \in \Omega$ such that the state $\hat{b}_r$ would be incorrectly identified by Bob's measurement $\hat{a}t{B}w$ (i.e., $e^\w_r \neq 0$), Alice's outcome must not be $\omega$ for the state $\hat{a}_r$ (i.e., ${\rm Tr}[\hat{a}_r \hat{a}t{A}(\omega)] = 0$). Thus, Eq.~\eqref{eq:suppA} ensures that the measurement $\hat{a}t{P}iA$ never gives erroneous results.

\section{Sequential measurements for symmetric ternary pure states}
\label{sec:sym3}

Remark~\ref{remark:opt} is useful in determining whether a sequential measurement can be globally optimal. Concretely, it is possible to decide whether a sequential measurement can be globally optimal by examining whether there exists $\hat{a}t{A} \in \mMA$ satisfying Eq.~\eqref{eq:cond}. However, in general, it is quite difficult to examine this for all continuous values $\omega \in \Omega$. In this section, we consider sequential measurements for bipartite symmetric ternary pure states and derive a formula that directly determines whether a sequential measurement can be globally optimal. Extending our results to the multipartite case yields a sufficient condition for a sequential measurement to be globally optimal.

\subsection{Main results}

Let us consider bipartite ternary separable pure states, $\{ \ket{\Psi_r} \coloneqq \ket{a_r} \otimes \ket{b_r} \}_{r=0}^2$, which are a special case of Eq.~\eqref{eq:separable} with $\hat{a}_r = \ket{a_r}\bra{a_r}$ and $\hat{b}_r = \ket{b_r}\bra{b_r}$. Assume that $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ respectively span three-dimensional Hilbert spaces, $\mathcal{H}A$ and $\mathcal{H}B$. Also, assume that $\{ \ket{\Psi_r} \}$ is symmetric in the following sense: the prior probabilities are equal (i.e., $\xi_r = 1/3$) and there exist unitary operators $\hat{a}t{V}A$ on $\mathcal{H}A$ and $\hat{a}t{V}B$ on $\mathcal{H}B$ satisfying
\begin{eqnarray}
\ket{a_{r \oplus 1}} &=& \hat{a}t{V}A \ket{a_r}, ~ \ket{b_{r \oplus 1}} = \hat{a}t{V}B \ket{b_r},
\label{eq:Vab}
\end{eqnarray}
where $\oplus$ denotes addition modulo 3. These states are characterized by the inner products $K_\A \coloneqq \braket{a_0|a_1}$ and $K_\B \coloneqq \braket{b_0|b_1}$, which are in general complex. For any $r \in \mathcal{I}_3$, we have
\begin{eqnarray}
\braket{a_r | a_{r \oplus 1}} &=& K_\A, ~~~ \braket{b_r | b_{r \oplus 1}} = K_\B.
\end{eqnarray}
$\{ \ket{a_r} \}$ and/or $\{ \ket{b_r} \}$ can be PSK optical coherent states, pulse position modulated (PPM) optical coherent states, or lifted trine states \cite{Sho-2002}. If $\{ \ket{a_r} \}$ or $\{ \ket{b_r} \}$ is mutually orthogonal (i.e., $K_\A = 0$ or $K_\B = 0$), then an optimal sequential measurement perfectly discriminates $\{ \ket{\Psi_r} \}$, and thus is globally optimal. So, assume that $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ are not mutually orthogonal.
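As a concrete illustration of this setup, the short sketch below (Python with NumPy assumed) builds a symmetric triple of unit vectors in $\mathbb{C}^3$ with a prescribed cyclic inner product $K$ in the eigenbasis of a cyclic-shift unitary, essentially anticipating the representation of Eqs.~\eqref{eq:xy} and \eqref{eq:ab_matrix} below, and then checks the defining relations of the product states $\{\ket{\Psi_r}\}$; the test values of $K_\A$ and $K_\B$ are arbitrary.
\begin{verbatim}
# Sketch: symmetric ternary triples with a prescribed cyclic overlap K.
import numpy as np

TAU = np.exp(2j * np.pi / 3)

def symmetric_triple(K):
    """Unit vectors v_0, v_1, v_2 in C^3 with <v_r|v_{r+1}> = K (indices mod 3),
    together with the shift unitary V = diag(1, tau, tau^2) mapping v_r to v_{r+1}."""
    # 3 x_n^2 = 1 + tau^{2n} K + tau^n K*  (cf. the expression for x_n given below)
    x = np.sqrt(np.maximum([(1 + 2 * (TAU ** (2 * n) * K).real) / 3
                            for n in range(3)], 0.0))
    vs = [np.array([x[n] * TAU ** (r * n) for n in range(3)]) for r in range(3)]
    return vs, np.diag([1.0, TAU, TAU ** 2])

KA, KB = 0.2 * np.exp(1j * np.pi / 10), 0.3 * np.exp(-1j * np.pi / 5)  # test values
(a, VA), (b, VB) = symmetric_triple(KA), symmetric_triple(KB)
Psi = [np.kron(a[r], b[r]) for r in range(3)]           # |Psi_r> = |a_r> (x) |b_r>
for r in range(3):
    assert np.isclose(np.vdot(a[r], a[(r + 1) % 3]), KA)   # <a_r|a_{r+1}> = K_A
    assert np.allclose(VA @ a[r], a[(r + 1) % 3])          # cyclic-shift relation
assert np.isclose(np.vdot(Psi[0], Psi[1]), KA * KB)        # product-state overlap
\end{verbatim}
For $K=0$ the construction returns an orthonormal triple, consistent with the perfect-discrimination remark above.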
We shall present a theorem that can be used to determine whether a sequential measurement can be globally optimal for given bipartite symmetric ternary pure states. Let us consider the following set with seven elements \begin{eqnarray} \Omega^\star &\coloneqq& \{ \omega_{1,j}, \omega_{2,j}, \omega_3 : j \in \mathcal{I}_3 \}, \label{eq:Omega_opt} \end{eqnarray} where \begin{enumerate}[(1)] \item $\hat{a}t{B}^{(\omega_{1,j})}$ is the measurement that always returns $j$, i.e., $\hat{a}t{B}^{(\omega_{1,j})}_r = \delta_{r,j} \hat{1}B$, where $\delta_{r,j}$ is the Kronecker delta and $\hat{1}B$ is the identity operator on $\mathcal{H}B$. \item $\hat{a}t{B}^{(\omega_{2,j})}$ is an optimal unambiguous measurement for binary states $\{ \ket{b_{j \oplus 1}}, \ket{b_{j \oplus 2}} \}$ with equal prior probabilities of $1/2$. \item $\hat{a}t{B}^{(\omega_3)}$ is an optimal unambiguous measurement for ternary states $\{ \ket{b_r} \}_{r=0}^2$ with equal prior probabilities of $1/3$. \end{enumerate} For simpler notation, we write $\omega_k$ for $\omega_{k,0}$ for each $k \in \{1,2\}$. When a sequential measurement can be globally optimal, there can exist a large (or even infinite) number of optimal sequential measurements. However, as we shall show in the following theorem, if a sequential measurement can be globally optimal, then there always exists an optimal sequential measurement in which Alice never returns an index $\omega$ with $\omega \not\in \Omega^\star$ (proof in Sec.~\ref{sec:sym3_optLambda}): \begin{thm} \label{thm:sym3_optLambda} Suppose that, for bipartite symmetric ternary pure states $\{ \ket{\Psi_r} \coloneqq \ket{a_r} \otimes \ket{b_r} \}_{r=0}^2$, a sequential measurement can be globally optimal. Then, there exists an optimal sequential measurement $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ with $\hat{a}t{A}^\star \in \mMA$ such that \begin{eqnarray} \hat{a}t{A}^\star(\omega) &=& 0, ~~ \forall \omega \not\in \Omega^\star. \label{eq:sym3_optLambda} \end{eqnarray} \end{thm} The measurement $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ is schematically illustrated in Fig.~\ref{fig:optPOVM}. Due to the definition of $\omega_{1,j}, \omega_{2,j}, \omega_3 \in \Omega^\star$, ${T^\w}$, defined by Eq.~\eqref{eq:Tw}, satisfies $T^{(\omega_{1,j})} = \{ j \}$, $T^{(\omega_{2,j})} = \{ j \oplus 1, j \oplus 2 \}$, and $T^{(\omega_3)} = \{ 0, 1, 2 \}$. From Eq.~\eqref{eq:suppA}, ${\rm Tr}[\hat{a}_r \hat{a}t{A}^\star(\omega_{1,j})] = 0$ must hold for any distinct $r,j \in \mathcal{I}_3$. Thus, if Alice returns the index $\omega_{1,j}$, then the given state must be $\ket{\Psi_j}$. (In this case, the given state is uniquely determined before Bob performs the measurement.) Also, from Eq.~\eqref{eq:suppA}, ${\rm Tr}[\hat{a}_j \hat{a}t{A}^\star(\omega_{2,j})] = 0$ holds for any $j \in \mathcal{I}_3$, which indicates that if Alice returns the index $\omega_{2,j}$, then the state $\ket{\Psi_j}$ is unambiguously filtered out. In this case, Alice's measurement result does not indicate which of the two states $\ket{\Psi_{j \oplus 1}}$ and $\ket{\Psi_{j \oplus 2}}$ is given. If Alice returns the index $\omega_3$, then Alice's result provides no information about the given state. \begin{figure} \caption{Schematic diagram of an optimal sequential measurement $\hat{a} \label{fig:optPOVM} \end{figure} Using Theorem~\ref{thm:sym3_optLambda}, we can derive a simple formula for determining whether a sequential measurement can be globally optimal. Before we state this formula, we shall give some preliminaries. 
Let $\tau \coloneqq \exp(i 2\pi/3)$, where $i \coloneqq \sqrt{-1}$. Also, let $\ket{\phi_n}$ and $\ket{\phi'_n}$, respectively, denote the normalized eigenvectors corresponding to the eigenvalues $\tau^n$ $~(n \in \mathcal{I}_3)$ of $\hat{a}t{V}A$ and $\hat{a}t{V}B$. Moreover, let \begin{eqnarray} x_n &\coloneqq& |\braket{\phi_n | a_0}|, ~~ y_n \coloneqq |\braket{\phi'_n | b_0}|. \end{eqnarray} Note that $x_n, y_n > 0$ holds for any $n \in \mathcal{I}_3$. By selecting appropriate global phases of $\ket{a_r}$ and $\ket{b_r}$ and permuting $\ket{\Psi_1}$ and $\ket{\Psi_2}$ if necessary, we may assume \begin{eqnarray} x_0 &>& x_2, ~~ x_1 {(g)}e x_2, ~~~ y_0 {(g)}e y_1 {(g)}e y_2, ~~ y_0 \neq y_2. \label{eq:xy_cond} \end{eqnarray} Also, by selecting global phases of $\ket{\phi_n}$ and $\ket{\phi'_n}$ such that $\braket{\phi_n | a_0}$ and $\braket{\phi'_n | b_0}$ are positive real numbers, $\ket{a_r}$ and $\ket{b_r}$ are written as \begin{eqnarray} \ket{a_r} &=& \sum_{n=0}^2 x_n \tau^{rn} \ket{\phi_n}, ~ \ket{b_r} = \sum_{n=0}^2 y_n \tau^{rn} \ket{\phi'_n}. \label{eq:ab_matrix} \end{eqnarray} $x_n$ and $y_n$ are uniquely determined by $K_\A$ and $K_\B$. Let $K_\A' \coloneqq \braket{a_0|a_1}$ and $K_\B' \coloneqq \braket{b_0|b_1}$, where $\ket{a_r}$ and $\ket{b_r}$ are expressed by Eq.~\eqref{eq:xy_cond} and \eqref{eq:ab_matrix}; then, we have \begin{eqnarray} x_n &=& \sqrt{\frac{1 + \tau^{2n} K_\A' + \tau^n (K_\A')^*}{3}}, ~~ y_n = \sqrt{\frac{1 + \tau^{2n} K_\B' + \tau^n (K_\B')^*}{3}}, \nonumber \\ \label{eq:xy} \end{eqnarray} where $^*$ designates complex conjugate. Let $\eta \coloneqq (1 - |K_\B|) / 3$. Note that $3\eta = 1 - |K_\B|$ equals the average success probability of the optimal unambiguous measurement $\hat{a}t{B}^{(\omega_2)}$ for binary states $\{ \ket{b_1}, \ket{b_2} \}$ with equal prior probabilities of $1/2$. We get the following corollary (proof in Appendix~\ref{append:cor_sym3_nas}): \begin{cor} \label{cor:sym3_nas} For bipartite symmetric ternary pure states $\{ \ket{\Psi_r} \coloneqq \ket{a_r} \otimes \ket{b_r} \}_{r=0}^2$ expressed by Eqs.~\eqref{eq:xy_cond} and \eqref{eq:ab_matrix}, the following two statements are equivalent. \begin{enumerate}[(1)] \item A sequential measurement can be globally optimal. \item Either $y_1 = y_2$ or \begin{eqnarray} x_2 z_0 - x_1 z_1 &{(g)}e& 0, \nonumber \\ \sum_{k=0}^2 x_k^2 ( z_{1 \ominus k}^{-2} - z_{3 \ominus k}^{-2} ) &{(g)}e& 0 \label{eq:sym3_nas_cond} \end{eqnarray} holds, where $z_k \coloneqq y_k^2 - \eta$ and $\ominus$ denotes subtraction modulo 3. \end{enumerate} \end{cor} Using this corollary, we can easily judge whether a sequential measurement can be globally optimal for bipartite symmetric ternary pure states. \subsection{Extension to multipartite states} We can extend the above results to multipartite states. As a simple example, we consider tripartite symmetric ternary pure states $\{ \ket{\Psi_r} = \ket{a_r} \otimes \ket{b_r} \otimes \ket{c_r} \}_{r=0}^2$, which have equal prior probabilities. There exist unitary operators $\hat{a}t{V}A$, $\hat{a}t{V}B$, and $\hat{a}t{V}C$ on $\mathcal{H}A$, $\mathcal{H}B$, and $\mathcal{H}C$ satisfying Eq.~\eqref{eq:Vab} and $\ket{c_{r \oplus 1}} = \hat{a}t{V}C \ket{c_r}$. 
Here, let us consider the composite system of $\mathcal{H}B$ and $\mathcal{H}C$, $\mathcal{H}BC \coloneqq \mathcal{H}B \otimes \mathcal{H}C$, and interpret these states as bipartite states $\{ \ket{\Psi_r} = \ket{a_r} \otimes \ket{B_r} \}_{r=0}^2$, where $\ket{B_r} \coloneqq \ket{b_r} \otimes \ket{c_r} \in \mathcal{H}BC$. It is obvious that if a sequential measurement can be globally optimal for the tripartite states, then it is also true for the bipartite states. Assume that it is true for the bipartite states; then, from Theorem~\ref{thm:sym3_optLambda}, there exists a sequential measurement $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ satisfying Eq.~\eqref{eq:sym3_optLambda}, which is globally optimal. Also, it follows that $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ can be realized by a sequential measurement on the tripartite system $\mathcal{H}A \otimes \mathcal{H}B \otimes \mathcal{H}C$ if and only if, for any $\omega \in \Omega^\star$, the measurement $\hat{a}t{B}w$ on $\mathcal{H}BC$ can be realized by a sequential measurement on $\mathcal{H}B \otimes \mathcal{H}C$. $\hat{a}t{B}^{(\omega_{1,j})} = \{ \hat{a}t{B}^{(\omega_{1,j})}_r = \delta_{r,j} \}_r$ can obviously be realized by a sequential measurement. Also, it is known that a globally optimal measurement for any bipartite binary pure states can be realized by a sequential measurement \cite{Che-Yan-2002,Ji-Cao-Yin-2005}, and thus $\hat{a}t{B}^{(\omega_{2,j})}$ can also be realized by a sequential one. Therefore, $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ can be realized by a sequential measurement on the tripartite system if and only if $\hat{a}t{B}^{(\omega_3)}$ can be realized by a sequential measurement. Since $\hat{a}t{B}^{(\omega_3)}$ is globally optimal for the bipartite symmetric ternary pure states $\{ \ket{b_r} \otimes \ket{c_r} \}_{r=0}^2$, $\hat{a}t{B}^{(\omega_3)}$ can be realized by a sequential measurement if and only if a sequential measurement for $\{ \ket{b_r} \otimes \ket{c_r} \}_r$ can be globally optimal. We can summarize the above discussion as follows: if a sequential measurement can be globally optimal for each of the two sets of states $\{ \ket{a_r} \otimes \ket{B_r} \}_r$ and $\{ \ket{b_r} \otimes \ket{c_r} \}_r$, then the same is true for the tripartite states $\{ \ket{\Psi_r} \}$. Repeating the above arguments, we can extend it to more than three-partite system, as stated in the following corollary: \begin{cor} \label{cor:multipartite} Let us consider $N$-partite ternary pure states $\{ \ket{\Psi_r} \coloneqq \ket{\psi^{(0)}_r} \otimes \ket{\psi^{(1)}_r} \otimes \cdots \otimes \ket{\psi^{(N-1)}_r} \}_{r=0}^2$ with equal prior probabilities, where $N {(g)}e 3$. Suppose that $\{ \ket{\Psi_r} \}$ are symmetric, i.e., for any $n \in \mathcal{I}_N$, there exists a unitary operator $\hat{a}t{V}^{(n)}$ satisfying $\ket{\psi^{(n)}_{r \oplus 1}} = \hat{a}t{V}^{(n)} \ket{\psi^{(n)}_r}$. Let $\ket{b^{(n)}_r} \coloneqq \ket{\psi^{(n+1)}_r} \otimes \cdots \otimes \ket{\psi^{(N-1)}_r}$ $~(n \in \mathcal{I}_{N-1})$. If for any $n \in \mathcal{I}_{N-1}$, a sequential measurement can be globally optimal for bipartite states $\{ \ket{\psi^{(n)}_r} \otimes \ket{b^{(n)}_r} \}_{r=0}^2$ with equal prior probabilities, then the same is true for $\{ \ket{\Psi_r} \}$. \end{cor} By using Corollary~\ref{cor:sym3_nas}, one can easily judge whether a sequential measurement for the bipartite states $\{ \ket{\psi^{(n)}_r} \otimes \ket{b^{(n)}_r} \}_{r=0}^2$ can be globally optimal. Note that the above sufficient condition may not be necessary. 
For example, let us again consider the tripartite states $\{ \ket{a_r} \otimes \ket{b_r} \otimes \ket{c_r} \}_r$. For an optimal sequential measurement for these states to be globally optimal, it is sufficient that there exists a globally optimal sequential measurement $\hat{a}t{P}iA$ for the bipartite states $\{ \ket{a_r} \otimes \ket{B_r} \}_r$ such that the measurement $\hat{a}t{B}w$ can be realized by a sequential measurement on the bipartite system $\mathcal{H}B \otimes \mathcal{H}C$ for any $\omega$ with $\hat{a}t{A}(\omega) \neq 0$, where $\hat{a}t{A}$ can be different from $\hat{a}t{A}^\star$.

\section{Proof of Theorem~\ref{thm:sym3_optLambda}}
\label{sec:sym3_optLambda}

We now prove Theorem~\ref{thm:sym3_optLambda} using Remark~\ref{remark:opt}. We provide an overview of our proof in this section, leaving some technical details to the appendix. As a starting point, we first obtain $\hat{a}t{X}Gopt$ in Sec.~\ref{subsec:sym3_optLambda_XG}. Next, in Sec.~\ref{subsec:sym3_optLambda_y1y2}, we consider the case $y_1 = y_2$. After that, we consider the case $y_1 \neq y_2$. A sufficient condition for this theorem to hold and its reformulation are given in Secs.~\ref{subsec:sym3_optLambda_sufficient1} and \ref{subsec:sym3_optLambda_sufficient2}, respectively. In Sec.~\ref{subsec:sym3_optLambda_proof}, we prove that this sufficient condition holds.

\subsection{Derivation of $\hat{a}t{X}Gopt$}
\label{subsec:sym3_optLambda_XG}

Let us consider $\ket{a_r}$ and $\ket{b_r}$ in the form of Eqs.~\eqref{eq:xy_cond} and \eqref{eq:ab_matrix}. A simple calculation gives
\begin{eqnarray}
\ket{\Psi_r} &\coloneqq& \ket{a_r} \otimes \ket{b_r} = \sum_{n=0}^2 \tilde{x}_n \tau^{rn} \ket{\tilde{\phi}_n},
\label{eq:Psi}
\end{eqnarray}
where
\begin{eqnarray}
\ket{\tilde{\phi}_n} &\coloneqq& \frac{1}{\tilde{x}_n} \sum_{k=0}^2 x_k y_{n \ominus k} \ket{\phi_k} \otimes \ket{\phi'_{n \ominus k}}, \nonumber \\
\tilde{x}_n &\coloneqq& \sqrt{\sum_{k=0}^2 x_k^2 y_{n \ominus k}^2}.
\end{eqnarray}
Obviously, $\{ \ket{\tilde{\phi}_n} \}_{n=0}^2$ is an orthonormal basis and $\tilde{x}_n$ is positive real. We have
\begin{eqnarray}
\tilde{x}_1^2 - \tilde{x}_2^2 &=& x_0^2 y_1^2 + x_1^2 y_0^2 + x_2^2 y_2^2 - x_0^2 y_2^2 - x_1^2 y_1^2 - x_2^2 y_0^2 \nonumber \\
&=& (x_0^2 - x_2^2)(y_1^2 - y_2^2) + (x_1^2 - x_2^2)(y_0^2 - y_1^2) \nonumber \\
&{(g)}e& 0,
\label{eq:tx12}
\end{eqnarray}
where the inequality follows from Eq.~\eqref{eq:xy_cond}. Thus, $\tilde{x}_1 {(g)}e \tilde{x}_2$ holds. Note that whether $\tilde{x}_0 {(g)}e \tilde{x}_2$ or not depends on the given states. Let
\begin{eqnarray}
[\upsilon_0, \upsilon_1, \upsilon_2] &\coloneqq& \left\{ \begin{array}{cc} \left[ 2, 1, 0 \right], & \tilde{x}_0 {(g)}e \tilde{x}_2, \\ \left[ 0, 2, 1 \right], & {\rm otherwise}; \\ \end{array} \right.
\label{eq:upsilon}
\end{eqnarray}
then, $\tilde{x}_{\upsilon_1} {(g)}e \tilde{x}_{\upsilon_0}$ and $\tilde{x}_{\upsilon_2} {(g)}e \tilde{x}_{\upsilon_0}$ hold. We can easily see that an optimal solution to Problem~${\rm DP_G}$ is given by (see Appendix~\ref{append:sym3_opt})
\begin{eqnarray}
\hat{a}t{Z}G^\star &=& 3\tilde{x}_{\upsilon_0}^2 \ket{\tilde{\phi}_{\upsilon_0}} \bra{\tilde{\phi}_{\upsilon_0}}.
\label{eq:ZG}
\end{eqnarray}
From Eq.~\eqref{eq:ZG}, we have
\begin{eqnarray}
\hat{a}t{X}Gopt &=& {\rm Tr}B~\hat{a}t{Z}G^\star = 3 \sum_{n=0}^2 x_n^2 y_{\upsilon_n}^2 \ket{\phi_n} \bra{\phi_n}.
\label{eq:XG} \end{eqnarray} \subsection{Case of $y_1 = y_2$} \label{subsec:sym3_optLambda_y1y2} We here show that, in the case of $y_1 = y_2$ (i.e., $K_\B$ is positive real), there exists a globally optimal sequential measurement $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ satisfying Eq.~\eqref{eq:sym3_optLambda}. Let \begin{eqnarray} \hat{a}t{A}^\star(\omega) &\coloneqq& \left\{ \begin{array}{cc} \hat{a}t{A}_r, & \omega = \omega_{1,r} ~ (r \in \mathcal{I}_3), \\ \hat{a}t{A}_3, & \omega = \omega_3, \\ 0, & {\rm otherwise}, \\ \end{array} \right. \end{eqnarray} where $\{ \hat{a}t{A}_r \}_{r=0}^3$ is an optimal unambiguous measurement for $\{ \ket{a_r} \}$ with equal prior probabilities. Obviously, $\hat{a}t{A}^\star$ is in $\mMA$ and satisfies Eq.~\eqref{eq:sym3_optLambda}. It follows that the sequential measurement $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ can be interpreted as follows: Alice and Bob respectively perform optimal measurements for $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ with equal prior probabilities and get the results, $r_{\rm A}$ and $r_{\rm B}$. $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ returns $r_{\rm A}$ if $r_{\rm A} \in \mathcal{I}_3$, $r_{\rm B}$ if $r_{\rm B} \in \mathcal{I}_3$, and $r = 3$ otherwise. Note that $r_{\rm A} = r_{\rm B}$ holds whenever $r_{\rm A}$ and $r_{\rm B}$ are in $\mathcal{I}_3$. The average success probabilities of optimal measurements for $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ with equal prior probabilities are respectively $P_{\rm A} \coloneqq 3 x_2^2$ and $P_{\rm B} \coloneqq 3 y_2^2$ (see Theorem~4 of Ref.~\cite{Eld-2003-unamb}); thus, the average success probability of $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ is \begin{eqnarray} 1 - (1 - P_{\rm A}) (1 - P_{\rm B}) &=& 3 (x_2^2 + y_2^2 - 3 x_2^2 y_2^2). \end{eqnarray} On the other hand, that of an optimal measurement for $\{ \ket{\Psi_r} \}$ with equal prior probabilities is given by \begin{eqnarray} 3 \tilde{x}_{\upsilon_0}^2 &=&3 \tilde{x}_2^2 = 3 \left[ (1 - x_2^2) y_2^2 + x_2^2(1 - 2y_2^2) \right] \nonumber \\ &=& 3 (x_2^2 + y_2^2 - 3 x_2^2 y_2^2), \end{eqnarray} where the first equality follows from the fact that $\tilde{x}_0 {(g)}e \tilde{x}_2$ (i.e., $\upsilon_0 = 2$) holds when $y_1 = y_2$, and the second equality follows from the definition of $\tilde{x}_n$. Thus, $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ is globally optimal. We should note that the same discussion is applicable to the case of $x_1 = x_2$ (i.e., $K_\A$ is positive real); in this case, there also exists a globally optimal sequential measurement $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ satisfying Eq.~\eqref{eq:sym3_optLambda}. \subsection{Sufficient condition for Theorem~\ref{thm:sym3_optLambda}} \label{subsec:sym3_optLambda_sufficient1} Since we have already proved the theorem in the case of $y_1 = y_2$, in what follows, we only consider the case $y_1 \neq y_2$. (We do not have to assume $x_1 \neq x_2$; the following proof is also valid for $x_1 = x_2$.) 
After some algebra using Eq.~\eqref{eq:XG} \footnote{We also use the fact that $p^{(\omega_{1,j})}_j = 1$, $p^{(\omega_{2,j})}_{j \oplus 1} = p^{(\omega_{2,j})}_{j \oplus 2} = 3\eta$, and $p^{(\omega_3)}_r = 3y_2^2$.}, we get ${\rm rank}~\hat{a}t{G}opt(\omega) = 2$ for any $\omega \in \Omega^\star$ and $\hat{a}t{G}opt(\omega) \ket{\pi^\star_\omega} = 0$, where $\ket{\pi^\star_\omega} \in \mathcal{H}A$ $~(\omega \in \Omega^\star)$ is the normal vector defined as \begin{eqnarray} \ket{\pi^\star_\omega} &\coloneqq& \left\{ \begin{array}{cc} \displaystyle C_1 \hat{a}t{V}A^j \sum_{n=0}^2 x_n^{-1} \ket{\phi_n}, & \omega = \omega_{1,j}, \\ \displaystyle C_2 \hat{a}t{V}A^j \sum_{n=0}^2 x_n^{-1} z_{\upsilon_n}^{-1} \ket{\phi_n}, & \omega = \omega_{2,j}, \\ \ket{\phi_{\upsilon_2}}, & \omega = \omega_3 \\ \end{array} \right. \label{eq:pi} \end{eqnarray} and $C_1$ and $C_2$ are normalization constants. Thus, it follows that $\hat{a}t{A}^\star$ satisfies Eq.~\eqref{eq:sym3_optLambda} and Eq.~\eqref{eq:cond} with $\hat{a}t{A} = \hat{a}t{A}^\star$ if only if $\hat{a}t{A}^\star$ is expressed as \begin{eqnarray} \hat{a}t{A}^\star(\omega) &=& \left\{ \begin{array}{cc} \kappa^\star_\omega \ket{\pi^\star_\omega} \bra{\pi^\star_\omega}, & \omega \in \Omega^\star, \\ 0, & {\rm otherwise}, \\ \end{array} \right. \label{eq:Aopt_kappa} \end{eqnarray} where, for each $\omega \in \Omega^\star$, $\kappa^\star_\omega$ is a nonnegative real number. Therefore, from Remark~\ref{remark:opt}, to prove that $\hat{a}t{P}i^{(\hat{a}t{A}^\star)}$ is an optimal measurement, it suffices to show that there exists Alice's POVM $\hat{a}t{A}^\star$ (i.e., $\hat{a}t{A}^\star \in \mMA$) in the form of Eq.~\eqref{eq:Aopt_kappa}. Let $\hat{a}t{A}$ be an optimal solution to Problem~P. Due to the symmetry of the states, we assume without loss of generality that $\hat{a}t{A}$ is symmetric in the following sense: for any $\omega \in \Omega$, $\hat{a}t{A}(\omega') = \hat{a}t{V}A \hat{a}t{A}(\omega) \hat{a}t{V}A^\daggerger$ and $\hat{a}t{A}(\omega'') = \hat{a}t{V}A^\daggerger \hat{a}t{A}(\omega) \hat{a}t{V}A$ hold, where $\omega', \omega'' \in \Omega$ are the indices such that $\hat{a}t{B}^{(\omega')}_r = \hat{a}t{V}B \hat{a}t{B}w_{r \ominus 1} \hat{a}t{V}B^\daggerger$ and $\hat{a}t{B}^{(\omega'')}_r = \hat{a}t{V}B^\daggerger \hat{a}t{B}w_{r \oplus 1} \hat{a}t{V}B$ for any $r \in \mathcal{I}_3$ (see Theorem~4 of Ref.~\cite{Nak-Kat-Usu-2018-seq_gen} in detail). Let \begin{eqnarray} \hat{a}t{S}(\hat{a}t{T}) &\coloneqq& \frac{1}{3} \sum_{k=0}^2 \hat{a}t{V}A^k \hat{a}t{T} \left( \hat{a}t{V}A^k \right)^\daggerger, \label{eq:Somega} \end{eqnarray} where $\hat{a}t{T}$ is a positive semidefinite operator on $\mathcal{H}A$. $\hat{a}t{S}(\hat{a}t{T})$ is also a positive semidefinite operator on $\mathcal{H}A$ satisfying ${\rm Tr}[\hat{a}t{S}(\hat{a}t{T})] = {\rm Tr}~\hat{a}t{T}$ and commuting with $\hat{a}t{V}A$. For notational simplicity, we denote $\hat{a}t{S}[\hat{a}t{A}(\omega)]$ by $\hat{a}t{S}(\omega)$. Due to the symmetry of $\hat{a}t{A}$, $\hat{a}t{S}(\omega) = \hat{a}t{S}(\omega') = \hat{a}t{S}(\omega'')$ holds. 
Let \begin{eqnarray} \hat{a}t{E}^\star_k &\coloneqq& \hat{a}t{S} \left( \ket{\pi^\star_{\omega_k}} \bra{\pi^\star_{\omega_k}} \right), ~ k \in \{ 1,2,3 \}; \label{eq:Ek} \end{eqnarray} then, Eqs.~\eqref{eq:pi} and \eqref{eq:Somega} give \begin{eqnarray} \sum_{j=0}^2 \ket{\pi^\star_{\omega_{k,j}}} \bra{\pi^\star_{\omega_{k,j}}} &=& 3 \hat{a}t{E}^\star_k, ~~ k \in \{ 1,2 \}, \nonumber \\ \ket{\pi^\star_{\omega_3}} \bra{\pi^\star_{\omega_3}} &=& \hat{a}t{E}^\star_3. \label{eq:pi_E} \end{eqnarray} Here, assume that $\hat{a}t{S}(\omega)$ can be expressed as \begin{eqnarray} \hat{a}t{S}(\omega) &=& \sum_{k=1}^3 w_{\omega,k} \hat{a}t{E}^\star_k, ~~ \forall \omega \in \Omega_+, \nonumber \\ w_{\omega,k} &{(g)}e& 0, ~~ \forall \omega \in \Omega_+, k \in \{ 1,2,3 \}, \label{eq:thm_cond} \end{eqnarray} where \begin{eqnarray} \Omega_+ &\coloneqq& \{ \omega \in \Omega : \hat{a}t{A}(\omega) \neq 0 \} \end{eqnarray} and $w_{\omega,k}$ is a weight. Let us choose \begin{eqnarray} \kappa^\star_\omega &=& \left\{ \begin{array}{cc} \displaystyle \frac{w^\star_k}{3}, & \omega = \omega_{k,j} ~(k \in \{ 1,2 \}), \\ w^\star_3, & \omega = \omega_3, \\ \end{array} \right. \nonumber \\ w^\star_k &\coloneqq& \int_{\Omega_+} w_{\omega,k} d\omega; \label{eq:kappa_omega} \end{eqnarray} then, from Eq.~\eqref{eq:Aopt_kappa}, we have \begin{eqnarray} \int_\Omega \hat{a}t{A}^\star(d\omega) &=& \sum_{k=1}^3 w^\star_k \hat{a}t{E}^\star_k = \int_{\Omega_+} \sum_{k=1}^3 w_{\omega,k} \hat{a}t{E}^\star_k d\omega \nonumber \\ &=& \int_{\Omega_+} \hat{a}t{S}(d\omega) = \int_\Omega \hat{a}t{A}(d\omega) = \hat{1}A, \label{eq:Aopt_ident} \end{eqnarray} where $\hat{1}A$ is the identity operator on $\mathcal{H}A$. The first equation follows from Eq.~\eqref{eq:pi_E}. Equation~\eqref{eq:Aopt_ident} yields $\hat{a}t{A}^\star \in \mMA$. Therefore, to prove Theorem~\ref{thm:sym3_optLambda}, it suffices to prove Eq.~\eqref{eq:thm_cond}. \subsection{Reformulation of Eq.~\eqref{eq:thm_cond}} \label{subsec:sym3_optLambda_sufficient2} For convenience of analysis, we shall reformulate the sufficient condition given by Eq.~\eqref{eq:thm_cond}. For any positive semidefinite operator $\hat{a}t{T} \neq 0$, $s_n(\hat{a}t{T})$ is defined as follows: \begin{eqnarray} s_n(\hat{a}t{T}) &\coloneqq& {\rm B}raket{\phi_n | \frac{\hat{a}t{S}(\hat{a}t{T})}{{\rm Tr}[\hat{a}t{S}(\hat{a}t{T})]} | \phi_n}. \label{eq:sn} \end{eqnarray} From $\sum_{n=0}^2 \braket{\phi_n | \hat{a}t{S}(\hat{a}t{T}) | \phi_n} = {\rm Tr}[\hat{a}t{S}(\hat{a}t{T})]$, $\sum_{n=0}^2 s_n(\hat{a}t{T}) = 1$ holds. Let us consider the following point \begin{eqnarray} s(\hat{a}t{T}) &\coloneqq& [s_{\upsilon_1}(\hat{a}t{T}), s_{\upsilon_0}(\hat{a}t{T})], \end{eqnarray} which is in a two-dimensional space (we call it the $S$-plane). Since $s_n(\hat{a}t{T}) {(g)}e 0$ holds from Eq.~\eqref{eq:sn}, each $s(\hat{a}t{T})$ is in the first quadrant of the $S$-plane. We can easily verify that the point $s(\hat{a}t{T})$ has a one-to-one correspondence with $\hat{a}t{S}(\hat{a}t{T}) / {\rm Tr}[\hat{a}t{S}(\hat{a}t{T})]$. Let \begin{eqnarray} e^\star_k &\coloneqq& s \left( \ket{\pi^\star_{\omega_k}} \bra{\pi^\star_{\omega_k}} \right), ~ k \in \{ 1,2,3 \}, \label{eq:vk} \end{eqnarray} which is the point in the $S$-plane that corresponds to $\hat{a}t{E}^\star_k$ defined by Eq.~\eqref{eq:Ek}. $e^\star_3 = [0,0]$ holds from Eq.~\eqref{eq:pi}. Also, let $\mathcal{T}^\star$ be the triangle formed by $e^\star_1$, $e^\star_2$, and $e^\star_3$. 
Note that $\mathcal{T}^\star$ may degenerate to a straight line segment in special cases. For simplicity, we denote $s_n(\omega) \coloneqq s_n[\hat{a}t{A}(\omega)]$ and $s(\omega) \coloneqq s[\hat{a}t{A}(\omega)]$ $~(\omega \in \Omega_+)$. From the first line of Eq.~\eqref{eq:thm_cond}, we have \begin{eqnarray} s(\omega) &=& \frac{1}{{\rm Tr}[\hat{a}t{S}(\omega)]} \sum_{k=1}^3 w_{\omega,k} e^\star_k. \end{eqnarray} Thus, it follows that Eq.~\eqref{eq:thm_cond} is equivalent to the following: \begin{eqnarray} s(\omega) &\in& \mathcal{T}^\star, ~~ \forall \omega \in \Omega_+. \label{eq:thm_sufficient_T} \end{eqnarray} Figure~\ref{fig:triangle} shows the $S$-plane representation in the case of $K_\A = K_\B = 0.2 \exp(i \pi / 10)$. The entire sets of points $s(\omega)$ $~(\omega \in \Omega_+)$ with $|{T^\w}| = 2$ and 3, denoted by $\mathcal{D}_2$ and $\mathcal{D}_3$, are depicted by the green and blue regions in this figure, respectively. Also, $s(\omega) = e^\star_1$ holds when $\omega$ satisfies $|{T^\w}| = 1$. Indeed, in this case, since ${T^\w} = \{ j \}$ holds for certain $j \in \mathcal{I}_3$, we can easily see that $\hat{a}t{A}(\omega) \propto \ket{\pi^\star_{\omega_{1,j}}} \bra{\pi^\star_{\omega_{1,j}}}$ must hold from Eq.~\eqref{eq:suppA}, which gives $s(\omega) = e^\star_1$. The triangle $\mathcal{T}^\star$ is also shown in the dashed line in Fig.~\ref{fig:triangle}. One can see that $e^\star_1$, $\mathcal{D}_2$, and $\mathcal{D}_3$ are all included in $\mathcal{T}^\star$. Note that we can show, under the assumption that Theorem~\ref{thm:sym3_optLambda} holds, that a sequential measurement can be globally optimal if and only if $s(\hat{1}A) (= [1/3, 1/3]) \in \mathcal{T}^\star$ holds (see Appendix~\ref{append:S}). In the case shown in Fig.~\ref{fig:triangle}, $s(\hat{1}A)$ is in $\mathcal{T}^\star$, and thus a sequential measurement can be globally optimal. \begin{figure} \caption{$S$-plane representation in the case of $K_\A = K_\B = 0.2 \exp(i \pi / 10)$. $e^\star_1$ (purple), $\mathcal{D} \label{fig:triangle} \end{figure} \subsection{Proof of Eq.~\eqref{eq:thm_sufficient_T}} \label{subsec:sym3_optLambda_proof} We shall prove that Eq.~\eqref{eq:thm_sufficient_T}, which is a sufficient condition of Theorem~\ref{thm:sym3_optLambda}, holds. $\hat{a}t{G}opt$ can be rewritten as the following form: \begin{eqnarray} \hat{a}t{G}opt(\omega) &=& \hat{a}t{X}Gopt - \sum_{r=0}^2 \mu^\w_r \ket{a_r} \bra{a_r}. \label{eq:hGw3} \end{eqnarray} From Eq.~\eqref{eq:hGw2}, $\mu^\w_r = -\infty$ holds if $e^\w_r \neq 0$; otherwise, $\mu^\w_r = p^\w_r / 3$ holds. It is very hard to show in a naive way that each $s(\omega)$ with $\omega \in \Omega_+$ is included in $\mathcal{T}^\star$. However, we can rather easily show that Eq.~\eqref{eq:thm_sufficient_T} holds by considering the following two cases: (1) the case in which at least two of $\{ \mu^\w_r \}_r$ are the same and (2) the other case in which $\{ \mu^\w_r \}_r$ are all different. \subsubsection*{Case (1): at least two of $\{ \mu^\w_r \}_r$ are the same} Due to the symmetry of the states, we assume $\mu^\w_1 = \mu^\w_2 \eqqcolon q$ without loss of generality; then, $\hat{a}t{G}opt(\omega)$ can be expressed as \begin{eqnarray} \hat{a}t{G}opt(\omega) &=& \hat{a}t{X}Gopt - q \hat{a}t{P}si - p \ket{a_0} \bra{a_0}, \label{eq:K_omega_q} \end{eqnarray} where $\hat{a}t{P}si \coloneqq \sum_{k=0}^2 \ket{a_k} \bra{a_k}$ and $p$ is a real number. If $|{T^\w}| = 3$, then $q = p^\w_1/3 = p^\w_2/3$ holds to satisfy Eq.~\eqref{eq:K_omega_q}. 
Also, in this case, we can easily see that $p^\w_1 = p^\w_2 \le p^{(\omega_2)}_1 = p^{(\omega_2)}_2 = 3\eta$ holds, which gives $0 \le q \le \eta$. Moreover, $q = - \infty$ holds if $|{T^\w}| = 1$, and $q = \eta$ holds if $|{T^\w}| = 2$. Thus, $q \le \eta$ always holds. For each $q \le \eta$, let $\ket{{(g)}amma_q}$ be a normal vector satisfying \begin{eqnarray} \ket{{(g)}amma_q} &\in& {\rm Ker}~\hat{a}t{G}one(q), \nonumber \\ \hat{a}t{G}one(q) &\coloneqq& \hat{a}t{X}Gopt - q \hat{a}t{P}si - p_q \ket{a_0} \bra{a_0}, \label{eq:Gone} \end{eqnarray} where $p_q$ is a real number determined such that ${\rm rank}~\hat{a}t{G}one(q) < 3$. $\ket{{(g)}amma_q}$ can be written, up to a global phase, as (see Appendix~\ref{append:gamma}): \begin{eqnarray} \ket{{(g)}amma_q} &=& \left\{ \begin{array}{cc} \displaystyle C'_q \sum_{n=0}^2 \frac{1}{x_n(y_{\upsilon_n}^2 - q)} \ket{\phi_n}, & q \neq y_2^2, \\ \ket{\phi_{\upsilon_2}}, & {\rm otherwise}, \\ \end{array} \right. \label{eq:soq} \end{eqnarray} where $C'_q$ is a normalization constant. Let $\mathcal{C}$ be the set defined as \begin{eqnarray} \mathcal{C} &\coloneqq& \{ s(\ket{{(g)}amma_q} \bra{{(g)}amma_q}) : q \le \eta \}. \end{eqnarray} In Fig.~\ref{fig:triangle}, $\mathcal{C}$ is shown in the blue dotted line. Since $\hat{a}t{G}opt(\omega)$ is in the form of Eq.~\eqref{eq:K_omega_q} satisfying ${\rm rank}~\hat{a}t{G}opt(\omega) < 3$, $\hat{a}t{G}opt(\omega)$ is equivalent to $\hat{a}t{G}one(\mu^\w_1)$. Thus, $\hat{a}t{A}(\omega) \propto \ket{{(g)}amma_{\mu^\w_1}} \bra{{(g)}amma_{\mu^\w_1}}$ holds, which yields $s(\omega) \in \mathcal{C}$. Therefore, to prove $s(\omega) \in \mathcal{T}^\star$, it suffices to show $\mathcal{C} \subseteq \mathcal{T}^\star$. We can prove this using Eq.~\eqref{eq:soq} (see Appendix~\ref{append:C}). \subsubsection*{Case (2): $\{ \mu^\w_r \}_r$ are all different} In this case, we can show that each $s(\omega)$ is on a straight line segment whose endpoints are in $\mathcal{C}$ (see Appendix~\ref{append:case2}). Since $\mathcal{C} \subseteq \mathcal{T}^\star$ holds, such a line segment is in the triangle $\mathcal{T}^\star$. Therefore, $s(\omega) \in \mathcal{T}^\star$ holds. The two cases (1) and (2) exhaust all possibilities; thus, from the above arguments, Eq.~\eqref{eq:thm_sufficient_T} holds, and thus we complete the proof. \hspace*{0pt} $\blacksquare$ \section{Examples} \label{sec:example} In this section, we present some examples of symmetric ternary pure states in which a sequential measurement can be globally optimal. In Secs.~\ref{subsec:example_bi_eq} and \ref{subsec:example_bi_ineq}, we consider the bipartite case. In Sec.~\ref{subsec:example_multi}, we consider the multipartite case. \subsection{Case of $K_\A = K_\B$} \label{subsec:example_bi_eq} We first give some examples of bipartite states $\{ \ket{\Psi_r} \coloneqq \ket{a_r} \otimes \ket{b_r} \}_r$ with $K_\A = K_\B \eqqcolon K$. Note that when $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ are in the form of Eqs.~\eqref{eq:xy_cond} and \eqref{eq:ab_matrix}, $x_n = y_n$ holds for each $n \in \mathcal{I}_3$, and thus $x_0 {(g)}e x_1$ holds from $y_0 {(g)}e y_1$. The region of the complex plane where a sequential measurement for the states $\{ \ket{\Psi_r} \}$ with $K_\A = K_\B = K$ can be globally optimal is shown in red in Fig.~\ref{fig:result-half}. This region is easily obtained from Corollary~\ref{cor:sym3_nas}. The horizontal and vertical directions are the real and imaginary axes, respectively. 
The region of all possible $K$ is represented as the dotted equilateral triangle. This figure implies that, at least in the case of $K_\A = K_\B$, a sequential measurement can be globally optimal in quite a few cases. As a concrete example, let us consider the symmetric ternary pure states in which $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ are the lifted trine states, $\{ \ket{{\rm L}_r} \}_{r=0}^2$, which are expressed by \cite{Sho-2002} \begin{eqnarray} \ket{{\rm L}_r} &=& \sqrt{1 - g} \left( \cos \frac{2\pi r}{3} \ket{u_0} + \sin \frac{2\pi r}{3} \ket{u_1} \right) + \sqrt{g} \ket{u_2}, \nonumber \\ \label{eq:lift_r} \end{eqnarray} where $\{ \ket{u_n} \}_{n=0}^2$ is an orthonormal basis. The real parameter $g$ is in the range $0 < g < 1$. Equation~\eqref{eq:lift_r} gives $K = (3g - 1) / 2$, and thus $K$ is real in the range $-1/2 < K < 1$. It follows that the states $\{ \ket{\Psi_r} = \ket{{\rm L}_r} \otimes \ket{{\rm L}_r} \}_r$ are also regarded as lifted trine states. The region of possible values of $K$ is shown in dashed green line in Fig.~\ref{fig:result-half}. From this figure, a sequential measurement for $\{ \ket{\Psi_r} \}$ can be globally optimal if and only if $K {(g)}e 0$ (i.e., $g {(g)}e 1/3$). Another example is the states in which $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ are the ternary PSK optical coherent states $\{ \ket{\alpha_r} \}_{r=0}^2$, where $\ket{\alpha_r}$ is a normalized eigenvector of the photon annihilation operator with the eigenvalue $\alpha_r \coloneqq \sqrt{S}\tau^r$, and $S = |\alpha_r|^2$ is the average photon number of $\ket{\alpha_r}$. In this case, the states $\{ \ket{\Psi_r} = \ket{\alpha_r} \otimes \ket{\alpha_r} \}_r$ are also regarded as the ternary PSK optical coherent states with the average photon number $2S$. We have \begin{eqnarray} K &=& \braket{\alpha_0|\alpha_1} = e^{-\frac{3}{2} S} e^{i \frac{\sqrt{3}}{2} S} \label{eq:PSK_K} \end{eqnarray} for some global phase. The solid blue line in Fig.~\ref{fig:result-half} shows the region of possible values of $K$. It follows from Eq.~\eqref{eq:PSK_K} that $\arg K = \frac{\sqrt{3}}{2} S$ is proportional to $S$. From Fig.~\ref{fig:result-half}, a necessary and sufficient condition that a sequential measurement for $\{ \ket{\Psi_r} \}$ can be globally optimal is $2\pi k/3 \le \arg K + \pi/6 \le 2\pi k/3 + \pi/3$, i.e., \begin{eqnarray} \frac{(4k-1)\pi}{3\sqrt{3}} \le S \le \frac{(4k+1)\pi}{3\sqrt{3}}, ~~ k \in \{ 0, 1, 2, \cdots \}. \label{eq:ex_S} \end{eqnarray} The average success probability of an optimal sequential measurement for $\{ \ket{\Psi_r} = \ket{\alpha_r} \otimes \ket{\alpha_r} \}_r$ is plotted in solid blue line in Fig.~\ref{fig:result-half-PSK}. Also, that of an optimal measurement is shown in dashed black line. These probabilities can be numerically computed using a modified version of the method given in Ref.~\cite{Nak-Kat-Usu-2018-Dolinar}. The region of $S$ satisfying Eq.~\eqref{eq:ex_S}, in which a sequential measurement can be globally optimal, is shown in red. It is worth mentioning that, as shown in Fig.~2 of Ref.~\cite{Nak-Kat-Usu-2018-Dolinar}, in the strategy for minimum-error discrimination, an optimal sequential measurement for the ternary PSK optical coherent states is unlikely to be globally optimal, at least when $S$ is small. In the strategy for unambiguous discrimination, a sequential measurement can be globally optimal if (and only if) $S$ satisfies Eq.~\eqref{eq:ex_S}. 
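The two examples above are easy to check numerically. The following Python sketch is an illustration only (the helper names are ours and not part of any reference implementation): it evaluates $K = (3g-1)/2$ for the lifted trine states of Eq.~\eqref{eq:lift_r}, evaluates $K$ for the ternary PSK coherent states via Eq.~\eqref{eq:PSK_K}, and tests whether a given average photon number $S$ lies in one of the intervals of Eq.~\eqref{eq:ex_S}.
\begin{verbatim}
import numpy as np

def K_lifted_trine(g):
    # Inner product <L_0|L_1> for the lifted trine states of Eq. (lift_r):
    # K = (3g - 1)/2, real for 0 < g < 1.
    return (3 * g - 1) / 2

def K_psk(S):
    # Inner product <alpha_0|alpha_1> for ternary PSK coherent states,
    # Eq. (PSK_K): K = exp(-3S/2) * exp(i*sqrt(3)*S/2).
    return np.exp(-1.5 * S) * np.exp(1j * np.sqrt(3) * S / 2)

def seq_can_be_optimal_psk(S, kmax=1000):
    # Eq. (ex_S): S must lie in [(4k-1)pi/(3 sqrt(3)), (4k+1)pi/(3 sqrt(3))].
    c = np.pi / (3 * np.sqrt(3))
    return any((4 * k - 1) * c <= S <= (4 * k + 1) * c for k in range(kmax))

print(K_lifted_trine(1 / 3))                    # 0.0 -> boundary of K >= 0
print(K_psk(1.0), seq_can_be_optimal_psk(1.0))  # S = 1.0 violates Eq. (ex_S)
\end{verbatim}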
\begin{figure} \caption{The region of the complex plane where a sequential measurement for symmetric ternary pure states with $K_\A = K_\B \eqqcolon K$ can be globally optimal.} \label{fig:result-half} \end{figure} \begin{figure} \caption{The average success probabilities of an optimal sequential measurement and an optimal measurement for the ternary PSK optical coherent states $\{ \ket{\alpha_r} \otimes \ket{\alpha_r} \}_r$.} \label{fig:result-half-PSK} \end{figure} \subsection{Case of $K_\A \neq K_\B$} \label{subsec:example_bi_ineq} Two concrete examples of symmetric ternary pure states with $K_\A \neq K_\B$ will be given. The first is a set of states $\{ \ket{\Psi_r} \coloneqq \ket{a_r} \otimes \ket{\delta_r} \}_r$, where $\{ \ket{\delta_r} \}_{r=0}^2$ are ternary PPM optical coherent states expressed by \begin{eqnarray} \ket{\delta_0} &\coloneqq& \ket{\alpha} \otimes \ket{\beta} \otimes \ket{\beta}, \nonumber \\ \ket{\delta_1} &\coloneqq& \ket{\beta} \otimes \ket{\alpha} \otimes \ket{\beta}, \nonumber \\ \ket{\delta_2} &\coloneqq& \ket{\beta} \otimes \ket{\beta} \otimes \ket{\alpha}. \label{eq:PPM} \end{eqnarray} $\ket{\alpha}$ and $\ket{\beta}$ are distinct optical coherent states. $\{ \ket{a_r} \}$ are not necessarily ternary PPM optical coherent states. From Eq.~\eqref{eq:PPM}, $K_\B = \braket{\delta_r | \delta_{r \oplus 1}} = |\braket{\alpha|\beta}|^2 \braket{\beta|\beta}$ holds, and thus $K_\B$ is a nonnegative real number. Therefore, as described in Sec.~\ref{subsec:sym3_optLambda_y1y2}, a sequential measurement can be globally optimal. The same argument can be applied to the states $\{ \ket{\Psi_r} \coloneqq \ket{\delta_r} \otimes \ket{b_r} \}_r$. The second example is the states $\{ \ket{\Psi_r} \coloneqq \ket{a_r} \otimes \ket{b_r} \}_r$, where $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ are the ternary PSK optical coherent states with average photon numbers $S_\A$ and $S_\B$, respectively. The states $\{ \ket{\Psi_r} \}$ are also regarded as the ternary PSK optical coherent states with the average photon number $S_\A + S_\B$. The region of $(S_\A, S_\B)$ in which a sequential measurement can be globally optimal is shown in red in Fig.~\ref{fig:result-PSK}. We can see that a sequential measurement can be globally optimal in some cases. If $S_\A$ (or $S_\B$) is equal to $4\pi k/(3\sqrt{3}) \approx 2.42k$ $~(k = 1, 2, \cdots)$, then, since $K_\A$ (or $K_\B$) is a nonnegative real number, a sequential measurement can be globally optimal. \begin{figure} \caption{The region of $(S_\A, S_\B)$ where a sequential measurement for the ternary PSK optical coherent states $\{ \ket{a_r} \otimes \ket{b_r} \}_r$ can be globally optimal.} \label{fig:result-PSK} \end{figure} Let us consider whether the ternary PSK optical coherent states $\{ \ket{\alpha_r} \}$ with an average photon number $S$ can be unambiguously discriminated by a Dolinar-like receiver, which consists of continuous photon counting and infinitely fast feedback (e.g., \cite{Dol-1976}). The performance of this receiver never exceeds that of an optimal sequential measurement for $N$-partite PSK optical coherent states $\{ \ket{\alpha'_r}^{\otimes N} \}$ with $N \to \infty$, where $\{ \ket{\alpha'_r} \coloneqq \ket{\alpha_r / \sqrt{N}} \}_r$ are also ternary PSK optical coherent states with the average photon number $S / N$. Note that $N$ identical copies of $\ket{\alpha'_r}$ are regarded as $\ket{\alpha_r}$, whose average photon number is $S$ (i.e., $\ket{\alpha_r} = \ket{\alpha'_r}^{\otimes N}$). We here want to know whether a Dolinar-like receiver can be globally optimal.
We consider the bipartite ternary states $\{ \ket{\alpha_r} = \ket{a_r} \otimes \ket{b_r} \}_r$, where $\ket{a_r} = \ket{\sqrt{t} \alpha_r} (= \ket{\alpha'_r}^{\otimes tN})$ and $\ket{b_r} = \ket{\sqrt{1-t} \alpha_r} (= \ket{\alpha'_r}^{\otimes (1-t)N})$ with $0 < t < 1$ are optical coherent states with average photon numbers $tS$ and $(1-t)S$, respectively. The average success probability of an optimal sequential measurement for the bipartite states with any $t$ is an upper bound on that of an optimal sequential measurement for $N$-partite states $\{ \ket{\alpha'_r}^{\otimes N} \}$ with $N \to \infty$, and thus is an upper bound on that of a Dolinar-like receiver. We here show that there exists $t$ such that an optimal sequential measurement for the corresponding bipartite states $\{ \ket{a_r} \otimes \ket{b_r} \}_r$ is not globally optimal, which means that a Dolinar-like receiver cannot be globally optimal. In the case in which $\braket{\alpha_0 | \alpha_1}$ is nonnegative real (i.e., $S = 4\pi k/\sqrt{3}$ with $k = 1, 2, \cdots$), we choose $t = 1/2$; then, from Eq.~\eqref{eq:PSK_K}, $K_\A = K_\B$ and $\arg~K_\A = \pi$ holds, and thus a sequential measurement cannot be globally optimal, as already shown in Fig.~\ref{fig:result-half}. In the other case, we choose $t \to 0$; formulating $\{ \ket{a_r} \}$ and $\{ \ket{b_r} \}$ in the form of Eqs.~\eqref{eq:xy_cond} and \eqref{eq:ab_matrix}, we have that for each $k \in \{1,2\}$ \begin{eqnarray} x_k^2 &=& \frac{1}{3} + \frac{2}{3} e^{-\frac{3tS}{2}} \cos \left[ (-1)^k \frac{2\pi}{3} + \frac{\sqrt{3}tS}{2} \right]. \end{eqnarray} Taking the limit of $t \to 0$, we obtain $x_2 / x_1 \to 0$. From Corollary~\ref{cor:sym3_nas}, it is necessary to satisfy $x_2 z_0 - x_1 z_1 {(g)}e 0$ for a sequential measurement to be able to be globally optimal. When $t \to 0$, from $x_2 / x_1 \to 0$, $z_1 \to 0$ must hold. However, $z_1$ converges to a positive number. ($z_1 \to 0$ holds only if $\braket{b_0 | b_1}$ converges to a nonnegative real number, i.e., $y_1 - y_2 \to 0$; however, $\braket{b_0 | b_1}$ converges to $\braket{\alpha_0 | \alpha_1}$, which is not a nonnegative real number.) Therefore, a Dolinar-like receiver cannot be globally optimal for any ternary PSK optical coherent states. \subsection{Case of multipartite states} \label{subsec:example_multi} As an example of multipartite states, let us address the problem of multiple-copy state discrimination \cite{Bro-Mei-1996,Aci-Bag-Bai-Mas-2005,Hig-Boo-Dph-Bar-2009,Cal-Vic-Mun-Bag-2010,Hig-Doh-Bar-Pry-Wis-2011}. We again consider $N$-partite ternary PSK optical coherent states $\{ \ket{\alpha'_r}^{\otimes N} \}$ $~(\ket{\alpha'_r} \coloneqq \ket{\alpha_r / \sqrt{N}})$. As described in Sec.~\ref{subsec:example_bi_ineq}, in the limit of $N \to \infty$, a sequential measurement cannot be globally optimal. In this section, we consider $N$ to be finite. By using Corollaries~\ref{cor:sym3_nas} and \ref{cor:multipartite}, we can judge whether a sequential measurement can be globally optimal. The region of the average photon number $S$ of $\ket{\alpha_r}$ for which the sufficient condition holds is shown in red in Fig.~\ref{fig:result-PSK-multiple}. We here consider the range $S \le 1.3$. We can see in this figure that a sequential measurement can be globally optimal even for large $N$ (such as $N = 20$) if $S$ is sufficiently small (such as $S \le 0.1$). 
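As a numerical complement to the limiting argument of Sec.~\ref{subsec:example_bi_ineq}, the short Python sketch below (illustrative only; it simply evaluates the displayed formula for $x_k^2$) computes the ratio $x_2/x_1$ for a few small values of $t$ and shows that it indeed tends to $0$ as $t \to 0$.
\begin{verbatim}
import numpy as np

def x_sq(k, t, S):
    # The displayed formula for x_k^2 (k = 1, 2) of the bipartite splitting
    # |a_r> = |sqrt(t) alpha_r>, |b_r> = |sqrt(1-t) alpha_r>.
    return 1 / 3 + (2 / 3) * np.exp(-1.5 * t * S) * \
        np.cos((-1) ** k * 2 * np.pi / 3 + np.sqrt(3) * t * S / 2)

S = 1.0
for t in (1e-1, 1e-2, 1e-3, 1e-4):
    ratio = np.sqrt(x_sq(2, t, S) / x_sq(1, t, S))
    print(f"t = {t:.0e}:  x_2/x_1 = {ratio:.4f}")   # tends to 0 as t -> 0
\end{verbatim}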
\begin{figure} \caption{Sufficient condition for a sequential measurement for the $N$-partite ternary PSK optical coherent states $\{ \ket{\alpha'_r}^{\otimes N} \}$ to be able to be globally optimal.} \label{fig:result-PSK-multiple} \end{figure} \section{Conclusion} \label{conclusion} An unambiguous sequential measurement for bipartite symmetric ternary pure states has been investigated. We have shown that a certain type of sequential measurement can always be globally optimal whenever there exists a globally optimal sequential measurement. From this result, we have derived a formula that can easily determine whether an optimal sequential measurement is globally optimal. Moreover, our results have been extended to multipartite states, which gives a sufficient condition under which a sequential measurement can be globally optimal. \begin{acknowledgments} We are grateful to O. Hirota of Tamagawa University for support. This work was supported by JSPS KAKENHI Grant Numbers JP17H07115 and JP16H04367. \end{acknowledgments} \appendix \section{Proof of Corollary~\ref{cor:sym3_nas}} \label{append:cor_sym3_nas} Since, as already described in Sec.~\ref{subsec:sym3_optLambda_y1y2}, a sequential measurement can be globally optimal when $y_1 = y_2$, we only have to consider the case $y_1 \neq y_2$. $(1) \Rightarrow (2)$: From the discussion in Sec.~\ref{subsec:sym3_optLambda_sufficient1}, there exists an optimal solution, $\hat{A}^\star$, to Problem~P that is expressed by Eq.~\eqref{eq:Aopt_kappa} with $\kappa^\star_{\omega_k}$ $~(k \in \{ 1,2,3 \})$ independent of $j \in \mathcal{I}_3$. Since $\hat{A}^\star$ is a POVM, we have \begin{eqnarray} \sum_{j=0}^2 [\hat{A}^\star(\omega_{1,j}) + \hat{A}^\star(\omega_{2,j})] + \hat{A}^\star(\omega_3) &=& \hat{1}_{\rm A}. \label{eq:Aopt_POVM} \end{eqnarray} Substituting Eqs.~\eqref{eq:pi} and \eqref{eq:Aopt_kappa} into Eq.~\eqref{eq:Aopt_POVM} gives \begin{eqnarray} \left[ \begin{array}{ccc} x_{\upsilon_0}^{-2} & x_{\upsilon_0}^{-2} z_0^{-2} & 0 \\ x_{\upsilon_1}^{-2} & x_{\upsilon_1}^{-2} z_1^{-2} & 0 \\ x_{\upsilon_2}^{-2} & x_{\upsilon_2}^{-2} z_2^{-2} & 1 \\ \end{array} \right] \left[ \begin{array}{c} 3 \kappa^\star_{\omega_1} |C_1|^2 \\ 3 \kappa^\star_{\omega_2} |C_2|^2 \\ \kappa^\star_{\omega_3} \end{array} \right] &=& \left[ \begin{array}{c} 1 \\ 1 \\ 1 \end{array} \right], \label{eq:nu2} \end{eqnarray} where we use $\upsilon_{\upsilon_k} = k$ $~(k \in \mathcal{I}_3)$, which follows from Eq.~\eqref{eq:upsilon}. After some algebra, we can see that Eq.~\eqref{eq:sym3_nas_cond} holds if and only if there exists $\kappa^\star_{\omega_k} \ge 0$ satisfying Eq.~\eqref{eq:nu2}. $(2) \Rightarrow (1)$: Let $\kappa^\star_{\omega_k}$ be the solution to Eq.~\eqref{eq:nu2}; then, $\hat{A}^\star$ defined by Eq.~\eqref{eq:Aopt_kappa} is a POVM. Since, as already described in Sec.~\ref{subsec:sym3_optLambda_sufficient1}, $\hat{G}_{\rm opt}(\omega) \ket{\pi^\star_\omega} = 0$ holds for any $\omega \in \Omega^\star$, Eq.~\eqref{eq:cond} with $\hat{A} = \hat{A}^\star$ obviously holds. Therefore, from Remark~\ref{remark:opt}, the sequential measurement $\Pi^{(\hat{A}^\star)}$ is globally optimal. \hspace*{0pt} $\blacksquare$ Note that one can obtain an analytical expression of $\hat{A}^\star$ by substituting the solution $\kappa^\star_{\omega_k}$ of Eq.~\eqref{eq:nu2} into Eq.~\eqref{eq:pi}.
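For concreteness, the following Python sketch shows one way condition (2) of Corollary~\ref{cor:sym3_nas} could be tested numerically: it solves the linear system of Eq.~\eqref{eq:nu2} and checks the nonnegativity of the resulting $\kappa^\star_{\omega_k}$. The numerical values of $x_{\upsilon_k}$, $z_k$, and $|C_k|^2$ used here are placeholders chosen only for illustration; in practice they are computed from the given states.
\begin{verbatim}
import numpy as np

# Placeholder values for x_{upsilon_k}, z_k and |C_k|^2, chosen only to
# illustrate the test; in practice they follow from the given states.
x = np.array([0.80, 0.55, 0.25])   # x_{upsilon_0}, x_{upsilon_1}, x_{upsilon_2}
z = np.array([0.70, 0.60, 0.50])   # z_0, z_1, z_2
C1_sq, C2_sq = 0.40, 0.30          # |C_1|^2, |C_2|^2

# Coefficient matrix and right-hand side of Eq. (nu2).
M = np.array([[x[0] ** -2, x[0] ** -2 * z[0] ** -2, 0.0],
              [x[1] ** -2, x[1] ** -2 * z[1] ** -2, 0.0],
              [x[2] ** -2, x[2] ** -2 * z[2] ** -2, 1.0]])
v = np.linalg.solve(M, np.ones(3))  # [3 k1 |C1|^2, 3 k2 |C2|^2, k3]

kappa = np.array([v[0] / (3 * C1_sq), v[1] / (3 * C2_sq), v[2]])
print(kappa, "feasible:", bool(np.all(kappa >= 0)))
\end{verbatim}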
\section{Derivation of Eq.~\eqref{eq:ZG}} \label{append:sym3_opt} From Theorem~4 of Ref.~\cite{Eld-2003-unamb}, since the states $\{ \ket{\Psi_r} \}_{r=0}^2$ with equal prior probabilities are {\it geometrically uniform} states, the {\it equal-probability} measurement $\hat{\Pi}^\star \coloneqq \{ \hat{\Pi}^\star_r \}_{r=0}^3$, given by \begin{eqnarray} \hat{\Pi}^\star_r &=& \ket{\pi^\star_r} \bra{\pi^\star_r}, ~~ r \in \mathcal{I}_3, \nonumber \\ \ket{\pi^\star_r} &\coloneqq& \frac{\tilde{x}_{\upsilon_0}}{\sqrt{3}} \sum_{n=0}^2 \tilde{x}_n^{-1} \ket{\tilde{\phi}_n}, \end{eqnarray} is an optimal unambiguous measurement for $\{ \ket{\Psi_r} \}$. Also, its average success probability is $P(\hat{\Pi}^\star) = 3 \tilde{x}_{\upsilon_0}^2$. Therefore, $\hat{Z}_{\rm G}^\star$ of Eq.~\eqref{eq:ZG}, which is a feasible solution to Problem~${\rm DP_G}$, satisfies ${\rm Tr}~\hat{Z}_{\rm G}^\star = P(\hat{\Pi}^\star)$, and thus is an optimal solution to Problem~${\rm DP_G}$. Note that $\hat{Z}_{\rm G}^\star$ of Eq.~\eqref{eq:ZG} is always an optimal solution to Problem~${\rm DP_G}$, while there could be other optimal solutions. \section{Supplement on the $S$-plane} \label{append:S} Under the assumption that Theorem~\ref{thm:sym3_optLambda} holds, we shall show that $s(\hat{1}_{\rm A}) \in \mathcal{T}^\star$ is a necessary and sufficient condition that a sequential measurement can be globally optimal. First, we show the necessity. Assume that a sequential measurement can be globally optimal. From Theorem~\ref{thm:sym3_optLambda}, there exists $\hat{A}^\star \in \mMA$ satisfying Eq.~\eqref{eq:sym3_optLambda} such that $\hat{\Pi}^{(\hat{A}^\star)}$ is globally optimal. As described in Sec.~\ref{subsec:sym3_optLambda_sufficient1}, $\hat{A}^\star$ is expressed by Eq.~\eqref{eq:Aopt_kappa}. Thus, let $\kappa'_3 \coloneqq \kappa^\star_{\omega_3}$ and $\kappa'_k \coloneqq 3 \kappa^\star_{\omega_k}$ for $k \in \{1,2\}$; then, since $\hat{A}^\star$ is a POVM, we have \begin{eqnarray} \sum_{k=1}^3 \kappa'_k \hat{E}^\star_k &=& \int_\Omega \hat{A}^\star(d\omega) = \hat{1}_{\rm A}. \label{eq:Asum} \end{eqnarray} Premultiplying and postmultiplying this equation by $\bra{\phi_n}$ and $\ket{\phi_n}$, respectively, gives \begin{eqnarray} \sum_{k=1}^3 \frac{\kappa'_k}{3} e^\star_k &=& s(\hat{1}_{\rm A}). \label{eq:sn_A} \end{eqnarray} This indicates that $s(\hat{1}_{\rm A})$ is a weighted sum of the $e^\star_k$ with weights $\kappa'_k/3 \ge 0$, and thus $s(\hat{1}_{\rm A}) \in \mathcal{T}^\star$ holds. Next, we show the sufficiency. The above argument can be applied in the reverse direction. Assume $s(\hat{1}_{\rm A}) \in \mathcal{T}^\star$; then, there exist $\kappa'_k \ge 0$ satisfying Eq.~\eqref{eq:sn_A}. Consider $\hat{A}^\star$ expressed by Eq.~\eqref{eq:Aopt_kappa} with $\kappa^\star_{\omega_3} = \kappa'_3$ and $\kappa^\star_{\omega_k} = \kappa'_k / 3$ $~(k \in \{1,2\})$. It follows that $\hat{A}^\star$ is a POVM satisfying Eq.~\eqref{eq:sym3_optLambda} and $\hat{G}_{\rm opt}(\omega) \hat{A}^\star(\omega) = 0$. Thus, from Remark~\ref{remark:opt}, $\hat{\Pi}^{(\hat{A}^\star)}$ is globally optimal, and thus a sequential measurement can be globally optimal.
\hspace*{0pt} $\blacksquare$ \section{Supplement of Theorem~\ref{thm:sym3_optLambda}} \subsection{Proof of $y_2^2 < \eta < y_1^2$} \label{append:eta} Let \begin{eqnarray} \chi &\coloneqq& y_0^2y_1^2 + y_1^2y_2^2 + y_2^2y_0^2; \label{eq:chi} \end{eqnarray} then, we have \begin{eqnarray} \lefteqn{ (1-3y_k^2)^2 - (1-3\chi) } \nonumber \\ &=& 3 (3 y_k^4 - 2y_k^2 + \chi) \nonumber \\ &=& 3 [y_k^4 - 2 y_k^2 (y_{k \oplus 1}^2 + y_{k \oplus 2}^2) + y_k^2 (y_{k \oplus 1}^2 + y_{k \oplus 2}^2) + y_{k \oplus 1}^2 y_{k \oplus 2}^2] \nonumber \\ &=& 3 (y_k^2 - y_{k \oplus 1}^2)(y_k^2 - y_{k \oplus 2}^2), \label{eq:eta_ineq} \end{eqnarray} where the third line follows from $\sum_{n=0}^2 y_n^2 = 1$. Substituting $k = 1$ into Eq.~\eqref{eq:eta_ineq} yields $(1-3y_1^2)^2 \le 1-3\chi$. The equality holds when $y_0 = y_1$. In this case, from $y_2^2 = 1 - 2y_0^2$, we have $1 - 3y_1^2 = y_2^2 - y_0^2 < 0 \le \sqrt{1 - 3\chi}$. Thus, $1 - 3y_1^2 < \sqrt{1 - 3\chi}$ always holds. Substituting the definition of $y_n$ in Eq.~\eqref{eq:xy} into Eq.~\eqref{eq:chi} gives $|K_\B|^2 = |K_\B'|^2 = 1 - 3 \chi$. Therefore, from the definition of $\eta$, we have \begin{eqnarray} \eta &=& \frac{1}{3} \left( 1 - \sqrt{1 - 3\chi} \right) < y_1^2. \label{eq:eta_chi} \end{eqnarray} In the same way, substituting $k = 2$ into Eq.~\eqref{eq:eta_ineq} yields $1-3y_2^2 > \sqrt{1-3\chi}$, which gives $\eta > y_2^2$. \hspace*{0pt} $\blacksquare$ \subsection{Derivation of $\ket{{(g)}amma_q}$} \label{append:gamma} From Eqs.~\eqref{eq:ab_matrix} and \eqref{eq:XG}, we have \begin{eqnarray} \hat{a}t{X}Gopt - q \hat{a}t{P}si &=& 3 \sum_{n=0}^2 x_n^2 (y_{\upsilon_n}^2 - q) \ket{\phi_n} \bra{\phi_n}. \label{eq:X_qPsi} \end{eqnarray} Also, since $y_2^2 < \eta < y_1^2 \le y_0^2$ holds (see Appendix~\ref{append:eta}), $\hat{a}t{X}Gopt - q \hat{a}t{P}si$ $~(q \le \eta)$ is singular if and only if $q = y_2^2$ holds. One can easily see $\ket{{(g)}amma_q} = \ket{\phi_{\upsilon_2}}$ when $q = y_2^2$. Note that, in this case, one can define $p_q \coloneqq 0$. In what follows, assume $q \neq y_2^2$. From Eq.~\eqref{eq:Gone}, we have \begin{eqnarray} (\hat{a}t{X}Gopt - q \hat{a}t{P}si) \ket{{(g)}amma_q} &=& p_q \ket{a_0} \braket{a_0 | {(g)}amma_q} \propto \ket{a_0}. \label{eq:soq0} \end{eqnarray} Thus, from Eqs.~\eqref{eq:ab_matrix} and \eqref{eq:X_qPsi}, we have \begin{eqnarray} \ket{{(g)}amma_q} &\propto& (\hat{a}t{X}Gopt - q \hat{a}t{P}si)^{-1} \ket{a_0} \propto \sum_{n=0}^2 \frac{1}{x_n(y_{\upsilon_n}^2 - q)} \ket{\phi_n}. \label{eq:soq1} \end{eqnarray} Therefore, $\ket{{(g)}amma_q}$ is expressed by Eq.~\eqref{eq:soq}. One can verify ${\rm rank}~\hat{a}t{G}one(q) < 3$ by letting $p_q \coloneqq \braket{a_0 | (\hat{a}t{X}Gopt - q\Psi)^{-1} | a_0}^{-1}$ if $q < \eta$ and $p_q \coloneqq - \infty$ if $q = \eta$. Note that since $\ket{{(g)}amma_q}$ is unique up to a global phase, ${\rm rank}~\hat{a}t{G}one(q) = 2$ holds. \subsection{Proof of $\mathcal{C} \subseteq \mathcal{T}^\star$} \label{append:C} Since $s(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) = e^\star_3 \in \mathcal{T}^\star$ holds when $q = y_2^2$, we have only to consider the case of $q \neq y_2^2$. Let \begin{eqnarray} u_n(q) &\coloneqq& (y_n^2 - q)^{-1}. \label{eq:uk} \end{eqnarray} Note that $q \neq y_n^2$ holds for any $n \in \mathcal{I}_3$ since $y_2^2 < \eta < y_1^2$ holds (see Appendix~\ref{append:eta}). 
From Eqs.~\eqref{eq:Somega}, \eqref{eq:sn}, and \eqref{eq:soq} and $\upsilon_{\upsilon_n} = n$, we have \begin{eqnarray} s_{\upsilon_n}(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) &\propto& x_{\upsilon_n}^{-2} u_n^2(q), \label{eq:snwq} \end{eqnarray} and thus \begin{eqnarray} s(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) &\propto& [x_{\upsilon_1}^{-2}u_1^2(q), x_{\upsilon_0}^{-2}u_0^2(q)]. \label{eq:sgg} \end{eqnarray} First, let us consider the case in which the three points $e^\star_1$, $e^\star_2$, and $e^\star_3$ lie on a straight line. From Eq.~\eqref{eq:pi}, this case occurs only when $y_0 = y_1$. Since $s(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) \propto [x_{\upsilon_1}^{-2}, x_{\upsilon_0}^{-2}]$ holds from Eq.~\eqref{eq:sgg}, every point in $\mathcal{C}$ is on the line joining the origin $e^\star_3$ to the point $e^\star_1$. From $u_0^2(q) = u_1^2(q) \le u_2^2(q)$ (see Appendix~\ref{append:uk}), we have \begin{eqnarray} s_{\upsilon_0}(\ket{{(g)}amma_q} \bra{{(g)}amma_q}) &=& \frac{x_{\upsilon_0}^{-2} u_0^2(q)}{\displaystyle \sum_{k=0}^2 x_{\upsilon_k}^{-2} u_k^2(q)} \le \frac{x_{\upsilon_0}^{-2} u_0^2(q)}{\displaystyle u_0^2(q) \sum_{k=0}^2 x_{\upsilon_k}^{-2}} = s_{\upsilon_0}(\omega_1). \nonumber \\ \end{eqnarray} Thus, $s(\ket{{(g)}amma_q} \bra{{(g)}amma_q})$ is an interior point between $e^\star_1$ and $e^\star_3$. Therefore, $\mathcal{C} \subseteq \mathcal{T}^\star$ holds. Next, let us consider the other case in which $e^\star_1$, $e^\star_2$, and $e^\star_3$ do not lie on a straight line. Let $l_{jk}$ denote the straight line joining $e^\star_j$ and $e^\star_k$. It suffices to prove the following two statements: (a) $\mathcal{C}$ is in the region between the two lines $l_{13}$ and $l_{23}$, and (b) $\mathcal{C}$ is in the region between the two lines $l_{12}$ and $l_{13}$. First, we prove the statement (a). The gradient of the line joining the origin to the point $s(\ket{{(g)}amma_q}\bra{{(g)}amma_q})$ is \begin{eqnarray} \zeta(q) &\coloneqq& \frac{s_{\upsilon_0}(\ket{{(g)}amma_q}\bra{{(g)}amma_q})}{s_{\upsilon_1}(\ket{{(g)}amma_q}\bra{{(g)}amma_q})} = \frac{x_{\upsilon_1}^2(y_1^2 - q)^2}{x_{\upsilon_0}^2(y_0^2 - q)^2}, \end{eqnarray} where the last equality follows from Eq.~\eqref{eq:snwq}. Since $\eta < y_1^2 \le y_0^2$ holds (see Appendix~\ref{append:eta}), one can easily verify that $\zeta(q)$ monotonically decreases in the range $q \le \eta$, which gives \begin{eqnarray} \zeta(-\infty) &{(g)}e& \zeta(q) {(g)}e \zeta(\eta), ~~ \forall q \le \eta. \label{eq:zeta_dec} \end{eqnarray} Also, from $e^\star_1 = s(\ket{{(g)}amma_{-\infty}}\bra{{(g)}amma_{-\infty}})$ and $e^\star_2 = s(\ket{{(g)}amma_\eta}\bra{{(g)}amma_\eta})$, the gradients of the lines $l_{13}$ and $l_{23}$ are, respectively, $\zeta(-\infty)$ and $\zeta(\eta)$. Therefore, from Eq.~\eqref{eq:zeta_dec}, the statement (a) holds. Next, we prove the statement (b). Let $c(q)$ denote the $s_{\upsilon_1}$-coordinate of the intersection of the $s_{\upsilon_1}$-axis and the line joining the two points $e^\star_1$ and $s(\ket{{(g)}amma_q}\bra{{(g)}amma_q})$ in $\mathcal{C}$. It follows that the statement (b) holds if and only if $c(q)$ satisfies \begin{eqnarray} 0 &\le& c(q) \le c(\eta), ~~ \forall q \le \eta. 
\label{eq:cq} \end{eqnarray} Since $s(\ket{{(g)}amma_q}\bra{{(g)}amma_q})$ is on the line joining $e^\star_1$ and $[c(q), 0]$, we have that for some real number $w$ \begin{eqnarray} s_{\upsilon_1}(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) &=& w s_{\upsilon_1}(\omega_1) + (1 - w) c(q), \nonumber \\ s_{\upsilon_0}(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) &=& w s_{\upsilon_0}(\omega_1). \label{eq:case1_c1} \end{eqnarray} Also, since $\sum_{n=0}^2 s_n(\hat{a}t{T}) = 1$ holds for any nonzero positive semidefinite operator $\hat{a}t{T}$, we have \begin{eqnarray} s_{\upsilon_2}(\ket{{(g)}amma_q}\bra{{(g)}amma_q}) &=& w s_{\upsilon_2}(\omega_1) + (1 - w) [1 - c(q)]. \label{eq:case1_c2} \end{eqnarray} After some algebra with Eqs.~\eqref{eq:case1_c1}, \eqref{eq:case1_c2}, and \eqref{eq:snwq}, we obtain \begin{eqnarray} \tilde{c}(q) &\coloneqq& \frac{x_{\upsilon_1}^2 c(q)}{x_{\upsilon_2}^2 [1 - c(q)]} = \frac{u_1^2(q) - u_0^2(q)}{u_2^2(q) - u_0^2(q)}. \label{eq:tc} \end{eqnarray} It follows from the definition of $\tilde{c}(q)$ that $\tilde{c}(q)$ monotonically increases with $c(q)$. Thus, the statement (b), i.e. Eq.~\eqref{eq:cq}, is equivalent to \begin{eqnarray} 0 &\le& \tilde{c}(q) \le \tilde{c}(\eta), ~~\forall q \le \eta. \label{eq:tcq_ineq} \end{eqnarray} Since $u_0^2(q) \le u_1^2(q) \le u_2^2(q)$ holds (see Appendix~\ref{append:uk}), $\tilde{c}(q) {(g)}e 0$ obviously holds. Therefore, we need only show $\tilde{c}(q) \le \tilde{c}(\eta)$. Differentiating $\tilde{c}(q)$ of Eq.~\eqref{eq:tc} with respect to $q$ gives \begin{eqnarray} \frac{d\tilde{c}(q)}{dq} &=& \frac{2[u_1^2(q) - u_0^2(q)]}{u_2^2(q) - u_0^2(q)} [f[u_1(q)] - f[u_2(q)]], \nonumber \\ f(x) &\coloneqq& \frac{x^2+u_0(q)x+u_0^2(q)}{x+u_0(q)}, \end{eqnarray} which implies that $d\tilde{c}(q)/dq {(g)}e 0$ is equivalent to $f[u_1(q)] {(g)}e f[u_2(q)]$. In the case of $q < y_2^2$, from $0 \le u_1(q) \le u_2(q)$, $f[u_1(q)] \le f[u_2(q)]$ (i.e., $d\tilde{c}(q)/dq \le 0$) holds, which follows from the fact that $f(x)$ monotonically increases in the range $x {(g)}e 0$. In the other case of $q > y_2^2$, from $u_2(q) \le - u_1(q) \le - u_0(q)$ (see Appendix~\ref{append:uk}), $f[u_2(q)] < 0 \le f[u_1(q)]$ (i.e., $d\tilde{c}(q)/dq {(g)}e 0$) holds, which follows from the fact that $f(x) < 0$ holds if and only if $x < -u_0(q)$. Therefore, $\tilde{c}(q)$ $~(q \le \eta)$ attains its maximum at $q = -\infty$ and/or $q = \eta$, and thus, for the rest, it suffices to show $\tilde{c}(-\infty) \le \tilde{c}(\eta)$. From Eq.~\eqref{eq:tc}, we have \begin{eqnarray} \tilde{c}(q) &=& \frac{(y_2^2-q)^2 [ (y_0^2-q)^2 - (y_1^2-q)^2 ]} {(y_1^2-q)^2 [ (y_0^2-q)^2 - (y_2^2-q)^2 ]} \nonumber \\ &=& \frac{(y_0^2-y_1^2)(y_2^2-q)^2(y_0^2+y_1^2-2q)}{(y_0^2-y_2^2)(y_1^2-q)^2(y_0^2+y_2^2-2q)}, \label{eq:tcq} \end{eqnarray} which gives \begin{eqnarray} \tilde{c}(-\infty) &=& \frac{y_0^2 - y_1^2}{y_0^2 - y_2^2}. \end{eqnarray} Also, we have \begin{eqnarray} \lefteqn{ (y_1^2-\eta)^2(y_0^2+y_2^2-2\eta) - (y_2^2-\eta)^2(y_0^2+y_1^2-2\eta) } \nonumber \\ &=& (y_1^2-y_2^2)(y_0^2y_1^2 + y_1^2y_2^2 + y_2^2y_0^2 - 2\eta + 3\eta^2) \nonumber \\ &=& (y_1^2-y_2^2)(3\eta^2 - 2\eta + \chi) \nonumber \\ &=& 0, \end{eqnarray} where the second to fourth lines, respectively, follow from $\sum_{k=0}^2 y_k^2 = 1$, Eq.~\eqref{eq:chi}, and Eq.~\eqref{eq:eta_chi}. Thus, substituting $q = \eta$ into Eq.~\eqref{eq:tcq} gives $\tilde{c}(\eta) = (y_0^2 - y_1^2) / (y_0^2 - y_2^2)$. Therefore, $\tilde{c}(-\infty) = \tilde{c}(\eta)$ holds. 
\hspace*{0pt} $\blacksquare$ \subsection{Proof of $u_0^2(q) \le u_1^2(q) \le u_2^2(q)$ $~(\forall q \le \eta, q \neq y_2^2)$} \label{append:uk} Since $u_1(q) {(g)}e u_0(q) > 0$ holds, it suffices to prove $u_2^2(q) {(g)}e u_1^2(q)$. In the case of $q < y_2^2$, from $u_2(q) > u_1(q) > 0$, this is obvious. Let us consider the case of $q > y_2^2$. Since $u_2(q) < 0$ holds, it suffices to show $u_2(q) + u_1(q) \le 0$. Let $\tilde{u}_k \coloneqq u_k(\eta)$; then, we have \begin{eqnarray} \tilde{u}_2 + \tilde{u}_1 &\le& \tilde{u}_2 + \tilde{u}_1 + \tilde{u}_0 \nonumber \\ &=& \tilde{u}_2\tilde{u}_1\tilde{u}_0 [(\tilde{u}_0\tilde{u}_1)^{-1} + (\tilde{u}_1\tilde{u}_2)^{-1} + (\tilde{u}_2\tilde{u}_0)^{-1}] \nonumber \\ &=& \tilde{u}_2\tilde{u}_1\tilde{u}_0 (3\eta^2 - 2\eta + \chi) \nonumber \\ &=& 0, \end{eqnarray} where the third and fourth lines, respectively, follow from the definition of $\chi$ in Eq.~\eqref{eq:chi} and Eq.~\eqref{eq:eta_chi}. Since $u_2(q) \le \tilde{u}_2$ and $u_1(q) \le \tilde{u}_1$ hold, $u_2(q) + u_1(q) \le \tilde{u}_2 + \tilde{u}_1 \le 0$ holds. \hspace*{0pt} $\blacksquare$ \subsection{Case (2)} \label{append:case2} Let us consider, without loss of generality, $\omega \in \Omega_+$ such that $\mu^\w_2 < \mu^\w_0$ and $\mu^\w_2 < \mu^\w_1$. In order to show that $s(\omega)$ is on a straight line segment whose endpoints are in $\mathcal{C}$, we shall show the two statements: (a) $s(\omega)$ is on a certain straight line segment, and (b) the line segment is part of a straight line segment whose endpoints are in $\mathcal{C}$. Since we now consider the case (2), $|{T^\w}|$ must be 2 or 3. Let \begin{eqnarray} \hat{a}t{X} &\coloneqq& \left\{ \begin{array}{cc} \hat{a}t{X}Gopt - \mu^\w_2 \Psi, & |{T^\w}| = 3, \\ \hat{a}t{X}Gopt + \infty \ket{a_2}\bra{a_2}, & |{T^\w}| = 2. \\ \end{array} \right. \label{eq:X} \end{eqnarray} One can easily see that $\hat{a}t{X}$ is a positive definite operator. Let $\ket{\varpi(q)}$ be a normal vector satisfying \begin{eqnarray} \ket{\varpi(q)} &\in& {\rm Ker}~\hat{a}t{G}two(q), \nonumber \\ \braket{a_0|\varpi(q)} &{(g)}e& 0, \nonumber \\ \hat{a}t{G}two(q) &\coloneqq& \hat{a}t{X} - q \ket{a_1} \bra{a_1} - p'_q \ket{a_0} \bra{a_0}, \label{eq:hGw_q2} \end{eqnarray} where $p'_q$ is the function of $q$ such that ${\rm rank}~\hat{a}t{G}two(q) < 3$. (We can define such $p'_q$ as $p'_q \coloneqq \braket{a_0 | (\hat{a}t{X} - q \ket{a_1} \bra{a_1})^{-1} | a_0}^{-1}$. Since $\hat{a}t{X} - q \ket{a_1} \bra{a_1}$ is positive definite, such $p'_q$ always exists.) $p'_q$ monotonically decreases with $q$. $\hat{a}t{G}opt(\omega) = \hat{a}t{G}two[\mu^\w_1 - \mu^\w_2]$ holds if $|{T^\w}| = 3$; otherwise, $\hat{a}t{G}opt(\omega) = \hat{a}t{G}two[\mu^\w_1]$ holds. First, we show the statement (a). Let $\hat{a}t{G}two_0 \coloneqq \hat{a}t{G}two(0)$, $\hat{a}t{G}two_1 \coloneqq \hat{a}t{G}two(p'_0)$, $\ket{\varpi_0} \coloneqq \ket{\varpi(0)}$, and $\ket{\varpi_1} \coloneqq \ket{\varpi(p'_0)}$. Note that $p'_q = 0$ holds when $q = p'_0$ (i.e., $p'_{p'_0} = 0$). We shall express $\ket{\varpi(q)}$ in terms of $\ket{\varpi_0}$ and $\ket{\varpi_1}$. For each $k \in \{0,1\}$, from $\hat{a}t{G}two_k \ket{\varpi_k} = 0$, we have \begin{eqnarray} \ket{a_k} &=& \frac{\hat{a}t{X} \ket{\varpi_k}}{p'_0 \braket{a_k|\varpi_k}}. \label{eq:aXpi} \end{eqnarray} Note that since $\hat{a}t{X}$ is positive definite, we have $\hat{a}t{X} \ket{\varpi_k} \neq 0$, which yields $p'_0 \braket{a_k|\varpi_k} \neq 0$. 
Substituting Eq.~\eqref{eq:hGw_q2} into $\hat{a}t{G}two(q) \ket{\varpi(q)} = 0$ and using Eq.~\eqref{eq:aXpi} yields \begin{eqnarray} \ket{\varpi(q)} &=& \hat{a}t{X}^{-1} (p'_q r_0 \ket{a_0} + q r_1 \ket{a_1}) \nonumber \\ &=& \frac{1}{p'_0} \left( \frac{p'_q r_0}{\braket{a_0|\varpi_0}} \ket{\varpi_0} + \frac{q r_1}{\braket{a_1|\varpi_1}} \ket{\varpi_1} \right), \label{eq:QSD_kernel_s_sk0} \end{eqnarray} where $r_k \coloneqq \braket{a_k|\varpi(q)}$. Premultiplying this equation by $\bra{a_0}$ and some algebra gives \begin{eqnarray} \frac{q r_1}{\braket{a_1|\varpi_1}} &=& \frac{(p'_0 - p'_q) r_0}{\braket{a_0|\varpi_1}}. \end{eqnarray} Substituting this equation into Eq.~\eqref{eq:QSD_kernel_s_sk0} gives \begin{eqnarray} \ket{\varpi(q)} &=& \frac{r_0}{p'_0} \left( \frac{p'_q}{\braket{a_0|\varpi_0}} \ket{\varpi_0} + \frac{p'_0 - p'_q}{\braket{a_0|\varpi_1}} \ket{\varpi_1} \right). \label{eq:case2_sw} \end{eqnarray} Since $r_0 {(g)}e 0$ and $\braket{a_0|\varpi_k} {(g)}e 0$ hold from Eq.~\eqref{eq:hGw_q2}, it follows from Eq.~\eqref{eq:case2_sw} that $\ket{\varpi(q)}$ is expressed as \begin{eqnarray} \ket{\varpi(q)} &=& c_0 \ket{\varpi_0} + c_1 \ket{\varpi_1} \label{eq:case2_c01} \end{eqnarray} with certain nonnegative real numbers $c_0$ and $c_1$. Let $q_2$ be the real number satisfying $p'_{q_2} = q_2$. One can easily verify that, when $q = q_2$, Eq.~\eqref{eq:case2_c01} with $c_0 = c_1 \eqqcolon c$ holds. Let $\ket{\varpi_2} \coloneqq \ket{\varpi(q_2)}$. Due to the symmetry of the states, $\hat{a}t{S}(\ket{\varpi_0} \bra{\varpi_0}) = \hat{a}t{S}(\ket{\varpi_1} \bra{\varpi_1}) \eqqcolon \hat{a}t{S}_{\varpi_0}$ holds. Thus, from Eq.~\eqref{eq:case2_c01}, we have \begin{eqnarray} \hat{a}t{S}(\ket{\varpi(q)} \bra{\varpi(q)}) &=& (c_0^2 + c_1^2) \hat{a}t{S}_{\varpi_0} + c_0 c_1 \hat{a}t{S}', \label{eq:QSD_R_Rs} \end{eqnarray} where \begin{eqnarray} \hat{a}t{S}' &\coloneqq& \frac{1}{3} \sum_{j=0}^2 \hat{a}t{V}A^j \left( \ket{\varpi_0} \bra{\varpi_1} + \ket{\varpi_1} \bra{\varpi_0} \right) \left( \hat{a}t{V}A^j \right)^\daggerger. \end{eqnarray} Substituting $q = q_2$ into Eq.~\eqref{eq:QSD_R_Rs} and letting $\hat{a}t{S}_{\varpi_2} \coloneqq \hat{a}t{S}(\ket{\varpi_2} \bra{\varpi_2})$ yields \begin{eqnarray} \hat{a}t{S}_{\varpi_2} &=& 2 c^2 \hat{a}t{S}_{\varpi_0} + c^2 \hat{a}t{S}'. \end{eqnarray} Substituting this into Eq.~\eqref{eq:QSD_R_Rs} gives \begin{eqnarray} \hat{a}t{S}(\ket{\varpi(q)} \bra{\varpi(q)}) &=& c'_0 \hat{a}t{S}_{\varpi_0} + c'_2 \hat{a}t{S}_{\varpi_2}, \label{eq:case2_Sw} \end{eqnarray} where $c'_0 \coloneqq (c_0 - c_1)^2$ and $c'_2 = c_0 c_1 / c^2$. Note that taking the trace of this gives $c'_0 + c'_2 = 1$ and that $c'_0, c'_2 {(g)}e 0$ holds. Also, Eq.~\eqref{eq:case2_Sw} gives \begin{eqnarray} s(\ket{\varpi(q)} \bra{\varpi(q)}) &=& c'_0 s_{\varpi_0} + c'_2 s_{\varpi_2}, \end{eqnarray} where $s_{\varpi_k} \coloneqq s(\ket{\varpi_k} \bra{\varpi_k})$ for each $k \in \{0,2\}$. Therefore, $s(\ket{\varpi(q)} \bra{\varpi(q)})$ is on the straight line segment, denoted by $\mathcal{L}$, whose endpoints are $s_{\varpi_0}$ and $s_{\varpi_2}$. Next, we show the statement (b). In the case of $q = q_2$, since Eq.~\eqref{eq:hGw_q2} with $q = p'_q = q_2$ holds, this is the case (1), i.e., at least two of $\{ \mu^\w_r \}_r$ in Eq.~\eqref{eq:hGw3} are the same; thus, $s_{\varpi_2}$ is in $\mathcal{C}$. Also, if $\omega$ satisfies $|{T^\w}| = 3$, then $q = 0$ is also the case (1), and thus $s_{\varpi_0}$ is in $\mathcal{C}$. 
Therefore, in the case of $|{T^\w}| = 3$, $\mathcal{L}$ is the line segment whose endpoints, $s_{\varpi_0}$ and $s_{\varpi_2}$, are in $\mathcal{C}$. In what follows, assume $|{T^\w}| = 2$. We shall show that $\mathcal{L}$ is part of the straight line segment whose endpoints are $e^\star_1 = s(\omega_1) \in \mathcal{C}$ and $s_{\varpi_2} \in \mathcal{C}$. Taking the limit as $q \to - \infty$ in Eq.~\eqref{eq:hGw_q2} gives $s(\ket{\varpi(-\infty)} \bra{\varpi(-\infty)}) = e^\star_1$. Thus, repeating the above argument with $q \to - \infty$ indicates that $\ket{\varpi(-\infty)}$ is expressed as Eq.~\eqref{eq:case2_c01} with $c_0 > 0$ and $c_1 < 0$, and that Eq.~\eqref{eq:case2_Sw} holds with $c'_2 < 0$. Thus, $s_{\varpi_0}$ is an interior point between $e^\star_1$ and $s_{\varpi_2}$. Therefore, $\mathcal{L}$ is part of the line segment whose endpoints are $e^\star_1$ and $s_{\varpi_2}$. \hspace*{0pt} $\blacksquare$ \input{report-en-arXiv.bbl} \end{document}
\begin{document} \title{Random Distances Associated With\\Equilateral Triangles} \author{Yanyan Zhuang and Jianping Pan\\ University of Victoria, Victoria, BC, Canada} \maketitle \begin{abstract} In this report, the explicit probability density functions of the random Euclidean distances associated with equilateral triangles are given, when the two endpoints of a link are randomly distributed in 1) the same triangle, 2) two adjacent triangles sharing a side, 3) two parallel triangles sharing a vertex, and 4) two diagonal triangles sharing a vertex, respectively. The density function for 1) is based on the most recent work by B{\"a}sel~\cite{bäsel2012random}. 2)--4) are based on 1) and our previous work in~\cite{yanyan2011, yanyan2011h}. Simulation results show the accuracy of the obtained closed-form distance distribution functions, which are important in the theory of geometrical probability. The first two statistical moments of the random distances and the polynomial fits of the density functions are also given in this report for practical uses. \end{abstract} \begin{keywords} Random distances; distance distribution functions; equilateral triangles \end{keywords} \section{The Problem} \begin{figure} \caption{Random Points and Distances Associated with Equilateral Triangles.} \label{fig:triangle} \end{figure} Define a ``unit triangle'' as the equilateral triangle with side length $1$. Picking two points uniformly at random from the interior of a unit triangle, or between two adjacent unit triangles sharing a side or a vertex, the goal is to obtain the probabilistic density function (PDF) of the random distances between these two endpoints. There are four different cases $|ab|$, $|pq|$, $|ef|$ and $|gh|$, depending on the geometric locations of these two random endpoints, as shown in Fig.~\ref{fig:triangle}. The next section gives the explicit PDFs for these cases. \section{Distance Distributions Associated with Equilateral Triangles}\label{sec:result} \subsection{$|ab|$: Distance Distribution within an Equilateral Triangle} The author of~\cite{bäsel2012random} obtained the chord length distribution function for \textit{any} regular polygon. From this result, \cite{bäsel2012random} further derived the density function for the distance between two uniformly and independently distributed random points in the regular polygon. Although the methods used were elementary, this work can be considered as a major breakthrough in Geometrical Probability, which also helps us verify the distance distribution in a regular hexagon~\cite{yanyan2011h}. Following are the notations used in~\cite{bäsel2012random}: $\mathcal P_{\rm n,r}$ is the regular polygon with $n$ vertices and with a circumscribed circle of radius $r$; $l_k$ is the distance between vertices, given by \begin{equation} l_k=2r\sin \frac{k\pi}{n}, \end{equation} where $k=0,1,...,K$ and $K=\lfloor\frac{n-2}{2}\rfloor$. $L$ denotes the perimeter and $A$ the area of $\mathcal P_{\rm n,r}$: \begin{equation} L=2nr\sin \frac{\pi}{n} ~~\mbox{ and }~~ A=\frac{1}{2}nr^2\sin \frac{2\pi}{n}. \end{equation} Denote the chord length distribution derived in~\cite{bäsel2012random} as $f(s)$ for $\mathcal P_{\rm n,r}$, and the density function for the distance between two random points in $\mathcal P_{\rm n,r}$ as $g_{D_{\rm I}}(d)$, the relationship between these two functions is as follows according to~\cite{piefke1978beziehungen}: \begin{equation} g_{D_{\rm I}}(d)=\frac{2Ld}{A^2}\int_d^{l_{\rm K+1}}(s-d)f(s){\rm d}s. 
\end{equation} According to this relationship and the derived chord length distribution $f(s)$, the density function of random distances in an equilateral triangle, $g_{D_{\rm I}}(d)$, is a special case in~\cite{bäsel2012random} when $n=3,r=\frac{1}{\sqrt{3}}$: \begin{equation}\label{eq:triangle_within} g_{D_{\rm I}}(d)= 4d\left\{ \begin{array}{lcr} \left(2+\frac{4\pi}{3\sqrt{3}}\right)d^2-8d+\frac{2\pi}{\sqrt{3}} & \quad & 0\leq d\leq \frac{\sqrt{3}}{2}\\ \frac{2}{\sqrt{3}}\left(4d^2+6\right)\sin^{-1}\frac{\sqrt{3}}{2d}+\left(2- \frac{8\pi}{3\sqrt{3}}\right)d^2+6\sqrt{4d^2-3} \\ ~~~~-8d-\frac{4\pi}{\sqrt{3}} & \quad & \frac{\sqrt{3}}{2}\leq d \leq 1\\ 0 & \quad & {\rm otherwise} \end{array} \right.. \end{equation} The corresponding cumulative distribution function (CDF) is \begin{equation} G_{D_{\rm I}}(d)= 2\left\{ \begin{array}{lcr} 0 & \quad & d \leq 0 \\ \left(1+\frac{2\pi}{3\sqrt{3}}\right)d^4-\frac{16}{3}d^3+\frac{2\pi}{\sqrt{3}} d^2 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2} \\ \frac{4d^2}{\sqrt{3}}\left(d^2+3\right)\sin^{-1}\frac{\sqrt{3}}{2d}+\left(\frac{ 26d^2}{3}+1\right)\sqrt{d^2-\frac{3}{4}}+\left(1-\frac{4\pi}{3\sqrt{3}} \right)d^4 \\ ~~~~-\frac{16}{3}d^3-\frac{4\pi}{\sqrt{3}}d^2 & \quad & \frac{\sqrt{3}}{2}\leq d \leq 1\\ 1 & \quad & d \geq 1 \end{array} \right.. \end{equation} \subsection{$|pq|$: Distance Distribution between Two Adjacent Equilateral Triangles Sharing a Side} Given the result above, and the result obtained by us in~\cite{yanyan2011} for the distance distribution in a rhombus, further results of the random distances between two equilateral triangles can be derived. In Fig.~\ref{fig:triangle}, rhombus $OABC$ can be decomposed into two congruent, adjacent equilateral triangles $\Delta OAB$ and $\Delta OBC$. Picking points uniformly at random from the interior of this rhombus, then the points are equally likely to fall inside any one of these two triangles. Therefore, given the location of one endpoint of a random link, the second endpoint falls inside the same triangle as the first one with probability $\frac{1}{2}$ (such as $|ab|$), and with probability $\frac{1}{2}$ falls inside the adjacent triangle (such as $|pq|$). Suppose rhombus $OABC$ in Fig.~\ref{fig:triangle} has a side length of $1$, then the distribution of $|ab|$ is known as $g_{D_{\rm I}}(d)$ in (\ref{eq:triangle_within}) above. Denote the distribution of $|pq|$ as $g_{D_{\rm A}}(d)$. The probability density function of the random distances between two uniformly distributed points that are both inside the same rhombus is $f_{D_{\rm I}}(d)$ (see (1) in~\cite{yanyan2011}). From the reasoning in the previous paragraph, \begin{equation} f_{D_{\rm I}}(d)=\frac{1}{2}g_{D_{\rm I}}(d)+\frac{1}{2}g_{D_{\rm A}}(d), \end{equation} and we have \begin{equation} g_{D_{\rm A}}(d)=2 f_{D_{\rm I}}(d)-g_{D_{\rm I}}(d). 
\end{equation} Therefore, the probability density function of the random distances between two uniformly distributed points, one in each of the two adjacent unit triangles that are sharing a side, is \begin{equation}\label{eq:triangle_btw} g_{D_{\rm A}}(d)=4d\left\{ \begin{array}{lcr} \frac{8}{3}d-\left(\frac{2}{3}+\frac{10\pi}{9\sqrt{3}}\right)d^2 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2}\\ -\frac{4}{\sqrt{3}}\left(1+\frac{4d^2}{3}\right)\sin^{-1}\frac{\sqrt{3}}{2d} +\left(\frac{14\pi}{9\sqrt{3}}-\frac{2}{3}\right)d^2-\frac{8}{3}\sqrt{4d^2-3} \\ ~~~~+\frac{8}{3}d+\frac{2\pi}{\sqrt{3}} & \quad & \frac{\sqrt{3}}{2}\leq d\leq 1\\ \frac{4}{\sqrt{3}}\left(1-\frac{d^2}{3}\right)\sin^{-1}\frac{\sqrt{3}}{2d} +\left(\frac{2\pi}{9\sqrt{3}}-\frac{2}{3}\right)d^2+\sqrt{4d^2-3} \\ ~~~~-\frac{2\pi}{3\sqrt{3}}-1 & \quad & 1\leq d\leq \sqrt{3} \\ 0 & \quad & {\rm otherwise} \end{array} \right.. \end{equation} The corresponding CDF is \begin{equation}\label{eq:triangle_btw_cdf} G_{D_{\rm A}}(d)= 2\left\{ \begin{array}{lcr} 0 & \quad & d \leq 0 \\ \frac{16}{9}d^3-\left(\frac{1}{3}+\frac{5\pi}{9\sqrt{3}}\right)d^4 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2} \\ -\frac{4d^2}{\sqrt{3}}\left(1+\frac{2d^2}{3}\right)\sin^{-1}\frac{\sqrt{3}}{2d} -4d^2\sqrt{d^2-\frac{3}{4}}+\left(\frac{7\pi}{9\sqrt{3}}-\frac{1}{3}\right)d^4 \\ ~~~~+\frac{16}{9}d^3+\frac{2\pi}{\sqrt{3}}d^2 & \quad & \frac{\sqrt{3}}{2}\leq d \leq 1\\ \frac{4d^2}{\sqrt{3}}\left(1-\frac{d^2}{6}\right)\sin^{-1}\frac{\sqrt{3}}{2d} +\left(\frac{11d^2}{9}+\frac{5}{6}\right)\sqrt{d^2-\frac{3}{4}}+\left(\frac{\pi} {9\sqrt{3}}-\frac{1}{3}\right)d^4 \\ ~~~~-\left(\frac{2\pi}{3\sqrt{3}}+1\right)d^2-\frac{1}{4} & \quad & 1\leq d \leq \sqrt{3}\\ 1 & \quad & d \geq \sqrt{3} \end{array} \right.. \end{equation} Note that although unit triangles are assumed in (\ref{eq:triangle_within})--(\ref{eq:triangle_btw_cdf}), the distance distribution functions can be easily scaled by a nonzero scalar, for equilateral triangles of arbitrary side length. For example, let the side length of such triangles be $s>0$, then \begin{equation} G_{sD}(d)=P(sD\leq d)=P(D\leq \frac{d}{s})=G_D(\frac{d}{s}). \nonumber \end{equation} Therefore, \begin{equation}\label{eq:scale} g_{sD}(d)=G'_D(\frac{d}{s})=\frac{1}{s}g_D(\frac{d}{s}). \end{equation} \subsection{$|ef|$: Distance Distribution between Two Parallel Equilateral Triangles Sharing a Vertex} This case corresponds to the random distance $|ef|$ in Fig.~\ref{fig:triangle}. Here four unit triangles $\Delta OAB$, $\Delta OBC$, $\Delta OCD$ and $\Delta BCG$ together create a larger equilateral triangle $\Delta AGD$ with side length $2$. According to (\ref{eq:scale}), the density function of the distance distribution inside triangle $\Delta AGD$ is $g_{2D_{\rm I}}(d)=\frac{1}{2}g_{D_{\rm I}}(\frac{d}{2})$, as $s=2$. On the other hand, if we look at the two random endpoints of a given link inside the large triangle, they will fall into one of the two following cases: i) one of the endpoints falls inside one of the three unit triangles on the border of the large triangle, such as $\Delta OAB$, $\Delta OCD$ or $\Delta BCG$, with probability $\frac{3}{4}$; ii) one of the endpoints falls inside the unit triangle $\Delta OBC$ in the middle, with probability $\frac{1}{4}$.
Each of these two cases includes several more detailed sub-cases as follows: \renewcommand{Case \roman{enumi})}{Case \roman{enumi})} \begin{enumerate} \item Given the location of the first endpoint, the second endpoint will fall inside the same triangle as the first one (such as $|ab|$) with probability $\frac{1}{4}$, fall inside the adjacent triangle sharing a side (such as $|pq|$) with probability $\frac{1}{4}$, and fall inside one of the parallel triangles sharing a vertex (such as $|ef|$) with probability $\frac{1}{2}$. \item When the location of the first endpoint is in $\Delta OBC$, the second endpoint will fall inside the same triangle with probability $\frac{1}{4}$, and fall inside one of the adjacent triangles sharing a side with probability $\frac{3}{4}$. \end{enumerate} Denote the density function of random distance $|ef|$ as $g_{D_{\rm P}}(d)$, we have the following \begin{equation} g_{2D_{\rm I}}(d)=\frac{3}{4}\left[\frac{1}{4}g_{D_{\rm I}}(d)+\frac{1}{4}g_{D_{\rm A}}(d)+ \frac{1}{2}g_{D_{\rm P}}(d)\right]+\frac{1}{4}\left[\frac{1}{4}g_{D_{\rm I}}(d)+ \frac{3}{4}g_{D_{\rm A}}(d)\right]. \end{equation} Hence, \begin{equation} g_{D_{\rm P}}(d)=\frac{8}{3}g_{2D_{\rm I}}(d)-\frac{2}{3}g_{D_{\rm I}}(d)-g_{D_{\rm A}}(d) =\frac{4}{3}g_{D_{\rm I}}(\frac{d}{2})-\frac{2}{3}g_{D_{\rm I}}(d)-g_{D_{\rm A}}(d). \end{equation} Therefore, the probability density function of the random distances between two uniformly distributed points, one in each of the two parallel unit triangles that are sharing a vertex, is \begin{equation}\label{eq:triangle_para} g_{D_{\rm P}}(d)=4d\left\{ \begin{array}{lcr} \left(\frac{4\pi}{9\sqrt{3}}-\frac{1}{3}\right)d^2 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2}\\ -\frac{4}{\sqrt{3}}\sin^{-1}\frac{\sqrt{3}}{2d}+\left(\frac{4\pi}{9\sqrt{3}}- \frac{1}{3}\right)d^2-\frac{4}{3}\sqrt{4d^2-3}+\frac{2\pi}{\sqrt{3}} & \quad & \frac{\sqrt{3}}{2}\leq d\leq 1\\ \frac{4}{\sqrt{3}}\left(\frac{d^2}{3}-1\right)\sin^{-1}\frac{\sqrt{3}}{2d} +d^2-\sqrt{4d^2-3}-\frac{8}{3}d+\frac{2\pi}{\sqrt{3}}+1 & \quad & 1\leq d\leq \sqrt{3} \\ \frac{4}{\sqrt{3}}\left(\frac{d^2}{3}+2\right)\sin^{-1}\frac{\sqrt{3}}{d}+\left(\frac{1}{3} -\frac{4\pi}{9\sqrt{3}}\right)d^2+4\sqrt{d^2-3}\\ ~~~~-\frac{8}{3}d-\frac{8\pi}{3\sqrt{3}} & \quad & \sqrt{3} \leq d \leq 2\\ 0 & \quad & {\rm otherwise} \end{array} \right.. \end{equation} The corresponding CDF is \begin{equation}\label{eq:triangle_para_cdf} G_{D_{\rm P}}(d)= 2\left\{ \begin{array}{lcr} 0 & \quad & d \leq 0 \\ \left(\frac{2\pi}{9\sqrt{3}}-\frac{1}{6}\right)d^4 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2} \\ -\frac{4d^2}{\sqrt{3}}\sin^{-1}\frac{\sqrt{3}}{2d}+\left(\frac{2\pi}{9\sqrt{3}} -\frac{1}{6}\right)d^4-\left(\frac{8d^2}{9}+\frac{1}{3}\right)\sqrt{4d^2-3} +\frac{2\pi}{\sqrt{3}}d^2 & \quad & \frac{\sqrt{3}}{2}\leq d \leq 1\\ \frac{4d^2}{\sqrt{3}}\left(\frac{d^2}{6}-1\right)\sin^{-1}\frac{\sqrt{3}}{2d} -\left(\frac{11d^2}{18}+\frac{5}{12}\right)\sqrt{4d^2-3}+\frac{d^4}{2}-\frac{16}{9}d^3 \\ ~~~~+\left(\frac{2\pi}{\sqrt{3}}+1\right)d^2-\frac{1}{12} & \quad & 1\leq d \leq \sqrt{3}\\ \frac{4d^2}{\sqrt{3}}\left(\frac{d^2}{6}+2\right)\sin^{-1}\frac{\sqrt{3}}{d} +\left(\frac{26d^2}{9}+\frac{4}{3}\right)\sqrt{d^2-3}+\left(\frac{1}{6} -\frac{2\pi}{9\sqrt{3}}\right)d^4 \\ ~~~~-\frac{16}{9}d^3-\frac{8\pi}{3\sqrt{3}}d^2-\frac{5}{6} & \quad & \sqrt{3} \leq d \leq 2\\ 1 & \quad & d \geq 2 \end{array} \right.. 
\end{equation} \subsection{$|gh|$: Distance Distribution between Two Diagonal Equilateral Triangles Sharing a Vertex} This case corresponds to the random distance $|gh|$ in Fig.~\ref{fig:triangle}. Here a regular hexagon is divided into six unit triangles. Looking at the two random endpoints of a given link inside the hexagon, the first endpoint can fall inside any one of the six triangles, and the second endpoint will i) fall inside the same triangle as the first one (such as $|ab|$) with probability $\frac{1}{6}$; ii) fall inside the adjacent triangle sharing a side (such as $|pq|$) with probability $\frac{1}{3}$; iii) fall inside the parallel triangle sharing a vertex (such as $|ef|$) with probability $\frac{1}{3}$; iv) fall inside the diagonal triangle sharing a vertex (such as $|gh|$) with probability $\frac{1}{6}$. The density function of the random distances within a regular hexagon has been derived in~\cite{yanyan2011h}, and we denote it as $f_H(d)$. Also denote the density function of random distance $|gh|$ as $g_{D_{\rm D}}(d)$, we have \begin{equation} f_H(d)=\frac{1}{6}g_{D_{\rm I}}(d)+\frac{1}{3}g_{D_{\rm A}}(d)+\frac{1}{3}g_{D_{\rm P}}(d) +\frac{1}{6}g_{D_{\rm D}}(d), \end{equation} or, \begin{equation} g_{D_{\rm D}}(d)=6f_H(d)-\left[g_{D_{\rm I}}(d)+2g_{D_{\rm A}}(d)+2g_{D_{\rm P}}(d) \right]. \end{equation} Therefore, the probability density function of the random distances between two uniformly distributed points, one in each of the two diagonal unit triangles that are sharing a vertex, is \begin{equation}\label{eq:triangle_diag} g_{D_{\rm D}}(d)=4d\left\{ \begin{array}{lcr} \left(\frac{2}{3}-\frac{2\pi}{9\sqrt{3}}\right)d^2 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2}\\ \frac{4}{\sqrt{3}}\left(\frac{2d^2}{3}+1\right)\sin^{-1}\frac{\sqrt{3}}{2d}+ \left(\frac{2}{3}-\frac{14\pi}{9\sqrt{3}}\right)d^2+2\sqrt{4d^2-3}-\frac{2\pi}{\sqrt{3}} & \quad & \frac{\sqrt{3}}{2}\leq d\leq 1\\ -\frac{4}{\sqrt{3}}\left(\frac{2d^2}{3}+1\right)\sin^{-1}\frac{\sqrt{3}}{2d} +\left(\frac{2\pi}{9\sqrt{3}}-\frac{2}{3}\right)d^2-2\sqrt{4d^2-3}\\ ~~~~+\frac{16}{3}d+\frac{2\pi}{3\sqrt{3}} & \quad & 1\leq d\leq \sqrt{3} \\ -\frac{4d^2}{3\sqrt{3}}\sin^{-1}\frac{\sqrt{3}}{d}+\left(\frac{4\pi}{9\sqrt{3}} -\frac{4}{3}\right)d^2-\frac{4}{3}\sqrt{d^2-3}+\frac{16}{3}d-4 & \quad & \sqrt{3} \leq d \leq 2\\ 0 & \quad & {\rm otherwise} \end{array} \right.. \end{equation} The corresponding CDF is \begin{equation}\label{eq:triangle_diag_cdf} G_{D_{\rm D}}(d)= 2\left\{ \begin{array}{lcr} 0 & \quad & d \leq 0 \\ \left(\frac{1}{3}-\frac{\pi}{9\sqrt{3}}\right)d^4 & \quad & 0\leq d\leq \frac{\sqrt{3}}{2} \\ \frac{4d^2}{\sqrt{3}}\left(\frac{d^2}{3}+1\right)\sin^{-1}\frac{\sqrt{3}}{2d} +\left(\frac{13d^2}{9}+\frac{1}{6}\right)\sqrt{4d^2-3}+\left(\frac{1}{3}- \frac{7\pi}{9\sqrt{3}}\right)d^4\\ ~~~~-\frac{2\pi}{\sqrt{3}}d^2 & \quad & \frac{\sqrt{3}}{2}\leq d \leq 1\\ -\frac{4d^2}{\sqrt{3}}\left(\frac{d^2}{3}+1\right)\sin^{-1}\frac{\sqrt{3}}{2d} -\left(\frac{13d^2}{9}+\frac{1}{6}\right)\sqrt{4d^2-3}+\left(\frac{\pi}{9\sqrt{3}} -\frac{1}{3}\right)d^4 \\ ~~~~+\frac{32}{9}d^3+\frac{2\pi}{3\sqrt{3}}d^2+\frac{1}{3} & \quad & 1\leq d \leq \sqrt{3}\\ -\frac{2d^4}{3\sqrt{3}}\sin^{-1}\frac{\sqrt{3}}{d}+\left(\frac{4}{3}- \frac{10d^2}{9}\right)\sqrt{d^2-3}+\left(\frac{2\pi}{9\sqrt{3}} -\frac{2}{3}\right)d^4+\frac{32}{9}d^3 \\ ~~~~-4d^2+\frac{11}{6} & \quad & \sqrt{3} \leq d \leq 2\\ 1 & \quad & d \geq 2 \end{array} \right.. 
\end{equation} \section{Verification by Simulation} \begin{figure} \caption{Distributions of Random Distances Associated with Equilateral Triangles.} \label{fig:triangle_pdf} \end{figure} \begin{figure} \caption{Distribution and Simulation Results for Random Distances Associated with Equilateral Triangles.} \label{fig:triangle_cdf} \end{figure} Figure~\ref{fig:triangle_pdf} plots the probability density functions, as given in (\ref{eq:triangle_within}), (\ref{eq:triangle_btw}), (\ref{eq:triangle_para}) and (\ref{eq:triangle_diag}) of the four random distance cases shown in Fig.~\ref{fig:triangle}. Figure~\ref{fig:triangle_cdf} shows a comparison between the cumulative distribution functions (CDFs) of the random distances, and the simulation results by generating $1,000$ pairs of random points with the corresponding geometric locations. Figure~\ref{fig:triangle_cdf} demonstrates that our distance distribution functions are very accurate when compared with the simulation results. \section{Practical Results} \subsection{Statistical Moments of Random Distances} The distance distribution functions given in Section~\ref{sec:result} can conveniently lead to all the statistical moments of the random distances associated with equilateral triangles. Given $g_{D_{\rm I}}(d)$ in (\ref{eq:triangle_within}), for example, the first moment (mean) of $d$, i.e., the average distance within a unit triangle, is \begin{eqnarray} M_{D_{\rm I}}^{(1)}=\int_0^1xg_{D_{\rm I}}(x)dx=\frac{1}{5}+\frac{3}{20}\ln (3)\approx 0.3647918433, \nonumber \end{eqnarray} and the second raw moment is \begin{eqnarray} M_{D_{\rm I}}^{(2)}=\int_0^1x^2g_{D_{\rm I}}(x)dx=\frac{1}{6}, \nonumber \end{eqnarray} from which the variance (the second central moment) can be derived as \begin{eqnarray} Var_{D_{\rm I}}=M_{D_{\rm I}}^{(2)}-\left[M_{D_{\rm I}}^{(1)}\right]^2\approx 0.0335935777. \nonumber \end{eqnarray} When the side length of the unit triangle is scaled by $s$, the corresponding first two statistical moments given above then become \begin{equation} M_{D_{\rm I}}^{(1)}=0.3647918433s,~~\mbox{}~~M_{D_{\rm I}}^{(2)}=\frac{s}{6} ~~\mbox{ and }~~Var_{D_{\rm I}}=0.0335935777s^2. \end{equation} \begin{table} \caption{Moments and Variance---Numerical vs Simulation Results} \centering \begin{tabular}{|c||c|c|c|c|} \hline Endpoint Geometry & PDF/Sim & $M_{D}^{(1)}$ & $M_{D}^{(2)}$ & $Var_{D}$ \\ \hline \hline Within a & $g_{D_{\rm I}}(d)$ & $0.3647918433s$ & $0.1666666667s$ & $0.0335935777s^2$ \\ \cline{2-5} Single Triangle & Sim & $0.3636606517s$ & $0.1654245278s$ & $0.0331164521s^2$ \\ \hline Between Two & $g_{D_{\rm A}}(d)$ & $0.6599648287s$ & $0.5s$ & $0.0644464249s^2$\\ \cline{2-5} Adjacent Triangles & Sim & $0.6597703174s$ & $0.4991102154s$ & $0.0637684635s^2$ \\ \hline Between Two & $g_{D_{\rm P}}(d)$ & $1.0423971067s$ & $1.1666666667s$ & $0.0800749386s^2$\\ \cline{2-5} Parallel Triangles & Sim & $1.0423259678s$ & $1.1655336144s$ & $0.0791204824s^2$ \\ \hline Between Two & $g_{D_{\rm D}}(d)$ & $1.1880379828s$ & $1.5s$ & $0.0885657513s^2$\\ \cline{2-5} Diagonal Triangles & Sim & $1.1896683303s$ & $1.5048850531s$ & $0.0896042750s^2$ \\ \hline \end{tabular} \label{tab:moment} \end{table} Table~\ref{tab:moment} lists the first two moments and the variance of the random distances in all four cases given in Section~\ref{sec:result}. It also gives the corresponding simulation results for verification purposes. 
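The simulation check described above is straightforward to reproduce. The following Python sketch (ours, not the authors' original code) samples pairs of uniform points in a unit equilateral triangle and compares the empirical first and second moments of their distance with the closed-form values $\frac{1}{5}+\frac{3}{20}\ln 3 \approx 0.3648$ and $\frac{1}{6}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_unit_triangle(n):
    # Uniform points in the equilateral triangle with vertices
    # (0,0), (1,0) and (1/2, sqrt(3)/2), via the standard reflection trick.
    u, v = rng.random(n), rng.random(n)
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    B = np.array([1.0, 0.0])
    C = np.array([0.5, np.sqrt(3) / 2])
    return np.outer(u, B) + np.outer(v, C)

n = 200_000
d = np.linalg.norm(sample_unit_triangle(n) - sample_unit_triangle(n), axis=1)
print(d.mean(), 1 / 5 + 3 / 20 * np.log(3))  # first moment, ~0.3648
print((d ** 2).mean(), 1 / 6)                # second raw moment, 1/6
\end{verbatim}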
\subsection{Polynomial Fits of Random Distances} \begin{table} \caption{Coefficients of the Polynomial Fit and the Norm of Residuals (NR)} \centering \begin{tabular}{|c||c|c|} \hline PDF & Polynomial Coefficients & NR \\ \hline \hline \vspace*{-0.12cm} & \vspace*{-0.12cm} & \vspace*{-0.12cm} \\ & $10^{10}\times \left[0.006410~-0.062752~~0.283869~-0.787308~~1.497828\right.$ & \\ $g_{D_{\rm I}}(d)$ & $-2.071998~~2.155623~-1.720734~~1.065808~-0.514668~~0.193640$ & $0.002646$ \\ & $-0.056449~~0.012613~-0.002124~~0.000263~ -0.000023~~0.000001$ & \\ & $\left.0~~0~0~~0\right]$ & \\\hline \vspace*{-0.12cm} & \vspace*{-0.12cm} & \vspace*{-0.12cm} \\ & $10^8\times \left[-0.000192~~0.003514~-0.029768~~0.154523~-0.549699\right.$ & \\ $g_{D_{\rm A}}(d)$ & $1.420142~-2.754830~~4.091879~-4.703977~~4.202832~-2.914966$ & \\ & $1.559704~-0.636497~~0.194663~-0.043503~~0.006952~-0.000722$ & $0.086864$ \\ & $\left.0.000047~-0.000002~~0~0\right]$ & \\\hline \vspace*{-0.12cm} & \vspace*{-0.12cm} & \vspace*{-0.12cm} \\ & $10^7\times \left[-0.000069~~0.001559~-0.016145~~0.101938~-0.439383\right.$ & \\ $g_{D_{\rm P}}(d)$ & $1.370859~-3.202401~~5.714218~-7.873972~~8.415340~-6.697807$ & \\ & $4.441839~-2.155120~~0.781941~-0.206851~~0.038480~-0.004776$ & $0.105657$ \\ & $\left.0.000365~-0.000015~~0~0\right]$ & \\\hline \vspace*{-0.12cm} & \vspace*{-0.12cm} & \vspace*{-0.12cm} \\ & $10^7\times \left[0.000023~-0.000530~~0.005579~-0.035274~~0.150448\right.$ & \\ $g_{D_{\rm D}}(d)$ & $-0.460164~~1.045986~-1.805145~~2.394131~-2.453508~~1.942627$ & \\ & $-1.182240~~0.547275~-0.189550~~0.047945~-0.008552~~0.001022$ & $0.075017$ \\ & $\left.0.000076~-0.000003~~0~0\right]$ & \\\hline \end{tabular} \label{tab:poly} \end{table} \begin{figure} \caption{Polynomial Fit of the Distance Distribution Functions Associated with Equilateral Triangles.} \label{fig:triangle_poly} \end{figure} Table~\ref{tab:poly} lists the coefficients of the degree-$20$ polynomial fits of the original PDFs given in Section~\ref{sec:result}, from $d^{20}$ to $d^{0}$, and the corresponding norm of residuals. Figure~\ref{fig:triangle_poly}(a)--(d) plot the polynomials listed in Table~\ref{tab:poly} with the original PDFs. From the figure, it can be seen that all the polynomials match closely with the original PDFs. These high-order polynomials facilitate further manipulations of the distance distribution functions, with a high accuracy. \section{Conclusions} \label{sec:conclude} In this report, we gave the closed-form probability density functions of the random distances associated with equilateral triangles. The correctness of the obtained results has been validated by simulation. The first two statistical moments and the polynomial fits of the density functions are also given for practical uses. \section*{Acknowledgment} The authors would like to thank Dr. Aaron Gulliver for initially posing the problem associated with hexagons, which lead to our previous work~\cite{yanyan2011, yanyan2011h} and this report. \end{document}
\begin{document}
\title{Sobolev homeomorphic extensions}
\author[A. Koski]{Aleksis Koski}
\address{Department of Mathematics and Statistics, P.O.Box 35 (MaD) FI-40014 University of Jyv\"askyl\"a, Finland}
\email{[email protected]}
\author[J. Onninen]{Jani Onninen}
\address{Department of Mathematics, Syracuse University, Syracuse, NY 13244, USA and Department of Mathematics and Statistics, P.O.Box 35 (MaD) FI-40014 University of Jyv\"askyl\"a, Finland}
\email{[email protected]}
\thanks{A. Koski was supported by the Academy of Finland Grant number 307023. J. Onninen was supported by the NSF grant DMS-1700274.}
\subjclass[2010]{Primary 46E35, 58E20}
\keywords{Sobolev homeomorphisms, Sobolev extensions, Douglas condition}
\begin{abstract}
Let $\mathbb{X}$ and $\mathbb{Y}$ be $\ell$-connected Jordan domains, $\ell \in \mathbb N$, with rectifiable boundaries in the complex plane. We prove that any boundary homeomorphism $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ admits a Sobolev homeomorphic extension $h \colon \overline{\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ in $\mathscr{W}^{1,1} (\mathbb{X}, \mathbb{C})$. If instead $\mathbb{X}$ has $s$-hyperbolic growth with $s>p-1$, we show that such an extension can be found in the Sobolev class $\mathscr{W}^{1,p} (\mathbb{X}, \mathbb C)$ for $p\in (1,2)$. Our examples show that the assumptions of rectifiable boundary and hyperbolic growth cannot be relaxed. We also consider the existence of $\mathscr{W}^{1,2}$-homeomorphic extensions subject to a given boundary data.
\end{abstract}
\maketitle
\section{Introduction}
Throughout this text $\mathbb{X}$ and $\mathbb{Y}$ are $\ell$-connected Jordan domains, $\ell=1,2, \dots$, in the complex plane $\mathbb C$. Their boundaries $\partial\mathbb{X}$ and $\partial\mathbb{Y}$ are thus each a disjoint union of $\ell$ simple closed curves. If $\ell = 1$, these domains are simply connected and will just be called Jordan domains. In the simply connected case, the Jordan-Sch\"onflies theorem states that every homeomorphism $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ admits a continuous extension $h \colon \overline{\mathbb{X}} \to \overline{\mathbb{Y}}$ which takes $\mathbb{X}$ homeomorphically onto $\mathbb{Y}$. In the first part of this paper we focus on a Sobolev variant of the Jordan-Sch\"onflies theorem. The most pressing demand for studying such variants comes from the variational approach to Geometric Function Theory~\cite{AIMb, IMb, Reb} and Nonlinear Elasticity~\cite{Anb, Bac, Cib}. Both theories share the central problem of determining the infimum of a given energy functional
\begin{equation}\label{energ}
\mathsf E_\mathbb X [h] = \int_\mathbb X {\bf E}(x,h, Dh )\, \textnormal d x\, ,
\end{equation}
among orientation-preserving homeomorphisms $h \colon \overline{\mathbb X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb Y }$ in the Sobolev space $\mathscr W^{1,p} (\mathbb{X}, \mathbb{Y})$ with given boundary data $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$. We denote this class of mappings by $\mathscr H_\varphi^{1,p}(\mathbb X, \mathbb Y)$. Naturally, a fundamental question to raise is whether the class $\mathscr H_\varphi^{1,p}(\mathbb X, \mathbb Y)$ is non-empty.
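A simple model case of \eqref{energ} to keep in mind is the $p$-harmonic energy, obtained by taking ${\bf E}(x,h,Dh)=\abs{Dh}^p$, so that
\[
\mathsf E_{\mathbb X}[h]=\int_{\mathbb X} \abs{Dh}^p \, \textnormal d x\, ;
\]
for $p=2$ this is the Dirichlet energy, which will play a central role below. We record it here only for orientation; the questions that follow concern the class $\mathscr H_\varphi^{1,p}(\mathbb X, \mathbb Y)$ itself, independently of any particular integrand ${\bf E}$.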
\begin{question}\label{existQuestion} Under what conditions does a given boundary homeomorphism $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ admit a homeomorphic extension $h \colon \overline {\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ of Sobolev class $\mathscr{W}^{1,p} (\mathbb{X}, \mathbb{C})$? \end{question} A necessary condition is that the mapping $\varphi$ is the Sobolev trace of some (possibly non-homeomorphic) mapping in $\mathscr{W}^{1,p}(\mathbb{X}, \mathbb{C})$. Hence to solve Question \ref{existQuestion} one could first study the following natural sub-question: \begin{question}\label{Q2} Suppose that a homeomorphism $\varphi : \partial \mathbb{X} \to \partial \mathbb{Y}$ admits a Sobolev $\mathscr{W}^{1,p}$-extension to $\mathbb{X}$. Does it then follow that $\varphi$ also admits a homeomorphic Sobolev $\mathscr{W}^{1,p}$-extension to $\mathbb{X}$? \end{question} Our main results, Theorem~\ref{thm:main} and its multiply connected variant (Theorem~\ref{thm:multiply}), give an answer to these questions when $p \in [1,2)$. The construction of such extensions is important not only to ensure the well-posedness of the related variational questions, but also for example due to the fact that various types of extensions were used to provide approximation results for Sobolev homeomorphisms, see \cite{HP, IKOapprox}. We touch upon the variational topics in Section~\ref{sec:mono}, where we provide an application for one of our results. Apart from Theorem~\ref{thm:multiply} and its proof (\S\ref{anyplansguysz}), the rest of the paper deals with the simply connected case. Let us start considering the above questions in the well-studied setting of the Dirichlet energy, corresponding to $p=2$ above. The Rad\'o \cite{Ra}, Kneser \cite{Kn} and Choquet \cite{Ch} theorem asserts that if ${\mathbb Y} \mathbb{S}ubset {\mathbb R}^2$ is a convex domain then the harmonic extension of a homeomorphism $\varphi \colon \partial {\mathbb X} \to \partial {\mathbb Y}$ is a univalent map from ${\mathbb X}$ onto ${\mathbb Y}$. Moreover, by a theorem of Lewy \cite{Le}, this univalent harmonic map has a non-vanishing Jacobian and is therefore a real analytic diffeomorphism in $\mathbb{X}$. However, such an extension is not guaranteed to have finite Dirichlet energy in $\mathbb{X}$. The class of boundary functions which admit a harmonic extension with finite Dirichlet energy was characterized by Douglas~\cite{Do}. The {\it Douglas condition} for a function $\varphi \colon \partial \mathbb D \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ reads as \begin{equation}\label{eq:douglas} \int_{\partial \mathbb D} \int_{\partial \mathbb D} \leqslantft| \frac{\varphi (\mathfrak{X}i) - \varphi (\eta)}{ \mathfrak{X}i - \eta }\right|^2 \abs{\textnormal d \mathfrak{X}i } \, \abs{\textnormal d \eta } < \infty \, . \end{equation} The mappings satisfying this condition are exactly the ones that admit an extension with finite $\mathscr{W}^{1,2}$-norm. Among these extensions is the harmonic extension of $\varphi$, which is known to have the smallest Dirichlet energy among all extensions. Note that the Dirichlet energy is also invariant with respect to a conformal change of variables in the domain $\mathbb{X}$. Therefore thanks to the Riemann Mapping Theorem, when considering Question~\ref{existQuestion} in the case $p=2$, we may assume that $\mathbb{X} = \mathbb D$ without loss of generality. 
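For later use, let us also record the elementary computation behind this reduction. If $g \colon \mathbb D \to \mathbb X$ is conformal and $h \in \mathscr{W}^{1,2}(\mathbb X, \mathbb C)$, then the Jacobian of $g$ equals $\abs{g'}^2$ and $\abs{D(h\circ g)} = \abs{(Dh)\circ g}\, \abs{g'}$, so that
\[
\int_{\mathbb X} \abs{Dh}^2 = \int_{\mathbb D} \abs{(Dh)\circ g}^2 \abs{g'}^2 = \int_{\mathbb D} \abs{D(h\circ g)}^2\, ,
\]
which is the claimed conformal invariance of the Dirichlet energy. For a general exponent the same change of variables instead yields
\[
\int_{\mathbb X} \abs{Dh}^p = \int_{\mathbb D} \abs{D(h\circ g)}^p \abs{g'}^{2-p}\, ,
\]
and the weight $\abs{g'}^{2-p}$ does not cancel when $p \neq 2$; this is the same weight that appears in condition \eqref{integrableConfMap} below.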
Now, there is no challenge to answer Question~\ref{existQuestion} when $p=2$ and $\mathbb{Y}$ is Lipschitz. Indeed, for any Lipschitz domain there exists a global bi-Lipschitz change of variables $\Phi \colon \mathbb C \to \mathbb C$ for which $\Phi (\mathbb{Y})$ is the unit disk. Since the finiteness of the Dirichlet energy is preserved under a bi-Lipschitz change of variables in the target, we may reduce Question~\ref{existQuestion} to the case when $\mathbb{X} = \mathbb{Y} = \mathbb{D}$, for which the Rad\'o-Kneser-Choquet theorem and the Douglas condition provide an answer. In other words, if $\mathbb{Y}$ is Lipschitz then the following are equivalent for a boundary homeomorphism $\varphi \colon \partial \mathbb D \to \partial \mathbb{Y}$ \begin{enumerate} \item{$\varphi$ admits a $\mathscr{W}^{1,2}$-Sobolev homeomorphic extension $h \colon \overline{\mathbb D} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$}\label{pt:1} \item{$\varphi$ admits $\mathscr{W}^{1,2}$-Sobolev extension to $\mathbb D$}\label{pt:2} \item{$\varphi$ satisfies the Douglas condition~\eqref{eq:douglas}}\label{pt:3} \end{enumerate} In the case when $1 \leqslantqslant p < 2$, the problem is not invariant under a conformal change of variables in $\mathbb{X}$. However, when $\mathbb{X}$ is the unit disk and $\mathbb{Y}$ is a convex domain, a complete answer to Question~\ref{existQuestion} was provided by the following result of Verchota~\cite{Ve}. \begin{proposition}\label{Verchota} Let $\mathbb{Y}$ be a convex domain, and let $\varphi : \partial \mathbb{D} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ be any homeomorphism. Then the harmonic extension of $\varphi$ lies in the Sobolev class $\mathscr{W}^{1,p}(\mathbb{D},\mathbb{C})$ for all $1 \leqslant p < 2$. \end{proposition} This result was further generalized in~\cite{IMS} and~\cite{Kal}. The case $p>2$ will be discussed in Section~\ref{sec:liptriv}. Our main purpose is to provide a general study of Question~\ref{existQuestion} in the case when $1 \leqslantqslant p < 2$. Considering now the endpoint case $p = \infty$, we find that Question~\ref{existQuestion} is equivalent to the question of finding a homeomorphic Lipschitz map extending the given boundary data $\varphi$. In this case the Kirszbraun extension theorem~\cite{Ki} shows that a boundary map $\varphi \colon \partial \mathbb D \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ admits a Lipschitz extension if and only if $\varphi$ is a Lipschitz map itself. In the case when $\mathbb{X}$ is the unit disk, a positive answer to Question~\ref{Q2} is then given by the following recent result by Kovalev~\cite{Kovalev}. \begin{theorem}\label{thm:lip}\textnormal{{\bf ($p=\infty$)}} Let $\varphi \colon \partial \mathbb D \to \mathbb{C}$ be a Lipschitz embedding. Then $\varphi$ admits a homeomorphic Lipschitz extension to the whole plane $\mathbb{C}$. \end{theorem} Let us return to the case of the Dirichlet energy, see~\eqref{pt:1}-\eqref{pt:3} above. The equivalence of a $\mathscr{W}^{1,2}$-Sobolev extension and a $\mathscr{W}^{1,2}$-Sobolev homeomorphic extension for non-Lipschitz targets is a more subtle question. In this perspective, a slightly more general class of domains is the class of inner chordarc domains studied in Geometric Function Theory~\cite{HS, Po, Tu, Va1, Va2}. 
By definition~\cite{Va1}, a rectifiable Jordan domain $\mathbb{Y}$ is {\it inner chordarc} if there exists a constant $C$ such that for every pairs of points $y_1, y_2 \in \partial \mathbb{Y}$ there exists an open Jordan arc $\gamma \mathbb{S}ubset \mathbb{Y}$ with endpoints at $y_1$ and $y_2$ such that the shortest connection from $y_1$ to $y_2$ along $\partial \mathbb{Y}$ has length at most $C\cdot\textnormal{length} (\gamma )$. For example, an inner chordarc domain allows for inward cusps as oppose to Lipschitz domains. According to a result of V\"ais\"al\"a~\cite{Va1} the inner chordarc condition is equivalent with the requirement that there exists a homeomorphism $\Psi \colon \overline{\mathbb{Y}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb D}$, which is $\mathscr C^1$-diffeomorphic in $\mathbb{Y}$, such that the norms of both the gradient matrices $D\Psi$ and $(D\Psi)^{-1}$ are bounded from above. Surprisingly, the following example shows that, unlike for Lipschitz targets, the answer to Question~\ref{Q2} for $p=2$ is in general negative when the target is only inner chordarc. \begin{example}\label{ex:nodirichglet} There exists an inner chordarc domain $\mathbb{Y}$ and a homeomorphism $\varphi \colon \partial \mathbb D \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ satisfying the Douglas condition~\eqref{eq:douglas} which does not admit a homeomorphic extension $h \colon \overline {\mathbb D} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ in $\mathscr{W}^{1,2} (\mathbb D , \mathbb{Y})$. \end{example} In~\cite{AIMO} it was, as a part of studies of mappings with smallest mean distortion, proved that for $\mathscr C^1$-smooth $\mathbb{Y}$ the Douglas condition~\eqref{eq:douglas} can be equivalently formulated in terms of the inverse mapping $\varphi^{-1} \colon \partial \mathbb{Y} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb D$, \begin{equation}\label{eq:invdouglas} \int_{\partial \mathbb Y} \int_{\partial \mathbb Y} \big| \log \abs{\varphi^{-1} (\mathfrak{X}i) - \varphi^{-1} (\eta) } \big| \, \abs{\textnormal d \mathfrak{X}i } \, \abs{\textnormal d \eta } < \infty \, . \end{equation} It was recently shown that for inner chordarc targets this condition is necessary and sufficient for $\varphi$ to admit a $\mathscr{W}^{1,2}$-homeomorphic extension, see~\cite{KWX}. We extend this result both to cover rectifiable targets and to give a global homeomorphic extension as follows. \begin{theorem}\label{thm:dirichlet} \textnormal{{\bf ($p=2$)}} Let $\mathbb{X}$ and $\mathbb{Y}$ be Jordan domains, $\partial \mathbb{Y}$ being rectifiable. Every $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ satisfying~\eqref{eq:invdouglas} admits a homeomorphic extension $h \colon \mathbb{C} \to \mathbb{C}$ of Sobolev class $\mathscr{W}^{1,2}_{loc} (\mathbb{C}, \mathbb{C})$. \end{theorem} Without the rectifiability of $\partial \mathbb{Y}$, Question~\ref{Q2} will in general admit a negative answer for all $p \leqslantqslant 2$. This follows from the following example of Zhang~\cite{Zh}. 
\begin{example}\label{ex:yi} There exists a Jordan domain $\mathbb{Y}$ and a homeomorphism $\varphi \colon \partial \mathbb D \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ which admits a $\mathscr{W}^{1,2}$-Sobolev extension to $\mathbb{D}$ but does not admit any homeomorphic extension to $\mathbb{D}$ in the class $\mathscr{W}^{1,1}(\mathbb{D}, \mathbb C)$. \end{example} We now return to the case when $1 \leqslantqslant p < 2$. In this case it is natural to ask under which conditions on the domains $\mathbb{X}$ and $\mathbb{Y}$ does any homeomorphism $\varphi: \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ admit a $\mathscr{W}^{1,p}$-Sobolev homeomorphic extension. Proposition~\ref{Verchota} already implies that this is the case for $\mathbb{X} = \mathbb{D}$ and $\mathbb{Y}$ convex. Example~\ref{ex:yi}, however, will imply that this result does not hold in general for nonrectifiable targets $\mathbb{Y}$. A general characterization is provided by the following two theorems. \begin{theorem}\label{thm:main} \textnormal{{\bf ($1\leqslant p<2$)}} Let $\mathbb{X}$ and $\mathbb{Y}$ be Jordan domains in the plane with $\partial \mathbb{Y}$ rectifiable. Let $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ be a given homeomorphism. Then there is a homeomorphic extension $h \colon \overline{\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ such that \begin{enumerate} \item $h \in \mathscr{W}^{1,1} (\mathbb{X}, \mathbb{C})$, provided $\partial\mathbb{X}$ is rectifiable, and \item $h\in \mathscr{W}^{1,p} (\mathbb{X}, \mathbb{C})$ for $1<p<2$, provided $\mathbb{X}$ has \emph{$s$-hyperbolic growth} with $s>p-1$. \end{enumerate} \end{theorem} \begin{definition}\label{def:shyper} Let $\mathbb{X}$ be a domain in the plane. Choose and fix a point $x_0 \in \mathbb{X}$. We say that $\mathbb{X}$ has \emph{$s$-hyperbolic growth}, $s \in (0,1)$, if the following condition holds \begin{equation}\label{cuspCondition} h_{\mathbb{X}}(x_0,x) \leqslantqslant C \leqslantft(\frac{\dist(x_0,\partial \mathbb{X})}{\dist(x,\partial \mathbb{X})}\right)^{1-s} \qquad \text{ for all } x \in \mathbb{X} \, . \end{equation} Here $h_{\mathbb{X}}$ stands for the quasihyperbolic metric on $\mathbb{X}$ and $\dist(x,\partial\mathbb{X})$ is the Euclidean distance of $x$ to the boundary. The constant $C$ is allowed to depend on everything except the point $x$. \end{definition} It is easily verified that this definition does not depend on the choice of $x_0$. Recall that if $\Omega$ is a domain, the quasihyperbolic metric $h_{\Omega} $ is defined by~\cite{GP} \begin{equation} h_{\Omega} (x_1, x_2) = \inf_{\gamma \in \Gamma} \int_\gamma \frac{1}{\dist(x,\partial\mathbb{X})}\, \abs{\textnormal d x} \, , \qquad x_1 , x_2 \in \Omega \end{equation} where $\Gamma$ is the family of all rectifiable curves in $\Omega$ joining $x_1$ and $x_2$. Definition~\ref{def:shyper} is motivated by the following example. For $s \in (0,1)$ we consider the Jordan domain $\mathbb{X}_s$ whose boundary is given by the curve \[\Gamma_s = \{(x,y) \in \mathbb{C} : -1 \leqslantqslant x \leqslantqslant 1,\, y = |x|^s\} \cup \{ z \in \mathbb{C} : |z-i| = 1,\, \im(z) \geqslantqslant 1\}.\] \begin{figure} \caption{The Jordan domain $\mathbb{X} \label{cuspFig} \end{figure} In particular, the boundary of $\mathbb{X}_s$ is locally Lipschitz apart from the origin. 
Near to the origin the boundary of $\mathbb{X}_s$ behaves like the graph of the function $|x|^s$. Then one can verify that the boundary of $\mathbb{X}_s$ has $t$-hyperbolic growth for every $t \geqslantqslant s$. Note that smaller the number $s$ sharper the cusp is. The results of Theorem~\ref{thm:main} are sharp, as described by the following result. \begin{theorem}\label{counterExampleThm1} \quad \begin{enumerate} \item{There exists a Jordan domain $\mathbb{X}$ with nonrectifiable boundary and a homeomorphism $\varphi : \partial\mathbb{X} \to \partial\mathbb{D}$ such that $\varphi$ does not admit a continuous extension to $\mathbb{X}$ in the Sobolev class $\mathscr{W}^{1,1}(\mathbb{X}, \mathbb C)$.} \item{For every $p \in (1,2)$ there exists a Jordan domain $\mathbb{X}$ which has $s$-hyperbolic growth, with $p-1 = s$, and a homeomorphism $\varphi : \partial\mathbb{X} \to \partial\mathbb{D}$ such that $\varphi$ does not admit a continuous extension to $\mathbb{X}$ in the Sobolev class $\mathscr{W}^{1,p}(\mathbb{X}, \mathbb C)$.} \end{enumerate} \end{theorem} To conclude, as promised earlier, we extend our main result to the case where the domains are not simply connected. The following generalization of Theorem \ref{thm:main} holds. \begin{theorem}\label{thm:multiply} Let $\mathbb{X}$ and $\mathbb{Y}$ be multiply connected Jordan domains with $\partial \mathbb{Y}$ rectifiable. Let $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ be a given homeomorphism which maps the outer boundary component of $\mathbb{X}$ to the outer boundary component of $\mathbb{Y}$. Then there is a homeomorphic extension $h \colon \overline{\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ such that \begin{enumerate} \item $h \in \mathscr{W}^{1,1} (\mathbb{X}, \mathbb{C})$, provided $\partial\mathbb{X}$ is rectifiable, and \item $h\in \mathscr{W}^{1,p} (\mathbb{X}, \mathbb{C})$ for $1<p<2$, provided $\mathbb{X}$ has \emph{$s$-hyperbolic growth} with $s>p-1$. \end{enumerate} \end{theorem} \mathbb{S}ubsection*{Acknowledgements} We thank Pekka Koskela for posing the main question of this paper to us. \mathbb{S}ection{Preliminaries}\label{sec:pre} \mathbb{S}ubsection{The Dirichlet problem} Let $\Omega $ be a bounded domain in the complex plane. A function $u \colon \Omega \to \mathbb R$ in the Sobolev class $\mathscr{W}^{1,p}_{\loc} (\Omega)$, $1<p<\infty$, is called {\it $p$-harmonic} if \begin{equation}\label{eq:eijoo}\partialiv \abs{\nabla u}^{p-2} \nabla u =0. \end{equation} We call $2$-harmonic functions simply \emph{harmonic}. There are two formulations of the Dirichlet boundary value problem for the $p$-harmonic equation~\eqref{eq:eijoo}. We first consider the variational formulation. \begin{lemma} Let $u_\circ \in \mathscr{W}^{1,p}(\Omega)$ be a given Dirichlet data. There exists precisely one function $u\in u_\circ + \mathscr{W}_\circ^{1,p}(\Omega)$ which minimizes the $p$-harmonic energy: \[\int_\Omega \abs{\nabla u}^p =\inf \leqslantft\{ \int_\Omega \abs{\nabla w}^p\colon w\in u_\circ + \mathscr{W}_\circ^{1,p}(\Omega) \right\}.\] \end{lemma} The variational formulation coincides with the classical formulation of the Dirichlet problem. \begin{lemma}\label{proexist} Let $\Omega \mathbb{S}ubset \mathbb{C}$ be a bounded Jordan domain and $u_\circ \in \mathscr{W}^{1,p}(\Omega) \cap \mathscr C (\overline{\Omega})$. 
Then there exists a unique $p$-harmonic function $u\in \mathscr{W}^{1,p}(\Omega) \cap \mathscr C (\overline{\Omega})$ such that $u_{|_{\partial \Omega}}=u_{\circ |_{\partial \Omega}}$. \end{lemma} For a reference for proofs of these facts we refer to~\cite{IKOapprox}. \mathbb{S}ubsection{The Rad\'o-Kneser-Choquet Theorem} \begin{lemma}\label{lem:RKC} Consider a Jordan domain $\mathbb X \mathbb{S}ubset \mathbb{C}$ and a bounded convex domain $\mathbb Y \mathbb{S}ubset \mathbb C$. Let $h \colon \partial \mathbb X \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb Y$ be a homeomorphism and $H \colon \mathbb U \to \mathbb{C}$ denote its harmonic extension. Then $H$ is a $\mathscr C^\infty$-diffeomorphism of $\mathbb X$ onto $\mathbb Y$. \end{lemma} For the proof of this lemma we refer to~\cite{Dub, IOrado}. The following $p$-harmonic analogue of the Rad\'o-Kneser-Choquet Theorem is due to Alessandrini and Sigalotti~\cite{AS}, see also~\cite{IOsimply}. \begin{proposition} Let $\mathbb{X}$ be a Jordan domain in $\mathbb C$, $1<p< \infty$, and $h=u+iv \colon \overline{\mathbb{X} } \to \mathbb C$ be a continuous mapping whose coordinate functions are $p$-harmonic. Suppose that $\mathbb{Y}$ is convex and that $h \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ is a homeomorphism. Then $h$ is a diffeomorphism from $\mathbb{X}$ onto $\mathbb{Y}$. \end{proposition} \mathbb{S}ubsection{Sobolev homeomorphic extensions onto a Lipschitz target}\label{sec:liptriv} Combining the results in this section allows us to easily solve Question~\ref{Q2} for convex targets. \begin{proposition}\label{pr:blah} Let $\mathbb{X}$ and $\mathbb{Y}$ be Jordan domains in the plane with $\mathbb{Y}$ convex, and let $p$ be given with $1<p< \infty$. Suppose that $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ is a homeomorphism. Then there exists a continuous $g\colon \overline{\mathbb{X}} \to \mathbb{C}$ in $\mathscr{W}^{1,p} (\mathbb{X}, \mathbb{C})$ such that $g(x)=\varphi(x)$ on $\partial \mathbb{X}$ if and only if there exists a homeomorphism $h\colon \overline{\mathbb{X}} \to \overline{\mathbb{Y}}$ in $\mathscr{W}^{1,p} (\mathbb{X}, \mathbb{C})$ such that $h(x)=\varphi(x)$ on $\partial \mathbb{X}$. \end{proposition} Now, replacing the convex $\mathbb{Y}$ by a Lipschitz domain offers no challenge. Indeed, this follows from a global bi-Lipschitz change of variables $\Phi \colon \mathbb C \to \mathbb C$ for which $\Phi (\mathbb{Y})$ is the unit disk. If the domain in Proposition~\ref{pr:blah} is the unit disk $\mathbb D$, then the existence of a finite $p$-harmonic extension can be characterized in terms of a Douglas type condition. If $1<p<2$, then such an extension exists for an arbitrary boundary homeomorphism (Proposition~\ref{Verchota}) and if $2 \leqslant p < \infty$ the extension exists if and only the boudary homeomorphism $\varphi \colon \partial \mathbb D \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ satisfies the following condition, \begin{equation}\label{eq:pdouglas} \int_{\partial \mathbb D} \int_{\partial \mathbb D} \leqslantft| \frac{\varphi (\mathfrak{X}i) - \varphi (\eta)}{ \mathfrak{X}i - \eta }\right|^p \abs{\textnormal d \mathfrak{X}i } \, \abs{\textnormal d \eta } < \infty \, . \end{equation} For the proof of the latter fact we refer to~\cite[p. 151-152]{Stb}. 
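To get a feeling for the conditions \eqref{eq:douglas} and \eqref{eq:pdouglas}, we record an elementary sufficient criterion (stated only for orientation; it is not used elsewhere). If $\varphi \colon \partial \mathbb D \to \mathbb C$ is H\"older continuous with exponent $\alpha \in (0,1]$, say $\abs{\varphi(\xi)-\varphi(\eta)} \leqslant C \abs{\xi - \eta}^\alpha$, then
\[
\int_{\partial \mathbb D} \int_{\partial \mathbb D} \left| \frac{\varphi (\xi) - \varphi (\eta)}{ \xi - \eta }\right|^p \abs{\textnormal d \xi } \, \abs{\textnormal d \eta }
\leqslant C^p \int_{\partial \mathbb D} \int_{\partial \mathbb D} \abs{\xi - \eta}^{-p(1-\alpha)} \abs{\textnormal d \xi } \, \abs{\textnormal d \eta } < \infty
\]
as soon as $p(1-\alpha)<1$, because $\abs{\xi-\eta}$ is comparable to the arc length distance on $\partial\mathbb D$ and $t \mapsto t^{-\beta}$ is integrable at the origin for $\beta<1$. In particular, for $p=2$ every boundary homeomorphism which is H\"older continuous with exponent $\alpha>1/2$ satisfies the Douglas condition \eqref{eq:douglas}.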
\subsection{A Carleson measure and the Hardy space $H^p$} Roughly speaking, a Carleson measure on a domain $\mathbb G$ is a measure that does not put too much mass near the boundary of $\mathbb G$ when compared to the Hausdorff $1$-measure on $\partial \mathbb G$. We will need the notion of Carleson measure only on the unit disk $\mathbb D$.
\begin{definition}\label{def:carleson} Let $\mu$ be a Borel measure on $\mathbb D$. Then $\mu$ is a {\it Carleson measure} if there is a constant $C>0$ such that
\[\mu (S_\epsilon(\theta)) \leqslant C \epsilon\]
for every $\epsilon >0$ and every $\theta \in [0,2\pi)$. Here
\[ S_\epsilon(\theta) = \{r e^{i\alpha} : 1 - \epsilon < r < 1, \theta - \epsilon < \alpha < \theta + \epsilon\} \, . \]
\end{definition}
Carleson measures have many applications in harmonic analysis. A celebrated result by L. Carleson~\cite{Ca}, see also Theorem 9.3 in~\cite{Duhp}, tells us that a Borel measure $\mu$ on $\mathbb D$ is a Carleson measure if and only if the natural embedding of the Hardy space $H^p (\mathbb D)$ into the space $L^p_\mu (\mathbb D)$ is bounded:
\begin{proposition}\label{pro:carleson} Let $\mu$ be a Borel measure on the unit disk $\mathbb D$. Let $0<p<\infty$. Then in order that there exist a constant $C>0$ such that
\[\left( \int_{\mathbb D} \abs{f(z)}^p \, d \mu (z)\right)^\frac{1}{p} \leqslant C ||f||_{H^p (\mathbb D)} \quad \textnormal{for all } f \in H^p (\mathbb D) \]
it is necessary and sufficient that $\mu$ be a Carleson measure.
\end{proposition}
Recall that the Hardy space $H^p(\mathbb D)$, $0<p<\infty$, is the class of holomorphic functions $f$ on the unit disk satisfying
\[||f||_{H^p(\mathbb D)}:=\sup_{0 \leqslant r <1 } \left(\frac{1}{2\pi} \int_0^{2\pi } \abs{f(r e^{i \theta})}^p \, d \theta \right)^\frac{1}{p} < \infty \, . \]
Note that $||\cdot||_{H^p(\mathbb D)}$ is a norm when $p \geqslant 1$, but not when $0<p<1$.
\section{Sobolev integrability of the harmonic extension}
At the end of this section we prove our main result in the simply connected case, Theorem \ref{thm:main}. The proof will be based on a suitable reduction of the target domain to the unit disk, and the following auxiliary result which concerns the regularity of harmonic extensions.
\begin{theorem}\label{harmonicThm} Let $\mathbb{X}$ be a Jordan domain and $\varphi : \partial \mathbb{X} \to \partial \mathbb{D}$ be an arbitrary homeomorphism. Let $h$ denote the harmonic extension of $\varphi$ to $\mathbb{X}$, which is a homeomorphism from $\bar{\mathbb{X}}$ to $\bar{\mathbb{D}}$. Then the following hold.
\begin{enumerate}
\item{If the boundary of $\mathbb{X}$ is rectifiable, then $h \in \mathscr{W}^{1,1}(\mathbb{X}, \mathbb{C})$.}
\item{If $\mathbb{X}$ has $s$-hyperbolic growth, then $h \in \mathscr{W}^{1,p}(\mathbb{X}, \mathbb{C})$ for every $p \in (1,2)$ with $p-1 < s$.}
\end{enumerate}
\end{theorem}
This theorem will be a direct corollary of the following theorem and the two propositions after it.
\begin{theorem}\label{harmonicThmV2} Let $\mathbb{X}$ be a Jordan domain, and denote by $g : \mathbb{D} \to \mathbb{X}$ a conformal map onto $\mathbb{X}$. Let $1 \leqslant p < 2$. Suppose that the condition
\begin{equation}\label{integrableConfMap} \sup_{\omega \in \partial \mathbb{D}} \int_{\mathbb{D}} \frac{|g'(z)|^{2-p}}{|\omega - z|^p} \, dz \leqslant M < \infty \end{equation}
holds.
Then the harmonic extension $h : \mathbb{X} \to \mathbb{D}$ of any boundary homeomorphism $\varphi : \partial \mathbb{X} \to \partial \mathbb{D}$ lies in the Sobolev space $\mathscr{W}^{1,p}(\mathbb{X}, \mathbb{C})$, with the estimate \begin{equation}\label{harmonicpEnergyEstim} ||h||_{\mathscr{W}^{1,p}(\mathbb{X}, \mathbb{C})} \leqslantqslant c M. \end{equation} \end{theorem} \begin{proposition}\label{h1Conformal} Let $\mathbb{X}$ be a Jordan domain with rectifiable boundary. Then the condition \eqref{integrableConfMap} holds with $p=1$. \end{proposition} \begin{proposition}\label{pConformal} Let $\mathbb{X}$ be a Jordan domain which has $s$-hyperbolic growth, with $s \in (0,1)$. Then condition \eqref{integrableConfMap} holds for all $p > 1$ with $p-1 < s$. \end{proposition} \begin{proof}[Proof of Theorem \ref{harmonicThmV2}] First, since $\mathbb{X}$ is a Jordan domain according to the classical Carath\'eodory's theorem the conformal mapping $g \colon \mathbb D \to \mathbb X$ extends continuously to a homeomorphism from the unit circle onto $\partial \mathbb X$. Second, since a conformal change of variables preserves harmonicity, we find that the map $H := h \circ g \colon \mathbb{D} \to \mathbb{D}$ is a harmonic extension of the boundary homeomorphism $\psi := \varphi \circ g\vert_{\partial \mathbb{D}}$. We will now assume that $H$ is smooth up to the boundary of $\mathbb{D}$. The general result will then follow by an approximation argument. Indeed, for each $r < 1$, we may take the preimage of the disk $B(0,r)$ under $H$, and letting $\psi_r: \mathbb{D} \to H^{-1}(B(0,r))$ be the conformal map onto this preimage we may define $H_r := H \circ \psi_r$. Then $H_r$ is harmonic, smooth up to the boundary of $\mathbb{D}$, and will converge to $H$ locally uniformly along with its derivatives as $r \to 1$. Hence the general result will follow once we obtain uniform estimates for the Sobolev norm under the assumption of smoothness up to the boundary. The harmonic extension $H := h \circ g \colon \mathbb{D} \to \mathbb{D}$ of $\psi := \varphi \circ g\vert_{\partial \mathbb{D}}$ is given by the Poisson integral formula~\cite{Dub}, \[(h \circ g) (z)=H(z) = \frac{1}{2 \pi} \int_{\partial \mathbb D} \frac{1-\abs{z}^2}{\abs{z- \omega}} \psi (\omega) \, d \omega \, . \] Differentiating this, we find the formula \begin{equation*} (h \circ g)_z = \int_{\partial \mathbb{D}} \frac{\psi(\omega)}{(z-\omega)^2}\, d\omega = \int_0^{2\pi} \frac{\psi(e^{it})}{(z-e^{it})^2} i e^{it}\, dt = \int_0^{2\pi} \frac{\psi'(e^{it})}{z-e^{it}} i e^{it}\, dt, \end{equation*} where we have used integration by parts to arrive at the last equality. The change of variables formula now gives \begin{align*} \int_{\mathbb{X}} |h_z(\tilde{z})|^p \, d\tilde{z} &= \int_{\mathbb{D}} |(h \circ g)_z(z)|^p |g'(z)|^{2-p} \, dz \\&= \int_{\mathbb{D}} \leqslantft|\int_0^{2\pi} \frac{\psi'(e^{it})}{z-e^{it}} i e^{it}\, dt\right|^p |g'(z)|^{2-p} \, dz, \end{align*} We now apply Minkowski's integral inequality to find that \begin{align*} &\leqslantft(\int_{\mathbb{D}} \leqslantft|\int_0^{2\pi} \frac{\psi'(e^{it})}{z-e^{it}} i e^{it}\, dt\right|^p |g'(z)|^{2-p} \, dz\right)^{\frac{1}{p}} \\&\qquad \leqslantqslant \int_0^{2\pi} |\psi'(e^{it})| \leqslantft(\int_{\mathbb{D}} \frac{|g'(z)|^{2-p}}{|z-e^{it}|^p} \, dz\right)^{\frac{1}{p}} \, dt \\&\qquad\leqslantqslant M\int_0^{2\pi} |\psi'(e^{it})| \, dt \\&\qquad= 2\pi M \end{align*} This gives the uniform bound $||h_z||_{L^p(\mathbb{X})} \leqslantqslant 2\pi M$. 
An analogous estimate for the $L^p$-norm of $h_{\bar{z}}$ now proves the theorem. \end{proof} \begin{proof}[Proof of Proposition \ref{h1Conformal}] Since $\partial\mathbb{X}$ is rectifiable, the derivative $g'$ of a conformal map from $\mathbb{D}$ onto $\mathbb{X}$ lies in the Hardy space $H^1(\mathbb D)$ by Theorem 3.12 in~\cite{Duhp}. By rotational symmetry it is enough to verify condition \eqref{integrableConfMap} for $\omega = 1$ and $g : \mathbb{D} \to \mathbb{X}$ an arbitrary conformal map. By Proposition~\ref{pro:carleson}, it suffices to verify that the measure $\mu(z) = \frac{dz}{|1-z|}$ is a Carleson measure, see Definition~\ref{def:carleson}, to obtain the estimate \[\int_{\mathbb{D}} \frac{|g'(z)|}{|1 - z|} \, dz \leqslantqslant C ||g'||_{H^1(\mathbb D)},\] which will imply that the proposition holds. Let us hence for each $\epsilon$ define the set $S_\epsilon(\theta) = \{r e^{i\alpha} : 1 - \epsilon < r < 1, \theta - \epsilon < \alpha < \theta + \epsilon\}$. We then estimate for small $\epsilon$ that \begin{equation*} \mu(S_\epsilon(0)) \leqslantqslant \mu(B(1,2\epsilon)) = \int_{B(1,2\epsilon)} \frac{dz}{|1-z|} = \int_0^{2\pi} \int_0^{2\epsilon} \frac{1}{r}\, r \, dr d\alpha = 4\pi \epsilon. \end{equation*} It is clear that for any other angles $\theta$ the $\mu$-measure of $S_\epsilon(\theta)$ is smaller than for $\theta = 0$. Hence $\mu$ is a Carleson measure and our proof is complete. \end{proof} \begin{proof}[Proof of Proposition \ref{pConformal}] Recall that $g$ denotes the conformal map from $\mathbb{D}$ onto $\mathbb{X}$. Since $\mathbb{X}$ has $s$-hyperbolic growth, we may apply Definition \ref{cuspCondition} with $x_0 = g(0)$ to find the estimate \begin{equation}\label{hyperbolicEstim1} h_{\mathbb{X}}(g(0),g(z)) \leqslantqslant C \leqslantft(\frac{1}{\dist(g(z),\partial \mathbb{X})}\right)^{1-s} \qquad \text{ for all } z \in \mathbb{D}, \end{equation} Since $\mathbb{X}$ is simply connected, the quasihyperbolic distance is comparable to the hyperbolic distance $\rho_{\mathbb{X}}$. By conformal invariance of the hyperbolic distance we find that \[C_1 h_{\mathbb{X}}(g(0),g(z)) \geqslantqslant \rho_{\mathbb{X}}(g(0),g(z)) = \rho_{\mathbb{D}}(0,z) = \log \frac{1}{1-|z|^2}.\] Now by the Koebe $\frac14$-theorem we know that the expression $\dist(g(z),\partial \mathbb{X})$ is comparable to $(1-|z|)|g'(z)|$ by a universal constant. Combining these observations with \eqref{hyperbolicEstim1} leads to the estimate \begin{equation*}\label{hyperbolicEstim2} \log \frac{1}{1-|z|^2} \leqslantqslant C \leqslantft(\frac{1}{(1-|z|)|g'(z)|}\right)^{1-s}, \end{equation*} which we transform into \begin{equation}\label{hyperbolicEstim3} |g'(z)| \leqslantqslant \frac{C}{(1-|z|) \log^{1/(1-s)} \frac{1}{1-|z|}}, \end{equation} Let us denote $\beta = (2-p)/(1-s)$ so that $\beta > 1$ by assumption. We now apply the estimate \eqref{hyperbolicEstim3} to find that \begin{align}\label{integralLogEstim}\int_{\mathbb{D}} \frac{|g'(z)|^{2-p}}{|1 - z|^p} \, dz \leqslantqslant C&\int_{\mathbb{D} \mathbb{S}etminus \frac12 \mathbb{D}} \frac{1}{(1-|z|)^{2-p}|1 - z|^p \log^{\beta} \frac{1}{1-|z|}} \, dz \\ \nonumber & + \int_{\frac12 \mathbb{D}} \frac{|g'(z)|^{2-p}}{|1 - z|^p} \, dz.\end{align} It is enough to prove that the quantity on the right hand side above is finite as then rotational symmetry will imply that the estimate \eqref{integrableConfMap} holds for all $\omega$. The second term is easily seen to be finite, as the integrand is bounded on the set $\frac12 \mathbb{D}$. 
To estimate the first integral we will cover the annulus $\mathbb{D} \mathbb{S}etminus \frac12\mathbb{D}$ by three sets $S_1,S_2$ and $S_3$ defined by \begin{align*}S_1 &= \{1 + r e^{i\theta} : r \leqslantqslant 3/4, \, 3\pi/4 \leqslantqslant \theta \leqslantqslant 5\pi/4\} \\ S_2 &= \{(x,y) \in \mathbb{D} : -1/\mathbb{S}qrt{2} \leqslantqslant y \leqslantqslant 1/\mathbb{S}qrt{2}, \, x \leqslantqslant 1, \, x \geqslantqslant 1 - |y|\} \\ S_3 &= \{ r e^{i\theta} : 1/2 \leqslantqslant r \leqslantqslant 1, \, \pi/4 \leqslantqslant \theta \leqslantqslant 7\pi/4\} \end{align*} See Figure \ref{Sfig} for an illustration of these sets. Since the sets $S_1,S_2$ and $S_3$ cover the set in question, it will be enough to see that the first integral on the right hand side of equation \eqref{integralLogEstim} is finite when taken over each of these sets. \begin{figure} \caption{The sets $S_i$, $i = 1,2,3$.} \label{Sfig} \end{figure} On the set $S_1$, one may find by geometry that the estimate $1-|z| \geqslantqslant c|1-z|$ holds for some constant $c$. Hence we may apply polar coordinates around the point $z = 1$ to find that \[\int_{S_1} \frac{1}{(1-|z|)^{2-p}|1 - z|^p \log^{\beta} \frac{1}{1-|z|}} \, dz \leqslantqslant C\int_{3\pi/4}^{5\pi/4} \int_0^{3/4} \frac{1}{r \log^{\beta} \frac{1}{r}} \, dr d\theta < \infty.\] On the set $S_3$, the expression $|1-z|$ is bounded away from zero. Hence bounding this term and the logarithm from below and changing to polar coordinates around the origin yields that \begin{align*}\int_{S_3}& \frac{1}{(1-|z|)^{2-p}|1 - z|^p \log^{\beta} \frac{1}{1-|z|}} \, dz \leqslantqslant C \int_{\pi/4}^{7\pi/4} \int_{1/2}^1 \frac{r}{(1-r)^{2-p}} \, dr d\theta < \infty. \end{align*} On the set $S_2$, we change to polar coordinates around the origin. For each angle $\theta$, we let $R_\theta$ denote the intersection of the ray with angle $\theta$ starting from the origin and the set $S_2$. On each such ray, we find that the expression $|1-z|$ is comparable to the size of the angle $\theta$. Since $1-|z| < |1-z|$, we may also replace $1-|z|$ by $|1-z|$ inside the logarithm, in total giving us the estimate \begin{equation}\label{thetaEstim} \frac{1}{|1-z|^p \log^{\beta} \frac{1}{1-|z|}} \leqslantqslant \frac{C}{|\theta|^p \log^{\beta} \frac{1}{|\theta|}}, \qquad z \in R_\theta. \end{equation} On each of the segments $R_\theta$ and small enough $\theta$, the modulus $r = |z|$ ranges from a certain distance $\rho(\theta)$ to $1$. This distance is found by applying the sine theorem to the triangle with vertices $0, 1$ and $\rho(\theta) e^{i\theta}$, giving us the equation \[\frac{\rho(\theta)}{\mathbb{S}in(\pi/4)} = \frac{1}{\mathbb{S}in(\pi - \pi/4 - \theta)} = \frac{1}{\mathbb{S}in(\pi/4 + \theta)}.\] From this one finds that the expression $1 - \rho(\theta) = \frac{\mathbb{S}in(\pi/4 + \theta) - \mathbb{S}in(\pi/4)}{\mathbb{S}in(\pi/4+\theta)}$, which also denotes the length of the segment $R_\theta$, is comparable to $|\theta|$. Using this and \eqref{thetaEstim} we now estimate that \begin{align*}\int_{S_2}& \frac{1}{(1-|z|)^{2-p}|1 - z|^p \log^{\beta} \frac{1}{1-|z|}} \, dz \\ &\leqslantqslant C \int_{-\pi/4}^{\pi/4} \frac{1}{|\theta|^p \log^{\beta} \frac{1}{|\theta|}} \int_{\rho(\theta)}^1 \frac{1}{(1-r)^{2-p}} \, dr d\theta \\ &= C \int_{-\pi/4}^{\pi/4} \frac{1}{|\theta|^p \log^{\beta} \frac{1}{|\theta|}} \frac{(1-\rho(\theta))^{p-1}}{p-1} \, dr d\theta \\ &\leqslantqslant C \int_{-\pi/4}^{\pi/4} \frac{1}{|\theta| \log^{\beta} \frac{1}{|\theta|}} \, dr d\theta \\&< \infty. 
\end{align*} This finishes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}] Since $\mathbb{Y}$ is a rectifiable Jordan domain, there exists a constant speed parametrization $\gamma : \partial \mathbb{D} \to \partial \mathbb{Y}$. Such a parametrization is then automatically a Lipschitz embedding of $\partial \mathbb{D}$ to $\mathbb{C}$, and hence Theorem \ref{thm:lip} implies that there exists a homeomorphic Lipschitz extension $G: \bar{\mathbb{D}} \to \bar{\mathbb{Y}}$ of $\gamma$. Let now $\varphi : \partial\mathbb{X} \to \partial\mathbb{Y}$ be a given boundary homeomorphism. We define a boundary homeomorphism $\varphi_0 : \partial \mathbb{X} \to \partial \mathbb{D}$ by setting $\varphi_0 := \varphi \circ \gamma^{-1}$. Let $h_0$ denote the harmonic extension of $\varphi_0$ to $\mathbb{X}$, so that by the RKC-theorem (Lemma~\ref{lem:RKC}) the composed map $h := G \circ h_0 : \bar{\mathbb{X}} \to \bar{\mathbb{Y}}$ gives a homeomorphic extension of the boundary map $\varphi$. If the map $h_0$ lies in the Sobolev space $\mathscr{W}^{1,p}(\mathbb{X}, \mathbb{C})$, then so does the map $h$ since the Sobolev integrability is preserved under a composition by a Lipschitz map. Hence Theorem~\ref{thm:main} now follows from Theorem~\ref{harmonicThm}. \end{proof} \mathbb{S}ection{Sharpness of Theorem \ref{thm:main}} In this section we prove Theorem \ref{counterExampleThm1}. We handle the two claims of this theorem separately.\\\\ \textbf{Example (1).} In this example we construct a nonrectifiable Jordan domain $\mathbb{X}$ and a boundary map $\varphi:\partial \mathbb{X} \to \partial \mathbb{D}$ which does not admit a continuous extension in the Sobolev class $\mathscr{W}^{1,1}(\mathbb{X},\mathbb{C})$. The domain $\mathbb{X}$ will be defined as the following ``spiral'' domain. Let $R_k$, $k = 1,2,3,\ldots$, be a set of disjoint rectangles in the plane such that their bottom side lies on the $x$-axis. Each rectangle has width $w_k$ so that $\mathbb{S}um_{k=1}^{\infty} w_k < \infty$ and the rectangles are sufficiently close to each other so that the collection stays in a bounded set. The heights $h_k$ satisfy $\lim_{k \to \infty} h_k = 0$ and $\mathbb{S}um_{k=1}^{\infty} h_k = \infty$. We now join these rectangles into a spiral domain as in Figure \ref{figSpiral}, and add a small portion of boundary to the bottom end of $R_1$. The exact way these rectangles are joined is not significant, but it is clear that it may be done in such a way as to produce a nonrectifiable Jordan domain $\mathbb{X}$ for any sequence of rectangles $R_k$ as described above. \begin{figure} \caption{The rectangles $R_k$ joined into the spiral domain $\mathbb{X} \label{figSpiral} \end{figure} Let us now define the boundary homeomorphism $\varphi$. The map $\varphi$ shall map the ``endpoint" (i.e. the point on the $x$-axis to which the rectangles $R_k$ converge) of the spiral domain $\mathbb{X}$ to the point $1 \in \partial \mathbb{D}$. Furthermore, we choose disjoint arcs $A_k^+$ on the unit circle so that the endpoints of $A_k^+$ are given by $e^{i\alpha_k}$ and $e^{i\beta_k}$ with \[\pi/2 > \alpha_1 > \beta_1 > \alpha_2 > \beta_2 > \cdots\] and $\lim_{k\to\infty} \alpha_k = 0$. We mirror the arcs $A_k^+$ over the $x$-axis to produce another set of arcs $A_k^-$. The arcs are chosen in such a way that the minimal distance between $A_k^+$ and $A_k^-$ is greater than a given sequence of numbers $d_k$ with $\lim_{k\to\infty} d_k = 0$. 
It is clear that for any such sequence we may make a choice of arcs as described here. We now define $\varphi$ to map the left side of the rectangle $R_k$ to the arc $A_k^+$, and the right side to $A_k^-$. On the rest of the boundary $\partial \mathbb{X}$ we may define the map $\varphi$ in an arbitrary way to produce a homeomorphism $\varphi : \partial \mathbb{X} \to \partial \mathbb{D}$. Let now $H$ be a continuous $\mathscr{W}^{1,1}$-extension of $\varphi$. Let $I_k$ denote any horizontal line segment with endpoints on the vertical sides of $R_k$. Then by the above construction, $H$ must map the segment $I_k$ to a curve of length at least $d_k$, as this is the minimal distance between $A_k^+$ and $A_k^-$. Hence we find that \[\int_{R_k} |DH| dz \geqslantqslant \int_0^{h_k} d_k dz = h_k d_k.\] Summing up, we obtain the estimate \[\int_\mathbb{X} |DH| dz \geqslantqslant \mathbb{S}um_{k=1}^\infty h_k d_k.\] We may now choose, for example, $h_k = 1/k$ and $d_k = 1/\log (1+k)$ to make the above sum diverge, showing that $H$ cannot belong to $\mathscr{W}^{1,1}(\mathbb{X}, \mathbb{C})$. This finishes the proof.\\\\ \textbf{Example (2).} Let $1 < p < 2$. Here we construct a Jordan domain $\mathbb{X}$ whose boundary has $(p-1)$-hyperbolic growth and a boundary map $\varphi:\partial \mathbb{X} \to \partial \mathbb{D}$ which does not admit a continuous extension in the Sobolev class $\mathscr{W}^{1,p}(\mathbb{X},\mathbb{C})$. In fact, this domain may be chosen as the domain $\mathbb{X}_s$ defined after Definition \ref{cuspCondition} for $s = p-1$. The construction of the boundary map $\varphi$ is as follows. We set $\varphi(0) = 1$. Furthermore, we choose two sequences of points $p_k^{+}$ and $p_k^{-}$ belonging to the graph $\{(x,|x|^s) : -1 \leqslantqslant x \leqslantqslant 1\}$ as follows. The points $p_k^{+}$ all have positive $x$-coordinates, their $y$-coordinates are decreasing in $k$ with limit zero and the difference between the $y$-coordinates of $p_{k-1}^{+}$ and $p_k^+$ is comparable to a number $\epsilon_k$, for which \[\mathbb{S}um_{k=1}^{\infty} \epsilon_k < \infty.\] In fact, for any sequence of numbers $\epsilon_k$ satisfying the above conditions one may choose a corresponding sequence $p_k^+$. We then let $p_k^-$ be the reflection of $p_k^+$ along the $y$-axis. Similarly, we choose points $a_k^+$ on the unit circle, so that $a_k^+ = e^{i \theta_k}$ for a sequence of angles $\theta_k > 0$ decreasing to zero. Letting $a_k^-$ be the reflection of $a_k^+$ along the $x$-axis, we choose the sequence in such a way that the line segment between $a_k^+$ and $a_k^-$ has length $d_k$ for some decreasing sequence $d_k$ with $\lim_{k\to \infty} d_k = 0$. Again, any such sequence $d_k$ gives rise to a choice of points $a_k$. Let $\Gamma_k^{+}$ denote the part of the boundary of $\mathbb{X}_s$ between $p_{k-1}^{+}$ and $p_k^{+}$. We define the map $\varphi$ to map $\Gamma_k^{-}$ to the arc of the unit circle between $a_{k-1}^{-}$ and $a_{k}^{-}$ with constant speed. We define $\Gamma_k^-$ and $\varphi \vert_{\Gamma_k^-}$ similarly. \begin{figure} \caption{The portions of height $\epsilon_k$ get mapped onto slices with side length $d_k$.} \end{figure} Let now $H$ denote any continuous $\mathscr{W}^{1,p}$-extension of $\varphi$ to $\mathbb{X}$. By the above definition, any horizontal line segment with endpoints on $\Gamma_k^+$ and $\Gamma_k^-$ is mapped into a curve of length at least $d_k$ under $H$. 
Such a line segment is of length at most the distance of $p_{k-1}^+$ to $p_{k-1}^-$, a distance which is comparable to $\leqslantft(\mathbb{S}um_{j = k}^{\infty} \epsilon_j \right)^{1/s}$. If $S_k$ denotes the union of all the horizontal line segments between $\Gamma_k^+$ and $\Gamma_k^-$, this gives the estimate \[\int_{S_k} |DH|^p dz \geqslantqslant \frac{\leqslantft(\int_{S_k} |DH| dz\right)^p}{|S_k|^{p-1}} \geqslantqslant \frac{c \leqslantft(\int_0^{\epsilon_k} d_k dy\right)^p}{\epsilon_k^{p-1} \leqslantft(\mathbb{S}um_{j = k}^{\infty} \epsilon_j \right)^{(p-1)/s} } = \frac{c d_k^p \epsilon_k}{\mathbb{S}um_{j = k}^{\infty} \epsilon_j}\] Let now, for example, $\epsilon_k = 1/k^2$. Then $\mathbb{S}um_{j = k}^{\infty} \epsilon_j$ is comparable to $1/k$, so by summing up we obtain the estimate \begin{equation}\label{sumEstim1}\int_{\bigcup_k S_k} |DH|^p dz \geqslantqslant c\mathbb{S}um_{k=1}^\infty \frac{d_k^p}{k}.\end{equation} Choosing a suitably slowly converging sequence $d_k$ such as $d_k = (\log (1+k))^{-1/p}$, we find that the right hand side of \eqref{sumEstim1} diverges. It follows that $H$ cannot lie in the Sobolev space $\mathscr{W}^{1,p}(\mathbb{X}_s, \mathbb{C})$, which completes our proof. \mathbb{S}ection{The case $p = 2$} In this section we address Theorem \ref{thm:dirichlet} as well as Examples \ref{ex:nodirichglet} and \ref{ex:yi}.\\\\ \textbf{Example \ref{ex:nodirichglet}.} For this example, let first $\Phi_\tau$ for any $\tau \in (0,1]$ denote the conformal map \[\Phi_\tau(z) = \log^{-\tau}\leqslantft(\frac{1-z}{3}\right)\] defined on the unit disk and having target $\mathbb{Y}_\tau := \Phi_\tau(\mathbb{D})$. In fact, the domain $\mathbb{Y}_\tau$ is a domain with smooth boundary apart from one point at which it has an outer cusp of degree $\tau/(1+\tau)$ (i.e. it is bilipschitz-equivalent with the domain $\mathbb{X}_{\tau/(1+\tau)}$ as pictured in Figure \ref{cuspFig}). Since $\Phi_\tau$ is conformal and maps the unit disk into a set of finite measure, it lies in the Sobolev space $\mathscr{W}^{1,2}(\mathbb{D}, \mathbb{C})$. However, it does not admit a homeomorphic extension to the whole plane in the Sobolev class $\mathscr{W}^{1,2}_{loc}(\mathbb{C})$. The reason for this is that there is a modulus of continuity estimate for any homeomorphism in the Sobolev class $\mathscr{W}^{1,2}_{loc}(\mathbb{C})$. Indeed, let $\omega (t)$ denote the the modulus of continuity of $g \colon \mathbb{C} \to \mathbb{C}$; that is, \[\omega (t) = \underset{B(z,t)}{\osc} g = \mathbb{S}up \{ \abs{g(x_1)- g(x_2)} \colon x_1, x_2 \in B(z,t)\} \, . \] If $g$ is a homeomorphism in $\mathscr{W}_{\loc}^{1,2} (\mathbb{C}, \mathbb{C})$, then \begin{equation}\label{eq456}\int_0^r \frac{\omega(t)^2}{t} dt < \infty.\end{equation} \begin{proof}[Proof of~\eqref{eq456}] Since $g$ is a homeomorphism we have \[\underset{B(z,t)}{\osc} g \leqslant\underset{\partial B(z,t)}{\osc} g \, . \] According to Sobolev's inequality on spheres for almost every $t>0$ we obtain \[ \underset{\partial B(z,t)}{\osc} g \leqslant C \int_{\partial B(z,t)} \abs{Dg} \, . \] These together with H\"older's inequality imply \[\omega (t) = \underset{B(z,t)}{\osc} \leqslant \underset{\partial B(z,t)}{\osc} g\leqslant C\leqslantft(t \, \int_{\partial B(z,t)} \abs{Dg}^2 \right)^\frac{1}{2} \] and, therefore, for almost every $t>0$ we have \[\frac{\omega (t)^2}{t} \leqslant C \int_{\partial B(z,t)} \abs{Dg}^2 \, . \] Integrating this from $0$ to $r>0$, the claim~\eqref{eq456} follows. 
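Spelled out, this last integration reads
\[
\int_0^r \frac{\omega(t)^2}{t}\, dt \leqslant C \int_0^r \int_{\partial B(z,t)} \abs{Dg}^2 \, dt = C \int_{B(z,r)} \abs{Dg}^2 < \infty\, ,
\]
where the middle equality is just polar coordinates around $z$, and the right-hand side is finite because $g \in \mathscr{W}_{\loc}^{1,2}(\mathbb{C}, \mathbb{C})$.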
\end{proof} Now, since the map $\Phi_\tau$ for $\tau \leqslantqslant 1$ does not satisfy the modulus of continuity estimate~\eqref{eq456} at the boundary point $z = 1$, it follows that it is not possible to extend $\Phi_\tau$ even locally as a $\mathscr{W}^{1,2}$-homeomorphism around the point $z = 1$. To address the exact claim of Example \ref{ex:nodirichglet}, we now define an embedding $\varphi : \partial \mathbb{D} \to \mathbb{C}$ as follows. Fixing $\tau \in (0,1]$, in the set $\{z \in \partial \mathbb{D} \colon \re(z) \geqslantqslant 0\}$ we let $\varphi(z) = \Phi_\tau(z)$. We also map the complementary set $\{z \in \partial \mathbb{D} \colon \re(z) < 0\}$ smoothly into the complement of $\bar{\mathbb{Y}_\tau}$, and in such a way that $\varphi(\partial \mathbb{D})$ becomes the boundary of a Jordan domain $\tilde{\mathbb{Y}}$. See Figure \ref{phiFig} for an illustration. \begin{figure} \caption{The Jordan domains $\mathbb{Y} \label{phiFig} \end{figure} It is now easy to see that the map $\varphi$ satisfies the Douglas condition \eqref{eq:douglas}. Indeed, since the map $\Phi_\tau$ is in the Sobolev space $\mathscr{W}^{1,2}(\mathbb{D}, \mathbb{C})$ its restriction to the boundary must necessarily satisfy the Douglas condition. Since the map $\varphi$ aligns with this boundary map in a neighborhood of the point $z=1$, verifying the finiteness of the integral in \eqref{eq:douglas} poses no difficulty in this neighborhood. On the rest of the boundary of $\partial\mathbb{D}$ we may choose $\varphi$ to be locally Lipschitz, which shows that \eqref{eq:douglas} is necessarily satisfied for $\varphi$. Hence we have found a map from $\partial \mathbb{D}$ into the boundary of the chord-arc domain $\tilde{\mathbb{Y}}$ which admits a $\mathscr{W}^{1,2}$-extension to $\mathbb{D}$ but not a homeomorphic one.\\\\ \textbf{Example \ref{ex:yi}.} In \cite{Zh}, Zhang constructed an example of a Jordan domain, which we shall denote by $\mathbb{Y}$, so that the conformal map $g : \mathbb{D} \to \mathbb{Y}$ does not admit a $\mathscr{W}^{1,1}$-homeomorphic extension to the whole plane. We shall not repeat this construction here, but will instead briefly show how it relates to our questions. The domain $\mathbb{Y}$ is constructed in such a way that there is a boundary arc $\Gamma \mathbb{S}ubset \partial \mathbb{Y}$ over which one cannot extend the conformal map $g$ even locally as a $\mathscr{W}^{1,1}$-homeomorphism. The complementary part of the boundary $\mathbb{Y} \mathbb{S}etminus \Gamma$ is piecewise linear. Hence we may employ the same argument as in the previous example. We choose a Jordan domain $\tilde{\mathbb{Y}}$ in the complement of $\mathbb{Y}$ whose boundary consists of the arc $\Gamma$ and, say, a piecewise linear curve. We then define a boundary map $\varphi : \partial \mathbb{D} \to \partial \tilde{\mathbb{Y}}$ so that it agrees with $g$ in a neighborhood of the set $g^{-1} (\Gamma)$ and is locally Lipschitz everywhere else. With the same argument as before, this boundary map must satisfy the Douglas condition \eqref{eq:douglas}. Hence this boundary map admits a $\mathscr{W}^{1,2}$-extension to $\mathbb{D}$ but not even a $\mathscr{W}^{1,1}$-homeomorphic extension. Naturally the boundary of the domain $\tilde{\mathbb{Y}}$ is quite ill-behaved, in particular nonrectifiable (though the Hausdorff dimension is still one). 
\begin{proof}[Proof of Theorem~\ref{thm:dirichlet}] Let $\gamma : \partial \mathbb{D} \to \partial \mathbb{Y}$ denote a constant speed parametrization of the rectifiable curve $\partial \mathbb{Y}$. Let $G: \mathbb{C} \to \mathbb{C}$ be the homeomorphic Lipschitz extension of $\gamma$ given by Theorem \ref{thm:lip}. Denoting $f := \varphi^{-1} \circ \gamma$, we find by change of variables that \begin{align*}\int_{\partial \mathbb Y} \int_{\partial \mathbb Y} \leqslantft|\log \abs{\varphi^{-1} (\mathfrak{X}i) - \varphi^{-1} (\eta) } \right| \abs{\textnormal d \mathfrak{X}i } \abs{\textnormal d \eta } = \int_{\partial \mathbb{D}} \int_{\partial \mathbb{D}} \leqslantft|\log \abs{f (z) - f (\omega) } \right| \abs{\textnormal d z } \abs{\textnormal d \omega } . \end{align*} Now the result of Astala, Iwaniec, Martin and Onninen~\cite{AIMO} shows that the inverse map $f^{-1} : \partial \mathbb{X} \to \partial \mathbb{D}$ satisfies the Douglas condition \eqref{eq:douglas}. Thus $f^{-1}$ extends to a harmonic $\mathscr{W}^{1,2}$-homeomorphism $H_1$ to $\bar{\mathbb{D}}$ by the RKC-Theorem (Lemma~\ref{lem:RKC}). Letting $h := G \circ H_1$, we find that $h$ lies in the space $\mathscr{W}^{1,2}(\mathbb{X})$ since $G$ is Lipschitz. Moreover, the boundary values of $h$ are equal to $\gamma \circ (\varphi^{-1} \circ \gamma)^{-1} = \varphi$, giving us a homeomorphic extension of $\varphi$ in the Sobolev space $\mathscr{W}^{1,2}(\mathbb{X})$. To further extend $\varphi$ into the complement of $\mathbb{X}$, assume first without loss of generality that $0 \in \mathbb{X}$ and $0 \in \mathbb{Y}$. We now let $\tau(z) = 1/\bar{z}$ denote the inversion map, which is a diffeomorphism in $\mathbb{C} \mathbb{S}etminus \{0\}$. The map $\psi := \tau \circ \varphi \circ \tau$ is then a homeomorphism from $\partial \tau(\mathbb{X})$ to $\partial \tau(\mathbb{Y})$, and must also satisfy the condition \eqref{eq:invdouglas} due to the bounds on $\tau$. The earlier part of the proof shows that we may extend $\psi$ as a $\mathscr{W}^{1,2}$-homeomorphism $\tilde{h}$ from the Jordan domain bounded by $\partial \tau(\mathbb{X})$ to the Jordan domain bounded by $\partial \tau(\mathbb{Y})$. Hence the map $\tau \circ \tilde{h} \circ \tau$ is a $\mathscr{W}^{1,2}_{loc}$-homeomorphism from the complement of $\mathbb{X}$ to the complement of $\mathbb{Y}$ and equal to $\varphi$ on the boundary. This concludes the proof. \end{proof} \mathbb{S}ection{The multiply connected case, Proof of Theorem~\ref{thm:multiply}}\label{anyplansguysz} In this section we consider multiply connected Jordan domains $\mathbb{X}$ and $\mathbb{Y}$ of the same topological type. Any such domains can be equivalently obtained by removing from a simply connected Jordan domain the same number, say $0\leqslant k < \infty$, of closed disjoint topological disks. If $k=1$, the obtained doubly connected domain is conformally equivalent with a circular annulus $\mathbb A = \{z \in \mathbb C \colon r < \abs{z} <1\}$ with some $0<r<1$. In fact, if $k \geqslant1$ every $(k+1)$-connected Jordan domain can be mapped by a conformal mapping onto a {\it circular domain}, see~\cite{Gob}. In particular we may consider a $(k+1)$-connected circuilar domain consisting of the domain bounded by the boundary of the unit disk $\mathbb D $ and $k$ other circles (including points) in the interior of $\mathbb D$. The conformal mappings between multiply connected Jordan domains extends continuously up to the boundaries. 
The idea of the proof of Theorem~\ref{thm:multiply} is simply to split the multiply connected domains $\mathbb{X}$ and $\mathbb{Y}$ into simply connected parts and apply Theorem \ref{thm:main} in each of these parts. Let us consider first the case where $\mathbb{X}$ and $\mathbb{Y}$ are doubly connected. \mathbb{S}ubsection{Doubly connected $\mathbb{X}$ and $\mathbb{Y}$} $\,$ \emph{Case 1.} $p=1$. Suppose that the boundary of $\mathbb{X}$ is rectifiable. We split the domain $\mathbb{X}$ into two rectifiable simply connected domains as follows. Take a line $\ell$ passing through any point in the bounded component of $\mathbb{C} \mathbb{S}etminus \mathbb{X}$. Then necessarily there exist two open line segments $I_1$ and $I_2$ on $\ell$ such that these segments are contained in $\mathbb{X}$ and their endpoints lie on different components of the boundary of $\mathbb{X}$. These segments split the domain $\mathbb{X}$ into two rectifiable Jordan domains $\mathbb{X}_1$ and $\mathbb{X}_2$. For $k=1,2$, let $p_k$ denote the endpoint of $I_k$ lying on the inner boundary of $\mathbb{X}$ and $P_k$ the endpoint on the outer boundary. We let $q_k = \varphi(p_k)$ and $Q_k = \varphi(P_k)$. We would now simply like to connect $q_k$ with $Q_k$ by a rectifiable curve $\gamma_k$ inside of $\mathbb{Y}$ such that $\gamma_1$ and $\gamma_2$ do not intersect. It is quite obvious this can be done but we provide a proof regardless. Let $\mathbb{Y}_+$ denote the Jordan domain bounded by the outer boundary of $\mathbb{Y}$. Take a conformal map $g_+ : \mathbb{D} \to \mathbb{Y}_+$. Then $g_+'$ is in the Hardy space $H^1$ since $\partial \mathbb{Y}_1$ is rectifiable, and we find by Theorem 3.13 in~\cite{Duhp} that $g_+$ maps the segment $[0,g_+^{-1}(Q_k)]$ into a rectifiable curve in $\mathbb{Y}_+$. Let $\gamma_k^+$ denote the image of the segment $[(1-\epsilon)g_+^{-1}(Q_k),g_+^{-1}(Q_k)]$ under $g_+$ for a sufficiently small $\epsilon$. Hence we have a rectifiable curve $\gamma_k^+$ connecting $Q_k$ to an interior point $Q_k^+$ of $\mathbb{Y}$ if $\epsilon$ is small enough. With a similar argument, possibly adding a M\"obius transformation to the argument to invert the order of the boundaries, one finds a rectifiable curve $\gamma_k^-$ connecting $q_k$ to an interior point $q_k^-$. For small enough $\epsilon$ the four curves constructed here do not intersect. If $\Gamma$ denotes the union of these four curves, we may now use the path-connectivity of the domain $\mathbb{Y} \mathbb{S}etminus \Gamma$ to join the points $Q_1^+$ and $q_1^-$ with a smooth simple curve inside $\mathbb{Y}$ that does not intersect $\Gamma$. By adding the curves $\gamma_1^+$ and $\gamma_1^-$ one obtains a rectifiable simple curve $\gamma_1$ connecting $Q_1$ and $q_1$. Using the fact that $\mathbb{Y} \mathbb{S}etminus \Gamma$ is doubly connected, we may now join $Q_2^+$ and $q_2^-$ with a smooth curve that does not intersect $\gamma_1$ nor $\Gamma$. This yields a rectifiable simple curve $\gamma_2$ connecting $Q_2$ and $q_2$. This proves the existence of the curves $\gamma_k$ with the desired properties. These curves split $\mathbb{Y}$ into two simply connected Jordan domains $\mathbb{Y}_1$ and $\mathbb{Y}_2$. We may now extend the homeomorphism $\varphi$ to map the boundary of $\mathbb{X}_k$ to the boundary of $\mathbb{Y}_k$ homeomorphically. The exact parametrization which maps the segments $I_k$ to the curves $\gamma_k$ does not matter. 
The rest of the claim follows directly from the first part of Theorem \ref{thm:main}, giving us a homeomorphic extension of $\varphi$ in the Sobolev class $\mathscr{W}^{1,1}(\mathbb{X}, \mathbb C)$, as claimed. \\\\ \emph{Case 2.} $1<p<2$. Suppose that $\mathbb{X}$ has $s$-hyperbolic growth. Then we take an annulus $\mathbb{A}$ centered at the origin such that there exists a conformal map $g : \mathbb{A} \to \mathbb{X}$. By a result of Gehring and Osgood~\cite{GO}, the quasihyperbolic metrics $h_{\mathbb{X}}$ and $h_{\mathbb{A}}$ are comparable via the conformal map $g$. This shows that for any fixed $x_0 \in \mathbb{A}$ and all $x \in \mathbb{A}$ we have \begin{equation}\label{hyperEq1}h_{\mathbb{A}}(x_0,x) \leqslant C h_{\mathbb{X}}(g(x_0),g(x)) \leqslant \frac{C}{\dist(g(x),\partial\mathbb{X})^{1-s}}.\end{equation} Let now $\mathbb{A}_+$ denote the simply connected domain obtained by intersecting $\mathbb{A}$ and the upper half plane. We claim that the domain $\mathbb{X}_+ := g(\mathbb{A}_+)$ has $s$-hyperbolic growth as well. To prove this claim, fix $x_0 \in \mathbb{A}_+$ and take an arbitrary $x \in \mathbb{A}_+$. Let $d = \dist(x,\partial \mathbb{A}_+)$. We aim to establish the inequality \begin{equation}\label{hyperEq2}h_{\mathbb{A}_+}(x_0,x) \leqslant \frac{C}{\dist(g(x),\partial\mathbb{X}_+)^{1-s}}.\end{equation} Note that $\mathbb{A}_+$ is bi-Lipschitz equivalent to the unit disk, implying that $h_{\mathbb{A}_+}(x_0,x)$ is comparable to $\log(1/d)$. The boundary of $\mathbb{A}_+$ contains two line segments on the real line; let us denote them by $I_1$ and $I_2$. Note that we have the estimate \begin{equation}\label{distgEstim1}\dist(g(x),\partial \mathbb{X}_+) \leqslant \dist(g(x),\partial \mathbb{X}).\end{equation} If $d = \dist(x,\partial \mathbb{A})$, meaning that the closest point to $x$ on $\partial \mathbb{A}_+$ is not on $I_1$ or $I_2$, then the hyperbolic distances $h_{\mathbb{A}_+}(x_0,x)$ and $h_{\mathbb{A}}(x_0,x)$ are comparable and, by the inequalities \eqref{hyperEq1} and \eqref{distgEstim1}, the inequality \eqref{hyperEq2} holds. It is hence enough to prove \eqref{hyperEq2} in the case when $d = \dist(x,I_1 \cup I_2)$. We may also assume that $d$ is small. Due to the geometry of the half-annulus $\mathbb{A}_+$, the vertical line segment $L_x$ between $x$ and its projection onto the real line (which lies on either $I_1$ or $I_2$) has length $d$. Letting $D$ denote the distance of $x$ to $\partial\mathbb{A}_+ \setminus (I_1 \cup I_2)$, we have that $D \geqslant d$. We may now reiterate the proof of \eqref{hyperbolicEstim3} to find that \[|g'(z)| \leqslant \frac{C}{\dist(z,\partial \mathbb{A}) \log^{\frac{1}{1-s}}(\dist(z,\partial \mathbb{A})^{-1})}\] for $z \in \mathbb{A}$. We should mention that the simple connectedness assumption used in the proof of \eqref{hyperbolicEstim3} may be circumvented by using the equivalence of the quasihyperbolic metrics under $g$ instead of passing to the hyperbolic metric. Hence \[\dist(g(x),\partial \mathbb{X}_+) \leqslant \int_{L_x} |g'(z)| |dz| \leqslant \frac{C d}{D \log^{\frac{1}{1-s}}(1/D)}.\] From this we find that \eqref{hyperEq2} is equivalent to \[\log(1/d) \leqslant C\frac{D^{1-s} \log(1/D)}{d^{1-s}},\] which is true since $D \geqslant d$. Hence \eqref{hyperEq2} holds, and this implies that $\mathbb{X}_+$ has $s$-hyperbolic growth by reversing the argument that gives \eqref{hyperEq1}. We define $\mathbb{X}_-$ similarly. 
Hence we have split $\mathbb{X}$ into two simply connected domains with $s$-hyperbolic growth. On the image side, we may split $\mathbb{Y}$ into two simply connected domains with rectifiable boundary as in Case 1. Extending $\varphi$ in an arbitrary homeomorphic way between the boundaries of these domains and applying part 2 of Theorem \ref{thm:main} gives a homeomorphic extension of $\varphi$ in the Sobolev class $\mathscr{W}^{1,p}(\mathbb{X}, \mathbb C)$ whenever $s > p-1$. \subsection{The general case} $\,$ \emph{Case 3.} $p=1$. Assume that $\mathbb{X}$ and $\mathbb{Y}$ are $\ell$-connected Jordan domains with rectifiable boundaries. By induction, we may assume that the result of Theorem \ref{thm:multiply} holds for $(\ell-1)$-connected Jordan domains. Hence we are only required to split $\mathbb{X}$ and $\mathbb{Y}$ into two domains with rectifiable boundary, one which is doubly connected and another which is $(\ell-1)$-connected. We hence describe how to `isolate' a given boundary component $X_0$ from an $\ell$-connected Jordan domain $\mathbb{X}$. Let $X_{outer}$ denote the outer boundary component of $\mathbb{X}$. Take a small neighborhood of $X_0$ inside $\mathbb{X}$. Let $\gamma_0$ be a piecewise linear Jordan curve contained in this neighborhood and separating $X_0$ from the rest of the boundary components of $\mathbb{X}$. Let also $\gamma_1$ be a piecewise linear Jordan curve inside $\mathbb{X}$ and in a small enough neighborhood of $X_{outer}$ so that all of the other boundary components of $\mathbb{X}$ are contained inside $\gamma_1$. Take $y_0$ and $y_1$ on $\gamma_0$ and $\gamma_1$ respectively, and connect them with a piecewise linear curve $\alpha_y$ not intersecting any boundary components of $\mathbb{X}$. Choose $z_0$ and $z_1$ close to $y_0$ and $y_1$ respectively so that we may connect $z_0$ and $z_1$ by a piecewise linear curve $\alpha_z$ arbitrarily close to $\alpha_y$ but neither intersecting it nor any boundary components of $\mathbb{X}$. Since the region bounded by $X_{outer}$ and $\gamma_1$ is doubly connected, by the construction in Case 1 we may connect $y_1$ and $z_1$ with any two given points $y_2$ and $z_2$ on the boundary $X_{outer}$ via non-intersecting rectifiable curves $\beta_y$ and $\beta_z$ lying inside this region. Let now $\Gamma$ denote the union of the curves $\beta_y$, $\beta_z$, $\alpha_y$, $\alpha_z$, and the curve $\gamma_0'$ obtained by taking the curve $\gamma_0$ and removing the part between $y_0$ and $z_0$. By construction $\Gamma$ contains two arbitrary points on $X_{outer}$ and separates the domain $\mathbb{X}$ into a doubly connected domain with inner boundary component $X_0$ and an $(\ell-1)$-connected Jordan domain. Since $\Gamma$ is rectifiable, both of these domains are also rectifiable. Applying the same construction for $\mathbb{Y}$, we may separate the boundary component $\varphi(X_0)$ of $\mathbb{Y}$ by a rectifiable curve $\Gamma'$. Since the boundary points $y_2$ and $z_2$ above were arbitrary, we may assume that $\Gamma'$ intersects the outer boundary of $\mathbb{Y}$ at the points $\varphi(y_2)$ and $\varphi(z_2)$. Extending $\varphi$ to a homeomorphism from $\Gamma$ onto $\Gamma'$ and applying the induction assumptions now gives a homeomorphic extension in the class $\mathscr{W}^{1,1}(\mathbb{X} , \mathbb C)$. \\\\ \emph{Case 4.} $1<p<2$. We still have to deal with the case where $\mathbb{X}$ has $s$-hyperbolic growth and is $\ell$-connected. 
By the same arguments as in the previous case, it will be enough to split $\mathbb{X}$ into a doubly connected domain and an $(\ell-1)$-connected domain, both with $s$-hyperbolic growth. Since $\mathbb{X}$ is $\ell$-connected, there exists a domain $\Omega$ such that every boundary component of $\Omega$ is a circle and there is a conformal map $g : \Omega \to \mathbb{X}$. Let $\Gamma \subset \Omega$ be a piecewise linear curve separating one of the inner boundary components of $\partial\Omega$ from the rest of the boundary. Hence $\Omega$ splits into a doubly connected set $\Omega_1$ and an $(\ell-1)$-connected set $\Omega_2$. We claim that the domains $\mathbb{X}_1 = g(\Omega_1)$ and $\mathbb{X}_2 = g(\Omega_2)$ have $s$-hyperbolic growth. The proof of this claim is nearly identical to the arguments in Case 2, so we will summarize it briefly. For $\mathbb{X}_2$, we aim to establish the inequality \begin{equation}\label{hyperEq3} h_{\Omega_2}(x_0,x) \leqslant \frac{C}{\dist(g(x),\partial\mathbb{X}_2)^{1-s}} \end{equation} for fixed $x_0 \in \Omega_2$ and $x \in \Omega_2$. For this inequality, it is only essential to consider $x$ close to $\partial \Omega_2$. If $x$ is closer to the boundary of the original set $\partial \Omega$ than to $\Gamma$, then the hyperbolic distance of $x_0$ and $x$ in $\Omega_2$ is comparable to the distance inside the larger set $\Omega$. Then the $s$-hyperbolic growth of $\Omega$ implies \eqref{hyperEq3} as in Case 2. If $x$ is closer to $\Gamma$ but a fixed distance away from the boundary of $\Omega$, then the smoothness of $g$ in compact subsets of $\Omega$ implies the result. If $x$ is closest to a line segment in $\Gamma$ which has its other endpoint on $\partial \Omega$, then we may employ a similar estimate as in Case 2, using the bound for $|g'(z)|$ in terms of $\dist(z,\partial \Omega)$, to conclude that \eqref{hyperEq3} also holds here. This implies that $\mathbb{X}_2$ satisfies \eqref{hyperEq3}, and hence it has $s$-hyperbolic growth. The argument for $\mathbb{X}_1$ is the same. After splitting $\mathbb{X}$ into two domains of smaller connectivity and $s$-hyperbolic growth, we split the target $\mathbb{Y}$ accordingly into rectifiable parts using the argument from Case 3. Applying induction on $\ell$ now proves the result. This finishes the proof of Theorem \ref{thm:multiply}. \section{Monotone Sobolev minimizers}\label{sec:mono} The classical {\it harmonic mapping problem} deals with the question of whether there exists a harmonic homeomorphism between two given domains. Of course, when the domains are Jordan such a mapping problem is always solvable. Indeed, according to the Riemann Mapping Theorem there is a conformal mapping $h \colon \overline{\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$. Finding a harmonic homeomorphism which coincides with a given boundary homeomorphism $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ is a more subtle question. If $\mathbb{Y}$ is convex, then there always exists a harmonic homeomorphism $h \colon \overline {\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ with $h (x)= \varphi (x)$ on $\partial \mathbb{X}$ by Lemma~\ref{lem:RKC}. For a non-convex target $\mathbb{Y}$, however, there always exists at least one boundary homeomorphism whose harmonic extension takes points in $\mathbb{X}$ beyond $\overline{\mathbb{Y}}$. 
To find a deformation $h \colon \overline{\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ which resembles harmonic homeomorphisms, Iwaniec and Onninen~\cite{IOmhh} applied the direct method in the calculus of variations and considered minimizing sequences in $\mathscr H_\varphi^{1,2} (\overline{\mathbb{X}} , \overline{\mathbb{Y}})$. They called such minimizers {\it monotone Hopf-harmonics} and proved an existence and uniqueness result in the case when $\mathbb{Y}$ is a Lipschitz domain and the boundary data $\varphi$ satisfies the Douglas condition. Note that by the Riemann Mapping Theorem one may always assume that $\mathbb{X}= \mathbb D$. Theorem~\ref{thm:dirichlet} opens up such studies beyond Lipschitz targets. Indeed, under the assumptions of Theorem~\ref{thm:dirichlet}, the class $\mathscr H_\varphi^{1,2} (\overline{\mathbb D} , \overline{\mathbb{Y}})$ is non-empty. Furthermore, if $h_\circ \in \mathscr H_\varphi^{1,2} (\overline{\mathbb D} , \overline{\mathbb{Y}})$, then $h_\circ$ satisfies the uniform modulus of continuity estimate \[\abs{h_\circ (x_1) - h_\circ (x_2)}^2 \leqslant C \frac{\int_{\mathbb D} \abs{Dh_\circ}^2}{\log \left( \frac{1}{\abs{x_1-x_2}}\right) }\] for $x_1, x_2 \in \mathbb D$ such that $\abs{x_1-x_2} <1$. This follows from taking the global $\mathscr{W}^{1,2}_{loc}$-homeomorphic extension given by Theorem~\ref{thm:dirichlet} and applying a standard local modulus of continuity estimate for $\mathscr{W}^{1,2}$-homeomorphisms, see~\cite[Corollary 7.5.1 p.155]{IMb}. Now, applying the direct method in the calculus of variations allows us to find a minimizing sequence in $\mathscr H_\varphi^{1,2} (\overline{\mathbb D} , \overline{\mathbb{Y}})$ for the Dirichlet energy which converges weakly in $\mathscr{W}^{1,2} (\mathbb D , \mathbb C)$ and uniformly in $\overline{ \mathbb D}$. Being a uniform limit of homeomorphisms, the limit mapping $H \colon \overline{\mathbb D} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}} $ is {\it monotone}. Indeed, the classical approximation theorem of Youngs~\cite{Yo} asserts that a continuous map between compact oriented topological 2-manifolds (surfaces) is monotone if and only if it is a uniform limit of homeomorphisms. Monotonicity, a concept due to Morrey~\cite{Mor}, simply means that for a continuous $H \colon \overline{\mathbb{X}} \to \overline{\mathbb{Y}}$ the preimage $H^{-1} (y_\circ)$ of a point $y_\circ \in \overline{\mathbb{Y}}$ is a continuum in $\overline{\mathbb{X}}$. We have hence just given a proof of the following result. \begin{theorem} Let $\mathbb{X}$ and $\mathbb{Y}$ be Jordan domains and assume that $\partial \mathbb{Y}$ is rectifiable. If $\varphi \colon \partial \mathbb{X} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \partial \mathbb{Y}$ satisfies~\eqref{eq:invdouglas}, then there exists a monotone Sobolev mapping $H \colon \overline{\mathbb{X}} \xrightarrow[]{{}_{\!\!\textnormal{onto\,\,}\!\!}} \overline{\mathbb{Y}}$ in $\mathscr{W}^{1,2} (\mathbb{X}, \mathbb C)$ such that $H$ coincides with $\varphi$ on $\partial \mathbb{X}$ and \[ \int_{\mathbb{X}} \abs{DH(x)}^2 \, \textnormal d x = \inf_{h \in \mathscr H_\varphi^{1,2} (\overline{\mathbb{X}} , \overline{\mathbb{Y}})} \int_\mathbb{X} \abs{Dh(x)}^2 \, \textnormal d x \, . \] \end{theorem} \end{document}
\begin{document} \title[QEC: Noise-adapted Techniques and Applications]{Quantum Error Correction: Noise-adapted Techniques and Applications} \author*[1]{\fnm{Akshaya} \sur{Jayashankar}}\email{[email protected]} \author[2]{\fnm{Prabha} \sur{Mandayam}}\email{[email protected]} \affil*[1]{\orgdiv{Department of Physics}, \orgname{Indian Institute of Technology Madras}, \orgaddress{\city{Chennai}, \postcode{600036}, \country{India}}} \affil[2]{\orgdiv{Department of Physics}, \orgname{Indian Institute of Technology Madras}, \orgaddress{\city{Chennai}, \postcode{600036}, \country{India}}} \date{\today} \abstract{ The quantum computing devices of today have tens to hundreds of qubits that are highly susceptible to noise due to unwanted interactions with their environment. The theory of quantum error correction provides a scheme by which the effects of such noise on quantum states can be mitigated, paving the way for realising robust, scalable quantum computers. In this article we survey the current landscape of quantum error correcting (QEC) codes, focusing on recent theoretical advances in the domain of noise-adapted QEC, and highlighting some key open questions. We also discuss the interesting connections that have emerged between such adaptive QEC techniques and fundamental physics, especially in the areas of many-body physics and cosmology. We conclude with a brief review of the theory of quantum fault tolerance which gives a quantitative estimate of the physical noise threshold below which error-resilient quantum computation is possible.} \keywords{noise-adapted QEC, Petz map, amplitude damping, fault tolerance} \maketitle \section{Introduction} Quantum computing technologies have advanced by leaps and bounds over the last decade. We are already witness to the first generation of quantum processors successfully demonstrating that the theoretical promise of quantum computational speedups can be realised in practice~\cite{qSupremacy, boson_sampling2021}. There are however a few key challenges that must be overcome in order to scale from the current generation of noisy intermediate-scale quantum (NISQ) devices~\cite{preskill2018_quantum} to robust universal quantum computers. Error mitigation and fault tolerance are arguably the biggest of these challenges, from both a theoretical and an experimental standpoint. The theory of quantum error correction (QEC)~\cite{lidar} lays down the basic framework for dealing with noise affecting the quantum states of interest. The early works of Shor~\cite{Shor95, Calderbank_shor96} and Steane~\cite{Steane97} demonstrated how quantum error correcting codes can be constructed for bit-flip and phase-flip noise, by making use of entanglement to circumvent the challenge posed by the no-cloning theorem~\cite{no_cloning82}. Subsequently, a general theory of QEC was developed for arbitrary errors, in terms of algebraic~\cite{knill97} and information-theoretic conditions~\cite{infoqec_96}. Today, the standard approach to quantum error correction works by discretizing the errors affecting the quantum state in terms of the \emph{Pauli basis}~\cite{nielsen}, with the group structure of the Pauli operators naturally leading to the rich mathematical framework of quantum stabilizer codes~\cite{gottesman1997}. We refer to a recent review~\cite{terhal2015} that surveys the major developments in QEC, from the stabilizer codes and CSS codes to the more recently proposed classes of topological codes~\cite{bombin2013} and surface codes~\cite{raussendorf2007}. 
In contrast to the \emph{general-purpose} QEC schemes described above, error correction protocols tailored to specific noise models have been devised, starting with a $4$-qubit code that protects against amplitude-damping noise~\cite{leung}. Such adaptive QEC protocols have been shown to offer the same degree of protection while using fewer resources~\cite{hui_prabha, fletcher_AD}, particularly in the case of non-Pauli noise models. In this article, we focus on this emerging area of \emph{channel-adapted} or \emph{noise-adapted} QEC and survey some of the recent progress in this area. We will first review the early theoretical progress on approximate QEC~\cite{fletcher_rec, beny2010, tyson2010, mandayam2012}, and introduce an important noise-adapted recovery map, namely, the Petz map~\cite{barnum, Petz}. We then discuss various recent approaches to constructing noise-adapted quantum codes~\cite{ak_cartan, qvector, cao2022quantum, reinforcement}. Next we survey some interesting applications of noise-adapted recovery maps, especially in the context of many-body quantum systems. We also briefly touch upon the interesting role that approximate recovery maps are coming to play in the AdS/CFT setting~\cite{holography_qec}. While QEC focuses on dealing with the noise affecting the quantum states assuming that all the gate operations are ideal, the theory of quantum fault tolerance~\cite{preskill1998} lays down the basic framework for dealing with faulty gate operations in quantum computing devices. Fault-tolerant protocols build upon the QEC framework and enable the construction of quantum circuits that are truly noise-resilient, provided the noise is below a certain \emph{threshold} value~\cite{knill2005, aliferis2006}. We devote the final section of our review to discussing the latest developments in the theory of quantum fault tolerance, focusing on fault-tolerant schemes that are based on noise-adapted codes~\cite{ak_ft}. The rest of this review is organized as follows. In Sec.~\ref{sec:perfect_aqec}, we introduce the basic mathematical formalism of perfect and approximate QEC. In Sec.~\ref{sec:noise-adapted} we formally introduce the idea of noise-adapted codes and recovery maps. We then proceed to discuss various analytical and numerical approaches to constructing such noise-adapted recovery maps and noise-adapted quantum codes. In Sec.~\ref{sec:applications}, we survey some interesting applications of adaptive QEC protocols. In Sec.~\ref{sec: FT} we discuss how quantum fault tolerance can be achieved using noise-adapted QEC protocols. We conclude with a brief summary and future outlook in Sec.~\ref{sec:summary}. \section{Preliminaries}\label{sec:perfect_aqec} \begin{figure} \caption{Unitary evolution of a closed system (left) vs non-unitary dynamics of an open quantum system (right)~\cite{thesis}.}\label{fig:quantum_channel} \end{figure} We begin with a brief review of the mathematical description of decoherence in quantum systems, and refer to some of the well-known textbooks~\cite{nielsen, lidar} for further details. Noise in quantum systems is described by the mathematical framework of quantum operations, often referred to as \emph{quantum channels} in the context of quantum computation~\cite{nielsen}. A quantum channel is a completely positive trace-preserving (CPTP) map describing the non-unitary evolution of an open quantum system interacting with its environment, as shown in Fig.~\ref{fig:quantum_channel}. 
The action of such a map $\mathcal{E}$ on the state $\rho$ of a quantum system can be described via a set of \emph{Kraus operators} $\{E_{i}\}$, as, $\mathcal{E}(\rho) = \sum_{i}E_{i}\rho E_{i}^{\dagger}$, satisfying the normalization condition $\sum_i E_i^\dagger E_i = I$. These Kraus operators essentially represent the different \emph{error operators} associated with the noise channel $\mathcal{E}$. A classic example of a quantum channel arising out of a system-environment interaction is the \emph{amplitude-damping channel}. This is a channel that models dissipation in a two-dimensional quantum system (a qubit!), such as a two-level atom interacting with an optical mode in a cavity via the Jaynes-Cummings Hamiltonian, as shown in Fig.~\ref{fig:atom_cavity}. The action of the amplitude-damping channel on the qubit system is described by a pair of Kraus operators, $E_{0}$ and $E_{1}$, which are given by~\cite{nielsen}, \begin{equation}\label{eq:ampdamp} E_{0} = \vert0\rangle\langle 0\vert+ \sqrt{1-\gamma}\,\vert1\rangle\langle 1\vert, \; \quad E_1 = \sqrt{\gamma}\,\vert0\rangle\langle 1\vert. \end{equation} Here, $\gamma$ is the probability of decay of the excited state $\vert1\rangle$ to the ground state $\vert0\rangle$. Amplitude-damping noise is an important model of \emph{non-Pauli} noise, since its Kraus operators are not simply proportional to Pauli operators. This is in contrast to other well known noise models such as bit-flip, phase-flip and depolarizing noise, which are all described by Kraus operators proportional to elements of the set $\{I,X,Y,Z\}$ of single-qubit Pauli operators. \begin{figure} \caption{An atom in a cavity undergoing spontaneous emission~\cite{cao}.}\label{fig:atom_cavity} \end{figure} \subsection{Quantum Error Correction} \label{sec:qec} Quantum error correction involves protecting the information in, say, a $d_0$-dimensional Hilbert space $\mathcal{H}_0$, by \emph{encoding} it in a $d_0$-dimensional subspace $\mathcal{C}$ of a larger Hilbert space $\mathcal{H}$. Subsequently, a \emph{recovery} map $\mathcal{R}$ is applied in order to reverse the effects of errors, followed by \emph{decoding}, which brings the information back into the original Hilbert space $\mathcal{H}_0$. The encoding $\mathcal{W}$ is an isometry whose action can be described as, $\mathcal{W}:\mathcal{B}(\mathcal{H}_0)\rightarrow \mathcal{B}(\mathcal{C})\subseteq\mathcal{B}(\mathcal{H})$, where $\mathcal{B}(.)$ denotes the set of bounded linear operators on the respective Hilbert spaces. The subspace $\mathcal{C}$ is called the \emph{codespace} or simply, a quantum code~\cite{nielsen, lidar}. \begin{figure} \caption{Perfect QEC: the codespace gets mapped to mutually orthogonal subspaces by the errors $\{E_i\}$.}\label{fig:perfectqec} \end{figure} Ideally, to protect against the errors $\{E_{i}\}$ caused by a quantum channel $\mathcal{E}$, the codespace must be chosen such that it gets mapped onto mutually orthogonal subspaces under the action of the errors, as shown in Fig.~\ref{fig:perfectqec}. This allows for the errors $\{E_{i}\}$ afflicting the codespace to be identified unambiguously and hence corrected for. Such codes are referred to as \emph{perfectly} correctable codes. The property of a quantum code being perfectly correctable under the action of the errors $\{E_{i}\}$ is captured by the Knill-Laflamme conditions, stated as~\cite{knill97}, \begin{equation}\label{eq:perfectqec} PE_{i}^{\dagger}E_{j}P = \alpha_{ij}P, \; \forall i, j. 
\end{equation} \noindent Here, $P$ is the projection onto the codespace $\mathcal{C}$ and $\alpha_{ij}$ are complex elements of a Hermitian matrix $\alpha$. The algebraic conditions in Eq.~\eqref{eq:perfectqec} are necessary and sufficient for the existence of a recovery (CPTP) map $\mathcal{R}$ which can correct perfectly for the errors $\{E_i\}$. We note here that information-theoretic conditions for perfect QEC have also been formulated in terms of the coherent information~\cite{infoqec_96}. In contrast to the active approach to QEC involving a recovery operation, there exist passive QEC strategies~\cite{lidar}, where the information to be protected is stored in subspaces or subsystems of a larger Hilbert space that remain unaffected by the noise, called decoherence-free subspaces (DFS) and noiseless subsystems (NS)~\cite{kribs}, respectively. Such passive QEC schemes are outside the scope of this review; here we focus exclusively on active quantum error correction. Since the QEC conditions in Eq.~\eqref{eq:perfectqec} are linear, a code that corrects perfectly for the set $\{E_{i}\}$ will also correct perfectly for any linear combination of the errors in the set. Thus, to correct for an arbitrary single-qubit error, it suffices to find codes that can correct perfectly for a unitary error basis such as the Pauli basis spanned by the single-qubit Pauli operators $\{I,X,Y,Z\}$. This in turn leads to an elegant reformulation of the theory of perfect QEC using the group-theoretic algebra of Pauli operators, called the stabilizer formalism~\cite{lidar,nielsen}. We refer to other review articles in this special issue for further details on the stabilizer formalism and its role in quantum code construction. Based on the set of errors correctable by it, a quantum code is characterized by three parameters. An $[[n, k, d]]$ quantum code is one that encodes $k$ qubits into $n$ physical qubits and corrects for errors on up to $t$ qubits, where $d=2t+1$ is often referred to as the distance of the code. Stabilizer codes such as the well known $9$-qubit Shor code~\cite{Shor95}, the $7$-qubit Steane code~\cite{Steane97} and the $5$-qubit code~\cite{laflamme_96} are all general-purpose quantum codes, since they can correct for arbitrary single-qubit errors. However, their error-detection and error-correction mechanisms work by discretizing arbitrary errors in terms of the Pauli errors and distinguishing single-qubit Pauli errors perfectly. These perfect QEC codes are in essence tailored to protect against single-qubit Pauli noise, which in turn imposes rigid constraints on the structure of such codes. In what follows, we will look at how it is possible to construct newer classes of quantum codes both by deviating from the demands of perfect QEC and by moving beyond the class of Pauli noise channels. \subsection{Approximate Quantum Error Correction}\label{sec:aqec} In order to correct perfectly for arbitrary single-qubit errors by decomposing them in terms of Pauli errors, a single qubit must be encoded into at least five qubits~\cite{knill97}. Sidestepping this quantum Hamming bound, the pioneering work of Leung \emph{et al.}~\cite{leung} provided an example of a $4$-qubit QEC code that can correct for single-qubit amplitude-damping noise. It was further shown that this $4$-qubit code does not satisfy the Knill-Laflamme conditions in Eq.~\eqref{eq:perfectqec} exactly, but satisfies a \emph{perturbed} form of the QEC conditions. 
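The Knill-Laflamme test of Eq.~\eqref{eq:perfectqec} is straightforward to check numerically. The following minimal \texttt{numpy} sketch (our own illustration, not taken from the works cited above) evaluates $PE_i^\dagger E_j P$ and measures its deviation from $\alpha_{ij}P$, here for the standard $3$-qubit repetition code against single bit-flip errors, where the deviation vanishes. The same function can be applied to the $4$-qubit code introduced below together with four-qubit amplitude-damping Kraus operators, in which case the deviation is nonzero but small, vanishing with the damping strength.
\begin{verbatim}
# Illustrative sketch: numerically testing the Knill-Laflamme conditions
# P E_i^dag E_j P = alpha_ij P of Eq. (eq:perfectqec).
import itertools
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_all(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def kl_deviation(P, kraus):
    """Largest deviation of P E_i^dag E_j P from alpha_ij P,
    where alpha_ij = Tr(P E_i^dag E_j P) / Tr(P)."""
    dev = 0.0
    for Ei, Ej in itertools.product(kraus, repeat=2):
        M = P @ Ei.conj().T @ Ej @ P
        alpha = np.trace(M) / np.trace(P)
        dev = max(dev, np.linalg.norm(M - alpha * P))
    return dev

# 3-qubit repetition code |000>, |111> against single bit-flip errors:
ket0 = np.zeros(8); ket0[0] = 1.0   # |000>
ket1 = np.zeros(8); ket1[7] = 1.0   # |111>
P = np.outer(ket0, ket0) + np.outer(ket1, ket1)

errors = [kron_all([I2, I2, I2])] + \
         [kron_all([X if k == j else I2 for k in range(3)]) for j in range(3)]

print(kl_deviation(P, errors))   # ~1e-16: conditions hold exactly
\end{verbatim}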
The $4$-qubit code is thus the first example of an \emph{approximate} quantum code, with a codespace that is spanned by, \begin{equation}\label{eq:4qubit} \vert0_L\rangle = \tfrac{1}{\sqrt 2}(\vert0000\rangle +\vert1111\rangle), \quad \vert1_{L}\rangle = \tfrac{1}{\sqrt 2}(\vert1100\rangle +\vert0011\rangle). \end{equation} Finally, it was shown that this approximate $4$-qubit code achieves a degree of protection against amplitude-damping noise comparable to that of the standard $5$-qubit code~\cite{leung}. These observations generated a lot of interest in looking for approximate codes and QEC protocols. \begin{figure} \caption{Approximate QEC: the codespace gets mapped to overlapping subspaces due to the errors $\{E_i\}$.}\label{fig:approxqec} \end{figure} In contrast to the idea of perfect QEC, approximate QEC is achieved by relaxing the demands on the code. The codespace need not be mapped to distinct subspaces under the action of the errors, but can instead be mapped to overlapping subspaces as shown in Fig.~\ref{fig:approxqec}. Furthermore, while the standard recovery procedure for a perfect code simply involves applying the appropriate Pauli operator once the error is detected and identified, an approximate code often requires a non-trivial recovery, one that is also adapted to the noise, so as to reverse the effects of the noise in the best possible way. Thus, we may also refer to approximate codes and approximate QEC protocols as \emph{noise-adapted} schemes, since the code and/or the recovery are tailored to a particular noise model. Formally, a channel $\mathcal{E}$ is approximately correctable on the codespace $\mathcal{C}$ if and only if there exists a CPTP map $\mathcal{R}$ such that $\mathbb{F}[\mathcal{C}; \mathcal{R} \circ \mathcal{E}] \approx 1 $, where $\mathbb{F}$ is any average or worst-case fidelity measure, quantifying how well the combination of $\mathcal{E}$ followed by $\mathcal{R}$ protects the states in the codespace. Before we proceed with our survey of approximate or noise-adapted QEC, it may be useful to recall two standard fidelity measures that are used to quantify the performance of any QEC protocol. The first is the \emph{entanglement fidelity} $F_{e}(\rho, \mathcal{E})$, which describes how well the noise channel $\mathcal{E}$ preserves entanglement. Specifically, let $\vert\Psi_{AB}\rangle$ be a purification of the state $\rho_{A}$ onto the joint system $\mathcal{H}_{A}\otimes\mathcal{H}_{B}$. Then, $F_{e}(\rho, \mathcal{E})$ is defined as~\cite{nielsen}, \begin{equation}\label{eq:entfid} F_e(\rho, \mathcal{E}) = \langle \Psi_{AB}\vert (\mathcal{E}_{A}\otimes \mathcal{I}_{B}) (\vert \Psi_{AB}\rangle\langle\Psi_{AB} \vert)\vert \Psi_{AB}\rangle, \end{equation} where the noise $\mathcal{E}_{A}$ acts only on the subsystem $\mathcal{H}_A$. In contrast to this, the \emph{fidelity} $F (\rho,\mathcal{E}(\rho))$ quantifies how well the channel preserves the state $\rho$ in terms of the Bures metric, and is defined as~\cite{nielsen}, \begin{equation} F (\rho, \mathcal{E}) = \mathrm{Tr}\sqrt{\sqrt{\rho}\mathcal{E}(\rho)\sqrt{\rho}} \label{eq:fid} \end{equation} The performance of a QEC protocol described by the pair of encoding and recovery maps $(\mathcal{W},\mathcal{R})$ for a noise channel $\mathcal{E}$ is then quantified by the \emph{worst-case fidelity}, defined as, \begin{equation} F_{\min}(\mathcal{W}, \mathcal{R};\mathcal{E}) \equiv \min_{\vert \psi\rangle \in \mathcal{H}_{0}} F(\vert \psi\rangle,\mathcal{W}^{-1}\circ \mathcal{R} \circ \mathcal{E} \circ \mathcal{W}). 
\label{eq:worstcase_fidelity} \end{equation} \section{Noise-adapted Quantum Error Correction}\label{sec:noise-adapted} Much of the existing work on QEC focuses on general-purpose QEC codes that assume no knowledge about the noise affecting the system. This allows one to construct codes capable of correcting any arbitrary error by discretizing the error in terms of Pauli operators, as explained earlier. However, such Pauli-based QEC codes are resource intensive, whereas noise-adapted codes like the $4$-qubit code demonstrated that it may be possible to achieve a comparable degree of protection using fewer qubits. The problem of noise-adapted QEC is to identify a pair $(\mathcal{W},\mathcal{R})$ of encoding and recovery maps respectively, that achieve optimal protection against the noise map under consideration. For a given noise map $\mathcal{E}$ (assuming some dimension $d$ of the physical system), the best QEC protocol is thus the solution to an optimization over encoding isometries $\mathcal{W}$ and recovery maps $\mathcal{R}$, for a chosen measure of fidelity $\mathbb{F}$. \begin{equation} \argmax_{\mathcal{W}}\argmax_{\mathcal{R}} \mathbb{F}(\mathcal{W}, \mathcal{R}; \mathcal{E}). \label{eq:opt} \end{equation} This is in general a hard problem, since it involves a double optimization or a triple optimization depending on whether the measure $\mathbb{F}$ is an average fidelity measure or a worst-case fidelity measure such as the one defined in Eq.~\eqref{eq:worstcase_fidelity}. The problem of noise-adapted QEC becomes more tractable when these two tasks are decoupled: we may instead ask what is the best possible encoding assuming a fixed recovery map, or, we may try to solve for the best recovery map for a given encoding. In what follows, we will first survey the known results on noise-adapted recovery maps (Sec.~\ref{sec:adaptive_recovery}) and then proceed to discuss different approaches to search for and construct good noise-adapted quantum codes (Sec.~\ref{sec:adaptive_codes}). \subsection{Noise-adapted recovery maps}\label{sec:adaptive_recovery} One of the earliest analytical results on noise-adapted QEC was the demonstration of the existence of a universal, near-optimal recovery map for any noise channel, with optimality defined in terms of the average entanglement fidelity~\cite{barnum}. This noise-adapted recovery map is based on an original construction due to Petz~\cite{petz2003}, and is often referred to as the \emph{Petz map} today in the literature. Subsequently, this idea was used to obtain conditions for approximate QEC~\cite{beny} as a generalization of the Knill-Laflamme conditions, which in turn provided a way to construct a near-optimal recovery map in terms of the worst-case entanglement fidelity. In related work, it was shown that a modified version of the Petz map is in fact a noise-adapted recovery map for any combination of noise and codespace, achieving near-optimal worst-case fidelity~\cite{hui_prabha}, leading to simple algebraic conditions for approximate QEC as a perturbation of the Knill-Laflamme conditions. We note here that information-theoretic conditions for approximate QEC have also been formulated~\cite{SW2002}, as a perturbation of the perfect conditions. Given the important role played by the Petz map, it might be useful to briefly review its definition and properties here. The map was originally introduced in an information-theoretic setting, in the context of saturating the monotonicity of the quantum relative entropy~\cite{Petz}. 
For a density operator $\rho$ and noise channel $\mathcal{E}$ with Kraus operators $\{E_{i}, i = 1,2, \ldots, N\}$, the Petz map is defined via its Kraus operator decomposition as follows. \begin{equation}\label{eq:petz1} \mathcal{R}_{\rho} \sim \lbrace R_i \equiv \sqrt{\rho} E_{i}^{\dagger} \mathcal{E}(\rho)^{-1/2} \rbrace . \end{equation} Note that this map recovers the state $\rho$ perfectly after the action of the noise $\mathcal{E}$. This \emph{state-specific} Petz map was generalized for an ensemble of states in~\cite{barnum}, to obtain a near-optimal recovery map in terms of the average entanglement fidelity. Subsequently, a \emph{code-specific} Petz map $\mathcal{R}_{P}$ was defined for the noise channel $\mathcal{E}$ with Kraus operators $\{E_{i}, i = 1,2, \ldots, N\}$ and a codespace $\mathcal{C}$, as, \begin{equation} \mathcal{R}_{P} \equiv \{ PE_{i}^{\dagger}\mathcal{E}(P)^{-1/2} \}, \; i = 1,2,\ldots, N, \label{eq:petz_kraus} \end{equation} where $P$ is the projector onto the codespace and $\mathcal{E}(P)=\sum_{i}E_{i}PE_{i}^{\dagger}$. The action of the code-specific Petz map is thus defined on the support of $\mathcal{E}(P)$, as, \begin{equation}\label{eq:Petzmap} \mathcal{R}_{P}(\cdot) \equiv \sum_{i=1}^{N} PE_{i}^{\dagger} \mathcal{E}(P)^{-1/2}(\cdot) \mathcal{E}(P)^{-1/2}E_{i}P. \end{equation} Note that $\mathcal{R}_{P}$ can be thought of as being composed of three CP maps as follows. \begin{align} \mbox{Normalizer map :} &\quad \mathcal{E}(P)^{-1/2}(.)\,\mathcal{E}(P)^{-1/2} \nonumber \\ \mbox{Adjoint map :} &\quad \mathcal{E}^{\dagger} (.) \nonumber \\ \mbox{Projector map :} &\quad P\,(.)\, P \label{eq:petz3} \end{align} This map was shown to achieve close to optimal worst-case fidelity for the codespace $\mathcal{C}$ affected by the noise channel $\mathcal{E}$~\cite{hui_prabha}. Furthermore, in case the codespace $\mathcal{C}$ is perfectly correctable for the noise $\mathcal{E}$, it can be shown that the map $\mathcal{R}_{P}$ is indeed the standard recovery map of perfect QEC! We note here that the code-specific Petz map construction was further generalised as a near-optimal recovery for subsystem codes~\cite{mandayam2012}. On a related note, approximate QEC has also been illustrated in a different setting, where the noise channel is assumed to act jointly on system and bath~\cite{hui_2}. In recent times, the Petz map, which is a close analogue of the classical Bayesian reversal map~\cite{petz_bayesian}, has been used to recover information in the case of non-Markovian dynamics~\cite{petz_nonmarkovian}. A physical protocol which implements the Petz map in the form of Hamiltonians and jump operators has been given in Ref.~\cite{petz_physicalprocess}. The Petz map construction for the continuous-variable case, namely bosonic Gaussian channels, has also been studied in Ref.~\cite{wilde}. Finally, a quantum algorithm to implement the Petz map based on the quantum singular value decomposition was proposed recently~\cite{gilyen2022}. We conclude this section with a summary of numerical studies of optimal noise-adapted recovery maps. Solving for the optimal recovery map in terms of the worst-case fidelity is computationally hard given that Eq.~\eqref{eq:opt} requires a triple optimization in this case. 
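Before discussing these numerical approaches, the Petz construction itself is simple to implement. The following minimal \texttt{numpy} sketch (our own illustration, not drawn from the cited works) builds the state-specific Petz map of Eq.~\eqref{eq:petz1} for the amplitude-damping channel of Eq.~\eqref{eq:ampdamp} and verifies that its Kraus operators form a trace-preserving set and that the reference state is recovered exactly, $\mathcal{R}_\rho(\mathcal{E}(\rho))=\rho$.
\begin{verbatim}
# Illustrative sketch: the state-specific Petz map of Eq. (eq:petz1)
# for the single-qubit amplitude-damping channel.
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

gamma = 0.3
E0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
E1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
kraus = [E0, E1]

def channel(rho, kraus):
    return sum(E @ rho @ E.conj().T for E in kraus)

def petz_kraus(rho, kraus):
    """Kraus operators R_i = sqrt(rho) E_i^dag E(rho)^{-1/2}."""
    sq_rho = mpow(rho, 0.5)
    # pseudo-inverse square root, defined on the support of E(rho)
    inv_sq = np.linalg.pinv(mpow(channel(rho, kraus), 0.5))
    return [sq_rho @ E.conj().T @ inv_sq for E in kraus]

# A generic full-rank reference state rho:
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]], dtype=complex)

R = petz_kraus(rho, kraus)
print(np.allclose(sum(Ri.conj().T @ Ri for Ri in R), np.eye(2)))  # trace preserving
print(np.allclose(channel(channel(rho, kraus), R), rho))          # exact recovery
\end{verbatim}
The same routine, called with $\rho$ replaced by the normalized code projector $P/\operatorname{Tr}P$, reproduces the code-specific Kraus operators $PE_i^\dagger\mathcal{E}(P)^{-1/2}$ of Eq.~\eqref{eq:petz_kraus}, since the normalization factors cancel.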
Upon relaxing certain constraints, however, it was shown that this triple optimization is tractable via a semidefinite program (SDP), although the recovery map thus obtained is typically suboptimal~\cite{yamamoto}. The entanglement fidelity measure defined in Eq.~\eqref{eq:entfid}, on the other hand, is amenable to convex optimization techniques~\cite{kosut}. In terms of the worst-case entanglement fidelity, the problem of finding the optimal noise-adapted code for a fixed recovery and the problem of finding the optimal noise-adapted recovery for a fixed code can both be recast as semidefinite programs, which are simply dual to each other~\cite{fletcher_rec, fletcher_thesis}. \subsection{Noise-adapted quantum codes}\label{sec:adaptive_codes} Starting with the $4$-qubit code~\cite{leung} described in Eq.~\eqref{eq:4qubit} above, several analytical constructions of approximate codes have been obtained in the literature. These include the class of \emph{cat codes}~\cite{catcode1, catcode2} and \emph{binomial codes}~\cite{binomial2016}, which are tailored towards protecting information stored in bosonic modes against dominant noise processes such as photon loss and amplitude damping. There are a few analytical constructions of noise-adapted codes for amplitude-damping noise~\cite{shor_lang2007, shor2011} and generalized amplitude-damping noise~\cite{cafaro2014}, which make use of the structure of the Kraus operators of the respective channels to get the desired error mitigation properties. Constructing noise-adapted codes for arbitrary quantum channels is in general a hard problem. Recently, a systematic way of constructing optimal noise-adapted approximate codes for general noise models using the Cartan decomposition was provided in Ref.~\cite{ak_cartan}. Furthermore, in contrast to earlier numerical approaches, the worst-case fidelity was used as the figure of merit to obtain the optimal codes. We briefly summarize this approach here before surveying other numerical approaches to finding noise-adapted quantum codes. As a first step towards simplifying the search for good quantum codes, the recovery map is simply fixed to be the code-specific recovery map $\mathcal{R}_P$ defined in Eq.~\eqref{eq:Petzmap}. Specifically, for a qubit noise channel $\mathcal{E}$ and a two-dimensional code $\mathcal{C}$, using the Petz map leads to a simple optimization problem for the worst-case fidelity, namely~\cite{ak_cartan}, \begin{equation}\label{eq:fid_loss} F_{\rm min} (\mathcal{W}) = 1- \frac{1}{2}{\left[1-t_{\rm min}(\mathcal{W})\right]}, \end{equation} where $t_{\rm min}$ refers to the smallest eigenvalue of a $3\times 3$ matrix. Therefore, the encoding unitary $\mathcal{W}$ which fixes the optimal codespace is the one that maximizes the fidelity, or equivalently minimizes the fidelity loss $\eta_\mathcal{W} = 1 - F_{\rm min} (\mathcal{W})$, over all encoding unitaries. In order to solve Eq.~\eqref{eq:fid_loss}, each element of $SU(2^n)$ is then parameterized using the so-called \emph{Cartan decomposition}~\cite{khaneja, earp}. The Cartan decomposition provides a way to represent any element of $SU(2^{n})$, up to local unitaries, as a product of operators from $SU(2^{n-1}) \otimes SU(2)$ and unitary operators \emph{nonlocal} on the entire $n$-qubit space. Such a decomposition can be applied recursively, to further decompose each $SU(2^m)$ operator as a product of local and nonlocal unitary operators. 
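As a concrete special case (our illustration; the reference works with the general $n$-qubit recursion), for $n=2$ the Cartan decomposition reduces to the well-known KAK form: up to a global phase, any two-qubit unitary $U$ can be written as
\[
U = (K_1\otimes K_2)\,\exp\!\big[\,i\,(\theta_x\,\sigma_x\otimes\sigma_x+\theta_y\,\sigma_y\otimes\sigma_y+\theta_z\,\sigma_z\otimes\sigma_z)\,\big]\,(K_3\otimes K_4),
\]
with $K_1,\ldots,K_4\in SU(2)$ and real angles $\theta_x,\theta_y,\theta_z$. The three angles carry the entire nonlocal content of $U$, while the twelve parameters of the local factors account for the rest of the $15$ dimensions of $SU(4)$; it is the nonlocal parameters of this type that are scanned in the code search described below, which is what shrinks the search space.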
\begin{figure} \caption{Performance of $4$-qubit noise-adapted codes for the amplitude-damping channel, obtained using the numerical search procedure based on the Cartan decomposition.} \label{fig:4qubit} \end{figure} The Cartan form is naturally useful in the context of searching for good codespaces, especially in the regime where the noise is assumed to be independent and local. The Cartan decomposition allows one to look for potential codespaces by directly searching over the non-local unitaries in the decomposition, while keeping the local unitaries fixed. For instance, in the case of a $4$-qubit encoding, the total number of parameters required to describe any element in $SU(2^4)$ is $363$ using the Cartan decomposition. However, since we only need to search over the intermediate non-local unitaries to find good codespaces, one needs to search over only $82$ parameters. Therefore, this Cartan-form-based search leads to a quick and efficient way of obtaining optimal noise-adapted codes for arbitrary noise channels. Fig.~\ref{fig:4qubit} compares the performance of various quantum codes for the amplitude-damping channel, by plotting the worst-case fidelity as a function of the noise strength $\gamma$. We see that the best $4$-qubit codes obtained via the numerical search outperform the perfect $5$-qubit code (solid line) as well as the noise-adapted $4$-qubit code (dot-dashed line). This further emphasizes the point made earlier that it suffices to search only over the non-local parts of the unitary in the Cartan decomposition (structured search, cross-line), since the best code thus obtained performs comparably to the best code obtained by searching over the entire parameter space in the Cartan decomposition (unstructured search, dashed line). \subsection{Learning-based approaches to construct adaptive QEC codes} We conclude this section on noise-adapted quantum codes with a survey of some recent works that propose to use learning-based approaches in searching for good QEC protocols. On the one hand, Ref.~\cite{qvector} demonstrates a variational quantum algorithm to obtain quantum codes specific to the device hardware, in a manner that does not require prior knowledge about the dominant noise model in the system. A simple cost function, namely the average fidelity, is optimized in this algorithm. Alternatively, Ref.~\cite{cao2022quantum} develops a variational quantum algorithm to obtain arbitrary quantum codes, not necessarily channel-adapted ones, but also those with specific code parameters such as distance. In this case, the cost function is defined by generalizing the Knill-Laflamme condition in Eq.~\eqref{eq:perfectqec}, and the efficacy of the algorithm is demonstrated by identifying various known codes. There have also been reinforcement-learning-based approaches to finding quantum codes. For example, Ref.~\cite{neural_networks} shows how neural-network-based learning can be used to construct quantum codes for various quantum channels. On a different note, Ref.~\cite{qeccodes} demonstrates a reinforcement-learning approach to adapting QEC codes by benchmarking them against the desired logical error rate. Finally, Ref.~\cite{reinforcement} demonstrates how to train a network-based agent to learn QEC strategies from scratch and hence protect a set of qubits from noise. 
\section{Applications of noise-adapted QEC}\label{sec:applications} It is well known that ideas from quantum error correction have today permeated diverse fields in physics, from the toric code~\cite{toric_code}, which has given rise to several interesting topological models in condensed matter~\cite{baskaran2007}, to the more recent connections between stabilizer codes, tensor networks~\cite{ferris2014} and holography~\cite{happy2015}. In this section, we will describe some interesting connections between the theory of approximate or noise-adapted QEC and areas like many-body physics and the AdS/CFT correspondence. \subsection{Quantum state transfer over spin chains}\label{sec:state_transfer} In the context of quantum many-body systems, it was recently shown that approximate quantum codes can occur naturally as ground spaces of certain topologically ordered, gapped Hamiltonian systems~\cite{elizabeth, elizabeth_2}. Another interesting connection between quantum channels, quantum error correction and many-body systems arises in the context of state transfer over spin chains~\cite{bose2007}. Here, we would like to elaborate on this connection and emphasize the role played by approximate QEC in achieving information transfer over spin chains with a high degree of fidelity. Quantum communication using interacting spins is one of the areas that has generated a lot of interest in recent years. Following the pioneering work of Bose~\cite{bose}, who showed that such a state transfer could be viewed as transmitting quantum information over a quantum channel, multiple protocols demonstrating \emph{perfect} and \emph{pretty-good} state transfer using spin chains have been proposed. Rather than use a single spin chain, it was subsequently shown that conclusive and perfect state transfer could be achieved using a pair of spin-$1/2$ chains with a dual-rail encoding~\cite{Burgarth}. As opposed to a perfect transfer, in the case of pretty good transfer one finds an optimal scheme to transfer the information with high fidelity, using permanently coupled spin chains~\cite{osborne}. Going beyond the $2$-qubit dual-rail code, it has been shown that state transfer can be achieved over disordered spin chains beyond the localization length, by using a concatenated form of the standard $5$-qubit code~\cite{allcock}. Alternatively, the role of QEC in enhancing the fidelity of state transfer under local noise affecting each individual spin in the chain has also been studied~\cite{kay_2016}. Since the underlying quantum channel that arises in the context of state transfer over spin chains is typically not of the Pauli form, a natural question to ask is whether noise-adapted QEC may enable information transfer using spin chains over longer distances. This question was addressed recently~\cite{ak_statetransfer}, where it was shown that pretty good state transfer can be achieved using adaptive QEC protocols on spin chains. We will now briefly describe such a noise-adapted protocol for quantum state transfer over a $1$-d Heisenberg chain. 
Consider the problem of transmitting a single qubit's worth of information across a spin-$1/2$ chain with the interaction Hamiltonian given by, \begin{equation} \label{eq:H_gen} \mathcal{H} = -\sum_{k} J_{k}\left(\sigma^{k}_{x}\sigma^{k+1}_{x}+\sigma^{k}_{y}\sigma^{k+1}_{y}\right) - \sum_{k}\tilde{J}_{k}\sigma^{k}_{z}\sigma^{k+1}_{z} + \sum_{k}B_{k}\sigma^{k}_{z}, \end{equation} where $\{J_{k}\}>0$ and $\{\tilde{J}_{k}\}>0$ are site-dependent exchange couplings of a ferromagnetic spin chain, $\{B_{k}\}$ denote the magnetic field strengths at each site, and $(\sigma^{k}_{x},\sigma^{k}_{y},\sigma^{k}_{z})$ are the Pauli operators at the $k^{\rm th}$ site. The standard state transfer protocol works by initialising a single spin at site $s$ (\emph{sender}) to the quantum state to be transferred and then \emph{receiving} it at site $r$. It is easily shown that if the spins interact via the Hamiltonian in Eq.~\eqref{eq:H_gen}, this leads to a quantum channel $\mathcal{E}$ from the input spin to the receiver's end with Kraus operators, \begin{equation} E_{0} = \left( \begin{array}{cc} 1 & 0 \\ 0 & f_{r,s}^{N}(t) \end{array} \right), \; E_{1} = \left( \begin{array}{cc} 0 & \sqrt{1-\vert f_{r,s}^{N}(t)\vert^{2}} \\ 0 & 0 \end{array} \right). \label{eq:Kraus_ideal} \end{equation} The Kraus operators in Eq.~\eqref{eq:Kraus_ideal} lead to a channel that has the same structure as the amplitude-damping channel described in Eq.~\eqref{eq:ampdamp}, but is more general since the parameter $f_{r,s}^{N}(t)$ characterizing the noise in the channel is complex. The parameter $f_{r,s}^{N}(t)$ is the so-called \emph{transition amplitude}; here, $r$ refers to the receiver's site, $s$ to the sender's site, and $N$ to the total number of spins in the chain. Since the quantum channel $\mathcal{E}$ is very similar to the amplitude-damping channel, the adaptive QEC protocol in~\cite{ak_statetransfer} picks the $4$-qubit code in Eq.~\eqref{eq:4qubit} and the Petz recovery map in Eq.~\eqref{eq:Petzmap} adapted to this code to achieve state transfer with pretty good fidelity. The plots in Fig.~\ref{fig:spinchain} describe the performance of such an adaptive QEC protocol on a spin chain of length $N$ subject to the $XXX$ interaction, with $\{B_k=0\}$ and $\{J_k=J\}$ in Eq.~\eqref{eq:H_gen}. \begin{figure} \caption{a) Spin transfer on an ideal XXX chain (left). b) Spin transfer on a disordered XXX chain (right).} \label{fig:spinchain} \end{figure} In these plots, the performance of the state transfer protocol from the first to the $N^{\rm th}$ site is characterised using the worst-case fidelity as a function of the length $N$ of the spin chain. The first plot in Fig.~\ref{fig:spinchain} shows the state transfer fidelities for an ideal $XXX$ chain, whereas the second plot shows the fidelities obtained for a disordered spin chain with disorder strength $\delta$. In this case, the disorder-averaged worst-case fidelity $\langle F_{\rm min}^2\rangle_\delta$ is the performance metric, and the noise-adapted Petz recovery is constructed in terms of the disorder-averaged transition amplitude $\langle f_{r,s}^{N}(t)\rangle_\delta$. It is shown both analytically and numerically that for small disorder strengths the disorder-averaged transition amplitude is the same as in the no-disorder case~\cite{ak_statetransfer} and the variance in the transition amplitude is vanishingly small. 
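For readers who wish to experiment with this channel, the following minimal \texttt{numpy} sketch (our own illustration; the normalization and sign conventions of the effective hopping matrix are assumptions, not taken from Ref.~\cite{ak_statetransfer}) computes a transition amplitude of the form $f^{N}_{r,s}(t)=\langle r\vert e^{-\mathrm{i}\mathcal{H}t}\vert s\rangle$ in the single-excitation sector of a uniform chain and assembles the Kraus operators of Eq.~\eqref{eq:Kraus_ideal}, which can then be fed into the Petz recovery of Eq.~\eqref{eq:Petzmap}.
\begin{verbatim}
# Illustrative sketch: transition amplitude in the one-excitation sector of a
# uniform chain and the resulting Kraus operators of Eq. (eq:Kraus_ideal).
# The hopping matrix below is an assumed convention; a uniform diagonal shift
# only changes f by a global phase and leaves |f| unchanged.
import numpy as np
from scipy.linalg import expm

N, J, t = 10, 1.0, 4.0
s, r = 0, N - 1                      # sender and receiver sites (0-indexed)

# Effective single-excitation Hamiltonian: nearest-neighbour hopping of
# strength 2J coming from the XX part of Eq. (eq:H_gen), with B_k = 0.
H1 = np.zeros((N, N))
for k in range(N - 1):
    H1[k, k + 1] = H1[k + 1, k] = -2.0 * J

f = expm(-1j * H1 * t)[r, s]         # f^N_{r,s}(t) = <r| e^{-iHt} |s>

E0 = np.array([[1, 0], [0, f]], dtype=complex)
E1 = np.array([[0, np.sqrt(1 - abs(f) ** 2)], [0, 0]], dtype=complex)

# Completeness holds for any |f| <= 1, so this defines a valid CPTP channel:
print(abs(f), np.allclose(E0.conj().T @ E0 + E1.conj().T @ E1, np.eye(2)))
\end{verbatim}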
These observations make a convincing case for using the noise-adapted Petz recovery to achieve pretty good state transfer over ideal as well as disordered spin chains. \subsection{The Petz map in AdS/CFT} Moving on from many-body systems to holography, we now provide a brief summary of the interesting emerging connections between perfect/approximate quantum codes and the bulk-boundary correspondence in AdS/CFT. The AdS/CFT duality proposes a correspondence between a theory of quantum gravity on a $(d+1)$-dimensional (asymptotically) anti-de Sitter (AdS) spacetime and a conformal field theory (CFT) in one less spatial dimension defined on its boundary. It was shown in the early works of Almheiri \emph{et al.}~\cite{almheiri2015} and Pastawski \emph{et al.}~\cite{happy2015, pastawski2017} that the framework of quantum error correction provides an elegant characterization of this bulk-boundary correspondence. The mathematical formalism of QEC is naturally suited to answer the question of bulk reconstruction, namely, whether one can find a representation of operators in the bulk gravity theory as operators acting on a subregion of the boundary. Viewing the AdS/CFT correspondence as a map from the bulk to the boundary, a noisy quantum channel arises from tracing over a subregion of the boundary. The problem of bulk reconstruction with access only to a subregion of the boundary is then the same as the problem of recovering from this erasure noise channel. Formally, it has been shown that such a reconstruction is possible when there is an \emph{exact} equivalence of the bulk relative entropy and the boundary relative entropy~\cite{dong2016, jlms2016}. In practice one can only expect \emph{approximate} equality of the bulk and boundary relative entropies, and this naturally leads to the question of whether approximate or noise-adapted recovery maps like the Petz map might find use in the context of bulk-boundary reconstruction. We will conclude with a quick survey of some recent results in this area and refer to~\cite{holography_qec} for a detailed review of this emerging field. The first concrete proposal for an approximate recovery map that could solve the problem of bulk reconstruction involved a variant of the Petz map, often called the \emph{twirled} Petz map~\cite{cotler2019}. Recall that the state-specific Petz map was originally derived as the recovery map that characterizes saturation of the monotonicity of relative entropy under a noise channel. The twirled Petz map is a generalization of the original Petz construction, which was shown to characterize approximate saturation of the monotonicity of relative entropy under a noise channel~\cite{junge2018}. Subsequently, it was shown that approximate reconstruction of the bulk operators can be achieved via an averaged version of the Petz map~\cite{chen2020}. In related work, a Petz-based reconstruction map was demonstrated for toy models of holography based on random tensor networks~\cite{jia2020petz}. Ultimately, a complete understanding of the role of approximate recovery maps in the holographic setting requires a construction of such maps for general von Neumann algebras. There has been some progress in this regard recently, with a modified Petz map for arbitrary von Neumann algebras~\cite{faulkner2022}, but the quest is on to find a suitable physical construction of a universal, approximate recovery map that can solve the bulk reconstruction puzzle in holography. 
\section{Noise-adapted techniques for quantum fault tolerance}\label{sec: FT} A practical quantum computer suffers from decoherence due to the inevitable system-environment interactions. The theory of fault-tolerant quantum computing~\cite{preskill1998} provides a framework by which reliable and long computations can be carried out even in the presence of noise, which could arise either from gate imperfections or from the underlying noisy quantum systems. A fault-tolerant computation proceeds by first encoding the information into a quantum error correcting code and then performing encoded gate operations on it in a noise-resilient way. The errors generated are then removed periodically, before they can cause irreparable damage. Central to this theory is the \emph{threshold theorem}, which provides a critical threshold value, namely, the noise strength below which the fault-tolerance scheme is successful and the encoded operations outperform the unencoded operations~\cite{aliferis2006}. This threshold value depends crucially on the QEC protocol being used and on the assumed noise model. For example, Ref.~\cite{cross2009} provides a comparative study of the threshold values achieved by different quantum codes for the case of depolarizing noise, whereas Ref.~\cite{campbell2017} discusses the thresholds achievable by topological codes such as the surface codes. The threshold values obtained thus far against various noise models by specific quantum codes are tabulated in Table~\ref{tab:table1}. As with standard QEC, much of the work on fault tolerance (FT) in the past has been based on \emph{perfect} quantum codes that are essentially tailored to Pauli noise models. However, in most cases, the dominant system noise is non-Pauli, and in these cases the standard fault-tolerance scheme is too resource-intensive to be implemented on current NISQ hardware. Recently, there have been efforts towards developing noise-adapted fault tolerance schemes that are tailored to the dominant noise processes in physical systems, and these will form our focus in the following sections. \begin{table} \begin{tabular}{|c|c|c|} \hline Fault-tolerant scheme& Noise model& Threshold\\ \hline \hline General&Simple stochastic noise&$\sim 10^{-6}$\\ \hline Polynomial codes (CSS codes)&General noise model&$10^{-6}$\\ \hline Concatenated $7$-qubit code&Adversarial stochastic noise&$2.7 \times 10^{-5}$\\ \hline Error detecting concatenated codes ($C_4/ C_6$)&Depolarizing noise&$1\%$\\ \hline Concatenated $7$-qubit code&Hamiltonian noise&$10^{-8}$\\ \hline Concatenated repetition code& Biased dephasing noise & $0.5 \%$, $0.24\%$ \\ \hline Surface codes&Depolarizing noise&$0.75\%$ , $0.5\% - 1\%$\\ \hline $[[4,2,2]]$ concatenated toric code& Depolarizing noise&$0.41\%$\\ \hline \end{tabular} \caption{\label{tab:table1} Noise thresholds for various quantum fault tolerance schemes~\cite{thesis}.} \end{table} \subsection{Quasi-exact fault tolerance} One interesting approach that goes beyond the standard prescription of quantum fault tolerance is the theory of \emph{quasi-exact fault tolerance}~\cite{wang,quasi}. This scheme demonstrates fault tolerance using a class of approximate quantum codes called \emph{quasi codes}. These are approximate quantum codes defined in terms of scaling parameters, which when tuned can lead to exact codes in some limit. 
The advantage of this framework of quasi-exact codes is that it allows one to interpolate between approximate and perfect codes and correspondingly define parameters like the (quasi) code distance, which cannot be defined for other classes of approximate codes. Examples of quasi codes include valence-bond solid codes and symmetry-protected topological states that occur naturally in quantum many-body systems. Quasi-exact FT further invokes a weaker version of universality referred to as \emph{quasi-universality}. This is a weakening of the usual universality, which induces a coarse-graining structure on the unitary group by identifying a suitable cut-off. Specifically, a logical unitary operator $U \in SU(d)$ is now realized as a coarse-grained unitary $U' \in SU(d)_\eta$, where the accuracy $\eta$ is the distance between $U'$ and $U$, and $SU(d)_\eta \subset SU(d)$. One interesting aspect of quasi-exact FT is that it opens up the possibility of circumventing the no-go theorem for regular FT, namely, that transversality and universality cannot coexist for exact codes~\cite{eastin}. The theory of quasi-exact codes and quasi-exact fault tolerance does allow for transversality with \emph{quasi-universality}, but the computations allowed by quasi-exact FT are only of finite length. The notion of quasi-exact FT is strictly weaker than standard fault tolerance, since the uncorrectable errors accumulate with the number of operations performed. \subsection{Achieving fault tolerance using noise-adapted quantum codes} Much of the work on quantum fault tolerance assumes that the noise is of the Pauli type and aims to build encoded units that are tolerant against Pauli-type faults. Even in cases where the fault tolerance scheme is tailored to a specific noise model, the dominant noise has typically been of the Pauli type. For instance, Ref.~\cite{biased_noise} presents a fault-tolerant scheme specific to dephasing noise using the $3$-qubit phase flip code. In contrast to such approaches, a recent work demonstrates the possibility of biased-noise fault tolerance based on a noise model as well as a QEC code that is non-Pauli~\cite{ak_ft}. Specifically, this work assumes that the dominant noise affecting the physical qubits is the amplitude-damping channel. Expanding the amplitude-damping channel in Eq.~\eqref{eq:ampdamp} for small values of the noise parameter $p$, we get \begin{equation}\label{eq:ampdamp2} \mathcal{E}_\mathrm{AD}(\,\cdot\,) =\tfrac{1}{4}{\left(1+\sqrt{1-p}\right)}^2\mathcal{I}(\,\cdot\,)+p\mathcal{F}(\,\cdot\,) +{\left[\tfrac{1}{16}p^2+O(p^3)\right]}Z(\,\cdot\,)Z, \end{equation} where $\mathcal{I}(\cdot)\equiv (\cdot)$ is the identity channel, and $\mathcal{F}(\cdot)$ is the TP (but not CP) channel defined by \begin{equation}\label{eq:amp_fault} \mathcal{F}(\,\cdot\,) \equiv\tfrac{1}{4}{\left[(\,\cdot\,)Z+Z(\,\cdot\,)\right]}+E(\,\cdot\,)E^{\dagger} \equiv \tfrac{1}{2}\mathcal{F}_z(\,\cdot\,) + \mathcal{F}_a(\,\cdot\,). \end{equation} In Eq.~\eqref{eq:amp_fault}, $\mathcal{F}_z$ refers to an off-diagonal error leading to a non-state operator and $\mathcal{F}_a$ refers to a damping error. Following the basic principles of quantum fault tolerance developed in~\cite{aliferis2006}, Ref.~\cite{ak_ft} constructs encoded units tolerant against faults up to $O(p)$ in Eq.~\eqref{eq:ampdamp2} arising from amplitude-damping noise.
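As a quick sanity check of this decomposition, the following minimal NumPy sketch (which assumes that Eq.~\eqref{eq:ampdamp} is the standard amplitude-damping channel with Kraus operators $E_0=\mathrm{diag}(1,\sqrt{1-p})$ and $E_1=\sqrt{p}\,|0\rangle\langle 1|$, and that $E=|0\rangle\langle 1|$ in Eq.~\eqref{eq:amp_fault}) compares the exact channel output with the right-hand side of Eq.~\eqref{eq:ampdamp2}, keeping the exact coefficient $\tfrac{1}{4}\big(1-\sqrt{1-p}\big)^2=\tfrac{1}{16}p^2+O(p^3)$ in front of $Z(\cdot)Z$:
\begin{verbatim}
# Minimal check of the small-p decomposition of the amplitude-damping channel.
# Assumes the standard Kraus operators E0 = diag(1, sqrt(1-p)), E1 = sqrt(p)|0><1|.
import numpy as np

p = 0.05
Z = np.diag([1.0, -1.0])
E = np.array([[0.0, 1.0], [0.0, 0.0]])          # damping operator |0><1|
E0 = np.diag([1.0, np.sqrt(1 - p)])
E1 = np.sqrt(p) * E

# random single-qubit density matrix
v = np.random.randn(2) + 1j * np.random.randn(2)
rho = np.outer(v, v.conj()); rho /= np.trace(rho)

exact = E0 @ rho @ E0.conj().T + E1 @ rho @ E1.conj().T

a = 0.5 * (1 + np.sqrt(1 - p))                  # identity-part amplitude
b = 0.5 * (1 - np.sqrt(1 - p))                  # b**2 = p**2/16 + O(p**3)
decomp = (a**2) * rho \
       + (p / 4) * (rho @ Z + Z @ rho) \
       + p * (E @ rho @ E.conj().T) \
       + (b**2) * (Z @ rho @ Z)

print(np.allclose(exact, decomp))               # True: decomposition is exact
\end{verbatim}
The two matrices agree to machine precision for any single-qubit input state, which is the sense in which Eq.~\eqref{eq:ampdamp2} isolates a dominant $O(p)$ fault $\mathcal{F}$ on top of the identity.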
Starting with faulty physical units, and using the $4$-qubit code given in Eq.~\eqref{eq:4qubit}, it describes a fault-tolerant universal encoded gate set comprising $\{S, T, H, \textsc{cphase}, \textsc{ccz}\}$ and a fault-tolerant error correction unit. Furthermore, going against the conventional wisdom of quantum fault tolerance, this work shows that there could be transversal gates that are not fault-tolerant, such as the \textsc{cnot} gate in the case of amplitude-damping noise. The transversal \textsc{cphase} gate turns out to be fault-tolerant to single faults arising from amplitude-damping noise. Thus, the \textsc{cphase} gate is a \emph{noise-structure-preserving} gate, which is then used to implement the set of other single-qubit encoded gates via teleportation. We note here that a similar idea of identifying noise-bias-preserving gates has been recently discussed in the context of systems that are subject to biased Pauli noise~\cite{puri2020bias, xu2022}. Finally, we briefly discuss the noise-adapted recovery circuit used to construct the fault-tolerant error correction (\textsc{ec}) unit in~\cite{ak_ft}. Since the $4$-qubit code is approximate and adapted to a non-Pauli kind of noise, in order to perform QEC one has to measure more than just the stabilizer set defining the codespace. Specifically, one needs to measure the set of two-qubit nearest-neighbour $Z$ operators $\{ZZII, IIZZ\}$ and the single-qubit $Z$ operator $ZIII$ in order to find exactly where the damping has occurred. Finally, one measures $XXXX$ to restore the superposition and to eliminate the off-diagonal errors. The basic units required to build the error correction unit are shown in Fig.~\ref{fig:ec_unit}. These units are then strung together fault-tolerantly~\cite{ak_ft} with ancilla qubits and flag qubits to build a fault-tolerant \textsc{ec}-unit. These fault-tolerant circuit constructions lead to an estimate of the pseudo-threshold for amplitude-damping noise: $5.13\times 10^{-5}$ for the memory unit and $2.26\times 10^{-5}$ for the encoded \textsc{cz}-unit~\cite{ak_ft}. \begin{figure} \caption{Basic units of error correction using the approximate $4$-qubit code.} \label{fig:ec_unit} \end{figure} \section{Summary and Outlook}\label{sec:summary} The current era of NISQ devices presents both a challenge and an opportunity for theorists and experimentalists alike, from the perspective of error correction and fault tolerance. On the one hand, in order to fully tap into the potential of the available quantum technologies, one has to ascend the ladder of \emph{scalability} and \emph{control} over the qubits simultaneously~\cite{jurcevic2021}. On the other hand, these NISQ devices also provide an opportunity to rework our theoretical frameworks and perform experiments that leverage NISQ devices to test theoretical ideas. These include, for instance, experiments on the quantum cloud that validate the theory of quantum error correction~\cite{pokharel2018demonstration, ghosh2018}, while also providing some insight into the nature of the dominant noise process in the system~\cite{repibm}, and studies that demonstrate applications to quantum chemistry (see~\cite{nisq_review} for a recent review) and nuclear structure~\cite{dumitrescu}. In this review, we have surveyed QEC approaches that deviate from the standard QEC and fault tolerance formalism, and instead aim to correct for the dominant noise affecting the quantum devices under consideration.
Such noise-adapted QEC strategies are known to be less resource-intensive~\cite{leung,fletcher_AD,hui_prabha}, but they also require non-trivial circuit constructions~\cite{gilyen2022} and syndrome extractions~\cite{ak_ft}, in contrast to the simpler Pauli measurements for standard stabilizer codes. Thus, an immediate question to address is whether such noise-specific techniques can be scaled up to suppress more errors and perform long computations reliably. One possible approach towards such a scale-up could be to concatenate a noise-adapted code tailored to the dominant noise in the system with another QEC code of the same or a different type. Alternatively, one could use channel-adapted techniques on lattice-based codes like the Bacon-Shor code~\cite{piedrafita2017reliable} and surface codes, which have the ability to tolerate more errors depending on their lattice size. The NISQ era has also spawned the development of newer ideas such as \emph{quantum error mitigation}~\cite{cao2021nisq}, which focuses on the limited goal of reducing the effective noise levels in near-term quantum devices, rather than implementing a complete QEC protocol. This in turn opens up the possibility of merging such hardware-specific error mitigation strategies with standard QEC and fault tolerance~\cite{qem2022}. Going forward, efficient and powerful optimization techniques could lead us to better adaptive QEC strategies with optimal encodings and recovery. For instance, optimization involving machine learning~\cite{reinforcement} and other learning-based approaches, including a quantum variational strategy~\cite{cao2022quantum}, are already being explored. In the coming years, we expect to realize qubits with longer coherence times and gates with higher accuracy in the lab. In the meantime, lessons from noise-adapted QEC and fault tolerance will continue to give us insights into how well we can make use of NISQ devices, and help us develop strategies that will enable the transition from noisy quantum devices to robust and scalable quantum computing architectures. \section{Declarations} Funding and/or Conflicts of interests/Competing interests: On behalf of both authors, the corresponding author states that there is no conflict of interest. \end{document}
\begin{document} \title{A reference ball based iterative algorithm for imaging acoustic obstacle from phaseless far-field data} \author{ Heping Dong\thanks{School of Mathematics, Jilin University, Changchun, P. R. China. {\it [email protected]}}, Deyue Zhang\thanks{School of Mathematics, Jilin University, Changchun, P. R. China. {\it [email protected]} (Corresponding author)}\ \ and Yukun Guo\thanks{Department of Mathematics, Harbin Institute of Technology, Harbin, P. R. China. {\it [email protected]}}} \maketitle \begin{abstract} In this paper, we consider the inverse problem of determining the location and the shape of a sound-soft obstacle from the modulus of the far-field data for a single incident plane wave. By adding a reference ball artificially to the inverse scattering system, we propose an iterative scheme, based on a system of nonlinear integral equations, to reconstruct both the location and the shape of the obstacle. The reference ball technique incurs only a small extra computational cost, but breaks the translation invariance and brings in information about the location of the obstacle. Several validating numerical examples are provided to illustrate the effectiveness and robustness of the proposed inversion algorithm. \end{abstract} \noindent{\it Keywords} Inverse obstacle scattering problem, Phaseless data, Reference ball, Helmholtz equation, Nonlinear integral equation \section{Introduction} It is well known that inverse scattering problems (ISP) play a crucial role in a wide variety of realistic applications such as radar sensing, ultrasound tomography, biomedical imaging, geophysical exploration and noninvasive detection. Therefore, effective and efficient numerical inversion approaches have been extensively and intensively studied in recent decades \cite{DR-shu2}. In typical inverse scattering problems, reconstruction of the geometrical information is based on the knowledge of full measured data (both the intensity and phase information are collected). However, in a number of practical scenarios, measuring the full data is extremely difficult or the phase information might even be unavailable. Thus, phaseless inverse scattering problems arise naturally and attract great attention from both the mathematical and numerical points of view. Without the phase information, the theoretical justification of the uniqueness issue and the development of inversion algorithms are more challenging than for the phased inverse scattering problem. Various numerical methods have been proposed for solving phaseless inverse acoustic obstacle scattering problems. Kress and Rundell \cite{RW1997} investigate the phaseless inverse obstacle scattering problem and propose a Newton method for imaging a two-dimensional sound-soft obstacle from the modulus of the far-field data with only one incoming direction. In particular, it is pointed out that the location of the obstacle cannot be reconstructed since the modulus of the far-field pattern is invariant under translations. This means that the solution of the inverse problem is not unique. If the location of the obstacle is known, then a uniqueness result is presented in \cite{LZ2010} for a special case, that is, a sound-soft ball can be determined by the phaseless far-field pattern when the product of the wavenumber and the radius of the ball is less than a certain constant. In \cite{Ivanyshyn2007}, a nonlinear integral equation method is investigated for the two-dimensional shape reconstruction from a given incident field and the modulus of the far-field data.
This method involves the full linearization of a pair of integral equations, i.e., the field equation and the phaseless data equation, with respect to both the boundary parameterization and the density. Then, the nonlinear integral equation method is extended to the three-dimensional shape reconstruction in \cite{OR2010}. The problem is divided into two subproblems: the shape reconstruction from the modulus of the far-field data, and the location identification based on the invariance relation using only a few far-field measurements in the backscattering direction. In addition, a fundamental solution method \cite{KarageorghisAPNUM} and a hybrid method \cite{Lee2016} are proposed to detect the shape of a sound-soft obstacle by using the modulus of the far-field data for one incident field. Besides the aforementioned phaseless inverse acoustic obstacle scattering, there exist other types of phaseless inverse scattering problems as well as the relevant numerical methods. Based on the physical optics approximation and the local maximum behavior of the backscattering far-field pattern, a numerical method is developed in \cite{Li2017} to reconstruct an electromagnetic polyhedral PEC obstacle from a few phaseless backscattering measurements. In \cite{CH2016}, a direct imaging method based on reverse time migration is proposed for reconstructing extended obstacles from phaseless electromagnetic scattering data. A continuation method \cite{BLL2013} is developed to reconstruct the periodic grating profile from phaseless scattered data. In \cite{Bao2016}, a recursive linearization algorithm is proposed to image the multi-scale rough surface with tapered incident waves from measurements of the multi-frequency phaseless scattered data. In \cite{ChengJin2017}, an iterative method based on the Rayleigh expansion is developed to solve the inverse diffraction grating problem with super-resolution by using phased or phaseless near-field data. For a recent work on the Fourier method for solving the phaseless inverse source problem, we refer to \cite{ZGLL18}. We also refer to \cite{Ammari2016, Kli2017, KR2016a, KR2016b} for phaseless inverse medium scattering problems. A major difficulty in solving the phaseless inverse obstacle scattering problem is to determine the location of the obstacle from phaseless far-field data, since the modulus of the far-field pattern is invariant under translations. Recently, a recursive Newton-type iteration method is developed in \cite{ZhangBo2017} to recover both the location and the shape of the obstacle simultaneously from multi-frequency phaseless far-field data. In this numerical method, sets of superpositions of two plane waves with different directions are used as the incident fields to break the translation invariance property of the phaseless far-field pattern. In this paper, we consider the inverse problem of determining the location and the shape of a sound-soft obstacle from the modulus of the far-field data for a single incident plane wave. In a recent work \cite{GDM2018}, the nonlinear integral equation method proposed by Johansson and Sleeman \cite{TB2007} is extended to reconstruct the shape of a sound-soft crack by using phaseless far-field data for one incident plane wave.
Motivated by \cite{GDM2018, TB2007} and the reference ball technique in \cite{LiJingzhi2009, ZG18}, we first introduce an artificial reference ball into the inverse scattering system, and propose an iterative scheme, which involves a system of nonlinear and ill-posed integral equations, to reconstruct both the location and the shape of the obstacle. Since the location of the reference ball is known and fixed, it calibrates the scattering system so that the translation invariance no longer occurs. As a result, the location information of the obstacle can be recovered with negligible additional computational cost. To the best of our knowledge, this is the first attempt in the literature towards reconstructing both the location and the shape of the unknown obstacle by using the intensity-only far-field data due to a single incident plane wave. The novelty of this work lies in the incorporation of the reference ball technique into the iterative scheme. Hence, our approach exhibits the following salient features: First, rather than a superposition of different incident waves, only a single incident plane wave with a fixed wavenumber and a fixed incoming direction is needed. Second, the iteration procedure is very fast and easy to implement because the Fr\'{e}chet derivatives can be explicitly formulated and thus no forward solver is required. Third, the location and profile information of the obstacle can be simultaneously reconstructed. Finally, the iterative scheme is robust in the sense that it is insensitive to parameters such as the initial guess, the location and size of the reference ball, as well as the measurement noise. The rest of this paper is organized as follows. In the next section, we formulate the phaseless inverse obstacle scattering problem in conjunction with the reference ball. In Section 3, we derive a system of boundary integral equations with a reference ball, and present an iterative scheme to solve these boundary integral equations. In Section 4, several numerical examples are presented, including the numerical implementation details. Finally, some concluding remarks are summarized in Section 5. \section{Problem formulation} Let $D\subset\mathbb{R}^2$ be an open bounded domain with $\mathcal{C}^2$ boundary and let the positive constant $\kappa$ be the wavenumber. Given an incident plane wave $u^i=\mathrm{e}^{\mathrm{i}\kappa x\cdot d}$ with the incoming direction $d$, the forward/direct obstacle scattering problem is to find the total field $u$ satisfying the Helmholtz equation \begin{equation}\label{Helmequ} \Delta u + \kappa^2u = 0, \quad\text{in}~\mathbb{R}^2\backslash\overline{D}, \end{equation} and the Dirichlet boundary condition \begin{equation}\label{DirichletBC} u=0, \quad \text{on}\ \partial D. \end{equation} The total field $u=u^i+u^s$ is given as the sum of the known incident wave $u^i$ and the unknown scattered wave $u^s$, which is required to fulfill the Sommerfeld radiation condition \begin{equation}\label{SRC} \lim_{r=|x|\to\infty}r^{1/2}\left(\frac{\partial u^s}{\partial r}-\mathrm{i}\kappa u^s\right)=0.
\end{equation} Further, the scattered field $u^s$ has the following asymptotic behavior \cite{DR-shu2} $$ u^s(x) = \frac{\mathrm{e}^{\mathrm{i}\kappa|x|}}{\sqrt{|x|}}\left\{u^\infty(\hat{x}) + \mathcal{O}\left(\frac{1}{|x|}\right)\right\},~~~\text{as}~|x|\rightarrow\infty, $$ uniformly for all directions $\hat{x}:=x/|x|$, where the complex-valued function $u^\infty(\hat{x})$ defined on the unit circle $\Omega$ is known as the far-field pattern or scattering amplitude. Then, the phaseless inverse scattering problem is stated as follows: \begin{problem}[Phaseless ISP]\label{problem_1} Given an incident plane wave $u^i$ for a single wavenumber and a single incident direction $d$, together with the corresponding phaseless far-field data $|u_D^\infty(\hat x)|, \ \hat x\in\Omega,$ due to the unknown obstacle $D$, determine the location and shape of $\partial D$. \end{problem} It has been pointed out in \cite{RW1997} that Problem \ref{problem_1} does not admit a unique solution, namely, the location of the obstacle cannot be reconstructed since the modulus of the far-field pattern is invariant under translations. Specifically, for the shifted domain $D^h:=\{x+h: x\in D\}$ with a fixed vector $h\in\mathbb{R}^2$, the far-field pattern $u_{D^h}^\infty$ satisfies the relation $$ u_{D^h}^\infty(\hat{x},d)=\mathrm{e}^{\mathrm{i}\kappa h\cdot(d-\hat{x})}u_{D}^\infty(\hat{x},d). $$ Moreover, the ambiguity induced by this translation invariance relation cannot be remedied by using a finite number of incident waves with different wavenumbers or different incident directions, see \cite{RW1997}. Our goal in this paper is to overcome this difficulty via the reference ball based iterative scheme. To this end, we reformulate Problem \ref{problem_1} as follows: \begin{figure} \caption{An illustration of the reference ball technique.} \label{fig:illustration} \end{figure} \begin{problem}[Phaseless ISP with a reference ball]\label{problem_2}\ Let $B\subset\mathbb{R}^2$ be an artificially added sound-soft ball such that $D\cap B=\emptyset$. Given an incident plane wave $u^i$ for a single wavenumber and a single incident direction $d$, together with the corresponding phaseless far-field data $|u_{D\cup B}^\infty(\hat x)|, \hat x\in\Omega,$ due to the scatterer $D\cup B$, determine the location and shape of $\partial D$. \end{problem} The geometry setting of Problem \ref{problem_2} is illustrated in Fig. \ref{fig:illustration}. For brevity, we denote $\Gamma_1:=\partial D$ and $\Gamma_2:=\partial B$ in what follows. In the next section, we will introduce an iterative scheme based on a system of nonlinear integral equations for solving Problem \ref{problem_2}. After that, several numerical experiments will be conducted to demonstrate the feasibility of breaking the translation invariance using the proposed method. \section{The inversion scheme} \subsection{Nonlinear integral equations} We begin this section by establishing a system of nonlinear integral equations for the inverse scattering problem. Denote the fundamental solution of the Helmholtz equation by $$ \Phi(x,y)=\frac{\mathrm{i}}{4}H_0^{(1)}(\kappa|x-y|), \quad x\neq y, $$ where $H_0^{(1)}$ denotes the zero-order Hankel function of the first kind. By applying Huygens' principle \cite[Theorem 3.14]{DR-shu2} to the scattering system $D\cup B$, we obtain \begin{equation} u(x)= u^i(x)-\sum\limits_{j=1,2}\int_{\Gamma_j}\frac{\partial u}{\partial\nu}(y)\Phi(x,y)\,\mathrm{d}s(y), \quad x\in \mathbb{R}^2\backslash\overline{D\cup B}.
\label{Huygens1} \end{equation} Then the far-field pattern of the scattered field $u^s$ is given by \begin{equation} u^\infty(\hat{x})=-\gamma\sum\limits_{j=1,2}\int_{\Gamma_j}\frac{\partial u}{\partial \nu}(y)\mathrm{e}^{-\mathrm{i}\kappa\hat{x}\cdot y}\,\mathrm{d}s(y),\quad\hat{x}\in\Omega, \label{Huygens2} \end{equation} where $\gamma=\mathrm{e}^{\mathrm{i}\pi/4}/\sqrt{8\kappa\pi}$ and $\nu$ denotes the unit outward normal. We define the single-layer operators $$ (S_{jl} g)(x):=\int_{\Gamma_j}\Phi(x,y) g(y)\,\mathrm{d}s(y), \quad x\in\Gamma_l, \quad j,l=1,2 $$ and the corresponding far-field operators $$ (S^\infty_{j}g)(\hat{x}):=-\gamma\int_{\Gamma_j}\mathrm{e}^{-\mathrm{i}\kappa\hat{x}\cdot y}g(y)\,\mathrm{d}s(y), \quad \hat{x}\in\Omega,\quad j=1,2. $$ Letting $x\in \mathbb{R}^2\backslash\overline{D\cup B}$ tend to the boundaries $\Gamma_1$ and $\Gamma_2$, respectively, and using \eqref{Huygens1}, the Dirichlet boundary condition \eqref{DirichletBC} and the continuity of the single-layer potential, one readily deduces the following field equations \begin{align} S_{11}g_1+S_{21}g_2 & =u^i\quad \textrm{on}\ \Gamma_1, \label{MHuygens1} \\ S_{12}g_1+S_{22}g_2 & =u^i\quad \textrm{on}\ \Gamma_2, \label{MHuygens3} \end{align} with respect to the densities $g_j=\partial u/{\partial\nu|_{\Gamma_j}}, j=1,2$. On the other hand, letting $|x|\to\infty$, the linearity of the forward scattering problem and equation \eqref{Huygens2} lead to the phaseless data equation \begin{equation}\label{MHuygens2} |S^\infty_1 g_1+S^\infty_2 g_2|^2=|u_{D\cup B}^\infty|^2 \quad\textrm{on}\ \Omega. \end{equation} \subsection{The iterative procedure} We now seek a sequence of approximations to $\Gamma_1$ by solving the field equations \eqref{MHuygens1}--\eqref{MHuygens3} and the phaseless data equation \eqref{MHuygens2} in an alternating manner. Given an approximation of the boundary $\Gamma_1$, one can solve \eqref{MHuygens1}--\eqref{MHuygens3} for $g_1$ and $g_2$. Then, keeping $g_1$ and $g_2$ fixed, the equation \eqref{MHuygens2} is linearized with respect to $\Gamma_1$ to update the boundary approximation. For the sake of simplicity, the boundary $\Gamma_1$ is assumed to be a starlike curve with the parametrized form \begin{equation}\label{obstacle} \Gamma_1=\{p_1(\hat{x})=c+r(\hat{x})\hat{x}: ~c=(c_1,c_2),\ \hat{x}\in\Omega\}, \end{equation} while the boundary $\Gamma_2$ is parameterized by $$ \Gamma_2=\{p_2(\hat{x})=b+R\hat{x}: ~b=(b_1,b_2),\ \hat{x}\in\Omega\}, $$ where the unit circle is parameterized by $$ \Omega=\{\hat{x}(t)=(\cos t, \sin t): ~0\leq t< 2\pi\}. $$ Let $p_j(t)$ be the points on $\Gamma_j$, described by $p_1(t):=(c_1,c_2)+r(t)(\cos t, \sin t)$ and $p_2(t):=(b_1,b_2)+R(\cos t, \sin t)$, where $0\leq t<2\pi$. Further, we introduce the parameterized single-layer operators $S_{jl}$ and far-field operators $S^\infty_j$ by \begin{align}\label{A} A_{jl}(p_l,\psi_j)(t)=\frac{\mathrm{i}}{4}\int_0^{2\pi}H_0^{(1)}(\kappa|p_l(t)-p_j(\tau)|)\psi_j(\tau)\,\mathrm{d}\tau,\quad j,l=1,2 \end{align} and \begin{align}\label{Ainfty} A^\infty_j(p_j,\psi_j)(t)&=-\gamma\int_0^{2\pi}\mathrm{e}^{-\mathrm{i}\kappa\hat{x}(t)\cdot p_j(\tau)}\psi_j(\tau)\,\mathrm{d}\tau,\quad j=1,2 \end{align} where we have set $\psi_j(\tau)=G_j(\tau)g_j(p_j(\tau))$. Here, $G_1(\tau)=\big(r^2(\tau)+\big(\frac{dr}{d\tau}(\tau)\big)^2\big)^{1/2}$ and $G_2(\tau)=R$ denote the Jacobians of the transformations. The corresponding right-hand sides are $w_l(t)=u^i(p_l(t))$ and $w^\infty(t)=u_{D\cup B}^\infty(\hat{x}(t))$.
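For later reference in the numerical section, a minimal NumPy sketch of these parametrizations and of a trapezoidal-rule evaluation of the far-field operators $A^\infty_j$ might look as follows (the radial function $r$ and its derivative are assumed to be supplied as callables by the user):
\begin{verbatim}
# Sketch: boundary parametrizations and the far-field operator A_j^infty,
# discretized with an equidistant trapezoidal rule.  r(t) and dr(t) are
# user-supplied callables for the radial function and its derivative.
import numpy as np

def p1(t, c, r):                      # starlike obstacle boundary Gamma_1
    return np.array([c[0] + r(t) * np.cos(t), c[1] + r(t) * np.sin(t)])

def p2(t, b, R):                      # reference ball boundary Gamma_2
    return np.array([b[0] + R * np.cos(t), b[1] + R * np.sin(t)])

def jacobian_G1(t, r, dr):            # G_1(t) = sqrt(r(t)^2 + r'(t)^2)
    return np.sqrt(r(t) ** 2 + dr(t) ** 2)

def far_field_operator(t, psi, p, n, kappa):
    """Trapezoidal-rule approximation of A_j^infty(p_j, psi_j)(t)."""
    gamma = np.exp(1j * np.pi / 4) / np.sqrt(8 * kappa * np.pi)
    tau = np.pi * np.arange(2 * n) / n
    xhat = np.array([np.cos(t), np.sin(t)])
    phases = np.exp(-1j * kappa * (xhat @ np.array([p(s) for s in tau]).T))
    return -gamma * (np.pi / n) * np.sum(phases * psi)
\end{verbatim}
For instance, \texttt{far\_field\_operator(t, psi1, lambda s: p1(s, c, r), n, kappa)} evaluates $A^\infty_1(p_1,\psi_1)(t)$ from the grid values \texttt{psi1} of $\psi_1$.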
Thus we can obtain the parametrized integral equations \eqref{MHuygens1}--\eqref{MHuygens2} in the form \begin{align} A_{11}(p_1,\psi_1)+A_{21}(p_1,\psi_2) & =w_1, \label{PHuygens1}\\ A_{12}(p_2,\psi_1)+A_{22}(p_2,\psi_2) & =w_2, \label{PHuygens3}\\ |A^\infty_1(p_1,\psi_1)+A^\infty_2(p_2,\psi_2)|^2 & =|w^\infty|^2. \label{PHuygens2} \end{align} The linearization of equation \eqref{PHuygens2} with respect to $p_1$ requires the Fr\'{e}chet derivative of the operator $A_1^\infty$, that is, \begin{align}\label{FAinfty} \left(A'^\infty_1[p_1,\psi_1]q\right)(t) = & \mathrm{i}\kappa\gamma\int_0^{2\pi} \mathrm{e}^{-\mathrm{i}\kappa\hat{x}(t)\cdot p_1(\tau)}\hat{x}(t)\cdot q(\tau)\psi_1(\tau)\,\mathrm{d}\tau \nonumber \\ = & \mathrm{i}\kappa\gamma\int_0^{2\pi}\exp(-\mathrm{i}\kappa(c_1\cos t+c_2\sin t+r(\tau)\cos(t-\tau))) \nonumber \\ &\qquad\quad\cdot(\Delta c_1\cos t+\Delta c_2\sin t +\Delta r(\tau) \cos(t-\tau))\psi_1(\tau)\,\mathrm{d}\tau, \end{align} where the update is $q(\tau)=(\Delta c_1, \Delta c_2)+\Delta r(\tau)(\cos\tau,\sin\tau)$. Hence, by using the product rule, we have \begin{align*} &\left(\overline{A^\infty_1(p_1,\psi_1)+A^\infty_2(p_2,\psi_2)} \big(A^\infty_1(p_1,\psi_1)+A^\infty_2(p_2,\psi_2)\big)\right)'q \\ =&2\Re\left(\overline{A^\infty_1(p_1,\psi_1)+A^\infty_2(p_2,\psi_2)} A'^\infty_1[p_1,\psi_1]q\right), \end{align*} and the linearization of \eqref{PHuygens2} leads to \begin{equation}\label{LHuygens2} Bq=f, \end{equation} where \begin{align*} Bq:= & 2\Re\left(\overline{A^\infty_1(p_1,\psi_1)+A^\infty_2(p_2,\psi_2)}A'^\infty_1[p_1,\psi_1]q\right), \\ f:= & |w^\infty|^2-|A^\infty_1(p_1,\psi_1)+A^\infty_2(p_2,\psi_2)|^2. \end{align*} As usual for iterative algorithms, a stopping criterion is necessary to terminate the iteration in the numerical implementation. Regarding our iterative procedure, we choose the following relative error estimator \begin{equation} E_k:=\frac{\||w^\infty|^2-|A^\infty_1(p^{(k)}_1,\psi_1)+A^\infty_2(p_2,\psi_2)|^2\|_{L^2}} {\||w^\infty|^2\|_{L^2}}\leq\epsilon \label{relativeerror} \end{equation} for some sufficiently small parameter $\epsilon>0$ depending on the noise level. Here, $p^{(k)}_1$ is the $k$th approximation of the boundary $\Gamma_1$. We are now in a position to present the reference ball based iteration algorithm: \begin{table}[ht] \centering \begin{tabular}{cp{.8\textwidth}} \toprule \multicolumn{2}{l}{{\bf Algorithm:}\quad Iterative procedure for phaseless inverse scattering} \\ \midrule {\bf Step 1} & With mild a priori information on the unknown scatterer $D$, add a suitable reference ball $B$ such that $D\cap B=\emptyset$; \\ {\bf Step 2} & Send an incident plane wave with a fixed wavenumber $\kappa>0$ and a fixed incident direction $d\in\Omega$, and then collect the corresponding noisy phaseless far-field data $|u_{D\cup B}^\infty(\hat x)|, \hat x\in\Omega$ for the scatterer $D\cup B$; \\ {\bf Step 3} & Select an initial star-like curve $\Gamma^{(0)}$ for the boundary $\partial D$ and the error tolerance $\epsilon$. Set $k=0$; \\ {\bf Step 4} & For the curve $\Gamma^{(k)}$, find the densities $\psi_1$ and $\psi_2$ from \eqref{PHuygens1}--\eqref{PHuygens3}; \\ {\bf Step 5} & Solve \eqref{LHuygens2} to obtain the updated approximation $\Gamma^{(k+1)}:=\Gamma^{(k)}+q$ and evaluate the error $E_{k+1}$ defined in \eqref{relativeerror}; \\ {\bf Step 6} & If $E_{k+1}\geq\epsilon$, then set $k=k+1$ and go to Step 4. Otherwise, the current approximation $\Gamma^{(k+1)}$ serves as the final reconstruction of $\partial D$.
\\ \bottomrule \end{tabular} \end{table} \begin{remark} The advantage of introducing the reference ball is that we are able to obtain the location of the obstacle via this iterative method, since the update information $(\Delta c_1, \Delta c_2)$ about the location of the obstacle is contained in the term $\Re(\overline{A^\infty_2(p_2,\psi_2)}A'^\infty_1[p_1,\psi_1]q)$ of equation \eqref{LHuygens2}. \end{remark} \begin{remark} For typical iterative schemes based on nonlinear integral equations (see, e.g., \cite{Ivanyshyn2007}), the linearization is carried out with respect to both the boundary and the density functions, so the processes of solving for the unknowns are essentially intertwined. In contrast, we would like to emphasize that our inversion scheme requires the linearization of the phaseless data equation only with respect to the boundary; hence the field equations and the phaseless data equation can be completely decoupled and then solved alternately and separately in each iterative loop. Moreover, the Fr\'{e}chet derivatives in this paper do not rely on any solution process and can be given explicitly. Therefore, the numerical implementation is significantly simplified. \end{remark} \section{Numerical experiments} \subsection{Discretization} In the following, we describe the full discretizations of \eqref{PHuygens1}--\eqref{PHuygens3} and \eqref{LHuygens2}, respectively. Let $\tau_j^n:=\pi j/n$, $j=0,\cdots,2n-1$ be an equidistant set of quadrature knots, and set $w_{l,s}^n=w_l(\tau_s^n)$ and $\psi_{l,j}^n=\psi_l(\tau_j^n)$ for $s,j=0,\cdots,2n-1$ and $l=1,2$. By using the Nystr\"{o}m method \cite{DR-shu2}, we see that the full discretization of \eqref{PHuygens1}--\eqref{PHuygens3} is of the form \begin{align*} w_{1,s}^n= & \sum_{j=0}^{2n-1}\left(R_{|s-j|}^{n}K^1_1(\tau_s^n,\tau_j^n)+ \frac{\pi}{n}K^1_2(\tau_s^n,\tau_j^n)\right)\psi_{1,j}^n \\ & +\frac{\pi}{n}\sum_{j=0}^{2n-1}\frac{\mathrm{i}}{4} H_0^{(1)}(\kappa|p_1(\tau_s^n)-p_2(\tau_j^n)|)\psi_{2,j}^n, \end{align*} \begin{align*} w_{2,s}^n= & \sum_{j=0}^{2n-1}\big(R_{|s-j|}^{n}K^2_1(\tau_s^n,\tau_j^n) +\frac{\pi}{n}K^2_2(\tau_s^n,\tau_j^n)\big)\psi_{2,j}^n \\ &+\frac{\pi}{n}\sum_{j=0}^{2n-1}\frac{\mathrm{i}}{4} H_0^{(1)}(\kappa|p_2(\tau_s^n)-p_1(\tau_j^n)|)\psi_{1,j}^n, \end{align*} where \begin{align*} R_j^{n}:=&-\frac{2\pi}{n}\sum_{m=1}^{n-1}\frac{1}{m}\cos\frac{mj\pi}{n}-\frac{(-1)^j\pi}{n^2}, \\ K^l_1(t,\tau)=& -\dfrac{1}{4\pi}J_0(\kappa|p_l(t)-p_l(\tau)|), \quad l=1,2, \\ K^l_2(t,\tau)=& K^l(t,\tau)-K^l_1(t,\tau)\ln\big(4\sin^2\frac{t-\tau}{2}\big), \quad l=1,2, \end{align*} with $K^l(t,\tau):=\frac{\mathrm{i}}{4}H_0^{(1)}(\kappa|p_l(t)-p_l(\tau)|)$ denoting the kernel of $A_{ll}$, and the diagonal term can be deduced as $$ K^l_2(\tau,\tau)=\frac{\mathrm{i}}{4}-\frac{E_c}{2\pi}-\frac{1}{2\pi}\ln\big(\frac{\kappa}{2}G_l(\tau)\big),\quad l=1,2 $$ with the Euler constant $E_c=0.57721\cdots$. We proceed by discussing the discretization of the linearized equation \eqref{LHuygens2} using Newton's method with least squares \cite{Kress2003}. As the finite-dimensional space for the approximation of the radial function $r$ and its update $\Delta r$, we choose the space of trigonometric polynomials of the form \begin{equation}\label{updataq} \Delta r(\tau)=\sum_{m=0}^M\alpha_m\cos{m\tau}+\sum_{m=1}^M\beta_m\sin{m\tau}, \end{equation} where the integer $M>1$ denotes the truncation order.
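For readers implementing this discretization, a minimal NumPy sketch of the quadrature ingredients above (the knots $\tau_j^n$, the logarithmic weights $R_j^n$, and the assembly of a weakly singular block from the splitting $K^l=K^l_1\ln\big(4\sin^2\frac{t-\tau}{2}\big)+K^l_2$) might look as follows; the kernel functions are passed in as callables, since they depend on the boundary parametrizations:
\begin{verbatim}
# Sketch: quadrature ingredients for the Nystrom discretization.
# R_j^n are the weights for the logarithmic singularity; pi/n is the
# trapezoidal weight for the smooth part and for the off-boundary blocks.
import numpy as np

def quadrature_knots(n):
    return np.pi * np.arange(2 * n) / n          # tau_j^n = pi*j/n

def log_weights(n):
    j = np.arange(2 * n)
    m = np.arange(1, n)[:, None]                  # m = 1, ..., n-1
    R = -(2 * np.pi / n) * np.sum(np.cos(m * j * np.pi / n) / m, axis=0) \
        - ((-1.0) ** j) * np.pi / n**2
    return R                                      # R[|s-j|] in the text

def nystrom_block(n, K1, K2):
    """Assemble the 2n x 2n matrix for a weakly singular kernel split as
    K(t,tau) = K1(t,tau)*log(4 sin^2((t-tau)/2)) + K2(t,tau)."""
    tau = quadrature_knots(n)
    R = log_weights(n)
    A = np.empty((2 * n, 2 * n), dtype=complex)
    for s in range(2 * n):
        for j in range(2 * n):
            A[s, j] = R[abs(s - j)] * K1(tau[s], tau[j]) \
                      + (np.pi / n) * K2(tau[s], tau[j])
    return A
\end{verbatim}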
For simplicity, we reformulate the parameterized operators $A_{jl}, A^\infty_j$ by introducing the following definitions \begin{align*} M_1(t,\tau)=&-\gamma\exp\big(-\mathrm{i}\kappa(c_1\cos t+c_2\sin t+r(\tau)\cos(t-\tau))\big)\psi_1(\tau),\\ M_2(t,\tau)=&-\gamma\exp\big(-\mathrm{i}\kappa(b_1\cos t+b_2\sin t+R\cos(t-\tau))\big)\psi_2(\tau),\\ L_1(t,\tau)=&\mathrm{i}\kappa\gamma\exp\big(-\mathrm{i}\kappa(c_1\cos t+c_2\sin t+r(\tau)\cos(t-\tau))\big)\cos t~ \psi_1(\tau),\\ L_2(t,\tau)=&\mathrm{i}\kappa\gamma\exp\big(-\mathrm{i}\kappa(c_1\cos t+c_2\sin t+r(\tau)\cos(t-\tau))\big)\sin t~ \psi_1(\tau),\\ L_3(t,\tau)=&\mathrm{i}\kappa\gamma\exp\big(-\mathrm{i}\kappa(c_1\cos t+c_2\sin t+r(\tau)\cos(t-\tau))\big)\cos(t-\tau)~\psi_1(\tau). \end{align*} Then, by combining \eqref{Ainfty}, \eqref{FAinfty} and \eqref{LHuygens2}, we obtain the linear system \begin{align} f(\tau_s^n)=&(B_1\Delta c_1)(\tau_s^n)+(B_2\Delta c_2)(\tau_s^n)\nonumber \\ &+\sum_{m=0}^M\alpha_m(B_3\cos{m\tau})(\tau_s^n)+\sum_{m=1}^{M}\beta_m(B_3\sin{m\tau})(\tau_s^n)\label{RLHuygens2} \end{align} to be solved to determine the real coefficients $\Delta c_1$, $\Delta c_2$, $\alpha_m$ and $\beta_m$, where \begin{align*} (B_i\chi(\tau))(t)=2\Re\left\{\overline{\int_0^{2\pi} \Big(M_1(t,\tau)+M_2(t,\tau)\Big)\mathrm{d}\tau}\int_0^{2\pi}L_i(t,\tau) \chi(\tau)\mathrm{d}\tau\right\} \end{align*} with $i=1,2,3$. In general, $2M+3\ll 2n$ and, due to the ill-posedness, the overdetermined system \eqref{RLHuygens2} is solved via Tikhonov regularization, that is, by minimizing the penalized defect \begin{align} & \sum_{s=0}^{2n-1}\big|(B_1\Delta c_1)(\tau_s^n)+(B_2\Delta c_2)(\tau_s^n) \nonumber \\ &+\sum_{m=0}^M\alpha_m(B_3\cos{m\tau})(\tau_s^n) +\sum_{m=1}^{M}\beta_m(B_3\sin{m\tau})(\tau_s^n)-f(\tau_s^n)\big|^2 \nonumber \\ & +\lambda\bigg(|\Delta c_1|^2+|\Delta c_2|^2+2\pi\big[\alpha_0^2+\frac{1}{2}\sum_{m=1}^M(1+m^2)^2(\alpha_m^2+\beta_m^2)\big]\bigg) \label{RLHuygens3} \end{align} with a positive regularization parameter $\lambda$ and an $H^2$ penalty term. In view of the trapezoidal rule \begin{equation}\label{traperule} \int_0^{2\pi}f(\tau)\,\mathrm{d}\tau\approx\frac{\pi}{n}\sum_{j=0}^{2n-1}f(\tau_j^n), \end{equation} one obtains the approximation of $(B_i\chi(\tau))(\tau_s^n)$, that is, for $i=1,2,3$, \begin{align*} \widetilde{B}_i^{\chi}(\tau_s^n)\approx 2\,\Re\left\{\overline{\frac{\pi}{n}\sum_{j=0}^{2n-1} \Big(M_1(\tau_s^n,\tau_j^n)+M_2(\tau_s^n,\tau_j^n)\Big)}\, \frac{\pi}{n}\sum_{j=0}^{2n-1}L_i(\tau_s^n,\tau_j^n)\,\chi(\tau_j^n)\right\}, \end{align*} where the function $\chi(\tau)$ is chosen to be the constant $\Delta c_1$, $\Delta c_2$ or the function $\cos(m\tau)$, $\sin(m\tau)$. Then, the approximations are denoted in turn by $\widetilde{B}_1$, $\widetilde{B}_2$, $\widetilde{B}_3^{cos,m}$ and $\widetilde{B}_3^{sin,m}$. Furthermore, we set \begin{align*} \widetilde{B}=&(\widetilde{B}_1,\widetilde{B}_2,\widetilde{B}_3^{cos, 0},\cdots,\widetilde{B}_3^{cos,M},\widetilde{B}_3^{sin,1}, \cdots,\widetilde{B}_3^{sin,M})_{(2n)\times(2M+3)},\\ \xi=&(\Delta c_1, \Delta c_2, \alpha_0,\cdots, \alpha_M,\beta_1,\cdots, \beta_M)^\top, \\ \widetilde{I}= & \mathrm{diag}\{1, 1, 2\pi, \pi(1+1^2)^2, \cdots, \pi(1+M^2)^2, \pi(1+1^2)^2, \cdots, \pi(1+M^2)^2\}, \\ \widetilde{f}=&(f(\tau_0^n),\cdots, f(\tau_{2n-1}^n))^\top. \end{align*} Thus, the minimizer of \eqref{RLHuygens3} is given by the unique solution of \begin{align}\label{EqualRLHuygens3} \lambda \widetilde{I}\xi+\widetilde{B}^*\widetilde{B}\xi=\widetilde{B}^*\widetilde{f}.
\end{align} \subsection{Numerical examples} \begin{figure} \caption{Reconstructions of an apple-shaped domain with $1\%$ noise and $\epsilon=0.015$.} \label{4.1} \end{figure} \begin{figure} \caption{Reconstructions of an apple-shaped domain with $5\%$ noise and $\epsilon=0.035$.} \label{4.2} \end{figure} \begin{figure} \caption{Reconstructions of an apple-shaped domain with different reference balls and $1\%$ noise.} \label{4.3} \end{figure} \begin{figure} \caption{Reconstructions of an apple-shaped domain with different incoming wave directions, $1\%$ noise is added.} \label{4.4} \end{figure} \begin{figure} \caption{Reconstructions of an apple-shaped domain with different locations of the reference ball, $1\%$ noise is added.} \label{4.5} \end{figure} \begin{figure} \caption{Reconstructions of a peanut-shaped domain with $1\%$ noise and $\epsilon=0.015$.} \label{4.6} \end{figure} \begin{figure} \caption{Reconstructions of a peanut-shaped domain with $5\%$ noise and $\epsilon=0.035$.} \label{4.7} \end{figure} \begin{figure} \caption{Reconstructions of a peanut-shaped domain with different reference balls, $1\%$ noise is added.} \label{4.8} \end{figure} In this subsection, we present three numerical examples to illustrate the feasibility of the iterative reconstruction method. To avoid committing an inverse crime, the synthetic data is numerically generated at 64 points, i.e., $n=32$, by the combined double- and single-layer potential method \cite{DR-shu2}. The noisy data $|u^{\infty,\delta}|^2$ is constructed in the following way \begin{align*} |u^{\infty,\delta}|^2=|u^\infty|^2(1+\delta\eta), \end{align*} where $\eta$ are normally distributed random numbers ranging in $[-1,1]$ and $\delta>0$ is the relative noise level. In the numerical examples, we obtain the update $\xi$ from a scaled Newton step with Tikhonov regularization and $H^2$ penalty term, that is, $$ \xi=\rho(\lambda\widetilde{I}+\widetilde{B}^*\widetilde{B})^{-1}\widetilde{B}^*\widetilde{f}, $$ where the scaling factor $\rho\geq0$ is fixed throughout the iterations. According to \cite{OT2007}, the regularization parameters $\lambda$ in equation \eqref{EqualRLHuygens3} are chosen as \[ \lambda_k:=\Big\||w^\infty|^2-|A^\infty_1(p^{(k-1)}_1,\psi^{(k-1)}_1) +A^\infty_2(p_2,\psi^{(k-1)}_2)|^2\Big\|_{L^2},\ k=1,2,\cdots. \] Note that in each step of the iteration, the derivative $\mathrm{d}r/\mathrm{d}\tau$ is calculated by resorting to the approximation \eqref{updataq} and $r^{(k)}(\tau)=r^{(k-1)}(\tau)+{\Delta r}^{(k-1)}(\tau)$, $k=1,2,\cdots$. Analogously to \cite{RW1997}, the initial approximation is chosen as a circle, and the values of the Fourier modes in the directions $\cos\theta$ and $\sin\theta$ (i.e., the coefficients $\alpha_1$ and $\beta_1$) are set to their exact values in the reconstructions. The reason for doing so is to ease the comparison with the exact curve, so the update procedure does not take these two modes into account. In the subsequent figures, the exact boundary curves are displayed as solid lines, the reconstructions are depicted with dashed lines ($--$), and the initial guesses are taken to be circles with radius $r^{(0)}=0.1$, indicated by dash-dotted lines ($\cdot-$). The incident directions are denoted by arrows. Throughout all the numerical examples, we set the wavenumber $\kappa=2$, the scaling factor $\rho=0.6$, and the parameter $M=5$.
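As an illustration of this update step, a minimal NumPy sketch (assuming that the matrix $\widetilde B$ and the residual vector $\widetilde f$ have already been assembled as in the previous subsection) of the scaled, regularized Newton step and of the evaluation of the trigonometric update $\Delta r$ might look as follows:
\begin{verbatim}
# Sketch: one scaled, Tikhonov-regularized Newton step for the coefficient
# vector xi = (dc1, dc2, alpha_0..alpha_M, beta_1..beta_M).
import numpy as np

def h2_penalty(M):
    # diag{1, 1, 2*pi, pi*(1+m^2)^2 (m=1..M), pi*(1+m^2)^2 (m=1..M)}
    w = [1.0, 1.0, 2.0 * np.pi]
    w += [np.pi * (1.0 + m**2) ** 2 for m in range(1, M + 1)]
    w += [np.pi * (1.0 + m**2) ** 2 for m in range(1, M + 1)]
    return np.diag(w)

def newton_update(B, f, M, rho=0.6):
    """xi = rho * (lambda*I_tilde + B^* B)^{-1} B^* f, with a residual-based
    choice of the regularization parameter lambda."""
    lam = np.linalg.norm(f)                 # discrete analogue of lambda_k
    lhs = lam * h2_penalty(M) + B.conj().T @ B
    return rho * np.linalg.solve(lhs, B.conj().T @ f)

def delta_r(tau, xi, M):
    """Evaluate the trigonometric update Delta r(tau) from the coefficients."""
    alpha, beta = xi[2:M + 3], xi[M + 3:]
    m, k = np.arange(M + 1), np.arange(1, M + 1)
    return alpha @ np.cos(np.outer(m, tau)) + beta @ np.sin(np.outer(k, tau))
\end{verbatim}
The new center and radial function are then obtained as $c^{(k)}=c^{(k-1)}+(\Delta c_1,\Delta c_2)$ and $r^{(k)}=r^{(k-1)}+\Delta r$, as described above.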
\begin{figure} \caption{Reconstructions of a peanut-shaped domain with different initial guesses and $1\%$ noise. Here, we are using the reference ball $(b_1, b_2)=(4, 0), R=0.4$ and $\epsilon=0.015$.} \label{4.9} \end{figure} \begin{figure} \caption{Reconstructions of a rectangle-shaped domain with $1\%$ and $5\%$ noise data, respectively.} \label{4.10} \end{figure} \begin{figure} \caption{Reconstructions of a rectangle-shaped domain with different initial guesses ($1\%$ noise is added). Here, we are using the incoming wave direction $d=(\cos(2\pi/3),\sin(2\pi/3))$, the reference ball $(b_1, b_2)=(4, 0), R=0.5$ and $\epsilon=0.015$.} \label{4.11} \end{figure} \begin{figure} \caption{Reconstructions of a rectangle-shaped domain with different reference balls, $1\%$ noise is added. Here, the incoming wave direction is $d=(\cos(\pi/6), \sin(\pi/6))$.} \label{4.12} \end{figure} \begin{example}\label{E1} \rm In the first example, we aim to reconstruct an apple-shaped obstacle with the parametrized boundary $$ p_1(t)=\frac{0.55(1+0.9\cos{t}+0.1\sin{2t})}{1+0.75\cos{t}}(\cos{t},\sin{t}), \quad t\in [0,2\pi]. $$ Here the incident direction $d=(\cos(-\pi/6),\sin(-\pi/6))$ is used in Fig. \ref{4.1}--\ref{4.3}. Several snapshots of the iterative process are shown in Fig. \ref{4.1} and Fig. \ref{4.2}, where the initial guess is the circle centered at $(-0.7, 0.45)$ with radius 0.1, and the reference ball is centered at $(4,0)$ with radius 0.4. Moreover, the relative $L^2$ error $Er_k$ between the reconstructed and exact boundaries and the error $E_k$ defined in \eqref{relativeerror} are also presented with respect to the number of iterations. As we can see from the figures, the trends of the two error curves are basically the same for larger numbers of iterations, so the choice of the stopping criterion is reasonable. The reconstructions with different reference balls for the incoming wave directions $d=(\cos(-\pi/6),\sin(-\pi/6))$ and $d=(\cos(4\pi/3),\sin(4\pi/3))$ are shown in Fig. \ref{4.3} and Fig. \ref{4.5}, respectively, and the reconstructions with different incoming wave directions are presented in Fig. \ref{4.4}. As shown in these results, the location and shape of the obstacle can be simultaneously and satisfactorily reconstructed. \end{example} \begin{example}\label{E2} \rm For the second example, we consider the scattering by a peanut-shaped obstacle described by $$ p_1(t)=0.275\sqrt{3\cos^2{t}+1}(\cos{t},\sin{t}), \quad t\in[0,2\pi]. $$ In this example, the reconstructions with $1\%$ noise and $5\%$ noise, corresponding to the incoming wave direction $d=(\cos(2\pi/3),\sin(2\pi/3))$, are shown in Fig. \ref{4.6} and Fig. \ref{4.7}, respectively. Here we use the initial guess $(c_1^{(0)},c_2^{(0)})=(0.3, -0.6), r^{(0)}=0.1$ and the reference ball $(b_1, b_2)=(4, 0), R=0.4$. The relative $L^2$ error $Er_k$ and the error $E_k$ are also presented in the figures. The reconstructions with different reference balls for the incoming wave direction $d=(\cos(\pi/6),\sin(\pi/6))$ are shown in Fig. \ref{4.8}, and the reconstructions with different initial guesses for the incoming wave direction $d=(\cos(2\pi/3),\sin(2\pi/3))$ are shown in Fig. \ref{4.9}. \end{example} \begin{example}\label{E3} \rm Our third example is intended to reconstruct a rounded rectangle given by $$ p_1(t)=\frac{9}{20}\left(\cos^{10}t+\frac{2}{3}\sin^{10}t\right)^{-1/10}(\cos{t},\sin{t}), \quad t\in[0,2\pi].
$$ In this example, the reconstructed obstacle and the relative errors $Er_k$ and $E_k$ obtained from the phaseless far-field data with $1\%$ noise and $5\%$ noise, corresponding to the incoming wave direction $d=(\cos(\pi/6),\sin(\pi/6))$, are shown in Fig. \ref{4.10}. The influences of the choices of initial guesses and reference balls are shown in Fig. \ref{4.11} and Fig. \ref{4.12}, respectively. \end{example} The above numerical results illustrate that, by adding a reference ball artificially to the inverse scattering system, the iteration method gives a feasible reconstruction of the location and the shape of the obstacle from phaseless far-field data for one incident field. The reference ball incurs only a small extra computational cost, but breaks the translation invariance and brings information about the location of the obstacle. In addition, a promising feature of the algorithm is that the Fr\'{e}chet derivatives involved in our method can be explicitly characterized as integral operators and thus easily evaluated. Hence the method does not require the solution of the forward problem in each iteration step, and it is easy to implement and computationally efficient. To evaluate the computational time, our codes for the experiments are written in MATLAB and run on a laptop with a 2.6 GHz CPU. Since the CPU time for all the reconstructions is less than 10 seconds, we conclude that our algorithm is very fast. \section{Conclusions and future works} In this paper, a new numerical method is devised to solve the inverse obstacle scattering problem from the modulus of the far-field data for one incident field. That is, by introducing a reference ball artificially into the inverse scattering system, the translation invariance of the phaseless far-field pattern can be broken, and an iterative scheme which is based on a system of boundary integral equations is proposed to reconstruct both the location and the shape of the obstacle. The reference ball incurs only a small extra computational cost, but breaks the translation invariance and brings in the location information of the obstacle. The numerical implementation details of the iterative scheme are described, and the numerical examples illustrate that the iterative method yields satisfactory reconstructions. Concerning our future work, the proposed methodology could be extended directly to the case of recovering a sound-hard or impedance obstacle, as well as to the three-dimensional case. Although our numerical results show that the proposed novel iterative scheme works very well, the corresponding theoretical justifications of convergence and stability are still open. In other words, the mathematical analysis of this method is beyond the scope of our current work and deserves future investigation. In addition, we would also like to study the applicability of this approach to imaging crack-like scatterers from phaseless data. Moreover, we believe that the reference ball based iteration approach is also a feasible technique for solving the phaseless inverse electromagnetic scattering problem. \end{document}
\begin{document} \title{Tensor Program Optimization with Probabilistic Programs} \begin{abstract} Automatic optimization for tensor programs becomes increasingly important as we deploy deep learning in various environments, and efficient optimization relies on a rich search space and effective search. Most existing efforts adopt a search space that domain experts cannot efficiently grow. This paper introduces MetaSchedule{}, a domain-specific probabilistic programming language abstraction to construct a rich search space of tensor programs. Our abstraction allows domain experts to analyze the program and easily propose stochastic choices in a modular way to compose program transformations accordingly. We also build an end-to-end learning-driven framework to find an optimized program for a given search space. Experimental results show that MetaSchedule{} can cover the search space used in the state-of-the-art tensor program optimization frameworks in a modular way. Additionally, it empowers domain experts to conveniently grow the search space and modularly enhance the system, which brings a 48\% speedup on end-to-end deep learning workloads. \end{abstract} \section{Introduction\label{sec:intro}} Deep learning has become pervasive in daily life. From video understanding~\cite{liu2021video}, natural language understanding~\cite{devlin-etal-2019-bert}, and recommendation systems~\cite{DLRM19} to autonomous driving~\cite{huang2020autonomous}, different deep learning models are deployed on different hardware platforms and devices. Deep learning frameworks usually rely on manually optimized libraries~\cite{chetlur2014cudnn,intel2017mkldnn} to accelerate deployment. Engineers need to choose from many tensor programs that are logically equivalent but differ significantly in performance due to memory access, threading, and the use of specialized hardware primitives. The engineering effort required for tensor program optimization has become a significant bottleneck for machine learning deployment with the growing number of models and hardware backends. Automatic program optimization~\cite{chen2018learning, zheng2020ansor, adams2019learning} is a recent line of efforts that aims to use machine learning to solve this problem. There are two vital elements of automatic tensor program optimization. First, a search space is defined to provide a set of possible equivalent tensor programs. Then existing systems use learning-based search algorithms to find an optimized tensor program in the search space with feedback from the deployment environment. Most of the current approaches~\cite{chen2018learning, zheng2020ansor, adams2019learning, li2020adatune, baghdadi2021deep} use pre-defined search spaces that effectively encode the domain knowledge of the authors \emph{once} and focus on developing efficient search algorithms. While efficient search is essential, the search space itself fundamentally limits the best possible performance that search algorithms can attain. To construct a good search space, domain experts have to make numerous choices over loop transformation, vectorization, threading patterns, and hardware acceleration. Additionally, the best search space itself evolves as new tensor program optimization techniques~\cite{Winograd} and hardware primitives~\cite{nvidia2017tensorcore} emerge. As a result, there is a strong need to enable easy customization and construction of the search space at scale by taking inputs from system engineers and domain experts.
Unfortunately, any change to search space construction currently requires surgical modifications to the automatic program optimization frameworks. This research asks the following question: \textit{can we decouple the search space construction from search and provide an adequate abstraction for domain experts and the learning system to collaborate on search space construction?} We give an affirmative answer to this question with two key observations. First, we can parameterize an optimization search space by the initial program followed by a sequence of transformations on the program. Next, using this parameterization, domain experts can provide probabilistic choices that represent possible transformations after examining the program state. These two observations lead to a simple yet powerful abstraction for search space construction through a domain-specific probabilistic language. Finally, our framework composes multiple possible probabilistic transformations to form a rich search space. We make the following contributions: \begin{itemize} \item We introduce a simple yet powerful probabilistic language abstraction for tensor program search space construction. \item We build a learning-driven framework to find optimized tensor programs specified by the search space constructed using our abstraction. \item We build an end-to-end system that can take prior knowledge from domain experts to construct optimization search spaces to optimize deep learning deployment on multiple platforms. \end{itemize} Our end-to-end system can easily expand the search space to match previous approaches without surgical changes and achieves comparable performance on multiple hardware backends. Experimental results show that our abstraction is expressive enough to cover the optimization space of a diverse set of tensor programs, delivering competitive performance for popular deep learning models, and convenient for incorporating hardware-specific knowledge into the search space to outperform state-of-the-art frameworks. \section{Background and Problem Overview\label{sec:problem-formulation}} \begin{figure} \caption{Automatic tensor program optimization contains two key elements: the search space $S(e_0)$ and the search algorithm that finds the optimal tensor program $e^\star$. The search space usually incorporates choices over loop transformation, vectorization, threading patterns, and hardware acceleration.} \label{fig:sample-tensor-program} \end{figure} \autoref{fig:sample-tensor-program} shows a typical workflow for tensor program optimization. For a given program $e_0$, a typical tensor program optimization framework will generate candidates from a pre-defined search space $S(e_0)$ containing semantically-equivalent programs. Then the framework finds an optimized tensor program $e^\star \in S(e_0)$ with the minimum latency on the target hardware. A typical search space $S(e_0)$ contains choices over threading, loop ordering, memory access, and hardware primitives. Defining the search space $S(e_0)$ for a wide range of tensor programs brings several challenges. First, $S(e_0)$ is highly dependent on $e_0$. For example, the $S(e_0)$ of a compute-intensive program (e.g., \texttt{Dense}) needs to consider many more possible configurations than that of a communication-intensive program such as \texttt{ReLU}. The space also differs significantly across hardware domains. For example, $S(e_0)$ on CPU involves multi-core parallelism and vectorization, while $S(e_0)$ on GPU involves thread binding and tensorization.
Finally, as the hardware and model settings change, we need to bring in fresh domain experts to update $S(e_0)$ to leverage the latest improvements. This paper aims to provide a programmable abstraction to construct $S(\cdot)$ in a composable and modular way. Our key goals are listed as follows: \textbf{Expressiveness.} We need to be able to build a rich search space that covers the optimized programs that domain experts would write. \textbf{Modularity.} Tensor program optimization will likely involve inputs from multiple domain experts over different periods. Therefore, we need to be able to combine prior knowledge in a composable and modular way. \textbf{Designed for learning.} We need to build a generic learning-based framework that enables diverse variations of the cost model and search algorithm for any search space specified in our abstraction. We will address the above goals in the following two sections. \section{Composable Search Space Parameterization\label{sec:our-method}} This section presents MetaSchedule{}, a probabilistic approach to search space parameterization. \subsection{Stochastic Search Space Construction\label{sec:stochastic-transformation}} \begin{figure} \caption{Parameterizing programs with the initial program and a sequence of transformations.} \label{fig:relu} \end{figure} MetaSchedule{} constructs a search space $S(\cdot)$ with stochastic program transformations as the primitive building blocks. Traditional program optimization can usually be represented by a sequence of \textit{transformations} $\tau$, where at step $i$, the program $e_{i - 1}$ is transformed into a semantically-equivalent program $e_i$, which finally leads to the optimized program $e_n$. MetaSchedule{} generalizes this idea by allowing further parameterization of each transformation step in $\tau$. Taking \autoref{fig:relu} as an example: $e_0$ is the initial program for the program $B = \texttt{ReLU}(A)$\footnote{In practice, we ingest models from PyTorch/TensorFlow/JAX. See Appendix~\ref{sec:integration-with-dl-framework} for details.}. In MetaSchedule{}, the transformation $t_1 = \texttt{Split}$ is parameterized by a loop $i$ and a sequence of integers indicating the loop extents after splitting; similarly, the transformations $t_2 = \texttt{Parallelize}$ and $t_3 = \texttt{Vectorize}$ are each parameterized by a loop. As a result, an optimized program $e_n$ is obtained by applying a sequence of parameterized transformations $\tau$ to the initial program $e_0$. Accordingly, the search space $S(e_0)$ is composed of $e_0$ and all possible sequences of parameterized transformations. \begin{figure} \caption{The MetaSchedule{} program discussed in the text.} \label{fig:matmul-tiling-fusion} \end{figure} On the other hand, it can be impractical for practitioners to determine the best combination of parameter values in these transformations. For instance, in \autoref{fig:relu}, it is usually efficient to use 512-bit vectorization over the inner loop when AVX-512\footnote{A single x86 instruction that performs the computation on a 512-bit vector in one CPU cycle.} vector instructions are available on Intel CPUs; otherwise, other vectorization lengths may lead to better performance. Therefore, deep knowledge of the target hardware is mandatory to enumerate plausible parameter combinations that control the search space size while covering the optimal program.
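To make this parameterization concrete, a minimal sketch of such a hand-written transformation sequence for the \texttt{ReLU} example might look as follows (Python-like pseudocode; the schedule object \texttt{sch} and its primitive names mirror \autoref{fig:relu}, and the exact API is hypothetical):
\begin{verbatim}
# Sketch of a deterministic transformation sequence for B = ReLU(A).
# `sch` and its primitives are hypothetical stand-ins for the Split /
# Parallelize / Vectorize transformations described in the text.
def optimize_relu(sch):
    i, = sch.get_loops("relu")                 # the single loop over elements
    i0, i1 = sch.split(i, factors=[None, 16])  # t1: Split, hard-coded factor 16
    sch.parallelize(i0)                        # t2: Parallelize the outer loop
    sch.vectorize(i1)                          # t3: Vectorize the inner loop
    return sch
\end{verbatim}
The hard-coded inner extent (here, a vectorization length of $16$) is precisely the kind of choice that requires hardware expertise; the next subsection replaces such literals with sampled random variables.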
To let practitioners efficiently define parameterized transformations without worrying about candidate values, MetaSchedule{} introduces \textit{random variables} obtained from program \textit{analysis} and \textit{sampling}. Parameterized by random variables, a transformation naturally becomes stochastic, and the underlying probabilistic space reflects the space of possible transformations. As illustrated in \autoref{fig:matmul-tiling-fusion}, when creating \texttt{Split} transformations to tile loops $i$ and $j$ in the \texttt{Dense} operator, the tile sizes are drawn as random variables $\theta_{0-3}$ defined by \texttt{Sample-Tile}. In this way, the \texttt{Split} transformation becomes stochastic. Similarly, we use \texttt{Sample-Compute-Location} to enumerate the valid loops in \texttt{Dense}, after splitting, at which the computation of \texttt{ReLU} can be fused. In summary, a 7-line MetaSchedule{} program covers a family of possible optimized tensor programs with stochastic transformations in its search space $S(e_0)$, where $e_0$ is \texttt{Dense-ReLU}. Notably, unlike the orthogonal grid spaces used in hyperparameter tuning, MetaSchedule{} captures long-range structural and arithmetic dependencies between the random variables and the tensor program $e_i$ being transformed. As demonstrated by Step~\textcircled{\scriptsize{2}} in \autoref{fig:matmul-tiling-fusion}, the sampling distribution of \texttt{Sample-Compute-Location} depends on the latest tensor program $e_5$, whose structure depends on all previous random variables. \subsection{Modular Search Space Composition\label{sec:modular-composition}} Although the search space constructed from the stochastic transformations proposed in the previous subsection is efficient and capable of covering the optimal tensor program, it is hard for other developers to understand how the search space is constructed by reading a long sequence of transformations. This also makes transformations designed for one workload hard to reuse for other workloads. Meanwhile, we observe that it is straightforward to group a sub-sequence of transformations for a particular fine-grained optimization. For example, some transformations implement multi-level tiling for better memory locality in compute-intensive operators like \texttt{Conv2d} and \texttt{Dense}; some other transformations are used to fold/inline elementwise operations such as activation functions into their predecessors or successors for better memory bandwidth efficiency. To improve usability and make MetaSchedule{} more practical, we introduce the \textit{transformation module}. Just like the convolutional module with \texttt{Conv2D}, \texttt{BiasAdd} and \texttt{ReLU} in ResNet, a transformation module in MetaSchedule{} is defined as either an atomic stochastic transformation or a composition of program analysis, sampling, and smaller transformations. Each transformation module can have a meaningful name so that it can be easily adopted by many workloads to hierarchically construct a search space. \begin{figure} \caption{Transformation modules. A transformation module consists of tensor program analysis, sampling, and stochastic transformations. The figure uses \texttt{Multi-Level-Tiling} as an example.} \label{fig:transformation-module} \end{figure} \autoref{fig:transformation-module} shows the hierarchical composition of transformation modules. Specifically, \texttt{Multi-Level-Tiling} interleaves program analysis on loops with stochastic tiling of the loop structure, organizing the original tensor program into a 5-level tiling structure.
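The following Python sketch is a hedged illustration (not the paper's implementation) of a small transformation module in this spirit: a simplified two-level tiling whose tile sizes are random variables in the style of \texttt{Sample-Tile}, followed by a \texttt{Sample-Compute-Location}-style choice whose candidates depend on the program state built so far. The helper names and the trace representation are our own placeholders.

\begin{verbatim}
import random

def sample_perfect_tile(extent, num_tiles=2):
    """Sample tile factors whose product equals `extent` (simplified Sample-Tile)."""
    factors, remaining = [], extent
    for _ in range(num_tiles - 1):
        f = random.choice([d for d in range(1, remaining + 1) if remaining % d == 0])
        factors.append(f)
        remaining //= f
    return factors + [remaining]

def tile_and_fuse_module(trace, loop_extents):
    """Toy transformation module: stochastic tiling plus a sampled fuse location."""
    for loop, extent in loop_extents.items():
        theta = sample_perfect_tile(extent)                        # random variables
        trace.append(("split", {"loop": loop, "factors": theta}))
    # Sample-Compute-Location style: candidates depend on the trace built so far.
    outer_loops = [p["loop"] + "_outer" for name, p in trace if name == "split"]
    trace.append(("compute_at", {"block": "relu", "loop": random.choice(outer_loops)}))
    return trace

print(tile_and_fuse_module([], {"i": 128, "j": 128}))
\end{verbatim}

Because the module only reads loop names and extents, the same code can be reapplied to other workloads, which is the reuse property a transformation module is meant to provide.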
Notably, the transformation module is generic to tensor programs and thus could be applied to a variety of operators, including \texttt{conv1d}, \texttt{conv3d}, \texttt{matmul}, etc. \begin{figure} \caption{ Left: An example algorithm to compose transformation modules. A sequence of transformation modules is composed together into a single transformation module. Right: Hierarchical composition of transformation modules gives generic search space. In this example, a hardware-specific module \texttt{Use-Tensor-Core} \label{fig:e2e-compose} \end{figure} \autoref{fig:e2e-compose} depicts an example of composing a search space with transformation modules. In this simple example, we select a set of transformation modules, which are implemented in advance by practitioners with prior domain knowledge, and apply them to every available location in the tensor program to form a search space. Consequently, the formed search space covers common optimizations on diverse hardware. \subsection{Relation to Existing Tensor Program Optimization Methods\label{sec:relation-to-related-work}} In this subsection, we discuss prior approaches for automatic tensor program optimization and illustrate that many of them can be covered by the MetaSchedule{} framework. \textbf{Domain specific languages for program transformations} used by prior frameworks~\cite{ragan2013halide,baghdadi2019tiramisu, chen2018tvm, senanayake2020scheduling} allow developers to easily optimize a program manually. When there is no random variable sampling in the program, MetaSchedule{} reduces to a DSL for deterministic program transformations and achieves the same functionality. \textbf{Template-guided auto-tuning}~\cite{chen2018learning,ahn2020chameleon,li2020adatune,liu2019optimizing} fully relies on developers to define a search space. In MetaSchedule{}, it means all random variables in a search space are defined ahead of the transformations, so there is no interaction between program analysis and follow-up random sampling choices conditioned on the program state. \textbf{Auto-scheduling}~\cite{zheng2020ansor,zheng2020flextensor,adams2019learning,haj2020protuner} requires developers to implement workload agnostic transformation rules. MetaSchedule{} achieves the same programmability and functionality through specific probabilistic transformation modules that correspond to the search space generation rules. Notably, all approaches mentioned above have important use-cases in tensor program optimizations, depending on how much domain knowledge we want to incorporate for a particular scenario. By decoupling the search space construction from the search, we effectively build a single framework for all the use cases and enable further customization without surgical changes to the system. \section{Learning-driven Search} The last section provides a modular abstraction for search space. We still need to do an effective search to find an optimized program within the search space. This section provides a generic learning-driven framework to find an optimized program. \textbf{Objective formalization.} For a given probabilistic program $e_0$, let us use $\tau$ to denote the transformations performed on $e_0$. $\tau$ can be sampled from a prior distribution specified by the probabilistic program. We define $g(e_0, \tau)$ to be the tensor program after applying transformation $\tau$ to $e_0$. Let $f(e)$ be the latency of the particular program $e$ on the hardware environment. 
We define the posterior probability of a transformation sequence $\tau$ as: \begin{equation} P(\tau \;\vert\; e_0) \propto e^{-f(g(e_0, \tau))} \cdot P(\tau).\label{eq:map} \end{equation} Intuitively, we want to assign a higher probability to the programs that perform well. Our final goal is to find $\tau^\star= \mathop{\mathrm{argmax}}_{\tau} P(\tau \;\vert\; e_0)$, i.e., to maximize the posterior through maximum a posteriori (MAP) estimation. \begin{figure} \caption{Execution tracing in MetaSchedule{}.} \label{fig:tracing} \end{figure} \textbf{Execution tracing.} To enable domain experts to express their knowledge via transformation modules productively, we embed MetaSchedule{} in Python. We introduce execution tracing to reduce the cost of repetitive re-execution of the Python program. \autoref{fig:tracing} demonstrates an example tracing process. During program execution, the system records all samplings and transformations while ignoring control flow and other constructs of the host language. The resulting trace is a sequence of MetaSchedule{} primitives with only sampling and transformation instructions, which can be re-executed as a normal MetaSchedule{} program. We can then continue to explore different sampling choices for a given collection of initial traces. Conceptually, this is equivalent to dividing up the support set and then sampling the program conditioned on its execution sequence. \begin{figure} \caption{Learning-driven search based on the traces of MetaSchedule{} programs.} \label{fig:evo-search} \end{figure} \textbf{End-to-end search.} \autoref{fig:evo-search} shows the overall workflow of our learning-driven framework. The search algorithm first samples the MetaSchedule{} program to obtain a collection of traces. Then it continues to explore the space conditioned on these traces. Notably, measuring $f(e)$ directly on the hardware is significantly more costly, so we also incorporate a proxy cost model $\hat{f}(e)$, which is updated throughout the process, similar to previous efforts on tensor program optimization~\cite{chen2018learning,zheng2020ansor}. At each iteration, we adopt an evolutionary search algorithm that proposes a new variant of the trace by mutating the random variables, and then accepts or rejects the proposal based on the cost model. While evolutionary search can be viewed as parallel-chain MCMC, we also made our system modular enough to incorporate other ways to select the probabilistic choices, such as Bayesian optimization and reinforcement learning. \textbf{Cost model.} Our approach accommodates a wide range of cost models, including ones pre-trained on existing datasets~\cite{zheng2021tenset}. We use a tree-boosting-based cost model as $\hat{f}(\cdot)$ by default and leverage a common set of features that are used in previous works~\cite{zheng2020ansor}. \textbf{Trace validation.} Importantly, invalid traces may show up as we propose updates. Such a scenario can happen when some of the random variable choices exceed physical hardware limits, or when a mutated variable induces changes to the execution sequence. Instead of enforcing a conservative proposal, we introduce a validator that checks the correctness of each trace. Trace validation allows us to move around the space more freely while still ensuring that the sampled outcomes lie on the correct support set.
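The sketch below summarizes this loop in self-contained Python; it is a simplified rendering of the workflow in \autoref{fig:evo-search}, not the production implementation. The callables \texttt{mutate}, \texttt{is\_valid}, \texttt{cost\_model}, and \texttt{measure} stand in for trace mutation, trace validation, the learned proxy $\hat{f}$, and on-hardware measurement of $f$, respectively; all are assumed to be supplied by the caller.

\begin{verbatim}
import math
import random

def evolutionary_search(init_traces, mutate, is_valid, cost_model, measure,
                        generations=8, population=32, measure_top_k=4):
    pop = list(init_traces)
    best_trace, best_latency = None, math.inf
    for _ in range(generations):
        # Propose new variants by mutating random-variable choices in existing traces.
        proposals = [mutate(random.choice(pop)) for _ in range(population)]
        proposals = [t for t in proposals if is_valid(t)]        # trace validation
        # Rank cheaply with the proxy cost model instead of hardware runs.
        proposals.sort(key=cost_model)
        pop = proposals[:population] or pop
        # Measure only the most promising candidates on the target hardware.
        for trace in proposals[:measure_top_k]:
            latency = measure(trace)
            if latency < best_latency:
                best_trace, best_latency = trace, latency
        # A full system would also retrain cost_model on the new measurements here.
    return best_trace, best_latency
\end{verbatim}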
\section{Related Work\label{sec:related-work}} Tensor Program Transformations are proposed by many prior works, such as Halide~\cite{ragan2013halide}, TVM~\cite{chen2018tvm}, Tiramisu~\cite{baghdadi2019tiramisu} and TACO~\cite{kjolstad2017tensor, senanayake2020scheduling}. Note that all the previous transformation languages are deterministic and cannot be directly used for search space construction, meaning that they have to introduce a separate programming model to express a search space. This paper makes a simple but powerful generalization to domain-specific probabilistic language. The resulting abstraction enables a unified approach to deterministic transformation and search space construction. Black-box optimization has been adopted in high-performance computing libraries~\cite{frigo1998fftw,atlas2005atlas}. Recent advances in automatic tensor program optimization brought a rich set of techniques to accelerate search through better cost modeling~\cite{chen2018learning, baghdadi2021deep,ryu2021metatune} and learning-based search~\cite{ahn2020chameleon,li2020adatune, adams2019learning, haj2020protuner, zheng2021tenset}, which could be incorporated into MetaSchedule{} search. Different variations of pre-defined search spaces have also been proposed that couple with the automatic tensor program optimization frameworks~\cite{chen2018learning, zheng2020ansor, adams2019learning}. Polyhedral model~\cite{vasilache2018tensor,baghdadi19tiramisu,vasilache2006polyhedral} is one useful way to construct a rich pre-defined search space. This paper focuses on modular search space construction and provides orthogonal contributions to these prior works. Probabilistic programming language is a powerful abstraction for incorporating domain knowledge and probabilistic inference. There are many general-purpose probabilistic languages, such as Church~\cite{goodman2012church}, Stan~\cite{carpenter2017stan}, Pyro~\cite{bingham2019pyro}, NumPyro~\cite{phan2019composable}, PyMC3~\cite{salvatier2016probabilistic} and Edward~\cite{tran2018simple}. This paper proposes a domain-specific probabilistic language for tensor program optimization with specializations such as tensor program analysis that would otherwise be opaque to the previous systems. Our learning-driven search can be viewed as an application of previous works~\cite{yang2014generating,ritchie2016c3,wingate2011lightweight, pmlr-v119-zhou20e} that use tracing to divide programs into subprograms with fixed support. We focus on the MAP inference problem where the posterior depends on an unknown cost function. We solve the problem through a learned cost-model-driven evolutionary search over traces and with validation. Automatic neural program synthesis ~\cite{chen2021latent, gupta2020synthesize, chen2018executionguided} has seen large progress recently. Alphacode~\cite{li2022competition} builds a system that can output creative and sound solutions to problems that require deeper-level human understanding. These approaches generate abstract syntax trees (ASTs) that can be incorrect and use input-output pairs to filter out those erroring programs. Our compiler approach requires us to ensure the correctness of all transformations. However, some ideas like validation after creation might be reusable. \section{Experiments\label{sec:experiment}} \begin{figure} \caption{Operator- and subgraph-level performance. MetaSchedule{} \label{fig:perf_op} \end{figure} \begin{figure} \caption{Optimizing end-to-end deep learning models. 
MetaSchedule{} compared with TVM and PyTorch.} \label{fig:perf_e2e} \end{figure} \subsection{Expressiveness to Cover Common Optimization Techniques\label{sec:op-cover}} This section aims to answer the following question: \emph{Is MetaSchedule{} expressive enough to capture the search space of state-of-the-art optimization techniques?} To answer this question, we evaluate our work on a diverse set of operators and subgraphs extracted from popular deep learning models, including variants of convolution, dense, and normalization. As baselines, PyTorch (v1.11.0) results are provided to compare performance with vendor libraries; TVM (commit: \texttt{8d4f4dd73f}), which incorporates AutoTVM~\cite{chen2018learning} and Ansor~\cite{zheng2020ansor}, is used as the state-of-the-art tensor program optimization system, and we pick the better of the two in each respective setup. Full operator and hardware configurations are documented in Appendix~\ref{sec:workloads}. \autoref{fig:perf_op} shows that, in all cases on CPU and GPU, MetaSchedule{} delivers performance comparable with or even better than TVM, from which we infer that MetaSchedule{} can express optimization techniques comparable to TVM's on diverse workloads. Additionally, MetaSchedule{} outperforms PyTorch by a significant margin in most cases, except for SFM, which is highly optimized manually in PyTorch. \subsection{Optimizing End-to-End Deep Learning Models\label{sec:e2e}} Operator performance does not always translate directly to full model optimization. Therefore, this section is dedicated to answering the following question: \emph{Can MetaSchedule{} deliver competitive performance with state-of-the-art works for end-to-end models?} To this end, a series of experiments is conducted to compare MetaSchedule{} and TVM on models including BERT-Base~\cite{devlin2018bert}, ResNet-50~\cite{he2016resnet}, and MobileNet-v2~\cite{sandler2018mobilenetv2}, on both CPU and GPU. As shown in \autoref{fig:perf_e2e}, MetaSchedule{} performance is on par with TVM, while surpassing PyTorch in all cases, which indicates that the MetaSchedule{} framework delivers competitive end-to-end performance. Additionally, tuning time is provided in Appendix~\ref{sec:tuning-time}. \subsection{Search Space Composition and Hardware-Specific Modules\label{sec:tensor-core}} \begin{figure} \caption{Performance with different search spaces.} \label{fig:composite_rules} \caption{BERT-Large Performance.} \label{fig:tensorize_e2e} \caption{Left: Search space composition conducted on a representative subgraph of BERT called \texttt{fused-dense}. Right: End-to-end BERT-Large performance with the \texttt{Use-Tensor-Core} module.} \label{fig:perf_custom} \end{figure} Besides performance parity with existing work, in this section, we demonstrate the extra value of modular search space composition by answering the following question: \emph{How convenient is it to compose transformation modules, and how does it translate to performance?} We design an ablation study for transformation module composition. As indicated in \autoref{fig:composite_rules}, by progressively enriching the search space, the performance of optimized tensor programs consistently increases. When composed with the hardware-specific module \texttt{Use-Tensor-Core}, MetaSchedule{} delivers significantly better performance than the generic search space. The performance gain, brought by search space composition with customized rules, does translate to end-to-end model performance, as shown in \autoref{fig:tensorize_e2e}. Specifically, on BERT-large workloads, MetaSchedule{} with \texttt{Use-Tensor-Core} delivers 48\% speedup over TVM.
Notably, it took a graduate student only 2 days to craft the 82-line \texttt{Use-Tensor-Core} module in Python (see supplementary materials), which provides strong evidence of the convenience of customization and composition. More details are in Appendix~\ref{sec:cost-of-construct-search-space}. \section{Conclusion\label{sec:conclusion}} This paper presents MetaSchedule{}, a programming model to describe search space construction in tensor program optimization. Our method abstracts search space as a probabilistic language and enables flexible incorporation of domain knowledge by allowing practitioners to implement customized probabilistic programs. A learning-driven search algorithm is developed on top of the probabilistic language abstraction, which delivers competitive performance with state-of-the-art frameworks. In the future, we will explore and modularize declarative API for various hardware environments. Therefore, we will open-source our framework and hope it could enable broader collaboration between the machine learning deployment engineers and intelligent machine learning algorithms for tensor programs. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{}{} \item Did you describe the limitations of your work? \answerYes{See section~\ref{sec:conclusion}} \item Did you discuss any potential negative societal impacts of your work? \answerNo{To the best of our knowledge, there is no potential negative societal impact of our work, given the only focus is to accelerate existing machine learning models.} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerNA{Our work does not include theoretical results.} \item Did you include complete proofs of all theoretical results? \answerNA{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerNo{We will not include the URL for codebase for anonymity, and will release the link after the review process.} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{Operator configurations and hyperparameters for evolutionary search are shown in the Appendix.} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerNo{}{The variance of running time of tensor programs is negligible across several runs.} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{}{We use the codes of TVM and PyTorch which are both cited.} \item Did you mention the license of the assets? \answerNo{}{TVM is under Apache 2.0 license. PyTorch is under Modified BSD license.} \item Did you include any new assets either in the supplemental material or as a URL? \answerNo{}{We did not use any new assets.} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{}{We did not use data obtained from other people.} \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{}{We did not use data contains personally identifiable information or offensive content.} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{We did not use any crowdsourcing or conduct any research with human subjects.} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate} \appendix \section{Appendix\label{sec:appendix}} \subsection{Environment Setup for Experiments} All CPU experiments are conducted on AWS C5.9xlarge instances with Intel Xeon Platinum 8124M CPUs. All GPU experiments are done with NVIDIA GeForce RTX 3070 graphics cards. 
\subsection{Workload Configurations in the Evaluation\label{sec:workloads}} \begin{itemize} \item C1D (1D Convolution): batch=1, length=256, input channel=64, output channel=128, kernel size=3, stride=2, padding=1 \item C2D (2D Convolution): batch=1, height=224, width=224, input channel=3, output channel=64, kernel size=7, stride=2, padding=3 \item C3D (3D Convolution): batch=1, depth=16, height=224, width=224, input channel=3, output channel=64, kernel size=7, stride=2, padding=3 \item DEP (Depthwise Convolution): batch=1, height=112, width=112, channel=32, kernel size=3, stride=1, padding=1 \item DIL (Dilated Convolution): batch=1, height=224, width=224, input channel=3, output channel=64, kernel size=7, stride=2, padding=3, dilation=2 \item GMM (Matrix Multiply): batch=1, N=M=K=128 \item GRP (Group Convolution): batch=1, height=56, width=56, input channel=64, output channel=128, kernel size=3, stride=2, padding=1, groups=4 \item T2D (Transposed 2D Convolution): batch=1, height=4, width=4, input channel=512, output channel=256, kernel size=4, stride=2, padding=1 \item CBR (2D Convolution + Batch Norm + ReLU): batch=1, height=224, width=224, input channel=3, output channel=64, kernel size=7, stride=2, padding=3 \item TBG (Transpose + Matrix Multiply): batch=1, seq=128, head=12, dim=64 \item NRM (Norm): batch=1, m=256, n=256 \item SFM (Softmax): batch=1, m=256, n=256 \end{itemize} \subsection{Use-Tensor-Core Search Space Definition Code\label{sec:tensor_core_trace}} The full 82-line module is provided in the supplementary materials. \subsection{Search Space Construction and Customization for New Hardware\label{sec:cost-of-construct-search-space}} Our previous experience suggests that adapting to new hardware would require months of effort and deep expertise in the compilation stack, across code generation, transformations, and search in the legacy codebase. Take TensorCore GPUs as an example. Unfortunately, Ansor in TVM does not support TensorCore, and it is highly non-trivial to extend its support to cover TensorCore without a dramatic revamp to its IR; similarly, AutoTVM does not deliver comparable TensorCore performance without a dramatic re-design of its schedule tree, even though it supports a limited set of tensor intrinsics (e.g., WMMA). In both cases, even assuming the core design were upgraded, we anticipate months of engineering effort, plus additional time to first become familiar with the codebase. MetaSchedule{} avoids such coupled complexity and requires no prerequisite experience with code generation or the existing transformations, since each transformation is modeled as an independent primitive in the probabilistic language. Incorporating TensorCore, as an independent program transformation, could be engineered within 2 days by a graduate student, and then composed into the existing system without having to re-design or affect any existing functionality. \subsection{Tuning Time\label{sec:tuning-time}} MetaSchedule{} makes an orthogonal contribution as it is a probabilistic language for composable search space construction rather than speeding up tuning.
To provide a fair comparison of tuning speed, we reproduced Ansor’s search space in our MetaSchedule{} probabilistic language in~\autoref{tab:tuning-time} \begin{table}[ht] \begin{center} \begin{tabular}{|l|l|l|} \hline & TVM Ansor (min) & MetaSchedule{} (min) \\ \hline ResNet-50 & 287.17 & 220.66 \\ BERT-base & 292.45 & 288.4644 \\ MobileNet-v2 & 280.53 & 251.8831 \\ GPT-2 & 270.97 & 214.1358 \\ Inception-v1 & 295.766 & 290.3632 \\ \hline \end{tabular} \end{center} \caption{Tuning time comparison.} \label{tab:tuning-time} \end{table} \subsection{End-to-End Integration Workflow with Deep Learning Framework\label{sec:integration-with-dl-framework}} From frontend frameworks, for example, TensorFlow, PyTorch, or JAX, the tensor program to be optimized is generated from their computational graph. The generation process is generic to the shapes/ranks of tensors. The MetaSchedule{} program is generated from the tensor program obtained, based on which the search algorithm is performed. \subsection{Available Transformations Primitives\label{sec:available-transformation-primitives}} \begin{table}[ht] \begin{center} \begin{tabular}{|c|p{250pt}|} \hline Transformation & Explanation\tabularnewline \hline split & Split a loop into a sequence of consecutive loops\tabularnewline \hline fuse & Fuse a sequence of consecutive loops into one\tabularnewline \hline reorder & Reorder a sequence of loops\tabularnewline \hline parallel & Parallelize a loop across CPU cores\tabularnewline \hline vectorize & Vectorize a loop with SIMD\tabularnewline \hline unroll & Unroll a loop\tabularnewline \hline bind & Bind a loop to a GPU thread\tabularnewline \hline cache-read & Create a block that reads a buffer region into a read cache\tabularnewline \hline cache-write & Create a block that writes a buffer region into a write cache\tabularnewline \hline compute-at & Move a producer block under the specific loop\tabularnewline \hline compute-inline & Inline a block into its consumer(s)\tabularnewline \hline rfactor & Factorize an associative reduction block by the specified loop\tabularnewline \hline storage-align & Set alignment requirement for specific dimension of a buffer\tabularnewline \hline set-scope & Set the storage scope of a buffer\tabularnewline \hline add-unit-loop & Create a new unit loop on top of the specific block\tabularnewline \hline re-index & Create a block that read/write a buffer region into a read/write cache with reindexing\tabularnewline \hline reverse-compute-at & Move a consumer block under the specific loop\tabularnewline \hline reverse-compute-inline & Inline a block into its only producer\tabularnewline \hline decompose-reduction & Decompose a reduction block into two separate blocks\tabularnewline \hline blockize & Convert the subtree rooted at a specific loop into a block\tabularnewline \hline tensorize & Tensorize the computation enclosed by loop with the tensor intrin\tabularnewline \hline annotate & Annotate a block/loop with a key value pair\tabularnewline \hline unannotate & Unannotate a block/loop's\tabularnewline \hline transform-layout & Apply a transformation to buffer layout, represented by an index map \tabularnewline \hline transform-block-layout & Apply a transformation to block layout, represented by an index map\tabularnewline \hline decompose-padding & Decompose a padding block into a block filling const pad values and a block writing in-bound values\tabularnewline \hline sample-categorical & Sample an integer given the probability distribution\tabularnewline \hline 
sample-perfect-tile & Sample the factors to perfect tile a specific loop\tabularnewline \hline sample-compute-location & Sample a compute-at location of the given block\tabularnewline \hline \end{tabular} \end{center} \caption{All available transformation primitives.} \label{tab:available-transformation-primitives} \end{table} \end{document}
\begin{document} \title{ The Boundary for Quantum Advantage in Gaussian Boson Sampling } \author{Jacob F. F. Bulmer} \email{these authors contributed equally:\\ [email protected], [email protected]} \affiliation{Quantum Engineering Technology Labs, University of Bristol, Bristol, UK} \author{Bryn A. Bell} \email{these authors contributed equally:\\ [email protected], [email protected]} \affiliation{Ultrafast Quantum Optics group, Department of Physics, Imperial College London, London, UK} \author{Rachel S. Chadwick} \affiliation{Quantum Engineering Technology Labs, University of Bristol, Bristol, UK} \affiliation{Quantum Engineering Centre for Doctoral Training, University of Bristol, Bristol, UK} \author{Alex E. Jones} \affiliation{Quantum Engineering Technology Labs, University of Bristol, Bristol, UK} \author{Diana Moise} \affiliation{Hewlett Packard Enterprise, Switzerland} \author{Alessandro Rigazzi} \affiliation{Hewlett Packard Enterprise, Switzerland} \author{Jan Thorbecke} \affiliation{Hewlett Packard Enterprise, the Netherlands} \author{Utz-Uwe Haus} \affiliation{HPE HPC EMEA Research Lab, Wallisellen, Schweiz} \author{Thomas Van Vaerenbergh} \affiliation{Hewlett Packard Labs, HPE Belgium, Diegem, Belgium} \author{Raj B. Patel} \affiliation{Ultrafast Quantum Optics group, Department of Physics, Imperial College London, London, UK} \affiliation{Department of Physics, University of Oxford, Oxford, UK} \author{Ian A. Walmsley} \affiliation{Ultrafast Quantum Optics group, Department of Physics, Imperial College London, London, UK} \author{Anthony Laing} \email{[email protected]} \affiliation{Quantum Engineering Technology Labs, University of Bristol, Bristol, UK} \begin{abstract} Identifying the boundary beyond which quantum machines provide a computational advantage over their classical counterparts is a crucial step in charting their usefulness. Gaussian Boson Sampling (GBS), in which photons are measured from a highly entangled Gaussian state, is a leading approach in pursuing quantum advantage. State-of-the-art quantum photonics experiments that, once programmed, run in minutes, would require 600 million years to simulate using the best pre-existing classical algorithms. Here, we present substantially faster classical GBS simulation methods, including speed and accuracy improvements to the calculation of loop hafnians, the matrix function at the heart of GBS. We test these on a $\sim \! 100,000$ core supercomputer to emulate a range of different GBS experiments with up to 100 modes and up to 92 photons. This reduces the run-time of classically simulating state-of-the-art GBS experiments to several months---a nine orders of magnitude improvement over previous estimates. Finally, we introduce a distribution that is efficient to sample from classically and that passes a variety of GBS validation methods, providing an important adversary for future experiments to test against. \end{abstract} \date{\today} \maketitle \section{Introduction} A \emph{quantum advantage} is typically considered to be achieved when a quantum experiment outperforms a classical computer at a computational task, with strong evidence of an exponential separation between quantum and classical run-times. Based on plausible complexity conjectures, boson sampling~\cite{aaronson2011computational, lund2014boson} is a class of photonic experiments with potential to deliver quantum advantage. 
Measurement of correlated photon detection events constitutes sampling from a distribution with probabilities that correspond to classically intractable matrix functions. In Gaussian Boson Sampling (GBS)~\cite{hamilton2017gaussian}, squeezed states are injected into an interferometer, with subsequent photon detection producing correlation events that are related to matrix \emph{loop hafnians}~\cite{PhysRevA.100.032326, quesada2019franck}. A major advancement in experimental photonics was recently reported, in which a GBS experiment comprising 100 optical modes, named Jiŭzhāng~\cite{Zhong1460}, observed up to 76 photon detection events and claimed a quantum advantage. Once assembled, Jiŭzhāng ran in 200~s, while the best available classical algorithms running on the most powerful contemporary supercomputer would require 600~million years to simulate Jiŭzhāng. While the theoretical proposal for GBS assumed the use of photon number resolving detectors (PNRDs), experimental implementations frequently use threshold detectors, which \emph{click} to distinguish between 0 and at least 1 photon. This does not affect the complexity of GBS provided that \emph{collisions} (multiple photons arriving at the same detector) are unlikely~\cite{quesada2018gaussian}. Such events were assumed improbable and were neglected in the original proposal~\cite{aaronson2011computational}. There is a lack of progress in understanding the classical complexity of GBS in regimes with high degrees of collisions, which obscures the boundary for quantum advantage. Jiŭzhāng both uses threshold detectors and operates in a regime where there is a high probability of collisions between photons. Here, we present classical algorithms that calculate exact, correlated photon detection probabilities for GBS simulations with PNRDs, in the presence of collisions, faster than existing methods. Furthermore, we introduce a new classical method to generate samples for GBS simulations with threshold detectors, which runs orders of magnitude faster than classical methods to generate samples with PNRDs, when collisions dominate. We apply these results to two sampling algorithms: a probability chain-rule method~\cite{quesada2020quadratic} and Metropolis independence sampling (MIS)~\cite{neville2017classical}. We report a nine orders of magnitude reduction in the time taken to simulate idealised Jiŭzhāng-type GBS experiments with threshold detectors. This enabled us to classically simulate, on a $\sim \! 100,000$ core supercomputer, GBS experiments with 100 modes and up to 60 click detection events. Replacing threshold detectors with PNRDs in this simulation allows us to generate a 92 photon sample but increases the run-time significantly. We find that simulating a 60 mode experiment with PNRDs is of comparable complexity to simulating a 100 mode experiment with the same density of photons and threshold detectors. Finally, we develop and investigate a classically tractable distribution that passes a variety of canonical GBS verification tests, highlighting the importance of verifying GBS experiments against the most stringent adversarial tests available. These results significantly sharpen the boundary of quantum advantage in GBS. \section{Loop hafnian algorithms} A particular detection event can be described by a photon number pattern $\vec{n}$, where $n_{i}$ is the number of photons in mode $i$.
The probability of obtaining some $\vec{n}$ from a GBS experiment is: \begin{equation} P(\vec{n})=\frac{P_0}{\prod_i n_i!}\lhaf\left(\bm{A}_{\vec{n}}\right), \label{mixed_prob_short} \end{equation} where $P_0$ is the probability of measuring vacuum, $\lhaf(\cdot)$ is the loop hafnian function, and $\bm{A}_{\vec{n}}$ is a matrix which can be derived from $\vec{n}$ and the covariance matrix and displacement vector of the Gaussian state (see Appendix~\ref{GBS}). $\bm{A}_{\vec{n}}$ is a $2N\times 2N$ matrix, where $N=\sum_i n_i$. However, for a pure Gaussian state, $\bm{A}_{\vec{n}}$ is block diagonal, with blocks $\bm{B}_{\vec{n}}$ and $\bm{B}_{\vec{n}}^*$, in which case: \begin{equation} \lhaf\left(\bm{A}_{\vec{n}}\right)=|\lhaf(\bm{B}_{\vec{n}})|^2. \end{equation} $\bm{B}_{\vec{n}}$ is an $N\times N$ matrix, so it is considerably faster to calculate its loop hafnian compared to $\bm{A}_{\vec{n}}$. While a realistic GBS experiment will not produce a pure state, a Gaussian mixed state can be expressed as a statistical ensemble of pure states with differing displacement vectors~\cite{Serafini_2017, quesada2020quadratic}; so for the purposes of a sampling algorithm, it is generally possible to randomly choose a complex displacement vector $\vec{\bm{\alpha}}$ from the correct distribution, then sample from the corresponding pure state. Hence the computational complexity of generating a sample is set by the calculation of an $N\times N$ loop hafnian, $\lhaf(\bm{B}_{\vec{n}})$. \begin{figure} \caption{(a) A GBS outcome with collisions, measured with PNRDs. (b) To calculate the associated probability, we group the photons into pairs (red lines) to maximise the number of repeated, identical pairs. (c) An inclusion/exclusion formula, or a finite-difference sieve, can then operate on the resulting pairs, with repeated pairs leading to a speed-up. (d) The same event measured with threshold detectors, with `clicks' shown as green ticks. (e) We consider a fan-out to an array of sub-detectors, with none likely to receive $>1$ photon. We can ignore the outcomes of all but the first detector to see a photon. $x$ is introduced as the relative position of the detected photon, and is also the fraction of the sub-detectors that are ignored. (f) The probability of detecting the first photon at position $x$ can be expressed as a loss followed by single photon detection.} \label{red_edges} \end{figure} The fastest known algorithms for the loop hafnian run in exponential time, using an inclusion/exclusion formula similar to the Ryser algorithm for the permanent~\cite{Ryser1963}. In boson sampling with Fock state inputs, Ryser can be generalised to take advantage of collisions, reducing the number of inclusion/exclusion terms to calculate from $2^N$ to $\prod_i (n_i+1)$~\cite{shchesnovich2013asymptotic,tichy2011thesis, chin2018generalized}. The repeated-moment formula for the loop hafnian achieves the same scaling for GBS~\cite{KAN2008542}. However, there is a much faster formula for general loop hafnians---the eigenvalue-trace algorithm performs inclusion/exclusion on \emph{pairs} of photons, and so requires only $2^{N/2}$ terms~\cite{bjorklund2019faster, CYGAN201575}. Here, we generalise eigenvalue-trace to take advantage of collisions, reducing the number of terms to $\prod_i (\eta_i+1)$, where $\eta_i$ is the number of times a particular pairing of photons is repeated. This is lower-bounded by $\prod_i \sqrt{n_i+1}$ and upper-bounded by $2^{N/2}$.
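As a small self-contained illustration (our own, not the code used for the benchmarks), the Python snippet below counts the number of inclusion/exclusion terms implied by a chosen pairing of the detected photons and compares it with the repeated-moment and plain eigenvalue-trace counts; the pairing shown for the $\vec{n}=(2,1,0,3)$ example discussed next is one choice that reproduces the counts quoted in the text.

\begin{verbatim}
from collections import Counter
from math import prod

def term_counts(n, pairing):
    """n: photons per mode; pairing: (mode, mode) pairs covering all photons."""
    N = sum(n)
    assert 2 * len(pairing) == N, "each photon must appear in exactly one pair"
    eta = Counter(tuple(sorted(p)) for p in pairing)    # multiplicity of each distinct pair
    return {
        "repeated-moment": prod(ni + 1 for ni in n),    # prod_i (n_i + 1)
        "eigenvalue-trace": 2 ** (N // 2),              # 2^(N/2)
        "this work": prod(e + 1 for e in eta.values()), # prod_i (eta_i + 1)
    }

# n = (2, 1, 0, 3), with the six photons paired as (1,4), (1,4), (2,4):
print(term_counts((2, 1, 0, 3), [(1, 4), (1, 4), (2, 4)]))
# {'repeated-moment': 24, 'eigenvalue-trace': 8, 'this work': 6}
\end{verbatim}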
The grouping of photons into pairs is arbitrary, so we make use of a greedy algorithm to choose repeated pairings, reducing the number of inclusion/exclusions steps to as close to the lower-bound as possible (Appendix~\ref{repeated_edges}). Fig.~\ref{red_edges}(a)-(c) shows how, when collisions occur in more than one mode, repeated pairs can be formed. In this example, $\vec{n}=(2,1,0,3)$, which is arranged into two pairings, one of which is repeated. This gives a sum over $(2+1)(1+1)=6$ terms, reduced from 8 using eigenvalue-trace~\cite{bjorklund2019faster}, and compared to 24 using the repeated-moment algorithm~\cite{KAN2008542}. We also present a loop hafnian formula which uses a \emph{finite difference sieve} instead of an inclusion/exclusion formula, like the Glynn formula for the permanent~\cite{Balasubramanian1980thesis, BAX1996171, GLYNN20101887}. This significantly improves the numerical accuracy with only a minor time penalty. Whereas accuracy is an issue for eigenvalue-trace when $N>50$~\cite{quesada2020quadratic}, the finite-difference sieve method has relative error $<10^{-8}$ when tested up to $N=60$ (Appendix~\ref{fds}). This allows us to maintain accuracy for large loop hafnians while using a conventional 128-bit complex floating point data type, which is desirable for speed and portability. We therefore use the finite difference sieve formula for all benchmarking results presented in section~\ref{benchmarking}. \section{Threshold detectors} When threshold detectors are used, the detection probabilities can be calculated using the Torontonian matrix function~\cite{quesada2018gaussian}, which involves a sum over $2^{N_c}$ terms, where $N_c$ is the number of clicks (outputs with one or more photons). However, calculating this quantity is not necessarily the fastest approach to \emph{sampling} threshold detection patterns. For a sufficiently low density of photons, it may be faster to simulate PNRDs, then simply reduce each non-zero photon number to a click. We show that it is possible to improve this, for any density of photons, to the level of an $N_c\times N_c$ loop hafnian, containing $2^{N_c/2}$ terms. We consider the detection system depicted in Fig.~\ref{red_edges}(e). The mode is uniformly fanned out to many PNRD sub-detectors, such that the probability of a collision in any one sub-detector can be neglected. This system provides a conceptual bridge between threshold detection and number resolved detection~\cite{thomas2020general}. If these sub-detectors within a mode are sampled sequentially, once a single photon is seen, that mode registers a click. The remaining sub-detectors, which have not yet been sampled, can be ignored since no more information is required about that mode. Hence the number of detected single photons to simulate is $N_c$, which sets the size of loop hafnian calculation. $x$ is introduced as an additional variable giving the position of the single photon within the fan-out, normalised to vary between 0 and 1. As a result, a fraction $x$ of the sub-detectors are ignored - this can be related to applying a loss of $x$ to the mode before detecting a single photon, shown in Fig.~\ref{red_edges}(f). \section{Sampling algorithms} \subsection{Chain-rule sampling} These methods can be applied directly to the chain-rule for simulating GBS described in~\cite{quesada2020quadratic} and Appendix~\ref{chain_rule}. Here, the photon number in each mode is sampled sequentially, conditioned on the photon numbers in the previous modes. 
Finding the conditional probability distribution for mode $j$ requires calculating the joint probabilities of $(n_1, ..., n_j)$ for all values of $n_j$ up to $n_{\mathrm{cut}}$, where $n_{\mathrm{cut}}$ is some cutoff such that the probability of having a greater number of photons can be neglected. Since $n_{\mathrm{cut}}$ should generally be several times larger than the expected number of photons, the speed-up for calculating collision probabilities is especially applicable here. Furthermore, we make use of a batched method for simultaneously calculating all of the loop hafnians required for different values of $n_j$, with approximately the same run-time as calculating the largest loop hafnian, where $n_j=n_{\mathrm{cut}}$ (see Appendix~\ref{batching}). When simulating threshold detectors, we choose to reduce $n_{\mathrm{cut}}$ for each sub-detector to 1. We again use a batching method to more efficiently sample different sub-detectors within the same mode, which largely offsets the additional overhead from sampling several sub-detectors per mode. \subsection{Metropolis Independence Sampling} We also investigate MIS, a Markov Chain Monte Carlo method, for generating GBS samples. Here, samples $s_i$ are drawn from a proposal distribution, where $s_i$ is the $i$th sample in the chain. They are then accepted with probability \begin{equation} p_{\mathrm{accept}}=\mathrm{min}\left(1,\frac{P(s_i)Q(s_{i-1})}{P(s_{i-1})Q(s_i)}\right), \label{eq:transitionprob} \end{equation} where $P(s_{i})$ is the \emph{target} probability distribution, in this case that of ideal GBS, while $Q(s_{i})$ is the proposal probability distribution, i.e. the probability of proposing a particular $s_i$. If a proposed sample is rejected, the previous sample is repeated, $s_i=s_{i-1}$. This update rule ensures the chain will converge towards the target distribution, which is its equilibrium state~\cite{liu1996metropolizedindependent, 10.5555/1571802}. Usually some burn-in time, $\tau_{\mathrm{burn}}$, is used to allow the chain to converge. As sequential samples are not independent, some thinning interval, $\tau_{\mathrm{thin}}$, can also be used to suppress the probability of seeing repeated samples, keeping only 1 in every $\tau_{\mathrm{thin}}$ samples. These parameters are critical to the efficiency of MIS, and can generally be improved by choosing a proposal distribution which is close to the target distribution. \begin{figure} \caption{Probability distribution for all 6-photon detection outcomes for an 8-mode PNRD GBS simulation (a) and an 8-mode, 3-click threshold detector GBS simulation (b). Blue bars show estimated probabilities using MIS, orange bars show exact probabilities.} \label{pnrd_dist} \end{figure} We expand our sample space so that $s_i$ contains the photon number pattern $\vec{n}$ and the complex displacement vector $\vec{\bm{\alpha}}$. For the proposal samples, we draw $\vec{\bm{\alpha}}$ from the correct distribution for the desired mixed state, then generate $\vec{n}$ from an `Independent Pairs and Singles' (IPS) distribution based on the resulting pure state (Appendix~\ref{proposal}). This distribution, that we introduce in this work, can be sampled from efficiently, and has probabilities given by $N\times N$ loop hafnians of positive matrices. As an aside, we observe that the IPS distribution is already sufficient to pass many GBS verification methods (Appendix~\ref{verification}).
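For concreteness, the following Python sketch implements the accept/reject rule above for a generic target $P$ and proposal $Q$; it is a schematic of MIS rather than our optimized implementation, and the callables \texttt{target\_prob} and \texttt{proposal} are assumed to be supplied (in our case via loop-hafnian evaluation and the IPS distribution).

\begin{verbatim}
import random

def mis_chain(target_prob, proposal, n_steps, burn_in=0, thin=1):
    """proposal() returns (sample, Q(sample)); target_prob(sample) returns P(sample)."""
    s_prev, q_prev = proposal()
    p_prev = target_prob(s_prev)
    kept = []
    for step in range(n_steps):
        s_new, q_new = proposal()
        p_new = target_prob(s_new)
        accept = min(1.0, (p_new * q_prev) / (p_prev * q_new))
        if random.random() < accept:
            s_prev, p_prev, q_prev = s_new, p_new, q_new
        # Rejected proposals repeat the previous sample; burn-in and thinning applied here.
        if step >= burn_in and (step - burn_in) % thin == 0:
            kept.append(s_prev)
    return kept
\end{verbatim}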
The run-time per sample is dominated by the two loop hafnians in $P(s_i)$ and $Q(s_i)$, with $P(s_{i-1})$ and $Q(s_{i-1})$ already calculated in the previous step. When simulating threshold detectors with MIS, we take the continuum limit of a large number of sub-detectors, and introduce $x$ as an additional continuous random variable that gives the position of the `first' detected photon within each mode with non-zero photons. Given a proposed photon number pattern, $\vec{x}$ can be sampled efficiently from its conditional distribution $p(\vec{x}|\vec{n})$. $P(s_i)$, $Q(s_i)$ are then calculated with $N_c\times N_c$ loop hafnians, tracing out the unused sub-detectors. One subtlety is that tracing out reintroduces mixture into the quantum state, so it is necessary to sample a further adjustment to the displacement vector $d\vec{\bm{\alpha}}$ to obtain a pure state. This is only used in the calculation of $P(s_i)$. Details are given in Appendix~\ref{click_mis}. \begin{figure} \caption{Run-time using the HPE benchmarking system, comparing the eigenvalue-trace loop hafnian algorithm on $N \times N$ matrices with and without the speed-up due to collisions (orange and blue dots). The blue line is an exponential fitted to the blue points. Collisions are determined by generating 39 samples for each $N$ from the IPS distribution on 60 modes.} \label{speed_repeats} \end{figure} \begin{figure} \caption{Chain-rule simulation of $M=60$ experiment with PNRDs. (a) Number of samples as a function of photon number, with the theoretically calculated distribution (red line) and (b) run-time versus number of photons fitted with an exponential plus a constant (red line).} \label{chainrulePNRD} \end{figure} \section{Benchmarking \label{benchmarking}} To benchmark these methods, we choose parameters similar to those of Zhong et al.~\cite{Zhong1460}, while varying the system size by choosing the number of modes $M$. For the interferometer we select Haar random unitary matrices, fed with $M/4$ sources of two-mode squeezed vacuum. We choose a uniform squeezing parameter $r=1.55$, and overall transmission $\eta=0.3$. To demonstrate the correctness of our methods, we first test them on an $M=8$ example, which is small enough that the results can be compared to the exactly calculated distributions. Fig.~\ref{pnrd_dist} shows the accumulated distribution from $10^6$ samples with total photon number $N=6$, generated by MIS for PNRD, along with the exactly calculated distribution. The total variation distance is $\mathrm{TVD}(p,q) = \frac12 \sum_i |p_i-q_i|=0.0153$, which is consistent with statistical uncertainties. With threshold detectors, we find the TVD for the $N_c=3$ distribution is $2.9\times 10^{-3}$, which benefits from the smaller statistical uncertainty due to the smaller number of possible outcomes. For the chain-rule algorithm, we produce $10^6$ samples with both PNRD and threshold detectors. For PNRD with a cutoff of 12 photons, there were 74,973 samples with $N=6$ from $10^6$ total samples, giving a $\mathrm{TVD}=0.0554$. For threshold detectors with twelve sub-detectors, there were 195,150 $N_c=3$ samples and these gave a $\mathrm{TVD}=0.0138$. The larger TVDs are explained by the smaller sample size of the post-selected distributions. For large-scale tests we make use of an internal HPE Cray EX benchmarking system, consisting of 1024 nodes. A typical node is equipped with two AMD EPYC 7742 64-core processors clocked at 2.25GHz and the nodes are interconnected with the Cray Slingshot 10 high-performance network.
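The total variation distances quoted above can be reproduced from sample counts with a few lines of Python; the helper below is our own illustration and assumes the exact probabilities have been renormalised over the post-selected sector being compared.

\begin{verbatim}
from collections import Counter

def total_variation_distance(samples, exact_probs):
    """samples: iterable of outcomes; exact_probs: dict mapping outcome -> probability."""
    counts = Counter(samples)
    total = sum(counts.values())
    empirical = {k: v / total for k, v in counts.items()}
    outcomes = set(empirical) | set(exact_probs)
    return 0.5 * sum(abs(empirical.get(k, 0.0) - exact_probs.get(k, 0.0))
                     for k in outcomes)
\end{verbatim}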
We first benchmark our loop hafnian formula on proposed IPS samples for an $M=60$ example. The run-time as a function of $N$ is shown in Fig.~\ref{speed_repeats}, along with timings for the basic formula without speed-up due to collisions. Making use of collisions generally improves the run-time by one to two orders of magnitude for this range, and allows 80 photon probabilities to be calculated in comparable time to a 60 photon probability without collisions. However, there is a large variation in run-time between samples with the same $N$, depending on the amount of collisions in any particular configuration of the sampled photons. On the other hand, the run-time for a loop hafnian without speed-up from collisions shows little variation from $\mathcal{O}(N^3 2^{N/2})$ scaling, at least for $N>40$. \begin{figure} \caption{Chain-rule simulation of $M=100$ experiment with threshold detectors. (a) Number of samples as a function of number of clicks, fitted with a Gaussian (red line). (b) Run-time versus number of clicks, fitted with an exponential plus a constant (red line).} \label{chainruleclick} \end{figure} Using chain-rule sampling, we simulate an $M=60$ experiment with PNRDs, setting $n_{\mathrm{cut}}=12$ and an additional global cutoff of $80$ photons. We generate 4200 samples in $\sim \! 3$ hours. The global cutoff has no effect on the probability distribution of samples below the cutoff, and is used to keep the run-time per sample constrained. Fig.~\ref{chainrulePNRD}(a) shows a histogram of the number of samples against photon number, which is in good agreement with the calculated distribution. Fig.~\ref{chainrulePNRD}(b) shows the corresponding run-times of the samples. Below $\sim \! 45$ photons, the sample time appears approximately constant, suggesting the problem size is not large enough to take full advantage of the system. Beyond that, the run-time increases rapidly, though there is a wide range of variation depending on the particular configuration of output photons. We provide a rough fit-line to this scaling, equal to $(0.15+1.59\times 10^{-9}\times N^3e^{0.147 N})$s. Using this to extrapolate to photon numbers $>80$, we estimate that the average time per sample is $\sim \! 10$~s. With the $\sim \! 66$ times larger number of CPUs available in Fugaku---the world's top ranked supercomputer---this could be reduced to 130~ms. We then test chain-rule simulation of an $M=100$ experiment with threshold detectors, using 12 sub-detectors per mode, and a global cutoff of $60$ clicks. We generate 1600 samples in $\sim \! 3.5$ hours. Fig.~\ref{chainruleclick}(a) shows the histogram of click numbers, and (b) shows the corresponding run-times of the samples. Beyond $\sim \! 45$ clicks, the sample time increases approximately exponentially, from which we extrapolate to click numbers $>60$. The run-times are fitted with a line $(0.58+3.15\times 10^{-7}\times 2^{N/2})$s. From this, we predict that the mean time per sample is 8.4~s. On Fugaku this could be reduced to around 127~ms. Based on the scaling of the loop hafnian calculation, and on the distribution of samples over number of clicks, the estimated average time per MIS step is 0.45~s for an $M=100$ system with threshold detectors. On Fugaku, this could be reduced to 7~ms, which is somewhat faster than generating a sample through the chain-rule.
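The extrapolations above follow from fitting a constant-plus-exponential model to the measured per-sample times; the Python snippet below shows the procedure for the threshold-detector scaling using placeholder data rather than the measured values.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def model(n_clicks, a, b):
    # run-time ~ a + b * 2^(N_c / 2), the threshold-detector scaling discussed above
    return a + b * 2.0 ** (n_clicks / 2.0)

n_clicks = np.array([45, 48, 51, 54, 57, 60])            # placeholder click numbers
runtimes = np.array([0.62, 0.70, 0.85, 1.2, 2.1, 4.0])   # placeholder seconds per sample
params, _ = curve_fit(model, n_clicks, runtimes, p0=[0.5, 1e-7])
print("predicted seconds per sample at 70 clicks:", model(70, *params))
\end{verbatim}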
However, the raw MIS chain will contain a high frequency of repeated samples due to rejections of the proposal sample---for some applications this may be unimportant, but it would provide a clear difference from a true GBS experiment, where repeated samples are highly unlikely. In Appendix~\ref{mis_scaling} we investigate the $\tau_{\mathrm{thin}}$ required to suppress repeated samples, and find it increases rapidly with system size, such that for $M=100$ it is likely to be in excess of $600$. Hence, if independently distributed samples are required, the chain-rule method is most likely preferable. \section{Conclusion} Our results provide a new reference point for classical run-times of GBS, an improved understanding of the classical complexity, and could improve verification techniques by making it practical to generate small numbers of samples from the distribution of much larger scale experiments. Our `Independent Pairs and Singles' proposal distribution generates samples in polynomial time and is a better approximation than the standard adversarial models used in the verification of GBS. IPS is largely able to pass the quantitative tests of GBS used in ref.~\cite{Zhong1460} (see Appendix~\ref{verification}), which suggests a need for stronger verification methods, or at the least the use of IPS as a classically simulable adversary. For GBS with threshold detectors, we have shown the complexity can be reduced quadratically from $\mathcal{O}(N_c^3 2^{N_c})$ to $\mathcal{O}(N_c^3 2^{N_c/2})$. Compared to the experiment of Zhong et al.~\cite{Zhong1460}---where 50 million samples were accumulated in 200~s---our 100 mode chain-rule simulation implies the classical run-time can be reduced to $\sim \! 73$ days. This does not diminish the experimental achievement of large-scale GBS from Zhong et al.~\cite{Zhong1460}, which remains faster than classical methods on supercomputers, if the time required for circuit programming (or in the present case fabricating a new, fixed interferometer) is not included. However, it has previously been reported that in boson sampling with Fock state inputs, at least 50 photon events are required to extend beyond the reach of an exact classical simulation in a reasonable time-scale~\cite{neville2017classical}; for collision-free GBS, this threshold has been reported as being around 100 photons~\cite{quesada2020quadratic}; we have now demonstrated that for GBS with threshold detectors, the number of correlated detector clicks should also be around 100. For GBS with PNRDs, the number of photons required to surpass this classical threshold will depend on the amount of collisions, but must be $\geq 100$. Future claims to quantum advantage in GBS experiments might involve increasing the level of programmability~\cite{zhong2021phaseprogrammable} and including photon number resolving detectors, which our results suggest adds significantly to the complexity, thus providing an alternative route to a larger quantum advantage than increasing the size of threshold detector experiments. For example, our 60 mode PNRD chain-rule simulation ran in comparable time to the 100 mode threshold detector simulation. Meanwhile a 100 mode PNRD simulation proved impractically slow even on the HPE Cray EX benchmarking system: we generated a single 92 photon event in 82 minutes. Our methods are near-exact simulations of GBS which do not assume or exploit any experimental imperfections beyond the presence of collisions, and so are quite generally applicable to future GBS experiments.
Much faster classical methods to simulate GBS may be possible through other techniques that exploit errors such as photon loss and photon distinguishability~\cite{renema2018,garciapatron2019}, or limitations such as the inability to implement Haar random transformations. \section{Gaussian Boson Sampling}\label{GBS} The Wigner function of an $M$-mode Gaussian state can be efficiently represented by using the $2M$ length mean vector $\bm{R}$, and the $2M\times2M$ covariance matrix $\bm{V}$, of the canonical position and momentum operators $\vec{q}$ and $\vec{p}$. Equivalently it can be represented in terms of creation and annihilation operators $\bm{a}=\left( \begin{smallmatrix} \vec{a} \\ \vec{a}^\dagger \end{smallmatrix}\right)$ as a complex-valued displacement $\bm{\alpha}$ and covariance matrix $\bm{\sigma}$~\cite{RevModPhys.84.621}. \begin{equation} \bm{\alpha}_i=\braket{\bm{a}_i} \end{equation} \begin{equation} \bm{\sigma}_{i,j}=\frac{1}{2}\left(\braket{\bm{a}_i \bm{a}_j^\dagger} + \braket{\bm{a}_j^\dagger \bm{a}_i}\right)-\bm{\alpha}_i\bm{\alpha}_j^*. \end{equation} We further define: $\bm{\sigma}_Q=\bm{\sigma} + \bm{\mathrm{I}}/2$ as the complex-valued covariance matrix of the state's Husimi Q-function, $\bm{O}=\left(\bm{\mathrm{I}} - \bm{\sigma}_Q^{-1}\right)$, \begin{equation} \bm{X}=\begin{pmatrix}\bm{0} & \bm{\mathrm{I}} \\ \bm{\mathrm{I}} & \bm{0}\end{pmatrix}, \end{equation} $\bm{A}=\bm{X}\bm{O}$, and $\bm{\gamma} = \bm{\alpha}^\dagger \bm{\sigma}_Q^{-1} $. Probabilities of measuring photon number patterns $\vec{n}$ with PNRDs are now given by: \begin{equation} P(\vec{n}|\bm{\sigma},\bm{\alpha})=\frac{\exp\left(-\frac{1}{2} \bm{\alpha}^\dagger \bm{\sigma}_Q^{-1} \bm{\alpha} \right)}{\sqrt{\det(\bm{\sigma}_Q)}\prod_i n_i!}\lhaf\left(\bm{A}_{\vec{n}}\right), \label{mixed_prob} \end{equation} where $\lhaf(\cdot)$ is the loop hafnian function. $\bm{A}_{\vec{n}}$ is formed from $\bm{A}$ by repeating the $i$th and $(i+M)$th rows and columns $n_i$ times, and similarly the $i$th and $(i+M)$th entry in $\bm{\gamma}$ is repeated $n_i$ times to form $\bm{\gamma}_{\vec{n}}$. Then the diagonal elements of $\bm{A}_{\vec{n}}$ are replaced by the elements of $\bm{\gamma}_{\vec{n}}$, since the weights of the loops are given on the diagonal of the matrix. For pure states, $\bm{A}_{\vec{n}}$ can be written in block form as \begin{equation} \bm{A}_{\vec{n}} = \begin{pmatrix} \bm{B}_{\vec{n}} & \bm{0} \\ \bm{0} & \bm{B}_{\vec{n}}^* \end{pmatrix}. \label{Adef} \end{equation} Here $\bm{B}_{\vec{n}}$ is a symmetric $N\times N$ matrix, with $N$ the total photon number. As a result, \begin{equation} \lhaf\left(\bm{A}_{\vec{n}}\right)=\left|\lhaf\left(\bm{B}_{\vec{n}}\right)\right|^2, \end{equation} so probabilities from a pure state can be calculated using loop hafnians of matrices of half the size compared to a mixed state. \subsection{Sampling pure Gaussian states from mixed Gaussian states \label{sample_pure}} Using the Williamson decomposition, we can write the covariance matrix as $\bm{V} = \bm{S} \bm{D} \bm{S}^T$. Here, $\bm{D}$ is a diagonal covariance matrix describing a thermal state in each mode, and $\bm{S}$ defines a symplectic transformation. Hence any mixed Gaussian state can be written as a pure channel acting on thermal states.
By defining $\bm{T}=\frac{\hbar}{2}\bm{S}\bm{S}^T$, a covariance matrix of a pure Gaussian state, and $\bm{W} = \bm{S}(\bm{D} - \frac{\hbar}{2}\bm{\mathrm{I}})\bm{S}^T$, a covariance matrix describing the Gaussian classical noise added to the state, we can now write the original covariance matrix as $\bm{V}=\bm{T} + \bm{W}$~\cite{Serafini_2017, quesada2020quadratic}. For the purposes of sampling the state, we can choose a pure state with vector of means $\bm{R'}$ sampled from the multivariate normal distribution described by covariance matrix $\bm{W}$ and means $\bm{R}$. This results in a pure state with covariance matrix given by $\bm{T}$ and means given by $\bm{R'}$. \subsection{Chain-rule GBS sampler\label{chain_rule}} Sampling using the chain-rule for probability proceeds by choosing part of the sample (in this case, e.g. the number of photons in the first mode) from its marginal probability distribution, then fixing this and choosing the next part (e.g. the number of photons in the second mode) from its conditional probability distribution depending on the first part. This is expressed as: \begin{equation} P(n_1,n_2)=P(n_1)P(n_2|n_1). \end{equation} This allows samples to be built up from distributions with very large numbers of possible outcomes, without calculating the probability of every possible outcome. In GBS, a difficulty is that the marginal probabilities are equivalent to probabilities from a mixed quantum state, and these are quadratically harder to calculate than for a pure state. To circumvent this, the modes are initially sampled in the coherent state basis, obtaining a set of coherent state amplitudes $\vec{\beta}$ which are then progressively replaced by photon numbers $\vec{n}$ using a modified form of the chain-rule. The coherent state basis has the benefits that it can be sampled from efficiently, and that when intermediate probabilities are calculated, combining photon number and coherent state bases, the coherent states do not add to the complexity of the calculation. The procedure is as follows~\cite{quesada2020quadratic}: \begin{enumerate} \item Sample modes 2 to $M$ in the coherent state basis, obtaining a sample from the distribution $P(\beta_2,...,\beta_M)$. \item Sample the photon number in the first mode from the distribution $P(n_1|\beta_2,...,\beta_M)$. \item For $m=2$ to $M-1$: \begin{enumerate} \item Begin with a sample from the intermediate distribution $P(n_1,...,n_{m-1},\beta_m,...,\beta_M)$. \item Discard the coherent state amplitude $\beta_m$ and replace it with a photon number $n_m$ drawn from the distribution $P(n_m|n_1,...,n_{m-1},\beta_{m+1},...,\beta_M)$. \item This leaves a sample drawn from the distribution $P(n_1,...,n_m,\beta_{m+1},...,\beta_M)$ which can be used as a starting point for the next step. \end{enumerate} \item Discard $\beta_M$ and replace it with $n_M$, drawn from $P(n_M|n_1,...,n_{M-1})$. This leaves a photon number sample drawn from the distribution $P(n_1,...,n_M)$. \end{enumerate} To sample from $P(n_m|n_1,...,n_{m-1},\beta_{m+1},...,\beta_M)$, the joint probabilities $P(n_1,...,n_m,\beta_{m+1},...,\beta_M)$ are calculated for all $n_m$ between zero and some finite cutoff $n_{\mathrm{cut}}$. Assuming the probability that $n_m>n_{\mathrm{cut}}$ is small enough to be neglected, normalising the joint probabilities to 1 provides a good approximation to the conditional distribution. Calculating these joint probabilities dominates the computational effort for sampling each mode, and grows with the number of detected photons.
Specifically, the relative joint probabilities are given by: \begin{equation} P(n_1,...,n_m,\beta_{m+1},...,\beta_M)\propto \frac{\lhaf(\bm{B}_{\vec{n},\vec{\beta}})}{n_m!}, \end{equation} where $\bm{B}_{\vec{n},\vec{\beta}}$ is formed from $\bm{B}$ by repeating the $i$th row and column $n_i$ times, then in the same manner repeating the entries of $\gamma'$ along the diagonal of $\bm{B}_{\vec{n},\vec{\beta}}$, where $\gamma'$ is given by: \begin{equation} \gamma'=(\vec{\alpha}-\vec{\beta})^\dagger \bm{\sigma}_Q^{-1}. \end{equation} Here, $\vec{n}$ is non-zero only for the modes which have already been sampled in photon number, and similarly the values of $\vec{\beta}$ are set to zero as the corresponding mode is sampled in photon number. We note that since $n_{\mathrm{cut}}$ should usually be several times greater than the expected number of photons, these calculations will often contain photon collisions. Below, we describe algorithms to speed up loop hafnian calculations in the presence of photon collision events, and a method of batching together the calculations for different $n_m$ such that the total run-time is approximately equal to that of calculating the largest $n_m$. When simulating threshold detectors, we expand each mode to several sub-detectors and treat them as separate modes in the chain-rule sampling algorithm, with the only difference being that once a photon is detected, no further information is required from the remaining sub-detectors within that mode. Hence they can continue to be projected onto the coherent state basis, where they do not contribute to the complexity of calculating the probabilities. In Section~\ref{batching}, we provide a batched method of calculating the loop hafnians required for different sub-detectors within the same mode, achieving a speed-up by noting that only the diagonal entries of $\bm{B}_{\vec{n},\vec{\beta}}$ change between sub-detectors. Since the order in which this algorithm progresses through the modes is arbitrary, we choose to go in order of increasing mean photon/click number. This slightly reduces the run-time since photons are less likely to be detected in the earlier modes, and so the size of the loop hafnians required in these stages is generally reduced. An implementation of the chain-rule algorithm can be found in~\cite{gbs_mis}. \section{Loop hafnian algorithms} The loop hafnian function of an $N \times N$ symmetric matrix $A$ is defined as \begin{equation} \lhaf(A)=\sum_{M\in \mathrm{SPM}}\prod_{(i,j)\in M} A_{i,j}, \end{equation} where $\mathrm{SPM}$ is the set of single-pair matchings, the ways in which the indices $[N]$ can be grouped into sets of sizes 1 and 2. This is a generalisation of the set of perfect matchings (all of the groupings into pairs) which occurs in a hafnian, with the `loops' referring to sets of size 1, which have weightings given on the diagonal of the matrix. Hence $M$ can contain pairs $(i,j)$ where $i\neq j$, but also $(i,i)$ singles. \subsection{Eigenvalue-trace} The eigenvalue-trace algorithm for the loop hafnian (with $N$ even) can be written as~\cite{bjorklund2019faster}: \begin{equation} \lhaf(A) = \sum_{Z \in P([N/2])} (-1)^{|Z|} f\left(A_Z\right). \label{ET_loop} \end{equation} $P([N/2])$ is the powerset of $[N/2]$, and subscript $Z$ refers to taking a submatrix where rows and columns $i$ and $N/2+i$ are retained only if $i$ is an element of the set $Z$.
The function $f(C)$ is defined as the $\lambda^{N/2}$ coefficient of the polynomial: \begin{multline} p_{N/2}(\lambda, C,v) = \\ \sum_{j=1}^{N/2} \frac{1}{j!} \left(\sum_{k=1}^{N/2} \left( \frac{\Tr((CX)^k)}{2k} + \frac{v X(CX)^{k-1} v^T}{2} \right) \lambda^k \right)^j \label{ET_poly} \end{multline} where $v$ is a vector given by the diagonal elements of $C$ and $X$ is defined like $\bm{X}$, introduced earlier, but with dimensions matching $C$. The eigenvalue-trace algorithm can be thought of as performing inclusion/exclusion over the set of pairs in one fixed perfect matching, defined by $X$. The complexity of evaluating $f(C)$ is dominated by finding the traces of matrix powers, $\Tr((CX)^k)$, which can be reduced to finding the eigenvalues of $CX$ in $\mathcal{O}(N^3)$ time. Given there are $2^{N/2}$ terms in the summation in Eq.~\ref{ET_loop}, this results in $\mathcal{O}(N^3 2^{N/2})$ complexity. \subsection{Repeated pairs \label{repeated_edges}} This algorithm makes use of a fixed perfect matching given by the adjacency matrix $X$, defining pairs $(i,N/2+i)$ for $i\in \{1,\dots,N/2\}$. The summation in Eq.~\ref{ET_loop} corresponds to inclusion/exclusion of these pairs. If we consider the way that the $\bm{A}_{\vec{n}}$ matrix is formed when evaluating the $\vec{n}$ probability from a mixed state, $X$ will pair the $i$th index in $\bm{A}$ with the $(i+M)$th index, and this pairing will be repeated $n_i$ times. Instead of summing over all inclusion/exclusion possibilities, we can sum over a vector $\vec{z}$ where $z_i$ runs from 0 to $n_i$, corresponding to including $z_i$ copies of the $i$th pair: \begin{equation} \mathrm{lhafmix}(\bm{A},\bm{\gamma},\vec{n})=\sum_{\vec{z}} (-1)^{|\vec{z}|}\prod_i {n_i \choose z_i} f'(\bm{A}, \bm{\gamma}, \vec{z}). \end{equation} We label this function $\mathrm{lhafmix}$ because it does not apply to general matrices, only those with the particular form of $\bm{A}_{\vec{n}}$. $f'(C, \vec{v}, \vec{z})$ is defined as the $\lambda^{N/2}$ coefficient in the polynomial \begin{multline} p'_{N/2}(\lambda, C,\vec{v},\vec{z}) = \\ \sum_{j=1}^{N/2} \frac{1}{j!} \left(\sum_{k=1}^{N/2} \left( \frac{\Tr((CX_{\vec{z}})^k)}{2k} + \frac{v X_{\vec{z}}(CX_{\vec{z}})^{k-1} v^T}{2} \right) \lambda^k \right)^j \end{multline} where \begin{equation} X_{\vec{z}} = \begin{pmatrix} \bm{0} & \diag(\vec{z}) \\ \diag(\vec{z}) & \bm{0} \end{pmatrix}, \end{equation} with $\diag(\vec{z})$ a diagonal matrix containing the elements of $\vec{z}$. This makes use of the fact that increasing the weight of a pairing in $X$ has the same effect as including a pair multiple times. Where there are elements $z_i=0$, the $i$th and $(N/2+i)$th row/column can be deleted from $\bm{A}$ and $X$ to speed up the eigenvalue calculation. This algorithm calculates mixed state probabilities in time $\mathcal{O}\left(N^3\prod_i (n_i+1)\right)$. Noting that this corresponds to a $2N\times 2N$ loop hafnian, this compares well to using the repeated moment algorithm~\cite{KAN2008542}, which would take $\mathcal{O}\left(N\prod_i (n_i+1)^2\right)$, and improves on eigenvalue-trace whenever there are elements of $\vec{n}$ greater than 1. For general matrices such as those in pure state calculations, even if there are photon collisions which lead to repeated rows/columns in the $B$ matrix, these do not necessarily lead to repeated pairings.
However if identical pairs do occur, with the $i$th pair occurring $\eta_i$ times, we can make use of the above formula to obtain some speed-up, reducing the number of inclusion/exclusion terms to $\prod_i (\eta_i+1)$. This quantity is upper-bounded by $2^{N/2}$, which occurs if no pairs are repeated. It is lower-bounded by $\prod_j \sqrt{n_j+1}$. To see this, consider that for a total of $H$ unique pairings we can write: \begin{equation} \prod_{i=1}^H (\eta_i+1) = \prod_{i=1}^H \prod_{j=1}^M \prod_{k=L,R} \sqrt{n_j^{(i,k)}+1}, \end{equation} with $n_j^{(i,k)}$ the number of photons from mode $j$ which are associated with the $i$th pair and position $k=L,R$ within that pair. The equality follows from the fact that only one mode will be associated with a particular $(i,k)$, i.e. for a given $(i,k)$ there is only one $j$ for which $n_j^{(i,k)}$ is non-zero. The factor associated with a given mode $j$ is lower-bounded: \begin{equation} \prod_{i=1}^H \prod_{k=L,R} \sqrt{n_j^{(i,k)}+1}\geq \sqrt{n_j+1}, \end{equation} which occurs when $n_j^{(i,k)}$ is non-zero for only one choice of $(i,k)$. Hence the overall number of inclusion/exclusion terms is lower-bounded by \begin{equation} \prod_{i=1}^H (\eta_i+1)\geq\prod_{j=1}^M \sqrt{n_j+1}. \end{equation} \subsection{Matching algorithm \label{edge_match}} Since the fixed perfect matching in the eigenvalue-trace algorithm is arbitrary, we can choose it so as to create identical pairs and reduce the number of steps. Equivalently, we can permute rows/columns in the input matrix so as to change the pairings created using the $X$ matrix. Here, we give a greedy algorithm which chooses the pairings in the fixed perfect matching so as to minimise the number of inclusion/exclusion steps. We start by creating a list $\vec{m}=(0,1,\dots, M-1)$. The algorithm then proceeds as follows: \begin{enumerate} \item Sort $\vec{n}$ and $\vec{m}$ in descending order according to $\vec{n}$. \label{sort} \item If $n_1\geq 2n_2$, create $(m_1,m_1)$ pairs, which are repeated $\lfloor n_1/2 \rfloor$ times, with $\lfloor n_1/2 \rfloor$ rounding $n_1/2$ down to the nearest integer. Otherwise, create $(m_1,m_2)$ pairs which are repeated $n_2$ times. \item If $(m_1,m_1)$ pairs were created, subtract ${2\lfloor n_1/2 \rfloor}$ from $n_1$. Otherwise, subtract $n_2$ from $n_1$ and from $n_2$. \item Remove elements of $\vec{n}$ and $\vec{m}$ where $n=0$. \item If $\sum \vec{n} > 1$, return to step~\ref{sort}, otherwise end. \end{enumerate} An implementation of this algorithm can be found in the function \verb|matched_reps| of our repository~\cite{gbs_mis}. This returns a set of pairings and a number of repeats for each pairing, $\vec{\eta}$. Then, following the repeated pairs algorithm above, the loop hafnian can be calculated in time $\mathcal{O}\left(N^3\prod_i (\eta_i+1)\right)$. This improves on eigenvalue-trace for a general matrix whenever $\vec{\eta}$ contains an entry $>1$, and this is true whenever there are at least two elements $>1$ in $\vec{n}$, or if there is at least one element $\geq 4$.
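To illustrate the greedy procedure, the following Python sketch mirrors the enumerated steps above. It is illustrative only; the function \verb|matched_reps| in the repository~\cite{gbs_mis} is the implementation used for our results.
\begin{verbatim}
def greedy_pair_matching(n):
    """Sketch of the greedy pairing procedure described above.

    n : list of photon numbers per mode.
    Returns [(pair, repeats), ...] with pair = (mode_a, mode_b);
    the repeat counts are the entries of eta."""
    n = list(n)
    m = list(range(len(n)))
    pairings = []
    while sum(n) > 1:
        # Step 1: sort modes by descending photon number.
        order = sorted(range(len(n)), key=lambda i: n[i], reverse=True)
        n = [n[i] for i in order]
        m = [m[i] for i in order]
        if len(n) == 1 or n[0] >= 2 * n[1]:
            # Step 2a: pair the largest mode with itself floor(n1/2) times.
            reps = n[0] // 2
            pairings.append(((m[0], m[0]), reps))
            n[0] -= 2 * reps                    # step 3a
        else:
            # Step 2b: pair the two largest modes n2 times.
            reps = n[1]
            pairings.append(((m[0], m[1]), reps))
            n[0] -= reps                        # step 3b
            n[1] -= reps
        # Step 4: drop exhausted modes.
        keep = [i for i in range(len(n)) if n[i] > 0]
        n = [n[i] for i in keep]
        m = [m[i] for i in keep]
    return pairings
\end{verbatim}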
\subsection{Finite difference sieve \label{fds}} By analogy to the Glynn formula for the permanent~\cite{Balasubramanian1980thesis, BAX1996171, GLYNN20101887}, we can find an alternative expression for the loop hafnian which uses a finite difference sieve instead of an inclusion/exclusion formula: \begin{equation} \lhaf(A) = \frac{1}{2^{N/2}} \sum_{\vec{\delta}} \left( \prod_{k=1}^{N/2} \delta_k \right) f(A X_{\vec{\delta}}) \label{fdsieve} \end{equation} where $\vec{\delta}$ describes all possible $N/2$ length vectors with $\delta_i \in \{-1,1\}$. Here $X_{\vec{\delta}}$ is defined as: \begin{equation} X_{\vec{\delta}} = \begin{pmatrix} \bm{0} & \diag(\vec{\delta}) \\ \diag(\vec{\delta}) & \bm{0} \end{pmatrix}. \end{equation} Since an overall sign change to $\vec{\delta}$ leaves the terms inside the summation unchanged, one element of $\vec{\delta}$ can be fixed, e.g. $\delta_1=1$, and the result multiplied by 2, halving the run-time. We can make use of repeated pairings in the finite difference sieve algorithm in the same way as above. Then, for each pairing, $\delta$ runs from $-n_{\mathrm{pair}}$ to $+n_{\mathrm{pair}}$ in steps of 2, with the different terms corresponding to how many copies of the pair are associated with a $-1$. \begin{figure} \caption{Numerical accuracy when comparing inclusion/exclusion and finite difference sieve based loop hafnian algorithms. We construct an $N\times N$ matrix, $C$, using 2 random $N/2 \times N/2$ matrices, $A$ and $B$, on the diagonal quadrants.} \label{glynn_accuracy} \end{figure} We find that this method offers significant accuracy improvements over inclusion/exclusion, as shown in Fig.~\ref{glynn_accuracy}. In both algorithms, the absolute values of the machine precision errors accumulate at a fairly similar rate inside the sum; however, in the finite difference sieve, the prefactor in Eq.~\ref{fdsieve} divides the value of the error by the number of terms in the sum. \subsection{Batching probability calculations\label{batching}} For each step of the chain-rule algorithm, we require probabilities where all but one mode has a fixed outcome, $\vec{n}_{\mathrm{fixed}}$, while one `batched' mode takes all values from 0 to the cutoff, $n_{\mathrm{cut}}$. In the pair matching algorithm, we only input $\vec{n}_{\mathrm{fixed}}$. If we consider the calculation when the batched mode is equal to $n_{\mathrm{cut}}$, this leaves $\lfloor n_{\mathrm{cut}} / 2 \rfloor$ copies of the batched mode paired to itself. This calculation includes finding all the necessary eigenvalues required for calculating any outcome $\leq n_{\mathrm{cut}}$. Since this is the only cubic time step within each term in the sum, we can compute all probabilities for $n \leq n_{\mathrm{cut}}$ in the same time complexity as calculating $P(\vec{n}_{\mathrm{fixed}}, n_{\mathrm{cut}})$. For batching across sub-detectors within the same mode in threshold detector sampling, each detector is treated independently and so has a different $\beta_i$, but this does not change the eigenvalue calculation, so these calculations can be batched in a similar way. We have implemented these methods in~\cite{gbs_mis}. These methods could also be applied to speed up calculations of heralded non-Gaussian states in the Fock basis, as is described in ref.~\cite{quesada2019realistic}.
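As a correctness check on the algorithms in this section, the defining sum over single-pair matchings can also be evaluated directly on small matrices. The following exponential-time Python sketch is a reference implementation of the definition only, and is not part of the optimised code:
\begin{verbatim}
import numpy as np

def loop_hafnian_reference(A):
    """Brute-force loop hafnian: direct sum over all single-pair matchings.
    Exponential time; intended only for validating faster algorithms on
    small matrices."""
    A = np.asarray(A)

    def spm_sum(idx):
        if not idx:
            return 1
        i, rest = idx[0], idx[1:]
        # index i is a loop, weighted by the diagonal entry ...
        total = A[i, i] * spm_sum(rest)
        # ... or i is paired with a later index j, weighted by A[i, j]
        for k, j in enumerate(rest):
            total += A[i, j] * spm_sum(rest[:k] + rest[k + 1:])
        return total

    return spm_sum(tuple(range(A.shape[0])))
\end{verbatim}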
\subsection{Implementation details} Our loop hafnian code is written in Python and uses Numba, a just-in-time compiler which automatically generates highly efficient code~\cite{10.1145/2833157.2833162}. To run efficiently on distributed systems, we use MPI for Python~\cite{DALCIN2008655}. The eigenvalue-trace algorithm is readily parallelisable as each term in the sum can be computed independently of all other terms. Whilst testing and benchmarking our code, we ran on all major operating systems, and on x86-64 and arm64 architectures. Both Fugaku and the Isambard system (used for data in Fig.~\ref{glynn_accuracy}) use arm64 chip architectures, giving us further confidence in our run-time predictions. \section{MIS GBS algorithms} MIS is a Markov Chain Monte Carlo method of sampling which works by suggesting a state from a proposal distribution and accepting it according to the acceptance probability, Eq.~(\ref{eq:transitionprob}). Otherwise, the previous state is added to the Markov chain. \subsection{Independent Pairs and Singles GBS distribution \label{proposal}} Choosing a suitable proposal distribution is an extremely important factor for MIS to be useful. If the proposal distribution does not match closely to the target distribution, this will result in low acceptance probabilities, and hence a very long thinning interval. Here, we introduce an Independent Pairs and Singles (IPS) distribution, where as the name suggests we generate multi-photon samples from many independent single-photon and pair-photon generation processes, without quantum interference between separately generated singles/pairs. We find this is a better approximation to GBS than other efficiently simulable alternatives such as thermal states or distinguishable squeezed states. Beginning from a pure Gaussian state which we wish to approximate, we first sample the number of individual photons created in each mode by the displacement, using Poisson distributions with the mean of the $j$th mode given by $|\alpha_j|^2$. We then sample the number of photon pairs created by squeezing between all mode pairs $(j,k)$ (with $j \leq k$) from a Poisson distribution with mean given by $|\bm{B}|^2_{j,k}$. Combining all outcomes results in a photon number pattern, $\vec{n}$. For MIS, we must calculate the probability of our generated proposal sample, $\vec{n}$. There are many possible ways to create the same sample, corresponding to different groupings of the photons into pairs and singles. The total probability is related to a loop hafnian, which contains a corresponding sum over all single-pair matchings. We can write this probability as: \begin{multline} Q(\vec{n}|\bm{B},\alpha) = \\ \frac{e^{ - \sum_j |\alpha_j|^2} e^{- \sum_{j,k}\frac{1}{2}|B_{j,k}|^2}} {\prod_i n_i!} \lhaf(\bm{C}_{\vec{n}}), \end{multline} where $\bm{C}_{\vec{n}}$ is the matrix formed by taking $|\bm{B}_{\vec{n}}|^2$ and replacing the diagonal elements with $|\vec{\alpha}_{\vec{n}}|^2$. The loop hafnian of a positive matrix is likely to be efficient to compute approximately~\cite{rudelson2016hafnians, Gupt2019}. However, in MIS we must also compute a loop hafnian of a complex matrix to evaluate the target probability of the sample. Hence for convenience and simplicity we make use of the same optimised and parallelised code to compute both loop hafnians, without losing any accuracy. This increases the run-time by at most a factor of 2.
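To make the IPS proposal concrete, the following Python sketch draws one proposal pattern from the Poisson processes described above. The matrix $\bm{B}$ and displacement vector are assumed to be given, and the handling of the diagonal ($j=k$) squeezing terms simply follows the description in the text, so details may differ from the repository implementation~\cite{gbs_mis}.
\begin{verbatim}
import numpy as np

def sample_ips_pattern(B, alpha, rng=None):
    """Draw one photon-number pattern n from the IPS proposal distribution.

    B     : (M, M) complex symmetric matrix of the pure state
    alpha : (M,) complex displacement vector"""
    rng = np.random.default_rng() if rng is None else rng
    M = len(alpha)
    # Single photons from the displacement: Poisson with mean |alpha_j|^2.
    n = rng.poisson(np.abs(alpha) ** 2).astype(int)
    # Photon pairs from squeezing between mode pairs (j, k), j <= k:
    # Poisson with mean |B_{j,k}|^2, each pair adding one photon to mode j
    # and one to mode k (two photons to mode j when j == k).
    for j in range(M):
        for k in range(j, M):
            pairs = rng.poisson(np.abs(B[j, k]) ** 2)
            n[j] += pairs
            n[k] += pairs
    return n
\end{verbatim}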
\subsection{PNRD GBS \label{PNRD_MIS}} We first consider the case of sampling in the photon number basis, $\vec{n}$. We expand the sample space to include a displacement variable, $\vec{\alpha}$, so that only pure-state probabilities need to be evaluated. The target distribution $P(\vec{n},\vec{\alpha})$ can be written as $P(\vec{\alpha})P(\vec{n}|\vec{\alpha})$ where $P(\vec{\alpha})$ is a multivariate normal distribution and hence efficient to sample from, while $P(\vec{n}|\vec{\alpha})$ is given by Eq.~(\ref{mixed_prob}) and depends on an $N\times N$ loop hafnian. We choose $Q(\vec{\alpha})=P(\vec{\alpha})$, which results in the acceptance probability: \begin{equation} p_\text{accept}=\text{min}\left(1,\frac{P(\vec{n}_i|\vec{\alpha}_i)Q(\vec{n}_{i-1}|\vec{\alpha}_{i-1})}{P(\vec{n}_{i-1}|\vec{\alpha}_{i-1})Q(\vec{n}_i|\vec{\alpha}_i)}\right), \end{equation} so the acceptance probability does not depend on the probability density of $\vec{\alpha}$. In some cases it may be useful to fix the total photon number $N$ when sampling; for example verification methods often focus on samples of a particular $N$. In MIS it is possible to fix $N$ by post-selecting our proposed states; this does not add appreciably to the run-time, since generating proposed states can be done efficiently and the computational effort is dominated by calculating $p_\text{accept}$. In this case, the acceptance probability is \begin{align} p_\text{accept}&=\text{min}\left(1,\frac{P(\vec{n}_i,\vec{\alpha}_i|N)Q(\vec{n}_{i-1},\vec{\alpha}_{i-1}|N)}{P(\vec{n}_{i-1},\vec{\alpha}_{i-1}|N)Q(\vec{n}_i,\vec{\alpha}_i|N)}\right) \nonumber \\ &=\text{min}\left(1,\frac{P(\vec{n}_i,\vec{\alpha}_i,N)Q(\vec{n}_{i-1},\vec{\alpha}_{i-1},N)}{P(\vec{n}_{i-1},\vec{\alpha}_{i-1},N)Q(\vec{n}_i,\vec{\alpha}_i,N)}\right) \nonumber \\ &=\text{min}\left(1,\frac{P(\vec{n}_i,\vec{\alpha}_i)Q(\vec{n}_{i-1},\vec{\alpha}_{i-1})}{P(\vec{n}_{i-1},\vec{\alpha}_{i-1})Q(\vec{n}_i,\vec{\alpha}_i)}\right), \end{align} where in the second line we used the definition of conditional probability $P(\vec{n}_i,\vec{\alpha}_i,N)=P(\vec{n}_i,\vec{\alpha}_i|N)P(N)$, and the $P(N)$'s cancel and so do the $Q(N)$'s. In the third line, we know that if we are post-selecting, all $\vec{n}$ will automatically have total photon number $N$, so it is a redundant variable. Hence an identical $p_\text{accept}$ can be used when fixing $N$. We outline the algorithm below. To sample from a state with vector of means $\bm{R}$ and covariance matrix $\bm{V}$: \begin{enumerate} \item Use the Williamson decomposition to write $\bm{V}=\bm{T}+\bm{W}$, where $\bm{T}$ is the covariance matrix of a pure state. Calculate the matrix $\bm{B}$ based on $\bm{T}$. \item Sample a displacement vector $\bm{R'}$ from the multivariate normal distribution $\bm{R'} \sim \mathcal{N}(\bm{R},\bm{W})$. Calculate the complex displacement $\vec{\alpha}'_1$ from $\bm{R'}$. \item Sample a photon pattern $\vec{n}_1$ from $Q(\vec{n}|\bm{B},\vec{\alpha}'_1)$. This involves sampling from Poissonian distributions. \item Start a Markov chain from the state $(\vec{n}_1,\vec{\alpha}'_1)$. \item For step $i$ in the Markov chain from 2 to the desired length: \begin{enumerate} \item Sample a new displacement vector $\vec{\alpha}'_i$.
\item Sample a new photon pattern $\vec{n}_i$ from $Q(\vec{n})$ for the pure state with displacement $\vec{\alpha}'_i$ and covariance matrix $\bm{T}$. \item Calculate the acceptance probability $p_{\text{accept}}$. \item Add $(\vec{n}_i,\vec{\alpha}'_i)$ to the chain with probability $p_{\text{accept}}$, otherwise add the previous state again. \end{enumerate} \item Keep only the $\vec{n}$ values in the chain (ignore $\vec{\alpha}'$). Discard the first $\tau_{\mathrm{burn}}$ samples and then keep every 1 in $\tau_{\mathrm{thin}}$ samples. \end{enumerate} \subsection{Threshold detector GBS \label{click_mis}} For MIS with threshold detectors, we also need to include the fan out of each mode into sub-detectors, where we only register the position of the `first' photon. So we now expand the sample space to include a variable describing this position, $x$. We take the limit of a large number of sub-detectors where $x$ becomes a continuous variable, and choose larger $x$ to correspond to `earlier' detections. The POVM element for a click outcome where the first photon is at position $x$ can be written: \begin{equation} \pi_c(x)=\sum_{n=1}^\infty p(x|n)\ket{n}\bra{n}, \label{pi_cx} \end{equation} with $p(x|n)=nx^{n-1}$. $\pi_c(x)$ is closely related to the POVM element for measuring a single photon after a loss of $x$: \begin{equation} \Pi_L(x)=\sum_{j=1}^\infty j(1-x)x^{j-1}\ket{j}\bra{j}=(1-x)\pi_c(x). \label{eq:clickx} \end{equation} Hence we can express the probability of a click pattern $\vec{c}$ with an accompanying $\vec{x}$ in terms of the probability of obtaining the same pattern of single photons, but from a covariance matrix $\bm{V}(\vec{x})$ where the loss $x_m$ has been applied to the $m$th mode (for unoccupied modes $x_m=0$): \begin{equation} P_c(\vec{c},\vec{x},\vec{\alpha}')=\frac{P(\vec{\alpha}') P_n(\vec{c}|\vec{x},\vec{\alpha}')}{\prod_m(1-x_m)}, \end{equation} where as before $\vec{\alpha}'$ is a complex displacement vector chosen from a multivariate normal distribution. Since $\bm{V}(\vec{x})$ is a mixed state, we expand it as an ensemble of pure states with differing displacement vectors $\vec{\alpha}''$: \begin{equation} P_c(\vec{c},\vec{x},\vec{\alpha}',\vec{\alpha}'')=\frac{P(\vec{\alpha}')P(\vec{\alpha}''|\vec{x},\vec{\alpha}')P_n(\vec{c}|\vec{x},\vec{\alpha}'')}{\prod_m(1-x_m)}, \label{thresh_targ} \end{equation} where $P(\vec{\alpha}''|\vec{x},\vec{\alpha}')$ is the probability distribution of $\vec{\alpha}''$, depending on the applied loss, $\vec{x}$, and the complex displacement before the loss, $\vec{\alpha}'$. $P_n(\vec{c}|\vec{x},\vec{\alpha}'')$ is the photon number pattern probability of a pure state and can be calculated with an $N_c\times N_c$ loop hafnian, in time $\mathcal{O}(N_c^3 2^{N_c/2})$, resulting in a quadratic speedup compared to a Torontonian. If we sample $(\vec{c},\vec{x},\vec{\alpha}'',\vec{\alpha}')$ and then ignore the $\vec{x}$ and $\vec{\alpha}$ outcomes, this is equivalent to sampling from $P_c(\vec{c})$ as desired. To generate proposal samples, we begin by generating a displacement vector $\vec{\alpha}'$ and photon number pattern $\vec{n}$ as in Appendix~\ref{PNRD_MIS}. Then a $\vec{x}$ vector can be generated by sampling from $p(x|n)$ for each element, and a click pattern $\vec{c}$ taken by reducing each $>0$ element of $\vec{n}$ to a 1.
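Each element of $\vec{x}$ can be drawn from $p(x|n)=nx^{n-1}$ on $[0,1]$ by inverse-transform sampling, since the corresponding CDF is $x^n$. A minimal Python sketch (the function name is illustrative, not from the repository):
\begin{verbatim}
import numpy as np

def sample_first_photon_positions(n, rng=None):
    """Sample x_m ~ p(x|n_m) = n_m * x**(n_m - 1) on [0, 1] for each
    occupied mode.  Inverse transform: if u ~ Uniform(0, 1) then
    u**(1/n) has CDF x**n.  Unoccupied modes (n_m = 0) get x_m = 0,
    as described in the text."""
    rng = np.random.default_rng() if rng is None else rng
    n = np.asarray(n, dtype=float)
    x = np.zeros_like(n)
    occupied = n > 0
    u = rng.uniform(size=int(occupied.sum()))
    x[occupied] = u ** (1.0 / n[occupied])
    return x
\end{verbatim}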
The loss $\vec{x}$ is applied to the state, resulting in an updated displacement $\vec{\alpha}'(x)$ and covariance matrix $\bm{V}(\vec{x})$, from which a Williamson decomposition can be used to sample a pure state with a displacement $\vec{\alpha}''$ and covariance matrix $\bm{T}'$. The proposal probability, marginalised over $\vec{n}$, can be written \begin{equation} Q_c(\vec{c},\vec{x},\vec{\alpha}',\vec{\alpha}'')=P(\vec{\alpha}')P(\vec{\alpha}''|\vec{x},\vec{\alpha}')Q_c(\vec{c},\vec{x}|\vec{\alpha}'), \label{thresh_prop} \end{equation} where we note that the proposal distribution for $(\vec{c},\vec{x})$ is conditioned on $\vec{\alpha}'$ rather than $\vec{\alpha}''$, which is the last variable to be chosen. As with the target distribution, this probability can be rewritten in terms of a pattern of single photons after application of a loss: \begin{equation} Q_c(\vec{c},\vec{x}|\vec{\alpha}')=\frac{Q_n(\vec{c}|\vec{x},\vec{\alpha}')}{\prod_m (1-x_m)}. \end{equation} The probability of detecting a pattern of single photons from IPS after loss is still given by the loop hafnian of a non-negative matrix: \begin{equation} Q_n(\vec{c}|\vec{x},\vec{\alpha}')\propto \lhaf(\bm{C}_{\vec{c}}(\vec{x})), \end{equation} where we take \begin{equation} \bm{C}_{j,k}(\vec{x})=(1-x_j)(1-x_k)|B_{j,k}|^2, \end{equation} except for diagonal elements \begin{equation} \bm{C}_{j,j}(\vec{x})=(1-x_j)\left(|\alpha_j|^2+\sum_k x_k |B_{j,k}|^2\right), \end{equation} and form $\bm{C}_{\vec{c}}(\vec{x})$ by keeping the elements of $\bm{C}$ where $c=1$. This results in an acceptance probability: \begin{equation} p_\text{accept}=\text{min}\left(1,\frac{P_n(\vec{c}_i|\vec{x}_i,\vec{\alpha}''_i)Q_n(\vec{c}_{i-1}|\vec{x}_{i-1},\vec{\alpha}'_{i-1})}{Q_n(\vec{c}_i|\vec{x}_i,\vec{\alpha}'_i)P_n(\vec{c}_{i-1}|\vec{x}_{i-1},\vec{\alpha}''_{i-1})}\right). \label{thresh_acc} \end{equation} We outline the steps of the MIS algorithm below. \begin{enumerate} \item Use the Williamson decomposition to write $\bm{V}=\bm{T}+\bm{W}$, where $\bm{T}$ is the covariance matrix of a pure state. \item Sample the starting state from the proposal distribution: \begin{enumerate} \item Sample a complex displacement vector $\vec{\alpha}'_1$. \item Sample a photon pattern $\vec{n}_1$ from $Q(\vec{n}|\vec{\alpha}')$. Find $\vec{c}_1$ from $\vec{n}_1$ by fixing all $n_i>0$ as $c_i=1$. \item If post-selecting on the number of clicks, repeat the above steps until $\vec{c}$ contains the desired number of clicks. \item Sample the loss, $\vec{x}_1$, conditional on the photon number pattern $\vec{n}_1$ using $p(x|n)$. \item Apply the loss $\vec{x}_1$ to the displacement vector $\vec{\alpha}'_1$ and the covariance matrix $\bm{T}$, resulting in $\vec{\alpha}'_1(\vec{x})$ and $\bm{V}(\vec{x})$. \item Perform a Williamson decomposition on the mixed state to obtain a pure state covariance matrix $\bm{T}'$ and sample a new complex displacement vector $\vec{\alpha}_1''$. \end{enumerate} \item Start the Markov chain from the state $(\vec{c}_1,\vec{x}_1,\vec{\alpha}'_1,\vec{\alpha}''_1)$. Calculate the target probability using Eq.~\ref{thresh_targ} and the proposal probability using Eq.~\ref{thresh_prop}.
\item For step $i$ in the Markov chain from 2 to the desired length: \begin{enumerate} \item Sample another proposal sample $(\vec{c}_i,\vec{x}_i,\vec{\alpha}'_i,\vec{\alpha}''_i)$. \item Calculate the target and proposal probabilities for this state. \item Calculate the acceptance probability $p_\text{accept}$ using Eq.~\ref{thresh_acc}. \item Add $(\vec{c}_i,\vec{x}_i,\vec{\alpha}'_i,\vec{\alpha}''_i)$ to the chain with probability $p_\text{accept}$, otherwise add the previous state again. \end{enumerate} \item Keep only the $\vec{c}$ values in the chain (ignore $\vec{x}$, $\vec{\alpha}'$ and $\vec{\alpha}''$). Discard the first $\tau_{\mathrm{burn}}$ samples and then keep every 1 in $\tau_{\mathrm{thin}}$ samples. \end{enumerate} \subsection{Thinning interval and burn-in time scaling \label{mis_scaling}} To investigate the scaling of our algorithms, we fix the number of photons to the mean photon number, rounded to the nearest integer. The tests described in this section are applicable to both PNRD and threshold GBS unless stated otherwise. However, we only implement them for the number resolving case. \begin{figure} \caption{Estimated probability of repeated samples. Calculated for each $M$ from 10 Haar random unitary matrices, with a 10,000 long MIS chain for each unitary.} \label{repeats_probs} \end{figure} \begin{figure} \caption{Thinning intervals required to suppress the repeat probability to 0.1, generated from the data in Fig.~\ref{repeats_probs}.} \label{thinning_rate_fit} \end{figure} It is important to be able to predict the run-time of simulations before they are performed. For MIS methods, this is challenging as thinning intervals and burn-in times depend on how close the proposal distribution is to the target distribution. To construct heuristics to allow us to make these predictions, we investigate how the thinning interval and burn-in time scale with the number of modes $M$. However, we wish to highlight that the requirements on accuracy and sample autocorrelation will vary depending on what is desired from the simulation. Therefore the results in this section should be viewed as a guide for how to predict the scaling, rather than as a prescriptive guide for what parameters should be used. To predict the thinning interval, $\tau_{\mathrm{thin}}$, we investigate systems of different sizes with $M$ varied between 8 and 52 in steps of 4. For each $M$ we choose 10 Haar random interferometers, and implement an MIS chain with 10,000 steps. In Fig.~\ref{repeats_probs}, we plot the estimated probability of a sample being repeated as a function of the thinning interval. From this we extract the thinning interval required to suppress the repeat probability to 0.1 for each $M$ and perform a linear fit on this data, shown in Fig.~\ref{thinning_rate_fit}. The data for $M=36$ appears anomalous. We believe this is caused by one of the chains drawing a proposal sample which has an unusually large target/proposal probability ratio. Such events can cause chains to reject a very large number of samples before accepting a new proposal sample. The large degree of autocorrelation which is created by events such as this is an intrinsic drawback of the MIS method, and so we do not discard this data. The second parameter we need to determine is the burn-in time, $\tau_\text{burn}$. We know that the chain begins sampling from the proposal distribution and over time converges to the target distribution.
It will converge continuously, getting asymptotically closer to the target distribution, but at some point it will be close enough that the change will not be noticeable from finite sample sizes. Therefore, we can analyse when our distribution appears to be stationary. We provide two tests to predict how the burn-in time scales with the number of modes. For the first, we use a Bayesian likelihood ratio test for each burn-in time until we see no improvement. The likelihood ratio tests whether a set of samples $s$ is more likely to have come from the target distribution or an adversary distribution. To test how close our distribution is to the target, $\mathcal{P}$, or proposal, $\mathcal{Q}$, we choose the proposal distribution as the adversary. We begin with the ratio \begin{equation} \chi=\frac{p(\mathcal{P}|s)}{p(\mathcal{Q}|s)}=\frac{p(s|\mathcal{P})p(\mathcal{P})p(s)}{p(s|\mathcal{Q})p(\mathcal{Q})p(s)}. \end{equation} If we assume equal priors $p(\mathcal{P})=p(\mathcal{Q})$, this simplifies to \begin{equation} \chi=\frac{p(s|\mathcal{P})}{p(s|\mathcal{Q})}. \end{equation} Assuming that the probability distribution is either $\mathcal{P}$ or $\mathcal{Q}$ and so $p(\mathcal{P}|s)+p(\mathcal{Q}|s)=1$, we can write \begin{align} p(\mathcal{P}|s)&=p(\mathcal{Q}|s) \chi = (1-p(\mathcal{P}|s)) \chi \\ \implies p(\mathcal{P}|s)&=\frac{\chi}{1+\chi}. \end{align} Here our samples $s_i$ are described by $(\vec{\alpha}_i,\vec{n}_i)$ which we sample from the chain. So we can write $p(s_i|\mathcal{P})= p(\vec{\alpha}_i,\vec{n}_i|\mathcal{P})= p(\vec{\alpha}_i|\mathcal{P})p(\vec{n}_i|\vec{\alpha}_i,\mathcal{P})$. For purposes of benchmarking the efficiency, we fix the number of photons and so we have to adjust for post-selecting on $N$ photons. We still assume equal priors, now $p(\mathcal{P}|N)=p(\mathcal{Q}|N)$. So the likelihood ratio becomes \begin{align} \chi&=\frac{p(s|\mathcal{P},N)}{p(s|\mathcal{Q},N)}=\prod_i\frac{P(\vec{\alpha}_i,\vec{n}_i|N)}{Q(\vec{\alpha}_i,\vec{n}_i|N)}\\ &=\prod_i \frac{P(\vec{n}_i,N|\vec{\alpha}_i)Q(N)}{Q(\vec{n}_i,N|\vec{\alpha}_i)P(N)}, \end{align} where we use the fact that $P(\vec{\alpha}_i)=Q(\vec{\alpha}_i)$. This only requires the calculation of pure-state probabilities and the probability of getting $N$ photons in both the proposal and target distribution. This can be done for PNRDs with no additional cost to the sampling algorithm as we must calculate the pure-state probabilities in the formation of our chain, and calculating the probabilities of $N$ photons is efficient. However, for threshold detectors, although we could add the $x$ variable, we are not aware of a way to calculate the probability of $N_c$ clicks for either distribution. Therefore we do not apply this test to threshold detectors. We evaluate this probability for all burn-in times up to 100, for an increasing sample size up to 100. As we increase the sample size, the likelihood should eventually converge to either 0 (if it fails) or 1 (if it passes); see Fig.~\ref{likelihood_samplesize}. The closer the sampled distribution is to the target, the faster it converges to 1, so we can test how close our distribution is for different burn-in times by sampling and comparing the rate of the convergence to 1. To isolate the burn-in for testing, we need to start a new chain every time we sample. As a metric for comparing the rates of convergence, we find the sample size required to reach a likelihood of 0.95.
As we increase the burn-in time, the sample size should decrease and approach the minimum. We find the burn-in time at which the sample size is within 5\% of the estimated minimum. Each likelihood is estimated by averaging over 1000 Haar random unitaries. Despite this, the likelihood still gives quite noisy data and we further average over a range of 10 burn-in times, i.e.\ the likelihood at burn-in $i$ is given by the average of the likelihood for burn-in times between $i$ and $i+9$ (see Fig.~\ref{samplesize_burnin}). The minimum is estimated in a similar way where we average over the last 20 burn-in times, when we can assume it has converged. Fig.~\ref{convergence1} shows the estimated burn-in time for up to 28 modes and we extrapolate the linear fit to give an estimate of $\tau_\text{burn}=155$ for 100 modes. \begin{figure} \caption{The estimated likelihood ratio as a function of the number of samples included for a range of burn-in times. The plot shows how the likelihood converges to either 0 if the test fails or 1 if it passes for increasing burn-in times, averaged over 1000 Haar random unitaries in 28 modes. We wish to find the sample size at which the likelihood ratio reaches 0.95, as indicated by the dashed line.} \label{likelihood_samplesize} \end{figure} \begin{figure} \caption{The estimated sample size required to give a likelihood ratio of 0.95 for burn-in times between 0 and 100 (averaging over 10 burn-in times). We wish to find the burn-in time beyond which we see no improvement in the number of samples required, as indicated by the dashed lines.} \label{samplesize_burnin} \end{figure} \begin{figure} \caption{The estimated burn-in time from the likelihood test as a function of the number of modes. The likelihood is calculated for increasing sample size for up to a burn-in time of 100. We estimate at which burn-in time we do not see an improvement to the likelihood. We find the sample size required to reach a likelihood of 0.95 and find the burn-in at which it is within 5\% of the final value. Each likelihood was estimated by averaging over 1000 Haar random unitaries. We use a linear fit of the data to predict the scaling of $\tau_\text{burn}$ with the number of modes.} \label{convergence1} \end{figure} For the second test, we note that the rate of accepting a proposed sample decreases towards an asymptotic minimum value as the chain converges. This minimum value would be reached only when sampling from the target distribution. We estimate the probability of accepting at each burn-in time up to 300 by running 10,000 chains and counting the number of times we accept for a Haar random unitary. As with the likelihood test, we still have noisy data and smooth out the curve by averaging across 10 burn-in times. We choose the burn-in time at which the probability of accepting is no more than 0.001 greater than the estimated minimum value. Again we estimate the minimum value from the end of our chain, averaging over the last 50 burn-in times. As long as the estimated burn-in time is significantly before the end of the chain, we can be reassured that the probability of accepting is changing slowly enough to consider the chain to have converged by the maximum burn-in time we test. See Fig.~\ref{acceptancerate} for an example of how the acceptance rate varies with the chain length. We run this test for 10 Haar random unitaries and find the average burn-in time for each $M$ up to 24, shown in Fig.~\ref{convergence2}, which on extrapolation gives an estimate of $\tau_\text{burn}=785$ for 100 modes.
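The selection rule used in this second test can be stated compactly. A minimal Python sketch, assuming the acceptance probability at each burn-in time has already been estimated from many chains:
\begin{verbatim}
import numpy as np

def burn_in_from_acceptance(p_accept, window=10, tail=50, tol=1e-3):
    """Estimate the burn-in time from an acceptance-rate curve, following
    the procedure described above.

    p_accept : acceptance probability estimated at each burn-in time
               (e.g. the fraction of 10,000 chains accepting at that step).
    Returns the first (smoothed) burn-in time whose acceptance probability
    is within `tol` of the estimated asymptotic value, or None."""
    p = np.asarray(p_accept, dtype=float)
    # Smooth by averaging over `window` consecutive burn-in times.
    smoothed = np.convolve(p, np.ones(window) / window, mode="valid")
    # Estimate the asymptotic acceptance rate from the tail of the curve.
    p_min = smoothed[-tail:].mean()
    hits = np.where(smoothed <= p_min + tol)[0]
    return int(hits[0]) if hits.size else None
\end{verbatim}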
\begin{figure} \caption{The acceptance rate as a function of the length of the chain for various values of $M$, the number of modes. We find the point in the chain at which the acceptance rate becomes approximately constant. This plot shows an example for one Haar random unitary for each $M$. The dashed horizontal lines show when the curve reaches within 0.001 of the estimated final value.} \label{acceptancerate} \end{figure} \begin{figure} \caption{The estimated burn-in time from the acceptance rate test as a function of the number of modes. We estimate the burn-in beyond which the probability of accepting the proposed sample is approximately constant. Each burn-in was estimated by averaging over 10 Haar random unitaries. We use a linear fit of the data to predict the scaling of $\tau_\text{burn}$ with the number of modes.} \label{convergence2} \end{figure} We note that these two tests give significantly different estimates for the burn-in times. This is likely due to two reasons. The first is that the likelihood may be less sensitive to the convergence and does not distinguish as well between two close distributions. The data for this test is noisier and so may hide small differences in the distributions. The second is that we require the chain to have converged further for the acceptance rate test. It is ultimately an arbitrary choice how close one requires the sampled distribution to be to the target distribution. In both tests, we are limited by how noisy our data is from finite sampling. If our acceptance rate test is less noisy, we are able to find the burn-in times for a better convergence. The important findings from our numerical analysis are that the burn-in time seems to scale approximately linearly with the number of modes, and that the gradient of the scaling depends on how close the sampled distribution is required to be to the target distribution. \begin{figure} \caption{$M=52$ CHOG tests for different total click numbers to determine the relative likelihood of IPS and thermal samples from the ideal click distribution. Convergence to 1 for the IPS samplers indicates they are more likely to have come from the click distribution than the thermal samples.} \label{chog_ips_thermal} \end{figure} \section{Validation tests \label{verification}} \subsection{CHOG ratio} As we approach a scale where exact validation of samples becomes unfeasible, we can use the Chen Heavy Output Generation (CHOG) ratio test outlined in ref.~\cite{Zhong1460}. Certain output patterns from a random optical network occur more frequently due to constructive interference, and it is thought to be difficult to replicate this observation with classical samplers. This adversarial test assesses the relative likelihood of two sets of samples (`trial' and `adversarial') being drawn from a given ideal distribution. As samples are drawn, the CHOG ratio is updated: \begin{align} r_{\mathrm{CHOG}}&=\frac{P_{\mathrm{ideal}}(\mathrm{samples}_{\mathrm{trial}})}{P_{\mathrm{ideal}}(\mathrm{samples}_{\mathrm{trial}})+P_{\mathrm{ideal}}(\mathrm{samples}_{\mathrm{adv}})}\\ &=\left(1+\prod_{j}\frac{P_{\mathrm{ideal}}(\mathrm{sample}_{\mathrm{adv}}(j))}{P_{\mathrm{ideal}}(\mathrm{sample}_{\mathrm{trial}}(j))}\right)^{-1}.\label{eq:chog} \end{align} Convergence to a value of 1 indicates that the trial samples were more likely to be drawn from the ideal distribution than the adversarial samples. In the case of click detection samples, the probabilities of observing these samples from the ideal (squeezed) distribution are calculated using the Torontonian.
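In practice, the running product in Eq.~\ref{eq:chog} is best accumulated in log space to avoid overflow or underflow. A minimal Python sketch, assuming the ideal log-probabilities of both sample sets have already been computed:
\begin{verbatim}
import numpy as np
from scipy.special import expit

def chog_ratio(logp_trial, logp_adv):
    """Running CHOG ratio after each pair of samples, computed in log space.

    logp_trial, logp_adv : log P_ideal of the trial and adversarial samples.
    Returns an array of r_CHOG values; convergence towards 1 indicates the
    trial samples are more likely to come from the ideal distribution."""
    log_odds = np.cumsum(np.asarray(logp_adv) - np.asarray(logp_trial))
    # r = 1 / (1 + exp(log_odds)), evaluated stably.
    return expit(-log_odds)
\end{verbatim}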
In the work by USTC~\cite{Zhong1460}, click samples are drawn from the trial GBS experiment and an adversarial thermal sampler. A value of 1 indicates that the GBS samples were more likely to be drawn from the ideal distribution than the thermal samples. For Jiǔzhāng, GBS samples corresponding to fixed total numbers of clicks of between 26 and 38 were validated against a thermal sampler. \subsubsection{IPS vs thermal} \label{ips_vs_thermal} Our IPS sampler (used as a proposal distribution for the MIS algorithm) naturally incorporates constructive interference of pairs of photons. Here, we use it to generate trial samples and apply the CHOG test to validate against an adversarial thermal sampler. For the IPS sampler we use a covariance matrix corresponding to 13 sources of two-mode squeezed vacuum, with squeezing parameter $r=1.55$ and transmission $\eta=0.3$, injected into a Haar random 52-mode unitary interferometer. The covariance matrix for the thermal sampler is constructed using the same transmissions and unitary interferometer, but now injected with 26 thermal states, each with mean photon number $n_{th}=\sinh^{2}(r)$. We update the CHOG ratio using Eq.~\ref{eq:chog} and results for click numbers of between 13 and 18 are shown in Fig.~\ref{chog_ips_thermal}. The convergence to 1 for the IPS sampler shows that those samples are more likely to have been drawn from the ideal distribution than the thermal samples. Hence, we have shown that our IPS sampler -- from which samples can be efficiently drawn classically -- passes the CHOG test against a thermal sampler in a similar way to the experimental GBS samples from Jiǔzhāng. This challenges the usefulness of the CHOG test against a thermal adversarial sampler in validating the quantum computational complexity of GBS. It also suggests that the IPS distribution should be used as an adversary model to test against in future experiments. Because the IPS distribution contains no interference between different photon pairs, it could be considered as the distribution generated by squeezers with zero spectral purity. Following this intuition, we also suggest a finite purity adversary (i.e. squeezing across $\ge 2$ Schmidt modes) as another important, more challenging, model to test against. Calculating probabilities of Gaussian states in the presence of spectral impurity has been investigated in ref.~\cite{thomas2020general}. \begin{figure} \caption{$M=24$ CHOG tests for different total click numbers to determine the relative likelihood of MIS and IPS samples from the ideal click distribution. Convergence to 1 for the MIS samplers indicates they are more likely to have come from the click distribution than the IPS samples.} \label{chog_mis_ips} \end{figure} \subsubsection{MIS vs IPS} Our MIS method takes the IPS as its proposal distribution and should then converge to the target (Torontonian) distribution. Here, we use a CHOG ratio test to validate trial MIS samples against the IPS distribution as the adversary. Our discussion in the previous section showed that IPS samples are more likely to be drawn from the ideal distribution than thermal samples, and so here they should provide a more stringent test. We use a covariance matrix corresponding to 6 sources of two-mode squeezed vacuum, with squeezing parameter $r=1.55$ and transmission $\eta=0.3$, injected into a Haar random 24-mode unitary interferometer. We draw $10^5$ samples from the IPS distribution and use these in an MIS chain with a burn-in of 50 and a thinning interval of 10.
We then post-select for samples of different fixed total click numbers and update the CHOG ratio. Results for click numbers of between 9 and 12 are shown in Fig.~\ref{chog_mis_ips}. The convergence to 1 for the MIS samples shows that they are more likely to have been drawn from the ideal distribution than the starting IPS samples, and this is indicative of convergence of the chain. \subsection{Two-point correlators} Two-point correlators have been proposed as a benchmark for GBS~\cite{dphil_tpc}. Two-point correlations of the light emerging from some optical network are defined as: \begin{equation} C_{i,j}=\langle \Pi_{1}^{i}\Pi_{1}^{j}\rangle - \langle \Pi_{1}^{i}\rangle\langle\Pi_{1}^{j}\rangle, \end{equation} where the projector $\Pi_{1}^{i}=\mathbb{I}-\ket{0}_{i}\bra{0}_{i}$ corresponds to a click on mode $i$. The distributions of two-point correlators are expected to differ between ideal squeezed and thermal samplers. Correlators from a GBS device can therefore be used to validate a squeezed over a thermal hypothesis. As discussed in the main text, the IPS distribution naturally includes interference of pairs of photons. We therefore expect the two-point correlators for this distribution to match those for the ideal distribution. For the ideal and IPS distributions we use a covariance matrix corresponding to 8 sources of two-mode squeezed vacuum, with squeezing parameter $r=1.55$ and transmission $\eta=0.3$, injected into a Haar random 32-mode unitary interferometer. We draw $10^5$ IPS samples for each single output and pair of output modes, convert them to click patterns and use these to estimate the IPS click probabilities and evaluate $C_{i,j}$. For the thermal distribution we use the same unitary interferometer and transmissions but set the mean photon number for 16 thermal states to be $\sinh^{2}(r)$. Results are shown in Fig.~\ref{tpc_ideal_ips_thermal}. \begin{figure} \caption{$M=32$ histogram of two-point correlator values for ideal (squeezed), thermal and IPS distributions. The values for ideal and IPS show good overlap and differ significantly from those for the thermal distribution.} \label{tpc_ideal_ips_thermal} \end{figure} The two-point correlators for the squeezed and IPS distributions are in good agreement. Slight deviations arise from probability estimation errors due to finite sampling. These distributions both significantly diverge from that for the thermal correlators. The IPS distribution is efficient to sample and shows high overlap with the ideal distribution for squeezers, suggesting such tests are not a sufficient indicator of GBS complexity. \end{document}
\begin{document} \title{Arc spaces of $cA$-type singularities} \author{Jennifer M.\ Johnson and J\'anos Koll\'ar} \maketitle Let $X$ be a complex variety or an analytic space and $x\in X$ a point. A formal arc through $x$ is a morphism $\phi:\spec \c[[t]]\to X$ such that $\phi(0)=x$. The set of formal arcs through $x$ -- denoted by $\farc(x\in X)$ -- is naturally a (non-noetherian) scheme. A preprint of Nash, written in 1968 but only published later as \cite{nash-arc}, describes an injection -- called the {\it Nash map} -- from the irreducible components of $\farc(x\in X)$ to the set of so called {\it essential divisors.} These are the divisors whose center on every resolution $\pi:X'\to X$ is an irreducible component of $\pi^{-1}(x)$. The {\it Nash problem} asks if this map is also surjective or not. Surjectivity fails in dimensions $\geq 3$ \cite{MR2030097, df-arc} but holds in dimension 2 \cite{boba-pp}. In all dimensions, the most delicate cases are singularities whose resolutions contain many rational curves. For example, while it is very easy to describe all arcs and their deformations on Du Val singularities of type $A$, the type $E$ cases have been notoriously hard to treat \cite{ps-E6, pereira-phd}. The first aim of this note is to determine the irreducible components of the arc space of $cA$-type singularities in all dimensions. In Section \ref{sec.irreds} we prove the following using quite elementary arguments. \begin{thm}\label{cA.sharc.thm} Let $f(z_1,\dots, z_n)$ be a holomorphic function whose multiplicity at the origin is $m\geq 2$. Let $X:=\bigl(xy=f(z_1,\dots, z_n)\bigr)\subset \c^{n+2}$ denote the corresponding $cA$-type singularity. Assume that $\dim X\geq 2$. \begin{enumerate} \item $\farc(0\in X)$ has $(m-1)$ irreducible components $\farc_i(0\in X)$ for $0<i<m$. \item There are dense, open subsets $\farc_i^{\circ}(0\in X)\subset \farc_i(0\in X)$ such that $$ \bigl(\psi_1(t), \psi_2(t),\phi_1(t), \dots, \phi_n(t)\bigr)\in \farc_i^{\circ}(0\in X) $$ iff $\mult\psi_1(t)=i,\ \mult \psi_2(t)=m-i $ and $\mult f\bigl(\phi_1(t), \dots, \phi_n(t)\bigr)=m$. \end{enumerate} \end{thm} We found it much harder to compute the set of essential divisors and we have results only if $\mult_0f=2$. If $\dim X=3$ then, after a coordinate change, we can write the equation as $(xy=z^2-u^m)$. Already \cite{nash-arc} proved that these singularities have at most 2 essential divisors: an easy one obtained by blowing-up the origin and a difficult one obtained by blowing-up the origin twice. In Section \ref{sec.essent} we use ideas of \cite{df-arc} to determine the cases when the second divisor is essential. The following is obtained by combining Theorem \ref{cA.sharc.thm} and Proposition \ref{res.prop}. \begin{exmp}\label{main.nash.exmp} For the singularities $X_m:=(xy=z^2-u^m)\subset \c^4$ the Nash map is not surjective for odd $m\geq 5$ but surjective for even $m$ and for $m=3$. \end{exmp} Thus the simplest counter example to the Nash conjecture is the singularity $$ (x^2+y^2+z^2+t^5=0)\subset \c^4. $$ In higher dimensions our answers are less complete. We describe the situation for the divisors obtained by the first and second blow-ups as above, but we do not control other exceptional divisors. Using Theorem \ref{cA.sharc.thm} and Proposition \ref{res.higher.prop} we get the following partial generalization of Example \ref{main.nash.exmp}. \begin{exmp} \label{res.higher.cor} Let $g(u_1,\dots, u_r) $ be an analytic function near the origin. 
Set $m=\mult_0g$ and let $g_m$ denote the degree $m$ homogeneous part of $g$. If $m\geq 4$ and the Nash map is surjective for the singularity $$ X_g:=\bigl(xy=z^2-g(u_1,\dots, u_r)\bigr)\subset \c^{r+3} $$ then $g_m(u_1,\dots, u_r)$ is a perfect square. \end{exmp} Since we do not determine all essential divisors, the cases when $g_m(u_1,\dots, u_r)$ is a perfect square remain undecided. On the one hand, this can be interpreted to mean that the Nash conjecture hopelessly fails in dimensions $\geq 3$. On the other hand, the proof leads to a reformulation of the Nash problem and to an approach that might be feasible, at least in dimension 3; see Section \ref{sec.revised}. In Section \ref{sec.short} we observe that the deformations constructed in Section \ref{sec.irreds} also lead to an enumeration of the irreducible components of the space of short arcs -- introduced in \cite{k-short} -- for $cA$-type singularities. \begin{ques}[Arcs on $cDV$ singularities]\label{cDV.ques} It is easy to see that Theorem \ref{cA.sharc.thm} is equivalent to saying that the image of every general arc on $X$ is contained in an $A$-type surface section of $X$. It is natural to ask if this holds for all $cDV$ singularities. That is, let $(0\in X)\subset \c^n$ be a hypersurface singularity such that $X\cap L^3$ is a Du~Val singularity for every general 3-dimensional linear space (or smooth 3--fold) $0\in L^3\subset \c^n$. Let $\phi$ be a general arc on $X$. Is it true that there is a 3--fold $L^3\subset \c^n$ containing the image of $\phi$ such that $X\cap L^3$ is a Du~Val singularity? \end{ques} \section{Arcs on $cA$-type singularities}\label{sec.irreds} \begin{defn}[$cA$-type singularities] In some coordinates write a hypersurface singularity as $$ X:=\bigl(f(x_1,\dots, x_{n+1})=0\bigr)\subset \c^{n+1}. $$ Assume that $X$ is singular at the origin and let $f_2$ denote the quadratic part of $f$. If $\mult_0 f=2$ then $(f_2=0)$ is the tangent cone of $X$ at the origin. We say that $X$ has {\it $cA$-type} if $\rank f_2\geq 2$ and {\it $cA_1$-type} if $\rank f_2\geq 3$. By the Morse lemma, if $\rank f_2=r$ then we can choose local analytic or formal coordinates $y_i$ such that $$ f=y_1^2+\cdots+y_r^2+g(y_{r+1},\dots, y_{n+1}) \qtq{where} \mult_0 g\geq 3. $$ In the sequel we also use other forms of the quadratic part if that is more convenient. Note that by adding 2 squares in new variables we get a map from hypersurface singularities in dimension $n-2$ (modulo isomorphism) to $cA$-type hypersurface singularities in dimension $n$ (modulo isomorphism). This map is one-to-one and onto; see \cite[Sec.11.1]{avg}. Thus $cA$-type singularities are quite complicated in large dimensions. \end{defn} We rename the coordinates and write a $cA$-type singularity as $$ X:=\bigl(xy=f(z_1,\dots, z_n)\bigr). $$ Thus an arc through the origin is written as $$ t\mapsto \bigl(\psi_1(t), \psi_2(t),\phi_1(t), \dots, \phi_n(t)\bigr) $$ where $\psi_i, \phi_j$ are power series such that $\mult \psi_i,\mult \phi_j\geq 1$ for $i=1,2$ and $j=1,\dots, n$. We set ${\vec \phi}(t)=\bigl(\phi_1(t), \dots, \phi_n(t)\bigr)$. A deformation of ${\vec \phi}(t)$ is given by power series $ \bigl(\Phi_1(t,s), \dots, \Phi_n(t,s)\bigr)$. Then we compute $$ f\bigl(\Phi_1(t,s), \dots, \Phi_n(t,s)\bigr)\in \c[[t,s]] $$ and try to factor it to obtain $$ \Psi_1(t,s) \Psi_2(t,s)= f\bigl(\Phi_1(t,s), \dots, \Phi_n(t,s)\bigr)\in \c[[t,s]].
$$ Usually this factoring is not possible, but Newton's method of rotating rulers says that $$ f\bigl(\Phi_1(t,s^r), \dots, \Phi_n(t,s^r)\bigr) $$ factors for some $r\geq 1$. \begin{say}[Proof of Theorem \ref{cA.sharc.thm}] After a linear change of coordinates we may assume that $z_1^m$ appears in $f$ with nonzero constant coefficient. Set $D:=\mult_t f\bigl(\phi_1(t), \dots, \phi_n(t)\bigr)$. Assume first that $D<\infty$ and consider $$ F(t, s):=f\bigl(\phi_1(t)+st, \phi_2(t), \dots, \phi_n(t)\bigr)= \sum_i \frac{\partial^i f}{\partial z_1^i}\bigl(\vec\phi\bigr) \cdot \frac{(st)^i}{i!}. $$ We know that $t^m$ divides $F(t,s)$ (since $\mult_0 f= m$) and $(st)^m$ appears in $F$ with nonzero coefficient (since $z_1^m$ appears in $f$ with nonzero coefficient). Thus $t^m$ is the largest $t$-power that divides $F(t,s)$. Furthermore, $t^D$ is the smallest $t$-power that appears in $F$ with nonzero constant coefficient. Thus, by Lemma \ref{newton.lem}, there is an $r\geq 1$ such that $$ F(t, s^r)=u(t,s)\prod_{i=1}^D \bigl(t-\sigma_i(s)\bigr) $$ where $u(0,0)\neq 0$ and $\sigma_i(0)=0$. Furthermore, exactly $m$ of the $\sigma_i$ are identically zero. For $j=1,2$ write $\psi_j(t)=t^{a_j}v_j(t)$ where $v_j(0)\neq 0$. Note that $a_1+a_2=D$ and $u(t,0)=v_1(t)v_2(t)$. Divide $\{1,\dots, D\}$ into two disjoint subsets $A_1, A_2$ such that $|A_j|=a_j$ and they both contain at least 1 index $i$ such that $\sigma_i(s)\equiv 0$. Finally set $$ \Psi_1(t,s)=v_1(t)\cdot \prod_{i\in A_1} \bigl(t-\sigma_i(s)\bigr) \qtq{and} \Psi_2(t,s)=\frac{u(t,s)}{v_1(t)} \cdot \prod_{i\in A_2} \bigl(t-\sigma_i(s)\bigr). $$ Then $$ \bigl(\Psi_1(t,s), \Psi_2(t,s), \phi_1(t)+st, \phi_2(t), \dots, \phi_n(t)\bigr) $$ is a deformation of $\bigl(\psi_1(t), \psi_2(t),\phi_1(t), \dots, \phi_n(t)\bigr)$ whose general member is in the $k$th irreducible component as in (\ref{cA.sharc.thm}.2) iff exactly $k$ of the $\{\sigma_i: i\in A_1\}$ are identically zero. (This also shows that arcs with $\mult \psi_1(t)\geq m-1$ and $\mult \psi_2(t)\geq m-1$ constitute the intersection of all of the irreducible components.) If $D=\infty$, that is, when $f\bigl(\phi_1(t), \dots, \phi_n(t)\bigr)$ is identically zero, we first need to perform some similar preliminary deformations. If both $\psi_1(t), \psi_2(t)$ are identically zero then we can take $$ \bigl(st, 0, \phi_1(t), \phi_2(t), \dots, \phi_n(t)\bigr). $$ Hence, up to interchanging $x$ and $y$, we may assume that $d:=\mult \psi_1(t)<\infty$. Again assuming that $z_1^m$ appears in $f$ with nonzero coefficient, we see that $$ F(t,s):=f\bigl(\phi_1(t)+st^{d+1}, \phi_2(t), \dots, \phi_n(t)\bigr) $$ is not identically zero and divisible by $t^{d+1} $. Thus $F(t,s)/\psi_1(t) $ is holomorphic and divisible by $t$. Therefore $$ \Bigl(\psi_1(t), \frac{F(t,s)}{\psi_1(t)}, \phi_1(t)+st^{d+1}, \phi_2(t), \dots, \phi_n(t)\Bigr) $$ is a deformation of $\bigl(\psi_1(t), 0, \phi_1(t), \phi_2(t), \dots, \phi_n(t)\bigr)$ such that $$ \mult_t f\bigl(\phi_1(t)+st^{d+1}, \phi_2(t), \dots, \phi_n(t)\bigr)<\infty $$ for $0<|s|\ll 1$. \qed \end{say} We used Newton's lemma on Puiseux series solutions in the following form. \begin{lem}\label{newton.lem} Let $g(x,y)\in \c[[x,y]]$ be a power series. Assume that $m:=\mult_0g(x,0)<\infty$. Then there is an $r\geq 1$ such that one can write $g(x, z^r)$ as $$ g(x, z^r)=u(x,z)\prod_{i=1}^m \bigl(x-\sigma_i(z)\bigr) $$ where $u(0,0)\neq 0$ and $\sigma_i(0)=0 $ for every $i$. The representation is unique, up to permuting the $\sigma_i(z)$.
Furthermore, if $g(x,y)$ is holomorphic on the bidisc $\bdd_x\times \dd_y$ then $u(x,z)$ and the $\sigma_i(z)$ are holomorphic on the smaller bidisc $\bdd_x\times \dd_z(\epsilon)$ for some $0<\epsilon\leq 1$. \qed \end{lem} \section{Essential divisors on $cA_1$-type 3-fold singularities}\label{sec.essent} In dimension 3, the only $cA_1$-type singularities are $X_m:=(xy=z^2-t^m)$ for $m\geq 2$. Already \cite[p.37]{nash-arc} proved that they have at most 2 essential divisors. We use the method of \cite[4.1]{df-arc} to determine the precise count. \begin{defn} \label{first.ess.defn} Let $X$ be a normal variety or analytic space and $E$ a divisor over $X$. That is, there is a birational or bimeromorphic morphism $p:X'\to X$ such that $E\subset X'$ is an exceptional divisor. The closure of $p(E)\subset X$ is called the {\it center} of $E$ on $X$; it is denoted by $\cent_XE$. If $\cent_XE=\{x\}$, we say that $E$ is a divisor over $(x\in X)$. We say that $E$ is an {\it essential divisor} over $X$ if for every resolution of singularities $\pi:Y\to X$, $\cent_YE$ is an irreducible component of $\pi^{-1}\bigl(\cent_XE\bigr)$. (Note that $\pi^{-1}\circ p:X'\map Y$ is regular on a dense subset of $E$, hence $\cent_YE$ is defined.) If $X$ is an analytic space, then $Y$ is allowed to be any analytic resolution. If $X$ is algebraic, one gets slightly different notions depending on whether one allows $Y$ to be a quasi-projective variety, an algebraic space or an analytic space; see \cite{df-arc}. We believe that for the Nash problem it is natural to allow analytic resolutions. \end{defn} \begin{prop} \label{res.prop} Set $X_m:=(xy=z^2-t^m)\subset \c^4$. \begin{enumerate} \item If $m\geq 5$ is odd, there are 2 essential divisors. \item If $m\geq 2$ is even or $m= 3$, there is 1 essential divisor. \end{enumerate} \end{prop} Even in dimension 3, it seems surprisingly difficult to determine the set of essential divisors. A basic invariant is given by the discrepancy. \begin{defn} \label{discrep.defn} Let $X$ be a normal variety or analytic space. Assume for simplicity that the canonical class $K_X$ is Cartier. (This holds for all hypersurface singularities.) Let $\pi:Y\to X$ be a resolution of singularities and write $$ K_Y\sim \pi^*K_X+\tsum_i a(E_i,X)E_i $$ where the $E_i$ are the $\pi$-exceptional divisors. The integer $a(E_i,X)$ is called the {\it discrepancy} of $E_i$. (See \cite[Sec.2.3]{km-book} for basic references and more general definitions.) For example, let $X$ be smooth and $Z\subset X$ a smooth subvariety of codimension $r$. Let $\pi_Z:B_ZX\to X$ denote the blow-up and $E_Z\subset B_ZX$ the exceptional divisor. Then $a(E_Z, X)=r-1$ and an easy induction shows that $a(F,X)\geq r$ for every other divisor whose center on $X$ is $Z$. We say that $X$ is {\it canonical} (resp.\ {\it terminal}) if $a(E_i,X)\geq 0$ (resp.\ $a(E_i,X)> 0$) for every resolution and every exceptional divisor. For instance, normal $cA$-type singularities are canonical and a $cA$-type singularity is terminal iff its singular set has codimension $\geq 3$; see \cite{Reid83} for a proof that applies to all $cDV$ singularities or \cite[1.42]{kk-singbook} for a simpler argument in the $cA$ case. \end{defn} \begin{say}[Resolving $X_m$] Blow up the origin to get $\pi_1:X_{m,1}:=B_0X_m\to X_m$. The exceptional divisor is the singular quadric $E_1\cong (xy-z^2=0)\subset \p^3(x,y,z,t)$.
$B_0X_m$ has one singular point, visible in the chart $$ (x_1, y_1, z_1, t):=\bigl(x/t, y/t, z/t, t\bigr) $$ where the local equation is $x_1y_1=z_1^2-t^{m-2}$. We can thus blow up the origin again and continue. After $r:=\rdown{\tfrac{m}{2}}$ steps we have a resolution $$ \Pi_r: X_{m,r}\to X_{m,r-1}\to \cdots \to X_{m,1} \to X_m. $$ We get $r$ exceptional divisors $E_r,\dots, E_1$. For $1\leq c\leq r$ the divisor $E_c$ first appears on $X_{m,c}$. At the unique singular point one can write the local equation as $$ X_{m,c}=\bigl(x_cy_c=z_c^2-t^{m-2c}\bigr) \qtq{and} E_c=(t=0), $$ where $(x_c, y_c, z_c, t):=\bigl(x/t^c, y/t^c, z/t^c, t\bigr)$. \end{say} We thus need to decide which of the divisors $E_1,\dots, E_{\rdown{\tfrac{m}{2}}}$ are essential. It is easy to see that $E_1$ is essential and a direct computation (\ref{lem.res4}) shows that $E_3,\dots, E_{\rdown{\tfrac{m}{2}}}$ are not. (This is actually not needed in order to establish Example \ref{main.nash.exmp}.) The hardest part is to decide what happens with $E_2$. \begin{lem}\label{lem.res1} Notation as above. Then \begin{enumerate} \item $a(E_c, X_m)=c$ for every $c$. \item $E_1$ is the only exceptional divisor whose center is the origin and whose discrepancy is $1$. \item $E_1$ appears on every resolution of $X_m$ whose exceptional set is a divisor. \item Let $p:Y\map X_m$ be any (not necessarily proper) bimeromorphic map from a smooth analytic space $Y$ such that $\cent_YE_1\subset Y$ is not empty. Then $\cent_YE_1$ is an irreducible component of the exceptional set $\ex(p)$. \end{enumerate} \end{lem} Proof. The first claim follows from the formula $$ \Pi_r^*\Bigl(\tfrac{dx\wedge dy\wedge dt}{z}\Bigr)= t^{c}\cdot \tfrac{dx_c\wedge dy_c\wedge dt}{z_c}. $$ Let $F$ be any other exceptional divisor whose center is the origin. Then $\cent_{X_{m,r}}F$ lies on one of the $E_c$, thus $a(F,X)>a(E_c,X)\geq 1$. (This also proves that $X_m$ is terminal.) To see (3) and (4), let $p:Y\map X_m$ be as in (4) and set $W_1:=\cent_YE_1\subset Y$. Let $F_i\subset Y$ be the exceptional divisors and note that, as in \cite[2.29]{km-book}, $$ a(E_1, X_m)\geq \bigl(\codim_YW_1-1\bigr)+\tsum_i \mult_{W_1}F_i\cdot a(F_i, X_m). \eqno{(\ref{lem.res1}.5)} $$ Note that $a(E_1, X_m)=1$ and $a(F_i, X_m)\geq 1$ for every $i$. If $W_1$ is not an irreducible component of $\ex(p)$ then $W_1\subset F_i$ for some $i$ and then both terms on the right hand side of (\ref{lem.res1}.5) are positive, a contradiction. \qed \begin{lem}\label{lem.res2} If $m\in \{2,3\}$ then $B_0X_m$ is smooth, hence the only essential divisor is $E_1$. \qed \end{lem} \begin{say}[Small resolutions and factoriality of $X_m$] \label{small.res.say.1} If $m=2a$ is even, then $X_m$ has a small resolution obtained by blowing up either $(x=z-t^a=0)$ or $(x=z+t^a=0)$. The resulting blow-ups $ Y^{\pm}_{2a}\subset \c^4_{xyzt}\times \p^1_{uv}$ are defined by the equations $$ Y^{\pm}_{2a}:=\Bigl(\rank \left( \begin{array}{ccc} x & z\pm t^a & u\\ z\mp t^a & y & v \end{array} \right) \leq 1\Bigr) \eqno{(\ref{small.res.say.1}.1)} $$ By contrast, $X_m$ does not have small resolutions if $m$ is odd. More generally, let $$ X_f:=\bigl(xy=f(z,t)\bigr)\subset \c^{4} $$ be an isolated $cA$-type singularity. Write $f=\prod_j f_j$ as a product of irreducibles. The $f_j$ are distinct since the singularity is isolated. Set $D_j:=(x=f_j=0)$. By \cite[2.2.7]{k-etc} the local divisor class group is $$ \ddiv\bigl(0\in X_f\bigr)= \bigl(\tsum_j \z[D_j]\bigr)\big/\tsum_j [D_j]. \eqno{(\ref{small.res.say.1}.2)} $$ In particular, $X_f$ is factorial iff $f$ is irreducible.
This formula works both algebraically and analytically. If we are interested in the affine variety $X_f$, then we consider factorizations of $f$ in the polynomial ring. If we are interested in the complex analytic germ $X_f$, then we consider factorizations of $f$ in the ring of germs of analytic functions. Thus, for example, $$ (xy=z^2-t^2-t^3)\subset \c^4 $$ is algebraically factorial, since $z^2-t^2-t^3$ is an irreducible polynomial, but it is not analytically factorial, since $$ z^2-t^2-t^3=\bigl(z-t\sqrt{1+t}\bigr)\bigl(z+t\sqrt{1+t}\bigr). $$ Thus if $m$ is odd then $X_m$ is factorial (both algebraically and analytically) and it does not have small resolutions; see Lemma \ref{purecodim1.lem} for stronger results. \end{say} \begin{lem}\label{lem.res3} If $m$ is even then there is a divisorial resolution whose sole exceptional divisor is birational to $E_1$. Thus the only essential divisor is $E_1$. \end{lem} Proof. The $m=2$ case is in (\ref{lem.res2}), hence we may assume that $m=2a\geq 4$. There are 2 ways to obtain such resolutions. First, we can blow up the exceptional curve in either of the $Y^{\pm}_{2a} $ as in (\ref{small.res.say.1}.1). Alternatively, we first blow up the origin to get $B_0X_m$ which has one singular point with local equation $x_1y_1=z_1^2-t_1^{2a-2}$ and then blow up $D^+:=(x_1=z_1+t_1^{a-1}=0)$ or $D^-:=(x_1=z_1-t_1^{a-1}=0)$.\qed \begin{lem}\label{lem.res4} \cite[p.37]{nash-arc} The divisors $E_3,\dots, E_r$ are not essential. \end{lem} Proof. If $m$ is even, this follows from (\ref{lem.res3}), but for the proof below the parity of $m$ does not matter. If $2b\geq a\geq 0$ and $m\geq a$ then $(u,v,w,t)\mapsto (ut, vt^{a+1}, wt^{b+1}, t)=(x,y,z,t)$ defines a birational map $$ g(a,b,m):Z_{abm}:=\bigl(uv=w^2t^{2b-a}-t^{m-2-a}\bigr)\to X_m. $$ Note that $\ex\bigl(g(a,b,m)\bigr)=(t=0)$ is mapped to the origin and $Z_{abm}$ is smooth along the $v$-axis, save at the origin. If $1\leq c\leq m/2$ then $(x_c,y_c,z_c,t)\mapsto (x_ct^c, y_ct^c, z_ct^c, t)=(x,y,z,t)$ defines a birational map $$ h(c,m): X_{m,c}:=\bigl(x_cy_c=z_c^2-t^{m-2c}\bigr)\to X_m. $$ By composing we get a birational map $g(a,b,m)^{-1}\circ h(c,m): X_{m,c}\map Z_{abm}$ given by $$ (x_c,y_c,z_c,t)\mapsto (x_ct^{c-1}, y_ct^{c-a-1}, z_ct^{c-b-1}, t)= (u,v,w,t) $$ which is a morphism if $c\geq a+1,b+1$. If $c=a+1$ and $c>b+1$ then we have $$ (x_c,y_c,z_c,t)\mapsto (x_ct^{c-1}, y_c, z_ct^{c-b-1}, t)= (u,v,w,t) $$ which maps $E_c$ to the $v$-axis. If $c\geq 3$ then by setting $a=c-1, b=c-2$ we get a birational morphism $p(c,m):=g(c{-}1,c{-}2,m)^{-1}\circ h(c,m)$ given by $$ (x_c,y_c,z_c,t)\mapsto (x_ct^{c-1}, y_c, z_ct, t)= (u,v,w,t). $$ Note that $$ p(c,m): X_{m,c}=\bigl(x_cy_c=z_c^2-t^{m-2c}\bigr)\to \bigl(uv=w^2t^{c-3}-t^{m-c-1}\bigr)=Z_{c-1,c-2,m} $$ maps $E_c$ onto the $v$-axis. Thus $E_c$ is not essential for $c\geq 3$. \qed \begin{lem}\label{lem.res5} If $m\geq 5$ is odd then $E_2$ is essential. \end{lem} Proof. We follow the arguments in \cite[4.1]{df-arc}. Let $p:Y\to X_m$ be any resolution and set $Z:=\cent_YE_2\subset Y$. Since $X_m$ is factorial (here we use that $m$ is odd), $\ex(p)$ has pure dimension 2 by (\ref{purecodim1.lem}.2). Assume to the contrary that $Z$ is not a divisor. Using that $a(E_2, X_m)=2$, (\ref{lem.res1}.5) implies that $Z$ is a curve, that there is a unique exceptional divisor $F\subset Y$ that contains $Z$, that $F$ is smooth at general points of $Z$ and that $a(F,X_m)=1$. If $p(F)$ is a curve then $Z$ is an irreducible component of $p^{-1}(0)$. The remaining case is when $p(F)=0$, thus $F=E_1$ by (\ref{lem.res1}.2).
Since $t$ vanishes along $E_2$ with multiplicity 1, it also vanishes along $Z$ with multiplicity 1. Since $p^*x, p^*y, p^*z, p^*t$ all vanish along $E_1$ the rational functions $p^*(x/t), p^*(y/t), p^*(z/t)$ are regular generically along $Z$. Thus $p_1:=\pi_1^{-1}\circ p:Y\map X_{m,1}$ is a morphism generically along $Z$. Note that our $E_2$ is what we would call $E_1$ if we started with $ X_{m,1}$. Applying (\ref{lem.res1}.4) to $p_1:Y\map X_{m,1}$ we see that $Z$ is an irreducible component of $\ex(p_1)$. Since $m$ is odd, $X_{m,1}$ is analytically factorial by (\ref{small.res.say.1}), hence $Z$ is a divisor by (\ref{purecodim1.lem}.2). This is a contradiction. \qed \begin{lem} \label{purecodim1.lem} Let $X, Y$ be normal varieties or analytic spaces and $g:Y\to X$ a birational or bimeromorphic morphism. Then the exceptional set $\ex(g)$ has pure codimension 1 in $Y$ in the following cases. \begin{enumerate} \item $Y$ is an algebraic variety and $X$ is $\q$-factorial. \item $\dim Y=3$ and $X$ is analytically locally $\q$-factorial. \end{enumerate} \end{lem} Proof. The algebraic case is well known; see for instance the method of \cite[Sec.II.4.4]{shaf}. If $\dim Y=3$ and $\ex(g)$ does not have pure codimension 1 then it has a 1-dimensional irreducible component $C\subset Y$. After replacing $X$ by a suitable neighborhood of $g(C)\in X$ we may assume that there is a divisor $D_Y\subset Y$ such that $\ex(g)\cap D_Y$ is a single point of $C$ and $g|_{D_Y}$ is proper. Thus $D_X:=g(D_Y)$ is a divisor on $X$. If $mD_X$ is Cartier then so is $g^*(mD_X)$ hence its support has pure codimension 1 in $Y$. On the other hand, $\supp\bigl(g^*(mD_X)\bigr)=\ex(g)\cup D_Y$ does not have pure codimension 1. (Note that there are many possible choices for $D_Y$; the resulting $D_X$ determine an algebraic equivalence class of divisors.)\qed Somewhat surprisingly, the analog of (\ref{purecodim1.lem}.2) fails in dimension 4. \begin{exmp} Let $W\subset \p^4$ be a smooth quintic 3--fold and $C\subset W$ a line whose normal bundle is $\o(-1)+\o(-1)$. Let $X\subset \c^5$ denote the cone over $W$ with vertex $0$; it is analytically locally factorial by \cite[XI.3.14]{sga2}. The exceptional divisor of the blow-up $B_0X\to X$ can be identified with $W$; let $C\subset B_0X$ be our line. Its normal bundle is $\o(-1)+\o(-1)+\o(-1)$. Blow up the line $C$ to obtain $B_CB_0X\to B_0X$. Its exceptional divisor is $E\cong \p^1\times \p^2$. One can contract $E$ in the other direction to obtain $g:Y\to X$. By construction, $\ex(g)$ is the union of $\p^2$ and of a 3-fold obtained from $W$ by flopping the line $C$. The two components intersect along a line. \end{exmp} \begin{rem} \label{corves.to.locdiv} We will need to understand in detail the proof of (\ref{purecodim1.lem}.2) for $$ X_c:=\bigl(xy=z^2-ct^{m}\bigr)\subset \c^4\qtq{where $c\neq 0$.} $$ Let $g_c:Y_c\to X_c$ be a proper birational or bimeromorphic morphism and $E_c\subset \ex(g_c)$ a 1-dimensional irreducible component. The proof of (\ref{purecodim1.lem}.2) associates to $E_c$ an algebraic equivalence class of non-Cartier divisors on $X_c$. Thus $m$ has to be even by (\ref{small.res.say.1}). If $m=2a$ is even then the divisor class group is $\ddiv(X_c)\cong \z$. The two possible generators correspond to $(x=z-\sqrt{c}t^a=0)$ and $(x=z+\sqrt{c}t^a=0)$. Starting with $E_c$ we constructed a divisor $D_c\subset X_c$ which is a nontrivial element of $\ddiv(X_c)$. Thus $[D_c]$ is a positive multiple of either $(x=z-\sqrt{c}t^a=0)$ or $(x=z+\sqrt{c}t^a=0)$. 
Hence, to $E_c\subset Y_c$ we can associate a choice of $\sqrt{c} $. This may not be very interesting for a fixed value of $c$ (since many other choices are involved) but it turns out to be quite useful when $c$ varies. \end{rem} \begin{prop}\label{div.varies.infams} Let $g(u_1,\dots, u_r, v) $ be a holomorphic function for $u_i\in \c$ and $|v|<\epsilon$ such that $g(u_1,\dots, u_r, 0)$ is not identically zero. Set $$ X:=\bigl(xy=z^2-v^mg(u_1,\dots, u_r, v)\bigr)\subset \c^{r+4}. $$ Let $\pi:Y\to X$ be a birational or bimeromorphic morphism. Assume that there is an irreducible component $Z\subset \ex(\pi)$ that dominates $(x=y=z=v=0)\subset X$, has codimension $\geq 2$ in $Y$ and such that $\pi|_{Z}:Z\to (x=y=z=v=0)$ has connected fibers. Then $m$ is even and $g(u_1,\dots, u_r,0)$ is a perfect square. \end{prop} Proof. For general ${\mathbf c}=(c_1,\dots, c_r)\in \c^r $ the repeated hyperplane section $$ X({\mathbf c}):=\bigl(xy=z^2-v^mg({\mathbf c}, v)\bigr)\subset \c^{4} $$ has an isolated singularity at the origin and we get a proper birational or bimeromorphic morphism $$ \pi({\mathbf c}):Y({\mathbf c})\to X({\mathbf c}) $$ where $Y({\mathbf c})\subset Y$ is the preimage of $X({\mathbf c}) $. Furthermore, $Z({\mathbf c}):=Z\cap Y({\mathbf c})$ is an irreducible component of $ \ex\bigl(\pi({\mathbf c})\bigr)$ and has codimension $\geq 2$ in $Y({\mathbf c})$. Thus, as we noted above, $m=2a$ is even and our construction gives a function $$ (c_1,\dots, c_r)\mapsto \mbox{a choice of } \sqrt{g(c_1,\dots, c_r,0)}. $$ It is clear that this function is continuous on a Zariski open set $U\subset \c^r$. Therefore $g(u_1,\dots, u_r,0)$ is a perfect square. \qed \begin{rem}\label{converse.to.square} Conversely, assume that $m$ is even and $g(u_1,\dots, u_r, 0)= h^2(u_1,\dots, u_r)$ is a square. Write the equation of $X$ as $$ xy=z^2-v^m\bigl(h^2(u_1,\dots, u_r)+vR(u_1,\dots, u_r, v)\bigr). $$ Over the open set $X^0\subset X$ where $h\neq 0$, change coordinates to $w:=h^{-2}v$. (Equivalently, blow up $(v=h=0)$ twice.) Then $$ D:=\bigl(x=z-w^{m/2}h^{m+1}\sqrt{1+wR(u_1,\dots, u_r, h^2w)}=0\bigr) $$ is a globally well defined analytic divisor. Blowing it up gives a bimeromorphic morphism $X_D\to X$ whose exceptional set over $X^0$ has codimension 2. It seems that even if $X$ is algebraic, usually $X_D$ is not an algebraic variety. \end{rem} \section{Essential divisors on $cA_1$-type singularities}\label{sec.essent.higher} In higher dimensions $cA_1$-type singularities are more complicated and their resolutions are much harder to understand. There is no simple complete answer as in dimension 3. In the previous section, the key step was to understand the exceptional divisors that correspond to the first 2 blow-ups. These are the 2 divisors that we understand in higher dimensions as well. \begin{say}[Defining $E_1$ and $E_2$]\label{def.E1.E2} In order to fix notation, write the equation as $$ X:=\bigl(xy=z^2-g(u_1,\dots, u_r)\bigr)\subset \c^{r+3}. \eqno{(\ref{def.E1.E2}.1)} $$ Set $m:=\mult_0 g$ and let $g_s(u_1,\dots, u_r) $ denote the homogeneous degree $s$ part of $g$. In a typical local chart the 1st blow-up $\sigma_1:X_1:=B_0X\to X$ is given by $$ x_1y_1=z_1^2-\bigl(u'_r\bigr)^{-2} g(u'_1u'_r,\dots, u'_{r-1}u'_r, u'_r) \eqno{(\ref{def.E1.E2}.2)} $$ where $x=x_1u'_r, y=y_1u'_r, z=z_1u'_r$, $u_1=u'_1u'_r, \dots, u_{r-1}=u'_{r-1}u'_r$ and $ u_r=u'_r$. The exceptional divisor is the rank 3 quadric $$ E_1:=\bigl(x_1y_1-z_1^2=0\bigr)\subset \p^{r+2}.
\eqno{(\ref{def.E1.E2}.3)} $$ Note also that $$ \begin{array}{l} \bigl(u'_r\bigr)^{-2}g(u'_1u'_r,\dots, u'_{r-1}u'_r, u'_r)=\\ \qquad =\bigl(u'_r\bigr)^{m-2}\Bigl(g_m(u'_1,\dots, u'_{r-1}, 1)+ u'_rg_{m+1}(u'_1,\dots, u'_{r-1}, 1)+\cdots\Bigr). \end{array} \eqno{(\ref{def.E1.E2}.4)} $$ From this we see that, for $m\geq 4$, the blow-up $X_1$ is singular along the closure of the linear space $$ L:=(x_1=y_1=z_1=u'_r=0), \eqno{(\ref{def.E1.E2}.5)} $$ $X_1$ has terminal singularities and a general 3-fold section has equation $$ x_1y_1=z_1^2-\bigl(u'_r\bigr)^{m-2} \Bigl(g_m(c_1,\dots, c_{r-1}, 1)+ u'_rg_{m+1}(c_1,\dots, c_{r-1}, 1)+\cdots\Bigr). $$ Blowing up the closure of $L$ we obtain $X_2$ with exceptional divisor $E_2$. As in Lemma \ref{lem.res1} we compute that \begin{enumerate}\setcounter{enumi}{5} \item $a(E_1, X)=r$, \item $a(E_2, X)=r+1$, \item $a(F, X)\geq r+1$ for every other exceptional divisor whose center on $X$ is the origin and \item the pull-backs of the $u_i$ vanish along $E_1, E_2$ with multiplicity 1. \end{enumerate} \end{say} The key computation is the following. \begin{prop} \label{res.higher.prop} Notation as above and assume that $m\geq 4$. \begin{enumerate} \item $E_1$ is an essential divisor. \item $E_2$ is an essential divisor iff $g_m(u_1,\dots, u_r)$ is not a perfect square. \end{enumerate} \end{prop} Proof. By (\ref{def.E1.E2}.6) and (\ref{def.E1.E2}.8), $E_1$ has the smallest discrepancy among all divisors over $X$ whose center on $X$ is the origin. Thus $E_1$ is essential by Proposition \ref{can.mind.ess.prop}. If $E_2$ is non-essential then there is a resolution $\pi:Y\to X$ and an irreducible component $W\subset \supp\pi^{-1}(0)$ such that $Z:=\cent_YE_2\subsetneq W$. By (\ref{def.E1.E2}.9), the $\pi^*u_i$ vanish at a general point of $Z$ with multiplicity 1. Since the $\pi^*u_i$ vanish along $W$, this implies that $\supp\pi^{-1}(0)$ is smooth at a general point of $Z$. In particular, $W$ is the only irreducible component of $\supp\pi^{-1}(0)$ that contains $Z$ and $W$ is smooth at general points of $Z$. Therefore the blow-up $B_WY$ is smooth over the generic point of $Z$. So, if we replace $Y$ by a suitable desingularization of $B_WY$, we get a situation as before where in addition $W$ is a divisor. The $\pi^*u_i$ are local equations of $W$ at general points of $Z$ and $\pi^*x, \pi^*y, \pi^*z$ all vanish along $W$. Thus the rational functions $$ \pi^*(x/u_r), \pi^*(y/u_r),\pi^*(z/u_r), \pi^*(u_1/u_r),\dots, \pi^*(u_{r-1}/u_r), $$ are all regular at general points of $Z$. Hence the birational map $\sigma_1^{-1}\circ \pi:Y\to B_0X=X_1$ is a morphism at general points of $Z$. Furthermore, $\sigma_1^{-1}\circ \pi $ maps $W$ birationally to $E_1\subset X_1$ and it is not a local isomorphism along $Z$ since $Y$ is smooth but $X_1$ is singular along the center $L$ of $E_2$. Thus $Z$ is an irreducible component of $\ex\bigl(\sigma_1^{-1}\circ \pi\bigr)$. Since $E_2\to L$ has connected fibers, all the assumptions of Proposition \ref{div.varies.infams} are satisfied by the equation of the blow-up $$ x_1y_1=z_1^2-\bigl(u'_r\bigr)^{m-2} \Bigl(g_m(u'_1,\dots, u'_{r-1}, 1)+ u'_rg_{m+1}(u'_1,\dots, u'_{r-1}, 1)+\cdots\Bigr). \eqno{(\ref{res.higher.prop}.3)} $$ Thus $m$ is even and $g_m(u'_1,\dots, u'_{r-1}, 1) $ is a perfect square. Since it is a dehomogenization of $g_m(u_1,\dots, u_{r-1}, u_r) $, the latter is also a perfect square. The converse follows from Remark \ref{converse.to.square}. 
\qed \begin{defn} For $(x\in X)$ let $\operatorname{min-discrep}(x\in X)$ be the infimum of the discrepancies $a(E,X)$ where $E$ runs through all divisors over $X$ such that $\cent_XE=\{x\}$. (It is easy to see that either $\operatorname{min-discrep}(x\in X)\geq -1$ and the infimum is a minimum or $\operatorname{min-discrep}(x\in X)=-\infty$; cf.\ \cite[2.31]{km-book}. We do not need these facts.) \end{defn} \begin{prop} \label{can.mind.ess.prop} Let $(x\in X)$ be a canonical singularity and $E$ a divisor over $X$ such that \begin{enumerate} \item $\cent_XE=\{x\}$ and \item $a(E,X)<1+\operatorname{min-discrep}(x\in X)$. \end{enumerate} Then $E$ is essential. \end{prop} Proof. Let $F$ be any non-essential divisor over $X$ whose center on $X$ is the origin. Thus there is a resolution $\pi:Y\to X$ and an irreducible component $W\subset \supp\pi^{-1}(x)$ such that $$ Z:=\cent_YF\subsetneq W. $$ Let $E_W$ be the divisor obtained by blowing up $W\subset Y$. As we noted in (\ref{discrep.defn}), $$ a(E_W, Y)=\codim_YW -1\qtq{and} a(F, Y)\geq\codim_YZ -1\geq \codim_YW. \eqno{(\ref{can.mind.ess.prop}.3)} $$ Write $K_Y=\pi^*K_X+D_Y$ where $D_Y$ is effective since $X$ is canonical and note that $$ a(E_W, X)=a(E_W, Y)+\mult_WD_Y \qtq{and} a(F, X)\geq a(F, Y)+\mult_ZD_Y. \eqno{(\ref{can.mind.ess.prop}.4)} $$ Since $\mult_ZD_Y\geq \mult_WD_Y$, we conclude that $$ a(F, X)\geq 1+a(E_W, X)\geq 1+\operatorname{min-discrep}(x\in X). \eqno{(\ref{can.mind.ess.prop}.5)} $$ Thus any divisor $E$ with $a(E,X)<1+\operatorname{min-discrep}(x\in X)$ is essential. \qed \section{Short arcs}\label{sec.short} Let $\dd\subset \c$ denote the open unit disc and $\bdd\subset \c$ its closure. The open (resp.\ closed) disc of radius $\epsilon$ is denoted by $\dd(\epsilon)$ (resp.\ $\bdd(\epsilon)$). If several variables are involved, we use a subscript to indicate the name of the coordinate. \begin{say}[Short arcs]\cite{k-short}\label{short.defs.say} Let $X$ be an analytic space and $p\in X$ a point. A {\it short arc} on $(p\in X)$ is a holomorphic map $\phi(t): \bdd_t\to X$ such that $\supp\phi^{-1}(p)=\{0\}$. The space of all short arcs is denoted by $\sharc(p\in X)$. It has a natural topology and most likely also a complex structure that, at least for isolated singularities, locally can be written as the product of a (finite dimensional) complex space and of a complex Banach space; see \cite[Sec.11]{k-short} for details. A {\it deformation} of short arcs is a holomorphic map $\Phi(t,s): \bdd_t\times \dd_s\to X$ such that $\Phi(t,s_0): \bdd_t\to X$ is a short arc for every $s_0\in \dd_s$. Equivalently, if $\supp\Phi^{-1}(p)=\{0\}\times \dd_s$. In general the space of short arcs has more connected components than the space of formal arcs. As a simple example, consider arcs on $(xy=z^m)\subset \c^3$. For $0<i<m$ the deformations $$ (t,s)\mapsto \bigl(t^i(t+s)^{m-i}, t^{m-i}(t+s)^{i}, t(t+s)\bigr) \eqno{(\ref{short.defs.say}.1)} $$ show that the arc $(t^m, t^m, t^2)$ is in the closure of the families (\ref{cA.sharc.thm}.2), provided we work in the space of formal arcs. However, (\ref{short.defs.say}.1) is {\em not} a deformation of short arcs and $(t^m, t^m, t^2)$ is a typical member of a new connected component of $\sharc\bigl(0\in (xy=z^m)\bigr)$. By contrast, adding one more variable kills this component. For example, starting with the arc $(t^m, t^m, t^2, 0)$ on $(xy=z^m)\subset \c^4$, we have deformations of short arcs $$ (t,s)\mapsto \bigl(t^i(t+s)^{m-i}, t^{m-i}(t+s)^{i}, t(t+s), ts\bigr).
\eqno{(\ref{short.defs.say}.2)} $$ \end{say} This example turns out to be typical and it is quite easy to modify the deformations in the proof of Theorem \ref{cA.sharc.thm} to yield the following. \begin{thm}\label{cA.sharc.sharc.thm} Let $X=\bigl(xy=f(z_1,\dots, z_n)\bigr)\subset \c^{n+2}$ be a $cA$-type singularity. Assume that $\dim X\geq 3$ and $m:=\mult_0 f\geq 2$. Then $\sharc(0\in X)$ has $(m-1)$ irreducible components as in (\ref{cA.sharc.thm}.2). \end{thm} It is not always clear if a deformation $\Phi(t,s)$ is short or not. There is, however, one case when this is easy, at least over a smaller disc $\dd_s(\epsilon)\subset \dd_s$. \begin{lem} \label{easy.short.lem} Let $\Phi(t,s)=\bigl(\Phi_1(t,s),\dots, \Phi_r(t,s)\bigr)$ be a deformation of arcs on $X\subset \c^r$. Assume that $\Phi(t,0)$ is short and $\Phi_i(t,s)$ is independent of $s$ and not identically zero for some $i$. Then $\Phi(t,s_0):\bdd_t\to X$ is short for $|s_0|\ll 1$. \end{lem} Proof. By assumption $\Phi(*,s_0)^{-1}(p)\subset \Phi_i(*,s_0)^{-1}(p)=\Phi_i(*,0)^{-1}(p)$ for every $s_0\in \dd_s$, thus there is a finite subset $Z=\Phi_i(*,0)^{-1}(p)\subset \bdd_t$ such that $$ \Phi^{-1}(p)\subset Z\times \dd_s\qtq{and} \Phi^{-1}(p)\cap (s=0)=\{(0,0)\}. $$ Since $\Phi^{-1}(p)$ is closed, this implies that $$ \Phi^{-1}(p)\cap \bigl(\bdd_t\times \dd_s(\epsilon)\bigr) \subset \{0\}\times \dd_s(\epsilon)\qtq{for $0<\epsilon\ll 1$.}\qed $$ \begin{say}[Proof of Theorem \ref{cA.sharc.sharc.thm}] At the very beginning of the proof of Theorem \ref{cA.sharc.thm}, after a linear change of coordinates we may assume that $z_1^m$ appears in $f$ with nonzero coefficient and $\phi_2$ is not identically zero. Then the construction gives a deformation of short arcs by Lemma \ref{easy.short.lem}. The deformations at the end of the proof were written to yield short arcs. \qed \end{say} \section{A revised version of the Nash problem}\label{sec.revised} As we saw, the Nash map is not surjective in dimensions $\geq 3$. In this section we develop a revised version of the notion of essential divisors. This leads to a smaller target for the Nash map, so surjectivity should become more likely. Our proposed variant of the Nash problem at least accounts for all known counterexamples. We start with a reformulation of the original definition of essential divisors. \begin{say} \label{reform.ess.say} Let $Y$ be a complex variety and $Z\subset Y$ a closed subset. Let $\farc(Z\subset Y)$ denote the scheme of formal arcs $\phi:\spec \c[[t]]\to Y$ such that $\phi(0)\in Z$. An easy but key observation is the following. \ref{reform.ess.say}.1. If $Y$ is smooth, then the irreducible components of $\farc(Z\subset Y)$ are in a natural one--to--one correspondence with the irreducible components of $Z$. We say that a divisor $E$ over $Y$ is {\it essential} for $Z\subset Y$ if $E$ is obtained by blowing up one of the irreducible components of $Z$. (For each irreducible component $Z_i\subset Z$, the blow-up $B_ZY$ contains a unique divisor that dominates $Z_i$.) The definition of essential divisors can now be reformulated as follows. \ref{reform.ess.say}.2. Let $(x\in X)$ be a singularity. A divisor $E$ is {\it essential} for $(x\in X)$ if $E$ is essential for $\bigl(\supp\pi^{-1}(x)\subset Y\bigr)$ for every resolution $\pi:Y\to X$. \end{say} In order to refine the Nash problem, we need to understand singular spaces for which the analog of (\ref{reform.ess.say}.1) still holds.
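For orientation, here is the simplest nontrivial instance of (\ref{reform.ess.say}.1)--(\ref{reform.ess.say}.2) in the smooth case; this toy example is included only as an illustration and is not used in the sequel. Take
$$
Y:=\c^3 \qtq{and} Z:=C_1\cup C_2,
$$
where $C_1, C_2\subset \c^3$ are two distinct irreducible curves. Then $\farc(Z\subset Y)$ has exactly two irreducible components, whose general arcs have base point a general point of $C_1$, resp.\ of $C_2$, and the divisors that are essential for $Z\subset Y$ are the exceptional divisors of the blow-ups $B_{C_1}\c^3$ and $B_{C_2}\c^3$.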
\begin{defn}[Sideways deformations] Let $X$ be a variety (or an analytic space) and $\phi:\spec \c[[t]]\to X$ a formal arc such that $\phi(0)\in \sing X$. A {\it sideways deformation} of $\phi$ is a morphism $\Phi:\spec \c[[t,s]]\to X$ such that $$ (t,s)^m\subset \Phi^* I_{\sing X} \qtq{for some $m\geq 1$} $$ where $I_{\sing X}\subset \o_X$ is the ideal sheaf defining $\sing X$. If $\Phi$ comes from a convergent arc $\Phi^{\rm an}:\dd_t\times \dd_s\to X$ then this is equivalent to assuming that for every $0\neq |s_0|\ll 1$ the nearby arc $\Phi^{\rm an}(t,s_0)$ maps $\dd_t(\epsilon)$ to $X\setminus \sing X$ for some $0<\epsilon\leq 1$. We say that $(x\in X)$ is {\it arc-wise Nash-trivial} if every general arc in $\farc(x\in X)$ has a sideways deformation. (By \cite{2012arXiv1201.6310F}, this implies that every arc in $\farc(x\in X)$ has a sideways deformation.) \end{defn} \begin{comm} If $(x\in X)$ is an isolated singularity with a small resolution $\pi:X'\to X$ then every arc has a sideways deformation. We can lift the arc to $X'$ and there move it away from the $\pi$-exceptional set. This is not very interesting and the notion of essential divisors captures this phenomenon. To exclude these cases, we are mainly interested in arc-wise Nash-trivial singularities that do not have small modifications. If arc-wise Nash-trivial singularities are log terminal then assuming $\q$-factoriality captures this restriction, but in general one needs to be careful of the difference between $\q$-factoriality and having no small modifications. Also, in the few examples we know of, general arcs of every irreducible component of $\farc(x\in X)$ have sideways deformations. If there are singularities where sideways deformations exist only for some of the irreducible components, the following outline needs to be suitably modified. \end{comm} The main observation is that, for the purposes of the Nash problem, $\q$-factorial arc-wise Nash-trivial singularities should be considered as good as smooth points. The first evidence is the following straightforward analog of (\ref{reform.ess.say}.1). \begin{lem} Let $Y$ be a complex variety with isolated, arc-wise Nash-trivial singularities. Let $Z\subset Y$ be a closed subset that is the support of an effective Cartier divisor. Then the irreducible components of $\farc(Z\subset Y)$ are in a natural one--to--one correspondence with the irreducible components of $Z$. \qed \end{lem} If $Z$ has lower dimensional irreducible components, the situation seems more complicated, but, at least in dimension 3, the following seems to be the right generalization of (\ref{reform.ess.say}.1). \begin{conj} \label{easy.ess.conj} Let $Y$ be a 3--dimensional complex variety with isolated, $\q$-factorial, arc-wise Nash-trivial singularities. Let $Z\subset Y$ be a closed subset. Then the irreducible components of $\farc(Z\subset Y)$ are in a natural one--to--one correspondence with the union of the following two sets. \begin{enumerate} \item Irreducible components of $Z$. \item Irreducible components of $\farc(p\in Y)$, where $p\in Y$ is any singular point such that $p\in Z$ and $\dim_pZ\leq 1$. \end{enumerate} \end{conj} \begin{defn} Assumptions as in (\ref{easy.ess.conj}). A divisor over $Y$ is {\it essential} for $Z\subset Y$ if it corresponds to one of the irreducible components of $\farc(Z\subset Y)$, as enumerated in (\ref{easy.ess.conj}.1--2). \end{defn} \begin{defn} Let $(x\in X)$ be a 3--dimensional, normal singularity.
A divisor $E$ over $X$ is called {\it very essential} for $(x\in X)$ if $E$ is essential for $\bigl(\supp\pi^{-1}(x)\subset Y\bigr)$ for every proper birational morphism $\pi:Y\to X$ where $Y$ has only isolated, $\q$-factorial, arc-wise Nash-trivial singularities. (As in (\ref{first.ess.defn}), it is better to allow $Y$ to be an algebraic space.) \end{defn} It is easy to see that the Nash map is an injection from the irreducible components of $\farc(x\in X)$ into the set of very essential divisors. One can hope that there are no other obstructions. \begin{prob}[Revised Nash problem]\label{reform.nash} Is the Nash map surjective onto the set of very essential divisors for normal 3-fold singularities? \end{prob} As a first step, one should consider the following. \begin{prob} In dimension 3, classify all $\q$-factorial, arc-wise Nash-trivial singularities. \end{prob} Hopefully they are all terminal and a complete enumeration is possible. The papers \cite{MR2145316, MR2145317} contain several results about partial resolutions of terminal singularities. We treat two easy cases in (\ref{sideways.cA}--\ref{sideways.quot}). A positive solution of Question \ref{cDV.ques} would imply that all isolated, 3-dimensional cDV singularities are arc-wise Nash-trivial. \begin{thm} \label{sideways.cA} Let $(0\in X)$ be a $cA$-type singularity such that $\dim \sing X\leq \dim X-3$. Then all general arcs as in (\ref{cA.sharc.thm}.2) have sideways deformations. \end{thm} Proof. We use the notation of the proof of Theorem \ref{cA.sharc.thm}. Since $\mult f\bigl(\phi_1(t), \dots, \phi_n(t)\bigr)=m$, we see that $\mult \phi_j(t)=1$ for at least one index $j$. We may assume that $j=1$ and $\phi_1(t)=t$. Thus, after the coordinate change $z_i\mapsto z_i-\phi_i(z_1)$ for $i=2,\dots,n$ and an additional general linear coordinate change among the $z_2,\dots, z_n$ we may assume that \begin{enumerate} \item $\phi_1(t)=t$, \item $\phi_j(t)\equiv 0$ for $j>1$, \item $\bigl(xy=g(z_1, z_2)\bigr)\subset \c^4$ has an isolated singularity at the origin and $g(z_1,z_2)$ is divisible neither by $z_1$ nor by $z_2$ where $g(z_1, z_2)=f(z_1, z_2, 0,\dots, 0)$. \end{enumerate} By Lemma \ref{newton.lem} there is an $r\geq 1$ such that $$ g(t, s^r)=u(t,s)\prod_{i=1}^m \bigl(t-\sigma_i(s)\bigr). $$ Since $g(z_1,z_2)$ is not divisible by $z_1$ none of the $\sigma_i$ are identically zero. Since $g(t, s)$ has an isolated singularity at the origin and is not divisible by $s$, $g(t, s^r)$ has an isolated singularity at the origin. Thus all the $\sigma_i(s)$ are distinct. As before, for $j=1,2$ write $\psi_j(t)=t^{a_j}v_j(t)$ where $v_j(0)\neq 0$. Note that $a_1+a_2=m$ and $u(t,0)=v_1(t)v_2(t)$. Divide $\{1,\dots, m\}$ into two disjoint subsets $A_1, A_2$ such that $|A_j|=a_j$. Finally set $$ \Psi_1(t,s)=v_1(t)\cdot \prod_{i\in A_1} \bigl(t-\sigma_i(s)\bigr) \qtq{and} \Psi_2(t,s)=\frac{u(t,s)}{v_1(t)} \cdot \prod_{i\in A_2} \bigl(t-\sigma_i(s)\bigr). $$ Then $$ \bigl(\Psi_1(t,s), \Psi_2(t,s), t, s^r, 0,\dots, 0\bigr) $$ is a sideways deformation of $\bigl(\psi_1(t), \psi_2(t),t, 0,\dots, 0\bigr)$. \qed The opposite happens for quotient singularities. \begin{prop} \label{sideways.quot} Let $(0\in X):=\c^n/G$ be an isolated quotient singularity. Then arcs with a sideways deformation are nowhere dense in $\farc(0\in X)$. \end{prop} Proof. Let $\Phi:\spec \c[[t,s]]\to X$ be a sideways deformation of an arc $\phi(t)=\Phi(t,0)$. By the purity of branch loci, $\Phi$ lifts to a morphism $\tilde \Phi:\spec \c[[t,s]]\to \c^n$.
In particular, $\phi:\spec \c[[t]]\to X$ lifts to $\tilde \phi:\spec \c[[t]]\to \c^n$. By \cite{k-short}, such arcs constitute a connected component of $\sharc(0\in X)$. We claim, however, that these arcs do not cover a whole irreducible component of $\farc(0\in X)$. It is enough to show the latter on some intermediate cover of $X$. The simplest is to use $(0\in Y):=\c^n/C$ where $C\subset G$ is any nontrivial cyclic subgroup. Set $r:=|C|$, fix a generator $g\in C$ and diagonalize its action as $$ (x_1,\dots, x_n)\mapsto \bigl(\epsilon^{a_1}x_1,\dots, \epsilon^{a_n}x_n\bigr) $$ where $\epsilon $ is a primitive $r$th root of unity. Thus $Y$ is the toric variety corresponding to the free abelian group $$ N=\z^n+\z \bigl(a_1/r, \dots, a_n/r\bigr) \qtq{and the cone} \Delta=\bigl(\q_{\geq 0}\bigr)^n. $$ The Nash conjecture is true for toric singularities and by \cite[Sec.3]{MR2030097} the essential divisors are all toric and correspond to interior vectors of $N\cap \Delta$ that cannot be written as the sum of an interior vector of $N\cap \Delta$ and of a nonzero vector of $N\cap \Delta$. In our case, all such vectors are of the form $$ \bigl(\overline{ca_1}/r, \dots, \overline{ca_n}/r\bigr) \qtq{for} c=1,\dots, r-1 $$ where $\overline{ca_i}$ denotes the remainder of $ca_i$ mod $r$. Arcs that lift to $\c^n$ correspond to the vector $(1,\dots, 1)$, which is not minimal. In fact $$ (1,\dots, 1)= \bigl(\overline{a_1}/r, \dots, \overline{a_n}/r\bigr) + \bigl(\overline{(r-1)a_1}/r, \dots, \overline{(r-1)a_n}/r\bigr). \qed $$ \vskip1cm \noindent Princeton University, Princeton NJ 08544-1000 {\begin{verbatim}[email protected]\end{verbatim}} {\begin{verbatim}[email protected]\end{verbatim}} \end{document}
\begin{document} \title{Semi-global invariants of piecewise smooth Lagrangian fibrations} \begin{abstract} We study certain types of piecewise smooth Lagrangian fibrations of smooth symplectic manifolds, which we call \emph{stitched Lagrangian fibrations}. We extend the classical theory of action-angle coordinates to these fibrations by encoding the information on the non-smoothness into certain invariants consisting, roughly, of a sequence of closed $1$-forms on a torus. The main motivation for this work is given by the piecewise smooth Lagrangian fibrations previously constructed by the authors \cite{CB-M-torino}, which topologically coincide with the local models used by Gross in Topological Mirror Symmetry \cite{TMS}. \end{abstract} \section{Introduction} Lagrangian fibrations arise naturally from integrable systems. It is a standard fact of Hamiltonian mechanics that such fibrations are locally given by maps of the type: \[ f=(f_1,\ldots, f_n), \] where the function components of $f$ are Poisson commuting functions on a symplectic manifold and such that the differentials $df_1,\ldots ,df_n$ are pointwise linearly independent almost everywhere. It is customary to assume $f$ to be $C^\infty$ differentiable (smooth). Under this regularity assumption, a classical theorem of Arnold-Liouville says that a smooth proper Lagrangian submersion with connected fibres has locally the structure of a trivial Lagrangian $T^n$-bundle. In particular, all proper Lagrangian submersions are locally modelled on $U\times T^n$, where $U\subseteq\numberset{R}^n$ is a contractible open set and $U\times T^n$ has the standard symplectic form induced from $\numberset{R}^{2n}$. Standard coordinates with values in $U\times T^n$ are known as action-angle coordinates. Since these are defined on a fibred neighbourhood, action-angle coordinates are \emph{semi-global} canonical coordinates. Thus proper Lagrangian submersions have no semi-global symplectic invariants. In this article we investigate the semi-global symplectic topology of proper Lagrangian fibrations given by piecewise smooth maps. In \cite{CB-M-torino}\S 6 we introduced the notion of \textit{stitched Lagrangian fibration}. These are continuous proper $S^1$ invariant fibrations of smooth symplectic manifolds $X$ which fail to be smooth only along the zero level set $Z=\mu^{-1}(0)$ of the moment map of the $S^1$ action and whose fibres are all smooth Lagrangian $n$-tori. Essentially, these fibrations consist of two honest smooth pieces $X^+=\{\mu\geq 0\}$ and $X^{-}=\{\mu\leq 0\}$, stitched\footnote{We have chosen to use `stitching' rather than `gluing' since the resulting map is in general non smooth; the term `gluing' usually has a smoothness meaning attached to it.} together along $Z$, which we call the \textit{seam}. These fibrations, roughly speaking, can be expressed locally as: \[ f=(\mu, f_2^\pm,\ldots, f_n^\pm), \] where $f_j^+$ and $f_j^-$ are smooth functions defined on $X^+$ and $X^-$, respectively, whose differentials do not necessarily coincide along $Z$. Fibrations of this type are implicit in the examples proposed earlier by the authors \cite{CB-M-torino}\S 5 and may also be implicit in those in \cite{Ruan}. In this paper, we develop a theory of action-angle coordinates for this class of piecewise smooth fibrations. Contrary to what happens in the smooth case, we found that these fibrations do give rise to semi-global symplectic invariants. 
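As a minimal illustration of the phenomenon (a toy model, written down here only for orientation), let $(b_1,b_2)$ and $(\theta_1,\theta_2)$ denote coordinates on the two factors of
\[
X=T^\ast T^2=\numberset{R}^2\times T^2, \qquad \omega=db_1\wedge d\theta_1+db_2\wedge d\theta_2,
\]
let the $S^1$ action be translation in $\theta_1$, with moment map $\mu=b_1$ (up to sign), and set
\[
f(b_1,b_2,\theta_1,\theta_2)=
\begin{cases}
(b_1,\, b_2) & \text{on } X^+=\{ b_1\geq 0\},\\
(b_1,\, b_2+b_1) & \text{on } X^-=\{ b_1\leq 0\}.
\end{cases}
\]
Both halves are smooth proper Lagrangian torus fibrations and $f$ is continuous and $S^1$ invariant, but the partial derivative of the second component with respect to $b_1$ jumps from $0$ to $1$ across the seam $Z=\{b_1=0\}$, so $f$ is only piecewise smooth.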
To the authors' knowledge, the kind of non-smoothness we investigate here does not seem to be of relevance to Hamiltonian mechanics. Nevertheless it is an important issue in symplectic topology and mirror symmetry. Over the past ten years, Lagrangian torus fibrations, in particular those which are \emph{special Lagrangian}, have been discovered to play a fundamental role in mirror symmetry \cite{SYZ}. One should expect mirror pairs of Calabi-Yau manifolds to be fibred by Lagrangian tori and the mirror relation to be expressed in terms of a Legendre transform between the corresponding affine bases \cite{Hitchin}, \cite{G-Siebert}, \cite{Kontsevich-Soibelman}. This approach to mirror symmetry has some intricacies. For instance, there are examples of (non-proper) special Lagrangian fibrations which are not given by smooth maps. Actually, one should expect a generic special Lagrangian fibration to be piecewise smooth \cite{Joyce-SYZ}. Non-smoothness may also arise even in the purely Lagrangian case. In fact, there are examples of Lagrangian torus fibrations of Calabi-Yau manifolds which are piecewise smooth \cite{Ruan}. This lack of regularity has two important consequences. In the first place, the discriminant locus of the fibration --i.e. the set of points in the base corresponding to singular fibres-- may have codimension less than 2. Secondly, the base of the fibration may no longer carry the structure of an integral affine manifold away from the discriminant locus. In fact, the affine structure may break off not only along the discriminant --as normally occurs in the smooth case-- but also along a larger set containing the discriminant. Under these circumstances, it may become problematic to interpret the SYZ duality as a Legendre transform between affine manifolds. One should therefore understand the symplectic topology of piecewise smooth Lagrangian fibrations. Some of the piecewise smooth examples here actually resemble the singular behaviour expected to appear in generic special Lagrangian fibrations. What is more important for our purposes, however, is the fact that our Lagrangian models coincide topologically with the non-Lagrangian models used by Gross \cite{TMS}; the discriminant locus in our case may jump to codimension 1 in some regions but the total spaces are the same. In some cases, the discriminant has the shape of a planar amoeba $\Delta$ (see Figure~\ref{fig: amoeba}), contained in the hyperplane $\Gamma=\{\mu =0\}$, and the fibration fails to be smooth over $\Gamma$. Away from $\Delta$ these fibrations are stitched Lagrangian torus fibrations. In particular, the affine structure on the base breaks apart along $\Gamma\setminus\Delta$. In this paper we provide some useful techniques to understand how this degeneration of the affine structure occurs. The material of this paper is organised as follows. In \S \ref{section aa} we start by reviewing the classical theory of action-angle coordinates for smooth fibrations. In \S \ref{sec:def-ex} we recall the construction of the piecewise smooth fibrations of \cite{CB-M-torino}; these are explicitly given examples, some of them with codimension 1 discriminant locus. Then we revise the definition of stitched fibration, introduced in \cite{CB-M-torino}. We formalise the idea of action-angle coordinates for stitched fibrations; this allows us to define the first order invariant, $\ell_1$, of a stitched fibration.
This invariant measures the discrepancy along $Z$ between the distributions spanned by the Hamiltonian vector fields $\eta_2^+,\ldots ,\eta^+_n$ and $\eta_2^-,\ldots ,\eta^-_n$ corresponding to $f_2^+,\ldots ,f_n^+$ and $f_2^-,\ldots ,f_n^-$, respectively. The seam $Z$ is an $S^1$-bundle $p:Z\rightarrow \bar Z:=Z\slash S^1$ such that: \[ \xymatrix{ Z \ar[dr]_{f|_Z} \ar[rr]^{p} & & \bar{Z} \ar[dl]^{\bar{f}} \\ & \Gamma } \] where $\bar f$ is the reduced fibration over the wall $\Gamma=\{\mu=0\}$, with tangent $(n-1)$-plane distribution: \[ \mathfrak{L}=\ker \bar f_\ast\subset T\bar Z. \] Let $\mathscr L_{\bar Z}$ be the set of fibrewise closed sections of $\mathfrak{L}^\ast$, i.e. elements in $\mathscr L_{\bar Z}$ can be viewed as closed 1-forms on the fibres of $\bar f$. The first order invariant of $f$ is defined as follows. There are smooth $S^1$-invariant functions $a_2,\ldots ,a_n$ on $Z$ such that $a_j\eta_1=\eta_j^+-\eta_j^-$. In particular, this implies that $\eta_j^+$ and $\eta_j^-$ are mapped under $p_\ast$ to the same vector field $\bar\eta_j$ on $\bar Z$. The first order invariant $\ell_1$ is defined to be the section of $\mathfrak L^\ast$ such that $\ell_1(\bar\eta_j)=a_j$. It turns out that $\ell_1\in\mathscr L_{\bar Z}$. In \S \ref{sec:ho-inv} we investigate higher order invariants. These are sequences $\{ \ell_k\}_{k\in\numberset{N}}$, with each $\ell_k\in\mathscr L_{\bar Z}$. Given a stitched Lagrangian fibration $f$, we define $\inv (f)$ to consist of the data of $(\bar Z, \bar f)$, suitably normalized, together with the sequence $\ell=\{\ell_k\}_{k\in\numberset{N}}$. The main result of this paper is proved in \S \ref{sec:normal} (cf. Theorem \ref{thm: grosso} and Theorem \ref{broken:constr2}) where we give a classification of stitched fibrations up to fibre-preserving symplectomorphisms. Roughly, this can be stated as follows: \noindent\textbf{Theorem.} There are stitched Lagrangian fibrations $f$ having any specified set of data $\inv (f)$. Moreover, given stitched Lagrangian fibrations $f$ and $f'$ with invariants $\inv(f)$ and $\inv(f')$, respectively, there is a smooth symplectomorphism $\Phi$, defined on a neighbourhood of $Z$, and a smooth diffeomorphism $\phi$ preserving $\Gamma\subset B$ and a commutative diagram: \[ \begin{CD} X @>\Phi>> X'\\ @VfVV @Vf'VV\\ B @>\phi>> B' \end{CD} \] if and only if $\inv(f)=\inv(f')$. This result extends Arnold-Liouville's theorem to this piecewise smooth setting. In \S \ref{sec. stitch w/mono} we study stitched fibrations over non-simply connected bases. We show that one can read the monodromy of a stitched fibration as a jump of the cohomology class $[\ell_1(b)]$ as $b\in\Gamma$ traverses a component of the discriminant locus. In the last section, we propose the following: \noindent\textbf{Conjecture.} Let $Y\subseteq (\numberset{C}^\ast)^{n-1}$ be a smooth algebraic hypersurface and let $\Log :(\numberset{C}^\ast)^{n-1}\rightarrow\numberset{R}^{n-1}$ be the map defined by: \[ \Log (z_2,\ldots ,z_n)=(\log |z_2|,\ldots ,\log |z_n|). \] Then there is a piecewise smooth Lagrangian $n$-torus fibration with discriminant locus being the amoeba $\Delta=\Log (Y)$ inside $\{0\}\times\numberset{R}^{n-1}\subset\numberset{R}^n$. Away from $\Delta$ these fibrations are stitched Lagrangian fibrations. To support this conjecture we propose a construction. The results of this article allow us to have good control on the regularity of a large class of proper Lagrangian fibrations.
Using simple techniques, one may deform the invariants of a given stitched fibration and produce proper Lagrangian fibrations with $S^1$ symmetry which are smooth on prescribed regions. This can be done, for instance, by multiplying a given sequence of invariants by a smooth function on the base $B$ vanishing on a prescribed region. In joint work in progress \cite{CB-M}, the authors use these and other techniques to give a construction of Lagrangian 3-torus fibrations of compact symplectic 6-manifolds starting from the information encoded in suitable integral affine manifolds, such as those arising from toric degenerations \cite{G-Siebert2003}. Such affine structures are expected to appear as Gromov-Hausdorff limits of degenerating families of Calabi-Yau manifolds (in the sense of \cite{G-Wilson2}, \cite{Kontsevich-Soibelman}). \section{Action-angle coordinates}\label{section aa} We review the classical theory of action-angle coordinates for $C^\infty$ Lagrangian fibrations. For the details we refer the reader to \cite{Arnold}. Assume we are given a $2n$-dimensional symplectic manifold $X$ with symplectic structure $\omega$, a smooth $n$-dimensional manifold $B$ and a proper submersion $f:X\rightarrow B$ whose fibres are connected Lagrangian submanifolds. Let $F_b$ be the fibre of $f$ over $b\in B$. We can define an action of $T_b^\ast B$ on $F_b$ as follows. To every $\alpha\in T^\ast_b B$ we can associate a vector field $v_\alpha$ on $F_b$ determined by \begin{equation}\label{eq contr.} \iota_{v_\alpha}\omega=f^\ast\alpha . \end{equation} Let $\phi_\alpha^t$ be the flow of $v_\alpha$ with time $t\in\numberset{R}$. Define $\theta_\alpha$ as $\theta_\alpha(p)=\phi^1_\alpha (p)$ where $p\in F_b$. One can check that $\theta_\alpha$ is well defined and that it induces an action $(\alpha, p)\mapsto \theta_\alpha (p)$. Furthermore, the action is transitive. Then, $\Lambda_b$ defined as \[ \Lambda_{b} = \{ \lambda \in T^{\ast}_{b}B \ | \ \theta_{\lambda}(p) = p, \ \text{for all} \ p \in F_b \} \] is a closed discrete subgroup of $T^\ast_bB$, i.e. a lattice. From the properness of $f$ it follows that $\Lambda_b$ is maximal (in particular isomorphic to $\numberset{Z}^{n}$) and that $F_b$ is diffeomorphic to $T^{\ast}_bB/ \Lambda_b$ and therefore $F_b$ is an $n$-torus. Let $\Lambda = \cup_{b \in B} \Lambda_b$. One can compute $\Lambda$ as follows. Given a point $b_0 \in B$ and a contractible neighbourhood $U$ of $b_0$, for every $b \in U$, $H_1(F_b, \numberset{Z})$ is naturally identified with $H_1(F_{b_0}, \numberset{Z})$. Choose a basis $\gamma_1, \ldots, \gamma_n$ of $H_1(F_{b_0},\numberset{Z})$. Given a vector field $v$ on $U$, denote by $\tilde{v}$ a lift of $v$ to $f^{-1}(U)$. We can define the following $1$-forms $\lambda_1, \ldots, \lambda_n$ on $U$: \begin{equation} \label{per:def} \lambda_j(v) = - \int_{\gamma_j} \iota_{\tilde{v}} \omega. \end{equation} It is well known that the 1-forms $\lambda_j$ are closed and that they generate $\Lambda$. If $\sigma :B\rightarrow X$ is a smooth section of $f$ we can define the map \[ \Theta: T^\ast B\slash\Lambda\rightarrow X \] by $\Theta (b,\alpha )=\theta_\alpha(\sigma (b))$. This map is a diffeomorphism and it is a symplectomorphism if $\sigma (B)\subseteq X$ is Lagrangian. A choice of functions $a_j$ such that $da_j=\lambda_j$ defines coordinates $a=(a_1,\ldots, a_n)$ on $U$ called \textit{action coordinates}.
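As a simple illustration (the standard harmonic oscillator, spelled out here only for concreteness), take
\[
X=\numberset{R}^2\setminus\{0\},\qquad \omega=dx\wedge dy,\qquad f=\tfrac{1}{2}(x^2+y^2),\qquad B=(0,+\infty).
\]
Writing $x+iy=\sqrt{2b}\, e^{i\theta}$, so that $\omega=db\wedge d\theta$, the fibre $F_b$ is the circle $\{f=b\}$ and, for $v=\partial_b$, one finds $\iota_{\tilde v}\omega=d\theta$; hence, by (\ref{per:def}), $\lambda_1=\mp 2\pi\, db$, the sign depending on the orientation of $\gamma_1$. Thus, up to sign and an additive constant, $a=2\pi b=\pi (x^2+y^2)$ is an action coordinate, namely the area enclosed by the fibre, and the corresponding angle coordinate is (up to the same choices) $\theta/2\pi$, which has period $1$.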
In particular, a covering $\{U_i\}$ of $B$ by small enough contractible open sets and a choice of action coordinates $a_i$ on each $U_i$ defines an integral affine structure on $B$, i.e. an atlas whose change of coordinates maps are transformations in $\numberset{R}^n\rtimes\Gl (n,\numberset{Z})$. A less invariant approach, useful for explicit computations, can be described as follows. Let $(b_1,\ldots ,b_n)$ be local coordinates on $U\subseteq B$ and let $f_j=b_j\circ f$. Then $f_1,\ldots ,f_n$ define an integrable Hamiltonian system. Let $\Phi^t_{\eta_j}$ be the flow of the Hamiltonian vector field $\eta_j$ of $f_j$. Let $\sigma$ be a Lagrangian section of $f$ over $U$. Then the map $\Theta$ above can be expressed as: \[ \Theta: (b,t_1db_1+\cdots +t_ndb_n)\mapsto \Phi^{t_1}_{\eta_1}\circ\cdots\circ\Phi^{t_n}_{\eta_n}(\sigma (b)). \] One may verify that \[ \Lambda_b=\{ (b,t_1db_1+\cdots +t_ndb_n)\in T^\ast_bU\mid \Phi^{t_1}_{\eta_1}\circ\cdots\circ\Phi^{t_n}_{\eta_n}(\sigma (b))=\sigma (b)\}. \] When $(b_1,\ldots , b_n)$ are action coordinates, $(b_1,\ldots , b_n, t_1,\ldots ,t_n)$ are \textit{action-angle coordinates}. These coordinates always exist on a fibred neighbourhood $f^{-1}(U)$ of a fibre $F_b$ with $U\subseteq\numberset{R}^n$ a small neighbourhood of $b$; thus they can be regarded as \emph{semi-global} canonical coordinates. In particular, we have the following classical result: \begin{thm}[Arnold, Liouville] A proper Lagrangian submersion with connected fibres and a Lagrangian section has no semi-global symplectic invariants. \end{thm} The \emph{global} existence of action-angle coordinates is obstructed. For the details concerning this issue we refer the reader to Duistermaat \cite{Dui}. In the next section we consider a larger class of Lagrangian submersions, which includes some Lagrangian fibrations that fail to be given by $C^\infty$ maps. \section{Stitched Lagrangian fibrations: definitions and examples} \label{sec:def-ex} \begin{defi}\label{defi stitched} Let $(X, \omega)$ be a smooth $2n$-dimensional symplectic manifold. Suppose there is a free Hamiltonian $S^1$ action on $X$ with moment map $\mu: X \rightarrow \numberset{R}$. Let $X^+ = \{ \mu \geq 0 \}$ and $X^- = \{ \mu \leq 0 \}$. Given a smooth $(n-1)$-dimensional manifold $M$, a map $f: X \rightarrow \numberset{R} \times M$ is said to be a \textbf{stitched Lagrangian fibration} if there is a continuous $S^1$ invariant map $G: X \rightarrow M$, such that the following holds: \begin{itemize} \item[\textit{(i)}] Let $G^{\pm} = G|_{X^{\pm}}$. Then $G^+$ and $G^-$ are restrictions of $C^\infty$ maps on $X$; \item[\textit{(ii)}] $f$ can be written as \[ f = (\mu, G) \] and $f$ restricted to $X^{\pm}$ is a proper submersion with connected Lagrangian fibres. We denote \[ f^{\pm}=f|_{X^{\pm}}.\] \end{itemize} We call $Z = \mu^{-1}(0)$ the \textbf{seam}. \end{defi} We warn the reader that throughout the paper the superscript $\pm$ appearing in a sentence means that the sentence is true if read separately with the $+$ superscript and with the $-$ superscript. Notice that a stitched Lagrangian fibration may be non-smooth. In general it will be only piecewise $C^\infty$; however, all its fibres are smooth Lagrangian tori. Observe also that, although $f^+$ and $f^-$ are restrictions of $C^\infty$ maps, they are not a priori required to extend to smooth Lagrangian fibrations beyond $X^+$ and $X^-$, respectively. 
Later we show, however, that for any stitched fibration, $f^+$ and $f^-$ are indeed restrictions of some locally defined smooth Lagrangian fibrations (cf. \S \ref{sec:normal}). Let $\pi_{\numberset{R}}$ be the projection of $\numberset{R} \times M$ onto $\numberset{R}$. Given a point $m \in M$ we study the geometry of a stitched Lagrangian fibration $f$ in a neighbourhood of the fibre over $(0,m)$. For this purpose it is convenient to allow a more general set of coordinates on $\numberset{R} \times M$ than just the smooth ones. \begin{defi}\label{defi:admissible} Let $B$ be a neighbourhood of $(0,m) \in \numberset{R} \times M$, let $B^{+} = B \cap (\numberset{R}_{\geq 0} \times M)$ and $B^{-} = B \cap (\numberset{R}_{\leq 0} \times M)$. A continuous coordinate chart $(B, \phi)$ around $(0,m)$ is said to be \textbf{admissible} if the components of $\phi=(\phi_1,\ldots ,\phi_n)$ satisfy the following properties: \begin{itemize} \item[\textit{(i)}] $\phi_1 = \pi_{\numberset{R}}$; \item[\textit{(ii)}] for $j = 2, \ldots, n$ the restrictions of $\phi_j$ to $B^+$ and $B^-$ are locally restrictions of smooth functions on $B$. \end{itemize} \end{defi} \begin{lem} \label{discrep} Let $f: X \rightarrow \numberset{R} \times M$ be a stitched Lagrangian fibration and let $(B, \phi)$ be an admissible coordinate chart around $(0,m) \in \numberset{R} \times M$. For $j = 2, \ldots , n$, the function $G_{j}^{\pm} = (\phi_j \circ f)|_{f^{-1}(B^{\pm})}$ is the restriction of a $C^\infty$ function on $X$ to $X^\pm$. Let $\eta_1$ and $\eta_{j}^{\pm}$ be the Hamiltonian vector fields of $\mu$ and $G^{\pm}_{j}$ respectively. Then there are $S^1$ invariant functions $a_j$, $j=2, \ldots, n$ on $Z \cap f^{-1}(B)$ such that \begin{equation} (\eta^{+}_{j} - \eta^{-}_{j})|_{Z \cap f^{-1}(B)} = a_j \, \eta_1|_{Z \cap f^{-1}(B)}. \label{discrep eq} \end{equation} \end{lem} \begin{proof} Let $\bar{Z} = Z / S^1$, with projection $p: Z \rightarrow \bar{Z}$ and let $\omega_r$ be the Marsden-Weinstein reduced symplectic form on $\bar{Z}$. Given a vector field $v$ on $\bar{Z}$, let $\tilde{v}$ be a lift of $v$ to $Z$. Then we have \begin{eqnarray*} \omega_r(p_{\ast}(\eta_j^+ - \eta_j^-), v) & = & \omega (\eta_j^+ - \eta_j^-, \tilde{v}) \\ & = & (dG^{+}_{j} - dG^{-}_{j})(\tilde{v}) = 0, \end{eqnarray*} where the last equality comes from the fact that, since $G$ is continuous, $G^{+}_{j}|_{Z \cap f^{-1}(B)} = G^{-}_{j}|_{Z \cap f^{-1}(B)}$. Since $\omega_r$ is non-degenerate on $\bar{Z}$, it follows that $$p_{\ast}(\eta_j^+ - \eta_j^-) = 0. $$ Therefore (\ref{discrep eq}) must hold for some function $a_j$, which must be $S^1$ invariant since the left-hand side of (\ref{discrep eq}) is $S^1$ invariant. \end{proof} Clearly, when $f$ and the coordinate map $\phi$ are smooth, all the $a_j$'s vanish, so equation (\ref{discrep eq}) measures how far $f$ and $\phi$ are from being smooth. We will say more about this in the coming sections. We now recall some of the examples which we already introduced and discussed extensively in \cite{CB-M-torino}. Consider the following $S^{1}$ action on $\numberset{C}^3$: \begin{equation} \label{action} e^{i\theta}(z_1,z_2,z_3)=(e^{i\theta}z_1,e^{-i\theta}z_2,z_3). \end{equation} This action is Hamiltonian with respect to the standard symplectic form $\omega_{\numberset{C}^3}$. Clearly, it is singular along the surface $\Sigma = \{ z_1=z_2=0 \}$. The corresponding moment map is: \begin{equation}\label{eq mu} \mu (z_1,z_2,z_3)=\frac{|z_1|^2-|z_2|^2}{2}. 
\end{equation} The only critical value of $\mu$ is $0$ and $\mathrm{Crit}(\mu )= \Sigma \subset \mu^{-1}(0)$. Let $\gamma: \numberset{C}^2 \rightarrow \numberset{C}$ be the following piecewise smooth map \begin{equation}\label{eq. g} \gamma (z_1,z_2)= \begin{cases} \frac{z_1z_2}{|z_1|},\quad\text{when}\ \mu \geq 0\\ \\ \frac{z_1z_2}{|z_2|},\quad\text{when}\ \mu <0. \end{cases} \end{equation} In two dimensions we have the following: \begin{ex} [\textbf{Stitched focus-focus}] \label{broken focus focus} Consider the map \begin{equation} \label{fib ff} f(z_1,z_2)=\left( \frac{|z_1|^2-|z_2|^2}{2}, \, \log |\gamma (z_1,z_2)+1| \right). \end{equation} It is clearly well defined on $X = \{ (z_1, z_2) \in \numberset{C}^2 \ | \ \gamma(z_1, z_2) + 1 \neq 0 \}$ and it has Lagrangian fibres. We showed in \cite{CB-M-torino} that $f$ has the same topology as a smooth focus-focus fibration. The only singular fibre, $f^{-1}(0)$, is a (once) pinched torus. One can easily see that, when restricted to $X - f^{-1}(0)$, $f$ is a stitched Lagrangian fibration. The seam is $Z = \mu^{-1}(0) - f^{-1}(0)$. Notice that $Z$ has two connected components. Let $\eta_1$ and $\eta_2^{\pm}$ be the Hamiltonian vector fields defined as in Lemma~\ref{discrep}. After some computation one can verify that \[ (\eta_2^+ - \eta_2^-)|_{Z} = a \, \eta_{1}|_Z, \] where \[ a = \re \left( \frac{z_1 z_2}{ |z_1|^2 z_1z_2 - |z_1|^3} \right) \left|_{Z} \right.. \] \end{ex} There is an analogous model in three dimensions: \begin{ex} \label{leg} Consider the map \begin{equation}\label{eq leg fibr} f(z_1, z_2, z_3) = \left( \mu, \, \log |z_3|, \, \log |\gamma (z_1,z_2)-1| \right). \end{equation} The discriminant locus of $f$ is $\Delta=\{ 0 \} \times \numberset{R} \times \{0 \} \subset \numberset{R}^3$. Again, $f$ restricted to $X-f^{-1}(\Delta )$ defines a stitched Lagrangian fibration. \end{ex} \begin{ex} \label{ex amoebous fibr} Consider the map \begin{equation} \label{eq. the fibration} f(z_1, z_2, z_3) = (\mu, \log \frac{1}{\sqrt{2}}| \gamma - z_3|, \log \frac{1}{\sqrt{2}} |\gamma + z_3 - \sqrt{2}|). \end{equation} Let $X$ be the dense open subset of $\numberset{C}^3$ where $f$ is well defined. The general construction discussed in \S 5 of \cite{CB-M-torino} shows that $f$ is a piecewise smooth Lagrangian fibration. It contains singular fibres; in fact, the discriminant locus $\Delta$ of $f$ is depicted in Figure \ref{fig: amoeba}. \begin{figure} \caption{Amoeba of $v_1+v_2+1=0$} \label{fig: amoeba} \end{figure} One easily checks that $f$ restricted to $X-f^{-1}(\Delta)$ is a stitched Lagrangian fibration. The seam is $Z = \mu^{-1}(0) - f^{-1}(\Delta)$; notice that $Z$ has three connected components. Let $\eta_1$ and $\eta_{j}^{\pm}$ be the Hamiltonian vector fields defined as in Lemma~\ref{discrep}. A computation shows that, for $j=2,3$ \[ (\eta_j^{+} - \eta_j^{-})|_{Z} = a_j \, \eta_1 |_{Z}, \] where \[ a_2 = - \frac{ \re \left( (\gamma - z_3) \frac{\ensuremath{\overline{z}}_1 \ensuremath{\overline{z}}_2}{|z_1|^3} \right)}{|\gamma -z_3|^2 } \] and \[ a_3 = - \frac{ \re \left( (\gamma + z_3 - \sqrt{2}) \frac{\ensuremath{\overline{z}}_1 \ensuremath{\overline{z}}_2}{|z_1|^3} \right)}{|\gamma + z_3 - \sqrt{2}|^2 }. \] In \cite{CB-M-torino} we describe the topology of the singular fibres of $f$ and discuss the relevance of this fibration in the context of Gross' topological mirror symmetry construction. 
We also show how this example can be perturbed to obtain other interesting stitched Lagrangian fibrations with discriminant locus of mixed codimension one and two. \end{ex} \section{The first order invariant} Our goal in this paper is to give a semi-global classification of stitched Lagrangian fibrations up to \emph{smooth} fibre-preserving symplectomorphism. For this purpose, in this section we restrict our attention to stitched Lagrangian fibrations $f: X \rightarrow \numberset{R} \times M$, where $M = \numberset{R}^{n-1}$. We assume that $f(X)\subseteq\numberset{R}^n$ is a contractible open neighbourhood $U$ of $0 \in \numberset{R}^n$ and we denote coordinates on $U$ by $b = (b_1, \ldots, b_n)$. Clearly $(X, f)$ is a topologically trivial torus bundle over $U$. Let $U^+ := U \cap \{ b_1 \geq 0 \}$, $U^- := U \cap \{ b_1 \leq 0 \}$ and $\Gamma := U \cap \{ b_1 = 0 \} $. We assume for simplicity that the pair $(U, \Gamma)$ is homeomorphic to the pair $(D^n, D^{n-1})$, where $D^n \subset \numberset{R}^n$ is an $n$-dimensional ball centred at $0$ and $D^{n-1} \subset D^n$ is the intersection of $D^n$ with an $(n-1)$-dimensional subspace. We have that $X^{\pm} = f^{-1}(U^{\pm})$ and $Z = f^{-1}(\Gamma)$. If $f_1, \ldots, f_n$ are the components of $f$, then $f_1 = \mu$ is the moment map of the $S^1$ action. When $j=2, \ldots, n$, we denote $f_j^{\pm} = f_j|_{X^{\pm}}$. As in Lemma~\ref{discrep} we let $\eta_1$ and $\eta_{j}^{\pm}$ denote the Hamiltonian vector fields of $f_1$ and $f_j^{\pm}$ respectively. Let us recall the notation used in the proof of Lemma~\ref{discrep}. Since $Z = \mu^{-1}(0)$, the $S^1$ action on $X$ induces an $S^{1}$-bundle $p: Z \rightarrow \bar{Z}$, where $\bar{Z} = Z / S^1$. There exists $\bar{f}: \bar{Z} \rightarrow \Gamma$ such that the following diagram commutes: \[ \xymatrix{ Z \ar[dr]_f \ar[rr]^{p} & & \bar{Z} \ar[dl]^{\bar{f}} \\ & \Gamma } \] In fact $\bar{f}= (f_2, \ldots, f_n)$, where each component of $\bar f$ is thought of as a function on $\bar{Z}$. For $b \in \Gamma$ denote by $F_b$ the fibre over $b$ and $\bar{F}_b = F_b\slash S^1$; clearly $\bar{F}_b = \bar{f}^{-1}(b)$. Denote by \[ \mathfrak{L} = \ker \bar{f}_{\ast} \] the bundle over $\bar Z$ whose fibre at a point $y \in \bar{F}_b$ is $T_y\bar F_b$. From Lemma~\ref{discrep} it follows that $p_{\ast} \eta_j^{+} = p_{\ast} \eta_j^{-}$, so we can define $\bar{\eta} = (\bar{\eta}_2, \ldots, \bar{\eta}_n)$ to be the frame of $\mathfrak{L}$ where $\bar{\eta}_j = p_{\ast} \eta_j^{\pm}$. We say that a section of $\Lambda^k \mathfrak{L}^{\ast}$ is fibrewise closed (exact) if it is closed (exact) when viewed as a $k$-form on each fibre $\bar{F}_b$. We have the following: \begin{prop}\label{prop. ell_1} Let $(X,f)$ be a stitched Lagrangian fibration. If $\ell_{1}$ is the section of $\mathfrak{L}^{\ast}$ defined by \[ \ell_{1}(\bar{\eta}_j) = a_j, \] where $a_j$ is the $S^1$-invariant function appearing in (\ref{discrep eq}), then $\ell_1$ is fibrewise closed. \end{prop} \begin{proof} Since $f$ is a Lagrangian submersion, the Hamiltonian vector fields $\eta_1, \eta_2^{\pm}, \ldots, \eta_n^{\pm}$ commute and are linearly independent. Therefore, for every fixed $b \in \Gamma$, the vector fields $\eta_j^{\pm}|_{F_b}$ span $(n-1)$-dimensional integrable distributions $H_{b}^{\pm}$, which are horizontal with respect to the $S^1$-bundle $p_b: F_b \rightarrow \bar F_b$. From the $S^1$ invariance of $f_2, \ldots, f_n$, it also follows that $H_{b}^{\pm}$ are $S^1$ invariant. 
Thus they define flat connections $\theta_{b}^{\pm}$ of the bundle $p_b: F_b \rightarrow \bar F_b$. From the properties of flat connections, it follows that $\theta_{b}^{-} - \theta_{b}^{+}$ is the pull back of a closed one form on $\bar{F}_b$. From (\ref{discrep eq}) we obtain \[ (\theta_{b}^{-} - \theta_{b}^{+})(\eta_{j}^{\pm}) = a_j|_{\bar{F}_b}, \] i.e. that \[ \theta_{b}^{-} - \theta_{b}^{+} = p_{b}^{\ast} (\ell_1|_{\bar{F_b}}). \] Therefore $\ell_1$ is fibrewise closed. \end{proof} Clearly, the definition of $\ell_1$ depends on a choice of coordinates on $U$. Let $\ell_1^{\prime}$ be a fibrewise closed section of $\mathfrak{L}^{\ast}$. We say that $\ell_1^{\prime}$ is equivalent to $\ell_1$ up to a change of coordinates on the base if there exists a neighbourhood $W \subseteq U$ of $\Gamma$ and an admissible coordinate map $\phi: W \rightarrow \numberset{R}^n$ such that $\ell_1^{\prime}$ is the section associated to $(f^{-1}(W), \phi \circ f)$ via Proposition \ref{prop. ell_1}. Denote by $[\ell_1]$ the class represented by $\ell_1$ modulo this equivalence relation. We say that a section $\delta$ of $\mathfrak{L}^{\ast}$ is fibrewise constant if \[ \mathcal{L}_{\bar{\eta}_j} \, \delta = 0, \] for all $j=2, \ldots, n$, here $\mathcal{L}_{\bar{\eta}_j}$ denotes the Lie derivative. One can easily check that the latter definition is independent of the admissible coordinates on the base used to define $\bar{\eta}_j$. We have the following \begin{prop} \label{base:chcoor} A section $\ell_1^{\prime}$ of $\mathfrak{L}^{\ast}$ is equivalent to $\ell_1$ up to a change of coordinates on the base if and only if \[ \ell_1^{\prime} = \ell_1 + \delta, \] where $\delta$ is fibrewise constant. In particular, the class of $\ell_1$ may be written as \[ [\ell_1]=\{ \ell_1+\delta\mid\delta\ \textrm{is fibrewise constant} \}. \] \end{prop} \begin{proof} Given an admissible change of coordinates $\phi: W \rightarrow \numberset{R}^n$, we must have $\phi_1 = b_1$. Moreover the partial derivatives $\partial_{k} \phi_j$ are defined and continuous on $W$ for all $k,j = 2, \ldots, n$. As far as derivatives with respect to $b_1$ are concerned, only left and right derivatives are defined and smooth on $\Gamma$, i.e. only $\partial_1 \phi^+_j$ and $\partial_1 \phi^-_j$, which may a priori differ. Let $(\eta_j^{\prime})^{\pm}$ be the Hamiltonian vector fields on $Z$ corresponding to $\phi_j \circ f$, with $j =2, \ldots, n$, and let $\bar{\eta}^{\prime}_j = p_{\ast}(\eta_j^{\prime})^{\pm}$ . An easy calculation shows that \[ (\eta_j^{\prime})^{\pm} = \partial_1 \phi^{\pm}_j \, \eta_1 + \sum_{k=2}^{n} \partial_k \phi_j \, \eta^{\pm}_k. \] In particular this implies \begin{equation} \label{chng} \bar{\eta}^{\prime}_j = \sum_{k=2}^{n} \partial_k \phi_j \, \bar{\eta}_k. \end{equation} and \begin{eqnarray*} (\eta_j^{\prime})^{+}- (\eta_j^{\prime})^{-} & = & (\partial_1 \phi^{+}_j - \partial_1 \phi^{-}_j) \, \eta_1 + \sum_{k=2}^{n} \partial_k \phi_j \, (\eta^{+}_k - \eta^{-}_k) \\ & = & (\partial_1 \phi^{+}_j - \partial_1 \phi^{-}_j + \sum_{k=2}^{n} a_k\partial_k \phi_j ) \, \eta_1. \end{eqnarray*} If $\ell_1^{\prime}$ is the 1-form associated to $\phi \circ f$ via Proposition \ref{prop. ell_1}, then by definition we must have \[ \ell_1^{\prime}(\bar{\eta}^{\prime}_j) = \partial_1 \phi^{+}_j - \partial_1 \phi^{-}_j + \sum_{k=2}^{n} a_k\partial_k \phi_j. 
\] Let $\delta$ be the section of $\mathfrak{L}^{\ast}$ defined by \begin{equation} \label{base:discr} \delta(\bar{\eta}^{\prime}_j) = \partial_1 \phi^{+}_j - \partial_1 \phi^{-}_j. \end{equation} Then, also using (\ref{chng}), we see that \[ (\ell_1 + \delta)(\bar{\eta}^{\prime}_j) = \ell_1^{\prime}(\bar{\eta}^{\prime}_j). \] Moreover, from (\ref{base:discr}) one can see that $\delta (\bar\eta'_{j})$ descends to a function on $\Gamma$ and therefore $\delta$ is fibrewise constant. Now suppose that $\delta$ is a fibrewise constant section of $\mathfrak{L}^{\ast}$. Let \[ \delta(\bar{\eta}_j) = d_j. \] Since $\delta$ is fibrewise constant, the $d_j$'s are fibrewise constant functions on $\bar{Z}$, i.e. they descend to functions on $\Gamma$. Define the following map \[\phi(b_1, \ldots, b_n) = \begin{cases} (b_1, \, b_2 + d_2(b_2, \ldots, b_n) \, b_1, \ldots, \, b_n + d_n(b_2, \ldots, b_n) \, b_1),\quad\text{when}\ b_1 \geq 0\\ \\ \I,\quad\text{when}\ b_1 <0. \end{cases} \] It is a well defined admissible coordinate map on some open neighbourhood of $\Gamma$. It is also clear that (\ref{base:discr}) holds. \end{proof} \begin{defi} We call $\ell_1$ the \textbf{first order invariant} of the stitched fibration $(X, \omega ,f)$. \end{defi} The name ``invariant" in the above Definition will be fully justified later on. It is clear from the proof of Proposition \ref{base:chcoor} that $\delta$ is a first order measure of how far the change of coordinates on the base is from being smooth, in particular if it is smooth then $\delta = 0$. We also have the following: \begin{cor} If there exists an admissible change of coordinates on the base which makes the stitched Lagrangian fibration smooth, then $\ell_1$ is fibrewise constant. \end{cor} \begin{proof} It is clear that if $\phi \circ f$ is smooth then we must have that its first order invariant $\ell_1^{\prime}$ is zero. It then follows from Proposition~\ref{base:chcoor} that $\ell_1$ must be fibrewise constant. \end{proof} We now describe action-angle coordinates of a stitched Lagrangian fibration $f: X \rightarrow U$. Let $\alpha$ be a 1-form on $U$. Since $f^\pm$ is the restriction of a smooth map, $\alpha$ pulls back to an honest smooth 1-form $\alpha^\pm$ defined on a neighbourhood of $Z$. The latter defines a smooth vector field $v_\alpha^\pm$ determined by the equation (\ref{eq contr.}). The flow of $v_\alpha^\pm$, when restricted to $X^\pm$, is fibre-preserving. This induces an action of $T^\ast_bU$ on the fibre $(f^\pm)^{-1}(b)$ for all $b\in U^\pm$. Let $\sigma: U \rightarrow X$ be a continuous section which is smooth and Lagrangian when restricted to $U^{\pm}$. Then, as explained in \S \ref{section aa}, there is a maximal smooth lattice $\Lambda_{\pm} \subset T^{\ast}U^{\pm}$ and a diagram \[ \xymatrix{ T^{\ast}U^{\pm} / \Lambda_{\pm} \ar[dr]_{\pi^\pm} \ar[rr]^{\Theta^\pm} & & X^\pm \ar[dl]^{f^\pm} \\ & U^\pm } \] where $\Theta^{\pm}$ is a symplectomorphism and $\pi^{\pm}$ is the standard projection. Let $\Phi_{\eta_1}^{t}, \Phi_{\eta_2^{\pm}}^{t} \ldots, \Phi_{\eta_n^{\pm}}^{t}$ denote the flow of $\eta_1, \eta_2^{\pm}, \ldots, \eta_n^{\pm}$ respectively. 
Then \begin{equation} \label{theta:flows} \Theta^{\pm}: (b, \, \sum_{j} t_j \, db_j) \mapsto \Phi_{\eta_1}^{t_1} \circ \Phi_{\eta_2^{\pm}}^{t_2} \circ \ldots \circ \Phi_{\eta_n^{\pm}}^{t_n}(\sigma(b)), \end{equation} and \[ \Lambda_{\pm} = \{ (b, \, \sum_{j} T_j \, db_j) \in T^{\ast}U^{\pm} \ | \ \Phi_{\eta_1}^{T_1} \circ \Phi_{\eta_2^{\pm}}^{T_2} \circ \ldots \circ \Phi_{\eta_n^{\pm}}^{T_n}(\sigma(b)) = \sigma(b) \} \] Now let \[ \lambda_1= db_1. \] The $S^1$ action implies $db_1 \in \Lambda_{\pm}$. Let us denote a basis for $\Lambda_{\pm}$ by $\{\lambda_1, \lambda_2^{\pm}, \ldots, \lambda_n^{\pm} \}$, where \[ \lambda_{j}^{\pm} = \sum_{k=1}^{n} T_{jk}^{\pm} db_k. \] The $S^1$ action on $X$ corresponds to translations along the $\lambda_1$ direction. Let \[ Z^{\pm} = (\pi^{\pm})^{-1}(\Gamma). \] If we denote \[ \bar{\lambda}_{j}^{\pm} = \lambda_{j}^{\pm} \mod db_1 \] and let $\bar{\Lambda}^{\pm} = \spn \inner{ \bar{\lambda}_{2}^{\pm}}{, \ldots, \bar{\lambda}_{n}^{\pm}}_{\numberset{Z}}$, then $\Theta^{\pm}$ identifies $Z / S^1$ with \[ \bar{Z}^{\pm} = T^{\ast} \Gamma / \bar{\Lambda}_{\pm}. \] Denote by $\bar{t} = (t_2, \ldots, t_n)$ the coordinates on the fibres of $\bar{Z}^-$. Now observe that, due to the discrepancy (\ref{discrep eq}) between $\eta_{j}^{+}$ and $\eta_{j}^{-}$ along $Z$, $\Theta^+$ and $\Theta^-$ behave differently on fibres lying over $\Gamma$. We have the diagram: \[ \xymatrix{ Z^- \ar[dr]_{\Theta^-} & & Z^+ \ar[dl]^{\Theta^+} \\ & Z } \] and the difference between the two maps is measured by \[ (\Theta^{+})^{-1} \circ \Theta^{-} : Z^{-} \rightarrow Z^{+}. \] We have the following characterisation of this map: \begin{prop} \label{map:Q} The discrepancy (\ref{discrep eq}) between the Hamiltonian vector fields of the stitched Lagrangian fibration $f: X \rightarrow U$ induces the map \[ Q = (\Theta^{+})^{-1} \circ \Theta^{-} \] between $Z^-$ and $Z^+$. Let \[ \ell_1^- = (\Theta^-)^{\ast} \ell_1. \] Then, computed explicitly in the canonical coordinates on $ T^{\ast} U^{-}$ and $T^{\ast} U^{+}$, $Q$ is given by \begin{equation} \label{glue:Q} Q: (b, t_1, \bar{t}) \mapsto \left(b, \ t_1 - \int_0^{\bar{t}} \ell_1^-, \ \bar{t} \right), \end{equation} where $(b, \bar{t})$ are the canonical coordinates on $\bar{Z}^-$ and the integral is a line integral in $T^{\ast}_b\Gamma$ along a path joining $(b,0)$ and $(b, \bar t)$. \end{prop} \begin{proof} Let $(b_1, \ldots, b_n, t_1, \ldots, t_n)$ and $(b_1, \ldots, b_n, y_1, \ldots, y_n)$ be the canonical coordinates on $ T^{\ast} U^{-}$ and $T^{\ast} U^{+}$ respectively. From its definition, we see that $\Theta^{+}$ identifies $\eta_1, \eta_{2}^{+}, \ldots, \eta_{n}^{+}$ with $\partial_{y_1}, \ldots, \partial_{y_n}$ and w.l.o.g. we can assume that it sends $\sigma$ to the zero section of $T^\ast U$. Therefore (\ref{discrep eq}) becomes \begin{equation} \label{vf:coo} \eta_{j}^{-} = \partial_{y_j} - (a_j \circ \Theta^{+}) \ \partial_{y_1}. \end{equation} Notice that $a_j \circ \Theta^{+}$ is independent of $y_1$. Computing the flows of $\eta_1, \eta_2^{-}, \ldots, \eta_n^{-}$ in these coordinates is not difficult and it turns out that $Q$ is given by \[ Q: (b, t_1, \ldots, t_n) \mapsto \left( b, \ t_1 - \sum_{j=2}^{n} \int_0^{t_j} a_j \circ \Theta^- (b, t_2, \ldots, t_{j-1}, t,0, \ldots, 0) dt, \ t_2, \ldots, \ t_n \right), \] which is equivalent to (\ref{glue:Q}), since $\ell_1$ is fibrewise closed\footnote{To verify that the above expression of $Q$ is correct, it is enough to check that $\partial_{t_j}Q=\eta^-_j\circ Q$.}. 
\end{proof} We now explain how the map $Q$ matches the periods in $\Lambda_-$ with those in $\Lambda_+$. The maps $\Theta^{\pm}$ naturally identify $\Lambda_{\pm}$ with $H_1( X, \numberset{Z}) \cong \numberset{Z}^n$, but in general $\Theta^{-}$ does so differently from $\Theta^+$. Let $\gamma_1$ be the cycle represented by the orbit of the $S^1$ action. We know that $\gamma_1$ always corresponds to the period $db_1$. We have the following \begin{cor} \label{broken:per} Suppose we choose bases $\{ \lambda_{1}, \lambda_2^{\pm}, \ldots, \lambda_n^{\pm} \}$ of $\Lambda_{\pm}$ corresponding to two bases $\gamma^{\pm} = \{ \gamma_1,\gamma_2^{\pm}, \ldots, \gamma_n^{\pm} \}$ of $H_1( X, \numberset{Z})$, such that \begin{itemize} \item[\textit{(i)}] $\gamma_1$ is represented by an orbit of the $S^1$ action, \item[\textit{(ii)}] $\gamma_j^{+} = \gamma_j^{-} + m_j \gamma_1$, for some $m_2, \ldots,m_n\in\numberset{Z}$. \end{itemize} Then at a point $b \in \Gamma$ we have \begin{equation} \label{per:corr} \lambda_{j}^{+}(b) = \lambda_{j}^{-}(b) + \left( m_j - \int_{\bar{\lambda}_{j}^{-}} \ell_1^{-} \right) \, \lambda_1, \end{equation} where the integral of $\ell_1^-$ is taken along the cycle represented by $\bar{\lambda}_{j}^{-}$. In particular \begin{equation} \label{p2:id} \bar{\lambda}_{j}^{+}(b) = \bar{\lambda}_{j}^{-} (b). \end{equation} \end{cor} \begin{proof} To obtain (\ref{per:corr}) it suffices to observe that since $m_j \lambda_1 + \lambda_{j}^{-}$ and $\lambda_{j}^{+}$ have to represent the same $1$-cycle in $f^{-1}(b)$, they must be mapped to one another by $Q$. The result is therefore obtained by applying (\ref{glue:Q}) to $m_j\lambda_1+\lambda_j^-$. \end{proof} \begin{rem} Condition $(ii)$ means that under the map $p_*: H_1(X,\numberset{Z} ) \rightarrow H_1(X\slash S^1,\numberset{Z} )$, the bases $\gamma^+$ and $\gamma^-$ are mapped to the same basis of $H_1(X\slash S^1,\numberset{Z} )$. We will need to consider condition $\textit{(ii)}$ in \S~\ref{sec. stitch w/mono} where we discuss stitched Lagrangian fibrations over non-simply connected bases, for which non-trivial monodromy may occur. \end{rem} \begin{rem} \label{quotient} From Proposition~\ref{map:Q} and Corollary \ref{broken:per} it follows that $\bar{\Lambda}_- = \bar{\Lambda}_+$ and that on the quotients $\bar{Z}^+$ and $\bar{Z}^-$, $Q$ acts as the identity. Therefore, if we let \[ \ell_1^+ = (\Theta^+)^{\ast} \ell_1, \] then we have $\bar{Z}^+ = \bar{Z}^-$ and $\ell_1^+ = \ell_1^-$. Thus we can remove the $+$ and $-$ signs and denote \[ \bar{\lambda}_j = \bar{\lambda}_{j}^{+} = \bar{\lambda}_{j}^{-} \] \[ \bar{\Lambda} = \bar{\Lambda}_- = \bar{\Lambda}_+ \] and, with slight abuse of notation, identify $\bar{Z}$ with $T^{\ast} \Gamma / \bar{\Lambda}$ and $\bar{f}$ with the projection $\bar{\pi}: \bar{Z} \rightarrow \Gamma$. Notice then that $\mathfrak{L}$ is identified with $\ker \bar{\pi}_{\ast}$ and $\ell_1$ with $\ell_1^{\pm}$. \end{rem} It is natural to consider bases of $H_1(X, \numberset{Z})$ satisfying conditions $(i)$ and $(ii)$ also because of the following \begin{lem} \label{broken:action} Let $\{ \gamma_1,\gamma_2^{\pm}, \ldots, \gamma_n^{\pm} \}$ be bases of $H_1( X, \numberset{Z})$ satisfying conditions $(i)$ and $(ii)$ of Corollary~\ref{broken:per} and let $\alpha^{\pm}: U^{\pm} \rightarrow \numberset{R}^n$ be the corresponding action coordinates satisfying $\alpha^{\pm}(0) = 0$. Then the map \begin{equation}\label{eq. 
stitched action} \alpha = \begin{cases} \alpha^{+} \quad\text{on} \ U^{+}, \\ \alpha^{-} \quad\text{on} \ U^{-}, \end{cases} \end{equation} is an admissible change of coordinates. \end{lem} \begin{proof} Action coordinates $\alpha^{\pm} = (\alpha_1^\pm ,\ldots , \alpha_n^\pm )$ are defined by the integral \[ \alpha^\pm_j(b) = \int_{0}^{b} \lambda_j^{\pm}. \] along a curve in $U^\pm$ joining $0$ and $b$. When $j=1$, this gives $\alpha_1^+=\alpha_1^-=b_1$. Clearly $\alpha$ is a diffeomorphism when restricted to $U^+$ or $U^{-}$. Moreover $\alpha$ is injective. The fact that $\alpha^+$ and $\alpha^-$ coincide along $\Gamma$ follows from (\ref{p2:id}) and the connectedness of $\Gamma$. In fact (\ref{p2:id}) implies that when $b \in \Gamma$ the above integral gives \[ \alpha^+_j(b) = \int_{0}^{b} \bar{\lambda}_j^{+}= \int_{0}^{b} \bar{\lambda}_j^{-} = \alpha^-_j(b). \] This concludes the proof. \end{proof} \begin{rem} \label{rem:action} The upshot of Lemma \ref{broken:action} is that after a change of coordinates as in (\ref{eq. stitched action}) we can always assume that the coordinates on the base $U$, when restricted to $U^{\pm}$, are action coordinates corresponding to bases $\{ \gamma_1,\gamma_2^{\pm}, \ldots, \gamma_n^{\pm} \}$ of $H_1( X, \numberset{Z})$ satisfying $(i)$ and $(ii)$ of Corollary~\ref{broken:per}. Then $\{ db_1, db_2, \ldots, db_n \}$ form a basis of $\Lambda_{+}$ and $\Lambda_-$. From (\ref{per:corr}) it also follows that, in view of the identifications of Remark~\ref{quotient}, $\ell_1$ must satisfy \[ \int_{\bar{\lambda}_{j}} \ell_1 = m_j. \] The reader should be warned at this point that, although the map $\alpha$ as in (\ref{eq. stitched action}) allows us to find action coordinates on both $U^+$ and $U^-$, we still have two different sets of action-angle coordinates, $(b_1,\ldots ,b_n,y_1,\ldots ,y_n)$ on $X^+$ and $(b_1,\ldots ,b_n,t_1,\ldots ,t_n)$ on $X^-$. This is due to the discrepancy between $\Theta^+$ and $\Theta^-$, which makes the map: \[ \Theta = \begin{cases} (\Theta^{+})^{-1} \quad\text{on} \ X^{+} \\ (\Theta^{-})^{-1} \quad\text{on} \ X^{-} \end{cases} \] discontinuous along the seam $Z$. As pointed out before, this discrepancy is measured by $\ell_1$. \end{rem} In the next theorem we show that any fibrewise closed section $\ell_1 \in \mathfrak{L}^{\ast}$ can be the first order invariant of a stitched Lagrangian fibration. \begin{thm} \label{broken:constr} Let $U$ be an open contractible neighbourhood of $0 \in \numberset{R}^n$ such that $\Gamma = U \cap \{ b_1 = 0 \}$ is contractible. Let $\bar{\Lambda} \subseteq T^{\ast} \Gamma$ be the lattice spanned by $\{ db_2, \ldots, db_n \}$, and let $\bar{Z} = T^{\ast} \Gamma / \bar{\Lambda}$, with projection $\bar{\pi}: \bar{Z} \rightarrow \Gamma$ and bundle $\mathfrak{L} = \ker \bar{\pi}_{\ast}$. 
Given integers $m_2, \ldots, m_n$ and a smooth, fibrewise closed section $\ell_{1}$ of $\mathfrak{L}^{\ast}$ such that \begin{equation} \label{int:cond} \int_{db_j} \ell_1 = m_j \ \ \ \ \text{for all} \ \ j=2, \ldots, n, \end{equation} there exists a smooth symplectic manifold $(X, \omega)$ and a stitched Lagrangian fibration $f: X \rightarrow U$ satisfying the following properties: \begin{itemize} \item[\textit{(i)}] the coordinates $(b_1, \ldots, b_n)$ on $U$ are action coordinates of $f$ with $\mu = f^{\ast}b_1$; \item[\textit{(ii)}] the periods $\{ db_1, \ldots, db_n \}$, restricted to $U^{\pm}$ correspond to basis $\{ \gamma_1, \gamma_2^{\pm}, \ldots, \gamma_n^{\pm} \}$ of $H_1(X, \numberset{Z})$ satisfying $(i)$ and $(ii)$ of Corollary~\ref{broken:per}; \item[\textit{(iii)}] $\ell_1$ is the first order invariant of $(X,f)$. \end{itemize} \end{thm} \begin{proof} We regard the two halves of $U$, $U^+$ and $U^-$ defined as before, as disjoint sets. Let $\Lambda_{\pm}$ be the lattices in $T^{\ast}U^{\pm}$ spanned by $\{ db_1, db_2, \ldots, db_n \}$ and define $X^{\pm} = T^{\ast}U^{\pm} / \Lambda_{\pm}$, with corresponding projections $\pi^{\pm}$. Let $Z^{\pm} = \partial X^{\pm} = (\pi^{\pm})^{-1}(\Gamma)$. Translations along the $db_1$ direction define an $S^1$ action on $Z^{\pm}$ such that $\bar{Z} = Z^{\pm} / S^1$. On $T^{\ast}U^{+}$ and $T^{\ast}U^{-}$, we consider canonical coordinates $(b_1, \ldots, b_n, y_1, \ldots, y_n)$ and $(b_1, \ldots, b_n, t_1, \ldots, t_n)$ respectively (or $(b,y)$ and $(b,t)$ for short). Coordinates on $\bar{Z}$ are given by $(b, \bar{t})$ (or $(b, \bar{y})$), where $b=(0, b_2, \ldots, b_n) \in \Gamma$ and $\bar{t} = (t_2, \ldots, t_n)$ (or $\bar{y} = (y_2, \ldots, y_n)$). For $j =2, \ldots, n$, let \[ a_{j} = \ell_1(\partial_{t_j})= \ell_1(\partial_{y_j})\] On $X^+$, let $\eta_1 = \partial_{y_1}$ and $\eta_j^{+} = \partial_{y_j}$ then on $Z^{+}$ we can define vector fields \[ \eta_{j}^{-} = \eta_j^{+} - a_j \ \eta_1, \] which is coherent with (\ref{vf:coo}). We can define a map $Q: Z^- \rightarrow Z^+$ by composition of the flows of $\eta_1, \eta_2^{-}, \ldots, \eta_n^{-}$, i.e. \[ Q: (b, t_1, \ldots, t_n) \mapsto \Phi_{\eta_1}^{t_1} \circ \Phi_{\eta_{2}^{-}}^{t_2} \circ \ldots \circ \Phi_{\eta_{n}^{-}}^{t_n} (b,0). \] Clearly, $Q$ can be written as in (\ref{glue:Q}). One can easily see that the properties of $\ell_1$ ensure that $Q$ is a well defined fibre-preserving diffeomorphism which sends the cycles represented by $db_1$ and $db_j$ in $H_1(Z^-,\numberset{Z} )$ to the cycles represented by $db_1$ and $db_j-m_jdb_1$ in $H_1(Z^+,\numberset{Z} )$, $j = 2, \ldots, n$, respectively. Intuitively, $Q$ identifies fibres of $\pi^-$ inside $Z^-$ with fibres of $\pi^+$ inside $Z^+$ after the latter ones have been twisted by iteratively flowing in the direction of $\eta_j^-$, $j=2,\ldots ,n$. Topologically we define \[ X = X^+ \cup_{Q} X^- .\] To give $X$ smooth and symplectic structures we have to extend the gluing map $Q$ to open neighbourhoods of $Z^+$ and $Z^-$. Let open sets $\tilde{U}^+$ and $\tilde{U}^{-}$ be small enlargements of $U^+$ and $U^-$ respectively, obtained by joining small open neighbourhoods of $\Gamma$ to $U^+$ and $U^-$. Extend $\Lambda_{\pm}$ to lattices of $T^{\ast}\tilde{U}^{\pm}$ in a constant way. We look for neighbourhoods $V^{\pm}$ of $Z^{\pm}$ inside $T^{\ast}\tilde{U}^{\pm}/ \Lambda_{\pm}$ and a symplectomorphism $\tilde{Q}: V^- \rightarrow V^+$ extending $Q$. One can achieve this by considering an ``auxiliary" fibration. 
Suppose for now that we could find a neighbourhood $V^{+}$ of $Z^+$ and a smooth, proper $S^1$-invariant Lagrangian fibration $u: V^+ \rightarrow \numberset{R}^n$, with components $u_j$ such that: \begin{equation} \label{ext:ham} \begin{array}{ll} u_1=b_1, & \\ u|_{Z^+} = \pi^+, & \\ \eta_{u_j}|_{Z^+} = \eta_{j}^{-}, &\textrm{when}\ j=2, \ldots, n. \end{array} \end{equation} This amounts to prescribing zero and first order terms of $u$ along $Z^+$ in the Taylor expansion of $u$ with respect to $b_1$. Now inside $\tilde{U}^-$ there will be a small open neighbourhood $W$ of $\Gamma$ and a symplectomorphism: \[ \tilde{Q}: V^- \rightarrow V^+, \] where $V^-:=(\pi^-)^{-1}(W)$ and \[ \tilde Q: (b, t_1, \ldots, t_n) \mapsto \Phi_{\eta_1}^{t_1} \circ \Phi_{\eta_{u_2}}^{t_2} \circ \ldots \circ \Phi_{\eta_{u_n}}^{t_n} (b,0). \] In other words, $\tilde Q$ is the action-angle coordinate map associated to the fibration $u: V^+ \rightarrow \numberset{R}^n$, computed with respect to the cycles $\{ db_1, -m_2 db_1 + db_2, \ldots, -m_n db_1 + db_n \}$ (it may be necessary, for this purpose, to restrict to a smaller $V^+$). From (\ref{ext:ham}) it follows that $\tilde Q$ extends $Q$. We define \[ X = (X^+ \cup V^+) \cup_{\tilde{Q}} (X^- \cup V^-). \] and the stitched Lagrangian fibration to be \[ f = \begin{cases} \pi^+ \quad\text{on} \ X^+ \\ \pi^- \quad\text{on} \ X^-. \end{cases} \] Due to the non-triviality of the gluing map $\tilde{Q}$ used to define $X$, $f$ is in general piecewise smooth. In fact if we pull back $f$ via the inclusion $ X^+ \cup V^+ \hookrightarrow X$, then we obtain \[ f|_{ X^+ \cup V^+} = \begin{cases} \pi^+ \quad\text{on} \ b_1 \geq 0 \\ u \quad\text{on} \ b_1 \leq 0. \end{cases} \] This is because $\pi^- = \tilde{Q}^{\ast} u$. By construction $(X, \omega)$ and $f$ satisfy the conditions $(i) - (iii)$. Now we prove that a fibration $u:V^+\rightarrow\numberset{R}^n$ satisfying (\ref{ext:ham}) exists. For every $b \in \Gamma$, consider the following one-parameter family of closed $1$-forms on the fibre $F_{b} = (\pi^+)^{-1}(b)$ \[ \ell(r) = r(dy_1 + \ell_1), \] where $r \in \numberset{R}$. For every $r$, the graph of $\ell(r)$ defines a Lagrangian submanifold inside $T^{\ast}F_b$. For $r$ sufficiently small, let $L_{r,b}$ be the image of the graph of $\ell(r)$ under the symplectomorphism \[ (y_1, \ldots, y_n, \sum_{k=1}^{n} x_k dy_k) \mapsto (x_1, b_2 + x_2, \ldots, b_n + x_n, y_1, \ldots, y_n),\] between a neighbourhood of the zero section of $T^{\ast}F_b$ and a neighbourhood of $F_b$ inside $T^{\ast}\tilde{U}^{+}/ \Lambda_{+}$. Then there will be a sufficiently small neighbourhood $V^+$ of $Z^+$ which is fibred by the submanifolds $L_{r,b}$, i.e. on which the manifolds $L_{r,b}$ are the fibres of a Lagrangian fibration $u: V^+ \rightarrow \numberset{R}^n$. This is due to the fact that the map \[ (r, b_2, \ldots, b_n, y_1, \ldots, y_n) \mapsto (r, b_2 + r a_2(b,\bar{y}), \ldots, b_n + r a_n(b,\bar{y}), y_1, \ldots, y_n) \] is a diffeomorphism when restricted to a neighbourhood of $\{0 \} \times Z^+$ inside $\numberset{R} \times Z^+$. We now show that a possible choice of $u$ also satisfies (\ref{ext:ham}). Notice that $u$ will be $S^1$-invariant since its fibres $L_{r,b}$ are $S^1$-invariant. Given $(b^{\prime}, y^{\prime}) \in V^+$, there exists a unique $(r,b) \in \numberset{R} \times Z^+$ such that $L_{r,b} \subset V^+$ and $(b^{\prime}, y^{\prime}) \in L_{r,b}$. 
In fact $(r,b)$ can be determined as a function of $(b^{\prime}, y^{\prime})$ by solving the non linear system \begin{equation} \label{sistema} \begin{cases} r = b_1^{\prime} \\ b_j + r a_j(b, y^{\prime}) = b_j^{\prime} \quad\text{when} \ j=2, \ldots, n \end{cases} \end{equation} using the implicit function theorem. Now define \[ u_1(b^{\prime}, y^{\prime}) = b_1^{\prime} \] and, when $j=2, \ldots, n$ \begin{equation} \label{ugei} u_j(b^{\prime}, y^{\prime}) = b_j, \end{equation} where $b_j$ (and thus $b$) are functions of $(b^{\prime}, y^{\prime})$. Notice that the $S^1$-invariance of $u_j$ can also be seen from the fact that $u_j$ is independent of $y_1$. It is clear that, when $j = 2, \ldots, n$ \[ \begin{cases} \partial_{y_k^{\prime}} u_j |_{Z^+} = 0 \quad\text{for all} \ k = 1, \ldots,n \\ \partial_{b_k^{\prime}} u_j |_{Z^+} = \delta_{kj} \quad\text{for all} \ k=2, \ldots, n. \end{cases} \] Therefore \[ \eta_{u_j}|_{Z^+} = \partial_{b_1^{\prime}} b_j \, \partial_{y_1} + \partial_{y_j}. \] Using (\ref{sistema}) we compute that \[ \partial_{b_1^{\prime}} b_j|_{Z^+} = - a_j, \] which proves that conditions (\ref{ext:ham}) are satisfied. \end{proof} \section{Higher order terms} \label{sec:ho-inv} In Theorem~\ref{broken:constr} we provided a (local) construction of stitched Lagrangian fibrations with any given first order invariant satisfying integrality conditions (\ref{int:cond}). It involved the choice of a Poisson-commuting set of functions $u_1, \ldots, u_n$ (producing a Lagrangian fibration $u$) defined on a neighbourhood of $Z$ and with prescribed $0$-th and $1$-st order terms (cf. (\ref{ext:ham})). In general there may be many choices of such functions giving stitched Lagrangian fibrations which are not fibrewise symplectomorphic. It is necessary to look at higher order terms. In this section we give a description of these higher order terms and prove an existence result for stitched Lagrangian fibrations with prescribed higher order terms. We fix here some basic notation. Let $(b_1, \ldots, b_n)$ be standard coordinates on $\numberset{R}^n$. Let $\numberset{R}^{n-1}$ be embedded in $\numberset{R}^n$ as the subset $\{ b_1 = 0 \}$ and let $\Gamma \subset \numberset{R}^{n-1}$ be an open neighbourhood of $0 \in \numberset{R}^{n-1}$. We will denote by $U$ an open neighbourhood of $\Gamma$ in $\numberset{R}^n$. We assume that the pair $(U, \Gamma)$ is diffeomorphic to the pair $(D^{n}, D^{n-1})$ where $D^k \subset \numberset{R}^k$ is the unit ball centred at $0$. Denote $U^+ = U \cap \{ b_1 \geq 0 \}$ and $U^- = U \cap \{ b_1 \leq 0 \}$. Then $\Gamma = U^+ \cap U^-$. Let $\Lambda$ be the lattice in $T^{\ast} U$ generated by $\{ db_1, \ldots, db_n \}$ and consider $T^{\ast}U / \Lambda$ with the standard symplectic form and with projection onto $U$ denoted by $\pi$. We assume $S^1$ acts on $T^{\ast}U / \Lambda$ via translations along the $db_1$ direction. Let $Z = \pi^{-1}(\Gamma)$ and $\bar{Z} = Z / S^1$. If $\bar{\Lambda}$ denotes the lattice in $T^{\ast} \Gamma$ spanned by $\{ db_2, \ldots, db_n \}$, we have $\bar{Z} = T^{\ast} \Gamma / \bar{\Lambda}$ with projection $\bar{\pi}$. Given $b \in \Gamma$, we denote $F_b = \pi^{-1}(b)$ and $\bar{F}_b = \bar{\pi}^{-1}(b) = F_b / S^1$. Canonical coordinates on $T^{\ast} U$ are denoted by $(b,y)=(b_1, \ldots, b_n, y_1, \ldots, y_n)$. We also have the bundle $\mathfrak{L} = \ker \bar{\pi}_{\ast}$. 
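To fix ideas, and only as a sketch of computations carried out more generally below, we note that in these coordinates the Poisson bracket of two $S^1$-invariant (i.e. $y_1$-independent) functions descends to the bracket
\[
\{ g,h\}=\sum_{k=2}^{n}\partial_{y_k}g\,\partial_{b_k}h-\partial_{b_k}g\,\partial_{y_k}h
\]
on $\bar Z=T^{\ast}\Gamma/\bar\Lambda$; this is the bracket on $\bar Z$ used in the proof of Proposition~\ref{u:tay eq} below. Note also that for $n=2$ the fibres $\bar F_b$ are circles and $\Lambda^2\mathfrak{L}^{\ast}=0$, so equations (\ref{u:rec}) below are automatically satisfied in that case.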
Throughout this section we will study the set defined in the following \begin{defi} We define $\mathscr U_{\bar{Z}}$ to be the set of pairs $(V,u)$ where $V$ is a neighbourhood of $Z$ and $u: V \rightarrow \numberset{R}^n$ is a $C^\infty$, proper, $S^1$-invariant, Lagrangian submersion, with components $(u_1, \ldots, u_n)$, such that $u|_{Z} = \pi$ and $u_1 = b_1$. \end{defi} Given $(V,u) \in \mathscr U_{\bar{Z}}$, let $Y^+ := \pi^{-1}(U^+)$, $Y:= Y^+ \cup V$, $Y^- := Y \cap \pi^{-1}(U^-)$ and define the map $f_u: Y \rightarrow \numberset{R}^n$ by \begin{equation} \label{u:st} f_u = \begin{cases} u \quad\text{on} \ Y^-, \\ \pi \quad\text{on} \ Y^+. \end{cases} \end{equation} Clearly $(Y,f_u)$ is a stitched Lagrangian fibration. We study the aforementioned higher order terms of such fibrations. \begin{prop} \label{u:tay eq} Let $(V,u) \in \mathscr U_{\bar{Z}}$. For every $N \in \numberset{N}$ and $j=2, \ldots, n$, consider the $N$-th order Taylor series expansion of $u_j$ in the variable $b_1$, evaluated at $b_1=0$: \begin{equation} \label{ugei:tay} u_j = \sum_{k=0}^{N} S_{j,k} b_1^k + o(b_1^N), \end{equation} where $S_{j,k}$ are smooth functions on $Z$ which are $S^1$ invariant (i.e. independent of $y_1$). For every $m \in \numberset{N}$, define the following sections of $\mathfrak{L}^{\ast}$ and $\Lambda^2 \, \mathfrak{L}^{\ast}$ respectively \begin{equation}\label{eq:S_m} S_m = \sum_{j=2}^{n} S_{j,m} \, dy_j \end{equation} and \begin{equation} \label{p:em} \begin{cases} P_1 = 0 \\ P_m = \sum_{j < l}^{n} \left( \sum_{k=1}^{m-1} \{ S_{j,k},S_{l, m-k} \} \right) \, dy_j \wedge dy_l \quad\text{when} \ m\geq 2. \end{cases} \end{equation} where $\{ \cdot , \cdot \}$ denotes the Poisson bracket on $\bar{Z}$. Then on every fibre $\bar{F}_b$, $S_m$ and $P_m$ satisfy the following equations \begin{equation} \label{u:rec} d \, S_m|_{\bar{F}_b} = P_{m}|_{\bar{F}_b}. \end{equation} \end{prop} \begin{proof} We recall that the Poisson bracket on $T^*U / \Lambda$ can be written as \[ \{ f, g \} = \sum_{k=1}^{n} \partial_{y_k} f \, \partial_{b_k} g - \partial_{b_k}f \, \partial_{y_k}g .\] Since the functions $S_{j,m}$ do not depend on $y_1$, one can easily see that the following holds \[ \{ S_{j,k} \, b_1^k , \, S_{l,m} \, b_1^m \} = \{ S_{j,k} , \, S_{l,m} \} \, b_1^{k+m}, \] where the bracket on the right hand side reduces to the bracket on $\bar{Z}$. Also we have \[ \{ S_{j,k} \, b_1^k, \, o(b_1^N) \} = o(b_1^{N+k}). \] Thus we have \begin{eqnarray*} \{ u_j, u_l \} & = & \sum_{0 \leq m+k \leq N} \{ S_{j,k},\, S_{l,m} \} \, b_1^{m+k} + o(b_1^{N}) \\ & = & \sum_{m = 0}^{N} \left( \sum_{k=0}^{m} \{ S_{j,k},\, S_{l, m-k} \} \right) \, b_1^m + o(b_1^N). \end{eqnarray*} Therefore if $u_j$ and $u_l$ commute then we must have that for all $m \in \numberset{N}$ \[ \sum_{k=0}^{m} \{ S_{j,k},\, S_{l, m-k} \} = 0, \] or that \begin{equation} \label{u:a} \{ S_{j,m}, S_{l,0} \} + \{ S_{j,0}, S_{l,m} \} = - \sum_{k=1}^{m-1} \{ S_{j,k},\, S_{l, m-k} \}. \end{equation} The condition that $u|_{Z} = \pi$ implies \[ S_{j,0} = b_j. \] Therefore \[ \{ S_{j,m}, S_{l,0} \} = \{ S_{j,m}, b_l \} = \partial_{y_l} S_{j,m}. \] We then see that (\ref{u:a}) becomes \[ \partial_{y_l} S_{j,m} - \partial_{y_j} S_{l,m} = - \sum_{k=1}^{m-1} \{ S_{j,k},\, S_{l, m-k} \}\] which is exactly what we get by expanding (\ref{u:rec}). \end{proof} \begin{rem} Notice that (\ref{u:rec}) are a set of partial differential equations satisfied by the sequence $\{ S_m \}_{m \in \numberset{N}}$. 
Moreover the definition of $P_m$ depends only on the $S_k$'s with $k \leq m-1$; therefore one may think of solving the equations recursively. From each solution $S_m$ of the $m$-th equation, we may determine another by adding to $S_m$ a fibrewise closed section of $\mathfrak{L}^{\ast}$. \end{rem} Now we provide a method to construct and characterise sequences $\{ S_m \}_{m\in \numberset{N}}$ of solutions to (\ref{u:rec}). Suppose $(V,u) \in \mathscr U_{\bar{Z}}$ and let $W \subseteq u(V)$ be a neighbourhood of $\Gamma$. Let $r \in \numberset{R}$ be a parameter. For $b = (0, b_2, \ldots, b_n) \in \Gamma$, let $(r,b)$ denote the point $(r,b_2, \ldots, b_n) \in \numberset{R}^n$. Given $(r,b) \in W$, denote by $L_{r,b}$ the fibre $u^{-1}((r,b))$. For every fibre $F_b \subset Z$ of $\pi$, consider the symplectomorphism \begin{equation} \label{fb:symp} (y_1, \ldots, y_n, \sum_{k=1}^{n} x_k dy_k) \mapsto (x_1, b_2 + x_2, \ldots, b_n + x_n, y_1, \ldots, y_n), \end{equation} between a neighbourhood of the zero section of $T^{\ast}F_b$ and a neighbourhood of $F_b$ in $V$. If $W$ is sufficiently small, for every $(r,b) \in W$, the Lagrangian submanifold $L_{r,b}$ will be the image of the graph of a closed $1$-form on $F_b$. Due to the $S^1$ invariance of $u$ and the fact that $u_1=b_1$, this 1-form has to be of the type \[ r dy_1 + \ell(r,b), \] where $\ell(r,b)$ is the pull back to $F_b$ of a closed one form on $\bar{F}_b$. Denote by $\ell(r)$ the smooth one-parameter family of sections of $\mathfrak{L}^{\ast}$ such that $\ell(r)|_{\bar{F}_b} = \ell(r,b)$. The condition $u|_Z=\pi$ implies that $\ell (0,b)=0$. Furthermore, the $N$-th order Taylor series expansion of $\ell(r)$ in the parameter $r$ can be written as \begin{equation} \label{l:tay} \ell(r) =\sum_{k=1}^{N} \ell_k \, r^k + o(r^N), \end{equation} where the $\ell_k$'s are fibrewise closed sections of $\mathfrak{L}^{\ast}$. We can write \begin{equation}\label{eq:ell_k} \ell_k = \sum_{j=2}^{n} a_{j,k} \, dy_j. \end{equation} The following Lemma is rather technical but straightforward; thus its proof may be skipped on first reading. \begin{lem} \label{ls:form} Given $(V,u) \in \mathscr U_{\bar{Z}}$, let $\{ S_m\}_{m\in\numberset{N}}$ be the sequence (\ref{eq:S_m}) of sections of $\mathfrak{L}^{\ast}$ encoding the Taylor coefficients of $u$ and let $\{ \ell_m \}_{m \in \numberset{N}}$ be the sequence of fibrewise closed sections of $\mathfrak{L}^{\ast}$ constructed from $u$ as above. Then for every $m \in \numberset{N}$, there exist formulae \begin{equation} \label{ls:rec} a_{j,m} = - S_{j,m} + R_{j,m}, \end{equation} where $R_{j,m}$ is an explicit polynomial expression depending on the $S_{l,k}$'s and their derivatives in the $b_i$'s up to order $m-1$ and with $0 \leq k \leq m-1$. In particular $R_{j,1} = 0$. Thus the sequence $\{ \ell_m \}_{m \in \numberset{N}}$ uniquely determines the sequence $\{ S_m \}_{m \in \numberset{N}}$ recursively and vice versa. \end{lem} \begin{proof} First of all let us write \[ \ell(r) = r \, dy_1 + \sum_{j=2}^{n} a_j(r) dy_j. \] Then by definition \begin{equation} \label{agei:tay} a_j(r) = \sum_{k=1}^{N} a_{j,k} \, r^k + o(r^N). \end{equation} The $a_j$'s are functions of $(r, b, y)$, with $(b,y) \in \bar{Z}$, satisfying by construction \begin{equation} \label{sistema1} \begin{cases} u_1(r, b_2 + a_2, \ldots, b_n + a_n, y) = r, \\ u_j(r, b_2 + a_2, \ldots, b_n + a_n, y) = b_j \quad\text{for all} \ j=2, \ldots, n. 
\end{cases} \end{equation} When $W$ is sufficiently small and $(r,b) \in W$, this system can be solved using the implicit function theorem to determine the $a_j$'s uniquely. We will now use it to compute the $a_{j,m}$'s and determine the formulae (\ref{ls:rec}). Let $j=2, \ldots, n$. Then from the system and the conditions on $u$ we obtain \[ a_j|_{r=0} = 0 \] and \begin{equation} \label{a:1stder} \partial_{b_1}u_j + \sum_{k=2}^{n} \partial_{b_k} u_j \, \partial_r a_k = 0 . \end{equation} When evaluating at $r=0$, using $u|_{Z} = \pi$, we get \[ \partial_{b_1}u_j|_{r=0} + \partial_r a_j|_{r=0} = 0, \] i.e. that \begin{equation} \label{as:1} a_{j,1} = - S_{j,1}. \end{equation} Now we deal with the second order terms. Differentiating (\ref{a:1stder}) we obtain \[ \partial_{b_1}^2 u_j + \sum_{k=2}^{n} \partial_{b_1} \partial_{b_k} u_j \, \partial_r a_k + \sum_{k,l=2}^{n} \partial_{b_l} \partial_{b_k} u_j \, \partial_r a_l \, \partial_r a_k + \sum_{k=2}^{n} \partial_{b_k} u_j \, \partial_r^2 a_k = 0. \] Evaluating at $r=0$ we get \[ \partial_{b_1}^2 u_j|_{r=0} + \left( \sum_{k=2}^{n} \partial_{b_1} \partial_{b_k} u_j \, \partial_r a_k \right) |_{r=0} + \partial_r^2 a_j|_{r=0} = 0,\] i.e. we obtain \begin{equation} \label{as:2} a_{j,2} = - S_{j,2} + \sum_{k=2}^{n} \partial_{b_k} S_{j,1} \, S_{k,1}. \end{equation} So we have that (\ref{ls:rec}) holds for $m=2$, where \[ R_{j,2} = \sum_{k=2}^{n} \partial_{b_k} S_{j,1} \, S_{k,1}. \] For the terms of order greater than two we refer the reader to the Appendix in \S \ref{appendix}. \end{proof} \begin{rem} We point out that (\ref{as:1}) shows that the definition of $\ell_1$ given in this section coincides with the first order invariant defined in the previous section. \end{rem} One good reason to work with the sequence $\{ \ell_k \}_{k \in \numberset{N}}$ rather than with the sequence $\{ S_{m} \}_{m \in \numberset{N}}$ is that we can easily prove the following \begin{prop} \label{lk:exist} Given any sequence $\{ \ell_k \}_{k\in \numberset{N}}$ of fibrewise closed sections of $\mathfrak{L}^{\ast}$, there exists a smooth $1$-parameter family $\ell(r)$ of fibrewise closed sections of $\mathfrak{L}^{\ast}$ such that (\ref{l:tay}) holds for every $N \in \numberset{N}$. \end{prop} The proof of this is based on the following general lemma: \begin{lem}\label{lem:borel} For any sequence of $C^\infty$ functions $\{\alpha_k:\numberset{R}^p\rightarrow\numberset{R}\}$, there is a $C^\infty$ function $f:\numberset{R}\times\numberset{R}^p\rightarrow\numberset{R}$, such that $\alpha_k(x)=\partial_r^kf(r,x)|_{r=0}$, for all $k\in\numberset{N}$. \end{lem} A proof of this Lemma in the case when $\{\alpha_k\}$ is a sequence of real numbers is hinted at in \cite{Rudin}, Exercise 13, page 384. It is an exercise to show that the method proposed there can be adapted to the case when $\alpha_k$ depends smoothly on a parameter $x\in\numberset{R}^p$. \begin{proof}[Proof of Proposition \ref{lk:exist}] Let us first prove the statement assuming that all the $\ell_k$'s are fibrewise exact, i.e. there exists a sequence of functions $\{ f_k \}_{k \in \numberset{N}}$ on $\bar{Z}$ such that \[ \ell_k|_{\bar{F}_b} = d \, f_k|_{\bar{F}_b}. \] We have $\bar{Z} \cong \numberset{R}^{n-1} \times T^{n-1}$, where $T^{n-1}$ is the $(n-1)$-torus. Let $\{ U_{\alpha}, \phi_{\alpha} \}_{\alpha \in J}$ be a partition of unity on $T^{n-1}$. Define \[ f_{k,\alpha} = \sqrt{\phi_{\alpha}} f_{k}. 
\] We apply Lemma \ref{lem:borel}, for every $\alpha \in J$, to the sequence $\{ f_{k, \alpha} \}_{k \in \numberset{N}}$ lifted to the covering $\numberset{R}^{n-1}$ of $T^{n-1}$. So there exists a $C^\infty$ function $f_{\alpha}=f_{\alpha}(r)$ such that \[ f_{\alpha}(r) = \sum_{k=1}^{N} f_{k, \alpha} \, r^k + o(r^N), \] for every $N \in \numberset{N}$. Let \[ f(r) = \sum_{\alpha \in J} \sqrt{\phi_{\alpha}} \, f_{\alpha}(r).\] Then $f(r)$ descends to a smooth $1$-parameter family of functions on $\bar{Z}$. Moreover \begin{eqnarray*} f(r) & = & \sum_{k=1}^{N} \left( \sum_{\alpha \in J} \sqrt{\phi_{\alpha}} f_{k, \alpha} \right) \, r^k + o(r^N) \\ & = & \sum_{k=1}^{N} \left( \sum_{\alpha \in J} \phi_{\alpha} f_{k} \right) \, r^k + o(r^N) \\ & = & \sum_{k=1}^{N} f_{k} \, r^k + o(r^N). \\ \end{eqnarray*} If we let $\ell(r)$ be the $1$-parameter family of sections of $\mathfrak{L}^{\ast}$ such that \[ \ell(r)|_{\bar{F}_b} = d \, f(r)|_{\bar{F}_b}, \] then we clearly have \[ \ell(r) =\sum_{k=1}^{N} \ell_k \, r^k + o(r^N). \] We now deal with the general case. There certainly is a sequence $\{ l_k \}_{k \in \numberset{N} }$ of fibrewise constant sections of $\mathfrak{L}^{\ast}$ such that for every $k \in \numberset{N}$, $\ell_k - l_k$ is fibrewise exact. Since $l_k$ is fibrewise constant we can write \[ l_k = \sum_{j=2}^{n} q_{j,k} \, dy_j, \] where the $q_{j,k}$'s are fibrewise constant functions. Invoking Lemma \ref{lem:borel}, for every $j =2, \ldots, n$, there exists a family of fibrewise constant functions $q_j(r)$ such that \[ q_j(r) =\sum_{k=1}^{N} q_{j,k} \, r^k + o(r^N). \] Let \[ l(r) = \sum_{j=2}^{n} q_j(r) dy_j, \] and let $\tilde{\ell}(r)$ be the fibrewise exact family of forms such that \[ \tilde{\ell}(r) = \sum_{k=1}^{N} (\ell_k - l_k) \, r^k + o(r^N), \] which exists from the previous step. Define \[ \ell(r) = \tilde{\ell} (r) + l(r). \] One easily checks that (\ref{l:tay}) holds. \end{proof} The following is an existence result. \begin{thm} \label{l to s} Let $\{ \ell_m \}_{m \in \numberset{N}}$ be a sequence of fibrewise closed sections of $\mathfrak{L}^{\ast}$ and let $\{ S_m\}_{m \in \numberset{N}}$ be the sequence of sections of $\mathfrak{L}^{\ast}$ obtained recursively from $\{ \ell_m \}_{m \in \numberset{N}}$ using formulae (\ref{ls:rec}) in Lemma~\ref{ls:form}. Then there exists $(V,u) \in \mathscr U_{\bar{Z}}$ such that for every $N \in \numberset{N}$ \[ u_j = \sum_{k=0}^{N} S_{j,k} \, b_1^k + o(b_1^N). \] \end{thm} \begin{proof} Following Proposition~\ref{lk:exist}, given the sequence $\{ \ell_m \}_{m \in \numberset{N}}$, we can construct $\ell(r)$, a smooth $1$-parameter family of fibrewise closed sections of $\mathfrak{L}^{\ast}$ satisfying (\ref{l:tay}). We show that $\ell(r)$ can be used to construct the pair $(V,u)$. In fact the process is the inverse of the one which led us to the construction of a family $\ell(r)$ from a fibration $u$. The construction is identical to the one in the proof of Theorem~\ref{broken:constr}. Denote $\ell(r,b) = \ell(r)|_{\bar{F}_b}$ and write \[ \ell(r,b) = \sum_{j=2}^{n} a_j(r,b) dy_j, \] where the $a_j(r,b)$'s are functions depending on $y$ and they satisfy \begin{equation} \label{aj:tay} a_j(r,b) = \sum_{k=1}^{N} a_{j,k}(b) \, r^k + o(r^N). \end{equation} Let $L_{r,b}$ be the Lagrangian submanifold of $T^*U/ \Lambda $ which is the image of the closed one form $$r dy_1 + \ell(r,b)$$ under the symplectomorphism (\ref{fb:symp}). 
When $W \subseteq U$ is sufficiently small and $(r,b) \in W$, then the submanifolds $L_{r,b}$ are the fibres of a Lagrangian fibration $u: V \rightarrow \numberset{R}^n$. We describe $u$ explicitly and show that $(V,u) \in \mathscr U_{\bar{Z}}$ . Given $(b^{\prime}, y^{\prime}) \in V$, there exists a unique $(r,b) \in W$ such that $L_{r,b} \subset V$ and $(b^{\prime}, y^{\prime}) \in L_{r,b}$, in fact $(r,b)$ can be determined as functions of $(b^{\prime}, y^{\prime})$ by solving the non linear system \begin{equation} \label{sistema2} \begin{cases} r = b_1^{\prime} \\ b_j + a_j(r, b, y^{\prime}) = b_j^{\prime} \quad\text{when} \ j=2, \ldots, n \end{cases} \end{equation} using the implicit function theorem. We define \[ {u}_1(b^{\prime}, y^{\prime}) = b_1^{\prime} \] and, when $j=2, \ldots, n$ \begin{equation} \label{ugeii} u_j(b^{\prime}, y^{\prime}) = b_j(b^{\prime}, y^{\prime}). \end{equation} We claim that the coefficients of the Taylor series expansion of $u_j$ in $b_1^{\prime}$ are exactly the coefficients $S_{j,m}$ obtained from the sequence $\{ \ell_m \}_{m \in \numberset{N}}$ through formulae (\ref{ls:rec}). In fact notice that, by construction of $u_j$, the functions $a_j$ satisfy \[ u_j(r, b_2 + a_2(r, b, y^{\prime}), \ldots, b_n + a_n(r, b, y^{\prime})) = b_j, \] i.e. they are obtained from $u_j$ as the unique solution to system (\ref{sistema1}) and therefore the claim follows from the proof of Lemma~\ref{ls:form}. \end{proof} \begin{cor} \label{lclos:s} Let $\{ \ell_m \}_{m \in \numberset{N}}$ and $\{ S_m\}_{m \in \numberset{N}}$ be sequences of sections of $\mathfrak{L}^{\ast}$ with coefficients $a_{j,k}$ and $S_{j,k}$, respectively, related by formulae (\ref{ls:rec}). Then all the $\ell_m$'s are fibrewise closed if and only if the sequence $\{ S_m\}_{m \in \numberset{N}}$ satisfies equations (\ref{u:rec}). \end{cor} \begin{proof} If all the $\ell_m$'s are fibrewise closed, then Theorem~\ref{l to s} shows that there is a Lagrangian fibration $u: V \rightarrow \numberset{R}^n$ whose Taylor coefficients are given by the sequence $\{ S_m\}_{m \in \numberset{N}}$. Being $u$ Lagrangian, the claim follows from Proposition~\ref{u:tay eq}. Suppose now that $\{ S_m\}_{m \in \numberset{N}}$ satisfies equations (\ref{u:rec}). We prove the claim by induction. First of all notice that when $m=1$, $S_1 = - \ell_1$ and equation (\ref{u:rec}) implies that $\ell_1$ is fibrewise closed. Now suppose we have proved that $\ell_m$ is fibrewise closed for all $m \leq N$. Consider the sequence $\{ \tilde{\ell}_m \}_{m \in \numberset{N}}$, where $\tilde{\ell}_m = \ell_m$ when $m \leq N$ and $0$ otherwise. Using formulae (\ref{ls:rec}), we construct the associated sequence $\{ \tilde{S}_m\}_{m \in \numberset{N}}$. Since all the $\tilde{\ell}_m$'s are fibrewise closed, from the first part of this Corollary it follows that $\{ \tilde{S}_m\}_{m \in \numberset{N}}$ satisfies equations (\ref{u:rec}). Denote by $\tilde{P}_m$ the $2$-forms in (\ref{p:em}) constructed from $\{ \tilde{S}_m\}_{m \in \numberset{N}}$. Now notice that \[ \tilde{S}_m = S_m, \] when $m \leq N$ and \[ \tilde{P}_m = P_m \] when $m \leq N+1$. Moreover, if we denote by $\tilde{R}_{j,k}$ the expressions $R_{j,k}$ appearing in (\ref{ls:rec}) applied to $\{ \tilde{S}_m\}_{m \in \numberset{N}}$, then \[ \tilde{R}_{j,N+1} = R_{j,N+1}, \] where the right hand side denotes the same expression obtained using $\{ S_m\}_{m \in \numberset{N}}$. 
Therefore formula (\ref{ls:rec}) with $m = N+1$ and the fact that $\tilde{\ell}_{N+1} = 0$, implies \begin{equation} \label{sn1} \tilde{S}_{j,N+1} = \tilde{R}_{j,N+1} = R_{j,N+1}. \end{equation} Define the one form \[ R_{N+1} = \sum_{j=2}^{n} R_{j, N+1} dy_j. \] Clearly (\ref{sn1}) says that \[ \tilde{S}_{N+1} = R_{N+1} \] and that equation (\ref{u:rec}) for $\{ \tilde{S}_m\}_{m \in \numberset{N}}$ when $m = N+1$ becomes \begin{equation} \label{d:er} d R_{N+1}|_{\bar{F}_b} = \tilde{P}_{N+1}|_{\bar{F}_b} = P_{N+1}|_{\bar{F}_b}. \end{equation} Using (\ref{ls:rec}) for $\{ S_m\}_{m \in \numberset{N}}$ when $m= N+1$, we obtain \[ d \ell_{N+1}|_{\bar{F}_b} = - dS_{N+1}|_{\bar{F}_b} + d R_{N+1}|_{\bar{F}_b}. \] Now substituting (\ref{d:er}) and using the fact that (\ref{u:rec}) holds for $\{ S_m\}_{m \in \numberset{N}}$ when $m = N+1$ we obtain \[d \ell_{N+1}|_{\bar{F}_b}= - dS_{N+1}|_{\bar{F}_b} + P_{N+1}|_{\bar{F}_b} = 0, \] which completes the proof. \end{proof} Finally we have the most general existence result \begin{thm} \label{hoi:exist} Let $\{ S_m \}_{m \in \numberset{N}}$ be a sequence of sections of $\mathfrak{L}^{\ast}$, satisfying (\ref{u:rec}), then there exists $(V,u) \in \mathscr U_{\bar{Z}}$ such that for every $N \in \numberset{N}$ \[ u_j = \sum_{k=0}^{N} S_{j,k} \, b_1^k + o(b_1^N). \] \end{thm} \begin{proof} It is just a matter of applying the previous results. In fact given the sequence $\{ S_m \}_{m \in \numberset{N}}$ satisfying (\ref{u:rec}), using Proposition~\ref{ls:form} we construct the sequence $\{ \ell_m \}_{m \in \numberset{N}}$, whose terms are all fibrewise closed thanks to Corollary~\ref{lclos:s}. Finally we apply Theorem~\ref{l to s} to obtain $u$. \end{proof} Define the following sets: \[ \begin{array}{l} \mathscr{L}_{\bar{Z}} = \{ \{ \ell_m \}_{m \in \numberset{N} }\mid \ell_m \ \textrm{is a}\ C^\infty,\ \textrm{fibrewise closed section of} \ \mathfrak{L}^{\ast} \};\\ \\ \mathscr{S}_{\bar{Z}}=\{ \{ S_m \}_{m \in \numberset{N} }\mid S_m \ \text{is a}\ C^\infty\ \textrm{section of} \ \mathfrak{L}^{\ast}\ \textrm{satisfying (\ref{u:rec})}\}. \end{array} \] Clearly Proposition~\ref{u:tay eq} gives a map $T: \mathscr{U}_{\bar{Z}} \rightarrow \mathscr{S}_{\bar{Z}}$, assigning to $(V,u)$ the Taylor coefficients of $u$. We summarise the previous results in the following: \begin{thm}\label{thm:sequences} There is a one to one correspondence between the sets $\mathscr{L}_{\bar{Z}}$, $\mathscr{S}_{\bar{Z}}$ and $T( \mathscr{U}_{\bar{Z}})$. In particular from every sequence $\ell \in \mathscr L_{\bar{Z}}$ we can construct a unique element $S(\ell) \in \mathscr{S}_{\bar{Z}}$ and an element $(V,u) \in \mathscr{U}_{\bar{Z}}$ such that $T(V,u) = S(\ell)$. \end{thm} Given two elements $(V,u)$ and $(\tilde{V}, \tilde{u}) \in\mathscr U_{\bar{Z}}$, we can construct two stitched Lagrangian fibrations $(Y,f_u)$ and $(\tilde{Y}, f_{\tilde{u}})$ as in (\ref{u:st}). We recall that $f_u$ and $f_{\tilde{u}}$ are equivalent up to a change of coordinates on the base if $f_{\tilde{u}} = \phi \circ f_u$, where $\phi:W\rightarrow \phi(W)\subseteq\numberset{R}^n$ is an admissible change of coordinates on the base. If we write $\phi = (\phi_1, \ldots, \phi_n)$, then $\phi$ must satisfy $\phi_1 = b_1$ and $\phi|_{U^+} = \I$. Similarly, we say that two sequences $\ell, \tilde{\ell} \in \mathscr{L}_{\bar{Z}}$ are equivalent up to a change of coordinates in the base if they define fibrations $f_u$ and $f_{\tilde{u}}$ respectively which are equivalent up to a change of coordinates in the base. 
We now describe this equivalence relation in terms of a group action. Given a change of coordinate map $\phi$ on the base satisfying the above properties, we can consider its Taylor expansion in $b_1$ from the left, i.e. where the coefficients are given by left derivatives. For each component $\phi_j$, $j = 2, \ldots, n$, it can be written as \[ \phi_j(b_1, \ldots, b_n)|_{W \cap U^-} = b_j + \sum_{k=1}^{N} \Phi_{j,k}(b_2, \ldots, b_n) b_1^k + o(b_1^N). \] The left Taylor coefficients of $\phi$ thus define a sequence $\{ \Phi_m \}_{m \in \numberset{N}}$, where $\Phi_m: \Gamma \rightarrow \numberset{R}^{n-1}$ is a $C^\infty$ map whose components are $\Phi_m = (\Phi_{2,m}, \ldots, \Phi_{n,m})$. \begin{lem}\label{lem:germ-adm} Given any sequence $\{ \Phi_m \}_{m \in \numberset{N}}$ of smooth maps $\Phi_m: \Gamma \rightarrow \numberset{R}^{n-1}$ with components $\Phi_m = (\Phi_{2,m}, \ldots, \Phi_{n,m})$ there exists an admissible change of coordinate map $\phi=(\phi_1, \ldots, \phi_n) $ defined on some neighbourhood $W$ of $\Gamma$ such that $\phi_1 = b_1$, $\phi|_{U^+ \cap W} = \I$ and \[ \phi_j(b_1, \ldots, b_n)|_{U^- \cap W} = b_j + \sum_{k=1}^{N} \Phi_{j,k}(b_2, \ldots, b_n) b_1^k + o(b_1^N). \] for all $N \in \numberset{N}$. \end{lem} \begin{proof} It follows from Lemma \ref{lem:borel}. \end{proof} Define the following set \[ \mathscr{D}_{\Gamma} = \{ \{ \Phi_{m} \}_{m \in \numberset{N}} \, | \, \Phi_m \in C^{\infty}(\Gamma, \numberset{R}^{n-1}) \}. \] We say that two admissible change of coordinate maps $\phi$ and $\phi'$ are equivalent if their corresponding left Taylor coefficients define the same element in $\mathscr{D}_{\Gamma}$. We call $\mathscr{D}_{\Gamma}$ the set of \textbf{germs of admissible change of coordinates}. Given a germ $\Phi\in\mathscr{D}_{\Gamma}$ we say that an admissible change of coordinates $\phi$ is a representative of $\Phi$, if $\phi$ satisfies Lemma \ref{lem:germ-adm}. Composition of germs of admissible maps induces a group structure on $\mathscr{D}_{\Gamma}$, i.e. given $\Phi, \Phi^{\prime} \in \mathscr{D}_{\Gamma}$, we define $\Phi \cdot \Phi^{\prime}$ to be the germ of the map $\phi \circ \phi^{\prime}$, where $\phi$ and $\phi^{\prime}$ are representatives of $\Phi$ and $\Phi^{\prime}$ respectively. It is easy to see that this product on $\mathscr{D}_{\Gamma}$ does not depend on the choice of representatives. The group $\mathscr D_{\Gamma}$ acts on the set $\mathscr{L}_{\bar{Z}}$ as follows. Given $ \ell\in\mathscr{L}_{\bar{Z}}$ and $\Phi\in\mathscr D_{\Gamma}$, we define $\Phi \cdot \ell$ to be the sequence $\tilde{\ell}\in\mathscr{L}_{\bar{Z}}$ associated to the Lagrangian fibration $\tilde{u} = \phi \circ u$, where $\phi$ is a representative of $\Phi$ and $u$ is a Lagrangian fibration obtained from $\ell$ via Theorem~\ref{l to s}. \begin{lem} The above action is well-defined. \end{lem} \begin{proof} We need to show that the action does not depend on the choices made. The sequence $\tilde{\ell}\in\mathscr{L}_{\bar{Z}}$ determines a unique sequence $\{\tilde S_m\}_{m\in\numberset{N}}\in\mathscr S_{\bar Z}$, where each $\tilde S_m$ is defined in terms of the Taylor coefficients of $\tilde u=\phi\circ u$. These coefficients, in turn, are expressed in terms of the Taylor coefficients of $\phi$ and $u$. If we take a different representative $\phi'$ of $\Phi$, clearly, the Taylor coefficients of $\phi'\circ u$ and $\phi\circ u$ coincide. Now let $S\in\mathscr S_{\bar Z}$ be the sequence determined by $\ell$. 
If $u'\in\mathscr U_{\bar Z}$ is a different realisation of $\ell$ then, by construction, $u'$ defines a sequence $S'\in\mathscr S_{\bar Z}$ such that $S=S'$. Therefore the Taylor coefficients of $\phi\circ u$ and $\phi\circ u'$ coincide. \end{proof} \begin{prop} Let $f_u$ and $f_{\tilde{u}}$ be two stitched Lagrangian fibrations constructed as in (\ref{u:st}). If locally $f_{\tilde{u}}= \phi \circ f_u$ for some admissible change of coordinates, then the sequences $\ell, \tilde{\ell} \in \mathscr{L}_{\bar{Z}}$ associated to $f_u$ and $f_{\tilde{u}}$ are in the same orbit of $\mathscr D_{\Gamma}$. Moreover $f_u$ is equivalent to a smooth fibration up to a change of coordinates on the base if and only if $\ell$ is a sequence of fibrewise constant sections of $\mathfrak{L}^{\ast}$. \end{prop} \begin{proof} The first part of the statement is obvious. If $f_{\tilde{u}} = \phi \circ f_u$ is smooth, then $\tilde{\ell}$ is the zero sequence $0$. It is easy to verify that $\ell = \Phi^{-1} \cdot 0$ is a sequence of fibrewise constant sections of $\mathfrak{L}^{\ast}$. Suppose, viceversa, that $\ell$ is a sequence of fibrewise constant sections. Consider the associated sequence $S \in \mathscr S_{\bar Z}$. The coefficients $S_{j,m}$ of each element $S_m \in S$ can be regarded as functions on the base $\Gamma$, therefore $S$ also defines a sequence $\Phi \in \mathscr D_{\Gamma}$ by setting $\Phi_m = (S_{2,m}, \ldots, S_{n,m})$. Let $\phi$ be an admissible change of coordinates representing $\Phi$. It is clear that $\phi^{-1} \circ f_u$ is smooth and that $\Phi^{-1} \cdot \ell = 0$. \end{proof} In \S \ref{sec:normal} we will consider equivalences up to smooth fibre preserving symplectomorphism. \section{The semiglobal classification}\label{sec:normal} Let $(X,\omega )$ be a symplectic manifold and $f:X\rightarrow B$ be a stitched fibration as in Definition \ref{defi stitched}. Let $Z\subset X$ be the seam of $f$ and let $\Gamma:=f(Z)\subset B$. Since we are interested in a semiglobal classification, throughout this section we will consider stitched Lagrangian fibrations satisfying the following assumption \begin{ass} \label{st:ass} The stitched Lagrangian fibration $f:X \rightarrow B$ satisfies the following condition \begin{enumerate} \item the pair $(B, \Gamma)$ is diffeomorphic to the pair $(D^n, D^{n-1})$ where $D^n \subset \numberset{R}^n$ is a ball centred at the origin and $D^{n-1} \subset D^n$ is the intersection of $D^n$ with an $n-1$ dimensional subspace; \end{enumerate} Also, the following data is specified \begin{enumerate} \setcounter{enumi}{1} \item a basis $\gamma = (\gamma_1, \gamma_2, \ldots, \gamma_n)$ of $H_1(X, \numberset{Z})$ so that $\gamma_1$ is represented by the orbit of the $S^1$ action; \item a continuous section $\sigma$ of $f$ defined on a neighbourhood of $\Gamma$, such that $\sigma|_{f(X^+)}$ and $\sigma|_{f(X^-)}$ are restrictions of smooth maps on $B$ and the image of $\sigma$ is a smooth Lagrangian submanifold of $X$.\end{enumerate} We denote a stitched Lagrangian fibration together with this data by $(X, B, f, \gamma, \sigma)$. \end{ass} We will soon show that a section $\sigma$ as in $(3)$ always exists. 
\begin{defi}\label{def:symp-eq} We say that two stitched fibrations $(X, B, f, \gamma, \sigma)$ and $(X', B', f', \gamma', \sigma')$, with seams $Z$ and $Z'$ respectively are \textbf{fibrewise symplectically equivalent} (or just equivalent) if there are neighbourhoods $ W \subseteq B$ of $\Gamma :=f(Z)$ and $W'\subseteq B'$ of $\Gamma':= f'(Z')$ and a commutative diagram: \[ \begin{CD} (f')^{-1}(W') @> \Psi >> f^{-1}(W)\\ @Vf'VV @VVfV \\ W' @> \phi >> W \end{CD} \] where $\Psi$ is an $S^1$ equivariant $C^\infty$ symplectomorphism sending $Z'$ to $Z$ and $\phi$ is a $C^\infty$ diffeomorphism such that $\Psi\circ\sigma'=\sigma\circ\phi$ and $\Psi_* \gamma' = \gamma$. The set of equivalence classes under this relation will be denoted by $\mathscr F$ and elements therein will be called \textbf{germs of stitched fibrations}. \end{defi} Now we show that any stitched fibration, satisfying Assumption~\ref{st:ass}, is fibrewise symplectically equivalent to a stitched fibration of the type $(Y,f_u)$ studied in \S \ref{sec:ho-inv}. Before doing this we need some preliminary results. Recall we can write a stitched fibration as: \[ f=\begin{cases} f^+ \quad\text{on} \ X^+;\\ f^- \quad\text{on} \ X^-, \end{cases} \] where $f^\pm$ is the restriction of a $C^\infty$, $S^1$ invariant map to $X^\pm$ whose fibres are Lagrangian when restricted to $X^\pm$. As pointed out in \S \ref{sec:def-ex}, the fibres of such a map are not a priori required to be Lagrangian beyond $X^\pm$. Nevertheless we have the following: \begin{prop}\label{prop: ext} Let $(X,B,f)$ be a stitched fibration with seam $Z\subset X$, satisfying condition (1) of Assumption~\ref{st:ass}. Then there are neighbourhoods $V\subseteq X$ of $Z$ and $W\subseteq B$ of $\Gamma:= f(Z)$ and a $C^\infty$, proper Lagrangian fibration $\tilde f^+:V\rightarrow W$ such that $\tilde f^+|_{X^+\cap V}=f^+|_{X^+\cap V}$. The same is true for $f^-$. \end{prop} \begin{proof} Define $f_0:Z\rightarrow\Gamma$ to be $f_0=f|_{Z}$. Consider the reduced space $\bar Z=Z\slash S^1$ with its reduced symplectic form $\omega_{red}$. On $\numberset{R}\times S^1\times\bar Z$ we define the symplectic form: \[ \omega_{red}+ds\wedge dt \] where $(t,s)$ are coordinates on $\numberset{R}\times S^1$. From the coisotropic neighbourhood theorem (cf. \cite{Salamon} \S3.3) there exists a function $\epsilon:\Gamma\rightarrow\numberset{R}_{> 0}$, a neighbourhood $V\subset X$ of $Z$ and a $S^1$-equivariant symplectomorphism between $V$ and \begin{equation}\label{eq coiso} \{(t,s,p)\in\numberset{R}\times S^1\times\bar Z\mid -\epsilon(\bar f(p))<t<\epsilon(\bar f(p))\}. \end{equation} In particular, the projection onto $\numberset{R}$ corresponds to the moment map $\mu$ on $V$. Now, on the set in (\ref{eq coiso}), we can define an ``auxiliary" smooth Lagrangian fibration given by \[ \tilde\pi (t,s,p)=(t,f_0(s,p)). \] Fix a basis $\gamma$ of $H_1(V,\numberset{Z})\cong H_1(S^1\times\bar Z,\numberset{Z})$, satisfying condition $(2)$ of Assumption~\ref{st:ass} and a smooth Lagrangian section of $\tilde\pi$. The action-angle coordinates map $\Theta$ associated to $\tilde\pi$, with respect to $\gamma$ and $\sigma$, together with (\ref{eq coiso}), induces a $C^\infty$ symplectomorphism \begin{equation}\label{eq ident} \tilde V:=T^\ast U\slash\Lambda \cong V \end{equation} for some open neighbourhood $U$ of $0\in\numberset{R}^n$ with coordinates $(b_1,\ldots ,b_n)$, which are the action coordinates of $\tilde\pi$. 
The pull back to $\tilde{V}$ of the $S^1$ action on $V$ is given by translations along $db_1$ and the corresponding moment map is $b_1$. Pulling back $f|_{V}$ to $\tilde V$ via the latter identification we obtain a stitched fibration --with abuse of notation-- defined by: \begin{equation}\label{eq:stitched-prime} f = \begin{cases} u^+ \quad\text{on} \ \tilde V^+;\\ u^- \quad\text{on} \ \tilde V^-, \end{cases} \end{equation} where $u^\pm$ is the pull back of $f^\pm$. It follows that $u^+|_Z=u^-|_Z=\pi|_Z$. What we have gained so far is an identification which allows us to view $f|_{V}$ as a stitched fibration on the smooth symplectic manifold $\tilde V$, where global canonical coordinates exist. Now we can use the results of \S \ref{sec:ho-inv} to show that $u^+$ (equivalently, $u^-$) can be extended as required. This can be done as follows. Since $u^+$ is the restriction of a $C^\infty$ map to $\tilde V^+$, all the derivatives of its function components with respect to $b_1$ exist. Evaluating them at $b_1=0$ produces a unique sequence in $\mathscr S_{\bar Z}$, which in turn induces a unique sequence in $\mathscr L_{\bar Z}$ and a smooth Lagrangian fibration $(\tilde{V},w) \in \mathscr U_{\bar Z}$ (possibly after restricting to a smaller $\tilde{V}$) whose Taylor coefficients in $b_1$ evaluated at $b_1=0$ coincide with those of $u^+$ (cf. Theorem \ref{thm:sequences}). In particular, this allows us to define \begin{equation} \tilde u^+ = \begin{cases} u^+ \quad\text{on} \ \tilde V^+;\\ w \quad\text{on} \ \tilde V^-, \end{cases} \end{equation} obtaining an element $( \tilde{V}, \tilde u^+) \in\mathscr U_{\bar Z}$, where $\tilde u^+$ extends $u^+$. Observe that different choices of $w$ induce different smooth extensions of $u^+$; however, all such choices are obtained starting from the same sequence in $\mathscr S_{\bar Z}$ determined by the derivatives of $u^+$. Finally, pulling back $\tilde u^+$ to $V$ under the identification (\ref{eq ident}), and perhaps shrinking $V$, gives us the required $\tilde f^+$. One can use the same arguments to find a suitable smooth extension of $f^-$. \end{proof} \begin{cor} A section $\sigma$ of $(X,B,f)$ satisfying condition (3) of Assumption~\ref{st:ass} exists. \end{cor} \begin{proof} Perhaps after an admissible change of coordinates on the base, a smooth Lagrangian section of $\tilde{f}^+$ is also a section of $f$. \end{proof} Let $(b_1,\ldots, b_n)$ be coordinates on $U\subseteq\numberset{R}^n$ and let $\Lambda$ be the integral lattice inside $T^\ast U$ generated by $db_1,\ldots ,db_n$. Consider $T^\ast U\slash\Lambda$ with its standard symplectic structure and let $\pi: T^\ast U\slash\Lambda \rightarrow U$ be the standard projection. For convenience we slightly change the usual notation. Let $\Gamma_{\text{nor}} =\{ b_1=0\}\cap U$, $U^+:=\{b_1\geq 0\}\cap U$, $U^-:=\{b_1\leq 0\}\cap U$, $Z_{\text{nor}} := \pi^{-1}(\Gamma_{\text{nor}})$ and $\bar{Z}_{\text{nor}} := Z_{\text{nor}} / S^1$. We assume that $(U, \Gamma_{\text{nor}})$ is diffeomorphic to the pair $(D^n, D^{n-1})$. Given $(V, u) \in \mathscr U_{\bar{Z}_{\text{nor}}}$, we can construct a stitched Lagrangian fibration $(Y, f_u)$ as in (\ref{u:st}). The zero section of $\pi$ also defines a section of $f_u$, after perhaps an admissible change of coordinates on the base. We denote this section by $\sigma_0$. As a basis of $H_1(Y, \numberset{Z})$ we take the basis $(db_1, \ldots, db_n)$ of $\Lambda$. We denote it by $\gamma_0$. Then $(Y, f_u(Y), f_u, \sigma_0, \gamma_0)$ satisfies Assumption~\ref{st:ass}.
\begin{defi} Let $F := (X,B,f, \sigma, \gamma)$ be a stitched Lagrangian fibration with seam $Z$, satisfying Assumption~\ref{st:ass}. A stitched fibration $F_u := (Y, f_u(Y), f_u, \sigma_0, \gamma_0)$ of the type above is a \textbf{normal form} of $F$ if $F_u$ and $F$ define the same germ of a stitched fibration, according to Definition~\ref{def:symp-eq}. \end{defi} Observe that the above is a normalisation of a $T^n$-fibred neighbourhood of the seam of $F$. In this sense, $F_u$ is a \emph{semi-global} normal form. If $F_u = (Y, f_u(Y), f_u, \sigma_0, \gamma_0)$ is a normal form of $F := (X,B,f, \sigma, \gamma)$ and $Z$ is the seam of $F$, then $Z_{\text{nor}}$ of $F_u$ is nothing else but $Z$ expressed in action angle coordinates, and thus it is a normalisation of $Z$. Since $\sigma_0$ and $\gamma_0$ are chosen canonically, we will from now on omit to specify them and just denote the normal form by $F_u= (Y,f_u)$. \begin{prop}\label{prop:normalform} Every stitched Lagrangian fibration $(X,B,f)$ satisfying (1) of Assumption~\ref{st:ass} has a section $\sigma$ and a basis $\gamma$ as in (2) and (3) of Assumption~\ref{st:ass} such that $(X,B,f, \sigma, \gamma)$ has a normal form $(Y, f_u)$ . \end{prop} \begin{proof} From Proposition \ref{prop: ext} we can assume there exist open neighbourhoods $V\subset X$ of $Z$ and $W \subseteq B$ of $\Gamma$ and a proper smooth Lagrangian fibration $\tilde f^+ :V \rightarrow W$ extending $f^+$. Now, fixing a basis $\gamma$ of $H_1(V,\numberset{Z})$ as in (3) of Assumption~\ref{st:ass} and a smooth Lagrangian section $\sigma$ of $\tilde f^+$, we obtain a unique symplectomorphism \[ \Theta^+ :T^\ast U\slash\Lambda\rightarrow V \] given by the action-angle coordinates associated to $\tilde f^+$. Then by defining $u$ to be the pull back of $\tilde{f}^-$ under $\Theta^+$ one readily sees that $f$ transforms into a fibration of the type $(Y, f_u)$. \end{proof} \begin{defi}\label{def:seq-inv} Let $(X,B,f, \sigma, \gamma)$ be a stitched fibration with a normal form $(Y, f_u)$. Let $\ell\in\mathscr{L}_{\bar{Z}_{\text{nor}}}$ be the unique sequence determined by $u$. We denote $\inv (f_u):=(\bar{Z}_{\text{nor}},\ell)$ and we call it the \textbf{invariants} of $(Y, f_u)$. The invariants of $F=(X,B,f, \sigma, \gamma)$ are defined to be $\inv (F):=\inv(f_u)$. \end{defi} \begin{prop} Let $F = (X,B,f, \sigma, \gamma)$ and $F'= (X',B',f', \sigma', \gamma')$ be stitched fibrations with normal forms $(Y, f_u)$ and $(Y',f_{u'})$ defining invariants $\inv (F)$ and $\inv(F')$ respectively. If $F$ and $F'$ are fibrewise symplectically equivalent, then $\inv(F)=\inv (F')$. \end{prop} \begin{proof} Assume there is a commutative diagram as in Definition \ref{def:symp-eq}. To keep notation simple, let us assume $W=B$ and $W'=B'$. We have the diagram with commutative squares: \begin{equation}\label{eq:squares} \begin{CD} Y @<\Theta<< X @>\Psi>> X' @>\Theta'>>Y' \\ @Vf_uVV @VfVV @Vf'VV @VVf_{u'}V\\ U @<a<< B @>\phi>> B' @>a'>> U'. \end{CD} \end{equation} Let us concentrate on the outermost square of (\ref{eq:squares}) and define $\tilde\Psi=\Theta'\circ\Psi\circ\Theta^{-1}$ and $\tilde\phi=a'\circ\phi\circ a^{-1}$. We claim that: \begin{itemize} \item[\textit{(i)}] $\inv (f_{\tilde\phi\circ u})=\inv (f_u)$; and \item[\textit{(ii)}] $\inv (f_{u'\circ\tilde\Psi})=\inv (f_{u'})$. \end{itemize} Since $\tilde\phi\circ f_u=f_{u'}\circ\tilde\Psi$, \textit{(i)} and \textit{(ii)} would imply that $\inv(F)=\inv(F')$. 
It is clear that $\bar{Z}_{\text{nor}}$ and $\bar{Z}_{\text{nor}}'$ must coincide. Observe that $\tilde{\Psi}$, restricted to $Y^+$, is a symplectomorphism onto $(Y')^+$ which commutes with the projections $\pi$ and $\pi'$ on $T^*U$ and $T^*U'$ and sends the zero section to the zero section. Therefore we must have $\tilde{\Psi}|_{Y^+} = \tilde{\phi}^*$. To prove \textit{(i)} observe that ${\tilde{\phi}}^\ast|_{T^*U^+}$ must send the lattice $\Lambda'$ defining $Y'$ to the lattice $\Lambda$ defining $Y$. From this it follows that $\tilde\phi|_{U^+}$ is the identity map and the restriction of $\tilde\Psi$ to $Y^+$ is also the identity map. Then $\tilde\phi\circ u|_{V^+}=u|_{V^+}$. From this and the smoothness of $\tilde\phi\circ u$ it follows that the sequences in $\mathscr S_{\bar{Z}_{\text{nor}}}$ defined by $\tilde\phi\circ u$ and $u$ coincide. Hence $\inv(f_u)=\inv(\tilde\phi\circ f_u)$. Similarly, to prove \textit{(ii)}, observe that $u'\circ\tilde\Psi|_{V^+}=u'|_{V^+}$. Since $u'$ and $\tilde{\Psi}$ are smooth it follows that $\inv (f_{u'\circ\tilde\Psi})=\inv(f_{u'})$. \end{proof} \begin{cor} The definition of the invariants of $F = (X,B,f, \sigma, \gamma)$ is independent on the choice of normalisation. \end{cor} \begin{proof} Suppose we have two normalisations $(Y,f_u)$ and $(Y',f_{u'})$. Clearly $\bar{Z}_{\text{nor}}$ and $\bar{Z}_{\text{nor}}'$ must coincide. We can also assume, w.l.o.g. that $Y = Y'$. What may be different are the maps $u$ and $u'$ such that $f_u$ and $f_{u'}$ are two different normalisations of $f$ induced from different extensions $\tilde f^+$ of $f^+$. Consider the invariants $\inv(f_u)$ and $\inv(f_{u'})$, respectively. Since $f_u$ and $f_{u'}$ are symplectically equivalent via $\tilde\Psi=\Theta'\circ\Theta^{-1}$ and $\tilde\phi=a'\circ a^{-1}$, it follows that $\inv (f_u)=\inv(f_{u'})$. \end{proof} \begin{prop} \label{inv:sympl} Let $F = (X,B,f, \sigma, \gamma)$ and $F'= (X',B',f', \sigma', \gamma')$ be stitched Lagrangian fibrations satisfying Assumption~\ref{st:ass}. If $\inv (F)=\inv (F')$ then $F$ is fibrewise symplectically equivalent to $F'$ . \end{prop} \begin{proof} Let $(Y, f_u)$ and $(Y',f_{u'})$ be normal forms of $F$ and $F'$, respectively. We can assume, w.l.o.g., $Y = Y'$. Let $S_u$ and $S_{u'}$ be the series in $\mathscr S_{\bar{Z}_{\text{nor}}}$ defined by $u$ and $u'$ respectively. By assumption $S_u=S_{u'}$. This allows us to find Lagrangian fibrations $(\bar{V},\bar u), (\tilde{V},\tilde u),(\tilde V', \tilde u') \in\mathscr U_{\bar{Z}_{\text{nor}}}$ such that \[ \tilde u = \begin{cases} u\quad\textrm{on}\ \tilde V ^- \\ \bar u\quad\textrm{on}\ \tilde V^+ \end{cases} \ \textrm{and}\quad \tilde u' = \begin{cases} u'\quad\textrm{on}\ (\tilde V')^- \\ \bar u\quad\textrm{on}\ (\tilde V')^+ \end{cases} \] where $S_{\bar u}=S_{\tilde u}=S_{\tilde u'}$. Now there is a neighbourhood $W$ of $\Gamma_{\text{nor}}$ and smooth symplectomorphisms $\Theta :T^\ast W\slash\Lambda\rightarrow \tilde V $ and $\Theta':T^\ast W\slash\Lambda\rightarrow \tilde V'$ which are the action-angle coordinate map of the fibrations $\tilde u$ and $\tilde u'$, respectively. Defining $\Psi=\Theta'\circ\Theta^{-1}$, it is clear that $\Psi|_{\tilde V^+}$ is the identity. Furthermore, when restricted to $\tilde V^-$, $\Psi$ sends the fibres of $\tilde u|_{\tilde V^-}=u|_{\tilde V^-}$ to the fibres of $\tilde u'|_{(\tilde V')^-}=u'|_{(\tilde V')^-}$. Therefore $\Psi$ is fibre preserving with respect to $f_u$ and $f_{u'}$. 
It follows that $f$ and $f'$ are symplectically equivalent. \end{proof} We summarise the previous Propositions in the following: \begin{thm}\label{thm: grosso} Let $F = (X,B,f, \sigma, \gamma)$ and $F'= (X',B',f', \sigma', \gamma')$ be stitched Lagrangian fibrations satisfying Assumption~\ref{st:ass}, with invariants $\inv (F)$ and $\inv (F')$, respectively. Then $F$ and $F'$ define the same germ if and only if $\inv (F)=\inv (F')$. In other words, the set of germs of stitched fibrations $\mathscr F$ is classified by the pairs $(\bar{Z}_{\text{nor}}, \ell)$, where $\ell \in \mathscr L_{\bar{Z}_{\text{nor}}}$. \end{thm} The above provides a semi-global classification of stitched Lagrangian fibrations. In contrast to what happens for smooth Lagrangian submersions where no semi-global symplectic invariants exist, stitched fibrations in general do give rise to non trivial semi-global invariants. We can now also state a more precise version of Theorem~\ref{broken:constr}: \begin{thm} \label{broken:constr2} Let $(U, \Gamma)$ be a pair, where $U$ is an open neighbourhood of $0 \in \numberset{R}^n$ and $\Gamma = U \cap \{ b_1 = 0 \}$. Assume $(U, \Gamma)$ is diffeomorphic to the pair $(D^n, D^{n-1})$. Let $\bar{\Lambda} \subseteq T^{\ast} \Gamma$ be the lattice spanned by $\{ db_2, \ldots, db_n \}$, and let $\bar{Z} = T^{\ast} \Gamma / \bar{\Lambda}$, with projection $\bar{\pi}: \bar{Z} \rightarrow \Gamma$ and bundle $\mathfrak{L} = \ker \bar{\pi}_{\ast}$. Given integers $m_2, \ldots, m_n$ and a sequence $\ell = \{ \ell_k \}_{k \in \numberset{N}} \in \mathscr L_{\bar{Z}}$ such that \begin{equation} \label{int:cond2} \int_{db_j} \ell_1 = m_j \ \ \ \ \text{for all} \ \ j=2, \ldots, n, \end{equation} there exists a smooth symplectic manifold $(X, \omega)$ and a stitched Lagrangian fibration $f: X \rightarrow U$ satisfying the following properties: \begin{itemize} \item[\textit{(i)}] the coordinates $(b_1, \ldots, b_n)$ on $U$ are action coordinates of $f$ with $\mu = f^{\ast}b_1$ the moment map of the $S^1$ action; \item[\textit{(ii)}] the periods $\{ db_1, \ldots, db_n \}$, restricted to $U^{\pm}$ correspond to bases $\gamma^{\pm} = \{ \gamma_1, \gamma_2^{\pm}, \ldots, \gamma_n^{\pm} \}$ of $H_1(X, \numberset{Z})$ satisfying $(i)$ and $(ii)$ of Corollary~\ref{broken:per}; \item[\textit{(iii)}] there is a Lagrangian section $\sigma$ of $f$, such that $(\bar{Z}, \ell)$ are the invariants of $(X,f, U, \sigma, \gamma^+)$. \end{itemize} The fibration $(X,f, U)$ satisfying the above properties is unique up to fibre preserving symplectomorphism. \end{thm} \begin{proof} The construction of $(X, \omega)$ is like in the proof of Theorem~\ref{broken:constr}, i.e. \[ X = (X^+ \cup V^+) \cup_{\tilde{Q}} (X^- \cup V^-). \] But now the map $u$, used to construct $\tilde{Q}$, is chosen so that the fibration $f_u: X^+ \cup V^+ \rightarrow \numberset{R}^n$, defined by \[ f_u = \begin{cases} \pi^+ \quad\text{on} \ b_1 \geq 0 \\ u \quad\text{on} \ b_1 \leq 0. \end{cases} \] satisfies $\inv(f_u) = (\bar{Z}, \ell)$. Such a $u$ exists thanks to Theorem~\ref{thm:sequences}. The fibration $f$ is again defined by \[ f = \begin{cases} \pi^+ \quad\text{on} \ X^+ \\ \pi^- \quad\text{on} \ X^-. \end{cases} \] It is clear that by construction $(X,f,U)$ satisfies $(i)-(iii)$. Notice that $\tilde{Q}$ matches the zero section of $\pi^+$ to the zero section of $\pi^-$.
Therefore the section $\sigma$ is just given by the zero section of $\pi^+$ on $U^+$ and by the zero section of $\pi^-$ on $U^-$. It is clear from the results proved in this Section (in particular from the existence of a normal form) that any stitched Lagrangian fibration $(X,f,U)$ satisfying $(i)-(iii)$ can be constructed in this way. Uniqueness of $(X,f,U)$ is proved as follows. The only choice involved in the construction is the function $u$. Any other choice $u'$ must still satisfy $\inv(f_{u'})=(\bar{Z}, \ell)$. Denote by $X$ and $X'$ the manifolds obtained from choices $u$ and $u'$ respectively. Let $\Psi: X \rightarrow X'$ be the map defined to be the identity on $X^+$ and on $X^-$. One can see that $\Psi$ is well defined since the first order invariants of $f_{u}$ and $f_{u'}$ coincide. It is clearly a smooth symplectomorphism away from $Z$. We need to show that it is smooth on $Z$. To see this we can use an argument similar to the one used in Proposition~\ref{inv:sympl}. If we think of $\Psi$ in the coordinates on $X^+ \cup V^+$, $\Psi$ is a symplectomorphism sending the fibres of $f_u$ to the fibres of $f_{u'}$ and the zero section to the zero section. In a neighbourhood of $Z$ and in these coordinates, we can describe $\Psi$, as follows. Since $\inv (f_u) = \inv (f_{u'})$, we can replace $u$ and $u'$ with $\tilde{u}$ and $\tilde{u}'$ as in Proposition~\ref{inv:sympl}. Let $\Theta: T^*W/ \Lambda \rightarrow V^+$ and $\Theta': T^*W / \Lambda \rightarrow V^+$ be action angle coordinates of $\tilde{u}$ and $\tilde{u}'$ respectively, associated to the zero section and to the basis $\gamma^-$. Then, in these coordinates, $\Psi$ coincides with $\Theta' \circ \Theta^{-1}$. It is therefore smooth. \end{proof} \section{Stitched Lagrangian fibrations with monodromy}\label{sec. stitch w/mono} We now study stitched Lagrangian fibrations defined over a non simply connected open set $U$. In this case it may be that the fibration has non-trivial monodromy. When the fibration is smooth, this monodromy is usually detected by the behaviour of the periods of the fibration expressed in terms of smooth coordinates on the base. In the case of stitched Lagrangian fibrations there may not exist smooth coordinates on $U$, i.e. coordinates with respect to which the fibration is smooth. We will see how to detect monodromy from the behaviour of the first order invariant $\ell_1$. This will be done mainly through the discussion of examples. In Example \ref{broken focus focus}, the fibration is topologically isomorphic to a focus-focus fibration. The singular fibre is over $0 \in \numberset{R}^2$. Restricted to $X - f^{-1}(0)$, $f$ is a stitched Lagrangian fibration onto $U = \numberset{R}^2 - \{ 0 \}$. We know that the locally constant presheaf on $U$ given by \[ W \mapsto H_1(f^{-1}(W), \numberset{Z}) \] has monodromy around $0$, i.e. the monodromy map \[ \mathcal{M}_{b}: \pi_1(U) \rightarrow H_1(F_b, \numberset{Z}) \] at a fibre over $b \in U$ is non-trivial. In fact, if $e$ is a generator of $\pi_1(U)$, $\mathcal{M}_{b}(e)$ is conjugate to the matrix \[ \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right). \] We now look at a more general $2$-dimensional case. \begin{ex} \label{two:mon} Let $U \subset \numberset{R}^2$ be an open annulus in $\numberset{R}^2$ centred at the origin. As usual denote $U^+ = U \cap \{ b_1 \geq 0 \}$, $U^- = U \cap \{ b_1 \leq 0 \}$ and $\Gamma = U^+ \cap U^-$. This time $\Gamma$ is disconnected. 
We let $\Gamma_{u} = \Gamma \cap \{ b_2 \geq 0 \}$ and $\Gamma_{d} = \Gamma \cap \{ b_2 \leq 0 \}$ be the upper and lower parts of $\Gamma$ respectively. Now let $f: X \rightarrow \numberset{R}^2$ be a stitched Lagrangian fibration such that $f(X) = U$. Observe that the seam $Z$ has two connected components: $Z_u = f^{-1}(\Gamma_u)$ and $Z_d = f^{-1}(\Gamma_d)$. Denote by $\bar{Z}_u$ and $\bar{Z}_d$ the respective $S^1$ quotients, i.e. the connected components of $\bar{Z}$. Given $b \in \Gamma_u$ and choosing a curve going anticlockwise once around $0$ as generator $e \in \pi_1(U)$, suppose that with respect to a basis $\{ \gamma_1, \gamma_2 \}$ of $H_1(F_b, \numberset{Z})$ the monodromy is \begin{equation} \label{monodr} \mathcal{M}_{b}(e) = \left( \begin{array}{cc} 1 & -m \\ 0 & 1 \end{array} \right), \end{equation} for some integer $m \neq 0$. In this case we must have that $\gamma_1$ is represented by the orbits of the $S^1$ action. As usual let $X^{\pm} = f^{-1}(U^{\pm})$. Since $U - \Gamma_{d}$ is contractible we can think of $\{ \gamma_1, \gamma_2 \}$ as a basis of $H_1(f^{-1}(U-\Gamma_{d}), \numberset{Z})$. Consider the diagrams: \[ \xymatrix{ & H_1(X^{+},\numberset{Z}) \ar[dr] \\ H_1(f^{-1}(U-\Gamma_d),\numberset{Z}) \ar[ur] \ar[rr]^{j_+} & & H_1(f^{-1}(U-\Gamma_u),\numberset{Z}) } \] or \[ \xymatrix{ H_1(f^{-1}(U-\Gamma_d),\numberset{Z}) \ar[dr] \ar[rr]^{j_-} & & H_1(f^{-1}(U-\Gamma_u),\numberset{Z}) \\ & H_1(X^{-},\numberset{Z}) \ar[ur] } \] induced by inclusions and restrictions. The map $j_+$ identifies $\{ \gamma_1, \gamma_2 \}$ with a basis $\{ \gamma_1, \gamma_2^+ \}$ of $H_1(f^{-1}(U-\Gamma_{u}), \numberset{Z})$, whereas $j_-$ identifies it with a basis $\{ \gamma_1, \gamma_2^- \}$. Notice that monodromy is given by $j_+^{-1}\circ j_-$. Therefore we must have $\gamma_2^+ = m \gamma_1 + \gamma_2^-$. Hence $\{ \gamma_1, \gamma_2^+ \}$ and $\{ \gamma_1, \gamma_2^- \}$ satisfy conditions $(i)$ and $(ii)$ of Corollary~\ref{broken:per}. Applying Lemma~\ref{broken:action} to $f$ restricted to $f^{-1}(U-\Gamma_u)$, we can consider the action coordinates map $\alpha$ constructed by taking action coordinates with respect to $\{ \gamma_1, \gamma_2^+ \}$ on $U^+$ and with respect to $\{ \gamma_1, \gamma_2^- \}$ on $U^-$. Denote by $(b_1^d, b_2^d)$ such coordinates. Similarly on $U-\Gamma_{d}$ we can consider action angle coordinates with respect to the basis $\{ \gamma_1, \gamma_2 \}$. Denote by $(b_1^u, b_2^u)$ these coordinates. In particular we can identify \[ \bar{Z}_d = T^{\ast} \Gamma_d \, / \, \langle db_2^d \rangle_{\numberset{Z}} \] and \[ \bar{Z}_u= T^{\ast} \Gamma_u \, / \, \langle db_2^u \rangle_{\numberset{Z}}. \] With respect to this choice of coordinates we can construct the first order invariants $\ell_1^u$ and $\ell_1^d$ of $f$ on $\bar{Z}_u$ and $\bar{Z}_d$ respectively. Then, by applying Remark~\ref{rem:action}, we obtain \[ \int_{db_2^u} \ell_1^u = 0 \ \ \text{and} \ \ \int_{db_2^d} \ell_1^d = m. \] This tells us that monodromy can be read from a jump in the cohomology class of the first order invariant associated to action coordinates. \end{ex} Using the methods of Theorem~\ref{broken:constr} we can also construct stitched Lagrangian fibrations with prescribed monodromy and invariants. In fact we have \begin{thm} Let $U \subset \numberset{R}^2$ be an annulus as above with coordinates $(b_1, b_2)$.
Let $\bar{Z}_d = T^{\ast} \Gamma_d \, / \, \langle db_2 \rangle_{\numberset{Z}}$ and $\bar{Z}_u= T^{\ast} \Gamma_u \, / \, \langle db_2 \rangle_{\numberset{Z}}$ with projections $\bar{\pi}^d$ and $\bar{\pi}^u$ and bundles $\mathfrak{L}_d = \ker\bar{\pi}^d_{\ast}$ and $\mathfrak{L}_u = \ker\bar{\pi}^u_{\ast}$ respectively. Given an integer $m$ and sequences $\ell^d = \{ \ell_k^d \}_{k \in \numberset{N}} \in \mathscr L_{\bar{Z}_d}$ and $\ell^u = \{ \ell_k^u \}_{k \in \numberset{N}} \in \mathscr L_{\bar{Z}_u}$ such that \[ \int_{db_2} \ell_1^u = 0 \ \ \text{and} \ \ \int_{db_2} \ell_1^d = m, \] there exists a smooth symplectic manifold $(X, \omega)$ and a stitched Lagrangian fibration $f: X \rightarrow U$ having monodromy (\ref{monodr}) with respect to some basis $\gamma = \{ \gamma_1, \gamma_2 \}$ of $H_{1}(f^{-1}(U- \Gamma_{d}), \numberset{Z})$ and satisfying the following properties: \begin{itemize} \item[\textit{(i)}] the coordinates $(b_1, b_2)$ are action coordinates of $f$ with moment map $f^{\ast}b_1$; \item[\textit{(ii)}] the periods $\{ db_1, db_2 \}$, restricted to $U^{\pm}$ correspond to the basis $\{ \gamma_1, \gamma_2 \}$; \item[\textit{(iii)}] there is a Lagrangian section $\sigma$ of $f$, such that $(\bar{Z}_u, \ell^u)$ and $(\bar{Z}_d, \, \ell^d)$ are the invariants of $(f^{-1}(U-\Gamma_d),\, f, \, U-\Gamma_d, \, \sigma, \, \gamma)$ and $(f^{-1}(U- \Gamma_u),\, f, \, U-\Gamma_u, \, \sigma, \, j_+(\gamma))$ respectively. \end{itemize} The fibration $(X,f, U)$ satisfying the above properties is unique up to fibre preserving symplectomorphism. \end{thm} \begin{proof} We let $\Lambda_+$ and $\Lambda_-$ be the lattices generated by $db_1$ and $db_2$ in $T^{\ast}U^+$ and $T^{\ast}U^-$ respectively. Define $X^{\pm} = T^{\ast}U^{\pm} / \Lambda_{\pm}$, $Z^{\pm}_{u} = (\pi^{\pm})^{-1}(\Gamma_u)$ and $Z^{\pm}_{d} = (\pi^{\pm})^{-1}(\Gamma_d)$. Then, using $\ell_1^u$ and $\ell_1^d$, we construct maps \[ Q_u: Z^{-}_{u} \rightarrow Z^+_{u} \] and \[ Q_d: Z^{-}_{d} \rightarrow Z^+_{d} \] like in Theorem \ref{broken:constr}. We use these maps to glue $X^+$ and $X^-$ topologically along their boundary and thus form $X$. For the smooth and symplectic gluing we follow the same method as in Theorem \ref{broken:constr2}, where higher order invariants are used. From the discussion of Example~\ref{two:mon} it follows that the fibration has the prescribed monodromy. Uniqueness is proved like in Theorem \ref{broken:constr2}. \end{proof} We now discuss a three dimensional example. \begin{ex} \label{amoeb:mon} In $\numberset{R}^3$ consider the three-valent graph \[ \Delta = \{(0,0,- t), \ t \geq 0 \} \cup \{ (0,- t,0), \ t \geq 0 \} \cup \{ (0,t,t), \ t\geq 0 \} \] and let $D$ be a tubular neighbourhood of $\Delta$. Take $U = \numberset{R}^3 - D$ and assume we have a stitched Lagrangian fibration $f: X \rightarrow \numberset{R}^3$ such that $U=f(X)$. The seam is $Z= f^{-1}( \{ b_1 = 0 \} \cap U)$. Again we let $U^+ = U \cap \{ b_1 \geq 0 \}$, $U^- = U \cap \{ b_1 \leq 0 \}$ and $\Gamma = U^+ \cap U^-$. Also let $X^{\pm} = f^{-1}(U^{\pm})$. This time $\Gamma$ (and thus $Z$) has three connected components \begin{eqnarray*} \Gamma_c & = & \{ ( 0, t,s), \ t,s < 0 \} \cap U, \\ \Gamma_d & = & \{ (0, t,s), \ t> 0, s < t \} \cap U, \\ \Gamma_e & = & \{ (0, t,s), \ s> 0, t < s \} \cap U. \end{eqnarray*} Also denote by $Z_c$, $Z_d$ and $Z_e$ the corresponding connected components of $Z$ and by $\bar{Z}_c$, $\bar{Z}_d$ and $\bar{Z}_e$ their $S^1$ quotients. 
Fix $b \in \Gamma_c$ and suppose that there is a basis $\{ \gamma_1, \gamma_2, \gamma_3 \}$ of $H_1( F_b, \numberset{Z})$ and generators $e_0, e_1, e_2$ of $\pi_1(U)$, satisfying $e_0 e_1 e_2 = 1$, with respect to which the monodromy transformations are \begin{equation} \mathcal{M}_b(e_1) = T_1 = \left( \begin{array}{ccc} 1 & 0 & -m_1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \ \ \ \mathcal{M}_b(e_2) = T_2 = \left( \begin{array}{ccc} 1 & -m_2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right). \label{t12} \end{equation} and ${M}_b(e_0) = T_0 = T_1^{-1} T_2^{-1}$, for non zero integers $m_1$ and $m_2$. We have that $\gamma_1$ is represented by the orbits of the $S^1$ action, since it is the only monodromy invariant cycle. Now, since $U - ( \Gamma_d \cup \Gamma_e)$ is contractible, $\{ \gamma_1, \gamma_2, \gamma_3 \}$ is a basis of $H_1( f^{-1}(U - ( \Gamma_d \cup \Gamma_e)), \numberset{Z})$. Consider the diagrams: \[ \xymatrix{ & H_1(X^{+},\numberset{Z}) \ar[dr] \\ H_1(f^{-1}(U-(\Gamma_d\cup\Gamma_e)),\numberset{Z}) \ar[ur] \ar[rr]^{j_+} & & H_1(f^{-1}(U-(\Gamma_c\cup\Gamma_d)),\numberset{Z}) } \] or \[ \xymatrix{ H_1(f^{-1}(U-(\Gamma_d\cup\Gamma_e)),\numberset{Z}) \ar[dr] \ar[rr]^{j_-} & & H_1(f^{-1}(U-(\Gamma_c\cup\Gamma_d)),\numberset{Z}) \\ & H_1(X^{-},\numberset{Z}) \ar[ur] } \] induced by inclusions and restrictions. The map $j_+$ identifies $\{ \gamma_1, \gamma_2, \gamma_3 \}$ with a basis of $H_1(f^{-1}(U- (\Gamma_c \cup \Gamma_d)), \numberset{Z}) $, which we call $\{ \gamma_1, \gamma_2^+, \gamma_3^+ \}$, while $j_-$ identifies it with another basis, which we call $\{ \gamma_1, \gamma_2^-, \gamma_3^- \}$. Notice that the monodromy map $\mathcal{M}_b(e_1) = j_+^{-1}\circ j_-$. We must have \begin{equation} \label{tre:mon} \begin{cases} \gamma_2^+ = \gamma_2^-, \\ \gamma_3^+ = m_1 \gamma_1 + \gamma_3^- . \end{cases} \end{equation} Therefore $\{ \gamma_1, \gamma_2^+, \gamma_3^+ \}$ and $\{ \gamma_1, \gamma_2^-, \gamma_3^- \}$ satisfy conditions $(i)$ and $(ii)$ of Corollary~\ref{broken:per}. Applying Lemma~\ref{broken:action} to $f$ restricted to $f^{-1}(U- (\Gamma_c \cup \Gamma_d))$, we can consider the action coordinates map $\alpha$ on $U- (\Gamma_c \cup \Gamma_d) $ constructed by taking action coordinates with respect to $\{ \gamma_1, \gamma_2^+ ,\gamma_3^+ \}$ on $U^+$ and with respect to $\{ \gamma_1, \gamma_2^-, \gamma_3^- \} \}$ on $U^-$. Let us denote these coordinates by $(b_1^e, b_2^e, b_3^e)$. Similarly we can consider action coordinates on $U - (\Gamma_d \cup \Gamma_e)$ with respect to the basis $\{ \gamma_1, \gamma_2, \gamma_3 \}$ of $H_1(f^{-1}(U- (\Gamma_d \cup \Gamma_e)), \numberset{Z}) $. We denote them by $(b_1^c, b_2^c, b_3^c)$. We have the identifications \[ \bar{Z}_e = T^{\ast} \Gamma_e \, / \, \langle db_2^e, db_3^e \rangle_{\numberset{Z}} \] and \[ \bar{Z}_c= T^{\ast} \Gamma_c \, / \, \langle db_2^c, db_3^c \rangle_{\numberset{Z}}. \] With respect to these coordinates we can compute the first order invariants $\ell_1^e$ and $\ell_1^c$ on $\bar{Z}_e$ and $\bar{Z}_c$ respectively. From Remark~\ref{rem:action} and identities (\ref{tre:mon}) applied to $\ell_1^c$ and $\ell_1^e$ we obtain \begin{equation*} \int_{db_2^c} \ell_1^c = \int_{db_3^c} \ell_1^c = 0 \end{equation*} and \begin{equation*} \int_{db_2^e} \ell_1^e = 0 \ \ \text{and} \ \ \int_{db_3^e} \ell_1^e = m_1. \end{equation*} Similarly we construct the first order invariant $\ell_1^d$ on $\bar{Z}_d$. 
It will satisfy \begin{equation*} \int_{db_2^d} \ell_1^d = m_2 \ \ \text{and} \ \ \int_{db_3^d} \ell_1^d = 0. \end{equation*} Again, monodromy is understood in terms of the difference in the cohomology class of the first order invariant. Example \ref{ex amoebous fibr} is a special case of this situation, where $m_1 = m_2 = 1$. \end{ex} Again, one can produce stitched Lagrangian fibrations of the type described in this example with the gluing method Theorem~\ref{broken:constr}. In fact we can prove \begin{thm} Let $U \subset \numberset{R}^3$, $\Gamma_c$, $\Gamma_d$ and $\Gamma_e$ be as in Example~\ref{amoeb:mon} and let $(b_1, b_2, b_3)$ be coordinates on $U$. Define $\bar{Z}_c = T^{\ast} \Gamma_c \, / \, \langle db_2, db_3 \rangle_{\numberset{Z}}$, $\bar{Z}_d= T^{\ast} \Gamma_d \, / \, \langle db_2, db_3 \rangle_{\numberset{Z}}$ and $\bar{Z}_e= T^{\ast} \Gamma_e \, / \, \langle db_2, db_3 \rangle_{\numberset{Z}}$ with projections $\bar{\pi}^c$, $\bar{\pi}^d$, $\bar{\pi}^e$ and bundles $\mathfrak{L}_c = \ker\bar{\pi}^c_{\ast}$, $\mathfrak{L}_d = \ker\bar{\pi}^d_{\ast}$, $\mathfrak{L}_e = \ker\bar{\pi}^e_{\ast}$. Suppose we are given integers $m_1$, $m_2$ and sequences $\ell^c = \{ \ell_k^c \}_{k \in \numberset{N}} \in \mathscr L_{\bar{Z}_c}$, $\ell^d = \{ \ell_k^d \}_{k \in \numberset{N}} \in \mathscr L_{\bar{Z}_d}$ and $\ell^e = \{ \ell_k^e \}_{k \in \numberset{N}} \in \mathscr L_{\bar{Z}_e}$ satisfying \begin{eqnarray*} \int_{db_2} \ell_1^c & = & \int_{db_3} \ell_1^c = 0, \\ \int_{db_2} \ell_1^e & = & 0 \ \ \text{and} \ \ \int_{db_3} \ell_1^e = m_1, \\ \int_{db_2} \ell_1^d & = & m_2 \ \ \text{and} \ \ \int_{db_3} \ell_1^d = 0. \end{eqnarray*} Then there exists a smooth symplectic manifold $(X, \omega)$ and a stitched Lagrangian fibration $f: X \rightarrow U$ having the same monodromy of Example~\ref{amoeb:mon} with respect to some basis $\gamma = \{ \gamma_1, \gamma_2, \gamma_3 \}$ of $H_1( f^{-1}(U - ( \Gamma_d \cup \Gamma_e)), \numberset{Z})$ and satisfying the following properties: \newcounter{mon3} \begin{list}{(\roman{mon3})}{\usecounter{mon3} \setlength{\parsep}{0cm} \setlength{\topsep}{\itemsep} \setlength{\leftmargin}{.5cm}} \item[\textit{(i)}] the coordinates $(b_1, b_2, b_3)$ are action coordinates of $f$ with moment map $f^{\ast}b_1$; \item[\textit{(ii)}] the periods $\{ db_1, db_2, db_3 \}$, restricted to $U^{\pm}$ correspond to the basis $\gamma$; \item[\textit{(iii)}] there is a Lagrangian section $\sigma$ of $f$, such that $(\bar{Z}_c, \ell^c)$, $(\bar{Z}_d, \, \ell^d)$ and $(\bar{Z}_e, \, \ell^e)$ are the invariants of $(f^{-1}(U-( \Gamma_d \cup \Gamma_e)),\, f, \, U-( \Gamma_d \cup \Gamma_e), \, \sigma, \, \gamma)$, $(f^{-1}(U-( \Gamma_c \cup \Gamma_e)),\, f, \, U-( \Gamma_c \cup \Gamma_e), \, \sigma, \, j_+(\gamma))$ and $(f^{-1}(U-( \Gamma_c \cup \Gamma_d)),\, f, \, U-( \Gamma_c \cup \Gamma_d), \, \sigma, \, j_+(\gamma))$ respectively. \end{list} The fibration $(X,f, U)$ satisfying the above properties is unique up to fibre preserving symplectomorphism. \end{thm} We omit the proof which is simply a repetition of the usual gluing method from Theorems~\ref{broken:constr} and \ref{broken:constr2}. \section{More examples?} In this section we would like to propose a conjectural construction generalising the one, described in \cite{CB-M-torino}, which led us to Example~\ref{ex amoebous fibr}. In \cite{Gui-Stern-bi}, Guillemin and Sternberg make the following observation. Let $N=n+m$, with $n,m$ positive integers. 
Consider $\numberset{C}^{N+1}$ with its standard symplectic structure, then $S^1$ acts on it, in a Hamiltonian way, via the action given by, \begin{equation} \label{simple:act} \theta:(z_1, \ldots, z_{N+1} ) \mapsto (e^{i\theta} z_1, e^{-i\theta} z_2, \ldots, e^{-i\theta} z_{n+1}, z_{n+2}, \ldots, z_{N+1}) \end{equation} with moment map $$\mu = \frac{|z_1|^2 - |z_2|^2 - \ldots - |z_{n+1}|^2}{2}.$$ The action is singular along $\Sigma = \{ z_1= \ldots =z_{n+1}=0 \}$, which can be identified with $\numberset{C}^m$. The observation is that for any $\epsilon \in \numberset{R}_{\geq 0}$ the reduced spaces $(M_{\epsilon}, \omega_{r}(\epsilon))$ can be identified with $(\numberset{C}^N, \omega_{\numberset{C}^N})$ with standard symplectic form, (this includes the case of the critical value $\epsilon =0$). While when $\epsilon \in \numberset{R}_{<0}$, $(M_{\epsilon}, \omega_r(\epsilon))$ can be identified with the $\epsilon$-blow up of $(\numberset{C}^N, \omega_{\numberset{C}^N})$ along the symplectic submanifold $\Sigma$. The $\epsilon$-blow up can be described as follows. Let $L$ be the total space of the tautological line bundle on $\numberset{P}^{n-1}$. The incidence relation gives $L$ as \[ L = \{ (v, l ) \in \numberset{C}^{n} \times \numberset{P}^{n-1} \, | \, v \in l \}. \] There are two natural projections: $\pi: L \rightarrow \numberset{P}^{n-1}$, which is the bundle projection, and $\beta: L \rightarrow \numberset{C}^{n}$ which is the blow-up map. The latter is a biholomorphism onto $\numberset{C}^{n} - \{0\}$ once the zero section is removed from $L$. Let $\omega_{FS}$ be the standard Fubini-Study symplectic form on $\numberset{P}^{n-1}$. The $\epsilon$-blow up of $\numberset{C}^n$ at $0$ is $L$ together with the symplectic form given by \[ \omega_{\epsilon} = \beta^{\ast} \omega_{\numberset{C}^n} + \epsilon \, \pi^{\ast} \omega_{FS}. \] The $\epsilon$-blow up of $\numberset{C}^N$ along $\Sigma = \numberset{C}^m$ can be identified with $L \times \numberset{C}^m$ with symplectic form $\omega_{\epsilon} + \omega_{\numberset{C}^m}$. In the case $n=1$ the blow-up is topologically (and holomorphically) trivial, i.e. blowing up does not do anything. In fact one can also show, by following Guillemin and Sternberg's argument, that the reduced spaces can all be identified with $(\numberset{C}^{m+1}, \omega_{\numberset{C}^{m+1}})$ for all values of $\epsilon$. This identification can also be explained as follows. Consider the map $\gamma$ given in (\ref{eq. g}) and define the map $p: \numberset{C}^{m+2} \rightarrow \numberset{C}^{m+1}$ given by \begin{equation} \label{pws:p} p: (z_1, z_2, z_3, \ldots, z_{m+2}) \mapsto (\gamma(z_1, z_2), z_3, \ldots, z_{m+2}). \end{equation} Restricted to $\mu^{-1}(\epsilon)$, this map can be regarded as the quotient map $\mu^{-1}(\epsilon) \rightarrow M_\epsilon$. It can be shown that the reduced symplectic form with respect to this map is precisely $\omega_{\numberset{C}^{m+1}}$. Example~\ref{ex amoebous fibr} comes from this construction in the case $m=n=1$. In fact the fibration $f$ is of the type $\Log \circ \Phi \circ p$, where $\Phi$ is a symplectomorphism of $\numberset{C}^2$ and $\Log:(\numberset{C}^{\ast})^2 \rightarrow \numberset{R}^2$ is the map $(v_1, v_2) \mapsto (\log|v_1|, \log |v_2|)$. The fact that $f$ is not smooth is due to the non-smoothness of $p$, i.e. the reduced spaces are not identified with $\numberset{C}^2$ in a smooth way. We think that it may be possible to generalize this construction. 
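As a side remark, not needed for the discussion below and included only as a reminder (the coordinates $x_k, y_k, \rho_k, \theta_k$ and the constants $c_1, c_2$ are ad hoc notation for this remark), one can recall why Lagrangian tori appear here: writing $v_k = x_k + i y_k = \rho_k e^{i \theta_k}$, the standard symplectic form of $\numberset{C}^2$ restricted to $(\numberset{C}^{\ast})^2$ reads \[ \omega_{\numberset{C}^2} = \sum_{k=1}^{2} dx_k \wedge dy_k = \sum_{k=1}^{2} \rho_k \, d\rho_k \wedge d\theta_k, \] which vanishes on the tori $\{ \rho_1 = c_1, \, \rho_2 = c_2 \}$, i.e. on the fibres of $\Log$. Hence the fibres of $\Log \circ \Phi$ are Lagrangian tori in the reduced spaces, and the preimage of such a torus in a level set $\mu^{-1}(\epsilon)$ under the quotient map $\mu^{-1}(\epsilon) \rightarrow M_{\epsilon}$ is a Lagrangian submanifold of $\numberset{C}^{3}$, by the standard fact that preimages of Lagrangian submanifolds of a reduced space under the reduction map are Lagrangian.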
The idea is to use another result of Guillemin and Sternberg proved in the same paper. The result is as follows. Let $\bar{X}$ be a compact $2N$-dimensional symplectic manifold with symplectic form $\omega$ and a $2m$-dimensional symplectic submanifold $Y$. Consider now a principal $S^1$ bundle $p_0: P \rightarrow \bar{X}$ with a connection one-form $\alpha$. Given an interval $I=(-\epsilon, \epsilon)$, Guillemin and Sternberg (\cite{Gui-Stern-bi}, \S 12) construct a $2(N+1)$-dimensional symplectic manifold $X$ with the following properties. \begin{enumerate} \item There exists a Hamiltonian $S^1$ action on $X$ with proper, surjective moment map $\mu: X \rightarrow I$. \item For positive $t \in I$, $\mu^{-1}(t)$ is equivalent, as an $S^1$ bundle, to $P$ and the reduced symplectic space $(X_t, \omega_r(t))$ is symplectomorphic to $(\bar{X}, \omega)$. \item The only critical value of $\mu$ is $t=0$. If $\Sigma := \mathrm{Crit}(\mu) \subset \mu^{-1}(0)$ denotes the set of critical points of $\mu$, then $\Sigma$ is a smooth symplectic, $2m$-dimensional submanifold of $X$ and the $S^1$ action is locally modelled on (\ref{simple:act}) (in this case $0$ is also called a simple critical value). If $X_0$ denotes the reduced symplectic space at $0$, with reduced symplectic form $\omega_{r}(0)$ and quotient map $\pi_0: \mu^{-1}(0) \rightarrow X_0$, then the triple $(X_0, \pi_0(\Sigma), \omega_r(0))$ can be identified with $(\bar{X}, Y, \omega)$. \item When $t \in I$ is negative, the reduced space $(X_t, \omega_r(t))$ can be identified with the blow-up $\tilde{X}$ of $\bar{X}$ along $Y$ with symplectic form $\omega_{Y,t} + \beta^* t d\alpha$, where $\omega_{Y,t}$ is the $t$-blow-up form along $Y$ on $\tilde{X}$ and $\beta: \tilde{X} \rightarrow \bar{X}$ is the blow-down map. \end{enumerate} We are interested in Guillemin-Sternberg's construction in the case $N = m+1$, i.e. in the case where $Y$ is a codimension $2$ symplectic submanifold. For simplicity we also assume that $P= \bar{X} \times S^1$ and $\alpha=0$. We can make the following observations. \begin{itemize} \item[(a)] Topologically $\tilde{X}$ is equivalent to $\bar{X}$, but symplectically $(\tilde{X}, \omega_{Y,t})$ and $(\bar{X}, \omega)$ differ, since the former has less area (blowing up removes the area of a small tubular neighbourhood of $Y$). \item[(b)] Consider the quotient $p: X \rightarrow X / S^1$; then $X / S^1$ can be identified with $\bar{X} \times I$. If we restrict $p$ to $X - \Sigma$ then it becomes an $S^1$ bundle onto $(\bar{X} \times I) - (Y \times \{0 \})$. Let $c_1$ be the first Chern class of this bundle. If $S$ is a small $2$-sphere centred at the origin in a fibre of the normal bundle of $Y \times \{ 0 \}$ inside $(\bar{X} \times I)$, then $c_1(S) = 1$. \end{itemize} As we saw at the beginning of this section, in the non-compact case $(\bar{X}, \omega) = (\numberset{C}^{m+1}, \omega_{\numberset{C}^{m+1}})$ and $Y = \numberset{C}^m$, the observation in $(a)$ was not true, in the sense that the identification could also be made symplectically. This is because, although blowing up locally reduces area, in this non-compact case the area is infinite, so it does not constitute a symplectic invariant. So the idea is to try to generalize Guillemin and Sternberg's construction to other non-compact cases. One interesting situation arises if we take $(\bar{X}, \omega)$ with $\bar{X} = (\numberset{C}^*)^N$ and \[ \omega = \sum_{k=1}^{N} \frac{dz_k \wedge d\bar{z}_k}{|z_k|^2}.
\] As symplectic submanifold $Y$ we can take some smooth algebraic hypersurface. We think it may be possible to generalize Guillemin and Sternberg's construction to this case. The hypothesis of compactness was made in order to be able to use the coisotropic embedding theorem in symplectic topology, but this theorem also holds in non-compact situations. The question is whether the reduced spaces can all be identified with $((\numberset{C}^*)^N, \omega)$. Since the space is non-compact, area is not an obstruction. Why would such a construction be useful? We could use it to construct interesting examples of piecewise smooth Lagrangian fibrations with singular fibres. In fact suppose the conjectured symplectic manifold $X$ exists with the above properties and such that all reduced spaces can be identified with $((\numberset{C}^*)^N, \omega)$. Then, on $X$, we could define a piecewise smooth Lagrangian fibration as follows. On $(\numberset{C}^*)^N \times I$ define the $T^N$ fibration given by \[ F : (z_1, \ldots, z_N, t) \rightarrow ( \log|z_1|, \ldots, \log|z_N|, t). \] Clearly $F_t = F|_{(\numberset{C}^*)^N \times \{t \}}$ is a Lagrangian fibration. Now suppose there exists a map $p: X \rightarrow (\numberset{C}^*)^N \times I$, equivalent to the quotient $X \rightarrow X/S^1$ and with respect to which the reduced spaces are all $((\numberset{C}^*)^N, \omega)$. Presumably this map would be locally modelled on (\ref{pws:p}); in particular, it would fail to be smooth on $\mu^{-1}(0)$. The piecewise smooth Lagrangian fibration would be \begin{equation} f = F \circ p. \end{equation} We expect $f$ to be a stitched Lagrangian fibration when restricted to $X - f^{-1}(\Delta)$. The interesting aspect of this map is the structure of the singular fibres. In fact its discriminant locus is $\Delta = F(Y \times \{ 0 \})$, which is $\Log(Y) \times \{0 \}$. Images of algebraic hypersurfaces of $(\numberset{C}^*)^N$ under $\Log$ are called amoebas and they have shapes of the type pictured in Figure~\ref{fig: general_amoeba}. \begin{figure} \caption{Amoebas with their respective Newton polygons.} \label{fig: general_amoeba} \end{figure} The topological property of the bundle $p: X - \Sigma \rightarrow (\bar{X} \times I) - (Y \times \{0\})$ discussed in observation (b) ensures that the fibration $f$, restricted to $X-f^{-1}(\Delta)$, has non-trivial monodromy. In fact one can find examples where the monodromy would be of the types discussed in Example~\ref{amoeb:mon}. These examples, and the calculation of monodromy, generalize the construction in \cite{TMS} of the negative fibre, also called the fibre of type $(2,1)$, where a circle bundle with the topological property $(b)$ is used. In a work in progress \cite{CB-M} the authors use the piecewise smooth Lagrangian fibration in Example~\ref{ex amoebous fibr} as one of the building blocks for the construction of Lagrangian fibrations of $6$-dimensional compact Calabi-Yau manifolds. One of the ideas involved is that the invariants we have defined for stitched Lagrangian fibrations can be used to perturb the fibration in Example~\ref{ex amoebous fibr} away from the singular fibres in order to glue it to other pieces of fibration. In fact the sequence $\ell = \{ \ell_k \}_{k \in \numberset{N}}$ of fibrewise closed sections of $\mathfrak{L}^{\ast}$ on $\bar{Z}$ can be easily perturbed, for example by multiplying each element by a cut-off function on the base $\Gamma$, or by adding to each element another fibrewise closed section, and so on.
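To make the kind of perturbation we have in mind slightly more explicit (this is only an illustration; the function $\rho$ and the sections $\lambda_k$ below are auxiliary data introduced here, not objects defined elsewhere in the paper): given a smooth function $\rho$ on the base $\Gamma$ and any sequence $\{ \lambda_k \}_{k \in \numberset{N}}$ of fibrewise closed sections of $\mathfrak{L}^{\ast}$, the perturbed sequence \[ \ell_k \; \longmapsto \; \rho \, \ell_k + \lambda_k, \qquad k \in \numberset{N}, \] still consists of fibrewise closed sections, since $\rho$ is constant along the fibres, and therefore still defines an element of $\mathscr L_{\bar{Z}}$. Of course, any condition on the periods of $\ell_1$, such as (\ref{int:cond2}), has to be preserved separately, for instance by leaving $\ell_1$ unchanged, or by modifying it only by fibrewise exact terms, and perturbing only the terms $\ell_k$ with $k \geq 2$.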
We believe that the more general construction proposed in this section is interesting because, if it can be carried through, then these Lagrangian fibrations could be used as building blocks of more general Lagrangian fibrations of compact symplectic manifolds. \section{Appendix to Lemma \ref{ls:form}}\label{appendix} We give here a proof of Lemma \ref{ls:form} for all $m\in\numberset{N}$. Recall we can write \begin{equation}\label{aj:tay_again} a_j(r) = \sum_{k=1}^{N} a_{j,k} \, r^k + o(r^N). \end{equation} The $a_j$'s are functions of $(r, b, y)$, with $(b,y) \in \bar{Z}$, satisfying \[ \begin{cases} u_1(r, b_2 + a_2, \ldots, b_n + a_n, y) = r, \\ u_j(r, b_2 + a_2, \ldots, b_n + a_n, y) = b_j \quad\text{for all} \ j=2, \ldots, n. \end{cases} \] When $W$ is sufficiently small and $(r,b) \in W$, the functions $a_{j}$, and hence the coefficients $a_{j,m}$, can be uniquely determined using the implicit function theorem. We will now use it to compute the $a_{j,m}$'s and obtain formulae (\ref{ls:rec}). We can rewrite the second equation of the above system by applying \begin{equation} \label{ugei:tay_again} u_j = \sum_{k=0}^{N} S_{j,k} b_1^k + o(b_1^N). \end{equation} We obtain \[ b_j + a_j + \sum_{k=1}^{N} S_{j,k}( b_2 + a_2, \ldots, b_n + a_n, y) r^k + o(r^N) = b_j, \] which implies \begin{equation} \label{aj:bla} a_j + \sum_{k=1}^{N} S_{j,k}( b_2 + a_2, \ldots, b_n + a_n, y) r^k + o(r^N) = 0. \end{equation} To express everything as a power series in $r$ we use the Taylor expansion up to a certain order $N^{\prime}$ of the $S_{j,k}$'s, which in the multi-index notation is given by: \[ S_{j,k}( b_2 + a_2, \ldots, b_n + a_n, y) = \sum_{l=0}^{N^{\prime}} \sum_{|I| = l} C_I \, \partial^l_{I} S_{j,k}(b_2, \ldots, b_n) \, a_{2}^{i_2} \cdot \ldots \cdot a_{n}^{i_n} + \ldots, \] where $I=(i_2, \ldots, i_n)$ is a multi-index and the $C_I$'s are suitable constants. Let us introduce the following notation. For every multi-index $I = (i_2, \ldots, i_n)$, let us define the following set \[ \mathcal{H}_{I} = \{ (H_2, \ldots, H_n) \, | \, H_k \in (\numberset{Z}_{>0})^{i_k} \ \text{if} \ i_k \geq 1 \ \text{and} \ H_k = 0 \in \numberset{Z} \ \text{if} \ i_k = 0 \}. \] When $i_k \geq 1$, we also write $H_k = (h_{k,1}, \ldots, h_{k,i_{k}})$. For every $m \in \numberset{N}$, we denote \[ \mathcal{H}_{I, m} = \left\{ (H_2, \ldots, H_n) \in \mathcal{H}_{I} \, | \, \sum_{i_k \neq 0} \sum_{j=1}^{i_k} h_{k,j} = m \right\}. \] Clearly if $|I| = 0$ and $m \geq 1$, or if $0 \leq m < |I|$, then $\mathcal{H}_{I, m}$ is empty. When $i_k \neq 0$ for all $k=2, \ldots, n$, substituting (\ref{agei:tay}) we compute that \[ a_{2}^{i_2} \cdot \ldots \cdot a_{n}^{i_n} = \sum_{m=1}^{N^{\prime}} \left( \sum_{H \in \mathcal{H}_{I, m}} a_{2,h_{2,1}} \cdot \ldots \cdot a_{2,h_{2,i_2}} \cdot \ldots \cdot a_{n,h_{n,1}} \cdot \ldots \cdot a_{n,h_{n,i_n}} \right) r^m + o(r^{N^{\prime}}). \] Let us introduce another bit of notation. When $|I| \neq 0$, for all $H \in \mathcal{H}_I$, let \[ A_{H} = \prod_{i_{k} \neq 0} \prod_{j=1}^{i_k} a_{k,h_{k,j}}. \] When $|I|=0$, the only element in $\mathcal{H}_I$ is $0 \in \numberset{Z}^{n-1}$, so we set \[ A_0 = 1. \] Thus for all multi-indices $I$, we have \[ a_{2}^{i_2} \cdot \ldots \cdot a_{n}^{i_n} = \sum_{m=0}^{N^{\prime}} \left( \sum_{H \in \mathcal{H}_{I, m}} A_H \right) r^m + o(r^{N^{\prime}}).
\] Therefore $S_{j,k}( b_2 + a_2, \ldots, b_n + a_n, y)$ written as a power series in $r$ becomes \[ S_{j,k}( b_2 + a_2, \ldots, b_n + a_n, y) = \sum_{m=0}^{N^{\prime}} \left( \sum_{|I| \leq m} \sum_{H \in \mathcal{H}_{I, m}} C_I \, \partial^{|I|}_{I} S_{j,k}(b,y) \, A_H \right) r^m + o(r^{N^{\prime}}) . \] Substituting this into (\ref{aj:bla}) we obtain \[ a_j + \sum_{l=1}^{N} \left( \sum_{m=0}^{l-1} \sum_{|I| \leq m} \sum_{H \in \mathcal{H}_{I, m}} C_I \, \partial^{|I|}_{I} S_{j,l-m}(b,y) \, A_H \right) r^l + o(r^N) = 0. \] Substituting also (\ref{aj:tay_again}) we have \[ \sum_{l=1}^{N} \left( a_{j,l} + \sum_{m=0}^{l-1} \sum_{|I| \leq m} \sum_{H \in \mathcal{H}_{I, m}} C_I \, \partial^{|I|}_{I} S_{j,l-m}(b,y) \, A_H \right) r^l + o(r^N) = 0. \] Therefore, for every $l \in \numberset{Z}_{>0}$, we have \[ a_{j,l} = - \sum_{m=0}^{l-1} \sum_{|I| \leq m} \sum_{H \in \mathcal{H}_{I, m}} C_I \, \partial^{|I|}_{I} S_{j,l-m}(b,y) \, A_H. \] When $l=1$, this becomes \[ a_{j,1} = - S_{j,1}, \] when $l \geq 2$ it can also be written as \[ a_{j,l} = - S_{j,l} - \sum_{m=1}^{l-1} \sum_{|I| \leq m} \sum_{H \in \mathcal{H}_{I, m}} C_I \, \partial^{|I|}_{I} S_{j,l-m} \, A_H. \] Now notice that when $1 \leq m \leq l-1$ and $H \in \mathcal{H}_{I, m}$, then $A_H$ only depends on the $a_{j,k}$'s with $1 \leq k \leq l-1$. Therefore if we define \[ R_{j,l} = - \sum_{m=1}^{l-1} \sum_{|I| \leq m} \sum_{H \in \mathcal{H}_{I, m}} C_I \, \partial^{|I|}_{I} S_{j,l-m}(b,y) \, A_H, \] when $l \geq 2$ and $R_{j,1} = 0$, then (\ref{ls:rec}) holds with $R_{j,m}$ satisfying the required properties. \begin{flushleft} Ricardo~CASTA\~NO-BERNARD \\ Max-Planck-Institut f\"ur Mathematik\\ Vivatsgasse 7, \\ D-53111, Bonn, Germany\\ e-mail: \texttt{[email protected]}\\ \ \\ \ \ \\ Diego~MATESSI\\ Dipartimento di Scienze e Tecnologie Avanzate\\ Universit\`{a} del Piemonte Orientale\\ Via Bellini 25/G\\ I-15100 Alessandria, Italy\\ e-mail: \texttt{[email protected]}\\ \end{flushleft} \end{document}
\begin{document} \title{Asymptotic Dimension of Graphs of Groups and One Relator Groups.} \author{Panagiotis Tselekidis} \maketitle \begin{abstract} We prove a new inequality for the asymptotic dimension of HNN-extensions. We deduce that the asymptotic dimension of every finitely generated one relator group is at most two, confirming a conjecture of A.Dranishnikov.\\ As further corollaries we calculate the exact asymptotic dimension of Right-angled Artin groups and we give a new upper bound for the asymptotic dimension of fundamental groups of graphs of groups. \end{abstract} \tableofcontents \section{Introduction} In 1993, M. Gromov introduced the notion of the asymptotic dimension of metric spaces (see \cite{Gr}) as an invariant of finitely generated groups. It can be shown that if two metric spaces are quasi isometric then they have the same asymptotic dimension.\\ The asymptotic dimension $asdimX$ of a metric space $X$ is defined as follows: $asdimX \leq n$ if and only if for every $R > 0$ there exists a uniformly bounded covering $\mathcal{U}$ of $X$ such that the R-multiplicity of $\mathcal{U}$ is smaller than or equal to $n+1$ (i.e. every R-ball in $X$ intersects at most $n+1$ elements of $\mathcal{U}$).\\ There are many equivalent ways to define the asymptotic dimension of a metric space. It turns out that the asymptotic dimension of an infinite tree is $1$ and the asymptotic dimension of $\mathbb{E}^{n}$ is $n$. \\ In 1998, the asymptotic dimension achieved particular prominence in geometric group theory after a paper of Guoliang Yu (see \cite{Yu}), which proved the Novikov higher signature conjecture for manifolds whose fundamental group has finite asymptotic dimension.\\ Unfortunately, not all finitely presented groups have finite asymptotic dimension. For example, Thompson's group $F$ has infinite asymptotic dimension since it contains $\mathbb{Z}^{n}$ for all $n$.\\ However, many classes of groups are known to have finite asymptotic dimension; for instance, hyperbolic groups, relatively hyperbolic groups, Mapping Class Groups of surfaces and one relator groups have finite asymptotic dimension (see \cite{BD08}, \cite{Os}, \cite{BBF}, \cite{Mats}). The exact computation of the asymptotic dimension of groups, or finding the optimal upper bound, is more delicate.\\ Another remarkable result is that of Buyalo and Lebedeva (see \cite{BL}), who in 2006 established the following equality for hyperbolic groups: \begin{center} $asdim G = dim \partial_{\infty}G + 1$. \end{center} The inequalities of G.Bell and A.Dranishnikov (see \cite{BD04} and \cite{Dra08}) play a key role in finding an upper bound for the asymptotic dimension of groups. However, in some cases the upper bounds that the inequalities of G.Bell and A.Dranishnikov provide are quite far from being optimal. An example is the asymptotic dimension of one relator groups.\\ In this paper we prove some new inequalities that can be a useful tool for the computation of the asymptotic dimension of groups. As an application we give the optimal upper bound for the asymptotic dimension of one relator groups, which was conjectured by A.Dranishnikov. As a further corollary we calculate the exact asymptotic dimension of any Right-angled Artin group; this has been proven earlier by N.Wright \cite{Wr} using different methods.\\ The first inequality, and one of the main results we prove, is the following: \begin{thm}\label{1.1} Let $G \ast_{N}$ be an HNN-extension of the finitely generated group $G$ over $N$.
We have the following inequality \begin{center} $asdim\,G \ast_{N} \leq max \lbrace asdim G, asdimN +1 \rbrace.$ \end{center} \end{thm} Next, we calculate the asymptotic dimension of the Right-angled Artin groups. To be more precise, let $\Gamma$ be a finite simplicial graph; we denote by $A(\Gamma)$ the \textit{Right-angled Artin group} (RAAG) associated to the graph $\Gamma$. We set \begin{center} $Sim(\Gamma)= max \lbrace n \mid $ $\Gamma$ contains the 1-skeleton of the standard $(n-1)$-simplex $\Delta^{n-1} \rbrace.$ \end{center} Then by applying Theorem \ref{1.1} we obtain the following: \begin{thm}\label{1.2} Let $\Gamma$ be a finite simplicial graph. Then, $$asdimA(\Gamma)=Sim(\Gamma).$$ \end{thm} In 2005, G.Bell and A.Dranishnikov (see \cite{BD05}) proved that the asymptotic dimension of one relator groups is finite and also gave an upper bound, namely the length of the relator plus one. Let $G= \langle S \mid r \rangle $ be a finitely generated one relator group such that $\mid\!r\!\mid = n$. Then \begin{center} $asdim\,G \leq n+1.$ \end{center} To prove this upper bound G. Bell and A. Dranishnikov used an inequality for the asymptotic dimension of HNN-extensions (see \cite{BD04}).\\ In particular, let $G$ be a finitely generated group and let $N$ be a subgroup of $G$. Then, \begin{center} $asdim\,G\ast_{N} \leq asdim\,G +1$. \end{center} In 2006, D. Matsnev (see \cite{Mats}) proved a sharper upper bound for the asymptotic dimension of one relator groups. D. Matsnev proved the following: let $G= \langle S \mid r \rangle$ be a one relator group; then \begin{center} $asdim\,G \leq \lceil\!\frac{length(r)}{2}\!\rceil$. \end{center} Here by $\lceil\!a\!\rceil$ ($a \in \mathbb{R}$) we denote the minimal integer greater than or equal to $a$.\\ Applying Theorem \ref{1.1} we confirm a conjecture of A.Dranishnikov (see \cite{Dra}), giving the optimal upper bound for the asymptotic dimension of one relator groups. \begin{thm}\label{1.3} Let $G$ be a finitely generated one relator group. Then \begin{center} $asdim\,G \leq 2$. \end{center} \end{thm} We note that R. C. Lyndon (see \cite{Ly}) has shown that the \textit{cohomological dimension} of a torsion-free one-relator group is smaller than or equal to $2$. Our result can be seen as a large scale analog of this.\\ We note that the large scale geometry of one relator groups can be quite complicated, for example one relator groups can have very large isoperimetric functions (see e.g. \cite{Pl}). It is worth noting that L.Sledd showed that the Assouad-Nagata dimension of any finitely generated $C^{\prime}(1/6)$ group is at most two (see \cite{Sl}).\\ Theorem \ref{1.3} combined with the results of M.Kapovich and B.Kleiner (see \cite{KK}) leads us to a description of the boundary of hyperbolic one relator groups. We also determine the one relator groups that have asymptotic dimension exactly two. We prove that every infinite finitely generated one relator group $G$ that is not a free group or a free product of a free group and a finite cyclic group has asymptotic dimension equal to 2 (Proposition \ref{3.5}).\\ We obtain the following:\\ \textbf{Corollary.} \textit{Let $G$ be a finitely generated freely indecomposable one relator group which is not cyclic. Then \begin{center} $asdim\,G = 2$. \end{center}} Moreover, we describe the finitely generated one relator groups in the following corollary:\\ \textbf{Corollary.} \textit{Let $G$ be a finitely generated one relator group.
Then one of the following is true}:\\ \textbf{(i)} \textit{$G$ is finite cyclic, and $asdim\,G = 0$} \\ \textbf{(ii)} \textit{$G$ is a nontrivial free group or a free product of a nontrivial free group and a finite cyclic group, and $asdim\,G = 1$}\\ \textbf{(iii)} \textit{$G$ is an infinite freely indecomposable not cyclic group or a free product of a nontrivial free group and an infinite freely indecomposable not cyclic group, and $asdim\,G = 2$.} Using Theorem \ref{1.1} and an inequality of A.Dranishnikov about the asymptotic dimension of amalgamated products (see \cite{Dra08}) we obtain a more general theorem for the asymptotic dimension of fundamental groups of graphs of groups. \begin{thm}\label{1.4} Let $(\mathbb{G}, Y)$ be a finite graph of groups with vertex groups $\lbrace G_{v} \mid v \in Y^{0} \rbrace$ and edge groups $\lbrace G_{e} \mid e \in Y^{1}_{+} \rbrace$. Then the following inequality holds: \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) \leq max_{v \in Y^{0} ,e \in Y^{1}_{+}} \lbrace asdim G_{v}, asdim\,G_{e} +1 \rbrace.$ \end{center} \end{thm} \textbf{Acknowledgments:} I would like to thank Panos Papasoglu for his valuable advices during the development of this research work.\\ I would also like to offer my special thanks to Mark Hagen and Richard Wade for their very useful comments. \section{Asymptotic dimension of HNN-extensions.} Let $X$ be a metric space and $\mathcal{U}$ a covering of $X$, we say that the covering $\mathcal{U}$ is \textit{$d$-bounded} or \textit{$d$-uniformly bounded} if $sup_{U \in \mathcal{U}}\lbrace \diam U \rbrace \leq d$. The \textit{Lebesgue number} $L(\mathcal{U})$ of the covering $\mathcal{U}$ is defined as follows: \begin{center} $L(\mathcal{U})= sup \lbrace \lambda \mid $ if $ A \subseteq X $ with $ \diam A \leq \lambda $ then there exists $ U \in \mathcal{U} $ s.t. $ A\subseteq U \rbrace$. \end{center} We recall that the order $ord (\mathcal{U})$ of the cover $\mathcal{U}$ is the smallest number $n$ (if it exists) such that each point of the space belongs to at most $n$ sets in the cover.\\ For a metric space $X$, we say that $(r,d)-dim X \leq n$ if for $r > 0$ there exists a $d$-bounded cover $\mathcal{U}$ of $X$ with $ord(\mathcal{U}) \leq n + 1$ and with Lebesgue number $L(\mathcal{U}) > r$. We refer to such a cover as an ($r,d$)-\textit{cover} of $X$.\\ The following proposition is due to G.Bell and A.Dranishnikov (see \cite{BD04}). \begin{prop}\label{2.1} For a metric space $X$, $asdimX \leq n $ if and only if there exists a function $d(r)$ such that $(r,d(r))-dim X \leq n$ for all $r > 0$. \end{prop} We recall that the family $X_{i}$ of subsets of $X$ satisfies the inequality $asdim X_{i} \leq n$ \textit{uniformly} if for every $R>0$ there exists a $D$-bounded covering $ \mathcal{U}_{i}$ of $X_{i}$ with $R-mult(\mathcal{U}_{i}) \leq n+1$, for every $i$. For the proofs of the following theorems \ref{2.2} and \ref{2.3} see \cite{BD01}. \begin{thm}{(Infinite Union Theorem)}\label{2.2} Let $X= \cup_{a} X_{a}$ be a metric space where the family $\lbrace X_{a} \rbrace$ satisfies the inequality $asdimX_{a} \leq n$ uniformly. Suppose further that for every $r>0$ there is a subset $Y_{r} \subseteq X$ with $asdim Y_{r} \leq n$ so that $d(X_{a} \setminus Y_{r} , X_{b} \setminus Y_{r}) \geq r$ whenever $X_{a} \neq X_{b}$. Then $asdimX \leq n $. \end{thm} \begin{thm}{(Finite Union Theorem)}\label{2.3} For every metric space presented as a finite union $X= \cup_{i} X_{i}$ we have \begin{center} $asdimX = max\lbrace asdimX_{i} \rbrace$. 
\end{center} \end{thm} A partition of a metric space $X$ is a presentation as a union $X= \cup_{i} W_{i}$ such that $Int(W_{i})\cap Int(W_{j}) = \varnothing $ whenever $i \neq j$. We denote by $\partial W_{i}$ the topological boundary of $W_{i}$ and by $Int(W_{i})$ the topological interior. We have that $ \partial W_{i} \cap Int(W_{i}) = \varnothing$. The boundary can be written as $$\partial W_{i}= \lbrace x \in X \mid d(x, W_{i})=d(x,X \setminus W_{i})=0 \rbrace.$$\\ For the proof of the following theorem see \cite{Dra08}. \begin{thm}{(Partition Theorem)}\label{2.4} Let $X$ be a geodesic metric space. Suppose that for every $R > 0$ there is $d > 0$ and a partition $X= \cup_{i} W_{i}$ with $asdimW_{i}\leq n$ uniformly in $i$, and such that $(R,d)-dim(\cup_{i} \partial W_{i}) \leq n-1 $, where $\partial W_{i}$ is taken with the metric restricted from $X$. Then $asdim X \leq n$. \end{thm} Let $G$ be a finitely generated group, $N$ a subgroup of $G$ and $\phi : N \rightarrow G$ a monomorphism. We set $\overline{G}=G\ast_{N}$, the HNN-extension of $G$ over the subgroup $N$ with respect to the monomorphism $\phi$. We fix a finite generating set $S$ for the group $G$. Then the set $\overline{S}=S\cup \lbrace t , t^{-1}\rbrace$ is a finite generating set for the group $\overline{G}$ and we denote by $C(\overline{G})=Cay(\overline{G},\overline{S})$ its Cayley graph.\\ \textit{Normal forms for HNN-extensions.}\\ We note that there exist two types of \emph{normal forms} for HNN-extensions, the right normal form and the left normal form. We are going to use both of them in this paper.\\ \textit{Right normal form:} Let $S_{N}$ and $S_{\phi(N)}$ be sets of representatives of \emph{right cosets} of $G/N$ and of $G/\phi(N)$ respectively. Then every $w \in \overline{G}$ has a unique normal form $w=gt^{\epsilon_{1}}s_{1}t^{\epsilon_{2}}s_{2}...t^{\epsilon_{k}}s_{k}$ where $g \in G$, $\epsilon_{i} \in \lbrace -1, 1 \rbrace$ and if $\epsilon_{i}=1 $ then $s_{i} \in S_{N}$, if $\epsilon_{i}=-1 $ then $s_{i} \in S_{\phi(N)}$.\\ \textit{Left normal form:} Let $_{N}S$ and $_{\phi(N)}S$ be sets of representatives of \emph{left cosets} of $G/N$ and of $G/\phi(N)$ respectively. Then every $w \in \overline{G}$ has a unique normal form $w=s_{1}t^{\epsilon_{1}}s_{2}t^{\epsilon_{2}}...s_{k}t^{\epsilon_{k}}g$ where $g \in G$, $\epsilon_{i} \in \lbrace -1, 1 \rbrace$ and if $\epsilon_{i}=1 $ then $s_{i} \in _{\phi(N)} S$, if $\epsilon_{i}=-1 $ then $s_{i} \in _{N}S$.\\ \textbf{Convention:} When we write a normal form we mean the right normal form, unless otherwise stated.\\ The group $\overline{G}= G\ast_{N}$ acts on its Bass-Serre tree $T$. There is a natural projection $\pi: G\ast_{N} \rightarrow T$ defined by the action: $ \pi(g)= gG$.\\ \begin{figure} \caption{An illustration of the projection $\pi: C(\overline{G}) \rightarrow T$.} \end{figure} \begin{lem}\label{2.5} The map $\pi: \overline{G} \rightarrow T$ extends to a simplicial map from the Cayley graph, $\pi: C(\overline{G},S) \rightarrow T$, which is 1-Lipschitz. \end{lem} \begin{proof} Let $g \in \overline{G}$ and $s \in \overline{S}$. Then the vertex $g$ is mapped to the vertex $\pi(g)=gG$. If $s \in S$, then the edge $[g,gs]$ is mapped to the vertex $\pi(g)=\pi(gs)=gG$. If $s \in \lbrace t , t^{-1}\rbrace$, without loss of generality we may assume that $s =t$; then the edge $[g,gs]$ is mapped to the edge $[\pi(g),\pi(gs)]=[gG,gtG]$ of $T$.\\ We observe that the simplicial map $\pi: C(\overline{G}) \rightarrow T$ is 1-Lipschitz.
\end{proof} \begin{figure} \caption{An illustration of $T_+$ and $T_{-}$.} \end{figure} The base vertex $G$ separates $T$ into two parts $T_{-} \setminus G$ and $T_{+} \setminus G$, where \begin{center} $\pi^{-1}(T_{+})=\lbrace w \in \overline{G} \mid $ if $ w=gt^{\epsilon_{1}}s_{1}t^{\epsilon_{2}}s_{2}...t^{\epsilon_{k}}s_{k} $ is the normal form of $w$ then $ \epsilon_{1} = 1 \rbrace$ \end{center} and similarly \begin{center} $\pi^{-1}(T_{-})=\lbrace w \in \overline{G} \mid $ if $ w=gt^{\epsilon_{1}}s_{1}t^{\epsilon_{2}}s_{2}...t^{\epsilon_{k}}s_{k} $ is the normal form of $w$ then $ \epsilon_{1} = -1 \rbrace$. \end{center} We note that both $ T_{+} \setminus G$ and $T_{-} \setminus G $ are unions of connected components of $T$ and $ \pi^{-1}(T_{+})$ and $\pi^{-1}(T_{-}) $ are unions of connected components of $C(\overline{G})$. See figure 2 for an illustration of $T_-$ and $T_+$.\\ We consider the Bass-Serre tree $T$ as a metric space with the simplicial metric $\overline{d}$. If $Y$ is a graph, we denote by $Y^{0}$ or $V(Y)$ the vertices of $Y$.\\ For $u \in T^{0}$ we denote by $\mid\!u\!\mid$ the distance to the vertex with label $G$. We note that the distance of the vertex $wG$ from $G$ in the Bass-Serre tree $T$ equals the length $l(w)$ of the normal form of $w$, $\mid\!wG\!\mid=l(w)$. We denote by $l(w)$ the length of the normal form of $w$; we note that the length of both the right and the left normal form of $w$ is the same.\\ We recall that a \emph{full subgraph} of a graph $\Gamma$ is a subgraph formed from a subset of vertices $V$ and from all of the edges that have both endpoints in the subset $V$.\\ If $A$ is a subgraph of $\Gamma$ we define the \textit{edge closure} $E(A)$ of $A$ to be the full subgraph of $\Gamma$ formed from $V(A)$. Obviously, $V(E(A))=V(A)$.\\ We fix some notation on the Bass-Serre tree $T$ and on the Cayley graph.\\ \emph{In the tree $T$.} We denote by $B^{T}_{r}$ the $r$-ball in $T$ centered at $G$ ($r \in \mathbb{N}$). There is a partial order on vertices of $T$ defined as follows: $v \leq u$ if and only if $v$ lies in the geodesic segment $[G,u]$ joining the base vertex $G$ with $u$. For $u \in T^{0}$ of nonzero level (i.e. $u \neq G$) and $r > 0$ we set \begin{center} $T^{u}= E(\lbrace v \in T^{0} \mid u \leq v \rbrace)$, $B^{u}_{r}=E(\lbrace v \in T^{u} \mid \ \mid\!v\!\mid \leq \mid\!u\!\mid + r \rbrace)$. \end{center} For every vertex $u \in T^{0}$ represented by a coset $g_{u}G$ we have the equality $B^{u}_{r}= g_{u}B^{T}_{r} \cap T^{u} $. We also observe that $B^{u}_{r}=E(\lbrace v \in T^{u} \mid \ \overline{d}(v,u) \leq r \rbrace)$. See figure 3 for an illustration of the sets $T^u$ and $B^u_r$.\\ \begin{figure} \caption{Here we have an illustration of $T^{u}$.} \label{fig:sub1} \caption{Here we have an illustration of $B^{u}_{r}$.} \label{fig:sub2} \end{figure} \emph{In the Cayley graph.} For $R \in \mathbb{N}$, let $$M_{R}= \lbrace g \in \overline{G} \mid dist(g,N\cup\phi(N))=R \rbrace.$$ Let $u=g_{u}G$; we set $M_{R}^{u}=g_{u}M_{R} \cap \pi^{-1} (T^{u})$. We observe that $\pi(M^{u}_{R}) \subseteq B^{u}_{R} $ since $\pi$ is 1-Lipschitz.\\ \begin{figure} \caption{An illustration of $E_R$.} \label{fig:sub1} \caption{An illustration of $M_R$.
} \label{fig:sub2} \end{figure} Let $u=g_{u}G$; we set $E_{R}=E(N_{R}(N \cup \phi(N)))$ and $$E_{R}^{u}=g_{u}E_{R} \cap \pi^{-1} (T^{u}).$$ Obviously, $M_{R}^{u} \subseteq E_{R}^{u} \subseteq \pi^{-1}(B^{u}_{R})$.\\ \textbf{Convention:} We associate with every $u \in T^{0}$ an element $g_u \in \overline{G}$ such that the following two conditions hold:\\ (i) $u= g_u G$.\\ (ii) if the left normal form of $g_u $ is $s_{1}t^{\epsilon_{1}}s_{2}t^{\epsilon_{2}}...s_{k}t^{\epsilon_{k}}g $ then $g=1_{\overline{G}}$.\\ We see that in this way we may define a bijective map from $T^{0}$ to the set $\mathcal{G}_T$ which consists of the elements of $\overline{G}$ such that conditions (i) and (ii) hold. \begin{prop}\label{2.6} If $4 < 4R \leq r$ and the distinct vertices $u, u^{\prime} \in T^0$ satisfy $ \mid\!u\!\mid , \mid\!u^{\prime}\!\mid \in \lbrace nr \mid n \in \mathbb{N}\rbrace$ then $$d(M_{R}^{u},M_{R}^{u^{\prime}}) \geq 2 R.$$ \end{prop} \begin{proof} We distinguish two cases. See figure 5(a) and figure 5(b) for case 1 and case 2 respectively. \begin{figure} \caption{An illustration of case 1 of proposition \ref{2.6}.} \label{fig:sub1} \caption{An illustration of case 2 of proposition \ref{2.6}.} \label{fig:sub2} \caption{In figure (a) we have $u^{\prime}$ with $\mid\!u^{\prime}\!\mid \neq \mid\!u\!\mid$ (case 1); in figure (b), $\mid\!u^{\prime}\!\mid = \mid\!u\!\mid$ (case 2).} \label{fig:test} \end{figure} \textbf{Case 1:} $\mid\!u\!\mid \neq \mid\!u^{\prime}\!\mid$. We recall that every path $\gamma$ in $C(\overline{G})$ projects to a path $\pi(\gamma)$ in the tree $T$. Then since \begin{center} $M_{R}^{u}=g_{u}M_{R}\cap \pi^{-1}(T^{u}) \subseteq \pi^{-1}(B_{R}^{u})$, \end{center} \begin{center} $M_{R}^{u^{\prime}}=g_{u^{\prime}}M_{R} \cap \pi^{-1}(T^{u^{\prime}}) \subseteq \pi^{-1}(B_{R}^{u^{\prime}})$ \end{center} and $\pi$ is 1-Lipschitz we have that \begin{center} $d(M^{u}_{R} ,M^{u^{\prime}}_{R}) \geq \overline{d}(B_{R}^{u},B_{R}^{u^{\prime}}) \geq r - R \geq 3R.$ \end{center} \textbf{Case 2:} $\mid\!u\!\mid = \mid\!u^{\prime}\!\mid$ ($u \neq u^{\prime}$). We denote by $\zeta_{0}$ the last vertex of the common geodesic segment $[G,\zeta_{0}]$ of the geodesics $[G,u]$ and $[G,u^{\prime}]$. We observe that $\overline{d}(u,\zeta_{0}),\overline{d}(u^{\prime},\zeta_{0}) \geq 1$.\\ Let $x \in M_{R}^{u}$, $y \in M_{R}^{u^{\prime}}$ and let $\gamma$ be a geodesic from $x$ to $y$. Then the path $\pi(\gamma)$ passes through the vertices $u$, $u^{\prime}$ and $\zeta_{0}$. So the geodesic $\gamma$ intersects both $g_{u}(N \cup \phi(N))$ and $g_{u^{\prime}}(N \cup \phi(N))$. Hence \begin{center} $d(x,y) \geq dist(x,g_{u}(N \cup \phi(N)) ) + dist(y,g_{u^{\prime}}(N \cup \phi(N)) )+ length([\zeta_{0},u^{\prime}]) + length([\zeta_{0},u]) \geq R+R+2=2(R+1).$ \end{center} \end{proof} For $w \in G\ast_{N}$, we denote by $\parallel\!w\!\parallel$ the distance from $w$ to $1_{\overline{G}}$ in the Cayley graph $Cay(\overline{G},\overline{S})$. \begin{figure} \caption{An illustration of $Q_m$, for $m=2$ (proposition \ref{2.8}).} \label{fig:sub1} \caption{An illustration of $\pi^{-1}(B^{T}_{m})$.} \label{fig:sub2} \caption{We note that $Q_m= V(\pi^{-1}(B^{T}_{m}))$.} \label{fig:test} \end{figure} \begin{lem}\label{2.7} Let $w=gt^{\epsilon_{1}}s_{1}t^{\epsilon_{2}}s_{2}...t^{\epsilon_{k}}s_{k} $ be the normal form of $w$. Then \begin{center} $\parallel\!w\!\parallel \geq d(s_{k},N)$ if $\epsilon_{k}=1$ and $\parallel\!w\!\parallel \geq d(s_{k},\phi(N))$ if $\epsilon_{k}=-1$. \end{center} \end{lem} \begin{proof} Without loss of generality we assume that $\epsilon_{k}=1$.
Let $$w=(\prod_{i_{0}=1}^{m_{0}}s_{i_{0}})t^{\epsilon_{1}}(\prod_{i_{1}=1}^{m_{1}}s_{i_{1}})t^{\epsilon_{2}}...t(\prod_{i_{k}=1}^{m_{k}}s_{i_{k}})$$ be a shortest presentation of $w$ in the alphabet $\overline{S}$ (we note that $s_{i_{j}} \notin \lbrace t, t^{-1} \rbrace$). We set $\prod_{i_{j}=1}^{m_{j}}s_{i_{j}}=g_{j}$ for every $j \in \lbrace 1,...,k \rbrace.$ Then $w=gt^{\epsilon_{1}}g_{1}t^{\epsilon_{2}}s_{2}...tg_{k} = w_{0}tg_{k}$. \\ The first step when we rewrite $w$ in normal form starting from the previous presentation is to write $g_{k}=ns_{k}$ (where $n \in N$). Then $$\parallel\!w\!\parallel \geq \parallel\!g_{k}\!\parallel = \parallel\!ns_{k}\!\parallel =d(ns_{k},1) =d(s_{k},n^{-1}) \geq d(s_{k},N).$$ \end{proof} We note that there exists an amalgamated product analogue of the following proposition proved by A.Dranishnikov in \cite{Dra08}. \begin{prop}\label{2.8} Suppose that $asdim\,G \leq n$. Let \begin{center} $Q_{m}= \lbrace w \in \overline{G} \mid w=gt^{\epsilon_{1}}s_{1}t^{\epsilon_{2}}s_{2}...t^{\epsilon_{m}}s_{m} $ is the normal form of $w$ $ \rbrace.$ \end{center} Then $asdim Q_{m} \leq n $, for every $m \in \mathbb{N}$. \end{prop} \begin{proof} We set $P_{\lambda}= \lbrace w \in \overline{G} \mid l(w)= \lambda \rbrace$. To prove the statement of the proposition it is enough to show that $asdimP_{\lambda} \leq n$, for every $\lambda \in \mathbb{N}$. Indeed, since \begin{center} $Q_{m}= \cup _{i=1}^{m}P_{i}$ \end{center} by the Finite Union Theorem we obtain that $asdimQ_{m} \leq n$.\\ \textbf{Claim:} For $\lambda \in \mathbb{N}$ we have $asdimP_{\lambda} \leq n.$\\ \textit{Proof of claim}: We use induction on $\lambda$. We have $P_{0}=G$, so $asdimP_{0} \leq n$. We observe that $P_{\lambda} \subseteq P_{\lambda-1}tG \cup P_{\lambda-1}t^{-1}G$. Using the Finite Union Theorem it suffices to show that $asdim(P_{\lambda} \cap P_{\lambda-1}tG) \leq n$ and $asdim(P_{\lambda} \cap P_{\lambda-1}t^{-1}G) \leq n$, we show the first.\\ To show that $asdimP_{\lambda} \cap P_{\lambda-1}tG \leq n$ we use the Infinite Union Theorem. For $r>0$ we set $Y_{r}=P_{\lambda-1}tN_{r}(N)$. We claim that $$Y_{r} \subseteq N_{r+1}(P_{\lambda-1}).$$ Indeed, if $z \in Y_{r}$ then $z=z_{0}tz_{1}$, where $z_{0} \in P_{\lambda-1}$ and $z_{1} \in N_{r}(N)$. Since $z_{1} \in N_{r}(N)$ there exists $n \in N$ with $ d(n,z_{1}) \leq $ r.\\ So $z=z_{0}tnn^{-1}z_{1}= z_{0}\phi(n)tn^{-1}z_{1}$, and $$d(z,P_{\lambda-1}) \leq d(z, z_{0}\phi(n)) = \parallel\!tn^{-1}z_{1}\!\parallel \leq \parallel\!t\!\parallel + \parallel\!t^{-1}z_{1}\!\parallel \leq 1 + r.$$ Hence $Y_{r} \sim_{q.i.} P_{\lambda-1}$, so $asdimY_{r}\leq n$.\\ We consider the family $xtG$ where $x \in P_{\lambda-1}$. For $xtG \neq ytG$, we have $d(xtG \setminus Y_{r}, ytG \setminus Y_{r}) = d(xtg,yth) = \parallel\!g^{-1}t^{-1}x^{-1}yth\!\parallel $, where $g, h \in G \setminus N_{r}(N)$. The first step when we rewrite $g^{-1}t^{-1}x^{-1}yth$ in normal form is to replace $h=ns_{k}$, where $n \in N$ and $s_{k} \in S_{N}$, so $ g^{-1}t^{-1}x^{-1}yth = g^{-1}t^{-1}x^{-1}y\phi(n)ts_{k}$.\\ Since $h \in G \setminus N_{r}(N)$ we have that $\Vert\!s_{k}\!\Vert =\Vert\!n^{-1}h\!\Vert \geq d(h,N) \geq r$. By lemma \ref{2.7} we obtain that $\parallel\!g^{-1}t^{-1}x^{-1}y\phi(n)ts_{k}\!\parallel \geq \parallel\!s_{k}\!\parallel \geq r$.\\ Finally, by observing that $xtG$ and $G $ are isometric we deduce that $asdim(xtG) \leq n$ uniformly. 
Since all the conditions of the Infinite Union Theorem hold we have that $$asdim(P_{\lambda} \cap P_{\lambda-1}tG) \leq n$$ for every $\lambda \in \mathbb{N}.$ \end{proof} We observe that $E(Q_{m}) = \pi^{-1}(B^{T}_{m})$ and $ Q_{m} = \overline{G} \cap \pi^{-1}(B^{T}_{m})$.\\ For $w \in \overline{G}$, we set $T^{w} = T^{\pi(w)} $, where $\pi(w)=wG$. \begin{thm}\label{2.9} Let $G \ast_{N}$ be an HNN-extension of the finitely generated group $G$ over $N$. We have the following inequality \begin{center} $asdim\,G \ast_{N} \leq max \lbrace asdim G, asdimN +1 \rbrace.$ \end{center} \end{thm} \begin{proof} Let $n= max \lbrace asdim G, asdimN +1 \rbrace$. We denote by $\pi:C(\overline{G},S) \rightarrow T$ the map of Lemma \ref{2.5}.\\ We recall that we denote by $l(g)$ the length of the normal form of $g$.\\ We will use the Partition Theorem (Thm \ref{2.4}). Let $R,r \in \mathbb{N}$ be such that $R>1$ and $r > 4R$. We set, $$U_{r}= E[(\pi^{-1}(B^{T}_{r-1}) \cap E(\lbrace g \in \overline{G} \mid d(g,N \cup \phi(N)) \geq R \rbrace))\cup(\bigcup_{u \in \partial B^{T}_{r} } E_{R}^{u})],$$ where $E_{R}^{u}= g_{u}E(N_{R}(N \cup \phi(N))) \cap \pi^{-1}(T^{u})$.\\ We recall that $M_{R}=\lbrace g \in \overline{G} \mid d(g,N \cup \phi(N)) = R \rbrace$.\\ Let $A_{R}$ be the collection of the edges between the elements of $M_{R} \subseteq U_{r}$. We have that $A_{R} \subseteq U_{r}$. We define $V_r$ to be the set obtained by removing the interior of the edges of $A_{R}$ from $U_{r}$.\\ Formally we have that \begin{center} $V_{r} = U_{r} \setminus \lbrace interior(e) \mid e \in A_{R} \rbrace$. \end{center} \begin{figure} \caption{An illustration of $V_r$.} \end{figure} See figure 7 for an illustration of the set $V_r$. We observe that the sets $U_{r}$ and $V_{r}$ are subgraphs of $C(\overline{G})$, $\partial U_{r}= \partial V_{r}$ and $ V_{r} \cap \overline{G} = U_{r} \cap \overline{G}$. Obviously, $\bigcup_{u \in \partial B^{T}_{r} } E_{R}^{u} \subseteq V_{r}.$ We also have $$V_{r} \cap \overline{G} = (\overline{G} \cap \pi^{-1}(B^{T}_{r-1}) \cap E(\lbrace g \in \overline{G} \mid d(g,N \cup \phi(N)) \geq R \rbrace))\cup( \overline{G} \cap \bigcup_{u \in \partial B^{T}_{r} } E_{R}^{u}).$$ To be more precise, \begin{center} $V_{r} \cap \overline{G} = \lbrace wx \in \overline{G} \mid d(w,N \cup \phi(N)) \geq R $ and if $ w=g_{0}t^{\epsilon_{1}}g_{1}...t^{\epsilon_{k}}g_{k} $ is the normal form of $w$ then $ k \leq r -1$ or $g_{k}=1$ and $k=r$ , if $x \neq 1$ then $ k = r $, $g_{k}=1$ , $d(x, N \cup \phi(N)) \leq R \rbrace $. \end{center} For every vertex $u \in T^0$ satisfying $\mid\!u\!\mid \in \lbrace nr \mid n \in \mathbb{N}\rbrace$, we define \begin{center} $V^{u}_{r} = g_{u}V_{r} \cap \pi^{-1}(T^{u})$. \end{center} Obviously, the sets $V^{u}_{r}$ are subgraphs of $C(\overline{G})$ and $V^{u}_{r} \nsubseteq \overline{G}$. We observe that $V_{r} \subseteq \pi^{-1}(B^{T}_{r+R})$, so $V_{r}^{u} \subseteq \pi^{-1}(B^{u}_{r+R})$. Obviously, for every $h$ such that $h=g_{1}t^{\epsilon_{1}}g_{2}...t^{\epsilon_{r}}$ is the \emph{left normal form} of $h$ we have that: \begin{center} $(g_{u} M_{R} \cap \pi^{-1}(T^{g_{u}G}))\cup (g_{u} h M_{R} \cap \pi^{-1}(T^{g_{u}hG}))\subseteq \partial V_{r}^{u}$, where $l(h)=r$. ($\star$) \end{center} This can also be written as: \begin{center} $M_{R}^{g_{u}G} \cup M_{R}^{g_{u}hG}\subseteq \partial V_{r}^{u}$. \end{center} We set $V_{r}^{G}=V_{r}$. 
We consider the partition \begin{equation} C(\overline{G}) =\pi^{-1}(T) = (\bigcup_{\mid u \mid \in \lbrace nr \mid n \in \mathbb{N}_{+} \cup \lbrace 0 \rbrace \rbrace} V^{u}_{r}) \cup E(N_{R}(N \cup \phi(N))) \end{equation} We set \begin{center} $Z = (\bigcup_{\mid u \mid \in \lbrace nr \mid n \in \mathbb{N}_{+} \rbrace \cup \lbrace 0 \rbrace} \partial V^{u}_{r}) \cup \partial E(N_{R}(N \cup \phi(N)))$. \end{center} We observe that if $V^{u}_{r} \cap V^{v}_{r} \neq \varnothing $, then either $u \leq v$ and $\mid\!u\!\mid +r = \mid\!v\!\mid$ or $u \geq v$ and $\mid\!v\!\mid +r = \mid\!u\!\mid$. If $V^{u}_{r} \cap V^{v}_{r} \neq \varnothing $ such that $u\leq v$ and $\mid\!u\!\mid +r = \mid\!v\!\mid$, then \begin{center} $V_{r}^{u} \cap V_{r}^{v} = M^{v}_{R} = \partial V_{r}^{u} \cap \partial V_{r}^{v}.$ \end{center} We deduce that \begin{center} $Z = (\bigcup_{\mid u \mid \in \lbrace nr \mid n \in \mathbb{N}_{+} \rbrace } M^{u}_{R}) \cup M_{R}$. \end{center} We will show that there exists $d>0$ such that $(R,d)-dimZ \leq n-1 $.\\ Since $M_{R}$ is quasi isometric to $N_{R}(N \cup \phi(N))$, which is quasi isometric to $N \cup \phi(N)$, we have that $asdim M_{R} \leq n-1$. Then for $R>0$ there exists an $(R,d)$-covering $\mathcal{U}$ of $M_{R}$ with $ord(\mathcal{U})\leq n$.\\ In view of proposition \ref{2.6} we have that the following covering $$\mathcal{V}=\mathcal{U} \cup \bigcup_{\mid u \mid \in \lbrace nr \mid n \in \mathbb{N}_{+} \rbrace} (g_{u}\mathcal{U} \cap M_{R}^{u})$$ is an $(R,d)$-covering of $Z$ with $ord(\mathcal{V})\leq n$. We conclude that $(R,d)-dimZ \leq n-1 $.\\ Next, we will show that $asdimV^{u}_{r} \leq n$ and $asdimN_{R}(N \cup \phi(N)) \leq n$ uniformly. This will complete our proof that all the conditions of the Partition Theorem are satisfied.\\ It suffices to show that $asdimV^{u}_{r} \leq n$ uniformly and $$asdimN_{R}(N \cup \phi(N))\leq n.$$ We observe that $V_{r} \subseteq \pi^{-1}(B^{T}_{r+R}) \subseteq N_{1}(Q_{r+R})$, so by proposition \ref{2.8} we have that $asdimV^{u}_{r} \leq n$. Since the sets $V^{u}_{r}$ of our partition are isometric to each other we conclude that $asdimV^{u}_{r} \leq n$ uniformly.\\ Finally, $asdimN_{R}(N \cup \phi(N)) \leq n-1$ since $N_{R}(N \cup \phi(N))$ is quasi isometric to $N \cup \phi(N)$.\\ By the Partition Theorem (Thm \ref{2.4}), $asdimC(\overline{G}) = asdim \pi^{-1}(T) \leq n$. \end{proof} \subsection{Right-angled Artin groups.} We use the following theorem of G.Bell, A.Dranishnikov, and J.Keesling (see \cite{BDK}). \begin{thm}\label{2.10} If $A$ and $B$ are finitely generated groups then $$asdim\, A \ast B = max \lbrace asdimA , asdimB \rbrace.$$ \end{thm} Let $\Gamma$ be a finite simplicial graph with $n$ vertices; the \textit{Right-angled Artin group} (RAAG) $A(\Gamma)$ associated to the graph $\Gamma$ has the following presentation: \begin{center} $ A(\Gamma)= \langle s_{u} $ , $ (u \in V(\Gamma)) \mid [s_{u},s_{v}] $ , $ ([u,v] \in E(\Gamma)) \rangle .$ \end{center} By $[s_{u},s_{v}]=s_{u}s_{v}s_{u}^{-1}s_{v}^{-1}$ we mean the commutator.\\ We set $Val(\Gamma)= max \lbrace valency(u) \mid u \in V(\Gamma) \rbrace$. By $valency(u)$ we denote the number of edges incident to the vertex $u$.\\ Clearly, $Val(\Gamma) \leq rank(A(\Gamma))-1.$\\ If $\Gamma$ is a simplicial graph, we denote by $1-skel(\Gamma)$ the 1-skeleton of $\Gamma$.
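For a concrete illustration of these notions, consider the following standard example (the labelling of the vertices is chosen only for illustration and the example is not used in the sequel). Let $\Gamma$ be the $4$-cycle with vertices $u_{1},u_{2},u_{3},u_{4}$ and edges $[u_{1},u_{2}], [u_{2},u_{3}], [u_{3},u_{4}], [u_{4},u_{1}]$. Then $s_{u_{1}}$ and $s_{u_{3}}$ commute with $s_{u_{2}}$ and $s_{u_{4}}$ but not with each other, so $A(\Gamma) \simeq F_{2} \times F_{2}$ and $Val(\Gamma)=2$. Since $\mathbb{Z}^{2} \leq F_{2} \times F_{2}$ and $asdim (A \times B) \leq asdim\,A + asdim\,B$ for finitely generated groups $A$ and $B$, we have $asdim\,A(\Gamma)=2$; note that Lemma \ref{2.11} below only gives the upper bound $Val(\Gamma)+1=3$, while Theorem \ref{2.12} below recovers the exact value.\\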
Recall that a \emph{full subgraph} of a graph $\Gamma$ is a subgraph formed from a subset of vertices $V$ and from all of the edges that have both endpoints in the subset $V$. \textbf{Conventions:} Let $\Gamma$ be simplicial graph, $u \in V(\Gamma)$ and $e \in E(\Gamma)$. We denote by:\\ (i) $\Gamma \setminus \lbrace u \rbrace$ the full subgraph of $\Gamma$ formed from $V(\Gamma)\setminus \lbrace u \rbrace$.\\ (ii) $\Gamma \setminus e$ the subgraph of $\Gamma$ such that $V(\Gamma \setminus e)=V(\Gamma)$ and $E(\Gamma \setminus e)=E(\Gamma) \setminus \lbrace e \rbrace$. \begin{lem}\label{2.11} Let $\Gamma$ be a finite simplicial graph. Then \begin{center} $asdimA(\Gamma) \leq Val(\Gamma)+1.$ \end{center} \end{lem} \begin{proof} Since theorem \ref{2.10} holds we observe that it suffices to show the statement of the lemma \ref{2.11} for connected simplicial graphs. We assume that $\Gamma$ is a connected simplicial graph.\\ We use induction on the $rank(A(\Gamma))$. For $rank(A(\Gamma))=1$ we have that $A(\Gamma)$ is the integers $\mathbb{Z}$, so the statement holds. We assume that the statement holds for every $k \leq n$ and we show that it holds for $n+1$ ($n+1 \geq 2$).\\ Let $\Gamma$ be a simplicial graph with $n+1$ vertices. We remove a vertex $u$ from the graph $\Gamma$ such that $valency(u)= Val(\Gamma) = m \geq 1$. Let's denote by $v_{i}$ ($i \in \lbrace 1,..., m \rbrace$) the vertices of $\Gamma$ which are adjacent to $u$.\\ We set $\Gamma^{\prime}=\Gamma \setminus \lbrace u \rbrace $. Obviously, $Val(\Gamma^{\prime}) \leq Val(\Gamma)$.\\ We denote by $Y$ the full subgraph of $\Gamma$ formed from $\lbrace v_{1}, \ldots ,v_{m} \rbrace$.\\ We observe that the RAAG $A(\Gamma)$ is an HNN-extension of the RAAG $A(\Gamma^{\prime})$. To be more precise, we have that \begin{center} $A(\Gamma)= A(\Gamma^{\prime})\ast_{A(Y)}.$ \end{center} By Theorem \ref{2.9} we obtain that $$asdimA(\Gamma) \leq max\lbrace asdimA(\Gamma^{\prime}) , asdimA(Y)+1 \rbrace .$$ We observe that $ Val(Y) \leq Val(\Gamma) -1$, so by the induction ($ rank(A(Y)) \leq n$) we obtain $$asdimA(Y) \leq Val(Y) +1 \leq Val(\Gamma).$$ Since $rankA(\Gamma^{\prime}) = n$, by the induction we deduce that $$asdimA(\Gamma^{\prime}) \leq Val(\Gamma^{\prime}) + 1 \leq Val(\Gamma) + 1.$$ Combining the three previous inequalities we obtain: \begin{center} $ asdimA(\Gamma) \leq max\lbrace Val(\Gamma) + 1 , Val(\Gamma) + 1 \rbrace = Val(\Gamma) + 1 . $ \end{center} \end{proof} Using the previous lemma we can compute the exact asymptotic dimension of $A(\Gamma)$. We note that this has already been computed by N.Wright \cite{Wr} using different methods.\\ We set \begin{center} $Sim(\Gamma)= max \lbrace n \mid $ $\Gamma$ contains the 1-skeleton of the standard $(n-1)$-simplex $\Delta^{n-1} \rbrace.$ \end{center} Obviously if $\Gamma^{\prime} \subseteq \Gamma$, then $Sim(\Gamma^{\prime}) \leq Sim(\Gamma)$. \begin{thm}\label{2.12} Let $\Gamma$ be a finite simplicial graph. Then, $$asdimA(\Gamma)=Sim(\Gamma).$$ \end{thm} \begin{proof} Since theorem \ref{2.10} holds we observe that it suffices to show the statement of Theorem \ref{2.12} for connected simplicial graphs. We assume that $\Gamma$ is a connected simplicial graph.\\ \textbf{Claim 1.} $Sim(\Gamma) \leq asdimA(\Gamma)$.\\ \textit{Proof of claim 1:} Let $Sim(\Gamma)=n$. We observe that $\mathbb{Z}^{n} = A(S_{n-1}) \leq A(\Gamma)$. It follows that \begin{center} $n= asdim\mathbb{Z}^{n} \leq asdimA(\Gamma)$. 
\end{center} \textbf{Claim 2.} $asdimA(\Gamma ) \leq Sim(\Gamma)$.\\ \textit{Proof of claim 2:} We use induction on the $rank(A(\Gamma))$. For $rank(A(\Gamma))=1$ we have that $A(\Gamma)$ is the integers $\mathbb{Z}$, so the statement holds. We assume that the statement holds for every $r \leq m$; we will show that it holds for $m+1$ as well. Let $\Gamma$ be a connected simplicial graph with $m+1$ vertices.\\ Let $Sim(\Gamma)=n$; then $\Gamma$ contains the 1-skeleton of the standard $(n-1)$-simplex $S_{n-1}$ ($S_{n-1} = 1-skel(\Delta^{n-1})$).\\ \textbf{Case 1.} $\Gamma = S_{n-1}.$\\ Then $m+1=n$, so by lemma \ref{2.11} we have $asdimA(S_{n-1}) \leq Val(S_{n-1}) +1$. By observing that $Val(S_{n-1}) = n-1$ we obtain that \begin{center} $asdimA(S_{n-1}) \leq n = Sim(\Gamma)$. \end{center} \textbf{Case 2.} $ S_{n-1} \subsetneqq \Gamma.$\\ We will remove a vertex $u \in V(S_{n-1})$. Let's denote by $v_{i}$ ($i \in \lbrace 1,..., k \rbrace$) the vertices of $\Gamma$ which are adjacent to $u$. We set $\Gamma^{\prime}=\Gamma \setminus \lbrace u \rbrace $. Obviously $Sim(\Gamma^{\prime}) \leq n$. \\ We denote by $Y$ the full subgraph of $\Gamma$ formed from $\lbrace v_{1}, \ldots ,v_{k} \rbrace$.\\ We observe that the RAAG $A(\Gamma)$ is an HNN-extension of the RAAG $A(\Gamma^{\prime})$. To be more precise, we have that \begin{center} $A(\Gamma)= A(\Gamma^{\prime})\ast_{A(Y)}.$ \end{center} By Theorem \ref{2.9} we obtain that \begin{equation}\label{eq2} asdimA(\Gamma) \leq max\lbrace asdimA(\Gamma^{\prime}) , asdimA(Y)+1 \rbrace . \end{equation} Since $Sim(\Gamma^{\prime}) \leq n$ and $rank(A(\Gamma^{\prime})) \leq m$, by the inductive assumption we have that: \begin{equation}\label{eq3} asdimA(\Gamma^{\prime}) \leq Sim(\Gamma^{\prime}) \leq n. \end{equation} We observe that $Sim(Y) \leq n-1$ and $rank(A(Y)) \leq m$; then by the induction we obtain \begin{equation}\label{eq4} asdim A(Y) + 1 \leq Sim(Y) +1 \leq n. \end{equation} By (\ref{eq2}), (\ref{eq3}) and (\ref{eq4}) we conclude that \begin{center} $asdimA(\Gamma) \leq n = Sim(\Gamma) .$ \end{center} \end{proof} \section{Asymptotic dimension of one-relator groups.} \begin{thm}\label{3.1} Let $G$ be a finitely generated one relator group. Then \begin{center} $asdimG \leq 2$. \end{center} \end{thm} \begin{proof} Let $G= \langle S \mid r \rangle$ be a presentation of $G$ where $S$ is finite and $r$ is a cyclically reduced word in $S \bigcup S^{-1}$. To exclude trivial cases, we assume that $S$ contains at least two elements and $\mid r \mid > 0$ (we denote by $\mid r \mid$ the length of the relator $r$ in the free group $F(S)$).\\ We may assume that every letter of $S$ appears in $r$. Otherwise our group $G$ is isomorphic to a free product $H \ast F$ of a finitely generated one relator group $H$ with relator $r$ and generating set $S_{H} \subseteq S$ consisting of all letters which appear in $r$ and a free group $F$ with generating set the remaining letters of $S$. We recall that the asymptotic dimension of any finitely generated non-abelian free group is equal to one. Then $asdim G = max \lbrace asdim H, asdim F \rbrace = max \lbrace asdim H, 1 \rbrace$ (see \cite{BD04}).\\ We denote by $\epsilon_{r}(s)$ the exponent sum of a letter $s \in S$ in a word $r$ and by $oc_{r}(s)$ the minimum number of the positions of appearance of the elements of the set $\lbrace s^{k} \mid 0 \neq k \in \mathbb{Z} \rbrace$ in the cyclically reduced word $r$. For example, if $r=abcab^{10}a^{-2}c^{-1}$, then $oc_{r}(a)=3$, $oc_{r}(b)=2$, $oc_{r}(c)=2$ and $\epsilon_r (c)=0$.
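For illustration (this computation is not needed later), in the same example one also has $\epsilon_{r}(a)=1+1-2=0$ and $\epsilon_{r}(b)=1+10=11$, while, as noted, $\epsilon_{r}(c)=0$; in particular this relator falls under Case 1 below, with $a$ (or $c$) playing the role of a letter of exponent sum zero.\\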
We observe that if there exists $b \in S$ such that $oc_{r}(b)=1$, then the group $G$ is free (see \cite{LynSch}, thm 5.1, page 198), so $asdim G =1$. From now on we assume that for every $s \in S$ we have that $oc_{r}(s) \geq 2$ (so $\mid r \mid\geq 4$).\\ The proof is by induction on the length of $r$. We observe that if $\mid\!r\!\mid = 4$ then the statement of the theorem holds since by the result of D.Matsnev \cite{Mats} we have that $asdim G \leq \lceil\!\frac{\mid\!r\!\mid}{2}\!\rceil = 2$.\\ We assume that the statement of the theorem holds for all one relator groups with relator length smaller than or equal to $\mid\!r\!\mid - 1$. We follow the arguments of McCool, Schupp and Magnus (see \cite{LynSch}, thm 5.1, page 198), which we shall describe in what follows. We distinguish two cases.\\ We note that the argument we use in case 1 is slightly different from the argument of McCool and Schupp as we will consider HNN-extensions over finitely generated groups. To be more precise, the classical arguments prove that if some generator has exponent sum zero in the defining relator of $G$, then $G$ is an HNN-extension of another one relator group over an infinitely generated non-abelian free subgroup. Our contribution here is that we show that $G$ is an HNN-extension of another one relator group over a \emph{finitely} generated non-abelian free subgroup.\\ In case 2, where every generator has non-zero exponent sum, we use the original argument of Magnus to show that $G$ can be embedded in a one relator group $\Gamma$ whose defining relator has a generator of exponent sum zero.\\ \textbf{Case 1}: There exists a letter $a \in S$ such that $\epsilon_{r}(a)=0$.\\ We shall exhibit $G$ as an HNN-extension of a one relator group $G_{1}$ whose defining relator has shorter length than $r$, over a finitely generated free subgroup $F$.\\ Let $S = \lbrace a=s_{1}, s_{2}, s_{3}, s_{4}, \ldots , s_{k} \rbrace $. Set $s^{(j)}_{i} = a^{j} s_{i} a^{-j}$ for $j \in \mathbb{Z}$ and for $k \geq i \geq 2$. Rewrite $r$ scanning it from left to right and changing any occurrence of $a^{j} s_{i}$ to $s^{(j)}_{i} a^{j}$, collecting the powers of adjacent $a$-letters together and continuing with the leftmost occurrence of $a$ or its inverse in the modified word.\\ We denote by $r^{\prime}$ the modified word in terms of $s^{(j)}_{i}$. We note that by doing this we make at least one cancellation of $a$ and its inverse. The resulting word $r^{\prime}$, which represents $r$ in terms of the $s^{(j)}_{i}$ and their inverses, has length smaller than or equal to $\mid r \mid - 2$.\\ For example, if $r=as_{2}s_{3}as_{2}^{4}a^{-2}s_{3}$ then $r^{\prime}=s^{(1)}_{2}s^{(1)}_{3}(s^{(2)}_{2})^{4}s^{(0)}_{3}$.\\ Let $m$ and $M$ be the minimal and the maximal superscript of all $s_{i}^{(j)}$ ($i \geq 2$) occurring in $r^{\prime}$ respectively. To be more precise, \begin{center} $m=min \lbrace j \mid s_{i}^{(j)}$ occurs in $r^{\prime} \rbrace$ and $M=max \lbrace j \mid s_{i}^{(j)}$ occurs in $r^{\prime} \rbrace.$ \end{center} Continuing our example, we have $m=0$ and $M=2$.\\ \textbf{Claim 1.1:} In case 1 we have $M-m>0$ and $m \leq 0 \leq M$.\\ We may assume, replacing $r$ with a suitable cyclic permutation if necessary, that $r$ begins with $a^{k}$ for some $k\neq 0$.
Then we can write $r=a^{k}swa^{n}tz$, where $k,n\neq 0$, $a \notin \lbrace s,t \rbrace \subseteq S$ and both $a$ and $a^{-1}$ do not appear in the word $z$ ($oc_z (a)=0$).\\ Then we observe that the letter $s$ has as superscript $k$ in the word $r^{\prime}$ while $t$ has as superscript $0$ in the word $r^{\prime}$. Since $k\neq 0$ we have that $M-m>0$. This completes the proof of the claim 1.1. \\ \textbf{Claim 1.2:} We claim that $G$ has a presentation \begin{center} $\langle a, s^{(j)}_{i},(i \in \lbrace 2 , \ldots , k \rbrace ) , (j \in \lbrace m, \ldots , M \rbrace ) \mid r^{\prime} $, $ a s^{(j^{\prime})}_{i} a^{-1}(s^{(j^{\prime}+1)}_{i})^{-1} $ $ (j^{\prime} \in \lbrace m, \ldots , M-1 \rbrace ) \rangle$. \end{center} To verify the claim, let $H$ be the group defined by the presentation given above. The map $\phi :G \longrightarrow H$ defined by \begin{center} $a \longmapsto a$, $s_{i} \longmapsto s_{i}^{(0)}$ \end{center} is a homomorphism since $\phi(r)=r^{\prime}$. On the other hand, the map $\psi :H \longrightarrow G$ defined by \begin{center} $a \longmapsto a$, $s_{i}^{(j)} \longmapsto a^{j}s_{i}a^{-j}$ \end{center} is also a homomorphism since all relators of $H$ are sent to $1_{G}$.\\ It is easy to verify that $\psi \circ \phi$ is the identity map of $G$. The homomorphism $\phi \circ \psi : H \rightarrow H$ maps $a \mapsto a$, $s_{i}^{(0)} \mapsto s_{i} \mapsto s_{i}^{(0)}$ and $s_{i}^{(j)} \mapsto a^{j}s_{i}a^{-j} \mapsto a^{j}s_{i}^{(0)}a^{-j}$.\\ Now we show that $s_{i}^{(j)} = a^{j}s_{i}^{(0)}a^{-j}$. We have \begin{center} $ a^{1}s_{i}^{(0)}a^{-1}=s_{i}^{(1)}$\\ $ a^{1}s_{i}^{(1)}a^{-1}=s_{i}^{(2)}$\\ $\ldots$\\ $ a^{1}s_{i}^{(j-1)}a^{-1}=s_{i}^{(j)},$ \end{center} we combine these equations and we get $s_{i}^{(j)} = a^{1}s_{i}^{(j-1)}a^{-1}= a^{2}s_{i}^{(j-2)}a^{-2}= \ldots a^{j}s_{i}^{(0)}a^{-j}$, so $\phi \circ \psi =id_{H}$.\\ Since $\phi \circ \psi$ and $\psi \circ \phi$ are the identity maps on $H$ and $G$ respectively we deduce that $\phi$ is an isomorphism. This completes the proof of claim 1.2.\\ We set \begin{center} $G_{1}=\langle s^{(j)}_{i},(i \in \lbrace 2 , \ldots , k \rbrace ) , (j \in \lbrace m, \ldots , M \rbrace ) \mid r^{\prime} \rangle$. \end{center} We note that there exists a letter $s_{i_{m}} \in S$ such that $s_{i_{m}}^{(m)}$ appears in $r^{\prime}$ and a letter $s_{i_{M}} \in S$ such that $s_{i_{M}}^{(M)}$ appears in $r^{\prime}$.\\ Now let $F$ and $\Lambda$ be the subgroups of $G_{1}$ generated respectively by the set $X = \lbrace s^{(j)}_{i},(i \in \lbrace 2 , \ldots , k \rbrace ) , (j \in \lbrace m, \ldots , M-1 \rbrace )\rbrace$ and the set $Y = \lbrace s^{(j)}_{i},(i \in \lbrace 2 , \ldots , k \rbrace ) , (j \in \lbrace m+1, \ldots , M \rbrace )\rbrace$.\\ \textbf{Claim 1.3:} The groups $F$ and $\Lambda$ are free subgroups of $G_1$.\\ This claim follows by the Freiheitssatz (see \cite{LynSch}, thm 5.1, page 198), since $X$ omits a generator of $G_{1}$ occurring in $r^{\prime}$ (this is the letter $s_{i_{M}}^{(M)}$) the subgroup $F$ is free. The same holds for $\Lambda$, since $Y$ omits the letter $s_{i_{m}}^{(m)}$.\\ \textbf{Claim 1.4:} We have that $G \simeq G_{1}\ast_{F}$.\\ In particular, the map $s^{(j)}_{i} \longmapsto s^{(j+1)}_{i}$ from $X$ to $Y$ extends to an isomorphism from $F$ to $\Lambda$.\\ Thus $H$ is exhibited as the HNN extension of $G_{1}$ over the finitely generated free group $F$ using $a$ as a stable letter. Since $G \simeq H $ (claim 1.2) we have that \begin{center} $G \simeq G_{1}\ast_{F}$. 
\end{center} By the fact that $\mid\!r^{\prime}\!\mid < \mid\!r\!\mid$ and the inductive assumption we have that $asdimG_{1} \leq 2$.\\ To conclude we apply the inequality for HNN-extensions (Theorem \ref{2.9}):\\ $asdim G \leq max \lbrace asdim G_{1} , asdimF +1 \rbrace = max \lbrace asdim G_{1} , 2 \rbrace = 2$.\\ \textbf{Case 2}: For every letter $s \in S$ we have $\mid \epsilon_{r}(s) \mid \geq 1$.\\ Let $S = \lbrace a=s_{1}, b= s_{2}, s_{3}, s_{4}, . . . s_{k} \rbrace $ and $S_1 = \lbrace t,x,s_{i}, (3\leq i \leq k) \rbrace $. We consider the following homomorphism between the free group $F(S)$ and the free group $F(S_1)$ \begin{equation}\label{eq5} \phi : a \longmapsto t^{-\epsilon_{r}(b)} x , b \longmapsto t^{\epsilon_{r}(a)} ,s_{i} \longmapsto s_{i} (3 \leq i \leq k). \end{equation} We set \begin{center} $\Gamma=<S_1 \vert $ $ r(t,x,s_{i}, (3\leq i \leq k))>,$ \end{center} where we denote by $r(t,x,s_{i},... (i>2))$ the modified word in terms of $t,x,s_{i}, (3\leq i \leq k)$ which is obtained from $r$ when we replace a generator $s$ with $\phi(s)$. Then $\phi$ induces a homomorphism $$\phi : G \rightarrow \Gamma.$$ The following claim shows that the homomorphism $\phi$ is actually a monomorphism into $\Gamma$, so we have an embedding of $G$ into $\Gamma$ via $\phi$:\\ \textbf{Claim 2.1:} The homomorphism $\phi : G \rightarrow \Gamma$ is monomorphism.\\ \textit{Proof of the claim:} We set $S_{2} = \lbrace a,t,s_{i}, (3\leq i \leq k) \rbrace$ and $S_{1} = \lbrace x,t,s_{i}, (3\leq i \leq k) \rbrace$. We define $g : F(S) \rightarrow F(S_{2})$ and $f : F(S_{2}) \rightarrow F(S_{1})$, by \begin{center} $g : a \longmapsto a , b \longmapsto t^{\epsilon_{r}(a)} ,s_{i} \longmapsto s_{i} (3 \leq i \leq k)$, \end{center} \begin{center} $f : a \longmapsto t^{-\epsilon_{r}(b)} x , t \longmapsto t ,s_{i} \longmapsto s_{i} (3 \leq i \leq k)$. \end{center} We set: $r_{2}=g(r)$ , $G_{2}= <S_{2} \mid r_{2}>$, $r_{1}=f \circ g (r)=r(t,x,s_{i}, (3\leq i \leq k))$ and we observe that $\Gamma= <S_{1} \mid r_{1}>$. Then $g$ induces a homomorphism $\overline{g}: G \mapsto G_{2}$ and $f$ induces a homomorphism $\overline{f}: G_{2} \mapsto \Gamma$. Obviously, $\phi = \overline{f} \circ \overline{g}$.\\ We can easily see that $\overline{f}$ is an isomorphism. Indeed, the homomorphism $\psi : \Gamma \rightarrow G_2$ given by \begin{center} $ x \longmapsto t^{\epsilon_{r}(b)} a , t \longmapsto t ,s_{i} \longmapsto s_{i} (3 \leq i \leq k) $ \end{center} is the inverse homomorphism of $ \overline{f}$. It is enough to prove that $\overline{g}$ is monomorphism. This follows by the fact that the group $G_{2}$ is the amalgamated product $G \ast_{\mathbb{Z}} <t>$, where $\mathbb{Z} = < \lambda >$ and $\psi_{1}(\lambda)= b$ , $\psi_{2}(\lambda)= t^{\epsilon_{r}(a)}$ are the corresponding monomorphisms. We can see that the homomorphism $\overline{g}$ is the inclusion of $G$ into the amalgamated product, so $\overline{g}$ is injective. This completes the proof of the claim 2.1.\\ We denote by $r(t,x,s_{i},... (i \geq 3))$ the modified word in terms of $t,x,s_{i}, (3\leq i \leq k)$ which can be obtained from $r$ when we replace a generator $s$ with $\phi(s)$ and by $p$ the cyclically reduced $r(t,x,s_{i}, (3\leq i \leq k))$.\\ We observe that $\epsilon_{p}(t)=0$ and that $x$ occurs in $p$.\\ If the letter $t$ occurs in the word $p$, from Case 1 we have that $\Gamma$ is an HNN extension of some group $H$ over a free subgroup $F$, namely, $\Gamma=H\ast_{F}$. 
\\ As in Case 1, assuming that $p$ starts with $t$ or $t^{-1}$, we introduce new variables $s^{(j)}_{i} = t^{j}s_{i}t^{-j}$. Using these variables, we rewrite $p$ as a word $w$, eliminating all occurrences of $t$ and its inverse. Then we observe that $ \mid\!w\!\mid \leq \mid\!r\!\mid - 1$. By using the inductive assumption for $w$ we obtain that $$asdimG \leq asdim\Gamma \leq 2.$$ If the letter $t$ does not occur in the word $p$, we observe that $$\mid\!p\!\mid \leq \mid\!r\!\mid - 1.$$ Then $$\Gamma = \langle t \rangle \ast \Gamma^{\prime},$$ where $$\Gamma^{\prime}= \langle x,s_{i}, (3\leq i \leq k) \mid p \rangle.$$ Since $asdim ( G_{1} \ast G_{2} ) = max \lbrace asdim G_{1}, asdim G_{2} \rbrace$ holds (see \cite{BD04}) we have that $$asdim\Gamma = max \lbrace 1 , asdim \Gamma^{\prime} \rbrace.$$ Then by the inductive assumption for $p$ we have that $asdim\Gamma^{\prime} \leq 2$. Finally, we conclude that $$asdimG \leq asdim\Gamma \leq 2.$$ \end{proof} \subsection{One relator groups with asymptotic dimension two.} We recall that a nontrivial group $H$ is \textit{freely indecomposable} if $H$ cannot be expressed as a free product of two non-trivial groups. A natural question derived from Theorem \ref{3.1} is which one relator groups have asymptotic dimension two.\\ In this subsection, we will show that the asymptotic dimension of every finitely generated one relator group that is not a free group or a free product of a free group and a finite cyclic group is exactly two. We will use the following propositions \ref{3.2} and \ref{3.3} from \cite{FKS} and \cite{S} respectively. \begin{prop}\label{3.2} Let $G$ be an infinite finitely generated one relator group with torsion. If $G$ has more than one end, then $G$ is a free product of a nontrivial free group and a freely indecomposable one relator group. \end{prop} \begin{prop}\label{3.3} Let $G$ be a torsion free infinite finitely generated group. If $G$ is virtually free, then it is free. \end{prop} \begin{lem}\label{3.4} Let $G$ be an infinite finitely generated one relator group that is not a free group or a free product of a nontrivial free group and a freely indecomposable one relator group. Then $G$ is not virtually free. \end{lem} \begin{proof} If $G$ has torsion, by proposition \ref{3.2} we have that $G$ has exactly one end, so $G$ cannot be virtually free. If $G$ is torsion free and virtually free, by proposition \ref{3.3} we obtain that $G$ is free, and this is a contradiction with the assumption of the lemma. \end{proof} We note that every finite one relator group is cyclic. To see this, it is enough to observe that every one relator group with at least two generators has infinite abelianization. The following proposition is the main result of this subsection. \begin{prop}\label{3.5} Let $G$ be a finitely generated one relator group that is not a free group or a free product of a free group and a finite cyclic group. Then \begin{center} $asdim\,G = 2$. \end{center} \end{prop} \begin{proof} By Theorem \ref{3.1} we have that $asdim\,G \leq 2$. If $G$ is finite then it is cyclic. If $G$ is infinite we have that $1 \leq asdim\,G$. By a theorem of Gentimis (\cite{Ge}) we have that $asdim\,G=1$ if and only if $G$ is virtually free.\\ We assume that $G$ is an infinite virtually free group. So if $G$ is torsion free then by proposition \ref{3.3} we obtain that $G$ is free. If $G$ has torsion then by lemma \ref{3.4} $G$ is a free product of a nontrivial free group and a freely indecomposable one relator group $G_1$.
Observe that $G_1$ is an infinite noncyclic group; then by the same lemma $G_1$ is not virtually free, so $G$ is not virtually free either, which is a contradiction.\\ We conclude that $asdim\,G = 2$. \end{proof} \begin{cor} \textit{Let $G$ be a finitely generated freely indecomposable one relator group which is not cyclic. Then \begin{center} $asdim\,G = 2$. \end{center}} \end{cor} The following proposition can be found in \cite{LynSch} (prop. 5.13, page 107). \begin{prop}\label{3.6} Let $G=\langle x_1, \ldots ,x_n \vert r \rangle$ be a finitely generated one relator group, where $r$ is of minimal length under $Aut(F(\lbrace x_1, \ldots ,x_n \rbrace))$ and contains exactly the generators $x_1, \ldots ,x_k$ for some $k$, $0 \leq k \leq n$. Then $G$ is isomorphic to the free product $G_1 \ast G_2$, where $G_1=\langle x_1, \ldots ,x_k \vert r \rangle$ is freely indecomposable and $G_2$ is free with basis $\lbrace x_{k+1}, \ldots ,x_{n} \rbrace$. \end{prop} Using the above results we arrive at the following corollary, which describes the finitely generated one relator groups. \begin{cor} \textit{Let $G$ be a finitely generated one relator group. Then one of the following is true}:\\ \textbf{(i)} \textit{$G$ is finite cyclic, and $asdim\,G = 0$} \\ \textbf{(ii)} \textit{$G$ is a nontrivial free group or a free product of a nontrivial free group and a finite cyclic group, and $asdim\,G = 1$}\\ \textbf{(iii)} \textit{$G$ is an infinite freely indecomposable not cyclic group or a free product of a nontrivial free group and an infinite freely indecomposable not cyclic group, and $asdim\,G = 2$.} \end{cor} We can further describe the boundaries of hyperbolic one relator groups. We recall the following result of Buyalo and Lebedeva (see \cite{BL}) for hyperbolic groups: \begin{center} $asdim G = dim \partial_{\infty}G + 1$. \end{center} Let $G$ be an infinite finitely generated hyperbolic one relator group that is not virtually free. By T.Gentimis (\cite{Ge}) we obtain that $asdim G \neq 1$, so $asdim G =2$. Using the previous equality we obtain that $G$ has one-dimensional boundary.\\ Applying a theorem of M. Kapovich and B. Kleiner (see \cite{KK}) we can describe the boundaries of hyperbolic one relator groups. \begin{prop}\label{3.7} Let $G$ be a hyperbolic one relator group. Then $asdim\,G= 0$, $1$ or $2$.\\ \textbf{(i)} If $asdim\,G=0$, then $G$ is finite.\\ \textbf{(ii)} If $asdim\,G=1$, then $G$ is virtually free and the boundary is a Cantor set.\\ \textbf{(iii)} If $asdim\,G=2$ and $G$ does not split over a virtually cyclic subgroup, then one of the following holds:\\ 1. $\partial_{\infty}G$ is a Menger curve.\\ 2. $\partial_{\infty}G$ is a Sierpinski carpet.\\ 3. $\partial_{\infty}G$ is homeomorphic to $S^{1}$. \end{prop} \section{Graphs of Groups.} We will prove a general theorem for the asymptotic dimension of fundamental groups of finite graphs of groups. \begin{thm}\label{4.1} Let $(\mathbb{G}, Y)$ be a finite graph of groups with vertex groups $\lbrace G_{v} \mid v \in Y^{0} \rbrace$ and edge groups $\lbrace G_{e} \mid e \in Y^{1}_{+} \rbrace$. Then the following inequality holds: \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) \leq max_{v \in Y^{0} ,e \in Y^{1}_{+}} \lbrace asdim G_{v}, asdim\,G_{e} +1 \rbrace.$ \end{center} \end{thm} \begin{proof} We use induction on the number $\sharp E(Y)$ of edges of the graph $Y$. For $\sharp E(Y)=1$ we distinguish two cases.
The first case is when the fundamental group $\pi_{1}(\mathbb{G},Y,\mathbb{T})$ is an amalgamated product, so the statement of the theorem follows from the following inequality of A. Dranishnikov (see \cite{Dra08}) \begin{center} $asdim (A \ast_{C} B) \leq max \lbrace asdim A, asdim B, asdim C +1 \rbrace.$ \end{center} The second case is when the fundamental group $\pi_{1}(\mathbb{G},Y,\mathbb{T})$ is an HNN-extension, so the statement of the theorem follows from Theorem \ref{2.9}.\\ We assume that the statement of the theorem holds for $\sharp E(Y) \leq m$. Let $(\mathbb{G}, Y)$ be a finite graph of groups with $\sharp E(Y) = m+1$. We denote by $\mathbb{T}$ a maximal tree of $Y$. We distinguish two cases:\\ \textbf{Case 1:} $Y= \mathbb{T}$. We remove a terminal edge $e^{\prime}=[v,u]$ from the graph $Y$ such that the full subgraph of $Y$ denoted by $\Gamma$ and formed from the vertices $V(Y) \setminus \lbrace u \rbrace$ is connected. We observe that $\Gamma$ is also a tree which we denote by $\mathbb{T}^{\prime}$. Then $\pi_{1}(\mathbb{G},Y,\mathbb{T})= \pi_{1}(\mathbb{G},\Gamma,\mathbb{T}^{\prime}) \ast_{G_{e^{\prime}}} G_{u}$, so by the inequality for amalgamated products of A. Dranishnikov (see \cite{Dra08}), we have \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) \leq max \lbrace asdim\pi_{1}(\mathbb{G},\Gamma,\mathbb{T}^{\prime}) , asdim\,G_{u} , asdim G_{e^{\prime}}+1 \rbrace$. \end{center} Since $\sharp E(\Gamma) = m$, by the inductive assumption we obtain that \begin{center} $asdim \pi_{1}(\mathbb{G},\Gamma,\mathbb{T}^{\prime}) \leq max_{v \in Y^{0}\setminus \lbrace u \rbrace ,e \in Y^{1}_{+}\setminus \lbrace e^{\prime} \rbrace} \lbrace asdim G_{v}, asdim\,G_{e} +1 \rbrace$, \end{center} so \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) \leq max_{v \in Y^{0} ,e \in Y^{1}_{+}} \lbrace asdim G_{v}, asdim\,G_{e} +1 \rbrace.$ \end{center} \textbf{Case 2:} $\mathbb{T} \subsetneqq Y $. We remove from $Y$ an edge $e^{\prime}=[v,u]$ which does not belong to $\mathbb{T}$.\\ Since the tree $\mathbb{T}$ is a maximal tree of $Y$ and $e^{\prime} \not\in E(\mathbb{T})$ we have that the graph $\Gamma=Y \setminus e^{\prime}$ is connected and $\mathbb{T} \subseteq \Gamma$.\\ Then $\pi_{1}(\mathbb{G},Y,\mathbb{T})= \pi_{1}(\mathbb{G},\Gamma,\mathbb{T}) \ast_{G_{e^{\prime}}}$, so by the inequality for HNN-extensions (Theorem \ref{2.9}) we have \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) \leq max \lbrace asdim\pi_{1}(\mathbb{G},\Gamma,\mathbb{T}), asdim G_{e^{\prime}}+1 \rbrace$. \end{center} Since $\sharp E(\Gamma) = m$, by the inductive assumption we obtain that \begin{center} $asdim \pi_{1}(\mathbb{G},\Gamma,\mathbb{T}) \leq max_{v \in Y^{0},e \in Y^{1}_{+}\setminus \lbrace e^{\prime} \rbrace} \lbrace asdim G_{v}, asdim\,G_{e} +1 \rbrace$, \end{center} so \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) \leq max_{v \in Y^{0} ,e \in Y^{1}_{+}} \lbrace asdim G_{v}, asdim\,G_{e} +1 \rbrace.$ \end{center} \end{proof} We obtain as a corollary the following proposition. \begin{prop} Let $(\mathbb{G}, Y)$ be a finite graph of groups with \begin{center} vertex groups $\lbrace G_{v} \mid v \in Y^{0} \rbrace$ and edge groups $\lbrace G_{e} \mid e \in Y^{1}_{+} \rbrace$. \end{center} We assume that $max_{e \in Y^{1}_{+}} \lbrace asdim\,G_{e} \rbrace < max_{v \in Y^{0} } \lbrace asdim G_{v} \rbrace = n $.
Then, \begin{center} $asdim \pi_{1}(\mathbb{G},Y,\mathbb{T}) = n.$ \end{center} \end{prop} \textit{E-mail}: [email protected] \textit{Address:} Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Rd, Oxford OX2 6GG, U.K. \end{document}
\begin{document} \title{Extending Context-Sensitivity in Term Rewriting} \author{Bernhard Gramlich and Felix Schernhammer \footnote{This author has been supported by the Austrian Academy of Sciences under grant 22.361.} \institute{Institute of Computer Languages, Theory and Logic Group\\Vienna University of Technology} \email{\{gramlich,felixs\}@logic.at} }\maketitle \begin{abstract} We propose a generalized version of context-sensitivity in term rewriting based on the notion of ``forbidden patterns''. The basic idea is that a rewrite step should be forbidden if the redex to be contracted has a certain shape and appears in a certain context. This shape and context is expressed through forbidden patterns. In particular we analyze the relationships between this novel approach and the commonly used notion of context-sensitivity in term rewriting, as well as the feasibility of rewriting with forbidden patterns from a computational point of view. The latter feasibility is characterized by demanding that restricting a rewrite relation yields an improved termination behaviour while still being powerful enough to compute meaningful results. Sufficient criteria for both kinds of properties in certain classes of rewrite systems with forbidden patterns are presented. \end{abstract} \section{Introduction and Overview} Standard term rewriting systems (TRSs) are well-known to enjoy nice logical and closure properties. Yet, from an operational and computational point of view, i.e., when using term rewriting as computational model, it is also well-known that for non-terminating systems restricted versions of rewriting obtained by imposing context-sensitivity and/or strategy requirements may lead to better results (e.g., in terms of computing normal forms, head-normal forms, etc.). One major goal when using reduction strategies and context restrictions is to avoid non-terminating reductions. On the other hand the restrictions should not be too strong either, so that the ability to compute useful results in the restricted rewrite systems is not lost. We introduce a novel approach to context restrictions relying on the notion of ``forbidden patterns'', which generalizes existing approaches and succeeds in handling examples in the mentioned way (i.e., producing a terminating reduction relation which is powerful enough to compute useful results) where others fail. The following example motivates the use of reduction strategies and/or context restrictions. \begin{example} \label{ex2nd} Consider the following rewrite system, cf.\ e.g.\ \cite{ppdp01-lucas}: \begin{eqnarray*} \mathsf{inf}(x) & \rightarrow & x : \mathsf{inf}(\mathsf{s}(x)) \\ \mathsf{2nd}(x : (y : zs)) & \rightarrow & y \end{eqnarray*} This TRS is non-terminating and not even weakly normalizing. Still some terms like $\mathsf{2nd}(\mathsf{inf}(x))$ are reducible to a normal form while also admitting infinite reduction sequences. One goal of context restrictions and reduction strategies is to restrict derivations in a way such that normal forms can be computed whenever they exist, while infinite reductions are avoided. \end{example} One way to address the problem of avoiding non-normalizing reductions in Example \ref{ex2nd} is the use of reduction strategies.
For instance, for the class of (almost) orthogonal rewrite systems (the TRS of Example \ref{ex2nd} is orthogonal), always contracting all outermost redexes in parallel yields a normalizing strategy (i.e.\ whenever a term can be reduced to a normal form it is reduced to a normal form under this strategy) \cite{odonnell}. Indeed, one can define a sequential reduction strategy having the same property for an even wider class of TRSs \cite{middeldorp}. One major drawback (or asset, depending on one's point of view) of using reduction strategies, however, is that their use does not introduce new normal forms. This means that the set of normal forms w.r.t.\ some reduction relation is the same as the set of normal forms w.r.t.\ the reduction relation under some strategy. Hence, strategies can in general not be used to detect non-normalizing terms or to impose termination on not weakly normalizing TRSs (with some exceptions, cf.\ e.g.\ \cite[Theorem 7.4]{middeldorp}). Moreover, the process of selecting a suitable redex w.r.t.\ a reduction strategy is often complex and may thus be inefficient. These shortcomings of reduction strategies led to the advent of proper restrictions of rewriting that usually introduce new normal forms and select, respectively forbid, certain reductions according to the syntactic structure of a redex and/or its surrounding context. The most well-known approach to context restrictions is context-sensitive rewriting. There, a \emph{replacement map} $\mu$ specifies the arguments $\mu(f) \subseteq \{1, \dots, ar(f)\}$ which can be reduced for each function $f$. However, regarding Example \ref{ex2nd}, context-sensitive rewriting does not improve the situation, since allowing the reduction of the second argument of `$:$' leads to non-termination, while disallowing its reduction leads to incompleteness in the sense that for instance a term like $\mathsf{2nd}(\mathsf{inf}(x))$ cannot be normalized via the corresponding context-sensitive reduction relation, despite having a normal form in the unrestricted system. Other ideas of context restrictions range from explicitly modeling lazy evaluation (cf.~e.g.~ \cite{ toplas00-fokkink-et-al, wflp01-entcs02-lucas, wrs07-entcs08-schernhammer-gramlich}), to imposing constraints on the order of argument evaluation of functions (cf.\ e.g.\ \cite{ popl85-futatsugi-et-al, wrla98-entcs}), and to combinations of these concepts, also with standard context-sensitive rewriting (cf.~e.g.~ \cite{ ppdp01-lucas, lpar02-alpuente-et-al}). The latter generalized versions of context-sensitive rewriting are quite expressive and powerful (indeed some of them can be used to restrict the reduction relation of the TRS in Example \ref{ex2nd} in such a way that the restricted relation is terminating and still powerful enough to compute (head-)normal forms), but on the other hand tend to be hard to analyze and understand, due to the subtlety of the strategic information specified. The approach we present in this paper is simpler in that its definition only relies on matching and simple comparison of positions rather than on laziness or prioritizing the evaluation of certain arguments of functions over others.
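To make the dichotomy just described for Example \ref{ex2nd} concrete (this illustration is ours), suppose first that $2 \in \mu(:)$. Then the context-sensitive relation $\rightarrow_{\mu}$ still admits the infinite derivation \begin{equation*} \mathsf{inf}(x) \rightarrow_{\mu} x : \mathsf{inf}(\mathsf{s}(x)) \rightarrow_{\mu} x : (\mathsf{s}(x) : \mathsf{inf}(\mathsf{s}(\mathsf{s}(x)))) \rightarrow_{\mu} \dots \end{equation*} If instead $2 \not\in \mu(:)$ (and, say, $1 \in \mu(\mathsf{2nd})$), then $\mathsf{2nd}(\mathsf{inf}(x)) \rightarrow_{\mu} \mathsf{2nd}(x : \mathsf{inf}(\mathsf{s}(x)))$, and the latter term is a $\mu$-normal form even though the unrestricted system reduces it further to the normal form $\mathsf{s}(x)$.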
In order to reach the goal of restricting the reduction relation in such a way that it is terminating while still being powerful enough to compute useful results, we provide a method to verify termination of a reduction relation restricted by our approach (Section \ref{sec:proving-termination}) as well as a criterion which guarantees that normal forms computed by the restricted system are head-normal forms of the unrestricted system (Section \ref{sec:computing-meaningful-results}). Recently it turned out that, apart from using context-sensitivity as computation model for standard term rewriting (cf.~e.g.~ \cite{ic02-lucas,jflp98-lucas}), context-sensitive rewrite systems naturally also appear as intermediate representations in many areas relying on transformations, such as program transformation and termination analysis of rewrite systems with conditions \cite{hosc08-duran-et-al,techrep09-schernhammer-gramlich} / under strategies \cite{rta09-endrullis-hendriks}. This suggests that apart from using restrictions as guidance and thus as operational model for rewrite derivations, a general, flexible and well-understood framework of restricted term rewriting going beyond context-sensitive rewriting may be useful as a valuable tool in many other areas, too. The major problem in building such a framework is that imposing context restrictions on term rewriting in general invalidates the closure properties of term rewriting relations, i.e., stability under contexts and substitutions. Note that in the case of context-sensitive rewriting \`a la \cite{jflp98-lucas,ic02-lucas} only stability under contexts is lost. In this work we will sketch and discuss a generalized approach to context-sensitivity (in the sense of \cite{jflp98-lucas,ic02-lucas}) relying on \emph{forbidden patterns} rather than on forbidden arguments of functions. From a systematic point of view we see the following design decisions to be made. \begin{itemize} \item What part of the context of a (sub)term is relevant to decide whether the (sub)term may be reduced or not? \item In order to specify the restricted reduction relation, is it better/advantageous to explicitly define the allowed or the forbidden part of the context-free reduction relation? \item What are the forbidden/allowed entities, for instance whole subterms, contexts, positions, etc.? \item Does it depend on the shape of the considered subterm itself (in addition to its outside context) whether it should be forbidden or not (if so, stability under substitutions may be lost)? \item Which restrictions on forbidden patterns seem appropriate (also w.r.t.\ practical feasibility) in order to guarantee certain desired closure and preservation properties? \end{itemize} The remainder of the paper is structured as follows. In Section \ref{sec:preliminaries} we briefly recall some basic notions and notations. Rewriting with forbidden patterns is defined, discussed and exemplified in Section \ref{sec:rewriting-with-forbidden patterns}. In the main Sections \ref{sec:computing-meaningful-results} and \ref{sec:proving-termination} we develop some theory about the expressive power of rewriting with forbidden patterns (regarding the ability to compute original (head-)normal forms), and about how to prove ground termination for such systems via a constructive transformational approach. Crucial aspects are illustrated with the two running Examples \ref{ex2nd} and \ref{ex_app}.
Finally, in Section \ref{sec:conclusion-and-related-work} we summarize our approach and its application in the examples, discuss its relationship to previous approaches and briefly touch on the important perspective and open problem of (at least partially) automating the generation of suitable forbidden patterns in practice. \footnote{Due to lack of space the obtained results are presented without proofs. The latter can be found in the full technical report version of the paper, cf.\ \texttt{http://www.logic.at/staff/\{gramlich,schernhammer\}/}.} \section{Preliminaries} \label{sec:preliminaries} We assume familiarity with the basic notions and notations in term rewriting, cf.~e.g.~\cite{book98-baader-nipkow}, \cite{BeKlVr03}. Since we develop our approach in a many-sorted setting, we recall a few basics on many-sorted equational reasoning (cf.\ e.g.\ \cite{BeKlVr03}). A many-sorted signature $\mathcal{F}$ is a pair $(S, \Omega)$ where $S$ is a set of sorts and $\Omega$ is a family of (mutually disjoint) sets of typed function symbols: $\Omega = (\Omega_{\omega, s} \mid \omega \in S^*, s \in S)$. We also say that $f$ is of type $\omega \rightarrow s$ (or just $s$ if $\omega = \emptyset$) if $f \in \Omega_{\omega, s}$. $V = (V_s \mid s \in S)$ is a family of (mutually disjoint) countably infinite sets of typed variables (with $V \cap \Omega = \emptyset$). The set $\mathcal{T}(\mathcal{F}, V)_s$ of (well-formed) terms of sort $s$ is the least set containing $V_s$ and such that, whenever $f \in \Omega_{(s_1, \dots, s_n), s}$ and $t_i \in \mathcal{T}(\mathcal{F}, V)_{s_i}$ for all $1 \leq i \leq n$, also $f(t_1, \dots, t_n) \in \mathcal{T}(\mathcal{F}, V)_s$. The sort of a term $t$ is denoted by $sort(t)$. Rewrite rules are pairs of terms $l \rightarrow r$ where $sort(l) = sort(r)$. Subsequently, we make the types of terms and rewrite rules explicit only if they are relevant. Throughout the paper $x, y, z$ represent (sorted) variables. Positions are possibly empty sequences of natural numbers (the empty sequence is denoted by $\epsilon$). We use the standard partial order $\leq$ on positions given by $p \leq q$ if there is some position $p'$ such that $p.p' = q$ (i.e., $p$ is a prefix of $q$). $Pos(s)$ ($Pos_\mathcal{F}(s)$) denotes the set of (non-variable) positions of a term $s$. By $s \overset{p}{\rightarrow} t$ we mean rewriting at position $p$. Given a TRS $\mathcal{R} = (\mathcal{F}, R)$ we partition $\mathcal{F}$ into the set $D$ of defined function symbols, which are those that occur as root symbols of left-hand sides of rules in $R$, and the set $C$ of constructors (given by $\mathcal{F} \setminus D$). For TRSs $\mathcal{R} = (\mathcal{F}, R)$ we sometimes confuse $\mathcal{R}$ and $R$, e.g., by omitting the signature. \section{Rewriting with Forbidden Patterns} \label{sec:rewriting-with-forbidden patterns} In this section we define a generalized approach to rewriting with context restrictions relying on term patterns to specify forbidden subterms/super\-terms/positions rather than on a replacement map as in context-sensitive rewriting. \begin{definition}[forbidden pattern] A \emph{forbidden pattern} (w.r.t.~a signature $\mathcal{F}$) is a triple $\langle t, p, \lambda \rangle$, where $t \in \mathcal{T}(\mathcal{F}, V)$ is a term, $p$ a position from $Pos(t)$ and $\lambda \in \{h, b, a\}$.
\end{definition} The intended meaning of the last component $\lambda$ is to indicate whether the pattern forbids reductions \begin{itemize} \item exactly at position $p$, but not outside (i.e., strictly above or parallel to $p$) or strictly below -- ($h$ for here), or \item strictly below $p$, but not at or outside $p$ -- ($b$ for below), or \item strictly above position $p$, but not at, below or parallel to $p$ -- ($a$ for above). \end{itemize} Abusing notation we sometimes say a forbidden pattern is linear, unifies with some term etc.\ when we actually mean that the term in the first component of a forbidden pattern has this property. We denote a \emph{finite} set of forbidden patterns for a signature $\mathcal{F}$ by $\Pi_{\mathcal{F}}$ or just $\Pi$ if $\mathcal{F}$ is clear from the context or irrelevant. For brevity, patterns of the shape $\langle \_, \_, h/b/a \rangle$ are also called $h/b/a$-patterns, or $here/below/above$-patterns. \footnote{ Here and subsequently we use a wildcard notation for forbidden patterns. For instance, $\langle \_, \_, i\rangle$ stands for $\langle t, p, i\rangle$ where $t$ is some term and $p$ some position in $t$ of no further relevance.} Note that if for a given term $t$ we want to specify more than just one restriction by a forbidden pattern, this can easily be achieved by having several triples of the shape $\langle t,\_,\_ \rangle$. In contrast to context-sensitive rewriting, where a replacement map defines the allowed part of the reduction, the patterns are supposed to explicitly define its \emph{forbidden} parts, thus implicitly yielding allowed reduction steps as those that are not forbidden. \begin{definition}[forbidden pattern reduction relation] \label{fp_relation} Let $\mathcal{R} = (\mathcal{F}, R)$ be a TRS with forbidden patterns $\Pi_{\mathcal{F}}$. The \emph{forbidden pattern reduction relation} $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$, or $\rightarrow_{\Pi}$ for short, induced by some set of forbidden patterns $\Pi$ and $\mathcal{R}$, is given by $s \rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}} t$ if $s \overset{p}{\rightarrow}_{\mathcal{R}} t$ for some $p \in Pos_{\mathcal{F}}(s)$ such that there is no pattern $\langle u, q, \lambda \rangle \in \Pi_{\mathcal{F}}$, no context $C$ and no position $q'$ with \begin{itemize} \item $s = C[u\sigma]_{q'}$ and $p = q'.q$, if $\lambda = h$, \item $s = C[u\sigma]_{q'}$ and $p > q'.q$, if $\lambda = b$, and \item $s = C[u\sigma]_{q'}$ and $p < q'.q$, if $\lambda = a$. \end{itemize} \end{definition} Note that for a finite rewrite system $\mathcal{R}$ (with finite signature $\mathcal{F}$) and a finite set of forbidden patterns $\Pi_{\mathcal{F}}$ it is decidable whether $s \rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}} t$ for terms $s$ and $t$. We write $(\mathcal{R}, \Pi)$ for rewrite systems with associated forbidden patterns. Such a rewrite system $(\mathcal{R}, \Pi)$ is said to be $\Pi$-terminating (or just terminating if no confusion arises) if $\rightarrow_{\mathcal{R}, \Pi}$ is well-founded. We also speak of $\Pi$-normal forms instead of $\rightarrow_{\mathcal{R}, \Pi}$-normal forms. Special degenerate cases of $(\mathcal{R}, \Pi)$ include e.g.\ $\Pi = \emptyset$ where $\rightarrow_{\mathcal{R}, \Pi} = \rightarrow_{\mathcal{R}}$, and $\Pi = \{ \langle l,\epsilon,h \rangle \;|\; l \rightarrow r \in R\}$ where $\rightarrow_{\mathcal{R}, \Pi} = \emptyset$. In the sequel we use the notions of \emph{allowed} and \emph{forbidden} (by $\Pi$) redexes.
A redex $s|_p$ of a term $s$ is allowed if $s \overset{p}{\rightarrow_{\Pi}} t$ for some term $t$, and forbidden otherwise. \begin{example} \label{ex2nd2} Consider the TRS from Example \ref{ex2nd}. If $\Pi = \{\langle x : (y : \mathsf{inf}(z)), 2.2, h\rangle\}$, then $\rightarrow_{\Pi}$ can automatically be shown to be terminating. Moreover, $\rightarrow_{\Pi}$ is powerful enough to compute original head-normal forms if they exist (cf.\ Examples \ref{ex2nd3} and \ref{ex2nd4} below). \end{example} \begin{example} \label{ex_app} Consider the non-terminating TRS $\mathcal{R}$ given by \begin{equation} \nonumber\begin{tabular}[b]{r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l} $\mathsf{take}(0, y:ys)$ & $y$ & $\mathsf{app}(\mathsf{nil}, ys)$ & $ys$ \\ $\mathsf{take}(\mathsf{s}(x), y:ys)$ & $\mathsf{take}(x, ys)$ & $\mathsf{app}(x:xs, ys)$ & $x : \mathsf{app}(xs, ys)$ \\ $\mathsf{take}(x, \mathsf{nil})$ & $0$ & $\mathsf{inf}(x)$ & $x : \mathsf{inf}(\mathsf{s}(x))$ \end{tabular} \end{equation} with two sorts $S = \{Nat, NatList\}$, where the types of function symbols are as follows: $\mathsf{nil} \colon NatList$, $0: Nat$, $s: Nat \rightarrow Nat$, $:$ is of type $Nat, NatList \rightarrow NatList$, $\mathsf{inf}: Nat \rightarrow NatList$, $\mathsf{app}: NatList, NatList \rightarrow NatList$ and $\mathsf{take}: Nat, NatList \rightarrow Nat$. If one restricts rewriting in $\mathcal{R}$ via $\Pi$ given by \begin{equation} \nonumber\begin{tabular}[b]{r@{\hspace*{3ex}}c@{\hspace*{3ex}}l} $\langle x : \mathsf{inf}(y), 2, h\rangle\;\;$ & $\;\;\langle x : \mathsf{app}(\mathsf{inf}(y), zs), 2.1 , h\rangle\;\;$ & $\;\;\langle x : \mathsf{app}(y : \mathsf{app}(z, zs), us), 2, h\rangle$, \end{tabular} \end{equation} then $\rightarrow_{\Pi}$ is terminating and still every well-formed ground term can be normalized with the restricted relation $\rightarrow_{\Pi}$ (provided the term is normalizing). See Examples \ref{ex_app_compl} and \ref{ex_app_term} below for justifications of these claims. \end{example} Several well-known approaches to restricted term rewriting as well as to rewriting guided by reduction strategies occur as special cases of rewriting with forbidden patterns. In the following we provide some examples. Context-sensitive rewriting, where a replacement map $\mu$ specifies the arguments $\mu(f) \subseteq \{1, \dots, ar(f)\}$ which can be reduced for each function $f$, arises as a special case of rewriting with forbidden patterns by defining $\Pi$ to contain for each function symbol $f$ and each $j \in \{1, \dots, ar(f)\} \setminus \mu(f)$ the forbidden patterns $\langle f(x_1, \ldots, x_{ar(f)}), j, h\rangle$ and $\langle f(x_1, \ldots, x_{ar(f)}), j, b\rangle$. Moreover, with forbidden patterns it is also possible to simulate position-based reduction strategies such as innermost and outermost rewriting. The innermost reduction relation of a TRS $\mathcal{R}$ coincides with the forbidden pattern reduction relation if one uses the forbidden patterns $ \langle l, \epsilon, a \rangle$ for the left-hand sides $l$ of the rules of $\mathcal{R}$. Dually, if patterns $\langle l, \epsilon, b\rangle$ are used, the forbidden pattern reduction relation coincides with the \emph{outermost} reduction relation w.r.t.\ $\mathcal{R}$. However, note that more complex layered combinations of the aforementioned approaches, such as innermost context-sensitive rewriting, cannot be modeled by forbidden patterns as proposed in this paper.
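To make the allowedness test of Definition \ref{fp_relation} concrete, we include a small Python sketch (ours, purely illustrative and not part of the formal development; the term representation is ad hoc): terms are nested tuples whose first component is the function symbol, variables are strings, and positions are tuples of argument indices, with the empty tuple playing the role of $\epsilon$.
\begin{verbatim}
# Illustrative sketch only (not the authors' code): terms are tuples
# ('f', t1, ..., tn), variables are strings, positions are tuples of ints.

def subterm(t, p):                       # t|_p
    for i in p:
        t = t[i]                         # argument i sits at tuple index i
    return t

def match(pattern, term, subst=None):    # syntactic matching, pattern on the left
    subst = dict(subst or {})
    if isinstance(pattern, str):         # pattern variable
        if pattern in subst and subst[pattern] != term:
            return None
        subst[pattern] = term
        return subst
    if isinstance(term, str) or pattern[0] != term[0] or len(pattern) != len(term):
        return None
    for pi, ti in zip(pattern[1:], term[1:]):
        subst = match(pi, ti, subst)
        if subst is None:
            return None
    return subst

def positions(t, prefix=()):             # all positions of t
    yield prefix
    if not isinstance(t, str):
        for i, arg in enumerate(t[1:], start=1):
            yield from positions(arg, prefix + (i,))

def allowed(s, p, patterns):
    """Decide whether rewriting s at redex position p is allowed, where each
    pattern is a triple (u, q, lam) with lam in {'h', 'b', 'a'}."""
    for (u, q, lam) in patterns:
        for qp in positions(s):          # candidate positions q' with s = C[u sigma]_{q'}
            if match(u, subterm(s, qp)) is None:
                continue
            anchor = qp + q              # the position q'.q of the definition
            below = len(p) > len(anchor) and p[:len(anchor)] == anchor
            above = len(p) < len(anchor) and anchor[:len(p)] == p
            if (lam == 'h' and p == anchor) or \
               (lam == 'b' and below) or (lam == 'a' and above):
                return False
    return True
\end{verbatim}
For instance, encoding the pattern of Example \ref{ex2nd2} as ((':', 'x', (':', 'y', ('inf', 'z'))), (2, 2), 'h'), the function reports the inner $\mathsf{inf}$-redex of a term of the shape $x : (y : \mathsf{inf}(z))$ as forbidden, while all other redexes remain allowed.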
Still, the definition of forbidden patterns and rewriting with forbidden patterns is rather general and leaves many parameters open. In order to make this approach feasible in practice, it is necessary to identify interesting classes of forbidden patterns that yield a reasonable trade-off between power and simplicity. For these interesting classes of forbidden patterns we need methods which guarantee that the results (e.g.\ normal forms) computed by rewriting with forbidden patterns are meaningful, in the sense that they have some natural correlation with the actual results obtained by unrestricted rewriting. For instance, it is desirable that normal forms w.r.t.\ the restricted rewrite system are original head-normal forms. In this case one can use the restricted reduction relation to compute original normal forms (by an iterated process) whenever they exist (provided that the TRS in question is left-linear, confluent and the restricted reduction relation is terminating) (cf.\ Section \ref{sec:computing-meaningful-results} below for details). We define a criterion ensuring that normal forms w.r.t.\ the restricted system are original head-normal forms in the following section. \section{Computing Meaningful Results} \label{sec:computing-meaningful-results} We are going to use canonical context-sensitive rewriting as defined in \cite{jflp98-lucas,ic02-lucas} as an inspiration for our approach. There, for a given (left-linear) rewriting system $\mathcal{R}$ certain restrictions on the associated replacement map $\mu$ guarantee that $\rightarrow_{\mu}$-normal forms are $\rightarrow_{\mathcal{R}}$-head-normal forms. Hence, results computed by $\rightarrow_{\mu}$ and $\rightarrow_{\mathcal{R}}$ share the same root symbol. The basic idea is that reductions that are essential to create a more outer redex should not be forbidden. In the case of context-sensitive rewriting this is guaranteed by demanding that whenever an $f$-rooted term $t$ occurs (as subterm) in the left-hand side of a rewrite rule and has a non-variable direct subterm $t|_i$, then $i \in \mu(f)$. It turns out that for rewriting with forbidden patterns severe restrictions on the shape of the patterns are necessary in order to obtain results similar to the ones for canonical context-sensitive rewriting in \cite{jflp98-lucas}. First, no forbidden patterns of the shape $\langle \_, \epsilon, h \rangle$ or $\langle \_,\_,a \rangle$ may be used as they are in general not compatible with the desired root-normalizing behaviour of our forbidden pattern rewrite system. Moreover, for each pattern $\langle t, p, \_ \rangle$ we demand that \begin{itemize} \item $t$ is linear, \item $p$ is a variable position or a maximal (w.r.t.\ the prefix ordering $\leq$ on positions) non-variable position in $t$, and \item for each position $q \in Pos(t)$ with $q || p$ we have $t|_q \in V$. \end{itemize} We call the class of patterns obtained by the above restrictions \emph{simple patterns}. \begin{definition}[simple patterns] \label{def_simple_patterns} A set $\Pi$ of forbidden patterns is called \emph{simple} if it does not contain patterns of the shape $\langle \_, \epsilon, h \rangle$ or $\langle \_,\_,a \rangle$ and for every pattern $\langle t, p, \_ \rangle \in \Pi$ it holds that $t$ is linear, $t|_p \in V$ or $t|_p = f(x_1, \dots, x_{ar(f)})$ for some function symbol $f$, and for each position $q \in Pos(t)$ with $q || p$ we have that $t|_q$ is a variable.
\end{definition} Basically, these syntactic properties of forbidden patterns are necessary to ensure that reductions which are essential to enable other, more outer reductions are not forbidden. Moreover, these properties, in contrast to those defined in Definition \ref{def_canonicity} below, are independent of any concrete rewrite system. The forbidden patterns of the TRS $(\mathcal{R}, \Pi)$ in Example \ref{faa} below are not simple, since the patterns contain terms with parallel non-variable positions. This is the reason why it is not possible to head-normalize terms (w.r.t.\ $\mathcal{R}$) with $\rightarrow_{\Pi}$: \begin{example} \label{faa} Consider the TRS $\mathcal{R}$ given by \begin{equation} \nonumber\begin{tabular}[b]{r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l} $f(b, b)$ & $g(f(a, a))$ & $a$ & $b$ \end{tabular} \end{equation} and forbidden patterns $\langle f(a, a), 1, h \rangle$ and $\langle f(a, a), 2, h \rangle$. $f(a, a)$ is linear and $1$ and $2$ are maximal positions (w.r.t.\ $\leq$) within this term. However, positions $1$ and $2$ are both non-variable and thus e.g.\ for $\langle f(a, a), 1, h \rangle$ there exists a position $2 || 1$ such that $f(a, a)|_2 = a \not\in V$. Hence, $\Pi$ is too restrictive to compute all $\mathcal{R}$-head-normal forms in this example. Indeed, $f(a,a) \rightarrow_{\mathcal{R}}^* f(b,b) \rightarrow_{\mathcal{R}} g(f(a,a))$ where the latter term is an $\mathcal{R}$-head-normal form. The term $f(a, a)$ is a $\Pi$-normal form, although it is not a head-normal form (w.r.t.\ $\mathcal{R}$). Note also that the (first components of) forbidden patterns are not unifiable with the left-hand side of the rule that is responsible for the (later) possible root-step when reducing $f(a, a)$, not even if the forbidden subterms in the patterns are replaced by fresh variables. \end{example} Now we are ready to define canonical rewriting with forbidden patterns within the class of simple forbidden patterns. To this end, we demand that patterns do not overlap with left-hand sides of rewrite rules in a way such that reductions necessary to create a redex might be forbidden. \begin{definition}[canonical forbidden patterns] \label{def_canonicity} Let $\mathcal{R} = (\mathcal{F}, R)$ be a TRS with simple forbidden patterns $\Pi_{\mathcal{F}}$ (w.l.o.g.\ we assume that $R$ and $\Pi_{\mathcal{F}}$ have no variables in common). Then, $\Pi_{\mathcal{F}}$ is \emph{$\mathcal{R}$-canonical} (or just \emph{canonical}) if the following holds for all rules $l \rightarrow r \in R:$ \begin{enumerate} \item \label{rule overlaps pattern} There is no pattern $\langle t, p, \lambda \rangle$ such that \begin{itemize} \item $t'|_q$ and $l$ unify for some $q \in Pos_\mathcal{F}(t)$ where $t' = t[x]_p$ and $q > \epsilon$, and \item there exists a position $q' \in Pos_{\mathcal{F}}(l)$ with $q.q' = p$ for $\lambda = h$ respectively $q.q' > p$ for $\lambda = b$. \end{itemize} \item \label{pattern overlaps rule} There is no pattern $\langle t, p, \lambda \rangle$ such that \begin{itemize} \item $t'$ and $l|_q$ unify for some $q \in Pos_\mathcal{F}(l)$ where $t' = t[x]_p$, and \item there exists a position $q'$ with $q.q' \in Pos_{\mathcal{F}}(l)$ and $q' = p$ for $\lambda = h$ respectively $q' > p$ for $\lambda = b$. \end{itemize} Here, $x$ denotes a fresh variable. \end{enumerate} \end{definition} \begin{example} Consider the TRS $\mathcal{R}$ given by the single rule \begin{eqnarray*} l = f(g(h(x))) & \rightarrow & x = r\,.
\end{eqnarray*} Then, $\Pi = \{\langle t,p,h \rangle\}$ with $t = g(f(a))$, $p = 1.1$ is not canonical since $t[x]_p|_q = g(f(y))|_1 = f(y)$ and $l$ unify where $q = q' = 1$ and thus $q.q' = p$ (hence $root(l|_{q'}) = g$). Moreover, also $\Pi = \{\langle t, p, h\rangle\}$ with $t = g(i(x))$, $p = 1$ is not canonical, since $l|_q = g(h(x))$ and $t[x]_p = g(y)$ unify for $q = 1$ and $q.p = 1.1$ is a non-variable position in $l$. On the other hand, $\Pi = \{\langle g(g(x)), 1.1, h \rangle\}$ is canonical. Note that all of the above patterns are simple. \end{example} In order to prove that normal forms obtained by rewriting with simple and canonical forbidden patterns are actually head-normal forms w.r.t.\ unrestricted rewriting, and also to provide more intuition on canonical rewriting with forbidden patterns, we define the notion of a \emph{partial redex} (w.r.t.\ a rewrite system $\mathcal{R}$) as a term that is matched by a non-variable term $l'$ which in turn matches the left-hand side of some rule of $\mathcal{R}$. We call $l'$ a \emph{witness} for the partial match. \begin{definition}[Partial redex] Given a rewrite system $\mathcal{R} = (\mathcal{F}, R)$, a \emph{partial redex} is a term $s$ that is matched by a non-variable term $l'$ which in turn matches the left-hand side of some rule in $R$. The (non-unique) term $l'$ is called \emph{witness} for a partial redex $s$. \end{definition} Thus, a partial redex can be viewed as a candidate for a future reduction step, which can only be performed if the redex has actually been created through more inner reduction steps. Hence, the idea of canonical rewriting with forbidden patterns could be reformulated as guaranteeing that the reduction of subterms of partial redexes is allowed whenever these reductions are necessary to create an actual redex. \begin{lemma} \label{lem_canonicity} Let $\mathcal{R} = (\mathcal{F}, R)$ be a \emph{left-linear} TRS with \emph{canonical} (hence, in particular \emph{simple}) forbidden patterns $\Pi_{\mathcal{F}}$. Moreover, let $s$ be a partial redex w.r.t.\ the left-hand side of some rule $l$ with witness $l'$ such that $l|_p \not\in V$ but $l'|_p \in V$. Then in the term $C[s]_q$ the position $q.p$ is allowed by $\Pi_{\mathcal{F}}$ for reduction provided that $q$ is allowed for reduction. \end{lemma} \begin{theorem} \label{prop_complete} Let $\mathcal{R} = (\mathcal{F}, R)$ be a \emph{left-linear} TRS with \emph{canonical} (hence in particular \emph{simple}) forbidden patterns $\Pi_{\mathcal{F}}$. Then $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-normal forms are $\rightarrow_{\mathcal{R}}$-head-normal forms. \end{theorem} Given a left-linear and confluent rewrite system $\mathcal{R}$ and a set of canonical forbidden patterns $\Pi$ such that $\rightarrow_{\Pi}$ is well-founded, one can thus normalize a term $s$ (provided that $s$ is normalizing) by computing the $\rightarrow_{\Pi}$-normal form $t$ of $s$, which is $\mathcal{R}$-root-stable according to Theorem \ref{prop_complete}, and then do the same recursively for the immediate subterms of $t$. Confluence of $\mathcal{R}$ assures that the unique normal form of $s$ will indeed be computed this way. \begin{example} \label{ex2nd3} As the forbidden pattern defined in Example \ref{ex2nd2} is (simple and) canonical, Theorem \ref{prop_complete} yields that $\rightarrow_{\mathcal{R}, \Pi}$-normal forms are $\rightarrow_{\mathcal{R}}$-head-normal forms. For instance, we get $\mathsf{2nd}(\mathsf{inf}(0)) \rightarrow_{\Pi}^* \mathsf{s}(0)$.
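In more detail (the following spelled-out derivation is ours), one allowed reduction sequence is \begin{equation*} \mathsf{2nd}(\mathsf{inf}(0)) \rightarrow_{\Pi} \mathsf{2nd}(0 : \mathsf{inf}(\mathsf{s}(0))) \rightarrow_{\Pi} \mathsf{2nd}(0 : (\mathsf{s}(0) : \mathsf{inf}(\mathsf{s}(\mathsf{s}(0))))) \rightarrow_{\Pi} \mathsf{s}(0), \end{equation*} where in the third term the remaining $\mathsf{inf}$-redex is forbidden by the pattern (it sits at position $2.2$ of the argument of $\mathsf{2nd}$, which is matched by $x : (y : \mathsf{inf}(z))$), so that only the root step with the $\mathsf{2nd}$-rule remains, yielding the head-normal form $\mathsf{s}(0)$.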
\end{example} \begin{example} \label{ex_app_compl} Consider the TRS $\mathcal{R}$ with forbidden patterns $\Pi$ from Example \ref{ex_app}. We will prove below that $\mathcal{R}$ is $\Pi$-terminating (cf.\ Example \ref{ex_app_term}). Furthermore we are able to show that every well-formed ground term that is reducible to a normal form in $\mathcal{R}$ is reducible to the same normal form with $\rightarrow_{\mathcal{R}, \Pi}$ and that every $\rightarrow_{\mathcal{R},\Pi}$-normal form is root-stable w.r.t.\ $\rightarrow_{\mathcal{R}}$. \end{example} \section{Proving Termination} \label{sec:proving-termination} We provide another example of a result on a restricted class of forbidden patterns, this time concerning termination. We exploit the fact that, given a finite signature and linear $h$-patterns, a set of allowed contexts complementing each forbidden one can be constructed. Thus, we can transform a rewrite system with this kind of forbidden patterns into a standard (i.e., context-free) one by explicitly instantiating and embedding all rewrite rules (in a minimal way) in contexts (including a designated $\mathsf{top}$-symbol representing the \emph{empty} context) such that rewrite steps in these contexts are allowed. To this end we propose a transformation that proceeds by iteratively instantiating and embedding rules in a minimal way. This is to say that the used substitutions map variables only to terms of the form $f(x_1, \dots, x_{ar(f)})$ and the contexts used for the embeddings have the form $g(x_1, \dots, x_{i-1}, \Box, x_{i+1}, \dots, x_{ar(g)})$ for some function symbols $f \in \mathcal{F}$, $g \in \mathcal{F} \uplus \{\mathsf{top}\}$ and some argument position $i$ of $g$. It is important to keep track of the position of the initial rule inside the embeddings. Thus we associate to each rule introduced by the transformation a position pointing to the embedded original rule. To all initial rules of $\mathcal{R}$ we thus associate $\epsilon$. Note that it is essential to consider a new unary function symbol $\mathsf{top}_s$ for every sort $s \in S$ (of type $s \rightarrow s$) representing the empty context. This is illustrated by the following example. \begin{example} Consider the TRS given by \begin{equation} \nonumber\begin{tabular}[b]{r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l} $a$ & $f(a)$ & $f(x)$ & $x$ \end{tabular} \end{equation} with $\mathcal{F} = \{a, f\}$ and the set of forbidden patterns $\Pi = \{\langle f(x), 1, h\rangle\}$. This system is not $\Pi$-terminating as we have \begin{equation*} a \rightarrow_{\Pi} f(a) \rightarrow_{\Pi} a \rightarrow_{\Pi} \dots \end{equation*} Whether a subterm $s|_p = a$ is allowed for reduction by $\Pi$ depends on its context. Thus, according to the idea of our transformation we try to identify all contexts $C[a]_p$ such that the reduction of $a$ at position $p$ is allowed by $\Pi$. However, there is no such (non-empty) context, although $a$ may be reduced if $C$ is the empty context. Moreover, there cannot be a rule $l \rightarrow r$ in the transformed system where $l = a$, since that would allow the reduction of terms that might be forbidden by $\Pi$. Our solution to this problem is to introduce a new function symbol $\mathsf{top}$ explicitly representing the empty context. Thus, in the example the transformed system will contain a rule $\mathsf{top}(a) \rightarrow \mathsf{top}(f(a))$.
\end{example} Abusing notation we subsequently use only one $\mathsf{top}$-symbol, while we actually mean the $\mathsf{top}_s$-symbol of the appropriate sort. Moreover, in the following by rewrite rules we always mean rewrite rules with an associated (embedding) position, unless stated otherwise. All forbidden patterns used in this section (particularly in the lemmata) are linear here-patterns. We will make this general assumption explicit only in the more important results. \begin{definition}[instantiation and embedding] \label{def_t} Let $\mathcal{F} = (S, \Omega)$ be a signature, let $\langle l \rightarrow r, p\rangle$ be a rewrite rule of sort $s$ over $\mathcal{F}$ and let $\Pi$ be a set of forbidden patterns (linear, $h$). The set of minimal instantiated and embedded rewrite rules $T_{\Pi}(\langle l \rightarrow r, p\rangle)$ (or just $T(\langle l \rightarrow r, p\rangle)$) is $T^i_{\Pi}(\langle l \rightarrow r, p\rangle) \uplus T_{\Pi}^e(\langle l \rightarrow r, p\rangle)$ where \begin{eqnarray*} T^e_{\Pi}(\langle l \rightarrow r, p\rangle) & = & \{ \langle C[l] \rightarrow C[r], i.p \rangle \mid C = f(x_1, \dots, x_{i-1}, \Box, x_{i+1}, \dots, x_{ar(f)}),\\ & & f \in \Omega_{(s_1, \dots, s_{i-1}, s, s_{i+1}, \dots, s_{ar(f)}), s'}, f \in \mathcal{F} \uplus \{\mathsf{top}_s \mid s \in S\}, i \in \{1, \dots, ar(f)\},\\ & & \exists \langle u, o, h \rangle \in \Pi. u|_q \theta = l \theta \wedge q \not= \epsilon \wedge o = q.p \}\\ T^i_{\Pi}(\langle l \rightarrow r, p\rangle) & = & \{ \langle l \sigma \rightarrow r \sigma, p\rangle \mid x \sigma = f(x_1, \dots, x_{ar(f)}), sort(x) = sort(f(x_1, \dots, x_{ar(f)})), \\ & &f \in \mathcal{F}, y \not= x \Rightarrow y \sigma = y, x \in RV_{\Pi}(l, p)\} \end{eqnarray*} and $RV_{\Pi}(l, p) = \{x \in Var(l) \mid \exists \langle u, o, h \rangle \in \Pi.\ \exists q.\ \theta = mgu(u|_q, l) \wedge q.p = o \wedge x \theta \not \in V \}$. We also call the elements of $T(\langle l \rightarrow r, p\rangle)$ the one-step $T$-successors of $\langle l \rightarrow r, p\rangle$. The reflexive-transitive closure of the one-step $T$-successor relation is the many-step $T$-successor relation or just $T$-successor relation. We denote the set of all many-step $T$-successors of a rule $\langle l \rightarrow r, p\rangle$ by $T^*(\langle l \rightarrow r, p\rangle)$. \end{definition} The set $RV_{\Pi}(l, p)$ of ``relevant variables'' consists of those variables whose instantiation might contribute to a matching by some (part of a) forbidden pattern term. Note that in the generated rules $\langle l' \rightarrow r', p'\rangle$ in $T_{\Pi}(\langle l \rightarrow r, p\rangle)$, a fresh $\mathsf{top}_s$-symbol can only occur at the root of both $l'$ and $r'$ or not at all, according to the construction in Definition \ref{def_t}. \begin{example} \label{ex-for-instantiaion-and-embedding} Consider the TRS $(\mathcal{R}, \Pi)$ where $\mathcal{R} = (\{a,f,g\},\{f(x) \rightarrow g(x)\})$ and the forbidden patterns $\Pi$ are given by $\{\langle g(g(f(a))), 1.1, h\rangle \}$. $T(\langle f(x) \rightarrow g(x), \epsilon\rangle)$ consists of the following rewrite rules.
\begin{eqnarray} \label{1} \langle f(f(x)) & \rightarrow & g(f(x)), \epsilon\rangle \\ \label{2} \langle f(g(x)) & \rightarrow & g(g(x)), \epsilon\rangle \\ \label{3} \langle f(a) & \rightarrow & g(a), \epsilon\rangle \\ \label{4} \langle f(f(x)) & \rightarrow & f(g(x)), 1\rangle \\ \label{5} \langle g(f(x)) & \rightarrow & g(g(x)), 1\rangle \end{eqnarray} Note that $RV_{\Pi}(f(x), \epsilon) = \{x\}$ because $g(g(f(a)))|_{1.1} = f(a)$ unifies with $f(x)$ with mgu $\theta$, where $x \theta = a \not \in V$. On the other hand $RV_{\Pi}(f(f(x)), 1) = \emptyset$. \end{example} \begin{lemma}[finiteness of instantiation and embedding] \label{lem_term} Let $\langle l \rightarrow r, p\rangle$ be a rewrite rule and let $\Pi$ be a set of forbidden patterns. The set of (many-step) instantiations and embeddings of $\langle l \rightarrow r, p\rangle$ (i.e.\ $T^*(\langle l \rightarrow r, p\rangle)$) is finite. \end{lemma} The transformation we are proposing proceeds by iteratively instantiating and embedding rewrite rules. The following definitions identify the rules for which no further instantiation and embedding is needed. \begin{definition}[$\Pi$-stable] \label{def_stable} Let $\langle l \rightarrow r, p\rangle$ be a rewrite rule and let $\Pi$ be a set of forbidden patterns. $\langle l \rightarrow r, p\rangle$ is $\Pi$-stable ($stb_{\Pi}(\langle l \rightarrow r, p\rangle)$ for short) if there is no context $C$ and no substitution $\sigma$ such that $C[l \sigma]_q|_{q'} = u \theta$ and $q.p = q'.o$ for any forbidden pattern $\langle u, o, h \rangle \in \Pi$ and any $\theta$. \end{definition} Note that $\Pi$-stability is effectively decidable (for finite signatures and finite $\Pi$), since only contexts and substitutions involving terms not exceeding a certain depth depending on $\Pi$ need to be considered. \begin{definition}[$\Pi$-obsolete] Let $\langle l \rightarrow r, p\rangle$ be a rewrite rule and let $\Pi$ be a set of forbidden patterns. $\langle l \rightarrow r, p\rangle$ is $\Pi$-obsolete ($obs_{\Pi}(\langle l \rightarrow r, p\rangle)$ for short) if there is a forbidden pattern $\langle u, o, h \rangle \in \Pi$ such that $l|_q = u \theta$ and $p = q.o$. \end{definition} In Example \ref{ex-for-instantiaion-and-embedding}, the rules (\ref{1}), (\ref{2}) and (\ref{4}) are $\Pi$-stable, while rules (\ref{3}) and (\ref{5}) would be processed further. After two more steps e.g.\ a rule $\langle g(g(f(a))) \rightarrow g(g(g(a))), 1.1\rangle$ is produced that is $\Pi$-obsolete. The following lemmata state some properties of $\Pi$-stable rules. \begin{lemma} \label{lem_comp} Let $\Pi$ be a set of forbidden patterns and let $\langle l' = C[l \sigma]_p \rightarrow C[r \sigma]_p = r', p\rangle$ be a $\Pi$-stable rewrite rule corresponding to $l \rightarrow r$. If $s \rightarrow t$ with $l' \rightarrow r'$, then $s \rightarrow_{\Pi} t$ with $l \rightarrow r$. \end{lemma} \begin{lemma} \label{lem_succ} Let $\langle l \rightarrow r, p\rangle$ be a rule and $\Pi$ be a set of forbidden patterns. If $T(\langle l \rightarrow r, p\rangle) = \emptyset$, then $\langle l \rightarrow r, p\rangle$ is either $\Pi$-stable or $\Pi$-obsolete. \end{lemma} \begin{definition} \label{trafo} Let $\mathcal{R} = (\mathcal{F}, R)$ be a TRS with an associated set of forbidden patterns $\Pi$ where $\mathcal{F} = (S, \Omega)$. The transformation $T$ maps TRSs with forbidden patterns to standard TRSs $T(\mathcal{R}, \Pi)$. It proceeds in $5$ steps.
\begin{enumerate} \item $R^{tmp} = \{\langle l \rightarrow r, \epsilon\rangle \mid l \rightarrow r \in R\}$\\ $R^{acc} = \emptyset$ \item \label{rec} $R^{acc} = \{\langle l \rightarrow r, p\rangle \in R^{tmp} \mid stb_{\Pi}(\langle l \rightarrow r, p\rangle) \}$ \\ $R^{tmp} = \{\langle l \rightarrow r, p\rangle \in R^{tmp} \mid \neg stb_{\Pi}(\langle l \rightarrow r, p\rangle) \wedge \neg obs_{\Pi}(\langle l \rightarrow r, p\rangle) \}$ \item $R^{tmp} = \bigcup_{\langle l \rightarrow r, p \rangle \in R^{tmp}} T(\langle l \rightarrow r, p \rangle)$ \item If $R^{tmp} \not= \emptyset$ go to \ref{rec} \item $T(\mathcal{R},\Pi) = (\mathcal{F} \uplus \{\mathsf{top}_s \mid s \in S\}, \{l \rightarrow r \mid \langle l \rightarrow r, p\rangle \in R^{acc} \})$ \end{enumerate} \end{definition} In the transformation rewrite rules are iteratively created and collected in $R^{tmp}$ (temporary rules). Those rules that are $\Pi$-stable and will thus be present in the final transformed system are collected in $R^{acc}$ (accepted rules). \begin{lemma} \label{lemma_sound} Let $\mathcal{R}$ be a rewrite system and $\Pi$ be a set of forbidden (linear $h$-)patterns. If $s \rightarrow_{\mathcal{R},\Pi} t$ for ground terms $s$ and $t$, then $\mathsf{top}(s) \rightarrow \mathsf{top}(t)$ in $T(\mathcal{R}, \Pi)$. \end{lemma} \begin{theorem} \label{ground-termination} Let $\mathcal{R}$ be a TRS and $\Pi$ be a set of linear $here$-patterns. We have $s \rightarrow_{\Pi}^+ t$ for ground terms $s$ and $t$ if and only if $\mathsf{top}(s) \rightarrow_{T(\mathcal{R}, \Pi)}^+ \mathsf{top}(t)$. \end{theorem} \begin{proof} The result is a direct consequence of Lemmata \ref{lem_comp} and \ref{lemma_sound}. \end{proof} \begin{corollary} \label{thm_soundness} Let $\mathcal{R}$ be a TRS and $\Pi$ be a set of linear $h$-patterns. $\mathcal{R}$ is ground terminating under $\Pi$ if and only if $T(\mathcal{R}, \Pi)$ is ground terminating. \end{corollary} Note that the restriction to ground terms is crucial in Corollary \ref{thm_soundness}. Moreover, ground termination and general termination do not coincide in general for rewrite systems with forbidden patterns (observe that the same is true for other important rewrite restrictions and strategies such as the outermost strategy). \begin{example} Consider the TRS $\mathcal{R} = (\mathcal{F}, R)$ given by $\mathcal{F} = \{a, f\}$ (where $a$ is a constant) and $R$ consisting of the rule \begin{eqnarray*} f(x) & \rightarrow & f(x). \end{eqnarray*} Moreover, consider the set of forbidden patterns $\Pi = \{ \langle f(a), \epsilon, h \rangle, \langle f(f(x)), \epsilon, h \rangle\}$. Then $\mathcal{R}$ is not $\Pi$-terminating because we have $f(x) \rightarrow_{\Pi} f(x)$ but it is $\Pi$-terminating on all ground terms, as can be shown by Theorem \ref{ground-termination}, since the rule set of $T(\mathcal{R}, \Pi)$ is empty. \end{example} \begin{example} \label{ex2nd4} Consider the TRS of Example \ref{ex2nd2}. We use two sorts $NatList$ and $Nat$, with function symbol types $\mathsf{2nd}: NatList \rightarrow Nat$, $\mathsf{inf}: Nat \rightarrow NatList$, $\mathsf{top}: NatList \rightarrow NatList$ (note that another ``$\mathsf{top}$'' symbol of type $Nat \rightarrow Nat$ is not needed here), $s: Nat \rightarrow Nat$, $0: Nat$, $\mathsf{nil}: NatList$ and $:$ of type $Nat, NatList \rightarrow NatList$.
According to Definition \ref{trafo}, the rules of $T(\mathcal{R}, \Pi)$ are: \begin{equation} \nonumber\begin{tabular}[b]{r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l} $\mathsf{2nd}(\mathsf{inf}(x))$ & $\mathsf{2nd}(x:\mathsf{inf}(\mathsf{s}(x)))$ & $\mathsf{2nd}(x:(y:zs))$ & $y$ \\ $\mathsf{top}(\mathsf{inf}(x))$ & $\mathsf{top}(x:\mathsf{inf}(\mathsf{s}(x)))$ & $\mathsf{2nd}(x':\mathsf{inf}(x))$ & $\mathsf{2nd}(x':(x:\mathsf{inf}(\mathsf{s}(x))))$ \\ $\mathsf{top}(x' : \mathsf{inf}(x))$ & $\mathsf{top}(x' : (x : \mathsf{inf}(\mathsf{s}(x))))$. \end{tabular} \end{equation} \normalsize This system is terminating (and termination can be verified automatically, e.g.\ by AProVE \cite{aprove}). Hence, by Corollary \ref{thm_soundness} also the TRS with forbidden patterns from Example \ref{ex2nd2} is ground terminating. \end{example} \begin{example} \label{ex_app_term} The TRS $\mathcal{R}$ and forbidden patterns $\Pi$ from Example \ref{ex_app} yield the following system $T(\mathcal{R}, \Pi)$. For the sake of saving space we abbreviate $\mathsf{app}$ by $\mathsf{a}$, $\mathsf{take}$ by $\mathsf{t}$ and $\mathsf{inf}$ by $\mathsf{i}$. \begin{equation} \nonumber\begin{tabular}[b]{r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l} $\mathsf{top}(\mathsf{i}(x))$ & $\mathsf{top}(x : \mathsf{i}(\mathsf{s}(x)))$ & $\mathsf{t}(y, \mathsf{i}(x))$ & $\mathsf{t}(y, x : \mathsf{i}(\mathsf{s}(x)))$ \\ $\mathsf{a}(y, \mathsf{i}(x))$ & $\mathsf{a}(y, x : \mathsf{i}(\mathsf{s}(x)))$ & $\mathsf{top}(\mathsf{a}(\mathsf{i}(x), y))$ & $\mathsf{top}(\mathsf{a}(x : \mathsf{i}(\mathsf{s}(x)), y))$ \\ $\mathsf{t}(\mathsf{a}(\mathsf{i}(x), y), z)$ & $\mathsf{t}(\mathsf{a}(x : \mathsf{i}(\mathsf{s}(x)), y), z)$ & $\mathsf{t}(z, \mathsf{a}(\mathsf{i}(x), y))$ & $\mathsf{t}(z, \mathsf{a}(x : \mathsf{i}(\mathsf{s}(x)), y))$ \\ $\mathsf{a}(\mathsf{a}(\mathsf{i}(x), y), z)$ & $\mathsf{a}(\mathsf{a}(x : \mathsf{i}(\mathsf{s}(x)), y), z)$ & $\mathsf{a}(z, \mathsf{a}(\mathsf{i}(x), y))$ & $\mathsf{a}(z, \mathsf{a}(x : \mathsf{i}(\mathsf{s}(x)), y))$ \\ $\mathsf{top}(\mathsf{a}(x : xs, ys))$ & $\mathsf{top}(x : \mathsf{a}(xs, ys))$ & $\mathsf{t}(z, \mathsf{a}(x : xs, ys))$ & $\mathsf{t}(z, x : \mathsf{a}(xs, ys))$ \\ $\mathsf{a}(\mathsf{a}(x : xs, ys), z)$ & $\mathsf{a}(x : \mathsf{a}(xs, ys), z)$ & $\mathsf{a}(z, \mathsf{a}(x : xs, ys))$ & $\mathsf{a}(z, x : \mathsf{a}(xs, ys))$ \\ $\mathsf{a}(x : \mathsf{i}(zs), ys)$ & $x : \mathsf{a}(\mathsf{i}(zs), ys)$ & $\mathsf{a}(x : \mathsf{s}(zs), ys)$ & $x : \mathsf{a}(\mathsf{s}(zs), ys)$ \\ $\mathsf{a}(x : (y : zs), ys)$ & $x : \mathsf{a}(y : zs, ys)$ & $\mathsf{a}(\mathsf{nil}, x)$ & $x$ \\ $\mathsf{t}(\mathsf{s}(x), y : ys)$ & $\mathsf{t}(x, ys)$ & $\mathsf{t}(0, y : ys)$ & $y$ \\ $\mathsf{t}(x, \mathsf{nil})$ & $0$ \end{tabular} \end{equation} This system is terminating (and termination can be verified automatically, e.g.\ by AProVE \cite{aprove}). Hence, again by Corollary \ref{thm_soundness} also the TRS with forbidden patterns from Example \ref{ex_app} is ground terminating. \end{example} \section{Conclusion and Related Work} \label{sec:conclusion-and-related-work} We have presented and discussed a novel approach to rewriting with context restrictions using forbidden patterns to specify forbidden/allowed positions in a term rather than arguments of functions as was done previously in context-sensitive rewriting. Thanks to their flexibility and parametrizability, forbidden patterns are applicable to a wider class of TRSs than traditional methods.
In particular, position-based strategies and context-sensitive rewriting occur as special cases of such patterns. For the TRSs in Examples \ref{ex2nd} and \ref{ex_app} nice operational behaviours can be achieved by using rewriting with forbidden patterns. The restricted reduction relation induced by the forbidden patterns is terminating while still being powerful enough to compute (head-) normal forms. When using simpler approaches such as position-based strategies or context-sensitive rewriting in these examples, such operational properties cannot be achieved. For instance, consider Example \ref{ex2nd}. There is an infinite reduction sequence starting from $\mathsf{inf}(x)$ with the property that every term has exactly one redex. Thus, non-termination is preserved under any reduction strategy (as strategies do not introduce new normal forms by definition). On the other hand, in order to avoid this infinite sequence using context-sensitive rewriting, we must set $2 \not\in \mu(:)$ (regardless of any additional reduction strategy). But in this case $\rightarrow_{\mu}$ does not compute head-normal forms. In \cite{ppdp01-lucas} \emph{on-demand rewriting} was introduced, which is able to properly deal with the TRS of Example \ref{ex2nd}. This means that with on-demand rewriting the reduction relation induced by the TRS of Example \ref{ex2nd} can be restricted in such a way that it becomes terminating while normal forms w.r.t.\ the restricted relation are still head-normal forms w.r.t.\ the unrestricted one. Indeed, Example \ref{ex2nd} was the main motivating example for the introduction of on-demand rewriting in \cite{ppdp01-lucas}. However, for Example \ref{ex_app}, restricting rewriting by the proposed forbidden patterns yields a terminating relation that is able to compute the normal forms of all well-formed ground terms. As the system is orthogonal, any outermost-fair reduction strategy, e.g.\ parallel outermost, is normalizing. Yet, by using such a strategy the relation still remains non-terminating. In particular, our forbidden patterns approach yields an effective procedure for deciding whether a ground term is normalizing or not (it is not normalizing if its $\rightarrow_{\Pi}$-normal form is not a $\rightarrow$-normal form) for this example. On the other hand, by using context-sensitive rewriting, termination can only be obtained if $2 \not\in \mu(:)$, which in turn implies that the term $0 : \mathsf{app}(\mathsf{nil}, \mathsf{nil})$ cannot be normalized despite having a normal form $0 : \mathsf{nil}$. For Examples \ref{ex2nd} and \ref{ex_app} effective strategies like parallel outermost or $\mathcal{S}_{\omega}$ of \cite{middeldorp} are normalizing (though under either strategy there are still infinite derivations). We provide another example for which these strategies fail to yield normalization while the use of appropriate forbidden patterns yields normalization (and termination): \begin{example} \label{parout} Consider the TRS $\mathcal{R}$ consisting of the following rules \begin{equation} \nonumber\begin{tabular}[b]{r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l@{\;\;\;\;\;\;}r@{ $\rightarrow$ }l} $a$ & $b$ & $b$ & $a$ & $c$ & $c$ \\ $g(x, x)$ & $d$ & $f(b, x)$ & $d$ \end{tabular} \end{equation} Using a parallel outermost strategy the term $g(a, b)$ is not reduced to its (unique) normal form $d$. Using $\mathcal{S}_{\omega}$, $f(a, c)$ is not reduced to its (unique) normal form $d$.
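Indeed (this spelled-out justification is ours), the outermost redexes of $g(a, b)$ are exactly the occurrences of $a$ and $b$, so parallel outermost reduction oscillates between $g(a, b)$ and $g(b, a)$ forever, whereas the normal form $d$ is only reachable by first making the two arguments equal, e.g.\ via $g(a, b) \rightarrow g(b, b) \rightarrow d$.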
However, it is easy to see that when using $\Pi = \{\langle c, \epsilon, h\rangle, \langle b, \epsilon, h\rangle\}$, $\rightarrow_{\Pi}$ is terminating and all $\mathcal{R}$-normal forms can be computed. \end{example} Note, however, that the forbidden patterns used in Example \ref{parout} are not canonical. Thus it is not clear how to come up with such patterns automatically. We argued that for our forbidden pattern approach it is crucial to identify reasonable classes of patterns that provide trade-offs between practical feasibility, simplicity and power, favoring either component to a certain degree. We have sketched and illustrated two approaches to deal with the issues of verifying termination and guaranteeing that it is possible to compute useful results (in our case original head-normal forms) with the restricted rewrite relation. To this end we proposed a transformation from rewrite systems with forbidden patterns to ordinary rewrite systems and showed that ground termination of the two induced reduction relations coincides. Moreover, we provided a criterion based on canonical rewriting with forbidden patterns to ensure that normal forms w.r.t.\ the restricted reduction relation are original head-normal forms. In particular ``here''-patterns seem interesting as their use prevents the context restrictions from being \emph{non-local}. That is to say that whether a position is allowed for reduction or not depends only on a restricted ``area'' around the position in question regardless of the actual size of the whole object term. Note that this is not true for ordinary context-sensitive rewriting and has led to various complications in the theoretical analysis (cf.~e.g.~ \cite[Definition 23]{jfp04-giesl-middeldorp}, \cite[Definition 7]{lpar08-alarcon-et-al} and \cite[Definitions 1-3]{rta06-gramlich-lucas}). Regarding future work, among many interesting questions and problems one particularly important aspect is to identify conditions and methods for the automatic (or at least automatically supported) synthesis of appropriate forbidden pattern restrictions. ~\\[+1ex] \textbf{Acknowledgements}: We are grateful to the anonymous referees for numerous helpful and detailed comments and criticisms. \end{document} \pagebreak \huge \noindent Appendix \Large \noindent Missing Proofs \normalsize \noindent \textbf{Lemma \ref{lem_canonicity}.} Let $\mathcal{R} = (\mathcal{F}, R)$ be a \emph{left-linear} TRS with \emph{canonical} (hence, in particular \emph{simple}) forbidden patterns $\Pi_{\mathcal{F}}$. Moreover, let $s$ be a partial redex w.r.t.\ the left-hand side of some rule $l$ with witness $l'$ such that $l|_p \not\in V$ but $l'|_p \in V$. Then in the term $C[s]_q$ the position $q.p$ is allowed by $\Pi_{\mathcal{F}}$ for reduction provided that $q$ is allowed for reduction. \begin{proof} Assume on the contrary that $q.p$ is forbidden in $C[s]_q$. As position $q$ is allowed, this means that there is a forbidden pattern $\langle t, o, \lambda \rangle$ such that $\lambda \in \{h, b\}$ and $t$ matches $C[s]_q$ at some position $q'$ and $q < q'.o \leq q.p$. Assume $\lambda = b$. As $s$ partially matches $l$, we have that $root(s|_{p'}) = root(l|_{p'})$ for all $p' < p$. Hence, as all positions parallel to $p$ are variable positions in $t$ (due to simplicity of $\Pi$) and $t$ is linear, we have that either $t|_{o'}$ unifies with $l[x]_p$ for some position $o'$ such that $o'.p > o$ (if $q' \leq q$), or $t$ unifies with $l[x]_p|_{o'}$ such that $p > o'.o$ (if $q' > q$).
Either way, we get a contradiction to the canonicity of $\Pi$ (cf. Definition \ref{def_canonicity}). The case where $\lambda = h$ is analogous. \end{proof} \noindent \textbf{Theorem \ref{prop_complete}.} Let $\mathcal{R} = (\mathcal{F}, R)$ be a \emph{left-linear} TRS with \emph{canonical} and \emph{simple} forbidden patterns $\Pi_{\mathcal{F}}$. Then $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-normal forms are $\rightarrow_{\mathcal{R}}$-head-normal forms. \begin{proof} For a proof by minimal counterexample assume $s$ is an $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-normal form, but not a $\rightarrow_{\mathcal{R}}$-head-normal form, and has minimal depth. If the depth of $s$ is $0$ then it is either a constant or a variable. In case it is a variable, it is a $\rightarrow_{\mathcal{R}}$-head-normal form. Otherwise, if it is a constant and not $\rightarrow_{\mathcal{R}}$-head-normal, then it is not a $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-normal form, because only patterns of the shape $\langle \_, \epsilon, h \rangle$ can forbid root reduction steps and these are not simple (cf. Definition \ref{def_simple_patterns}). Note that $s$ cannot be an $\mathcal{R}$-redex itself, because in this case it would also be $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducible, as there are no $\langle \_, \epsilon, h \rangle$-patterns. Now assume the depth of $s$ is greater than $0$. Since the term $s$ is not an $\rightarrow_{\mathcal{R}}$-head-normal form, there exists a reduction sequence $S: s \overset{> \epsilon}{\rightarrow}_{\mathcal{R}}^* t = l \mathsf{s}igma$. Hence, $s$ is a partial redex and there is some maximal subterm $s|_p$ of $s$ where $p \in Pos_{\mathcal{F}}(l)$ that is not an $\rightarrow_{\mathcal{R}}$-head-normal form (otherwise $s$ would be a redex because of left-linearity of $\mathcal{R}$). According to the minimality of $s$, $s|_p$ must be $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducible. Thus, as $s$ is not $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducible, there must be some forbidden pattern $\langle t, o, \lambda \rangle$, where $t$ matches $s$ at some position $q < p$ and forbids the reduction of some position $q.o \geq p$ (because of Lemma \ref{lem_canonicity}). We distinguish two cases. First, if $s|_p$ is a redex, then $s$ is $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducible because position $p$ cannot be forbidden in $s$ according to Lemma \ref{lem_canonicity}. Second, if $s|_p$ is not a redex, then it is a partial redex w.r.t.\ to some $l \rightarrow r \in R$ and contains a maximal proper subterm $s|_{p'}$ ($p' > p$) which is $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducible and not a $\rightarrow_{\mathcal{R}}$-head-normal form. Again position $p'$ cannot be forbidden in $s$ according to Lemma \ref{lem_canonicity}. Thus, again either $s|_{p'}$ is a redex implying $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducibility of $s$ or it contains a $\rightarrow_{\mathcal{R}, \Pi_{\mathcal{F}}}$-reducible proper subterm $s|_{q''}$. Eventually, either an allowed redex in $s$ is found or there is some subterm $s|_{p^{(n)}}$ of $s$ such that $|p^{(n)}| - |p| > n$ where $n$ is the maximal term depth of all forbidden patterns. Thus, $s$ is reducible below $p^{(n)}$ if and only if $s|_p$ is reducible below $o$ where $p.o = p^{(n)}$. Since $s|_p$ is reducible below $o$ (because by our construction this reduction step is necessary to head-normalize $s|_p$), we have a contradiction to $s$ being irreducible w.r.t.\ $\rightarrow_{\Pi}$. 
\end{proof} \noindent \textbf{Propositions in Example \ref{ex_app_compl}.} Consider the TRS with $\mathcal{R}$ and forbidden patterns $\Pi$ from Example \ref{ex_app}. We will prove below that $\mathcal{R}$ is $\Pi$-terminating (cf.\ Example \ref{ex_app_term}). Furthermore we are able to show that every well-formed ground term that is reducible to a normal form in $\mathcal{R}$ is reducible to the same normal form with $\rightarrow_{\mathcal{R}, \Pi}$ and that every $\rightarrow_{\mathcal{R}}$-normal form is root-stable w.r.t.\ $\rightarrow_{\mathcal{R}}$. \begin{proof} Regarding root-stability of $\rightarrow_{\mathcal{R}, \Pi}$-normal forms, assume on the contrary that there is a non-root stable $\rightarrow_{\mathcal{R}, \Pi}$ normal-form $s$ of minimal term depth. Since $s$ is non-root-stable, $root(s) \in \{\mathsf{app}, \mathsf{inf}, \mathsf{take}\}$. The immediate subterms of $s$ are reducible (in $\mathcal{R}$) to terms such that eventually $s$ becomes a redex. Each of these relevant terms is rooted by a constructor, hence if $s$ is not a redex then some immediate subterm of $s$ is not root-stable. Moreover, this subterm is also a $\rightarrow_{\mathcal{R}, \Pi}$-normal form as no forbidden pattern term in $\Pi$ has a defined root symbol. Thus, we have a contradiction to minimality of $s$. Regarding the power of $\rightarrow_{\mathcal{R}, \Pi}$ to compute $\mathcal{R}$-normal forms, assume on the contrary that there is a well-formed ground $\Pi$-normal form $s$ that is reducible to an $\mathcal{R}$-normal form $t \not= s$ in $\mathcal{R}$, and that $s$ has minimal depth among all such terms. First, note that no well-formed ground $\mathcal{R}$-normal form $t$ can contain a defined symbol, as all functions are completely defined over ground constructor arguments of the respective types (otherwise, any subterm of $t$ rooted by some innermost defined symbol would have to be reducible, thus contradicting $\mathcal{R}$-irreducibility of $t$). Let $f = root(s|_p)$ be an outermost defined symbol in $s$. First, $f$ cannot be $\mathsf{inf}$, as in this case $s$ would not be $\rightarrow_{\mathcal{R}}$-normalizing. Second, assume $f = root(s|_p) = \mathsf{take}$. As $take$ is not part of any forbidden pattern term the immediate subterms of $s|_p$ must be $\rightarrow_{\mathcal{R}, \Pi}$ normal-forms and thus root-stable. Hence, as $s|_p$ must eventually be reducible in some $\mathcal{R}$-reduction and all immediate subterms of $s|_p$ are root-stable (and $root(s|_p)$ is an outermost defined symbol in $s$) $s|_p$ itself must be a redex and we get a contradiction to $s$ being a $\rightarrow_{\mathcal{R}, \Pi}$ normal form, because no $take$-rooted redex is forbidden by $\Pi$. Finally, assume $f = \mathsf{app}$. We distinguish two cases. First, assume $s|_p$ is not forbidden for reduction by $\Pi$. Then either, $s|_p$ is an $\mathsf{app}$-rooted $\rightarrow_{\mathcal{R}, \Pi}$-normal form that is reducible in $\mathcal{R}$ to an $\mathcal{R}$-normal form (remember that there are no defined symbols in $s$ above $p$) which is a contradiction (as this $\mathcal{R}$-normal form must be rooted by a constructor and $s|_p$ can thus not be root-stable). Otherwise, $s|_p$ has the form $\mathsf{app}(\mathsf{inf}(s_1), s_2)$ which is non-normalizing, hence we get a contradiction to our assumption of $s$ being $\mathcal{R}$-normalizable. Second, assume $s|_p$ is forbidden in $s$. 
thus $s = C[s_1 : \mathsf{app}(s_2 : \mathsf{app}(s_3, s_4), s_5)]_q$ where $q.1 = p$ and $root(C|_o)$ is a constructor for all $o \leq q$. In this case we take a closer look at the inner $\mathsf{app}$-term, i.e. at $s|_{p.1.2}$. Again this subterm could be forbidden by $\Pi$ or allowed. We investigate the general case of having several nested $\mathsf{app}$-symbols, i.e. where $s$ has the shape $C[s_1 : \mathsf{app}(s_2 : \mathsf{app}(s_3 : \mathsf{app}(s_4 : \dots \mathsf{app}(s_n, s_n')), \dots), s_4'), s_3'), s_2')]$ and $s_n$ is not matched by $x : \mathsf{app}(y, z)$. Thus $\mathsf{app}(s_n, s_n')$ is not forbidden for reduction by $\Pi$. Either $s_n$ is rooted by $\mathsf{inf}$ in which case it is easy to see that $s$ is not $\mathcal{R}$-normalizing, or $s_n$ is a $\rightarrow_{\mathcal{R}, \Pi}$ normal form in which case it must be rooted by $:$ or $\mathsf{nil}$, because it must be reducible to a $:$-rooted term or $\mathsf{nil}$ in $\mathcal{R}$ in a normalizing reduction of $s$, since the $\mathsf{app}$-symbol cannot be erased. In the latter case $s$ would not be a $\rightarrow_{\mathcal{R}, \Pi}$ normal form, as the innermost (indicated) $\mathsf{app}$ term would be reducible and we would have a contradiction. \end{proof} \noindent \textbf{Lemma \ref{lem_term}.} Let $\langle l \rightarrow r, p\rangle$ be a rewrite rule and let $\Pi$ be a set of forbidden patterns. The set of (many-step) instantiations and embeddings of $\langle l \rightarrow r, p\rangle$ (i.e.\ $T^*(\langle l \rightarrow r, p\rangle))$ is finite. \begin{proof} Assume towards a contradiction that $T^*(\langle l \rightarrow r, p\rangle))$ were infinite. Each rule from $T^*(\langle l \rightarrow r, p\rangle))$ has the form $\langle C[l]_q \mathsf{s}igma \rightarrow C[r]_q \mathsf{s}igma, q.p\rangle$ for some context $C$ and some substitution $\mathsf{s}igma$. Now infinity of $T^*(\langle l \rightarrow r, p\rangle))$ implies that the term depth of its terms is not bounded. Thus, it either contains rules $\langle C[l]_q \mathsf{s}igma \rightarrow C[r]_q \mathsf{s}igma, q.p \rangle$ where the term depth of $x \mathsf{s}igma$ is $n$ for arbitrarily high $n$ and some (fixed) $x$, or it contains rules $\langle C[l]_q \mathsf{s}igma \rightarrow C[r]_q \mathsf{s}igma, q.p \rangle$ where $|q|$ is arbitrarily big. We investigate both cases. First, assume there is some variable $x$ such that the term depth of $x \mathsf{s}igma$ is not bounded in $T^*(\langle l \rightarrow r, p\rangle))$. Let $\langle C[l]_q \mathsf{s}igma \rightarrow C[r]_q \mathsf{s}igma, q.p \rangle$ be a rule such that the term depth of $x \mathsf{s}igma = n$ for some $x \in Var(C[l]_q)$ where $n > max_{\langle u, o, h\rangle \in \Pi}(depth(u))$. As the term depth of $x \mathsf{s}igma$ is not bounded in $T^*(\langle l \rightarrow r, p\rangle))$, some rule of this shape must have a one-step $T$-successor $\langle C[l]_q \mathsf{s}igma' \rightarrow C[r]_q \mathsf{s}igma', q.p \rangle$ with $depth(x \mathsf{s}igma') > depth(x \mathsf{s}igma)$. Say $\mathsf{s}igma' = \mathsf{s}igma \mathsf{s}igma''$, $y \mathsf{s}igma'' \not= y$ and $x \mathsf{s}igma|_{r} = y$ with $depth(r) = n$. Thus, according to Definition \ref{def_t}, $y \in RV_{\Pi}(C[l]_q, q.p)$. Therefore, $C[l]_q \mathsf{s}igma |_{q'} \theta = u \theta$ for some $\langle u, o, h\rangle \in \Pi$ with $q' \leq p.q$ and $y \theta \not = y$. However, because of linearity of $u$ this means $u|_{p''}$ is non-variable where $p'' = q''.r$ for some $q''$. 
However, this contradicts the fact that the term depth of $u$ is smaller than $n = |r|$. Second, assume we have rules $\langle C[l]_q \mathsf{s}igma \rightarrow C[r]_q \mathsf{s}igma, q.p \rangle$ with arbitrarily high $|q|$. Let $\langle C[l]_q \mathsf{s}igma \rightarrow C[r]_q \mathsf{s}igma, q.p \rangle$ be a rule such that $|q| = n$, where $n > max_{\langle u, o, h\rangle \in \Pi}(depth(u))$. Some rule of this shape must have a one-step $T$-successor $\langle C[l]_{q'} \mathsf{s}igma \rightarrow C[r]_{q'} \mathsf{s}igma, q.p \rangle$ with $|q'| > |q|$. Thus, according to Definition \ref{def_t} there is a forbidden pattern $\langle u, o, h\rangle \in \Pi$ such that $u|_{p'} = l$ and $o = p'.q.p$, which contradicts $|o| < |q|$. \end{proof} \noindent \textbf{Lemma \ref{lem_comp}.} Let $\Pi$ be a set of forbidden patterns and let $\langle l' = C[l \mathsf{s}igma]_p \rightarrow C[r \mathsf{s}igma]_p = r', p\rangle$ be a $\Pi$-stable rewrite rule corresponding to $l \rightarrow r$. If $s \rightarrow t$ with $l' \rightarrow r'$, then $s \rightarrow_{\Pi} t$ with $l \rightarrow r$. \begin{proof} Suppose $s = s[l'\theta]_q \rightarrow s[r'\theta]_q = t$. If $s|_{q.p}$ were forbidden for reduction by $\Pi$ (say through a forbidden pattern $\langle u, o, h\rangle$), then $s|_{p'} = u \theta'$ and $q.p = p'.o$. This is a direct contradiction to the fact that $l' \rightarrow r'$ is $\Pi$-stable according to Definition \ref{def_stable}. \end{proof} \noindent \textbf{Lemma \ref{lem_succ}.} Let $\langle l \rightarrow r, p\rangle$ be a rule and $\Pi$ be a set of forbidden patterns. If $T(\langle l \rightarrow r, p\rangle) = \emptyset$, then $\langle l \rightarrow r, p\rangle$ is either $\Pi$-stable or $\Pi$-obsolete. \begin{proof} Assume $\langle l \rightarrow r, p\rangle$ is neither $\Pi$-stable nor $\Pi$-obsolete. Then, there exist a context $C$ and a substitution $\mathsf{s}igma$ such that $C[l \mathsf{s}igma]_q|_{q'} = u \theta$ and $q.p = q'.o$ for some pattern $\langle u, o, h\rangle$ and, on the other hand, there is no pattern $\langle u, o, h\rangle$ such that $l|_q = u \theta$ and $p = q.o$. Hence, either $C$ or $\mathsf{s}igma$ is non-trivial (i.e.\ $C \not= \Box$ or $x \mathsf{s}igma \not= x$ for some $x$). If $C$ is non-trivial, then there exists a pattern $\langle u, o, h\rangle$ with $u|_q \theta = l \theta$, $q \not= \epsilon$ and $o = q.p$, hence $T_{\Pi}^e(\langle l \rightarrow r, p\rangle) \not= \emptyset$ and we get a contradiction. On the other hand, assume $\mathsf{s}igma$ is non-trivial and $C$ is trivial (say $x \mathsf{s}igma \not= x$). Then $x \in RV_{\Pi}(l, p)$ and thus $T_{\Pi}^i(\langle l \rightarrow r, p\rangle) \not= \emptyset$. Hence we get a contradiction as well. \end{proof} \noindent \textbf{Lemma \ref{lemma_sound}.} Let $\mathcal{R}$ be a rewrite system and let $\Pi$ be a set of forbidden (linear $h$-)patterns. If $s \rightarrow_{\mathcal{R},\Pi} t$ for ground terms $s$ and $t$, then $\mathsf{top}(s) \rightarrow \mathsf{top}(t)$ in $T(\mathcal{R}, \Pi)$. \begin{proof} Assume the step $s \rightarrow_{\Pi} t$ occurs at position $p$ with rule $l \rightarrow r$. If $\langle l \rightarrow r, \epsilon\rangle$ is $\Pi$-stable, we have $\mathsf{top}(s) \rightarrow \mathsf{top}(t)$ with $l \rightarrow r$ at position $1.p$ and the claim holds. $\langle l \rightarrow r, \epsilon\rangle$ cannot be $\Pi$-obsolete, since this would contradict the fact that $p$ is allowed in $s$ according to $\Pi$.
Thus, according to Lemma \ref{lem_succ}, $T(\langle l \rightarrow r, \epsilon\rangle)$ is non-empty and thus in particular contains a rule $\langle l' \rightarrow r', p'\rangle$ such that $\mathsf{top}(s)|_q = l' \mathsf{s}igma$ and $1.p = q.p'$, since all possible instantiations and embeddings are covered by $T$ (note that the rule is also embedded in the $\mathsf{top}(\Box)$-context). This rule cannot be $\Pi$-obsolete, as this would imply that $p$ is a forbidden position in $s$, because $l'$ matches (a subterm of) $\mathsf{top}(s)$ and thus a forbidden pattern matching $l'$ would also match $s$ (note that forbidden pattern terms do not contain $\mathsf{top}$). Hence, again either the rule is $\Pi$-stable in which case we use it for the reduction $\mathsf{top}(s) \rightarrow_{T(\mathcal{R})} \mathsf{top}(t)$ or it is further instantiated and embedded. By repetition we obtain new sets of rules each containing rules whose left-hand sides match $\mathsf{top}(s)$ and thus are not $\Pi$-obsolete. By Lemmata \ref{lem_succ} and \ref{lem_term} eventually one of these rules must be $\Pi$-stable and thus be in $T(\mathcal{R}, \Pi)$. Hence we finally get $\mathsf{top}(s) \rightarrow_{T(\mathcal{R}, \Pi)} \mathsf{top}(t)$. \end{proof} \begin{appendix} \end{appendix} \end{document}
{}^{\bot}eta\etagin{document} \title{Factorization in Call-by-Name and Call-by-Value Calculi via Linear Logic} \author{Claudia Faggian\inst{1} \and Giulio Guerrieri\inst{2}} \authorrunning{C. Faggian \and G. Guerrieri} \institute{Université de Paris, IRIF, CNRS, F-75013 Paris, France \and University of Bath, Department of Computer Science, Bath, UK } \maketitle {}^{\bot}eta\etagin{abstract} In each variant of the $\lambda$-calculus, factorization and normalization are two key properties that show how results are computed. Instead of proving factorization/normalization for the call-by-name (CbN\xspace) and call-by-value (CbV\xspace) variants separately, we prove them only once, for the bang calculus (an extension of the $\lambda$-calculus inspired by linear logic and subsuming CbN\xspace and CbV\xspace), and then we transfer the result via translations, obtaining factorization/normalization for CbN\xspace and CbV\xspace. The approach is robust: it still holds when extending the calculi with operators and extra rules to model some additional computational features. \end{abstract} \section{Introduction} \label{sect:intro} The $\lambda$-calculus is the model of computation underlying functional programming languages and proof assistants. Actually, there are many $\lambda$-calculi, depending on the \emph{evaluation mechanism} (for instance, call-by-name and call-by-value---CbN\xspace and CbV\xspace for short) and the \emph{computational features} that the calculus aims to model. In $\lambda$-calculi, a rewriting relation formalizes computational steps in program execution, and normal forms are the results of computations. In each calculus, a key question is to define a \emph{normalizing strategy}: How to compute a result? Is there a reduction strategy which is guaranteed to output a result, if any exists? Proving that a calculus admits a normalizing strategy is complex, and many techniques have been developed. A well-known method first proves \emph{factorization} \cite{Barendregt84,Takahashi95,HirokawaMiddledorpMoser15,AccattoliFaggianGuerrieri19}. Given a calculus with a rewriting relation $\xrightarrow{}$, a strategy $\textsc {l}red \subseteq \xrightarrow{}$ \emph{factorizes} if $\xrightarrow{}^* \subseteq \textsc {l}red^* \cdot \nllred^*$ ($\nllred$ is the dual of $\textsc {l}red$), \textit{i.e.}\xspace any reduction sequence can be rearranged so as to perform first $\textsc {l}red$-steps and then the other steps. If, moreover, the strategy satisfies some “good properties”, we can conclude that the strategy is normalizing. Factorization is important also because it is commonly used as a building block in the proof of other properties of the \emph{how-to-compute} kind. For instance, \emph{standardization}, which generalizes factorization: every reduction sequence can be rearranged according to a predefined order between redexes. \paragraph{Two for One.} Quoting Levy \cite{Levy99}: \emph{the existence of two separate paradigms} (CbN\xspace and CbV\xspace) is troubling because to prove a certain property---such as factorization or normalization---for both systems \emph{we always need to do it twice}. The \emph{first aim} of our paper is to develop a technique for deriving factorization for both the CbN\xspace \cite{Barendregt84} and CbV\xspace \cite{Plotkin75} $\lam$-calculi as corollaries of a \emph{single} factorization theorem, and similarly for normalization.
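To see concretely why the two paradigms call for separate treatments, the following self-contained Haskell sketch (an illustration of ours, not part of the formal development; all names, such as \texttt{stepCbN} and \texttt{stepCbV}, are invented) implements one step of weak CbN\xspace and of weak CbV\xspace reduction for closed $\lambda$-terms, with naive substitution, which is capture-safe only under that closedness assumption. Already on the term $(\lambda x.\lambda y.y)\,\Omega$ the two disciplines behave differently: CbN\xspace reaches a value, while CbV\xspace keeps reducing the diverging argument.
\begin{verbatim}
module Main where

data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution: capture-safe only when the substituted term is closed.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App t u) = App (subst x s t) (subst x s u)

isValue :: Term -> Bool
isValue (Lam _ _) = True
isValue (Var _)   = True
isValue _         = False

-- One step of weak head CbN reduction: any argument can be substituted.
stepCbN :: Term -> Maybe Term
stepCbN (App (Lam x b) u) = Just (subst x u b)
stepCbN (App t u)         = fmap (\t' -> App t' u) (stepCbN t)
stepCbN _                 = Nothing

-- One step of weak (left-to-right) CbV reduction: the argument must be a value.
stepCbV :: Term -> Maybe Term
stepCbV (App (Lam x b) u)
  | isValue u             = Just (subst x u b)
stepCbV (App t u)
  | not (isValue t)       = fmap (\t' -> App t' u) (stepCbV t)
  | otherwise             = fmap (\u' -> App t u') (stepCbV u)
stepCbV _                 = Nothing

-- Iterate a step function at most n times, stopping at a normal form.
run :: (Term -> Maybe Term) -> Int -> Term -> Term
run _    0 t = t
run step n t = maybe t (run step (n - 1)) (step t)

delta, omega, example :: Term
delta   = Lam "x" (App (Var "x") (Var "x"))
omega   = App delta delta                         -- diverging term
example = App (Lam "x" (Lam "y" (Var "y"))) omega

main :: IO ()
main = do
  print (run stepCbN 10 example)  -- CbN discards omega and returns \y.y
  print (run stepCbV 10 example)  -- CbV is still reducing omega to itself
\end{verbatim}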
A key tool in our study is the \emph{bang calculus} \cite{EhrhardGuerrieri16,GuerrieriManzonetto18}, a calculus inspired by linear logic in which CbN\xspace and CbV\xspace embed. \paragraph{The Bang Calculus.} The bang calculus is a variant of the $\lambda$-calculus where an operator $\oc$ plays the role of a marker for non-linear management: duplicability and discardability. The bang calculus is nothing but Simpson's linear $\lambda$-calculus \cite{Simpson05} without linear abstraction, or the untyped version of the implicative fragment of Levy's Call-by-Push-Value \cite{Levy99}, as first observed by Ehrhard \cite{Ehrhard16}. The motivation to study the bang calculus is to have a general framework where both CbN\xspace and CbV\xspace $\lambda$-calculi can be simulated, via two distinct \emph{translations} inspired by Girard's embeddings \cite{Girard87} of the intuitionistic arrow into linear logic. So, a certain property can be studied in the bang calculus and then automatically transferred to the CbN\xspace and CbV\xspace settings by translating back. This approach has so far mainly been exploited semantically \cite{Levy06,Ehrhard16,EhrhardGuerrieri16,GuerrieriManzonetto18,ChouquetTasson20,BucciarelliKesnerRiosViso20}, but it can also be used to study operational properties \cite{GuerrieriManzonetto18,SantoPintoUustalu19,FaggianRonchi}. In this paper, we push forward this operational direction. \paragraph{The Least-Level Strategy.} We study a strategy from the literature of linear logic \cite{CarvalhoPF11}, namely \emph{least-level reduction} $\textsc {l}red$, which fires a redex at minimal level---the \emph{level} of a redex is the number of $\oc$ under which the redex appears. We prove that the least-level reduction factorizes and normalizes in the bang calculus, and then we transfer the same results to the CbN\xspace and CbV\xspace $\lam$-calculi (for suitable definitions of least-level in CbN\xspace and CbV\xspace), by exploiting properties of their translations into the bang calculus. A single proof suffices. It is two-for-one! Or even better, three-for-one. The rewriting study of the least-level strategy in the bang calculus is based on simple techniques for factorization and normalization we developed recently with Accattoli \cite{AccattoliFaggianGuerrieri19}, which simplify and generalize Takahashi's method \cite{Takahashi95}. \paragraph{Subtleties of the Embeddings.} Transferring factorization and normalization results via translation is highly non-trivial, \emph{e.g.}\xspace in CPS translations \cite{Plotkin75}. This applies also to transferring least-level factorizations from the bang calculus to the CbN\xspace and CbV\xspace $\lambda$-calculi. To transfer the property smoothly, the translations should preserve levels and normal forms, which is delicate, in particular for CbV\xspace. The embedding of CbV\xspace into the bang calculus defined in \cite{GuerrieriManzonetto18,SantoPintoUustalu19} does not preserve levels and normal forms (see \Cref{rmk:least-level-normal}). As a consequence, the CbV\xspace translation studied in \cite{GuerrieriManzonetto18,SantoPintoUustalu19} cannot be used to derive least-level factorization or \emph{any} normalization result in a CbV\xspace setting from the corresponding result in the bang calculus. Here we adopt the refined CbV\xspace embedding of Bucciarelli et al. \cite{BucciarelliKesnerRiosViso20}, which does preserve levels and normal forms.
While the preservation of normal forms is already stressed in \cite{BucciarelliKesnerRiosViso20}, the preservation of levels is proved here for the first time, and it is based on non-trivial properties of the embedding. \paragraph{Beyond pure.} Our \emph{second aim} is to show that the developed technique for the joint factorization and normalization of CbN\xspace and CbV\xspace via the bang calculus is \emph{robust}. We do so by studying extensions of all three calculi with operators (or, in general, with extra rules) which model some additional computational features, such as non-deterministic or probabilistic choice. We then show that the technique scales up smoothly, under mild assumptions on the extension. \paragraph{A Motivating Example.} Let us illustrate our approach on a simple case, which we will use as a running example. De' Liguoro and Piperno's CbN\xspace non-deterministic $\lam$-calculus $\mathfrak lambda_\oplus^{\mathtt{cbn}}$ \cite{deLiguoroP95} extends the CbN $\lam$-calculus with an operator $\mathbf{o}lus$ whose reduction models \emph{non-deterministic choice}: $\mathbf{o}lus (t, ts)$ rewrites to either $t$ or $ts$. It admits a standardization result, from which it follows that the leftmost-outermost reduction strategy (noted $\textsc {lo}redx{{}^{\bot}eta\etata\mathbf{o}lus}$) is \emph{complete}: if $t$ has a \textit{normal form} $tu$, then $ t \textsc {lo}redx{{}^{\bot}eta\etata\mathbf{o}lus}^* tu$. In \cite{deLiguoro91}, de' Liguoro considers also a CbV\xspace variant $ \mathfrak lambda_\oplus^{\mathtt{cbv}} $, which extends the CbV\xspace $\lam$-calculus with an operator $\mathbf{o}lus$. One may prove standardization and completeness---again---from scratch, even though the proofs are similar. The approach we propose here is to work in the bang calculus enriched with the operator $\mathbf{o}lus$, denoted by ${\mathfrak lambda_\oc}_{\mathbf{o}lus}$. We show that the calculus satisfies \emph{least-level factorization}, from which it follows that the least-level strategy is \emph{complete}, \textit{i.e.}\xspace if $t$ has a \textit{normal form} $tu$, then $ t \textsc {l}redx{\mathfrak tot\mathbf{o}lus}^* tu $. The translation then guarantees that analogous results hold also in $\mathfrak lambda_\oplus^{\mathtt{cbn}}$ and $\mathfrak lambda_\oplus^{\mathtt{cbv}}$. \paragraph{The Importance of Being Modular.} The bang calculus with operators is actually a general formalism for several calculi, one calculus for each kind of computational feature modeled by operators. Concretely, the reduction $\mathrel{\rightarrow}$ consists of $\mathfrak to{\mathfrak tot}$ (which subsumes CbN\xspace $\mathfrak to{{}^{\bot}eta\etata}$ and CbV\xspace $\mathfrak to{{}^{\bot}eta\etata}v$) and other reduction rules $\mathfrak to{\Rule}$. We decompose the proof of factorization of $\mathrel{\rightarrow}$ into modules, by using the \emph{modular approach} recently introduced by the authors together with Accattoli \cite{AccattoliFaggianGuerrieri21}. The key module is the least-level factorization of $\mathrel{\rightarrow}bb$, because it is where the higher order comes into play---this is done once and for all. Then, we consider a generic reduction rule $\mathfrak to{\Rule}$ to add to $\mathfrak to{\mathfrak tot}$. Our general result is that if $\mathfrak to{\Rule}$ has `good properties' and interacts well with $\mathfrak to{\mathfrak tot}$ (which amounts to an easy test, combinatorial in nature), then we have least-level factorization for $\mathfrak to{\mathfrak tot} \cup \mathfrak to{\Rule}$.
Putting it all together, when $\mathfrak to{\Rule}$ is instantiated to a concrete reduction (such as $\mathrel{\rightarrow}o$), the user of our method only has to verify a simple test (namely \Cref{prop:test_ll}) to conclude that $\mathfrak to{\mathfrak tot} \cup \mathrel{\rightarrow}o$ has least-level factorization. In particular, factorization for $\mathfrak to{\mathfrak tot}$ is a ready-to-use black box the user need not worry about---our proof is robust enough to hold whatever the other rules are. Finally, the embeddings automatically give least-level factorization for the corresponding CbV\xspace and CbN\xspace calculi. In \Cref{sec:case_study}, we illustrate our method on this example. \paragraph{Subtleties of the Modular Extensions.} In order to adopt the modular approach presented in \cite{AccattoliFaggianGuerrieri21}, we need to deal with an important difficulty which appears when dealing with normalizing strategies and that is not studied in \cite{AccattoliFaggianGuerrieri21}. A normalizing strategy usually selects the redex to fire through a property such as being a \emph{least-level} redex or being the \emph{leftmost-outermost} (shortened to LO) redex---normalizing strategies are \emph{positional}. The problem is that, in general, if $\mathrel{\rightarrow} =\mathrel{\rightarrow}b\cup\mathrel{\rightarrow}x{\mathcal Rule}$, then $\textsc {lo}redx{}$ reduction is not the union of $\textsc {lo}redx{{}^{\bot}eta\etata}$ and $\textsc {lo}redx{\mathcal Rule}$. I.e., the normalizing strategy of the compound system is not obtained by putting together the normalizing strategies of the components. Let us explain the issue on our running example $\mathrel{\rightarrow}x{{}^{\bot}eta\etata \mathbf{o}lus}$, in the familiar case of leftmost-outermost reduction. \renewcommand{\mathsf{I}}{\mathsf{I}} {}^{\bot}eta\etagin{example}\label{ex:issue} Let us first consider head reduction with respect to ${}^{\bot}eta\etata$ (written $\hredb $) and with respect to ${}^{\bot}eta\etata\mathbf{o}lus$ (written $\hredx{{}^{\bot}eta\etata\mathbf{o}lus} $). Consider the term $ts=(\mathsf{I}\mathsf{I})(x \mathbf{o}lus y)$, where $\mathsf{I}=\lam x.x$. The subterm $\mathsf{I}\mathsf{I}$ (which is a ${}^{\bot}eta\etata$-redex) is in head position whenever we consider the reduction $\mathrel{\rightarrow}b$ or its extension $\mathrel{\rightarrow}x{{}^{\bot}eta\etata \mathbf{o}lus}$. So $ts \hredb \mathsf{I}(x\mathbf{o}lus y)$ and $ts \hredx{{}^{\bot}eta\etata\mathbf{o}lus} \mathsf{I}(x\mathbf{o}lus y)$. Conversely, given $t=(x \mathbf{o}lus y)(\mathsf{I}\mathsf{I})$ the head position is occupied by $ (x\mathbf{o}lus y) $, which is a $\mathbf{o}lus$-redex, but not a ${}^{\bot}eta\etata$-redex. Therefore, $(\mathsf{I}\mathsf{I}) $ is the head-redex in $t$ neither for ${}^{\bot}eta\etata$ nor for ${}^{\bot}eta\etata\mathbf{o}lus$. Otherwise stated: \[\hredx{{}^{\bot}eta\etata\mathbf{o}lus}~ = ~\hredb \cup \hredo.\] In contrast, if we consider leftmost-outermost reduction $\textsc {lo}red$, which reduces the redex in the leftmost-outermost position, it is easy to see that \[\textsc {lo}redx{{}^{\bot}eta\etata\mathbf{o}lus} ~\not= ~\textsc {lo}redb \cup \textsc {lo}redx{\mathbf{o}lus}.\] Consider again the term $t=(x \mathbf{o}lus y){(\mathsf{I}\mathsf{I})}$. Since $ (x\mathbf{o}lus y) $ is not a ${}^{\bot}eta\etata$-redex, $(\mathsf{I}\mathsf{I})$ is the leftmost redex for $\mathrel{\rightarrow}b$.
Instead, $(\mathsf{I}\mathsf{I})$ is not the $\textsc {lo}$-redex for $\mathrel{\rightarrow}x{{}^{\bot}eta\etata \mathbf{o}lus}$ (here the leftmost redex is $(x\mathbf{o}lus y)$). So $t \textsc {lo}redb (x\mathbf{o}lus y)\mathsf{I}$ but $t \not\textsc {lo}redx{{}^{\bot}eta\etata\mathbf{o}lus} (x\mathbf{o}lus y)\mathsf{I}$. \end{example} The least-level factorization for $\mathrel{\rightarrow}bb$, $\mathrel{\rightarrow}b$, and $\mathrel{\rightarrow}bv$ we prove here is robust enough to be used as a module in a larger proof, where it may combine with operators and other rules. The key point is to define the least-level reduction from the very beginning as a reduction firing a redex at minimal level with respect to a general set of redexes (containing ${}^{\bot}eta\etata_\oc$, ${}^{\bot}eta\etata$ or ${}^{\bot}eta\etata_v$, respectively), so that it is ``ready'' to be extended with other reduction rules (see \Cref{sec:ll}). \paragraph{Proofs.} All proofs are available at \url{https://www.irif.fr/~giuliog/fact.pdf}. \section{Background in Abstract Rewriting}\label{sec:background} An (\emph{abstract}) \emph{rewriting system} \cite[Ch. 2]{Terese03} is a pair $(\mathcal{A}A, \xrightarrow{})$ consisting of a set $A$ and a binary relation $\xrightarrow{} \subseteq \mathcal{A}A\times \mathcal{A}A$ (called reduction) whose pairs are written $t \xrightarrow{} s$ and called \emph{steps}. A \emph{$\xrightarrow{}$-sequence} from $t$ is a sequence of $\xrightarrow{}$-steps. As usual, $\mathrel{\rightarrow}^*$ (resp. $\mathrel{\rightarrow}^=$) denotes the transitive-reflexive (resp. reflexive) closure of $\mathrel{\rightarrow}$. A relation $\mathrel{\rightarrow}$ is \emph{confluent} if $s \mathfrak mRevTo{} r\mathrel{\rightarrow}^{*} t$ implies $s \mathrel{\rightarrow}^{*}u \mathfrak mRevTo{} t$ for some $u$. We say that $u$ is $\mathrel{\rightarrow}$-\emph{normal} (or a $\mathrel{\rightarrow}$-normal form) if there is no $t$ such that $u\mathrel{\rightarrow} t$. In general, a term may or may not reduce to a normal form. If it does, not all reduction sequences necessarily lead to a normal form. A term is \emph{weakly} or \emph{strongly normalizing}, depending on whether it may or must reduce to a normal form. If a term $t$ is strongly normalizing, any choice of steps will eventually lead to a normal form. However, if $t$ is weakly normalizing, how do we compute a normal form? This is the problem tackled by \emph{normalization}: by repeatedly performing \emph{only specific steps}, a normal form will be computed, provided that $t$ can reduce to~any. A \emph{strategy} $\xrightarrow{}_{\esym} \ \subseteq \ \xrightarrow{}$ is a way to restrict the possible choices of reduction in a term. A \emph{normalizing strategy} for $\mathrel{\rightarrow}$ is a reduction strategy which, given a term $t$, is guaranteed to reach its $\mathrel{\rightarrow}$-normal form, if any exists (which also makes it a key tool to show that certain terms are not $\mathrel{\rightarrow}$-normalizable). {}^{\bot}eta\etagin{definition}[Normalizing and complete strategy] \label{def:strategy} A reduction $\ered \,\subseteq \, \mathrel{\rightarrow}$ is a \emph{strategy for} $\mathrel{\rightarrow}$ if it has the same normal forms as $\mathrel{\rightarrow}$.
A strategy $\ered$ for~$\mathrel{\rightarrow}$~is: {}^{\bot}eta\etagin{itemize} \item \emph{complete} if $t\ered^*u$ whenever $t\xrightarrow{}^*u$ with $u$ $\mathrel{\rightarrow}$-normal; \item \emph{normalizing} if \emph{every} maximal $\ered$-sequence from $t$ ends in a normal form, whenever $t\xrightarrow{}^*u$ for some $\mathrel{\rightarrow}$-normal form $u$. \end{itemize} \end{definition} Note that if the strategy $\ered$ is complete and \emph{deterministic} (\textit{i.e.}\xspace for every $t\in \mathcal{A}A$, $t\ered s$ for at most one $s\in \mathcal{A}A$), then $\ered$ is a normalizing strategy for $\mathrel{\rightarrow}$. {}^{\bot}eta\etagin{definition}[Factorization] Let $(A,\xrightarrow{})$ be a rewriting system with $\mathrel{\rightarrow} \,=\, \ered \cup \ired $. The relation $\mathrel{\rightarrow}$ satisfies \emph{$\mathsf {e}$-factorization}, written $\F{\ered}{\ired}$, if {}^{\bot}eta\etagin{equation}\tag{\textbf{Factorization}} \F{\ered}{\ired}: \quad (\ered \cup \ired)^*~ \subseteq ~\ered^* \cdot \ired^* \end{equation} \end{definition} \subsubsection{Proving Normalization.} Factorization provides a simple technique to establish that a strategy is normalizing. {}^{\bot}eta\etagin{lemma}[Normalization \cite{AccattoliFaggianGuerrieri19}]\label{prop:abs_normalization} Let $\mathrel{\rightarrow} \,=\, \ered \cup \nered$, and $\ered$ be a \mbox{strategy for $\mathrel{\rightarrow}$.} The strategy $\ered$ is \emph{complete} for $\mathrel{\rightarrow}$ if the following hold: {}^{\bot}eta\etagin{enumerate} \item \emph{Persistence:} If $t \nered t'$ then $t'$ is not normal. \item \emph{Factorization}: $t\xrightarrow{}^* tu$ implies $ t\ered^* \!\cdot\! \nered^*tu$. \end{enumerate} The strategy $\ered$ is \emph{normalizing} for $\mathrel{\rightarrow}$ if it is complete and: {}^{\bot}eta\etagin{enumerate}\setcounter{enumi}{2} \item \emph{Uniformity:} all weakly $\ered$-normalizing terms are strongly $\ered$-normalizing. \end{enumerate} \end{lemma} A sufficient condition for uniform normalization and confluence \mbox{is the following}: {}^{\bot}eta\etagin{fact}[Newman \cite{Newman42}]\label{fact:diamond} A reduction is \emph{quasi-diamond} if ($t_1\leftarrow t \rightarrow t_2$) implies ($t_1=t_2$ or $t_1\rightarrow tu \leftarrow t_2$ for some $tu$). If $\mathrel{\rightarrow}$ is quasi-diamond then $\mathrel{\rightarrow}$ is uniformly normalizing and confluent. \end{fact} \subsubsection{Proving Factorization.}\label{sec:Hindley} Hindley\cite{HindleyPhD} first noted that a local property implies factorization. Let $\mathrel{\rightarrow} \, = \, \ered \cup \ired$. We say that $\ired$ \emph{strongly postpones} after $\ered$, if {}^{\bot}eta\etagin{equation}\label{eq:SP}\tag{\textbf{Strong Postponement}} \mathfrak lP{\ered}{\ired}: \mathbf{q}uad \ired \cdot \ered ~\subseteq~\ered^*\cdot \ired^= \end{equation} {}^{\bot}eta\etagin{lemma}[Hindley \cite{HindleyPhD}] \label{l:SP} $\mathfrak lP{\ered}{\ired}$ implies $\F{\ered}{\ired}$. \end{lemma} Strong postponement can rarely be used \emph{directly}, because several interesting reductions---including ${}^{\bot}eta\etata$-reduction---do not satisfy it. However, it is at the heart of Takahashi's method \cite{Takahashi95} to prove head factorization of $\mathfrak to{{}^{\bot}eta\etata}$, via the following immediate property that can be used also to prove other factorizations (see \cite{AccattoliFaggianGuerrieri19}). 
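Before stating that property, it may help to see Hindley's lemma at work on a finite system. The following Haskell sketch (ours; the toy system and all names are made up) checks both the strong postponement inclusion and the factorization inclusion by exhaustive search; on the toy system below the first test succeeds, and so does the second, as Lemma~\ref{l:SP} predicts.
\begin{verbatim}
-- A finite abstract rewriting system, given by its essential (e) and
-- inessential (i) steps, each listed as a set of pairs.
type Rel a = [(a, a)]

next :: Eq a => Rel a -> a -> [a]
next r x = [y | (x', y) <- r, x == x']

-- All elements reachable from x, i.e. the reflexive-transitive closure.
reach :: Eq a => Rel a -> a -> [a]
reach r x = go [x] [x]
  where
    go seen []       = seen
    go seen (y : ys) =
      let new = [z | z <- next r y, z `notElem` seen]
      in  go (seen ++ new) (ys ++ new)

-- Strong postponement:  i . e  is included in  e* . i=
strongPostponement :: Eq a => Rel a -> Rel a -> Bool
strongPostponement e i =
  and [ any (\z -> z == v || (z, v) `elem` i) (reach e u)
      | (_, u) <- i, (u', v) <- e, u == u' ]

-- Factorization:  (e u i)*  is included in  e* . i*  (checked on a universe).
factorizes :: Eq a => Rel a -> Rel a -> [a] -> Bool
factorizes e i univ =
  and [ any (\m -> t `elem` reach i m) (reach e s)
      | s <- univ, t <- reach (e ++ i) s ]

main :: IO ()
main = do
  let e    = [('a', 'b'), ('c', 'b')]   -- essential steps
      i    = [('a', 'c'), ('b', 'd')]   -- inessential steps
      univ = "abcd"
  print (strongPostponement e i)   -- True
  print (factorizes e i univ)      -- True, as implied by strong postponement
\end{verbatim}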
{}^{\bot}eta\etagin{fact}[Characterization of factorization] \label{fact:fact} Factorization $\F \ered \ired$ holds \emph{if and only if} there is a reduction ${\makepar \ired}$ such that ${\makepar \ired}^* \, = \ \ired^*$ and $\mathfrak lP{\ered}{{\makepar \ired}}$. \end{fact} The core of Takahashi's method \cite{Takahashi95} is to introduce a relation $\iPRed$, called \emph{internal parallel reduction}, which verifies the hypotheses above. We will follow a similar path in \Cref{sect:factorization}, to prove \textit{least-level} factorization. \subsubsection{Compound systems: proving factorization in a modular way.} \label{sec:modular} In this paper, we will consider compound systems that are obtained by extending the $\lam$-calculus with extra rules to model advanced features. In an abstract setting, let us consider a rewrite system $(A,\mathrel{\rightarrow})$ where $\mathrel{\rightarrow} \,=\,\mathrel{\rightarrow}a \cup \mathrel{\rightarrow}c$. Under which condition $\mathrel{\rightarrow}$ admits factorization, assuming that both $\mathrel{\rightarrow}a$ and $\mathrel{\rightarrow}c$ do? To deal with this question, a technique for proving factorization for \emph{compound systems} in a \emph{modular} way has been introduced in \cite{AccattoliFaggianGuerrieri21}. The approach can be seen as an analogous for factorization of the classical technique for confluence based on Hindley-Rosen lemma \cite{Barendregt84}: if $\mathrel{\rightarrow}a,\mathrel{\rightarrow}c$ are $\mathsf {e}$-factorizing reductions, their union $\mathrel{\rightarrow}a \cup \mathrel{\rightarrow}c$ also is, provided that two \textit{local} conditions of commutation hold. {}^{\bot}eta\etagin{lemma}[Modular factorization \cite{AccattoliFaggianGuerrieri21}]\label{thm:modular} Let $\mathrel{\rightarrow}a \,=\, \ereda \cup \ireda$ and $\mathrel{\rightarrow}c \,=\, \eredc \cup \iredc$ be $\mathsf {e}$-factorizing relations. Let $\ered \,=\,def \ereda \cup\eredc$, and $\ired \,=\,def \ireda \cup\iredc$. The union $\mathrel{\rightarrow}a\cup \mathrel{\rightarrow}c$ fulfills factorization $\F{\ered}{\ired}$ if the following swaps hold {}^{\bot}eta\etagin{equation}\label{eq:LS}\tag{\textbf{Linear Swaps}} \ireda \cdot \eredc \ \subseteq\ \eredc \cdot \mathrel{\rightarrow}a^* \quad\text{ and }\quad \iredc \cdot \ereda \ \subseteq\ \ereda \cdot \mathrel{\rightarrow}c^* \end{equation} \end{lemma} The subtlety here is to set $\ereda$ and $\eredc$ so that $\ered\,=\, \eredc \cup \iredc$. As already shown in \Cref{sect:intro}, when dealing with normalizing strategies one needs extra~care. \newcommand{$\op$-redex\xspace}{$\mathbf{o}$-redex\xspace} \newcommand{$\op$-redex\xspaces}{$\mathbf{o}$-redexes\xspace} \renewcommand{\mathsf{Var}}{\mathsf{Var}} \renewcommand{\mathsf{Val}}{\mathsf{Val}} \renewcommand{\ValSet}{\mathsf{Val}} \renewcommand{\mathfrak lambdaVal}{\ValSet} \renewcommand{\mathfrak lambda_\ocOpCtx}{\mathfrak lambda_\ocCtx} \renewcommand{{\mathbf{C}Set}}{{\mathbf{C}Set}} \section{ \texorpdfstring{$\lambda$}{lambda}-calculi: CbN\xspace, CbV\xspace, and bang} \label{sect:lambda-calculi} We present here a generic syntax for $\lambda$-calculi, possibly containing operators. All the variants of the $\lambda$-calculus we shall study use this language. We assume some familiarity with the $\lam$-calculus, and refer to \cite{Barendregt84,HindleySeldin86} for details. 
Given a countable set $\mathsf{Var}$ of variables, denoted by $x, xTwo, xThree, \dots$, \emph{terms} and \emph{values} (whose sets are denoted by $\mathfrak lambda_\OpSet$ and $\mathfrak lambdaVal$, respectively) are defined as~follows: {}^{\bot}eta\etagin{align*} \tm, \tmTwo, \tmThree &\Coloneqq v \mid \tm\tmTwo \mid \mathbf{o}(\tm_1, \dots, \tm_k) \ \text{ \emph{Terms}:~ } \mathfrak lambda_\OpSet &&&&& v &\Coloneqq x \mid \mathfrak la{x}{\tm} \ \text{ \emph{Values}:~ }\mathfrak lambdaVal \end{align*} where $\mathbf{o}$ ranges over a set $\mathcal{O}$ of function symbols called \emph{operators}, each one with its own arity $k \in {\mathbb N}$. If the operators are $\mathbf{o}_1,\dots, \mathbf{o}_n$, the set of terms is indicated as $\mathfrak lambda_{\mathbf{o}_1...\mathbf{o}_n}$. When the set $\mathcal{O}$ of operators is empty, the calculus is called \emph{pure}, and the sets of terms is denoted by $\mathfrak lambda$. Otherwise, the calculus is \emph{applied}. Terms are identified up to renaming of bound variables, where abstraction is the only binder. We denote by $\tm\mathfrak sub{\tmTwo}{x}$ the capture-avoiding substitution of $\tmTwo$ for the free occurrences of $x$ in $\tm$. \emph{Contexts} (with exactly one hole $\mathfrak hole{\cdot}$) are generated by the grammar below, and $\mathbf{c}\mathfrak hole{\tm}$ stands for the term obtained from the context $\mathbf{c}$ by replacing the hole with the term $\tm$ (possibly capturing free variables). {}^{\bot}eta\etagin{align*} \mathbf{c} & \Coloneqq \mathfrak hole{\cdot} \mid \tm\mathbf{c} \mid \mathbf{c} \tm \mid \mathfrak la{x}\mathbf{c} \mid \mathbf{o}(\tm_1, \dots, \mathbf{c},\dots, \tm_k) & \text{\emph{Contexts:~}} {\mathbf{C}Set} \end{align*} Let $ \mathcal Rule$ be a binary relation on $\mathfrak lambda_\OpSet$; we call it \emph{$\mathcal Rule$-rule} and denote it also by $\mathcal Root{\mathcal Rule}$, writing $t \mathcal Root{\mathcal Rule} t'$ rather than $(t,t')\in \rho$. A $\mathcal Rule$-\emph{reduction step} $\mathfrak to{\mathcal Rule}$ is the contextual closure of $\mathcal Rule$. Explicitly, $\tm \mathfrak to{\Rule} \tm'$ holds if $\tm = \mathbf{c}p{\tmThree}$ and $\tm' = \mathbf{c}p{\tmThree'}$ for some context $\mathbf{c}$ with $ \tmThree \mathcal Root{\mathcal Rule} \tmThree'$. The term $\tmThree$ is called a $\mathcal Rule$-\emph{redex}. The set of $\mathcal Rule$-\emph{redexes} is denoted~by~$\mathcal R_{\mathcal Rule}.$ Given a set of rules $\mathcal RulesSet$, the relation $\mathrel{\rightarrow} = {}^{\bot}igcup_\mathcal Rule \mathfrak to{\Rule}$ ($\mathcal Rule \in \mathcal RulesSet$) can equivalently be defined as the contextual closure of $\mapsto = {}^{\bot}igcup_\mathcal Rule \mathcal Root{\mathcal Rule}$. \condinc{}{ Given a binary relation $\mathcal Rule$ on $\mathfrak lambda_\OpSet$, called \emph{$\mathcal Rule$-rule}, a $\mathcal Rule$-\emph{reduction step} $\mathfrak to{\mathcal Rule}$ is the contextual closure of $\mathcal Rule$. Explicitly, $\tm \mathfrak to{\Rule} \tm'$ holds if $ (\tmThree,\tmThree')\in \mathcal Rule$, $\tm = \mathbf{c}p{\tmThree}$, and $\tm' = \mathbf{c}p{\tmThree'}$, for some context $\mathbf{c}$. The term $\tmThree$ is called a $\mathcal Rule$-\emph{redex}. The set of all $\mathcal Rule$-\emph{redex} is denoted $\mathcal R_{\mathcal Rule}.$ We write the pair $ (\tmThree,\tmThree')\in \mathcal Rule $ also as $\tmThree \mathcal Root{\mathcal Rule}\tmThree'$. 
Given a set of rules $\mathcal RulesSet$, the relation $\mathrel{\rightarrow} = {}^{\bot}igcup \mathfrak to{\Rule}$ ($\mathcal Rule \in \mathcal RulesSet$) can equivalently be defined as the contextual closure of ${}^{\bot}igcup \mathcal Rule$. We also write $\mathrel{\rightarrow} =\{\mathfrak to{\Rule} \mid \mathcal Rule \in \mathcal RulesSet \}$. } \pink{\paragraph{General properties of the contextual closure.} We recall a basic but key property of contextual closure. We say that $t$ and $t'$ have \emph{the same shape} if both terms are an application (resp. an abstraction, a variable, a constant, or a term of shape $\mathbf{o} (tp_1, ...,tp_k)$ ). {}^{\bot}eta\etagin{fact}[Shape preservation]\label{fact:shape} Assume $t=\mathbf{c}p{tr}\mathrel{\rightarrow}x{\mathcal Rule} \mathbf{c}p {tr'}=t'$ and that the context $\mathtt{c}$ is \emph{non-empty}. Then $t$ and $t'$ have the same shape. \end{fact} That is, if a step $\mathrel{\rightarrow}c$ is obtained by closure under \emph{non-empty context}, then it preserve the shape of the term. Please notice that we will often write $\xred{\rsym}c$ to indicate the step $\mathrel{\rightarrow}c$ which is obtained by \emph{empty contextual closure}. } \subsection{Call-by-Name and Call-by-Value $\lam$-calculi} \paragraph{Pure CbN\xspace and Pure CbV\xspace $\lambda$-calculi.} \label{ex:cbn-cbv-calculi} The \emph{pure call-by-name} (CbN\xspace for short) $\lambda$-calculus \cite{Barendregt84,HindleySeldin86} is $(\mathfrak lambda, \mathrel{\rightarrow}b)$, the set of terms $\mathfrak lambda$ together with the ${}^{\bot}eta\etata$-reduction $\mathrel{\rightarrow}b$, defined as the contextual closure of the usual ${}^{\bot}eta\etata$-rule, which we recall in \,=\,ref{eq:rule-beta} below. The \emph{pure call-by-value} (CbV\xspace for short) $\lambda$-calculus \cite{Plotkin75} is the set $\mathfrak lambda$ endowed with the reduction $\mathfrak to{{}^{\bot}eta\etata}v$, defined as the contextual closure of the ${}^{\bot}eta\etata_v$-rule in \,=\,ref{eq:rule-betav}. \noindent {}^{\bot}eta\etagin{minipage}{0.38\mathcal{lin}ewidth} {}^{\bot}eta\etagin{equation} \label{eq:rule-beta} \textup{CbN\xspace: } \, (\mathfrak la{x}\tm)\tmTwo \mathcal Root{{}^{\bot}eta\etata} \tm\mathfrak sub{\tmTwo}{x} \end{equation} \end{minipage} \quad {}^{\bot}eta\etagin{minipage}{0.56\mathcal{lin}ewidth} {}^{\bot}eta\etagin{equation} \label{eq:rule-betav} \textup{CbV\xspace: } \, (\mathfrak la{x}{\tm})v \mapsto_{{}^{\bot}eta\etata_v} \tm\mathfrak sub{v}{x} \mbox{ \ with } v \!\in\! \ValSet \end{equation} \end{minipage} \paragraph{CbN\xspace and CbV\xspace $\lambda$-calculi.} A CbN\xspace (resp. CbV\xspace) $\lambda$-calculus is the set of terms endowed with a reduction $\mathfrak to{} $ which extends $\mathrel{\rightarrow}b$ (resp. $\mathrel{\rightarrow}bv$). In particular, the \emph{applied} setting with operators (when $\mathcal{O} \neq \emptyset$) models in the $\lam$-calculus richer computational features, allowing $\mathbf{o}$-reductions as the contextual closure of $\mathbf{o}$-rules of the form $\mathbf{o} (\tm_1, \dots, \tm_k) \mathcal Root{\mathbf{o}} \tmTwo$. 
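To make the contextual closure concrete, here is a small Haskell sketch (ours; all constructor and function names are invented for illustration). A root rule is rendered as a function returning the list of possible contracta, and \texttt{reducts} closes it under all contexts, including operator arguments; the $\beta$ and $\beta_v$ root rules are given, together with the non-deterministic choice rule formally introduced in the example below, and one can observe that $\beta_v$ refuses to fire when the argument is not a value.
\begin{verbatim}
data Term = Var String | Lam String Term | App Term Term
          | Op String [Term]                 -- an operator o(t1,...,tk)
  deriving Show

-- A root rule maps a potential redex to the list of its contracta
-- (a list, so that non-deterministic rules fit the same type).
type RootRule = Term -> [Term]

-- Naive substitution (capture-safe for closed arguments only).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App t u) = App (subst x s t) (subst x s u)
subst x s (Op f ts) = Op f (map (subst x s) ts)

isValue :: Term -> Bool
isValue (Var _)   = True
isValue (Lam _ _) = True
isValue _         = False

beta, betaV, oplus :: RootRule
beta  (App (Lam x b) u)              = [subst x u b]
beta  _                              = []
betaV (App (Lam x b) u) | isValue u  = [subst x u b]
betaV _                              = []
oplus (Op "oplus" [t, u])            = [t, u]
oplus _                              = []

-- Contextual closure: all one-step reducts, firing the rule at any position.
reducts :: RootRule -> Term -> [Term]
reducts rule t = rule t ++ inside t
  where
    inside (Lam x b) = [Lam x b' | b' <- reducts rule b]
    inside (App u v) = [App u' v | u' <- reducts rule u]
                    ++ [App u v' | v' <- reducts rule v]
    inside (Op f ts) = [Op f (before ++ ti' : after)
                       | (before, ti : after) <- splits ts
                       , ti' <- reducts rule ti]
    inside (Var _)   = []
    splits xs = [splitAt k xs | k <- [0 .. length xs - 1]]

main :: IO ()
main = do
  let t = App (Lam "x" (Var "x")) (Op "oplus" [Var "y", Var "z"])
  print (reducts beta t)                        -- the beta-redex fires at the root
  print (reducts betaV t)                       -- no step: the argument is not a value
  print (reducts (\s -> betaV s ++ oplus s) t)  -- only the choice can be fired
\end{verbatim}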
{}^{\bot}eta\etagin{example} [Non-deterministic $\lam$-calculus]\label{ex:NDext} Let $\mathcal{O} =\{\mathbf{o}lus\}$ where $\mathbf{o}lus$ is a binary operator; let $\mathrel{\rightarrow}o$ be the contextual closure of the (non-deterministic) rule below: \[ \mathbf{o}lus (t_1,t_2)\mapsto_{\mathbf{o}lus} t_1 \quad \text{ and } \quad \mathbf{o}lus (t_1,t_2)\mapsto_{\mathbf{o}lus} t_2\] The \emph{non-deterministic CbN\xspace $ \lam $-calculus} $\mathfrak lambda_\oplus^{\mathtt{cbn}}=(\mathfrak lambda_{\mathbf{o}lus},\mathrel{\rightarrow}x{{}^{\bot}eta\etata\mathbf{o}lus})$ is the set $\mathfrak lambda_{\mathbf{o}lus}$ with the reduction $\mathrel{\rightarrow}x{{}^{\bot}eta\etata\mathbf{o}lus} \ = \ \xrightarrow{}_{}^{\bot}eta\etata \cup \mathrel{\rightarrow}_{\mathbf{o}lus} $. The \emph{non-deterministic CbV\xspace $ \lam $-calculus} $\mathfrak lambda_\oplus^{\mathtt{cbv}}=(\mathfrak lambda_{\mathbf{o}lus},\mathrel{\rightarrow}x{{{}^{\bot}eta\etata_v}\mathbf{o}lus})$ is the set $\mathfrak lambda_{\mathbf{o}lus}$ with the reduction $\mathrel{\rightarrow}x{{{}^{\bot}eta\etata_v}\mathbf{o}lus} \ = \ \xrightarrow{}_{}^{\bot}eta\etatav \cup \mathrel{\rightarrow}_{\mathbf{o}lus} $. \end{example} \subsection{Bang calculi} \label{sect:bang-calculus} The bang calculus \cite{EhrhardGuerrieri16,GuerrieriManzonetto18} is a variant of the $\lambda$-calculus inspired by linear logic. An operator $\oc$ plays the role of a marker for duplicability and discardability. Here we allow also the presence of operators other than $\Bang$, ranging over a set $\mathcal{O}$. So, terms and contexts of the bang calculus (denoted by capital letters) are: {}^{\bot}eta\etagin{align*} T, TTwo, TThree &\Coloneqq x \mid \mathfrak la{x}{T} \mid TTTwo \mid \Bang{T} \mid \mathbf{o}(T_1, \dots, T_k) & \text{\emph{Terms:~}}\mathfrak lambda_\ocOp \\ \mathbf{C} & \Coloneqq \mathfrak hole{\cdot} \mid \mathfrak la{x}\mathbf{C} \mid T\mathbf{C} \mid \mathbf{C} T \mid \Bang{\mathbf{C}} \mid \mathbf{o}(T_1, \dots, \mathbf{C},\dots, T_k) & \text{\emph{Contexts:~}}\mathfrak lambda_\ocOpCtx \end{align*} Terms of the form $\Bang{T}$ are called \emph{boxes} and their set is denoted by $\oc\mathfrak lambda_\ocOp$. When there are no operators other than $\oc$ (\textit{i.e.}\xspace $\mathcal{O} = \emptyset$), the sets of terms, boxes and contexts are denoted by $\mathfrak lambda_\oc$, $\oc\mathfrak lambda_\oc$ and $\mathfrak lambda_\ocCtx$, respectively. This syntax can be expressed in the one of \Cref{sect:lambda-calculi}, where $\oc$ is an unary operator called \emph{bang}. \paragraph{The pure bang calculus.} The \emph{pure} bang calculus $(\mathfrak lambda_\oc, \mathrel{\rightarrow}bb)$ is the set of terms $\mathfrak lambda_\oc$ endowed with reduction $\mathfrak to{\mathfrak tot}$, the closure under contexts in $\mathfrak lambda_\ocOpCtx$ of the \emph{${}^{\bot}eta\etata_\oc$-rule}: {}^{\bot}eta\etagin{equation} (\mathfrak la{x}{T})\,\Bang{TTwo} \mathcal Root{{}^{\bot}eta\etata_\oc} T \mathfrak sub{TTwo}{x} \end{equation} Intuitively, in the bang calculus the bang-operator $\oc$ marks the only terms that can be erased and duplicated. 
Indeed, a \emph{${}^{\bot}eta\etata$-like redex} $(\mathfrak la{x}{T}){TTwo}$ can be fired by $\mathcal Root{{}^{\bot}eta\etata_\oc}$ only when its argument $TTwo$ is a box\xspace, \textit{i.e.}\xspace$TTwo = \Bang{TThree}$: if it is so, the content $TThree$ of the box $TTwo$ (and not $TTwo$ itself) replaces any free occurrence of $x$ in $T$.\footnotemark \footnotetext{Syntax and reduction rule of the bang calculus follow \cite{GuerrieriManzonetto18}, which is slightly different from \cite{EhrhardGuerrieri16}. Unlike \cite{GuerrieriManzonetto18} (but akin to \cite{SantoPintoUustalu19}), here we do not use $\Der{\!}$ (aka $\mathsf{der}$) as a primitive, since $\Der{\!}$ and its associated rule $\mathcal Root{\mathsf{d}}$ can be simulated, see \Cref{ex:identity}~and~\,=\,ref{eq:der}.} A proof of confluence of ${}^{\bot}eta\etata_\oc$-reduction $\mathfrak to{\mathfrak tot}$ is in \cite{GuerrieriManzonetto18}. \newcommand{I}{I} {}^{\bot}eta\etagin{notation} \label{note:terms} We use the following notations to denote some notable terms. {}^{\bot}eta\etagin{align*} \Der{\!} := \mathfrak la{x}{x} && \delta:=\mathfrak la{x}{x}{x} && I := \mathfrak la{x}{\Bang{x}} && \Delta := \mathfrak la{x}{x\,\Bang{x}}. \end{align*} \end{notation} {}^{\bot}eta\etagin{remark}[Notable terms]\label{ex:delta}\label{ex:identity} The term $I = \mathfrak la{x}{\Bang{x}}$ plays the role of the identity in the bang calculus: $I \, \Bang{T} \mathfrak to{\mathfrak tot} \Bangp{x\mathfrak sub{T}{x}} = \Bang{T}$ for any term $T$. Instead, the term $\Der{\!} = \mathfrak la{x}{x}$, when applied to a box $\Bang{T}$, opens the box, \textit{i.e.}\xspace returns its content $T$: $\Der{}\Bang{T} \mathfrak to{\mathfrak tot} x\mathfrak sub{T}{x} = T$. Finally, $\Delta\, \Bang{\Delta} \mathfrak to{\mathfrak tot} \Delta\, \Bang{\Delta} \mathfrak to{\mathfrak tot} \dots$ is a diverging term. \end{remark} \paragraph{A bang calculus.} A \emph{bang calculus} $(\mathfrak lambda_\ocOp, \mathfrak to{})$ is the set $\mathfrak lambda_\ocOp$ of terms endowed with a reduction $\mathfrak to{} $ which extends $ \mathfrak to{\mathfrak tot}$. In this paper we shall consider calculi where $\xrightarrow{}$ contains $\mathfrak to{\mathfrak tot}$ and $\mathbf{o}$-reductions $\mathfrak to{\op}$ ($\mathbf{o}\in \mathcal{O} $) defined from $\mathbf{o}$-rules of the form $\mathbf{o} (T_1, \dots, T_k) \mathcal Root{\mathbf{o}} TTwo$, and possibly other rules. So, $\mathrel{\rightarrow} \,={}^{\bot}igcup_{\mathcal Rule}\mathfrak to{\Rule} (\mathcal Rule\in \mathcal RulesSet)$, with $\mathcal RulesSet \supseteq \{\Bang{{}^{\bot}eta\etata}, \mathbf{o} \mid \mathbf{o}\in \mathcal{O}\}$. We set $\mathrel{\rightarrow}x{\mathcal{O}} \,= {}^{\bot}igcup_{\mathbf{o}\in \mathcal{O}}\mathrel{\rightarrow}x{\mathbf{o}}$. \subsection{CbN\xspace and CbV\xspace translations into the bang calculus} \label{subsect:translations} Our motivation to study the bang calculus is to have a general framework where both CbN\xspace \cite{Barendregt84} and CbV\xspace \cite{Plotkin75} $\lambda$-calculi can be embedded, via two distinct translations. Here we show how these translations work. We extend the simulation results in \cite{GuerrieriManzonetto18,SantoPintoUustalu19,BucciarelliKesnerRiosViso20} for the pure case to the case with operators (\Cref{thm:embedding}). Following \cite{BucciarelliKesnerRiosViso20}, the CbV\xspace translation defined here differs from \cite{GuerrieriManzonetto18,SantoPintoUustalu19} in the application case. 
\Cref{sect:embedding} will show why this optimization is crucial. \emph{CbN\xspace} and \emph{CbV\xspace} \emph{translations} are two maps $\Cbn{(\cdot)} \colon \mathfrak lambda_\OpSet \xrightarrow{} \mathfrak lambda_\ocOp$ and $\Cbv{(\cdot)} \colon \mathfrak lambda_\OpSet \xrightarrow{} \mathfrak lambda_\ocOp$, respectively, translating terms of the $\lambda$-calculus into terms of the bang~calculus: {}^{\bot}eta\etagin{align*} \Cbn{x} &\coloneqq x & \ \Cbn{(\mathfrak la{x\,}{\tm})} &\coloneqq \mathfrak la{x\,}{\Cbn{\tm}} & \Cbn{(\mathbf{o}(\tm_1, \dots, \tm_k))} &\coloneqq \mathbf{o}(\Cbn{\tm_1}, \dots, \Cbn{\tm_k}) & \ \Cbn{(\tm\tmTwo)} &\coloneqq \Cbn{\tm} \,{\Bang{{\Cbn{\tmTwo}}}} \,; \\ \Cbv{x} &\coloneqq \Bang{x} & \ \Cbv{(\mathfrak la{x\,}{\tm})} &\coloneqq \Bang{(\mathfrak la{x}{\Cbv{\tm}})} & \Cbv{(\mathbf{o}(\tm_1, \dots, \tm_k))} &\coloneqq \mathbf{o}(\Cbv{\tm_1}, \dots, \Cbv{\tm_k}) & \ \Cbv{(\tm\tmTwo)} &\coloneqq {}^{\bot}eta\etagin{cases} T \,\Cbv{\tmTwo} & \text{if } \Cbv{\tm} = \Bang{T} \\ (\Der{\Cbv{\tm}}){\Cbv{\tmTwo}} &\text{otherwise}. \end{cases} \end{align*} {}^{\bot}eta\etagin{example} \label{ex:delta-translated} Consider the $\lambda$-term $\omega \coloneqq \delta\delta$: then, $\Cbn{\delta} = \Delta$, $ \Cbv{\delta} = \Bang{ \Delta}$ and $\Cbn{\omega} = \Delta\,{\Bang{\Delta}} = \Cbv{\omega}$ ($\delta$ and $\Delta$ are defined in \Cref{note:terms}). The $\lambda$-term $\omega$ is diverging in CbN\xspace and CbV\xspace $\lambda$-calculi, and so is $\Cbn{\omega} = \Cbv{\omega}$ in the bang calculus, see \Cref{ex:delta}. \end{example} For any term $\tm \in \mathfrak lambda_\OpSet$, $\Cbn{\tm}$ and $\Cbv{\tm}$ are just different decorations of $\tm$ by means of the bang-operator $\oc$ (recall that $\Der{\!} = \mathfrak la{x}{x}$). The translation $\Cbn{(\cdot)}$ puts the argument of any application into a box: in CbN\xspace any term is duplicable or discardable. On the other hand, only \emph{values} (\textit{i.e.}\xspace abstractions and variables) are translated by $\Cbv{(\cdot)}$ into boxes, as they are the only terms duplicable or discardable in~CbV\xspace. As in \cite{GuerrieriManzonetto18,SantoPintoUustalu19}, we prove that the CbN\xspace translation $\Cbn{(\cdot)}$ (\mathcal Resp CbV\xspace translation $\Cbv{(\cdot)}$) from the pure CbN\xspace (\mathcal Resp CbV\xspace) $\lambda$-calculus into the bang calculus is \emph{sound} and \emph{complete}: it maps ${}^{\bot}eta\etata$-reductions (\mathcal Resp ${}^{\bot}eta\etata_v$-reductions) of the $\lambda$-calculus into ${}^{\bot}eta\etata_\oc$-reductions of the bang calculus, and conversely ${}^{\bot}eta\etata_\oc$-reductions\,---\,when restricted to the image of the translation\,---\,into ${}^{\bot}eta\etata$-reductions (\mathcal Resp ${}^{\bot}eta\etata_v$-reductions). The same holds if we consider any $\mathbf{o}$-reduction for operators. In the simulation, $\mathfrak toBang$ denotes the contextual closure of the rule: {}^{\bot}eta\etagin{equation} \label{eq:der} \Der{\Bang{T}} \mathcal Root{\mathsf{d}} T \quad \text{(this is nothing but } (\mathfrak la{x}{x}) \Bang{T} \mathcal Root{{}^{\bot}eta\etata_\oc} T \text{)} \end{equation} Clearly, $\mathfrak toBang \, \subseteq \, \mathfrak to{\mathfrak tot}$ (\Cref{ex:identity}). We write $T \mathfrak toBangNorm TTwo$ if $T \mathfrak toBang^* TTwo$ and $TTwo$ is $\mathsf{d}$-normal. 
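As an executable companion to these clauses, the following Haskell sketch (ours; constructor names are invented, and operators are omitted) implements the two translations, including the refined CbV\xspace application case that inserts $\Der{\!}$ only when the translation of the function part is not already a box. On $\omega = \delta\delta$ it reproduces \Cref{ex:delta-translated}: both translations yield $\Delta\,\Bang{\Delta}$.
\begin{verbatim}
-- Source lambda-terms and target bang-terms (operators omitted for brevity).
data LamTerm  = V String | A LamTerm LamTerm | L String LamTerm
  deriving Show

data BangTerm = BV String | BA BangTerm BangTerm | BL String BangTerm
              | Box BangTerm
  deriving Show

-- der = \x.x in the bang calculus (it opens a box).
der :: BangTerm
der = BL "x" (BV "x")

-- CbN translation: every argument is put inside a box.
cbn :: LamTerm -> BangTerm
cbn (V x)   = BV x
cbn (L x t) = BL x (cbn t)
cbn (A t u) = BA (cbn t) (Box (cbn u))

-- Refined CbV translation: values become boxes, and der is inserted
-- only when the translated function part is not already a box.
cbv :: LamTerm -> BangTerm
cbv (V x)   = Box (BV x)
cbv (L x t) = Box (BL x (cbv t))
cbv (A t u) = case cbv t of
                Box tt -> BA tt (cbv u)
                tt     -> BA (BA der tt) (cbv u)

main :: IO ()
main = do
  let delta = L "x" (A (V "x") (V "x"))
      omega = A delta delta
  print (cbn omega)   -- Delta applied to a boxed Delta
  print (cbv omega)   -- the same term, as in the example above
\end{verbatim}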
{}^{\bot}eta\etagin{restatable}[Simulation of CbN\xspace and CbV\xspace]{proposition}{embedding} \label{thm:embedding} Let $\tm \in \mathfrak lambda_\OpSet$ and $\mathbf{o} \in \mathcal{O}$. {}^{\bot}eta\etagin{enumerate} \item\label{p:embedding-cbn} \emph{CbN\xspace soundness:} If $\tm \mathfrak to{{}^{\bot}eta\etata} \tm'$ then $\Cbn{\tm} \mathfrak to{\mathfrak tot} \Cbn{{\tm'}}$. If $\tm \mathfrak to{\op} \tm'$ then $\Cbn{\tm} \mathfrak to{\op} \Cbn{{\tm'}}$. \\ \emph{CbN\xspace completeness:} If $\Cbn{\tm} \mathfrak to{\mathfrak tot} TTwo$ then $TTwo = \Cbn{{\tm'}}$ and $\tm \mathfrak to{{}^{\bot}eta\etata} \tm'$, for some $\tm' \in \mathfrak lambda_\OpSet$. If $\Cbn{\tm} \mathfrak to{\op} TTwo$ then $TTwo = \Cbn{{\tm'}}$ and $\tm \mathfrak to{\op} \tm'$, for some $\tm'\in \mathfrak lambda_\OpSet$. \item\label{p:embedding-cbv} \emph{CbV\xspace soundness:} If $\tm \mathfrak to{{}^{\bot}eta\etata}v \tm'$ then $\Cbv{\tm} \mathfrak to{\ValScript}\mathfrak toBang^= \Cbv{{\tm'}}$ with $\Cbv{{\tm'}}$ $\mathsf{d}$-normal. If $\tm \mathfrak to{\op} \tm'$ then $\Cbv{\tm} \mathfrak to{\op}\mathfrak toBang^= \Cbv{{\tm'}}$ with $\Cbv{{\tm'}}$ $\mathsf{d}$-normal. \\ \emph{CbV\xspace completeness:} If $\Cbv{\tm} \mathfrak to{\ValScript} \mathfrak toBangNorm TTwo$ then $\Cbv{\tm} \mathfrak to{\ValScript} \mathfrak toBang^= TTwo$ with $TTwo = \Cbv{{\tm'}}$ and $\tm \mathfrak to{{}^{\bot}eta\etata}v \tm'$, for some $\tm' \in \mathfrak lambda_\OpSet$. If $\Cbv{\tm} \mathfrak to{\op} \mathfrak toBangNorm TTwo$ then $\Cbv{\tm} \mathfrak to{\op} \mathfrak toBang^= TTwo$ with $TTwo = \Cbv{{\tm'}}$ and $\tm \mathfrak to{\op} \tm'$, for some $\tm' \in \mathfrak lambda_\OpSet$. \end{enumerate} \end{restatable} \pink{Note that \emph{one step} of ${}^{\bot}eta\etata$-reduction corresponds exactly, via $\Cbn{(\cdot)}$, to \emph{one step} of ${}^{\bot}eta\etata_\oc$-reduction, and vice-versa; \mathcal RED{and \emph{one step} of ${}^{\bot}eta\etata_v$-reduction corresponds exactly, via $\Cbv{(\cdot)}$, to \emph{one step} of ${}^{\bot}eta\etata_\oc$-reduction, and vice-versa.} The same holds for $\mathbf{o}$-reduction.} {}^{\bot}eta\etagin{example} \label{ex:simulation} Let $\tm = ((\mathfrak la{xThree}{xThree})x)xTwo$ and $\tm' = xxTwo$. Then $\tm \mathfrak to{{}^{\bot}eta\etata} \tm'$ while $\Cbn{\tm} = ((\mathfrak la{xThree}{xThree})\Bang{x})\Bang{xTwo} \mathfrak to{\mathfrak tot} x\, \Bang{xTwo} = \Cbn{{\tm'}}$; and $\tm \mathfrak to{{}^{\bot}eta\etata}v \tm'$ while $\Cbv{\tm} = (\Derp{(\mathfrak la{xThree}{\Bang{xThree}})\Bang{x}})\Bang{xTwo} \mathfrak to{\ValScript} (\Der{\Bang{x}})\Bang{xTwo} \mathfrak toBang x\, \Bang{xTwo} = \Cbv{{\tm'}}$. \end{example} \section{The least-level strategy}\label{sec:ll} \newcommand{good\xspace}{good\xspace} The bang calculus $\mathfrak lambda_\oc$ has a natural normalizing strategy, issued by linear logic (where it was first used in \cite{CarvalhoPF11}), namely the \emph{least-level reduction}. It reduces only redexes at \emph{least level}, where the \emph{level} of a redex $R$ in a term $T$ is the number of boxes $\oc$ in which $R$ is nested. Least-level reduction is easily extended to a general bang calculus $(\mathfrak lambda_\ocOp, \mathfrak to{})$. The level of a redex $R$ is then the number of boxes $\oc$ and operators $\mathbf{o}$ in which $R$ is nested; intuitively, least-level reduction fires a redex which is \emph{minimally nested}. 
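As a concrete reading of this count, the following Haskell sketch (ours; names are invented, and only $\beta_\oc$-redexes are taken into account) computes the least level of a term of the bang calculus by increasing a counter each time the traversal enters a box or an operator; \texttt{Nothing} encodes the value $\infty$ used for normal forms.
\begin{verbatim}
data BangTerm = V String | L String BangTerm | A BangTerm BangTerm
              | Box BangTerm | Op String [BangTerm]
  deriving Show

-- A beta_!-redex: an abstraction applied to a box.
isRedex :: BangTerm -> Bool
isRedex (A (L _ _) (Box _)) = True
isRedex _                   = False

-- Levels of all redex occurrences: the level grows by one under a box
-- or an operator, and is unchanged under abstraction and application.
redexLevels :: Int -> BangTerm -> [Int]
redexLevels k t = [k | isRedex t] ++ case t of
  V _     -> []
  L _ b   -> redexLevels k b
  A u v   -> redexLevels k u ++ redexLevels k v
  Box u   -> redexLevels (k + 1) u
  Op _ ts -> concatMap (redexLevels (k + 1)) ts

leastLevel :: BangTerm -> Maybe Int   -- Nothing plays the role of infinity
leastLevel t = case redexLevels 0 t of
                 []     -> Nothing
                 levels -> Just (minimum levels)

main :: IO ()
main = do
  let delta = L "x" (A (V "x") (Box (V "x")))         -- Delta = \x. x !x
      t     = A (V "y") (Box (A delta (Box delta)))   -- the redex sits under one box
  print (leastLevel t)         -- Just 1
  print (leastLevel (V "x"))   -- Nothing: the term is normal
\end{verbatim}
The general treatment that follows abstracts from this particular choice of redexes and from the specific definition of level.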
Below, we formalize the reduction in a way that is independent of the specific shape of the redexes, and even of specific definition of level one chooses. The interest of least-level reduction is in the properties it satisfies. All our developments will rely on such properties, rather than the specific definition of least level. In this section, $\mathrel{\rightarrow}={}^{\bot}igcup_\mathcal Rule \mathfrak to{\Rule}, \text{for } \mathcal Rule\in \mathcal RulesSet $ a set of rules. We write $\mathcal R = {}^{\bot}igcup_\mathcal Rule \mathcal R_{\mathcal Rule}$ for the set of \emph{all} redexes. \subsection{Least-level reduction in bang calculi}\label{sec:ll_explicit}\label{sec:ll_def} The \emph{level} of an occurrence of redex $R$ in a term $T$ is a measure of its depth. Formally, we indicate the \emph{occurrence of a subterm} $R$ in $T$ with the context $\mathbf{C}$ such that $\mathbf{C}p{R} = T$. Its level then corresponds to the \emph{level} $\lev{\mathbf{C}}$ of the hole in $\mathbf{C}$. The definition of \emph{level} in a bang calculus $\mathfrak lambda_\ocOp$ is formalized as follows. {}^{\bot}eta\etagin{equation} \label{eq:level-bang} {}^{\bot}eta\etagin{gathered} \lev{\mathfrak hole{\cdot}} = 0 \mathbf{q}uad \lev{\mathfrak la{x}\mathbf{C}} = \lev {\mathbf{C}} \mathbf{q}uad \lev {\mathbf{C} T} = \lev {\mathbf{C}} \mathbf{q}uad \lev {T \mathbf{C}} = \lev {\mathbf{C}} \\ \lev {\Bang{\mathbf{C}}} = \lev {\mathbf{C}} + 1 \mathbf{q}uad \lev {\mathbf{o} (\dots, \mathbf{C}, \dots)} = \lev{\mathbf{C}} +1 \end{gathered} \end{equation} Note that the level increases by $1$ in the scope of $\oc$, and of any operator $\mathbf{o}\in \mathcal{O}$. A reduction step $T \mathfrak to{\Rule} TTwo$ is \emph{at level} $k$ if it fires a $\mathcal Rule$-redex at level $k$; it is \emph{least-level } if it reduces a redex whose level is minimal. The \emph{least level} $\textsc {l}Bang{T}$ of a term $T$ expresses the minimal level of any occurrence of redexes in $T$; if no redex is in $T$, we set $\textsc {l}Bang{T} = \infty$. Formally {}^{\bot}eta\etagin{definition}[Least-level reduction] \label{def:ll} \label{def:depth-bang} Let $\mathrel{\rightarrow}\,=\, {}^{\bot}igcup_\mathcal Rule \mathrel{\rightarrow}x{\mathcal Rule}$ ($ \mathcal Rule\in \mathcal RulesSet$) and $\mathcal R = {}^{\bot}igcup_\mathcal Rule \mathcal R_{\mathcal Rule}$ the set of redexes. Given a function $\lev{-}$ from contexts into ${\mathbb N} $: {}^{\bot}eta\etagin{itemize} \item The \emph{least level} of a term $T$ is defined as {}^{\bot}eta\etagin{align} \label{eq:least-level} \textsc {l}b{}{T} \coloneqq \inf \{\lev{\mathbf{C}} \mid T = \mathbf{C}p{TThree} \text{ for some } TThree\in \mathcal R \} \in ({\mathbb N} \cup \{\infty\}).\footnotemark \end{align} \footnotetext{Recall that $\inf \emptyset = \infty$, when $\emptyset$ is seen as the empty subset of ${\mathbb N}$ with the usual order.} \item A $\mathcal Rule$-reduction step $T \mathfrak to{\Rule} TTwo$ is: {}^{\bot}eta\etagin{enumerate} \item \emph{at level $k$}, written $T \mathfrak to{\Rule}At{k} TTwo$, if $T \coloneqq \mathbf{C}p{TThree} \mathfrak to{\Rule} \mathbf{C}p{TThree'} {\scriptscriptstyle =}def TTwo$ and $\lev{\mathbf{C}} = k$. \item \emph{least-level}, written $T \textsc {l}redRule TTwo$, if $T \mathfrak to{\Rule}At{k} TTwo$ and $k=\textsc {l}b{}{T}$. \item \emph{internal}, written $T \mathrel{\nllredx{\Rule}} TTwo$, if $T \mathfrak to{\Rule}At{k} TTwo$ and $k>\textsc {l}b{}{T}$. 
\end{enumerate} \item \emph{Least-level reduction} is $\textsc {l}red \,=\, {}^{\bot}igcup_\mathcal Rule \textsc {l}redRule$ ($\mathcal Rule\in \mathcal RulesSet$). \item \emph{Internal reduction} is $\nllred \,=\, {}^{\bot}igcup_\mathcal Rule \mathrel{\nllredx{\Rule}} $ ($\mathcal Rule\in \mathcal RulesSet$). \end{itemize} \end{definition} Note that $\mathrel{\rightarrow} \ = \ \textsc {l}red \!\cup \nllred $. Note also that the definition of least level of a term depends on the set $\mathcal R = {}^{\bot}igcup_\mathcal Rule \mathcal R_{\mathcal Rule}$ of redexes associated with $\mathrel{\rightarrow}$.\footnotemark \footnotetext{ We should write $ \textsc {l}BangR{T} $, $\textsc {l}_{\mathcal R}$ and $\xredx{\textsc {l}_{\mathcal R}}{\mathcal Rule}$, but we avoid it for the sake of readability.} \paragraph{Normal Forms.} It is immediate that $\textsc {l}red\subset \mathrel{\rightarrow} $ is a \emph{strategy} for $\mathrel{\rightarrow}$. Indeed, $\textsc {l}red$ and $\mathrel{\rightarrow}$ have the \emph{same normal forms} because if $M$ has a $\mathrel{\rightarrow}$-redex, it has a redex at least-level, \textit{i.e.}\xspace it has a $\textsc {l}red$-redex. {}^{\bot}eta\etagin{remark}[Least level of normal forms] \label{rmk:least-level-normal} Note that $\textsc {l}b{}{T} = \infty$ if and only if $T$ is $\mathrel{\rightarrow}$-normal, because $\lev{\mathbf{C}} \in {\mathbb N}$ for all contexts $\mathbf{C}$. \end{remark} \paragraph{A good\xspace least-level reduction.} The beauty of least-level reduction for the bang calculus, is that it satisfies some elegant properties, which allow for neat proofs, in particular monotonicity and internal invariance (in \Cref{def:good}). The developments in the rest of the paper rely on such properties, and in fact will apply to any calculus whose reduction $\mathrel{\rightarrow}$ has the properties described below. {}^{\bot}eta\etagin{definition}[Good least-level]\label{def:good} A reduction $\mathrel{\rightarrow}$ has a \emph{good\xspace least-level} if: {}^{\bot}eta\etagin{enumerate} \item\label{p:ll-properties-monotone} \emph{Monotonocity:} $T \mathrel{\rightarrow} TTwo$ implies $\textsc {l}Bang{T} \leq \textsc {l}Bang{TTwo}$. \item\label{p:ll-properties-invariance} \emph{Internal invariance:} $T \nllred TTwo$ implies $\textsc {l}Bang T = \textsc {l}Bang{TTwo}$. \end{enumerate} \end{definition} Point 1. states that no step can decrease the least level of a term. Point 2. says that internal steps cannot change the least level of a term. Therefore, only least-level steps may increase the least level. Together, they imply persistence: only least-level steps can approach normal forms. {}^{\bot}eta\etagin{fact}[Persistence] \label{p:ll-properties-persistence} If $\mathrel{\rightarrow}$ has a good least-level, then $T \nllred TTwo$ implies $TTwo$ is not $\mathrel{\rightarrow}$-normal. \end{fact} The pure bang calculus $({\mathfrak lambda_\oc}, \mathrel{\rightarrow}bb) $ has a good\xspace least-level; the same holds true when extending the reduction with operators. {}^{\bot}eta\etagin{restatable}[Good least-level of bang calculi]{proposition}{llproperties}\label{prop:ll-properties} Given ${\mathfrak lambda_\ocOp}$, let $\mathrel{\rightarrow} \,=\, \mathrel{\rightarrow}bb\cup \mathrel{\rightarrow}x{\mathcal{O}}$, where each $\mathbf{o}\in \mathcal{O}$ has a redex of shape $\mathbf{o}(P_1, \dots, P_k)$. The reduction $\mathrel{\rightarrow}$ has a good\xspace least-level. 
\end{restatable} \subsection{Least-level for a bang calculus: examples.} Let us examine more closely least-level reduction for a bang calculus $(\mathfrak lambda_\ocOp, \mathfrak to{})$. For concreteness, we consider $\mathcal RulesSet =\{\mathfrak tot, \mathbf{o}\mid \mathbf{o}\in \mathcal{O}\}$, hence the set of redexes is $\mathcal R=\mathcal R_{\mathfrak tot} \cup \mathcal R_\mathcal{O}$, where $\mathcal R_\mathcal{O}$ is a set of terms of shape $\mathbf{o}(P_1,\dots, P_k)$. We observe that the least level $\textsc {l}b{}{T}$ of a term $T\in \mathfrak lambda_\ocOp$ can easily be defined in a direct way, inductively. {}^{\bot}eta\etagin{itemize} \item $\textsc {l}b{}{T} = 0 \text{ if }T \in \mathcal R\,=\,\mathcal R_{\mathfrak tot} \cup \mathcal R_\mathcal{O}$, \item otherwise:\\ $ \textsc {l}b{}{x} = \infty \quad \textsc {l}b{}{\mathfrak la{x}T} = \textsc {l}b{}{T} \quad \textsc {l}b{}{\Bang{T}} = \textsc {l}b{}{T} + 1 \quad \textsc {l}b{}{T TTwo} = \min\{\textsc {l}b{}{T} , \textsc {l}b{}{TTwo} \} $ \end{itemize} {}^{\bot}eta\etagin{example}[Least level of a term]\label{ex:ll} Let $R \in \mathcal R_{\mathfrak tot} $. If $T_0:=R\,(!R)$, then $\textsc {l}b{}{T_0}= 0$. If $T_1:= x!R$ then $\textsc {l}b{}{T_1}= 1$. If $T_2 := \mathbf{o}(x,y)!R$ then $\textsc {l}b{}{T_2}=0$, as $\mathbf{o}(x,y) \in \mathcal R_{\mathbf{o}}$. \end{example} Intuitively, least-level reduction fires a redex that is \emph{minimally nested}, where a redex is any subterm whose form is in $\mathcal R \,=\,\mathcal R_{\mathfrak tot} \cup \mathcal R_\mathcal{O}$. Note that least-level reduction can choose to fire one among possibly \emph{several} redexes at minimal level. {}^{\bot}eta\etagin{example}\label{ex:llred1}Let us revisit \Cref{ex:ll} with $R= (\lam x.x) \Bang{z}\in \mathcal R_{\mathfrak tot} $ ($R\mathcal Root{\mathfrak tot} z$). Then $T_1:= x\,\Bang{R} \textsc {l}redbb x\,\Bang{z}$ but $T_0:= R\,(\Bang{R}) \not\textsc {l}red R\, \Bang{z}$ and similarly $T_2 := \mathbf{o}(x,y)\,\Bang{R} \not\textsc {l}redbb \mathbf{o}(x,y) !z$. Observe also that $\mathbf{o}(x,\mathtt{unit\;}nderline{R}) \not\textsc {l}redbb \mathbf{o}(x,z ) $. \end{example} {}^{\bot}eta\etagin{example}\label{ex:llred2} Let $R = (\lam x.x) \Bang{z}$. Two least-level steps are possible in ${(\lam z. R)\Bang{R} }$: $\mathtt{unit\;}nderline{(\lam z. R)\Bang{R} }\textsc {l}redbb (\lam x.x)\Bang{R}$, and $(\lam z. \mathtt{unit\;}nderline{R})\Bang{R} \textsc {l}redbb (\lam z. z)\Bang{R}$. But $(\lam z. R) \Bang{\mathtt{unit\;}nderline{R}} \not\textsc {l}redbb (\lam z. R)!z$. \end{example} \mathfrak sLV{}{Finally, let us revisit Examples \ref{ex:llred1} and \ref{ex:llred2} making explicit the level. {}^{\bot}eta\etagin{example*}[\ref{ex:llred1}, revisited] $T_1:= x!(\iota !z) \mathrel{\rightarrow}x{\mathfrak tot:1} x!z$ and $\textsc {l}b{}{T_1}=1$. $T_2 := \mathbf{o}(x,y)!(\iota !z) \mathrel{\rightarrow}x{\mathfrak tot:1} \mathbf{o}(x,y) !z$ and $\textsc {l}b{}{T_2}=0$. Note that $\mathbf{o}(x,\mathtt{unit\;}nderline{R}) \mathrel{\rightarrow}x{\mathfrak tot:1} \mathbf{o}(x,z ) $, while $\textsc {l}b{}{\mathbf{o}(x,\mathtt{unit\;}nderline{R}) }=0$. \end{example*} {}^{\bot}eta\etagin{example*}[\ref{ex:llred2}, revisited] Let $R,S$ be as in \Cref{ex:llred2}. Then $\mathtt{unit\;}nderline{(\lam z. R)!S }\mathrel{\rightarrow}x{\mathfrak tot:0} \iota!S$, $(\lam z. \mathtt{unit\;}nderline{R})!S \mathrel{\rightarrow}x{\mathfrak tot:0} (\lam z. z)!S$, and $(\lam z. R)!\mathtt{unit\;}nderline{S} \mathrel{\rightarrow}x{\mathfrak tot:1} (\lam z.
R)!z$, with $\textsc {l}b{}{(\lam z. R)!S} =0$. \end{example*} } \subsection{Least-level for CbN\xspace and CbV\xspace $\lambda$-calculi} \label{subsect:level-cbn-cbv} The definition of least-level reduction in \Cref{sec:ll_def} is independent of the specific notion of level that is chosen, and also of the specific calculus. The idea is that the reduction strategy persistently fires a redex at minimal level, once such a notion is set. Least-level reduction can indeed be defined also for the CbN\xspace and CbV\xspace $\lambda$-calculi, given an appropriate definition of level. In CbN\xspace, we count the number of nested arguments and operators containing the occurrence of the redex. In CbV\xspace, we count the number of nested operators and \emph{unapplied} abstractions containing the redex, where an abstraction is unapplied if it is not the left-hand subterm of an application, \textit{i.e.}\xspace if it is not applied to an argument. Formally, an occurrence of a redex is identified by a context (as explained in \Cref{sec:ll_def}), and we define the following $ \levCbn{\cdot} $ and $ \levCbv{\cdot} $ functions from ${\mathbf{C}Set}$ to ${\mathbb N}$, the \emph{level} in CbN\xspace and CbV\xspace $\lam$-calculi. {\footnotesize {}^{\bot}eta\etagin{align*} \label{eq:level-cbn-cbv} \levCbn{\mathfrak hole{\cdot}} &= 0 & \levCbv{\mathfrak hole{\cdot}} &= 0 \\ \levCbn {\mathfrak la{x}\mathbf{c}}&= \levCbn {\mathbf{c}} & \levCbv {\mathfrak la{x}\mathbf{c}}&= \levCbv {\mathbf{c}} + 1 \\ \levCbn {\mathbf{c} \tm} &= \levCbn {\mathbf{c}} & \levCbv {\mathbf{c} \tm} &= {}^{\bot}eta\etagin{cases} \levCbv{\mathbf{c}Two} &\text{if } \mathbf{c} = \mathfrak la{x}{\mathbf{c}Two} \\ \levCbv{\mathbf{c}} &\text{otherwise} \end{cases} \\ \levCbn {\tm \mathbf{c}} &= \levCbn {\mathbf{c}} + 1 & \levCbv {\tm \mathbf{c}} &= \levCbv {\mathbf{c}} \\ \levCbn {\mathbf{o} (\dots, \mathbf{c}, \dots)}&= \levCbn {\mathbf{c}} +1 & \levCbv {\mathbf{o} (\dots, \mathbf{c}, \dots)}&= \levCbv {\mathbf{c}} +1 \end{align*}} In both CbN\xspace and CbV\xspace $\lambda$-calculi, the \emph{least level} of a term (denoted by $\textsc {l}Cbn{\cdot}$ and $\textsc {l}Cbv{\cdot}$) and \emph{least-level} and \emph{internal} reductions are given by \Cref{def:depth-bang} (replace $\lev{\cdot}$ with $\levCbn{\cdot}$ for CbN\xspace and $\levCbv{\cdot}$ for CbV\xspace). In \Cref{sect:embedding} we will see that the definitions of CbN\xspace and CbV\xspace least level are not arbitrary, but induced by the CbN\xspace and CbV\xspace translations defined in \Cref{subsect:translations}. \section{Embedding of CbN\xspace and CbV\xspace by level} \label{sect:embedding} Here we refine the analysis of the CbN\xspace and CbV\xspace translations given in \Cref{subsect:translations}, by showing two new results: translations preserve normal forms (\Cref{prop:preservation-normal}) and least-level (\Cref{prop:preservation-reduction}), back and forth. This way, to obtain least-level \emph{factorization} or least-level \emph{normalization} results, it suffices to prove them in the bang calculus. The translation transfers the results into the CbN\xspace and CbV\xspace $\lambda$-calculi (\Cref{thm:translation}). We use here the expression ``translate'' in a strong sense: the results for CbN\xspace and CbV\xspace $\lambda$-calculi are obtained from the corresponding results in the bang calculus almost for free, just via CbN\xspace and CbV\xspace translations.
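Before turning to the preservation results, the CbN\xspace and CbV\xspace levels of \Cref{subsect:level-cbn-cbv} can also be unfolded into direct recursions on terms. The Haskell sketch below is our own rendering (reusing the datatype \texttt{Term} of the first sketch); it assumes that the redexes are the ${}^{\bot}eta\etata$-redexes in CbN\xspace, the ${}^{\bot}eta\etata_v$-redexes in CbV\xspace, and all operator terms.
\begin{verbatim}
isValue :: Term -> Bool
isValue (Var _)   = True
isValue (Lam _ _) = True
isValue _         = False

levelsCbn :: Term -> [Int]
levelsCbn t = here ++ go t
  where
    here = [0 | isBeta t || isOp t]
    isBeta (App (Lam _ _) _) = True
    isBeta _                 = False
    isOp (Op _ _)            = True
    isOp _                   = False
    go (Var _)   = []
    go (Lam _ u) = levelsCbn u
    go (App u v) = levelsCbn u ++ map (+ 1) (levelsCbn v)  -- +1 under an argument
    go (Op _ us) = map (+ 1) (concatMap levelsCbn us)      -- +1 under an operator

levelsCbv :: Term -> [Int]
levelsCbv t = here ++ go t
  where
    here = [0 | isBetaV t || isOp t]
    isBetaV (App (Lam _ _) u) = isValue u
    isBetaV _                 = False
    isOp (Op _ _)             = True
    isOp _                    = False
    go (Var _)           = []
    go (Lam _ u)         = map (+ 1) (levelsCbv u)  -- +1 under an unapplied abstraction
    go (App (Lam _ u) v) = levelsCbv u ++ levelsCbv v  -- applied abstraction: no +1
    go (App u v)         = levelsCbv u ++ levelsCbv v
    go (Op _ us)         = map (+ 1) (concatMap levelsCbv us)

llCbn, llCbv :: Term -> Maybe Int
llCbn t = if null ls then Nothing else Just (minimum ls) where ls = levelsCbn t
llCbv t = if null ls then Nothing else Just (minimum ls) where ls = levelsCbv t
\end{verbatim}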
\paragraph{Preservation of normal forms.} The targets of the CbN\xspace translation $\Cbn{(\cdot)}$ and CbV\xspace translation $\Cbv{(\cdot)}$ into the bang calculus can be \emph{characterized syntactically}. A fine analysis of these fragments of the bang calculus (see \cite{bangLong} for details) proves that both CbN\xspace and CbV\xspace translations preserve normal forms, back and forth. {}^{\bot}eta\etagin{proposition}[Preservation of normal forms] \label{prop:preservation-normal} Let $\tm, \tmTwo \in \mathfrak lambda$ and $\mathbf{o} \in \mathcal{O}$. {}^{\bot}eta\etagin{enumerate} \item\label{p:preservation-normal-cbn} \emph{CbN\xspace:} $\tm$ is ${}^{\bot}eta\etata$-normal iff $\Cbn{\tm}$ is ${}^{\bot}eta\etata_\oc$-normal; $\tm$ is $\mathbf{o}$-normal iff $\Cbn{\tm}$ is $\mathbf{o}$-normal. \item\label{p:preservation-normal-cbv} \emph{CbV\xspace:} $\tm$ is ${}^{\bot}eta\etata_v$-normal iff $\Cbv{\tm}$ is ${}^{\bot}eta\etata_\oc$-normal; $\tm$ is $\mathbf{o}$-normal iff $\Cbv{\tm}$ is $\mathbf{o}$-normal. \end{enumerate} \end{proposition} By \Cref{rmk:least-level-normal}, \Cref{prop:preservation-normal} can be seen as the fact that CbN\xspace and CbV\xspace translations preserve the least-level of a term, back and forth, when the least-level is infinite. Actually, this holds more in general for any value of the least-level. \paragraph{Preservation of levels.} We aim to show that least-level steps in CbN\xspace and CbV\xspace $\lambda$-calculi correspond to least-level steps in the bang calculus, back and forth, via CbN\xspace and CbV\xspace translations respectively (\Cref{prop:preservation-reduction}). This result is subtle, one of the main technical contributions of this paper. First, we extend the definition of translations to contexts. The \emph{CbN\xspace and CbV\xspace translations for contexts} are two functions $\Cbn{(\cdot)} \colon {\mathbf{C}Set} \xrightarrow{} \mathfrak lambda_\ocCtx$ and $\Cbv{(\cdot)} \colon {\mathbf{C}Set} \xrightarrow{} \mathfrak lambda_\ocCtx$, respectively, mapping contexts of the $\lambda$-calculus into contexts of the bang calculus: {\footnotesize ^{\bot}aselineskip} {}^{\bot}eta\etagin{center} \small {}^{\bot}eta\etagin{align*} \Cbn{\mathfrak hole{\cdot}} &= \mathfrak hole{\cdot} & \Cbv{\mathfrak hole{\cdot}} &= \mathfrak hole{\cdot} \\ \Cbn{(\mathfrak la{x}{\mathbf{c}})} &= \mathfrak la{x}{\Cbn{\mathbf{c}}} & \Cbv{(\mathfrak la{x}{\mathbf{c}})} &= \Bang{(\mathfrak la{x}{\Cbv{\mathbf{c}}})} \\ \Cbn{(\mathbf{o}(\tm_1, ... , \mathbf{c}, ... , \tm_k))} &= \mathbf{o}(\Cbn{\tm_1}, ... , \Cbn{\mathbf{c}}, ... , \Cbn{\tm_k}) &\quad \Cbv{(\mathbf{o}(\tm_1, ..., \mathbf{c}, ..., \tm_k))} &= \mathbf{o}(\Cbv{\tm_1}, ..., \Cbv{\mathbf{c}}, ..., \Cbv{\tm_k}) \\ \Cbn{(\mathbf{c}\tm)} &= \Cbn{\mathbf{c}}\,{\Bangp{{\Cbn{\tm}}}} & \Cbv{(\mathbf{c}\tm)} &= {}^{\bot}eta\etagin{cases} \mathbf{C} \, \Cbv{\tm} &\!\!\text{if } \Cbv{\mathbf{c}} = \Bang{\mathbf{C}} \\ (\Der{\Cbv{\mathbf{c}}}){\Cbv{\tm}} &\!\!\text{otherwise} \end{cases} \\ \Cbn{(\tm\mathbf{c})} &= \Cbn{\tm}\,{\Bangp{{\Cbn{\mathbf{c}}}}} \,; & \Cbv{(\tm\mathbf{c})} &= {}^{\bot}eta\etagin{cases} T \, \Cbv{\mathbf{c}} &\!\!\text{if } \Cbv{\tm} = \Bang{T} \\ (\Der{\Cbv{\tm}}){\Cbv{\mathbf{c}}} &\!\!\text{otherwise.} \end{cases} \end{align*} \end{center}} Note that CbN\xspace (\mathcal Resp CbV\xspace) level of a context defined in \Cref{subsect:level-cbn-cbv} increases by $1$ whenever the CbN\xspace (\mathcal Resp CbV\xspace) translation for contexts add $\oc$. 
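This can be checked mechanically on examples. Combining the previous sketches (the sample terms below are our own choice), one can compare the CbN\xspace and CbV\xspace least level of a term with the least level of its translation, anticipating the lemmas that follow.
\begin{verbatim}
-- Sample terms (our own choice): ((\z.z) x) y,  x ((\z.z) y),  oplus(x, omega).
sampleTerms :: [Term]
sampleTerms =
  [ App (App (Lam "z" (Var "z")) (Var "x")) (Var "y")
  , App (Var "x") (App (Lam "z" (Var "z")) (Var "y"))
  , Op "oplus" [Var "x", omega]
  ]

-- The least level of each sample coincides with the least level of its translation.
checkPreservation :: Bool
checkPreservation =
  and [ llCbn t == leastLevelB (cbn t) && llCbv t == leastLevelB (cbv t)
      | t <- sampleTerms ]
\end{verbatim}
On these samples \texttt{checkPreservation} evaluates to \texttt{True}.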
Thus, CbN\xspace and CbV\xspace translations preserve, back and forth, the level of a redex and the least-level of a term. Said differently, the level for CbN\xspace and CbV\xspace is defined in \Cref{subsect:level-cbn-cbv} so as to enable the preservation of level via CbN\xspace and CbV\xspace translations. {}^{\bot}eta\etagin{lemma}[Preservation of level via CbN\xspace translation] \label{lemma:preservation-level-cbn} {}^{\bot}eta\etagin{enumerate} \item\label{p:preservation-level-cbn-context} \emph{For contexts:} For any context $\mathbf{c} \in \mathcal{C}$, one has $\levCbn{\mathbf{c}} = \lev{\Cbn{\mathbf{c}}}$. \item\label{p:preservation-level-cbn-reduction} \emph{For reduction:} For any term $\tm \in \mathfrak lambda_\OpSet$: $\tm \mathfrak to{{}^{\bot}eta\etata}Ind{k} \tmTwo$ if and only if $\Cbn{\tm} \mathfrak to{\mathfrak tot}Ind{k} \Cbn{\tmTwo}$; and $\tm \mathfrak to{\op}At{k} \tmTwo$ if and only if $\Cbn{\tm} \mathfrak to{\op}At{k} \Cbn{\tmTwo}$, for any $\mathbf{o} \in \mathcal{O}$. \item\label{p:preservation-level-cbn-ll} \emph{For least-level of a term:} For any term $\tm \in \mathfrak lambda_\OpSet$, one has $\textsc {l}Cbn{\tm} = \textsc {l}Bang{\Cbn{\tm}}$. \end{enumerate} \end{lemma} {}^{\bot}eta\etagin{lemma}[Preservation of level via CbV\xspace translation] \label{lemma:preservation-level-cbv} {}^{\bot}eta\etagin{enumerate} \item\label{p:preservation-level-cbv-context} \emph{For contexts:} For any context $\mathbf{c} \in \mathcal{C}$, one has $\levCbv{\mathbf{c}} = \lev{\Cbv{\mathbf{c}}}$. \item\label{p:preservation-level-cbv-reduction} \emph{For reduction:} For any term $\tm \in \mathfrak lambda_\OpSet$: $\tm \mathfrak to{{}^{\bot}eta\etata}vAt{k} \tmTwo$ if and only if $\Cbv{\tm} \mathfrak to{\Derel}At{k} \mathfrak toBangInd{k}^= \Cbv{\tmTwo}$; and $\tm \mathfrak to{\op}At{k} \tmTwo$ if and only if $\Cbv{\tm} \mathfrak to{\op}At{k} \mathfrak toBangInd{k}^= \Cbv{\tmTwo}$, for any $\mathbf{o} \in \mathcal{O}$. \item\label{p:preservation-level-cbv-ll} \emph{For least-level of a term:} For any term $\tm \in \mathfrak lambda_\OpSet$, one has $\textsc {l}Cbv{\tm} = \textsc {l}Bang{\Cbv{\tm}}$. \end{enumerate} \end{lemma} From the two lemmas above it follows that CbN\xspace and CbV\xspace translations preserve least-level and internal reductions, back and forth. {}^{\bot}eta\etagin{proposition}[Preservation of least-level and internal reductions] \label{prop:preservation-reduction} Let $\tm$ be a \lam-term and $\mathbf{o} \in \mathcal{O}$. {}^{\bot}eta\etagin{enumerate} \item\label{p:preservation-reduction-ll-cbn} \emph{CbN\xspace least-level:} $\tm \textsc {l}redb \tmTwo$ iff $\Cbn{\tm} \textsc {l}redBang \Cbn{\tmTwo}$; and $\tm \mathrel{\textsc {l}redx{\op}} \tmTwo$ iff $\Cbn{\tm} \mathrel{\textsc {l}redx{\op}} \Cbn{\tmTwo}$. \item\label{p:preservation-reduction-nll-cbn} \emph{CbN\xspace internal:} $\tm \mathrel{\nllredx{{}^{\bot}eta\etata}} \tmTwo$ iff $\Cbn{\tm} \mathrel{\nllredx{\mathfrak tot}} \Cbn{\tmTwo}$; and $\tm \mathrel{\nllredx{\op}} \tmTwo$ iff $\Cbn{\tm} \mathrel{\nllredx{\op}} \Cbn{\tmTwo}$. \item\label{p:preservation-reduction-ll-cbv} \emph{CbV\xspace least-level:} $\tm \textsc {l}redbv \tmTwo$ iff $\Cbv{\tm} \textsc {l}redBang\textsc {l}redDer^= \Cbv{\tmTwo}$; and $\tm \mathrel{\textsc {l}redx{\op}} \tmTwo$ iff $\Cbv{\tm} \mathrel{\textsc {l}redx{\op}}\textsc {l}redDer^= \Cbv{\tmTwo}$. 
\item\label{p:preservation-reduction-nll-cbv} \emph{CbV\xspace internal:} $\tm \mathrel{\nllredx{{}^{\bot}eta\etata_v}} \tmTwo$ iff $\Cbv{\tm} \mathrel{\nllredx{\mathfrak tot}} \mathrel{\nllredx{\Derel}}^= \Cbv{\tmTwo}$; and $\tm \mathrel{\nllredx{\op}} \tmTwo$ iff $\Cbv{\tm} \mathrel{\nllredx{\op}} \mathrel{\nllredx{\Derel}}^= \Cbv{\tmTwo}$. \end{enumerate} \end{proposition} As a consequence, least-level reduction induces factorization in CbN\xspace and CbV\xspace $\lambda$-calculi as soon as it does in the bang calculus. And, by \Cref{prop:preservation-normal}, it is a normalizing strategy in CbN\xspace and CbV\xspace as soon as it is so in the bang calculus. {}^{\bot}eta\etagin{theorem}[Factorization and normalization by translation]\label{thm:translation} Let $\mathfrak lambda_\OpSet^{\mathtt{cbn}}\,=\, (\mathfrak lambda_\OpSet, \mathrel{\rightarrow}b \cup \mathrel{\rightarrow}x{\mathcal{O}})$ and $\mathfrak lambda_\OpSet^{\mathtt{cbv}}\,=\, (\mathfrak lambda_\OpSet,\ \mathrel{\rightarrow}bv \cup \mathrel{\rightarrow}x{\mathcal{O}})$. {}^{\bot}eta\etagin{enumerate} \item If $\mathfrak lambda_\ocOp$ admits least-level factorization $\F{\textsc {l}red}{\nllred}$, then so do $\mathfrak lambda_\OpSet^{\mathtt{cbn}}$ and $\mathfrak lambda_\OpSet^{\mathtt{cbv}}$. \item If $\mathfrak lambda_\ocOp$ admits least-level normalization, then so do $\mathfrak lambda_\OpSet^{\mathtt{cbn}}$ and $\mathfrak lambda_\OpSet^{\mathtt{cbv}}$. \end{enumerate} \end{theorem} A similar result also holds when extending the pure calculi with a rule $\mathcal Root\mathcal Rule$ other than $\mathcal Root{\mathbf{o}}$, as long as the translation preserves redexes. {}^{\bot}eta\etagin{remark}[Preservation of least-level and of normal forms.] Preservation of normal forms and of the least level is delicate. For instance, it does not hold with the definition of the CbV\xspace translation $\Cbv{(\cdot)}$ in \cite{GuerrieriManzonetto18,SantoPintoUustalu19}. There, the translation of $t = tThree tTwo \in \mathfrak lambda$ would be $\Cbv{t} = (\Der{\Bangp{\Cbv{tThree}}}) \Cbv{tTwo}$ and then \Cref{prop:preservation-normal} and \Cref{prop:preservation-reduction} would not hold: $\Der{\Bangp{\Cbv{tThree}}}$ is a ${}^{\bot}eta\etata_\oc$-redex in $\Cbv{t}$ (see \Cref{ex:identity}) and hence $\Cbv{t}$ would not be normal even though $t$ is, and $\textsc {l}Bang{\Cbv{t}} = 0$ even though $\textsc {l}Cbv{t} \neq 0$. This is why we defined two distinct cases when defining $\Cbv{(\cdot)}$ for applications, akin to \cite{BucciarelliKesnerRiosViso20}. \end{remark} \section{Least-level factorization via bang calculus} We have shown that least-level factorization in a bang calculus $\mathfrak lambda_\ocOp$ implies least-level factorization in the corresponding CbN\xspace and CbV\xspace calculi, via forth-and-back translation. The central question now is \emph{how to prove least-level factorization} for a bang calculus: the rest of the paper is devoted to that. \subsubsection{Overview.} \label{sec:overview} Let us overview our approach by considering $\mathcal{O}=\{\mathbf{o}\}$, and $\mathrel{\rightarrow} \,=\, \mathrel{\rightarrow}bb \cup \mathrel{\rightarrow}x{\mathbf{o}}$.
Since by definition $\textsc {l}red = \textsc {l}redbb \cup \textsc {l}redx{\mathbf{o}}$ (and $\nllred = \mathrel{\nllredx{{}^{\bot}eta\etata}}b \cup \nllredx{\mathbf{o}}$), \Cref{thm:modular} states that we can \emph{decompose} least-level factorization of $\mathrel{\rightarrow}$ in three modules: {}^{\bot}eta\etagin{enumerate} \item prove \textsc {l}-factorization of $\mathrel{\rightarrow}bb$, \textit{i.e.}\xspace ~~ $\mathrel{\rightarrow}bb^* ~\subseteq~ \textsc {l}redbb^* \cdot \mathrel{\nllredx{{}^{\bot}eta\etata}}b$ \item prove \textsc {l}-factorization of $\mathrel{\rightarrow}x{\mathbf{o}}$, \textit{i.e.}\xspace ~~ $\mathrel{\rightarrow}x{\mathbf{o}}^* ~\subseteq~ \textsc {l}redx{\mathbf{o}}^* \cdot \nllredx{\mathbf{o}}$ \item prove the two linear swaps of \Cref{thm:modular}. \end{enumerate} Please note that the least level for both $\textsc {l}redbb$ and $\textsc {l}redx{\mathbf{o}}$ is defined with respect to the redexes $\mathcal R=\mathcal R_{\mathfrak tot} \cup \mathcal R_{\mathbf{o}}$, so to have $\textsc {l}red = \textsc {l}redbb \cup \textsc {l}redx{\mathbf{o}}$. This addresses the issue we mentioned in \Cref{ex:issue}. Clearly, points 2. and 3. depend on the specific rule $\mathcal Root{\mathbf{o}}$. However, the beauty of a modular approach is that point 1. can be established in general: we do not need to know $\mathcal Root{\mathbf{o}}$, only the shape of its redexes $\mathcal R_{\mathbf{o}}$. In \Cref{sect:factorization} we provide a general result of \textsc {l}-factorization for $\mathrel{\rightarrow}bb$ (\Cref{thm:factorize-bang}). In fact, we shall show a bit more: the way of decomposing the study of factorization that we have sketched, can be applied to study least-level factorization of any reduction $\mathrel{\rightarrow} \,=\, \mathrel{\rightarrow}bb \cup \mathrel{\rightarrow}c$, as long as $\mathrel{\rightarrow}$ has a good\xspace least-level. Once (1.) is established (once and for all), to prove factorization of a reduction $\mathrel{\rightarrow}bb\cup \mathrel{\rightarrow}x{\mathbf{o}}$ we are only left with (2.) and (3.). In \Cref{sec:ll_modular} we show that the proof of the two linear swaps can be reduced to a single, simple test, involving only the $\mathcal Root{\mathbf{o}}$ step (\Cref{prop:test_ll}). In \Cref{sec:case_study}, we will illustrate how all elements play together on a concrete case, applying them to non-deterministic $\lam$-calculi. \subsection{Factorization of $\mathrel{\rightarrow}bb$ in a bang calculus} \label{sect:factorization} We prove that $\mathrel{\rightarrow}bb$-reduction \emph{factorizes} via least-level reduction (\Cref{thm:factorize-bang}). The result holds for a definition of $\textsc {l}redbb$ (as in \Cref{sec:ll}) where the set of redexes $\mathcal R$ is $\mathcal R_{\mathfrak tot} \cup \mathcal R_{\mathbf{o}}$---this generalization has essentially no cost, and allows us to use \Cref{thm:factorize-bang} as a module in the factorization of a larger reduction. We prove factorization via Takahashi's Parallel Reduction method \cite{Takahashi95}. We define a reflexive reduction $\mathrel{\Rightarrow}NotLlBang$ (called parallel internal ${}^{\bot}eta\etata_\oc$-reduction) which satisfies the conditions of \Cref{fact:fact}, \textit{i.e.}\xspace $ \mathrel{\Rightarrow}NotLlBang^* \,=\, \mathrel{\nllredx{\mathfrak tot}}^*$ and $\mathrel{\Rightarrow}NotLlBang \!\cdot\! \textsc {l}redBang \subseteq \textsc {l}redBang^* \!\cdot\! \mathrel{\Rightarrow}NotLlBang$. The tricky point is to prove $\mathrel{\Rightarrow}NotLlBang \!\cdot\! 
\textsc {l}redBang \subseteq \textsc {l}redBang^* \!\cdot\! \mathrel{\Rightarrow}NotLlBang$ We adapt the proof technique in \cite{AccattoliFaggianGuerrieri19}. All details are \mathfrak sLV{in \cite{bangLong}.}{in the appendix.} Here we just give the definition of $\mathrel{\Rightarrow}NotLlBang$. We first introduce $\mathrel{\Rightarrow}BangAt{n}$ (the parallel version of $\mathfrak to{\Derel}At{n}$), which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes at level at least $n$ (and $\mathrel{\Rightarrow}BangAt{\infty}$ does not reduce any ${}^{\bot}eta\etata_\oc$-redex: $T \mathrel{\Rightarrow}BangAt{\infty} TTwo$ implies $T = TTwo$). {}^{\bot}eta\etagin{center} \small {}^{\bot}eta\etagin{prooftree} \infer0{x \mathrel{\Rightarrow}BangAt{\infty} x} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangAt{n} T'} \infer1{\mathfrak la{x}{T} \mathrel{\Rightarrow}BangAt{n} \mathfrak la{x}{T'}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T \mathrel{\Rightarrow}BangAt{m} T'} \hypo{TTwo \mathrel{\Rightarrow}BangAt{n} TTwo'} \infer2{TTTwo \mathrel{\Rightarrow}BangAt{\min\{m,n\}} T'TTwo'} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangAt{n} T'} \infer1{\Bang{T} \mathrel{\Rightarrow}BangAt{n\!+\!1} \Bang{T'}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T \mathrel{\Rightarrow}BangAt{n} T'} \hypo{TTwo \mathrel{\Rightarrow}BangAt{m} TTwo'} \infer2{(\mathfrak la{x}{T})\Bang{TTwo} \mathrel{\Rightarrow}BangAt{0} T'\mathfrak sub{TTwo'\!}{x}} \end{prooftree} \end{center} The \emph{parallel internal ${}^{\bot}eta\etata_\oc$-reduction} $\mathrel{\Rightarrow}NotLlBang$ is the parallel version of $\mathrel{\nllredx{\mathfrak tot}}$, which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes that are not at minimal level. Formally, {}^{\bot}eta\etagin{center} \small $T \mathrel{\Rightarrow}NotLlBang TTwo$ \quad if $T \mathrel{\Rightarrow}BangAt{n} TTwo$ with $n = \infty$ or $n > \textsc {l}Bang{T}$. \end{center} {}^{\bot}eta\etagin{restatable}[Least-level factorization of $\mathrel{\rightarrow}bb$]{theorem}{factorizebang} \label{thm:factorize-bang} Assume $\mathrel{\rightarrow} =\mathrel{\rightarrow}bb \cup \mathrel{\rightarrow}x{\mathcal Rule}$ has good\xspace least-level in $\mathfrak lambda_\ocOp$. Then: $ T \mathfrak to{\mathfrak tot}^* TTwo \text{ implies } T \textsc {l}redBang^* \!\cdot\! \mathrel{\nllredx{\mathfrak tot}}^* TTwo.$ \end{restatable} {}^{\bot}eta\etagin{corollary}[Least-level factorization in the pure bang calculus] \label{cor:factorize-bang} In the pure bang calculus $(\mathfrak lambda_\oc, \mathfrak to{\mathfrak tot})$, if $T \mathfrak to{\mathfrak tot}^* TTwo$ then $T \textsc {l}redBang^* \!\cdot\! \mathrel{\nllredx{\mathfrak tot}}^* TTwo$. \end{corollary} \mathfrak sLV{}{ \subsubsection{Surface Reduction.} \emph{Surface reduction} $\sredbb$ (defined by Simpson in \cite{Simpson05}) is the reduction which only reduces a redex at level $0$ (any such redex). It fires redexes that are not inside boxes. It can be equivalently defined as the closure of $\mathcal Root{\mathfrak tot}$ under contexts $\mathbf{S}$ defined by the grammar $\mathbf{S} \Coloneqq \, \mathfrak hole{\cdot} \mid \mathfrak la{x}{\mathbf{S}} \mid \mathbf{S}T \mid T\mathbf{S} $. We write $T \sredTTwo$ for a step $\mathrel{\rightarrow}$ which is surface; otherwise, we write $T \nsred TTwo$. Clearly $\sred\subset \textsc {l}red$. 
\emph{Surface factorization} was already proven in \cite{Simpson05}: {}^{\bot}eta\etagin{center} $T \mathfrak to{\mathfrak tot}^* TTwo$ implies $T \mathrel{\sredx{{\mathfrak tot}}}^* \!\cdot\! \mathrel{\nsredx{{\mathfrak tot}}}^* TTwo$. \end{center} We obtain this result as a consequence of least-level factorization (\Cref{cor:factorize-bang}) and monotonicity (\Cref{prop:ll-properties}) of least-level reduction. } \subsection{Pure calculi and least-level normalization} Least-level factorization of $\mathrel{\rightarrow}bb$ implies in particular least-level factorization for $\mathrel{\rightarrow}b$ and $\mathrel{\rightarrow}bv$. As a consequence, least-level reduction is a normalizing strategy for all three pure calculi: the bang calculus, the CbN\xspace $\lam$-calculus, and the CbV\xspace $\lam$-calculus. \subsubsection{The pure bang calculus.}\label{sec:bang_strategy} $\textsc {l}redbb$ is a \textit{normalizing strategy} for $\mathrel{\rightarrow}bb$. Indeed, it satisfies all the ingredients of \Cref{prop:abs_normalization}. Since we have least-level factorization (\Cref{cor:factorize-bang}), same normal forms, and \emph{persistence} (\Cref{prop:ll-properties}), $\textsc {l}redbb$ is a \emph{complete strategy} for $\mathrel{\rightarrow}bb$: $\text{If } N \text{ is }\mathfrak tot\text{-normal and } M \mathrel{\rightarrow}bb^* N, \text{ then } M \textsc {l}redbb^* N.$ We already observed (\Cref{ex:llred2}) that the least-level reduction $\textsc {l}redbb$ is non-deterministic, because several redexes at least level may be available. Such non-determinism is however inessential, because $\textsc {l}redbb$ is \emph{uniformly normalizing}. {}^{\bot}eta\etagin{lemma}[Quasi-Diamond]\label{lemma:diamond} In $(\mathfrak lambda_\oc, \mathfrak to{\mathfrak tot})$, the reduction $\textsc {l}redbb $ is quasi-diamond (\Cref{fact:diamond}), and therefore uniformly normalizing. \end{lemma} Putting all the ingredients together, we have (by \Cref{prop:abs_normalization}): {}^{\bot}eta\etagin{restatable}[Least-level normalization]{theorem}{normalizebang} \label{thm:normalize-bang} In the pure bang calculus $\textsc {l}redbb$ is a normalizing strategy for $\mathrel{\rightarrow}bb$. \end{restatable} \Cref{thm:normalize-bang} means not only that if $T$ is ${}^{\bot}eta\etata_\oc$-normalizable then $T$ can reach its normal form by just performing least-level steps, but also that performing \emph{whatever} least-level steps eventually leads to the normal form, if any. \subsubsection{Pure CbV\xspace and CbN\xspace $\lam$-calculi.} By forth-and-back translation (\Cref{thm:translation}) the least-level factorization and normalization results for the pure bang calculus immediately transfer to the CbN\xspace and CbV\xspace setting. {}^{\bot}eta\etagin{theorem}[CbV\xspace and CbN\xspace least-level normalization] {}^{\bot}eta\etagin{itemize} \item CbN\xspace: In $(\mathfrak lambda, \mathrel{\rightarrow}b)$, $\textsc {l}redb$ is a normalizing strategy for $\mathrel{\rightarrow}b$. \item CbV\xspace: In $(\mathfrak lambda, \mathrel{\rightarrow}bv)$, $\textsc {l}redbv$ is a normalizing strategy for $\mathrel{\rightarrow}bv$. \end{itemize} \end{theorem} \subsection{Least-level Factorization, Modularly. }\label{sec:ll_modular} As anticipated at the beginning of this section, we can use \Cref{thm:factorize-bang} also as part of the proof of factorization for a more complex calculus.
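First, however, let us make the normalization results of the previous subsection concrete with a small interpreter. The Haskell sketch below is our own naive rendering of the least-level strategy (substitution is naive, so we assume that bound and free names do not clash); the set of root rules is a parameter, which will be convenient in the case study of the next section.
\begin{verbatim}
-- Naive substitution: we assume bound and free names do not clash.  Illustration only.
substB :: Name -> BTerm -> BTerm -> BTerm
substB x s (BVar y)   = if x == y then s else BVar y
substB x s (BLam y b) = if x == y then BLam y b else BLam y (substB x s b)
substB x s (BApp t u) = BApp (substB x s t) (substB x s u)
substB x s (Bang t)   = Bang (substB x s t)
substB x s (BOp o ts) = BOp o (map (substB x s) ts)

-- Root beta!-steps, the only rule of the pure bang calculus.
rootBang :: BTerm -> [BTerm]
rootBang (BApp (BLam x b) (Bang u)) = [substB x u b]
rootBang _                          = []

-- All one-step reducts for a given set of root rules, each paired with the
-- level (number of surrounding boxes and operators) of the fired redex.
stepsWith :: (BTerm -> [BTerm]) -> BTerm -> [(Int, BTerm)]
stepsWith root t = [ (0, t') | t' <- root t ] ++ inside t
  where
    inside (BVar _)   = []
    inside (BLam x b) = [ (k, BLam x b') | (k, b') <- stepsWith root b ]
    inside (BApp u v) = [ (k, BApp u' v) | (k, u') <- stepsWith root u ]
                     ++ [ (k, BApp u v') | (k, v') <- stepsWith root v ]
    inside (Bang u)   = [ (k + 1, Bang u') | (k, u') <- stepsWith root u ]
    inside (BOp o us) = [ (k + 1, BOp o (pre ++ u' : post))
                        | (i, u) <- zip [0 ..] us
                        , let (pre, _ : post) = splitAt i us
                        , (k, u') <- stepsWith root u ]

-- Fire one redex at minimal level and iterate (loops if there is no normal form).
llNormalizeWith :: (BTerm -> [BTerm]) -> BTerm -> BTerm
llNormalizeWith root t =
  case stepsWith root t of
    []    -> t
    steps -> let m = minimum (map fst steps)
             in  llNormalizeWith root (head [ t' | (k, t') <- steps, k == m ])
\end{verbatim}
By \Cref{thm:normalize-bang}, whenever the input term has a ${}^{\bot}eta\etata_\oc$-normal form, \texttt{llNormalizeWith rootBang} reaches it; on a term such as \texttt{cbn omega} it loops, since no normal form exists.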
We now introduce one more useful tool: a simple test to establish least-level factorization of a reduction $\mathrel{\rightarrow}bb \cup \mathrel{\rightarrow}x{\mathcal Rule} $ (where $\mathrel{\rightarrow}x{\mathcal Rule}$ is a new reduction added to $\mathrel{\rightarrow}bb$). We shall give an example of its use in \Cref{sec:case_study} (see the proof of \Cref{thm:ND_fact}). The test embodies \Cref{thm:modular}, and the fact that we already know (once and for all) that $\mathrel{\rightarrow}bb$ factorizes via $\textsc {l}redbb$. It turns out that the proof of the two linear swaps can be reduced to a single, simple test, which only involves the $\mathcal Root{\mathcal Rule}$ step. {}^{\bot}eta\etagin{restatable}[Test for {modular} least-level factorization]{proposition}{testll}\label{prop:test_ll} Let $\mathrel{\rightarrow}c$ be the contextual closure of a rule $\mathcal Root{\mathcal Rule}$, and assume $\mathrel{\rightarrow} \,=\,( \mathrel{\rightarrow}bb \cup \mathrel{\rightarrow}c)$ has a good\xspace least-level. Then $\mathrel{\rightarrow}$ factorizes via $\textsc {l}red \,=\, (\textsc {l}redbb \cup \textsc {l}redc)$ if the following hold: {}^{\bot}eta\etagin{enumerate} \item \emph{$\textsc {l}$-factorization of $\mathrel{\rightarrow}c$}: ~~ $\mathrel{\rightarrow}c^* ~\subseteq ~\textsc {l}redc^* \cdot \mathrel{\nllredx{\rho}}^*$ \item $\xred{\rsym}c$ is \emph{substitutive}: ~~ $R \xred{\rsym}c R' \text{ implies } R \subs x Q \xred{\rsym}c R'\subs x Q.$ \item \emph{Root linear swap}: ~~ $\mathrel{\nllredx{{}^{\bot}eta\etata}}b \cdot \xred{\rsym}c \ \subseteq \ \xred{\rsym}c \cdot\mathrel{\rightarrow}bb^* $. \end{enumerate} \end{restatable} Note that, as usual, at point (1.) the least level is defined w.r.t. $\mathcal R=\mathcal R_{!{}^{\bot}eta\etata}\cup \mathcal R_{\rho}$. \mathfrak sLV{}{ \paragraph{A modular test for CbN\xspace and CbV\xspace least-level factorization.} A modular test analogous to \Cref{prop:test_ll} holds also in CbN\xspace and CbV\xspace: simply replace in the statement $\mathfrak tot$ with ${}^{\bot}eta\etata$ (for CbN\xspace) or ${{}^{\bot}eta\etata_v}$ (for CbV\xspace). } \section{Case study: non-deterministic $\lam$-calculi}\label{sec:case_study} \newcommand{{\BangSet}_{\oplus}}{{\mathfrak lambda_\oc}_{\mathbf{o}lus}} To show how to use our framework, we apply the set of tools we have developed to our running example. We extend the bang calculus with a non-deterministic operator, and then consider $({\BangSet}_{\oplus}, \mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus})$, where $\mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus} \,=\, (\mathrel{\rightarrow}bb\cup\mathrel{\rightarrow}o)$, and $\mathrel{\rightarrow}o$ is the contextual closure of the (non-deterministic) rules: {}^{\bot}eta\etagin{equation} \mathbf{o}(P,Q)\mathcal Root{\mathbf{o}lus} P \quad \quad \mathbf{o}(P,Q) \mathcal Root{\mathbf{o}lus} Q \end{equation} \paragraph{First step: non-deterministic bang calculus.} We analyze ${\BangSet}_{\oplus}$. We use our modular test to prove least-level factorization for ${\BangSet}_{\oplus}$: if $T \mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus}^*U \text{ then } T \textsc {l}redx{{}^{\bot}eta\etata_\oc\mathbf{o}lus}^* \cdot \nllredx{{}^{\bot}eta\etata_\oc\mathbf{o}lus}^* U$.
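In the executable sketch above, this extension amounts to adding the two projections as root rules; representing $\mathbf{o}lus$ as a binary operator named \texttt{"oplus"} is our own convention.
\begin{verbatim}
-- Root steps of the non-deterministic bang calculus: beta! together with the
-- two projections of the binary operator "oplus".
rootOplus :: BTerm -> [BTerm]
rootOplus (BOp "oplus" [p, q]) = [p, q]
rootOplus t                    = rootBang t

-- llNormalizeWith rootOplus then implements one possible least-level run.
\end{verbatim}
Note that the choice among redexes at minimal level now matters: on the representation of $\mathbf{o}lus(x, \Delta\Bang{\Delta})$ one least-level step reaches the normal form $x$, while the other yields the diverging term $\Delta\Bang{\Delta}$. This is consistent with the completeness property discussed next: least-level reduction can reach the normal form whenever one exists, but not every least-level run does.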
By \Cref{prop:abs_normalization}, an immediate consequence of the factorization result is that the least-level strategy is \emph{complete}, \textit{i.e.}\xspace if $U$ is normal: $T \mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus}^*U$ implies $T \textsc {l}redx{{}^{\bot}eta\etata\mathbf{o}lus}^*U$. \mathfrak sLV{}{We can apply \Cref{prop:abs_normalization} for the same reasons as in \Cref{sec:bang_strategy} : $\mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus}$ and $\textsc {l}red_{\mathfrak tot\mathbf{o}lus}$ have the same normal forms, and moreover \emph{persistence} holds (by \Cref{prop:ll-properties}).} \paragraph{Second step: CbN\xspace and CbV\xspace non-deterministic calculi.} By translation, we have \textit{for free}, that the analogous results hold in $\mathfrak lambda_\oplus^{\mathtt{cbn}}$ and $\mathfrak lambda_\oplus^{\mathtt{cbv}}$, as defined in \Cref{ex:NDext}. So, least-level factorization holds for both calculi, and moreover {}^{\bot}eta\etagin{itemize} \item \emph{CbN\xspace completeness}: in $\mathfrak lambda_\oplus^{\mathtt{cbn}}$, if $tu$ is normal:~ $t \mathrel{\rightarrow}_{{}^{\bot}eta\etata\mathbf{o}lus}^*tu$ implies $t \textsc {l}redx{{}^{\bot}eta\etata\mathbf{o}lus}^*tu$. \item \emph{CbV\xspace completeness}: in $\mathfrak lambda_\oplus^{\mathtt{cbv}}$, if $tu$ is normal:~ $t \mathrel{\rightarrow}_{{{}^{\bot}eta\etata_v}\mathbf{o}lus}^*tu$ implies $t \textsc {l}redx{{{}^{\bot}eta\etata_v}\mathbf{o}lus}^*tu$. \end{itemize} \paragraph{What do we really need to prove?} The only result we need to prove is least-level factorization of $\mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus}$. Completeness then follows by \Cref{prop:abs_normalization} and the translations will automatically take care of transferring the results. To prove factorization of $\mathrel{\rightarrow}_{\mathfrak tot\mathbf{o}lus}$, most of the work is done, as $\textsc {l}$-factorization of $\mathrel{\rightarrow}bb$ is already established; we then use our test (\Cref{prop:test_ll}) to extend $\mathrel{\rightarrow}bb$ with $\mathrel{\rightarrow}o$. \mathfrak sLV{}{ To expose how neat and compact this is, we give explicitly the full proof---it turns out to be just a few lines. } The only ingredients we need are substitutivity of $\mathbf{o}lus$ (which is an obvious property), and the following easy lemma. {}^{\bot}eta\etagin{lemma}[Roots]\label{l:oplus_basic} Let $\rho\in \{\mathfrak tot,\mathbf{o}lus \}$. If $T \mathrel{\nllredx{\rho}} P \mapsto_{\mathbf{o}lus} TTwo$ then $T \mapsto_{\mathbf{o}lus} \cdot \mathrel{\rightarrow}_{\rho} ^= TTwo$. \end{lemma} \mathfrak sLV{}{{}^{\bot}eta\etagin{proof} Let $P=\mathbf{o}lus (P_1,P_2)$, and consider $P \mapsto_{\mathbf{o}lus} P_1 = TTwo$ (the case $P \mapsto_{\mathbf{o}lus} P_2 = TTwo$ is similar). Assume $T\nllredx{\mathcal Rule} P$. Hence $T = \mathbf{o}lus(T_1,T_2)$, and either $T_1\mathfrak to{\Rule} P_1$ with $T_2 = P_2$, or $T_2\mathfrak to{\Rule} P_2$ with $T_1 = P_1$. Therefore, $\mathbf{o}lus( T_1,T_2) \mapsto_{\mathbf{o}lus} T_1 \mathfrak to{\Rule}^= P_1 $. \qed \end{proof}} {}^{\bot}eta\etagin{theorem}[Least-level factorization]\label{thm:ND_fact} {}^{\bot}eta\etagin{enumerate} \item\label{thm:ND_fact-bang} In $({\BangSet}_{\oplus}, \xrightarrow{})$, $\F{\textsc {l}red}{\nllred}$ holds for $\mathrel{\rightarrow} \,=\, \mathrel{\rightarrow}o \cup \mathrel{\rightarrow}bb$. 
\item Least-level factorization holds in $(\mathfrak lambda_\oplus^{\mathtt{cbn}},\mathrel{\rightarrow}o \cup \mathrel{\rightarrow}b)$, and in $(\mathfrak lambda_\oplus^{\mathtt{cbv}}, \mathrel{\rightarrow}o \cup \mathrel{\rightarrow}bv)$. \end{enumerate} \end{theorem} {}^{\bot}eta\etagin{proof} {}^{\bot}eta\etagin{enumerate} \item It is enough to verify the hypotheses of \Cref{prop:test_ll}. \mathfrak sLV{}{ {}^{\bot}eta\etagin{enumerate} \item \emph{Least-level factorization of $\mathrel{\rightarrow}o$}:~ $\F{\textsc {l}redo}{\nllredx{\oplus}}$. By \Cref{l:oplus_basic} (with $\rho=\mathbf{o}lus$) we obtain $\nllredx{\oplus} \cdot \textsc {l}redo ~\subseteq ~ \textsc {l}redo \cdot \mathrel{\rightarrow}o^= $ \mathfrak sLV{}{(use \Cref{l:ll_swaps})}, \textit{i.e.}\xspace $\nllredx{\oplus}$ strongly postpones after $\textsc {l}redo$. We conclude by \Cref{l:SP}. \item \emph{Substitutivity}: ~ {$\mapsto_{\mathbf{o}lus}$ is substitutive.} Indeed, $$\mathbf{o}lus (P_1,P_2)\subs x Q = \mathbf{o}lus (P_1 \subs x Q, P_2\subs x Q)~ \mapsto_{\mathbf{o}lus}~ P_i\subs x Q.$$ \item \emph{Root linear swap}: {$\mathrel{\nllredx{{}^{\bot}eta\etata}}b \!\cdot\! \mapsto_{\mathbf{o}lus}\ \subseteq\ \mapsto_{\mathbf{o}lus} \!\cdot\! \mathrel{\rightarrow}_{{}^{\bot}eta\etata} ^=$.} This is \Cref{l:oplus_basic} (with $\rho=\mathfrak tot$). \end{enumerate} } \item It follows from \Cref{thm:translation} and \Cref{thm:ND_fact}.\ref{thm:ND_fact-bang}. \qed \end{enumerate} \end{proof} \mathfrak sLV{}{ Completeness is the best that can be achieved in these calculi, because of the true non-determinism of $\mathrel{\rightarrow}o$ and hence of least-level reduction and of any other complete strategy for $\xrightarrow{}$. For instance, in $\mathfrak lambda_\oplus^{\mathtt{cbn}}$ there is no normalizing strategy for $\mathbf{o}lus(x, \delta\delta)$ in the sense of \Cref{def:strategy}, since $x \xbackredx{\textsc {l}}{\mathbf{o}lus}\mathbf{o}lus(x, \delta\delta) \textsc {l}redx{\mathbf{o}lus} \delta\delta \textsc {l}redx{{}^{\bot}eta\etata} \dots$\,. } \vspace*{-6pt} \section{Conclusions and Related Work} The combination of translations (\Cref{thm:translation}), $\textsc {l}$-factorization for $\mathrel{\rightarrow}bb$ (\Cref{thm:factorize-bang}), and modularity (\Cref{prop:test_ll}) gives us a powerful method to analyze factorization in various $\lam$-calculi that \emph{extend} the pure CbN\xspace and CbV\xspace calculi. The main novelty is transferring the results from one calculus to another via translations. We chose to study least-level reduction as a normalizing strategy because it is natural to define in the bang calculus, and it is easier to transfer via translations to CbN\xspace and CbV\xspace calculi than leftmost-outermost. Since leftmost-outermost is the most common normalizing strategy in CbN\xspace, it is worth noticing that least-level normalization implies leftmost-outermost normalization (and vice-versa). This is an easy consequence of the---easy to check---fact that their union is quasi-diamond (and hence, again, uniformly normalizing). A proof of least-level normalization thus also yields a proof of leftmost-outermost normalization. \paragraph{Related Work.} Many calculi inspired by linear logic subsume CbN\xspace and CbV\xspace, such as \cite{BentonBPH93,BentonWadler96,RonchiRoversi97,MaraistOderskyTurnerWadler99} (other than the ones already cited). We chose the bang calculus for its simplicity, which eases the analysis of the CbN\xspace and CbV\xspace translations.
Least-level reduction is studied for linear-logic-based calculi in \cite{Terui07,Accattoli12} and for linear logic proof-nets in \cite{CarvalhoPF11,PaganiTranquilli17}. Least-level factorization and normalization for the pure CbN\xspace $\lambda$-calculus is studied in \cite{AccattoliFaggianGuerrieri19}. {}^{\bot}ibliographystyle{splncs04} {}^{\bot}ibliography{biblio} \mathfrak sLV{}{ \appendix \section*{Technical Appendix} \section{\Cref{sect:lambda-calculi}: General properties of the contextual closure } We start by recalling a basic but key property of contextual closure. If a step $\mathrel{\rightarrow}c$ is obtained by closure under \emph{non-empty context} of a rule $\xred{\rsym}c$, then it preserve the shape of the term. {We say that $T$ and $T'$ have \emph{the same shape} if both terms are an application (resp. an abstraction, an atom, a term of shape $!P$ or $\mathbf{o} {(P_1, \dots,P_k)}$ ).} {}^{\bot}eta\etagin{fact}[Shape preservation]\label{fact:shape} Assume $T=\textsf {C}\hole{R}\mathrel{\rightarrow} \textsf {C}\hole {R'}=T'$ and that the context $\textsf {C}$ is \emph{non-empty}. Then $T$ and $T'$ have the same shape. \end{fact} \subsection{Surface Reduction.}Reduction at level $0$ has a key role -- we call reduction which only fires a redex at level $0$ \emph{surface reduction} and write $\sred$. Surface reduction (already defined by Simpson in \cite{Simpson05}); it only fires redexes that are not inside a box, or inside the scope of an operator. It can be equivalently defined as the closure of a rule $\mathcal Root{}$ under contexts $\mathbf{S}$ defined by the grammar $\mathbf{S} \Coloneqq \, \mathfrak hole{\cdot} \mid \mathfrak la{x}{\mathbf{S}} \mid \mathbf{S}T \mid T\mathbf{S} $. We write $T \sredTTwo$ for a step $\mathrel{\rightarrow}$ which is surface; otherwise, we write $T \nsred TTwo$. Clearly {}^{\bot}eta\etagin{fact} $ \sred \subset \textsc {l}red$ and $\nllred \subset \nsred$, because a reduction at level $0$ is surely least-level. \end{fact} Note that a root steps $\mathcal Root{}$ is both a \emph{least level} and a \emph{surface} step. \subsection{Shape preservation for internal steps} Let $\mathrel{\rightarrow}$ be the contextual closure of some rules. \Cref{fact:shape} implies that $\nllred$ and $ \nsred$ steps always preserve the shape of terms. {}^{\bot}eta\etagin{fact}[Internal Steps]\label{fact:isteps} By \Cref{fact:shape}, $\nsred$ and $\nllred$ preserve the shapes of terms. Moreover, the following hold for $\nsred$, and hence for $\nllred$ (recall that $\nllred \subset \nsred$): {}^{\bot}eta\etagin{enumerate} \item There is no $T$ such that $T \nsred x$, for any variable $x$; \item $T \nsred \mathbf{o}(U_1,...,U_k)$ implies $T = \mathbf{o}(T_1,...,T_k)$, and there exists $1\leq j \leq k$ such that $T_j\mathrel{\rightarrow} U_j$ ($T_i=U_i$ for $i\not=j$). \item $T\nsred ! U_1$ implies $T = ! T_1$ and $T_1\mathrel{\rightarrow} U_1$. \item $T\nsred \lam x. U_1$ implies $T = \lam x. T_1$ and $T_1\nsred U_1$ \item $T\nsred U_1U_2$ implies $T = T_1T_2$, with either (i) $T_1\nsred U_1$ (and $T_2=U_2$), or (ii) $T_2\nsred U_2$ (and $T_1=U_1$). Moreover, $T_1$ and $U_1$ have the same shape, and so $T_2$ and $U_2$. \end{enumerate} \end{fact} {}^{\bot}eta\etagin{corollary}\label{cor:bo_redex}Assume $T\nsred S$ {}^{\bot}eta\etagin{itemize} \item $T$ is a $\mathfrak tot$-redex iff $S$ is. \item $T$ is an $\mathbf{o}$-redex iff $S$ is. 
\end{itemize} \end{corollary} {}^{\bot}eta\etagin{example}If $T\mathrel{\rightarrow} S$ and $S$ is a $\mathfrak tot$-redex, $T$ does not need to be a $\mathfrak tot$-redex. Consider $((\lam x.x)!(\lam x. P))!z \mathrel{\rightarrow}bb (\lam x. P) !z$. In the following example, $T$ contains no $\mathfrak tot$-redex: $\mathbf{o}(\lam x. P, z)!z \mathrel{\rightarrow}x{\mathbf{o}} (\lam x. P) !z$. \end{example} \input{embeddingNew-proofs} \section{Omitted proofs of \Cref{sec:ll_def}} In this section, let $\mathrel{\rightarrow}$ be the contextual closure of a set of rules, and $\mathcal R$ the set of all its redexes. {}^{\bot}eta\etagin{definition}[Redex-preserving] $\mathrel{\rightarrow}$ is said \textbf{redex-preserving} if $T\nsred T'$ implies that $T$ is a redex if and only if $T'$ is a redex. \end{definition} By \Cref{cor:bo_redex}, $\mathrel{\rightarrow}bb\cup \mathfrak to{\op}$ is redex preserving. {}^{\bot}eta\etagin{lemma}[A sufficient condition for good least-level]\label{lem:good_redex} If $\mathrel{\rightarrow}$ is redex-preserving, then it has a good least-level. \end{lemma} {}^{\bot}eta\etagin{proof}Given a redex-preserving reduction $\mathrel{\rightarrow}$, we prove {}^{\bot}eta\etagin{enumerate} \item\label{p:ll-properties-monotone-bang} \emph{Monotonocity:} $T \mathrel{\rightarrow} S$ implies $\textsc {l}ev{T} \leq \textsc {l}ev{S}$. \item\label{p:ll-properties-invariance-bang} \emph{Internal invariance:} $T \nllred S$ implies $\textsc {l}ev T = \textsc {l}ev{S}$. \end{enumerate} In both cases, the proof is by induction on $T \in \mathfrak lambda_\oc$. {}^{\bot}eta\etagin{enumerate} \item \emph{Monotonicity.} Assume $T\mathrel{\rightarrow} S$. {}^{\bot}eta\etagin{enumerate} \item\label{p:ll-properties-monotone-bang-root} If $T$ is a redex, then $\textsc {l}ev{T}=0 \leq \textsc {l}ev{S}$. \item If $T$ is not a redex, and $S$ is a redex, then $T\sred S$ (by the assumption that $\mathrel{\rightarrow}$ is redex-preserving), and so $\textsc {l}ev T=0=\textsc {l}ev S$. \item If neither $T$ nor $S$ is a redex, we have the following cases (note that cases $T=(\lam x.P) \Bang{Q}$ and $T=\mathbf{o}(P_1, \dots, P_k)$ are included in \,=\,ref{p:ll-properties-monotone-bang-root}). {}^{\bot}eta\etagin{itemize} \item $T=\lam x.P \mathrel{\rightarrow} \lam x.P'=S$. Then $P\mathrel{\rightarrow} {P'}$, and we conclude by \emph{i.h.}\xspace, because $\textsc {l}ev {\lam x.P}=\textsc {l}ev P \leq \textsc {l}ev P' =\textsc {l}ev {\lam x.P'}$ \item $T=!P \mathrel{\rightarrow} !P'=S$. Then $P\mathrel{\rightarrow} P'$, and we conclude by \emph{i.h.}\xspace, because $\textsc {l}ev P \leq \textsc {l}ev {P'} $ implies $\textsc {l}ev P \leq \textsc {l}ev { !P'}$. \item $T=PQ\mathrel{\rightarrow} S$. Since $ T $ is not a redex, either (i) $P\mathrel{\rightarrow} P'$ and $PQ\mathrel{\rightarrow} P'Q=S$, or (ii) $Q\mathrel{\rightarrow} Q'$ and $PQ\mathrel{\rightarrow} PQ'=S$. In case (i), by \emph{i.h.}\xspace, $\textsc {l}ev P \leq \textsc {l}ev {P'}$ and $\textsc {l}ev {P'Q}=\min \{\textsc {l}ev P, \textsc {l}ev Q\} \leq \min \{\textsc {l}ev P, \textsc {l}ev Q\} =\textsc {l}ev {P'Q} $. Case (ii) is similar. Observe that in the definition of least level we use the fact that neither $T$ nor $S$ is a redex. \end{itemize} \end{enumerate} \item \emph{Internal invariance.} Assume $T\nllred S$. {}^{\bot}eta\etagin{enumerate} \item If $T$ is a redex, then $S$ is a redex, because $\nllred \subset \nsred$. Therefore $\textsc {l}ev{T}=0 = \textsc {l}ev{S}$. 
\item Otherwise, if neither $T$ nor $S$ is a redex, and we have the following cases. {}^{\bot}eta\etagin{itemize} \item $!T_1\nllred ! S_1$ and $T_1\nllred S_1$. We conclude by \emph{i.h.}\xspace. \item $\lam x. T_1 \nllred \lam x. S_1$ and $T_1\nllred S_1$. We conclude by \emph{i.h.}\xspace. \item $ T_1T_2\nllred S$ and either (i) $T_1\mathrel{\rightarrow}_{:k} S_1$ (and $S=S_1T_2$), or (ii) $T_2\mathrel{\rightarrow}_{:k} S_2$ (and $S=T_1S_2$). We have $k>\textsc {l}ev{T_1T_2}=\min\{\textsc {l}ev{T_1}, \textsc {l}ev{T_2}\}$. We examine case $T_1\mathrel{\rightarrow}_{:k} S_1$ (case (ii) is similar). {}^{\bot}eta\etagin{itemize} \item If $\textsc {l}ev {T_1}\leq \textsc {l}ev{T_2}$, then $T_1\nllred S_1$, and by \emph{i.h.}\xspace $\textsc {l}ev {T_1}= \textsc {l}ev{S_1}$. Hence $\textsc {l}ev {S_1T_2}= \textsc {l}ev{S_1}=\textsc {l}ev{T_1}=\textsc {l}ev{T_1T_2} $ \item If $\textsc {l}ev {T_1}> \textsc {l}ev{T_2}$, since $\textsc {l}ev{S_1} \geq \textsc {l}ev {T_1}$ (by monotonicity), then $\textsc {l}ev{S_1T_2}=\textsc {l}ev{T_2}=\textsc {l}ev{T_1T_2}$. \end{itemize} \end{itemize} Note that in the definition of least level, we use the fact that neither $T_1T_2$ nor $S_1S_2$ is a redex. \qed \end{enumerate} \end{enumerate} \end{proof} \Cref{lem:good_redex} and \Cref{cor:bo_redex} give \textsc {l}properties* \input{embeddingNew-proofs} \section{Omitted proofs and lemmas of \Cref{sect:factorization}} \paragraph{Factorization.} Our proof of factorization follows and generalizes the approach used in \cite{AccattoliFaggianGuerrieri19} for the pure CbN\xspace $\lambda$-calculus, in particular the following characterization. {}^{\bot}eta\etagin{proposition}[Abstract factorization \cite{AccattoliFaggianGuerrieri19}] \label{prop:abstract-factorize} Let $\mathfrak to{} \ = \ \essred \cup \nessred$ be a reduction over a set $\mathcal{A}$. Suppose that there are reductions $\mathrel{\Rightarrow}$ and $\mathrel{\Rightarrow}NotEss$ such that: {}^{\bot}eta\etagin{itemize} \item \emph{Macro:} $\nessred \ \subseteq \ \mathrel{\Rightarrow}NotEss \ \subseteq \nessred^*$; \item \emph{Merge:} For any $t, tTwo \in \mathcal{A}$, if $t \mathrel{\Rightarrow}NotEss \!\cdot\! \essred tTwo$ then $t \mathrel{\Rightarrow} tTwo$; \item \emph{Split:} For any $t, tTwo \in \mathcal{A}$, if $t \mathrel{\Rightarrow} tTwo$ then $t \essred^* \!\cdot\! \mathrel{\Rightarrow}NotEss tTwo$. \end{itemize} Then, $(\mathcal{A},\mathfrak to{})$ $\textsc{e}$-factorize: for any $t, tTwo \in \mathcal{A}$, if $t \mathfrak to{}^* tTwo$ then $t \essred^* \!\cdot\! \nessred^* tTwo$. \end{proposition} Our goal is then to apply \Cref{prop:abstract-factorize} to the bang calculus, where $\mathfrak to{} \ = \ \mathfrak to{\mathfrak tot}$ and $\essred \ = \ \textsc {l}redBang$ and $\nessred \ = \ \mathrel{\nllredx{\mathfrak tot}}$. So, we have to identify reductions $\mathrel{\Rightarrow}$ and $\mathrel{\Rightarrow}NotEss$ in the bang calculus such that the properties macros, merge and split hold. The natural solution is to take: {}^{\bot}eta\etagin{itemize} \item $\mathrel{\Rightarrow} \ = \ \mathrel{\Rightarrow}Bang$, the parallel version of $\mathfrak to{\mathfrak tot}$, which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes; \item $\mathrel{\Rightarrow}NotEss \ = \ \mathrel{\Rightarrow}NotLlBang$, the parallel version of $\mathrel{\nllredx{\mathfrak tot}}$, which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes that are not at minimal level. 
\end{itemize} Formally, \emph{parallel ${}^{\bot}eta\etata_\oc$-reduction} $\mathrel{\Rightarrow}Bang$ is defined by the following rules: {}^{\bot}eta\etagin{center} \small {}^{\bot}eta\etagin{prooftree} \infer0{x \mathrel{\Rightarrow}Bang x} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}Bang T'} \infer1{\mathfrak la{x}{T} \mathrel{\Rightarrow}Bang \mathfrak la{x}{T'}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}Bang T'} \hypo{TTwo \mathrel{\Rightarrow}Bang TTwo'} \infer2{TTTwo \mathrel{\Rightarrow}Bang T'TTwo'} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}Bang T'} \infer1{\Bang{T} \mathrel{\Rightarrow}Bang \Bang{T'}} \end{prooftree} \\[5pt] {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}Bang T'} \hypo{TTwo \mathrel{\Rightarrow}Bang TTwo'} \infer2{(\mathfrak la{x}{T})\Bang{TTwo} \mathrel{\Rightarrow}Bang T'\mathfrak sub{TTwo'\!}{x}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T_1 \mathrel{\Rightarrow}Bang T_1'} \hypo{\overset{k \in {\mathbb N}}{\dots}} \hypo{T_k \mathrel{\Rightarrow}Bang T_k'} \infer3{\mathbf{o}(T_1, \dots, T_k) \mathrel{\Rightarrow}Bang \mathbf{o}(T_1', \dots, T_k')} \end{prooftree} \end{center} To define $\mathrel{\Rightarrow}NotLlBang$, we first introduce $\mathrel{\Rightarrow}BangAt{n}$ (the parallel version of $\mathfrak to{\Derel}At{n}$), which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes at level at least $n$ (and $\mathrel{\Rightarrow}BangAt{\infty}$ does not reduce~any~${}^{\bot}eta\etata_\oc$-redex). {}^{\bot}eta\etagin{center} \small {}^{\bot}eta\etagin{prooftree} \infer0{x \mathrel{\Rightarrow}BangAt{\infty} x} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangAt{n} T'} \infer1{\mathfrak la{x}{T} \mathrel{\Rightarrow}BangAt{n} \mathfrak la{x}{T'}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T \mathrel{\Rightarrow}BangAt{m} T'} \hypo{TTwo \mathrel{\Rightarrow}BangAt{n} TTwo'} \infer2{TTTwo \mathrel{\Rightarrow}BangAt{\min\{m,n\}} T'TTwo'} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangAt{n} T'} \infer1{\Bang{T} \mathrel{\Rightarrow}BangAt{n\!+\!1} \Bang{T'}} \end{prooftree} \\[5pt] {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T \mathrel{\Rightarrow}BangAt{n} T'} \hypo{TTwo \mathrel{\Rightarrow}BangAt{m} TTwo'} \infer2{(\mathfrak la{x}{T})\Bang{TTwo} \mathrel{\Rightarrow}BangAt{0} T'\mathfrak sub{TTwo'\!}{x}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T_1 \mathrel{\Rightarrow}BangAt{n_1} T_1'} \hypo{\overset{k \in {\mathbb N}}{\dots}} \hypo{T_k \mathrel{\Rightarrow}BangAt{n_k} T_k'} \infer3{\mathbf{o}(T_1, \dots, T_k) \mathrel{\Rightarrow}BangAt{1\!+\!\min\{n_1, \dots, n_k\}} \mathbf{o}(T_1', \dots, T_k')} \end{prooftree} \end{center} Note that $T \mathrel{\Rightarrow}Bang TTwo$ if and only if $T \mathrel{\Rightarrow}BangAt{n} TTwo$ for some $n \in {\mathbb N}$; and $T \mathrel{\Rightarrow}BangAt{\infty} TTwo$ implies $T = TTwo$. The \emph{parallel internal ${}^{\bot}eta\etata_\oc$-reduction} $\mathrel{\Rightarrow}NotLlBang$ is then defined as: {}^{\bot}eta\etagin{center} \small $T \mathrel{\Rightarrow}NotLlBang TTwo$ \quad if $T \mathrel{\Rightarrow}BangAt{n} TTwo$ with $n = \infty$ or $n > \textsc {l}Bang{T}$. 
\end{center} Clearly, $\mathrel{\Rightarrow}Bang$ and $\mathrel{\Rightarrow}NotLlBang$ are reflexive, and $\mathrel{\nllredx{\mathfrak tot}} \ \subseteq \ \mathrel{\Rightarrow}NotLlBang \ \subseteq \ \mathrel{\nllredx{\mathfrak tot}}^*$ (macro condition in \Cref{prop:abstract-factorize}). To prove the merge property, we first prove a refined version of it ``by level''. {}^{\bot}eta\etagin{lemma}[Merge] \label{lemma:merge} In the bang calculus $(\mathfrak lambda_\oc, \mathfrak to{\mathfrak tot})$: {}^{\bot}eta\etagin{enumerate} \item\label{p:merge-level} \emph{Merge by level:} If $T \mathrel{\Rightarrow}BangAt{n} TThree \mathfrak to{\Derel}At{m} TTwo$ with $n > m$, then $T \mathrel{\Rightarrow}Bang TTwo$. \item\label{p:merge-ll} \emph{Merge for least-level:} If $T \mathrel{\Rightarrow}NotLlBang \!\cdot\! \textsc {l}redBang TTwo$, then $T \mathrel{\Rightarrow}Bang TTwo$. \end{enumerate} \end{lemma} {}^{\bot}eta\etagin{proof} {}^{\bot}eta\etagin{enumerate} \item By induction on the definition of $T \mathrel{\Rightarrow}BangAt{n} TThree$. Consider the last rule of the derivation of $T \mathrel{\Rightarrow}BangAt{n} TThree$. It cannot conclude $T = (\mathfrak la{x}{T_0})\Bang{T_1} \mathrel{\Rightarrow}BangAt{0} T_0 \mathfrak sub{T_1}{x} = TThree$ because otherwise $n = 0$, which contradicts the hypothesis $n > m \in {\mathbb N}$. Therefore, the only cases are: {}^{\bot}eta\etagin{itemize} \item \emph{Variable}: $T = x \mathrel{\Rightarrow}BangAt{\infty} x = TThree$. Then, there is no $TTwo$ such that $TThree \mathfrak to{\Derel}At{m} TTwo$ for any $m \in {\mathbb N}$. \item \emph{Abstraction}: $T = \mathfrak la{x}T' \mathrel{\Rightarrow}BangAt{n} \mathfrak la{x} TThree' = TThree$ because $T' \mathrel{\Rightarrow}BangAt{n} TThree'$. According to definition of $TThree \mathfrak to{\Derel}At{m} TTwo$, by necessity $TTwo = \mathfrak la{x}{TTwo'}$ with $TThree' \mathfrak to{\Derel}At{m} TTwo'$. By \textit{i.h.}\xspace, $T' \mathrel{\Rightarrow}Bang TTwo'$, thus {}^{\bot}eta\etagin{align*} { {}^{\bot}eta\etagin{prooftree} \hypo{T' \mathrel{\Rightarrow}Bang TTwo'} \infer1{T = \mathfrak la{x}{T'} \mathrel{\Rightarrow}Bang \mathfrak la{x}{TTwo'} = TTwo} \end{prooftree}} \,. \end{align*} \item \emph{Box}: $T = \Bang{T'} \mathrel{\Rightarrow}BangAt{n} \Bang{TThree'} = TThree$ because $T' \mathrel{\Rightarrow}BangAt{n\!-\!1} TThree'$. According to the definition of $TThree \mathfrak to{\Derel}At{m} TTwo$, by necessity $TTwo = \Bang{TTwo'}$ with $TThree' \mathfrak to{\Derel}At{m} TTwo'$. By \textit{i.h.}\xspace, $T' \mathrel{\Rightarrow}Bang TTwo'$, thus {}^{\bot}eta\etagin{align*} { {}^{\bot}eta\etagin{prooftree} \hypo{T' \mathrel{\Rightarrow}Bang TTwo'} \infer1{T = \Bang{T'} \mathrel{\Rightarrow}Bang \Bang{TTwo'} = TTwo} \end{prooftree}} \,. \end{align*} \item \emph{Application}: {}^{\bot}eta\etagin{align*} { {}^{\bot}eta\etagin{prooftree} \hypo{T_0 \mathrel{\Rightarrow}BangAt{n_0} TThree_0} \hypo{T_1 \mathrel{\Rightarrow}BangAt{n_1} TThree_1} \infer2{T = T_0 T_1 \mathrel{\Rightarrow}BangAt{n} TThree_0 TThree_1 = TThree} \end{prooftree} } \end{align*} where $n = \min\{n_0,n_1\}$. 
According to the definition of $TThree \mathfrak to{\Derel}At{m} TTwo$, there are the following sub-cases: {}^{\bot}eta\etagin{enumerate} \item $TThree = TThree_0TThree_1 \mathfrak to{\Derel}At{m} TTwo_0TThree_1 = TTwo$ with $TThree_0 \mathfrak to{\Derel}At{m} TTwo_0$; since $m < n \leq n_0$, by \textit{i.h.}\xspace applied to $T_0 \mathrel{\Rightarrow}BangAt{n_0} TThree_0 \mathfrak to{\Derel}At{m} TTwo_0$, we have $T_0 \mathrel{\Rightarrow}Bang TTwo_0$, and so (as $\mathrel{\Rightarrow}BangAt{n_1} \,\subseteq\, \mathrel{\Rightarrow}Bang$) {}^{\bot}eta\etagin{align*} {{}^{\bot}eta\etagin{prooftree} \hypo{T_0 \mathrel{\Rightarrow}Bang TTwo_0} \hypo{T_1 \mathrel{\Rightarrow}Bang TThree_1} \infer2{T = T_0T_1 \mathrel{\Rightarrow}Bang TTwo_0TThree_1 = TTwo} \end{prooftree}} \,; \end{align*} \item $TThree = TThree_0TThree_1 \mathfrak to{\Derel}At{m} TTwo_0TThree_1 = TTwo$ with $TThree_1 \mathfrak to{\Derel}At{m} TTwo_1$; analogous to the previous sub-case. \item $TThree = (\mathfrak la{x}{TThree_0'})\Bang{TThree_1'} \mathfrak to{\Derel}At{0} TThree_0'\mathfrak sub{TThree_1'}{x} = TTwo$ with $TThree_0 = \mathfrak la{x}TThree_0'$ and $TThree_1 = \Bang{TThree_1'}$ and $m = 0$; as $0 < n \leq n_0,n_1$ then, according to the definition of $T \mathrel{\Rightarrow}BangAt{n} (\mathfrak la{x}{TThree_0'})\Bang{TThree_1'} = TThree$, {}^{\bot}eta\etagin{align*} { {}^{\bot}eta\etagin{prooftree} \hypo{T_0 \mathrel{\Rightarrow}BangAt{n_0} TThree_0'} \infer1{\mathfrak la{x}T_0 \mathrel{\Rightarrow}BangAt{n_0} \mathfrak la{x}TThree_0'} \hypo{T_1 \mathrel{\Rightarrow}BangAt{n_1\!-\!1} TThree_1'} \infer1{\Bang{T_1} \mathrel{\Rightarrow}BangAt{n_1}\Bang{TThree_1'}} \infer2{T = (\mathfrak la{x}T_0)T_1 \mathrel{\Rightarrow}BangAt{n} (\mathfrak la{x}{TThree_0'})\Bang{TThree_1'} = TThree} \end{prooftree} } \end{align*} where $n = \min\{n_0, n_1\}$; therefore (as $\mathrel{\Rightarrow}BangAt{k} \,\subseteq\, \mathrel{\Rightarrow}Bang$) \[ {}^{\bot}eta\etagin{prooftree} \hypo{T_0 \mathrel{\Rightarrow}Bang TThree_0'} \hypo{T_1 \mathrel{\Rightarrow}Bang TThree_1'} \infer2{T = (\mathfrak la{x}{T_0})\Bang{T_1} \mathrel{\Rightarrow}Bang TThree_0'\mathfrak sub{TThree_1'}{x} = TTwo} \end{prooftree} \] \end{enumerate} \end{itemize} \item Since $T \mathrel{\Rightarrow}NotLlBang TThree \textsc {l}redBang TTwo$, $T \mathrel{\Rightarrow}BangAt{n} TThree \mathfrak to{\mathfrak tot}Ind{m} TTwo$ for some $n \in {\mathbb N} \cup \{\infty\}$ and $m \in {\mathbb N}$ with $n > \textsc {l}Bang{T}$ and $m = \textsc {l}Bang{TThree}$. As $\mathrel{\Rightarrow}NotLlBang \ \subseteq \ \mathrel{\nllredx{\mathfrak tot}}^*$ and $\mathrel{\nllredx{\mathfrak tot}}$ cannot change the least-level (\Cref{def:good}.\ref{p:ll-properties-invariance} and \Cref{prop:ll-properties}), $\textsc {l}Bang{T} = \textsc {l}Bang{TThree}$ and so $n > m$. By merging by level (\Cref{lemma:merge}.\ref{p:merge-level}), $T \mathrel{\Rightarrow}Bang TTwo$. \qed \end{enumerate} \end{proof} The proof of the split property requires a further tool, to get the right induction hypothesis: the \emph{indexed parallel ${}^{\bot}eta\etata_\oc$-reduction} $\mathrel{\Rightarrow}BangInd{n}$ (not to be confused with $\mathrel{\Rightarrow}BangAt{n}$), \textit{i.e.}\xspace $\mathrel{\Rightarrow}Bang$ equipped with a natural number $n$ which is, roughly, the number of ${}^{\bot}eta\etata_\oc$-redexes reduced simultaneously by $\mathrel{\Rightarrow}Bang$. 
The formal definition of $\mathrel{\Rightarrow}BangInd{n}$ is (where $\mathfrak size{T}_x$ is the number of free occurrences of $x$ in $T$): {}^{\bot}eta\etagin{center} \small {}^{\bot}eta\etagin{prooftree} \infer0{x \mathrel{\Rightarrow}BangInd{0} x} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangInd{n} T'} \infer1{\mathfrak la{x}{T} \mathrel{\Rightarrow}BangInd{n} \mathfrak la{x}{T'}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangInd{m} T'} \hypo{TTwo \mathrel{\Rightarrow}BangInd{n} TTwo'} \infer2{TTTwo \mathrel{\Rightarrow}BangIndLong{m+n} T'TTwo'} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangInd{n} T'} \infer1{\Bang{T} \mathrel{\Rightarrow}BangIndLong{n+1} \Bang{T'}} \end{prooftree} \\[5pt] {}^{\bot}eta\etagin{prooftree} \hypo{T \mathrel{\Rightarrow}BangInd{n} T'} \hypo{TTwo \mathrel{\Rightarrow}BangInd{m} TTwo'} \infer2{(\mathfrak la{x}{T})\Bang{TTwo} \mathrel{\Rightarrow}BangIndLong{n + \mathfrak size{T'}_x\cdot m + 1} T'\mathfrak sub{TTwo'\!}{x}} \end{prooftree} \quad {}^{\bot}eta\etagin{prooftree}[separation=1.2em] \hypo{T_1 \mathrel{\Rightarrow}BangInd{n_1} T_1'} \hypo{\overset{k \in {\mathbb N}}{\dots}} \hypo{T_k \mathrel{\Rightarrow}BangInd{n_k} T_k'} \infer3{\mathbf{o}(T_1, \dots, T_k) \mathrel{\Rightarrow}BangIndLong{n_1 + \dots + n_k} \mathbf{o}(T_1', \dots, T_k')} \end{prooftree} \end{center} The intuition behind the last clause is: $(\mathfrak la{x}{T})TTwo$ reduces to $T'\mathfrak sub{TTwo'\!}{x}$ by: {}^{\bot}eta\etagin{enumerate} \item first reducing $(\mathfrak la{x}{T})TTwo$ to $T\mathfrak sub{TTwo}{x}$ ($1$ step); \item then reducing in $T\mathfrak sub{TTwo}{x}$ the $n$ steps corresponding to the sequence $T \mathrel{\Rightarrow}BangInd{n} T'$; \item finally reducing $TTwo$ to $TTwo'$ for every occurrence of $x$ in $T'$ replaced by $TTwo$, that is, $m$ steps for $\mathfrak size{T'}_x$ times, obtaining $T'\mathfrak sub{TTwo'\!}{x}$. \end{enumerate} {}^{\bot}eta\etagin{lemma}[Split] \label{lemma:split} In the bang calculus $(\mathfrak lambda_\oc, \mathfrak to{\mathfrak tot})$: {}^{\bot}eta\etagin{enumerate} \item\label{p:split-indexed} \emph{Indexed split}: If $T \mathrel{\Rightarrow}BangInd{n} TTwo$ then $T \mathrel{\Rightarrow}NotLlBang TTwo$, or $n > 0 $ and $T \textsc {l}redBang \!\cdot\! \mathrel{\Rightarrow}BangIndLong{n-1} TTwo$. \item\label{p:split-ll} \emph{Split for least-level}: If $T \mathrel{\Rightarrow}Bang TTwo$, then $T \textsc {l}redBang^* \!\cdot\! \mathrel{\Rightarrow}NotLlBang TTwo$. \end{enumerate} \end{lemma} {}^{\bot}eta\etagin{proof} {}^{\bot}eta\etagin{enumerate} \item By induction on the definition of $T \mathrel{\Rightarrow}BangInd{n} TTwo$. Consider the last rule of the derivation of $T \mathrel{\Rightarrow}BangInd{n} TTwo$. We freely use the fact that if $T \mathrel{\Rightarrow}BangInd{n} TTwo$ then $T \mathrel{\Rightarrow}Bang TTwo$. Cases: {}^{\bot}eta\etagin{itemize} \item \emph{Variable}: $T = x \mathrel{\Rightarrow}BangInd{0} x = TTwo$. Then, $T = x \mathrel{\Rightarrow}NotLlBang x = TTwo$ since $x \mathrel{\Rightarrow}BangInd{\infty} x$. \item \emph{Abstraction}: $T = \mathfrak la x{T'} \mathrel{\Rightarrow}BangInd n \mathfrak la {x} {TTwo'} = TTwo$ because $T' \mathrel{\Rightarrow}BangInd n TTwo'$. It follows from the \textit{i.h.}\xspace. \item \emph{Box}: $T = \Bang{T'} \mathrel{\Rightarrow}BangInd{n} \Bang{TTwo'} = TTwo$ because $T' \mathrel{\Rightarrow}BangInd{n} TTwo'$. It follows from the \textit{i.h.}\xspace. 
\item \emph{Application}: \begin{align} \label{eq:app-parallel} { \begin{prooftree} \hypo{TThree \mathrel{\Rightarrow}BangInd {n_1} TThree'} \hypo{TFive \mathrel{\Rightarrow}BangInd {n_2} TFive'} \infer2{T = TThree TFive \mathrel{\Rightarrow}BangIndLong{n_1 + n_2} TThree' TFive' = TTwo} \end{prooftree} } \end{align} with $n = n_1 + n_2$. There are only two cases: \begin{itemize} \item either $TThree TFive \mathrel{\Rightarrow}NotLlBang TThree' TFive'$, and then the claim holds; \item or $TThreeTFive \not\mathrel{\Rightarrow}NotLlBang TThree'TFive'$ and so (as $TThreeTFive \mathrel{\Rightarrow}Bang TThree'TFive'$ with $TThreeTFive \neq TThree'TFive'$) any derivation with conclusion $TThree TFive \mathrel{\Rightarrow}BangAt{d} TThree' TFive'$ is such that $d =\textsc {l}Bang{TThree TFive} \in {\mathbb N}$. Let us rewrite derivation \eqref{eq:app-parallel} replacing $\mathrel{\Rightarrow}BangInd{n}$ with $\mathrel{\Rightarrow}BangAt{k}$: we have\footnote{This is possible because the inference rules for $\mathrel{\Rightarrow}BangInd{n}$ and $\mathrel{\Rightarrow}BangAt{k}$ are the same except for the way they manage their own indexes $n$ and $k$.} \[ \begin{prooftree} \hypo{TThree \mathrel{\Rightarrow}BangAt {d_{TThree}} TThree'} \hypo{TFive \mathrel{\Rightarrow}BangAt{d_{TFive}} TFive'} \infer2{T = TThree TFive \mathrel{\Rightarrow}BangAt d TThree' TFive' = TTwo} \end{prooftree} \] where $d = \min\{d_{TThree}, d_{TFive}\}$. Thus, there are two sub-cases: \begin{enumerate} \item $d = d_{TThree} \leq d_{TFive}$ and then $d= \textsc {l}Bang{TThree TFive} \leq \textsc {l}Bang{TThree} \leq d_{TThree}=d$ (the first inequality holds by definition of $\textsc {l}Bang{TThreeTFive}$), hence $\textsc {l}Bang{TThree} = d_{TThree}$; we apply the \textit{i.h.}\xspace\ to $TThree \mathrel{\Rightarrow}BangInd{n_1} TThree'$ and we have that $TThree \mathrel{\Rightarrow}NotLlBang TThree'$, or $n_1>0$ and $TThree \textsc {l}redBang TThree_1 \mathrel{\Rightarrow}BangInd {n_1-1} TThree'$; but $TThree \mathrel{\Rightarrow}NotLlBang TThree'$ is impossible because otherwise $TThree TFive \mathrel{\Rightarrow}NotLlBang TThree' TFive'$ (as $d_{TThree} \leq d_{TFive}$); therefore, $n_1>0$ and $TThree \textsc {l}redBang TThree_1 \mathrel{\Rightarrow}BangInd{n_1-1} TThree'$, so $n>0$ and $T = TThree TFive \textsc {l}redBang TThree_1 TFive \mathrel{\Rightarrow}BangIndLong {n_1-1+n_2} TThree'TFive' = TTwo$. \item $d=d_{TFive} \leq d_{TThree}$ and then $d= \textsc {l}Bang{TThree TFive}\leq \textsc {l}Bang{TFive} \leq d_{TFive} =d$, hence $\textsc {l}Bang{TFive} = d_{TFive}$; we conclude analogously to the previous sub-case. \end{enumerate} \end{itemize} \item \emph{$\beta$ step}: \[ \begin{prooftree} \hypo{TThree \mathrel{\Rightarrow}BangInd {n_1} TThree'} \hypo{TFour \mathrel{\Rightarrow}BangInd {n_2} TFour'} \infer2{T = (\mathfrak lax TThree)\Bang{TFour} \mathrel{\Rightarrow}BangIndLong {n_1 + \mathfrak sizeP{TThree'}x \cdot n_2 +1} TThree'\mathfrak sub{TFour'}x = TTwo} \end{prooftree} \] with $n = n_1 + \mathfrak sizeP{TThree'}x \cdot n_2 +1 > 0$. We have $T = (\mathfrak lax TThree)\Bang{TFour} \textsc {l}redBang TThree \mathfrak sub{TFour}x$ and by substitutivity of $\mathrel{\Rightarrow}BangInd{n}$ (\Cref{partobind-subs}) $TThree\mathfrak sub{TFour}x \mathrel{\Rightarrow}BangIndLong{n_1 + \mathfrak sizeP{TThree'}x \cdot n_2} TThree' \mathfrak sub{TFour'}x = TTwo$.
\end{itemize} \item If $T \mathrel{\Rightarrow}Bang TTwo$ then $T \mathrel{\Rightarrow}BangInd{n} TTwo$ for some $n \in {\mathbb N}$. We prove the statement by induction $n$. By indexed split (\Cref{lemma:split}.\ref{p:split-indexed}), there are only two cases: {}^{\bot}eta\etagin{itemize} \item \emph{$T \mathrel{\Rightarrow}NotLlBang TTwo$}. This is an instance of the statement (since $\textsc {l}redBang^*$ is reflexive). \item $n>0$ and there exists $TFour$ such that $T \textsc {l}redBang TFour \mathrel{\Rightarrow}BangInd{n-1} TTwo$. By \textit{i.h.}\xspace applied to $TFour \mathrel{\Rightarrow}BangInd{n-1} TTwo$, there is $TThree$ such that $TFour \textsc {l}redBang^* TThree \mathrel{\Rightarrow}NotLlBang TTwo$, and so $T \textsc {l}redBang^* TThree \mathrel{\Rightarrow}NotLlBang TTwo$. \qed \end{itemize} \end{enumerate} \end{proof} By \Cref{prop:abstract-factorize}, \Cref{lemma:merge,lemma:split}, least-level factorization of $\mathfrak to{\mathfrak tot}$ holds. \factorizebang* \section{Omitted proofs and lemmas of \Cref{sect:factorization}} \paragraph{Factorization.} Our proof of factorization follows and generalizes the approach used in \cite{AccattoliFaggianGuerrieri19} for the pure CbN\xspace $\lambda$-calculus, in particular the following characterization. {}^{\bot}eta\etagin{proposition}[Abstract factorization \cite{AccattoliFaggianGuerrieri19}] \label{prop:abstract-factorize} Let $\mathfrak to{} \ = \ \essred \cup \nessred$ be a reduction over a set $\mathcal{A}$. Suppose that there are reductions $\mathrel{\Rightarrow}$ and $\mathrel{\Rightarrow}NotEss$ such that: {}^{\bot}eta\etagin{itemize} \item \emph{Macro:} $\nessred \ \subseteq \ \mathrel{\Rightarrow}NotEss \ \subseteq \nessred^*$; \item \emph{Merge:} For any $t, tTwo \in \mathcal{A}$, if $t \mathrel{\Rightarrow}NotEss \!\cdot\! \essred tTwo$ then $t \mathrel{\Rightarrow} tTwo$; \item \emph{Split:} For any $t, tTwo \in \mathcal{A}$, if $t \mathrel{\Rightarrow} tTwo$ then $t \essred^* \!\cdot\! \mathrel{\Rightarrow}NotEss tTwo$. \end{itemize} Then, $(\mathcal{A},\mathfrak to{})$ $\textsc{e}$-factorize: for any $t, tTwo \in \mathcal{A}$, if $t \mathfrak to{}^* tTwo$ then $t \essred^* \!\cdot\! \nessred^* tTwo$. \end{proposition} Our goal is then to apply \Cref{prop:abstract-factorize} to the bang calculus, where $\mathfrak to{} \ = \ \mathfrak to{\mathfrak tot}$ and $\essred \ = \ \textsc {l}redBang$ and $\nessred \ = \ \mathrel{\nllredx{\mathfrak tot}}$. So, we have to identify reductions $\mathrel{\Rightarrow}$ and $\mathrel{\Rightarrow}NotEss$ in the bang calculus such that the properties macros, merge and split hold. The natural solution is to take: {}^{\bot}eta\etagin{itemize} \item $\mathrel{\Rightarrow} \ = \ \mathrel{\Rightarrow}Bang$, the parallel version of $\mathfrak to{\mathfrak tot}$, which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes; \item $\mathrel{\Rightarrow}NotEss \ = \ \mathrel{\Rightarrow}NotLlBang$, the parallel version of $\mathrel{\nllredx{\mathfrak tot}}$, which fires simultaneously a number of ${}^{\bot}eta\etata_\oc$-redexes that are not at minimal level. 
\section{Omitted proofs of \Cref{sec:ll_modular}} The proofs in this section rely on three ingredients: the properties of the contextual closures---in particular \Cref{fact:shape}, as spelled out in \Cref{fact:isteps}---, the properties of \emph{substitution}, which we recall below, and an assumption of good least-level. A relation $\looparrowright$ on terms is \emph{substitutive} if \begin{equation}\tag{\textbf{substitutive}} R \looparrowright R' \text{ implies } R \subs x Q \looparrowright R'\subs x Q. \end{equation} An obvious induction on the shape of terms shows the following (\cite{Barendregt84}, p.~54). \begin{property}[Substitutive]\label{fact:subs} Let $\mathrel{\rightarrow}c$ be the contextual closure of $\xred{\rsym}c$. \begin{enumerate} \item\label{fact:subs-function} If $\xred{\rsym}c$ is substitutive then $\mathrel{\rightarrow}c$ is substitutive: $T\mathrel{\rightarrow}c T'$ implies $T \subs{x}{Q} \mathrel{\rightarrow}c T' \subs{x}{Q}$. \item\label{fact:subs-argument} If $Q\mathrel{\rightarrow}c Q'$ then $T\subs{x}{Q} \mathrel{\rightarrow}c^* T\subs{x}{Q'}$, always. \end{enumerate} \end{property} We also rely on an assumption of good least-level, which we use to obtain the following technical lemma. \begin{lemma}\label{lem:nll_app} If $\mathrel{\rightarrow}$ has a good least-level, then $P Q \nllred P' Q \textsc {l}red P'' Q$ implies $P\nllred P' \textsc {l}red P''$. Similarly, $Q P\nllred Q P' \textsc {l}red Q P''$ implies $P\nllred P' \textsc {l}red P''$. \end{lemma} \begin{proof} Assume $P\mathrel{\rightarrow}_{:k} P' \mathrel{\rightarrow}_{:l} P''$. By assumption, $l=\textsc {l}ev{P'Q}=\textsc {l}ev{P'}$, and so $P' \textsc {l}red P''$. By internal invariance, $\textsc {l}ev{PQ}=l$, and so by assumption $k>l$. By monotonicity, $\textsc {l}ev P \leq \textsc {l}ev{P'}=l <k$. Since $k>\textsc {l}ev {P}$, we have $P\nllred P'$. \end{proof} We recall that we often write $\mathcal Root{}$ to indicate a step $\mathrel{\rightarrow}$ obtained by \emph{empty contextual closure}. \subsection{$\lambda_\ocOp$: Least-Level Factorization, Modularly} Consider a calculus $(\lambda_\ocOp, \mathrel{\rightarrow} \,=\, \mathrel{\rightarrow}bb\cup \mathrel{\rightarrow}c)$, where $\mathrel{\rightarrow}c$ is a new reduction added to $\mathfrak tot$.
\Cref{thm:modular} states that the compound system $\mathrel{\rightarrow}bb\cup \mathrel{\rightarrow}c$ satisfies least-level factorization if $\F\textsc {l}redbb\mathrel{\nllredx{\beta}}b$, $\F\textsc {l}redc\mathrel{\nllredx{\rho}}$, and the two linear swaps hold. We have already proved that if $\mathrel{\rightarrow}$ has a good least-level, then $\F\textsc {l}redbb\mathrel{\nllredx{\beta}}b$ always holds. We now show that verifying the linear swaps reduces to a single simple test, leading to \Cref{prop:test_ll}. First, we observe that each linear swap condition can be tested by considering, for the least-level step, only $\mapsto$, that is, only the closure of $\mapsto$ under \emph{empty} context. This is expressed in the following lemma, where we also include a useful variant. \begin{lemma}[Root linear swaps]\label{l:ll_swaps} Let $\mathrel{\rightarrow}a, \mathrel{\rightarrow}c$ be the contextual closures of the rules $\xred{\rsym}a,\xred{\rsym}c$, and assume $\mathrel{\rightarrow}a\cup \mathrel{\rightarrow}c$ to have a good least-level. \begin{enumerate} \item $\nllredx{\xi} \cdot \xred{\rsym}c \subseteq {\textsc {l}redx{\rho}} \cdot \mathrel{\rightarrow}a^* $ implies $\nllredx{\xi} \cdot \textsc {l}redx{\rho} \subseteq {\textsc {l}redx{\rho}} \cdot \mathrel{\rightarrow}a^* $. \item Similarly, $\nllredx{\xi} \cdot \xred{\rsym}c \subseteq {\textsc {l}redx{\rho}} \cdot \mathrel{\rightarrow}a^= $ implies $\nllredx{\xi} \cdot \textsc {l}redx{\rho} \subseteq {\textsc {l}redx{\rho}} \cdot \mathrel{\rightarrow}a^= $. \end{enumerate} \end{lemma} \begin{proof} Assume $M \nllredx{\xi} U \textsc {l}redx{\rho}N$. If $U$ is the redex (\textit{i.e.}\xspace the step $U \textsc {l}redx{\rho} N$ is a root step), the claim holds by assumption. Otherwise, we prove $M{\textsc {l}redx{\rho}} \cdot \mathrel{\rightarrow}a^* N$, by induction on the structure of $U$. Observe that both $M$ and $N$ have the same shape as $U$ (by Property~\ref{fact:shape}). \begin{itemize} \item $U=U_1U_2$ (hence $M=M_1M_2$ and $N=N_1N_2$). We have two cases. \begin{enumerate} \item Case $U_1 \textsc {l}redc N_1$. By \Cref{fact:isteps}, either $M_1\mathrel{\rightarrow}a U_1$ or $M_2\mathrel{\rightarrow}a U_2$. \begin{enumerate} \item Assume $M:=M_1M_2 \mathrel{\nllredx{\xi}} U_1 M_2\textsc {l}redc N_1 M_2:=N$. By \Cref{lem:nll_app}, we have $M_1 \mathrel{\nllredx{\xi}} U_1 \textsc {l}redc N_1 $, and we conclude by \emph{i.h.}\xspace. \item Assume $M:=U_1M_2 \mathrel{\nllredx{\xi}} U_1 U_2\textsc {l}redc N_1 U_2:=N$. Then $U_1M_2 \textsc {l}redc N_1M_2\mathrel{\rightarrow}a N_1U_2$. \end{enumerate} \item Case $U_2 \textsc {l}redc N_2$. Similar to the above. \end{enumerate} \item $U=\lam x.U_0$ (hence $M=\lam x. M_0$ and $N=\lam x. N_0$). We conclude by \emph{i.h.}\xspace. \item $U=!U_0$ (hence $M=! M_0$ and $N=! N_0$). We conclude by \emph{i.h.}\xspace. \end{itemize} \end{proof} Since we study $\mathrel{\rightarrow}b\cup \mathrel{\rightarrow}c$, one of the linear swaps is $\nllredx{\rho} \cdot \textsc {l}redbb \subseteq {\textsc {l}redbb} \cdot \mathrel{\rightarrow}c^*$. We show that, whatever $\mathrel{\rightarrow}c$ is, it linearly swaps after $\textsc {l}redbb$ as soon as $\xred{\rsym}c$ is \emph{substitutive}. \begin{lemma}[Swap with $\textsc {l}redbb$] \label{l:swap_after_b} Let $\mathrel{\rightarrow}c$ be the contextual closure of the rule $\xred{\rsym}c$, and assume $\mathrel{\rightarrow}bb\cup \mathrel{\rightarrow}c$ has a good least-level.
If $\xred{\rsym}c$ is substitutive, then $\nllredx{\rho} \cdot \textsc {l}redbb \subseteq {\textsc {l}redbb} \cdot \mathrel{\rightarrow}c^* $ always holds. \end{lemma} \begin{proof} We prove $\nllredx{\rho} \cdot \mapsto_{\mathfrak tot} ~\subseteq ~ {\textsc {l}redbb} \cdot \mathrel{\rightarrow}c^* $, and conclude by \Cref{l:ll_swaps}. Assume $M \nllredx{\rho} (\lam x.P) !Q \mapsto_{\mathfrak tot} P \subs x Q$. We want to prove $M{\textsc {l}redbb} \cdot \mathrel{\rightarrow}c^* P \subs x Q$. By \Cref{fact:isteps}, $M=M_1M_2$ and either $M_1= \lam x.M_P \mathrel{\rightarrow}c (\lam x.P)$ or $M_2= !M_Q \mathrel{\rightarrow}c !Q$. \begin{itemize} \item In the first case, $M=(\lam x.M_P)!Q$, with $M_P \mathrel{\rightarrow}c P$. Hence $M=(\lam x.M_P)!Q \mapsto_{\mathfrak tot} M_P \subs x Q$ and we conclude by substitutivity of $\mathrel{\rightarrow}c$ (Property~\ref{fact:subs}, point~1). \item In the second case, $M=(\lam x.P)!M_Q$ with $M_Q \mathrel{\rightarrow}c Q$. Hence $M=(\lam x.P)!M_Q \mapsto_{\mathfrak tot} P \subs x {M_Q}$, and we conclude by Property~\ref{fact:subs}, point~2. \end{itemize} \end{proof} Summing up, since surface factorization for $\mathfrak tot$ is known, we obtain the following compact test for least-level factorization in extensions of $\lambda^!$. \testll* \subsection{Modular factorization for CbV and CbN} A modularity result similar to \Cref{prop:test_ll} can be established for CbN and CbV. Moreover, there is nothing special about least-level: leftmost works equally well. \begin{proposition}[A test for CbN least-level factorization] Let $\mathrel{\rightarrow}c$ be the contextual closure of $\xred{\rsym}c$, and assume $\mathrel{\rightarrow} \,=\, (\mathrel{\rightarrow}b \cup \mathrel{\rightarrow}c)$ has a good least-level. The union $\mathrel{\rightarrow}b \cup \mathrel{\rightarrow}c$ satisfies CbN least-level factorization if: \begin{enumerate} \item \emph{$\textsc {l}$-factorization of $\mathrel{\rightarrow}c$}: $\F{\textsc {l}redx{\rho}}{\nllredx{\rho}}$. \item \emph{Substitutivity}: $\xred{\rsym}c$ is substitutive. \item \emph{Root linear swap}: $\nllredx{\beta} \cdot \xred{\rsym}c \ \subseteq \ \textsc {l}redx{\rho} \cdot\mathrel{\rightarrow}b^* $. \end{enumerate} \end{proposition} \begin{proposition}[A test for CbV least-level factorization] Let $\mathrel{\rightarrow}c$ be the contextual closure of $\xred{\rsym}c$, and assume $\mathrel{\rightarrow} \,=\, (\mathrel{\rightarrow}bv \cup \mathrel{\rightarrow}c)$ has a good least-level. The union $\mathrel{\rightarrow}bv \cup \mathrel{\rightarrow}c$ satisfies CbV least-level factorization if: \begin{enumerate} \item \emph{$\textsc {l}$-factorization of $\mathrel{\rightarrow}c$}: $\F{\textsc {l}redx{\rho}}{\nllredx{\rho}}$. \item \emph{Substitutivity}: $\xred{\rsym}c$ is substitutive. \item \emph{Root linear swap}: $\nllredx{{\beta_v}} \cdot \xred{\rsym}c \ \subseteq \ \textsc {l}redx{\rho} \cdot\mathrel{\rightarrow}bv^* $. \end{enumerate} \end{proposition} } \end{document}
\begin{document} \title{The Multivariate Generalised von Mises Distribution: Inference and Applications} \begin{abstract} Circular variables arise in a multitude of data-modelling contexts ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the circular domain. First we introduce a new multivariate distribution over circular variables, called the multivariate Generalised von Mises (mGvM) distribution. This distribution can be constructed by restricting and renormalising a general multivariate Gaussian distribution to the unit hyper-torus. Previously proposed multivariate circular distributions are shown to be special cases of this construction. Second, we introduce a new probabilistic model for circular regression inspired by Gaussian Processes, and a method for probabilistic Principal Component Analysis with circular hidden variables. These models can leverage standard modelling tools (e.g.~kernel functions and automatic relevance determination). Third, we show that the posterior distribution in these models is a mGvM distribution, which enables the development of an efficient variational free-energy scheme for performing approximate inference and approximate maximum-likelihood learning. \end{abstract} \section{Introduction} Many data modelling problems in science and engineering involve circular variables. For example, the spatial configuration of a molecule~\cite{boomsma08,frellsen09}, robot, or the human body~\cite{chirikjian2000} can be naturally described using a set of angles. Phase variables arise in image and audio modelling scenarios~\cite{wadhwa2013}, while directional fields are also present in fluid dynamics~\cite{jona-lasinio2012}, and neuroscience~\cite{benyishai1995}. Phase-locking to periodic signals occurs in a multitude of fields ranging from biology~\cite{gao2010} to the social sciences~\cite{brunsdon_using_2006}. It is possible, at least in principle, to model circular variables using distributional assumptions that are appropriate for variables that live in a standard Euclidean space. For example, a na\"{i}ve application might represent a circular variable in terms of its angle $\phi \in [0,2\pi)$ and use a standard distribution over this variable (presumably restricted to the valid domain). Such an approach would, however, ignore the topology of the space, e.g.~that $\phi =0$ and $\phi=2\pi$ are equivalent. Alternatively, the circular variable can be represented as a unit vector in $\mathbb{R}^{2}$, $\vt{x} = [\cos(\phi),\sin(\phi)]^{\top}$, and a standard bivariate distribution used instead. This partially alleviates the aforementioned topological problem, but standard distributions place probability mass off the unit circle, which adversely affects learning, prediction and analysis. In order to predict and analyse circular data it is therefore key that machine learning practitioners have at their disposal a suite of bespoke modelling, inference and learning methods that are specifically designed for circular data~\cite{lebanon2005}. The fields of circular and directional statistics have provided a toolbox of this sort~\cite{mardia_directional_2000}. However, the focus has been on fairly simple and small models that are applied to small datasets enabling MCMC to be tractably deployed for approximate inference.
The goal of this paper is to extend the existing toolbox provided by statistics, by leveraging modelling and approximate inference methods from the probabilistic machine learning field. Specifically, the paper makes three technical contributions. First, in \cref{sec:mgvm} it introduces a central multivariate distribution for circular data---called the multivariate Generalised von Mises distribution---that has elegant theoretical properties and which can be combined in a plug-and-play manner with existing probabilistic models. Second, in \cref{sec:applications} it shows that this distribution arises in two novel models that are circular versions of Gaussian Process regression and probabilistic Principal Component Analysis with circular hidden variables. Third, it develops efficient approximate inference and learning techniques based on variational free-energy methods as demonstrated on four datasets in \cref{sec:results}. \section{Circular distributions primer\label{sec:review}} In order to explain the context and rationale behind the contributions made in this paper, it is necessary to know a little background on circular distributions. Since \emph{multidimensional} circular distributions are not generally well-known in the machine learning community, we present a brief review of the main concepts related to these distributions in this section. The expert reader can jump to \cref{sec:mgvm} where the multivariate Generalised von Mises distribution is introduced. A univariate circular distribution is a probability distribution defined over the unit circle. Such distributions can be constructed by wrapping, marginalising or conditioning standard distributions defined in Euclidean spaces and are classified as \emph{wrapped}, \emph{projected} or \emph{intrinsic} according to the geometric interpretation of their construction. More precisely, the \emph{wrapped} approach consists of taking a univariate distribution $p(x)$ defined on the real line, parametrising any point $x \in \mathbb{R}$ as $x = \phi + 2\pi k$ with $k \in \mathbb{Z}$ and summing over all $k$ so that $p(x)$ is wrapped around the unit circle. The most commonly used wrapped distribution is the Wrapped Gaussian distribution~\cite{ferrari_wrapping_2009,jona-lasinio2012}. An alternative approach takes a standard bivariate distribution $p(x,y)$ that places probability mass over $\mathbb{R}^2$, transforms it to polar coordinates $[x, y]^{\top} \rightarrow [r\cos\phi, r\sin\phi]^{\top}$ and marginalises out the radial component $\int_{0}^{\infty}p(r\cos\phi, r\sin\phi)\,r \,\text{d} r$. This approach can be interpreted as projecting all the probability mass that lies along a ray from the origin onto the point where it crosses the unit circle. The most commonly used projected distribution is the Projected Gaussian~\cite{wang_directional_2013}. Instead of marginalising the radial component, circular distributions can be constructed by conditioning it to unity, $p(x, y | x^2+y^2=1)$. This can be interpreted as restricting the original bivariate density to the unit circle and renormalising. A distribution constructed in this way is called ``intrinsic'' (to the unit circle). The construction has several elegant properties. First, the resulting distribution inherits desirable characteristics of the base distribution, such as membership of the exponential family. Second, the form of the resulting density often affords more analytical tractability than those produced by wrapping or projection.
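As a small worked example of the intrinsic construction (a standard computation, spelled out here for concreteness), take an isotropic bivariate Gaussian $\xmcal{N}(\vt{x}; \vt{m}, \sigma^{2}\mat{I})$ and restrict it to the unit circle by setting $\vt{x} = [\cos\phi, \sin\phi]^{\top}$. Since $\|\vt{x}\|^{2} = 1$ on the circle, the quadratic term in the exponent is constant and
\begin{align*}
p(\phi) \propto \exp\Big(-\tfrac{1}{2\sigma^{2}}\|\vt{x}-\vt{m}\|^{2}\Big) \propto \exp\Big(\tfrac{m_{1}\cos\phi + m_{2}\sin\phi}{\sigma^{2}}\Big) = \exp\big(\kappa\cos(\phi-\mu)\big),
\end{align*}
with $\kappa = \|\vt{m}\|/\sigma^{2}$ and $\mu = \operatorname{atan2}(m_{2}, m_{1})$, which is precisely the von Mises distribution introduced next.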
The most important intrinsic distribution is the von Mises (vM), $p(\phi|\mu,\kappa) \propto \exp(\kappa \cos(\phi-\mu))$, which is obtained by conditioning an isotropic bivariate Gaussian to the unit circle. The vM has two parameters, the mean $\mu \in [0,2\pi)$ and the concentration $\kappa \in \mathbb{R}^{+}$. If the covariance matrix of the bivariate Gaussian is a general real positive definite matrix, we obtain the Generalised von Mises (GvM) distribution~\cite{gatto_generalized_2007}\footnote{To be precise, Gatto and Jammalamadaka define this to be a Generalised von Mises of order 2, but since higher-order Generalised von Mises distributions are more intractable and consequently have found fewer applications, we use the shorthand throughout.} \begin{align} p(\phi) \propto \exp(\kappa_1 \cos(\phi-\mu_1) + \kappa_2 \cos( 2 (\phi-\mu_2)))\,. \label{eq:3} \end{align} The GvM has four parameters, two mean-like parameters $\mu_i \in [0,2\pi)$ and two concentration-like parameters $\kappa_i \in \mathbb{R}^{+}$, and is an exponential family distribution. The GvM is generally asymmetric. It has two modes when $4\kappa_{2} \geq \kappa_{1}$; otherwise it has one mode, except when it is the uniform distribution ($\kappa_{2} = \kappa_{1} = 0$). The GvM is arguably more tractable than the distributions obtained by wrapping or projection as its unnormalised density takes a simple form. In comparison, the unnormalised density of the wrapped normal involves an infinite sum and that of the projected normal is complex and requires special functions. However, the normalising constant of the GvM (and its higher moments) are still complicated, containing infinite sums of modified Bessel functions~\cite{gatto_computational_2008}. In this paper the focus will be on the extensions to vectors of dependent circular variables that lie on a (hyper-) torus (although similar methods can be applied to multivariate hyper-spherical models). An example of a multivariate distribution on the hyper-torus is the multivariate von Mises (mvM) by~\citet{mardia_multivariate_2008} \begin{align} \xmcal{mvM}(\vt{\arv}) \propto \exp\Big\{\vt{\kappa}^{\top} \cos(\vt{\arv}-\vt{\nu}) + \sin(\vt{\arv}-\vt{\nu})^{\top}\mat{G}\sin(\vt{\arv}-\vt{\nu})\Big\}\,. \end{align} The terms $\cos(\vt{\arv})$ and $\sin(\vt{\arv})$ denote element-wise application of the cosine and sine functions to the vector $\vt{\arv}$, $\vt{\kappa}$ is an element-wise positive $D$-dimensional real vector, $\vt{\nu}$ is a $D$-dimensional vector whose entries take values on $[0, 2\pi)$, and $\mat{G}$ is a matrix whose diagonal entries are all zeros. The mvM distribution draws its name from its property that the one-dimensional conditionals, $p(\phi_d|\vt{\arv}_{\neq d})$, are von Mises distributed. As shown in the Supplementary Material, this distribution can be obtained by applying the \emph{intrinsic} construction to a $2D$-dimensional Gaussian, mapping $\vt{x}\to (r \cos\vt{\arv}^{\top}, r\sin\vt{\arv}^{\top})^{\top}$ and assuming its precision matrix has the form \begin{align} \mat{W} = \vt{\Sigma}^{-1} = \begin{bmatrix} \Lambda & \mat{A} \\ \mat{A}^{\top} & \Lambda \end{bmatrix} \label{eq:sparsityMvM} \end{align} where $\Lambda$ is a diagonal $D$ by $D$ matrix and $\mat{A}$ is an antisymmetric matrix. Other important facts about the mvM are that it has no simple closed analytic form for its normalising constant, it has $D+(D-1)D/2$ degrees of freedom in its parameters and it is not closed under marginalisation. We will now consider multi-dimensional extensions of the GvM distribution.
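Before moving to the multivariate case, the following NumPy sketch (ours, not part of the models or experiments of this paper; function names are illustrative) shows how the univariate GvM of \cref{eq:3} can be handled numerically: the unnormalised density is a one-line expression, while the normalising constant is approximated by simple quadrature instead of its Bessel-series form.
\begin{verbatim}
import numpy as np

def gvm_unnormalised(phi, mu1, mu2, kappa1, kappa2):
    # Unnormalised GvM density: exp(k1*cos(phi - mu1) + k2*cos(2*(phi - mu2))).
    return np.exp(kappa1 * np.cos(phi - mu1) + kappa2 * np.cos(2.0 * (phi - mu2)))

def gvm_pdf(phi, mu1, mu2, kappa1, kappa2, grid_size=10000):
    # The exact normaliser involves infinite sums of modified Bessel functions,
    # so it is approximated here by trapezoidal quadrature over [0, 2*pi].
    grid = np.linspace(0.0, 2.0 * np.pi, grid_size)
    z = np.trapz(gvm_unnormalised(grid, mu1, mu2, kappa1, kappa2), grid)
    return gvm_unnormalised(phi, mu1, mu2, kappa1, kappa2) / z

# A bimodal example (4*kappa2 >= kappa1).
phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
print(gvm_pdf(phi, mu1=0.0, mu2=np.pi / 4.0, kappa1=1.0, kappa2=2.0))
\end{verbatim}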
\section{The multivariate Generalised von Mises\label{sec:mgvm}} In this section, we present the multivariate Generalised von Mises (mGvM) distribution as an \emph{intrinsic} circular distribution on the hyper-torus and relate it to existing distributions in the literature. Following the construction of \emph{intrinsic} distributions, the multivariate Generalised von Mises arises by constraining a $2D$-dimensional multivariate Gaussian with arbitrary mean and covariance matrix to the $D$-dimensional torus. This procedure yields the distribution \begin{multline} \xmcal{mGvM}(\vt{\arv}; \vt{\nu}, \vt{\kappa}, \mat{W}) \propto \exp \Big\{ \vt{\kappa}^{\top}\cos(\vt{\arv} - \vt{\nu}) \\ - \frac{1}{2} \begin{bmatrix} \cos(\vt{\arv})\\ \sin(\vt{\arv}) \end{bmatrix}^{\top} \begin{bmatrix} \mat{W}^{cc} & \mat{W}^{cs} \\ (\mat{W}^{cs})^{\top} & \mat{W}^{ss} \end{bmatrix} \begin{bmatrix} \cos(\vt{\arv}) \\ \sin(\vt{\arv}) \end{bmatrix} \Big\} \label{eq:mgvm_1} \end{multline} where $\mat{W}^{cc}, \mat{W}^{cs}, \mat{W}^{ss}$ are the blocks of the underlying Gaussian precision matrix $\mat{W}=\vt{\Sigma}^{-1}$, $\vt{\nu}$ is a $D$-dimensional angle vector and $\vt{\kappa}$ is a $D$-dimensional concentration vector. \Cref{eq:mgvm_1} is over-parametrised with $2D + 3(D-1)D/2$ parameters, $D$ more than the degrees of freedom of the most succinct form of the mGvM given in the Supplemental Material. The mGvM distribution generalises the multivariate von Mises by~\citet{mardia_multivariate_2008}; it collapses to the mvM when $\mat{W}$ has the form of \cref{eq:sparsityMvM}. Whereas the one-dimensional conditionals of the mvM are von Mises and therefore unimodal and symmetric, those of the mGvM are generalised von Mises and therefore can be bimodal and asymmetric. The mGvM also captures a richer set of dependencies between the variables than the mvM; notice that the mvM is not the most general form of the mGvM that has vM conditionals. The tractability of the one-dimensional conditionals of the mGvM can be leveraged for approximate inference using variational mean-field approximations and Gibbs sampling (see \cref{sec:inference}). The mGvM is a member of the exponential family and a maximum entropy distribution subject to multidimensional first- and second-order circular moment constraints. We will now show that the mGvM can be used to build rich probabilistic models for circular data. \section{Some applications of the mGvM\label{sec:applications}} In this section, we outline two novel and important probabilistic models in which inference produces a posterior distribution that is a mGvM. The first model is a circular analogue of Gaussian Process regression and the second is a version of Principal Component Analysis for circular latent variables. \subsection{Regression of circular data\label{sec:regression}} Consider a regression problem in which a set of noisy output circular variables $\{\psi_n \}_{n=1}^N$ has been collected at a number of input locations $\{\vt{s}_n \}_{n=1}^N$. The treatment will apply to inputs that can be multi-dimensional and lie in any space (e.g.~they could be circular themselves). The goal is to predict circular variables $\{ \psi^{*}_m \}_{m=1}^M$ at unseen input points $\{ \vt{s}^{*}_m \}_{m=1}^M$. Here we leverage the connection between the mGvM distribution and the multivariate Gaussian in order to produce a powerful class of probabilistic models for this purpose based upon Gaussian Processes.
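To make this connection concrete, the following NumPy sketch (ours; the squared-exponential kernel, the block-diagonal multi-output structure and all function names are illustrative assumptions rather than choices prescribed here) builds the precision blocks of \cref{eq:mgvm_1} from a GP covariance evaluated at a set of inputs and evaluates the resulting unnormalised log-density.
\begin{verbatim}
import numpy as np

def se_kernel(s1, s2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance (an illustrative kernel choice).
    d2 = (s1[:, None] - s2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def mgvm_log_density_unnorm(phi, nu, kappa, W):
    # Unnormalised log-density of the mGvM, with W split into its
    # cos/cos, cos/sin and sin/sin blocks as in the density above.
    D = phi.shape[0]
    c, s = np.cos(phi), np.sin(phi)
    Wcc, Wcs, Wss = W[:D, :D], W[:D, D:], W[D:, D:]
    quad = c @ Wcc @ c + 2.0 * c @ Wcs @ s + s @ Wss @ s
    return kappa @ np.cos(phi - nu) - 0.5 * quad

# Prior for circular regression: two independent GP components per input
# (an assumed multi-output structure) restricted to the unit circle.
s_inputs = np.linspace(0.0, 5.0, 10)
K = se_kernel(s_inputs, s_inputs) + 1e-6 * np.eye(10)
W = np.linalg.inv(np.block([[K, np.zeros_like(K)], [np.zeros_like(K), K]]))
phi = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, size=10)
print(mgvm_log_density_unnorm(phi, np.zeros(10), np.zeros(10), W))
\end{verbatim}
Any other positive-definite GP covariance could be substituted for the squared exponential in this sketch, which is what makes the construction flexible.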
In what follows the outputs and inputs will be represented as vectors and matrices respectively, that is $\vt{\psi}$, $\xmcal{S}$, $\vt{\psi}^*$ and $\xmcal{S}^*$. In standard Gaussian Process regression~\cite{rasmussen_gaussian_2006} a multivariate Gaussian prior is placed over the underlying unknown function values at the input points, $p(\vt{f}|\xmcal{S}) = \xmcal{GP}(\vt{f}; 0, \mat{K}(\vt{s},\vt{s}^{\prime}))$, and a Gaussian noise model is assumed to produce the observations at each input location, $p(y_n|f_n,\vt{s}_n) = \xmcal{N}(y_n;f_n,\sigma_y^2)$. The prior over the function values is specified using the Gaussian Process's covariance function $K(\vt{s},\vt{s}^{\prime})$, which encapsulates prior assumptions about the properties of the underlying function, such as smoothness, periodicity, stationarity, etc. Prediction then involves forming the posterior predictive distribution, $p(\vt{f}^* |\vt{y}, \xmcal{S}, \xmcal{S}^{*})$, which also takes a Gaussian form due to conjugacy. Here an analogous approach is taken. The circular underlying function values and observations are denoted $\vt{\arv}$ and $\vt{\psi}$. The prior over the underlying function is given by a mGvM in overparametrised form, $p(\vt{\arv}|\xmcal{S}) = \xmcal{mGvM}(\vt{\arv}; 0, 0, \mat{K}(\vt{s}, \vt{s}^{\prime})^{-1})$, and the observations are assumed to be von Mises noise corrupted versions of this function, $p(\psi_n| \phi_n, \vt{s}_n) = \xmcal{vM}(\psi_{n}; \phi_{n}, \kappa)$. In order to construct a sensible prior over circular function values we use a construction that is inspired by a multi-output GP to produce bivariate variables at each input location. We then leverage the intrinsic construction of the mGvM to constrain each regressed point to the unit circle, allowing the mGvM to inherit the properties of the GP covariance function it was built from. This is central to creating a flexible and powerful mGvM regression framework, as GP covariance functions are available that can handle exotic input variables such as circular variables, strings or graphs~\cite{gartner2003graph,duvenaud2011additive}. Inference proceeds subtly differently to that in a GP due to an important difference between multivariate Gaussian and multivariate Generalised von Mises distributions. That is, the former are consistent under marginalisation whilst the latter are not: if a subset of mGvM variables are marginalised out, the remaining variables are not distributed according to a mGvM. Technically, this means that for analytic tractability of inference we have to handle the joint posterior predictive distribution $p(\vt{\arv}, \vt{\arv}^* |\vt{\psi}, \xmcal{S}, \xmcal{S}^{*})$, which is a mGvM due to conjugacy, rather than $p(\vt{\arv}^* |\vt{\psi}, \xmcal{S}, \xmcal{S}^{*})$, which is not. Whilst this is somewhat less elegant than GP regression as it requires the prediction locations to be known up front, in many applications this is not a great restriction. This model type is termed transductive~\cite{quinonero2005}.
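A minimal sketch of one way the transductive construction above could be assembled is given below. It assumes a squared-exponential covariance function shared by the cosine and sine coordinates, which are taken to be a priori independent so that the Gaussian precision is block-diagonal; the kernel hyperparameters, jitter and observation concentration are illustrative assumptions rather than values used in the experiments.
\begin{verbatim}
# A minimal sketch of assembling the joint mGvM posterior natural
# parameters over training and test angles (transductive setting).
import numpy as np

def se_kernel(s1, s2, lengthscale=1.0, variance=1.0, jitter=1e-6):
    """Squared-exponential covariance on scalar inputs (assumed kernel)."""
    d = s1[:, None] - s2[None, :]
    K = variance * np.exp(-0.5 * (d / lengthscale) ** 2)
    if s1.shape[0] == s2.shape[0] and np.allclose(s1, s2):
        K = K + jitter * np.eye(s1.shape[0])
    return K

def mgvm_posterior_params(s_train, psi_train, s_test, kappa_noise=5.0):
    """Return (nu, kappa, W) of the joint mGvM posterior, assuming
    independent cos/sin GP coordinates sharing the same covariance."""
    s = np.concatenate([s_train, s_test])
    K_inv = np.linalg.inv(se_kernel(s, s))
    W = np.block([[K_inv, np.zeros_like(K_inv)],
                  [np.zeros_like(K_inv), K_inv]])    # block-diagonal precision
    n, m = len(s_train), len(s_test)
    # Each vM observation contributes kappa_noise * cos(phi_n - psi_n):
    kappa = np.concatenate([np.full(n, kappa_noise), np.zeros(m)])
    nu = np.concatenate([psi_train, np.zeros(m)])    # nu irrelevant where kappa = 0
    return nu, kappa, W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    s_train = np.sort(rng.uniform(0, 5, size=8))
    psi_train = np.mod(2.0 * s_train + 0.3 * rng.standard_normal(8), 2 * np.pi)
    s_test = np.linspace(0, 5, 4)
    nu, kappa, W = mgvm_posterior_params(s_train, psi_train, s_test)
    print(nu.shape, kappa.shape, W.shape)            # (12,) (12,) (24, 24)
\end{verbatim}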
The dynamics of rigid bodies can also be fully described by rotations around a fixed point plus a translation and, therefore, can be succinctly represented using angles; see~\cite{chirikjian2000}. For simplicity, we will restrict our treatment to a rigid body with $D$ articulations on a 2-dimensional Euclidean space and rotations only, as the discussion trivially generalises to higher dimensional spaces and translations can be incorporated through an extra linear term. Extensions to 3-dimensional models follow directly from the 2-dimensional case, which can be seen as a first step towards these more complex models. The Euclidean components of any point on an articulated rigid body can be described using the angles between each articulation and their distances. More precisely, for an upright, counter-clockwise coordinate system, the horizontal and vertical components of a point on the $d$-th articulator can be written as $x_{d} = \sum_{j=1}^{d} l_{j} \sin(\varphi_{j})$ and $y_{d} = -\sum_{j=1}^{d} l_{j} \cos(\varphi_{j})$, where $l_{j}$ is the length of link $j$ to the next link or the marker. Without loss of generality, we can model only the variation around the mean angle for each joint, i.e. $\varphi_d=\phi_d-\nu_d$, which results in the general model for noisy measurements \begin{equation} \begin{bmatrix} \vt{\yy} \\ \vt{x} \end{bmatrix} = \begin{bmatrix} -\mat{L} & \mat{0} \\ \mat{0} & \mat{L} \end{bmatrix} \begin{bmatrix} \cos(\vt{\varphi}) \\ \sin(\vt{\varphi}) \end{bmatrix} + \vt{\epsilon} = \begin{bmatrix} \mat{A} \\ \mat{B} \end{bmatrix} \begin{bmatrix} \cos(\vt{\arv}) \\ \sin(\vt{\arv}) \end{bmatrix} + \vt{\epsilon} \label{eq:rigid_body} \end{equation} where $\mat{L}$ is the matrix that encodes the distances between joints, $\mat{A}$ and $\mat{B}$ are the distance matrix rotated by the vector $\vt{\nu}$, and $\vt{\epsilon}\sim\xmcal{N}(0,\sigma^2I)$. The prior over the joint angles can be modelled by a multivariate Generalised von Mises. Here we take inspiration from Principal Component Analysis, and use independent von Mises distributions \begin{align} p(\vt{\arv}_{1,\ldots,N}) &= \textstyle\prod\limits_{n = 1}^{N}\prod\limits_{d = 1}^{D} \xmcal{vM}\left(\phi_{d,n};0,\kappa_{d}\right). \label{eq:vonMisesPriors} \end{align} Due to conjugacy, the posterior distribution over the latent angles is a mGvM distribution. This can be informally verified by noting that the priors on the latent angles $\vt{\arv}$ are exponentials of linear functions of sines and cosines, while the likelihood is the exponential of a quadratic function of sines and cosines. This leads to the posterior being the exponential of a quadratic function of sines and cosines and, hence, a mGvM. The model can be extended to treat the parameters in a Bayesian way by including sparse priors over the coefficient matrices $\mat{A}$ and $\mat{B}$ and the observation noise. A standard choice for this task is to define Automatic Relevance Determination priors~\cite{mackay_bayesian_1994} over the columns of these matrices, defined as $\xmcal{N}(\mat{A}_{m,d};0,\sigma^{2}_{\mat{A},d})$ and $\xmcal{N}(\mat{B}_{m,d};0,\sigma^{2}_{\mat{B},d})$, in order to perform automatic structure learning. Additional Inverse Gamma priors over $\sigma^{2}_{\mat{A},d}$, $\sigma^{2}_{\mat{B},d}$ and $\sigma^{2}$ are also employed. The dimensionality of the latent angle space can be lower than the dimensionality of the observed space, in which case learning and inference perform dimensionality reduction that maps real-valued data to a lower-dimensional torus.
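The forward model of \cref{eq:rigid_body} amounts to cumulative sums of link vectors; the following minimal sketch (our illustration, with arbitrary link lengths, joint angles and noise level) computes noiseless marker positions and then adds Gaussian observation noise.
\begin{verbatim}
# A minimal sketch of the planar articulated-body forward model of
# Eq. (rigid_body): x_d = sum_{j<=d} l_j sin(phi_j), y_d = -sum_{j<=d} l_j cos(phi_j).
import numpy as np

def forward_kinematics(lengths, phi):
    """Return (x, y) marker positions for joint angles phi (radians)."""
    x = np.cumsum(lengths * np.sin(phi))
    y = -np.cumsum(lengths * np.cos(phi))
    return x, y

if __name__ == "__main__":
    lengths = np.array([1.0, 0.8, 0.6])       # assumed link lengths
    phi = np.deg2rad([10.0, 35.0, -20.0])     # assumed joint angles
    x, y = forward_kinematics(lengths, phi)
    sigma = 0.05                               # assumed observation noise
    rng = np.random.default_rng(2)
    x_obs = x + sigma * rng.standard_normal(x.shape)
    y_obs = y + sigma * rng.standard_normal(y.shape)
    print(np.c_[x_obs, y_obs])
\end{verbatim}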
Besides motion capture, toroidal manifolds can also prove useful when modelling other relevant applications, such as electroencephalogram (EEG) and audio signals~\cite{turner_probabilistic_2011}. Further connections between dimensionality reduction with the mGvM and Probabilistic Principal Component Analysis (PPCA) proposed by~\citet{tipping_probabilistic_1999} (including limiting behaviour and geometrical relations between these models) are explored in the Supplementary Material. As a consequence of these similarities, we denote this model circular Principal Component Analysis (cPCA). \section{Approximate inference for the mGvM\label{sec:inference}} The multivariate Generalised von Mises does not admit an analytic expression for its normalising constant; therefore, we need to resort to approximate inference techniques. This section presents two approaches that exploit the tractable univariate conditionals of the mGvM: Gibbs sampling and mean-field variational inference. \subsection{Gibbs sampling} A Gibbs sampling procedure for the mGvM of Equation \eqref{eq:mgvm_1} can be derived by leveraging the GvM form of the one-dimensional mGvM conditionals. In particular, the Gibbs sampler update for the $d$-th conditional of the mGvM has the form \begin{equation} p(\phi_{d}|\vt{\arv}_{\neq d}) = \xmcal{GvM}(\phi_{d}; \tilde{\kappa}_{1,d}, \kappa_{2,d},\tilde{\nu}_{1,d}, \nu_{2,d}) \end{equation} where $\tilde{\kappa}_{1,d}$ and $\tilde{\nu}_{1,d}$ are functions of $\vt{\kappa}$, $\vt{\nu}$ and $\vt{\arv}_{\neq d}$ given in the Supplementary Material. The Gibbs sampler can be used to support approximate maximum-likelihood learning by using it to compute the expectations required by the EM algorithm~\cite{wei_tanner_1990}. However, it is well known that Gibbs sampling becomes less effective as the joint distribution becomes more correlated and the dimensionality grows. This is particularly significant when using the distribution in high-dimensional cases with rich correlational structure, such as those considered later in the paper. \subsection{Mean-field variational inference} As a consequence of the problems encountered when using Gibbs sampling, the variational inference framework emerges as an attractive, scalable approach to handling inference when the posterior distribution is a mGvM. The variational inference framework~\cite{jordan1999} aims to approximate an intractable posterior $p(\vt{\arv}|\vt{\psi}, \theta)$ with a distribution $q(\vt{\arv}|\rho)$ by minimising the Kullback-Leibler divergence from the distribution $q$ to $p$. If the approximating distribution is chosen to be fully factored, i.e. $q(\vt{\arv}) = \prod_{d=1}^{D}q_{d}(\phi_{d})$, the optimal functional form for $q_{d}(\phi_{d})$ can be obtained analytically using calculus of variations. The functional form of each mean-field factor is inherited from the one-dimensional conditionals and consequently is a Generalised von Mises of the form \begin{equation*} q_{d}(\phi_{d}) = \xmcal{GvM}(\phi_{d}; \bar{\kappa}_{1,d}, \kappa_{2,d},\bar{\nu}_{1,d}, \nu_{2,d}) \end{equation*} where the formulas for the parameters $\bar{\kappa}_{1,d}$ and $\bar{\nu}_{1,d}$ are similar in nature to the Gibbs sampling updates and are given in the Supplementary Material. Furthermore, the moments of the Generalised von Mises can be computed through series approximations~\cite{gatto_computational_2008}, and the errors from series truncation are negligible if a sufficiently large number of terms is considered.
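For reference, the trigonometric moments of a univariate GvM factor, on which both the Gibbs-based EM scheme and the mean-field updates rely, can also be computed by simple quadrature over the circle; the following minimal sketch (our illustration with arbitrary parameters, not the implementation used here) does exactly that.
\begin{verbatim}
# A minimal sketch of GvM trigonometric moments <cos(n*phi)>, <sin(n*phi)>
# computed by quadrature (an alternative to the Bessel-series expressions).
import numpy as np

def gvm_trig_moments(kappa1, kappa2, nu1, nu2, n_max=2, n_grid=4096):
    """Return complex moments m_n = <exp(i*n*phi)> for n = 1..n_max."""
    phi = np.arange(n_grid) * (2.0 * np.pi / n_grid)
    log_u = kappa1 * np.cos(phi - nu1) + kappa2 * np.cos(2.0 * (phi - nu2))
    w = np.exp(log_u - log_u.max())        # unnormalised, stabilised weights
    w /= w.sum()
    return np.array([(w * np.exp(1j * n * phi)).sum()
                     for n in range(1, n_max + 1)])

if __name__ == "__main__":
    m = gvm_trig_moments(kappa1=2.0, kappa2=0.7, nu1=1.0, nu2=0.3)
    print("<cos phi>, <sin phi>   :", m[0].real, m[0].imag)
    print("<cos 2phi>, <sin 2phi> :", m[1].real, m[1].imag)
\end{verbatim}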
It is possible to obtain gradients of the variational free energy and optimise it with standard optimisation methods such as Conjugate Gradient or Quasi-Newton methods, instead of resorting to coordinate ascent under the variational Expectation-Maximisation algorithm, which is often slow to converge. Despite these improvements, we found empirically that accurate calculations of the moments of a Generalised von Mises distribution can become costly when the magnitude of the concentration parameters exceeds $\approx$100 and the posterior concentrates. This numerical instability occurs when the infinite expansion for computing the moments contains a large number of significant terms with alternating signs, leading to the accumulation of numerical errors. It is possible to use other approximate integration schemes if these cases arise during inference. An alternative way to alleviate this problem is to consider a sub-optimal form of factorised approximating distribution. An obvious choice is to use von Mises factors, as this results in tractable updates and requires simpler moment calculations. A von Mises field can also be motivated as a first-order approximation to a GvM field by requiring that the log approximating distribution is linear in sine and cosine terms, as shown in the Supplementary Material. In addition to inference, we can use the same variational framework for learning in cases where the mGvM we wish to approximate is a posterior formed from tractable likelihoods and priors, as in the cPCA model. To achieve this, we form the variational free-energy lower bound on the log-marginal likelihood as \begin{equation*} \log p(\vt{\psi}|\theta) \geq \xmcal{F}(q,\theta,\rho) = \expect{\log p(\vt{\psi},\vt{\arv}| \theta)}_{q(\vt{\arv}|\rho)} + \xmcal{H}(q), \end{equation*} where $\xmcal{H}(q)$ is the entropy of the approximating distribution $q(\vt{\arv}|\rho)$, $p(\vt{\arv}, \vt{\psi}| \theta)$ is the model joint distribution, $\xmcal{F}(q,\theta,\rho)$ is the variational free-energy, $\theta$ are the model parameters and $\rho$ represents the parameters of the approximating distribution. The same bound cannot be used directly for doubly-intractable mGvM models, such as the circular regression model, and extending it to that setting constitutes an area for further work. \section{Experimental results\label{sec:results}} To demonstrate approximate inference on the applications outlined in \cref{sec:applications} we present experiments on synthetic and real datasets. A comprehensive description and the data sets used in all experiments conducted are available at \url{http://tinyurl.com/mgvm-release}. Further experimental details are also provided in the Supplementary Material. \subsection{Comparison to other circular distributions} For illustrative purposes we qualitatively compared multivariate wrapped Gaussian and mvM approximations to a base mGvM, and mGvM approximations to these two distributions. The approximations were obtained by numerically minimising the KL divergence between the approximating distribution and the base distributions. These experiments were conducted in a two-dimensional setting in order to render the computation of the normalising constant of the mGvM and the mvM tractable by numerical integration. The resulting distributions are shown in \cref{fig:mgvm-comparison}. \begin{figure*} \caption{Circular distribution approximations: a base mGvM (a) and its optimal approximations using a multivariate wrapped Gaussian (mWG) (b) and the mvM (c).
The mGvM approximation to the mWG in (b) is presented in (d) and the mGvM approximation to the mvM in (c) is presented in (e). Neither the multivariate wrapped Gaussian nor the mvM can accommodate the asymmetries and the multiple modes of the mGvM; however, the mGvM is able to approximate the mWG high-probability regions and fully recover the mvM. Darker regions have higher probability than lighter regions.} \label{fig:mgvm-comparison} \end{figure*} In \cref{fig:mgvm-comparison}, the mvM and the multivariate wrapped Gaussian cannot capture the multimodality and asymmetry of the mGvM. Moreover, these distributions approximate the multiple modes by increasing their variance and assigning high probability to the low-probability region between the modes of the mGvM. On the other hand, when the mvM and the multivariate wrapped Gaussian are approximated by the mGvM, the mGvM approximates the high-probability zones of the wrapped Gaussian and its unimodality well, and fully recovers the mvM. We also compared the performance of Gibbs sampling and variational inference for a bivariate GvM. To compare the approximate inference procedures, we analysed the run time for each method and the error it produced in terms of the KL divergence between the true distribution and the approximations on a discretised grid. The Gibbs sampling procedure required a total of 3466 samples and \SI{3.1}{\s} to achieve the same level of error as the variational approximation achieved after \SI{0.02}{\s}. The variational approach was considerably more efficient than Gibbs sampling, and theory suggests this behaviour holds in higher dimensions, see~\citet{david_mackay_information_2003}. \subsection{Regression with the mGvM} In this section, we investigate the advantages of employing the mGvM regression model discussed in \cref{sec:regression} over two common approaches to handling circular data in machine learning contexts. The first approach is to ignore the circular nature of the data and fit a non-circular model. This approach is not infrequent, as it is reasonable in contexts where angles are constrained to a subset of the unit circle and there is no wrapping. A typical example of the motivation for such models is the use of a first-order Taylor approximation to the rate of change of an angle, as can be found in classical aircraft control applications. To represent this approach to modelling, we fit a one-dimensional GP (1D-GP) to the data sets. The second approach tries to address the circular behaviour by regressing the sine and cosine of the data. In this approach, the angle can be extracted by taking the arc tangent of the ratio between the sine and cosine components. While this approach partially addresses the underlying topology of the data, the uncertainty estimates of such a non-circular model can be poorly calibrated. Here, each data point is modelled as a two-dimensional vector containing its sine and cosine, using a two-dimensional GP (2D-GP). Five data sets were used in this evaluation:
a toy data set generated by wrapping a Mexican hat function around the unit circle; a dataset consisting of Uber ride requests in NYC in April 2014\footnote{\url{https://github.com/fivethirtyeight/uber-tlc-foil-response}}; the tide level predictions from the UK Hydrographic Office in 2016\footnote{\url{http://www.ukho.gov.uk/Easytide/easytide/SelectPort.aspx}} as a function of the latitude and longitude of a given port; the first side chain angle of aspartate as a function of backbone angles in proteins~\cite{harder_beyond_2010}; and yeast cell cycle phase as a function of gene expression~\cite{santos2015}. To assess how well the fitted models approximate the distribution of the data, a subset of the data points was kept for validation and the models were scored in terms of the log likelihood of the validation data set. To guarantee fairness in the comparison, the likelihood of the 2D-GP was projected back to the unit circle by marginalising the radial component of the model for each point. This converts the 2D-GP into a one-dimensional projected Gaussian distribution over angles. The results are summarised in \cref{tab:regression}. \makeatletter \newcolumntype{B}[3]{>{\boldmath\DC@{#1}{#2}{#3}}c<{\DC@end}} \makeatother \newcolumntype{d}{D{.}{.}{2.7}} \newcolumntype{b}{B{.}{.}{2.7}} \newcommand{\bcell}[1]{\multicolumn{1}{b}{#1}} \begin{table}[tb] \caption{Log-likelihood score for regression with the mGvM, 1D-GP and 2D-GP on validation data.} \label{tab:regression} \centering \begin{tabular}{lddd} \toprule Data set & \multicolumn{1}{c}{mGvM} & \multicolumn{1}{c}{1D-GP} & \multicolumn{1}{c}{2D-GP} \\ \midrule Toy & \bcell{2.02\E{4}} & -1.62\E{3} & 8.28\E{2} \\ Uber & \bcell{3.29\E{4}} & -1.49\E{3} & -2.83\E{2} \\ Tides & \bcell{1.25\E{4}} & -6.46\E{4} & -8.41\E{1} \\ Protein & \bcell{1.42\E{5}} & -3.34\E{5} & 1.28\E{5} \\ Yeast & \bcell{1.33\E{2}} & -1.46\E{2} & -1.65\E{1} \\ \bottomrule \end{tabular} \end{table} The results shown in \cref{tab:regression} indicate that the mGvM provides a better overall fit than the 1D-GP and the 2D-GP in all experiments. The 1D-GP approach performs poorly in every case studied, as it cannot account for the wrapping behaviour of circular data. The 2D-GP performs better than the 1D-GP; however, in the Uber, Tides and Yeast datasets its performance is substantially closer to that of the 1D-GP than to that of the mGvM. The toy dataset is examined in \cref{fig:1d_regression}, showing that the 2D-GP learns a different underlying function and cannot capture bimodality. \begin{figure} \caption{Regression on a toy data set using the mGvM (left) and 2D GP (right): data points are denoted by crosses, the true function by circles and predictions by solid dots.} \label{fig:1d_regression} \end{figure} \subsection{Dimensionality reduction} To demonstrate the dimensionality reduction application, we analysed two datasets: one motion capture dataset comprising marker positions placed on a subject's arm and captured through a low resolution camera, and another comprising a noisy simulation of a 4-DOF robot arm under the same motion capture conditions. We compared the model using point estimates for the matrices $\mat{A}$ and $\mat{B}$, a variational Bayes approach including ARD priors for $\mat{A}$ and $\mat{B}$, Probabilistic Principal Component Analysis (PPCA)~\cite{tipping_probabilistic_1999} and the Gaussian Process Latent Variable Model (GP-LVM)~\cite{lawrence2004gpml} using a squared exponential kernel and a linear kernel.
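The comparison below is scored by a denoising signal-to-noise ratio; for reference, a standard definition in decibels is sketched here (our illustrative sketch with synthetic inputs, not the evaluation script itself).
\begin{verbatim}
# A minimal sketch of a standard denoising signal-to-noise ratio in decibels.
import numpy as np

def snr_db(clean, denoised):
    """10 * log10(signal power / residual power)."""
    signal_power = np.mean(clean ** 2)
    residual_power = np.mean((clean - denoised) ** 2)
    return 10.0 * np.log10(signal_power / residual_power)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    clean = np.sin(np.linspace(0, 4 * np.pi, 200))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    print("SNR of the noisy signal: %.1f dB" % snr_db(clean, noisy))
\end{verbatim}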
The models using the mGvM require special attention to initialisation. To initialise the test, we used a greedy clustering algorithm to estimate the matrices $\mat{A}$ and $\mat{B}$. The variational Bayes model was initialised using the learned parameters of the point estimate model. The performance of each model was assessed by corrupting the original dataset with additive Gaussian noise of 2.5, 5 and 10 pixels, denoising it, and comparing the signal-to-noise ratio (SNR) on a test dataset. The best results after initialising the models from 3 different starting points are summarised in \cref{tab:mocap-results}, and additional experiments for a wider range of noise levels are available in the Supplementary Material. In \cref{tab:mocap-results}, the point estimate cPCA model performs best and is followed by its variational Bayes version for both datasets (the slightly poorer performance of the variational Bayes version is likely to be due to biases that can affect variational methods \cite{turner-and-sahani:2011a}). In the motion capture dataset, the latent angles are highly concentrated. Under these circumstances, the small-angle approximation for sine and cosine provides good results and the cPCA model degenerates into the PPCA model, as shown in the Supplementary Material. This behaviour is reflected in the proximity of the PPCA and cPCA signal-to-noise ratios in \cref{tab:mocap-results}. In the robot dataset, the latent angles are less concentrated. As a result, the behaviour of the PPCA and cPCA models is different, which explains the larger gap between the results obtained for these models. \makeatletter \newcolumntype{B}[3]{>{\boldmath\DC@{#1}{#2}{#3}}c<{\DC@end}} \makeatother \newcolumntype{d}{D{.}{.}{1.8}} \newcolumntype{b}{B{.}{.}{1.8}} \begin{table}[tb] \caption{Signal-to-noise ratio (dB) of the learned latent structure after denoising signals corrupted by Gaussian noise.} \label{tab:mocap-results} \centering \begin{tabular}{lcccccc} \toprule \multirow{2}{*}{Model} & \multicolumn{3}{c}{Motion Capture} & \multicolumn{3}{c}{Robot}\\ \cmidrule(lr){2-4}\cmidrule(lr){5-7} & \multicolumn{1}{c}{2.5} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{2.5} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{10} \\ \midrule cPCA-Point & \textbf{29.6} & \textbf{23.5} & \textbf{17.6} & \textbf{33.5} & \textbf{30.0} & \textbf{24.9} \\ cPCA-VB & 24.6 & 21.9 & 17.6 & 33.2 & 29.8 & 24.8 \\ PPCA & 23.6 & 20.9 & 17.2 & 22.3 & 21.8 & 20.5 \\ GPLVM-SE & 8.6 & 8.5 & 8.2 & 21.8 & 15.7 & 15.2 \\ GPLVM-L & 11.0 & 7.5 & 8.1 & 24.0 & 16.6 & 15.9 \\ \bottomrule \end{tabular} \end{table} \section{Conclusions} In this paper we have introduced the multivariate Generalised von Mises, a new circular distribution with novel applications in circular regression and circular latent variable modelling, in a first attempt to close the gap between the circular statistics and machine learning communities. We provided a brief review of the construction of circular distributions, including the connections between the Gaussian distribution and the multivariate Generalised von Mises. We also provided a scalable way to perform inference on the mGvM model through the variational free energy framework and demonstrated the advantages of the mGvM over GP and mvM baselines through a series of experiments.
pagebreak \appendix \onecolumn \section{Supplementary Material for \\The Multivariate Generalised\ von Mises\\ Distribution: Inference and application} \subsection{Diagramatic view of circular distributions genesis\label{sec:circgenesis}} The discussion on circular distributions genesis on the main paper can be diagramatically sumarised as \cref{fig:circDistSummary}. \begin{figure} \caption{Graphical summary of the genesis of circular distributions through transformations of Euclidean distributions.} \label{fig:circDistSummary} \end{figure} \subsection{Derivation of the Multivariate Generalised von Mises\label{sec:genesis}} The Multivariate Generalised von Mises distribution can be derived by applying the polar transformation to a $2D$-dimensional multivariate Gaussian distribution and conditioning $D$-pairs to the unit circle. Since order in which we conduct these two operations is interchangeable, we will first condition its pairs to the unit circle and then apply the polar transformation. More precisely, we assume that $\vt{x} \in \mathbb{R}^{2D}$, such that $x_{d}^2 + x_{D + d}^2 = 1$ for $d=1, \ldots, D$. In this case, the polar transformation of $\vt{x}$ allows us to write $x_{d} = \cos(\phi_{d})$, $x_{D + d} = \sin(\phi_{d})$. Furthermore, without loss of generality, the mean $\xmcal{vM}ean$ of a multivariate Gaussian will also be constrained to the unit circle and can be parametrised in terms of angles $\vt{\nu}$ so that $\mu_{d}=\cos(\nu_n)$, $\mu_{D + d}=\sin(\nu_{D + d})$. Now let and $\mat{W} = \vt{\Sigma}^{-1}$ be the inverse covariance matrix of a multivariate Gaussian. Using the parametrisation of $\vt{x}$ and $\xmcal{vM}ean$ in terms of $\vt{\arv}$ and $\vt{\nu}$, we can expand the quadratic in the exponential of the multivariate Gaussian into \begin{align} \begin{bmatrix} \cos(\vt{\arv})-\cos(\vt{\nu})\\ \sin(\vt{\arv})-\sin(\vt{\nu}) \end{bmatrix}^{\top} &\mat{W} \begin{bmatrix} \cos(\vt{\arv})-\cos(\vt{\nu}) \\ \sin(\vt{\arv})-\sin(\vt{\nu}) \end{bmatrix} \label{eq:expanded_prod1} \\ &= \sum_{d=1}^{D} w_{d,d}(\cos(\phi_{d})-\cos(\nu_{d}))^2 + w_{D+n,D+d}(\sin(\phi_{d})-\sin(\nu_{d}))^2 \notag \\ &+ \sum_{d=1}^{D}\sum_{Ix{j}=1}^{d-1} w_{d, Ix{j}}(\cos(\phi_{d})-\cos(\nu_{d}))(\cos(\phi_{Ix{j}})-\cos(\nu_{Ix{j}})) \notag \\ &+ \sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{d, D+Ix{j}}(\cos(\phi_{d})-\cos(\nu_{d}))(\sin(\phi_{Ix{j}})-\sin(\nu_{Ix{j}})) \notag \\ &+ \sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{D+d,Ix{j}}(\sin(\phi_{d})-\sin(\nu_{d}))(\cos(\phi_{Ix{j}})-\cos(\nu_{Ix{j}})) \notag \\ &+ \sum_{d=1}^{D}\sum_{Ix{j}=1}^{d-1} w_{D+d,D+Ix{j}}(\sin(\phi_{d})-\sin(\nu_{d}))(\sin(\phi_{Ix{j}})-\sin(\nu_{Ix{j}})) \notag. 
\end{align} The sums on the RHS in Equation \eqref{eq:expanded_prod1} can be expanded into \begin{align} &\sum_{d=1}^{D} w_{d,d}(\cos(\phi_{d})^2 - 2\cos(\phi_{d})\cos(\nu_{d}) -\cos(\nu_{d})^2) \label{eq:expanded_prod2} \\ &+ \sum_{d=1}^{D}w_{D+n,D+d}(\sin(\phi_{d})^2 - 2\sin(\phi_{d})\sin(\nu_{d}) -\sin(\nu_{d})^2) \notag \\ &+2\sum_{d=1}^{D}\sum_{Ix{j}=1}^{d-1} w_{d, Ix{j}}(\cos(\phi_{d})\cos(\phi_{Ix{j}})-\cos(\nu_{d})\cos(\phi_{Ix{j}}) -\cos(\phi_{d})\cos(\nu_{Ix{j}})+\cos(\nu_{d})\cos(\nu_{Ix{j}})) \notag \\ &+2\sum_{d=1}^{D}\sum_{Ix{j}=1}^{d-1} w_{D+d, D+Ix{j}}(\sin(\phi_{d})\sin(\phi_{Ix{j}})-\sin(\nu_{d})\cos(\phi_{Ix{j}}) -\sin(\phi_{d})\sin(\nu_{Ix{j}})+\sin(\nu_{d})\cos(\nu_{Ix{j}})) \notag \\ &+ \sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{d,D+Ix{j}}(\cos(\phi_{d})\sin(\phi_{Ix{j}})-\cos(\nu_{d})\sin(\phi_{Ix{j}})-\cos(\phi_{d})\sin(\nu_{Ix{j}})+\cos(\nu_{d})\sin(\nu_{Ix{j}})) \notag \\ &+ \sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{D+d, Ix{j}}(\sin(\phi_{d})\cos(\phi_{Ix{j}})-\sin(\nu_{d})\cos(\phi_{Ix{j}})-\sin(\phi_{d})\cos(\nu_{Ix{j}})+\sin(\nu_{d})\cos(\nu_{Ix{j}})) \notag. \end{align} By aggregating all terms that are independent of $\phi$ and rearranging terms, Equation \eqref{eq:expanded_prod2} becomes \begin{align} &\sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{d, Ix{j}}\cos(\phi_{d})\cos(\phi_{Ix{j}}) + 2 w_{d,D+Ix{j}}\cos(\phi_{d})\sin(\phi_{Ix{j}}) + w_{D+d,D+Ix{j}}\sin(\phi_{d})\sin(\phi_{Ix{j}}) \label{eq:aggregated} \\ &-2\sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{d,Ix{j}}\cos(\phi_{d})\cos(\nu_{Ix{j}}) + w_{d,D+Ix{j}}\cos(\phi_{d})\sin(\nu_{Ix{j}}) \notag \\ &-2\sum_{d=1}^{D}\sum_{Ix{j}=1}^{D} w_{D+d,D+Ix{j}}\sin(\phi_{d})\sin(\nu_{Ix{j}}) + w_{D+d,Ix{j}}\sin(\phi_{d})\cos(\nu_{Ix{j}}) \notag \end{align} These sums can be written in matrix notation as \begin{align} \vt{\kappa}_{c}^{\top} \cos(\vt{\arv}-\vt{\nu}) + \vt{\kappa}_{s}^{\top} \sin(\vt{\arv}-\vt{\nu}) -{f}rac{1}{2} \begin{bmatrix} \cos(\vt{\arv}) \\ \sin(\vt{\arv}) \end{bmatrix}^{\top} \begin{bmatrix} \mat{W}^{cc} & \mat{W}^{cs}\\ (\mat{W}^{cs})^\top & \mat{W}^{ss} \end{bmatrix} \begin{bmatrix} \cos(\vt{\arv}) \\ \sin(\vt{\arv}) \end{bmatrix} \label{eq:constrained_quadratic} \end{align} where $\kappa_{d}=\text{abs}\{z_{d}\}$ and $\nu_{d}=\text{arg}\{z_{d}\}$ with the real and imaginary parts of $z_{d}$ such that \begin{equation*} \Re\{z_{d}\} = -2\sum_{Ix{j}=1}^{D} w_{d, Ix{j}}\cos(\phi_{d})\cos(\nu_{Ix{j}}) + w_{d,D+Ix{j}}\cos(\phi_{d})\sin(\nu_{Ix{j}}) \end{equation*} and \begin{equation*} \Im\{z_{d}\} = -2\sum_{Ix{j}=1}^{D} w_{D+d, D+Ix{j}}\sin(\phi_{d})\sin(\nu_{Ix{j}}) + w_{D+d, Ix{j}}\sin(\phi_{d})\cos(\nu_{Ix{j}}). \end{equation*} Therefore, Equation \eqref{eq:constrained_quadratic} imples that a multivariate Gaussian distribution under radial transformation and conditionning to the unit circle yields the log density \begin{align} \log p(\vt{\arv}) &= \text{const.} + \vt{\kappa}^{\top} \cos(\vt{\arv}-\vt{\nu}) \notag \\ &-{f}rac{1}{2} \Big( \cos(\vt{\arv})^{\top}\mat{W}^{cc}\cos(\vt{\arv}) + 2 \cos(\vt{\arv})^{\top}\mat{W}^{cs}\sin(\vt{\arv}) + \sin(\vt{\arv})^{\top}\mat{W}^{ss}\sin(\vt{\arv}) \Big) \label{eq:log_mGvM} \end{align} which is the log density of a multivariate Generalised von Mises distribution in overparametrised form. 
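Before moving to the minimal parametrisation, the algebra above can be verified numerically; the following minimal sketch (a self-contained check of our own, with an arbitrary positive definite precision matrix) confirms that the unit-circle-constrained Gaussian exponent and the overparametrised mGvM exponent of \cref{eq:log_mGvM} differ only by an additive constant.
\begin{verbatim}
# A minimal numerical check that the constrained-Gaussian quadratic reduces,
# up to a constant, to  kappa~^T cos(phi - nu~) - 1/2 t(phi)^T W t(phi).
import numpy as np

def t(phi):
    return np.concatenate([np.cos(phi), np.sin(phi)])

rng = np.random.default_rng(4)
D = 3
A = rng.standard_normal((2 * D, 2 * D))
W = A @ A.T + 2 * D * np.eye(2 * D)     # arbitrary positive definite precision
nu = rng.uniform(0, 2 * np.pi, size=D)  # mean angles of the Gaussian
m = t(nu)

# Linear term m^T W t(phi) rewritten per coordinate with phasor arithmetic:
lin = W @ m                              # coefficients of cos(phi) and sin(phi)
z = lin[:D] + 1j * lin[D:]
kappa_tilde, nu_tilde = np.abs(z), np.angle(z)

def lhs(phi):                            # constrained Gaussian exponent
    return -0.5 * (t(phi) - m) @ W @ (t(phi) - m)

def rhs(phi):                            # overparametrised mGvM exponent
    return kappa_tilde @ np.cos(phi - nu_tilde) - 0.5 * t(phi) @ W @ t(phi)

diffs = [lhs(p) - rhs(p) for p in rng.uniform(0, 2 * np.pi, size=(5, D))]
print("constant offset across random angles:", np.round(diffs, 10))
\end{verbatim}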
To obtain the minimal number of parameters for the mGvM, Equation \eqref{eq:aggregated} can be further simplified using trigonometric identities to yield the minimal form of the mGvM distribution \begin{align} \log p(\vt{\arv}) &= \text{const.} + \vt{\kappa}_{1}^{\top}\cos(\vt{\arv} - \vt{\nu}_{1}) + \vt{\kappa}_{2}^{\top}\cos(2 (\vt{\arv} - \vt{\nu}_{2})) \notag \\ & + \frac{1}{2} \sum_{d=1}^{D}\sum_{j=1}^{D} u_{d,j} \cos(\phi_{d}-\phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \label{minimal-mgvm} \end{align} where $\vt{\kappa}_{1}=\vt{\kappa}$ and $\vt{\nu}_{1}=\vt{\nu}$ are given as before, while $\kappa_{2,d}=\text{abs}\{z_{d}\}$ and $\nu_{2,d}=0.5\,\text{arg}\{z_{d}\}$ with \begin{equation*} z_{d} = \frac{1}{4}(w_{d, d} - w_{D+d, D+d}) + i\frac{1}{2}(w_{d, D+d}) \end{equation*} and the cross terms given by $u_{d,j}=\text{abs}\{z^{U}_{d,j}\}$, $\alpha_{d,j}=0.5\,\text{arg}\{z^{U}_{d,j}\}$, $v_{d,j}=\text{abs}\{z^{V}_{d,j}\}$, $\beta_{d,j}=0.5\,\text{arg}\{z^{V}_{d,j}\}$ where \begin{align*} z^{U}_{d,j} &= (w_{d,j} + w_{D+d, D+j}) + i (w_{j,D+d} - w_{d, D+j})\\ z^{V}_{d,j} &= (w_{d,j} - w_{D+d, D+j}) + i (w_{j,D+d} + w_{d, D+j}). \end{align*} A final point to make about the mGvM derivation is related to the distributions it generalises. \citet{gatto_generalized_2007} discussed that the GvM could be constructed by conditioning a 2D Gaussian to the unit circle, but did not consider multivariate generalisations. \citet{mardia_multivariate_2008} constructed the multivariate mvM, which we show is a submodel of the mGvM, but did not relate it to a multivariate Gaussian nor to kernels. \subsection{Informal argument for mGvM being the maximum entropy distribution on the hyper-torus\label{sec:maxent}} The maximum entropy distribution $p$ subject to specified covariance and first and second moments, i.e.\ the solution of the problem \begin{align} \text{minimize}_{p} \quad & \int p(\vt{x}) \log p(\vt{x}) \,\text{d} \vt{x} \\ \text{subject to} \quad & \int x_{d}^{m} p(\vt{x}) \,\text{d} \vt{x} = \alpha_{d,m}, \quad d = 1, \ldots, 2D; \; m = 0, \ldots, 2 \end{align} is the multivariate Gaussian distribution. If we further add to the maximum entropy problem the constraints that the distribution must be supported on the unit circles, the problem becomes \begin{align} \text{minimize}_{p} \quad & \int p(\vt{x}) \log p(\vt{x}) \,\text{d} \vt{x} \\ \text{subject to} \quad & \int x_{d}^{m} p(\vt{x}) \,\text{d} \vt{x} = \alpha_{d,m}, \quad d = 1, \ldots, 2D; \; m = 0, \ldots, 2 \\ & x_{d}^{2} + x_{d+D}^{2} = 1, \quad d = 1, \ldots, D; \end{align} the solution of which is a multivariate Gaussian constrained to the unit hyper-torus, hence the mGvM distribution. \subsection{Conditionals of the mGvM: derivation and their relationship to inference algorithms\label{sec:conditionals}} The conditionals of the mGvM can be found by expanding the terms containing cosines of the difference and sum of two circular variables using sum-to-product relations.
More precisely, if we partition the indexes of mGvM distributed circular vector $\vt{\arv}$ into two disjoint sets $\xmcal{A}$ and $\xmcal{B}$, \begin{align} p(\vt{\arv}_{\xmcal{A}}|\vt{\arv}_{\xmcal{B}}) &propto \exp \Big\{ \vt{\kappa}_{1,\xmcal{A}}^{\top}\cos(\vt{\arv}_{\xmcal{A}} - \vt{\nu}_{1}) \vt{\kappa}_{1,\xmcal{B}}^{\top}\cos(\vt{\arv}_{\xmcal{B}} - \vt{\nu}_{1}) \notag \\ & + \vt{\kappa}_{2,\xmcal{A}}^{\top}\cos(2 (\vt{\arv}_{\xmcal{A}} - \vt{\nu}_{2})) + \vt{\kappa}_{2,\xmcal{B}}^{\top}\cos(2 (\vt{\arv}_{\xmcal{B}} - \vt{\nu}_{2})) \notag \\ & + {f}rac{1}{2} \sum_{d\in\xmcal{A}}\sum_{j\in\xmcal{A}} u_{d,j} \cos(\phi_{d}-\phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \notag \\ & + {f}rac{1}{2} \sum_{d\in\xmcal{A}}^{D}\sum_{j\in\xmcal{B}}^{D} u_{d,j} \cos(\phi_{d}- \phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \notag \\ & + {f}rac{1}{2} \sum_{d\in\xmcal{B}}^{D}\sum_{j\in\xmcal{A}}^{D} u_{d,j} \cos(\phi_{d}- \phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \notag \\ & + {f}rac{1}{2} \sum_{d\in\xmcal{B}}^{D}\sum_{j\in\xmcal{B}}^{D} u_{d,j} \cos(\phi_{d}- \phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \Big\}. \end{align} If we consider that the variables whose indexes are in $\xmcal{B}$ are constant and note that the cross terms between variables in $\xmcal{A}$ and $\xmcal{B}$ have the functional form of $\kappa\cos(\phi_{\xmcal{A}} - \nu)$, we can rewrite the conditional using phasor arithmetic as \begin{align} p(\vt{\arv}_{\xmcal{A}}|\vt{\arv}_{\xmcal{B}}) &propto \exp \Big\{ \tilde{\vt{\kappa}}_{1}^{\top}\cos(\vt{\arv}_{\xmcal{A}} - \tilde{\vt{\nu}_{1}}) + \vt{\kappa}_{2,\xmcal{A}}^{\top}\cos(2 (\vt{\arv}_{\xmcal{A}} - \vt{\nu}_{2})) \notag \\ & + {f}rac{1}{2} \sum_{d\in\xmcal{A}}\sum_{j\in\xmcal{A}} u_{d,j} \cos(\phi_{d}-\phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \end{align} where $\tilde{\kappa}_{1,d}=\text{abs}(z_d)$, $\tilde{\vt{\nu}}_{1,d}=\text{arg}(z_d)$ and \begin{align*} \Re(z_d) &= \kappa_{1,d}\cos(\nu_{1,d}) + \sum_{j=1} u_{d,j}\cos(\phi_{j} - \alpha_{d,j}) - v_{d,j}\cos(\phi_{j} - \beta_{d,j})\\ \Im(z_d) &= \kappa_{1,d}\sin(\nu_{1,d}) + \sum_{j=1} u_{d,j}\sin(\phi_{j} - \alpha_{d,j}) - v_{d,j}\sin(\phi_{j} - \beta_{d,j}) \end{align*} with $\Re(z)$ denoting the real part of $z$ and $\Im(z)$ denoting the imaginary part of $z$. In the particular case of the unidimensional conditional, the covariance term \begin{equation*} + {f}rac{1}{2} \sum_{d\in\xmcal{A}}\sum_{j\in\xmcal{A}} u_{d,j} \cos(\phi_{d}-\phi_{j} - \alpha_{d,j}) + v_{d,j} \cos(\phi_{d} + \phi_{j} - \beta_{d,j}) \end{equation*} will vanish as the diagonals of the parameter matrices $\mat{U}$ and $\mat{V}$ are zero. \subsubsection{Unidimensional conditionals and Gibbs sampling} When the set $\xmcal{A}$ contains a single index, the expressions in the previous section define how to obtain all unidimensional conditionals of the mGvM. 
These one-dimensional conditionals are the same as those mentioned in the main paper for Gibbs sampling, for which the parametric dependencies can be written explicitly as \begin{align} \tilde{\kappa}_{1,d} = \text{abs}(z_d), && \tilde{\nu}_{1,d} = \text{arg}(z_d) \end{align} where $z_{d}$ is given by \begin{align*} \Re(z_d) &= \kappa_{1,d}\cos(\nu_{1,d}) + \sum_{j=1}^{D} u_{d,j}\cos(\phi_{j} - \alpha_{d,j}) - v_{d,j}\cos(\phi_{j} - \beta_{d,j})\\ \Im(z_d) &= \kappa_{1,d}\sin(\nu_{1,d}) + \sum_{j=1}^{D} u_{d,j}\sin(\phi_{j} - \alpha_{d,j}) - v_{d,j}\sin(\phi_{j} - \beta_{d,j}) \end{align*} with $\Re(z)$ denoting the real part of $z$ and $\Im(z)$ denoting the imaginary part of $z$. \subsubsection{Unidimensional conditionals and mean field variational approximation} To find the approximation $q(\vt{\arv}|\rho)$ to a true posterior $p(\vt{\arv}|\vt{\psi}, \theta)$ that minimises the Kullback-Leibler divergence from $q$ to $p$ under the variational free energy framework~\cite{jordan1999}, we can equivalently maximise the variational free energy $\xmcal{F}(q,\theta,\rho)$ by noting \begin{align*} \text{KL}(q(\vt{\arv})||p(\vt{\arv}|\vt{\psi})) &= \int q(\vt{\arv}|\rho)\log \frac{q(\vt{\arv}|\rho)}{p(\vt{\arv}|\vt{\psi},\theta)}\,\text{d} \vt{\arv} \\ &=-\int q(\vt{\arv}|\rho)\log p(\vt{\arv}|\vt{\psi},\theta)\,\text{d} \vt{\arv} +\int q(\vt{\arv}|\rho)\log q(\vt{\arv}|\rho) \,\text{d} \vt{\arv}\\ &=-\langle\log p(\vt{\arv},\vt{\psi}|\theta)\rangle_{q(\vt{\arv}|\rho)} +\log p(\vt{\psi}|\theta) - \xmcal{H}(q) \\ &= \log p(\vt{\psi}|\theta) - \xmcal{F}(q, \theta, \rho). \end{align*} By assuming a fully factored form for the distribution $q$, i.e., $q(\vt{\arv})=\prod_{d=1}^{D} q_{d}(\phi_{d})$, we can use calculus of variations to obtain analytically the functional form of the distributions $q_d$, that is \begin{equation} \frac{\delta}{\delta q}\left[ \xmcal{F}(q,\theta,\rho) - \lambda \left(\int q(\vt{\arv})\,\text{d} \vt{\arv} - 1 \right)\right] = 0 \end{equation} which implies \begin{align} \frac{\delta}{\delta q_{d}} \left[ \expect{\log p(\vt{\arv},\vt{\psi})}_{\prod_{j=1}^{D}q_{j}(\phi_{j})} -\int \prod_{j=1}^{D} q_{j}(\phi_{j}) \left(\sum_{j=1}^{D}\log q_{j}(\phi_{j})\right) \text{d} \vt{\arv} - \lambda \sum_{j=1}^{D} \int q_{j}(\phi_{j}) \,\text{d}\phi_{j} \right] = 0 \end{align} and leads to the factor approximation \begin{equation} q_{d}(\phi_{d}) = \frac{1}{\exp(\lambda+1)}\exp \left\{ \expect{\log p(\vt{\arv},\vt{\psi})}_{\prod_{j \neq d}q_{j}(\phi_{j})} \right\} \label{eq:meanFieldEquation} \end{equation} resulting in the set of distributions known as the mean-field approximation.
This equation, when applied to the mGvM, yields GvM distributions \begin{align} q(\phi_{d}|\vt{\arv}_{\neq d}) = \xmcal{GvM}(\phi_{d}; \bar{\kappa}_{1,d}(\vt{\kappa},\vt{\nu}, \langle e^{i\vt{\arv}_{\neq d}}\rangle_{q_{\neq d}}),\kappa_{2,d}, \bar{\nu}_{1,d}(\vt{\kappa},\vt{\nu}, \langle e^{i\vt{\arv}_{\neq d}}\rangle_{q_{\neq d}}), \nu_{2,d}) \end{align} where $\langle e^{i n \vt{\arv}}\rangle= \langle \cos(n \vt{\arv})\rangle + i\langle \sin(n \vt{\arv})\rangle $, $\bar{\kappa}_{1,d} = \text{abs}(z_d)$ and $\bar{\nu}_{1,d} = \text{arg}(z_d)$, with $z_{d}$ given in overparametrised form by \begin{align*} \Re(z_d) &= \kappa_{d}\cos(\nu_{1,d}) -\frac{1}{2} \left\langle \begin{bmatrix} \cos(\vt{\arv}_{\neq d}) \\ \sin(\vt{\arv}_{\neq d}) \end{bmatrix}^{\top} \begin{bmatrix} \mat{W}^{cc}_{\neq d, d} & \mat{W}^{cs}_{\neq d, d}\\ (\mat{W}^{cs})_{\neq d, d}^\top & \mat{W}^{ss}_{\neq d, d} \end{bmatrix} \begin{bmatrix} \cos(\vt{\arv}_d) \\ \sin(\vt{\arv}_d) \end{bmatrix} \right\rangle_{q_{\neq \vt{\arv}_d}} \\ \Im(z_d) &= \kappa_{d}\sin(\nu_{1,d}) -\frac{1}{2} \left\langle \begin{bmatrix} \cos(\vt{\arv}_{\neq d}) \\ \sin(\vt{\arv}_{\neq d}) \end{bmatrix}^{\top} \begin{bmatrix} \mat{W}^{cc}_{\neq d, d} & \mat{W}^{cs}_{\neq d, d}\\ (\mat{W}^{cs})_{\neq d, d}^\top & \mat{W}^{ss}_{\neq d, d} \end{bmatrix} \begin{bmatrix} \cos(\vt{\arv}_d) \\ \sin(\vt{\arv}_d) \end{bmatrix} \right\rangle_{q_{\neq \vt{\arv}_d}} \end{align*} with $\Re(z)$ denoting the real part of $z$ and $\Im(z)$ denoting the imaginary part of $z$. \subsection{Higher order mGvM\label{sec:higher}} As with the higher order GvM, the mGvM can also be expanded to include $T$ cosine harmonics if the Gaussian genesis is cast aside. In this case, a mGvM of order $T$ can be defined as \begin{align} \xmcal{mGvM}_{T}(\vt{\arv}; \vt{\nu}_{1:T}, \vt{\kappa}_{1:T}, \mat{U}, \mat{V}, \mat{\alpha}, \mat{\beta}) &\propto \exp \Big\{ \sum_{t=1}^{T} \vt{\kappa}_{t}^{\top}\cos(t (\vt{\arv} - \vt{\nu}_{t})) \notag \\ & + \frac{1}{2} \sum_{i=1}^{D}\sum_{j=1}^{D} u_{i,j} \cos(\phi_{i}-\phi_{j} - \alpha_{i,j}) + v_{i,j} \cos(\phi_{i}+\phi_{j} - \beta_{i,j}) \Big\} \label{eq:mgvm_2} \end{align} which is a distribution whose conditionals allow up to $T$ modes, but bears the same correlation structure as the `standard' order-2 mGvM. \subsection{Assumptions over the precision matrix of the Gaussian that lead to a mvM} In this section the mvM is derived by conditioning a 4-dimensional multivariate Gaussian, in order to highlight the assumptions made regarding the precision matrix of the multivariate Gaussian.
We take the 4D Gaussian to be zero mean without loss of generality and, after applying the polar variable transformation and constraining the radial components to unity we obtain the distribution \begin{align} p(phi_1, phi_2) &propto \exp \left\{ -{f}rac{1}{2} \begin{bmatrix} \cos(phi_1)\\ \cos(phi_2)\\ \sin(phi_1)\\ \sin(phi_2) \end{bmatrix}^{\top} \begin{bmatrix} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{1,2} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{1,3} & a_{2,3} & a_{3,3} & a_{3,4} \\ a_{1,4} & a_{2,4} & a_{3,4} & a_{4,4} \\ \end{bmatrix} \begin{bmatrix} \cos(phi_1)\\ \cos(phi_2)\\ \sin(phi_1)\\ \sin(phi_2) \end{bmatrix} \right\} \end{align} which can be expanded into \begin{align} p(phi_1, phi_2) &propto \exp \Big\{ - {f}rac{1}{2} ( a_{1,1} \cos(phi_1)^2 + a_{2,2} \cos(phi_2)^2 + a_{3,3} \sin(phi_1)^2 + a_{4,4} \sin(phi_2)^2 + \notag \\ & 2 a_{1,2} \cos(phi_1) \cos(phi_2) + 2 a_{1,3} \cos(phi_1) \sin(phi_1) + 2 a_{1,4} \cos(phi_1) \sin(phi_2) + \notag \\ & 2 a_{2,3} \cos(phi_2) \sin(phi_1) + 2 a_{2,4} \cos(phi_2) \sin(phi_2) + 2 a_{3,4} \sin(phi_1) \sin(phi_2) ) \Big\}. \notag \end{align} Using the fundamental identity and double angle formulas, we can rewrite the last Equation as \begin{align} p(phi_1, phi_2) &propto \exp \Big\{ - {f}rac{1}{2} ( a_{1,1} \cos(phi_1)^2 + a_{2,2} \cos(phi_2)^2 + a_{3,3} (1 - \cos(phi_1)^2) + a_{4,4} (1 - \cos(phi_2)^2) + \notag \\% Fundamental identity & 2 a_{1,2} \cos(phi_1) \cos(phi_2) + a_{1,3} \sin(2 phi_1) + 2 a_{1,4} \cos(phi_1) \sin(phi_2) + 2 a_{2,3} \cos(phi_2) \sin(phi_1) + \notag \\ & a_{2,4} \sin(2 phi_2) + 2 a_{3,4} \sin(phi_1) \sin(phi_2)) ) \Big\} \notag \end{align} Further simplifications arise from \begin{align} p(phi_1, phi_2) &propto \exp \Big\{ - {f}rac{1}{2} ( (a_{1,1} - a_{3,3}) \cos(2 phi_1 - 2 \nu_1) + (a_{2,2} - a_{4,4}) \cos(2 phi_2 - 2 \nu_2) + \notag \\% Double angle formula & 2 a_{1,2} \cos(phi_1) \cos(phi_2) + 2 a_{1,4} \cos(phi_1) \sin(phi_2) + 2 a_{2,3} \cos(phi_2) \sin(phi_1) + \notag \\ & 2 a_{3,4} \sin(phi_1) \sin(phi_2) + a_{1,3} \sin(2 phi_1) + a_{2,4} \sin(2 phi_2) ) \Big\}. \notag \end{align} The product of sine and cosines can be also translated using product-to-sum formulas \begin{align} p(phi_1, phi_2) &propto \exp \Big\{ - {f}rac{1}{2} ( (a_{1,1} - a_{3,3}) \cos(2 phi_1 - 2 \nu_1) + (a_{2,2} - a_{4,4}) \cos(2 phi_2 - 2 \nu_2) + \notag \\ & a_{1,2} \cos(phi_1 - phi_2) + a_{1,2} \cos(phi_1 + phi_2) + a_{1,4} \sin(phi_1 + phi_2) - a_{1,4} \sin(phi_1 - phi_2) + \notag \\% Product to sum & a_{2,3} \sin(phi_1 + phi_2) + a_{2,3} \sin(phi_1 - phi_2) + a_{3,4} \cos(phi_1 - phi_2) - a_{3,4} \cos(phi_1 + phi_2) + \notag \\% Product to sum & a_{1,3} \sin(2 phi_1) + a_{2,4} \sin(2 phi_2)) \Big\} \notag \end{align} grouping similar terms \begin{align} p(phi_1, phi_2) &propto \exp \Big\{ - {f}rac{1}{2} ( (a_{1,1} - a_{3,3}) \cos(2 phi_1 - 2 \nu_1) + (a_{2,2} - a_{4,4}) \cos(2 phi_2 - 2 \nu_2) + \notag \\ & a_{1,3} \sin(2 phi_1) + a_{2,4} \sin(2 phi_2) + \notag \\ & (a_{1,2} + a_{3,4}) \cos(phi_1 - phi_2) + (a_{1,2} - a_{3,4}) \cos(phi_1 + phi_2) + \notag \\ & (a_{2,3} - a_{1,4}) \sin(phi_1 - phi_2) + (a_{1,4} + a_{2,3}) \sin(phi_1 + phi_2)) \Big\} \notag \end{align} Therefore, we can conclude that since the mvM has only the term $\sin(phi_1 - phi_2)$ from the equation above that $a_{1,1}=a_{3,3}$, $a_{2,2}=a_{4,4}$, $a_{1,3}=a_{2,4}=a_{1,2}=a_{3,4}=0$ and $a_{1,4}=-a_{3,2}$. 
This leads to the precision matrix having the sparsity pattern \begin{align} \begin{bmatrix} a_{1,1} & 0 & 0 & a_{1,4} \\ 0 & a_{2,2} & -a_{1,4} & 0 \\ 0 & -a_{1,4} & a_{1,1} & 0 \\ a_{1,4} & 0 & 0 & a_{2,2} \\ \end{bmatrix}. \notag \end{align} \subsection{Dimensionality reduction with the mGvM and Probabilistic Principal Component Analysis\label{sec:CPCA}} In this section we discuss in greater detail dimensionality reduction with the Multivariate Generalised von Mises and its relationship to Probabilistic Principal Component Analysis (PPCA) from \cite{tipping_probabilistic_1999}. The PPCA model is defined as \begin{equation} \begin{aligned} p(\vt{x}) &= \xmcal{N}(\vt{x}; 0, I)\\ p(\vt{\yy}|\vt{x}) &= \xmcal{N}(\vt{\yy}; \mat{W}\vt{x}, \sigma^{2} I) \end{aligned} \end{equation} where $\mat{W}$ is a matrix that encodes the linear mapping between hidden components $\vt{x} \in \mathbb{R}^{D}$ and data $\vt{\yy} \in \mathbb{R}^{M}$, with $M > D$. If we impose that each of the latent components $x_{d}$ is sinusoidal and may be parametrised by a hidden angle $\phi_{d}$ plus a phase shift $\varphi_{d}$, we obtain the model \begin{equation} \begin{aligned} p(\phi_{d}) &= \xmcal{GvM}(\phi; \kappa_{1, d}, \kappa_{2, d}, \nu_{1, d}, \nu_{2, d})\\ p(x_{d}|\phi_{d}) &= \delta(x_{d} - \sin(\phi_{d} + \varphi_{d}))\\ p(\vt{\yy}|\vt{x}) &= \xmcal{N}(\vt{\yy}; \mat{W}\vt{x}, \sigma^{2} I) \end{aligned} \end{equation} To obtain the relation directly between the data and the hidden angles, we integrate out the latent components $\vt{x}$, \begin{equation} p(\vt{\yy}|\vt{\arv}) = \int \delta(\vt{x} - \sin(\vt{\arv} + \vec{\varphi})) \xmcal{N}(\vt{\yy}; \mat{W}\vt{x}, \sigma^{2} I) \,\text{d}\vt{x} \end{equation} which results in the model used in the mGvM dimensionality reduction application. Alternatively, it is also possible to show that the model arising from the mGvM dimensionality reduction application collapses to the PPCA model in the limit of mean angles $\vec{\nu} \rightarrow 0$ and high concentration parameters. In this regime, the small angle approximation \begin{equation} \sin\phi \approx \phi, \quad \cos\phi \approx 1 - \phi^{2}/2 \end{equation} is valid and the Generalised von Mises priors simplify to \begin{equation} \begin{aligned} p(\phi) &\propto \exp\left\{ \kappa_{1}\cos(\phi - \nu_{1}) + \kappa_{2}\cos(2(\phi - \nu_{2})) \right\}\\ &\propto \exp\left\{ -\kappa_{1}\cos(\nu_{2})\phi^{2} + (\kappa_{1}\sin(\nu_{1}) + 2 \kappa_{2}\sin(2\nu_{2}))\phi \right\}\\ &\propto \exp\left\{ -\kappa_{1}\cos(\nu_{2}) \left[ \phi - \frac{\kappa_{1}\sin(\nu_{1}) + 2 \kappa_{2}\sin(2\nu_{2})}{2\kappa_{1}\cos(\nu_{2})} \right]^{2} \right\} \end{aligned} \end{equation} which is proportional to a Gaussian distribution and shows that, in the small angle regime, the coefficient matrix $\mat{A}$ is a good approximation for $\mat{W}$ and the model collapses to PPCA. Another connection between dimensionality reduction with the mGvM and PCA may be established geometrically. While PCA describes the data in terms of hidden hyperplanes, the lower dimensional description of the data with the mGvM occurs in terms of hidden tori, as illustrated in \cref{fig:doughnut}. \begin{figure} \caption{Plots of the model $x = 2 \cos\phi_{1}$ under different prior mean angles and concentrations for the latent angle.} \label{fig:doughnut} \end{figure} The effect of the priors in this system is also highlighted by \cref{fig:doughnut}. The mean angle and concentration of each prior impact the distribution of mass along the direction of the angular component on the hyper-torus.
High concentration values in the prior lead to dense regions around the mean angle, as presented in the middle graph of \cref{fig:doughnut}, while low concentration leads to a uniform mass distribution, as shown in the right graph of \cref{fig:doughnut}. An analogy often used to describe the shape of the data in PCA's hidden space is a ``fuzzy pancake'', as the Gaussian noise induces the irregularity (``fuzziness'') of the hidden plane (``pancake''). Likewise, for dimensionality reduction with the mGvM the corresponding analogy would be a ``fuzzy doughnut'', as the Gaussian noise also induces irregularities over the surface of a ``doughnut'', which has a shape similar to a torus. \subsection{Supporting graphs and analysis for the experiments} \subsubsection{Regression experiments} The plots in \cref{fig:1d_regression,fig:tides} help us understand some of the reasons why the mGvM provides better regression performance than the other models considered. The mGvM is able to accurately infer where the underlying function wraps, and provides reasonable estimates of both the expected value of the underlying function and its variance on the unit circle. The 1D-GP cannot account for angular equivalences; therefore, it has to assign this phenomenon to noise, resulting in flat predictions as shown in \cref{fig:1d_regression}. While the 2D-GP is able to cope with wrapping, it learns a different lengthscale parameter. Furthermore, the 2D-GP cannot capture bimodal errors, which can be accounted for by the mGvM, as shown in \cref{fig:tides}. \begin{figure*} \caption{Regression on a one-dimensional synthetic data set using the mGvM (left), 1D-GP (centre) and 2D-GP (right): data points are represented as black crosses, the true function with circles and model predictions with solid dots. Best visualised in colour.} \label{fig:1d_regression} \end{figure*} \begin{figure*} \caption{Tide time predictions on the UK coast: port locations for a subset of the dataset (left), mGvM fit (centre) and 2D-GP (right). The ports whose data was supplied for training are displayed in magenta (darker) rose diagrams whereas the ports held out for prediction are displayed in cyan (lighter). The regression model predictions are given as orange lines. Best visualised in colour.} \label{fig:tides} \end{figure*} \subsubsection{Dimensionality reduction experiments} In this section, we provide additional experiments and noise levels for the average signal-to-noise ratio for the motion capture dataset and for the simulated motion capture of a robot arm. In the motion capture data sets, we applied a colour filter to the resulting images to isolate each marker, and the marker position was then found by calculating the centre of mass of each marker, as shown in Figure \ref{fig:mocap_xp}. \begin{figure} \caption{Capturing 2D motion: the dataset was generated by recording the motion of a subject with markers on their body, then using a colour threshold algorithm and taking the location of the centre of mass of the filtered region.} \label{fig:mocap_xp} \end{figure} Additional experiments are given in \cref{fig:mocap_results}. The conclusions and discussion of these experimental results mirror the discussions presented in the main paper. \begin{figure*} \caption{Signal-to-noise ratio with 3 standard deviations for the latent variable modelling datasets: filmed subject running (left), fishing (middle) and synthetic dataset (right).} \label{fig:mocap_results} \end{figure*} \end{document}
\begin{document} \tildetle{Invariant Solutions for Gradient Ricci Almost Solitons} \author{ \textbf{Benedito Leandro} \\ {\Bbb{S}mall\it CIEXA-Universidade Federal de Jata\'i }\\ {\Bbb{S}mall\it BR 364, km 195, 3800, 75801-615, Jata\'i, GO, Brazil. } \\ {\Bbb{S}mall\it e-mail: [email protected]} \\ \textbf{Romildo Pina \footnote{ Partially supported by CAPES-PROCAD.}} \\ {\Bbb{S}mall\it IME, Universidade Federal de Goi\'as,}\\ {\Bbb{S}mall\it 131, 74001-970, Goi\^ania, GO, Brazil. }\\ {\Bbb{S}mall\it e-mail: [email protected] } \\ \textbf{Tatiana Pires Fleury Bezerra} \\ {\Bbb{S}mall\it IFG -Instituto Federal de Educa\c{c}\~{a}o, Ci\^{e}ncia e Tecnologia de Goi\'as }\\ {\Bbb{S}mall\it 74968-755, Lt-1A, Parque Itatiaia, Ap. de Goi\^ania, GO, Brazil.} \\ {\Bbb{S}mall\it e-mail: [email protected]} } \title{Invariant Solutions for Gradient Ricci Almost Solitons} \thispagestyle{empty} \markboth{abstract}{abstract} \addcontentsline{toc}{chapter}{abstract} \begin{abstract} \noindentonumberindent In this paper we provide an ansatz that reduces a pseudo-Riemannian gradient Ricci almost soliton (PDE) into an integrable system of ODE. First, considering a warped structure with conformally flat base invariant under the action of an $(n-1)$-dimensional translation group and semi-Riemannian Einstein fiber, we provide the ODE system which characterizes all such solitons. Then, we also provide a classification for a conformally flat pseudo-Riemannian gradient Ricci almost soliton invariant by the actions of a translation group or a pseudo-orthogonal group. Finally, we conclude with some explicit examples. \end{abstract} \noindentonumberindent 2010 Mathematics Subject Classification: 53C21, 53C50, 53C25 \\ Keywords: semi-Riemannian metric, gradient Ricci solitons, warped product \Bbb{S}ection{Introduction and main statements} In the early eighties, Jean-Pierre Bourguignon introduced a flow to study the evolution of the Ricci curvature and the Einstein metrics. The {\it Ricci Bourguignon flow} is given by $$\fracac{\Bbb{P}artialrtial}{\Bbb{P}artialrtial\,t}g(t)=-2(Ric-\kappa\,R\,g)(t),$$ where $Ric$ and $R$ are, respectively, the Ricci tensor and the scalar curvature for the metric $g$. This flow is an interpolation between the Ricci flow and the Yamabe flow (see \cite{catino} and the references therein). The self-similar solutions of this flow are called {\it Einstein solitons} and correspond to the equation $$Ric+Hess(h)=(\kappa\,R+\mu\,)g,$$ where $\kappa$, $\mu\in\mathbb{R}$. When we replace in the above equation the term $\kappa R+\mu$ for $\lambdambda$, an arbitrary smooth function, we can call this equation a {\it gradient Ricci almost soliton}. We say that a {\it Ricci almost soliton} is a smooth manifold satisfying $$Ric + \mathcal{L}_{X}g=\lambdambda g,$$ where $\mathcal{L}_{X}g$ represents the Lie derivative of the metric $g$ with respect to a tangent vector field $X$ and $\lambdambda$ is an arbitrary smooth function. The Ricci almost solitons generalize Einstein solitons. They are also natural generalizations for Ricci solitons, which are self-similar solutions for the Ricci flow. \begin{defi} A smooth manifold $(M^{n}, g)$ is a gradient Ricci almost soliton if there exist two smooth functions $h$ and $\lambdambda$ on $M$ such that \begin{eqnarray}\lambdabel{eq1} Ric_{g}+Hess_{g}(h)=\lambdambda g, \end{eqnarray} where Ric$_g$ is the Ricci tensor, Hess$_g(h)$ is the Hessian of the potential function $h$ with respect to the metric $g$. 
\end{defi} A smooth manifold $(M^{n}, g)$ is a \emph{ \it gradient Ricci soliton} if there exist a smooth function $h:M\longrightarrow\mathbb{R}$ (called the potential function) and a constant $\lambdambda$ satisfying \ref{eq1}. A gradient Ricci soliton is said to be shrinking, steady, or expanding if $\lambdambda>0$, $\lambdambda=0$, or $\lambdambda<0$, respectively. When $M$ is a Riemannian manifold, usually, one requires the manifold to be complete. In the case of semi-Riemannian manifolds, one does not require $(M,g)$ to be complete (see \cite{BBGG}, \cite{BCGG}, \cite{BGG} e \cite{Onda}). We observe that \ref{eq1} can be considered a perturbation of the Einstein equation $$\mbox{Ric}_g=\rho g, \qquad \rho\in\mathbb{R}.$$ When $h$ is constant, we call the underlying Einstein manifold a trivial Ricci soliton. Pigola et al. \cite{PRRS}, explored the Ricci almost solitons in a very comprehensive way. They provided topological properties, volume comparison results, a gap theorem and some explicit examples are given. Since gradient Ricci almost solitons contain gradient Ricci solitons as a particular case, we can say that the gradient Ricci almost soliton is \emph{ \it proper} if the function $\lambdambda$ is non-constant (see \cite{BGRR}). In \cite{FFGP} the authors presented a necessary and sufficient condition for constructing gradient Ricci almost solitons that are realized as warped products. They provided an example of a particular Riemannian solution of the PDEs that arise from the hypothesis that the base is conformally flat and invariant by translation, in which the fiber is an Einstein manifold. Here, we classify all pseudo-Riemannian gradient Ricci almost soliton with warped structure in Theorem \ref{teo1.3}. Moreover, Theorem \ref{coro1.3} provides a method capable of producing an infinite number of pseudo-Euclidean warped product gradient Ricci almost soliton such that the base is invariant under the action of an $(n-1)$-translation group and the fiber is Ricci flat (see explicit examples below). Further, Barros, Batista and Ribeiro Jr \cite{BBR}, proved that either a Euclidean space $\mathbb{R}^{n}$ or a standard sphere $\mathbb{S}^{n}$ is the unique manifold with non-negative scalar curvature, which carries a structure of a gradient Ricci almost soliton, provided this gradient is a nontrivial conformal vector field. Also, they showed that a compact locally conformally flat almost Ricci soliton is isometric to a Euclidean sphere $\mathbb{S}^{n}$ provided that an integral condition holds. In addition, they constructed examples of gradient Ricci almost solitons. Consider the warped product manifold $M^{n+1}=\mathbb{R}\tildemes_{\Bbb{C}sh(t)}\mathbb{S}^{n}$ with metric $g=dt^{2}+\Bbb{C}sh^{2}(t)g_{0}$, where $g_{0}$ is the standard metric of $\mathbb{S}^{n}$. They proved that $(M^{n+1}, g, \noindentabla h, \rho )$, where $h(x, t)=\Bbb{S}inh(t)$ and $\lambdambda(x,t)=\Bbb{S}inh(t) +n$, is a gradient Ricci almost soliton. Barros et al. \cite{BGR} proved that a gradient Ricci almost soliton $(M^{n}, g,\noindentabla h, \lambdambda )$, whose Ricci tensor is Codazzi, has constant sectional curvature. In particular, in the compact case, they deduced that $(M^{n}, g)$ is isometric to a Euclidean sphere and $h$ is a height function. They classified gradient Ricci almost solitons with constant scalar curvature and provided a suitable function that achieved a maximum in $M^{n}$. 
In 2011, Catino \cite{CA} introduced the notion of generalized quasi-Einstein manifold, which generalizes the concepts of Ricci soliton, Ricci almost soliton, and quasi-Einstein manifold. He proved that a complete generalized quasi-Einstein manifold with harmonic Weyl tensor and zero radial Weyl curvature is locally a warped product with $(n-1)$-dimensional Einstein fibers. Furthermore, in the same paper, Catino proved the following result: ``Let $(M^{n}, g)$, $n\geq3$, be a locally conformally flat gradient Ricci almost soliton. Then, around any regular point of $h$, the manifold $(M^{n}, g)$ is locally a warped product with $(n-1)$-dimensional Einstein fibers of constant sectional curvature.'' In particular, this implies a local characterization of locally conformally flat gradient Ricci almost solitons, similar to the one proved for gradient Ricci solitons. The local structure of half conformally flat gradient Ricci almost solitons was recently investigated in \cite{BGRR}, showing that they are locally conformally flat in a neighborhood of any point where the gradient of the potential function is non-null. In \cite{CEGGV}, the authors proved that a locally homogeneous proper Ricci almost soliton either has constant sectional curvature or is locally isometric to a product $\mathbb{R} \times N(c)$, where $N(c)$ is a space of constant curvature. Inspired by the local classification of conformally flat gradient Ricci almost solitons discussed above, we consider a warped product structure for gradient Ricci almost solitons which is not necessarily conformally flat, and we classify such solitons. Given semi-Riemannian manifolds $(B, g_{B})$ and $(F, g_{F})$ and a smooth function $f>0$ on the base $B$, the warped product $M=B\times_{f} F$, with {\it fiber} $F$ and {\it warping function} $f$, is the product manifold $M=B\times F$ furnished with the metric tensor $$ \tilde{g}=g_{B}+f^{2}g_{F}.$$ In what follows, we find a family of gradient Ricci almost solitons in the case of the warped product $(M, \tilde{g})=(\mathbb{R}^{n},\bar{g})\times_{f}(F^{m}, g_{F})$, where the fiber is a semi-Riemannian Einstein manifold and the base is conformal to a pseudo-Euclidean space and invariant under the action of an $(n-1)$-dimensional translation group. We denote by $\psi_{,x_{i}}$, $f_{,x_{i}}$, and $h_{,x_{i}}$ the first order derivatives, and by $\psi_{,x_{i}x_{j}}$, $f_{,x_{i}x_{j}}$, and $h_{,x_{i}x_{j}}$ the second order derivatives, of the functions $\psi$, $f$, and $h$ with respect to $x_{i}$ and $x_{i}x_{j}$, respectively. Moreover, for any smooth function $W$ we denote $$W^{'}=\frac{dW}{d\xi}\quad\mbox{or}\quad W^{'}=\frac{dW}{dr}.$$ Without further ado, we state our main results. \begin{teorema} \label{teo1.3} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space with Cartesian coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\delta_{ij}\varepsilon_{i}$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i}=\pm1$ with at least one $\varepsilon_{i}=1$. Consider $(M, \tilde{g})=(\mathbb{R}^{n},\bar{g})\times_{f}(F^{m}, g_{F}),$ where $\overline{g}=\frac{1}{\psi^{2}}g$ and $F$ is a semi-Riemannian Einstein manifold with constant Ricci curvature $\lambda_{F}$.
Moreover, assume that $h(\xi)$, $\lambda(\xi)$ and $f(\xi)>0$ are non-constant smooth functions, where $\xi=\sum^{n}_{i=1}\alpha_{i} x_{i}$, $\alpha_{i}\in \mathbb{R}$, and $\sum^{n}_{i=1} \varepsilon_{i}\alpha_{i}^{2}=\varepsilon_{i_{0}}$ equals $-1$ or $1$ according as the vector $\alpha=(\alpha_{1},\ldots,\alpha_{n})$ is timelike or spacelike, respectively. Then, the warped product metric $\tilde{g}=\bar{g}+f^{2}g_{F}$ is a gradient Ricci almost soliton \eqref{eq1} with $h$ as a potential function if, and only if, the functions $\psi$, $f$, $\lambda$, and $h$ satisfy \begin{equation} \label{eq7} \left\{ \begin{array}{lcl} f\left[ (n-2)\psi^{''}+2\psi^{'}h^{'}+\psi h^{''}\right]-m\psi f^{''}-2m \psi^{'}f^{'} =0;\\ \\ \varepsilon_{i_{0}} \left[f \psi\psi^{''} -(n-1)f\left( \psi^{'} \right)^{2} +m\psi\psi^{'}f^{'}-f\psi\psi^{'}h^{'} \right] = \lambda f;\\ \\ \varepsilon_{i_{0}} \left[-f \psi^{2} f^{''}+(n-2)f \psi f^{'}\psi^{'}-(m-1) \psi^{2} \left( f^{'}\right)^{2}+f \psi^{2}f^{'}h^{'} \right] = \lambda f^{2}-\lambda_{F}. \end{array} \right. \end{equation} \end{teorema} In the next result we prove that if $f\psi=1$ and $F$ is Ricci flat, then the metrics $\tilde{g}$ are gradient Ricci almost solitons. \begin{teorema} \label{coro1.3} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space with coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\delta_{ij}\varepsilon_{i}$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i}=\pm1$ with at least one $\varepsilon_{i}=1$. Consider a warped product $M=(\mathbb{R}^{n},\bar{g})\times_{f} F^{m},$ where $\overline{g}=\frac{1}{\psi^{2}}g$ and $F$ is a Ricci-flat semi-Riemannian manifold. Consider non-constant smooth functions $f(\xi)>0$, $\lambda(\xi)$ and $h(\xi)$, where $\xi=\sum^{n}_{i=1}\alpha_{i} x_{i}$, $\alpha_{i}\in \mathbb{R}$, and $\sum^{n}_{i=1} \varepsilon_{i}\alpha_{i}^{2}=\varepsilon_{i_{0}}$ equals $-1$ or $1$ according as the vector $\alpha=(\alpha_{1},\ldots,\alpha_{n})$ is timelike or spacelike, respectively. Given any function $\psi (\xi)$, the warped product metric $\tilde{g}=\bar{g}+f^{2}g_{F}$ is a gradient Ricci almost soliton \eqref{eq1} with $h$ as a potential function, where the functions $f$, $h$, and $\lambda$ are given by \begin{equation} \label{eq8} \left\{ \begin{array}{lcl} f(\xi)\psi(\xi)=1;\\ \\ h(\xi) = k+ \displaystyle \int \left\lbrace c-(m+n-2) \displaystyle \int \psi \psi^{''} d\xi \right\rbrace \frac{1}{\psi^{2}} d\xi;\\ \\ \lambda(\xi)=\varepsilon_{i_{0}} \left\lbrace \psi\psi^{''}-(m+n-1)(\psi^{'})^{2}-c \dfrac{\psi^{'}}{\psi}+ (m+n-2) \dfrac{\psi^{'}}{\psi} \displaystyle\int \psi\psi^{''}d\xi \right\rbrace, \end{array} \right. \end{equation} where $c$ and $k$ are constants. \end{teorema} \begin{obs}\label{mas} In Theorem \ref{coro1.3} we have $f\psi=1$; thus the metric $\tilde{g}$ can be rewritten as $$\tilde{g}=\bar{g}+f^{2}g_{F}=\frac{1}{\psi^{2}}g_{E}+\left(\frac{1}{\psi} \right) ^{2}g_{F}=\frac{1}{\psi^{2}} \left(g_{E}+g_{F} \right).$$ Thus, all metrics conformal to the product manifold $\mathbb{R}^{n}\times F^{m}$, invariant under translations, where $F$ is a Ricci-flat manifold, are gradient Ricci almost solitons.
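This can also be checked mechanically. The following minimal \texttt{sympy} sketch (an illustration only, written for a generic profile $\psi(\xi)$ under the hypotheses of Theorem \ref{coro1.3} with $\lambda_{F}=0$) verifies that, once $f=1/\psi$ and $h$ is taken from \eqref{eq8}, the first equation of the system \eqref{eq7} holds identically; the other two equations then determine $\lambda$, as in the proof of Theorem \ref{coro1.3} below.
\begin{verbatim}
# Sketch: with f = 1/psi and h as in (eq8), the first equation of (eq7)
# vanishes identically for a generic profile psi(xi).
import sympy as sp

xi, c, k, m, n = sp.symbols('xi c k m n')
psi = sp.Function('psi')(xi)

f = 1/psi
I = sp.Integral(psi*psi.diff(xi, 2), xi)             # kept unevaluated
h = k + sp.Integral((c - (m + n - 2)*I)/psi**2, xi)  # h from (eq8)

first_eq = (f*((n - 2)*psi.diff(xi, 2) + 2*psi.diff(xi)*h.diff(xi)
               + psi*h.diff(xi, 2))
            - m*psi*f.diff(xi, 2) - 2*m*psi.diff(xi)*f.diff(xi))

print(sp.simplify(first_eq))                          # expected output: 0
\end{verbatim}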
\end{obs} Hereafter, we find a family of gradient Ricci almost solitons in the case $(M, \tilde{g})=(\mathbb{R}^{n},\bar{g})$, where $M$ is conformal to a pseudo-Euclidean space and invariant under the action of a pseudo-orthogonal group. Let $(\mathbb{R}^{n}, g)$ be the standard pseudo-Euclidean space with metric $g$ and coordinates $x=(x_{1}, ..., x_{n})$ with $g_{ij}=\delta_{ij}\varepsilon_{i}$, $1\leq i, j \leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i}=\pm1$ with at least one $\varepsilon_{i}=1$. Let $r=\displaystyle\sum^{n}_{i=1}\varepsilon_{i} x^{2}_{i}$ be a basic invariant for an $(n-1)$-dimensional pseudo-orthogonal group. We want to obtain differentiable functions $h(r)$, $\psi(r)$ and $\lambda(r)$ such that the metric $\overline{g}=\frac{1}{\psi^{2}}g$ satisfies \eqref{eq1}. We show that in the Riemannian case all metrics conformal to a Euclidean metric and invariant by rotation are gradient Ricci almost solitons, see Corollary \ref{coro1.1}. Moreover, in Corollary \ref{coro1.1}, given any function $\psi(r)$, there are $h(r)$ and $\lambda(r)$ such that the metric $\overline{g}$ is a gradient Ricci almost soliton. This provides a method to build many examples of solitons invariant by rotation. In Theorem \ref{teo1.1} and Corollary \ref{coro1.1} we consider metrics conformal to pseudo-Euclidean spaces, and we find families of gradient Ricci almost solitons invariant under a pseudo-orthogonal group action. \begin{teorema} \label{teo1.1} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space, $n\geq3$, with coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\delta_{ij}\varepsilon_{i}$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i}=\pm1$ with at least one $\varepsilon_{i}=1$. Consider non-constant smooth functions $h(r)$ and $\psi(r)$, where $r=\sum^{n}_{i=1}\varepsilon_{i}x_{i}^{2}$. The metric $\overline{g}=\frac{1}{\psi^{2}}g$ is such that $(\mathbb{R}^{n}, \overline{g})$ is a gradient Ricci almost soliton \eqref{eq1} with $h$ as a potential function if, and only if, the functions $h$, $\psi$ and $\lambda$ satisfy \begin{equation} \label{eq3} \left\{ \begin{array}{lc} (n-2)\psi^{''}+2\psi^{'}h^{'}+\psi h^{''} =0;\\ \\ 4(n-1) \psi\psi^{'}+4r \psi\psi^{''} -4(n-1)r\left( \psi^{'}\right) ^{2} -4r\psi\psi^{'} h^{'} +2\psi^{2}h^{'}=\lambda. \end{array} \right. \end{equation} \end{teorema} Note that the conformal factor can be freely chosen. Therefore, if we choose a conformal factor in Theorem \ref{teo1.1}, we can build a pseudo-Riemannian gradient Ricci almost soliton invariant under the action of a pseudo-orthogonal group (rotations in the Riemannian case), provided that the system \eqref{eq3} is integrable. We sum up this discussion in the following corollary. \begin{coro} \label{coro1.1} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space, $n\geq3$, with coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\delta_{ij}\varepsilon_{i}$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i}=\pm1$ with at least one $\varepsilon_{i}=1$. Consider non-constant smooth functions $h(r)$ and $\psi(r)$, where $r=\sum^{n}_{i=1}\varepsilon_{i}x_{i}^{2}$.
Given any function $ \psi(r)$, the metric $ \overline{g}=\frac{1}{\psi^{2}}g $ is a gradient Ricci almost soliton \eqref{eq1} with $h$ as a potential function, where the functions $h$ and $\lambda$ are given by \begin{equation} \label{eq4} \left\{ \begin{array}{llc} h(r)&=&\displaystyle\int \left[ c-(n-2)\displaystyle\int\psi\psi^{''}dr \right] \dfrac{1}{\psi^{2}}dr +k;\\ \\ \lambda(r)&=& 4(n-1) \psi\psi^{'}+4r \psi\psi^{''} -4(n-1)r\left( \psi^{'}\right) ^{2}\\ \\ &-&4cr\frac{\psi^{'}}{\psi} -2(n-2) \left( 1-2r \frac{\psi^{'}}{\psi} \right) \displaystyle\int\psi\psi^{''}dr +2c, \end{array} \right. \end{equation} where $c$ and $k$ are constants. \end{coro} For our next results, let $(\mathbb{R}^{n}, g)$ be the standard pseudo-Euclidean space with metric $g$ and coordinates $x=(x_{1}, ..., x_{n})$ with $g_{ij}=\delta_{ij}\varepsilon_{i}$, $1\leq i, j \leq n$, where $\delta_{ij}$ is the Kronecker delta and $\varepsilon_{i}=\pm1$ with at least one $\varepsilon_{i}=1$. Let $\xi=\displaystyle\sum^{n}_{i=1}\alpha_{i} x_{i}$, $\alpha_{i}\in \mathbb{R}$, be a basic invariant for an $(n-1)$-dimensional translation group, where $\sum^{n}_{i=1} \varepsilon_{i}\alpha_{i}^{2}=\varepsilon_{i_{0}}$ equals $-1$, $0$, or $1$ according as the direction is timelike, lightlike, or spacelike, respectively. We want to obtain differentiable functions $h(\xi)$, $\psi(\xi)$ and $\lambda(\xi)$ such that the metric $\overline{g}=\frac{1}{\psi^{2}}g$ satisfies \eqref{eq1}. We first obtain the necessary and sufficient conditions on $h(\xi)$ and $\psi(\xi)$ for the existence of $\overline{g}$. These conditions differ depending on whether the direction $\alpha=\displaystyle\sum^{n}_{i=1}\alpha_{i} \dfrac{\partial}{\partial x_{i}}$ is timelike or spacelike. Remember that we are considering proper solitons. \begin{teorema} \label{teo1.2} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space, $n\geq3$, with coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\delta_{ij}\varepsilon_{i}$. Consider non-constant smooth functions $h(\xi)$ and $\psi(\xi)$, where $ \xi=\displaystyle\sum^{n}_{i=1}\alpha_{i} x_{i}$, $\alpha_{i}\in \mathbb{R}$, and $\displaystyle \sum^{n}_{i=1} \varepsilon_{i}\alpha_{i}^{2}=\varepsilon_{i_{0}}\neq 0$. The metric $ \overline{g}=\frac{1}{\psi^{2}}g $ is such that $(\mathbb{R}^{n}, \overline{g})$ is a gradient Ricci almost soliton \eqref{eq1} with $h$ as a potential function if, and only if, the functions $h$, $\psi$, and $\lambda$ satisfy \begin{equation} \label{eq5} \left\{ \begin{array}{lc} (n-2)\psi^{''}+2\psi^{'}h^{'}+\psi h^{''} =0;\\ \\ \varepsilon_{i_{0}} \left[ \psi\psi^{''}-(n-1) \left( \psi^{'} \right)^{2} - \psi\psi^{'} h^{'} \right]= \lambda. \end{array} \right. \end{equation} \end{teorema} In the next result we provide families of gradient Ricci almost solitons which are invariant under the action of an $(n-1)$-dimensional translation group. \begin{coro} \label{coro1.2} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space, $n\geq3$, with coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\delta_{ij}\varepsilon_{i}$.
Given any function $ \psi(\xi)$, the metric $ \overline{g}=\frac{1}{\psi^{2}}g $ is a gradient Ricci almost soliton \eqref{eq1} with $h$ as a potential function, where the functions $h$ and $\lambda$ are given by \begin{equation} \label{eq6} \left\{ \begin{array}{lc} h(\xi)=\displaystyle\int \left[ c-(n-2)\int\psi\psi^{''}d\xi \right] \dfrac{1}{\psi^{2}}d\xi +k; \\ \\ \lambda(\xi)=\varepsilon_{i_{0}}\left\lbrace \left[ \psi\psi^{''}-(n-1) (\psi^{'})^{2} \right] - \dfrac{\psi^{'}}{\psi} \left[ c-(n-2) \displaystyle\int\psi\psi^{''}d\xi \right]\right\rbrace, \end{array} \right. \end{equation} where $c$ and $k$ are constants. \end{coro} \begin{coro} \label{coro1.4} If $(\mathbb{R}^{n}, g)$ is the Euclidean space, $F$ is a complete Ricci-flat Riemannian manifold (when a fiber occurs) and $0 < |\psi (x) |\leq c$ for some constant $c$, then the metrics in Theorem \ref{coro1.3}, Corollary \ref{coro1.1} and Corollary \ref{coro1.2} are complete. \end{coro} As a consequence of Corollary \ref{coro1.4} we obtain the following examples. \begin{exam}\label{aaaaa} Consider $\alpha_{1}=\ldots=\alpha_{n-1}=0$, $\alpha_{n}=1$, $\psi(x_{1},\ldots,x_{n})=x_{n}$ and $\mathbb{R}^{n^{\ast}}_{+}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n};\ x_{n}>0\}$, i.e., $\left(\mathbb{R}^{n^{\ast}}_{+},g_{can}=\frac{\delta_{ij}}{x_{n}^{2}}\right)=(\mathbb{H}^{n},g_{can})$, where $g_{can}$ is the standard metric of the hyperbolic space. Therefore, from Theorem \ref{coro1.3} we have that the product manifold $\mathbb{H}^{n}\times F^{m}$, in which $(F^{m}, g_{F})$ is a complete Ricci-flat manifold, is a complete gradient Ricci almost soliton with metric tensor $$ds^{2}= g_{can}+\frac{1}{x_{n}^{2}}g_{F},$$ where the potential function is $$h(x_{1},\ldots,x_{m+n})=k-\frac{c}{x_{n}},$$ and $$\lambda(x_{1},\ldots,x_{m+n})=-\left[(m+n-1)+\frac{c}{x_{n}}\right].$$ \end{exam} \begin{exam} Consider $\alpha_{1}=\ldots=\alpha_{n-1}=0$, $\alpha_{n}=1$ and $\mathbb{R}^{n^{\ast}}_{+}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n};\ x_{n}>0\}$, i.e., $\left(\mathbb{R}^{n^{\ast}}_{+},g_{can}=\frac{\delta_{ij}}{x_{n}^{2}}\right)=(\mathbb{H}^{n},g_{can})$, where $g_{can}$ is the standard metric of the hyperbolic space. Now, taking $\psi(x_{1},\ldots,x_{n})=x_{n}\nu$, where $\nu$ is any bounded and positive smooth function (cf. Corollary \ref{coro1.4}), we have that the product manifold $\mathbb{H}^{n}\times F^{m}$, in which $(F^{m}, g_{F})$ is a complete Ricci-flat manifold, is a complete gradient Ricci almost soliton with metric tensor $$ds^{2}= \frac{1}{\nu^{2}}\left(g_{can}+\frac{1}{x_{n}^{2}}g_{F}\right),$$ where the functions $\lambda$ and $h$ are given by \eqref{eq8}. Therefore, there exist many complete Ricci almost solitons conformal to Example \ref{aaaaa}.
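Both examples above are produced by \eqref{eq8}. For the profile $\psi(\xi)=\xi$ of Example \ref{aaaaa}, the corresponding data can be recovered symbolically; a minimal \texttt{sympy} sketch (an illustration only, taking $\varepsilon_{i_{0}}=1$ and $\lambda_{F}=0$) is the following.
\begin{verbatim}
# Sketch: specializing (eq8) to psi(xi) = xi reproduces h = k - c/xi and
# lambda = -[(m+n-1) + c/xi] of the first example above, with xi = x_n > 0.
import sympy as sp

xi, c, k, m, n = sp.symbols('xi c k m n', positive=True)
psi = xi                                       # profile of the first example

I = sp.integrate(psi*sp.diff(psi, xi, 2), xi)  # = 0, since psi'' = 0
h = k + sp.integrate((c - (m + n - 2)*I)/psi**2, xi)
lam = (psi*sp.diff(psi, xi, 2) - (m + n - 1)*sp.diff(psi, xi)**2
       - c*sp.diff(psi, xi)/psi + (m + n - 2)*sp.diff(psi, xi)/psi*I)

print(sp.simplify(h))    # k - c/xi
print(sp.simplify(lam))  # -(m + n - 1) - c/xi
\end{verbatim}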
\end{exam} \begin{exam} If $\psi(x_{1},\ldots,x_{n})= e^{-x_{1}^{2}-\ldots-x_{n}^{2}}$ in Corollary \ref{coro1.1}, then $\mathbb{R}^{n}$ with the Riemannian metric $$ds^{2}=e^{2(x_{1}^{2}+\ldots+x_{n}^{2})}(dx_{1}^{2}+\ldots+dx_{n}^{2})$$ is a complete gradient Ricci almost soliton, where the potential function is $$h(x_{1},\ldots,x_{n})=\frac{c}{2}e^{2x_{1}^{2}+\ldots+2x_{n}^{2}}+\dfrac{(n-2)}{2}(x_{1}^{2}+\ldots+x_{n}^{2})+k,$$ and \begin{eqnarray*} \lambda(x_{1},\ldots,x_{n})&=&-[2(n-2)(x_{1}^{2}+\ldots+x_{n}^{2})+(3n-2)]e^{-2x_{1}^{2}-\ldots-2x_{n}^{2}}\\ &+&2c[2(x_{1}^{2}+\ldots+x_{n}^{2})+1]. \end{eqnarray*} \end{exam} \begin{exam} Choosing $n=2$, $\alpha_{1}=\alpha_{2}=1$ and $\psi(x_{1},x_{2})=\frac{1}{1+(x_{1}+x_{2})^{2}}$, then from Corollary \ref{coro1.2} we have that $\mathbb{R}^{2}$ with the metric $$ds^{2}=[1+2(x_{1}+x_{2})^{2}+(x_{1}+x_{2})^{4}](dx_{1}^{2}+dx_{2}^{2})$$ is a complete gradient Ricci almost soliton, where the potential function is $$h(x_{1},x_{2})=c\left[(x_{1}+x_{2})+\frac{2}{3}(x_{1}+x_{2})^{3}+\frac{1}{5}(x_{1}+x_{2})^{5}\right]+k$$ and $$\lambda(x_{1},x_{2})=\frac{4(x_{1}+x_{2})^{2}}{[1+(x_{1}+x_{2})^{2}]^{4}}-\frac{2}{[1+(x_{1}+x_{2})^{2}]^{3}}+\frac{2c(x_{1}+x_{2})}{1+(x_{1}+x_{2})^{2}}.$$ \end{exam} \section{Proofs of the Main Results} \noindent \textbf{Proof of Theorem \ref{teo1.3}:} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space, $n\geq3$, with coordinates $x=(x_{1}, ..., x_{n})$, and let $(M, \tilde{g} )=(\mathbb{R}^{n},\bar{g})\times_{f} (F^{m}, g_{F})$ be a warped product, where $\tilde{g}= \overline{g} +f^{2}g_{F}$, $\overline{g}=\frac{1}{\psi^{2}}g$, $g_{ij}=\delta_{ij}\varepsilon_{i}$, and $F$ is a semi-Riemannian Einstein manifold with constant Ricci curvature $\lambda_{F}$. Consider $X_{1},...,X_{n}\in \pounds (\mathbb{R}^{n})$ and $Y_{1},...,Y_{m}\in \pounds (F)$, where $\pounds (\mathbb{R}^{n})$ and $\pounds (F)$ are, respectively, the spaces of lifts of vector fields on $\mathbb{R}^{n}$ and $F$ to $\mathbb{R}^{n}\times_{f} F^{m}$. Since $\tilde{g}\left(Y_{i}, Y_{j} \right)=f^{2} g_{F}\left(Y_{i}, Y_{j} \right)$, from the warped structure (see \cite{B,O'neil}) we obtain \begin{equation} \label{eq22} \left\{ \begin{array}{lll} Ric_{\tilde{g}}\left(X_{i}, X_{j} \right)&=& Ric_{\overline{g}}\left(X_{i}, X_{j} \right) -\dfrac{m}{f} Hess_{\overline{g}} f\left(X_{i}, X_{j} \right), \ \forall \ i, j=1,...,n;\\ \\ Ric_{\tilde{g}}\left(X_{i}, Y_{j} \right)&=& 0, \ \forall \ i=1,...,n \ \mbox{ and } \ j=1,...,m; \\ \\ Ric_{\tilde{g}}\left(Y_{i}, Y_{j} \right)&=& Ric_{g_{F}}\left(Y_{i}, Y_{j} \right)\\ &-& \left(f\Delta_{\overline{g}}f +(m-1)\left|\nabla_{\overline{g}}f \right|^{2}\right) g_{F}\left(Y_{i}, Y_{j} \right), \ \forall \ i, j=1,...,m. \end{array} \right. \end{equation} It is well known (cf. \cite{B}) that, if $\overline{g}=\frac{1}{\psi^{2}}g$, then \begin{equation} \label{eq9} Ric_{\overline{g}}=\dfrac{1}{\psi^{2}}\left\lbrace \left( n-2\right)\psi\, Hess_{g}\psi + \left[ \psi\Delta_{g}\psi-\left( n-1\right)\left|\nabla_{g}\psi \right|^{2} \right]g\right\rbrace.
\end{equation} Furthermore, for the metric $g$ we have $$\left( Hess_{g}\psi\right)\left(X_{i}, X_{j} \right)=\psi_{, x_{i}x_{j}}, \ \ \Delta_{g}\psi=\sum_{k}\varepsilon_{k}\psi_{, x_{k}x_{k}}, \ \ \left|\nabla_{g}\psi \right|^{2}=\sum_{k}\varepsilon_{k} \left( \psi_{, x_{k}}\right) ^{2}.$$ Inserting these expressions into \eqref{eq9}, we get \begin{equation} \label{eq24} \left\{ \begin{array}{lll} Ric_{\overline{g}}\left(X_{i}, X_{j} \right)&=& \left( n-2\right) \dfrac{\psi_{, x_{i}x_{j}}}{\psi}, \ \forall \ i\neq j=1,...,n;\\ \\ Ric_{\overline{g}}\left(X_{i}, X_{i} \right)&=& \dfrac{1}{\psi^{2}}\big\{ \left( n-2\right)\psi\psi_{, x_{i}x_{i}} \\ &+& \big[ \psi\displaystyle\sum_{k}\varepsilon_{k}\psi_{, x_{k}x_{k}}-\left( n-1\right)\displaystyle\sum_{k}\varepsilon_{k} \psi_{, x_{k}}^{2} \big]\varepsilon_{i} \big\} , \ \forall \ i=1,...,n. \end{array} \right. \end{equation} Considering $X_{1},...,X_{n}\in \pounds (\mathbb{R}^{n})$ and \eqref{eq1}, we have $$ Ric_{\tilde{g}}\left(X_{i}, X_{j} \right)= \lambda\tilde{g}\left(X_{i}, X_{j} \right)-Hess_{\tilde{g}}\left( h\right) \left(X_{i}, X_{j} \right).$$ Then, from the first equation of \eqref{eq22}, \begin{eqnarray} \label{sia}Ric_{\overline{g}}\left(X_{i}, X_{j} \right)-\dfrac{m}{f}Hess_{\overline{g}} (f) \left(X_{i}, X_{j} \right)= \lambda\overline{g}\left(X_{i}, X_{j} \right)- Hess_{\overline{g}}(h) \left(X_{i}, X_{j} \right). \end{eqnarray} Let now $ \overline{g}=\frac{1}{\psi^{2}}g $ be a conformal metric of $g$, with $g_{ij}=\varepsilon_ {i}\delta_{ij}$; then the Christoffel symbols are given by $$ \overline{\Gamma}_{ij}^{k}=0 \ (i,j,k \mbox{ distinct}), \ \ \ \overline{\Gamma}_{ij}^{i}=-\dfrac{ \psi_{,x_{j}}}{\psi}, \ \ \ \overline{\Gamma}_{ii}^{k}= \varepsilon_{i} \varepsilon_{k}\dfrac{ \psi_{,x_{k}}}{\psi} \ (k\neq i)\quad\mbox{and}\quad \overline{\Gamma}_{ii}^{i}=-\dfrac{ \psi_{,x_{i}}}{\psi}.$$ Then, for any smooth function $W$, the Hessian is given by \begin{equation} \label{eq17} \left\{ \begin{array}{lll} Hess_{\overline{g}}(W)_{ij}=W_{, x_{i} x_{j}}+\dfrac{ \psi_{,x_{j}}}{\psi}W_{,x_{i}}+\dfrac{ \psi_{,x_{i}}}{\psi}W_{,x_{j}}, \ \ i\neq j;\\ \\ Hess_{\overline{g}}(W)_{ii}=W_{, x_{i} x_{i}}+2\dfrac{ \psi_{,x_{i}}}{\psi}W_{,x_{i}}-\varepsilon_{i}\displaystyle\sum_{k=1}^{n}\varepsilon_{k} \dfrac{ \psi_{,x_{k}}}{\psi}W_{,x_{k}}. \end{array} \right.
\end{equation} Applying \eqref{eq24} and the expression of the Hessian in the metric $\bar{g}$ given by \eqref{eq17} to \eqref{sia}, we obtain \begin{equation} \label{eq26} \begin{array}{lll} (n-2)f \psi_{,x_{i}x_{j}} &+& f \psi h_{,x_{i}x_{j}} -m \psi f_{,x_{i}x_{j}} - m\psi_{,x_{i}} f_{,x_{j}}\\ &-&m\psi_{,x_{j}} f_{,x_{i}} + f \psi_{,x_{i}} h_{,x_{j}} + f \psi_{,x_{j}} h_{,x_{i}}=0,\quad 1\leq i\neq j\leq n, \end{array} \end{equation} and, for all $i$, \begin{equation} \label{eq27} \begin{array}{lll} \psi \left[ (n-2) f \psi_{,x_{i}x_{i}}+ f \psi h_{,x_{i}x_{i}} -m \psi f_{,x_{i}x_{i}} -2m \psi_{,x_{i}} f_{,x_{i}} +2f \psi_{,x_{i}} h_{,x_{i}} \right] + \\ \varepsilon_{i} \sum^{n}_{k=1} \varepsilon_{k} \left[ f \psi \psi_{,x_{k}x_{k}} -(n-1) f \psi_{,x_{k}}^{2} +m \psi \psi_{,x_{k}} f_{,x_{k}} -f \psi \psi_{,x_{k}} h_{,x_{k}}\right]=\varepsilon_{i} \lambda f. \end{array} \end{equation} Now, considering $Y_{1},...,Y_{m}\in \pounds (F)$, from \eqref{eq1} and the third equation of \eqref{eq22} we get \begin{equation} \label{eq28} \begin{array}{lll} Ric_{g_{F}}\left(Y_{i}, Y_{j} \right) &-& \left(f\Delta_{\overline{g}}f +(m-1)\left|\nabla_{\overline{g}}f \right|^{2}\right) g_{F}\left(Y_{i}, Y_{j} \right)\\ &-&\lambda f^{2}g_{F}\left(Y_{i}, Y_{j} \right) +\left( Hess_{\tilde{g}}h\right) \left(Y_{i}, Y_{j} \right)=0. \end{array} \end{equation} It is a straightforward computation that \begin{equation} \label{eq29} \begin{array}{lll} |\nabla_{\overline{g}}f|^{2}=\psi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}}^{2}\quad\mbox{and}\\ \\ \triangle_{\overline{g}}f=\psi^{2}\sum_{k}\varepsilon_{k}f_{,x_{k}x_{k}}-(n-2)\psi\sum_{k}\varepsilon_{k}\psi_{, x_{k}}f_{,x_{k}}. \end{array} \end{equation} We also have that $F$ is an Einstein manifold, thus \begin{equation} \label{eq30} Ric_{g_{F}}\left(Y_{i}, Y_{j} \right)=\lambda_{F}g_{F}\left(Y_{i}, Y_{j} \right). \end{equation} Moreover, \begin{eqnarray}\label{dan} Hess_{\tilde{g}}(h)(Y_{i},Y_{j})&=&(Y_{i}Y_{j})(h)-(\nabla^{\tilde{g}}_{Y_{i}}Y_{j})(h)\nonumber\\ &=&\left(\frac{\nabla_{\tilde{g}}f}{f}\right)(h)\,\tilde{g}(Y_{i},\,Y_{j})\nonumber\\ &=&f\bar{g}(\nabla_{\bar{g}}h,\nabla_{\bar{g}} f)g_{F}(Y_{i},Y_{j})\nonumber\\ &=&\left(f\psi^{2}\displaystyle\sum_{k}\varepsilon_{k}f_{,x_{k}}h_{,x_{k}}\right)g_{F}(Y_{i},Y_{j}). \end{eqnarray} Then, replacing \eqref{eq29}, \eqref{eq30} and \eqref{dan} in \eqref{eq28}, we get \begin{equation} \label{eq31} \sum^{n}_{k=1} \varepsilon_{k} \left[-f \psi^{2} f_{,x_{k}x_{k}} +(n-2) f \psi f_{,x_{k}} \psi_{,x_{k}} - (m-1) \psi^{2} f_{,x_{k}}^{2}+f \psi^{2} f_{,x_{k}} h_{,x_{k}} \right] = \lambda f^{2} -\lambda_{F}. \end{equation} We assume that $f(\xi)$ and $\psi(\xi)$ are functions of $\xi$, where $\xi=\sum^{n}_{i=1}\alpha_{i} x_{i}$.
Hence, we have $$\psi_{, x_{i}} = \alpha_{i} \psi^{'}, \ \ \ \ \psi_{, x_{i} x_{j}} = \alpha_{i} \alpha_{j} \psi^{''}, \ \ \ \ \psi_{, x_{i} x_{i}} = \alpha_{i}^{2} \psi^{''},$$ $$f_{, x_{i}} = \alpha_{i} f^{'}, \ \ \ \ f_{, x_{i} x_{j}} = \alpha_{i} \alpha_{j} f^{''}, \ \ \ \ f_{, x_{i} x_{i}} = \alpha_{i}^{2} f^{''} ,$$ and $$|\nabla_{g}\psi|^{2} = \left( \sum_{i=1}^{n} \varepsilon_{i} \alpha_{i}^{2} \right) (\psi^{'})^{2} = \varepsilon_{i_{0}} (\psi^{'})^{2}, \ \ \ \ \triangle_{g}\psi= \left( \sum_{i=1}^{n}\varepsilon_{i} \alpha_{i}^{2} \right) \psi^{''}=\varepsilon_{i_{0}} \psi^{''},$$ where $\sum_{i=1}^{n}\varepsilon_{i}\alpha_{i}^{2}=\varepsilon_{i_{0}}.$ We replace these expressions in \eqref{eq26}, \eqref{eq27} and \eqref{eq31}. From \eqref{eq26}, we have $$\alpha_{i} \alpha_{j} \left\lbrace f \left[ (n-2) \psi^{''}+2\psi^{'} h^{'}+\psi h^{''}\right] -m\psi f^{''}-2m\psi^{'} f^{'}\right\rbrace =0, \ \ \ \forall\, i\neq j. $$ If there exists a pair $i\neq j$ with $\alpha_{i} \alpha_{j}\neq 0$, then this equation reduces to the first equation of \eqref{eq7}. Likewise, from \eqref{eq27}, we get \begin{equation*} \begin{array}{lll} &&\psi \alpha_{i}^{2}\left\lbrace \left[ (n-2) \psi^{''}+2\psi^{'} h^{'}+\psi h^{''}\right] -m\psi f^{''}-2m\psi^{'} f^{'}\right\rbrace \\ &+& \varepsilon_{i}\sum_{k=1}^{n} \varepsilon_{k} \alpha_{k}^{2}\left[ f\psi\psi^{''}-(n-1)f \left( \psi^{'}\right)^{2}+m\psi\psi^{'} f^{'} - f \psi\psi^{'} h^{'} \right]=\varepsilon_{i}\lambda f; \end{array} \end{equation*} applying the first equation of \eqref{eq7} and taking $\sum_{k=1}^{n} \varepsilon_{k} \alpha_{k}^{2}=\varepsilon_{i_{0}}\neq 0$ into account, the equation above reduces exactly to the second equation of \eqref{eq7}. Finally, the third equation of \eqref{eq7} follows from \eqref{eq31}. The converse statement is a straightforward computation. $\Box$ \vspace{.2in} \noindent \textbf{Proof of Theorem \ref{coro1.3}:} We can rewrite the first equation of \eqref{eq7} as $$f \psi h^{''}+2f \psi^{'} h^{'}+ (n-2)f\psi^{''}-m\psi f^{''}-2m \psi^{'} f^{'}=0.$$ Making the change of variable $y=h^{'}$, the above equation becomes a first-order linear differential equation in $y$, $$y^{'}=-2\dfrac{\psi^{'}}{\psi}y+\left[ -(n-2)\dfrac{\psi^{''}}{\psi}+m\dfrac{f^{''}}{f}+2m\dfrac{f^{'} \psi^{'}}{f \psi}\right].$$ Writing $P(\xi)= -2\dfrac{\psi^{'}}{\psi}$ and $Q(\xi)= -(n-2)\dfrac{\psi^{''}}{\psi}+m\dfrac{f^{''}}{f}+2m\dfrac{f^{'} \psi^{'}}{f \psi}$, the standard formula for linear equations gives $$y= \left[c+\int Q( \xi)\, e^{-\int P(\xi) d \xi } d \xi \right] e^{\int P(\xi) d \xi}.$$ Notice that $$\int P(\xi) d \xi=-\ln \psi^{2}.$$ Thus, \begin{equation} \label{eq33} h(\xi)= k+ \int \left\lbrace c+ \int\left[ m\psi^{2}\dfrac{f^{''}}{f} -(n-2) \psi\psi^{''} +2m\dfrac{\psi\psi^{'} f^{'}}{f} \right] d\xi \right\rbrace \frac{1}{\psi^{2}} d\xi. \end{equation} Hereafter, assume $f\psi=1$. Hence, \begin{equation} \label{eq34} \psi f^{''} +2f^{'} \psi^{'}=-f \psi^{''}.
\end{equation} Note that we can rewrite equation \eqref{eq33} using equation \eqref{eq34}: \begin{eqnarray} \label{eq35} h(\xi) &=& k+ \displaystyle \int \left\lbrace c+ \displaystyle \int\left[ m\psi^{2}\dfrac{f^{''}}{f} -(n-2) \psi\psi^{''} +2m\dfrac{\psi\psi^{'} f^{'}}{f} \right] d\xi \right\rbrace \frac{1}{\psi^{2}} d\xi \nonumber\\ \nonumber\\ &=& k+ \displaystyle \int \left\lbrace c+ \displaystyle \int \frac{\psi}{f} \left[ m\underbrace{\left( \psi f^{''}+2f^{'} \psi^{'}\right)}_{-f \psi^{''}} -(n-2)f \psi^{''} \right] d\xi \right\rbrace \frac{1}{\psi^{2}} d\xi \nonumber \\ \nonumber\\ &=& k+ \displaystyle \int \left\lbrace c-(m+n-2) \displaystyle \int \psi \psi^{''} d\xi \right\rbrace \frac{1}{\psi^{2}} d\xi. \end{eqnarray} Now, replacing $h$ and $h^{'}$ in the second equation of \eqref{eq7} by the expressions given by \eqref{eq35}, we obtain \begin{equation} \label{eq37} \lambda= \varepsilon_{i_{0}} \left\lbrace \psi\psi^{''}-(n-1)(\psi^{'})^{2}+m\psi \psi^{'} \frac{f^{'}}{f} - c\dfrac{\psi^{'}}{\psi} +(m+n-2)\dfrac{\psi^{'}}{\psi} \displaystyle \int \psi \psi^{''} d\xi \right\rbrace. \end{equation} Applying the expressions $f=\dfrac{1}{\psi}$ and $f^{'}=-\dfrac{\psi^{'}}{\psi^{2}}$ to \eqref{eq37}, we obtain \begin{eqnarray} \label{eq38} \lambda (\xi)= \varepsilon_{i_{0}} \left\lbrace \psi\psi^{''}-(m+n-1)(\psi^{'})^{2} -c \dfrac{\psi^{'}}{\psi} +(m+n-2)\dfrac{\psi^{'}}{\psi} \displaystyle \int \psi \psi^{''} d\xi \right\rbrace . \end{eqnarray} We now check that \eqref{eq38} is compatible with the remaining equation of the system. The third equation of \eqref{eq7} reads $$ \varepsilon_{i_{0}} \left[-f \psi^{2} f^{''}+(n-2)f \psi f^{'}\psi^{'}-(m-1) \psi^{2} \left( f^{'}\right)^{2}+f \psi^{2}f^{'}h^{'} \right] = \lambda f^{2}-\lambda_{F}. $$ Inserting $f=\dfrac{1}{\psi}$, $f^{'}=-\dfrac{\psi^{'}}{\psi^{2}}$, $f^{''}=2\dfrac{\left( \psi^{'}\right) ^{2}}{\psi^{3}}-\dfrac{\psi^{''}}{\psi^{2}}$ and $\lambda_F =0$ into the above equation, we obtain \begin{eqnarray*} \lambda (\xi) = \varepsilon_{i_{0}} \left[ \psi \psi^{''}-(m+n-1)(\psi^{'})^{2}- \psi \psi^{'} h^{'} \right]. \end{eqnarray*} Then, using the expression of $h^{'}$ given by equation \eqref{eq35}, we get \begin{eqnarray}\label{eq39} \lambda (\xi)=\varepsilon_{i_{0}} \left\lbrace \psi\psi^{''}-(m+n-1)(\psi^{'})^{2} - c\dfrac{\psi^{'}}{\psi} +(m+n-2)\dfrac{\psi^{'}}{\psi} \displaystyle \int \psi \psi^{''} d\xi \right\rbrace. \end{eqnarray} Thus, equation \eqref{eq39} is exactly the same as \eqref{eq38}. $\Box$ \vspace{.2in} \noindent \textbf{Proof of Theorem \ref{teo1.1}:} Let $(\mathbb{R}^{n}, g)$ be a pseudo-Euclidean space with coordinates $x=(x_{1}, ..., x_{n})$ and $g_{ij}=\varepsilon_{i}\delta_{ij}$. Since the metric $\overline{g}$ is a gradient Ricci almost soliton \eqref{eq1}, we have \begin{equation} \label{eq10} Ric_{\overline{g}}+ Hess_{\overline{g}}(h)=\lambda\overline{g}, \ \ \ \lambda \in C^{\infty} (\mathbb{R}^{n}).
\end{equation} Now, applying \eqref{eq9} and \eqref{eq17} to \eqref{eq10}, we get \begin{equation} \label{eq13} \left\{ \begin{array}{llc} (n-2) \psi_{,x_{i}x_{j}}+ \psi_{,x_{i}}h_{,x_{j}}+\psi_{,x_{j}}h_{,x_{i}}+\psi h_{,x_{i}x_{j}}=0 ,\quad i\neq j;\\ \\ \psi[(n-2)\psi_{,x_{i}x_{i}}+ h_{,x_{i}x_{i}}\psi+2\psi_{,x_{i}}h_{,x_{i}}]\\ + \varepsilon_{i}\big[ \psi \Delta_{g} \psi -(n-1)|\nabla_{g} \psi|^{2}-\psi\displaystyle\sum^{n}_{k=1}\varepsilon_{k}h_{,x_{k}}\psi_{,x_{k}}\big]= \varepsilon_{i}\lambda,\quad\mbox{for all}\quad i. \end{array} \right. \end{equation} Assume now that $h(r)$ and $\psi(r)$ are functions of $r$, where $r=\sum^{n}_{i=1}\varepsilon_{i}x_{i}^{2}$. Hence, we have $$\psi_{, x_{i}} = 2\varepsilon_{i}x_{i} \psi^{'}, \ \ \ \ \psi_{, x_{i} x_{j}} = 4\varepsilon_{i}\varepsilon_{j}x_{i} x_{j} \psi^{''}, \ \ \ \ \psi_{,x_{i} x_{i}} = 4x_{i}^{2} \psi^{''} + 2\varepsilon_{i}\psi^{'},$$ $$h_{,x_{i}} = 2\varepsilon_{i}x_{i}h^{'}, \ \ \ \ h_{, x_{i} x_{j}} = 4\varepsilon_{i}\varepsilon_{j}x_{i}x_{j}h^{''}, \ \ \ \ h_{,x_{i} x_{i}} = 4x_{i}^{2}h^{''} + 2\varepsilon_{i}h^{'}.$$ Furthermore, $$|\nabla_{g}\psi|^{2} = 4r(\psi^{'})^{2}, \ \ \ \ \triangle_{g}\psi=4r \psi^{''}+2n\psi^{'}.$$ Replacing these expressions in the first equation of \eqref{eq13}, we get $$4\varepsilon_{i}\varepsilon_{j}x_{i} x_{j} \left[ (n-2) \psi^{''}+2\psi^{'} h^{'}+\psi h^{''}\right]=0, \ \ \ \forall\, i\neq j. $$ Therefore, \begin{equation} \label{eq14} (n-2) \psi^{''}+2\psi^{'}h^{'}+\psi h^{''}=0 . \end{equation} In an analogous way, from the second equation of \eqref{eq13}, we get \begin{eqnarray*} &&\varepsilon_{i}[4(n-1)\psi\psi^{'}+4r\psi\psi^{''}-4(n-1)r(\psi^{'})^{2}+2\psi^{2}h^{'}-4r\psi\psi^{'} h^{'}]\\ &&+4x_{i}^{2}\psi \left[ (n-2) \psi^{''}+2\psi^{'} h^{'}+\psi h^{''} \right] =\varepsilon_{i}\lambda. \end{eqnarray*} Now, applying \eqref{eq14} to the above equation yields \begin{equation} \label{eq15} 4(n-1)\psi\psi^{'}+4r\psi\psi^{''}-4(n-1)r(\psi^{'})^{2}+2\psi^{2}h^{'}-4r\psi\psi^{'}h^{'}=\lambda. \end{equation} The converse can be easily verified. $\Box$ \vspace{.2in} \noindent \textbf{Proof of Corollary \ref{coro1.1}:} Note that we can rewrite \eqref{eq14} as $$ h^{''} +2\dfrac{\psi^{'}}{\psi}h^{'}+(n-2)\dfrac{\psi^{''}}{\psi}=0. $$ Taking $y=h^{'}$, this last equation is equivalent to the first-order linear differential equation $$y^{'} +2\dfrac{\psi^{'}}{\psi}y+(n-2)\dfrac{\psi^{''}}{\psi}=0. $$ Solving this ordinary differential equation we get $$y= \left[ c-(n-2)\int \psi\psi^{''} dr \right] \dfrac{1}{\psi^{2}}, $$ where $c$ is a constant. Thus, we obtain the first equation of \eqref{eq4}. Moreover, applying the first equation of \eqref{eq4} to \eqref{eq15}, we obtain the second equation of \eqref{eq4}.
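This solution can be confirmed symbolically; the following minimal \texttt{sympy} sketch (an illustration only, for a generic smooth $\psi(r)$, keeping the antiderivative unevaluated) checks that $y$ solves the linear equation above.
\begin{verbatim}
# Sketch: y = [c - (n-2)*Int(psi*psi'', r)]/psi^2 satisfies
# y' + 2*(psi'/psi)*y + (n-2)*psi''/psi = 0 for a generic psi(r).
import sympy as sp

r, c, n = sp.symbols('r c n')
psi = sp.Function('psi')(r)

I = sp.Integral(psi*psi.diff(r, 2), r)   # kept unevaluated
y = (c - (n - 2)*I)/psi**2

ode = y.diff(r) + 2*psi.diff(r)/psi*y + (n - 2)*psi.diff(r, 2)/psi
print(sp.simplify(ode))                  # expected output: 0
\end{verbatim}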
$\Box$ \vspace{.2in} \noindent \textbf{Proof of Theorem \ref{teo1.2}:} From \eqref{eq17}, \eqref{eq10} gives \begin{equation} \label{eq18} \left\{ \begin{array}{lll} &&\psi_{,x_{i}x_{j}}=-\dfrac{1}{n-2} \left( \psi_{,x_{i}}h_{,x_{j}}+\psi_{,x_{j}}h_{,x_{i}}+\psi h_{,x_{i}x_{j}} \right) ,\quad i\neq j;\\ \\ &&(n-2)\psi\psi_{,x_{i}x_{i}}+ \left[ \psi \Delta_{g} \psi-(n-1)|\nabla_{g} \psi|^{2} \right] \varepsilon_{i}\\ &+&2\psi\psi_{,x_{i}}h_{,x_{i}}+\psi^{2}h_{,x_{i}x_{i}}-\psi \varepsilon_{i}\displaystyle\sum^{n}_{k=1} \varepsilon_{k}h_{,x_{k}}\psi_{,x_{k}}= \lambda\varepsilon_{i}. \end{array} \right. \end{equation} From now on, $h(\xi)$ and $\psi(\xi)$ are functions of $\xi$, where $\xi=\displaystyle\sum^{n}_{i=1}\alpha_{i} x_{i}$ and $\varepsilon_{i_{0}}=\displaystyle\sum^{n}_{i=1}\varepsilon_{i}\alpha_{i}^{2}$. Hence, we have $$\psi_{, x_{i}} = \alpha_{i} \psi^{'}, \ \ \ \ \psi_{, x_{i} x_{j}} = \alpha_{i} \alpha_{j} \psi^{''}, \ \ \ \ \psi_{, x_{i} x_{i}} = \alpha_{i}^{2} \psi^{''},$$ $$h_{, x_{i}} = \alpha_{i} h^{'}, \ \ \ \ h_{, x_{i} x_{j}} = \alpha_{i} \alpha_{j} h^{''}, \ \ \ \ h_{, x_{i} x_{i}} = \alpha_{i}^{2}h^{''} .$$ Moreover, $$|\nabla_{g}\psi|^{2} = \left( \displaystyle\sum_{i=1}^{n} \varepsilon_{i} \alpha_{i}^{2} \right) (\psi^{'})^{2} = \varepsilon_{i_{0}} (\psi^{'})^{2}, \ \ \ \ \triangle_{g}\psi= \left(\displaystyle \sum_{i=1}^{n}\varepsilon_{i} \alpha_{i}^{2} \right) \psi^{''}=\varepsilon_{i_{0}} \psi^{''}.$$ Replacing these expressions in the first equation of \eqref{eq18}, we get $$\alpha_{i} \alpha_{j} \left[ (n-2) \psi^{''}+2\psi^{'} h^{'}+\psi h^{''}\right]=0, \ \ \ \forall\, i\neq j. $$ If there exists a pair $i\neq j$ such that $\alpha_{i} \alpha_{j}\neq 0$, then we obtain \begin{equation} \label{eq19} (n-2) \psi^{''}+2\psi^{'}h^{'}+\psi h^{''}=0, \end{equation} which is exactly the first equation of \eqref{eq5}. Likewise, considering the second equation of \eqref{eq18}, we get \begin{eqnarray} \label{x1} \psi \alpha_{i}^{2}\left[ (n-2) \psi^{''}+2\psi^{'} h^{'}+\psi h^{''} \right] +\varepsilon_{i_{0}} \left[ \psi\psi^{''}-(n-1) (\psi^{'})^{2} \right]\varepsilon_{i} -\varepsilon_{i_{0}} \psi\psi^{'}h^{'} \varepsilon_{i} =\lambda\varepsilon_{i}. \end{eqnarray} From the first equation of \eqref{eq5} and \eqref{x1}, we obtain the second equation of \eqref{eq5}. We still have to consider the case in which $\alpha_{i_{0}}=1$ and $\alpha_{i}=0$ for all $i\neq i_{0}$. In this case the first equation of \eqref{eq18} is trivially satisfied for all $i\neq j$. Considering the second equation of \eqref{eq18} for $i\neq i_{0}$, we get $$\varepsilon_{i_{0}} \left( \psi\psi^{''}-(n-1) (\psi^{'})^{2} - \psi\psi^{'}h^{'}\right) =\lambda,$$ and hence the second equation of \eqref{eq5} is satisfied. Considering $i = i_{0}$ in the second equation of \eqref{eq18}, we get that the first equation of \eqref{eq5} is also verified.
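The last assertion can be verified directly. A minimal \texttt{sympy} sketch (an illustration only, using $\varepsilon_{i_{0}}^{2}=1$) checks that the second equation of \eqref{eq18} for $i=i_{0}$, combined with the second equation of \eqref{eq5}, forces the first equation of \eqref{eq5}.
\begin{verbatim}
# Sketch: for a single nonzero alpha_{i_0} = 1, the i = i_0 case of the
# second equation of (eq18) minus lambda*eps (lambda from the second
# equation of (eq5)) equals psi*[(n-2)psi'' + 2psi'h' + psi*h''].
import sympy as sp

xi, n, e0 = sp.symbols('xi n e0')        # e0 = epsilon_{i_0}, so e0**2 = 1
psi, h = sp.Function('psi')(xi), sp.Function('h')(xi)

lam = e0*(psi*psi.diff(xi, 2) - (n - 1)*psi.diff(xi)**2
          - psi*psi.diff(xi)*h.diff(xi))
lhs = ((n - 2)*psi*psi.diff(xi, 2)
       + (e0*psi*psi.diff(xi, 2) - (n - 1)*e0*psi.diff(xi)**2)*e0
       + 2*psi*psi.diff(xi)*h.diff(xi) + psi**2*h.diff(xi, 2)
       - psi*e0*e0*h.diff(xi)*psi.diff(xi))
first_eq = psi*((n - 2)*psi.diff(xi, 2) + 2*psi.diff(xi)*h.diff(xi)
                + psi*h.diff(xi, 2))

print(sp.simplify(sp.expand(lhs - lam*e0).subs(e0**2, 1) - first_eq))  # 0
\end{verbatim}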
$\Box$ \vspace{.2in} \noindent \textbf{Proof of Corollary \ref{coro1.2}:} From the first equation of \eqref{eq5} we have \begin{equation*} h^{'}(\xi)= \dfrac{1}{\psi^{2}}\left[c-(n-2)\int \psi\psi^{''} d\xi \right]. \end{equation*} Therefore, applying the above equation to the second equation of \eqref{eq5}, we obtain the result. $\Box$ \vspace{.2in} \noindent \textbf{Proof of Corollary \ref{coro1.4}:} Consider the Euclidean space $(\mathbb{R}^{n}, g)$, $n\geq3$, and a metric $\overline{g}$ given by Theorem \ref{coro1.3}, Corollary \ref{coro1.1} or Corollary \ref{coro1.2}. If $0<|\psi (x)|\leq c$, then the metric $\overline{g}$ is complete, since there exists a constant $k >0$ such that $|v|_{\overline{g}}\geq k |v|$ for any vector $v \in \mathbb{R}^{n}$. Moreover, $M=(\mathbb{R}^{n},\bar{g})\times_{f} F^{m}$ is complete if, and only if, $(\mathbb{R}^{n},\bar{g})$ and $F^m$ are complete (see \cite{O'neil}). Then, the metrics obtained in Theorem \ref{coro1.3}, Corollary \ref{coro1.1} and Corollary \ref{coro1.2} are complete. $\Box$ \vspace{.2in} \end{document}
\begin{document} \title{Triangular decomposition of right coideal subalgebras} \author{V.K. Kharchenko} \address{FES-Cuautitl\'an, Universidad Nacional Aut\'onoma de M\'exico, Centro de Investigaciones Te\'oricas, Primero de Mayo s/n, Campo 1, Cuautitl\'an Izcalli, Estado de M\'exico, 54768, M\'EXICO} \email{[email protected]} \dedicatory{To Susan Montgomery --- famous mathematician and beautiful person} \thanks{The author was supported by PACIVE CONS-304, FES-C UNAM, M\'exico.} \subjclass{Primary 16W30, 16W35; Secondary 17B37.} \date{} \keywords{Hopf algebra, coideal subalgebra, PBW-basis.} \begin{abstract} \small Let $\mathfrak g$ be a Kac-Moody algebra. We show that every homogeneous right coideal subalgebra $U$ of the multiparameter version of the quantized universal enveloping algebra $U_q(\mathfrak{g})$, $q^m\neq 1$, containing all group-like elements has a triangular decomposition $U=U^-\otimes _{{\bf k}[F]} {\bf k}[H] \otimes _{{\bf k}[G]} U^+$, where $U^-$ and $ U^+$ are right coideal subalgebras of the negative and positive quantum Borel subalgebras. However, if $ U_1$ and $ U_2$ are arbitrary right coideal subalgebras of, respectively, the positive and negative quantum Borel subalgebras, then the triangular composition $ U_2\otimes _{{\bf k}[F]} {\bf k}[H]\otimes _{{\bf k}[G]} U_1$ is a right coideal but not necessarily a subalgebra. Using a recent combinatorial classification of right coideal subalgebras of the quantum Borel algebra $U_q^+(\mathfrak{so}_{2n+1}),$ we find a necessary condition for the triangular composition to be a right coideal subalgebra of $U_q(\mathfrak{so}_{2n+1}).$ If $q$ has a finite multiplicative order $t>4,$ similar results remain valid for homogeneous right coideal subalgebras of the multiparameter version of the small Lusztig quantum groups $u_q({\mathfrak g}),$ $u_q(\mathfrak{so}_{2n+1}).$ \end{abstract} \maketitle \section{Introduction} It is well-known that the quantized universal enveloping algebras $U_q({\mathfrak g})$ of the Kac-Moody algebras have a so-called triangular decomposition. In this paper we study when a right coideal subalgebra of $U_q({\mathfrak g})$ also has a triangular decomposition. In fact the triangular decomposition holds not only for $U_q({\mathfrak g}),$ but also for a large class of character Hopf algebras $\mathfrak A$ having positive and negative skew-primitive generators connected by relations of the type $x_ix_j^--p_{ji}x_j^-x_i=\delta _i^j(1-g_if_i),$ see \cite[Proposition 3.4]{KL}. In Theorem \ref{raz2} we show that a right coideal subalgebra $U$ of $\mathfrak A$ containing all group-like elements has the required triangular decomposition provided that $U$ is homogeneous with respect to the degree function $D$ under the identification $D(x^-_i)=-D(x_i).$ Interestingly, if ${\mathfrak A}=U_q({\mathfrak g}),$ $q^t\neq 1$, with $\mathfrak g$ defined by a Cartan matrix of finite type, then every right coideal subalgebra containing all group-like elements is homogeneous with respect to the above degree function, \cite[Corollary 3.3]{KL}. Hence in Corollary \ref{fin2}, applying a recent Heckenberger---Schneider theorem, \cite[Theorem 7.3]{HS}, we see that for a semisimple complex Lie algebra $\mathfrak g$ the quantized universal enveloping algebra $U_q(\mathfrak{g}),$ $q^t\neq 1$, has no more than $|W|^2$ different right coideal subalgebras containing the coradical.
Here $W$ is the Weyl group of $\mathfrak g.$ We should stress that when $U^{\pm }$ run through the sets of right coideal subalgebras of the quantum Borel subalgebras, the triangular composition $ U^-\otimes _{{\bf k}[F]} {\bf k}[H]\otimes _{{\bf k}[G]} U^+$ is a right coideal but not always a subalgebra. For example, in \cite{KL} the numbers $C_n$ of pairs that define right coideal subalgebras of $U_q({\mathfrak g})$, where ${\mathfrak g}={\mathfrak sl}_{n+1}$ is the simple Lie algebra of type $A_n$, are given. Using these numbers we can find the probabilities $p_n$ for a pair $U^-, U^+$ to define a right coideal subalgebra of $U_q({\mathfrak g}),$ ${\mathfrak g}={\mathfrak sl}_{n+1}$: $$ p_2=72.3\% ; \ p_3=43.8\% ;\ p_4=23.4\% ;\ p_5=11.4\% ; \ p_6=5.1\% ;\ p_7=2.2\% . $$ If $\mathfrak g$ is the simple Lie algebra of type $G_2$ then the probability equals $60/144=41.7\% ,$ see B. Pogorelsky \cite{Pog, Pog1}. The next goal of the paper is to prove a necessary condition for two right coideal subalgebras of the quantum Borel subalgebras to define, by means of the triangular composition, a right coideal subalgebra of $U_q({\mathfrak g})$ (respectively of $u_q(\mathfrak{g})$) when ${\mathfrak g }={\mathfrak so}_{2n+1}$ is the simple Lie algebra of type $B_n.$ In the fourth and fifth sections we follow the classification given in \cite{Kh08} to recall the basic properties of right coideal subalgebras of the quantum Borel algebras $U_q^{\pm }({\mathfrak so}_{2n+1}).$ In particular we derive the following ``integrability'' condition: if all partial derivatives of a homogeneous polynomial $f$ in the positive generators of an admissible degree belong to a right coideal subalgebra $U\supseteq G$ of $U_q^{+}({\mathfrak so}_{2n+1}),$ then $f$ itself belongs to $U,$ see Corollary \ref{lat2}. In Section 6 we introduce the elements $\Phi^{S}(k,m)$ defined by the sets $S\subseteq [1,2n]$ and the ordered pairs of indices $1\leq k\leq m\leq 2n,$ see (\ref{dhs}). We display the element $\Phi ^{S}(k,m)$ schematically as a sequence of black and white points labeled by the numbers $k-1,$ $k,$ $k+1, \ldots $ $m-1,$ $m,$ where the first point is always white and the last one is always black, while an intermediate point labeled by $i$ is black if and only if $i\in S:$ \begin{equation} \stackrel{k-1}{\circ } \ \ \stackrel{k}{\circ } \ \ \stackrel{k+1}{\circ } \ \ \stackrel{k+2}{\bullet }\ \ \ \stackrel{k+3}{\circ }\ \cdots \ \ \stackrel{m-2}{\bullet } \ \ \stackrel{m-1}{\circ }\ \ \stackrel{m}{\bullet }\ . \label{grbi} \end{equation} These elements are very important since every right coideal subalgebra $U\supseteq G$ of the quantum Borel algebra is generated as an algebra by $G$ and the elements of that form, see \cite[Corollary 5.7]{Kh08}. Moreover $U$ is uniquely defined by its {\it root sequence} $\theta =(\theta _1,\theta_2,\ldots ,\theta _n).$ The root sequence satisfies $0\leq \theta_i\leq 2n-2i+1,$ and each sequence satisfying these conditions is a root sequence for some $U$. There exists a constructive algorithm that allows one to find the generators $\Phi ^{S}(k,m)$ once the sequence $\theta $ is given, see \cite[Definition 10.1 and Eq. (10.6)]{Kh08}. In particular one may construct all schemes (\ref{grbi}) for the generators.
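As an aside, the bound $0\leq \theta_i\leq 2n-2i+1$ already determines the total count: there are $\prod_{i=1}^{n}(2n-2i+2)=2^{n}n!$ admissible root sequences, which equals the order $|W|$ of the Weyl group of type $B_n$ (consistently with the bound $|W|^{2}$ recalled in the introduction). A short Python sketch (an illustration of this count only, not of the algorithm of \cite{Kh08}) enumerates them.
\begin{verbatim}
# Enumerate root sequences (theta_1,...,theta_n), 0 <= theta_i <= 2n-2i+1;
# by the classification cited above, each of them corresponds to exactly
# one right coideal subalgebra U >= G of the quantum Borel algebra, type B_n.
from itertools import product
from math import factorial

def root_sequences(n):
    return list(product(*[range(2*n - 2*i + 2) for i in range(1, n + 1)]))

for n in range(1, 6):
    count = len(root_sequences(n))
    assert count == 2**n * factorial(n)   # |W(B_n)| = 2^n * n!
    print(n, count)
\end{verbatim}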
The minimal generators $\Phi ^{S}(k,m)$ (the generators that do not belong to the subalgebra generated by the other generators of that form) satisfy the important duality relation $\Phi ^{S}(k,m)=\alpha \, \Phi ^{R}(\psi (m),\psi (k)),$ $\alpha \neq 0,$ where by definition $\psi (i)=2n-i+1,$ while $R$ is the complement of $\{ \psi (s)-1\, |\, s\in S\} $ with respect to the interval $[\psi (m),\psi (k)),$ see Proposition \ref{xn0}. In particular, to every minimal generator $\Phi ^{S}(k,m)$ there correspond two essentially different schemes (\ref{grbi}). Correspondingly, if $\Phi ^{S}(k,m)$ and $\Phi ^{T}_-(i,j)$ are minimal generators for given right coideal subalgebras $U_1\subseteq U_q^+({\mathfrak so}_{2n+1})$ and $U_2\subseteq U_q^-({\mathfrak so}_{2n+1}),$ then we have four different diagrams of the form \begin{equation} \begin{matrix} S:\stackrel{k-1}{\circ } \ & \cdots \ & \stackrel{i-1}{\bullet } \ & \stackrel{i}{\bullet }\ \ & \stackrel{i+1}{\circ }\ & \cdots & \ & \stackrel{m}{\bullet } \ & \ & \stackrel{j}{\cdot } \cr T:\ \ \ \ \ & \ \ & \circ \ & \circ \ \ & \bullet \ & \cdots & \ & \bullet \ & \cdots \ & \bullet \end{matrix} \ \ . \label{gt} \end{equation} In Theorem \ref{bale} we prove the main result of the paper: if the triangular composition $ U_2\otimes _{{\bf k}[F]} {\bf k}[H]\otimes _{{\bf k}[G]} U_1$ is a subalgebra, then for each pair of minimal generators one of the following two options is fulfilled: \noindent a) none of the four possible diagrams (\ref{gt}) has a fragment of the form $$ \begin{matrix} \stackrel{t}{\circ } \ & \cdots & \stackrel{s}{\bullet } \cr \circ \ & \cdots & \bullet \end{matrix}\ \ ; $$ \noindent b) one of the four possible diagrams (\ref{gt}) has the form $$ \begin{matrix} \stackrel{k-1}{\circ } \ & \cdots & \circ & \cdots & \bullet & \cdots & \stackrel{m}{\bullet } \cr \circ \ & \cdots & \bullet & \cdots & \circ & \cdots & \bullet \end{matrix}\ \ , $$ \noindent where none of the intermediate columns has points of the same color. Certainly $U_q({\mathfrak sl}_n)$ is a Hopf subalgebra of $U_q({\mathfrak so}_{2n+1}).$ If we apply this condition to right coideal subalgebras of $U_q({\mathfrak sl}_{n}),$ we get precisely the necessary and sufficient condition given in \cite[Theorem 11.1]{KL}. Hence we have reason to believe that this necessary condition is also sufficient for the triangular composition to define a right coideal subalgebra of $U_q({\mathfrak so}_{2n+1}).$ Finally, we would like to stress that right coideal subalgebras that do not admit a triangular decomposition (inhomogeneous ones, or ones not including the coradical) are also of interest due to their relations with quantum symmetric pairs, quantum Harish-Chandra modules, and quantum symmetric spaces. Many of the (left) coideal subalgebras studied by M. Noumi and G. Letzter, see the survey \cite{Let}, do not admit a triangular decomposition.
\section{Bracket technique} Let $X=$ $\{ x_1, x_2,\ldots, x_n\} $ be {\it quantum variables}; that is, associated with each letter $x_i$ are an element $g_i$ of a fixed Abelian group $G$ and a character $\chi ^i:G\rightarrow {\bf k}^*.$ For every word $w$ in $X$ let $g_w$ or gr$(w)$ denote an element of $G$ that appears from $w$ by replacing each $x_i$ with $g_i.$ In the same way $\chi ^w$ denotes a character that appears from $w$ by replacing each $x_i$ with $\chi ^i.$ Let $G\langle X\rangle $ denote the skew group algebra generated by $G$ and {\bf k}$\langle X\rangle $ with the commutation rules $x_ig=\chi ^i(g)gx_i,$ or equivalently $wg=\chi ^w(g)gw,$ where $w$ is an arbitrary word in $X.$ Certainly $G\langle X\rangle $ is spanned by the products $gw,$ where $g\in G$ and $w$ runs through the set of words in $X.$ The algebra $G\langle X\rangle $ has natural gradings by the group $G$ and by the group $G^*$ of characters. More precisely the basis element $gw$ belongs to the $g {\rm gr}(w)$-homogeneous component with repect to the grading by $G$ and it belongs to the $\chi ^w$-homogeneous component with respect to the grading by $G^*.$ Let $u$ be a homogeneous element with respect to the grading by $G^*,$ and $v$ be a homogeneous element with respect to the grading by $G.$ We define a skew commutator by the formula \begin{equation} [u,v]=uv-\chi ^u(g_v) vu, \label{sqo} \end{equation} where $u$ belongs to the $\chi ^u$-homogeneous component, while $v$ belongs to the $g_v$-homogeneous component. Sometimes for short we use the notation $\chi ^u(g_v)=p_{uv}=p(u,v).$ Of course $p(u,v)$ is a bimultiplicative map: \begin{equation} p(u,v)p(u,t)=p(u, vt), \ \ p(u,v)p(t,v)=p(ut,v). \label{sqot} \end{equation} In particular the form $p(-,-)$ is completely defined by the {\it quantification matrix} $||p_{ij}||,$ where $p_{ij}=\chi ^{i}(g_{j}).$ The brackets satisfy the following Jacobi identities for homogeneous (with respect to the both gradings) elements: \begin{equation} [[u, v],w]=[u,[v,w]]+p_{wv}^{-1}[[u,w],v]+(p_{vw}-p_{wv}^{-1})[u,w]\cdot v. \label{jak1} \end{equation} \begin{equation} [[u, v],w]=[u,[v,w]]-p_{vu}^{-1}[v,[u,w]]+(p_{vu}^{-1}-p_{uv})v\cdot [u,w]. \label{ja} \end{equation} These identities can be easily verified by direct computations using (\ref{sqo}), (\ref{sqot}). In particular the following conditional identities are valid (both in $G \langle X\rangle $ and in all of its homomorphic images) \begin{equation} [[u, v],w]=[u,[v,w]],\hbox{ provided that } [u,w]=0. \label{jak3} \end{equation} \begin{equation} [u,[v,w]]=p_{uv}[v,[u,w]],\hbox{ provided that } [u,v]=0 \hbox{ and }p_{uv}p_{vu}=1. \label{jak4} \end{equation} By an evident induction on the length these conditional identities admit the following generalization, see \cite[Lemma 2.2]{KL}. \begin{lemma} Let $y_1,$ $y_2,$ $\ldots ,$ $y_m$ be linear combinations of words homogeneous in each $x_k\in X.$ If $[y_i,y_j]=0,$ $1\leq i<j-1<m,$ then the bracketed polynomial $[y_1y_2\ldots y_m]$ is independent of the precise alignment of brackets: \begin{equation} [y_1y_2\ldots y_m]=[[y_1y_2\ldots y_s],[y_{s+1}y_{s+2}\ldots y_m]], \ 1\leq s<m. \label{ind} \end{equation} \label{indle} \end{lemma} The brackets are related to the product by the following ad-identities \begin{equation} [u\cdot v,w]=p_{vw}[u,w]\cdot v+u\cdot [v,w], \label{br1f} \end{equation} \begin{equation} [u,v\cdot w]=[u,v]\cdot w+p_{uv}v\cdot [u,w]. \label{br1} \end{equation} In particular, if $[u,w]=0,$ we have \begin{equation} [u\cdot v,w]=u\cdot [v,w]. 
\label{br2} \end{equation} The antisymmetry identity takes the form \begin{equation} [u,v]=-p_{uv}[v,u] \ \ \hbox{ provided that } \ \ p_{uv}p_{vu}=1. \label{bri} \end{equation} Further we have \begin{equation} [u, gv]=u\cdot gv-\chi ^u(gg_v)gv\cdot u=\chi ^u(g)\, g[u,v], \ \ \ g\in G; \label{cuq1} \end{equation} \begin{equation} [gu, v]=gu\cdot v-\chi ^u(g_v)v\cdot gu=g(uv-p_{uv}\chi ^v(g)\, vu), \label{cuq} \end{equation} or in a bracket form \begin{equation} [gu,v]=g[u,v]+p_{uv}(1-\chi^v(g))\, g\, v\cdot u, \ \ \ g\in G. \label{cuq2} \end{equation} \begin{equation} [gu,v]=\chi^v(g)\, g[u,v]+(1-\chi^v(g))\, g\, u\cdot v, \ \ \ g\in G. \label{cuq21} \end{equation} \noindent {\bf Quantization of variables.} Let $p_{ij},$ $1\leq i,j\leq n$ be a set of parameters, $0\neq p_{ij}\in {\bf k}.$ Let $g_j$ be the linear transformation $g_j:x_i\rightarrow p_{ij}x_i$ of the linear space spanned by a set of variables $X=\{ x_1, x_2, \ldots , x_n\} .$ Let $\chi ^i$ denote a character $\chi^i :g_j\rightarrow p_{ij}$ of the group $G$ generated by $g_i,$ $1\leq i\leq n.$ We may consider each $x_i$ as a quantum variable with parameters $g_i,$ $\chi ^i.$ \noindent {\bf Algebra ${\mathfrak F}_n.$} Let $X^-=$ $\{ x^-_1, x^-_2,\ldots , x^-_n\} $ be a new set of variables. We consider $X^-$ as a set of quantum variables quantized by the parameters $p_{ji}^{-1},$ $1\leq i,j\leq n.$ More precisely we have an Abelian group $F$ generated by elements $f_1, f_2, \ldots ,f_n$ acting on the linear space spanned by $X^-$ so that $(x_i^-)^{f_j}=p_{ji}^{-1}x_i^-,$ where $p_{ij}$ are the same parameters that define the quantization of the variables $X.$ In this case gr$(x_i^-)=f_i,$ $\chi ^{x_i^-}(f_j)=p_{ji}^{-1}.$ We may extend the characters $\chi ^i $ on $G\times F$ in the following way \begin{equation} \chi ^i(f_j)\stackrel{df}{=}p_{ji}=\chi ^j(g_i). \label{shar1} \end{equation} Indeed, if $\prod_k f_k^{m_k}=1$ in $F,$ then application to $x_i^-$ implies $\prod_k p_{ki}^{-m_k}=1,$ hence $\chi ^i(\prod _k f_k^{m_k})=\prod p_{ki}^{m_k}=1.$ In the same way we may extend the characters $\chi ^{x_i^-}$ on $G\times F$ so that \begin{equation} \chi ^{x_i^-}=(\chi ^i)^{-1} \ \ \hbox{as characters of } G\times F. \label{shar2} \end{equation} In what follows $H$ denotes a quotient group $(G\times F)/N,$ where $N$ is an arbitrary subgroup with $\chi ^{i}(N)=1,$ $1\leq i\leq n.$ For example, if the quantification parameters satisfy additional symmetry conditions $p_{ij}=p_{ji},$ $1\leq i,j\leq n,$ (as this is a case for the original Drinfeld-Jimbo and Lusztig quantifications) then $\chi ^i(g_k^{-1}f_k)=p_{ik}^{-1}p_{ki}=1,$ and we may take $N$ to be the subgroup generated by $g_k^{-1}f_k,$ $1\leq k\leq n. $ In this particular case the groups $H,$ $G,$ $F$ may be identified. In the general case without loss of generality we may suppose that $G,F\subseteq H.$ Certainly $\chi ^i, 1\leq i\leq n$ are characters of $H$ and $H$ still acts on the space spanned by $X\cup X^-$ by means of these characters and their inverses. 
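The bracket identities above are formal consequences of (\ref{sqo}) and (\ref{sqot}), so they can be tested mechanically in the free algebra. The following minimal Python sketch (an illustration only: elements of ${\bf k}\langle X\rangle $ are stored as dictionaries word $\mapsto$ coefficient, and the parameters $p_{ij}$ are random nonzero reals) verifies the Jacobi identity (\ref{jak1}) on single generators.
\begin{verbatim}
# Verify [[u,v],w] = [u,[v,w]] + p_{wv}^{-1}[[u,w],v] + (p_{vw}-p_{wv}^{-1})[u,w]v
# for single generators u, v, w (letters 0, 1, 2 below) with a random bicharacter.
import random
random.seed(1)

n = 3
P = [[random.uniform(0.5, 2.0) for _ in range(n)] for _ in range(n)]  # p_{ij}

def p(u, v):                       # bicharacter on words (tuples of letters)
    out = 1.0
    for i in u:
        for j in v:
            out *= P[i][j]
    return out

def mul(a, b):                     # product in k<X>: concatenation of words
    out = {}
    for u, cu in a.items():
        for v, cv in b.items():
            out[u + v] = out.get(u + v, 0.0) + cu*cv
    return out

def scale(a, s):
    return {u: s*c for u, c in a.items()}

def add(*els):
    out = {}
    for a in els:
        for u, c in a.items():
            out[u] = out.get(u, 0.0) + c
    return out

def br(a, b, da, db):              # [a,b] = ab - p(da,db) ba; da, db = degrees
    return add(mul(a, b), scale(mul(b, a), -p(da, db)))

x = lambda i: {(i,): 1.0}
u, v, w = (0,), (1,), (2,)

lhs = br(br(x(0), x(1), u, v), x(2), u + v, w)
rhs = add(br(x(0), br(x(1), x(2), v, w), u, v + w),
          scale(br(br(x(0), x(2), u, w), x(1), u + w, v), 1/p(w, v)),
          scale(mul(br(x(0), x(2), u, w), x(1)), p(v, w) - 1/p(w, v)))

print(max(abs(c) for c in add(lhs, scale(rhs, -1.0)).values()))  # ~ 0
\end{verbatim}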
We define the algebra ${\mathfrak F}_n$ as a quotient of $H\langle X\cup X^-\rangle $ by the following relations \begin{equation} [x_i, x_j^-]=\delta_i^j(1-g_if_i), \ \ \ \ \ 1\leq i,j\leq n, \label{rela3} \end{equation} where the brackets are defined on $H\langle X\cup X^-\rangle $ by the above quantization of the variables $X\cup X^-$; that is, $[x_i,x_j^-]=x_ix_j^--p_{ji}x_j^-x_i,$ for $\chi ^{i}(f_j)=p_{ji}.$ We go ahead with a number of useful notes for calculation of the skew commutators in ${\mathfrak F}_n.$ If $u$ is a word in $X,$ then $u^-$ denotes a word in $X^-$ that appears from $u$ under the substitution $x_i\leftarrow x_i^-.$ We have $p(v,w^-)=\chi ^v(f_w)=p(w,v),$ while $p(w^-,v)=(\chi ^w)^{-1}(g_v)=p(w,v)^{-1}.$ Thus $p(v,w^-)p(w^-,v)=1.$ Therefore the Jacobi and antisymmetry identities (see, (\ref{jak1}), (\ref{bri})) take up their original ``colored" form: \begin{equation} [[u,v],w^-]=[u,[v,w^-]]+p_{wv}[[u,w^-],v]; \label{uno} \end{equation} \begin{equation} [u^-,w]=-p_{uw}^{-1}[w,u^-]. \label{dos} \end{equation} In the same way \begin{equation} [[u^-,v^-],w]=[u^-,[v^-,w]]+p_{vw}^{-1}[[u^-,w],v^-]. \label{tres} \end{equation} Using (\ref{ja}) we have also \begin{equation} [u,[v^-,w^-]]=[[u,v^-],w^-]+p_{vu}[v^-,[u,w^-]]. \label{cua} \end{equation} If we put $w^-\leftarrow [w^-,t^-]$ in (\ref{uno}) we have $$ [[u,v],[w^-,t^-]]=[u,[v, [w^-,t^-]]]+p_{wt,v}[[u,[w^-,t^-]],v]. $$ Using (\ref{cua}) we get $$ [[u,v],[w^-,t^-]]= \hbox{\big [}u,[[v,w^-],t^-]\hbox{\big ]}+p_{w,v}\hbox{\big [}u,[w^-,[v,t^-]]\hbox{\big ]} $$ \begin{equation} +p_{wt,v}[[u,[w^-,t^-]],v]. \label{fo} \end{equation} Using once more (\ref{cua}) we get $$ [[u,v],[w^-,t^-]]= \hbox{\big [}u,[[v,w^-],t^-]\hbox{\big ]}+p_{w,v}\hbox{\big [}u,[w^-,[v,t^-]]\hbox{\big ]} $$ \begin{equation} +p_{wt,v}\hbox{\big [}[[u,w^-],t^-],v\hbox{\big ]}+p_{wt,v}p_{w,u}\hbox{\big [}[w^-,[u,t^-]],v\hbox{\big ]}. \label{fo1} \end{equation} We must stress that relations (\ref{rela3}) are homogeneous with respect to the grading by the character group $H^*,$ but they are not homogeneous with respect to the grading by $H.$ Therefore once we apply relations (\ref{rela3}), or other ``inhomogeneous in $H$" relations, we have to develop the bracket to its explicit form as soon as the inhomogeneous substitution applies to the right factor of the bracket. For example we have \begin{equation} [u,[x_i,x_i^-]]=u(1-g_if_i)-\chi ^u(g_if_i)(1-g_if_i)u=(1-\chi^u(g_if_i))u, \label{cuq3} \end{equation} but not $[u,[x_i,x_i^-]]=[u,1-g_if_i]=[u,1]-[u,g_if_i]=0.$ In fact here the bracket $[u,1-g_if_i]$ is undefined since the right factor $1-g_if_i$ is inhomogeneous in $H$ (unless $g_if_i=1$). At the same time \begin{equation} [[x_i,x_i^-],u]=(1-g_if_i)u-u(1-g_if_i)=(\chi^u(g_if_i)-1)\, g_if_i\cdot u, \label{cuq4} \end{equation} and $[[x_i,x_i^-],u]=[1-g_if_i, u]$ $=[1,u]-[g_if_i,u]$ is valid since the inhomogeneous substitution has been applied to the left factor in the brackets. \begin{lemma} Let $X_1,$ $X_2$ be subsets of $X.$ Suppose that $u$ is a word in $X_1$ and $v$ is a word in $X_2.$ If $X_1\cap X_2=\emptyset ,$ then in the algebra ${\mathfrak F}_n$ we have $[u,v^-]=0.$ \label{suu} \end{lemma} \begin{proof} Defining relations (\ref{rela3}) imply $[x_i,x_j^-]=0,$ $x_i\in X_1,$ $x_j\in X_2.$ Ad-identities (\ref{br1f}) and (\ref{br1}) with evident induction prove the statement. 
\end{proof}
\begin{lemma}
In the algebra ${\mathfrak F}_n$ for any pair $(i,j)$ with $1\leq i,j\leq n,$ $i\neq j$ we have
$$
\hbox{\rm \Large [}[x_i,x_j],[x_j^-,x_i^-]\hbox{\rm \Large ]}=(1-p_{ij}p_{ji}) (1-g_ig_jf_if_j).
$$
\label{suu1}
\end{lemma}
\begin{proof}
Without loss of generality we may assume $i=1,$ $j=2.$ Since $[x_1,x_2^-]=[x_2,x_1^-]=0,$ identity (\ref{fo1}) implies
\begin{equation}
[[x_1,x_2],[x_2^-,x_1^-]]=[x_1,[[x_2,x_2^-],x_1^-]]+p(x_2x_1,x_2)p(x_2, x_1)[[x_2^-,[x_1,x_1^-]],x_2].
\label{sh1}
\end{equation}
Using (\ref{cuq4}) and then (\ref{cuq1}) we get
$$
[x_1,[[x_2,x_2^-],x_1^-]]=((\chi^1)^{-1}(g_2f_2)-1)\chi^1(g_2f_2)g_2f_2[x_1,x_1^-]= (1-p_{12}p_{21})g_2f_2 (1-g_1f_1).
$$
Taking into account (\ref{cuq3}), we have
$$
[x_2^-,[x_1,x_1^-]]=(1-(\chi^2)^{-1}(g_1f_1))x^-_2=(1-p_{21}^{-1}p_{12}^{-1})x_2^-.
$$
Antisymmetry relation (\ref{dos}) implies $[x_2^-,x_2]=-p_{22}^{-1}[x_2,x_2^-].$ Hence
$$
[[x_2^-,[x_1,x_1^-]],x_2]=(1-p_{21}^{-1}p_{12}^{-1})(-p_{22}^{-1})(1-g_2f_2).
$$
In (\ref{sh1}) we have $p(x_2x_1,x_2)p(x_2, x_1)=p_{22}p_{12}p_{21},$ hence
$$
[[x_1,x_2],[x_2^-,x_1^-]]=(1-p_{12}p_{21})(g_2f_2-g_1g_2f_1f_2+1-g_2f_2),
$$
which is required.
\end{proof}
\begin{lemma}
In the algebra ${\mathfrak F}_n$ for any pair $(i,j)$ with $1\leq i,j\leq n,$ $i\neq j$ we have
$$
\hbox{\bf \Large [}[[x_i,x_j],x_j],[x_j^-,[x_j^-,x_i^-]]\hbox{\bf \Large ]}= \varepsilon \, (1-g_ig_j^2f_if_j^2),
$$
where $\varepsilon =(1+p_{jj})(1-p_{ij}p_{ji})(1-p_{ij}p_{ji}p_{jj}).$
\label{suu2}
\end{lemma}
\begin{proof}
Again, without loss of generality we may assume $i=1,$ $j=2.$ Let us put $u\leftarrow [x_1,x_2],$ $v\leftarrow x_2,$ $w^-\leftarrow x_2^-, $ $t^-\leftarrow [x_2^-,x_1^-]$ in (\ref{fo1}). We have $[v,w^-]=[x_2,x_2^-]=1-g_2f_2.$ By means of (\ref{cuq4}) we get $[[v,w^-],t^-]=(\chi ^{t^-}(g_2f_2)-1)g_2f_2\cdot t^-.$ Here $\chi ^{t^-}(g_2f_2)=p_{22}^{-2}p_{12}^{-1}p_{21}^{-1}.$ Using first (\ref{cuq1}) and then Lemma \ref{suu1} we get
\begin{equation}
[u,[[v,w^-],t^-]]=\varepsilon _1\, g_2f_2(1-g_1g_2f_1f_2),
\label{ssh1}
\end{equation}
where
$$
\varepsilon _1=(p_{22}^{-2}p_{12}^{-1}p_{21}^{-1}-1)\chi ^u(g_2f_2)(1-p_{12}p_{21})= (1-p_{12}p_{21}p_{22}^2)(1-p_{12}p_{21}).
$$
Further, $[v,t^-]=[x_2,[x_2^-,x_1^-]].$ By (\ref{jak3}) we have $[x_2,[x_2^-,x_1^-]]=[[x_2,x_2^-],x_1^-].$ Hence (\ref{cuq4}) implies $[v,t^-]=((\chi ^1)^{-1}(g_2f_2)-1)g_2f_2\cdot x_1^-.$ By (\ref{cuq1}) we get $[w^-,[v,t^-]]=p_{22}^{-2}(p_{12}^{-1}p_{21}^{-1}-1)g_2f_2\cdot [x_2^-,x_1^-].$ Using first (\ref{cuq1}) and then Lemma \ref{suu1} we get
\begin{equation}
p_{w,v}[u,[w^-,[v,t^-]]]=\varepsilon _2\, g_2f_2(1-g_1g_2f_1f_2),
\label{ssh2}
\end{equation}
where
$$
\varepsilon _2= p_{22}\cdot p_{22}^{-2}(p_{12}^{-1}p_{21}^{-1}-1)\cdot \chi ^u(g_2f_2)\cdot (1-p_{12}p_{21})= p_{22}(1-p_{12}p_{21})^2.
$$
In the same way $[u,w^-]=[[x_1,x_2],x_2^-]=(1-\chi ^1(g_2f_2))\cdot x_1$ due to (\ref{jak3}) and (\ref{cuq3}). Further $[[u,w^-],t^-]=(1-p_{12}p_{21})[x_1, [x_2^-,x_1^-]].$ Using (\ref{cua}) we have $[x_1, [x_2^-,x_1^-]]=p_{21}[x_2^-,[x_1,x_1^-]].$ Hence (\ref{cuq3}) allows us to find $[[u,w^-],t^-]=(1-p_{12}p_{21})p_{21}(1-p_{21}^{-1}p_{12}^{-1})\cdot x_2^-.$ This implies
$$
[[[u,w^-],t^-],v]=(1-p_{12}p_{21})(p_{21}-p_{12}^{-1})[x_2^-,x_2].
$$
Since $[x_2^-,x_2]=-p_{22}^{-1}[x_2,x_2^-],$ and $[x_2,x_2^-]=1-g_2f_2,$ we get
\begin{equation}
p_{wt,v}[[[u,w^-],t^-],v]=\varepsilon _3\, (1-g_2f_2),
\label{ssh3}
\end{equation}
where
$$
\varepsilon _3=p_{22}^2p_{12}\cdot (1-p_{12}p_{21})(p_{21}-p_{12}^{-1})\cdot (-p_{22}^{-1}) =\varepsilon _2.
$$
Finally, by Lemma \ref{suu1} we have $[u,t^-]=(1-p_{12}p_{21})(1-g_1g_2f_1f_2).$ If we apply (\ref{cuq3}) with $x_i\leftarrow u,$ $x_i^-\leftarrow t^-,$ then $ [w^-,[u,t^-]] =(1-p_{12}p_{21})(1-(\chi ^2)^{-1}(g_1g_2f_1f_2))\cdot x_2^-. $ Hence
$$
[[w^-,[u,t^-]],v]=(1-p_{12}p_{21})(1-p_{22}^{-2}p_{12}^{-1}p_{21}^{-1})[x_2^-,x_2].
$$
Here $[x_2^-,x_2]=-p_{22}^{-1}(1-g_2f_2).$ Since $p_{wt,v}p_{w,u}=p_{22}^2p_{12}p_{21}p_{22},$ we may write
\begin{equation}
p_{wt,v}p_{w,u}[[w^-,[u,t^-]],v]=\varepsilon _4\, (1-g_2f_2),
\label{ssh4}
\end{equation}
where
$$
\varepsilon _4=p_{22}^2p_{12}p_{21}p_{22}\cdot (1-p_{12}p_{21})(1-p_{22}^{-2}p_{12}^{-1}p_{21}^{-1}) \cdot (-p_{22}^{-1}) =\varepsilon _1.
$$
Now we see that the sum of (\ref{ssh1}) and (\ref{ssh4}) equals $\varepsilon _1\, (1-g_1g_2^2f_1f_2^2),$ while the sum of (\ref{ssh2}) and (\ref{ssh3}) equals $\varepsilon _2\, (1-g_1g_2^2f_1f_2^2).$ It remains to check that $\varepsilon _1+\varepsilon _2=\varepsilon .$
\end{proof}
The algebra ${\mathfrak F}_n$ has a structure of Hopf algebra with the following coproduct:
\begin{equation}
\Delta (x_i)=x_i\otimes 1+g_i\otimes x_i,\ \ \ \Delta (x_i^-)=x_i^-\otimes 1+f_i\otimes x_i^-.
\label{AIcm}
\end{equation}
\begin{equation}
\Delta (g_i)=g_i\otimes g_i,\ \ \ \Delta (f_i)=f_i\otimes f_i.
\label{AIcm1}
\end{equation}
In this case $G\langle X\rangle $ and $F\langle X^-\rangle $ are Hopf subalgebras of ${\mathfrak F}_n.$ The free algebra ${\bf k}\langle X\rangle $ has a coordinate differential calculus
\begin{equation}
\partial_i(x_j)=\delta _i^j,\ \ \partial _i (uv)=\partial _i(u)\cdot v+\chi ^u(g_i)u\cdot \partial _i(v).
\label{defdif}
\end{equation}
The partial derivatives connect the calculus with the coproduct on $G\langle X\rangle$ via
\begin{equation}
\Delta (u)\equiv u\otimes 1+\sum _ig_i\partial_i(u)\otimes x_i\ \ \ (\hbox{mod }G\langle X\rangle \otimes \hbox{\bf k}\langle X\rangle ^{(2)}),
\label{calc}
\end{equation}
for all $u\in {\bf k}\langle X\rangle .$ Here ${\bf k}\langle X\rangle ^{(2)}$ is the ideal of ${\bf k}\langle X\rangle $ generated by $x_ix_j,$ $1\leq i,j\leq n.$ Symmetrically the equation
\begin{equation}
\Delta (u)\equiv g_u\otimes u+\sum _ig_ug_i^{-1}x_i\otimes\partial _i^*(u)\ \ \ (\hbox{mod }G\langle X\rangle ^{(2)}\otimes \hbox{\bf k}\langle X\rangle )
\label{calcdu}
\end{equation}
defines a dual differential calculus on ${\bf k}\langle X\rangle $ where the partial derivatives satisfy
\begin{equation}
\partial _j^*( x_i)=\delta _i^j,\ \ \partial _i^*(uv)= \chi ^{i}(g_v) \partial _i^*(u)\cdot v+u\cdot \partial _i^*(v).
\label{difdu}
\end{equation}
Here $G\langle X\rangle ^{(2)}$ is the ideal of $G\langle X\rangle $ generated by $x_ix_j,$ $1\leq i,j\leq n.$ Similarly the algebra {\bf k}$\langle X^-\rangle$ has a pair of differential calculi:
\begin{equation}
\partial_{-i}(x_j^-)=\delta _i^j,\ \ \partial _{-i} (u^-v^-) =\partial _{-i}(u^-)\cdot v^-+\chi ^{u^-}(f_i)u^-\cdot \partial _{-i}(v^-),
\label{dem}
\end{equation}
\begin{equation}
\partial _{-j}^*( x_i^-)=\delta _i^j,\ \ \partial _{-i}^*(u^-v^-)= (\chi ^{i}(f_v))^{-1} \partial _{-i}^*(u^-)\cdot v^-+u^-\cdot \partial _{-i}^*(v^-).
\label{dem1} \end{equation} These calculi are related to the coproduct by the similar formulae \begin{equation} \Delta (u^-)\equiv u^-\otimes 1+\sum _if_i\partial_{-i}(u^-)\otimes x_i^-\ \ \ (\hbox{mod }F\langle X^-\rangle \otimes \hbox{\bf k}\langle X^-\rangle ^{(2)}), \label{calc1} \end{equation} \begin{equation} \Delta (u^-)\equiv f_u\otimes u^-+\sum _if_uf_i^{-1}x_i^-\otimes\partial _{-i}^*(u^-)\ \ \ (\hbox{mod }F\langle X^-\rangle ^{(2)}\otimes \hbox{\bf k}\langle X^-\rangle ). \label{dum2} \end{equation} It will be important for us that operators $[x_i,-]$ and $[-,x^-_i]$ defined respectively on ${\bf k}\langle X^-\rangle$ and ${\bf k}\langle X\rangle$ have a nice differential form (see \cite[Remark, page 2586]{KL}): \begin{equation} [x_i,u^-]=\partial_{-i}^*(u^-)p(x_i,u^-)p_{ii}^{-1}-g_if_i \partial_{-i}(u^-),\ \ \ u^-\in{\bf k}\langle X^-\rangle , \label{sqi3} \end{equation} \begin{equation} [u, x_i^-]= \partial_ i^*(u)-p_{ii}^{-1}p(u,x_i)\partial_i(u)g_if_i, \ \ \ u\in{\bf k}\langle X\rangle . \label{sqi4} \end{equation} These relations are clear if $u=x_j,$ or $u^-=x_j^-$ while ad-identities (\ref{br1f}) and (\ref{br1}) with Leibniz rules (\ref{defdif}, \ref{difdu}, \ref{dem}, \ref{dem1}) allow one to perform evident induction. \noindent {\bf Quantification of Kac-Moody algebras}. Let $C=||a_{ij}||$ be a symmetrizable by $D={\rm diag }(d_1, \ldots d_n)$ generalized Cartan matrix, $d_ia_{ij}=d_ja_{ji}.$ Let $\mathfrak g$ be a Kac-Moody algebra defined by $C,$ see \cite{Kac}. Suppose that the quantification parameters $p_{ij}=p(x_i,x_j)=\chi ^i(g_j)$ are related by \begin{equation} p_{ii}=q^{d_i}, \ \ p_{ij}p_{ji}=q^{d_ia_{ij}},\ \ \ 1\leq i,j\leq n. \label{KM1} \end{equation} As above $g_j$ denotes a linear transformation $g_j:x_i\rightarrow p_{ij}x_i$ of the linear space spanned by a set of variables $X=\{ x_1, x_2, \ldots , x_n\} .$ Let $\chi ^i$ denote a character $\chi^i :g_j\rightarrow p_{ij}$ of the group $G$ generated by $g_i,$ $1\leq i\leq n.$ We consider each $x_i$ as a quantum variable with parameters $g_i,$ $\chi ^i.$ Respectively ${\mathfrak F}_n$ is the above defined algebra related to quantum variables $X,$ and $X^-=\{ x_1^-, x_2^-, \ldots , x_n^- \} ,$ where by definition gr$({x_i^-})=f_i,$ $\chi ^{x_i^-}=(\chi ^i)^{-1},$ see (\ref{shar2}), (\ref{rela3}). In this case the multiparameter quantization $U_q ({\mathfrak g})$ of ${\mathfrak g}$ is a quotient of $H\langle X\cup X^-\rangle$ defined by Serre relations with the skew brackets in place of the Lie operation: \begin{equation} [\ldots [[x_i,\underbrace{x_j],x_j], \ldots ,x_j]}_{1-a_{ji} \hbox{ times}}=0, \ \ 1\leq i\neq j\leq n; \label{rela1} \end{equation} \begin{equation} [\ldots [[x_i^-,\underbrace{x_j^-],x_j^-], \ldots ,x_j^-]}_{1-a_{ji} \hbox{ times}}=0, \ \ 1\leq i\neq j\leq n; \label{rela2} \end{equation} \begin{equation} [x_i, x_j^-]=\delta_i^j(1-g_if_i), \ \ \ \ \ 1\leq i,j\leq n, \label{rela31} \end{equation} where the brackets are defined on $H\langle X\cup X^-\rangle $ by (\ref{sqo}). Certainly relations (\ref{rela31}) coincide with (\ref{rela3}). Hence $U_q ({\mathfrak g})$ is a homomorphic image of ${\mathfrak F}_n.$ The algebra $U_q ({\mathfrak g})$ has a structure of Hopf algebra with the coproduct (\ref{AIcm}), (\ref{AIcm1}); that is, the above homomorphism is a homomorphism of Hopf algebras. 
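For instance, in the case $n=2$ of type $B_2,$ so that ${\mathfrak g}={\mathfrak so}_5,$ one may take $d_1=2,$ $d_2=1,$ $a_{12}=-1,$ $a_{21}=-2.$ Then conditions (\ref{KM1}) read
$$
p_{11}=q^2,\ \ \ p_{22}=q, \ \ \ p_{12}p_{21}=q^{-2}
$$
(cf. (\ref{b1rel}) below), while the Serre relations (\ref{rela1}), (\ref{rela2}) reduce to
$$
[[x_2,x_1],x_1]=[[[x_1,x_2],x_2],x_2]=0, \ \ \ [[x_2^-,x_1^-],x_1^-]=[[[x_1^-,x_2^-],x_2^-],x_2^-]=0 .
$$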
If the multiplicative order $t$ of $q$ is finite then the multiparameter version of the small Lusztig quantum group is defined as the homomorphic image of $U_q(\frak{g})$ subject to additional relations $u=0, u\in {\bf \Lambda },$ $u^-=0, u^-\in {\bf \Lambda }^-,$ where ${\bf \Lambda }$ is the biggest Hopf ideal of $G\langle X\rangle $ that is contained in the ideal $G\langle X\rangle ^{(2)}$ generated by $x_ix_j,$ $1\leq i,j\leq n. $ Respectively ${\bf \Lambda }^-$ is the biggest Hopf ideal of $F\langle X^-\rangle $ that is contained in the ideal $F\langle X^-\rangle ^{(2)}$ generated by $x_i^-x_j^-,$ $1\leq i,j\leq n.$ \noindent {\bf Mirror generators}. Of course there is no essential difference between positive and negative quantum Borel subalgebras. More precisely, let us put $y_i=p_{ii}^{-1}x_i^-,$ $y_i^-=-x_i.$ Consider $y_i$ as a quantum variable with parameters $f_i,$ $(\chi ^i)^{-1},$ while $y_i^-$ as a quantum variable with parameters $g_i,$ $\chi ^i.$ Relations (\ref{rela1} -- \ref{rela31}) are invariant under the substitution $x_i\leftarrow p_{ii}^{-1}x_i^-,$ $x_i^-\leftarrow -x_i.$ Hence $y_i,$ $y_i^-$ with $H$ generate a subalgebra which can be identified with the quantification $U_{q^{-1}}(\frak{g}).$ At the same time this subalgebra coincides with $U_q(\frak{g}).$ In this way one may replace positive and negative quantum Borel subalgebras. We shall call the generators $y_i=p_{ii}^{-1}x_i^-,$ $y_i^-=-x_i$ as {\it mirror generators}. \noindent {\bf Antipode}. Recall that the antipode $\sigma $ by definition satisfies $\sum a^{(1)}\cdot \sigma (a^{(2)}) $ $=\sum \sigma (a^{(1)})\cdot a^{(2)}=\varepsilon (a).$ Hence (\ref{AIcm}) implies $\sigma (x_i)=-g_i^{-1}x_i,$ $\sigma (x_i^-)=-f_i^{-1}x_i^-.$ In particular if $u$ is a word in $X\cup X^-,$ then $g_u\sigma (u)$ is proportional to a word in $X\cup X^-,$ for $\sigma $ is an antiautomorphism: $\sigma (ab)=\sigma (b)\sigma (a).$ Moreover, if $u,v$ are linear combinations of words homogeneous in each $y\in X\cup X^- ,$ then we have \begin{equation} g_ug_v\sigma ([u,v])=p_{vu}^{-1}[g_v\sigma (v),g_u\sigma (u)]. \label{ant1} \end{equation} Indeed, the left hand side equals $g_ug_v(\sigma (v)\sigma (u)-p_{uv}\sigma (u)\sigma (v)),$ while the right hand side is $p_{vu}^{-1}g_v\sigma (v)g_u\sigma (u)-g_u\sigma (u)g_v\sigma (v).$ We have $g_v\sigma (v)\cdot g_u=p_{vu}g_u\cdot g_v\sigma (v),$ and $g_u\sigma (u)\cdot g_v=p_{uv}g_v\cdot g_u\sigma (u).$ This implies (\ref{ant1}). \noindent {\bf $\Gamma $-Grading and $\Gamma ^+\oplus \Gamma ^-$-filtration}. We are reminded that {\it constitution} of a word $u$ in $H\cup X\cup X^-$ is a family of nonnegative integers $\{ m_y, y\in X\cup X^-\} $ such that $u$ has $m_y$ occurrences of $y.$ Let $\Gamma ^+$ denote the free additive (commutative) monoid generated by $X,$ while by $\Gamma ^-$ the free additive monoid generated by $X^-.$ Respectively $\Gamma ^+\oplus \Gamma ^-$ is the free additive monoid generated by $X\cup X^-,$ while $\Gamma $ by definition is the free commutative group generated by $X\cup X^-$ with identification $x^-_i=-x_i,$ $1\leq i\leq n.$ We fix the following order on $X\cup X^-:$ \begin{equation} x_1>x_2>\ldots >x_n>x_1^->x_2^->\ldots >x_n^-. 
\label{orr} \end{equation} The monoid $\Gamma ^+\oplus \Gamma ^-$ is a completely ordered monoid with respect to the order \begin{equation} m_1y_{i_1}+m_2y_{i_2}+\ldots +m_ky_{i_k}> m_1^{\prime }y_{i_1}+m_2^{\prime }y_{i_2}+\ldots +m_k^{\prime }y_{i_k} \label{ord} \end{equation} if the first from the left nonzero number in $(m_1-m_1^{\prime}, m_2-m_2^{\prime}, \ldots , m_k-m_k^{\prime})$ is positive, where $y_{i_1}>y_{i_2}>\ldots >y_{i_k}$ in $X\cup X^-.$ We associate a formal degree $D(u)=\sum _{y\in X\cup X^-}m_yy\in \Gamma ^+\oplus \Gamma ^-$ to a word $u$ in $X\cup X^-,$ where $\{ m_y, y\in X\cup X^-\} $ is the constitution of $u.$ Respectively, if $f=\sum \alpha _iu_i\in H\langle X\cup X^-\rangle ,$ $0\neq \alpha _i\in \, {\bf k}[H]$ is a linear combination of different words, then $D(f)=\max _i\{ D(u_i)\} .$ This degree function defines a grading by $\Gamma ^+\oplus \Gamma ^-$ on $H\langle X\cup X^-\rangle .$ However relations (\ref{rela31}), (\ref{rela3}) are not homogeneous with respect to this grading. Hence neither ${\mathfrak F}_n$ nor $U_q({\mathfrak g}),$ $u_q({\mathfrak g}),$ are graded by $\Gamma ^+\oplus \Gamma ^-,$ but certainly they have a filtration defined by the induced degree function. Relations (\ref{rela31}), (\ref{rela3}) became homogeneous if we consider the degree $D(u)$ as an element of the group $\Gamma $ with identifications $x_i^-=-x_i.$ Hence ${\mathfrak F}_n,$ $U_q({\mathfrak g}),$ and $u_q({\mathfrak g})$ have grading by $\Gamma $ (are $\Gamma $-homogeneous). \section{Triangular decomposition} It is well-known that there is so called triangular decomposition \begin{equation} U_q(\frak{g})= U_q^-(\frak{g})\otimes _{{\bf k}[F]} {\bf k}[H] \otimes _{{\bf k}[G]}U_q^+(\frak{g}), \label{tr} \end{equation} where $U_q^+ (\frak{g})$ is the positive quantum Borel subalgebra, the subalgebra generated by $G$ and values of $x_i,$ $1\leq i\leq n, $ while $U_q^- (\frak{g})$ is the negative quantum Borel subalgebra, the subalgebra generated by $F$ and values of $x_i^-,$ $1\leq i\leq n.$ The small Lusztig quantum group has the triangular decomposition also \begin{equation} u_q(\frak{g})= u_q^-(\frak{g})\otimes _{{\bf k}[F]} {\bf k}[H] \otimes _{{\bf k}[G]}u_q^+(\frak{g}). \label{trs} \end{equation} In fact the triangular decomposition holds not only for the quantizations defined by the quantum Serre relations but also for arbitrary Hopf homomorphic images of ${\mathfrak F}_n.$ More precisely we have the following statement. \begin{theorem} {\rm (\cite[Proposition 3.4]{KL}).} The algebra ${\frak A}=\langle {\frak{F}_n\, ||\, u_l=0,\, w_t^-=0}\rangle $ has the triangular decomposition \begin{equation} {\frak A}=\langle {\frak{F}_n^-\, ||\, w_t^-=0}\rangle \otimes _{{\bf k}[F]} {\bf k}[H] \otimes _{{\bf k}[G]} \langle {\frak{F}_n^+\, ||\, u_l=0}\rangle \label{trug} \end{equation} provided that $\langle {\frak{F}_n^-\, ||\, w_t^-=0}\rangle $ and $\langle {\frak{F}_n^+\, ||\, u_l=0}\rangle$ are Hopf algebras, and $u_l,$ $l\in L,$ $w_t^-,$ $t\in T$ are homogeneous polynomials respectively in $x_i,$ $1\leq i\leq n$ and $x_i^-,$ $1\leq i\leq n$ of total degree $>1.$ \label{34} \end{theorem} Our goal in this section is to find conditions when a right coideal subalgebra of $\mathfrak A$ has a triangular decomposition. \begin{theorem} Let $\mathfrak A$ be the Hopf algebra defined in the above theorem. 
Every $\Gamma $-homogeneous right coideal subalgebra $U\supset H$ of $\mathfrak A$ has a decomposition \begin{equation} U=U^-\otimes _{{\bf k}[F]} {\bf k}[H] \otimes _{{\bf k}[G]}U^+, \label{tru} \end{equation} where $U^-\supset F$ and $U^+\supset G$ are homogeneous right coideal subalgebras respectively of $\langle {\frak{F}_n^-\, ||\, w_t^-=0}\rangle$ and $\langle {\frak{F}_n^+\, ||\, u_l=0}\rangle .$ \label{raz2} \end{theorem} \begin{proof} By \cite[Theorem 1.1]{KhT} the algebra $U$ has a PBW-basis over the coradical {\bf k}$[H].$ We shall prove that the PBW-basis can be constructed in such a way that each PBW-generator for $U$ belongs to either positive or negative component of (\ref{trug}). By definition of PBW-basis (see, for example \cite[Section 2]{KL}) this implies the required decomposition of $U$. Recall that the PBW-basis of $U$ is constructed in the following way, see \cite[Section 4]{KhT}. First, we fix a PBW-basis of ${\mathfrak A}$ defined by the {\it hard super-letters} \cite{Kh3}. Due to the triangular decomposition (\ref{trug}) the PBW-generators for $\mathfrak A$ belong to either ${\mathfrak A}^+=\langle {\frak{F}_n^+\, ||\, u_l=0}\rangle$ or ${\mathfrak A}^-=\langle {\frak{F}_n^-\, ||\, w_t^-=0}\rangle .$ Then, for each PBW-generator (hard super-letter) $[u]$ we fix an arbitrary element $c_u\in U$ with minimal possible $s,$ if any, such that \begin{equation} c_u=[u]^s+\sum \alpha _iW_iR_i+\sum_j\beta_jV_j \in U, \ \ \alpha _i\in {\bf k}, \ \ \beta _j\in {\bf k}[H], \label{vad10} \end{equation} where $W_i$ are basis words in less than $[u]$ super-letters, $R_i$ are basis words in greater than or equal to $[u]$ super-letters, $D(W_iR_i)=sD(u),$ $D(V_j)<sD(u).$ Next, Proposition 4.4 \cite{KhT} implies that the set of all chosen $c_u$ form a set of PBW-generators for $U.$ Since $U$ is $\Gamma $-homogeneous, we may choose $c_u$ to be $\Gamma $-homogeneous as well. We stress that the leading terms here are defined by the degree function with values in the additive monoid $\Gamma ^+\oplus \Gamma ^-$ freely generated by $X\cup X^-,$ but not in the group $\Gamma ,$ see the last subsection of Section 2. Equality $D(W_iR_i)=sD(u)$ implies that all $W_iR_i$ in (\ref{vad10}) have the same constitution in $X\cup X^-$ as the leading term $[u]^s$ does. Thus all $W_iR_i$'s and the leading term $[u]^s$ belong to the same component of the triangular decomposition. Hence it remains to show that if $c_u$ is $\Gamma $-homogeneous then there are no terms $V_j.$ In this case all terms $V_j$ have the same $\Gamma $-degree and smaller $\Gamma ^+\oplus \Gamma ^-$-degree. We shall prove that this is impossible. If $[u]\in {\mathfrak A}^-$ then $sD(u)=m_1x_1^-+m_2x_{2}^-+\ldots +m_nx_n^-,$ while the $\Gamma ^+\oplus \Gamma ^-$-degree of $V_j$ should be less than $m_1x_1^-+m_2x_{2}^-+\ldots +m_nx_n^-.$ Hence due to definitions (\ref{orr}) and (\ref{ord}) we have $V_j\in {\mathfrak A}^-.$ In particular the $\Gamma $-degree of $V_j$ coincides with the $\Gamma ^+\oplus \Gamma ^-$-degree, a contradiction. 
Suppose that $[u]\in {\mathfrak A}^+.$ In this case $sD(u)=m_1x_1+m_2x_{2}+\ldots +m_nx_n.$ Let $d=\sum_{i\leq n} s_ix_i+\sum_{i\leq n} r_ix_i^-$ be the $\Gamma ^+\oplus \Gamma ^-$-degree of $V_j.$ Since $\Gamma $-degree of $V_j$ coincides with $\Gamma $-degree of $[u]^s,$ we have $s_i-r_i=m_i,$ $1\leq i\leq n.$ This implies $s_i=m_i+r_i\geq m_i.$ At the same time definition (\ref{ord}) and the condition $d<sD(u)$ imply $s_k<m_k,$ where $k$ is the smallest index such that $s_k\neq m_k.$ Thus $s_k=m_k$ for all $1\leq k\leq n.$ This yields $r_k=0,$ $1\leq k\leq n.$ In particular $\Gamma $-degree of $V_j$ coincides with the $\Gamma ^+\oplus \Gamma ^-$-degree, again a contradiction. \end{proof} \begin{corollary} Let $\mathfrak g$ be a semisimple complex Lie algebra. If $q$ is not a root of 1, then $U_q(\frak{g})$ has at most $|W|^2$ different right coideal subalgebras containing the coradical, where $W$ is the Weyl group of $\mathfrak g$. \label{fin2} \end{corollary} \begin{proof} Due to Heckenberger---Schneider theorem, \cite[Theorem 7.3]{HS}, each of the quantum Borel subalgebras $U_q^{\pm }(\frak{g})$ has exactly $|W|$ different right coideal subalgebras containing the coradical. At the same time by \cite[Corollary 3.3]{KL} every subalgebra of $U_q(\frak{g})$ containing $H$ is $\Gamma $-homogeneous. Hence by Theorem \ref{raz2} we have a decomposition (\ref{tru}). We see that there are just $|W|^2$ options to form the right hand side of (\ref{tru}). \end{proof} We should stress that when $U^{\pm }$ run through the sets of right coideal subalgebras of the quantum Borel subalgebras the tensor product in the right hand side of (\ref{tru}) is a right coideal but not always a subalgebra. Our next goal is to state and prove a necessary condition for two right coideal subalgebras $U^{+},$ $U^{-}$ of the quantum Borel algebras to define in (\ref{tru}) a right coideal subalgebra of $U_q(\frak{so}_{2n+1})$ (respectively of $u_q(\frak{so}_{2n+1})$). \section{Structure of quantum Borel subalgebras of $U_q({\mathfrak so}_{2n+1})$} In this section we follow \cite{Kh08} to recall the basic properties of quantum Borel subalgebras $U_q^{\pm }({\mathfrak so}_{2n+1}).$ In what follows we fix a parameter $q$ such that $q^2\not= \pm 1,$ $q^3\not= 1.$ Let $\sim $ denote the projective equality: $a\sim b$ if and only if $a=\alpha b,$ where $0\neq \alpha \in {\bf k}.$ If $C$ is a Cartan matrix of type $B_n,$ relations (\ref{KM1}) take up the form \begin{equation} p_{nn}=q,\, p_{ii}=q^2, \ \ p_{i\, i+1}p_{i+1\, i}=q^{-2}, \ 1\leq i<n; \label{b1rel} \end{equation} \begin{equation} p_{ij}p_{ji}=1,\ j>i+1. \label{b1rell} \end{equation} Starting with parameters $p_{ij}$ satisfying these relations, we define the group $G$ and the character Hopf algebra $G\langle X\rangle $ as in the above section. In this case the quantum Borel algebra $U^+_q ({\frak so}_{2n+1})$ is defined as a quotient of $G\langle X\rangle $ by the following relations \begin{equation} [x_i,[x_i,x_{i+1}]]=0, \ 1\leq i<n; \ \ [x_i,x_j]=0, \ \ j>i+1; \label{relb} \end{equation} \begin{equation} [[x_i,x_{i+1}],x_{i+1}]=[[[x_{n-1},x_n],x_n],x_n]=0, \ 1\leq i<n-1. \label{relbl} \end{equation} Here we slightly modify Serre relations (\ref{rela1}) so that the left hand side of each relation is a bracketed Lyndon-Shirshov word. 
It is possible to do due to the following general relation in ${\bf k}\langle X\rangle ,$ see \cite[Corollary 4.10]{Kh4}: \begin{equation} [\ldots [[x_i, \underbrace{x_j],x_j],\ldots x_j]}_a\sim \underbrace{[x_j,[x_j,\ldots [x_j}_a,x_i]\ldots ]], \label{rsc} \end{equation} provided that $p_{ij}p_{ji}=p_{jj}^{1-a}.$ \begin{definition} \rm The elements $u,v$ are said to be {\it separated} if there exists an index $j,$ $1\leq j\leq n,$ such that either $u\in {\bf k}\langle x_i\ |\ i<j\rangle ,$ $v\in {\bf k}\langle x_i\ |\ i>j\rangle $ or vice versa $u\in {\bf k}\langle x_i\ |\ i>j\rangle ,$ $v\in {\bf k}\langle x_i\ |\ i<j\rangle .$ \label{sep} \end{definition} \begin{lemma} In the algebra $U^+_q ({\frak so}_{2n+1})$ every two separated homogeneous in each $x_i\in X$ elements $u,v$ $($skew$)$commute, $[u,v]=0,$ in particular $u\cdot v\sim v\cdot u.$ \label{sepp} \end{lemma} \begin{proof} The statement follows from the second group of defining relations (\ref{relb}) due to (\ref{br1f}), (\ref{br1}). \end{proof} \begin{definition} \rm In what follows $x_i,$ $n<i\leq 2n$ denotes the generator $x_{2n-i+1}.$ Respectively, $u(k,m),$ $1\leq k\leq m\leq 2n$ is the word $x_kx_{k+1}\cdots x_{m-1}x_m.$ If $1\leq i\leq 2n,$ then $\psi (i)$ is the number $2n-i+1,$ so that $x_i=x_{\psi (i)}.$ We shall frequently use the following properties of $\psi :$ if $i<j,$ then $\psi (i)>\psi (j);$ $\psi (\psi (i))=i;$ $\psi (i+1)=\psi (i)-1,$ $\psi (i-1)=\psi (i)+1.$ \label{fis} \end{definition} \begin{definition} \rm If $k\leq i<m\leq 2n,$ then we define \begin{equation} \sigma _k^m\stackrel{df}{=}p(u(k,m),u(k,m)), \label{mu11} \end{equation} \begin{equation} \mu _k^{m,i}\stackrel{df}{=}p(u(k,i),u(i+1,m))\cdot p(u(i+1,m),u(k,i)). \label{mu1} \end{equation} \label{slo} \end{definition} Of course, one can easily find the $\sigma $'s and the $\mu$'s by means of (\ref{b1rel}), (\ref{b1rell}). More precisely, by \cite[Eq. (3.10)]{Kh08} we have \begin{equation} \sigma_k^m =\left\{ \begin{matrix} q, &\hbox{if } m=n, \hbox{ or } k=n+1; \cr q^4, &\hbox{if } m=\psi (k); \cr q^{2}, &\hbox{otherwise}. \end{matrix} \right. \label{mu21} \end{equation} If $m<\psi (k),$ then by \cite[Eq. (3.13)]{Kh08} we have \begin{equation} \mu_k^{m,i} =\left\{ \begin{matrix} q^{-4}, &\hbox{if } m>n, \, i=\psi (m)-1; \cr 1, &\hbox{if } i=n; \cr q^{-2}, &\hbox{otherwise}. \end{matrix} \right. \label{mu2} \end{equation} If $m=\psi (k),$ then by \cite[Eq. (3.14)]{Kh08} we have \begin{equation} \mu_k^{m,i} =\left\{ \begin{matrix} q^2, &\hbox{if } i=n; \cr 1, &\hbox{otherwise}. \end{matrix} \right. \label{mu3} \end{equation} If $m>\psi (k),$ then then by \cite[Eq. (3.15)]{Kh08} we have \begin{equation} \mu_k^{m,i} =\left\{ \begin{matrix} q^{-4}, &\hbox{if } k\leq n,\, i=\psi (k); \cr 1, &\hbox{if } i=n; \cr q^{-2}, &\hbox{otherwise}. \end{matrix} \right. \label{mu4} \end{equation} We define the bracketing of $u(k,m),$ $k\leq m$ as follows. \begin{equation} u[k,m]=\left\{ \begin{matrix} [[[\ldots [x_k,x_{k+1}], \ldots ],x_{m-1}], x_m], &\hbox{if } m<\psi (k); \cr [x_k,[x_{k+1},[\ldots ,[x_{m-1},x_m]\ldots ]]], &\hbox{if } m>\psi (k); \cr \beta [u[n+1,m],u[k,n]], &\hbox{if } m=\psi (k), \end{matrix}\right. 
\label{ww}
\end{equation}
where $\beta =-p(u(n+1,m),u(k,n))^{-1}$ normalizes the coefficient at $u(k,m).$ Conditional identity (\ref{ind}) and the second group of defining relations (\ref{relb}) show that the value of $u[k,m]$ in $U_q^+({\mathfrak so}_{2n+1})$ is independent of the precise alignment of brackets provided that $m\leq n$ or $k>n.$ Formula (\ref{ant1}) and evident induction show that
\begin{equation}
g_kg_{k+1}\cdots g_m\sigma (u[k,m])\sim u[\psi (m),\psi (k)],
\label{ant2}
\end{equation}
where $\sigma $ is the antipode.
\begin{lemma}
{\rm (\cite[Corollary 3.13]{Kh08}).} If $m\neq \psi (k),$ $k\leq n<m,$ then in $U_q^+({\mathfrak so}_{2n+1})$ we have
\begin{equation}
u[k,m]=[u[k,n],u[n+1,m]]=\beta[u[n+1,m],u[k,n]],
\label{ww1}
\end{equation}
where $\beta =-p(u(n+1,m),u(k,n))^{-1}.$
\label{rww}
\end{lemma}
\begin{proposition}
{\rm (\cite[Proposition 3.14]{Kh08}).} If $m\neq \psi (k),$ $k\leq i<m,$ then in $U_q^+({\mathfrak so}_{2n+1})$ for each $i,$ $k\leq i<m$ we have
$$
[u[k,i],u[i+1,m]]=u[k,m]
$$
with only two possible exceptions being $i=\psi (m)-1,$ and $i=\psi (k).$ In particular this decomposition holds for arbitrary $i$ if $m\leq n$ or $k>n.$
\label{ins2}
\end{proposition}
\begin{proposition}
Let $k\leq i<j<m.$ If $m\neq \psi (i)-1,$ $j\neq \psi (k),$ and $m\neq \psi (k)$ then $[u[k,i],u[j+1,m]]=0.$ If $m\neq \psi (i)-1,$ $j\neq \psi (k),$ and $i\neq \psi (j)-1,$ then $[u[j+1,m],u[k,i]]=0.$
\label{NU}
\end{proposition}
\begin{proof}
The former statement follows from \cite[Proposition 3.15]{Kh08}. Let $m\neq \psi (i)-1,$ $j\neq \psi (k),$ and $i\neq \psi (j)-1.$ If additionally $m\neq \psi (k)$ then still \cite[Proposition 3.15]{Kh08} applies. Assume $m=\psi (k).$ We shall use the following two relations
\begin{equation}
[x_{\lambda },[x_{\lambda -1}x_{\lambda }x_{\lambda +1}]] =[[x_{\lambda -1}x_{\lambda }x_{\lambda +1}],x_{\lambda }]=0,
\label{too}
\end{equation}
where $1<\lambda <2n,$ $\lambda \neq n,n+1.$ The latter one is precisely \cite[Eq. (3.7)]{Kh08} with $k\leftarrow \lambda $ if $\lambda <n,$ and with $k \leftarrow \psi (\lambda )$ if $\lambda >n+1.$ The former one follows from antisymmetry identity (\ref{bri}), for
$$
p(x_{\lambda }, x_{\lambda -1}x_{\lambda }x_{\lambda +1})p(x_{\lambda -1}x_{\lambda }x_{\lambda +1},x_{\lambda }) =q^{-2}q^4q^{-2}=1.
$$
These equalities imply the following two relations:
\begin{equation}
[x_{\lambda },u[k,a]]=0, \ \ k\leq \lambda <a\leq n;
\label{too1}
\end{equation}
\begin{equation}
[u[k,a],x_{\lambda }]=0, \ \ n<k<\lambda \leq a.
\label{too2}
\end{equation}
Indeed, if in (\ref{too1}) we have $\lambda =k$ then $[x_k,u[k,a]]=[[x_k,[x_k,x_{k+1}]], u[k+2,a]]=0,$ for in this case $u[k,a]$ is independent of the precise alignment of brackets, see Lemma \ref{indle}, and of course $[x_k, u[k+2,a]]=0$ due to Lemma \ref{sepp}. If $\lambda >k$ then
$$
[x_{\lambda },u[k,a]] \sim \hbox{\big [}u[k,\lambda -2], [[x_{\lambda },[x_{\lambda -1}x_{\lambda }x_{\lambda +1}]],u[\lambda +2,a]]\hbox{\big ]}=0,
$$
for $[x_{\lambda },u[k,\lambda -2]]=[x_{\lambda },u[\lambda +2,a]]=0.$ The proof of (\ref{too2}) is quite similar.
Let $i\leq n<j.$ In this case the equality $[u[1+j,m],u[k,i]]=0$ follows from (\ref{too1}) with $a\leftarrow i$ if $1+j>\psi (i).$ If $1+j<\psi (i)$ this follows from (\ref{too2}) with $k\leftarrow 1+j,$ $a\leftarrow m.$ We have $1+j\neq \psi (i),$ for $i\neq \psi (j)-1.$ Let $i<j\leq n.$ By Lemma \ref{rww} we have $u[1+j,m]$ $=[u[1+j,n],$ $u[n+1,m]].$ At the same time $[u[n+1,m],u[k,i]]=0$ due to (\ref{too2}) with $k\leftarrow n+1,$ $a\leftarrow m,$ while $[u[1+j,n],u[k,i]]=0$ since $u[k,i]$ and $u[1+j,n]$ are separated, see Lemma \ref{sepp}. Similarly, if $n<i<j$ then by Lemma \ref{rww} we have $u[k,i]=[u[k,n],u[n+1,i]].$ At the same time $[u[1+j,m],u[k,n]]=0$ due to (\ref{too1}) with $a\leftarrow n,$ while $[u[1+j,m],u[n+1,i]]=0$ since $u[1+j,m]$ and $u[n+1,i]$ are separated. \end{proof} The elements $u[k,m]$ are important due to the following statements. \begin{proposition} {\rm \cite[Proposition 4.1]{Kh08}.} If $q^3\neq 1,$ $q^4\neq 1,$ then values of the elements $u[k,m],$ $k\leq m<\psi (k)$ form a set of PBW-generators for the algebra $U_q^+({\mathfrak so}_{2n+1})$ over {\bf k}$[G].$ All heights are infinite. \label{strB} \end{proposition} \begin{proposition} {\rm (\cite[Proposition 4.5]{Kh08}).} If the multiplicative order $t$ of $q$ is finite, $t>4,$ then the values of $u[k,m],$ $k\leq m<\psi (k)$ form a set of PBW-generators for $u_q^+({\mathfrak so}_{2n+1})$ over {\bf k}$[G].$ The height $h$ of $u[k,m]$ equals $t$ if $m=n$ or $t$ is odd. If $m\neq n$ and $t$ is even, then $h=t/2.$ In all cases $u[k,m]^h=0$ in $u_q^+({\mathfrak so}_{2n+1}).$ \label{strBu} \end{proposition} We stress that due to (\ref{mu21}) the height $h$ here equals the multiplicative order of $p_{uu},$ where $u=u[k,m].$ The coproduct on $u[k,m],$ $k\leq m\leq 2n$ is given by the following elegant formula, see \cite[Theorem 4.3]{Kh08}: \begin{equation} \Delta (u[k,m])=u[k,m]\otimes 1+g_{k\rightarrow m}\otimes u[k,m] \label{co} \end{equation} $$ +\sum _{i=k}^{m-1}\tau _i(1-q^{-2})g_{k\rightarrow i}\, u[i+1,m]\otimes u[k,i], $$ where by definition $g_{k\rightarrow i}=g_kg_{k+1}\cdots g_i=g(u[k,i]),$ and \begin{equation} \tau_i=q^{\delta _i^n} =\left\{ \begin{matrix} q, &\hbox{if } i=n; \cr 1, &\hbox{otherwise}. \end{matrix} \right. \label{tau} \end{equation} Formula (\ref{co}) with (\ref{calc}) and (\ref{calcdu}) allows one to find the differentiation formulae \begin{equation} \partial _i(u[k,m])=\left\{ \begin{matrix} (1-q^{-2})\tau _ku[k+1,m], & \hbox{ if } i\in \{ k,\psi (k)\} ,k<m; \cr 0, & \hbox{ if } i\notin \{ k,\psi (k)\} ; \cr 1, & \hbox{ if } i\in \{ k,\psi (k)\} ,k=m. \end{matrix} \right. \label{pdee} \end{equation} \begin{equation} \partial _i^*(u[k,m])=\left\{ \begin{matrix} (1-q^{-2})\tau _{m-1 }u[k,m-1], & \hbox{ if } i\in \{ m, \psi (m)\} , m>k; \cr 0, & \hbox{ if } i\notin \{ m,\psi (m)\}; \cr 1, & \hbox{ if } i\in \{ m, \psi (m)\} , m=k. \end{matrix} \right. \label{pdu} \end{equation} These differentiation formulae with differential representation of the simplest adjoint operators (\ref{sqi3}), (\ref{sqi4}) allows one to find the (skew) bracket of basis elements $u[k,m]^{\mp }$ with the main generators $x_i^{\pm }.$ \begin{lemma} If $k<m,$ then in $U_q(\mathfrak{so}_{2n+1})$ we have \begin{equation} [u[k,m],x_i^-]\sim \left\{ \begin{matrix} 0, & \hbox{ if } i\notin \{ k,m,\psi (k), \psi (m)\} ; \cr g_kf_ku[k+1,m], & \hbox{ if } i\in \{ k, \psi (k)\} ,\ m\neq \psi (k); \cr u[k,m-1], & \hbox{ if } i\in \{ m, \psi(m)\} , \ m\neq \psi (k). \end{matrix} \right. 
\label{kom1} \end{equation} \label{ruk1} \end{lemma} \begin{proof} The statement follows from (\ref{sqi4}), (\ref{pdu}), and (\ref{pdee}). \end{proof} \begin{lemma} If $i<j,$ then in $U_q(\mathfrak{so}_{2n+1})$ we have \begin{equation} [x_k,u[i,j]^-]\sim \left\{ \begin{matrix} 0, & \hbox{ if } k\notin \{ i,j,\psi (i), \psi (j)\} ; \cr g_if_iu[i+1,j], & \hbox{ if } k\in \{ i, \psi (i)\} ,\ j\neq \psi (i); \cr u[i,j-1], & \hbox{ if } k\in \{ j, \psi(j)\} , \ j\neq \psi (i). \end{matrix} \right. \label{kom2} \end{equation} \label{ruk2} \end{lemma} \begin{proof} The statement follows from (\ref{sqi3}), (\ref{pdu}), and (\ref{pdee}). \end{proof} \begin{corollary} If either $k,m, \psi (k), \psi (m)\notin [i,j]$ or $i,j, \psi (i), \psi (j)\notin [k,m],$ then $$ [u[k,m],u[i,j]^-]=0. $$ \label{ruk3} \end{corollary} \begin{proof} If $k,m, \psi (k), \psi (m)\notin [i,j],$ then due to Lemma \ref{ruk2} we have $[u[k,m],x_t^-]=0$ for every $t\in [i,j].$ Hence ad-identity (\ref{br1}) and evident induction imply the required equality, for $u[i,j]^-$ belongs to the subalgebra generated by $x_t^-, t\in [i,j].$ If $i,j, \psi (i), \psi (j)\notin [k,m],$ then in perfect analogy we use ad-identity (\ref{br1f}) and Lemma \ref{ruk1}. \end{proof} \section{Roots and related properties of quantum Borel subalgebras} Recall that a {\it root} of a homogeneous right coideal subalgebra $U$ is degree of a PBW-generator of $U$, see \cite[Definition 2.9]{KL}. Due to \cite[Corollary 5.7]{Kh08} all roots of a homogeneous right coideal subalgebra $U \supset G$ of positive quantum Borel subalgebra have the form $[k:m]\stackrel{df}{=}x_k+x_{k+1}+\cdots +x_{m-1}+x_m=D(u[k,m]),$ where $1\leq k\leq m \leq 2n.$ Here $x_{2n-i+1}=x_i,$ see Definition \ref{fis}. An $U$-root is {\it simple} if it is not a sum of two or more other $U$-roots. In what follows $\Sigma (U)$ denotes the submonoid of $\Gamma ^+$ generated by all $U$-roots. Certainly degree of any nonzero homogeneous element from $U$ belongs to $\Sigma (U).$ Moreover if $q$ is not a root of 1, then all PBW-generators have infinite heights. Hence in this case $\Sigma (U)$ is precisely the set of all degrees of nonzero homogeneous elements from $U$. Simple $U$-roots are nothing more than indecomposable elements from $\Sigma (U).$ In particular \cite[Lemma 8.9]{Kh08} shows that $U$ is uniquely defined by $\Sigma (U):$ if $\Sigma (U)=\Sigma (U_1),$ then $U=U_1.$ The following statement shows that the lattice of right coideal subalgebras that contain the coradical is isomorphic to some lattice of submonoids of $\Gamma ^+.$ \begin{proposition} Let $U, U_1\supseteq G$ be $($homogeneous$)$ right coideal subalgebras of $U_q^+(\mathfrak{so}_{2n+1}),$ $q^t\neq 1$ $($respectively of $u_q^+(\mathfrak{so}_{2n+1}),$ if $q^t=1,$ $t>4).$ Then $U\subseteq U_1$ if and only if $\Sigma (U)\subseteq \Sigma (U_1).$ \label{lat} \end{proposition} \begin{proof} If ${ U}\subseteq { U}_1,$ then every PBW-generator $a$ of { U} belongs to ${ U}_1.$ In particular $a$ is a (noncommutative) polynomial in $G$ and PBW-generators of ${ U}_1.$ Hence every $U$-root, being a degree of some $a,$ is a sum of $U_1$-roots (degrees of PBW-generators of $U_1);$ that is, $\Sigma ({ U})\subseteq \Sigma ({ U}_1).$ Let $\Sigma (U)\subseteq \Sigma (U_1).$ Consider the subalgebra $U_2$ generated by $U$ and $U_1.$ Certainly this is a right coideal subalgebra. 
At the same time $$\Sigma (U_1)\subseteq \Sigma (U_2)\subseteq \Sigma (U)+\Sigma (U_1)=\Sigma (U_1),$$ which implies $\Sigma (U_1)=\Sigma (U_2),$ and $U_1=U_2\supseteq U.$ \end{proof} The proved statement implies the following nice characterization of elements from $U$ in terms of degrees of its partial derivatives. Recall that the subalgebra $A$ of $U_q^+(\mathfrak{so}_{2n+1})$ or $u_q^+(\mathfrak{so}_{2n+1})$ generated over {\bf k} by $x_1,x_2,\ldots ,x_n$ has a noncommutative differential calculus (\ref{defdif}). Due to (\ref{calc}) the subalgebra $U_A\stackrel{df}{=}U\cap A$ is differential: $\partial _i(U_A)\subseteq U_A,$ $1\leq i\leq n.$ Conversely, if $U_A$ is any differential subalgebra of $A$ homogeneous in each $x_i,$ then the subalgebra $U$ generated by $U_A$ and $G$ is a right coideal subalgebra of $U_q^+(\mathfrak{so}_{2n+1})$ or $u_q^+(\mathfrak{so}_{2n+1}),$ see \cite[Lemma 2.10]{KL}. Let $\partial _u,$ $u=x_{i_1}x_{i_2}\cdots x_{i_m}$ denote the differential operator $\partial _{i_1}\partial _{i_2}\cdots \partial _{i_m}.$ Certainly if $f\in U_A,$ $\partial _u(f)\neq 0,$ then degree of $\partial _u(f)$ belongs to $\Sigma (U),$ for $\partial _u(f)\in U_A\subset U.$ Interestingly the converse statement is true as well. \begin{proposition} Let $U\supseteq G$ be a $($homogeneous$)$ right coideal subalgebra of $U_q^+(\mathfrak{so}_{2n+1}),$ $q^t\neq 1$ $($respectively of $u_q^+(\mathfrak{so}_{2n+1}),$ if $q^t=1,$ $t>4).$ If $f\in A$ is a homogeneous element such that for each differential operator $\partial _u$ we have $D(\partial _u(f))\in \Sigma (U)$ or $\partial _u(f)=0,$ then $f\in U.$ \label{lat1} \end{proposition} \begin{proof} Consider the differential subalgebra $B$ generated by $U_A$ and $f.$ As an algebra $B$ is generated by $U_A$ and all $\partial _u(f).$ Hence degrees of all nonzero homogeneous elements from $B$ belong to $\Sigma (U)$ (in particular $D(f)=D(\partial _{\emptyset }(f))\in \Sigma (U)$). Proposition \ref{lat} applied to the pair $U,$ $BG$ implies $BG\subseteq U,$ and $f\in U.$ \end{proof} We stress that the condition $D(\partial _u(f))\in \Sigma (U)$ is equivalent to $D(f)\in \Sigma (U)+D(u).$ Hence we may restate the proved statement: $f\in U$ if and only if $\partial _u(f)=0$ for all words $u$ such that $D(f)\notin \Sigma (U)+D(u).$ To put it another way, we have a representation of homogeneous components $U_A^{(\gamma )},$ $\gamma \in \Gamma ^+$ of $U_A$ in the form of kernel of a set of differential operators: \begin{equation} U_A^{(\gamma )}=\bigcap _{\gamma \notin \Sigma (U)+D(u)} {\rm Ker}\, \partial _u. \label{late1} \end{equation} Moreover Proposition \ref{lat1} shows that right coideal subalgebras are differentially closed in the following sense. \begin{corollary} If under the conditions of the above proposition $D(f)\in \Sigma (U)$ and $\partial _i (f)\in U,$ $1\leq i\leq n,$ then $f\in U.$ \label{lat2} \end{corollary} \begin{proof} Indeed, if $\partial _i (f)\in U,$ $1\leq i\leq n,$ then of course $\partial _u (f)\in U$ for all nonempty words $u.$ In particular either $D(\partial _u (f))\in \Sigma (U)$ or $\partial _u (f)=0.$ Proposition \ref{lat1} applies. \end{proof} Needless to say that all statements of this and the above sections remain valid for negative quantum Borel subalgebra too. In particular all roots of a homogeneous right coideal subalgebra $U^- \supset F$ of negative quantum Borel subalgebra have the form $[i:j]^-\stackrel{df}{=}x_i^-+x_{i+1}^-+\cdots +x_{j-1}^-+x_j^-,$ where $1\leq i\leq j \leq 2n$. 
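To illustrate, for $n=2$ the convention $x_3=x_2,$ $x_4=x_1$ of Definition \ref{fis} leaves only six possible values for the roots $[k:m],$ $1\leq k\leq m\leq 4,$ and, symmetrically, for the roots $[i:j]^-:$
$$
x_1,\ \ x_2,\ \ x_1+x_2,\ \ 2x_2,\ \ x_1+2x_2,\ \ 2x_1+2x_2,
$$
where different pairs may define the same root, say $[1:3]=[2:4]=x_1+2x_2.$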
\section{Minimal generators for right coideal subalgebras\\ of the quantum Borel algebra} Let $S$ be a set of integer numbers from the interval $[1,2n].$ A (noncommutative) polynomial $\Phi^{S}(k,m),$ $1\leq k\leq m\leq 2n$ is defined by induction on the number $r$ of elements in the set $S \cap [k,m)=\{ s_1,s_2,\ldots ,s_r\} ,$ $k\leq s_1<s_2<\ldots <s_r<m$ as follows: \begin{equation} \Phi^{S}(k,m)=u[k,m]-(1-q^{-2})\sum_{i=1}^r \alpha _{km}^{s_i} \, \Phi^{S}(1+s_i,m)u[k,s_i], \label{dhs} \end{equation} where $\alpha _{km}^{s}=\tau _{s}p(u(1+s,m),u(k,s))^{-1},$ while the $\tau $'s was defined in (\ref{tau}). We display the element $\Phi ^{S}(k,m)$ schematically as a sequence of black and white points labeled by the numbers $k-1,$ $k,$ $k+1, \ldots $ $m-1,$ $m,$ where the first point is always white, and the last one is always black, while an intermediate point labeled by $i$ is black if and only if $i\in S:$ \begin{equation} \stackrel{k-1}{\circ } \ \ \stackrel{k}{\circ } \ \ \stackrel{k+1}{\circ } \ \ \stackrel{k+2}{\bullet }\ \ \ \stackrel{k+3}{\circ }\ \cdots \ \ \stackrel{m-2}{\bullet } \ \ \stackrel{m-1}{\circ }\ \ \stackrel{m}{\bullet } \label{grb} \end{equation} Sometimes, if $k\leq n<m,$ it is more convenient to display the element $\Phi ^{S}(k,m)$ in two lines putting the points labeled by indices $i,\psi (i)$ that define the same variable $x_i=x_{\psi (i)}$ in one column: \begin{equation} \begin{matrix} \ \ \ \ \ \ \ & \stackrel{m}{\bullet } \ \cdots & \bullet & \stackrel{\psi (i)}{\circ } \ \cdots & \stackrel{n+1}{\bullet } \cr \stackrel{k-1}\circ \ \circ \ \cdots \ & \stackrel{\psi (m)}{\circ } \ \cdots & \bullet & \stackrel{i}{\bullet } \ \cdots & \stackrel{n}{\circ } \end{matrix} \label{grb1} \end{equation} The elements $\Phi ^{S}(k,m)$ are very important since every right coideal subalgebra $U\supseteq G$ of the quantum Borel subalgebra is generated as an algebra by $G$ and the elements of this form, see \cite[Corollary 5.7]{Kh08}. Moreover $U$ is uniquely defined by its {\it root sequence} $\theta =(\theta _1,\theta_2,\ldots ,\theta _n).$ The root sequence satisfies $0\leq \theta_i\leq 2n-2i+1,$ and each sequence satisfying these conditions is a root sequence for some $U$. There exists a constructive algorithm that allows one to find the generators $\Phi ^{S}(k,m)$ if the sequence $\theta $ is given, see \cite[Definition 10.1 and Eq. (10.6)]{Kh08}. More precisely the algorithm allows one to find all possible values of the numbers $k,m$ and the sets $S.$ In particular one may construct all schemes (\ref{grb}) for the generators. However the explicit form of $\Phi ^{S}(k,m)$ needs complicated inductive procedure (\ref{dhs}). These generators satisfy two additional important properties. First, their degrees, $D(\Phi ^{S}(k,m))=x_k+x_{k+1}+\cdots +x_m,$ are simple $U$-roots; that is, $D(\Phi ^{S}(k,m))$ is not a sum of nonzero degrees of other elements from $U$, see \cite[Claims 7,8]{Kh08}. Next, the set $S$ is always $(k,m)$-regular in the sense of the following definition, see \cite[Claim 5]{Kh08}. 
\begin{definition} \rm Let $1\leq k\leq n<m\leq 2n.$ A set $S$ is said to be {\it white $(k,m)$-regular} if for every $i,$ $k-1\leq i<m,$ such that $k\leq \psi (i)\leq m+1$ either $i$ or $\psi (i)-1$ does not belong to $S \cup \{ k-1, m\} .$ A set $S$ is said to be {\it black $(k,m)$-regular} if for every $i,$ $k\leq i\leq m,$ such that $k\leq \psi (i)\leq m+1$ either $i$ or $\psi (i)-1$ belongs to $S \setminus \{ k-1,m\} .$ A set $S$ is said to be $(k,m)$-{\it regular} if it is either black or white $(k,m)$-regular. If $m\leq n,$ or $k>n$ (or, equivalently, if $u[k,m]$ is of degree $\leq 1$ in $x_n$), then by definition each set $S$ is both white and black $(k,m)$-regular. \label{reg1} \end{definition} To illustrate the notion of a regular set, we shall need a {\it shifted representation} that appears from (\ref{grb1}) by shifting the upper line to the left by one step and putting the colored point labeled by $n,$ if any, to the vacant position (so that this point appears twice in the shifted scheme): \begin{equation} \begin{matrix} \ \ \ \ \ \ \ & \stackrel{m}{\bullet } \ \cdots & \circ & \stackrel{n+i}{\circ } \ \cdots & \stackrel{n+1}{\bullet } & \stackrel{n}{\circ } \Leftarrow \cr \stackrel{k-1}\circ \ \circ \ \cdots \ & \stackrel{\psi (m)-1}{\bullet } \ \cdots & \bullet & \stackrel{n-i}{\bullet } \ \cdots & \stackrel{n-1}{\circ }& \stackrel{n}{\circ } \end{matrix} \label{grb2} \end{equation} If $k\leq n<m$ and $S$ is white $(k,m)$-regular, then $n\notin S$, for $\psi (n)-1=n.$ If additionally $m<\psi (k),$ then taking $i=\psi (m)-1$ we get $\psi (i)-1=m,$ hence the definition implies $\psi (m)-1\notin S$. We see that if $m<\psi (k),$ $k\leq n<m,$ then $S$ is white $(k,m)$-regular if and only if the shifted scheme of $\Phi ^{S}(k,m)$ given in (\ref{grb2}) has no black columns: \begin{equation} \begin{matrix} \ \ \ \ \ \ \ & \stackrel{m}{\bullet }&\cdots & \bullet & \stackrel{n+i}{\circ } & \circ & \cdots & \stackrel{n}{\circ } \Leftarrow \cr \stackrel{k-1}\circ \ \cdots & \stackrel{\psi (m)-1}{\circ }& \cdots & \circ & \stackrel{n-i}{\bullet } & \circ & \cdots & \stackrel{n}{\circ } \end{matrix} \label{grab} \end{equation} In the same way, if $m>\psi (k),$ then for $i=\psi (k)$ we get $\psi (i)-1=k-1,$ hence $\psi (k)\notin S$. That is, if $m>\psi (k),$ $k\leq n<m,$ then $S$ is white $(k,m)$-regular if and only if the shifted scheme (\ref{grb2}) has no black columns and the first from the left complete column is a white one. \begin{equation} \begin{matrix} \stackrel{m}{\bullet } \ \cdots & \stackrel{\psi (k)}{\circ } & \cdots & \bullet & \stackrel{n+i}{\circ } & \circ & \cdots & \stackrel{n}{\circ } \Leftarrow \cr \ \ \ \ \ & \stackrel{k-1}{\circ }& \cdots & \circ & \stackrel{n-i}{\bullet } & \circ &\cdots & \stackrel{n}{\circ } \end{matrix} \label{grab1} \end{equation} {\it All in all, a set $S$ is white $(k,m)$-regular, where $1\leq k\leq n<m\leq 2n,$ if the shifted scheme obtained by painting $k-1$ black does not contain columns with two black points.} Similarly, if $k\leq n<m$ and $S$ is black $(k,m)$-regular, then $n\in S$. If additionally $m<\psi (k),$ then taking $i=\psi (m)-1$ we get $\psi (i)-1=m,$ hence $\psi (m)-1\in S$. We see that if $m<\psi (k),$ $k\leq n<m,$ then $S$ is black $(k,m)$-regular if and only if the shifted scheme (\ref{grb2}) has no white columns and the first from the left complete column is a black one. 
\begin{equation} \begin{matrix} \ \ \ \ \ \ \ &\stackrel{m}{\bullet } & \cdots & \bullet & \stackrel{n+i}{\circ } & \bullet & \cdots & \stackrel{n}{\bullet } \Leftarrow \cr \stackrel{k-1}{\circ }\ \cdots & \stackrel{\psi (m)-1}{\bullet } & \cdots & \bullet & \stackrel{n-i}{\bullet } & \circ & \cdots & \stackrel{n}{\bullet } \end{matrix} \label{grab2} \end{equation} If $m>\psi (k),$ then for $i=\psi (k)$ we get $\psi (i)-1=k-1,$ hence $\psi (k)\in S$. That is, if $m>\psi (k),$ $k\leq n<m,$ then $S$ is black $(k,m)$-regular if and only if the shifted scheme (\ref{grb2}) has no white columns: \begin{equation} \begin{matrix} \stackrel{m}{\bullet } \ \cdots & \stackrel{\psi (k)}{\bullet } & \cdots & \bullet & \stackrel{n+i}{\circ } & \bullet & \cdots & \stackrel{n}{\bullet } \Leftarrow \cr \ \ \ \ \ & \stackrel{k-1}{\circ } & \cdots & \circ & \stackrel{n-i}{\bullet } & \bullet & \cdots & \stackrel{n}{\bullet } \end{matrix} \label{grab3} \end{equation} {\it All in all, a set $S$ is black $(k,m)$-regular, where $1\leq k\leq n<m\leq 2n,$ if the shifted scheme obtained by painting $m$ white does not contain columns with two white points.} At the same time we should stress that {\it if $m=\psi (k),$ then no one set is $(k,m)$-regular}. Indeed, for $i=k-1$ we have $\psi (i)-1=m.$ Hence both of the elements $i, \psi (i)-1$ belong to $S \cup \{ k-1, m\} ,$ and therefore $S$ is not white $(k,\psi (k))$-regular. If we take $i=m,$ then $\psi (i)-1=k-1,$ and no one of the elements $i, \psi (i)-1$ belongs to $S \setminus \{ k-1,m\} .$ Thus $S$ is neither black $(k,\psi (k))$-regular. \begin{lemma} A set $S$ is white $($black$)$ $(k,m)$-regular if and only if its complement $\overline{S}$ with respect to $[k,m)$ is black $($white$)$ $(k,m)$-regular. \label{dop} \end{lemma} \begin{proof} The shifted scheme for $\Phi^{\overline{S}}(k,m)$ appears from that for $\Phi^{S}(k,m)$ by changing the color of all points except the first one, $k-1,$ and the last one, $m.$ Under this re-coloring a scheme of type (\ref{grb2}) is transformed to (\ref{grab2}), while a scheme of type (\ref{grab}) is transformed to (\ref{grab3}) and vice versa. \end{proof} \begin{lemma} A set $S$ is white $($black$)$ $(k,m)$-regular if and only if $\psi (S)-1$ is white $($black$)$ $(\psi (m), \psi (k))$-regular. Here $\psi (S)-1=\{ \psi (s)-1\, |\, s\in S\} .$ \label{dop1} \end{lemma} \begin{proof} The shifted scheme for $\Phi^{\psi (S)-1}(\psi (m), \psi (k))$ appears from that for $\Phi^{S}(k,m)$ by switching rows and changing the color of the first and the last points. Under that transformation a scheme of type (\ref{grab}) is transformed to (\ref{grab1}), while a scheme of type (\ref{grab2}) is transformed to (\ref{grab3}) and vice versa. \end{proof} \begin{theorem}{\rm (\cite[Corollary 10.4]{Kh08})}. If $q$ is not a root of 1 then every right coideal subalgebra of $U_q^+(\mathfrak{so}_{2n+1})$ that contains $G$ is generated as an algebra by $G$ and a set of elements $\Phi^{S}(k,m)$ with $(k,m)$-regular sets $S.$ If $q^t=1,$ $t>4,$ then this is the case for every homogeneous right coideal subalgebra of $u_q^+(\mathfrak{so}_{2n+1})$ that contains $G.$ \label{rig} \end{theorem} Of course this theorem is valid for negative quantum Borel subalgebra as well. 
In this case the generators take up the form $\Phi^{S}_-(k,m)$ with $(k,m)$-regular sets $S$, where $\Phi^{S}_-(k,m),$ is the element (\ref{dhs}) under the replacement $x_i\leftarrow x_i^-,$ $1\leq i\leq n.$ \begin{proposition} If $S$ is a $(k,m)$-regular set, then $$ \Phi ^{S}(k,m)\sim \Phi ^{T}(\psi (m),\psi (k)), $$ where ${T}=\overline{\psi (S)-1}$ is a $(\psi (m),\psi (k))$-regular set and $\psi (S)-1$ denotes the set $\{ \psi (s)-1\, |\, s\in S\} ,$ while the complement is related to the interval $[\psi (m),\psi (k)).$ \label{xn0} \end{proposition} \begin{proof} The proof follows from \cite[Proposition 7.10]{Kh08} since due to Lemmas \ref{dop} and \ref{dop1} the set $S$ is white (black) $(k,m)$-regular if and only if $T$ is black (white) $(\psi (m),\psi (k))$-regular. \end{proof} \begin{lemma} Let $S$ be a white $(k,m)$-regular set. Assume $s$ is a black point on the scheme $(\ref{grb}),$ and $k-1\leq t<s\leq m.$ Then $S$ is white $(1+t,s)$-regular if and only if either $\psi (t)-1$ is a white point or $\psi (t)-1\notin [t,s].$ In particular if either $t$ is black or $t=k-1,$ then $S$ is white $(1+t,s)$-regular. \label{si} \end{lemma} \begin{proof} The general statement follows from interpretation of regular sets given on diagrams (\ref{grab}), (\ref{grab1}). The points $t,$ $\psi (t)-1$ form a column on the shifted scheme. Hence if either $t$ is black or $t=k-1,$ then $\psi (t)-1$ is white or it does not appear on the scheme at all, that is $\psi (t)-1\notin [k-1,m]\supseteq [t,s].$ \end{proof} Similarly we have the following statement. \begin{lemma} Let $S$ be a black $(k,m)$-regular set. Assume $t$ is a white point on the scheme $(\ref{grb}),$ and $k-1\leq t<s\leq m.$ Then $S$ is black $(1+t,s)$-regular if and only if either $\psi (s)-1$ is a black point or $\psi (s)-1\notin [t,s].$ In particular if either $s$ is white or $s=m,$ then $S$ is black $(1+t,s)$-regular. \label{si1} \end{lemma} \begin{lemma} {\rm (\cite[Corollaries 7.7, 7.13]{Kh08})} Let $k\leq t<m.$ The decomposition \begin{equation} \Phi^{S}(k,m)\sim \left[ \Phi^{S}(k,t),\Phi^{S}(1+t,m)\right] \label{desc1} \end{equation} is valid if either $S \cup \{ t\} $ is white $(k,m)$-regular and $t\notin S$, or $S$ is black $(k,m)$-regular and $t\notin S\setminus \{ n\} .$ \label{xn1} \end{lemma} \begin{lemma} {\rm (\cite[Corollaries 7.5, 7.14]{Kh08})} Let $k\leq s<m.$ The decomposition \begin{equation} \Phi^{S}(k,m)\sim [\Phi^{S}(1+s,m),\Phi^{S}(k,s)] \label{desc2} \end{equation} is valid if either $S$ is white $(k,m)$-regular and $s\in S\cup \{ n\} ,$ or $S \setminus \{ s\}$ is black $(k,m)$-regular and $s\in S$. \label{xn2} \end{lemma} We stress that due to Lemmas \ref{si}, \ref{si1} in these lemmas the set $S$ appears to be both $(k,t)$-regular and $(1+t,m)$-regular; that is, the multiple use of the lemmas is admissible. \begin{lemma} If $S$ is $(k,m)$-regular set, then we have \begin{equation} g_{k\rightarrow m}\, \sigma (\Phi^{S}(k,m))\sim \Phi^{\psi (S)-1}(\psi(m), \psi (k))\sim \Phi^{\overline{S}}(k, m), \label{desc3} \end{equation} where $\overline{S}$ is the complement of $S$ with respect to $[k,m),$ and $\sigma $ is the antipode. \label{xn21} \end{lemma} \begin{proof} Assume $S$ is white $(k,m)$-regular. We use induction on the number $r$ of elements in the intersection $S \cap [k,m).$ If $r=0,$ then the left hand side equals $g_{k\rightarrow m}\, \sigma (u[k,m])\sim u[\psi (m),\psi (k)]$ due to (\ref{ant2}). 
Proposition \ref{xn0} with $S \leftarrow [k,m)$ implies $u[\psi (m),\psi (k)]\sim \Phi^{[k,m)}(k, m),$ which is required. If $r>0$ then we choose $s\in S,$ $k\leq s<m.$ By Lemma \ref{xn2} we have decomposition (\ref{desc2}). Using (\ref{ant1}) and the inductive supposition, we have \begin{equation} g_{k\rightarrow m}\, \sigma (\Phi^{S}(k,m))\sim [\Phi^{\psi (S)-1}(\psi(s), \psi (k)),\Phi^{\psi (S)-1}(\psi(m), \psi (1+s))]. \label{desc4} \end{equation} At the same time Lemma \ref{dop1} implies that $\psi (S)-1$ is a white $(\psi(m), \psi (k))$-regular set, and $ \psi (1+s)=\psi (s)-1\in \psi (S)-1.$ Hence we may apply Lemma \ref{xn2}, that shows that the right hand side of (\ref{desc4}) is proportional to $\Phi^{\psi (S)-1}(\psi(m), \psi (k)).$ This proves the first proportion in (\ref{desc3}). The second one follows from Proposition \ref{xn0}. If $S$ is black $(k,m)$-regular, then Lemma \ref{xn0} reduces the consideration to white regular case. \end{proof} \begin{lemma} Let $U^S(k,m)$ be the right coideal subalgebra generated by $G$ and by an element $\Phi ^S(k,m)$ with a $(k,m)$-regular set $S.$ In this case the monoid $\Sigma (U^S(k,m))$ defined in the above section coincides with the monoid $\Sigma $ generated by all $[1+t:s]$ with $t$ being a white point and $s$ being a black point on the scheme $(\ref{grb}).$ \label{sig} \end{lemma} \begin{proof} Proposition 9.3 \cite{Kh08} implies that degrees of all homogeneous elements from $U^S(k,m)$ belong to $\Sigma .$ Hence $\Sigma (U^S(k,m))\subseteq \Sigma .$ At the same time Lemma 9.7 \cite{Kh08} says that every indecomposable in $\Sigma $ element $[1+t:s]$ is a simple $U^S(k,m)$-root. Since certainly $\Sigma $ is generated by its indecomposable elements, we have $\Sigma \subseteq \Sigma (U^S(k,m)).$ \end{proof} \begin{lemma} Let $S$ be a white $(k,m)$-regular set, $t<s$ be respectively white and black points on the scheme $(\ref{grb}).$ If $\psi (1+t)$ is not a black point $($it is white or does not appear on the scheme at all$)$ then $[1+t:s]$ is a simple $U^S(k,m)$-root, and $\Phi ^S(1+t,s)\in U^S(k,m).$ \label{sig1} \end{lemma} \begin{proof} By \cite[Lemma 9.5]{Kh08} the element $[1+t:s]$ is indecomposable in $\Sigma .$ Hence by Lemma \ref{sig} it is a simple $U^S(k,m)$-root. At the same time \cite[Theorem 9.8]{Kh08} implies $\Phi ^S(1+t,s)\in U^S(k,m).$ \end{proof} \begin{lemma} Let $S$ be a black $(k,m)$-regular set, $t<s$ be respectively white and black points on the scheme $(\ref{grb}).$ If $\psi (1+s)$ is not a white point then $[1+t:s]$ is a simple $U^S(k,m)$-root, and $\Phi ^S(1+t,s)\in U^S(k,m).$ \label{sig2} \end{lemma} \begin{proof} Similarly by \cite[Lemma 9.6]{Kh08} the element $[1+t:s]$ is indecomposable in $\Sigma .$ Hence by Lemma \ref{sig} it is a simple $U^S(k,m)$-root, while \cite[Theorem 9.8]{Kh08} implies $\Phi ^S(1+t,s)\in U^S(k,m).$ \end{proof} \begin{lemma} Let $S$ be a $(k,m)$-regular set. If $t<s$ are respectively white and black points on the scheme $(\ref{grb}),$ then $\Phi ^S(1+t,s)\in U^S(k,m)$ unless $t<n<s.$ \label{sig3} \end{lemma} \begin{proof} Let $S$ be white $(k,m)$-regular. Assume $s\leq n.$ The point $\psi (k)$ is not black on the schemes (\ref{grab}), (\ref{grab1}). Hence Lemma \ref{sig1} with $t\leftarrow k-1,$ $s\leftarrow s$ implies $\Phi ^S(k,s)\in U^S(k,m).$ Again by Lemma \ref{sig1} applied to $U^S(k,s)$ we get $\Phi ^S(1+t,s)\in U^S(k,s)\subseteq U^S(k,m).$ Assume $t\geq n.$ The point $n=\psi (n+1)$ is white on the schemes (\ref{grab}), (\ref{grab1}). 
Therefore Lemma \ref{sig1} with $t\leftarrow n,$ $s\leftarrow m$ implies $\Phi ^S(1+n,m)\in U^S(k,m).$ Again by Lemma \ref{sig1} applied to $U^S(1+n,m)$ we get $\Phi ^S(1+t,s)\in U^S(1+n,m)\subseteq U^S(k,m).$ If $S$ is black $(k,m)$-regular, then we may apply Lemma \ref{sig2} in a similar way or just use the duality given in Proposition \ref{xn0}. \end{proof} \section{Necessary condition} Let $U^-\supseteq F$ and $U^+\supseteq G$ be right coideal subalgebras of respectively negative and positive quantum Borel subalgebras. As we mentioned in the above section $U^+$ is generated as algebra by $G$ and elements of the form $\Phi ^{S}(k,m)$ with $(k,m)$-regular sets $S.$ Respectively $U^-$ is generated as algebra by $F$ and elements of the form $\Phi ^{T}_-(i,j)$ with $(i,j)$-regular sets $T.$ Here $\Phi ^{T}_-(i,j)$ appears from $\Phi ^{T}(i,j)$ given in (\ref{dhs}) under the substitutions $x_t\leftarrow x_t^-,$ $1\leq t\leq 2n.$ To state a necessary condition for tensor product (\ref{tru}) to be a subalgebra we display the regular generators $\Phi ^{S}(k,m)$ and $\Phi ^{T}_-(i,j)$ graphically as defined in (\ref{grb}): \begin{equation} \begin{matrix} S \ \ \stackrel{k-1}{\circ } \ & \cdots \ & \stackrel{i-1}{\bullet } \ & \stackrel{i}{\bullet }\ \ & \stackrel{i+1}{\circ }\ & \cdots & \ & \stackrel{m}{\bullet } \ & \ & \ \cr T \ \ \ \ \ \ \ \ & \ \ & \circ \ & \circ \ \ & \bullet \ & \cdots & \ & \bullet \ & \cdots \ & \stackrel{j}{\bullet } \end{matrix}\ \ \ . \label{grr1} \end{equation} We shall call this scheme a $S_k^mT_i^j$-{\it scheme}. Sometimes in this notation we omit those of the indices that are fixed in the context. For example if $k,m,i,j$ are fixed, this is a $ST$-scheme. Lemma \ref{xn0} shows that the element $\Phi ^{S}(k,m)$ up to a scalar factor equals the element $\Phi ^{\overline{\psi(S)-1}}(\psi (m),\psi (k))$ that has essentially different representation (\ref{grb}). By this reason to the pair $\Phi ^{S}(k,m),$ $\Phi ^{T}_-(i,j)$ we may associate three more schemes: \begin{equation} \begin{matrix} S \ \ \stackrel{k-1}{\circ } \ & \cdots \ & \stackrel{\psi(j)-1}{\bullet } \ & \stackrel{\psi (j)}{\bullet }\ \ & \stackrel{\psi (j)+1}{\circ }\ & \cdots & \ & \stackrel{m}{\bullet } \ & \ & \ \cr {T^*} \ \ \ \ \ \ \ & \ \ & \circ \ & \bullet \ \ & \bullet \ & \cdots & \ & \circ \ & \cdots \ & \stackrel{\psi (i)}{\bullet } \end{matrix}\ \ \ . \label{grr2} \end{equation} Here $T^*$ is the set $\overline{\psi ({T})-1},$ the complement of $ \{ \psi (t)-1\, |\, t\in {T}\} $ with respect to $[\psi (j),\psi (i)).$ By definition this is the $S_k^mT^{*\psi (i)}_{\psi (j)}$-scheme, or shortly the $ST^*$-scheme. \begin{equation} \begin{matrix} {S^*} \ \ \ \ \ \ & \ & \stackrel{\psi (m)-1}{\circ } & \cdots \ &\stackrel{j-2}{\bullet } & \stackrel{j-1}{\circ } & \stackrel{j}{\circ }& \cdots & \stackrel{\psi (k)}{\bullet } \cr {T} \ \ \stackrel{i-1}{\circ } & \cdots & \bullet & \cdots &\circ & \circ & \bullet \end{matrix}\ \ \ . \label{grr3} \end{equation} Here $S^*$ is the set $\overline{\psi (S)-1},$ the complement of $ \{ \psi (s)-1\, |\, s\in {S}\} $ with respect to $[\psi (j),\psi (i)).$ By definition this is the $S_{\psi (m)}^{*\psi (k)}T^j_i$-scheme, or shortly the $S^*T$-scheme. 
\begin{equation} \begin{matrix} { S^*} \ \ \ \ \ \ \ \ \ \ \ & \ & \stackrel{\psi (m)-1}{\circ } & \cdots & \stackrel{\psi (i)-1}{\circ }\ \ & \stackrel{\psi (i)}{\circ } \ & \cdots & \stackrel{\psi (k)}{\bullet } \cr {T^*} \ \ \stackrel{\psi (j)-1}{\circ }& \cdots & \circ & \cdots & \bullet \ \ & \bullet \ \end{matrix}\ \ \ . \label{grr4} \end{equation} Again by definition this is the $S_{\psi (m)}^{*\psi (k)}T^{*\psi (i)}_{\psi (j)}$-scheme, or shortly the $S^*T^*$-scheme. \begin{definition} \rm A scheme is said to be {\it balanced} if it has no fragments of the form \begin{equation} \begin{matrix} \stackrel{t}{\circ } \ & \cdots & \stackrel{s}{\bullet } \cr \circ \ & \cdots & \bullet \end{matrix}\ \ \ . \label{gra2} \end{equation} \label{bal} \end{definition} \begin{theorem} Consider the triangular decomposition of a right coideal subalgebra given in Theorem $\ref{raz2}$ \begin{equation} U=U^-\otimes _{{\bf k}[F]} {\bf k}[H] \otimes _{{\bf k}[G]}U^+. \label{trus} \end{equation} If $\Phi ^{S}(k,m),$ $\Phi ^{T}_-(i,j)$ are the regular generators respectively of $U^+$ and $U^-$ defined by simple roots $[k:m]$ and $[i:j]^-,$ then either all four schemes $(\ref{grr1}-\ref{grr4})$ defined by this pair are balanced, or one of them has the form \begin{equation} \begin{matrix} \stackrel{t}{\circ } \ & \cdots & \circ & \cdots & \bullet & \cdots & \stackrel{s}{\bullet } \cr \circ \ & \cdots & \bullet & \cdots & \circ & \cdots & \bullet \end{matrix}\ \ \ , \label{gra3} \end{equation} where no one intermediate column has points of the same color. \label{bale} \end{theorem} The next lemma shows that to see that a given pair satisfies the conclusion of the theorem it is sufficient to check just two first schemes (\ref{grr1}), (\ref{grr2}). \begin{lemma} $ST$-Scheme $(\ref{grr1})$ is balanced if and only if so is $S^*T^*$- scheme $(\ref{grr4}).$ Similarly $ST^*$-scheme $(\ref{grr2})$ is balanced if and only if so is $S^*T$-scheme $(\ref{grr3}).$ $ST$-Scheme $(\ref{grr1})$ has the form $(\ref{gra3})$ if and only if so does $S^*T^*$-scheme $(\ref{grr4}).$ Respectively $ST^*$-scheme $(\ref{grr2})$ has the form $(\ref{gra3})$ if and only if so does $S^*T$-scheme $(\ref{grr3}).$ \label{bal2} \end{lemma} \begin{proof} Consider a transformation $\rho $ of schemes that moves a point $a$ to $\psi (a)-1$ and changes the color. This transformation maps $ST$-scheme to $S^*T^*$-scheme and $ST^*$-scheme to $S^*T$-scheme. At the same time it changes the order of columns. In particular the fragment of the form (\ref{gra2}) transforms to a fragment of the same form with $t\leftarrow \psi (s)-1,$ $s\leftarrow \psi (t)-1.$ \end{proof} \section{Additional relations} In this and the next technical sections we are going to describe two important cases when $\left[ \Phi^{S}(k,m),\Phi^{T}_-(i,j)\right ]$ belongs to ${\bf k}[H].$ The first one (Theorem \ref{des1}) is the case when $ST$-scheme has the form (\ref{gra3}), while the second one (Theorem \ref{str}) provides conditions when this bracket equals zero. We fix the following notations. 
Let $h_i$ denote $g_if_i\in H,$ while $g_{k\rightarrow m}$ is the product $g_kg_{k+1}\ldots g_{m},$ respectively $f_{k\rightarrow m}$ $=f_kf_{k+1}\ldots f_{m},$ and $h_{k\rightarrow m}$ $=g_{k\rightarrow m}f_{k\rightarrow m}.$ In the same way $\chi ^{k\rightarrow m}$ $=\chi ^{k}\chi ^{k+1}\ldots \chi ^{m}.$ Similarly $P_{k\rightarrow m,i\rightarrow j}$ is $\chi ^{k\rightarrow m}(g_{i\rightarrow j})$ $=\chi ^{i\rightarrow j}(f_{k\rightarrow m}).$ Of course we have $P_{k\rightarrow m,i\rightarrow j}$ $=P_{\psi(m)\rightarrow \psi (k),\psi (j)\rightarrow \psi (i)}.$ In these notations Definition 4.4 takes the form $\sigma _k^m$ $=P_{k\rightarrow m,k\rightarrow m};$ $\mu _k^{m,i}$ $=P_{k\rightarrow i,i+1\rightarrow m}$ $\cdot P_{i+1\rightarrow m,k\rightarrow i}.$ \begin{theorem} If $S$ is a $(k,m)$-regular set then $$ \left[ \Phi^{S}(k,m), \Phi^{\overline{S}}_-(k,m)\right] \sim 1-h_{k\rightarrow m}, $$ where $\overline{S}$ is a complement of $S$ with respect to the interval $[k,m).$ \label{des1} \end{theorem} \begin{proof} We use induction on $m-k.$ If $m=k,$ the statement is clear. Suppose firstly that $n\notin [k,m).$ In this case each set is both black and white $(k,m)$-regular. Hence by Lemma \ref{xn1} and Lemma \ref{xn2} with $t=m-1$ we have $$ \Phi^{S}(k,m)\sim \left\{ \begin{matrix} [ \Phi^{S}(k,m-1),x_m] & \mbox{ if } m-1\notin {S}; \cr [x_m,\Phi^{S}(k,m-1)] & \mbox{ if } m-1\in {S}, \end{matrix} \right. $$ and $$ \Phi^{\overline{S}}_-(k,m)\sim \left\{ \begin{matrix} [x_m^-,\Phi^{\overline{S}}_-(k,m-1)] & \mbox{ if } m-1\notin {S}; \cr [ \Phi^{\overline{S}}_-(k,m-1),x_m^-] & \mbox{ if } m-1\in {S}. \end{matrix} \right. $$ Let us fix for short the following designations: $u=\Phi^{S}(k,m-1),$ $v^-=\Phi^{\overline{S}}_-(k,m-1).$ By the inductive supposition we have $[u,v^-]=\alpha (1-h_{k\, m-1}),$ $\alpha \neq 0.$ Consider the algebra ${\mathfrak F}_2$ defined by the quantum variables $z_1, z_2$ with $g_{z_1}={\rm gr}(u)=g_{k\rightarrow m-1},$ $\chi ^{z_1}=\chi ^u,$ $g_{z_2}=g_m,$ $\chi ^{z_2}=\chi ^m,$ and respectively $g_{z_1^-}={\rm gr}(v^-)=f_{k\rightarrow m-1},$ $\chi ^{z_1^-}=(\chi ^u)^{-1},$ $g_{z_2^-}=f_m,$ $\chi ^{z_2^-}=(\chi ^m)^{-1}.$ Since due to Lemma \ref{suu} we have $[u,x_m^-]=[x_m,v^-]=0,$ the map $z_1\rightarrow u,$ $z_2\rightarrow x_m,$ $z_1^-\rightarrow \alpha ^{-1}v^-,$ $z_2^{-}\rightarrow x_m^-$ has an extension up to a homomorphism of algebras. Hence by Lemma \ref{suu1} we have $[[u,x_m],[x_m^-,v^-]]=\varepsilon (1-h_{k\rightarrow m}),$ where the coefficient $\varepsilon =(1-p(z_1,z_2)p(z_2,z_1))$ equals $1-q^{-2},$ for $p(z_1,z_2)$ $=p(u,x_m)$ $=p_{km}p_{k+1\, m}\ldots p_{m-1\, m}$ and $p(z_2,z_1)$ $=p(x_m, u)$ $=p_{mk}p_{m\, k+1}\ldots p_{m\, m-1}.$ Since conditions of Lemma \ref{suu1} are invariant under the substitution $i\leftrightarrow j,$ we have also $[[x_m,u],[v^-,x_m^-]]=\varepsilon (1-h_{k\rightarrow m}),$ which is required. Now consider the case $n\in [k,m).$ Suppose that $S$ is white $(k,m)$-regular and $m<\psi (k).$ In this case $\overline{S}$ is black $(k,m)$-regular. 
Let $t$ denote the first white point next in order to $\psi (m)-1.$ Since $n$ is a white point, we have $t\leq n.$ \begin{equation} \begin{matrix} \ \ \ \ \ \ \ & \stackrel{m}{\bullet }&\circ &\circ &\stackrel{\psi (t)}{\circ }& \stackrel{\psi (t)-1}{*} & \cdots & \stackrel{n}{\circ } \Leftarrow \cr \stackrel{k-1}\circ \ldots & \stackrel{\psi (m)-1}{\circ }& \bullet & \bullet &\stackrel{t-1}{\bullet } & \stackrel{t}{\circ } & \cdots & \stackrel{n}{\circ } \end{matrix} \label{gb1} \end{equation} The set $S\cup \{ \psi (t)-1\} $ is white $(k,m)$-regular, unless $\psi (t)-1=n.$ Hence by Lemma \ref{xn1} and Lemma \ref{xn2} we have $$ \Phi^{S}(k,m)\sim \left\{ \begin{matrix} [ \Phi^{S}(k,\psi (t)-1),\Phi^{S}(\psi (t),m)] & \mbox{ if } \psi (t)-1\notin {S}\cup \{ n\}; \cr [\Phi^{S}(\psi (t),m),\Phi^{S}(k, \psi (t)-1)] & \mbox{ if } \psi (t)-1\in {S}\cup \{ n\} . \end{matrix} \right. $$ Similarly $\overline{S}\setminus \{ \psi (t)-1\}$ is black $(k,m)$-regular, unless $\psi (t)-1=n.$ The condition $\psi (t)-1\notin \overline{S}\setminus \{ n\}$ is equivalent to $\psi (t)-1\in {S}\cup \{ n\}.$ Hence these lemmas imply also $$ \Phi^{\overline{S}}_-(k,m)\sim \left\{ \begin{matrix} [\Phi^{\overline{S}}_-(\psi (t),m),\Phi^{\overline{S}}_-(k,\psi (t)-1)] & \mbox{ if } \psi (t)-1\notin {S}\cup \{ n\}; \cr [ \Phi^{\overline{S}}_-(k,\psi (t)-1),\Phi^{\overline{S}}_-(\psi (t),m)] & \mbox{ if } \psi (t)-1\in {S}\cup \{ n\} . \end{matrix} \right. $$ Let us fix for short the following designations: $u=\Phi^{S}(k,\psi (t)-1),$ $v=\Phi^{S}(\psi (t),m),$ $w^-=\Phi^{\overline{S}}_-(k,\psi (t)-1),$ $y^-=\Phi^{\overline{S}}_-(\psi (t),m).$ By the inductive supposition we have \begin{equation} [u,w^-]=\alpha (1-h_{k\rightarrow \psi (t)-1}), \ \ [v,y^-]=\beta (1-h_{\psi (t)\rightarrow m}), \label{no1} \end{equation} where $\alpha\neq 0,$ $\beta \neq 0.$ {\bf Assume $t\neq n$} (equivalently, $\psi (t)-1\neq n$). In this case $u$ and $w^-$ have further decompositions according to Lemmas \ref{xn1}, \ref{xn2}: \begin{equation} u=[\Phi^{S}(n+1,\psi (t)-1),\Phi^{S}(k, n)], w^-=[ \Phi^{\overline{S}}_-(k,n),\Phi^{\overline{S}}_-(n+1,\psi (t)-1)]. \label{no2} \end{equation} Moreover, $S$ and $\overline{S}$ are both black and white $(k,n)$-regular. Since $\psi (m)-1,$ $t$ are white points for $S$ and black points for $\overline{S}$, we have $$ \Phi^{S}(k, n)=[[a_1,a_2],a_3], \ \ \Phi^{\overline{S}}_-(k,n)=[b_3^-,[b_2^-,b_1^-]], $$ where $a_1=\Phi^{S}(k, \psi (m)-1),$ $a_2=\Phi^{S}(\psi (m), t),$ $a_3=\Phi^{S}(t+1, n),$ and similarly $b_1^-=\Phi^{\overline{S}}_-(k, \psi (m)-1),$ $b_2^-=\Phi^{\overline{S}}_-(\psi (m), t),$ $b_3^-=\Phi^{\overline{S}}_-(t+1, n).$ All points of the interval $[\psi (m),t)$ are black for $S$ (of course if $t=\psi (m),$ then this interval is empty). Hence all points of the interval $[\psi (t),m)$ are white (otherwise $S$ is not white $(k,m)$-regular). In particular \begin{equation} v=\Phi^{S}(\psi (t),m)=\Phi^{\emptyset }(\psi (t),m)=u[\psi (t),m]. \label{gb2} \end{equation} At the same time, using Lemma \ref{xn0}, we have \begin{equation} a_2=\Phi^{S}(\psi (m),t)= \Phi^{[\psi (m),t) }(\psi (m),t)\sim \Phi^{\emptyset }(\psi (t),m)=u[\psi (t),m]. \label{gb3} \end{equation} Hence by (\ref{no1}) we have $[a_2,y^-]\sim [v,y^-]\sim 1-h_{\psi (t)\rightarrow m}.$ Lemma \ref{suu} implies $$ 0=[a_1,y^-]=[a_3,y^-]=[\Phi^{S}(n+1,\psi (t)-1),y^-]. 
$$ Therefore $$ [\Phi^{S}(k, n),y^-]=[[[a_1,a_2],a_3],y^-]\stackrel{(\ref{uno})}{\sim } [[[a_1,a_2],y^-],a_3] $$ $$ \stackrel{(\ref{jak3})}{=}[[a_1,[a_2,y^-]],a_3] \stackrel{(\ref{cuq3})}{\sim } [a_1,a_3]=0, $$ for $a_1,a_3$ are separated in $U_q^+(\mathfrak{so}_{2n+1}).$ Thus, (\ref{no2}) implies $[u,y^-]=0.$ In perfect analogy we have $[v,w^-]=0.$ Consider the algebra ${\mathfrak F}_2$ defined by quantum variables $z_1, z_2$ with $g_{z_1}={\rm gr}(u)=g_{k\rightarrow \psi (t)-1},$ $\chi ^{z_1}=\chi ^u,$ $g_{z_2}={\rm gr}(v)=g_{\psi (t)\rightarrow m},$ $\chi ^{z_2}=\chi ^v,$ and respectively $g_{z_1^-}={\rm gr}(w^-)=f_{k\rightarrow \psi (t)-1},$ $\chi ^{z_1^-}=(\chi ^u)^{-1},$ $g_{z_2^-}={\rm gr}{(y^-)}=f_{\psi (t)\rightarrow m},$ $\chi ^{z_2^-}=(\chi ^v)^{-1}.$ Due to (\ref{no1}) and $[u,y^-]=[v,w^-]=0,$ the map $z_1\rightarrow u,$ $z_2\rightarrow v,$ $z_1^-\rightarrow \alpha ^{-1}w^-,$ $z_2^{-}\rightarrow \beta ^{-1}y^-$ has an extension up to a homomorphism of algebras. Hence by Lemma \ref{suu1} we have $[[u,v],[y^-,w^-]]=\varepsilon (1-h_{k\rightarrow m}),$ where the coefficient $\varepsilon $ equals $1-q^{-2},$ for $p(z_1,z_2)p(z_2,z_1)$ $=p(u,v)p(v,u)$ $=\mu^{m,\psi (t)-1}_k=q^{-2}$ due to (\ref{mu2}). Conditions of Lemma \ref{suu1} are invariant under the substitution $i\leftrightarrow j.$ Hence we have also $[[v,u],[w^-,y^-]]=\varepsilon (1-h_{k\rightarrow m}),$ which proves the required relation for $t\neq n.$ {\bf Assume $t=n.$} In this case $\Phi^{S}(k,m)=[v,u],$ $\Phi^{\overline{S}}_-(k, m)=[w^-,y^-]$ and we have $$ u= \Phi^{S}(k,n)=[a_1,a_2], \ \ w^-=\Phi^{\overline{S}}_-(k,n)=[b_2^-,b_1^-], $$ where $a_1=\Phi^{S}(k,\psi (m)-1),$ $a_2=\Phi^{S}(\psi (m),n),$ and $b_1^-=\Phi^{\overline{S}}_-(k, \psi (m)-1),$ $b_2^-=\Phi^{\overline{S}}_-(\psi (m),n).$ Equalities (\ref{gb2}) and (\ref{gb3}) with $t\leftarrow n$ show that $a_2\sim v.$ Hence $\Phi^{S}(k,m)=[v,u]\sim [v,[a_1,v]],$ while $[v,[a_1,v]]\sim [[a_1,v],v]$ due to conditional identity (\ref{bri}), for $p(a_1v,v)p(v,a_1v)=\mu ^{m,n}_k=1,$ see (\ref{mu2}). Similarly $$ \Phi^{\overline{S}}_-(k, m)=[w^-,y^-]\sim [[b_2^-,b_1^-],y^-]\sim [[y^-,b_1^-],y^-] \sim [y^-,[y^-,b_1^-]]. $$ Consider the algebra ${\mathfrak F}_2$ defined by quantum variables $z_1, z_2$ with $g_{z_1}={\rm gr}(a_1)=g_{k\rightarrow \psi (m)-1},$ $\chi ^{z_1}=\chi ^{a_1},$ $g_{z_2}={\rm gr}(v)=g_{n+1\rightarrow m},$ $\chi ^{z_2}=\chi ^v,$ and respectively $g_{z_1^-}={\rm gr}(b_1^-)=f_{k\rightarrow \psi (m)-1},$ $\chi ^{z_1^-}=\chi ^{b_1^-}$ $=(\chi ^{a_1})^{-1},$ $g_{z_2^-}={\rm gr}{(y^-)}=f_{n+1\rightarrow m},$ $\chi ^{z_2^-}=\chi ^{y^-}$ $=(\chi ^v)^{-1}.$ By the case ``$n\notin [k,m)$'' considered above we have $[a_1,b_1^-]=\gamma (1-h_{k\rightarrow \psi (m)-1}).$ Since Lemma \ref{suu} implies $[a_1,y^-]$ $=[v,b_1^-]=0,$ the map $z_1\rightarrow a_1,$ $z_2\rightarrow v,$ $z_1^-\rightarrow \gamma ^{-1}b_1^-,$ $z_2^{-}\rightarrow \beta ^{-1}y^-$ has an extension up to a homomorphism of algebras. Hence by Lemma \ref{suu2} we have $[[[a_1,v],v], [y^-,[y^-,b_1^-]]]=\varepsilon (1-h_{k\rightarrow m}).$ It remains to note that $\varepsilon \neq 0.$ Definition (\ref{mu11}) implies $p(z_2,z_2)=p(v,v)=\sigma _{n+1}^m,$ while (\ref{mu21}) shows that $\sigma _{n+1}^m=q.$ Further, $p(z_1,z_2)p(z_2,z_1)=p(a_1,v)p(v,a_1)=\mu _k^{n,\psi (m)-1}=q^{-2},$ see (\ref{mu1}), (\ref{mu2}). Hence $\varepsilon =(1+q)(1-q^{-2})(1-q^{-1})\neq 0.$ This completes the proof of the case ``$m<\psi (k),$ $S$ is white $(k,m)$-regular''.
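As a quick cross-check of the scalar coefficients appearing in this part of the proof (an illustrative aside, not part of the argument): $1-q^{-2}=q^{-2}(q-1)(q+1)$ and $(1+q)(1-q^{-2})(1-q^{-1})=q^{-3}(q-1)^2(q+1)^2,$ so both coefficients are nonzero as soon as $q^2\neq 1.$ The following minimal SymPy sketch, with variable names of our own choosing, verifies these two factorizations.
\begin{verbatim}
from sympy import symbols, simplify

q = symbols('q')

eps1 = 1 - q**(-2)                           # coefficient in the case n not in [k,m)
eps2 = (1 + q)*(1 - q**(-2))*(1 - q**(-1))   # coefficient epsilon in the case t = n

# Both differences simplify to zero, so eps1 and eps2 vanish only at q = 1 or q = -1.
assert simplify(eps1 - (q - 1)*(q + 1)/q**2) == 0
assert simplify(eps2 - (q - 1)**2*(q + 1)**2/q**3) == 0
\end{verbatim}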
If $S$ is black $(k,m)$-regular and still $m<\psi (k),$ then by Lemma \ref{dop} the set $\overline{S}$ is white $(k,m)$-regular. Hence $\left[ \Phi^{\overline{S}}(k,m), \Phi^{S}_-(k,m)\right] \sim 1-h_{k\rightarrow m}.$ Let us apply $h_{k\rightarrow m}\sigma ,$ where $\sigma $ is the antipode, to this equality. By (\ref{ant1}) and (\ref{desc3}) we have $\left[ \Phi^{\overline{S}}_-(k,m), \Phi^{S}(k,m)\right] \sim 1-h_{k\rightarrow m}.$ It remains to apply antisymmetry (\ref{dos}). If $m>\psi (k)$ then Lemma \ref{xn0} reduces consideration to the case ``$m<\psi (k).$" \end{proof} \begin{corollary} If $k\leq m\neq \psi (k),$ then in the algebra $U_q(\mathfrak{so}_{2n+1})$ we have \begin{equation} [u[k,m],u[\psi(m),\psi (k)]^-]\sim \, 1-h_{k\rightarrow m}. \label{kgb1} \end{equation} \label{dus1} \end{corollary} \begin{proof} Proposition \ref{xn0} with $S=\emptyset $ applied to the mirror generators implies $u[\psi(m),\psi (k)]^-\sim \Phi ^{[k,m)}_-(k,m).$ Hence Theorem \ref{des1} works. \end{proof} \section{Pairs with strong schemes} In this section we determine when $\left[ \Phi^{S}(k,m),\Phi^{T}_-(i,j)\right ]$ equals zero. Let us consider firstly the case $S=T=\emptyset .$ \begin{proposition} Let $i\neq k,$ $j\neq m,$ $k\leq m,$ $i\leq j.$ If $\psi (m), \psi (k) \notin [i,j]$ or, equivalently, $k,m\notin [\psi (j),\psi (i)],$ then in $U_q(\mathfrak{so}_{2n+1})$ we have $$ [u[k,m],u[i,j]^-]=0. $$ \label{ruk4} \end{proposition} \begin{proof} If $m=\psi (k),$ then conditions $\psi (m), \psi (k)\notin [i,j]$ certainly imply $k,$ $m,$ $\psi (m),$ $\psi (k)\notin [i,j],$ and one may use Corollary \ref{ruk3}. If $j=\psi (i),$ then $\psi (t)\notin [i,j]$ if and only if $t\notin [i,j].$ Hence again $\psi (m), \psi (k)\notin [i,j]$ implies $k,m,\psi (m), \psi (k)\notin [i,j],$ and Corollary \ref{ruk3} applies. Thus, further we may assume $m\neq \psi (k),$ $j\neq \psi (i).$ We shall use induction on the parameter $m-k+j-i.$ If either $m=k$ or $j=i,$ then the statement follows from (\ref{kom1}) and (\ref{kom2}). Assume $k<m,$ $i<j.$ Condition $\psi (m),\psi (k)\notin [i,j]$ holds if and only if one of the following two options is fulfilled: {\bf A}. $\psi (m)<i<j<\psi (k);$ {\bf B}. $\psi (m)<\psi (k)<i< j,$ or $i<j<\psi (m)<\psi (k);$ Let us consider these options separately. {\bf A}. Since $\psi (k),$ $\psi (m)\notin [i,j],$ by Corollary (\ref{ruk3}) we may suppose that either $k\in [i,j]$ or $m\in [i,j].$ The option {\bf A} is equivalent to $k<\psi (j)<\psi (i)<m,$ for $\psi $ changes the order. By Proposition \ref{ins2} with $i\leftarrow \psi (j)-1$ we have $u[k,m]=[u[k,\psi (j)-1], u[\psi (j),m]].$ Indeed, the exceptional equality $\psi (j)-1=\psi (m)-1$ implies a contradiction $j=m.$ The exceptional equality $\psi (j)-1=\psi (k)$ implies $j=k-1,$ hence $j<k<m,$ in particular $k,m\notin [i,j].$ Similarly, Proposition \ref{ins2} with $k\leftarrow \psi (j),$ $i\leftarrow \psi (i)$ shows that $u[\psi (j),m]=[u[\psi (j),\psi (i)],u[\psi (i)+1,m]].$ Indeed, we have $m\neq \psi (\psi (j))=j,$ and $\psi (i)\neq \psi (\psi (j))=j.$ The remaining condition, $\psi (i)\neq \psi (m)-1,$ is also valid since otherwise $i=m+1,$ and again $k,m\notin [i,j],$ and again Corollary \ref{ruk3} applies. 
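To fix ideas, here is a purely illustrative numeric instance of option {\bf A}, chosen by us and using that $\psi (a)=2n+1-a,$ as the identities $\psi (n+1)=n$ and $\psi (\psi (a))=a$ employed above indicate: for $n=4,$ $k=3,$ $m=8,$ $i=2,$ $j=5$ one has $\psi (m)=1<i<j<\psi (k)=6,$ the equivalent form reads $k=3<\psi (j)=4<\psi (i)=7<m=8,$ and indeed $k\in [i,j]$ while $\psi (m),\psi (k)\notin [i,j].$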
Let us fix for short the following designations: $u=u[k,\psi (j)-1],$ $v=u[\psi (j),\psi (i)],$ $w=u[\psi (i)+1,m],$ $z^-=u[i,j]^-.$ Corollary \ref{dus1} implies $[v,z^-]\sim 1-h_v.$ Proposition \ref{NU} with $i\leftarrow \psi (j)-1,$ $j\leftarrow \psi (i)$ shows that $[u,w]=0,$ for $m=\psi (\psi (j)-1)-1$ is equivalent to $m=j,$ while $\psi (i)=\psi (k)$ is equivalent to $i=k.$ Using (\ref{uno}) we have \begin{equation} [u[k,m],u[i,j]^-]=[[u,[v,w]],z^-]=[u,[[v,w],z^-]]+p_{z,vw}[[u,z^-],[v,w]]. \label{fz} \end{equation} If $j\neq n,$ then $\psi (j)-1\neq j,$ and still $\psi (k)\neq \psi (i)-1.$ Hence by the inductive supposition with $m\leftarrow \psi (j)-1$ we have $[u,z^-]=0.$ If $i\neq n+1,$ then $\psi (i)+1\neq i,$ and still $m\neq \psi (\psi (i)+1)=i-1.$ Hence by the inductive supposition with $k\leftarrow \psi (i)+1$ we have $[w,z^-]=0.$ Thus for $i\neq n+1,$ $j\neq n$ we may continue (\ref{fz}): \begin{equation} \sim [u,[[v,z^-],w]] \stackrel{(\ref{cuq4})}{\sim } [u,h_v\cdot w]\stackrel{(\ref{cuq1})}{\sim } h_v[u,w]=0. \label{fz1} \end{equation} Suppose that $j=n.$ In this case we have $k\in [i,j]=[i,n],$ for $m>\psi (j)=n+1>j.$ Moreover $i\neq n+1,$ for $i<j=n.$ Hence still $[w,z^-]=0,$ and the first addend in (\ref{fz}) is zero (see arguments in (\ref{fz1})). By additional induction on $t-k$ we shall prove the following equation: \begin{equation} [u[k,t],u[i,t]^-]=\sum _{b=k-1}^{t-1} \alpha _bu[i,b]^-\cdot u[k,b], \label{zz2} \end{equation} where $i<k\leq t\leq n,$ $0\neq \alpha _b\in {\bf k},$ and by definition $u[k, k-1]=1.$ If $t=k,$ the formula follows from (\ref{kom2}). In the general case by the main inductive supposition we have $[u[k,t-1],u[i,t]^-]=0,$ for $k\neq i,$ $t-1\neq t;$ $\psi (k),$ $\psi (t-1)\notin [i,t];$ and $t-1\neq \psi (k),$ $t\neq \psi (i)$ due to $i<k\leq t\leq n.$ Therefore $$ [u[k,t],u[i,t]^-]=[[u[k,t-1],x_t], u[i,t]^-]\stackrel{(\ref{jak3})}{=}[u[k,t-1],[x_t,u[i,t]^-]]. $$ Here we would like to apply inhomogeneous substitution (\ref{kom2}) to the right factor of the brackets. To do this we must fix the coefficient: $$ \sim u[k,t-1]\cdot u[i,t-1]^--\chi ^{k\rightarrow t-1}(g_tf_{i\rightarrow t})u[i,t-1]^-\cdot u[k,t-1] $$ $$ =[u[k,t-1],u[i,t-1]^-]+\alpha _{t-1}u[i,t-1]^-\cdot u[k,t-1], $$ where $\alpha _{t-1} =\chi ^{k\rightarrow t-1}(f_{i\rightarrow t-1})(1-\chi ^{k\rightarrow t-1}(h_t))\neq 0.$ Thus by induction on $t$ we get (\ref{zz2}). Relation (\ref{zz2}) with $t=n$ takes the form $[u,z^-]=\sum _{b=k-1}^{n-1} \alpha _bu[i,b]^-\cdot u[k,b].$ Since the first addend in (\ref{fz}) is zero, we may continue (\ref{fz}): $$ \sim \left[ \sum _{b=k-1}^{n-1} \alpha _bu[i,b]^-\cdot u[k,b], [v,w]\right] . 
$$ We have seen that $[v,w]=u[n+1,m].$ At the same time $[u[k,b], u[n+1,m]]=0$ by Proposition \ref{NU} with $i\leftarrow b,$ $j\leftarrow n.$ Indeed, $m\neq \psi (b)-1$ since $m>\psi (i)$ and $i<k\leq b$ implies $\psi (i)>\psi (b),$ while $n\neq \psi (k)$ since $k\in [i,n].$ It remains to note that $[u[i,b]^-,u[n+1,m]]\stackrel{(\ref{dos})}{\sim }[u[n+1,m], u[i,b]^-]=0$ by the inductive supposition with $k\leftarrow n+1,$ $j\leftarrow b,$ for now $n+1\neq i,$ $m\neq b,$ $\psi (n+1)=n\notin [i,b],$ $\psi (m)\notin [i,b],$ and of course $m\neq \psi (n+1)$ $=n=j,$ $b\neq \psi (i)>n.$ Similarly we consider the case $i=n+1.$ In this case $m\in [i,j]=[n+1,j],$ for $k<\psi (i)=n<i.$ Moreover, $j\neq n,$ for $n+1=i<j.$ Hence still $[u,z^-]=0;$ that is, the second addend in (\ref{fz}) equals zero, and by means of (\ref{uno}) we contimue (\ref{fz}): \begin{equation} =[u,[[v,w],z^-]]=[u,[v,[w,z^-]]]+p_{z,w}[u,[[v,z^-],w]]. \label{fz2} \end{equation} Arguments in (\ref{fz1}) show that here the second addend is zero. Since $[u,w]=[u,z^-]=0,$ we have $[u,[w,z^-]]=0.$ Hence conditional identity (\ref{jak3}) implies that the first addend in (\ref{fz2}) equals \begin{equation} [[u,v],[w,z^-]]=[u[k,n],[w,z^-]]. \label{fz3} \end{equation} By downward induction on $t$ we shall prove the following equation: \begin{equation} [u[t,m],u[t,j]^-]=\sum _{a=t+1}^{\mu +1} \alpha _a h_{t\rightarrow a-1}\, u[a,j]^-\cdot u[a,m], \label{zz3} \end{equation} where $n<t,$ $\mu=\min \{ m,j\} ,$ $0\neq \alpha _a\in {\bf k},$ and by definition $u[\mu +1,\mu]=1.$ If $t=j$ or $t=m$ the equation follows from (\ref{kom1}) and (\ref{kom2}). In the general case by the main inductive supposition we have $[u[t,m],u[t+1,j]^-]=0,$ for $t\neq t+1,$ $m\neq j,$ $\psi (t), \psi (m)\notin [t+1,j].$ Therefore $$ [u[t,m],u[t,j]^-]=[u[t,m],[x_t^-,u[t+1,j]^-]] $$ $$ \stackrel{(\ref{jak3})}{=}[[u[t,m],x_t^-]],u[t+1, j]^-] \stackrel{(\ref{kom1})}{\sim } [h_t\cdot u[t+1,m],u[t+1,j]^-]. $$ To prove (\ref{zz3}) it remains to apply (\ref{cuq2}) and the inductive supposition for downward induction. Here the new coefficient $\alpha _{t+1}$ is nonzero since $\chi ^{t+1\rightarrow j}(h_t)=q^{-2}\neq 1.$ Equation (\ref{zz3}) with $t=n+1$ implies \begin{equation} [u[k,n],[w,z^-]]=\left[ u[k,n], \sum _{a=n+2}^{m+1} \alpha _a h_{n+1\rightarrow a-1}u[a,j]^-\cdot u[a,m] \right] . \label{fz4} \end{equation} By Proposition \ref{NU} with $i\leftarrow n,$ $j\leftarrow a-1$ we have $[u[k,n], u[a,m]]=0,$ for $m\neq \psi (n)-1=n$ since $m>i=n+1,$ and $a-1\neq \psi (k)$ since $a-1\leq m\leq j<\psi (k).$ Due to (\ref{cuq1}) it remains to note that $[u[k,n],u[a,j]^-]=0$ by the inductive supposition with $m\leftarrow n,$ $i\leftarrow a,$ for now $n\neq j,$ $k\neq a,$ $\psi (k)\notin [a,j],$ $\psi (n)=n+1\notin [a,j],$ and $n\neq \psi (k)>n,$ $j\neq \psi (a)<n.$ {\bf B}. In this case $\psi (k),$ $\psi (m)\notin [i,j],$ hence by Corollary (\ref{ruk3}) we may suppose that either $k\in [i,j]$ or $m\in [i,j].$ Application of $\psi $ shows that the option {\bf B} is equivalent to \begin{equation} \psi (j)<\psi (i)<k<m,\, \hbox{ or } \, k<m<\psi (j)<\psi (i). 
\label{fz5} \end{equation} In particular again due to Corollary \ref{ruk3} we may suppose that either $i\in [k,m]$ or $j\in [k,m],$ for (\ref{fz5}) implies $\psi (i), \psi (j)\notin [k,m].$ Since $i\neq k,$ $j\neq m,$ it remains to consider two configurations: $k<i\leq m<j$ and $i<k\leq j<m.$ Moreover, the substitution $i\leftrightarrow k,$ $j\leftrightarrow m$ transforms the original conditions {\bf B} to the equivalent form (\ref{fz5}). Therefore it suffices to consider just one of the above configurations. Suppose that $k<i\leq m<j.$ In this case Proposition \ref{ins2} with $k\leftarrow i,$ $m\leftarrow j,$ $i\leftarrow i$ shows that $u[i,j]^-=[x_i^-,u[i+1,j]^-],$ unless $i=\psi (j)-1.$ If $i\neq \psi (j)-1,$ then by the inductive supposition we have $[u[k,m], u[i+1,j]^-]=0,$ for now $k<i,$ $m\neq j$ and $\psi (k), \psi (m)\notin [i,j]\supset [i+1,j].$ Hence by (\ref{jak3}) and (\ref{kom1}) we have $$ [u[k,m], [x_i^-,u[i+1,j]^-]]=[[u[k,m],x_i^-], u[i+1,j]^-]=\delta _i^m \cdot [u[k,m-1],u[i+1,j]], $$ for $i\neq k,$ $i\neq \psi (k),$ $i\neq \psi (m),$ see the original conditions {\bf B}. At the same time if $\delta _i^m\neq 0$ (that is $m=i$), then $[u[k,m-1],u[i+1,j]]=0$ by Proposition \ref{NU} with $m\leftarrow j,$ $i\leftarrow m-1,$ $j\leftarrow i=m,$ for $j\neq \psi (k),$ $j\neq \psi (m-1)-1=\psi (m),$ $i=m\neq \psi (k)$ due to the original conditions {\bf B}. Thus, it remains to check the case $i=\psi (j)-1.$ The equality $i=\psi (j)-1$ with $k<i<m$ implies $k<\psi (j)\leq m,$ which contradicts (\ref{fz5}). Hence in this case we have $i=m.$ Moreover, $k<i$ implies $$ \psi (k)>\psi (i)=\psi (\psi (j)-1)=j+1>i=m. $$ In particular $\psi (k)\neq m-1.$ Hence by Proposition \ref{ins2} with $i\leftarrow m$ we have $u[k,m]=[u[k,m-1],x_m].$ Corollary \ref{ruk3} implies both $[u[k,m-1],u[i,j]^-]=0$ and $[u[k,m-1],u[i+1,j]^-]=0,$ for $m-1=i-1\notin [i,j]\supset [i+1,j],$ $\psi (m-1)$ $=\psi (i-1)$ $=\psi (\psi (j)-2)$ $=j+2\notin [i,j]\supset [i+1,j].$ Thus by (\ref{jak3}) and (\ref{cuq1}) we have $$ [u[k,m],u[i,j]^-]=[[u[k,m-1],x_m],u[i,j]^-]=[u[k,m-1], [x_i,u[i,j]^-]] $$ $$ =[u[k,m-1], h_i\cdot u[i+1,j]^-]\sim h_i \cdot [u[k,m-1], u[i+1,j]^-]=0. $$ This completes the proof. \end{proof} \begin{proposition} Let $i\neq k,$ $j\neq m,$ $k\leq m,$ $i\leq j.$ If $\psi (j), \psi (i)\notin [k,m]$ or, equivalently, $i,j\notin [\psi (m),\psi (k)],$ then in $U_q(\mathfrak{so}_{2n+1})$ we have $$ [u[k,m],u[i,j]^-]=0. $$ \label{ruk5} \end{proposition} \begin{proof} Substitution $i\leftrightarrow k,$ $j\leftrightarrow m$ transforms the conditions of Proposition \ref{ruk5} to the conditions of Proposition \ref{ruk4}. Let us apply Proposition \ref{ruk4} with $i\leftrightarrow k,$ $j\leftrightarrow m$ to the mirror generators $y_i=p_{ii}^{-1}x_i^-,$ $y_i^-=-x_i.$ We get $[u[i,j]_y,u[k,m]_y^-]=0.$ However $u[i,j]_y\sim u[i,j]^-,$ $u[k,m]_y^-\sim u[k,m].$ It remains to apply (\ref{dos}). 
\end{proof} \begin{proposition} Let $i\neq k, $ $j\neq m.$ If \begin{equation} \psi (j)\leq k\leq \psi (i)\leq m \label{rr1} \end{equation} or, equivalently, \begin{equation} \psi (m)\leq i\leq \psi (k)\leq j \label{rr2} \end{equation} then in $U_q(\mathfrak{so}_{2n+1})$ we have \begin{equation} [u[k,m],u[i,j]^-]\, \sim \, h_{k\rightarrow \psi (i)}\, u[\psi (k)+1,j]^-\cdot u[\psi (i)+1,m] \label{wff1} \end{equation} provided that $\psi (m)\neq i$ or $\psi (k)\neq j.$ Here by definition we set $u[j+1,j]^-=u[m+1,m]=1.$ \label{fkk1} \end{proposition} \begin{proof} We note that condition (\ref{rr1}) is equivalent to the condition (\ref{rr2}) since $\psi $ changes the order. Let $u=u[k,\psi (i)],$ $v=u[\psi (i)+1,m],$ $w^-=u[i,\psi (k)]^-,$ $t^-=u[\psi (k)+1,j]^-.$ Of course $v=1$ if $m=\psi (i),$ while $t^-=1$ if $j=\psi (k).$ By Lemma \ref{dus1} we have \begin{equation} [u,w^-]=[u[k,\psi (i)], u[i,\psi (k)]^-]\sim 1-h_u, \label{ze1} \end{equation} while Proposition \ref{ruk4} with $k\leftarrow \psi (i)+1,$ $i\leftarrow \psi (k)+1$ shows that \begin{equation} [v,t^-]=[u[\psi (i)+1,m], u[\psi (k)+1,j]^-]=0, \label{ze2} \end{equation} for $\psi (m),$ $\psi (\psi (i)+1)=i-1\notin [\psi (k)+1, j]$ due to (\ref{rr2}). At the same time Proposition \ref{ruk4} with $m\leftarrow \psi (i),$ $i\leftarrow \psi (k)+1$ implies \begin{equation} [u,t^-]=[u[k,\psi (i)], u[\psi (k)+1,j]^-]=0, \label{ze3} \end{equation} where $k\neq n+1,$ $j\neq \psi (i).$ Indeed, $\psi (\psi (i))=i,$ $\psi (k)$ $\notin [\psi (k)+1,j]$ due to (\ref{rr2}), while $k\neq \psi (k)+1$ due to $k\neq n+1.$ Similarly Proposition \ref{ruk5} with $k\leftarrow \psi (i)+1,$ $j\leftarrow \psi (k)$ shows that \begin{equation} [v,w^-]=[u[\psi (i)+1,m], u[i,\psi (k)]^-]=0, \ \hbox{ if } i\neq n+1,\ m\neq \psi (k), \label{ze4} \end{equation} for $\psi (\psi (k))=k,$ $\psi (i)$ $\notin [\psi (i)+1, m]$ due to (\ref{rr1}), while $\psi (i)+1\neq i$ due to $i\neq n+1.$ We shall prove firstly the proposition when the parameters are in the {\bf general position}; that is, when $i,k\neq n+1,$ $i\neq m+1,$ $k\neq j+1,$ $m\neq \psi (k),$ $j\neq \psi (i).$ By Proposition \ref{ins2} with $i \leftarrow \psi (i)$ we have $u[k,m]=[u,v]$ provided that $m>\psi (i)\neq \psi (m)-1,$ for $\psi (i)\neq \psi (k).$ The same proposition with $k\leftarrow i,$ $m\leftarrow j,$ $i\leftarrow \psi (k)$ shows that $u[i,j]^-=[w^-,t^-]$ provided that $j>\psi (k)\neq \psi (j)-1.$ In particular if in the general position we have additionally $\psi (i)\neq m,$ $\psi (k)\neq j,$ then $u[k,m]=[u,v],$ $u[i,j]^-=[w^-,t^-],$ and all relations (\ref{ze1} --- \ref{ze4}) hold. Hence we have the required proportions $$ [[u,v],[w^-,t^-]]\stackrel{(\ref{fo1})}{\sim }[[[u,w^-],t^-],v]\stackrel{(\ref{cuq4})}{\sim } [h_u\cdot t^-,v]\stackrel{(\ref{cuq21})}{\sim } h_ut^-\cdot v. $$ The omitted coefficient after the application of (\ref{cuq4}) is $\chi ^{t^-}(h_u)-1,$ while $$ \chi ^{t^-}(h_u)=\chi^{\psi (k)+1\rightarrow j}_-(h_{k\rightarrow \psi (j)})=\chi _-^{\psi (k)+1\rightarrow j}(h_{i\rightarrow \psi (k)}) =(\mu _i^{j,\psi (k)})^{-1}=q^2\neq 1 $$ due to definition (\ref{mu1}) and relations (\ref{mu2}) and (\ref{mu4}). Similarly the omitted coefficient after the application of (\ref{cuq21}) is $1-\chi ^v(h_u),$ while $$ \chi ^v(h_u)=\chi ^{\psi (i)+1\rightarrow m}(h_{k\rightarrow \psi (i)})=\mu _k^{m,\psi (i)}=q^{-2}\neq 1. 
$$ If in the general position we have $\psi (i)=m,$ $\psi (k)\neq j,$ then $u[k,m]=u,$ $u[i,j]^-=[w^-,t^-],$ $[u,t^-]=0.$ Hence we again have the required relation $$ [u,[w^-,t^-]]\stackrel{(\ref{cua})}{\sim }[[u,w^-],t^-]\stackrel{(\ref{cuq4})}{\sim } h_u\cdot t^- $$ with the omitted coefficient $\chi ^{t^-}(h_u)-1=q^2-1.$ Similarly, if in the general position we have $\psi (i)\neq m,$ $\psi (k)=j,$ then $u[k,m]=[u,v],$ $u[i,j]^-=w^-,$ $[v,w^-]=0,$ and \begin{equation} [[u,v],w^-]\stackrel{(\ref{uno})}{\sim }[[u,w^-],v]\stackrel{(\ref{cuq4})}{=} (1-q^{-2})h_u\cdot v. \label{ze6} \end{equation} This completes the proof if $k,m,i,j $ are in the general position. Consider the exceptional cases. {\bf 1. $\bf k=n+1.$} In this case $i\neq n+1,$ for $i\neq k.$ In particular by (\ref{ze4}) we have $[v,w^-]=0.$ Moreover $i\neq m+1,$ for $\psi (j)\leq n+1\leq \psi (i)\leq m$ and $\psi (m)\leq i\leq n\leq j$ imply $i\leq n<m.$ Hence $u[k,m]=[u,v]$ if $m\neq \psi (i),$ and $u[k,m]=u$ otherwise. {\bf 1.1.} If $j=n,$ then $u[i,j]^-=w^-,$ for $j=\psi (k)=n.$ Moreover we may assume $m\neq \psi (i)$ (otherwise one may apply Lemma \ref{dus1}); that is, $u[k,m]=[u,v].$ Now algebraic manipulations (\ref{ze6}) prove the required relation \begin{equation} [u[n+1,m],u[i,n]^-]\sim h_{n+1\rightarrow \psi (i)}\cdot u[\psi (i)+1,m], \ \ \ \psi (i)<m. \label{ze7} \end{equation} {\bf 1.2.} Let $j=n+1.$ By definition (\ref{ww}) we have $u[i,j]^-=[u[i,n]^-,x_{n+1}^-],$ and of course $x_{n+1}^-=x_n^-.$ Hence Jacobi identity (\ref{cua}) and (\ref{kom1}) show that $[u[k,m],u[i,j]^-]$ is a linear combination of the following two terms $$ [[u[n+1,m],u[i,n]^-],x_{n+1}^-], \ \ \ [h_{n+1}\cdot u[n+2,m],u[i,n]^-]. $$ We claim that the former term equals zero. Indeed, if $\psi (i)=m,$ then by Lemma \ref{dus1} we have $[u[n+1,m],u[i,n]^-]\sim1-h_{i\rightarrow n}.$ However $$ \chi ^{n+1}(h_{i\rightarrow n})=\chi ^{n}(g_{n-1}f_{n-1}g_nf_n)=p_{n\, n-1}p_{n-1\, n}p_{nn}p_{nn}=1. $$ Hence (\ref{cuq4}) shows that the former term equals zero. If $\psi (i)<m,$ then by (\ref{ze7}) we have $[u[n+1,m],u[i,n]^-]\sim h_{n+1\rightarrow \psi (i)}\cdot u[\psi (i)+1,m].$ Since $\psi (i)\geq k=n+1,$ Lemma \ref{suu} implies $[u[\psi (i)+1,m],x_{n+1}^-]=0.$ At the same time $\chi ^{n+1}(h_{n+1\rightarrow \psi (i)})=\chi ^{n+1}(h_{i\rightarrow n})=1.$ Thus (\ref{cuq21}) reduces the former term to zero. To find the latter term we note that $$ \chi ^{i\rightarrow n}(h_{n+1})=\chi ^{n-1}(g_nf_n)\chi ^{n}(g_nf_n)=p_{n\, n-1}p_{n-1\, n}p_{nn}p_{nn}=1. $$ Hence by (\ref{cuq21}) the latter term is proportional to $h_{n+1}\cdot [u[n+2,m],u[i,n]^-].$ Since the points $k^{\prime }=n+2,$ $m^{\prime }=m,$ $i^{\prime }=i,$ $j^{\prime }=n$ are in the general position, we may apply (\ref{wff1}): $$ [u[n+2,m],u[i,n]^-]\sim h_{n+2\rightarrow \psi (i)}\, u[n,n]^-\cdot u[\psi (i)+1,m], $$ which is required, for $u[n,n]^-=u[n+1,n+1]^-=x_n^-,$ and $h_{n+1}\cdot h_{n+2\rightarrow \psi (i)}=h_{n+1\rightarrow \psi (i)}.$ {\bf 1.3.} Let $j>n+1,$ $i<n.$ By definition (\ref{ww}) we have $u[k,m]=[x_{n+1},u[n+2,m]].$ Relation (\ref{kom2}) shows that $[x_{n+1}, u[i,j]^-]=0.$ Hence conditional identity (\ref{jak3}) implies \begin{equation} [u[k,m],u[i,j]^-]=[x_{n+1}, [u[n+2,m], u[i,j]^-] ]. \label{ze8} \end{equation} At the same time the points $k^{\prime }=n+2,$ $m^{\prime }=m,$ $i^{\prime }=i,$ $j^{\prime }=j$ are in the general position. Moreover $i<n$ implies $\psi (i)+1>n+2,$ hence $[ x_{n+1}, u[\psi (i)+1,m] ]=0$ by Lemma \ref{sepp}. 
This allows us to continue (\ref{ze8}) applying (\ref{cuq1}), (\ref{br1}): $$ \sim [x_{n+1},h_{n+2\rightarrow \psi (i)}\, u[n,j]^-\cdot u[\psi (i)+1,m]]\sim h_{n+2\rightarrow \psi (i)}\,[x_{n+1}, u[n,j]^-]\cdot u[\psi (i)+1,m], $$ which is required due to (\ref{kom2}). {\bf 1.4.} Let $j>n+1,$ $i=n.$ In this case by definition (\ref{ww}) we have $u[i,j]^-=[x_n^-,u[n+1,j]^-].$ Jacobi identity (\ref{cua}) and (\ref{kom1}) show that $[u[k,m],u[i,j]^-]$ is a linear combination of the following two terms $$ [h_{n+1}u[n+2,m],u[n+1,j]^-], \ \ \ [x_n^-,[u[n+1,m],u[n+1,j]^-]]. $$ Proposition \ref{ruk4} implies $[u[n+2,m],u[n+1,j]^-]=0,$ for both $\psi (n+2)=n-1,$ and $\psi (m)$ are less than $n+1.$ At the same time $$ \chi ^{n+1\rightarrow j}(h_{n+1})=\chi ^{\psi (j)\rightarrow n}(h_{n})=\chi ^{n-1}(g_nf_n)\chi ^{n}(g_nf_n)=p_{n-1\, n}p_{n\, n-1}p_{nn}^2=1. $$ Hence by (\ref{cuq2}) the first term equals zero. Due to (\ref{zz3}) the second term takes the form \begin{equation} \left[ x_n^-, \sum _{a=n+2}^{\mu }\alpha _a\, h_{n+1\rightarrow a-1}\, u[a,j]^-\cdot u[a,m]\right] , \label{ze9} \end{equation} where $\mu =\min \{ j,m\} .$ By Lemma \ref{suu} we have $[x_n^-,u[a,m]]=0$ for all $a.$ At the same time $[x_n^-,u[a,j]^-]=0$ for all $a>n+2,$ see Lemma \ref{sepp}, while $[x_n^-,u[n+2,j]^-]=u[n+1,j]^-$ since $x_n=x_{n+1}.$ Hence only one term remains in (\ref{ze9}), the one corresponding to $a=n+2.$ By (\ref{cuq1}) and (\ref{br1}) this term is proportional to $$ h_{n+1}u[n+1,j]^-\cdot u[n+2,m], $$ which coincides with the right hand side of (\ref{wff1}) with $k=n+1,$ $i=n.$ {\bf 2. $\bf k=j+1.$} In this case the inequality $\psi (j)\leq k$ reads $\psi (j)\leq j+1,$ or, equivalently, $2n-j+1\leq j+1;$ that is, $j\geq n.$ If $j=n,$ then we turn to the case $k=n+1$ considered above. Thus we have to consider just the case $j>n.$ In this case $k=j+1>n+1,$ and $j=k-1<\psi (i)$ since by the conditions of the proposition we have $k\leq \psi (i).$ We shall prove first, by downward induction on $i$ with fixed $j,k,$ the following proportion \begin{equation} [u,u[i,j]^-]\sim h_{k\rightarrow \psi (i)}\, u[\psi (k)+1,j]^-. \label{ze10} \end{equation} If $i=\psi (k)$ then (\ref{ze10}) follows from (\ref{kom2}). Let $i<\psi (k).$ In this case by Proposition \ref{ins2} we have $u=[u[k,\psi (i)-1],x_i],$ for $k>n.$ At the same time Proposition \ref{ruk5} implies $[u[k,\psi (i)-1],u[i,j]^-]=0$ since $\psi (j)\leq n<j=k-1<k,$ and $\psi (j), \psi (i)\notin [k,\psi (i)-1].$ Hence conditional identity (\ref{jak3}) with (\ref{kom2}) shows that $$ [u,u[i,j]^-]=[u[k,\psi (i)-1], [x_i, u[i,j]^-]]\sim [u[k,\psi (i)-1], h_i\cdot u[i+1,j]^-]. $$ This relation, after application of (\ref{cuq1}), and the inductive supposition imply (\ref{ze10}), for $h_i=h_{\psi (i)},$ $\psi (i)-1=\psi (i+1).$ If $m=\psi (i)$ then $u[k,m]=u,$ while (\ref{ze10}) coincides with the required (\ref{wff1}). Let $m>\psi (i).$ In this case $u[k,m]=[u,v],$ for $k>n,$ see Proposition \ref{ins2}. Lemma \ref{suu} shows that $[v,u[i,j]^-]=0;$ indeed, $v=u[\psi (i)+1,m]$ depends only on $x_s$ with $s<i,$ while $u[i,j]^-$ depends only on $x_s^-$ with $i\leq s\leq n,$ for $j<\psi (i).$ We have \begin{equation} [[u,v],u[i,j]^-]\stackrel{(\ref{uno})}{\sim }[[u,u[i,j]^-],v]\stackrel{(\ref{ze10})}{\sim }[h_{k\rightarrow \psi (i)}\, u[\psi (k)+1,j]^-,v]. 
\label{ze11} \end{equation} Again by Lemma \ref{suu} we get $[u[\psi (k)+1,j]^-,v]=0.$ Therefore we may continue (\ref{ze11}) applying (\ref{cuq2}): $$ \sim (1-\chi ^v(h_{k\rightarrow \psi (i)}))\, h_{k\rightarrow \psi (i)}\, u[\psi (k)+1,j]^-\cdot v $$ which is required since by definition $v=u[\psi (i)+1,m],$ and $$ \chi ^v(h_{k\rightarrow \psi (i)})=\chi ^{\psi (i)+1\rightarrow m}(h_{k\rightarrow \psi (i)})=\mu _k^{m, \psi (i)}=q^{-2}\neq 1. $$ {\bf 3. $\bf i=n+1$ or $\bf i=m+1.$} Conditions of the proposition are invariant under the transformation $i\leftrightarrow k,$ $j\leftrightarrow m.$ At the same time this transformation reduce the condition ``$i=n+1$ or $i=m+1$" to the considered above cases {\bf 1} or {\bf 2}. Hence for the mirror generators $y_i=p_{ii}^{-1}x_i^-,$ $y_i^-=-x_i$ we have $$ [u[i,j]_y,u[k,m]_y^-]\, \sim \, h_{k\rightarrow \psi (i)}\, u[\psi (i)+1,m]^-_y\cdot u[\psi (k)+1,j]_y. $$ However $u[a,b]_y\sim u[a,b]^-,$ $u[a,b]_y^-\sim u[a,b].$ It remains to apply (\ref{dos}) and to note that by Proposition \ref{ruk4} the factors in the right hand side of (\ref{wff1}) skew commute each other, for $\psi (m)\leq i\leq \psi (k)\leq j$ implies $\psi (m),\psi (\psi (i)+1)=i-1\notin [\psi (k)+1,j].$ {\bf 4. $\bf j=\psi (i).$} If also $m=\psi (k)$ then (\ref{rr1}) reads $i\leq k\leq \psi (i)\leq \psi (k),$ where the first and the last inequalities are not consistent provided that $i\neq k.$ Hence we assume $m\neq \psi (k).$ Denote for short $$ u=u[k,m], \ \ v^-=u[n+1,j]^-,\ \ w^-=u[i,n]^-. $$ By definition (\ref{ww}) we have $u[i,j]^-\sim [v^-,w^-].$ If $k\leq n$ then $\psi (k)\notin [i,n].$ We have also $\psi (m)\notin [i,n],$ for Eq. (\ref{rr1}) with $m\neq j=\psi (i)$ imply $\psi (m)<i.$ Hence by Proposition \ref{ruk4} with $j\leftarrow n$ we have $[u,w^-]=0.$ At the same time $\psi (j)\leq k\leq\psi (n+1)\leq m,$ $\psi (n+1)\neq j.$ Therefore already proved case of the proposition with $i\leftarrow n+1$ implies $$ [u,v^-]\sim h_{k\rightarrow n}u[\psi (k)+1,j]^-\cdot u[n+1,m]. $$ Taking into account Jacobi identity (\ref{cua}) we have \begin{equation} [u,[v^-,w^-]]=[[u,v^-],w^-]\sim [h_{k\rightarrow n}u[\psi (k)+1,j]^-\cdot u[n+1,m],w^-]. \label{ewz2} \end{equation} The second statement of Proposition \ref{NU} with $k\leftarrow i,$ $i\leftarrow n,$ $j\leftarrow \psi (k),$ $m\leftarrow j$ implies $[u[\psi (k)+1,j]^-,w^-]=0.$ Indeed, the conditions of Proposition \ref{NU} under that replacement are: $j\neq \psi (n)-1,$ $\psi (k)\neq \psi (i),$ and $n\neq \psi (\psi (k))-1.$ They are valid since $j=\psi (i)>n,$ $k\neq i,$ and $k\leq n$ respectively. Further, using Definition \ref{slo} and representations (\ref{mu21}), (\ref{mu2}), we have also $$ \chi ^{i\rightarrow n}(h_{k\rightarrow n})=P_{i\rightarrow n,k\rightarrow n}P_{k\rightarrow n,i\rightarrow n} =(\sigma _k^n)^2\mu _i^{n,k-1}=q^2\cdot q^{-2}=1. $$ Hence ad-identity (\ref{br1f}) and identity (\ref{cuq2}) imply that the right hand side of (\ref{ewz2}) equals $$ h_{k\rightarrow n}u[\psi (k)+1,j]^-\cdot [u[n+1,m],w^-]. 
$$ Here $\psi (n)=n+1\leq \psi (i)\leq m.$ Hence we may again use already proved case of the proposition with $k\leftarrow n+1,$ $j\leftarrow n.$ This yields $[u[n+1,m],w^-]\sim h_{n+1\rightarrow j}u[1+j,m],$ which proves (\ref{wff1}), for $h_{k\rightarrow n}\cdot h_{n+1\rightarrow j}=h_{k\rightarrow \psi (i)}$ in the case $j=\psi (i).$ If $k>n$ then in perfect analogy we have $[v^-,u]\sim [u,v^-]=0,$ while $[w^-,u]$ $\sim [u,w^-]$ $\sim h_{k\rightarrow j} u[1+\psi (k),n]^- \cdot u[1+j,m].$ Therefore $$ [u,[v^-,w^-]]\sim [[v^-,w^-],u]=[v^-,[w^-,u]] $$ $$ \sim h_{k\rightarrow j} [v^-,u[1+\psi (k),n]^-]\cdot u[1+j,m], $$ since $[v^-,u[1+j,m]]\sim [u[1+j,m],v^-]=0$ according to Lemma \ref{suu}. We have $[v^-,u[1+\psi (k),n]^-]\sim u[1+\psi (k),j]^-,$ see Lemma \ref{rww}. This completes the case $j=\psi (i).$ {\bf 5. $\bf m=\psi (k).$} By means of the mirror generators one may reduce the consideration to the case $j=\psi (i).$ The proposition is completely proved. \end{proof} \begin{definition} \rm A scheme (\ref{grr1}) is said to be {\it strongly white} provided that the following three conditions are met: first, it has no black-black columns; then, the first from the left column is incomplete; and next, if there are at least two complete columns, then the first from the left complete column is a white-white one. A scheme (\ref{grr1}) is said to be {\it strongly black} provided that the following three conditions are met: first, it has no white-white columns; then, the last column is incomplete; and next, if there are at least two complete columns, then the last complete column is a black-black one. A scheme is said to be {\it strong} if it is either strongly white or strongly black. \label{mb1} \end{definition} Alternatively we may define a strong scheme as follows. Let $S^{\prime }$-scheme be a scheme that appears from the $S$-scheme (\ref{grb}) by changing colors of the first and the last points. Then $ST$-scheme is strongly white (black) if and only if both $ST$-scheme and $S^{\prime } T^{\prime }$-scheme have no black-black (white-white) columns. We stress that the map $\rho $ defined in Lemma \ref{bal2} transforms strongly white schemes to strongly black ones and vice versa. Therefore the $ST$-scheme is strong if and only if the $S^*T^*$-scheme is strong. Similarly, the $ST^*$-scheme is strong if and only if the $S^*T$-scheme is strong. \begin{theorem} Suppose that $S$, $T$ are respectively $(k,m)$- and $(i,j)$-regular sets. If $ST$- and $ST^*$-schemes are strong, then $[\Phi ^{S}(k,m),\Phi ^{T}_-(i,j)]=0.$ \label{str} \end{theorem} \begin{proof} Without loss of generality we may suppose that both schemes are strongly white. Indeed, the mirror generators allow us, if necessary, to switch the roles of $S$ and $T$, while Lemma \ref{xn0} and Lemma \ref{bal2} allow us to find a pair of strongly white schemes. Moreover, once $ST$- and $ST^*$-schemes are strongly white, Lemma \ref{xn0} allows one to switch the roles of $T$ and $T^*.$ Thus, without loss of generality, we may suppose also that $T$ is white $(i,j)$-regular. {\bf 1}. Assume $S$ is white $(k,m)$-regular. We shall use double induction on numbers of elements in $S\cap [k,m)$ and in $T\cap [i,j).$ If both intersections are empty then $i\neq k,$ $j\neq m,$ for $ST$-scheme is strongly white. 
$$ \begin{matrix} \ & \circ & \stackrel{k}{\circ } & \cdots & \circ & \circ & \circ & \stackrel{m}{\bullet } \cr \circ & \stackrel{i}{\circ }& \circ & \cdots & \circ & \stackrel{j}{\bullet } & \ & \ \end{matrix} \ \ \ \ \ \begin{matrix} \ & \circ & \stackrel{k}{\circ } & \circ & \circ & \cdots & \circ & \circ & \stackrel{m}{\bullet } \cr \ & \ & \circ & \stackrel{\psi (j)}{\bullet }& \bullet & \cdots & \bullet & \stackrel{\psi (i)}{\bullet } & \ \end{matrix} $$ Similarly $k,m\notin [\psi (j), \psi (i)],$ for $ST^*$-scheme is strongly white. Hence Proposition \ref{ruk4} applies. If $s\in \, {S}\, \cap [k,m),$ then by Lemma \ref{xn2} we have $\Phi ^{S}(k,m)$ $\sim [\Phi ^{S}(1+s,m),\Phi ^{S}(k,s)].$ It is easy to see that $S^sT$- and $S^sT^*$-schemes (the schemes for the pair $\Phi ^{S}(k,s),\Phi ^{T}_-(i,j)$) are still strongly white, while $S$ is still white $(k,s)$- and $(1+s,m)$-regular. Hence the inductive supposition implies $[\Phi ^{S}(k,s),\Phi ^{T}_-(i,j)]=0.$ By the same reason $[\Phi ^{S}(1+s,m),\Phi ^{T}_-(i,j)]=0.$ Now Jacobi identity (\ref{uno}) implies the required equality. It remains to consider the case $S\cap [k,m)=\emptyset ;$ that is, $\Phi ^{S} (k,m)=u[k,m].$ If $t\in \, {T}\, \cap [i,j),$ then by Lemma \ref{xn2} we have $\Phi ^{T}_-(i,j)$ $\sim [\Phi ^{T}_-(1+t,j),\Phi ^{T}_-(i,t)].$ In this case $T$ is still white $(i,t)$- and $(1+t,j)$-regular, while $ST^t$-scheme is strongly white. At the same time $ST^*_{\psi (t)}$-scheme is not strongly white only if $\psi (t)-1=k-1$ (the first from the left column is complete). Hence by the inductive supposition we have $[\Phi ^{S}(k,m),\Phi ^{T}_-(i,t)]=0$ with one exception being $\psi (t)=k.$ Similarly $[\Phi ^{S}(k,m),\Phi ^{T}_-(1+t,j)]=0$ with one exception being $\psi (t)-1=m$. Hence, if in the set $T\cap [i,j)$ there exists a point $t\neq \psi (k),$ $t\neq \psi (m)-1,$ then Jacobi identity (\ref{cua}) implies the required equality. Certainly if $T\cap [i,j)$ has more than two elements then such a point does exist. If $T\cap [i,j)$ has two points then there is just one exceptional configuration for the main $ST^*$-scheme: $$ \begin{matrix} \ \ & \ & \circ & \stackrel{k}{\circ } & \circ \ \ \cdots & \circ & \circ & \stackrel{m}{\bullet } & \ &\ \cr \circ & \stackrel{\psi (j)}{\bullet }\ \cdots \ \ \ \bullet & \circ & \bullet & \bullet \ \ \cdots & \bullet & \bullet & \circ & \bullet & \cdots \ \ \stackrel{\psi (i)}{\bullet } \end{matrix} $$ In this case $T\cap [i,j)=\{ t_1,t_2\} ,$ where $\psi (t_2)-1=k-1,$ $\psi (t_1)-1=m.$ Let $a=\Phi ^{S}(k,m)=u[k,m],$ $b^-=\Phi ^{T}_-(i,j),$ $u_0^-=u[i,t_1]^-$ $=u[i,\psi (m)-1]^-,$ $u_1^-=u[1+t_1,t_2]^-$ $=u[\psi (m),\psi (k)]^-,$ $u_2^-=u[1+t_2,j]^-=u[1+\psi (k),j]^-.$ Using Lemma \ref{xn2} twice, we have $b^-\sim [[u_2^-,u_1^-],u_0^-].$ Lemma \ref{dus1} implies $[a,u_1^-]\sim 1-h_a.$ Inequality $\psi (m)\leq \psi (k)$ implies both $\psi (m), \psi (k)\notin [i,\psi (m)-1]$ $=[i,t_1]$ and $\psi (m), \psi (k)\notin [1+\psi (k),j]=[1+t_2,j].$ Therefore by Proposition \ref{ruk4} we have $[a,u_0^-]=0$ unless $m=\psi (m)-1,$ and $[a,u_2^-]=0$ unless $k=1+\psi (k).$ At the same time $k=1+\psi (k)$ implies $k=n+1,$ and hence $n=\psi (k)=t_2\in {T}\cap [i,j),$ which is impossible, for $T$ is white $(i,j)$-regular. Similarly, $m=\psi (m)-1$ implies $m=n,$ and hence $n=\psi (m)-1=t_1\in {T}\cap [i,j),$ which is wrong by the same reason. 
Taking into account the proved relations, we may write $$ [a,b^-]\sim [a,[[u_2^-,u_1^-],u_0^-]]\stackrel{(\ref{jak3})}{=}[[a,[u_2^-,u_1^-]],u_0^-] $$ $$ \stackrel{(\ref{cua})}{\sim }[[u_2^-,[a,u_1^-]],u_0^-] \stackrel{(\ref{cuq3})}{\sim }[u_2^-,u_0^-]. $$ Here we have applied inhomogeneous substitution (\ref{cuq3}) to the left factor in the brackets. Proposition \ref{NU} with $k\leftarrow i,$ $m\leftarrow j,$ $i\leftarrow t_1,$ $j\leftarrow t_2$ implies $[u_2^-,u_0^-]=0$ provided that $j\neq \psi (i),$ $j\neq \psi (t_1)-1,$ $t_2\neq \psi (i),$ and $t_1\neq \psi (t_2)-1.$ The first inequality is valid since $T$ is $(i,j)$-regular. The second and third inequalities are equivalent to $j\neq m$ and $\psi (k)\neq \psi (i)$ respectively. However $j\neq m$ and $k\neq i$ are valid, for the main $ST$-scheme is strongly white. The equality $t_1=\psi (t_2)-1$ is equivalent to $m=t_2,$ while in this case on the $ST$-scheme we have a black-black column. If ${T}\cap [i,j)=\{ t\} $ then there are just two exceptional configuration for the main $ST^*$-scheme, where $\psi (t)=k$ in case A, and $\psi (t)=m+1$ in case B: $$ {\rm A:}\ \begin{matrix} & & \circ & \stackrel{k}{\circ } & \circ & \circ & \stackrel{m}{\bullet } \cr \circ & \stackrel{\psi (j)}{\bullet } \bullet & \circ & \bullet & \bullet &\stackrel{\psi (i)}{\bullet } & \ \end{matrix};\ \ \ {\rm B:}\ \begin{matrix} \circ & \stackrel{k}{\circ } & \circ & \circ & \circ & \stackrel{m}{\bullet } & \ &\ \cr \ & \ & \circ & \stackrel{\psi (j)}{\bullet } & \bullet & \circ & \bullet & \stackrel{\psi (i)}{\bullet } \end{matrix} $$ In case A we keep the above notations $a=u[k,m],$ $b^-=\Phi ^{T}_-(i,j),$ $u_0^-=u[i,t]^-,$ $u_1^-=u[1+t,j]^-.$ Lemma \ref{xn2} implies $b^-=[u_1^-,u_0^-].$ We have $\psi (m),\psi (k)\notin [1+t,j]=[1+\psi (k),j].$ Moreover $k\neq 1+t,$ for otherwise the first from the left complete column on the main $ST$-scheme is white-black which contradicts the definition of a strongly white scheme (here $t\neq j$ and therefore the scheme has at least two complete columns). Hence Proposition \ref{ruk4} implies $[a,u_1^-]=0.$ Since $\psi (t)=k\leq \psi (i)\leq m,$ Proposition \ref{fkk1} shows that $[a,u_0^-]\sim h_{k\rightarrow \psi (i)}u[\psi (i)+1,m].$ Thus we get $$ [a,b^-]=[a,[u_1^-,u_0^-]]\stackrel{(\ref{cua})}{\sim }[u_1^-,[a,u_0^-]] \stackrel{(\ref{cuq1})}{\sim }h_{k\rightarrow \psi (i)}[u_1^-,u[\psi (i)+1,m]]=0. $$ The latter equality follows from antisymmetry identity (\ref{dos}) and Proposition \ref{ruk4}. Indeed, $\psi (i)+1\neq 1+t=1+\psi (k),$ for $i\neq k,$ while in configuration A we have $\psi (m),$ $\psi (\psi (i)+1)$ $\notin [1+t,j]$ since $\psi (i)+1, m\notin [\psi (j), \psi (1+t )]=[\psi (j),k-1].$ This allows one to apply Proposition \ref{ruk4}. In case B we consider the points $k^{\prime }=\psi (m),$ $m^{\prime }=\psi (k),$ $i^{\prime }=\psi (j),$ $j^{\prime }=\psi (i),$ and $t^{\prime }=\psi (t)-1=m.$ These points are in configuration A. Therefore we have $[u[k^{\prime },m^{\prime }], \Phi ^{\{ t^{\prime }\} }_-(i^{\prime },j^{\prime })]=0.$ Let us apply $g_{k\rightarrow m}f_{i\rightarrow j}\sigma ,$ where $\sigma $ is the antipode. Using properties of the antipode given in (\ref{ant1}), (\ref{ant2}), (\ref{desc3}) we get the required equality. {\bf 2}. 
If $S$ is black $(k,m)$-regular, but not white $(k,m)$-regular, then $n\in [k,m),$ and $n$ is a black point on the scheme $S.$ Lemma \ref{xn1} implies $\Phi ^{S}(k,m)$ $=[\Phi ^{S}(k,n),\Phi ^{S}(n+1,m)].$ By definition $S$, as well as any other set, is white $(k,n)$- and $(n+1,m)$-regular. Since $ST$- and $ST^*$-schemes are strongly white, the point $n$ is not black on the schemes $T,T^*.$ At the same time $n$ is a white point on $T$ if and only if it is a black point on $T^*.$ Hence $n$ does not appear on $T,T^*$ at all, $n\notin [i-1,j].$ In particular $S^nT$-, and $S^nT^*$-schemes (the schemes for the pair $\Phi ^{S}(k,n),$ $\Phi ^{T}_-(i,j)$) are still strongly white. The above considered case implies $[\Phi ^{S}(k,n),\Phi ^{T}_-(i,j)]=0.$ By the same reason $[\Phi ^{S}(n+1,m),\Phi ^{T}_-(i,j)]=0.$ It remains to apply Jacobi identity (\ref{uno}). \end{proof} \section{Proof of the main theorem} \begin{lemma} Let $k\leq s<n.$ If $s\in S,$ then \begin{equation} \left[ \Phi^{S}(k,n), \Phi^{\overline{S}}_-(k,s)\right] \sim \Phi^{S}(1+s,n), \label{but1} \end{equation} where $\overline{S}$ is the complement of $S$ with respect to $[k,s).$ \label{ed} \end{lemma} \begin{proof} By Lemma \ref{xn2} we have $\Phi^{S}(k,m)=\left[ \Phi^{S}(1+s,n), \Phi^{S}(k,s)\right].$ At the same time $\left[ \Phi^{S}(1+s,n), \Phi^{\overline{S}}_-(s,n)\right] =0$ due to Lemma \ref{suu}. Taking into account Theorem \ref{des1} we have $$ \left[\left[ \Phi^{S}(1+s,n), \Phi^{S}(k,s)\right], \Phi^{\overline{S}}_-(k,s)\right] $$ $$ \stackrel{(\ref{jak3})}{=} \left[ \Phi^{S}(1+s,n), \left[ \Phi^{S}(k,s), \Phi^{\overline{S}}_-(k,s)\right] \right]\stackrel{(\ref{cuq3})}{\sim } \Phi^{S}(1+s,n), $$ where the coefficient of the proportion equals $1-\chi ^{1+s\rightarrow n}(h_{k\rightarrow s})$ $=1-\mu _k^{n,s}$ $=1-q^{-2},$ see (\ref{mu2}). \end{proof} \begin{lemma} Let $i\neq k,$ $m\leq n.$ If the $S_k^mT_i^m$-scheme has only one black-black column $($the last one$),$ and the first complete column is white-white then \begin{equation} \left[ \Phi^{S}(k,m), \Phi^{T}_-(i,m)\right] =\sum _{b=\nu -1}^{m-1}\alpha _b \, \Phi^{T}_-(i,b)\cdot \Phi^{S}(k,b), \label{but} \end{equation} where $\nu =\max \{ i,k\} ,$ while $\alpha _b\neq 0$ if and only if the column $b$ is white-white. Here by definition $\Phi^{S}(k,k-1)=\Phi^{T}_-(i,i-1)=1.$ \label{ed1} \end{lemma} \begin{proof} For the sake of definiteness, assume that $k<i$ (if $i<k$ then the proof is quite similar). We use induction on the number of white-white columns on the $S_k^mT_i^m$-scheme. If there is just one white-white column then this is the first from the left column labeled by $i-1.$ Moreover all intermediate complete columns are white-black or black-white. Hence Theorem \ref{des1} implies $\left[ \Phi^{S}(i,m), \Phi^{T}_-(i,m)\right] \sim 1-h_{i\rightarrow m}.$ By Lemma \ref{xn1} we have $\Phi^{S}(k,m)=\left[ \Phi^{S}(k,i-1), \Phi^{S}(i,m)\right].$ At the same time $\left[ \Phi^{S}(k,i-1), \Phi^{T}_-(i,m)\right] =0$ due to Lemma \ref{suu}. Hence $$ \left[ \left[ \Phi^{S}(k,i-1), \Phi^{S}(i,m)\right] , \Phi^{T}_-(i,m)\right] $$ $$ \stackrel{(\ref{jak3})}{=}\left[ \Phi^{S}(k,i-1), \left[ \Phi^{S}(i,m), \Phi^{T}_-(i,m)\right] \right] \stackrel{(\ref{cuq3})}{\sim } \Phi^{S}(k,i-1), $$ which is required, for the coefficient of the proportion equals $1-\chi ^{k\rightarrow i-1}(h_{i\rightarrow m})=1-\mu _k^{m,i-1}=1-q^{-2}\neq 0$ (recall that $m\leq n$). To make the inductive step, let $a$ be the maximal white-white column. 
Then all columns between $a$ and $m$ are black-white or white-black. Hence Theorem \ref{des1} implies $\left[ \Phi^{S}(1+a,m), \Phi^{T}_-(1+a,m)\right] \sim 1-h_{1+a\rightarrow m}.$ Let us fix for short the following designations: $$u=\Phi^{S}(k,a),\ v=\Phi^{S}(1+a,m),\ w^-=\Phi^{T}_-(i,a), \ t^-=\Phi^{T}_-(1+a,m).$$ Then by Lemma \ref{xn1} we have $\Phi^{S}(k,m)=[u,v],$ $\Phi^{T}_-(i,m)=[w^-,t^-].$ Lemma \ref{suu} implies $[u,t^-]$ $=[v,w^-]$ $=0.$ At the same time $[v,t^-]\sim 1-h_{1+a\rightarrow m},$ while $[u,w^-]$ equals the left hand side of (\ref{but}) with $m\leftarrow a.$ Applying inductive supposition to $[u,w^-]$ we see that $[[u,w^-], t^-]=0.$ Indeed, we may apply inhomogeneous substitution (\ref{but}) to the left factor of the bracket. Then for each $b<a$ we have $\left[ \Phi^{S}(k,b),t^-\right ]=0$ by Lemma \ref{suu}, while $\left[ \Phi^{T}_-(i,b),t^-\right ]=0$ due to Lemma \ref{sepp}. Additionally, using (\ref{cuq3}) with $x_i\leftarrow v,$ $x_i^-\leftarrow t^-,$ we have $[w^-,[v,t^-]]\sim w^-,$ for $\chi ^{w^-}(g_vf_t)=(\mu _i^{m,a})^{-1}=q^{2}\neq 1$ according to (\ref{mu2}). All that relations allow us to simplify (\ref{fo1}): $$ [[u,v],[w^-,t^-]]\sim [u,[w^-,[v,t^-]]]=u\cdot w^--p(u,w^-vt^-)w^-\cdot u $$ \begin{equation} =[u,w^-]+p(u,w^-)(1-p(u,vt^-))w^-\cdot u. \label{vse} \end{equation} Here we apply inhomogeneous substitution to the right factor of the bracket. By this reason we have to develop the bracket to its explicit form. We have $p(u,vt^-)$ $=p(u,v)p(t,u)$ $=\mu _k^{m,a}$ $=q^{-2}$ $\neq 1.$ Thus inductive supposition applied to $[u,w^-]$ shows that (\ref{vse}) is the required sum. \end{proof} \begin{lemma} Let $S$ be a black $(k,m)$-regular set, $k\leq n<m.$ We have \begin{equation} \varepsilon ^-\otimes \varepsilon ^0\otimes {\rm id}\left( \left[ \Phi^{S}(k,m), \Phi^{\overline{S}}_-(k,n)\right] \right)\neq 0. \label{nac} \end{equation} This nonzero element has degree $[\psi (m):n]=[n+1:m].$ Here $\varepsilon ^-,$ $\varepsilon ^0$ are the counits of $ U_q^-$ and ${\bf k}[H]$ respectively, the tensor product of maps is related to the triangular decomposition $(\ref{tr}), (\ref{trs});$ while $\overline{S}$ is a complement of $S$ with respect to $[k,n).$ \label{xic} \end{lemma} \begin{proof} Let us fix for short the following designations: $$ u=\Phi^{S}(k,n), \ \ v=\Phi^{S}(1+n,m), \ \ w^-=\Phi^{\overline{S}}_-(k,n). $$ By Lemma \ref{xn1} we have $\Phi^{S}(k,m)=[u,v],$ while Jacobi identity (\ref{uno}) implies that $\left[ \Phi^{S}(k,m), \Phi^{\overline{S}}_-(k,n)\right]$ is a linear combination of two elements, $[u,[v,w^-]]$ and $[[u,w^-],v].$ The latter one equals zero since due to Theorem \ref{des1} we have $[u,w^-]\sim 1-h_v,$ and coefficient of (\ref{cuq4}) with $x_i\leftarrow u,$ $x_i^-\leftarrow w^-,$ $u\leftarrow v$ is $\chi ^v(g_uf_w)-1$ $=\mu _k^{m,n}-1$ $=0,$ see (\ref{mu2}), (\ref{mu4}). Further, due to Proposition \ref{xn0} we have \begin{equation} [v,w^-]\sim \left[ \Phi^{\overline{\psi (S)-1}}(\psi (m),n), \Phi^{\overline{S}}_-(k,n)\right] . \label{nac2} \end{equation} Let us show that we may apply Lemma \ref{ed1} to this bracket. 
If $a\in [k,n)$ is a black point on $\overline{S}$ then $a$ is a white point on $S.$ If additionally $a\in [\psi (m),n)$ then $\psi (a)-1\in [n,m).$ Moreover since $S$ is black $(k,m)$-regular, the point $\psi (a)-1$ is black on $S.$ Hence $a=\psi (\psi (a)-1)-1$ is a white point on $\overline{\psi (S)-1}.$ Thus the $\overline{\psi (S)-1}_{\, \psi (m)}^{\, n}\, \overline{S}_{\, k}^{\, n}$-scheme has no black-black columns except the last one. The first from the left complete column is labeled by $\nu -1,$ where $\nu =\max \{ \psi (m), k \} .$ If $\psi (m)<k$ then $\psi (k)$ is black on $S,$ see (\ref{grab3}), hence $k-1=\psi (\psi (k))-1$ is white on $\overline{\psi (S)-1}.$ If $k<\psi (m)$ then $\psi (m)-1$ is black on $S,$ see (\ref{grab2}), hence $\psi (m)-1$ is white on $\overline{S}.$ Thus in both cases the first from the left complete column is white-white. By Lemma \ref{ed1} we may continue (\ref{nac2}): \begin{equation} =\sum _{b=\nu -1}^{n-1}\alpha _b \, \Phi^{\overline{S}}_-(k,b)\cdot \Phi^{\overline{\psi (S)-1}}(\psi (m),b) \stackrel{\rm df}{=} \sum _{b=\nu -1}^{n-1}\alpha _b\, w_b^-\cdot v_b. \label{nac3} \end{equation} In order to find $[u,[v,w^-]]$ we would like to substitute the found value of $[v,w^-].$ However, this is an inhomogeneous substitution into the right factor of the bracket. Therefore we have to develop the brackets to their explicit form and analyze the coefficients. We have $$ p(u,vw^-)p(u,w^-_bv_b)^{-1}=p(u,v)p(u,v_b)^{-1}p(w,u)p(w_b,u)^{-1} $$ $$ =P_{k\rightarrow n, 1+n\rightarrow \psi (b)-1}P_{1+b\rightarrow n,k\rightarrow n} $$ $$ = P_{k\rightarrow n, 1+n\rightarrow \psi (b)-1}P_{1+n\rightarrow \psi (b)-1, k\rightarrow n} =\mu _k^{\psi (b)-1,n}, $$ see definition (\ref{mu1}). Relations (\ref{mu2}--\ref{mu4}) show that $\mu _k^{\psi (b)-1,n}=1$ unless $b=k-1.$ If $b=k-1$ then $\mu _k^{\psi (b)-1,n}$ $=\mu _k^{\psi (k),n}$ $=q^2,$ see (\ref{mu3}). Thus all brackets $[u, w_b^-\cdot v_b]$ have the same coefficient as $[u,vw^-]$ does with only one exception being $b=k-1.$ If $k<\psi (m)$ then of course $b\neq k-1,$ for $b\geq \nu -1=\psi (m)-1\geq k.$ Hence in this case by ad-identity (\ref{br1}) the element $[u,[v,w^-]]$ splits into a linear combination of two sums: \begin{equation} \sum _{b=\psi (m) -1}^{n-1}\alpha _b \, \left[ \Phi^{S}(k,n),\Phi^{\overline{S}}_-(k,b)\right] \cdot \Phi^{\overline{\psi (S)-1}}(\psi (m),b). \label{nac4} \end{equation} and \begin{equation} \sum _{b=\psi (m) -1}^{n-1}\alpha _b \, \Phi^{\overline{S}}_-(k,b)\cdot \left[ \Phi^{S}(k,n),\Phi^{\overline{\psi (S)-1}}(\psi (m),b)\right] . \label{nac5} \end{equation} By Lemma \ref{ed} we have $\left[ \Phi^{S}(k,n),\Phi^{\overline{S}}_-(k,b)\right] \sim \Phi^{S}(1+b,n),$ for $b$ is a white point on $\overline{S}$ (otherwise $\alpha _b=0$). In particular all terms in (\ref{nac4}) belong to the positive quantum Borel subalgebra, and hence application of $\varepsilon ^-\otimes \varepsilon ^0\otimes {\rm id}$ does not change this sum. Application of $\varepsilon ^-\otimes \varepsilon ^0\otimes {\rm id}$ to (\ref{nac5}) kills all terms, for $\varepsilon ^- (\Phi^{\overline{S}}_-(k,b))=0,$ $b\geq k.$ Thus the left hand side of (\ref{nac}) takes the form \begin{equation} \alpha \, \Phi^{S}(\psi (m),n)+\sum _{b=\psi (m)}^{n-1}\alpha _b \, \Phi^{S}(1+b,n) \cdot \Phi^{\overline{\psi (S)-1}}(\psi (m),b), \label{nac6} \end{equation} where $\alpha =\alpha _{\psi (m)-1}\neq 0.$ We may decompose all terms in this expression using definition (\ref{dhs}). 
As a result we will get a polynomial, say $F,$ in $u[i,j],$ $1\leq i\leq j\leq n.$ It is very important to note that all first from the left factors $u[i,j]$ in all monomials of $F$ satisfy $i>\psi (m)$ with only one exception, $\alpha \, u[\psi (m),n],$ coming from the first term of (\ref{nac6}). In particular $u[i,j]<u[\psi (m),n]$ (recall that $x_1>x_2>\ldots >x_n,$ while words in $X$ are ordered lexicographically). Hence further diminishing process of decomposition in PBW-basis (see \cite[Lemma 7]{Kh3}) produces words in $u[i,j],$ $j<\psi (i)$ that start with lesser than $u[\psi (m),n]$ elements. This means that $\alpha \, u[\psi (m),n]$ is still the leading term of (\ref{nac6}) after the PBW-decomposition. In particular (\ref{nac6}) is not zero. If $\psi (m)<k$ then again by ad-identity (\ref{br1}) the element $[u,[v,w^-]]$ splits in sums (\ref{nac4}), (\ref{nac5}) with $\sum\limits_{b=\psi (m)-1}^{n-1}\leftarrow \sum\limits_{b=k}^{n-1}$ and an additional term that corresponds to the value $b=k-1.$ Since $\alpha _{k-1}\neq 0,$ this term is proportional to \begin{equation} u\cdot v_{k-1}-p(u,vw^-)\, v_{k-1}\cdot u, \label{nac7} \end{equation} where $v_{k-1}=\Phi^{\overline{\psi (S)-1}}(\psi (m),k-1)$ was defined in (\ref{nac3}). We have already seen that $p(u,vw^-)=p(u,v_{k-1})\, q^2.$ At the same time $p(v_{k-1},u)p(u,v_{k-1})$ $=p_{k-1\, k}\, p_{k\, k-1}$ $=q^{-2},$ for $p_{ij}p_{ji}=1,$ $j>i+1,$ see (\ref{b1rell}). Hence $p(u,v_{k-1})\, q^2$ $=p(v_{k-1},u)^{-1}.$ Therefore the term (\ref{nac7}) is proportional to $[v_{k-1},u]$ with coefficient $-p(v_{k-1},u)^{-1}.$ Taking into account formula (\ref{desc1}), we have \begin{equation} [v_{k-1},u]=[\Phi^{\overline{\psi (S)-1}}(\psi (m),k-1), \Phi^{S}(k,n)]=\Phi^{R}(\psi (m),n), \label{nac8} \end{equation} where $R=\left( \overline{\psi (S)-1}\cap [\psi (m),k-1)\right) \cup \left(S\cap [k,n)\right) .$ Certainly the map $\varepsilon ^-\otimes \varepsilon ^0 \otimes {\rm id}$ kills all terms of (\ref{nac5}) with $b\geq k,$ while Lemma \ref{ed} implies $\left[ \Phi^{S}(k,n),\Phi^{\overline{S}}_-(k,b)\right] \sim \Phi^{S}(1+b,n),$ $b\geq k.$ Thus the left hand side of (\ref{nac}) is proportional to the sum \begin{equation} \Phi^{R}(\psi (m),n)+\sum _{b=k}^{n-1}\alpha _b^{\prime } \, \Phi^{S}(1+b,n) \cdot \Phi^{\overline{\psi (S)-1}}(\psi (m),b). \label{nac9} \end{equation} This is a nonzero element precisely by the same reasons as (\ref{nac6}) is. \end{proof} {\it Proof of Theorem} \ref{bale}. Suppose that there exists a pair of simple roots such that one of schemes (\ref{grr1}-\ref{grr4}) has fragment (\ref{bal}) and no one of these schemes has form (\ref{gra3}). Among all that pairs we choose a pair $[k:m],$ $[i:j]^-$ that has fragment (\ref{bal}) with minimal possible $s-t$ on one of the schemes. Actually, due to Lemma \ref{bal2}, there are at least two of the schemes that have fragments with that minimal value of $s-t.$ Without loss of generality, changing if necessary notations $S\leftrightarrow S^*$ or $T\leftrightarrow T^*$ or both, we may assume that the $ST$-scheme has that fragment. Since $s-t$ is minimal, there are no white-white or black-black columns between $t$ and $s.$ Hence due to Theorem \ref{des1} we have \begin{equation} \left[ \Phi^{S}(1+t,s), \Phi^{T}_-(1+t,s)\right] \sim 1-h_{1+t\rightarrow s}, \label{bit} \end{equation} provided that $S$ or, equivalently, $T$ is $(1+t,s)$-regular. 
\underline{Let, first, $s\leq n.$} In this case by definition $S$ is $(1+t,s)$-regular, while due to Lemma \ref{sig3} we have $\Phi^{S}(k,s)\in U^{S}(k,m)\subseteq U^+, $ $\Phi^{T}_-(1+t,s)\in U^{S}_-(i,j)\subseteq U^-.$ Hence we get \begin{equation} \left[ \Phi^{S}(k,s), \Phi^{T}_-(1+t,s)\right] \in [U^+,U^-]\subseteq U. \label{bit1} \end{equation} At the same time by Lemma \ref{xn1} we have a decomposition \begin{equation} \Phi^{S}(k,s)\sim \left[ \Phi^{S}(k,t),\Phi^{S}(1+t,s) \right] . \label{bit2} \end{equation} Lemma \ref{suu} implies \begin{equation} \left[ \Phi^{S}(k,t),\Phi^{T}_-(1+t,s) \right] =0. \label{bit3} \end{equation} Applying first (\ref{jak3}), and then (\ref{cuq3}) with $x_i\leftarrow \Phi^{S}(1+t,s),$ $x_i^-\leftarrow \Phi^{T}_-(1+t,s)$ due to (\ref{bit}) we see that the left hand side of (\ref{bit1}) is proportional to $\Phi^{S}(k,t),$ in which case the coefficient equals $1-\chi ^{k\rightarrow t}(h_{1+t\rightarrow s})$ $=1-\mu _k^{s,t}=1-q^{-2},$ see (\ref{mu1}), (\ref{mu2}). Thus $\Phi^{S}(k,t)\in U\cap U_q^+(\mathfrak{so}_{2n+1})=U^+;$ that is, $[k:t]$ is an $U^+$-root. According to Lemma \ref{sig} we have $[1+t:m]\in \Sigma (U^S(k,m))\subseteq \Sigma (U^+).$ This implies that $t=k-1,$ for otherwise we have a contradiction: $[k:m]=[k:t]+[1+t:m]$ is a decomposition of a simple $U^+$-root in $\Sigma (U^+).$ Similarly, due to the mirror symmetry, we have $t=i-1;$ that is, $k=i=1+t.$ Now we are going to show that $m=s.$ Equality $t=k-1$ implies \begin{equation} \left[ \Phi^{S}(k,m), \Phi^{T}_-(k,s)\right] \in [U^+,U^-]\subseteq U. \label{bit4} \end{equation} {\it Let $s=n.$} In this case $n$ is black on $S;$ that is, $S$ is black $(k,m)$-regular. We have $\varepsilon ^-\otimes \varepsilon ^0\otimes {\rm id} (U)\subseteq U^+.$ Hence if $m\neq s=n,$ Lemma \ref{xic} allows us to find in $U^+$ a nonzero element of degree $[n+1:m].$ In particular $[n+1:m]\in \Sigma (U^+).$ Hence $[k:m]=[k:n]+[1+n:m]$ is a decomposition of a simple $U^+$-root in $\Sigma (U^+).$ A contradiction that implies $m=n=s.$ {\it Let, further, $s<n.$} If $S$ is white $(k,m)$-regular, or if $S$ is black $(k,m)$-regular and $\psi (s)-1$ is not white, then $S$ is still $(1+s,m)$-regular, while Lemma \ref{xn2} provides a decomposition \begin{equation} \Phi^{S}(k,m)\sim \left[ \Phi^{S}(1+s,m), \Phi^{S}(k,s) \right] . \label{bit5} \end{equation} Let us show that Theorem \ref{str} implies \begin{equation} \left[ \Phi^{S}(1+s,m),\Phi^{T}_-(k,s) \right] =0. \label{bit6} \end{equation} To see this we have to check that $S_{1+s}^mT_k^s$- and $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-schemes are strong. The first one has just one complete column, hence it is both strongly white and strongly black. Suppose firstly that $S$ is white $(k,m)$-regular. Let us show that if $s\neq n$ (even if $s>n$), then the $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme is strongly white. If $a$ is a black point on $T_{\psi (s)}^{*\psi (k)},$ $\psi (s)\leq a<\psi (k),$ then by definition $\rho (a)=\psi (a)-1$ is white on $T_k^s.$ The inequalities $k\leq \rho (a)<s$ imply that the point $\rho (a)$ is intermediate for the minimal fragment (\ref{bal}), recall that now $t=k-1.$ Therefore $\rho (a)$ is black on $S.$ Since $S$ is white $(k,m)$-regular, the point $a=\rho (\rho (a))$ is not black on $S.$ If $a=\psi (k),$ then still $a$ is not black on $S,$ see (\ref{grab1}). Thus the $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme has no intermediate complete black-black columns. 
Since $s\neq n,$ we have $\psi (s)\neq 1+s.$ Hence the first from the left column is incomplete. If there are at least two complete columns, then $m\geq \psi (s).$ In this case the first from the left complete column has the label $a=\psi (s)-1=\rho (s).$ Since $s$ is black on $S,$ and $S$ is white $(k,m)$-regular, the point $\rho (s)$ is white on $S.$ Thus the first from the left complete column is white-white one, and $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme is strongly white. Similarly we shall show that if $S$ is black $(k,m)$-regular and $\psi (s)-1$ is not white, $s<n,$ then the $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme is strongly black. If $a$ is a white point on $T_{\psi (s)}^{*\psi (k)},$ $\psi (s)\leq a<\psi (k),$ then by definition $\rho (a)=\psi (a)-1$ is black on $T_k^s,$ and hence it is white on $S.$ Since $S$ is black $(k,m)$-regular, the point $a=\rho (\rho (a))$ is not white on $S.$ Thus the $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme has no complete white-white columns (recall that now $\psi (s)-1$ is not white on $S,$ hence the column $\psi (s)-1$ is not white-white). The last column is incomplete, for $\psi (k)\neq m.$ If there are at least two complete columns; that is, $m\geq \psi (s),$ then the last complete column is labeled by $m$ or by $\psi (k).$ In the former case $\psi (m)-1$ is black on $S,$ see (\ref{grab2}). Hence, as an intermediate point for (\ref{bal}), it is white on $T.$ Therefore $m=\psi (\psi (m)-1)-1$ is black on $T^*.$ It is still black on $T_{\psi (s)}^{*\psi (k)},$ for $m\neq \psi (s)-1.$ In the latter case $\psi (k)$ is black on $S,$ see (\ref{grab3}). Hence it is black on $S_{1+s}^m$ too, for $\psi (k)\neq s$ (recall that now $k\leq s<n$). Thus the $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme is strongly black. This completes the proof of (\ref{bit6}). Now we show how (\ref{bit}) with $t=k-1$ and (\ref{bit4}--\ref{bit6}) imply $s=m.$ Applying first (\ref{jak3}), and then (\ref{cuq3}) due to (\ref{bit}) we see that the left hand side of (\ref{bit4}) is proportional to $\Phi^{S}(1+s,m),$ in which case $\chi ^{1+s\rightarrow m}(h_{k\rightarrow s})$ $=\mu _k^{m,s}\neq 1,$ with only one exception being $s=n,$ see (\ref{mu2}--\ref{mu4}). Thus $\Phi^{S}(1+s,m)\in U\cap U_q^+(\mathfrak{so}_{2n+1})=U^+;$ that is, $[1+s:m]$ is an $U^+$-root. According to Lemma \ref{sig} we have $[k:s]\in \Sigma (U^S(k,m))\subseteq \Sigma (U^+).$ This implies $s=m,$ for otherwise we have a forbidden decomposition of a simple $U^+$-root, $[k:m]=[k:s]+[1+s:m].$ If $S$ is black $(k,m)$-regular and $\psi (s)-1$ is white on $S,$ then we may not use (\ref{bit5}). However in this case $\Phi^{S}(k,n)\in U^S(k,m)\subseteq U^+,$ and certainly $S$ is white $(k,n)$-regular. Hence instead of (\ref{bit5}) we may consider the decomposition $\Phi^{S}(k,n)\sim \left[ \Phi^{S}(1+s,n), \Phi^{S}(k,s)\right] ,$ while instead of (\ref{bit6}) use $\left[ \Phi^{S}(1+s,n),\Phi^{T}_-(k,s) \right] =0,$ which is valid due to Lemma \ref{suu}. Hence we get $[1+s:n]\in \Sigma (U^+),$ for now $s<n.$ This also provides a forbidden decomposition, $[k:m]=[k:s]+[1+s:n]$ $+[n+1:\psi (s)-1]+[\psi (s):m],$ unless $s=m.$ Here $[n+1:\psi (s)-1]=[1+s:n],$ while $[\psi (s):m]\in \Sigma (U^+)$ due to Lemma \ref{sig}. Thus in all cases $s=m.$ Due to the mirror symmetry we have also $s=j;$ that is, the $ST$-scheme has the form (\ref{gra3}). 
This contradiction completes the case ``$s\leq n.$" \underline{Let, then, $n\leq t.$} By Lemma \ref{bal2} the $S^*T^*$-scheme also contains a fragment (\ref{bal}) with $t\leftarrow \psi (s)-1,$ $s\leftarrow \psi (t)-1.$ Since $n\leq t$ implies $\psi (t)-1\leq n,$ one may apply already considered case to the $S^*T^*$-scheme. \underline{Let, next, $t<n<s.$} In this case the $n$th column, as an intermediate one, is either white-black or black-white. Since the color of the point $n$ defines the color of regularity, $S$ and $T$ have different color of regularity. For the sake of definiteness, we assume that $S$ is white $(k,m)$-regular, while $T$ is black $(i,j)$-regular (otherwise one may change the roles of $S$ and $T$ considering the mirror generators). If $\psi (t)-1$ is a black point on the scheme $S,$ then on the $ST^*$-scheme we have a new fragment of the form (\ref{bal}) with $t\leftarrow n,$ $s\leftarrow \psi (t)-1,$ for the color of $\rho (t)=\psi (t)-1$ on the scheme $T^*$ is also black. Certainly $\psi (t)-1-n=n-t<s-t,$ for $n<s;$ that is, we have found a lesser fragment. Hence $\psi (t)-1$ is not black on the scheme $S.$ Lemma \ref{sig1} implies $\Phi^{S}(1+t,s)\in U^+,$ while Lemma \ref{si} shows that $S$ is white $(1+t,s)$-regular. In particular (\ref{bit}) is valid. Moreover $S\cup \{ t \}$ is still white $(k,m)$-regular, hence we have decomposition (\ref{desc1}). In perfect analogy $\psi (s)-1$ is not white on the scheme $T.$ Hence Lemma \ref{sig2} implies $\Phi^{T}_-(1+t,s)\in U^-,$ and we have decomposition (\ref{desc2}) of $\Phi^{T}_-(i,j).$ By definition of a white regular set the point $\psi (k-1)-1=\psi (k)$ is not black on the scheme $S,$ see (\ref{grab}), (\ref{grab1}). Hence Lemma \ref{sig1} implies $\Phi^{S}(k,s)\in U^+.$ Therefore (\ref{bit1}) is still valid. Lemma \ref{xn1} implies decomposition (\ref{bit2}), for $\psi (t)-1$ is not black on the scheme $S.$ Let us show that Theorem \ref{str} implies (\ref{bit3}). Indeed, the $S_k^{t}T_{1+t}^s$-scheme has just one complete column, hence it is strongly white (and of course it is strongly black too). Let us check the $S_k^{t}T_{\psi (s)}^{*\psi (t)-1}$-scheme. If $a$ is a black point on $S_k^t,$ $k\leq a<t,$ then $\psi (a)-1$ is not black on $S,$ for $(a, \psi (a)-1)$ is a column of the shifted scheme of white $(k,m)$-regular set $S.$ At the same time if $\psi (s)\leq a<\psi (t)-1,$ then $s>\psi (a)-1>t.$ In particular $\psi (a)-1$ appears on the scheme $S,$ and it is a white point on $S$. Further, $\psi (a)-1$ is an intermediate point on the minimal fragment (\ref{bal}), hence it is black on the scheme $T.$ Therefore $\psi (\psi (a)-1)-1=a$ is a white point on $T^*.$ Since $a\neq \psi (t)-1$ yet, it is a white point on $T_{\psi (s)}^{*\psi (t)-1}$ as well. Thus the $S_k^{t}T_{\psi (s)}^{*\psi (t)-1}$-scheme has no intermediate complete black-black columns. Consider the last column, $a=t.$ Since $T$ is black $(i,j)$-regular, and $(t,\psi (t)-1)$ is a column of the shifted $T$-scheme, the point $\psi (t)-1$ is not white on $T.$ Therefore $t=\psi (\psi (t)-1)-1$ is not black on $T^*.$ It is neither black on $T_{\psi (s)}^{*\psi (t)-1},$ for $t=\psi (t)-1$ implies $t=n,$ while now $t<n.$ Let $b$ be a label of the first from the left complete column of the $S_k^{t}T_{\psi (s)}^{*\psi (t)-1}$-scheme, $b=\max \{ k-1,\psi (s)-1\} .$ In this case $k-1\neq \psi (s)-1,$ for $\psi (k)$ is not black on $S,$ see (\ref{grab1}). In particular the first from the left column is incomplete. 
If $k<\psi (s),$ $b=\psi (s)-1,$ then $(b,s)$ is a column of the shifted $S$-scheme. Hence $b$ is white on $S.$ It is still white on $S_k^t,$ for $\psi (s)-1$ is not white on $T;$ in particular, $b\neq t.$ Thus, the first from the left complete column on the $S_k^{t}T_{\psi (s)}^{*\psi (t)-1}$-scheme is a white-white one. If $k>\psi (s),$ $b=k-1,$ then due to (\ref{grab2}) the point $\psi (k)$ is white on $S.$ We have $t<n\leq \psi (k)<s;$ that is, $\psi (k)$ is an intermediate point of the fragment (\ref{bal}). Hence $\psi (k)$ is black on $T,$ while $k-1=\psi (\psi (k))-1$ is white on $T^*.$ Thus, the first from the left complete column on the $S_k^{t}T_{\psi (s)}^{*\psi (t)-1}$-scheme is still a white-white one. This proves that the $S_k^{t}T_{\psi (s)}^{*\psi (t)-1}$-scheme is strongly white, and one may apply Theorem \ref{str} to see that (\ref{bit3}) is valid. While considering the case ``$s\leq n$'', we have seen how relations (\ref{bit}--\ref{bit3}) with $\mu _k^{s,t}\neq 1$ imply $t=k-1.$ Here $\mu _k^{s,t}\neq 1$ according to (\ref{mu2}--\ref{mu4}), for $t\neq n,$ and $s,$ being a black point on $S,$ is not equal to $\psi (k).$ Thus $t=k-1.$ Consider the $T^*S^*$-scheme that corresponds to the mirror generators. This scheme contains a fragment (\ref{bal}) with $t\leftarrow \psi (s)-1,$ $s\leftarrow \psi (t)-1.$ In this case $T^*$ is white $(\psi (j),\psi (i))$-regular. Therefore we may apply the already proved equality ``$t=k-1$'' to this situation. We get $\psi (s)-1=\psi (j)-1;$ that is, $j=s.$ Further, relations (\ref{bit4}) and (\ref{bit5}) are valid. While considering the case ``$s\leq n$'', we have seen that if $t=k-1,$ then the $S_{1+s}^mT_{\psi (s)}^{*\psi (k)}$-scheme is strongly white even if $s>n.$ Hence Theorem \ref{str} implies (\ref{bit6}). At the same time we know that relations (\ref{bit4}--\ref{bit6}) imply $s=m.$ Applying this result to the $T^*S^*$-scheme that corresponds to the mirror generators we have $\psi (k)=\psi (i);$ that is, $k=i=1+t,$ $m=j=s.$ Thus, the $ST$-scheme has the form (\ref{gra3}). This contradiction completes the proof. \end{document}
\begin{document} \markboth{\centerline{E.~SEVOST'YANOV}}{\centerline{ON GLOBAL BEHAVIOR OF MAPPINGS WITH INTEGRAL CONSTRAINTS...}} \def\cc{\setcounter{equation}{0} \setcounter{figure}{0}\setcounter{table}{0}} \overfullrule=0pt \author{E.~SEVOST'YANOV} \title{ {\bf ON GLOBAL BEHAVIOR OF MAPPINGS WITH INTEGRAL CONSTRAINTS}} \date{\today} \maketitle \begin{abstract} This article is devoted to the study of mappings with branch points whose characteristics satisfy integral-type constraints. We prove theorems concerning their local and global behavior. In particular, we establish the equicontinuity of families of such mappings inside their domain of definition and, under additional conditions, the equicontinuity of these families in its closure. \end{abstract} {\bf 2010 Mathematics Subject Classification: Primary 30C65; Secondary 31A15, 31B25} \section{Introduction} This article is devoted to the study of mappings satisfying upper bounds for the distortion of the modulus of families of paths, see, for example, \cite{Cr}, \cite{GRY}, \cite{MRV$_1$}--\cite{MRV$_2$}, \cite{MRSY} and \cite{RSY}. In particular, here we are dealing with mappings whose characteristics satisfy the so-called conditions of integral type, see, for example, \cite{RSY}, \cite{RS} and \cite{Sev$_3$}. The main purpose of the present manuscript is to study the local and global behavior of mappings having branch points whose characteristics are bounded only on average. It is worth noting some of the previous results in this direction. In particular, in~\cite{RS}, homeomorphisms with similar conditions were investigated, and in~\cite{Sev$_3$}, mappings with branch points having a general characteristic $Q$ were studied. Unfortunately, the most general case, when the mappings are not homeomorphisms and also do not have a common majorant, has been overlooked. Note that the most interesting applications related to the study of the Beltrami equation and the Dirichlet problem for it are associated with the absence of a general majorant for the maximal complex characteristics (see, e.g., \cite{Dyb} and \cite{L}). Let us move on to definitions and formulations of results. Given $p\geqslant 1,$ $M_p$ denotes the $p$-modulus of a family of paths, and the element $dm(x)$ corresponds to the Lebesgue measure in ${\Bbb R}^n,$ $n\geqslant 2,$ see~\cite{Va}. In what follows, we usually write $M(\Gamma)$ instead of $M_n(\Gamma).$ For the sets $A, B\subset{\Bbb R}^n$ we set, as usual, $${\rm diam}\,A=\sup\limits_{x, y\in A}|x-y|\,,\quad {\rm dist}\,(A, B)=\inf\limits_{x\in A, y\in B}|x-y|\,.$$ Sometimes we also write $d(A)$ instead of ${\rm diam}\,A$ and $d(A, B)$ instead of ${\rm dist\,}(A, B),$ if no misunderstanding is possible. For given sets $E$ and $F$ and a given domain $D$ in $\overline{{\Bbb R}^n}={\Bbb R}^n\cup \{\infty\},$ we denote by $\Gamma(E, F, D)$ the family of all paths $\gamma:[0, 1]\rightarrow \overline{{\Bbb R}^n}$ joining $E$ and $F$ in $D,$ that is, $\gamma(0)\in E,$ $\gamma(1)\in F$ and $\gamma(t)\in D$ for all $t\in (0, 1).$ Everywhere below, unless otherwise stated, the boundary and the closure of a set are understood in the sense of the extended Euclidean space $\overline{{\Bbb R}^n}.$ Let $x_0\in\overline{D},$ $x_0\ne\infty,$ $$S(x_0,r) = \{ x\,\in\,{\Bbb R}^n : |x-x_0|=r\}\,,\quad S_i=S(x_0, r_i)\,,\quad i=1,2\,,$$ \begin{equation}\label{eq6} A=A(x_0, r_1, r_2)=\{ x\,\in\,{\Bbb R}^n : r_1<|x-x_0|<r_2\}\,.
\end{equation} Everywhere below, unless otherwise stated, the closure $\overline{A}$ and the boundary $\partial A $ of the set $A$ are understood in the topology of the space $\overline{{\Bbb R}^n}={\Bbb R}^n\cup\{\infty\}.$ Let $Q:{\Bbb R}^n\rightarrow [0, \infty]$ be a Lebesgue measurable function satisfying the condition $Q(x)\equiv 0$ for $x\in{\Bbb R}^n\setminus D.$ Given $p\geqslant 1,$ a mapping $f:D\rightarrow \overline{{\Bbb R}^n}$ is called a {\it ring $Q$-mapping at the point $x_0\in \overline{D}\setminus \{\infty\}$ with respect to $p$-modulus}, if the condition \begin{equation} \label{eq2*!} M_p(f(\Gamma(S_1, S_2, D)))\leqslant \int\limits_{A\cap D} Q(x)\cdot \eta^p (|x-x_0|)\, dm(x) \end{equation} holds for all $0<r_1<r_2<d_0:=\sup\limits_{x\in D}|x-x_0|$ and all Lebesgue measurable functions $\eta:(r_1, r_2)\rightarrow [0, \infty]$ such that \begin{equation}\label{eq8B} \int\limits_{r_1}^{r_2}\eta(r)\,dr\geqslant 1\,. \end{equation} Inequalities of the form~(\ref{eq2*!}) were established for many well-known classes of mappings. In particular, for quasiconformal mappings and mappings with bounded distortion, they hold for $p=n$ and some $Q(x)\equiv K=const$ (see, for example, \cite[Theorem~7.1]{MRV$_1$} and \cite[Definition~13.1]{Va}). Such inequalities also hold for many mappings with unbounded characteristic, in particular, for homeomorphisms belonging to the class $W_{\rm loc}^{1, p},$ $p>n-1,$ whose inner dilatation of order $\alpha:=\frac{p}{p-n+1}$ is locally integrable (see, for example, \cite[Theorems~8.1, 8.5]{MRSY} and \cite[Corollary~2]{Sal$_1$}, \cite[Theorem~9, Lemma~5]{Sal$_2$}). The concept of a set of capacity zero, used below, can be found in~\cite[Section~2.12]{MRV$_2$}; its definition is therefore omitted here. A mapping $f:D\rightarrow {\Bbb R}^n$ is called {\it discrete} if the preimage $\{f^{\,-1}(y)\}$ of each point $y\,\in\,{\Bbb R}^n$ consists of isolated points, and {\it open} if the image of any open set $U\subset D $ is an open set in ${\Bbb R}^n.$ Let us formulate the main results of this manuscript. In what follows, $h$ denotes the so-called chordal metric defined by the equalities \begin{equation}\label{eq1E} h(x,y)=\frac{|x-y|}{\sqrt{1+{|x|}^2} \sqrt{1+{|y|}^2}}\,,\quad x\ne \infty\ne y\,, \quad\,h(x,\infty)=\frac{1}{\sqrt{1+{|x|}^2}}\,. \end{equation} For a given set $E\subset\overline{{\Bbb R}^n},$ we set \begin{equation}\label{eq9C} h(E):=\sup\limits_{x,y\in E}h(x, y)\,. \end{equation} The quantity $h(E)$ in~(\ref{eq9C}) is called the {\it chordal diameter} of the set $E.$ For given sets $A, B\subset \overline{{\Bbb R}^n},$ we put $h(A, B)=\inf\limits_{x\in A, y\in B}h(x, y),$ where $h$ is the chordal metric defined in~(\ref{eq1E}). Given a domain $D\subset {\Bbb R}^n,$ a number $M_0>0,$ a set $E\subset \overline{{\Bbb R}^n}$ and a strictly increasing function $\Phi\colon\overline{{\Bbb R}^{+}}\rightarrow\overline{{\Bbb R}^{+}},$ let us denote by ${\frak F}^{\Phi}_{M_0, E}(D)$ the family of all open discrete mappings $f:D\rightarrow \overline{{\Bbb R}^n}\setminus E$ for which there exists a function $Q=Q_f(x):D\rightarrow [0, \infty]$ such that~(\ref{eq2*!})--(\ref{eq8B}) hold for any $x_0\in D$ with $p=n$ and, in addition, \begin{equation}\label{eq2!!A} \int\limits_D\Phi(Q(x))\frac{dm(x)}{\left(1+|x|^2\right)^n}\ \leqslant M_0<\infty\,. \end{equation} An analogue of the following statement was established for homeomorphisms in~\cite[Theorem~4.1]{RS}, and for mappings whose corresponding function $Q$ is fixed, in~\cite[Theorem~1]{Sev$_3$}.
However, let us note that it is in the form given below that the indicated statement seems to be the most interesting from the point of view of applications to the problem of compactness of solutions of the Beltrami equations and the Dirichlet problem (see, for example, \cite[Theorem~2]{Dyb} and \cite[Theorem~1]{L}). \begin{theorem}\label{th1} {\sl\, Let $D$ be a domain in ${\Bbb R}^n,$ $n\geqslant 2,$ and let ${\rm cap}\,E>0.$ If \begin{equation}\label{eq3!A} \int\limits_{\delta_0}^{\infty} \frac{d\tau}{\tau\left[\Phi^{-1}(\tau)\right]^{\frac{1}{n-1}}}= \infty \end{equation} holds for some $\delta_0>\tau_0:=\Phi(0),$ then ${\frak F}^{\Phi}_{M_0, E}(D)$ is equicontinuous in $D.$ } \end{theorem} Note that the analogue of Theorem~\ref{th1} is much simpler and more elegant in the case $p\in(n-1, n).$ Given $p\geqslant 1,$ a domain $D\subset {\Bbb R}^n,$ a number $M_0>0$ and a strictly increasing function $\Phi\colon\overline{{\Bbb R}^{+}}\rightarrow\overline{{\Bbb R}^{+}},$ let us denote by ${\frak F}^{\Phi}_{M_0, p}(D)$ the family of all open discrete mappings $f:D\rightarrow {\Bbb R}^n$ for which there exists a function $Q=Q_f(x):D\rightarrow [0, \infty]$ such that~(\ref{eq2*!})--(\ref{eq8B}) hold for any $x_0\in D$ and, in addition, (\ref{eq2!!A}) holds. The following statement is true. \begin{theorem}\label{th2} {\sl\, Let $D$ be a domain in ${\Bbb R}^n,$ $n\geqslant 2,$ and let $p\in (n-1, n).$ If~(\ref{eq3!A}) holds for some $\delta_0>\tau_0:=\Phi(0),$ then ${\frak F}^{\Phi}_{M_0, p}(D)$ is equicontinuous in $D.$ } \end{theorem} \begin{remark} Let $(X, d)$ and $\left(X^{{\,\prime}}, d^{\,\prime}\right)$ be metric spaces with distances $d$ and $d^{\,\prime},$ respectively. A family $\frak{F}$ of mappings $f:X\rightarrow X^{\,\prime}$ is said to be {\it equicontinuous at a point} $x_0\in X,$ if for every $\varepsilon>0$ there is $\delta=\delta(\varepsilon, x_0)>0$ such that $d^{\,\prime}(f(x), f(x_0))<\varepsilon$ for all $f\in \frak{F}$ and $x\in X$ with $d(x , x_0)<\delta$. The family $\frak{F}$ is {\it equicontinuous} if $\frak{F}$ is equicontinuous at every point $x_0\in X.$ In Theorem~\ref{th1}, the equicontinuity of the corresponding family of mappings should be understood in the sense of mappings of the metric spaces $(X, d)$ and $\left(X^{\,\prime}, d^{\,\prime} \right),$ where $X$ is the domain $D,$ $d$ is the Euclidean metric in $D,$ $X^{\,\prime}=\overline{{\Bbb R}^n},$ and $d^{\,\prime}=h$ is the chordal metric defined in~(\ref{eq1E}). At the same time, in Theorem~\ref{th2} the space $X$ remains the same, and the space $X^{\,\prime}$ is the usual Euclidean $n$-dimensional space with the Euclidean metric $d^{\,\prime}.$ \end{remark} A separate research topic is the equicontinuity of families of mappings in the closure of a domain. Results of this kind for fixed characteristics were obtained in some of our papers. In particular, in~\cite{Sev$_4$} we considered the case of fixed domains between which the mappings act, and in the papers~\cite{SevSkv$_1$}--\cite{SevSkv$_2$} we considered the case when the mapped domain can change. It should be noted that theorems on normal families of mappings are especially important in the study of the properties of solutions to the Dirichlet problem for the Beltrami equation (see, for example, \cite{Dyb}). Note that the classical results on the equicontinuity of quasiconformal mappings in the closure of a domain were obtained by N\"{a}kki and Palka, see e.g.~\cite[Theorem~3.3]{NP}. Let us formulate the main results related to this case.
Let $I$ be a fixed set of indices and let $D_i,$ $i\in I,$ be some family of domains. Following~\cite[Sect.~2.4]{NP}, we say that a family of domains $\{D_i\}_{i\in I}$ is {\it equi-uniform with respect to $p$-modulus} if for any $r> 0$ there exists a number $\delta> 0$ such that the inequality \begin{equation}\label{eq17***} M_p(\Gamma(F^{\,*},F, D_i))\geqslant \delta \end{equation} holds for any $i\in I$ and any continua $F, F^{\,*}\subset D_i$ such that $h(F)\geqslant r$ and $h(F^{\,*})\geqslant r.$ It should be noted that the condition of equi-uniformity of the family of domains implies strong accessibility of the boundary of each of them with respect to $p$-modulus (see, for example, \cite[Remark~1]{SevSkv$_1$}). Given $p\geqslant 1,$ a number $\delta>0,$ a domain $D\subset {\Bbb R}^n,$ $n\geqslant 2,$ a continuum $A\subset D$ and a strictly increasing function $\Phi\colon\overline{{\Bbb R}^{+}}\rightarrow\overline{{\Bbb R}^{+}},$ denote by $\frak{F}_{\Phi, A, p, \delta}(D)$ the family of all homeomorphisms $f:D\rightarrow {\Bbb R}^n$ for which there exists a function $Q=Q_f(x):D\rightarrow [0, \infty]$ such that: 1) relations (\ref{eq2*!})--(\ref{eq8B}) hold for any $x_0\in \overline{D},$ 2) the relation (\ref{eq2!!A}) holds and 3) the relations $h(f(A))\geqslant\delta$ and $h(\overline{{\Bbb R}^n}\setminus f(D))\geqslant \delta$ hold. The following statement is true. \begin{theorem}\label{th3} {\sl\, Let $p\in (n-1, n],$ let a domain $D$ be locally connected at any point $x_0\in\partial D$ and let the domains $D_f^{\,\prime}=f(D)$ be equi-uniform with respect to $p$-modulus over all $f\in \frak{F}_{\Phi, A, p, \delta}(D).$ If~(\ref{eq3!A}) holds for some $\delta_0>\tau_0:=\Phi(0),$ then any $f\in\frak{F}_{\Phi, A, p, \delta}(D)$ has a continuous extension to $\overline{D}$ and, besides that, the family $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ of all extended mappings $\overline{f}: \overline{D}\rightarrow \overline{{\Bbb R}^n}$ is equicontinuous in $\overline{D}.$ } \end{theorem} As usual, we use the notation $$C(f, x):=\{y\in \overline{{\Bbb R}^n}:\exists\,x_k\in D: x_k\rightarrow x, f(x_k) \rightarrow y, k\rightarrow\infty\}\,.$$ A mapping $f$ between domains $D$ and $D^{\,\prime}$ is called {\it closed} if $f(E)$ is closed in $D^{\,\prime}$ for any closed set $E\subset D$ (see, e.g., \cite[Section~3]{Vu}). Any open discrete closed mapping is boundary preserving, i.e. $C(f, \partial D)\subset \partial D^{\,\prime},$ where $C(f, \partial D)=\bigcup\limits_{x\in \partial D}C(f, x)$ (see e.g.~\cite[Theorem~3.3]{Vu}). Given $p\geqslant 1,$ a domain $D\subset {\Bbb R}^n,$ a set $E\subset\overline{{\Bbb R}^n},$ a strictly increasing function $\Phi\colon\overline{{\Bbb R}^{+}}\rightarrow\overline{{\Bbb R}^{+}}$ and a number $\delta>0,$ denote by $\frak{R}_{\Phi, \delta, p, E}(D)$ the family of all open discrete and closed mappings $f:D\rightarrow \overline{{\Bbb R}^n}\setminus E$ such that: 1) relations (\ref{eq2*!})--(\ref{eq8B}) hold for any $x_0\in \overline{D},$ 2) the relation (\ref{eq2!!A}) holds and 3) there exists a continuum $K_f\subset D^{\,\prime}_f:=f(D)$ such that $h(K_f)\geqslant \delta$ and $h(f^{\,-1}(K_f), \partial D)\geqslant \delta>0.$ The following statement is true.
\begin{theorem}\label{th4} {\sl\, Let $p\in (n-1, n],$ let a domain $D$ be locally connected at any point $x_0\in\partial D$ and, besides that, let the domains $D_f^{\,\prime}=f(D)$ be equi-uniform with respect to $p$-modulus over all $f\in\frak{R}_{\Phi, \delta, p, E}(D).$ Let ${\rm cap\,}E>0$ for $p=n,$ and let $E$ be an arbitrary closed set for $n-1<p<n.$ If~(\ref{eq3!A}) holds for some $\delta_0>\tau_0:=\Phi(0),$ then any $f\in\frak{R}_{\Phi, \delta, p, E}(D)$ has a continuous extension to $\overline{D}$ and, besides that, the family $\frak{R}_{\Phi, \delta, p, E}(\overline{D})$ of all extended mappings $\overline{f}: \overline{D}\rightarrow \overline{{\Bbb R}^n}$ is equicontinuous in $\overline{D}.$} \end{theorem} \begin{remark}\label{rem3} In Theorems~\ref{th3} and~\ref{th4}, the equicontinuity should be understood in terms of families of mappings between metric spaces~$(X, d)$ and $\left(X^{\,\prime}, d^{\,\prime}\right),$ where $X=\overline{D},$ $d$ is the chordal metric $h,$ $X^{\,\prime}=\overline{{\Bbb R}^n}$ and $d^{\,\prime}$ is the chordal (spherical) metric $h$ as well. \end{remark} Theorems~\ref{th3} and~\ref{th4} admit a natural generalization to the case of complex boundaries, when the maps do not have a continuous extension to points of the boundary of the domain in the usual sense; however, such an extension holds in the sense of the so-called prime ends. Let us recall several important definitions associated with this concept. In what follows, the following notation is used: the set of prime ends corresponding to the domain $D$ is denoted by $E_D,$ and the completion of the domain $D$ by its prime ends is denoted by $\overline{D}_P.$ The definition of prime ends used below corresponds to the definition given in~\cite{IS$_2$} and is therefore omitted here. Consider the following definition, which goes back to N\"akki~\cite{Na$_2$}, see also~\cite{KR}. We say that the boundary of the domain $D$ in ${\Bbb R}^n$ is {\it locally quasiconformal}, if each point $x_0\in\partial D$ has a neighborhood $U$ in ${\Bbb R}^n$, which can be mapped by a quasiconformal mapping $\varphi$ onto the unit ball ${\Bbb B}^n\subset{\Bbb R}^n$ so that $\varphi(\partial D\cap U)$ is the intersection of ${\Bbb B}^n$ with a coordinate hyperplane. For a given set $E\subset {\Bbb R}^n,$ we set $d(E):=\sup\limits_{x, y\in E}|x-y|.$ A sequence of cuts $\sigma_m,$ $m=1,2,\ldots ,$ is called {\it regular,} if $\overline{\sigma_m}\cap\overline{\sigma_{m+1}}=\varnothing$ for $m\in {\Bbb N}$ and, in addition, $d(\sigma_{m})\rightarrow 0$ as $m\rightarrow\infty.$ If the end $K$ contains at least one regular chain, then $K$ will be called {\it regular}. We say that a bounded domain $D$ in ${\Bbb R}^n$ is {\it regular}, if $D$ can be quasiconformally mapped to a domain with a locally quasiconformal boundary whose closure is compact in ${\Bbb R}^n,$ and, besides that, every prime end in $D$ is regular. Note that the space $\overline{D}_P=D\cup E_D$ is metrizable, which can be demonstrated as follows. If $g:D_0\rightarrow D$ is a quasiconformal mapping of a domain $D_0$ with a locally quasiconformal boundary onto some domain $D,$ then for $x, y\in \overline{D}_P$ we put: \begin{equation}\label{eq5} \rho(x, y):=|g^{\,-1}(x)-g^{\,-1}(y)|\,, \end{equation} where the element $g^{\,-1}(x),$ $x\in E_D,$ is to be understood as some (single) boundary point of the domain $D_0.$ The specified boundary point is unique and well-defined by~\cite[Theorem~2.1, Remark~2.1]{IS$_2$}, cf.~\cite[Theorem~4.1]{Na$_2$}.
It is easy to verify that~$\rho$ in~(\ref{eq5}) is a metric on $\overline{D}_P,$ and that the topology on $\overline{D}_P$ defined in this way does not depend on the choice of the mapping $g$ with the indicated property. The analogs of Theorems~\ref{th3} and~\ref{th4} for the case of prime ends are as follows. \begin{theorem}\label{th5} {\sl\, Let $p\in (n-1, n]$ and let $D$ be a regular domain. Assume that the domains $D_f^{\,\prime}=f(D)$ are bounded and equi-uniform with respect to $p$-modulus over all $f\in \frak{F}_{\Phi, A, p, \delta}(D),$ and that they have locally quasiconformal boundaries as well. If~(\ref{eq3!A}) holds for some $\delta_0>\tau_0:=\Phi(0),$ then any $f\in\frak{F}_{\Phi, A, p, \delta}(D)$ has a continuous extension to $\overline{D}_P$ and, besides that, the family $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ of all extended mappings $\overline{f}: \overline{D}_P\rightarrow \overline{{\Bbb R}^n}$ is equicontinuous in $\overline{D}_P.$ } \end{theorem} \begin{theorem}\label{th6} {\sl\, Let $p\in (n-1, n]$ and let $D$ be a regular domain. Assume that the domains $D_f^{\,\prime}=f(D)$ are bounded and equi-uniform with respect to $p$-modulus over all $f\in\frak{R}_{\Phi, \delta, p, E}(D),$ and that they have locally quasiconformal boundaries as well. Let ${\rm cap\,}E>0$ for $p=n,$ and let $E$ be an arbitrary closed set whenever $n-1<p<n.$ If~(\ref{eq3!A}) holds for some $\delta_0>\tau_0:=\Phi(0),$ then any $f\in\frak{R}_{\Phi, \delta, p, E}(D)$ has a continuous extension to $\overline{D}_P$ and, besides that, the family $\frak{R}_{\Phi, \delta, p, E}(\overline{D})$ of all extended mappings $\overline{f}: \overline{D}_P\rightarrow \overline{{\Bbb R}^n}$ is equicontinuous in $\overline{D}_P.$ } \end{theorem} \begin{remark}\label{rem2} In Theorems~\ref{th5} and~\ref{th6}, the equicontinuity should be understood in terms of families of mappings between metric spaces~$(X, d)$ and $\left(X^{\,\prime}, d^{\,\prime}\right),$ where $X=\overline{D}_P,$ $d$ is one of the possible metrics corresponding to the topological space $\overline{D}_P$ (for instance, the metric $\rho$ in~(\ref{eq5})), $X^{\,\prime}=\overline{{\Bbb R}^n}$ and $d^{\,\prime}$ is the chordal (spherical) metric. \end{remark} \section{Auxiliary Lemmas} As in the article \cite{RS}, the key point in the proof of the main statements is the connection between conditions~(\ref{eq2!!A})--(\ref{eq3!A}) and the divergence of an integral of a special form (see, for example, \cite[Lemma~3.1]{RS}). Given a Lebesgue measurable function $Q:{\Bbb R}^n\rightarrow [0, \infty]$ and a point $x_0\in {\Bbb R}^n$ we set \begin{equation}\label{eq10} q_{x_0}(t)=\frac{1}{\omega_{n-1}t^{n-1}} \int\limits_{S(x_0, t)}Q(x)\,d\mathcal{H}^{n-1}\,, \end{equation} where $\mathcal{H}^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure. The following lemma is of particular importance.
\begin{lemma}\label{lem1} {\sl\, Let $1\leqslant p\leqslant n,$ and let $\Phi:[0, \infty]\rightarrow [0, \infty] $ be a strictly increasing convex function such that the relation \begin{equation}\label{eq2} \int\limits_{\delta_0}^{\infty} \frac{d\tau}{\tau\left[\Phi^{-1}(\tau)\right]^{\frac{1}{p-1}}}= \infty \end{equation} holds for some $\delta_0>\tau_0:=\Phi(0).$ Let $\frak{Q}$ be a family of functions $Q:{\Bbb R}^n\rightarrow [0, \infty]$ such that \begin{equation}\label{eq5A} \int\limits_D\Phi(Q(x))\frac{dm(x)}{\left(1+|x|^2\right)^n}\ \leqslant M_0<\infty \end{equation} for some $0<M_0<\infty.$ Then, for any $x_0\in {\Bbb R}^n,$ any $0<r_0<1$ and every $\sigma>0$ there exists $0<r_*=r_*(\sigma, r_0, x_0, \Phi)<r_0$ such that \begin{equation}\label{eq4} \int\limits_{\varepsilon}^{r_0}\frac{dt}{t^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{x_0}(t)}\geqslant \sigma\,,\qquad \varepsilon\in (0, r_*)\,, \end{equation} for any $Q\in \frak{Q}.$ } \end{lemma} \begin{proof} Using the change of variables $t=r/r_0,$ for any $\varepsilon\in (0, r_0)$ we obtain that \begin{equation}\label{eq34} \int\limits_{\varepsilon}^{r_0}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{x_0}(r)} \geqslant \int\limits_{\varepsilon}^{r_0}\frac{dr}{rq^{\frac{1}{p-1}}_{x_0}(r)} =\int\limits_{\varepsilon/r_0}^1\frac{dt}{tq^{\frac{1}{p-1}}_{x_0}(tr_0)} =\int\limits_{\varepsilon/r_0}^1\frac{dt}{t\widetilde{q}^{\frac{1}{p-1}}_{0}(t)}\,, \end{equation} where $\widetilde{q}_0(t)$ is the integral mean of the function $\widetilde{Q}(x):=Q(r_0x+x_0)$ over the sphere $|x|=t,$ cf.~(\ref{eq10}). Then, according to~\cite[Lemma~3.1]{RS}, \begin{equation}\label{eq35} \int\limits_{\varepsilon/r_0}^1\frac{dt}{t\widetilde{q}^{\frac{1}{p-1}}_{0}(t)}\geqslant \frac{1}{n}\int\limits_{eM_*\left(\varepsilon/r_0\right)}^{\frac{M_*\left(\varepsilon/r_0\right) r_0^n}{\varepsilon^n}}\frac{d\tau} {\tau\left[\Phi^{-1}(\tau)\right]^{\frac{1}{p-1}}}\,, \end{equation} where $$M_*\left(\varepsilon/r_0\right)= \frac{1}{\Omega_n\left(1-\left(\varepsilon/r_0\right)^n\right)} \int\limits_{A\left(0, \varepsilon/r_0, 1\right)} \Phi\left(Q(r_0x +x_0)\right)\,dm(x)=$$ $$= \frac{1}{\Omega_n\left(r_0^n-\varepsilon^n\right)} \int\limits_{A\left(x_0, \varepsilon, r_0\right)} \Phi\left(Q(x)\right)\,dm(x)$$ and~$A(x_0, \varepsilon, r_0)$ is defined in~(\ref{eq6}) for $r_1:=\varepsilon$ and $r_2:=r_0.$ Observe that $|x|\leqslant |x-x_0|+ |x_0|\leqslant r_0+|x_0|$ for any~$x\in A(x_0, \varepsilon, r_0).$ Thus $$M_*\left(\varepsilon/r_0\right)\leqslant \frac{\beta(x_0)} {\Omega_n\left(r_0^n-\varepsilon^n\right)}\int\limits_{A(x_0, \varepsilon, r_0)} \Phi(Q(x))\frac{dm(x)}{\left(1+|x|^2\right)^n}\,,$$ where $\beta(x_0)=\left(1+(r_0+|x_0|)^2\right)^n.$ Therefore, $$M_*\left(\varepsilon/r_0\right)\leqslant \frac{2\beta(x_0)}{\Omega_n r^n_0}M_0$$ for $\varepsilon\leqslant r_0 /\sqrt[n]{2},$ where $M_0$ is the constant in~(\ref{eq5A}). Observe that $$M_*\left(\varepsilon/r_0\right)>\Phi(0)>0\,,$$ because $\Phi$ is increasing. Now, by~(\ref{eq34}) and~(\ref{eq35}) we obtain that \begin{equation}\label{eq12}\int\limits_{\varepsilon}^{r_0}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{x_0}(r)} \geqslant \frac{1}{n}\int\limits_{\frac{2\beta(x_0)M_0e}{\Omega_nr^n_0}} ^{\frac{\Phi(0)r^n_0}{\varepsilon^n}}\frac{d\tau} {\tau\left[\Phi^{\,-1}(\tau)\right]^{\frac{1}{p-1}}}\,.
\end{equation} The desired conclusion follows from~(\ref{eq12}) and~(\ref{eq2}).~$\Box$ \end{proof} Recall that a pair $E=\left(A,\,C\right),$ where $A$ is an open set in ${\Bbb R}^n$ and $C$ is a compact subset of $A,$ is called a {\it condenser} in ${\Bbb R}^n.$ Given $p\geqslant 1,$ the quantity $${\rm cap}_p\,E={\rm cap}_p\,(A,\,C)=\inf\limits_{u\,\in\,W_0\left(E\right) }\quad\int\limits_{A}\,|\nabla u|^p\,\,dm(x)\,, $$ where $W_0(E)=W_0\left(A,\,C\right)$ is the family of all nonnegative functions $u:A\rightarrow {\Bbb R}$ that are absolutely continuous on lines (ACL), have compact support in $A$ and satisfy $u(x)\geqslant 1$ on $C,$ is called the {\it $p$-capacity} of the condenser $E.$ We write ${\rm cap}\,E$ for ${\rm cap}_n\,E.$ We also need the following statement given in \cite[Proposition~II.10.2]{Ri}. \begin{proposition}\label{pr3} {\sl\, Let $E=(A,\,C)$ be a condenser in ${\Bbb R}^n$ and let $\Gamma_E$ be the family of all paths of the form $\gamma:[a,\,b)\rightarrow A$ with $\gamma(a)\in C$ and $|\gamma|\cap(A\setminus F)\ne\varnothing$ for every compact set $F\subset A.$ Then ${\rm cap}\,E= M(\Gamma_E).$} \end{proposition} In what follows, we set $a/\infty=0$ for $a\ne\infty,$ $a/0=\infty $ for $a>0$ and $0\cdot\infty =0.$ One of the most important statements allowing us to connect the study of mappings in~(\ref{eq2*!}) with the conditions~(\ref{eq2!!A})--(\ref{eq3!A}) is the following proposition. The principal points of its proof were indicated in the course of establishing Lemma~1 in~\cite{SalSev$_2$}; however, for the sake of completeness, we give a full proof here. \begin{proposition}\label{pr2}{\sl\, Let $D$ be a domain in~${\Bbb R}^n$, $n\geqslant 2,$ let $x_0\in \overline{D}\setminus\{\infty\},$ let $Q:D\rightarrow [0, \infty]$ be a Lebesgue measurable function and let $f:D\rightarrow \overline{{\Bbb R}^n}$ be an open discrete mapping satisfying relations~(\ref{eq2*!})--(\ref{eq8B}) at the point $x_0.$ If $0<r_1<r_2<\sup\limits_{x\in D}|x-x_0|,$ then \begin{equation}\label{eq3B} M_p(f(\Gamma(S(x_0, r_1), S(x_0, r_2), D)))\leqslant \frac{\omega_{n-1}}{I^{p-1}}\,, \end{equation} where \begin{equation}\label{eq9} I=I(x_0,r_1,r_2)=\int\limits_{r_1}^{r_2}\ \frac{dr}{r^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(r)}\,. \end{equation} If, in addition, $x_0\in D,$ $0<r_1<r_2<r_0={\rm dist}\,(x_0, \partial D)$ and $E=\left(B(x_0, r_2), \overline{B(x_0, r_1)}\right),$ then \begin{equation}\label{eq2A} {\rm cap}_p\, f(E)\leqslant\ \frac{\omega_{n-1}}{I^{p-1}}\,, \end{equation} where $f(E)=\left(f\left(B(x_0, r_2)\right), f\left(\overline{B(x_0, r_1)}\right)\right).$ } \end{proposition} \begin{proof} We may assume that $I \ne 0,$ since (\ref{eq3B}) and (\ref{eq2A}) are obvious in this case. We may also assume that $I\ne \infty;$ otherwise, we may consider $Q(x)+\delta$ instead of $Q(x)$ in (\ref{eq3B}) and (\ref{eq2A}) and then pass to the limit as $\delta\rightarrow 0.$ So let $I\ne\infty.$ Let us first prove relation~(\ref{eq3B}) for the case $x_0\in \overline{D}\setminus\{\infty\}.$ Then $q_{x_0}(r)\ne 0$ for a.e. $r\in(r_1,r_2).$ Set $$ \psi(t)= \left \{\begin{array}{rr} 1/[t^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(t)], & t\in (r_1,r_2)\ , \\ 0, & t\notin (r_1,r_2)\ . \end{array} \right. $$ In this case, by Fubini's theorem, \begin{equation}\label{eq3} \int\limits_{A} Q(x)\cdot\psi^p(|x-x_0|)\,dm(x)=\omega_{n-1} I\,, \end{equation} where $A=A(x_0, r_1, r_2)$ is defined in~(\ref{eq6}).
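For the reader's convenience, here is a sketch of the computation behind~(\ref{eq3}); it relies only on Fubini's theorem, the definition of $\psi$ above and the definition of $q_{x_0}$ in~(\ref{eq10}), with $Q$ extended by zero outside $D$ and under the assumption $p>1$ (which holds in all applications below): $$\int\limits_{A} Q(x)\cdot\psi^p(|x-x_0|)\,dm(x) =\int\limits_{r_1}^{r_2}\psi^p(r)\left(\,\int\limits_{S(x_0,r)}Q(x)\,d\mathcal{H}^{n-1}\right)dr =\omega_{n-1}\int\limits_{r_1}^{r_2}\frac{r^{\,n-1}\,q_{x_0}(r)}{r^{\frac{p(n-1)}{p-1}}\,q_{x_0}^{\frac{p}{p-1}}(r)}\,dr =\omega_{n-1}\int\limits_{r_1}^{r_2}\frac{dr}{r^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(r)}=\omega_{n-1}\,I\,.$$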
Observe that the function $\eta_1(t)=\psi(t)/I,$ $t\in (r_1,r_2),$ satisfies~(\ref{eq8B}), because $\int\limits_{r_1}^{r_2}\eta_1(t)\,dt=1.$ Now, by the definition of $f$ in~(\ref{eq2*!}), \begin{equation}\label{eq5G} M_p(f(\Gamma(S(x_0, r_1), S(x_0, r_2), D)))\leqslant\int\limits_A Q(x)\cdot {\eta_{1}}^p (|x-x_0|)\,dm(x)= \frac{\omega_{n-1}}{I^{p-1}}\,. \end{equation} The first part of Proposition~\ref{pr2} is established. Let us prove the second part, namely, relation~(\ref{eq2A}). Let $\Gamma_E$ and $\Gamma_{f(E)}$ be families of paths in the sense of the notation of Proposition~\ref{pr3}. By this proposition \begin{equation}\label{eq6*!} {\rm cap}_p\,f(E)={\rm cap}_p (f(B(x_0, r_2)), f(\overline{B(x_0 ,r_1)}))=M_p(\Gamma_{f(E)})\,. \end{equation} Let $\Gamma^{*}$ be the family of all maximal $f$-liftings of $\Gamma_{f(E)}$ starting in $\overline{B(x_0, r_1)}.$ Arguing similarly to the proof of Lemma~3.1 in~\cite{Sev$_1$}, one can show that $\Gamma^{*}\subset \Gamma_E.$ Observe that $\Gamma_{f(E)}>f(\Gamma^{*}),$ and $\Gamma_E> \Gamma(S(x_0, r_2-\delta), S(x_0, r_1), D)$ for sufficiently small $\delta>0.$ By~(\ref{eq5G}), we obtain that $$ M_p(\Gamma_{f(E)})\leqslant M_p(f(\Gamma^{*}))\leqslant M_p(f(\Gamma_E))\leqslant $$ \begin{equation}\label{eq6B} \leqslant M_p(f(\Gamma(S(x_0, r_1), S(x_0, r_2-\delta), A(x_0, r_1, r_2-\delta))))\leqslant \frac{\omega_{n-1}}{\left(\int\limits_{r_1}^{r_2-\delta}\frac{dt}{ t^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(t)}\right)^{p-1}}\,. \end{equation} Observe that the function $\widetilde{\psi}(t):=\psi|_{(r_1, r_2)}(t) = \frac{1}{t^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(t)}$ is integrable on $(r_1,r_2),$ because $I\ne \infty.$ Hence, by the absolute continuity of the integral, we obtain that \begin{equation}\label{eqA29}\int\limits_{r_1}^{r_2-\delta}\frac{dt}{t^{\frac{n-1}{p-1}}q_{x_0} ^{\frac{1}{p-1}}(t)}\rightarrow \int\limits_{r_1}^{r_2}\frac{dt}{t^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(t)}\end{equation} as $\delta\rightarrow 0.$ By~(\ref{eq6B}) and (\ref{eqA29}), we obtain that \begin{equation}\label{eq7} M_p(\Gamma_{f(E)})\leqslant \frac{\omega_{n-1}}{\left(\int\limits_{r_1}^{r_2}\frac{dt} {t^{\frac{n-1}{p-1}}q_{x_0}^{\frac{1}{p-1}}(t)}\right)^{p-1}}\,. \end{equation} Combining~(\ref{eq6*!}) and~(\ref{eq7}), we obtain~(\ref{eq2A}).~$\Box$ \end{proof} The next lemma contains an application of the previous Lemma~\ref{lem1} to mapping theory. \begin{lemma}\label{lem3} {\sl\, Let $D$ be a domain in ${\Bbb R}^n,$ let $1\leqslant p\leqslant n,$ let $\Phi:[0, \infty]\rightarrow [0, \infty]$ be a strictly increasing convex function and let $x_0\in D.$ Denote by $\frak{R}_{\Phi, p}(D)$ the family of all open discrete mappings $f$ for which there exists a Lebesgue measurable function $Q=Q_f(x):{\Bbb R}^n\rightarrow [0, \infty],$ $Q(x)\equiv 0$ for $x\in {\Bbb R}^n\setminus D,$ satisfying~(\ref{eq2*!})--(\ref{eq8B}) for any $x_0\in D$ and such that, in addition, (\ref{eq5A}) holds for some $0<M_0<\infty.$ Let $0<r_1<r_2<d_0={\rm dist}\,(x_0, \partial D),$ and let $E=(B(x_0, r_2), \overline{B(x_0, r_1)})$ be a condenser.
If the relation~(\ref{eq2}) holds for some $\delta_0>\tau_0:=\Phi(0),$ then $${\rm cap}_p\,f(E)\rightarrow 0$$ as $r_1\rightarrow 0$ uniformly over $f\in \frak{R}_{\Phi, p}(D).$ } \end{lemma} \begin{proof} By Proposition~\ref{pr2} \begin{equation}\label{eq6A} {\rm cap}_p\,f(E)\leqslant \frac{\omega_{n-1}}{I^{p-1}}\,, \end{equation} where $\omega_{n-1}$ denotes the area of the unit sphere ${\Bbb S}^{n-1}:=S(0, 1)$ in ${\Bbb R}^n,$ $I:=\int\limits_{r_1}^{r_2}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{x_0}(r)}$ and $q_{x_0}$ is defined in~(\ref{eq10}). The rest of the statement follows from Lemma~\ref{lem1}.~$\Box$ \end{proof} \section{Proof of Theorems~\ref{th1} and~\ref{th2}} The following statement was proved for $p=n$ in \cite[Lemma~3.11]{MRV$_2$} (see also \cite[Lemma~2.6, Ch.~III]{Ri}). \begin{proposition}\label{pr1} {\sl\, Let $F$ be a compact proper subset of $\overline{{\Bbb R}^n}$ with ${\rm cap}\,F>0.$ Then for every $a>0$ there exists $\delta>0$ such that $$ {\rm cap}\,(\overline{{\Bbb R}^n}\setminus F,\, C)\geqslant \delta $$ for every continuum $C\subset \overline{{\Bbb R}^n}\setminus F$ with $h(C)\geqslant a.$ } \end{proposition} {\it Proof of Theorem~\ref{th1}} largely follows the classical scheme used in the quasiregular case and applied by the author earlier, see, for example, \cite[Theorem~4.1]{MRV$_2$}, \cite[Theorem~2.9.III]{Ri}, \cite[Theorem~8]{Cr}, \cite[Lemma~3.1]{Sev$_1$} and \cite[Lemma~4.2]{SalSev$_1$}. Let $x_0\in D,$ $\varepsilon_0<d(x_0, \partial D),$ and let $\mathcal{E}=(A, C)$ be a condenser, where $A=B(x_0, \varepsilon_0)$ and $C=\overline{B(x_0, \varepsilon)}$ (the letter $E$ keeps denoting the set from the definition of the class ${\frak F}^{\Phi}_{M_0, E}(D)$). As usual, $\varepsilon_0:=\infty$ for $D={\Bbb R}^n.$ Let $a>0.$ Since ${\rm cap}\,E>0,$ by Proposition~\ref{pr1} applied with $F=E$ there exists $\delta=\delta(a)>0$ such that \begin{equation}\label{eq7A}{\rm cap}\,(\overline{{\Bbb R}^n}\setminus E,\, C_0)\geqslant \delta \end{equation} for any continuum $C_0\subset \overline{{\Bbb R}^n}\setminus E$ such that $h(C_0)\geqslant a.$ On the other hand, by Lemma~\ref{lem3} there exists a function $\alpha$ with $\alpha(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$ such that $${\rm cap}\,f(\mathcal{E})\leqslant \alpha(\varepsilon)\,,\quad \varepsilon\,\in (0,\,\varepsilon_0)\,,$$ for any $f\in{\frak F}^{\Phi}_{M_0, E}(D).$ Now, for the number $\delta=\delta(a)$ there exists $\varepsilon_*=\varepsilon_*(a)$ such that \begin{equation}\label{eq28*!} {\rm cap\,} f(\mathcal{E})\leqslant \delta\,, \quad \varepsilon \in (0, {\varepsilon_*}(a))\,. \end{equation} By~(\ref{eq28*!}), we obtain that $${\rm cap\,}\left(\overline{{\Bbb R}^n}\setminus E,\,f(\overline{B(x_0,\,\varepsilon)})\right)\leqslant {\rm cap\,}\left(f({B(x_0,\varepsilon_0)}), f(\overline{B(x_0,\,\varepsilon)})\right)\leqslant\delta$$ for $\varepsilon\in (0, \varepsilon_*(a)).$ Now, by~(\ref{eq7A}), $h(f(\overline{B(x_0,\,\varepsilon)}))<a.$ Finally, for any $a>0$ there is $\varepsilon_*=\varepsilon_*(a)$ such that $h(f(\overline{B(x_0,\,\varepsilon)}))<a$ for $\varepsilon\in (0, \varepsilon_*(a)).$ The theorem is proved.~$\Box$ To prove Theorem~\ref{th2}, we need the following important statement (see~\cite[(8.9)]{Ma}).
\begin{proposition}\label{pr1A} {\sl Given a condenser $E=(A, C)$ and $1<p<n,$ $$ {\rm cap}_p\,E\geqslant n{\Omega}^{\frac{p}{n}}_n \left(\frac{n-p}{p-1}\right)^{p-1}\left[m(C)\right]^{\frac{n-p}{n}}\,, $$ where ${\Omega}_n$ denotes the volume of the unit ball in ${\Bbb R}^n$, and $m(C)$ is the $n$-dimensional Lebesgue measure of $C.$} \end{proposition} The basic lower estimate of capacity of a condenser $E=(A, C)$ in ${\Bbb R}^n$ is given by \begin{equation}\label{2.5} {\rm cap}_p\ E = {\rm cap}_p\ (A, C) \geqslant \left(b_n\frac{(d(C))^p} {(m(A))^{1-n+p}}\right)^{\frac{1}{n-1}}\,,\quad p>n-1, \end{equation} where $b_n$ depends only on $n$ and $p$ and $d(C)$ denotes the diameter of $C$ (see \cite[Proposition~6]{Kr}, cf.~\cite[Lemma~5.9]{MRV$_1$}). {\it Proof of Theorem~\ref{th2}} is based on the approach used in the proof of~Lemma~2.4 in~\cite{GSS}. Let $0<r_0<{\rm dist\,}(x_0,\,\partial D).$ Consider a condenser $E=(A, C)$ with $A=B(x_0, r_0),$ $C=\overline{B(x_0, \varepsilon)}.$ By Lemma~\ref{lem3}, there is a function $\alpha=\alpha(\varepsilon)$ and $0<\varepsilon^{\,\prime}_0<r_0$ such that $\alpha(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$ and, in addition, $${\rm cap}_p\,f(E)\leqslant \alpha(\varepsilon)$$ for any $\varepsilon\in (0, \varepsilon^{\,\prime}_0)$ and $f\in{\frak F}^{\Phi}_{M_0, p}(D).$ Applying Proposition~\ref{pr1A}, one obtains $$ \alpha(\varepsilon)\geqslant{\rm cap}_p\,f(E)\geqslant n{\Omega}^{\frac{p}{n}}_n \left(\frac{n-p}{p-1}\right)^{p-1}\left[m(f(C))\right]^{\frac{n-p}{n}}\,, $$ where ${\Omega}_n$ denotes the volume of the unit ball in ${\Bbb R}^n,$ and $m(C)$ stands for the $n$-dimensional Lebesgue measure of $C.$ In other words, \begin{equation*} m(f(C))\leqslant \alpha_1(\varepsilon)\,, \end{equation*} where $\alpha_1(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0.$ The last relation implies the existence of a number $\varepsilon_1\in (0, 1),$ such that \begin{equation}\label{eqroughb} m(f(C))\leqslant 1\,, \end{equation} where $C=\overline{B(x_0, \varepsilon_1)}.$ Further reasoning is related to the repeated application of Lemma~\ref{lem3}. Consider one more condenser in this respect. Let $E_1=(A_1, C_{\varepsilon}),$ $A_1=B(x_0, \varepsilon_1),$ and $C_{\varepsilon}=\overline{B(x_0, \varepsilon)},$ $\varepsilon\in (0, \varepsilon_1).$ By Lemma~\ref{lem3} there is a function $\alpha_2(\varepsilon)$ and a number $0<\varepsilon^{\,\prime}_0<\varepsilon_1$ such that $$ {\rm cap}_p\,f(E_1)\leqslant \alpha_2(\varepsilon) $$ for any $\varepsilon\in (0, \varepsilon^{\,\prime}_0),$ where $\alpha_2(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0.$ On the other hand, according to~(\ref{2.5}), \begin{equation}\label{eq1} \left(c_1\frac{\left(d(f(\overline{B(x_0, \varepsilon)}))\right)^p} {\left(m(f(B(x_0, \varepsilon_1)))\right)^{1-n+p}}\right)^{\frac{1}{n-1}} \leqslant {\rm cap\,}_p\,f\left(E_1\right)\leqslant \alpha_2(\varepsilon)\,. \end{equation} By~(\ref{eqroughb}) and~(\ref{eq1}), one gets \begin{equation}\label{eq1A} d(f(\overline{B(x_0, \varepsilon)})) \leqslant \alpha_3(\varepsilon)\,, \end{equation} where $\alpha_3(\varepsilon)\rightarrow0$ as $\varepsilon\rightarrow 0.$ The proof of Theorem~\ref{th2} is completed, since the mapping $f\in{\frak F}^{\Phi}_{M_0, p}(D)$ participating in~(\ref{eq1A}) is arbitrary.~$\Box$ \section{Proof of Theorems~\ref{th3}--\ref{th6}} The proofs of these theorems are conceptually close to the proofs of Theorems 1–4 in~\cite{SevSkv$_1$} and use the same approach. 
Let us start with the following useful remark (see, for example, \cite[Remark~1]{SevSkv$_1$}). \begin{remark}\label{rem1} Let us show that, for a given domain $D_i,$ the relation~(\ref{eq17***}) implies the so-called strong accessibility of its boundary with respect to $p$-modulus (see also~\cite[Theorem~6.2]{Na$_1$}). Let $i\in I,$ let $x_0\in \partial D_i$ and let $U$ be some neighborhood of $x_0.$ We may assume that $x_0\ne \infty.$ Let $\varepsilon_1> 0$ be such that $\overline{V}\subset U,$ where $V:=B(x_0, \varepsilon_1).$ If $\partial U \ne \varnothing $ and $\partial V \ne \varnothing,$ put $\varepsilon_2:={\rm dist}\,(\partial U, \partial V)> 0.$ Let $F$ and $G$ be continua in $D_i$ such that $F\cap \partial U \ne \varnothing \ne F \cap \partial V$ and $G\cap \partial U \ne \varnothing \ne G \cap \partial V. $ From the last relations it follows that $h(F)\geqslant \varepsilon_2$ and $h(G)\geqslant \varepsilon_2.$ By the equi-uniformity of $D_i$ with respect to $p$-modulus, we may find $\delta=\delta(\varepsilon_2)> 0$ such that $M_p(\Gamma(F, G, D_i))\geqslant \delta> 0.$ In particular, {\it for any neighborhood $U$ of $x_0,$ there is a neighborhood $V$ of the same point, a compact set $F$ in $D_i$ and a number $\delta> 0$ such that $M_p(\Gamma(F, G, D_i))\geqslant \delta> 0 $ for any continuum $G\subset D_i$ such that $G\cap \partial U \ne \varnothing \ne G \cap \partial V.$} This property is called {\it strong accessibility} of $\partial D_i$ at the point $x_0$ with respect to $p$-modulus. Thus, this property is established for any domain $D_i$ which is an element of some equi-uniform family $\{D_i\}_{i\in I}.$ \end{remark} {\it Proof of Theorem~\ref{th3}}. The equicontinuity of the family $\frak{F}_{\Phi, A, p, \delta}(D)$ inside the domain $D$ follows from \cite[Theorem~4.1]{RS} for $p=n$ and Theorem~\ref{th2} for $p\ne n$. Let $f\in \frak{F}_{\Phi, A, p, \delta}(D)$ and let $Q=Q_f(x)$ be a corresponding function. Set $$Q^{\,\prime}(x)=\begin{cases}Q(x), & Q(x)\geqslant 1\\ 1, & Q(x)<1\end{cases}\,.$$ Observe that $Q^{\,\prime}(x)$ satisfies (\ref{eq2!!A}) up to a constant. Indeed, $$ \int\limits_D\Phi(Q^{\,\prime}(x))\frac{dm(x)}{\left(1+|x|^2\right)^n}= \int\limits_{\{x\in D: Q(x)< 1 \}}\Phi(Q^{\,\prime}(x))\frac{dm(x)}{\left(1+|x|^2\right)^n}+$$$$+ \int\limits_{\{x\in D: Q(x)\geqslant 1\} }\Phi(Q^{\prime}(x))\frac{dm(x)}{\left(1+|x|^2\right)^n}\leqslant M_0+\Phi(1)\int\limits_{{\Bbb R}^n}\frac{dm(x)}{\left(1+|x|^2\right)^n}=M^{\,\prime}_0<\infty\,.$$ Now, by~\cite[Theorem~2]{Sev$_2$} and Remark~\ref{rem1}, a mapping $f\in\frak{F}_{\Phi, A, p, \delta}(D)$ has a continuous extension to $\overline{D}$ for $p=n.$ In addition, by Lemma~\ref{lem1}, $\int\limits_{0}^{r_0}\frac{dt}{t^{\frac{n-1}{p-1}}q^{\,\prime\frac{1}{p-1}}_{x_0}(t)}=\infty,$ where $q^{\,\prime}_{x_0}(t)~=~\frac{1}{\omega_{n-1}t^{n-1}} \int\limits_{S(x_0, t)}Q^{\,\prime}(x)\,d\mathcal{H}^{n-1}.$ In this case, the existence of a continuous extension of the mapping $f$ to $\partial D$ can be established similarly to Theorem~1 in~\cite{Sev$_2$}. Note that a rigorous proof of this fact was given in~\cite[Theorem~1.2]{IS$_1$} for the case when the domains $D$ and $f(D)$ have compact closures, and the proof in the general case can be carried out by analogy. It remains to show that the family $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ is equicontinuous at $\partial D.$ Suppose the contrary.
Then there is $x_0\in \partial D$ for which $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ is not equicontinuous at $x_0.$ By applying additionally the inversion $\varphi(x)= \frac{x}{|x|^2},$ if necessary, we may assume that $x_0\ne \infty.$ Then there is a number $a>0$ with the following property: for any $m=1,2, \ldots$ there are $x_m \in \overline{D}$ and ${f}_m\in \frak{F}_{\Phi, A, p, \delta}(\overline{D})$ such that $|x_0-x_m|< 1/m$ and, in addition, $h(f_m(x_m), f_m(x_0))\geqslant a.$ Since $f_m$ has a continuous extension at $x_0,$ we may find a sequence $x^{\,\prime}_m\in D,$ $x^{\,\prime}_m\rightarrow x_0$ as $m\rightarrow\infty,$ such that $h(f_m(x^{\,\prime}_m), f_m(x_0))\leqslant 1/m.$ Thus, \begin{equation}\label{eq6***} h(f_m(x_m), f_m(x^{\,\prime}_m))\geqslant a/2\qquad \forall\,\,m\in {\Bbb N}\,. \end{equation} Since $f_m$ has a continuous extension to $\partial D,$ we may assume that $x_m\in D.$ Since the domain $D$ is locally connected at the point $x_0,$ there is a sequence of neighborhoods $V_m$ of the point $x_0$ with $h(V_m)\rightarrow 0$ as $m\rightarrow\infty$ such that the sets $D \cap V_m$ are domains and $D\cap V_m \subset B(x_0, 2^{\,-m}).$ Without loss of generality, passing to subsequences if necessary, we may assume that $x_m, x^{\,\prime}_m \in D\cap V_m.$ Join the points $x_m$ and $x^{\,\prime}_m$ by a path $\gamma_m:[0,1]\rightarrow {\Bbb R}^n$ such that $\gamma_m(0)=x_m,$ $\gamma_m(1)=x^{\,\prime}_m$ and $\gamma_m(t)\in D\cap V_m$ for $t\in (0,1),$ see Figure~\ref{fig6}. \begin{figure} \caption{To the proof of Theorem~\ref{th3}} \label{fig6} \end{figure} We denote by $C_m$ the image of the path $\gamma_m (t)$ under the mapping $f_m.$ From the relation~(\ref{eq6***}) it follows that \begin{equation}\label{eq5.1} h(C_m)\geqslant a/2\qquad\forall\, m\in {\Bbb N}\,, \end{equation} where $h$ denotes the chordal diameter of the set. Let $\varepsilon_0:={\rm dist}\,(x_0, A).$ Without loss of generality, one may assume that the continuum $A$ participating in the definition of the class $\frak{F}_{\Phi, A, p, \delta}(D)$ lies outside the balls $B(x_0, 2^{\,-m}),$ $m=1,2,\ldots,$ and that $B(x_0, \varepsilon_0)\cap A=\varnothing.$ Let $\Gamma_m:=\Gamma(|\gamma_m|, A, D)$ be the family of all paths joining $|\gamma_m|$ and $A$ in $D.$ In this case, the theorem on the property of connected sets that lie neither inside nor outside the given set implies the relation \begin{equation}\label{eq5B} \Gamma_m>\Gamma(S(x_0, 2^{\,-m}), S(x_0, \varepsilon_0), D)\,, \end{equation} see e.g.~\cite[Theorem~1.I.5.46]{Ku}. Using Proposition~\ref{pr2} and by~(\ref{eq4}), (\ref{eq5B}), we obtain that \begin{equation}\label{eq10C} M_p(f_m(\Gamma_m))\leqslant \frac{\omega_{n-1}}{\left(\,\int\limits_{2^{\,-m}}^{r_0}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{mx_0}(r)}\right)^{p-1}}\rightarrow 0\,,\quad m\rightarrow\infty\,, \end{equation} where $r_0\in(0, \min\{1, \varepsilon_0\})$ is fixed, $q_{mx_0}(t)=\frac{1}{\omega_{n-1}t^{n-1}} \int\limits_{S(x_0, t)}Q_m(x)\,d\mathcal{H}^{n-1}$ and $Q_m$ corresponds to the function $Q$ of $f_m$ in~(\ref{eq2*!}). On the other hand, observe that $f_m(\Gamma_m)=\Gamma(C_m, f_m(A), D_m^{\,\prime}).$ By the definition of the class $\frak{F}_{\Phi, A, p, \delta}(D),$ we have $h(f_m(A))\geqslant \delta$ for any $m\in {\Bbb N}.$
Therefore, by~(\ref{eq5.1}), $h(f_m(A))\geqslant \delta_1$ and $h(C_m)\geqslant\delta_1,$ where $\delta_1:=\min\{\delta, a/2\}.$ Taking into account that the domains $D_m^{\,\prime}=f_m(D)$ are equi-uniform with respect to $p$-modulus, we conclude that there exists $\sigma> 0$ such that $$M_p(f_m(\Gamma_m))=M_p(\Gamma(C_m, f_m(A), D_m^{\,\prime}))\geqslant \sigma\qquad\forall\,\, m\in {\Bbb N}\,,$$ which contradicts~(\ref{eq10C}). This contradiction shows that the assumption on the absence of equicontinuity of $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ was wrong, which completes the proof.~$\Box$ {\it Proof of Theorem~\ref{th4}.} The equicontinuity of the family $\frak{R}_{\Phi, \delta, p, E}(D)$ inside the domain $D$ follows from Theorem~\ref{th1} for $p=n$ and Theorem~\ref{th2} for $p\ne n$. The possibility of a continuous extension of any mapping $f\in \frak{R}_{\Phi, \delta, p, E}(D)$ to $\partial D$ is established in the same way as at the beginning of the proof of Theorem~\ref{th3}; therefore the proof of this fact is omitted. It remains to show that the family $\frak{R}_{\Phi, \delta, p, E}(D)$ is equicontinuous at $\partial D.$ Suppose the opposite. Then there is $x_0\in \partial D$ for which $\frak{R}_{\Phi, \delta, p, E}(D)$ is not equicontinuous at $x_0.$ Applying additionally the inversion $\varphi(x)= \frac{x}{|x|^2}$ if necessary, we may assume that $x_0\ne \infty.$ Then there is a number $a>0$ with the following property: for any $m=1,2, \ldots$ there is $x_m \in \overline{D}$ and ${f}_m\in\frak{R}_{\Phi, \delta, p, E}(D)$ such that $|x_0-x_m|< 1/m$ and, in addition, $h(f_m(x_m), f_m(x_0))\geqslant a.$ Since $f_m$ has a continuous extension at $x_0,$ we may assume that $x_m\in D.$ In addition, we may find a sequence $x^{\,\prime}_m\in D,$ $x^{\,\prime}_m\rightarrow x_0$ as $m\rightarrow\infty$ such that $h(f_m(x^{\,\prime}_m), f_m(x_0))\leqslant 1/m.$ Now, the relation~(\ref{eq6***}) holds. Since the domain $D$ is locally connected at the point $x_0,$ there is a sequence of neighborhoods $V_m$ of the point $x_0$ with $h(V_m)\rightarrow 0$ as $m\rightarrow\infty$ such that the sets $D \cap V_m$ are domains and $D\cap V_m \subset B(x_0, 2^{\,-m}).$ Without loss of generality, passing to subsequences if necessary, we may assume that $x_m, x^{\,\prime}_m \in D\cap V_m.$ Join the points $x_m$ and $x^{\,\prime}_m$ by a path $\gamma_m:[0,1]\rightarrow {\Bbb R}^n$ such that $\gamma_m(0)=x_m,$ $\gamma_m(1)=x^{\,\prime}_m$ and $\gamma_m(t)\in V_m$ for $t\in (0,1),$ see Figure~\ref{fig2}. \begin{figure} \caption{To the proof of Theorem~\ref{th4}} \label{fig2} \end{figure} We denote by $C_m$ the image of the path $\gamma_m$ under the mapping $f_m.$ It follows from the relation~(\ref{eq6***}) that condition~(\ref{eq5.1}) is satisfied, where $h$ denotes the chordal diameter of the set. By the definition of the family of mappings $\frak{R}_{\Phi, \delta, p, E}(D),$ for any $m=1,2,\ldots,$ any $f_m\in \frak{R}_{\Phi, \delta, p, E}(D)$ and any domain $D^{\,\prime}_m:=f_m(D)$ there is a continuum $K_m\subset D^{\,\prime}_m$ such that $h(K_m)\geqslant \delta$ and $h(f_m^{\,-1}(K_m), \partial D)\geqslant \delta>0.$ Since, by the hypothesis of the theorem, the domains $D^{\,\prime}_m$ are equi-uniform with respect to $p$-modulus, by~(\ref{eq5.1}) we obtain that \begin{equation}\label{eq13} M_p(\Gamma(K_m, C_m, D^{\,\prime}_m))\geqslant b
\end{equation} for any $m=1,2,\ldots$ and some $b>0.$ Let $\Gamma_m$ be the family of all paths $\beta:[0, 1)\rightarrow D^{\,\prime}_m$ such that $\beta(0)\in C_m$ and $\beta(t)\rightarrow p\in K_m$ as $t\rightarrow 1.$ Recall that a path $\alpha:[a, b)\rightarrow {\Bbb R}^n$ is called a (total) $f$-lifting of a path $\beta:[a, b)\rightarrow {\Bbb R}^n$ starting at a point $x_*,$ if $\alpha(a)=x_*$ and $(f\circ \alpha)(t)=\beta(t)$ for any $t\in [a, b).$ Let $\Gamma^*_m$ be the family of all total $f_m$-liftings $\alpha:[0, 1)\rightarrow D$ of paths in $\Gamma_m$ starting at points of $|\gamma_m|.$ Such a family is well-defined by~\cite[Theorem~3.7]{Vu}. Since the mapping $f_m$ is closed, we obtain that $\alpha(t)\rightarrow f^{\,-1}_m(K_m)$ as $t\rightarrow 1,$ where $f^{\,-1}_m(K_m)$ denotes the pre-image of $K_m$ under $f_m.$ Since $\overline{{\Bbb R}^n}$ is a compact metric space, the set $C_{\delta}:=\{x\in D: h(x, \partial D)\geqslant \delta\}$ is compact in $D$ for any $\delta>0$ and, in addition, $f_m^{\,-1}(K_m)\subset C_{\delta}.$ By~\cite[Lemma~1]{Sm}, the set $C_{\delta}$ can be embedded in a continuum $E_{\delta}$ lying in the domain $D.$ In this case, we may choose $\varepsilon_0>0$ such that ${\rm dist}\,(x_0, E_{\delta})\geqslant \varepsilon_0.$ By the property of connected sets that lie neither inside nor outside the given set, we obtain that \begin{equation}\label{eq5C} \Gamma^{\,*}_m>\Gamma(S(x_0, 2^{\,-m}), S(x_0, \varepsilon_0), D)\,, \end{equation} see e.g.~\cite[Theorem~1.I.5.46]{Ku}. Using Proposition~\ref{pr2} and by~(\ref{eq4}), (\ref{eq5C}), we obtain that \begin{equation}\label{eq10A} M_p(f_m(\Gamma_m^*))\leqslant M_p(f_m(\Gamma(S(x_0, 2^{\,-m}), S(x_0, \varepsilon_0), D)))\leqslant \int\limits_{2^{\,-m}}^{r_0}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{mx_0}(r)}\rightarrow 0\,,\quad m\rightarrow\infty\,, \end{equation} where $q_{mx_0}(t)=\frac{1}{\omega_{n-1}t^{n-1}} \int\limits_{S(x_0, t)}Q_m(x)\,d\mathcal{H}^{n-1}$ and $Q_m$ corresponds to $f_m$ in~(\ref{eq2*!}). Observe that $f_m(\Gamma^{\,*}_m)= \Gamma_m$ and $M_p(\Gamma_m)=M_p(\Gamma(K_m, C_m, D^{\,\prime}_m)),$ so that \begin{equation}\label{eq12B} M_p(f_m(\Gamma^{\,*}_m))=M_p(\Gamma(K_m, C_m, D^{\,\prime}_m))\,. \end{equation} However, the relations (\ref{eq10A}) and (\ref{eq12B}) together contradict~(\ref{eq13}). The resulting contradiction shows that the assumption on the absence of equicontinuity was incorrect, and therefore the family of mappings $\frak{R}_{\Phi, \delta, p, E}(D)$ is equicontinuous at every point $x_0\in \partial D.$~$\Box$ {\it Proof of Theorem~\ref{th5}}. The equicontinuity of the family $\frak{F}_{\Phi, A, p, \delta}(D)$ inside the domain $D$ follows from~\cite[Theorem~4.1]{RS} for $p=n$ and Theorem~\ref{th2} for $p\ne n$. The existence of a continuous extension of each $f\in\frak{F}_{\Phi, A, p, \delta}(D)$ to a continuous mapping in $\overline{D}$ follows from~\cite[Lemma~3]{Sev$_5$}. In particular, the strong accessibility of the boundary of $D_f^{\,\prime}=f(D)$ with respect to $p$-modulus follows by Remark~\ref{rem1}.
Let us show the equicontinuity of the family $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ at the points of $E_D,$ where $E_D$ denotes the space of prime ends in $D.$ Suppose the contrary, namely, that the family $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ is not equicontinuous at some prime end $P_0\in E_D.$ Then there is a number $a>0,$ a sequence $P_k\in \overline{D}_P,$ $k=1,2,\ldots,$ and elements $f_k\in\frak{F}_{\Phi, A, p, \delta}(D)$ such that $d(P_k, P_0)<1/k$ and \begin{equation}\label{eq3C} h(f_k(P_k), f_k(P_0))\geqslant a\qquad\forall\, k=1,2,\ldots\,. \end{equation} Since $f_k$ has a continuous extension to $\overline{D}_P,$ for any $k\in {\Bbb N}$ there is $x_k\in D$ such that $d(x_k, P_k)<1/k$ and $h(f_k(x_k), f_k(P_k))<1/k.$ Now, by~(\ref{eq3C}) we obtain that \begin{equation}\label{eq4C} h(f_k(x_k), f_k(P_0))\geqslant a/2\qquad\forall\, k=1,2,\ldots\,. \end{equation} Similarly, since $f_k$ has a continuous extension to $\overline{D}_P,$ there is a sequence $x_k^{\,\prime}\in D,$ $x_k^{\,\prime}\rightarrow P_0$ as $k\rightarrow \infty$ for which $h(f_k(x_k^{\,\prime}), f_k(P_0))<1/k$ for $k=1,2,\ldots\,.$ Now, it follows from~(\ref{eq4C}) that \begin{equation}\label{eq5E} h(f_k(x_k), f_k(x_k^{\,\prime}))\geqslant a/4\qquad\forall\, k=1,2,\ldots\,, \end{equation} where $x_k$ and $x_k^{\,\prime}$ belong to $D$ and converge to $P_0$ as $k\rightarrow\infty,$ see Figure~\ref{fig3}. \begin{figure} \caption{To the proof of Theorem~\ref{th5}} \label{fig3} \end{figure} By~\cite[Lemma~3.1]{IS$_2$}, cf.~\cite[Lemma~2]{KR}, the prime end $P_0$ of the regular domain $D$ contains a chain of cuts $\sigma_k$ lying on spheres $S_k$ centered at some point $x_0\in \partial D$ and with Euclidean radii $r_k \rightarrow 0$ as $k\rightarrow \infty.$ Let $D_k$ be the domains associated with the cuts $\sigma_k,$ $k=1,2,\ldots.$ Since the sequences $x_k$ and $x_k^{\,\prime}$ converge to the prime end $P_0$ as $k\rightarrow\infty,$ we may assume that $x_k$ and $x_k^{\,\prime}\in D_k$ for any $k=1,2,\ldots.$ Let us join the points $x_k$ and $x_k^{\,\prime}$ by a path $\gamma_k$ lying entirely in $D_k.$ One may also assume that the continuum $A$ from the definition of the class $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ does not intersect any of the domains $D_k,$ and that ${\rm dist}\,(\partial D, A)>\varepsilon_0$ for some $\varepsilon_0>0.$ We denote by $C_k$ the image of the path $\gamma_k$ under the mapping $f_k.$ It follows from the relation~(\ref{eq5E}) that \begin{equation}\label{eq3G} h(C_k)\geqslant a/4\qquad\forall\, k\in {\Bbb N}\,, \end{equation} where $h$ denotes the chordal diameter of the set. Let $\Gamma_k$ be the family of all paths joining $|\gamma_k|$ and $A$ in $D.$ By~\cite[Theorem~1.I.5.46]{Ku}, \begin{equation}\label{eq5D} \Gamma_k>\Gamma(S(x_0, r_k), S(x_0, \varepsilon_0), D)\,. \end{equation} Using Proposition~\ref{pr2} and by~(\ref{eq4}), (\ref{eq5D}), we obtain that \begin{equation}\label{eq14} M_p(f_k(\Gamma_k))\leqslant M_p(f_k(\Gamma(S(x_0, r_k), S(x_0, \varepsilon_0), D)))\leqslant \int\limits_{r_k}^{r_0}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{kx_0}(r)}\rightarrow 0\,,\quad k\rightarrow\infty\,, \end{equation} where $q_{kx_0}(t)=\frac{1}{\omega_{n-1}t^{n-1}} \int\limits_{S(x_0, t)}Q_k(x)\,d\mathcal{H}^{n-1}$ and $Q_k$ corresponds to the function $Q$ of $f_k$ in~(\ref{eq2*!}).
On the other hand, note that $f_k(\Gamma_k)=\Gamma(C_k, f_k(A), D_k^{\,\prime}),$ where $D_k^{\,\prime}=f_k(D).$ Since, by the hypothesis of the theorem, $h(f_k(A))\geqslant \delta$ for any $k\in {\Bbb N},$ by~(\ref{eq3G}) we have $h(f_k(A))\geqslant \delta_1$ and $h(C_k)\geqslant\delta_1,$ where $\delta_1:=\min\{\delta, a/4\}.$ Using the fact that the domains $D_k^{\,\prime}$ are equi-uniform with respect to $p$-modulus, we conclude that there is $\sigma>0$ such that $$M_p(f_k(\Gamma_k))=M_p(\Gamma(C_k, f_k(A), D_k^{\,\prime}))\geqslant \sigma\qquad\forall\,\, k\in {\Bbb N}\,,$$ which contradicts condition~(\ref{eq14}). This contradiction shows that the assumption on the absence of equicontinuity of the family $\frak{F}_{\Phi, A, p, \delta}(\overline{D})$ was wrong, which completes the proof of the theorem.~$\Box$ {\it Proof of Theorem~\ref{th6}.} The equicontinuity of the family $\frak{R}_{\Phi, \delta, p, E}(D)$ inside the domain $D$ follows from~\cite[Theorem~4.1]{RS} for $p=n$ and Theorem~\ref{th2} for $p\ne n$. The existence of a continuous extension of each $f\in\frak{R}_{\Phi, \delta, p, E}(D)$ to a continuous mapping in $\overline{D}$ follows from~\cite[Lemma~3]{Sev$_5$}. In particular, the strong accessibility of the boundary of $D_f^{\,\prime}=f(D)$ with respect to $p$-modulus follows by Remark~\ref{rem1}. It remains to show that the family $\frak{R}_{\Phi, \delta, p, E}(D)$ is equicontinuous at $\partial_PD:=\overline{D}_P\setminus D.$ Suppose the opposite. Arguing as in the proof of Theorem~\ref{th5}, we construct two sequences $x_k$ and $x_k^{\,\prime}\in D,$ converging to the prime end $P_0$ as $k\rightarrow \infty,$ for which the relation~(\ref{eq5E}) holds. Let us join the points $x_k$ and $x^{\,\prime}_k$ by a path $\gamma_k:[0,1]\rightarrow{\Bbb R}^n$ such that $\gamma_k(0)=x_k,$ $\gamma_k(1)=x^{\,\prime}_k$ and $\gamma_k(t)\in D$ for $t\in (0,1).$ Denote by $C_k$ the image of $\gamma_k$ under the mapping $f_k.$ It follows from the relation (\ref{eq5E}) that \begin{equation}\label{eq1B} h(C_k)\geqslant a/4\qquad\forall\, k=1,2,\ldots\,. \end{equation} By~\cite[Lemma~3.1]{IS$_2$}, cf.~\cite[Lemma~2]{KR}, the prime end $P_0$ of the regular domain $D$ contains a chain of cuts $\sigma_k$ lying on spheres $S_k$ centered at some point $x_0\in \partial D$ and with Euclidean radii $r_k \rightarrow 0$ as $k\rightarrow \infty.$ Let $D_k$ be the domains associated with the cuts $\sigma_k,$ $k=1,2,\ldots.$ Since the sequences $x_k$ and $x_k^{\,\prime}$ converge to the prime end $P_0$ as $k\rightarrow\infty,$ we may assume that $x_k$ and $x_k^{\,\prime}\in D_k$ for any $k=1,2,\ldots.$ By the definition of the family $\frak{R}_{\Phi, \delta, p, E}(D),$ for every $f_k\in \frak{R}_{\Phi, \delta, p, E}(D)$ and any domain $D^{\,\prime}_k:=f_k(D)$ there is a continuum $K_k\subset D^{\,\prime}_k$ such that $h(K_k)\geqslant \delta$ and $h(f_k^{\,-1}(K_k), \partial D)\geqslant \delta>0.$ Since, by the hypothesis of the theorem, the domains $D^{\,\prime}_k$ are equi-uniform with respect to $p$-modulus, by~(\ref{eq1B}) we obtain that \begin{equation}\label{eq13A} M_p(\Gamma(K_k, C_k, D^{\,\prime}_k))\geqslant b
\end{equation} for any $k=1,2,\ldots$ and some $b>0.$ Let $\Gamma_k$ be the family of all paths $\beta:[0, 1)\rightarrow D^{\,\prime}_k$ such that $\beta(0)\in C_k$ and $\beta(t)\rightarrow p\in K_k$ as $t\rightarrow 1.$ Let $\Gamma^*_k$ be the family of all total $f_k$-liftings $\alpha:[0, 1)\rightarrow D$ of paths in $\Gamma_k$ starting at points of $|\gamma_k|.$ Such a family is well-defined by~\cite[Theorem~3.7]{Vu}. Since $f_k$ is closed, $\alpha(t)\rightarrow f^{\,-1}_k(K_k)$ as $t\rightarrow 1,$ where $f^{\,-1}_k(K_k)$ denotes the pre-image of $K_k$ under~$f_k.$ Since $\overline{{\Bbb R}^n}$ is a compact metric space, the set $C_{\delta}:=\{x\in D: h(x, \partial D)\geqslant \delta\}$ is compact in $D$ for any $\delta>0$ and, in addition, $f_k^{\,-1}(K_k)\subset C_{\delta}.$ By~\cite[Lemma~1]{Sm}, the set $C_{\delta}$ can be embedded in a continuum $E_{\delta}$ lying in the domain $D.$ In this case, we may choose $\varepsilon_0>0$ such that ${\rm dist}\,(x_0, E_{\delta})\geqslant \varepsilon_0.$ By~\cite[Theorem~1.I.5.46]{Ku}, \begin{equation}\label{eq5F} \Gamma^*_k>\Gamma(S(x_0, r_k), S(x_0, \varepsilon_0), D)\,. \end{equation} Using Proposition~\ref{pr2} and by~(\ref{eq4}), (\ref{eq5F}), we obtain that \begin{equation}\label{eq14A} M_p(f_k(\Gamma^*_k))\leqslant M_p(f_k(\Gamma(S(x_0, r_k), S(x_0, \varepsilon_0), D)))\leqslant \int\limits_{r_k}^{r_0}\frac{dr}{r^{\frac{n-1}{p-1}}q^{\frac{1}{p-1}}_{kx_0}(r)}\rightarrow 0\,,\quad k\rightarrow\infty\,, \end{equation} where $q_{kx_0}(t)=\frac{1}{\omega_{n-1}t^{n-1}} \int\limits_{S(x_0, t)}Q_k(x)\,d\mathcal{H}^{n-1}$ and $Q_k$ corresponds to the function $Q$ of $f_k$ in~(\ref{eq2*!}). Observe that $f_k(\Gamma^{\,*}_k)=\Gamma_k$ and, simultaneously, $M_p(\Gamma_k)=M_p(\Gamma(K_k, C_k, D^{\,\prime}_k)),$ so that \begin{equation}\label{eq12A} M_p(f_k(\Gamma^{\,*}_k))=M_p(\Gamma(K_k, C_k, D^{\,\prime}_k))\,. \end{equation} Combining (\ref{eq14A}) and (\ref{eq12A}), we obtain a contradiction with~(\ref{eq13A}). The resulting contradiction shows that the assumption on the absence of equicontinuity was incorrect, and, therefore, the family of mappings $\frak{R}_{\Phi, \delta, p, E}(D)$ is equicontinuous at every prime end $P_0\in E_D.$~$\Box$ {\bf \noindent Evgeny Sevost'yanov} \\ {\bf 1.} Zhytomyr Ivan Franko State University, \\ 40 Bol'shaya Berdichevskaya Str., 10 008 Zhytomyr, UKRAINE \\ {\bf 2.} Institute of Applied Mathematics and Mechanics\\ of NAS of Ukraine, \\ 1 Dobrovol'skogo Str., 84 100 Slavyansk, UKRAINE\\ [email protected] \end{document}
\begin{document} \title{\textrm{ Restricted normal cones and the \\method of alternating projections}} \author{ Heinz H.\ Bauschke\thanks{Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{[email protected]}.}, D.\ Russell Luke\thanks{Institut f\"ur Numerische und Angewandte Mathematik,\ Universit\"at G\"ottingen,\ Lotzestr.~16--18, 37083 G\"ottingen, Germany. E-mail: \texttt{[email protected]}.}, Hung M.\ Phan\thanks{Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{[email protected]}.}, ~and Xianfu\ Wang\thanks{Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada. E-mail: \texttt{[email protected]}.}} \date{May 2, 2012} \maketitle \vskip 8mm \begin{abstract} \noindent The method of alternating projections (MAP) is a common method for solving feasibility problems. Although the MAP has traditionally been applied to subspaces or to convex sets, little was known about its behavior in the nonconvex case until 2009, when Lewis, Luke, and Malick derived local linear convergence results provided that a condition involving normal cones holds and at least one of the sets is superregular (a property less restrictive than convexity). However, their results failed to capture very simple classical convex instances such as two lines in three-dimensional space. In this paper, we extend and develop the Lewis-Luke-Malick framework so that not only any two linear subspaces but also any two closed convex sets whose relative interiors meet are covered. We also allow for sets that are more structured such as unions of convex sets. The key tool required is the restricted normal cone, which is a generalization of the classical Mordukhovich normal cone. We thoroughly study restricted normal cones from the viewpoint of constraint qualifications and regularity. Numerous examples are provided to illustrate the theory. \end{abstract} {\small \noindent {\bfseries 2010 Mathematics Subject Classification:} {Primary 49J52, 49M20; Secondary 47H09, 65K05, 65K10, 90C26. }} \noindent {\bfseries Keywords:} Constraint qualification, convex set, Friedrichs angle, linear convergence, method of alternating projections, normal cone, projection operator, restricted normal cone, superregularity. \section{Introduction} Throughout this paper, we assume that \boxedeqn{ \text{$X$ is a Euclidean space } } (i.e., finite-dimensional real Hilbert space) with inner product $\scal{\cdot}{\cdot}$, induced norm $\|\cdot\|$, and induced metric $d$. Let $A$ and $B$ be nonempty closed subsets of $X$. We assume first that $A$ and $B$ are additionally \emph{convex} and that $A\cap B\neq\varnothing$. In this case, the \emph{projection operators} $P_A$ and $P_B$ (a.k.a.\ projectors or nearest point mappings) corresponding to $A$ and $B$, respectively, are single-valued with full domain. In order to find a point in the intersection of $A$ and $B$, it is very natural to simply alternate the operators $P_A$ and $P_B$, resulting in the famous \emph{method of alternating projections (MAP)}. Thus, given a starting point $b_{-1}\in X$, sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ are generated as follows: \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\qquad a_{n} := P_Ab_{n-1},\quad b_n := P_Ba_n. \end{equation} In the present consistent convex setting, both sequences have a common limit in $A\cap B$.
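As a concrete, purely illustrative numerical sketch of this iteration, the following short Python fragment runs the MAP for two lines through the origin in $\ensuremath{\mathbb R}^3$; the particular lines, the starting point, and the stopping tolerance are arbitrary choices made only for the sake of the example.
\begin{verbatim}
import numpy as np

def projector_onto_line(direction):
    # Orthogonal projector onto the line spanned by `direction` (through 0).
    d = direction / np.linalg.norm(direction)
    return lambda x: np.dot(x, d) * d

# Illustrative choice: two lines in R^3 that meet only at the origin.
P_A = projector_onto_line(np.array([1.0, 0.0, 0.0]))
P_B = projector_onto_line(np.array([1.0, 1.0, 1.0]))

b = np.array([0.3, -2.0, 1.5])      # starting point b_{-1}
for n in range(100):
    a = P_A(b)                      # a_n = P_A b_{n-1}
    b = P_B(a)                      # b_n = P_B a_n
    if np.linalg.norm(a - b) < 1e-12:
        break

print(n, a, b)  # both iterates approach the common limit 0 in the intersection
\end{verbatim}
For two such lines the distance of the iterates to the intersection decreases by the factor $\cos^2\theta$ per full cycle, where $\theta$ is the angle between the lines; this is consistent with the optimal rate for two subspaces recovered later in the paper.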
Not surprisingly, because of its elegance and usefulness, the MAP has attracted many famous mathematicians, including John~von~Neumann and Norbert~Wiener, and it has been independently rediscovered repeatedly. It is outside the scope of this article to review the history of the MAP, its many extensions, and its rich convergence theory; the interested reader is referred to, e.g., \cite{BC2011}, \cite{CensorZenios}, \cite{Deutsch}, and the references therein. Since $X$ is finite-dimensional and $A$ and $B$ are closed, the convexity of $A$ and $B$ is actually not needed in order to guarantee existence of nearest points. This gives rise to \emph{set-valued} projection operators, which for convenience we also denote by $P_A$ and $P_B$. Dropping the convexity assumption, the MAP now generates sequences via \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\qquad a_{n} \in P_Ab_{n-1},\quad b_n \in P_Ba_n. \end{equation} This iteration is much less understood than its older convex cousin. For instance, global convergence to a point in $A\cap B$ cannot be guaranteed anymore \cite{CombTrus90}. Nonetheless, the MAP is widely used in engineering and the physical sciences for finding a point in $A\cap B$ (see, e.g., \cite{StarkYang}). Lewis, Luke, and Malick achieved a break-through result in 2009: they established local linear convergence of the MAP provided that there are no normal vectors that are opposite and at least one of the sets is superregular (a property less restrictive than convexity). Their proof techniques were quite different from the well-known convex approaches; in fact, the Mordukhovich normal cone was a central tool in their analysis. However, their results were not strong enough to handle well-known convex and linear scenarios. For instance, the linear convergence of the MAP for two lines in $\ensuremath{\mathbb R}^3$ cannot be obtained in their framework. \emph{The goal of this paper is to extend the results by Lewis, Luke and Malick to make them applicable in more general settings. We unify their theory with classical convex convergence results. Our principal tool is a new normal cone which we term the \emph{restricted normal cone}. A careful study of restricted normal cones and their applications is carried out. We also allow for constraint sets that are \emph{unions} of superregular (or even convex) sets. We shall recover the known optimal convergence rate for the MAP when studying two linear subspaces. } In a parallel paper \cite{BLPW12b} we apply the tools developed here to the important problem of sparsity optimization with affine constraints. The remainder of the paper is organized as follows. In Section~\ref{s:prelim}, we collect various auxiliary results that are useful later and make the subsequent analysis less cluttered. The restricted normal cones are introduced in Section~\ref{s:normalcone}. Section~\ref{s:normalandaffine} focuses on normal cones that are restricted by affine subspaces; the results achieved are critical for including convex settings in the linear convergence framework. Further examples and results are provided in Section~\ref{s:further} and Section~\ref{s:nosuper}, where we illustrate that the restricted normal cone cannot be obtained by intersections with various natural conical supersets. Section~\ref{s:CQ1} and Section~\ref{s:CQ2} are devoted to constraint qualifications which describe how well the sets $A$ and $B$ relate to each other.
In Section~\ref{s:superregularity}, we discuss regularity and superregularity, notions that extend the idea of convexity, for sets and collections of sets. We are then in a position to provide in Section~\ref{s:application} our main results dealing with the local linear convergence of the MAP. \subsection*{Notation} The notation employed in this article is quite standard and follows largely \cite{Borzhu05}, \cite{Boris1}, \cite{Rock70}, and \cite{Rock98}; these books also provide exhaustive information on variational analysis. The real numbers are $\ensuremath{\mathbb R}$, the integers are $\ensuremath{\mathbb Z}$, and $\ensuremath{\mathbb N} := \menge{z\in\ensuremath{\mathbb Z}}{z\geq 0}$. Further, $\ensuremath{\mathbb R}_+ := \menge{x\in\ensuremath{\mathbb R}}{x\geq 0}$, $\ensuremath{\mathbb R}_+P := \menge{x\in \ensuremath{\mathbb R}}{x>0}$ and $\ensuremath{\mathbb R}_-$ and $\ensuremath{\mathbb R}_-M$ are defined analogously. Let $R$ and $S$ be subsets of $X$. Then the closure of $S$ is $\overline{S}$, the interior of $S$ is $\ensuremath{\operatorname{int}}(S)$, the boundary of $S$ is $\ensuremath{\operatorname{bdry}}(S)$, and the smallest affine and linear subspaces containing $S$ are $\ensuremath{\operatorname{aff}} S$ and $\ensuremath{\operatorname{span}} S$, respectively. The linear subspace parallel to $\ensuremath{\operatorname{aff}} S$ is $\ensuremath{\operatorname{par}} S := (\ensuremath{\operatorname{aff}} S)-S=(\ensuremath{\operatorname{aff}} S)-s$, for every $s\in S$. The relative interior of $S$, $\ensuremath{\operatorname{ri}}(S)$, is the interior of $S$ relative to $\ensuremath{\operatorname{aff}}(S)$. The negative polar cone of $S$ is $S^\ominus=\menge{u\in X}{\sup\scal{u}{S}\leq 0}$. We also set $S^\oplus := -S^\ominus$ and $S^\perp := S^\oplus \cap S^\ominus$. We also write $R\oplus S$ for $R+S:=\menge{r+s}{(r,s)\in R\times S}$ provided that $R\perp S$, i.e., $(\forall (r,s)\in R\times S)$ $\scal{r}{s}=0$. We write $F\colon X\ensuremath{\rightrightarrows} X$, if $F$ is a mapping from $X$ to its power set, i.e., $\ensuremath{\operatorname{gr}} F$, the graph of $F$, lies in $X\times X$. Abusing notation slightly, we will write $F(x) = y$ if $F(x)=\{y\}$. A nonempty subset $K$ of $X$ is a cone if $(\forall\lambda\in\ensuremath{\mathbb R}_+)$ $\lambda K := \menge{\lambda k}{k\in K}\subseteq K$. The smallest cone containing $S$ is denoted $\ensuremath{\operatorname{cone}}(S)$; thus, $\ensuremath{\operatorname{cone}}(S) := \ensuremath{\mathbb R}_+\cdot S := \menge{\rho s}{\rho\in\ensuremath{\mathbb R}_+,s\in S}$ if $S\neq\varnothing$ and $\ensuremath{\operatorname{cone}}(\varnothing):=\{0\}$. The smallest convex subset and the smallest closed convex subset containing $S$ are $\ensuremath{\operatorname{conv}}(S)$ and $\ensuremath{\overline{\operatorname{conv}}\,}(S)$, respectively. If $z\in X$ and $\rho\in\ensuremath{\mathbb R}_+P$, then $\ball{z}{\rho} := \menge{x\in X}{d(z,x)\leq \rho}$ is the closed ball centered at $z$ with radius $\rho$, while $\sphere{z}{\rho} := \menge{x\in X}{d(z,x)= \rho}$ is the (closed) sphere centered at $z$ with radius $\rho$. If $u$ and $v$ are in $X$, then $[u,v] := \menge{(1-\lambda)u+\lambda v}{\lambda\in [0,1]}$ is the line segment connecting $u$ and $v$. \section{Auxiliary results} \label {s:prelim} In this section, we fix some basic notation used throughout this article. We also collect several auxiliary results that will be useful in the sequel. \subsection*{Projections} \begin{definition}[distance and projection] Let $A$ be a nonempty subset of $X$.
Then \begin{equation} d_A \colon X\to\ensuremath{\mathbb R}\colon x\mapsto \inf_{a\in A}d(x,a) \end{equation} is the \ensuremath{\varnothing}h{distance function} of the set $A$ and \begin{equation} P_A\colon X\ensuremath{\rightrightarrows} X\colon x\mapsto \menge{a\in A}{d_A(x)=d(x,a)} \end{equation} is the corresponding \ensuremath{\varnothing}h{projection}. \end{definition} \begin{proposition}[existence] \label{p:0224a} Let $A$ be a nonempty closed subset of $X$. Then $(\forall x\in X)$ $P_A(x)\neq\ensuremath{\varnothing}tyset$. \end{proposition} \begin{proof} Let $z\in X$. The function $f\colon X\to\ensuremath{\mathbb R}\colon x\mapsto \|x-z\|^2$ is continuous and $\lim_{\|x\|\to\ensuremath{+\infty}} f(x)=\ensuremath{+\infty}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $A$ such that $f(x_n)\to\inf f(A)$. Then $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded. Since $A$ is closed and $f$ is continuous, every cluster point of $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is a minimizer of $f$ over the set $A$, i.e., an element in $P_Az$. \end{proof} \begin{example}[sphere] \label{ex:projS} Let $z\in X$ and $\rho\in\ensuremath{\mathbb R}_+P$. Set $S := \sphere{z}{\rho}$. Then \begin{equation} \label{e:0308a} (\forall x\in X)\quad P_S(x)=\begin{cases} z+\rho\frac{x-z}{\|x-z\|},&\text{if $x\neq z$;}\\ S, &\text{otherwise.} \end{cases} \end{equation} \end{example} \begin{proof} Let $x\in X$. The formula is clear when $x=z$, so we assume $x\neq z$. Set \begin{equation} c := z+\rho\frac{x-z}{\|x-z\|} \in S, \end{equation} and let $s = z+\rho b\in S\smallsetminus \{c\}$, i.e., $\|b\|=1$ and $b\neq (x-z)/\|x-z\|$. Hence, using that $|\|u\|-\|v\||<\|u-v\|$ $\Leftrightarrow$ $\scal{u}{v}<\|u\|\|v\|$ and because of Cauchy-Schwarz, we obtain \begin{subequations} \begin{align} \|x-c\| &= \big|\|x-z\|-\rho\big| =\big|\|x-z\|-\|\rho b\|\big| =\big|\|x-z\|-\|s-z\|\big|\\ &<\|x-s\|. \end{align} \end{subequations} We have thus established \eqref{e:0308a}. \end{proof} In view of Proposition~\ref{p:0224a}, the next result is in particular applicable to the union of finitely many nonempty closed subsets of $X$. \begin{lemma}[union] \label{l:unionproj} Let $(A_i)_{i\in I}$ be a collection of nonempty subsets of $X$, set $A := \bigcup_{i\in I}A_i$, let $x\in X$, and suppose that $a\in P_A(x)$. Then there exists $i\in I$ such that $a\in P_{A_i}(x)$. \end{lemma} \begin{proof} Indeed, since $a\in A$, there exists $i\in I$ such that $a\in A_i$. Then $d(x,a)=d_A(x)\leq d_{A_i}(x)\leq d(x,a)$. Hence $d(x,a)=d_{A_i}(x)$, as claimed. \end{proof} The following result is well known. \begin{fact}[projection onto closed convex set] \label{f:convproj} Let $C$ be a nonempty closed convex subset of $X$, and let $x$, $y$ and $p$ be in $X$. Then the following hold: \begin{enumerate} \item \label{f:convproj1} $P_C(x)$ is a singleton. \item \label{f:convproj2} $P_C(x)=p$ if and only if $p\in C$ and $\sup\scal{C-p}{x-p}\leq 0$. \item \label{f:convproj3} $\|P_C(x)-P_C(y)\|^2 + \|(\ensuremath{\operatorname{Id}}-P_C)(x)-(\ensuremath{\operatorname{Id}}-P_C)(y)\|^2\leq\|x-y\|^2$. \item \label{f:convproj4} $\|P_C(x)-P_C(y)\|\leq\|x-y\|$. \end{enumerate} \end{fact} \begin{proof} \ref{f:convproj1}\&\ref{f:convproj2}: \cite[Theorem~3.14]{BC2011}. \ref{f:convproj3}: \cite[Proposition~4.8]{BC2011}. \ref{f:convproj4}: Clear from \ref{f:convproj3}. \end{proof} \subsection*{Miscellany} \begin{lemma}\label{l:cone01} Let $A$ and $B$ be subsets of $X$, and let $K$ be a cone in $X$. 
Then the following hold: \begin{enumerate} \item\label{l:cone01i} $\ensuremath{\operatorname{cone}}(A\cap B)\subseteq\ensuremath{\operatorname{cone}} A\cap \ensuremath{\operatorname{cone}} B$. \item\label{l:cone01ii} $\ensuremath{\operatorname{cone}}(K\cap B)= K \cap \ensuremath{\operatorname{cone}} B$. \end{enumerate} \end{lemma} \begin{proof} \ref{l:cone01i}: Clear. \ref{l:cone01ii}: By \ref{l:cone01i}, $\ensuremath{\operatorname{cone}}(K\cap B) \subseteq (\ensuremath{\operatorname{cone}} K) \cap (\ensuremath{\operatorname{cone}} B) = K \cap \ensuremath{\operatorname{cone}} B$. Now assume that $x\in (K\cap \ensuremath{\operatorname{cone}} B)\smallsetminus\{0\}$. Then there exists $\beta>0$ such that $x/\beta \in B$. Since $K$ is a cone, $x/\beta\in K$. Thus $x/\beta\in K\cap B$ and therefore $x \in \ensuremath{\operatorname{cone}}(K\cap B)$. \end{proof} Note that the inclusion in Lemma~\ref{l:cone01}\ref{l:cone01i} may be strict: indeed, consider the case when $X=\ensuremath{\mathbb R}$, $A := \{1\}$, and $B=\{2\}$. \begin{lemma}[a characterization of convexity] \label{l:PA&cvex} Let $A$ be a nonempty closed subset of $X$. Then the following are equivalent: \begin{enumerate} \item\label{l:PA&cvex-i} $A$ is convex. \item\label{l:PA&cvex-ii} $P^{-1}_A(a)-a$ is a cone, for every $a\in A$. \item\label{l:PA&cvex-iii} $P_A(x)$ is a singleton, for every $x\in X$. \end{enumerate} \end{lemma} \begin{proof} ``\ref{l:PA&cvex-i}$\Rightarrow$\ref{l:PA&cvex-ii}'': Indeed, it is well known in convex analysis (see, e.g., \cite[Proposition~6.17]{Rock98}) that for every $a\in A$, $P_A^{-1}(a)-a$ is equal to the normal cone (in the sense of convex analysis) of $A$ at $a$. ``\ref{l:PA&cvex-ii}$\Rightarrow$\ref{l:PA&cvex-iii}'': Let $x\in X$. By Proposition~\ref{p:0224a}, $P_Ax\neq\varnothing$. Take $a_1$ and $a_2$ in $P_Ax$. Then $\|x-a_1\|=\|x-a_2\|$ and $x-a_1\in P_A^{-1}a_1-a_1$. Since $P_A^{-1}a-a$ is a cone, we have $2(x-a_1)\in P_A^{-1}a_1-a_1$. Hence $y := 2x-a_1\in P_A^{-1}a_1$ and $y-x=x-a_1$. Thus, \begin{subequations} \begin{align} \scal{y-a_2}{a_1-a_2}&=\scal{(y-x)+(x-a_2)}{(a_1-x)+(x-a_2)}\\ &=\scal{y-x}{a_1-x}+\scal{y-x}{x-a_2}+\scal{x-a_2}{a_1-x}+\|x-a_2\|^2\\ &=\scal{x-a_1}{a_1-x}+\scal{x-a_1}{x-a_2}+\scal{x-a_2}{a_1-x}+\|x-a_2\|^2\\ &=-\|x-a_1\|^2 + \|x-a_2\|^2\\ &=0. \end{align} \end{subequations} Since $a_1\in P_Ay$, it follows that \begin{subequations} \label{e:0224a} \begin{align} \|y-a_1\|^2&=\|y-a_2\|^2+2\scal{y-a_2}{a_2-a_1}+\|a_1-a_2\|^2\\ &=\|y-a_2\|^2+\|a_1-a_2\|^2\\ &\geq \|y-a_2\|^2\\ &\geq\|y-a_1\|^2. \end{align} \end{subequations} Hence equality holds throughout \eqref{e:0224a}. Therefore, $a_1=a_2$. ``\ref{l:PA&cvex-iii}$\Rightarrow$\ref{l:PA&cvex-i}``: This classical result due to Bunt and to Motzkin on the convexity of Chebyshev sets is well known; for proofs, see, e.g., \cite[Chapter~12]{Deutsch} or \cite[Corollary~21.13]{BC2011}. \end{proof} \begin{proposition} \label{p:onecone} Let $S$ be a convex set. Then the following are equivalent. \begin{enumerate} \item \label{p:oneconei} $0\in\ensuremath{\operatorname{ri}} S$. \item \label{p:oneconeii} $\ensuremath{\operatorname{cone}} S = \ensuremath{\operatorname{span}} S$. \item \label{p:oneconeiii} $\ensuremath{\overline{\operatorname{cone}}\,} S = \ensuremath{\operatorname{span}} S$. \end{enumerate} \end{proposition} \begin{proof} Set $Y = \ensuremath{\operatorname{span}} S$. Then \ref{p:oneconei} $\Leftrightarrow$ $0$ belongs to the interior of $S$ relative to $Y$. 
``\ref{p:oneconei}$\Rightarrow$\ref{p:oneconeii}'': There exists $\delta>0$ such that for every $y\in Y\smallsetminus\{0\}$, $\delta y/\|y\| \in S$. Hence $y\in\ensuremath{\operatorname{cone}} S$. ``\ref{p:oneconeii}$\Rightarrow$\ref{p:oneconei}'': For every $y\in Y$, there exists $\delta>0$ such that $\delta y\in S$. Now \cite[Corollary~6.4.1]{Rock70} applies in $Y$. ``\ref{p:oneconeii}$\Leftrightarrow$\ref{p:oneconeiii}'': Set $K=\ensuremath{\operatorname{cone}} S$, which is convex. By \cite[Corollary~6.3.1]{Rock70}, we have $\ensuremath{\operatorname{ri}} K = \ensuremath{\operatorname{ri}} Y$ $\Leftrightarrow$ $\overline{K} = \overline{Y}$ $\Leftrightarrow$ $\ensuremath{\operatorname{ri}} Y \subseteq K \subseteq \overline{Y}$. Since $\ensuremath{\operatorname{ri}} Y = Y = \overline{Y}$, we obtain the equivalences: $\ensuremath{\operatorname{ri}} K = Y$ $\Leftrightarrow$ $\overline{K}=Y$ $\Leftrightarrow$ $K=Y$. \end{proof} \section{Restricted normal cones: basic properties} \label{s:normalcone} Normal cones are fundamental objects in variational analysis; they are used to construct subdifferential operators, and they have found many applications in optimization, optimal control, nonlinear analysis, convex analysis, etc.; see, e.g., \cite{BC2011}, \cite{Borzhu05}, \cite{clsw98}, \cite{Loewenbook}, \cite{Boris1}, \cite{Rock70}, \cite{Rock98}. One of the key building blocks is the Mordukhovich (or limiting) normal cone $N_{A}$, which is obtained by limits of proximal normal vectors. In this section, we propose a new, very flexible normal cone of $A$, denoted by $\nc{A}{B}$, which is obtained by constraining the proximal normal vectors to a set $B$. \begin{definition}[normal cones] \label{d:NCone} Let $A$ and $B$ be nonempty subsets of $X$, and let $a$ and $u$ be in $X$. If $a\in A$, then various normal cones of $A$ at $a$ are defined as follows: \begin{enumerate} \item \label{d:pnB} The \emph{$B$-restricted proximal normal cone} of $A$ at $a$ is \begin{equation}\label{e:pnB} \pn{A}{B}(a):= \ensuremath{\operatorname{cone}}\Big(\big(B\cap P_A^{-1}a\big)-a\Big) = \ensuremath{\operatorname{cone}}\Big(\big(B-a\big)\cap \big(P_A^{-1}a-a\big)\Big). \end{equation} \item \label{d:pnX} The (classical) \emph{proximal normal cone} of $A$ at $a$ is \begin{equation}\label{e:pnX} \pnX{A}(a):=\pn{A}{X}(a)= \ensuremath{\operatorname{cone}}\big(P_A^{-1}a-a\big). \end{equation} \item\label{d:nc} The \emph{$B$-restricted normal cone} $\nc{A}{B}(a)$ is implicitly defined by $u\in\nc{A}{B}(a)$ if and only if there exist sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A$ and $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ in $\pn{A}{B}(a_n)$ such that $a_n\to a$ and $u_n\to u$. \item\label{d:fnc} The \emph{Fr\'{e}chet normal cone} $\fnc{A}(a)$ is implicitly defined by $u\in \fnc{A}(a)$ if and only if $(\forall \varepsilon>0)$ $(\ensuremath{\exists\,}\delta>0)$ $(\forall x\in A\cap \ball{a}{\delta})$ $\scal{u}{x-a}\leq\varepsilon\|x-a\|$. \item\label{d:cnc} The \emph{normal cone from convex analysis} $\cnc{A}(a)$ is implicitly defined by $u\in\cnc{A}(a)$ if and only if $\sup\scal{u}{A-a}\leq 0$. \item\label{d:Mnc} The \emph{Mordukhovich normal cone} $N_A(a)$ of $A$ at $a$ is implicitly defined by $u\in N_A(a)$ if and only if there exist sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $ A$ and $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ in $\pnX{A}(a_n)$ such that $a_n\to a$ and $u_n\to u$.
\end{enumerate} If $a\notin A$, then all normal cones are defined to be empty. \end{definition} \begin{center} \psset{xunit=1.2cm, yunit=1.0cm} \begin{pspicture} (-3,-4)(3.2,1.5) \pscustom{ \psline[showpoints=false]{-}(-3,1.5)(0,0)(3,1.5) \gsave \psline(0,1.5) \fill[fillstyle=solid,fillcolor=Lyellow] \ensuremath{\operatorname{gr}}estore} \rput(-0.5,1){\psframebox*[framearc=.4]{$A$}} \rput(0,0.3){\psframebox*[framearc=.4]{$a$}} \pscustom{ \psline[showpoints=false]{<->}(-2,-4)(0,0)(2,-4) \gsave \psline(1,-4) \fill[fillstyle=solid,opacity=0.2,fillcolor=Lgray] \ensuremath{\operatorname{gr}}estore} \rput(0,-2.5){\psframebox*[framearc=.4]{$\pnX{A}(a)$}} \psline[linecolor=gray,showpoints=false]{-}(-0.3,0.15)(-0.45,-0.15)(-0.15,-0.3) \psline[linecolor=gray,showpoints=false]{-}(0.3,0.15)(0.45,-0.15)(0.15,-0.3) \rput(-2,-1){\begin{tabular}{c} The proximal\\ normal cone \end{tabular}} \psarcn{->}(-1,-1.6){1}{90}{0} \end{pspicture} \begin{pspicture} (-3,-4)(4,1.5) \pscustom{ \psline[showpoints=false]{-}(-3,1.5)(0,0)(3,1.5) \gsave \psline(0,1.5) \fill[fillstyle=solid,fillcolor=Lyellow] \ensuremath{\operatorname{gr}}estore} \rput(-0.5,1){\psframebox*[framearc=.4]{$A$}} \rput(0,0.3){\psframebox*[framearc=.4]{$a$}} \pscustom{ \psline[showpoints=false]{<->}(-2,-4)(0,0)(2,-4) \gsave \psline(1,-4) \fill[fillstyle=solid,opacity=0.2,fillcolor=Lgray] \ensuremath{\operatorname{gr}}estore} \psline[linecolor=gray,showpoints=false]{-}(-0.3,0.15)(-0.45,-0.15)(-0.15,-0.3) \psline[linecolor=gray,showpoints=false]{-}(0.3,0.15)(0.45,-0.15)(0.15,-0.3) \pscustom{ \psline[linecolor=red,showpoints=false]{<->}(0.5,-4)(0.02,-0.05)(2,-4) \gsave \psline(1,-4) \fill[fillstyle=solid,opacity=0.2,fillcolor=lightgray,linestyle=none] \ensuremath{\operatorname{gr}}estore} \rput(1.1,-3.5){\psframebox*[framearc=.4]{$\pn{A}{B}(a)$}} \psellipse[fillstyle=solid,fillcolor=greenyellow,opacity=0.2](1.565,-2.5)(1.25,0.3) \rput(2,-2.5){\psframebox*[framearc=.4]{$B$}} \psline[linecolor=red,showpoints=false]{<->}(0.5,-4)(0.02,-0.05)(2,-4) \rput(2.3,-0.5){\begin{tabular}{c} The restricted\\ proximal normal cone \end{tabular}} \psarc{->}(1.1,-1){1}{90}{180} \rput(2.50,-1.7){$P^{-1}_A(a)\cap B$} \psarc{->}(1.5,-2.5){1}{90}{180} \end{pspicture} \end{center} \begin{remark} Some comments regarding Definition~\ref{d:NCone} are in order. \begin{enumerate} \item Clearly, the restricted proximal normal cone generalizes the notion of the classical proximal normal cone. The name ``restricted'' stems from the fact that the pre-image $P_A^{-1}a$ is restricted to the set $B$. \item See \cite[Example~6.16]{Rock98} and \cite[Subsection~2.5.2.D on page~240]{Boris1} for further information regarding the classical proximal normal cone, including the fact that \begin{equation} \label{e:120209a} u\in\pnX{A}(a) \quad\Leftrightarrow\quad a\in A\;\text{and}\; (\ensuremath{\exists\,}\delta>0) (\forall x\in A)\;\; \scal{u}{x-a}\leq\delta\|x-a\|^2. \end{equation} This also implies that: $\pnX{A}(a)+(A-a)^\ominus\subseteq\pnX{A}(a)$. \item Note that $\ensuremath{\operatorname{gr}}\nc{A}{B} = (A\times X)\cap \overline{\ensuremath{\operatorname{gr}}\pn{A}{B}}$. Put differently, $\nc{A}{B}(a)$ is the outer (or upper Kuratowski) limit of $\pn{A}{B}(x)$ as $x\to a$ in $A$, written \begin{equation} \label{e:0224b} \nc{A}{B}(a)= \varlimsup_{x\to a\atop x\in A}\pn{A}{B}(x). \end{equation} See also \cite[Chapter~4]{Rock98}. 
\item See \cite[Definition~1.1]{Boris1} or \cite[Definition~6.3]{Rock98} (where this is called the regular normal cone) for further information regarding $\fnc{A}(a)$. \item The Mordukhovich normal cone is also known as the basic or limiting normal cone. Note that $N_A=\nc{A}{X}$ and $\ensuremath{\operatorname{gr}} N_A = (A\times X)\cap \overline{\ensuremath{\operatorname{gr}} \pn{A}{X}} = (A \times X)\cap \overline{\ensuremath{\operatorname{gr}}\pnX{A}}$ and once again $N_A(a)$ is the outer (or upper Kuratowski) limit of $\pn{A}{X}(x)$ or $\pnX{A}(x)$ as $x\to a$ in $A$. See also \cite[page~141]{Boris1} for historical notes. \end{enumerate} \end{remark} The next result presents useful characterizations of the Mordukhovich normal cone. \begin{proposition}[characterizations of the Mordukhovich normal cone] \label{p:Nequi} Let $A$ be a nonempty closed subset of $X$, let $a\in A$, and let $u\in X$. Then the following are equivalent: \begin{enumerate} \item\label{p:Ne-i} $u\in N_A(a)$. \item\label{p:Ne-ii} There exist sequences $(\lambda_n)_\ensuremath{{n\in{\mathbb N}}}$ in $\ensuremath{\mathbb R}_+$, $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ in $X$, $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A$ such that $a_n\to a$, $\lambda_n(b_n-a_n)\to u$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $a_n\in P_Ab_n$. \item \label{p:Ne-iii} There exist sequences $(\lambda_n)_\ensuremath{{n\in{\mathbb N}}}$ in $\ensuremath{\mathbb R}_+$, $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ in $X$, $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A$ such that $x_n\to a$, $\lambda_n(x_n-a_n)\to u$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $a_n\in P_Ax_n$. (This also implies $a_n\to a$.) \item \label{p:Ne-iv} There exist sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A$ and $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ in $X$ such that $a_n\to a$, $u_n\to u$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $u_n\in\fnc{A}(a_n)$. \end{enumerate} \end{proposition} \begin{proof} ``\ref{p:Ne-i}$\Leftrightarrow$\ref{p:Ne-ii}'': Clear from Definition~\ref{d:NCone}\ref{d:Mnc}. ``\ref{p:Ne-iii}$\Leftrightarrow$\ref{p:Ne-iv}'': Noting that the definition of $N_A(a)$ in \cite{Boris1} is the one given in \ref{p:Ne-iv}, we see that this equivalence follows from \cite[Theorem~1.6]{Boris1}. ``\ref{p:Ne-ii}$\Rightarrow$\ref{p:Ne-iii}'': Let $(\lambda_n)_\ensuremath{{n\in{\mathbb N}}}$, $(a_n)_\ensuremath{{n\in{\mathbb N}}}$, and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ be as in \ref{p:Ne-ii}. For every $\ensuremath{{n\in{\mathbb N}}}$, since $a_n\in P_Ab_n$, \cite[Example~6.16]{Rock98} implies that $a_n\in P_A[a_n,b_n]$. Now let $(\varepsilon_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $\ensuremath{\left]0,1\right[}$ such that $\varepsilon_n a_n\to 0$ and $\varepsilon_n b_n\to 0$. Set \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad x_n=(1-\varepsilon_n)a_n+\varepsilon_nb_n=a_n+\varepsilon_n(b_n-a_n) \in [a_n,b_n]. \end{equation} Then $x_n\to a$ and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $a_n\in P_Ax_n$. Furthermore, $(\lambda_n/\varepsilon_n)_\ensuremath{{n\in{\mathbb N}}}$ lies in $\ensuremath{\mathbb R}_+$ and \begin{equation} (\lambda_n/\varepsilon_n)(x_n-a_n) = \lambda_n(b_n-a_n)\to u. \end{equation} ``\ref{p:Ne-iii}$\Rightarrow$\ref{p:Ne-ii}'': Let $(\lambda_n)_\ensuremath{{n\in{\mathbb N}}}$, $(x_n)_\ensuremath{{n\in{\mathbb N}}}$, and $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ be as in \ref{p:Ne-iii}. Since $x_n\to a$ and $a\in A$, we deduce that $0\leq \|x_n-a_n\|=d_A(x_n)\leq \|x_n-a\|\to 0$. 
Hence $x_n-a_n\to 0$ which implies that $a_n-a=a_n-x_n+x_n-a\to 0+0=0$. Therefore, \ref{p:Ne-ii} holds with $(b_n)_\ensuremath{{n\in{\mathbb N}}}=(x_n)_\ensuremath{{n\in{\mathbb N}}}$. \end{proof} Here are some basic properties of the restricted normal cone and its relation to various classical cones. \begin{lemma}[basic inclusions among the normal cones] \label{l:NsubsetN} Let $A$ and $B$ be nonempty subsets of $X$, and let $a\in A$. Then the following hold: \begin{enumerate} \item \label{l:NsubsetNi} $\cnc{A}(a)\subseteq\pnX{A}(a)$. \item \label{l:NsubsetNi+} $\pn{A}{B}(a) = \ensuremath{\operatorname{cone}}((B-a)\cap(P_A^{-1}a-a))\subseteq (\ensuremath{\operatorname{cone}}(B-a))\cap\pnX{A}(a)$. \item \label{l:NsubsetNii} $\pn{A}{B}(a)\subseteq\pn{A}{X}(a)=\pnX{A}(a)$ and $\nc{A}{B}(a)\subseteq N_A(a)$. \item \label{l:NsubsetNiv} $\pn{A}{B}(a)\subseteq\nc{A}{B}(a)$. \item \label{l:NsubsetNiii} If $A$ is closed, then $\pnX{A}(a) \subseteq \fnc{A}(a)$. \item \label{l:NsubsetNv} If $A$ is closed, then $\fnc{A}(a)\subseteq N_A(a)$. \item \label{l:NsubsetNvi} If $A$ is closed and convex, then $\pn{A}{X}(a)=\pnX{A}(a)=\fnc{A}(a)=\cnc{A}(a)=N_A(a)$. \item \label{l:NsubsetNri} If $a\in\ensuremath{\operatorname{ri}}(A)$, then $\pn{A}{\ensuremath{\operatorname{aff}}(A)}(a)=\nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)=\{0\}$. \item \label{l:NsubsetNvii} $(\ensuremath{\operatorname{aff}}(A)-a)^\bot\subseteq (A-a)^\ominus$. \item \label{l:NsubsetNviii} $(A-a)^\ominus\cap \ensuremath{\operatorname{cone}}(B-a) \subseteq \pn{A}{B}(a)\subseteq \ensuremath{\operatorname{cone}}(B-a)$. \end{enumerate} \end{lemma} \begin{proof} \ref{l:NsubsetNi}: Take $u\in\cnc{A}(a)$ and fix an arbitrary $\delta>0$. Then $(\forall x\in A)$ $\scal{u}{x-a}\leq 0 \leq \delta\|x-a\|^2$. In view of \eqref{e:120209a}, $u\in\pnX{A}(a)$. \ref{l:NsubsetNi+}: In view of Lemma~\ref{l:cone01}, the definitions yield \begin{subequations} \begin{align} \pn{A}{B}(a) &= \ensuremath{\operatorname{cone}}\big((B\cap P_A^{-1}a)-a\big) = \ensuremath{\operatorname{cone}}\big((B-a)\cap (P_A^{-1}a-a)\big)\\ &\subseteq \ensuremath{\operatorname{cone}}\big((B-a)\cap \ensuremath{\operatorname{cone}}(P_A^{-1}a-a)\big) =\ensuremath{\operatorname{cone}}\big((B-a)\cap\pnX{A}(a)\big)\\ &=\ensuremath{\operatorname{cone}}(B-a)\cap \pnX{A}(a). \end{align} \end{subequations} \ref{l:NsubsetNii}, \ref{l:NsubsetNiv} and \ref{l:NsubsetNvii}: This is obvious. \ref{l:NsubsetNiii}: Assume that $A$ is closed and take $u\in\pnX{A}(a)$. By \eqref{e:120209a}, there exists $\rho>0$ such that $(\forall x\in A)$ $\scal{u}{x-a}\leq\rho\|x-a\|^2$. Now let $\varepsilon>0$ and set $\delta = \varepsilon/\rho$. If $x\in A\cap \ball{a}{\delta}$, then $\scal{u}{x-a}\leq\rho\|x-a\|^2 \leq \rho\delta \|x-a\|=\varepsilon\|x-a\|$. Thus, $u\in\fnc{A}(a)$. \ref{l:NsubsetNv}: This follows from Proposition~\ref{p:Nequi}. \ref{l:NsubsetNvi}: Since $A$ is closed, it follows from \ref{l:NsubsetNi}, \ref{l:NsubsetNiii}, and \ref{l:NsubsetNv} that \begin{equation} \cnc{A}(a) \subseteq \pnX{A}(a) \subseteq \fnc{A}(a) \subseteq N_A(a). \end{equation} On the other hand, by \cite[Proposition~1.5]{Boris1}, $N_A(a)\subseteq \cnc{A}(a)$ because $A$ is convex. \ref{l:NsubsetNri}: By assumption, $(\ensuremath{\exists\,}\delta>0)$ $\ball{a}{\delta}\cap \ensuremath{\operatorname{aff}}(A)\subseteq A$. Hence $\ensuremath{\operatorname{aff}}(A)\cap P_A^{-1}a = \{a\}$ and thus $\pn{A}{\ensuremath{\operatorname{aff}}(A)}(a)=\{0\}$. 
Since $a\in\ensuremath{\operatorname{ri}}(A)$, it follows that $(\forall x\in \ball{a}{\delta/2}\cap\ensuremath{\operatorname{aff}}(A))$ $ \pn{A}{\ensuremath{\operatorname{aff}}(A)}(x)=\{0\}$. Therefore, $\nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)=\{0\}$. \ref{l:NsubsetNviii}: Take $u\in ((A-a)^\ominus\cap\ensuremath{\operatorname{cone}}(B-a))\smallsetminus\{0\}$, say $u=\lambda(b-a)$, where $b\in B$ and $\lambda>0$. Then $0\geq \sup\scal{A-a}{u}=\lambda\sup\scal{A-a}{b-a}= \sup\lambda\scal{\ensuremath{\overline{\operatorname{conv}}\,} A-a}{b-a}$. By Fact~\ref{f:convproj}\ref{f:convproj2}, $a=P_{\ensuremath{\overline{\operatorname{conv}}\,} A}b$ and hence $a=P_Ab$. It follows that $u\in\ensuremath{\operatorname{cone}}((B\cap P_A^{-1}a)-a)$. The left inclusion thus holds. The right inclusion is clear. \end{proof} \begin{remark}[on closedness of normal cones] Let $A$ be a nonempty subset of $X$, let $a\in A$, and let $B$ be a subset of $X$. Then $\nc{A}{B}(a)$, $N_A(a)$, and $\cnc{A}(a)$ are obviously closed---this is also true for $\fnc{A}(a)$ but requires some work (see \cite[Proposition~6.5]{Rock98}). On the other hand, the classical proximal normal cone $\pnX{A}(a) = \pn{A}{X}(a)$ is not necessarily closed (see, e.g., \cite[page~213]{Rock98}), and hence neither is $\pn{A}{B}(a)$. For a concrete example, suppose that $X=\ensuremath{\mathbb R}^2$, that $A=\{(0,0)\}$, that $B=\ensuremath{\mathbb R}\times\{1\}$ and that $a=(0,0)$. Then $\pn{A}{B}(a)=\big(\ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_+P\big)\cup\{(0,0)\}$, which is not closed; however, the classical proximal normal cone $\pnX{A}(a)=\ensuremath{\mathbb R}^2$ is closed. \end{remark} The sphere is a nonconvex set for which all classical normal cones coincide: \begin{example}[classical normal cones of the sphere] \label{ex:ncS} Let $z\in X$ and $\rho\in\ensuremath{\mathbb R}_+P$. Set $S := \sphere{z}{\rho}$ and let $s\in S$. Then $\pnX{S}(s) = \pn{S}{X}(s)=\fnc{S}(s)=N_S(s)=\ensuremath{\mathbb R}(s-z)$. \end{example} \begin{proof} By Example~\ref{ex:projS}, we have $P_S^{-1}(s) = z + \ensuremath{\mathbb R}_+(s-z)$ and so $P_S^{-1}(s)-s = \left[-1,\ensuremath{+\infty}\right[\cdot(s-z)$. Hence, using Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNiii}\&\ref{l:NsubsetNv}, we have \begin{subequations} \begin{align} \pnX{S}(s) &= \pn{S}{X}(s)= \ensuremath{\mathbb R}(s-z) \subseteq \fnc{S}(s) \subseteq N_S(s)\\ &=\varlimsup_{s'\to S\atop s'\in S}\pnX{S}(s') =\varlimsup_{s'\to S\atop s'\in S}\ensuremath{\mathbb R}(s'-z) = \ensuremath{\mathbb R}(s-z)\\ &=\pnX{S}(s), \end{align} \end{subequations} as announced. \end{proof} Here are some elementary yet useful calculus rules. \begin{proposition} \label{p:elementary} Let $A$, $A_1$, $A_2$, $B$, $B_1$, and $B_2$ be nonempty subsets of $X$, let $c\in X$, and suppose that $a\in A\cap A_1\cap A_2$. Then the following hold: \begin{enumerate} \item\label{p:ele-i} If $A$ and $B$ are convex, then $\pn{A}{B}(a)$ is convex. \item\label{p:ele-ii} $\pn{A}{B_1\cup B_2}(a)=\pn{A}{B_1}(a)\cup\pn{A}{B_2}(a)$ and $\nc{A}{B_1\cup B_2}(a)=\nc{A}{B_1}(a)\cup\nc{A}{B_2}(a)$. \item\label{p:ele-iii} If $B\subseteq A$, then $\pn{A}{B}(a)=\nc{A}{B}(a)=\{0\}$. \item\label{p:ele-iv} If $A_1\subseteq A_2$, then $\pn{A_2}{B}(a)\subseteq \pn{A_1}{B}(a)$. \item\label{p:ele-v} $-\pn{A}{B}(a)=\pn{-A}{-B}(-a)$, $-\nc{A}{B}(a)=\nc{-A}{-B}(-a)$, and $-N_{A}(a)=N_{-A}(-a)$. \item\label{p:ele-vi} $\pn{A}{B}(a)=\pn{A-c}{B-c}(a-c)$ and $\nc{A}{B}(a)=\nc{A-c}{B-c}(a-c)$. 
\end{enumerate} \end{proposition} \begin{proof} It suffices to establish the conclusions for the restricted proximal normal cones since the restricted normal cone results follows by taking closures (or outer limits). \ref{p:ele-i}: We assume that $B\cap P_A^{-1}a\neq\varnothing$, for otherwise the conclusion is clear. Then $P_A^{-1}(a) = P_{\overline{A}}^{-1}a = (\ensuremath{\operatorname{Id}}+N_{\overline{A}})a$ is convex (as the image of the maximally monotone operator $\ensuremath{\operatorname{Id}}+N_{\overline{A}}$ at $a$). Hence $(B\cap P_A^{-1}a)-a$ is convex as well, and so is its conical hull, which is $\pn{A}{B}(a)$. \ref{p:ele-ii}: Since $((B_1\cup B_2)\cap P_A^{-1}a)-a = ((B_1\cap P_A^{-1}a)-a)\cup((B_2\cap P_A^{-1}a)-a)$, the result follows by taking the conical hull. \ref{p:ele-iii}: Clear, because $(B\cap P_A^{-1}a)- a$ is either empty or equal to $\{0\}$. \ref{p:ele-iv}: Suppose $\lambda(b-a)\in \pn{A_2}{B}(a)$, where $\lambda\geq 0$, $b\in B$, and $a\in P_{A_2}b$. Since $a\in A_1\subseteq A_2$, we have $a\in P_{A_1}b$. Hence $\lambda(b-a)\in \pn{A_1}{B}(a)$. \ref{p:ele-v}: This follows by using elementary manipulations and the fact that $P_{-A} = (-\ensuremath{\operatorname{Id}})\circ P_A\circ (-\ensuremath{\operatorname{Id}})$. \ref{p:ele-vi}: This follows readily from the fact that $P^{-1}_{A-c}(a-c) = P_A^{-1}(a)-c$. \end{proof} \begin{remark} \label{r:120212a} The restricted normal cone counterparts of items \ref{p:ele-i} and \ref{p:ele-iv} are false in general; see Example~\ref{ex:120212a} (and also Example~\ref{ex:ice}\ref{ex:iceiii}) below. \end{remark} The Mordukhovich normal cone (and hence also the Clarke normal cone which contains the Mordukhovich normal cone) strictly contains $\{0\}$ at boundary points (see \cite[Corollary~2.24]{Boris1} or \cite[Exercise~6.19]{Rock98}); however, the restricted normal cone can be $\{0\}$ at boundary points as we illustrate next. \begin{example}[restricted normal cone at boundary points] \label{ex:cf} Suppose that $X=\ensuremath{\mathbb R}^2$, set $A:=\ball{0}{1}=\menge{x\in\ensuremath{\mathbb R}^2}{\|x\|\leq1}$ and $B:=\ensuremath{\mathbb R}\times\{2\}$, and let $a=(a_1,a_2)\in A$. Then \begin{equation} \pn{A}{B}(a)= \begin{cases} \ensuremath{\mathbb R}_+ a,&\text{if $\|a\|=1$ and $a_2>0$;}\\ \{(0,0)\},&\text{otherwise.} \end{cases} \end{equation} Consequently, \begin{equation} \nc{A}{B}(a)= \begin{cases} \ensuremath{\mathbb R}_+ a,&\text {if $\|a\|=1$ and $a_2\geq 0$;}\\ \{(0,0)\},&\text{otherwise.} \end{cases} \end{equation} Thus the restricted normal cone is $\{(0,0)\}$ for all boundary points in the lower half disk that do not ``face'' the set $B$. \end{example} \begin{remark} In contrast to Example~\ref{ex:cf}, we shall see in Corollary~\ref{c:N=0}\ref{c:N=0ii} below that if $A$ is closed, $B$ is the affine hull of $A$, and $a$ belongs to the relative boundary of $A$, then the restricted normal cone $\nc{A}{B}(a)$ strictly contains $\{0\}$. \end{remark} \section{Restricted normal cones and affine subspaces} \label{s:normalandaffine} In this section, we consider the case when the restricting set is a suitable affine subspace. This results in further calculus rules and a characterization of interiority notions. The following four lemmas are useful in the derivation of the main results in this section. \begin{lemma}\label{l:aff-lspan} Let $A$ and $B$ be nonempty subsets of $X$, and suppose that $c\in A\cap B$. Then \begin{equation} \ensuremath{\operatorname{aff}}(A\cup B)-c=\ensuremath{\operatorname{span}}(B-A). 
\end{equation} \end{lemma} \begin{proof} Since $c\in A\cap B \subseteq A\cup B$, it is clear that the $\ensuremath{\operatorname{aff}}(A\cup B)-c$ is a subspace. On the one hand, if $a\in A$ and $b\in B$, then $b-a = 1\cdot b + (-1)\cdot a + 1\cdot c - c \in\ensuremath{\operatorname{aff}}(A\cup B)-c$. Hence $B-A\subseteq\ensuremath{\operatorname{aff}}(A\cup B)-c$ and thus $\ensuremath{\operatorname{span}}(B-A)\subseteq\ensuremath{\operatorname{aff}}(A\cup B)-c$. On the other hand, if $x\in\ensuremath{\operatorname{aff}}(A\cup B)$, say $x=\sum_{i\in I} \lambda_i a_i+\sum_{j\in J} \mu_j b_j$, where each $a_i$ belongs to $A$, each $b_j$ belongs to $B$, and $\sum_{i\in I}\lambda_i +\sum_{j\in J}\mu_j=1$, then $x-c = \sum_{i\in I}(-\lambda_i)(c-a_i) + \sum_{j\in I}\mu_j(b_j-c)\in\ensuremath{\operatorname{span}}(B-A)$. Thus $\ensuremath{\operatorname{aff}}(A\cup B)-c\subseteq \ensuremath{\operatorname{span}}(B-A)$. \end{proof} \begin{lemma}\label{l:P(b+u)} Let $A$ be a nonempty subset of $X$, let $a\in A$, and let $u\in(\ensuremath{\operatorname{aff}}(A)-a)^\bot$. Then \begin{equation} (\forall x\in X)\quad P_A(x+u)=P_A(x). \end{equation} \end{lemma} \begin{proof} Let $x\in X$. For every $b\in A$, we have \begin{subequations} \begin{align} \|u+x-b\|^2&=\|u\|^2+2\scal{u}{x-b}+\|x-b\|^2\\ &=\|u\|^2+2\scal{u}{x-a}+2\scal{u}{a-b}+\|x-b\|^2\\ &=\|u\|^2+2\scal{u}{x-a}+\|x-b\|^2. \end{align} \end{subequations} Hence $P_{A}(x+u)=\ensuremath{\operatorname*{argmin}}_{b\in A}\|u+x-b\|^2=\ensuremath{\operatorname*{argmin}}_{b\in A}\|x-b\|^2 = P_Ax$, as announced. \end{proof} \begin{lemma}\label{l:PAPL} Let $A$ be a nonempty subset of $X$, and let $L$ be an affine subspace of $X$ containing $A$. Then \begin{equation} P_A=P_A\circ P_L. \end{equation} \end{lemma} \begin{proof} Let $a\in A$ and $x\in X$, and set $b=P_Lx$. Using \cite[Corollary~3.20(i)]{BC2011}, we have $x-b\in (L-a)^\bot\subset (\ensuremath{\operatorname{aff}}(A)-a)^\bot$. In view of Lemma~\ref{l:P(b+u)}, we deduce that $(P_A\circ P_L)x=P_A(b)=P_A(b+(x-b))=P_Ax$. \end{proof} \begin{lemma}\label{l:per} Let $A$ be a nonempty subset of $X$, let $a\in A$, and let $L$ be an affine subspace of $X$ containing $A$. Then the following hold: \begin{enumerate} \item \label{l:peri} $\pn{A}{L}(a) \bot (L-a)^\bot$. \item \label{l:perii} $\nc{A}{L}(a) \bot (L-a)^\bot$. \end{enumerate} \end{lemma} \begin{proof} Observe that $L-a=\ensuremath{\operatorname{par}}(A)$ does not depend on the concrete choice of $a\in A$. \ref{l:peri}: Using Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNviii}, we see that $\pn{A}{L}(a)\subseteq\ensuremath{\operatorname{cone}}(L-a)\subseteq\ensuremath{\operatorname{span}}(L-a)\perp (\ensuremath{\operatorname{span}}(L-a))^\perp = (L-a)^\perp = (\ensuremath{\operatorname{par}} A)^\perp$. \ref{l:perii}: By \ref{l:peri}, $\ensuremath{\operatorname{ran}}\pn{A}{L}\subseteq \ensuremath{\operatorname{par}} A$. Since $\ensuremath{\operatorname{ran}}\nc{A}{L}\subseteq\overline{\ensuremath{\operatorname{ran}}\pn{A}{L}}$, it follows that $\ensuremath{\operatorname{ran}}\nc{A}{L}\subseteq \ensuremath{\operatorname{par}} A = L-a$. \end{proof} For a normal cone restricted to certain affine subspaces, it is possible to derive precise relationships to the Mordukhovich normal cone. \begin{theorem}[restricted vs Mordukhovich normal cone] \label{p:pnA(L)} Let $A$ and $B$ be nonempty subsets of $X$, suppose that $a\in A$, and let $L$ be an affine subspace of $X$ containing $A$. 
Then the following hold: \begin{subequations} \label{e:pnAL} \begin{align} \pn{A}{X}(a)&=\pn{A}{L}(a)\oplus(L-a)^\bot=\pn{A}{X}(a)+(L-a)^\bot,\label{e:pnALa}\\ \pn{A}{L}(a)&=\pn{A}{X}(a)\cap (L-a),\label{e:pnALb}\\ N_A(a)&=\nc{A}{L}(a)\oplus(L-a)^\bot=N_A(a)+(L-a)^\bot,\label{e:pnALc}\\ \nc{A}{L}(a)&=N_A(a)\cap (L-a).\label{e:pnALd} \end{align} \end{subequations} Consequently, the following hold as well: \begin{subequations} \begin{align} \pn{A}{X}(a)&=\pn{A}{\ensuremath{\operatorname{aff}}(A)}(a)\oplus(\ensuremath{\operatorname{aff}}(A)-a)^\bot=\pn{A}{X}(a)+(\ensuremath{\operatorname{aff}}(A)-a)^\bot,\label{e:pnALipa}\\ \pn{A}{\ensuremath{\operatorname{aff}}(A)}(a)&=\pn{A}{X}(a)\cap (\ensuremath{\operatorname{aff}}(A)-a),\label{e:pnALipb}\\ N_A(a)&=\nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)\oplus\big(\ensuremath{\operatorname{aff}}(A)-a\big)^\bot=N_A(a)+\big(\ensuremath{\operatorname{aff}}(A)-a\big)^\bot,\label{e:pnALipc}\\ \nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)&=N_A(a)\cap \big(\ensuremath{\operatorname{aff}}(A)-a\big),\label{e:pnALipd}\\ a\in A\cap B\;\;\Rightarrow\;\; \nc{A}{\ensuremath{\operatorname{aff}}(A\cup B)}(a)&=N_A(a)\cap \ensuremath{\operatorname{span}}(A-B)\label{e:pnALipe}. \end{align} \end{subequations} \end{theorem} \begin{proof} \eqref{e:pnALa}: Take $u\in\pn{A}{X}(a)$. Then there exist $\lambda\geq 0$, $x\in X$, and $a\in P_Ax$ such that $\lambda(x-a)=u$. Set $b=P_Lx$. By Lemma~\ref{l:PAPL}, we have $a\in P_A x=(P_A\circ P_L)x= P_A b$. Using \cite[Corollary~3.20(i)]{BC2011}, we thus deduce that $\lambda(b-a)\in \pn{A}{L}(a)$ and $\lambda(x-b)\in (L-b)^\perp=(L-a)^\perp$. Hence $u=\lambda(b-a)+\lambda(x-b)\in \pn{A}{L}(a)+ (L-a)^\perp =\pn{A}{L}(a)\oplus (L-a)^\perp$ by Lemma~\ref{l:per}\ref{l:peri}. We have thus shown that \begin{equation} \label{e:0211a} \pn{A}{X}(a) \subseteq \pn{A}{L}(a)\oplus (L-a)^\perp. \end{equation} On the other hand, Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii} implies that $\pn{A}{L}(a)\subseteq\pn{A}{X}(a)$ and thus \begin{equation} \pn{A}{L}(a)+(L-a)^\perp \subseteq \pn{A}{X}(a)+(L-a)^\perp. \end{equation} Altogether, \begin{equation} \pn{A}{X}(a) \subseteq \pn{A}{L}(a)\oplus (L-a)^\perp \subseteq \pn{A}{X}(a)+(L-a)^\perp. \end{equation} To complete the proof of \eqref{e:pnALa}, it thus suffices to show that $\pn{A}{X}(a)+ (L-a)^\bot\subseteq \pn{A}{X}(a)$. To this end, let $u\in\pn{A}{X}(a)$ and $v\in(L-a)^\bot\subseteq(\ensuremath{\operatorname{aff}}(A)-a)^\bot$. Then there exist $\lambda\geq0$, $b\in X$, and $a\in P_Ab$ such that $u=\lambda(b-a)$. If $\lambda=0$, then $u=0$ and $u+v=v\in(\ensuremath{\operatorname{aff}}(A)-a)^\perp \subseteq (A-a)^\ominus = (A-a)^\ominus \cap X = (A-a)^\ominus \cap \ensuremath{\operatorname{cone}}(X-a) \subseteq \pn{A}{X}(a)$ by Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvii}\&\ref{l:NsubsetNviii}. Thus, we assume that $\lambda>0$. By Lemma~\ref{l:P(b+u)}, we have $a\in P_A b = P_A(b+\lambda^{-1}v)$. Hence $b+\lambda^{-1}v-a\in\pn{A}{X}(a)$ and therefore $\lambda(b+\lambda^{-1}v-a) = \lambda(b-a)+v=u+v\in\pn{A}{X}(a)$, as required. \eqref{e:pnALb}: By Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii}\&\ref{l:NsubsetNviii}, $\pn{A}{L}(a)\subseteq\pn{A}{X}(a)\cap(L-a)$. Now let $u\in \pn{A}{X}(a)\cap(L-a)$. By \eqref{e:pnALa}, we have $u=v+w$, where $v\in\pn{A}{L}(a)\subseteq L-a$ and $w\in(L-a)^\perp$. On the other hand, $w=u-v\in (L-a)-(L-a)=L-a$. Altogether $w\in (L-a)\cap(L-a)^\perp=\{0\}$. Hence $u=v\in\pn{A}{L}(a)$. \eqref{e:pnALc}: Let $u\in N_A(a)$. 
By definition, there exist sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A$ and $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ in $X$ such that $a_n\to a$, $u_n\to u$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $u_n\in\pn{A}{X}(a_n)$. By \eqref{e:pnALa}, there exists a sequence $(v_n,w_n)_\ensuremath{{n\in{\mathbb N}}}$ such that $(a_n,v_n)_\ensuremath{{n\in{\mathbb N}}}$ lies in $\ensuremath{\operatorname{gr}}\pn{A}{L}$, $(w_n)_\ensuremath{{n\in{\mathbb N}}}$ lies in $(L-a)^\perp$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $u_n=v_n+w_n$ and $v_n\perp w_n$. Since $\|u\|^2 \leftarrow \|u_n\|^2 = \|v_n\|^2 + \|w_n\|^2$, the sequences $(v_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(w_n)_\ensuremath{{n\in{\mathbb N}}}$ are bounded. After passing to subsequences and relabeling if necessary, we assume $(v_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(w_n)_\ensuremath{{n\in{\mathbb N}}}$ are convergent, with limits $v$ and $w$, respectively. It follows that $v\in\nc{A}{L}(a)$ and $w\in(L-a)^\perp$; consequently, $u=v+w\in\nc{A}{L}(a) \oplus (L-a)^\perp$ by Lemma~\ref{l:per}\ref{l:perii}. Thus $N_A(a)\subseteq \nc{A}{L}(a) \oplus (L-a)^\perp$. On the other hand, by Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii}, $\nc{A}{L}(a)\oplus(L-a)^\bot\subseteq N_A(a)+(L-a)^\bot$. Altogether, \begin{equation} N_A(a)\subseteq \nc{A}{L}(a)\oplus(L-a)^\bot\subseteq N_A(a)+(L-a)^\bot. \end{equation} It thus suffices to prove that $N_A(a)+ (L-a)^\bot\subseteq N_A(a)$. To this end, take $u\in N_A(a)$ and $v\in (L-a)^\perp$. Then there exist sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A$ and $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ in $X$ such that $a_n\to a$, $u_n\to u$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $u_n\in\pn{A}{X}(a_n)$. For every $\ensuremath{{n\in{\mathbb N}}}$, we have $L-a=L-a_n$ and hence $u_n+v\in \pn{A}{X}(a_n) + (L-a_n)^\perp = \pn{A}{X}(a_n)$ by \eqref{e:pnALa}. Passing to the limit, we conclude that $u+v\in N_A(a)$. \eqref{e:pnALd}: First, take $u\in \nc{A}{L}(a)$. On the one hand, by Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii}, $u\in N_A(a)$. On the other hand, by Lemma~\ref{l:per}\ref{l:perii}, $u\in (L-a)^{\perp\perp} = L-a$. Altogether, we have shown that \begin{equation} \label{e:0211b} \nc{A}{L}(a)\subseteq N_A(a)\cap(L-a). \end{equation} Conversely, take $u\in N_A(a)\cap(L-a)\subseteq N_A(a)$. By \eqref{e:pnALc}, there exist $v\in \nc{A}{L}(a)$ and $w\in (L-a)^\perp$ such that $u=v+w$ and $v\perp w$. By \eqref{e:0211b}, $v\in L-a$. Hence $w = u-v\in (L-a)-(L-a)=L-a$. Since $w\in (L-a)^\perp$, we deduce that $w=0$. This implies $u=v\in \nc{A}{L}(a)$. Therefore, $N_A(a)\cap (L-a)\subseteq \nc{A}{L}(a)$. ``Consequently'' part: Consider \eqref{e:pnAL} when $L=\ensuremath{\operatorname{aff}}(A)$ or $L=\ensuremath{\operatorname{aff}}(A\cup B)$, and recall Lemma~\ref{l:aff-lspan} in the latter case. \end{proof} An immediate consequence of Theorem~\ref{p:pnA(L)} (or of the definitions) is the following result. \begin{corollary}[the $X$-restricted and the Mordukhovich normal cone coincide] \label{c:Boris=ncX} Let $A$ be a nonempty subset of $X$, and let $a\in A$. Then \begin{equation} \nc{A}{X}(a)=N_A(a). \end{equation} \end{corollary} The next two results provide some useful calculus rules. \begin{corollary}[restricted normal cone of a sum]\label{c:cal1} Let $C_1$ and $C_2$ be nonempty closed convex subsets of $X$, let $a_1\in C_1$, let $a_2\in C_2$, and let $L$ be an affine subspace of $X$ containing $C_1+C_2$. 
Then \begin{equation} \nc{C_1+C_2}{L}(a_1+a_2)=\nc{C_1}{L-a_2}(a_1)\cap\nc{C_2}{L-a_1}(a_2). \end{equation} \end{corollary} \begin{proof} Set $C=C_1+C_2$ and $a=a_1+a_2$. Then \eqref{e:pnALd} and \cite[Exercise~6.44]{Rock98} yield \begin{subequations} \label{e:0213d} \begin{align} \nc{C}{L}(a)&=\nc{C}{}(a)\cap(L-a)=\nc{C_1}{}(a_1)\cap\nc{C_2}{}(a_2)\cap(L-a)\\ &=\big(\nc{C_1}{}(a_1)\cap(L-a)\big)\cap\big(\nc{C_2}{}(a_2)\cap(L-a)\big). \end{align} \end{subequations} Note that $L-a$ is a linear subspace of $X$ containing $C_1-a_1$ and $C_2-a_2$. Thus, $L-a_2=L-a+a_1$ is an affine subspace of $X$ containing $C_1$, and $L-a_1=L-a+a_2$ is an affine subspace of $X$ containing $C_2$. By \eqref{e:pnALd}, \begin{equation} \label{e:0213e} \nc{C_1}{L-a_2}(a_1)=\nc{C_1}{}(a_1)\cap(L-a) \quad\text{and}\quad \nc{C_2}{L-a_1}(a_2)=\nc{C_2}{}(a_2)\cap(L-a). \end{equation} The conclusion follows by combining \eqref{e:0213d} and \eqref{e:0213e}. \end{proof} \begin{corollary}[an intersection formula]\label{c:cal2} Let $A$ and $B$ be nonempty closed convex subsets of $X$, and suppose that $a\in A\cap B$. Let $L$ be an affine subspace of $X$ containing $A\cup B$. Then \begin{equation} \nc{A}{L}(a)\cap\big(-\nc{B}{L}(a)\big)=\nc{A-B}{L-a}(0). \end{equation} \end{corollary} \begin{proof} Using \eqref{e:pnALd}, Proposition~\ref{p:elementary}\ref{p:ele-v}, \cite[Exercise~6.44]{Rock98}, and again \eqref{e:pnALd}, we obtain \begin{subequations} \begin{align} \nc{A}{L}(a)\cap\big(-\nc{B}{L}(a)\big)&=N_A(a)\cap(L-a)\cap\big(-N_B(a)\big)\cap(L-a)\\ &=\Big(N_A(a)\cap\big(-N_B(a)\big)\Big)\cap(L-a)\\ &=\big(N_A(a)\cap N_{-B}(-a)\big)\cap(L-a)\\ &=N_{A-B}(0)\cap(L-a)\\ &=\nc{A-B}{L-a}(0), \end{align} \end{subequations} as required. \end{proof} Let us now work towards relating the restricted normal cone to the (relative and classical) interior and to the boundary of a given set. \begin{proposition}\label{p:L=aff(A)} Let $A$ be a nonempty subset of $X$, let $a\in A$, let $L$ be an affine subspace containing $A$, and suppose that $\nc{A}{L}(a)=\{0\}$. Then $L=\ensuremath{\operatorname{aff}}(A)$. \end{proposition} \begin{proof} Using $0\in\nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)\subseteq\nc{A}{L}(a)=\{0\}$ and applying \eqref{e:pnALc} and \eqref{e:pnALipc}, we have \begin{equation} N_A(a)=0+(L-a)^\perp=0+(\ensuremath{\operatorname{aff}}(A)-a)^\perp. \end{equation} So $L-a=\ensuremath{\operatorname{aff}}(A)-a$, i.e., $L=\ensuremath{\operatorname{aff}}(A)$. \end{proof} \begin{theorem} \label{t:ncEquiv} Let $A$ and $B$ be nonempty subsets of $X$, and let $a\in A$. Then \begin{equation} \label{e:0212a} \nc{A}{B}(a)=\{0\} \quad\Leftrightarrow\quad (\ensuremath{\exists\,}\delta>0)\big(\forall x\in A\cap \ball{a}{\delta}\big)\;\; P^{-1}_A(x)\cap B \subseteq\{x\}. \end{equation} Furthermore, if $A$ is closed and $B$ is an affine subspace of $X$ containing $A$, then the following are equivalent: \begin{enumerate} \item \label{t:ncEquivi} $\nc{A}{B}(a)=\{0\}$. \item \label{t:ncEquivii} $(\ensuremath{\exists\,}\rho>0)$ $\ball{a}{\rho}\cap B\subseteq A$. \item \label{t:ncEquiviii} $B=\ensuremath{\operatorname{aff}}(A)$ and $a\in \ensuremath{\operatorname{ri}}(A)$. \end{enumerate} \end{theorem} \begin{proof} Note that $\nc{A}{B}(a)=\{0\}$ $\Leftrightarrow$ $(\ensuremath{\exists\,}\delta>0)$ $(\forall x\in A\cap \ball{a}{\delta})$ $\pn{A}{B}(x)=\{0\}$. Hence \eqref{e:0212a} follows from the definition of $\pn{A}{B}(x)$. Now suppose that $A$ is closed and $B$ is an affine subspace of $X$ containing $A$. 
``\ref{t:ncEquivi}$\Rightarrow$\ref{t:ncEquivii}'': Let $\delta>0$ be as in \eqref{e:0212a} and set $\rho := \delta/2$. Let $b\in B(a;\rho)\cap B$, and take $x\in P_Ab$, which is possible since $A$ is closed. Then $\|b-x\|=d_A(b)\leq\|b-a\|\leq\rho$ and hence \begin{equation} \|x-a\|\leq\|x-b\|+\|b-a\| \leq\rho+\rho=2\rho = \delta. \end{equation} Using \eqref{e:0212a}, we deduce that $b\in P_A^{-1}(x)\cap B\subseteq\{x\}\subseteq A$. ``\ref{t:ncEquivii}$\Rightarrow$\ref{t:ncEquiviii}'': It follows that $B=\ensuremath{\operatorname{aff}}(B)\subseteq\ensuremath{\operatorname{aff}}(A)\subseteq B$; hence, $B=\ensuremath{\operatorname{aff}}(A)$. Thus $\ball{a}{\rho}\cap\ensuremath{\operatorname{aff}}(A)\subseteq A$, which means that $a\in\ensuremath{\operatorname{ri}}(A)$. ``\ref{t:ncEquiviii}$\Rightarrow$\ref{t:ncEquivi}'': Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNri}. \end{proof} \begin{corollary}[interior and boundary characterizations] \label{c:N=0} Let $A$ be a nonempty closed subset of $X$, and let $a\in A$. Then the following hold: \begin{enumerate} \item \label{c:N=0i} $\nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)=\{0\}$ $\Leftrightarrow$ $a\in\ensuremath{\operatorname{ri}}(A)$. \item \label{c:N=0ii} $\nc{A}{\ensuremath{\operatorname{aff}}(A)}(a)\neq\{0\}$ $\Leftrightarrow$ $a\in A\smallsetminus \ensuremath{\operatorname{ri}}(A)$. \item \label{c:N=0iii} $N_A(a)=\{0\}$ $\Leftrightarrow$ $a\in\ensuremath{\operatorname{int}}(A)$. \item \label{c:N=0iv} $N_A(a)\neq\{0\}$ $\Leftrightarrow$ $a\in A\smallsetminus\ensuremath{\operatorname{int}}(A)$. \end{enumerate} \end{corollary} \begin{proof} \ref{c:N=0i}: Apply Theorem~\ref{t:ncEquiv} with $B=\ensuremath{\operatorname{aff}}(A)$. \ref{c:N=0ii}: Clear from \ref{c:N=0i}. \ref{c:N=0iii}: Apply Theorem~\ref{t:ncEquiv} with $B=X$, and recall Corollary~\ref{c:Boris=ncX}. \ref{c:N=0iv}: Clear from \ref{c:N=0iii}. \end{proof} A second look at the proof of \ref{t:ncEquivi}$\Rightarrow$\ref{t:ncEquivii} in Theorem~\ref{t:ncEquiv} reveals that this implication does actually not require the assumption that $B$ be an affine subspace of $X$ containing $A$. The following example illustrates that the converse implication fails even when $B$ is a superset of $\ensuremath{\operatorname{aff}}(A)$. \begin{example} Suppose that $X=\ensuremath{\mathbb R}^2$, and set $A := \ensuremath{\mathbb R}\times\{0\}$, $a=(0,0)$, and $B=\ensuremath{\mathbb R}\times\{0,2\}$. Then $A=\ensuremath{\operatorname{aff}}(A)\subseteq B$ and $\ball{a}{1}\cap B\subseteq A$; however, $(\forall x\in A)$ $\pn{A}{B}(x)=\{0\}\times\ensuremath{\mathbb R}_+$ and therefore $\nc{A}{B}(a)=\{0\}\times\ensuremath{\mathbb R}_+\neq\{(0,0)\}$. \end{example} \subsection*{Two convex sets} It is instructive to interpret the previous results for two convex sets: \begin{theorem}[two convex sets: restricted normal cones and relative interiors] \label{t:compareCQ2} Let $A$ and $B$ be nonempty convex subsets of $X$. Then the following are equivalent: \begin{enumerate} \item\label{t:CQ2i1} $\ensuremath{\operatorname{ri}} A\cap\ensuremath{\operatorname{ri}} B\neq\ensuremath{\varnothing}tyset$. \item\label{t:CQ2i} $0\in\ensuremath{\operatorname{ri}}(B-A)$. \item\label{t:CQ2i2} $\ensuremath{\operatorname{cone}}(B-A)=\ensuremath{\operatorname{span}}(B-A)$. \item\label{t:CQ2ii} $N_A(c)\cap(-N_B(c))\cap\overline{\ensuremath{\operatorname{cone}}}(B-A)=\{0\}$ for some $c\in A\cap B$. \item\label{t:CQ2iii} $N_A(c)\cap(-N_B(c))\cap\overline{\ensuremath{\operatorname{cone}}}(B-A)=\{0\}$ for every $c\in A\cap B$. 
\item\label{t:CQ2iv} $N_A(c)\cap(-N_B(c))\cap\ensuremath{\operatorname{span}}(B-A)=\{0\}$ for some $c\in A\cap B$. \item\label{t:CQ2v} $N_A(c)\cap(-N_B(c))\cap\ensuremath{\operatorname{span}}(B-A)=\{0\}$ for every $c\in A\cap B$. \item\label{t:CQ2vi} $\nc{A}{\ensuremath{\operatorname{aff}}(A\cup B)}(c) \cap (-\nc{B}{\ensuremath{\operatorname{aff}}(A\cup B)}(c)) = \{0\}$ for some $c\in A\cap B$. \item\label{t:CQ2vii} $\nc{A}{\ensuremath{\operatorname{aff}}(A\cup B)}(c) \cap (-\nc{B}{\ensuremath{\operatorname{aff}}(A\cup B)}(c)) =\{0\}$ for every $c\in A\cap B$. \item\label{t:CQ2viii} $\nc{A-B}{\ensuremath{\operatorname{span}}(B-A)}(0)=\{0\}$. \end{enumerate} \end{theorem} \begin{proof} By \cite[Corollary~6.6.2]{Rock70}, \ref{t:CQ2i} $\Leftrightarrow$ $\ensuremath{\operatorname{ri}} A\cap \ensuremath{\operatorname{ri}} B\neq\varnothing$ $\Leftrightarrow$ $0\in \ensuremath{\operatorname{ri}} A - \ensuremath{\operatorname{ri}} B$ $\Leftrightarrow$ \ref{t:CQ2i}. Applying Proposition~\ref{p:onecone} to $B-A$, and \cite[Proposition~3.1.3]{BBL} to $\ensuremath{\overline{\operatorname{cone}}\,}(B-A)$, we obtain \begin{subequations} \label{e:0302a} \begin{align} \text{\ref{t:CQ2i}} &\Leftrightarrow \text{\ref{t:CQ2i2}} \;\Leftrightarrow\; \ensuremath{\overline{\operatorname{cone}}\,}(B-A)=\ensuremath{\operatorname{span}}(B-A)\\ &\Leftrightarrow \ensuremath{\overline{\operatorname{cone}}\,}(B-A)\cap\big(\ensuremath{\overline{\operatorname{cone}}\,}(B-A)\big)^\oplus = \{0\}. \end{align} \end{subequations} Let $c\in A\cap B$. Then Corollary~\ref{c:cal2} (with $L=X$) yields $N_A(c)\cap\big(-N_B(c)\big)=N_{A-B}(0)= (A-B)^\ominus=(B-A)^\oplus=(\overline{\ensuremath{\operatorname{cone}}}(B-A))^\oplus$. Hence \begin{equation} \label{e:0302b} (\forall c\in C)\quad N_A(c)\cap\big(-N_B(c)\big) \cap \ensuremath{\overline{\operatorname{cone}}\,}(B-A) = \big(\ensuremath{\overline{\operatorname{cone}}\,}(B-A)\big)^\oplus \cap \ensuremath{\overline{\operatorname{cone}}\,}(B-A) \end{equation} and \begin{equation} \label{e:0302c} (\forall c\in C)\quad N_A(c)\cap\big(-N_B(c)\big) \cap \ensuremath{\operatorname{span}}(B-A) = \big(\ensuremath{\overline{\operatorname{cone}}\,}(B-A)\big)^\oplus \cap \ensuremath{\operatorname{span}}(B-A). \end{equation} Combining \eqref{e:0302a}, \eqref{e:0302b}, and \eqref{e:0302c}, we see that \ref{t:CQ2i}--\ref{t:CQ2v} are equivalent. Next, Lemma~\ref{l:aff-lspan} and Corollary~\ref{c:cal2} yield the equivalence of \ref{t:CQ2vi}--\ref{t:CQ2viii}. Finally, \ref{t:CQ2viii}$\Leftrightarrow$\ref{t:CQ2i} by Corollary~\ref{c:N=0}\ref{c:N=0i}. \end{proof} \begin{corollary}[two convex sets: normal cones and interiors] \label{c:compareCQ3} Let $A$ and $B$ be nonempty convex subsets of $X$. Then the following are equivalent: \begin{enumerate} \item\label{c:CQ3i} $0\in\ensuremath{\operatorname{int}}(B-A)$. \item\label{c:CQ3i1} $\ensuremath{\operatorname{cone}}(B-A)=X$. \item\label{c:CQ3ii} $N_A(c) \cap (-N_B(c)) = \{0\}$ for some $c\in A\cap B$. \item\label{c:CQ3iii} $N_A(c) \cap (-N_B(c)) = \{0\}$ for every $c\in A\cap B$. \item\label{c:CQ3iv} $N_{A-B}(0)=\{0\}$. \end{enumerate} \end{corollary} \begin{proof} We start by notating that if $C$ is a convex subset of $X$, then $0\in\ensuremath{\operatorname{int}} C$ $\Leftrightarrow$ $0\in\ensuremath{\operatorname{ri}} C$ and $\ensuremath{\operatorname{span}} C = X$. Consequently, \begin{equation} \label{e:0302d} \text{\ref{c:CQ3i}} \quad\Leftrightarrow\quad 0\in\ensuremath{\operatorname{ri}}(B-A)\;\text{and}\;\ensuremath{\operatorname{span}}(B-A)=X. 
\end{equation} Assume that \ref{c:CQ3i} holds. Then \eqref{e:0302d} and Theorem~\ref{t:compareCQ2} imply that $\ensuremath{\operatorname{cone}}(B-A)=\ensuremath{\overline{\operatorname{cone}}\,}(B-A)=\ensuremath{\operatorname{span}}(B-A)=X$. Hence \ref{c:CQ3i1} holds, and from Theorem~\ref{t:compareCQ2} we obtain that \ref{c:CQ3i1}$\Rightarrow$\ref{c:CQ3ii}$\Leftrightarrow$\ref{c:CQ3iii}$\Leftrightarrow$\ref{c:CQ3iv}. Finally, Corollary~\ref{c:N=0}\ref{c:N=0iii} yields the implication \ref{c:CQ3iv}$\Rightarrow$\ref{c:CQ3i}. \end{proof} \section{Further examples} \label{s:further} In this section, we provide further examples that illustrate particularities of restricted normal cones. As announced in Remark~\ref{r:120212a}, when $a\in A_2\subsetneqq A_1$, it is possible that the \ensuremath{\varnothing}h{nonconvex} restricted normal cones satisfy $\nc{A_1}{B}(a)\not\subseteq\nc{A_2}{B}(a)$ even when $A_1$ and $A_2$ are both \ensuremath{\varnothing}h{convex}. This lack of inclusion is also known for the Mordukhovich normal cone (see \cite[page~5]{Boris1}, where however one of the sets is not convex). Furthermore, the following example also shows that the restricted normal cone cannot be derived from the Mordukhovich normal cone by the simple relativization procedure of intersecting with naturally associated cones and subspaces. \begin{example}[lack of convexity, inclusion, and relativization] \label{ex:120212a} Suppose that $X=\ensuremath{\mathbb R}^2$, and define two nonempty closed \ensuremath{\varnothing}h{convex} sets by $A := A_1:=\ensuremath{\operatorname{epi}}(|\cdot|)$ and $A_2:=\ensuremath{\operatorname{epi}}(2|\cdot|)$. Then $a := (0,0) \in A_2\subsetneqq A_1$. Furthermore, set $B:=\ensuremath{\mathbb R}\times\{0\}$. Then \begin{subequations} \begin{align} \big(\forall x=(x_1,x_2)\in A_1\big)\quad \pn{A_1}{B}(x)&= \begin{cases} \ensuremath{\mathbb R}_+(1,-1),& \text{if $x_2=x_1>0$;}\\ \ensuremath{\mathbb R}_+(-1,-1),& \text{if $x_2=-x_1>0$;}\\ \{(0,0)\},&\text{otherwise,} \end{cases}\\ \big(\forall x=(x_1,x_2)\in A_2\big)\quad \pn{A_2}{B}(x)&= \begin{cases} \ensuremath{\mathbb R}_+(2,-1),& \text{if $x_2=2x_1>0$;}\\ \ensuremath{\mathbb R}_+(-2,-1),& \text{if $x_2=-2x_1>0$;}\\ \{(0,0)\},&\text{otherwise.} \end{cases} \end{align} \end{subequations} Consequently, \begin{subequations} \begin{align} \nc{A_1}{B}(a)&=\ensuremath{\operatorname{cone}}\big\{(1,-1),(-1,-1)\big\},\\ \nc{A_2}{B}(a)&=\ensuremath{\operatorname{cone}}\big\{(2,-1),(-2,-1)\big\}. \end{align} \end{subequations} Note that $\nc{A_1}{B}(a)\not\subseteq\nc{A_2}{B}(a)$ and $\nc{A_2}{B}(a)\not\subseteq\nc{A_1}{B}(a)$; in fact, $\nc{A_1}{B}(a)\cap \nc{A_2}{B}(a)=\{(0,0)\}$. Furthermore, neither $\nc{A_1}{B}(a)$ nor $\nc{A_2}{B}(a)$ is convex even though $A_1$, $A_2$, and $B$ are. Finally, observe that $\ensuremath{\operatorname{cone}}(B-a)=\ensuremath{\operatorname{span}}(B-a)=B$, that $\ensuremath{\operatorname{cone}}(B-A)=\ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_-$, that $\ensuremath{\operatorname{span}}(B-A)= X$, and that $N_A(a)=\ensuremath{\operatorname{cone}}[(1,-1),(-1,-1)]\neq \nc{A}{B}(a)$. Consequently, $\ensuremath{\operatorname{cone}}(B-a)\cap N_A(a)=\ensuremath{\operatorname{span}}(B-a)\cap N_A(a)=\{(0,0)\}$, $\ensuremath{\operatorname{cone}}(B-A)\cap N_A(a)= N_A(a)=\ensuremath{\operatorname{span}}(B-A)\cap N_A(a)$. 
Therefore, $\nc{A}{B}(a)$ \ensuremath{\varnothing}h{cannot be obtained by intersecting the Mordukhovich normal cone} with one of the sets $\ensuremath{\operatorname{cone}}(B-a)$, $\ensuremath{\operatorname{span}}(B-a)$, $\ensuremath{\operatorname{cone}}(B-A)$, and $\ensuremath{\operatorname{span}}(B-A)$. \end{example} We shall present some further examples. The proof of the following result is straight-forward and hence omitted. \begin{proposition} \label{p:ncK} Let $K$ be a closed cone in $X$, and let $B$ be a nonempty cone of $X$. Then \begin{equation} \nc{K}{B}(0)=\overline{\bigcup_{x\in K}\pn{K}{B}(x)}= \overline{\bigcup_{x\in \ensuremath{\operatorname{bdry}} K}\pn{K}{B}(x)}= \overline{\bigcup_{x\in K}\nc{K}{B}(x)}= \overline{\bigcup_{x\in \ensuremath{\operatorname{bdry}} K}\nc{K}{B}(x)}. \end{equation} \end{proposition} \begin{example} \label{ex:ncK} Let $K$ be a closed convex cone in $X$, suppose that $u_0\in\ensuremath{\operatorname{int}}(K)$ and that $K\subseteq\{u_0\}^\oplus$, and set $B := \{u_0\}^\perp$. Then: \begin{enumerate} \item \label{ex:ncKi} $(\forall x\in K\cap B)$ $\pn{K}{B}(x)=\{0\}$. \item \label{ex:ncKii} $(\forall x\in K\smallsetminus B)$ $\pn{K}{B}(x)=\nc{K}{B}(x)=N_K(x)=K^\ominus\cap\{x\}^\perp$. \item \label{ex:ncKiii} $\nc{K}{B}(0)=\overline{\bigcup_{x\in K}\pn{K}{B}(x)}= \overline{\bigcup_{x\in K\smallsetminus B}(K^\ominus\cap \{x\}^\perp)}= \overline{K^\ominus\cap\bigcup_{x\in K\smallsetminus B}\{x\}^\perp}$.\\[+2mm] If one of these unions is closed, then all closures may be omitted. \end{enumerate} \end{example} \begin{proof} \ref{ex:ncKi}: Let $x\in K\cap B$. It suffices to show that $B\cap P_K^{-1}(x)=\{x\}$. To this end, take $y\in B\cap P_K^{-1}(x)$. By definition of $B$, we have $\scal{u_{0}}{x}=0$ and $\scal{u_{0}}{y}=0$. Hence \begin{equation}\label{e:ontheplane} \scal{u_{0}}{y-x}=0. \end{equation} Furthermore, $x=P_Ky$ and hence, using e.g.\ \cite[Proposition~6.27]{BC2011}, we have $y-x\in K^\ominus$. Since $u_{0}\in\ensuremath{\operatorname{int}} K$, there exists $\delta>0$ such that $\ball{u_{0}}{\delta}\subseteq K$. Thus $y-x\in (\ball{u_{0}}{\delta})^\ominus$. In view of \eqref{e:ontheplane}, $\delta\|y-x\|\leq 0$. Therefore, $y=x$. \ref{ex:ncKii}: Let $x\in K\smallsetminus B$. Using Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii}\&\ref{l:NsubsetNiv}, Corollary~\ref{c:Boris=ncX}, Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvi}, and \cite[Example~6.39]{BC2011}, we have \begin{equation} \label{e:0212b} \pn{K}{B}(x)\subseteq \pn{K}{X}(x) \subseteq \nc{K}{X}(x) = N_K(x) = \cnc{K}(x) = K^\ominus\cap \{x\}^\perp. \end{equation} Since $x\in K\subseteq\{u_0\}^\oplus$ and $x\notin B$, we have $\scal{u_0}{x}>0$. Now take $u\in (K^\ominus\cap\{x\}^\perp)\smallsetminus\{0\}$. Since $u\in K^\ominus$ and $u_0\in\ensuremath{\operatorname{int}}(K)$, we have $\scal{u}{u_0}<0$. Now set \begin{equation} b:=x-\frac{\scal{u_0}{x}}{\scal{u_0}{u}}u. \end{equation} Then $b\in B$ and $b-x = -\scal{u_0}{x}\scal{u_0}{u}^{-1}u\in\ensuremath{\mathbb R}_+P u \subseteq K^\ominus\cap\{x\}^\perp = \cnc{K}(x)$. By \cite[Proposition~6.46]{BC2011}, $x=P_Kb$. Hence $b-x\in\pn{K}{B}(x)$ and thus $u\in\pn{K}{B}(x)$. Therefore, $K^\ominus\cap\{x\}^\perp \subseteq \pn{K}{B}(x)$. In view of \eqref{e:0212b}, and since $\pn{K}{B}(x)\subseteq \nc{K}{B}(x)\subseteq N_K(x)$ by Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii}\&\ref{l:NsubsetNiv}, we have established \ref{ex:ncKii}. \ref{ex:ncKiii}: Combine \ref{ex:ncKi}, \ref{ex:ncKii}, and Proposition~\ref{p:ncK}. 
\end{proof} \begin{example}[ice cream cone] \label{ex:ice} Suppose that $X=\ensuremath{\mathbb R}^m=\ensuremath{\mathbb R}^{m-1}\times\ensuremath{\mathbb R}$, where $m\in\{2,3,4,\ldots\}$, and let $\beta>0$. Define the corresponding closed convex \ensuremath{\varnothing}h{ice cream cone} by \begin{equation} K:=\Menge{x\in\ensuremath{\mathbb R}^m}{ \beta\sqrt{x_1^2+\cdots+x_{m-1}^2}\leq x_m}, \end{equation} and set $B:=\ensuremath{\mathbb R}^{m-1}\times\{0\}$. Then the following hold: \begin{enumerate} \item \label{ex:icei} $\pn{K}{B}(0,0)=\{(0,0)\}$. \item \label{ex:icei+} $N_K(0,0)=\menge{y\in\ensuremath{\mathbb R}^m}{\beta^{-1}\sqrt{y_1^2+\cdots+y_{m-1}^2}\leq -y_m} = \bigcup_{z\in\ensuremath{\mathbb R}^{m-1}\atop \|z\|\leq 1} \ensuremath{\mathbb R}_+(\beta z,-1)$. \item \label{ex:iceii} $(\forall z\in\ensuremath{\mathbb R}^{m-1}\smallsetminus\{0\})$ $\pn{K}{B}(z,\beta\|z\|)=\nc{K}{B}(z,\beta\|z\|) =N_K(z,\beta\|z\|)=\ensuremath{\mathbb R}_+(\beta{z},-\|z\|)$. \item \label{ex:iceiii} $\nc{K}{B}(0,0)=\bigcup_{z\in\ensuremath{\mathbb R}^{m-1}\atop \|z\|=1} \ensuremath{\mathbb R}_+(\beta z,-1)$, which is a closed cone that is \ensuremath{\varnothing}h{not convex}. \end{enumerate} \end{example} \begin{proof} Clearly, $K$ is closed and convex. Note that $K$ is the lower level set of height $0$ of the continuous convex function \begin{equation} f\colon \ensuremath{\mathbb R}^m=\ensuremath{\mathbb R}^{m-1}\times\ensuremath{\mathbb R}\to \ensuremath{\mathbb R}\colon x=(z,x_m)\mapsto \beta\|z\|-x_m; \end{equation} hence , by \cite[Exercise~2.5(b) and its solution on page~205]{Zalinescu}, \begin{equation} \label{e:0213a} \ensuremath{\operatorname{int}}(K)=\menge{x=(z,x_m)\in\ensuremath{\mathbb R}^{m-1}\times\ensuremath{\mathbb R}}{\beta\|z\|<x_m}. \end{equation} Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNii}\&\ref{l:NsubsetNiv}, Corollary~\ref{c:Boris=ncX}, and Corollary~\ref{c:N=0}\ref{c:N=0iii} imply that \begin{equation} \label{e:0213b} \big(\forall x\in\ensuremath{\operatorname{int}}(K)\big)\quad \pn{K}{B}(x) \subseteq \pn{K}{X}(x) \subseteq \nc{K}{X}(x) = N_K(x) = \{0\}. \end{equation} Write $x=(z,x_m)\in \ensuremath{\mathbb R}^{m-1}\times\ensuremath{\mathbb R}=X$, and assume that $x\in K$. We thus assume that $x\in\ensuremath{\operatorname{bdry}}(K)$, i.e., $\beta\|z\|=x_m$ by \eqref{e:0213a}, i.e., $x=(z,\beta\|z\|)$. Combining \cite[Proposition~16.8]{BC2011} with \cite[Corollary~2.9.5]{Zalinescu} (or \cite[Lemma~26.17]{BC2011}) applied to $f$, we obtain \begin{equation} N_K\big(z,\beta\|z\|\big) = \ensuremath{\operatorname{cone}}\big( \beta\ensuremath{\operatorname{par}}rtial\|\cdot\|(z)\times\{-1\}\big), \end{equation} where $\ensuremath{\operatorname{par}}rtial\|\cdot\|$ denotes the subdifferential operator from convex analysis applied to the Euclidean norm in $\ensuremath{\mathbb R}^{m-1}$. In view of \cite[Example~16.25]{BC2011} we thus have \begin{equation} \label{e:0213c} N_K\big(z,\beta\|z\|\big) = \begin{cases} \ensuremath{\operatorname{cone}}\big(\beta\|z\|^{-1}z\times\{-1\}\big), &\text{if $z\neq 0$;}\\ \ensuremath{\operatorname{cone}}\big(\ball{0}{\beta}\times\{-1\}\big), &\text{if $z=0$.}\\ \end{cases} \end{equation} The case $z=0$ in \eqref{e:0213c} readily leads to \ref{ex:icei+}. Now set $u_0 := (0,1)\in\ensuremath{\mathbb R}^{m-1}\times\ensuremath{\mathbb R}$. Then $\{u_0\}^\perp = B$ and $\{u_0\}^\oplus = \ensuremath{\mathbb R}^{m-1}\times\ensuremath{\mathbb R}_+ \supseteq K$. Note that $(0,0)\in K\cap B$ and thus $\pn{K}{B}(0,0)=\{(0,0)\}$ by Example~\ref{ex:ncK}\ref{ex:ncKi}. 
We have thus established \ref{ex:icei}. Now assume that $z\neq 0$. Then $N_K(z,\beta\|z\|) = \ensuremath{\mathbb R}_+(\beta z,-\|z\|)$. Note that $\beta z\neq 0$ and so $(z,\beta\|z\|)\notin B$. The formulas announced in \ref{ex:iceii} therefore follow from Example~\ref{ex:ncK}\ref{ex:ncKii}. Next, combining \eqref{e:0213a}, \eqref{e:0213b}, and Example~\ref{ex:ncK}\ref{ex:ncKiii} as well as utilizing the compactness of the unit sphere in $\ensuremath{\mathbb R}^{m-1}$, we see that \begin{equation} \nc{K}{B}(0,0) =\overline{\bigcup_{z\in\ensuremath{\mathbb R}^{m-1}\smallsetminus\{0\}} \ensuremath{\mathbb R}_+(\beta z,-\|z\|)} =\overline{\bigcup_{z\in\ensuremath{\mathbb R}^{m-1}\atop \|z\|=1} \ensuremath{\mathbb R}_+(\beta z,-1)} ={\bigcup_{z\in\ensuremath{\mathbb R}^{m-1}\atop \|z\|=1} \ensuremath{\mathbb R}_+(\beta z,-1)}. \end{equation} This establishes \ref{ex:iceiii}. \end{proof} \begin{remark} Consider Example~\ref{ex:ice}. Note that $\nc{K}{B}(0,0)$ is actually the boundary of $N_K(0,0)$. Furthermore, since $N_K(0,0)=\cnc{K}(0,0)$ by Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvi}, the formulas in \ref{ex:icei+} also describe $K^\ominus$, which is therefore an ice cream cone as well. \end{remark} \section{Cones containing restricted normal cones} \label{s:nosuper} In this section, we provide various examples illustrating that the restricted (proximal) normal cone does not naturally arise by considering various natural cones containing it. Let $A$ and $B$ be nonempty subsets of $X$, and let $a\in A$. We saw in Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNi+} that \begin{equation}\label{e:pn=cone(B-a)} \pn{A}{B}(a)=\ensuremath{\operatorname{cone}}\big((B-a)\cap(P^{-1}_Aa-a)\big)\subseteq \ensuremath{\operatorname{cone}}(B-a)\cap\pnX{A}(a). \end{equation} This raises the question whether or not the inclusion in \eqref{e:pn=cone(B-a)} is strict. It turns out and as we shall now illustrate, both conceivable alternatives (equality and strict inclusion) do occur. Therefore, $\pn{A}{B}(a)$ is a new construction. We start with a condition sufficient for equality in \eqref{e:pn=cone(B-a)}, \begin{proposition}\label{p:pn=cone(B-a)} Let $A$ and $B$ be nonempty subsets of $X$. Let $A$ be closed and $a\in A$. Assume that one of the following holds: \begin{enumerate} \item \label{p:pn=cone(B-a)i} $P^{-1}_A(a)-a$ is a cone. \item \label{p:pn=cone(B-a)ii} $A$ is convex. \end{enumerate} Then $\pn{A}{B}(a)= \ensuremath{\operatorname{cone}}(B-a)\cap\pnX{A}(a)$. \end{proposition} \begin{proof} \ref{p:pn=cone(B-a)i}: Lemma~\ref{l:cone01}\ref{l:cone01ii}. \ref{p:pn=cone(B-a)ii}: Combine \ref{p:pn=cone(B-a)i} with Lemma~\ref{l:PA&cvex}. \end{proof} The next examples illustrates that equality in \eqref{e:pn=cone(B-a)} can occur even though $P^{-1}_A(a)-a$ is not a cone. Consequently, the assumption that $P^{-1}_A(a)-a$ be a cone in Proposition~\ref{p:pn=cone(B-a)} is sufficient---but not necessary---for equality in \eqref{e:pn=cone(B-a)}. \begin{example} Suppose that $X = \ensuremath{\mathbb R}^2$, and let $A:= X\smallsetminus\ensuremath{\mathbb R}_+P^2$, $B:=\ensuremath{\mathbb R}_+(1,1)$, and $a:=(0,1)$. Then one verifies that \begin{subequations} \begin{align} P^{-1}_A(a) - a&=[0,1]\times\{0\},\\ \pnX{A}(a)&=\ensuremath{\operatorname{cone}}(P^{-1}_A a - a)=\ensuremath{\mathbb R}_+\times\{0\},\\ \ensuremath{\operatorname{cone}}(B-a)&=\menge{(t_1,t_2)\in\ensuremath{\mathbb R}^2}{t_1\geq0,t_2<t_1}\cup\{(0,0)\},\\ \pn{A}{B}(a)&=\ensuremath{\mathbb R}_+\times\{0\}. 
\end{align} \end{subequations} Hence $\pn{A}{B}(a)=\ensuremath{\mathbb R}_+\times\{0\} = \ensuremath{\operatorname{cone}}(B-a)\cap\pnX{A}(a)$. \end{example} We now provide an example where the inclusion in \eqref{e:pn=cone(B-a)} is strict. \begin{example} \label{ex:pn-cone(B-a)} Suppose that $X=\ensuremath{\mathbb R}^2$, let $A := \ensuremath{\operatorname{cone}}\{(1,0),(0,1)\} =\ensuremath{\operatorname{bdry}}\ensuremath{\mathbb R}_+^2$, $B := \ensuremath{\mathbb R}_+(2,1)$, and $a:=(0,1)\in A$. Then one verifies that \begin{subequations} \begin{align} P^{-1}_A(a) - a&=\left]\ensuremath{-\infty},1\right]\times\{0\},\\ \pnX{A}(a)&=\ensuremath{\operatorname{cone}}(P^{-1}_A a - a)=\ensuremath{\mathbb R}\times\{0\},\\ \ensuremath{\operatorname{cone}}(B-a)&=\menge{(x_1,x_2)\in\ensuremath{\mathbb R}^2}{x_1\geq0,2x_2<{x_1}}\cup\{(0,0)\},\\ \pn{A}{B}(a)&=\{(0,0)\}. \end{align} \end{subequations} Hence $\pn{A}{B}(a)=\{(0,0)\} \subsetneqq \ensuremath{\mathbb R}_+\times\{0\} = \ensuremath{\operatorname{cone}}(B-a)\cap\pnX{A}(a)$, and therefore the inclusion in \eqref{e:pn=cone(B-a)} is strict. In accordance with Proposition~\ref{p:pn=cone(B-a)}, neither is $P_A^{-1}(a)-a$ a cone nor is $A$ convex. \end{example} Let us now turn to the restricted normal cone $\nc{A}{B}(a)$. Taking the outer limit in \eqref{e:pn=cone(B-a)} and recalling \eqref{e:0224b}, we obtain \begin{subequations} \label{e:nc=limsup} \begin{align} \nc{A}{B}(a)&=\varlimsup_{x\to a\atop x\in A}\pn{A}{B}(x)\\ &\subseteq\varlimsup_{x\to a\atop x\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)\label{e:nc=limsup-b}\\ &\subseteq \big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\big)\cap N_A(a)\label{e:nc=limsup-c}. \end{align} \end{subequations} The inclusions in \eqref{e:nc=limsup} are optimal in the sense that all possible combinations (strict inclusion and equality) can occur: \begin{itemize} \item For results and examples illustrating equality in \eqref{e:nc=limsup-b} and equality in \eqref{e:nc=limsup-c}, see Proposition~\ref{p:nc=limsup} and Example~\ref{ex:0225b} below. \item For an example illustrating equality in \eqref{e:nc=limsup-b} and strict inequality in \eqref{e:nc=limsup-c}, see Example~\ref{ex:0225c} below. \item For an example illustrating strict inequality in \eqref{e:nc=limsup-b} and equality in \eqref{e:nc=limsup-c}, see Example~\ref{ex:0225e} below. \item For examples illustrating strict inequality in \eqref{e:nc=limsup-b} and strict inequality in \eqref{e:nc=limsup-c}, see Example~\ref{ex:0225a} and Example~\ref{ex:0225d} below. \end{itemize} The remainder of this section is devoted to providing these examples. \begin{proposition} \label{p:nc=limsup1} Let $A$ and $B$ be nonempty subsets of $X$. Let $A$ be closed $a\in A$. Assume that one of the following holds: \begin{enumerate} \item \label{p:nc=limsup1i} $P_A^{-1}(x)-x$ is a cone for every $x\in A$ sufficiently close to $a$. \item \label{p:nc=limsup1ii} $A$ is convex. \end{enumerate} Then \eqref{e:nc=limsup-b} holds with equality, i.e., $\nc{A}{B}(a) = \varlimsup_{x\to a\atop x\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)$ \end{proposition} \begin{proof} Indeed, if $x\in A$ is sufficiently close to $a$, then Proposition~\ref{p:pn=cone(B-a)} implies that $\pn{A}{B}(x)=\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)$. Now take the outer limit as $x\to a$ in $A$. 
\end{proof} \begin{proposition} \label{p:nc=limsup} Let $A$ be a nonempty closed convex subset of $X$, let $B$ be a nonempty subset of $X$, and let $a\in A$. Assume that $x\mapsto \ensuremath{\operatorname{cone}}(B-x)$ is outer semicontinuous at $a$ relative to $A$, i.e., \begin{equation} \label{e:nc=limsupii} \varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)=\ensuremath{\operatorname{cone}}(B-a), \end{equation} Then \eqref{e:nc=limsup} holds with equalities, i.e., \begin{equation} \label{e:0224c} \nc{A}{B}(a)= \varlimsup_{x\to a\atop x\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big) =\big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\big)\cap N_A(a). \end{equation} \end{proposition} \begin{proof} The convexity of $A$ and Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvi} yield \begin{equation} \ensuremath{\operatorname{cone}}(B-a)\cap N_A(a)=\ensuremath{\operatorname{cone}}(B-a)\cap\pnX{A}(a). \end{equation} On the other hand, Proposition~\ref{p:pn=cone(B-a)}\ref{p:pn=cone(B-a)ii} and Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNiv} imply \begin{equation} \ensuremath{\operatorname{cone}}(B-a)\cap\pnX{A}(a)=\pn{A}{B}(a) \subseteq\nc{A}{B}(a). \end{equation} Altogether, $\ensuremath{\operatorname{cone}}(B-a)\cap N_A(a)\subseteq \nc{A}{B}(a)$. In view of \eqref{e:nc=limsupii}, \begin{equation} \big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\big) \cap N_A(a)\subseteq \nc{A}{B}(a). \end{equation} Recalling \eqref{e:nc=limsup}, we therefore obtain \eqref{e:0224c}. \end{proof} \begin{example} \label{ex:0225b} Let $A$ be a linear subspace of $X$, set $B := A$, and $a:=(0,0)$. Then $\nc{A}{B}(a) = \{0\}$ by \eqref{e:pnALd}, $N_A(a)=A^\perp$, and $\ensuremath{\operatorname{cone}}(B-x)=A$, for every $x\in A$. Hence $(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x))\cap N_A(a)=\{0\}$ and \eqref{e:nc=limsup} holds with equalities. \end{example} In Proposition~\ref{p:nc=limsup}, the convexity and the outer semicontinuity assumptions are both \ensuremath{\varnothing}h{essential} in the sense that absence of either assumption may make the inclusion \eqref{e:nc=limsup-c} strict; we shall illustrate this in the next three examples. \begin{example} \label{ex:0225c} Suppose that $X=\ensuremath{\mathbb R}^2$, and let $A:=\ensuremath{\operatorname{epi}}(|\cdot|)$, $B := \ensuremath{\mathbb R}\times\{0\}$, and $a:=(0,0)$. If $x=(x_1,x_2)\in A\smallsetminus\{a\}$, then $x_2>0$, $B-x=\ensuremath{\mathbb R}\times\{-x_2\}$, and so $\ensuremath{\operatorname{cone}}(B-x) =\ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_-M\cup\{(0,0)\}$. Hence \begin{equation} \varlimsup_{x\to a\atop x\in A} \ensuremath{\operatorname{cone}}(B-x)=\ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_-\neq\ensuremath{\mathbb R}\times\{0\}=\ensuremath{\operatorname{cone}}(B-a), \end{equation} i.e., \eqref{e:nc=limsupii} fails. Since $A$ is closed and convex, Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvi} implies that $N_A(a)=\cnc{A}(a)=-A$. Thus \begin{equation} \big(\varlimsup_{x\to a\atop x\in A} \ensuremath{\operatorname{cone}}(B-x)\big) \cap N_A(a) = -A. \end{equation} Proposition~\ref{p:nc=limsup1}\ref{p:nc=limsup1ii} yields equality in \eqref{e:nc=limsup-b}, i.e., \begin{equation} \nc{A}{B}(a)=\varlimsup_{x\to a\atop x\in A} \big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big). \end{equation} Already in Example~\ref{ex:120212a} did we observe that \begin{equation} \nc{A}{B}(a)=\ensuremath{\operatorname{cone}}\{(1,-1),(-1,-1)\}. 
\end{equation} Therefore we have \begin{equation} \nc{A}{B}(a)=\varlimsup_{x\to a\atop x\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)\subsetneqq \big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\big)\cap N_A(a), \end{equation} i.e., the inclusion \eqref{e:nc=limsup-c} is strict. \end{example} \begin{example} \label{ex:0225a} Suppose that $X=\ensuremath{\mathbb R}^2$, and let $A:=\ensuremath{\operatorname{cone}}\{(1,0),(0,1)\} = \ensuremath{\operatorname{bdry}}\ensuremath{\mathbb R}_+^2$, $B:=\ensuremath{\mathbb R}\times\{1\}\cup\{(1,0),(-1,0)\}$, and $a:=(0,0)$. Clearly, $A$ is not convex. If $x=(x_1,x_2)\in A$ is sufficiently close to $a$, we have \begin{equation} \label{e:0224d} \ensuremath{\operatorname{cone}}(B-x)= \begin{cases} \ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_+, & \text{if $x_1\geq 0$;}\\ \ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_+P\cup\ensuremath{\operatorname{cone}}\{(1,-x_2),(-1,-x_2)\}, &\text{if $x_2>0$.} \end{cases} \end{equation} This yields \begin{equation} \label{e:0224f} \varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)= \ensuremath{\mathbb R}\times\ensuremath{\mathbb R}_+ = \ensuremath{\operatorname{cone}}(B-a), \end{equation} i.e., \eqref{e:nc=limsupii} holds. Next, if $x=(x_1,x_2)\in A$, then \begin{equation} P^{-1}_A(x)= \begin{cases} \{x_1\}\times\left]\ensuremath{-\infty},x_1\right], &\text{if $x_1>0$ and $x_2=0$;}\\ \left]\ensuremath{-\infty},x_2\right]\times\{x_2\},&\text{if $x_1=0$ and $x_2>0$;}\\ \ensuremath{\mathbb R}_-^2, &\text{if $x_1=x_2=0$,} \end{cases} \end{equation} and so \begin{equation} \label{e:0224e} \pnX{A}(x)=\ensuremath{\operatorname{cone}}\big(P^{-1}_A(x)-x\big)= \begin{cases} \{0\}\times\ensuremath{\mathbb R},&\text{if $x_1>0$ and $x_2=0$;}\\ \ensuremath{\mathbb R}\times\{0\}, &\text{if $x_1=0$ and $x_2>0$;}\\ \ensuremath{\mathbb R}_-^2, &\text{if $x_1=x_2=0$.} \end{cases} \end{equation} It follows that \begin{equation} \label{e:0224i} N_A(a)=\varlimsup_{x\to a\atop x\in A}\pnX{A}(x)= \ensuremath{\mathbb R}_-^2\cup \big(\{0\}\times\ensuremath{\mathbb R}\big) \cup \big(\ensuremath{\mathbb R}\times\{0\}\big). \end{equation} If $x\in A$ is sufficiently close $a$, then \begin{equation} \pn{A}{B}(x)= \begin{cases} \{(0,0)\},& \text{if $x\neq a$;}\\ \ensuremath{\mathbb R}_-\times\{0\}, &\text{if $x=a$.} \end{cases} \end{equation} It follows that \begin{equation} \label{e:0224g} \nc{A}{B}(a)=\ensuremath{\mathbb R}_-\times\{0\}. \end{equation} Combining \eqref{e:0224d} and \eqref{e:0224e}, we obtain for every $x=(x_1,x_2)\in A$ sufficiently close to $a$ that \begin{equation} \ensuremath{\operatorname{cone}}(B-x)\cap \pnX{A}(x)= \begin{cases} \{0\}\times\ensuremath{\mathbb R}_+,&\text{if $x_1>0$ and $x_2=0$;}\\ \{(0,0)\}, &\text{if $x_1=0$ and $x_2>0$;}\\ \ensuremath{\mathbb R}_-\times\{0\}, &\text{if $x_1=x_2=0$.} \end{cases} \end{equation} Thus \begin{equation} \label{e:0224h} \varlimsup_{x\to a\atop x\in A} \big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)= \big(\{0\}\times\ensuremath{\mathbb R}_+\big) \cup \big(\ensuremath{\mathbb R}_-\times\{0\}\big). 
\end{equation} Using \eqref{e:0224g}, \eqref{e:0224h}, \eqref{e:0224f}, and \eqref{e:0224i}, we conclude that \begin{subequations} \begin{align} \nc{A}{B}(a)&= \ensuremath{\mathbb R}_-\times\{0\}\\ &\subsetneqq \big(\{0\}\times\ensuremath{\mathbb R}_+\big)\cup\big(\ensuremath{\mathbb R}_-\times\{0\}\big) = \varlimsup_{a'\to a\atop a'\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)\\ &\subsetneqq \big(\{0\}\times\ensuremath{\mathbb R}_+\big)\cup\big(\ensuremath{\mathbb R}\times\{0\}\big) = \Big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\Big)\cap N_A(a). \end{align} \end{subequations} Therefore, both inclusions in \eqref{e:nc=limsup} are strict; however, $A$ is not convex while \eqref{e:nc=limsupii} does hold. \end{example} \begin{example} \label{ex:0225d} Suppose that $X=\ensuremath{\mathbb R}^2$, let $A:=\ensuremath{\operatorname{cone}}\{(1,0),(0,1)\}=\ensuremath{\operatorname{bdry}}\ensuremath{\mathbb R}_+^2$, $B:=\ensuremath{\mathbb R}_+(2,1)$ and $a:=(0,0)$. Let $x=(x_1,x_2)\in A$. Then (see Example~\ref{ex:0225a}) \begin{equation} P^{-1}_A(x)-x= \begin{cases} \{0\}\times\left]\ensuremath{-\infty},x_1\right], &\text{if $x_1>0$ and $x_2=0$;}\\ \left]\ensuremath{-\infty},x_2\right]\times\{0\},&\text{if $x_1=0$ and $x_2>0$;}\\ \ensuremath{\mathbb R}_-^2, &\text{if $x_1=x_2=0$,} \end{cases} \end{equation} \begin{equation} \label{e:0225a} \pnX{A}(x)= \begin{cases} \{0\}\times\ensuremath{\mathbb R},&\text{if $x_1>0$ and $x_2=0$;}\\ \ensuremath{\mathbb R}\times\{0\}, &\text{if $x_1=0$ and $x_2>0$;}\\ \ensuremath{\mathbb R}_-^2, &\text{if $x_1=x_2=0$,} \end{cases} \end{equation} and \begin{equation} \label{e:0225g} N_A(a)=\varlimsup_{x\to a\atop x\in A}\pnX{A}(x)= \ensuremath{\mathbb R}_-^2\cup \big(\{0\}\times\ensuremath{\mathbb R}\big) \cup \big(\ensuremath{\mathbb R}\times\{0\}\big). \end{equation} Thus \begin{equation} \pn{A}{B}(x)=\ensuremath{\operatorname{cone}}\big((P^{-1}_A(x)-x)\cap (B-x)\big)= \begin{cases} \{0\}\times\ensuremath{\mathbb R}_+,& \text{if $x_1>0$ and $x_2=0$;}\\ \{(0,0)\},& \text{if $x_1=0$ and $x_2\geq 0$.} \end{cases} \end{equation} Hence \begin{equation} \label{e:0225d} \nc{A}{B}(a)=\varlimsup_{x\to a\atop x\in A}\pn{A}{B}(x)= \{0\}\times\ensuremath{\mathbb R}_+. 
\end{equation} On the other hand, \begin{equation} \label{e:0225b} \ensuremath{\operatorname{cone}}(B-x)= \begin{cases} \menge{(y_1,y_2)}{y_2\geq0,\,y_1<2y_2}\cup\{(0,0)\}, &\text{if $x_1>0$ and $x_2=0$;}\\ \menge{(y_1,y_2)}{y_1\geq0,\,2y_2<y_1}\cup\{(0,0)\}, &\text{if $x_1=0$ and $x_2>0$;}\\ B, &\text{if $x_1=x_2=0$.} \end{cases} \end{equation} Combining \eqref{e:0225a} and \eqref{e:0225b}, we deduce that \begin{equation} \label{e:0225c} \ensuremath{\operatorname{cone}}(B-x) \cap \pnX{A}(x) = \begin{cases} \{0\}\times\ensuremath{\mathbb R}_+, &\text{if $x_1>0$ and $x_2=0$;}\\ \ensuremath{\mathbb R}_+\times\{0\}, &\text{if $x_1=0$ and $x_2>0$;}\\ \{(0,0)\}, &\text{if $x_1=x_2=0$.} \end{cases} \end{equation} Using \eqref{e:0225b} and \eqref{e:0225c}, we compute \begin{equation} \label{e:0225f} \varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x) =\menge{(y_1,y_2)}{y_1\geq0\text{\, or \,}y_2\geq0} = X\smallsetminus\ensuremath{\mathbb R}_-M^2 \neq B = \ensuremath{\operatorname{cone}}(B-a) \end{equation} and \begin{equation} \label{e:0225e} \varlimsup_{x\to a\atop x\in A} \big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)=\big(\{0\}\times\ensuremath{\mathbb R}_+\big) \cup \big(\ensuremath{\mathbb R}_+\times\{0\}\big) = \ensuremath{\operatorname{cone}}\{(0,1),(1,0)\}. \end{equation} Using \eqref{e:0225d}, \eqref{e:0225e}, \eqref{e:0225f}, and \eqref{e:0225g}, we conclude that \begin{subequations} \begin{align} \nc{A}{B}(a)&= \{0\}\times\ensuremath{\mathbb R}_+ \\ &\subsetneqq \big(\{0\}\times\ensuremath{\mathbb R}_+\big)\cup\big(\ensuremath{\mathbb R}_+\times\{0\}\big) = \varlimsup_{x\to a\atop x\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)\\ &\subsetneqq \big(\{0\}\times\ensuremath{\mathbb R}\big)\cup\big(\ensuremath{\mathbb R}\times\{0\}\big) = \big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\big)\cap N_A(a). \end{align} \end{subequations} Therefore, both inclusions in \eqref{e:nc=limsup} are strict; however, $A$ is not convex and \eqref{e:nc=limsupii} does not hold (see \eqref{e:0225f}). \end{example} Finally, we provide an example where the inclusion \eqref{e:nc=limsup-b} is strict while the inclusion \eqref{e:nc=limsup-c} is an equality. \begin{example} \label{ex:0225e} Suppose that $X=\ensuremath{\mathbb R}^2$, let $A:=\ensuremath{\operatorname{cone}}\{(1,0),(0,1)\}$, $B:=\menge{(y_1,y_2)}{y_1+y_2=1}$, and $a:=(0,0)$. Let $x=(x_1,x_2)\in A$ be sufficiently close to $a$. We compute \begin{subequations} \begin{align} \ensuremath{\operatorname{cone}}(B-x)&=\menge{(y_1,y_2)}{y_1+y_2>0}\cup\{(0,0)\},\label{e:0225h}\\[+2mm] \pnX{A}(x)&= \begin{cases} \{0\}\times \ensuremath{\mathbb R}, &\text{if $x_1>0$ and $x_2=0$;}\\ \ensuremath{\mathbb R}\times\{0\},&\text{if $x_1=0$ and $x_2>0$;}\\ \ensuremath{\mathbb R}_-^2, &\text{if $x_1=x_2=0$,} \end{cases}\\[+2mm] \pn{A}{B}(x) &= \{(0,0)\}. \end{align} \end{subequations} Furthermore, Example~\ref{ex:0225a} (see \eqref{e:0224i}) implies that $N_A(a) = \ensuremath{\mathbb R}_-^2 \cup\big(\{0\}\times\ensuremath{\mathbb R}\big)\cup \big(\ensuremath{\mathbb R}\times\{0\}\big)$. 
We thus deduce that \begin{subequations} \begin{align} \nc{A}{B}(a)&= \{(0,0)\}\\ &\subsetneqq \big(\{0\}\times\ensuremath{\mathbb R}_+\big)\cup\big(\ensuremath{\mathbb R}_+\times\{0\}\big) = \varlimsup_{x\to a\atop x\in A}\big(\ensuremath{\operatorname{cone}}(B-x)\cap\pnX{A}(x)\big)\\ &= \big(\{0\}\times\ensuremath{\mathbb R}_+\big)\cup\big(\ensuremath{\mathbb R}_+\times\{0\}\big) = \big(\varlimsup_{x\to a\atop x\in A}\ensuremath{\operatorname{cone}}(B-x)\big)\cap N_A(a). \end{align} \end{subequations} Therefore, the inclusion \eqref{e:nc=limsup-b} is strict while the inclusion \eqref{e:nc=limsup-c} is an equality. \end{example} \section{Constraint qualification conditions and numbers} \label{s:CQ1} \label{s:CQnumber} Utilizing restricted normal cones, we introduce in this section the notions of \ensuremath{\varnothing}h{CQ-number}, \ensuremath{\varnothing}h{joint-CQ-number}, \ensuremath{\varnothing}h{CQ condition}, and \ensuremath{\varnothing}h{joint-CQ condition}, where CQ stands for ``constraint qualification''. \subsection*{CQ and joint-CQ numbers} \begin{definition}[CQ-number] \label{d:CQn} Let $A$, $\wt{A}$, $B$, $\wt{B}$, be nonempty subsets of $X$, let $c\in X$, and let $\delta\in\ensuremath{\mathbb R}_+P$. The \ensuremath{\varnothing}h{CQ-number} at $c$ associated with $(A,\wt{A},B,\wt{B})$ and $\delta$ is \begin{equation} \label{e:CQn} \theta_\delta:=\theta_\delta\big(A,\wt{A},B,\wt{B}\big) :=\sup\mmenge{\scal{u}{v}} {\begin{aligned} &u\in\pn{A}{\wt{B}}(a),v\in-\pn{B}{\wt{A}}(b),\|u\|\leq 1, \|v\|\leq 1,\\ &\|a-c\|\leq\delta,\|b-c\|\leq\delta. \end{aligned}}. \end{equation} The \ensuremath{\varnothing}h{limiting CQ-number} at $c$ associated with $(A,\wt{A},B,\wt{B})$ is \begin{equation}\label{e:lCQn} \overline\theta:=\overline\theta\big(A,\wt{A},B,\wt{B}\big) :=\lim_{\delta\downarrow0}\theta_\delta\big(A,\wt{A},B,\wt{B}\big). \end{equation} \end{definition} Clearly, \begin{equation} \label{e:120405a} \theta_\delta\big(A,\wt{A},B,\wt{B}\big)=\theta_\delta\big(B,\wt{B},A,\wt{A}\big) \quad\text{and}\quad \overline\theta\big(A,\wt{A},B,\wt{B}\big)=\overline\theta\big(B,\wt{B},A,\wt{A}\big). \end{equation} Note that, $\delta\mapsto \theta_\delta$ is increasing; this makes $\overline\theta$ well defined. Furthermore, since $0$ belongs to nonempty $B$-restricted proximal normal cones and because of the Cauchy-Schwarz inequality, we have \begin{equation} \label{e:120406a} c\in \overline{A}\cap \overline{B}\text{~and~}0<\delta_1<\delta_2 \quad\Rightarrow\quad 0\leq\overline{\theta}\leq \theta_{\delta_1}\leq\theta_{\delta_2} \leq 1, \end{equation} while $\theta_\delta$, and hence $\overline{\theta}$, is equal to $\ensuremath{-\infty}$ if $c\notin \overline{A}\cap \overline{B}$ and $\delta$ is sufficiently small (using the fact that $\sup\varnothing=\ensuremath{-\infty}$). Using Proposition~\ref{p:elementary}\ref{p:ele-ii}\&\ref{p:ele-vi}, we see that \begin{equation} \wt{A}\subseteq A' \;\text{and}\; \wt{B}\subseteq B' \quad\Rightarrow\quad \theta_\delta(A,\wt{A},B,\wt{B}) \leq \theta_\delta(A,A',B,B') \end{equation} and, for every $x\in X$, \begin{equation} \label{e:120405b} \theta_\delta\big(A,\wt{A},B,\wt{B}\big)\text{~at $c$} \quad = \quad \theta_\delta\big(A-x,\wt{A}-x,B-x,\wt{B}-x\big)\text{~at $c-x$.} \end{equation} To deal with unions, it is convenient to extend this notion as follows. 
\begin{definition}[joint-CQ-number] \label{d:jCQn} Let $\mathcal{A} := (A_i)_{i\in I}$, $\wt{\ensuremath{\mathcal A}}:=(\wt{A}_i)_{i\in I}$, $\mathcal{B} := (B_j)_{j\in J}$, $\wt{\ensuremath{\mathcal B}}:=(\wt{B}_j)_{j\in J}$ be nontrivial collections\footnote{The collection $(A_i)_{i\in I}$ is said to be \ensuremath{\varnothing}h{nontrivial} if $I\neq\varnothing$.} of nonempty subsets of $X$, let $c\in X$, and let $\delta\in\ensuremath{\mathbb R}_+P$. The \ensuremath{\varnothing}h{joint-CQ-number} at $c$ associated with $(\mathcal{A},\wt{\ensuremath{\mathcal A}},\mathcal{B},\wt{\ensuremath{\mathcal B}})$ and $\delta$ is \begin{equation} \label{e:jCQn} \theta_\delta=\theta_\delta\big(\mathcal{A}, \wt{\ensuremath{\mathcal A}},\mathcal{B},\wt{\ensuremath{\mathcal B}}\big):=\sup_{(i,j)\in I\times J}\theta_\delta\big(A_i,\wt{A}_i,B_j,\wt{B}_j\big), \end{equation} and the limiting joint-CQ-number at $c$ associated with $(\mathcal{A},\wt{\ensuremath{\mathcal A}},\mathcal{B},\wt{\ensuremath{\mathcal B}})$ is \begin{equation} \label{e:ljCQn} \overline\theta=\overline\theta\big(\mathcal{A}, \wt{\ensuremath{\mathcal A}}, \mathcal{B},\wt{\ensuremath{\mathcal B}}\big) :=\lim_{\delta\downarrow0}\theta_\delta\big(\mathcal{A},\wt{\ensuremath{\mathcal A}},\mathcal{B},\wt{\ensuremath{\mathcal B}}\big). \end{equation} \end{definition} For convenience, we will simply write $\theta_\delta$, $\overline\theta$ and omit the possible arguments $(A,\wt{A},B,\wt{B})$ and $(\mathcal{A},\wt{\ensuremath{\mathcal A}},\mathcal{B},\wt{\ensuremath{\mathcal B}})$ when there is no cause for confusion. If $I$ and $J$ are singletons, then the notions of CQ-number and joint-CQ-number coincide. Also observe that \begin{equation} c\in \bigcup_{i\in I}A_i \cap \bigcup_{j\in J}B_j \quad\Rightarrow\quad (\forall \delta\in\ensuremath{\mathbb R}_+P)\;\; 0\leq \overline{\theta}\leq\theta_\delta\leq 1 \end{equation} while $\overline{\theta}=\theta_\delta=\ensuremath{-\infty}$ when $\delta>0$ is sufficiently small and $c$ does not belong to both $\overline{\bigcup_{i\in I}A_i}$ and $\overline{\bigcup_{j\in J}B_j}$. Furthermore, the joint-CQ-number (and hence the limiting joint-CQ-number as well) really depends only on those sets $A_i$ and $B_j$ for which $c\in \overline{A_i}\cap \overline{B_j}$. To illustrate this notion, let us compute the CQ-number of two lines. The formula provided is the cosine of the angle between the two lines --- as we shall see in Theorem~\ref{t:CQn=c} below, this happens actually for all linear subspaces although then the angle must be defined appropriately and the proof is more involved. \begin{proposition}[CQ-number of two distinct lines through the origin] \label{p:CQn2l} Suppose that $w_a$ and $w_b$ are two vectors in $X$ such that $\|w_a\|=\|w_b\|=1$. Let $A :=\ensuremath{\mathbb R} w_a$, $B:= \ensuremath{\mathbb R} w_b$, and $\delta\in\ensuremath{\mathbb R}_+P$. Assume that $A\cap B = \{0\}$. Then the CQ-number at $0$ is \begin{equation} \theta_\delta(A,A,B,B)=|\scal{w_a}{w_b}|. \end{equation} \end{proposition} \begin{proof} Set $s := \scal{w_a}{w_b}$. Assume first that $s\neq 0$. Let $a = \alpha w_a\in A$ and $b=\beta w_b\in B$. Then $P_A^{-1}(a)-a = N_A(a) = \{w_a\}^\perp$; considering $(B-a)\cap \{w_a\}^\perp$ leads to $\beta s = \alpha$. Hence $(P_A^{-1}(a)-a)\cap (B-a)=\beta w_b - \alpha w_a$ and \begin{equation} \pn{A}{B}(a) = \ensuremath{\operatorname{cone}}\big(\alpha s^{-1}w_b-\alpha w_a). 
\end{equation} Similarly, \begin{equation} -\pn{B}{A}(b) = \ensuremath{\operatorname{cone}}\big(\beta w_b-\beta s^{-1}w_a). \end{equation} Now set $u := \alpha s^{-1}w_b-\alpha w_a\in \pn{A}{B}(a)$ and $v := \beta w_b-\beta s^{-1}w_a \in -\pn{B}{A}(b)$. One computes \begin{equation} \|u\|=\frac{|\alpha|\sqrt{1-s^2}}{|s|}, \;\; \|v\|=\frac{|\beta|\sqrt{1-s^2}}{|s|}, \;\;\text{and}\;\; \scal{u}{v} = \frac{\alpha\beta(1-s^2)}{s}. \end{equation} Hence \begin{equation} \frac{\scal{u}{v}}{\|u\|\cdot\|v\|} = \ensuremath{\operatorname{sgn}}(\alpha)\ensuremath{\operatorname{sgn}}(\beta) s. \end{equation} Choosing $\alpha$ and $\beta$ in $\{-1,1\}$ appropriately, we arrange for $\scal{u}{v}/(\|u\|\cdot\|v\|) = |s|$, as claimed. Now assume that $s=0$. Arguing similarly, we see that \begin{equation} (\forall a\in A)\quad \pn{A}{B}(a) = \begin{cases} \{0\},&\text{if $a\neq 0$;}\\ B, &\text{if $a=0$,} \end{cases} \quad\text{and}\quad (\forall b\in B)\quad \pn{B}{A}(b) = \begin{cases} \{0\},&\text{if $b\neq 0$;}\\ A, &\text{if $b=0$.} \end{cases} \end{equation} This leads to $\theta_\delta(A,A,B,B)=0=|s|$, again as claimed. \end{proof} Let $\ensuremath{\mathcal A} := (A_i)_{i\in I}$, $\wt{\ensuremath{\mathcal A}} := (\wt{A}_i)_{i\in I}$, $\ensuremath{\mathcal B} := (B_j)_{j\in J}$ and $\wt{\ensuremath{\mathcal B}} := (\wt{B}_j)_{j\in J}$ be nontrivial collections of nonempty closed subsets of $X$ and let $\delta\in\ensuremath{\mathbb R}_+P$. Set $A := \bigcup_{i\in I} A_i$, $\wt{A} := \bigcup_{i\in I} \wt{A}_i$, $B := \bigcup_{j\in J} B_j$, $\wt{B} := \bigcup_{j\in J} \wt{B}_j$, and suppose that $c\in A\cap B$. It is interesting to compare the joint-CQ-number of collections, i.e., $\theta_\delta\big(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}}, \ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}}\big)$, to the CQ-number of the unions, i.e., $\theta_\delta\big(A,\wt{A},B,\wt{B}\big)$. We shall see in the following two examples that \ensuremath{\varnothing}h{neither of them is smaller than the other}; in fact, one of them can be equal to 1 while the other is {strictly} less than 1. \begin{example}[joint-CQ-number $<$ CQ-number of the unions] \label{ex:jCQn<CQn} Suppose that $X=\ensuremath{\mathbb R}^3$, let $I:=J:=\{1,2\}$, $A_1:=\ensuremath{\mathbb R}(0,1,0)$, $A_2:=\ensuremath{\mathbb R}(2,0,-1)$, $B_1:=\ensuremath{\mathbb R}(0,1,1)$, $B_2:=\ensuremath{\mathbb R}(1,0,0)$, $c:=(0,0,0)$, and let $\delta>0$. Furthermore, set $\ensuremath{\mathcal A}:=(A_i)_{i\in I}$, $\ensuremath{\mathcal B}:=(B_j)_{j\in J}$, $A:= A_1\cup A_2$, and $B:= B_1\cup B_2$. Then \begin{equation} \theta_\delta\big(\ensuremath{\mathcal A},\ensuremath{\mathcal A},\ensuremath{\mathcal B},\ensuremath{\mathcal B}\big) =\tfrac{2}{\sqrt{5}} < 1 = \theta_\delta\big(A,A,B,B\big). \end{equation} \end{example} \begin{proof} Using Proposition~\ref{p:CQn2l}, we compute, for the reference point $c$, \begin{subequations} \begin{align} \theta_\delta(A_1,A_1,B_1,B_1)&= \big|\bscal{(0,1,0)}{\tfrac{1}{\sqrt{2}}(0,1,1)}\big|=\tfrac{1}{\sqrt{2}},\\ \theta_\delta(A_1,A_1,B_2,B_2)&=|\scal{(0,1,0)}{(1,0,0)}|=0,\\ \theta_\delta(A_2,A_2,B_1,B_1)&= \big|\bscal{\tfrac{1}{\sqrt{5}}(2,0,-1)}{\tfrac{1}{\sqrt{2}}(0,1,1)}\big| =\tfrac{1}{\sqrt{10}},\\ \theta_\delta(A_2,A_2,B_2,B_2)&= \big|\bscal{\tfrac{1}{\sqrt{5}}(2,0,-1)}{(1,0,0}\big| =\tfrac{2}{\sqrt{5}}. 
\begin{example}[CQ-number of the unions $<$ joint-CQ-number] Suppose that $X=\ensuremath{\mathbb R}$, let $I := J:= \{1,2\}$, $A_1:=B_1:=\ensuremath{\mathbb R}_-$, $A_2:=B_2:=\ensuremath{\mathbb R}_+$, $c:=0$, and $\delta>0$. Furthermore, set $\ensuremath{\mathcal A} := (A_i)_{i\in I}$, $\ensuremath{\mathcal B} := (B_j)_{j\in J}$, $A:= A_1\cup A_2=\ensuremath{\mathbb R}$, and $B:= B_1\cup B_2=\ensuremath{\mathbb R}$. Then \begin{equation} \theta_\delta\big(A,A,B,B\big)=0 < 1 = \theta_\delta\big(\ensuremath{\mathcal A},\ensuremath{\mathcal A},\ensuremath{\mathcal B},\ensuremath{\mathcal B}\big). \end{equation} \end{example} \begin{proof} Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNri} implies that $(\forall x\in \ensuremath{\mathbb R})$ $\pn{\ensuremath{\mathbb R}}{\ensuremath{\mathbb R}}(x)=\{0\}$. Hence $\theta_\delta(\ensuremath{\mathbb R},\ensuremath{\mathbb R},\ensuremath{\mathbb R},\ensuremath{\mathbb R})=0$ as claimed. On the other hand, $\pn{\ensuremath{\mathbb R}_+}{\ensuremath{\mathbb R}_-}(0)=\ensuremath{\mathbb R}_-$ and $\pn{\ensuremath{\mathbb R}_-}{\ensuremath{\mathbb R}_+}(0)=\ensuremath{\mathbb R}_+$. Hence $\theta_\delta(\ensuremath{\mathbb R}_-,\ensuremath{\mathbb R}_-,\ensuremath{\mathbb R}_+,\ensuremath{\mathbb R}_+)=1$ and therefore $\theta_\delta\big(\ensuremath{\mathcal A},\ensuremath{\mathcal A},\ensuremath{\mathcal B},\ensuremath{\mathcal B}\big)=1$ as well. \end{proof} The two preceding examples illustrate the independence of the two types of CQ-numbers (for the collection and for the union). In some cases, such as Example~\ref{ex:jCQn<CQn}, it is beneficial to work with a suitable partition to obtain a CQ-number that is less than one, which in turn is very desirable in applications (see Section~\ref{s:application}). \subsection*{CQ and joint-CQ conditions} \begin{definition}[CQ and joint-CQ conditions] \label{d:tildeCQ} Let $c\in X$. \begin{enumerate} \item Let $A$, $\wt{A}$, $B$ and $\wt{B}$ be nonempty subsets of $X$. Then the \emph{$(A,\wt{A},B,\wt{B})$-CQ condition} holds at $c$ if \begin{equation}\label{e:CQ} \nc{A}{\wt{B}}(c) \cap\big(-\nc{B}{\wt{A}}(c)\big)\subseteq\{0\}.
\end{equation} \item Let $\mathcal{A} := (A_i)_{i\in I}$, $\wt{\ensuremath{\mathcal A}} := (\wt{A}_i)_{i\in I}$, $\mathcal{B} := (B_j)_{j\in J}$ and $\wt{\ensuremath{\mathcal B}} := (\wt{B}_j)_{j\in J}$ be nontrivial collections of nonempty subsets of $X$. Then the \ensuremath{\varnothing}h{$(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$-joint-CQ condition} holds at $c$ if for every $(i,j)\in I\times J$, the $(A_i,\wt{A}_i,B_j,\wt{B}_j)$-CQ condition holds at $c$, i.e., \begin{equation}\label{e:jCQ} \big(\forall (i,j)\in I\times J\big)\quad \nc{A_i}{\wt{B}_j}(c) \cap\big(-\nc{B_j}{\wt{A}_i}(c)\big)\subseteq\{0\}. \end{equation} \end{enumerate} \end{definition} In view of the definitions, the key case to consider is when $c\in A\cap B$ (or when $c\in A_i\cap B_j$ in the joint-CQ case). The CQ-number is based on the behavior of the restricted proximal normal cone in a neighborhood of the point under consideration --- a related notion is that of the exact CQ-number, where we consider the restricted normal cone at the point instead of nearby restricted proximal normal cones. \begin{definition}[exact CQ-number and exact joint-CQ-number] \label{d:exactCQn} Let $c\in X$. \begin{enumerate} \item Let $A$, $\wt{A}$, $B$ and $\wt{B}$ be nonempty subsets of $X$. The \ensuremath{\varnothing}h{exact CQ-number} at $c$ associated with $(A,\wt{A},B,\wt{B})$ is \footnote{Note that if $c\notin A\cap B$, then $\overline{\alpha} = \sup\varnothing=\ensuremath{-\infty}$.} \begin{equation} \label{e:0217a} \overline{\alpha} := \overline{\alpha}\big(A,\wt{A},B,\wt{B}\big) := \sup\mmenge{\scal{u}{v}}{u\in\nc{A}{\wt{B}}(c),v\in-\nc{B}{\wt{A}}(c),\|u\|\leq 1, \|v\|\leq 1}. \end{equation} \item Let $\mathcal{A} := (A_i)_{i\in I}$, $\wt{\ensuremath{\mathcal A}} := (\wt{A}_i)_{i\in I}$, $\mathcal{B} := (B_j)_{j\in J}$ and $\wt{\ensuremath{\mathcal B}} := (\wt{B}_j)_{j\in J}$ be nontrivial collections of nonempty subsets of $X$. The \ensuremath{\varnothing}h{exact joint-CQ-number} at $c$ associated with $(\mathcal{A},\mathcal{B},\wt{\ensuremath{\mathcal A}},\wt{\ensuremath{\mathcal B}})$ is \begin{equation} \label{e:0217b} \overline{\alpha} := \overline{\alpha}(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}}) := \sup_{(i,j)\in I\times J}\overline{\alpha}(A_i,\wt{A}_i,B_j,\wt{B}_j). \end{equation} \end{enumerate} \end{definition} The next result relates the various condition numbers defined above. \begin{theorem}\label{t:CQ1} Let $\mathcal{A} := (A_i)_{i\in I}$, $\wt{\ensuremath{\mathcal A}} := (\wt{A}_i)_{i\in I}$, $\mathcal{B} := (B_j)_{j\in J}$ and $\wt{\ensuremath{\mathcal B}} := (\wt{B}_j)_{j\in J}$ be nontrivial collections of nonempty subsets of $X$. Set $A := \bigcup_{i\in I} A_i$ and $B := \bigcup_{j\in J}B_j$, and suppose that $c\in A\cap B$. Denote the exact joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ by $\overline{\alpha}$ (see \eqref{e:0217b}), the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ and $\delta>0$ by $\theta_\delta$ (see \eqref{e:jCQn}), and the limiting joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ by $\overline{\theta}$ (see \eqref{e:ljCQn}). 
Then the following hold: \begin{enumerate} \item \label{t:CQ1i} If $\overline\alpha<1$, then the $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$-joint-CQ condition holds at $c$. \item \label{t:CQ1ii} $\overline{\alpha}\leq\theta_\delta$. \item \label{t:CQ1iii} $\overline{\alpha}\leq\overline{\theta}$. \end{enumerate} Now assume in addition that $I$ and $J$ are finite. Then the following hold: \begin{enumerate}[resume] \item \label{t:CQ1iv} $\overline{\alpha}=\overline{\theta}$. \item \label{t:CQ1v} The $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$-joint-CQ condition holds at $c$ if and only if $\overline{\alpha}=\overline{\theta}<1$. \end{enumerate} \end{theorem} \begin{proof} \ref{t:CQ1i}: Suppose that $\overline\alpha<1$. The condition for equality in the Cauchy-Schwarz inequality implies that for all $(i,j)\in I\times J$, the intersection $\nc{A_i}{\wt{B}_j}(c)\cap (-\nc{B_j}{\wt{A}_i}(c))$ is either empty or $\{0\}$. In view of Definition~\ref{d:tildeCQ}, we see that the $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$-joint-CQ condition holds at $c$. \ref{t:CQ1ii}: Let $(i,j)\in I\times J$. Take $u\in\nc{A_i}{\wt{B}_j}(c)$ and $v\in-\nc{B_j}{\wt{A}_i}(c)$ such that $\|u\|\leq 1$ and $\|v\|\leq 1$. Then, by definition of the restricted normal cone, there exist sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ in $A_i$, $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ in $B_j$, $(u_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(v_n)_\ensuremath{{n\in{\mathbb N}}}$ in $X$ such that $a_n\to c$, $b_n\to c$, $u_n\to u$, $v_n\to v$, and $(\forall\ensuremath{{n\in{\mathbb N}}})$ $u_n\in\pn{A_i}{\wt{B}_j}(a_n)$ and $v_n\in-\pn{B_j}{\wt{A}_i}(b_n)$. Note that since $\delta>0$, eventually $a_n$ and $b_n$ lie in $\ball{c}{\delta}$; consequently, $\scal{u_n}{v_n}\leq \theta_\delta(A_i,\wt{A}_i,B_j,\wt{B}_j)$. Taking the limit as $n\to\ensuremath{+\infty}$, we obtain $\scal{u}{v}\leq\theta_\delta(A_i,\wt{A}_i,B_j,\wt{B}_j)\leq\theta_\delta$. Now taking the supremum over suitable $u$ and $v$, followed by taking the supremum over $(i,j)$, we conclude that $\overline{\alpha}\leq \theta_\delta$. \ref{t:CQ1iii}: This is clear from \ref{t:CQ1ii} and \eqref{e:ljCQn}. \ref{t:CQ1iv}: Let $(\delta_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $\ensuremath{\mathbb R}_{++}$ such that $\delta_n\to 0$. Then for every $\ensuremath{{n\in{\mathbb N}}}$, there exist \begin{equation} i_n\in I,\ j_n\in J,\ a_n\in A_{i_n},\ b_n\in B_{j_n},\ u_n\in\pn{A_{i_n}}{\wt{B}_{j_n}}(a_n),\ v_n\in-\pn{B_{j_n}}{\wt{A}_{i_n}}(b_n) \end{equation} such that \begin{equation} \|a_n-c\|\leq \delta_n,\ \|b_n-c\|\leq \delta_n,\ \|u_n\|\leq1,\ \|v_n\|\leq1,\ \text{~and~}\scal{u_n}{v_n}>\theta_{\delta_n}-\delta_n. \end{equation} Since $I$ and $J$ are finite, and after passing to a subsequence and relabeling if necessary, we can and do assume that there exists $(i,j)\in I\times J$ such that $(i_n,j_n)\equiv(i,j)$ and that $u_n\to u\in \nc{A_i}{\wt{B}_j}(c)$ and $v_n\to v\in -\nc{B_j}{\wt{A}_i}(c)$. Hence $\overline{\theta} \leftarrow \theta_{\delta_n}-\delta_n <\scal{u_n}{v_n}\to\scal{u}{v}\leq\overline{\alpha}$, and thus $\overline{\theta}\leq\overline{\alpha}$. On the other hand, $\overline{\alpha}\leq\overline{\theta}$ by \ref{t:CQ1iii}. Altogether, $\overline{\alpha}=\overline{\theta}$. \ref{t:CQ1v}: ``$\Rightarrow$'': Let $(i,j)\in I\times J$.
If $c\not\in {A_i}\cap {B_j}$, then $\overline\alpha(A_i,\wt{A}_i,B_j,\wt{B}_j)=\ensuremath{-\infty}$. Now assume that $c\in {A_i}\cap {B_j}$. Since the $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$-joint-CQ condition holds, we have $\nc{A_i}{\wt{B}_j}(c)\cap -\nc{B_j}{\wt{A}_i}(c)=\{0\}$. By Cauchy-Schwarz, \begin{equation} \overline\alpha(A_i,\wt{A}_i,B_j,\wt{B}_j)= \sup\mmenge{\scal{u}{v}}{u\in\nc{A_i}{\wt{B}_j}(c),v\in-\nc{B_j}{\wt{A}_i}(c),\|u\|\leq 1, \|v\|\leq 1}<1. \end{equation} Since $I$ and $J$ are finite and because of \ref{t:CQ1iv}, we deduce that $\overline{\theta}=\overline\alpha<1$.\\ ``$\Leftarrow$'': Combine \ref{t:CQ1i} with \ref{t:CQ1iv}. \end{proof} \section{CQ conditions and CQ numbers: examples} \label{s:CQ2} In this section, we provide further results and examples illustrating CQ conditions and CQ numbers. First, let us note that the assumption that the sets of indices be finite in Theorem~\ref{t:CQ1}\ref{t:CQ1iv} is essential: \begin{example}[$\overline{\alpha}<\overline{\theta}$] Suppose that $X=\ensuremath{\mathbb R}^2$, let $\Gamma\subseteq\ensuremath{\mathbb R}_+P$ be such that $\sup\Gamma=\ensuremath{+\infty}$, set $(\forall\gamma\in\Gamma)$ $A_\gamma := \ensuremath{\operatorname{epi}}(\thalb\gamma|\cdot|^2)$, $B := \ensuremath{\mathbb R}_+\times\ensuremath{\mathbb R}$, $\ensuremath{\mathcal A} := (A_\gamma)_{\gamma\in\Gamma}$, $\wt{\ensuremath{\mathcal A}} := (X)_{\gamma\in\Gamma}$, $\ensuremath{\mathcal B} := (B)$, $\wt{\ensuremath{\mathcal B}} := (X)$, and $c:=(0,0)$. Denote the exact joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ by $\overline{\alpha}$ (see \eqref{e:0217b}), the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ and $\delta>0$ by $\theta_\delta$ (see \eqref{e:jCQn}), and the limiting joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ by $\overline{\theta}$ (see \eqref{e:ljCQn}). Then \begin{equation} \overline{\alpha}=0<1 = \theta_\delta = \overline{\theta}. \end{equation} \end{example} \begin{proof} Let $\gamma\in\Gamma$ and pick $x>0$ such that $a := (x,\thalb\gamma x^2)\in A_\gamma$ satisfies $\|a\|=\|a-c\|=\delta$, i.e., $x>0$ and \begin{equation} \gamma^2x^2 = 2\Big(\sqrt{1+\gamma^2\delta^2}-1\Big) \to\ensuremath{+\infty} \quad\text{as $\gamma\to \ensuremath{+\infty}$ in $\Gamma$.} \end{equation} Hence \begin{equation} \label{e:0303b} \gamma x\to \ensuremath{+\infty}, \quad\text{as $\gamma\to \ensuremath{+\infty}$ in $\Gamma$.} \end{equation} Since $A_\gamma$ is closed and convex, it follows from Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvi} that \begin{equation} u := \frac{(\gamma x,-1)}{\sqrt{\gamma^2 x^2+1}} \in \ensuremath{\mathbb R}_+(\gamma x,-1) = \cnc{A_\gamma}(a) = \pn{A_\gamma}{X}(a) = \nc{A_\gamma}{X}(a) = N_{A_\gamma}(a). 
\end{equation} Furthermore, $v := (1,0)\in -(\ensuremath{\mathbb R}_-\times\{0\})= -\pn{B}{X}(c) = -\nc{B}{X}(c)= - N_B(c)$, $\|u\|=\|v\|=1$, and, in view of \eqref{e:0303b}, \begin{subequations} \begin{align} 1 &\geq \theta_\delta \geq \theta_\delta(A_\gamma,X,B,X) \geq \scal{u}{v} = \frac{\gamma x}{\sqrt{\gamma^2 x^2+1}}\\ & \to 1 \quad\text{as $\gamma\to \ensuremath{+\infty}$ in $\Gamma$.} \end{align} \end{subequations} Thus $\theta_\delta=1$, which implies that $\overline{\theta}=1$. Finally, $N_{A_\gamma}(c) = \{0\}\times\ensuremath{\mathbb R}_-$ is orthogonal to $\ensuremath{\mathbb R}_+\times\{0\} = -N_B(c)$, which shows that $\overline{\alpha}=0$. \end{proof} For the eventual application of these results to the method of alternating projections, the condition $\overline{\alpha}=\overline{\theta}<1$ is critical to ensure linear convergence. The following example illustrates that the CQ-number can be interpreted as a quantification of the CQ condition. \begin{example}[CQ-number quantifies CQ condition] \label{ex:compareCQ1} Let $A$ and $B$ be subsets of $X$, and suppose that $c\in A\cap B$. Let $L$ be an affine subspace of $X$ containing $A\cup B$. Then the following are equivalent: \begin{enumerate} \item \label{t:cCQ1-i} $\nc{A}{L}(c) \cap (-\nc{B}{L}(c))= \{0\}$, i.e., the $(A,L,B,L)$-CQ condition holds at $c$ (see \eqref{e:CQ}). \item \label{t:cCQ1-ii} $N_A(c) \cap (-N_B(c))\cap (L-c) = \{0\}$. \item \label{t:cCQ1-iii} $\overline{\theta}<1$, where $\overline{\theta}$ is the limiting CQ-number at $c$ associated with $(A,L,B,L)$ (see \eqref{e:lCQn}). \end{enumerate} \end{example} \begin{proof} The identity \eqref{e:pnALd} of Theorem~\ref{p:pnA(L)} yields $\nc{A}{L}(c)=N_A(c)\cap(L-c)$ and $\nc{B}{L}(c)=N_B(c)\cap(L-c)$. Hence \begin{equation} \nc{A}{L}(c) \cap \big(-\nc{B}{L}(c)\big)= N_A(c)\cap \big(-N_B(c)\big)\cap (L-c), \end{equation} and the equivalence of \ref{t:cCQ1-i} and \ref{t:cCQ1-ii} is now clear. Finally, Theorem~\ref{t:CQ1}\ref{t:CQ1iv}\&\ref{t:CQ1v} yields the equivalence of \ref{t:cCQ1-i} and \ref{t:cCQ1-iii}. \end{proof} Depending on the choice of the restricting sets $\wt{A}$ and $\wt{B}$, the $(A,\wt{A},B,\wt{B})$-CQ condition may either hold or fail: \begin{example}[CQ condition depends on restricting sets] \label{ex:CQdif(AB)} Suppose that $X=\ensuremath{\mathbb R}^2$, and set $A:=\ensuremath{\operatorname{epi}}(|\cdot|)$, $B:=\ensuremath{\mathbb R}\times\{0\}$, and $c:=(0,0)$. Then we readily verify that $N_A(c) = \nc{A}{X}(c) = -A$, $\nc{A}{B}(c) = -\ensuremath{\operatorname{bdry}} A$, $N_B(c)=\nc{B}{X}(c) =\{0\}\times\ensuremath{\mathbb R}$, and $\nc{B}{A}(c) = \{0\}\times\ensuremath{\mathbb R}_+$. Consequently, \begin{equation} \nc{A}{X}(c)\cap\big(-\nc{B}{X}(c)\big) = \{0\}\times\ensuremath{\mathbb R}_- \text{~~while~~} \nc{A}{B}(c)\cap\big(-\nc{B}{A}(c)\big) = \{(0,0)\}. \end{equation} Therefore, the $(A,A,B,B)$-CQ condition holds, yet the $(A,X,B,X)$-CQ condition fails. \end{example} For two spheres, it is possible to quantify the convergence of $\theta_\delta$ to $\overline{\theta}=\overline{\alpha}$: \begin{proposition}[CQ-numbers of two spheres] \label{p:sphere2} Let $z_1$ and $z_2$ be in $X$, let $\rho_1$ and $\rho_2$ be in $\ensuremath{\mathbb R}_{++}$, set $S_1 := \sphere{z_1}{\rho_1}$ and $S_2 := \sphere{z_2}{\rho_2}$, and assume that $c\in S_1\cap S_2$.
Denote the limiting CQ-number at $c$ associated with $(S_1,X,S_2,X)$ by $\overline{\theta}$ (see Definition~\ref{d:CQn}), and the exact CQ-number at $c$ associated with $(S_1,X,S_2,X)$ by $\overline{\alpha}$ (see Definition~\ref{d:exactCQn}). Then the following hold: \begin{enumerate} \item \label{p:sphere2i} $\displaystylelaystyle \overline{\theta} = \overline{\alpha} = \frac{|\scal{z_1-c}{z_2-c}|}{\rho_1\rho_2}$. \item \label{p:sphere2ii} $\overline{\alpha}<1$ unless the spheres are identical or intersect only at $c$. \end{enumerate} Now assume that $\overline{\alpha}<1$, let $\varepsilon\in\ensuremath{\mathbb R}_+P$, and set $\delta := (\sqrt{(\rho_1+\rho_2)^2+4\rho_1\rho_2\varepsilon}-(\rho_1+\rho_2))/2>0$. Then \begin{equation} \label{e:0308d} \overline\alpha\leq \theta_\delta\leq\overline\alpha+\varepsilon, \end{equation} where $\theta_\delta$ is the CQ-number at $c$ associated with $(S_1,X,S_2,X)$ (see Definition~\ref{d:CQn}). \end{proposition} \begin{proof} \ref{p:sphere2i}: This follows from Theorem~\ref{t:CQ1}\ref{t:CQ1iv} and Example~\ref{ex:ncS}. \ref{p:sphere2ii}: Combine \ref{p:sphere2i} with the characterization of equality in the Cauchy-Schwarz inequality. Let us now establish \eqref{e:0308d}. By Theorem~\ref{t:CQ1}\ref{t:CQ1ii}, we have $\overline{\alpha}\leq\theta_\delta$. Let $s_1\in S_1$ be such that $\|s_1-c\|\leq\delta$, let $u_1\in\pn{S_1}{X}(s_1)$ be such that $\|u_1\|=1$, let $s_2\in S_2$ be such that $\|s_2-c\|\leq\delta$, and let $u_2\in\pn{S_2}{X}(s_2)$ be such that $\|u_2\|=1$. By Example~\ref{ex:ncS}, \begin{equation} u_1=\pm\frac{s_1-z_1}{\|s_1-z_1\|}=\pm\frac{s_1-z_1}{\rho_1} \quad\text{and}\quad u_2=\pm\frac{s_2-z_2}{\|s_2-z_2\|}=\pm\frac{s_2-z_2}{\rho_2}. \end{equation} Hence \begin{subequations} \begin{align} \rho_1\rho_2\scal{u_1}{u_2} &\leq |\scal{s_1-z_1}{s_2-z_2}|\\ &=|\scal{(s_1-c)+(c-z_1)}{(s_2-c)+(c-z_2)}|\\ &\leq |\scal{s_1-c}{s_2-c}| + |\scal{s_1-c}{c-z_2}|\\ &\qquad + |\scal{c-z_1}{s_2-c}| + |\scal{c-z_1}{c-z_2}|\\ &\leq \delta^2 + \delta(\rho_1+\rho_2) + \rho_1\rho_2\overline{\alpha} \end{align} \end{subequations} and thus, using the definition of $\delta$, \begin{equation} \scal{u_1}{u_2}\leq \overline{\alpha} + \frac{\delta^2+\delta(\rho_1+\rho_2)}{\rho_1\rho_2} = \overline{\alpha}+\varepsilon. \end{equation} Therefore, by the definition of $\theta_\delta$, we have $\theta_\delta\leq\overline{\alpha}+\varepsilon$. \end{proof} \subsection*{Two convex sets} Let us turn to the classical convex setting. We start by noting that well known constraint qualifications are conveniently characterized using our CQ conditions. \begin{proposition} \label{p:0301a} Let $A$ and $B$ be nonempty convex subsets of $X$ such that $A\cap B\neq\varnothing$, and set $L=\ensuremath{\operatorname{aff}}(A\cup B)$. Then the following are equivalent: \begin{enumerate} \item \label{p:0301ai} $\ensuremath{\operatorname{ri}} A\cap \ensuremath{\operatorname{ri}} B\neq \varnothing$. \item \label{p:0301aii} The $(A,L,B,L)$-CQ condition holds at some point in $A\cap B$. \item \label{p:0301aiii} The $(A,L,B,L)$-CQ condition holds at every point in $A\cap B$. \end{enumerate} \end{proposition} \begin{proof} This is clear from Theorem~\ref{t:compareCQ2}. \end{proof} \begin{proposition} \label{p:0301b} Let $A$ and $B$ be nonempty convex subsets of $X$ such that $A\cap B\neq\varnothing$. Then the following are equivalent: \begin{enumerate} \item \label{p:0301bi} $0\in\ensuremath{\operatorname{int}}(B-A)$. 
\item \label{p:0301bii} The $(A,X,B,X)$-CQ condition holds at some point in $A\cap B$. \item \label{p:0301biii} The $(A,X,B,X)$-CQ condition holds at every point in $A\cap B$. \end{enumerate} \end{proposition} \begin{proof} This is clear from Corollary~\ref{c:compareCQ3}. \end{proof} In stark contrast to Propositions~\ref{p:0301a} and~\ref{p:0301b}, if the restricting sets are not both equal to $L$ or to $X$, then the CQ condition may actually depend on the reference point, as we now illustrate: \begin{example}[CQ condition depends on the reference point] Suppose that $X=\ensuremath{\mathbb R}^2$, and let $f\colon\ensuremath{\mathbb R}\to\ensuremath{\mathbb R}\colon x\mapsto (\max\{0,x\})^2$, which is a continuous convex function. Set $A := \ensuremath{\operatorname{epi}} f$ and $B:=\ensuremath{\mathbb R}\times\{0\}$, which are closed convex subsets of $X$. Consider first the point $c:=(-1,0)\in A\cap B$. Then $\nc{A}{B}(c)=\{(0,0)\}$ and $\nc{B}{A}(c)=\{0\}\times\ensuremath{\mathbb R}_+$; hence, \begin{equation} \nc{A}{B}(c)\cap\big(-\nc{B}{A}(c)\big)=\{(0,0)\}, \end{equation} i.e., the $(A,A,B,B)$-CQ condition holds at $c$. On the other hand, consider now $d:=(0,0)\in A\cap B$. Then $\nc{A}{B}(d) = \{0\}\times\ensuremath{\mathbb R}_-$ and $\nc{B}{A}(d)=\{0\}\times\ensuremath{\mathbb R}_+$; thus, \begin{equation} \nc{A}{B}(d)\cap\big(-\nc{B}{A}(d)\big)=\{0\}\times\ensuremath{\mathbb R}_-, \end{equation} i.e., the $(A,A,B,B)$-CQ condition fails at $d$. \end{example} \subsection*{Two linear (or intersecting affine) subspaces} We specialize further to two linear subspaces of $X$. A pleasing connection between the CQ-number and the angle between two linear subspaces will be revealed. But first we provide some auxiliary results. \begin{proposition} \label{p:nc-subsp} Let $A$ and $B$ be linear subspaces of $X$, and let $\delta\in\ensuremath{\mathbb R}_{++}$. Then \begin{equation} \label{e:0301a} \bigcup_{a\in A\cap(B+A^\perp)\cap \ball{0}{\delta}}\pn{A}{B}(a)= \bigcup_{a\in A\cap \ball{0}{\delta}}\pn{A}{B}(a)= \bigcup_{a\in A}\pn{A}{B}(a) =A^\bot\cap(A+B). \end{equation} \end{proposition} \begin{proof} Let $a\in A$. Then $P_A^{-1}(a) = a+A^\perp$ and hence $P_A^{-1}(a)-a=A^\perp$. If $B\cap(a+A^\perp)=\varnothing$, then $\pn{A}{B}(a)=\{0\}$. Thus we assume that $B\cap(a+A^\perp)\neq\varnothing$, which is equivalent to $a\in A\cap(B+A^\perp)$. Next, by Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNi+}, $\pn{A}{B}(a) = A^\perp\cap \ensuremath{\operatorname{cone}}(B-a)$. Since $B$ is a linear subspace, $(\forall\lambda\in\ensuremath{\mathbb R}_{++})$ $\ensuremath{\operatorname{cone}}(B-\lambda a)=\ensuremath{\operatorname{cone}}(\lambda(B-a))=\ensuremath{\operatorname{cone}}(B-a)$. Thus, \begin{equation} (\forall\lambda\in\ensuremath{\mathbb R}_{++})\quad \pn{A}{B}(\lambda a)= A^\perp \cap \ensuremath{\operatorname{cone}}(B-\lambda a)= A^\perp\cap\ensuremath{\operatorname{cone}}(B-a)= \pn{A}{B}(a). \end{equation} This establishes not only the first two equalities in \eqref{e:0301a} but also the third because \begin{subequations} \begin{align} \bigcup_{a\in A}\pn{A}{B}(a) &=\bigcup_{a\in A}\big(A^\perp\cap\ensuremath{\operatorname{cone}}(B-a)\big) = A^\perp\cap \bigcup_{a\in A}\ensuremath{\operatorname{cone}}(B-a)\\ &= A^\perp\cap\ensuremath{\operatorname{cone}}\Big(\bigcup_{a\in A}(B-a)\Big) =A^\perp\cap \ensuremath{\operatorname{cone}}(B-A) = A^\perp\cap(B-A)\\ &=A^\perp\cap (B+A). \end{align} \end{subequations} The proof is complete. \end{proof}
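As a numerical complement (a sanity check only; the ambient dimension, the randomly generated subspaces, and the tolerances are arbitrary choices made here), the following Python sketch samples directions $b-a$ with $a\in A$ and $b\in B\cap P_A^{-1}(a)$ and confirms that, in accordance with Proposition~\ref{p:nc-subsp}, they lie in, and indeed span, the subspace $A^\bot\cap(A+B)$.
\begin{verbatim}
# Numerical illustration of Proposition p:nc-subsp: directions b - a with
# a in A and b in B satisfying P_A(b) = a lie in A-perp intersected with
# A + B, and for generic subspaces they span that whole intersection.
import numpy as np

rng = np.random.default_rng(0)
n, dimA, dimB = 6, 2, 2
QA, _ = np.linalg.qr(rng.standard_normal((n, dimA)))   # orthonormal basis of A
QB, _ = np.linalg.qr(rng.standard_normal((n, dimB)))   # orthonormal basis of B

dirs = []
for _ in range(200):
    a = QA @ rng.standard_normal(dimA)                  # random point a in A
    # b in B with P_A(b) = a, i.e. solve (QA^T QB) y = QA^T a for y.
    y = np.linalg.solve(QA.T @ QB, QA.T @ a)
    b = QB @ y
    u = b - a                                            # element of pn_A^B(a)
    assert np.linalg.norm(QA.T @ u) < 1e-8               # u is orthogonal to A
    dirs.append(u)

QAB, _ = np.linalg.qr(np.hstack([QA, QB]))               # orthonormal basis of A + B
U = np.array(dirs).T
assert np.linalg.norm(U - QAB @ (QAB.T @ U)) < 1e-6      # every u lies in A + B
# dim(A-perp intersected with (A+B)) = dim(A+B) - dim(A) = 2 for this setup
print(np.linalg.matrix_rank(U), QAB.shape[1] - dimA)     # prints: 2 2
\end{verbatim}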
We now introduce two notions of angles between subspaces; for further information, we highly recommend \cite{Deut94} and \cite{Deutsch}. \begin{definition} Let $A$ and $B$ be linear subspaces of $X$. \begin{enumerate} \item\label{d:aDix} {\rm \textbf{(Dixmier angle)} \cite{Dix49}} The \emph{Dixmier angle} between $A$ and $B$ is the number in $[0,\frac{\pi}{2}]$ whose cosine is given by \begin{equation} c_0(A,B):=\sup\menge{|\scal{a}{b}|}{a\in A,b\in B, \|a\|\leq 1,\|b\|\leq 1}. \end{equation} \item\label{d:aFri} {\rm \textbf{(Friedrichs angle)} \cite{Frie37}} The \emph{Friedrichs angle} (or simply the \emph{angle}) between $A$ and $B$ is the number in $[0,\frac{\pi}{2}]$ whose cosine is given by \begin{subequations} \begin{align} c(A,B)&:=c_0(A\cap(A\cap B)^\perp,B\cap(A\cap B)^\perp)\\ &= \sup\mmenge{|\scal{a}{b}|} {\begin{aligned} &a\in A\cap(A\cap B)^\bot,\|a\|\leq 1,\\ &b\in B\cap(A\cap B)^\bot,\|b\|\leq 1 \end{aligned}}. \end{align} \end{subequations} \end{enumerate} \end{definition} Let us gather some properties of angles. \begin{fact} \label{f:c0&c} Let $A$ and $B$ be linear subspaces of $X$. Then the following hold: \begin{enumerate} \item \label{f:c0&c-ii} If $A\cap B=\{0\}$, then $c(A,B)=c_0(A,B)$. \item \label{f:c0&c-ii+} If $A\cap B\neq \{0\}$, then $c_0(A,B)=1$. \item \label{f:c0&c-ii++} $c(A,B)<1$. \item\label{f:c0&c-ii3} $c(A,B)=c_0(A,B\cap(A\cap B)^\bot)=c_0(A\cap(A\cap B)^\bot,B)$. \item \label{f:c0&c-iii} {\rm \textbf{(Solmon)}} $c(A,B)=c(A^\bot,B^\bot)$. \end{enumerate} \end{fact} \begin{proof} \ref{f:c0&c-ii}--\ref{f:c0&c-ii++}: Clear from the definitions. \ref{f:c0&c-ii3}: See, e.g., \cite[Lemma~2.10(1)]{Deut94} or \cite[Lemma~9.5]{Deutsch}. \ref{f:c0&c-iii}: See, e.g., \cite[Theorem~2.16]{Deut94}. \end{proof} \begin{proposition}[CQ-number of two linear subspaces and Dixmier angle] \label{p:CQn=c0} Let $A$ and $B$ be linear subspaces of $X$, and let $\delta>0$. Then \begin{subequations} \begin{align} \theta_\delta(A,A,B,B)&= c_0\big(A^\bot\cap(A+B),B^\bot\cap(A+B)\big),\\ \theta_\delta(A,X,B,B)&= c_0\big(A^\bot\cap(A+B),B^\bot\big),\\ \theta_\delta(A,A,B,X)&= c_0\big(A^\bot,B^\bot\cap(A+B)\big), \end{align} \end{subequations} where the CQ-numbers at $0$ are defined as in \eqref{e:CQn}. \end{proposition} \begin{proof} This follows from Proposition~\ref{p:nc-subsp}. \end{proof} We are now in a position to derive a striking connection between the CQ-number and the Friedrichs angle, which underlines a possible interpretation of the CQ-number as a generalized Friedrichs angle between two sets. \begin{theorem}[CQ-number of two linear subspaces and Friedrichs angle] \label{t:CQn=c} Let $A$ and $B$ be linear subspaces of $X$, and let $\delta>0$. Then \begin{equation} \theta_\delta(A,A,B,B)=\theta_\delta(A,X,B,B)=\theta_\delta(A,A,B,X)=c(A,B)<1, \end{equation} where the CQ-number at $0$ is defined as in \eqref{e:CQn}. \end{theorem} \begin{proof} On the one hand, using Fact~\ref{f:c0&c}\ref{f:c0&c-iii}, we have \begin{subequations} \begin{align} c(A,B)&=c(A^\bot,B^\bot)\\ &=c_0\big(A^\bot\cap(A^\bot\cap B^\bot)^\bot,B^\bot\cap(A^\bot\cap B^\bot)^\bot\big)\\ &=c_0\big(A^\bot\cap(A+B),B^\bot\cap(A+B)\big).
\end{align} \end{subequations} On the other hand, Fact~\ref{f:c0&c}\ref{f:c0&c-ii3} yields \begin{subequations} \begin{align} c_0\big(A^\bot\cap(A+B),B^\bot\big) &= c_0\big(A^\bot\cap(A^\bot\cap B^\bot)^\bot,B^\bot\big)\\ &= c(A^\bot,B^\bot)\\ &= c_0\big(A^\bot,B^\bot\cap(A^\bot\cap B^\bot)^\bot\big)\\ &=c_0\big(A^\bot,B^\bot\cap(A+B)\big). \end{align} \end{subequations} Altogether, recalling Proposition~\ref{p:CQn=c0}, we obtain the result. \end{proof} The results in this subsection have a simple generalization to intersecting affine subspaces. Indeed, if $A$ and $B$ are \ensuremath{\varnothing}h{intersecting} affine subspaces, then the corresponding Friedrichs angle is \begin{equation} c(A,B):=c(\ensuremath{\operatorname{par}} A,\ensuremath{\operatorname{par}} B). \end{equation} Combining \eqref{e:120405b} with Theorem~\ref{t:CQn=c}, we immediately obtain the following result. \begin{corollary}[CQ-number of two intersecting affine subspaces and Friedrichs angle] \label{c:CQn=c} Let $A$ and $B$ be affine subspaces of $X$, suppose that $c\in A\cap B$, and let $\delta>0$. Then \begin{equation} \theta_\delta(A,A,B,B)=\theta_\delta(A,X,B,B)=\theta_\delta(A,A,B,X)=c(A,B)<1, \end{equation} where the CQ-number at $c$ is defined as in \eqref{e:CQn}. \end{corollary} \section{Regularities} \label{s:superregularity} In this section, we study a notion of set regularity that is based on restricted normal cones. \begin{definition}[regularity and superregularity]\label{d:reg} Let $A$ and $B$ be nonempty subsets of $X$, and let $c\in X$. \begin{enumerate} \item We say that $B$ is \ensuremath{\varnothing}h{$(A,\varepsilon,\delta)$-regular} at $c\in X$ if $\varepsilon\geq 0$, $\delta>0$, and \begin{equation} \label{e:dreg} \left. \begin{array}{c} (y,b)\in B\times B,\\ \|y-c\| \leq \delta,\|b-c\|\leq \delta,\\ u\in \pn{B}{A}(b) \end{array} \right\}\quad\Rightarrow\quad \scal{u}{y-b}\leq \varepsilon\|u\|\cdot\|y-b\|. \end{equation} If $B$ is $(X,\varepsilon,\delta)$-regular at $c$, then we also simply speak of $(\varepsilon,\delta)$-regularity. \item The set $B$ is called $A$-\ensuremath{\varnothing}h{superregular} at $c\in X$ if for every $\varepsilon>0$ there exists $\delta>0$ such that $B$ is $(A,\varepsilon,\delta)$-regular at $c$. Again, if $B$ is $X$-superregular at $c$, then we also say that $B$ is superregular at $c$. \end{enumerate} \end{definition} \begin{remark} \label{r:0303a} Several comments on Definition~\ref{d:reg} are in order. \begin{enumerate} \item Superregularity with $A=X$ was introduced by Lewis, Luke and Malick in \cite[Section~4]{LLM}. Among other things, they point out that amenability and prox regularity are sufficient conditions for superregularity, while Clarke regularity is a necessary condition. \item The reference point $c$ does not have to belong to $B$. If $c\not\in \overline{B}$, then for every $\delta\in\left]0,d_B(c)\right[$, $B$ is $(0,\delta)$-regular at $c$; consequently, $B$ is superregular at $c$. \item If $\varepsilon_1>\varepsilon_2$ and $B$ is $(A,\varepsilon_2,\delta)$-regular at $c$ then $B$ is also $(A,\varepsilon_1,\delta)$-regular at $c$. \item If $\varepsilon \in \left[1,\ensuremath{+\infty}\right[$, then Cauchy-Schwarz implies that $B$ is $(\varepsilon,\ensuremath{+\infty})$-regular at every point in $X$. \item \label{r:0303av} It follows from Proposition~\ref{p:elementary}\ref{p:ele-ii} that $B$ is $(A_1\cup A_2,\varepsilon,\delta)$-regular at $c$ if and only if $B$ is both $(A_1,\varepsilon,\delta)$-regular and $(A_2,\varepsilon,\delta)$-regular at $c$. 
\item \label{r:0303avi} If $B$ is convex, then it follows from Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNvi} that $B$ is $(A,0,\ensuremath{+\infty})$-regular at $c$; consequently, $B$ is superregular. \item \label{r:0303avi+} Similarly, if $B$ is locally convex at $c$, i.e., there exists $\rho\in\ensuremath{\mathbb R}_{++}$ such that $B\cap\ball{c}{\rho}$ is convex, then $B$ is superregular at $c$. \item \label{r:0303avii} If $B$ is $(A,0,\delta)$-regular at $c$, then $B$ is $A$-superregular at $c$; the converse, however, is not true in general (see Example~\ref{ex:sphere1} below). \end{enumerate} \end{remark} As a first example, let us consider the sphere. \begin{example}[sphere] \label{ex:sphere1} Let $z\in X$ and $\rho\in\ensuremath{\mathbb R}_{++}$. Set $S:= \sphere{z}{\rho}$, suppose that $s\in S$, let $\varepsilon\in\ensuremath{\mathbb R}_{++}$, and let $\delta\in\ensuremath{\mathbb R}_{++}$. Then $S$ is $(\varepsilon,\rho\varepsilon)$-regular at $s$; consequently, $S$ is superregular at $s$ (see Definition~\ref{d:reg}). However, $S$ is not $(0,\delta)$-regular at $s$. \end{example} \begin{proof} Let $b\in S$ and $y\in S$. Then $\rho^2 = \|z-y\|^2=\|z-b\|^2+\|y-b\|^2-2\scal{z-b}{y-b} =\rho^2+\|y-b\|^2-2\scal{z-b}{y-b}$, which implies \begin{equation} \label{e:0308c} 2\scal{z-b}{y-b}=\|y-b\|^2. \end{equation} On the other hand, by Example~\ref{ex:ncS}, we have \begin{equation} \label{e:0308b} \pn{S}{X}(b) \cap \sphere{0}{1} = \bigg\{\pm \frac{z-b}{\|z-b\|}\bigg\} = \bigg\{\pm \frac{z-b}{\rho}\bigg\}. \end{equation} Combining \eqref{e:0308c} and \eqref{e:0308b}, we obtain \begin{equation}\label{e:0308c1} \scal{\pn{S}{X}(b)\cap\sphere{0}{1}}{y-b} = \bigg\{\pm\frac{1}{2\rho}\|y-b\|^2\bigg\}. \end{equation} Thus if $\|y-s\|\leq\rho\varepsilon$, $\|b-s\|\leq\rho\varepsilon$, and $u\in\pn{S}{X}(b) \cap \sphere{0}{1}$, then \begin{align} \scal{u}{y-b}&\leq \frac{1}{2\rho}\|y-b\|^2 \leq \frac{1}{2\rho}\big(\|y-s\|+\|s-b\|\big)\|y-b\| \leq \frac{\rho\varepsilon+\rho\varepsilon}{2\rho}\|y-b\|\\ &=\varepsilon\|u\|\cdot \|y-b\|, \end{align} which verifies the $(\varepsilon,\rho\varepsilon)$-regularity of $S$ at $s$. Finally, by \eqref{e:0308c1}, \begin{equation}\max\big\{ \tscal{\pn{S}{X}(b)\cap\sphere{0}{1}}{y-b} \big\} = \frac{1}{2\rho}\|y-b\|^2 >0 \end{equation} and therefore $S$ is not $(0,\delta)$-regular at $s$. \end{proof}
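The estimate in Example~\ref{ex:sphere1} is also easy to test numerically. The following Python sketch (an informal check only; the circle, the reference point, the value of $\varepsilon$, and the sample size are arbitrary choices made here) samples points $y,b$ of the unit circle within distance $\rho\varepsilon$ of $s$ and confirms that the quotient $\scal{u}{y-b}/(\|u\|\,\|y-b\|)$ stays below $\varepsilon$ for the unit proximal normals $u=\pm(z-b)/\rho$, while being strictly positive for some pairs, so that $(0,\delta)$-regularity indeed fails.
\begin{verbatim}
# Numerical check of Example ex:sphere1 on the unit circle centered at z:
# for y, b on S with ||y - s||, ||b - s|| <= rho*eps and unit proximal
# normals u = +/-(z - b)/rho, the quotient <u, y - b>/(||u|| ||y - b||)
# is bounded by eps, yet strictly positive for some pairs.
import numpy as np

z, rho, eps = np.array([0.0, 0.0]), 1.0, 0.1
s = np.array([rho, 0.0])                       # reference point on S

def point(t):                                  # parametrization of S
    return z + rho * np.array([np.cos(t), np.sin(t)])

rng = np.random.default_rng(1)
worst = -np.inf
for _ in range(10000):
    ty, tb = rng.uniform(-eps, eps, size=2)    # |t| <= eps gives distance <= rho*eps
    y, b = point(ty), point(tb)
    if np.linalg.norm(y - b) < 1e-12:
        continue
    for sign in (1.0, -1.0):
        u = sign * (z - b) / rho               # unit proximal normal at b
        worst = max(worst, np.dot(u, y - b) / np.linalg.norm(y - b))
print(worst, eps)                              # worst quotient is positive and < eps
\end{verbatim}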
We now characterize $A$-superregularity using restricted normal cones. \begin{theorem}[characterization of $A$-superregularity] \label{t:Asreg} Let $A$ and $B$ be nonempty subsets of $X$, and let $c\in X$. Then $B$ is $A$-superregular at $c$ if and only if for every $\varepsilon\in\ensuremath{\mathbb R}_{++}$, there exists $\delta\in\ensuremath{\mathbb R}_{++}$ such that \begin{equation} \label{e:Asreg} \left. \begin{array}{c} (y,b)\in B\times B\\ \|y-c\| \leq \delta,\|b-c\|\leq \delta\\ u\in \nc{B}{A}(b) \end{array} \right\}\quad\Rightarrow\quad \scal{u}{y-b}\leq \varepsilon\|u\|\cdot\|y-b\|. \end{equation} \end{theorem} \begin{proof} ``$\Leftarrow$'': Clear from Lemma~\ref{l:NsubsetN}\ref{l:NsubsetNiv}. ``$\Rightarrow$'': We argue by contradiction; thus, we assume that there exist $\varepsilon\in\ensuremath{\mathbb R}_{++}$ and sequences $(y_n,b_n,u_n)_\ensuremath{{n\in{\mathbb N}}}$ in $B\times B\times X$ such that $(y_n,b_n)\to(c,c)$ and for every $\ensuremath{{n\in{\mathbb N}}}$, \begin{equation} u_n\in\nc{B}{A}(b_n) \quad\text{and}\quad \scal{u_n}{y_n-b_n}>\varepsilon\|u_n\|\cdot\|y_n-b_n\|. \end{equation} By the definition of the restricted normal cone, for every $\ensuremath{{n\in{\mathbb N}}}$, there exists a sequence $(b_{n,k},u_{n,k})_{k\in\ensuremath{\mathbb N}}$ in $B\times X$ such that $\lim_{k\in\ensuremath{\mathbb N}}b_{n,k}=b_n$, $\lim_{k\in\ensuremath{\mathbb N}}u_{n,k}=u_n$, and $(\forall k\in\ensuremath{\mathbb N})$ $u_{n,k}\in \pn{B}{A}(b_{n,k})$. Hence there exists a subsequence $(k_n)_\ensuremath{{n\in{\mathbb N}}}$ of $(n)_\ensuremath{{n\in{\mathbb N}}}$ such that $b_{n,k_n}\to c$ and \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{u_{n,k_n}}{y_n-b_{n,k_n}}>\frac{\varepsilon}{2}\|u_{n,k_n}\|\cdot \|y_n-b_{n,k_n}\|. \end{equation} However, this contradicts the $A$-superregularity of $B$ at $c$. \end{proof} When $A=X$, Theorem~\ref{t:Asreg} turns into \cite[Proposition~4.4]{LLM}: \begin{corollary}[Lewis-Luke-Malick] \label{c:LLMsreg} Let $B$ be a nonempty subset of $X$ and let $c\in B$. Then $B$ is superregular at $c$ if and only if for every $\varepsilon\in\ensuremath{\mathbb R}_{++}$ there exists $\delta\in\ensuremath{\mathbb R}_{++}$ such that \begin{equation} \label{e:sreg} \left. \begin{array}{c} (y,b)\in B\times B\\ \|y-c\| \leq \delta,\|b-c\|\leq \delta\\ u\in N_B(b) \end{array} \right\}\quad\Rightarrow\quad \scal{u}{y-b}\leq \varepsilon\|u\|\cdot\|y-b\|. \end{equation} \end{corollary} We now introduce the notion of joint-regularity, which is tailored for collections of sets and which turns into Definition~\ref{d:reg} when the index set is a singleton. \begin{definition}[joint-regularity] \label{d:jreg} Let $A$ be a nonempty subset of $X$, let $\ensuremath{\mathcal B} := (B_j)_{j\in J}$ be a nontrivial collection of nonempty subsets of $X$, and let $c\in X$. \begin{enumerate} \item We say that $\ensuremath{\mathcal B}$ is $(A,\varepsilon,\delta)$-joint-regular at $c$ if $\varepsilon\geq 0$, $\delta>0$, and for every $j\in J$, $B_j$ is $(A,\varepsilon,\delta)$-regular at $c$. \item The collection $\ensuremath{\mathcal B}$ is $A$-joint-superregular at $c$ if for every $j\in J$, $B_j$ is $A$-superregular at $c$. \end{enumerate} As in Definition~\ref{d:reg}, we may omit the prefix $A$ if $A=X$. \end{definition} Here are some verifiable conditions that guarantee joint-(super)regularity. \begin{proposition} \label{p:jsreg} Let $\ensuremath{\mathcal A} := (A_j)_{j\in J}$ and $\ensuremath{\mathcal B} := (B_j)_{j\in J}$ be nontrivial collections of nonempty subsets of $X$, let $c\in X$, let $(\varepsilon_j)_{j\in J}$ be a collection in $\ensuremath{\mathbb R}_+$, and let $(\delta_j)_{j\in J}$ be a collection in $\left]0,\ensuremath{+\infty}\right]$. Set $A := \bigcap_{j\in J}A_j$, $\varepsilon := \sup_{j\in J}\varepsilon_j$, and $\delta := \inf_{j\in J}\delta_j$. Then the following hold: \begin{enumerate} \item \label{p:jsreg-i} If $\delta>0$ and $(\forall j\in J)$ $B_j$ is $(A_j,\varepsilon_j,\delta_j)$-regular at $c$, then $\ensuremath{\mathcal B}$ is $(A,\varepsilon,\delta)$-joint-regular at $c$. \item \label{p:jsreg-ii} If $J$ is finite and $(\forall j\in J)$ $B_j$ is $(A_j,\varepsilon_j,\delta_j)$-regular at $c$, then $\ensuremath{\mathcal B}$ is $(A,\varepsilon,\delta)$-joint-regular at $c$. \item \label{p:jsreg-iii} If $J$ is finite and $(\forall j\in J)$ $B_j$ is $A_j$-superregular at $c$, then $\ensuremath{\mathcal B}$ is $A$-joint-superregular at $c$. \end{enumerate} \end{proposition} \begin{proof} \ref{p:jsreg-i}: Indeed, since $A\subseteq A_j$, $\varepsilon\geq\varepsilon_j$, and $\delta\leq\delta_j$, it follows (see also Remark~\ref{r:0303a}\ref{r:0303av}) that $B_j$ is $(A,\varepsilon,\delta)$-regular at $c$ for every $j\in J$.
\ref{p:jsreg-ii}: Since $J$ is finite, we have $\delta>0$ and so the conclusion follows from \ref{p:jsreg-i}. \ref{p:jsreg-iii}: This follows from \ref{p:jsreg-ii} and the definitions. \end{proof} \begin{corollary}[convexity and regularity] \label{c:jsreg} Let $\ensuremath{\mathcal B} := (B_j)_{j\in J}$ be a nontrivial collection of nonempty convex subsets of $X$, let ${A}\subseteq X$, and let $c\in X$. Then $\ensuremath{\mathcal B}$ is $(0,\ensuremath{+\infty})$-joint-regular, $(A,0,\ensuremath{+\infty})$-joint-regular, joint-superregular, and $A$-joint-superregular at $c$. \end{corollary} \begin{proof} By Remark~\ref{r:0303a}\ref{r:0303avi}, $B_j$ is $(0,\ensuremath{+\infty})$-regular, superregular, and $A$-superregular at $c$, for every $j\in J$. Now apply Proposition~\ref{p:jsreg}\ref{p:jsreg-i}\&\ref{p:jsreg-iii}. \end{proof} The following example illustrates the flexibility gained through the notion of joint-regularity. \begin{example}[two lines: joint-superregularity $\not\Rightarrow$ superregularity of the union] \label{ex:badlines} Suppose that $d_1$ and $d_2$ are in $\sphere{0}{1}$. Set $B_1 := \ensuremath{\mathbb R} d_1$, $B_2 := \ensuremath{\mathbb R} d_2$, and $B := B_1\cup B_2$, and assume that $B_1\cap B_2=\{0\}$. By Corollary~\ref{c:jsreg}, $(B_1,B_2)$ is joint-superregular at $0$. Let $\delta\in\ensuremath{\mathbb R}_+P$, and set $b := \delta d_1$ and $y := \delta d_2$. Then $\|y-0\|=\delta$, $\|b-0\|=\delta$, and $0<\|y-b\| = \delta\|d_2-d_1\|$. Using Proposition~\ref{p:Nequi}\ref{p:Ne-iii}, we see that $N_{B}(b)=\{d_1\}^\perp$. Note that there exists $v\in \{d_1\}^\perp$ such that $\scal{v}{d_2}\neq 0$ (for otherwise $\{d_1\}^\perp \subseteq \{d_2\}^\perp$ $\Rightarrow$ $B_2\subseteq B_1$, which is absurd). Hence there exists $u\in \{d_1\}^\perp= \{b\}^\perp = N_B(b)$ such that $\|u\|=1$ and $\scal{u}{d_2}>0$. It follows that $\scal{u}{y-b} = \scal{u}{y} =\delta\scal{u}{d_2}=\scal{u}{d_2}\|u\|\|y-b\|/\|d_2-d_1\|$. Therefore, $B$ is not superregular at $0$. \end{example} Let us provide an example of an $A$-superregular set that is not superregular. To do so, we require the following elementary result. \begin{lemma} \label{l:0303a} Consider in $\ensuremath{\mathbb R}^2$ the sets $C := [(0,1),(m,1+m^2)] = \menge{(x,1+mx)}{x\in[0,m]}$ and $D := [(m,1),(m,1+m^2)]$, where $m\in\ensuremath{\mathbb R}_+P$. Let $z\in\ensuremath{\mathbb R}$. Then \begin{equation} \label{e:0303a} P_{C\cup D}(z,0) = \begin{cases} (0,1), &\text{if $z<m/2$;}\\ \{(0,1),(m,1)\}, &\text{if $z=m/2$;}\\ (m,1), &\text{if $z>m/2$.} \end{cases} \end{equation} \end{lemma} \begin{proof} It is clear that $P_D(z,0)=(m,1)$. We assume that $0<z<m$ for otherwise \eqref{e:0303a} is clearly true. We claim that $P_C(z,0)=(0,1)$. Indeed, $f\colon x\mapsto \|(x,1+mx)-(z,0)\|^2$ is a convex quadratic with minimizer $x_z := (z-m)/(1+m^2)$. The requirement $x_z\geq 0$ from the definition of $C$ forces $z\geq m$, which is a contradiction. Hence $P_C(z,0)$ is a subset of the relative boundary of $C$, i.e., of $\{(0,1),(m,1+m^2)\}$. Clearly, $(0,1)$ is the closer to $(z,0)$ than $(m,1+m^2)$. This verifies the claim. Since $P_{C\cup D}(z,0)$ is the subset of points in $P_C(z,0)\cup P_D(z,0)$ closest to $(z,0)$, the result follows. \end{proof} \begin{example}[$A$-superregularity $\not\Rightarrow$ superregularity] \label{ex:saw} Suppose that $X=\ensuremath{\mathbb R}^2$. 
As in \cite[Example~4.6]{LLM}, we consider $c:=(0,0)\in X$ and $B :=\ensuremath{\operatorname{epi}} f$, where \begin{equation} f\colon\ensuremath{\mathbb R}\to\ensuremath{\,\left]-\infty,+\infty\right]}\colon x\mapsto \begin{cases} 2^k(x-2^k),&\text{if $2^k\leq x < 2^{k+1}$ and $k\in \ensuremath{\mathbb Z}$;}\\ 0,&\text{if $x=0$;}\\ \ensuremath{+\infty},&\text{if $x<0$.} \end{cases} \end{equation} Then $B$ is not superregular at $c$; however, $B$ is $A$-superregular at $c$, where $A := \ensuremath{\mathbb R}\times\{-1\}$. \end{example} \begin{proof} It is stated in \cite[Example~4.6]{LLM} that $B$ is not superregular at $c$ (and that $B$ is Clarke regular at $c$). To tackle $A$-superregularity, let us determine $P_B(A)$. Let us consider the point $a = (\alpha,-1)$, where $\alpha\in\left[2^{-1},1\right[$. Then Lemma~\ref{l:0303a} (see also the picture below) implies that \begin{equation} P_B(\alpha,-1)=\begin{cases} (\thalb,0), &\text{if $\thalb\leq\alpha<\tfrac{3}{4}$;}\\ \big\{(\thalb,0),(1,0)\big\}, &\text{if $\alpha=\tfrac{3}{4}$;}\\ (1,0), &\text{if $\tfrac{3}{4}<\alpha<1$;} \end{cases} \end{equation} \setlength{\unitlength}{0.03in} \begin{picture}(150,145)(-25,-120) \put(10,10){$B=\ensuremath{\operatorname{epi}} f$} \put(100,0){\line(1,1){25}} \put(100,-6){$1$} \put(50,0){\line(2,1){50}} \put(45,-6){$\frac{1}{2}$} \put(25,0){\line(4,1){25}} \put(22,-6){$\frac{1}{4}$} \put(12.5,0){\line(6,1){12.5}} \put(10,-6){$\frac{1}{8}$} \put(6.25,0){\line(6,1){6.25}} \put(3,-6){$\frac{1}{16}$} \put(3.125,0){\line(6,1){3.125}} \put(-4,-6){$0$} \put(0,0){\line(1,0){4}} \put(0,0){\varepsilonctor(0,1){25}} \put(120,0){\varepsilonctor(1,0){10}} \put(100,0){\line(0,1){25}} \put(50,0){\line(0,1){6.25}} \put(25,0){\line(0,1){2}} \put(12.5,0){\line(0,1){1}} \put(6.25,0){\line(0,1){0.5}} \multiput(-25,0)(5,0){30}{\line(1,0){2}} \multiput(0,-2)(0,-5){22}{\line(0,-1){2}} \multiput(75,-100)(3,12){9}{\line(1,4){1.2}} \multiput(75,-100)(-3,12){9}{\line(-1,4){1.2}} \put(75,-106){$\frac{3}{4}$} \multiput(37.5,-100)(3,24){5}{\line(1,6){1}} \multiput(37.5,-100)(-3,24){5}{\line(-1,6){1}} \put(37.5,-106){$\frac{3}{8}$} \multiput(25,-100)(0,12){9}{\line(0,4){4}} \put(25,-100){\circle*{2}} \put(25,-106){$\frac{1}{4}$} \multiput(50,-100)(0,12){9}{\line(0,4){4}} \put(50,-100){\circle*{2}} \put(50,-106){$\frac{1}{2}$} \multiput(100,-100)(0,12){9}{\line(0,4){4}} \put(100,-100){\circle*{2}} \put(100,-106){$1$} \linethickness{0.5mm} \put(-25,-100){\line(1,0){150}} \put(-23,-106){$A=\ensuremath{\mathbb R}\times\{-1\}$} \put(-10,-97){$-1$} \end{picture} and more generally, \begin{equation} 2^k\leq \alpha < 2^{k+1} \;\Rightarrow\; P_B(\alpha,-1)=\begin{cases} (2^k,0), &\text{if $2^k\leq\alpha<2^k+2^{k-1}$;}\\ \big\{(2^k,0),(2^{k+1},0)\big\}, &\text{if $\alpha=2^k+2^{k-1}$;}\\ (2^{k+1},0), &\text{if $2^k+2^{k-1}<\alpha<2^{k+1}$.} \end{cases} \end{equation} Clearly, if $a\in\ensuremath{\mathbb R}_-\times\{-1\}$, then $P_B(a)=(0,0)$. Let $b\in B$. 
Then \begin{equation} A \cap P^{-1}_B(b)=\begin{cases} \big[2^{k-2}+2^{k-1},2^{k-1}+2^k\big]\times\{-1\}, &\text{if $b=(2^k,0)$ and $k\in\ensuremath{\mathbb Z}$;}\\ \ensuremath{\mathbb R}_-\times\{-1\}, &\text{if $b=(0,0)$;}\\ \varnothing,&\text{otherwise.} \end{cases} \end{equation} Thus \begin{equation} \pn{B}{A}(b)=\begin{cases} \ensuremath{\operatorname{cone}}\Big(\big[-2^{k-2},2^{k-1}\big]\times\{-1\}\Big), &\text{if $b=(2^k,0)$ and $k\in\ensuremath{\mathbb Z}$;}\\ \{(0,0)\}\cup \big(\ensuremath{\mathbb R}_-\times\ensuremath{\mathbb R}_{--}\big), &\text{if $b=(0,0)$;}\\ \{(0,0)\},&\text{otherwise.} \end{cases} \end{equation} Let $\varepsilon\in\ensuremath{\mathbb R}_{++}$. Let $K\in\ensuremath{\mathbb Z}$ be such that $2^{K-1}\leq\varepsilon$, and let $\delta\in \left]0,2^K\right]$. Furthermore, let $y=(y_1,y_2)\in B$, let $b=(b_1,b_2)\in B$, let $u\in\pn{B}{A}(b)$, and assume that $\|y-c\|\leq\delta$ and that $\|b-c\|\leq\delta$. We consider three cases. \emph{Case~1:} $b=(0,0)$. Then $u\in\ensuremath{\mathbb R}_-^2$ and $y\in\ensuremath{\mathbb R}_+^2$; consequently, $\scal{u}{y-b} = \scal{u}{y}\leq 0 \leq\varepsilon\|u\|\cdot\|y-b\|$. \emph{Case~2:} $b\notin (\{0\}\cup 2^\ensuremath{\mathbb Z})\times\{0\}$. Then $\pn{B}{A}(b)=\{(0,0)\}$; hence $u=0$ and so $\scal{u}{y-b}=0\leq \varepsilon\|u\|\cdot\|y-b\|$. \emph{Case~3:} $b\in 2^\ensuremath{\mathbb Z}\times\{0\}$, say $b=(2^k,0)$, where $k\in \ensuremath{\mathbb Z}$. Since $2^k=\|b-0\|=\|b-c\|\leq\delta\leq 2^K$, we have $k\leq K$. Furthermore, $y_2\geq 0$, $\max\{|y_1-b_1|,|y_2-b_2|\}\leq\|y-b\|$, and $u=\lambda(t,-1)=(\lambda t,-\lambda)$ where $t\in[-2^{k-2},2^{k-1}]$ and $\lambda\geq0$. Hence $\lambda\leq\|u\|$ and \begin{subequations} \begin{align} \scal{u}{y-b}&=\lambda t(y_1-b_1)-\lambda (y_2-b_2) =\lambda t(y_1-b_1)-\lambda (y_2-0) \\ &\leq \lambda t(y_1-b_1) \leq \lambda|t|\cdot|y_1-b_1|\\ &\leq \|u\|\cdot 2^{k-1}\cdot\|y-b\| \leq 2^{K-1}\|u\|\cdot\|y-b\| \leq \varepsilon\cdot\|u\|\cdot\|y-b\|. \end{align} \end{subequations} Therefore, in all three cases, we have shown that $\scal{u}{y-b} \leq\varepsilon\|u\|\cdot\|y-b\|$. \end{proof} We now use Example~\ref{ex:saw} to construct an example complementary to Example~\ref{ex:badlines}. \begin{example}[superregularity of the union $\not\Rightarrow$ joint-superregularity] Suppose that $X=\ensuremath{\mathbb R}^2$, set $B_1 := \ensuremath{\operatorname{epi}} f$, where $f$ is as in Example~\ref{ex:saw}, $B_2 := X\smallsetminus B_1$, and $c := (0,0)$. Since $B_1\cup B_2= X$ is convex, it is clear from Remark~\ref{r:0303a}\ref{r:0303avi} that $B_1\cup B_2$ is superregular at $c$. On the other hand, since $B_1$ is not superregular at $c$ (see Example~\ref{ex:saw}), it is obvious that $(B_1,B_2)$ is not joint-superregular at $c$. \end{example} \section{The method of alternating projections (MAP)} \label{s:application} We now apply the machinery of restricted normal cones and associated results to derive linear convergence results. \subsection*{On the composition of two projection operators} The method of alternating projections iterates projection operators. Thus, in the next few results, we focus on the outcome of a single iteration of the composition. \begin{lemma} \label{l:0305a} Let $A$ and $B$ be nonempty closed subsets of $X$.
Then the following hold\footnote{We denote by $\ensuremath{\operatorname{bdry}}_{\ensuremath{\operatorname{aff}} A\cup B}(S)$ the boundary of $S\subseteq X$ with respect to $\ensuremath{\operatorname{aff}}(A\cup B)$.}: \begin{enumerate} \item \label{l:0305ai} $P_A(B\smallsetminus A)\subseteq \ensuremath{\operatorname{bdry}}_{\ensuremath{\operatorname{aff}} A\cup B}A \subseteq \ensuremath{\operatorname{bdry}} A$. \item \label{l:0305aii} $P_B(A\smallsetminus B)\subseteq \ensuremath{\operatorname{bdry}}_{\ensuremath{\operatorname{aff}} A\cup B}(B)\subseteq \ensuremath{\operatorname{bdry}} B$. \item \label{l:0305aiii} If $b\in B$ and $a\in P_A b$, then: \begin{equation} a\in (\ensuremath{\operatorname{bdry}} A)\smallsetminus B \;\Leftrightarrow\; a\in A\smallsetminus B \;\Rightarrow\; b\in B\smallsetminus A \;\Rightarrow\; a\in\ensuremath{\operatorname{bdry}} A. \end{equation} \item \label{l:0305aiv} If $a\in A$ and $b\in P_B a$, then: \begin{equation} b\in (\ensuremath{\operatorname{bdry}} B)\smallsetminus A \;\Leftrightarrow\; b\in B\smallsetminus A \;\Rightarrow\; a\in A\smallsetminus B \;\Rightarrow\; b\in \ensuremath{\operatorname{bdry}} B. \end{equation} \end{enumerate} \end{lemma} \begin{proof} \ref{l:0305ai}: Take $b\in B\smallsetminus A$ and $a\in P_Ab$. Assume to the contrary that there exists $\delta\in\ensuremath{\mathbb R}_+P$ such that $\ensuremath{\operatorname{aff}}(A\cup B)\cap \ball{a}{\delta}\subseteq A$. Hence $\wt{a} := a+\delta(b-a)/\|b-a\|\in A$ and thus $d_A(b)\leq d(\wt{a},b)<d(a,b)=d_A(b)$, which is absurd. \ref{l:0305aii}: Interchange the roles of $A$ and $B$ in \ref{l:0305ai}. \ref{l:0305aiii}: If $a\in (\ensuremath{\operatorname{bdry}} A)\smallsetminus B$, then clearly $a\in A\smallsetminus B$. Now assume that $a\in A\smallsetminus B$. If $b\in A$, then $a\in P_Ab=\{b\}\subseteq B$, which is absurd. Hence $b\in B\smallsetminus A$ and thus \ref{l:0305ai} implies that $a\in P_A(B\smallsetminus A)\subseteq\ensuremath{\operatorname{bdry}} A$. \ref{l:0305aiv}: Interchange the roles of $A$ and $B$ in \ref{l:0305aiii}. \end{proof} \begin{lemma} \label{l:Ed} Let $A$ and $B$ be nonempty closed subsets of $X$, let $c\in X$, let $y\in B$, let $a\in P_Ay$, let $b\in P_Ba$, and let $\delta\in\ensuremath{\mathbb R}_+$. Assume that $d_A(y)\leq\delta$ and that $d(y,c)\leq\delta$. Then the following hold: \begin{enumerate} \item \label{l:Edi} $d(a,c)\leq 2\delta$. \item \label{l:Edii} $d(b,y)\leq 2d(a,y)\leq 2\delta$. \item \label{l:Ediii} $d(b,c)\leq3\delta$. \end{enumerate} \end{lemma} \begin{proof} Since $y\in B$, we have \begin{equation} \label{e:1026a} d(a,b)=d_B(a)\leq d(a,y)=d_A(y)\leq\delta. \end{equation} Thus, \begin{equation} \label{e:1026b} d(a,c)\leq d(a,y)+d(y,c) \leq \delta + \delta = 2\delta, \end{equation} which establishes \ref{l:Edi}. Using \eqref{e:1026a}, we also conclude that $d(b,y)\leq d(b,a) + d(a,y) \leq 2d(a,y)\leq 2\delta$; hence, \ref{l:Edii} holds. Finally, combining \eqref{e:1026a} and \eqref{e:1026b}, we obtain \ref{l:Ediii} via $d(b,c) \leq d(b,a) + d(a,c) \leq \delta + 2\delta = 3\delta$. \end{proof} \begin{corollary}\label{c:0330a} Let $A$ and $B$ be nonempty closed subsets of $X$, let $\rho\in\ensuremath{\mathbb R}_+P$, and suppose that $c\in A\cap B$. Then \begin{equation} P_AP_BP_A\ball{c}{\rho}\subseteq\ball{c}{6\rho}. \end{equation} \end{corollary} \begin{proof} Let $b_{-1}\in\ball{c}{\rho}$, $a_0\in P_A b_{-1}$, $b_0\in P_B a_0$, and $a_1\in P_A b_0$. 
We have $d(a_0,b_{-1})=d_A(b_{-1})\leq d(b_{-1},c)\leq\rho$, so $d_B(a_0)\leq d(a_0,c)\leq d(a_0,b_{-1})+d(b_{-1},c)\leq 2\rho$. Applying Lemma~\ref{l:Ed}\ref{l:Ediii} to the sets $B$ and $A$, the points $a_0,b_0,a_1$, and $\delta = 2\rho$, we deduce that $d(a_1,c)\leq 3(2\rho)=6\rho$. \end{proof} The next two results are essential to guarantee a local contractive property of the composition. \begin{proposition}[regularity and contractivity] \label{p:effect1} Let $A$ and $B$ be nonempty closed subsets of $X$, let $\wt{A}$ and $\wt{B}$ be nonempty subsets of $X$, let $c\in X$, let $\varepsilon\geq 0$, and let $\delta>0$. Assume that $B$ is $(\wt{A},\varepsilon,3\delta)$-regular at $c$ (see Definition~\ref{d:reg}). Furthermore, assume that $y\in B\cap\wt{B}$, that $a\in P_A(y)\cap\wt{A}$, that $b\in P_B(a)$, that $\|y-c\|\leq\delta$, and that $d_A(y)\leq\delta$. Then \begin{equation} \|a-b\|\leq(\theta_{3\delta}+2\varepsilon)\|a-y\|, \end{equation} where $\theta_{3\delta}$ is the CQ-number at $c$ associated with $(A,\wt{A},B,\wt{B})$ (see \eqref{e:CQn}). \end{proposition} \begin{proof} Lemma~\ref{l:Ed}\ref{l:Edi}\&\ref{l:Ediii} yields $\|a-c\|\leq 2\delta$ and $\|b-c\|\leq 3\delta$. On the other hand, $y-a\in\pn{A}{\wt{B}}(a)$ and $b-a\in-\pn{B}{\wt{A}}(b)$. Therefore, \begin{equation} \label{e:1026d} \scal{b-a}{y-a}\leq\theta_{3\delta}\|b-a\|\cdot\|y-a\|. \end{equation} Since $a-b\in\pn{B}{\wt{A}}(b)$, $\|y-c\|\leq \delta$, and $\|b-c\|\leq3\delta$, we obtain, using the $(\wt{A},\varepsilon,3\delta)$-regularity of $B$, that $\scal{a-b}{y-b}\leq\varepsilon\|a-b\|\cdot\|y-b\|$. Moreover, Lemma~\ref{l:Ed}\ref{l:Edii} states that $\|y-b\|\leq 2\|a-y\|$. It follows that \begin{equation} \label{e:1026e} \scal{a-b}{y-b}\leq 2\varepsilon\|a-b\|\cdot\|a-y\|. \end{equation} Adding \eqref{e:1026d} and \eqref{e:1026e} yields $\|a-b\|^2\leq(\theta_{3\delta}+2\varepsilon)\|a-b\|\cdot\|a-y\|$. The result follows. \end{proof} We now provide a result for collections of sets similar to---and relying upon---Proposition~\ref{p:effect1}. \begin{proposition}[joint-regularity and contractivity] \label{p:effect} Let $\ensuremath{\mathcal A}:=(A_i)_{i\in I}$ and $\ensuremath{\mathcal B}:=(B_j)_{j\in J}$ be nontrivial collections of nonempty closed subsets of $X$. Assume that $A:=\bigcup_{i\in I}A_i$ and $B:=\bigcup_{j\in J}B_j$ are closed, and that $c\in A\cap B$. Let $\wt{\ensuremath{\mathcal A}}:=(\wt{A}_i)_{i\in I}$ and $\wt{\ensuremath{\mathcal B}}:=(\wt{B}_j)_{j\in J}$ be nontrivial collections of nonempty subsets of $X$ such that $(\forall i\in I)$ $P_{A_i}((\ensuremath{\operatorname{bdry}} B)\smallsetminus A)\subseteq\wt{A}_i$ and $(\forall j\in J)$ $P_{B_j}((\ensuremath{\operatorname{bdry}} A)\smallsetminus B)\subseteq\wt{B}_j$. Set $\wt{A}:=\bigcup_{i\in I}\wt{A}_i$ and $\wt{B}:=\bigcup_{j\in J}\wt{B}_j$, let $\varepsilon\geq 0$, and let $\delta> 0$. \begin{enumerate} \item\label{p:effect-i} If $b\in (\ensuremath{\operatorname{bdry}} B)\smallsetminus A$ and $a\in P_A(b)$, then $(\ensuremath{\exists\,} i\in I)$ $a\in P_{A_i}(b)\subseteq A_i\cap\wt{A}_i$. \item\label{p:effect-i1} If $a\in (\ensuremath{\operatorname{bdry}} A)\smallsetminus B$ and $b\in P_B(a)$, then $(\ensuremath{\exists\,} j\in J)$ $b\in P_{B_j}(a)\subseteq B_j\cap\wt{B}_j$. \item\label{p:0329a-iii} If $y\in B$, $a\in P_A(y)$ and $b\in P_B(a)$, then: \begin{equation} b\in\big((\ensuremath{\operatorname{bdry}} B)\smallsetminus A\big)\cap\bigcup_{j\in J}(B_j\cap\wt{B}_j)\ \Leftrightarrow\ b\in B\smallsetminus A\ \Rightarrow\ a\in A\smallsetminus B.
\end{equation} \item\label{p:0329a-iv} If $x\in A$, $b\in P_B(x)$, and $a\in P_A(b)$, then: \begin{equation} a\in\big((\ensuremath{\operatorname{bdry}} A)\smallsetminus B\big)\cap\bigcup_{i\in I}(A_i\cap\wt{A}_i)\ \Leftrightarrow\ a\in A\smallsetminus B \ \Rightarrow\ b\in B\smallsetminus A. \end{equation} \item \label{p:effect-ii} Suppose that $\ensuremath{\mathcal B}$ is $(\wt{A},\varepsilon,3\delta)$-joint-regular at $c$ (see Definition~\ref{d:jreg}), that $y\in ((\ensuremath{\operatorname{bdry}} B)\smallsetminus A)\cap \bigcup_{j\in J}(B_j\cap\wt{B}_j)$, that $a\in P_A(y)$, that $b\in P_B(a)$, and that $\|y-c\|\leq\delta$. Then \begin{equation} \|b-a\|\leq(\theta_{3\delta}+2\varepsilon)\|a-y\|, \end{equation} where $\theta_{3\delta}$ is the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see \eqref{e:jCQn}). \item \label{p:effect-iv} Suppose that $\ensuremath{\mathcal A}$ is $(\wt{B},\varepsilon,3\delta)$-joint-regular at $c$ (see Definition~\ref{d:jreg}), that $x\in ((\ensuremath{\operatorname{bdry}} A)\smallsetminus B)\cap \bigcup_{i\in I}(A_i\cap\wt{A}_i)$, that $b\in P_B(x)$, that $a\in P_A(b)$, and that $\|x-c\|\leq\delta$. Then \begin{equation} \|a-b\|\leq(\theta_{3\delta}+2\varepsilon)\|b-x\|, \end{equation} where $\theta_{3\delta}$ is the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see \eqref{e:jCQn}). \end{enumerate} \end{proposition} \begin{proof} \ref{p:effect-i}\&\ref{p:effect-i1}: Clear from Lemma~\ref{l:unionproj} and the assumptions. \ref{p:0329a-iii}: Note that Lemma~\ref{l:0305a}\ref{l:0305aiv}\&\ref{l:0305aiii} and \ref{p:effect-i1} yield the implications \begin{equation} b\in B\smallsetminus A \;\Leftrightarrow\; b\in(\ensuremath{\operatorname{bdry}} B)\smallsetminus A \;\Rightarrow\; a\in A\smallsetminus B \;\Leftrightarrow\; a\in (\ensuremath{\operatorname{bdry}} A)\smallsetminus B \;\Rightarrow\; b\in\bigcup_{j\in J}(B_j\cap\wt{B}_j), \end{equation} which give the conclusion. \ref{p:0329a-iv}: Interchange the roles of $A$ and $B$ in \ref{p:0329a-iii}. \ref{p:effect-ii}: There exists $j\in J$ such that $y\in B_j\cap\wt{B}_j\cap ((\ensuremath{\operatorname{bdry}} B)\smallsetminus A)$. Let $b'\in P_{B_j}a$. Then \begin{equation} \label{e:0304a} \|a-b\|=d_B(a)\leq d_{B_j}(a)=\|a-b'\|. \end{equation} Since $\ensuremath{\mathcal B}$ is $(\wt{A},\varepsilon,3\delta)$-joint-regular at $c$, it is clear that $B_j$ is $(\wt{A},\varepsilon,3\delta)$-regular at $c$. Since $y\in (\ensuremath{\operatorname{bdry}} B)\smallsetminus A$ and because of \ref{p:effect-i}, there exists $i\in I$ such that $a\in P_{A_i}y\subseteq \wt{A}_i$. Since $\wt{A}_i\subseteq \wt{A}$, it follows that (see also Remark~\ref{r:0303a}\ref{r:0303av}) $B_j$ is $(\wt{A}_i,\varepsilon,3\delta)$-regular at $c$. Since $y\in B_j\cap\wt{B}_j$, $a\in P_{A_i}y\cap\wt{A}_i$, $b'\in P_{B_j}a$, and $d_{A_i}(y)=d_A(y)=\|y-a\|\leq\|y-c\|\leq\delta$, we obtain from Proposition~\ref{p:effect1} that \begin{equation} \|a-b'\|\leq \big(\theta_{3\delta}(A_i,\wt{A}_i,B_j,\wt{B}_j) +2\varepsilon\big)\|a-y\|. \end{equation} Combining with \eqref{e:0304a}, we deduce that $\|a-b\|\leq\|a-b'\|\leq(\theta_{3\delta}+2\varepsilon)\|a-y\|$. \ref{p:effect-iv}: This follows from \ref{p:effect-ii} and \eqref{e:120405a}. \end{proof}
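Before turning to the abstract convergence result, it may help to see the contraction factor of Proposition~\ref{p:effect1} in action. The following Python sketch (an illustration under simplifying assumptions: two lines through the origin of the plane, which are convex, so that one may take $\varepsilon=0$; the angle and the starting point are arbitrary choices made here) alternates the two projections and reports the ratios $\|a_{n+1}-b_{n+1}\|/\|a_{n+1}-b_n\|$, which agree with the CQ-number $|\scal{w_a}{w_b}|$ computed in Proposition~\ref{p:CQn2l}.
\begin{verbatim}
# Alternating projections between two lines A = R*wa and B = R*wb in the
# plane (both convex, so epsilon = 0).  The half-step ratios
# ||a_{n+1} - b_{n+1}|| / ||a_{n+1} - b_n|| match the CQ-number
# |<wa, wb>| = cos(phi), consistent with Proposition p:effect1, and each
# full step contracts the gap by cos(phi)^2; both sequences tend to 0.
import numpy as np

phi = 0.3                                       # angle between the lines
wa = np.array([1.0, 0.0])
wb = np.array([np.cos(phi), np.sin(phi)])

def proj_line(w, x):                            # projection onto R*w, ||w|| = 1
    return np.dot(w, x) * w

b = 5.0 * wb                                    # starting point b_{-1} in B
for n in range(8):
    a_new = proj_line(wa, b)                    # a_{n+1} = P_A b_n
    b_new = proj_line(wb, a_new)                # b_{n+1} = P_B a_{n+1}
    ratio = np.linalg.norm(a_new - b_new) / np.linalg.norm(a_new - b)
    print(n, ratio, np.cos(phi))                # ratio matches cos(phi) each step
    b = b_new
\end{verbatim}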
\end{proof} \subsection*{An abstract linear convergence result} Let us now focus on algorithmic results (which are actually true even in complete metric spaces). \begin{definition}[linear convergence] Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $X$, let $\bar{x}\in X$, and let $\gamma\in\left[0,1\right[$. Then $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ \ensuremath{\varnothing}h{converges linearly} to $\bar{x}$ with \ensuremath{\varnothing}h{rate} $\gamma$ if there exists $\mu\in\ensuremath{\mathbb R}_+$ such that \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}}) \quad d(x_n,\bar{x})\leq \mu\gamma^n. \end{equation} \end{definition} \begin{remark}[rate of convergence depends only on the tail of the sequence] \label{r:0307a} Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $X$, let $\bar{x}\in X$, and let $\gamma\in\ensuremath{\left]0,1\right[}$. Assume that there exists $n_0\in\ensuremath{\mathbb N}$ and $\mu_0\in\ensuremath{\mathbb R}_+$ such that \begin{equation} \big(\forall n\in\{n_0,n_0+1,\ldots\}\big) \quad d(x_n,\bar{x})\leq \mu_0\gamma^n. \end{equation} Set $\mu_1 := \max\menge{d(x_m,\bar{x})/\gamma^m}{m\in\{0,1,\ldots,n_0-1\}}$. Then \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}}) \quad d(x_n,\bar{x})\leq \max\{\mu_0,\mu_1\}\gamma^n, \end{equation} and therefore $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ converges linearly to $\bar{x}$ with rate $\gamma$. \end{remark} \begin{proposition}[abstract linear convergence] \label{p:geo} Let $A$ and $B$ be nonempty closed subsets of $X$, let $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $A$, and let $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ be a sequence in $B$. Assume that there exist constants $\alpha\in\ensuremath{\mathbb R}_+$ and $\beta\in\ensuremath{\mathbb R}_+$ such that \begin{subequations} \begin{equation} \gamma := \alpha\beta < 1 \end{equation} and \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}}) \quad d(a_{n+1},b_n)\leq\alpha d(a_n,b_n) \;\text{and}\; d(a_{n+1},b_{n+1})\leq\beta d(a_{n+1},b_{n}). \end{equation} \end{subequations} Then $(\forall\ensuremath{{n\in{\mathbb N}}})$ $d(a_{n+1},b_{n+1})\leq\gamma d(a_n,b_n)$ and there exists $c\in A\cap B$ such that \begin{equation} \label{e:0305c} (\forall\ensuremath{{n\in{\mathbb N}}})\quad \max\big\{d(a_n,c),d(b_n,c)\big\} \leq \frac{1+\alpha}{1-\gamma}d(a_0,b_0)\cdot \gamma^n; \end{equation} consequently, $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ converge linearly to $c$ with rate $\gamma$. \end{proposition} \begin{proof} Set $\delta := d(a_0,b_0)$. Then for every $\ensuremath{{n\in{\mathbb N}}}$, \begin{equation} \label{e:1027d'} d(a_n,b_n) \leq \beta d(a_n,b_{n-1}) \leq\alpha\beta d(a_{n-1},b_{n-1}) =\gamma d(a_{n-1},b_{n-1}) \leq \cdots \leq \gamma^n\delta; \end{equation} hence, \begin{subequations} \label{e:0305a} \begin{align} \label{e:1027c'} d(b_n,b_{n+1})&\leq d(b_n,a_{n+1})+d(a_{n+1},b_{n+1}) \leq \alpha d(b_n,a_n)+\gamma d(a_n,b_n)\\ &= (\alpha+\gamma)d(a_n,b_n) \leq (\alpha+\gamma)\delta\gamma^n. \end{align} \end{subequations} Thus $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ is a Cauchy sequence, so there exists $c\in B$ such that $b_n\to c$. On the other hand, by \eqref{e:1027d'}, $d(a_n,b_n)\to 0$ and $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ lies in $A$. Hence, $a_n\to c$ and $c\in A$. Thus, $c\in A\cap B$. Fix $n\in\ensuremath{\mathbb N}$ and let $m\geq n$. 
Using \eqref{e:0305a}, \begin{equation} \label{e:1027e'} d(b_n,b_m)\leq \sum_{k=n}^{m-1} d(b_k,b_{k+1}) \leq \sum_{k\geq n} d(b_k,b_{k+1}) \leq \sum_{k\geq n} (\alpha+\gamma)\delta\gamma^k = \frac{(\alpha+\gamma)\delta\gamma^n}{1-\gamma}. \end{equation} Hence, using \eqref{e:1027d'} and \eqref{e:1027e'}, we estimate that \begin{equation} \label{e:0305b} d(a_n,b_m) \leq d(a_n,b_n)+d(b_n,b_m) \leq \delta\gamma^n + \frac{(\alpha+\gamma)\delta\gamma^n}{1-\gamma} =\frac{(1+\alpha)\delta\gamma^n}{1-\gamma}. \end{equation} Letting $m\to\ensuremath{+\infty}$ in \eqref{e:1027e'} and \eqref{e:0305b}, we obtain \eqref{e:0305c}. \end{proof} \subsection*{The sequence generated by the MAP} We start with the following definition, which is well defined by Proposition~\ref{p:0224a}. \begin{definition}[MAP] Let $A$ and $B$ be nonempty closed subsets of $X$, let $b_{-1}\in X$, and let \begin{equation} (\forall\ensuremath{{n\in{\mathbb N}}})\quad a_{n}\in P_A(b_{n-1}) \;\text{and}\;b_n\in P_B(a_n). \end{equation} Then we say that the sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ are \ensuremath{\varnothing}h{generated by the method of alternating projections} (with respect to the pair $(A,B)$) with starting point $b_{-1}$. \end{definition} \begin{center} \psset{xunit=0.7cm, yunit=0.7cm} \begin{pspicture} (-6,-6)(11,3) \psline[linewidth=1.5pt,linecolor=blue]{-}(-5.5,-5.5)(3,3) \psline[linewidth=1.5pt,linecolor=blue]{-}(-4,2)(11,-5.5) \rput(-2.6,-2){$A_2$} \rput(-2,1.4){$A_1$} \psline[linewidth=1.5pt,linecolor=red]{-}(-6,-5)(11,-5) \psdot[dotstyle=o,dotsize=4pt](10,-5) \rput(10,-4.5){$c_1$} \psdot[dotstyle=o,dotsize=4pt](-5,-5) \rput(-5.2,-4.5){$c_2$} \rput(7,-4.6){$B$} \psline[linestyle=dashed,arrowsize=6pt,ArrowInside=->,ArrowInsidePos=0.65,showpoints=true] (3,1)(2,2)(2,-5)(3.33,-1.67)(3.33,-5) \rput(3.7,1){$b_{-1}$} \rput(1.7,2.3){$a_0$} \rput(2,-5.5){$b_0$} \rput(3.7,-1.3){$a_1$} \rput(3.5,-5.5){$b_1$} \rput(9,2){\begin{tabular}{c} The MAP between\\ $A=A_1\cup A_2$ and $B$, \\ $A\cap B=\{c_1,c_2\}$ \end{tabular}} \end{pspicture} \end{center} Our aim is to provide sufficient conditions for linear convergence of the sequences generated by the method of alternating projections. The following two results are simple yet useful. \begin{proposition} \label{p:easymap} Let $A$ and $B$ be nonempty closed subsets of $X$, and let $(a_n)$ and $(b_n)$ be sequences generated by the method of alternating projections. Then the following hold: \begin{enumerate} \item \label{p:easymap1} The sequences $(a_n)_{\ensuremath{{n\in{\mathbb N}}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ lie in $A$ and $B$, respectively. \item \label{p:easymap1+} $(\forall\ensuremath{{n\in{\mathbb N}}})$ $\|a_{n+1}-b_{n+1}\|\leq\|a_{n+1}-b_n\|\leq\|a_n-b_n\|$. \item \label{p:easymap2} If $\{a_n\}_\ensuremath{{n\in{\mathbb N}}} \cap B\neq\varnothing$, or $\{b_n\}_\ensuremath{{n\in{\mathbb N}}} \cap A\neq\varnothing$, then there exists $c\in A\cap B$ such that for all $n$ sufficiently large, $a_n=b_n=c$. \end{enumerate} \end{proposition} \begin{proof} \ref{p:easymap1}: This is clear from the definition. \ref{p:easymap1+}: Indeed, for every $\ensuremath{{n\in{\mathbb N}}}$, $\|a_{n+1}-b_{n+1}\| = d_B(a_{n+1}) \leq \|a_{n+1}-b_n\| =d_A(b_n)\leq\|b_n-a_n\|$ using \ref{p:easymap1}. \ref{p:easymap2}: Suppose, say that $a_n\in B$. Then $b_n = P_Ba_n = a_n =:c \in A\cap B$ and all subsequent terms of the sequences are equal to $c$ as well. 
\end{proof} \subsection*{New convergence results for the MAP} We are now in a position to state and derive new linear convergence results. In this section, we shall often assume the following: \boxedeqn{ \label{e:MAPsettings} \left\{ \begin{aligned} &\text{$\ensuremath{\mathcal A} := (A_i)_{i\in I}$ and $\ensuremath{\mathcal B} := (B_j)_{j\in J}$ are nontrivial collections}\\ &\quad\text{of nonempty closed subsets of $X$;}\\ &A :=\bigcup_{i\in I} A_i \text{~and~} B:= \bigcup_{j\in J} B_j\text{~are closed;}\\ &c\in A\cap B; \\ &\text{$\wt{\ensuremath{\mathcal A}} := (\wt{A}_i)_{i\in I}$ and $\wt{\ensuremath{\mathcal B}} := (\wt{B}_j)_{j\in J}$ are collections}\\ &\quad\text{of nonempty subsets of $X$ such that }\\ &\qquad (\forall i\in I)\;\;P_{A_i}\big((\ensuremath{\operatorname{bdry}} B)\smallsetminus A\big)\subseteq\wt{A}_i,\\ &\qquad (\forall j\in J)\;\;P_{B_j}\big((\ensuremath{\operatorname{bdry}} A)\smallsetminus B\big)\subseteq\wt{B}_j;\\ &\wt{A} :=\bigcup_{i\in I} \wt{A}_i \text{~and~} \wt{B}:= \bigcup_{j\in J} \wt{B}_j. \end{aligned} \right. } \begin{lemma}[backtracking MAP]\label{l:back} Assume that \eqref{e:MAPsettings} holds. Let $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ be generated by the MAP with starting point $b_{-1}$. Let $n\in\{1,2,3,\ldots\}$. Then the following hold: \begin{enumerate} \item\label{l:back-i} If $b_n\notin A$, then $a_n\in((\ensuremath{\operatorname{bdry}} A)\smallsetminus B)\cap\bigcup_{i\in I}(A_i\cap \wt{A}_i)$ and $b_n\in((\ensuremath{\operatorname{bdry}} B)\smallsetminus A)\cap\bigcup_{j\in J}(B_j\cap \wt{B}_j)$. \item\label{l:back-ii} If $a_n\notin B$, then $a_n\in((\ensuremath{\operatorname{bdry}} A)\smallsetminus B)\cap\bigcup_{i\in I}(A_i\cap \wt{A}_i)$. \item\label{l:back-iii} If $a_n\notin B$ and $n\geq 2$, then $b_{n-1}\in((\ensuremath{\operatorname{bdry}} B)\smallsetminus A)\cap\bigcup_{j\in J}(B_j\cap \wt{B}_j)$. \end{enumerate} \end{lemma} \begin{proof} \ref{l:back-i}: Applying Proposition~\ref{p:effect}\ref{p:0329a-iii} to $b_{n-1}\in B$, $a_n\in P_A b_{n-1}$, $b_n\in P_B a_n$, we obtain \begin{equation} b_n\in B\smallsetminus A\ \Leftrightarrow\ b_n\in\big((\ensuremath{\operatorname{bdry}} B)\smallsetminus A\big)\cap\bigcup_{j\in J}(B_j\cap \wt{B}_j) \ \Rightarrow\ a_n\in A\smallsetminus B. \end{equation} On the other hand, applying Proposition~\ref{p:effect}\ref{p:0329a-iv} to $a_{n-1}\in A$, $b_{n-1}\in P_B a_{n-1}$, $a_n\in P_A b_{n-1}$, we see that \begin{equation} a_n\in A\smallsetminus B\ \Leftrightarrow\ a_n\in\big((\ensuremath{\operatorname{bdry}} A)\smallsetminus B\big)\cap\bigcup_{i\in I}(A_i\cap \wt{A}_i). \end{equation} Altogether, \ref{l:back-i} is established. \ref{l:back-ii}\&\ref{l:back-iii}: The proofs are analogous to that of \ref{l:back-i}. \end{proof} Let us now state and prove a key technical result. \begin{proposition} \label{p:joint} Assume that \eqref{e:MAPsettings} holds. 
Suppose that there exist $\varepsilon\geq0$ and $\delta>0$ such that the following hold: \begin{enumerate} \item\label{p:joint1} $\ensuremath{\mathcal A}$ is $(\wt{B},\varepsilon,3\delta)$-joint-regular at $c$ (see Definition~\ref{d:jreg}) and set \begin{equation} {\sigma} := \begin{cases} 1, &\text{if $\ensuremath{\mathcal B}$ is not known to be $(\wt{A},\varepsilon,3\delta)$-joint-regular at $c$;}\\ 2, &\text{if $\ensuremath{\mathcal B}$ is also $(\wt{A},\varepsilon,3\delta)$-joint-regular at $c$.} \end{cases} \end{equation} \item \label{p:joint2} $\theta_{3\delta} < 1-2\varepsilon$, where $\theta_{3\delta}$ is the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}). \end{enumerate} Set $\theta := \theta_{3\delta}+2\varepsilon \in\ensuremath{\left]0,1\right[}$. Let $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ be sequences generated by the MAP with starting point $b_{-1}$ satisfying \begin{equation} \label{e:0306a} \|b_{-1}-c\|\leq \frac{(1-\theta^{\sigma})\delta}{6(2+\theta-\theta^{\sigma})}. \end{equation} Then $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ converge linearly to some point $\bar{c}\in A\cap B$ with rate $\theta^{\sigma}$; in fact, \begin{equation} \label{e:0306e} \|\bar{c}-c\|\leq\delta \quad\text{and}\quad (\forall n\geq1)\;\; \max\big\{\|a_n-\bar{c}\|,\|b_n-\bar{c}\|\big\}\leq \frac{\delta(1+\theta)}{2+\theta-\theta^{\sigma}}\theta^{{\sigma} (n-1)}. \end{equation} \end{proposition} \begin{proof} In view of $a_1\in P_AP_BP_Ab_{-1}$ and \eqref{e:0306a}, Corollary~\ref{c:0330a} yields \begin{equation} \label{e:120406b} \beta:=\|a_1-c\|\leq \frac{(1-\theta^{\sigma})\delta}{(2+\theta-\theta^{\sigma})}\leq\frac{\delta}{2}. \end{equation} Since $c\in A\cap B$, we have $\theta_{3\delta}\geq 0$ by \eqref{e:120406a} and hence $\theta>0$. Using \eqref{e:120406b}, we estimate \begin{subequations} \label{e:0306c} \begin{align} (\forall n\geq1)\quad \beta\theta^{{\sigma} (n-1)}+\beta+\beta(1+\theta)\sum_{k=0}^{n-2}\theta^{{\sigma} k} &\leq \beta+\beta(1+\theta)\sum_{k=0}^{n-1}\theta^{{\sigma} k} \\ &=\beta+\beta(1+\theta)\frac{1-\theta^{{\sigma} n}}{1-\theta^{\sigma}}\\ &\leq \beta+\beta\frac{1+\theta}{1-\theta^{\sigma}}\\ &=\beta\Big(\frac{2+\theta-\theta^{\sigma}}{1-\theta^{\sigma}}\Big)\\ &\leq \delta. \end{align} \end{subequations} We now claim that if \begin{equation}\label{e:0327a} n\geq 1,\quad \|a_n-b_n\|\leq\beta\theta^{{\sigma} (n-1)}\quad\mbox{and}\quad \|a_n-c\|\leq \beta+\beta(1+\theta)\sum_{k=0}^{n-2}\theta^{{\sigma} k}, \end{equation} then \begin{subequations} \label{e:0306b} \begin{align} \|a_{n+1}-b_{n+1}\|&\leq\theta^{{\sigma}-1}\|a_{n+1}-b_{n}\|\leq\theta^{{\sigma}}\|a_{n}-b_{n}\| \leq\beta\theta^{{\sigma} n},\label{e:Jb} \\ \|a_{n+1}-c\|&\leq \beta+\beta(1+\theta)\sum_{k=0}^{n-1}\theta^{{\sigma} k}.\label{e:Jc} \end{align} \end{subequations} To prove this claim, assume that \eqref{e:0327a} holds. Using \eqref{e:0327a} and \eqref{e:0306c}, we first observe that \begin{subequations} \begin{align} \label{e:Jd} \max\big\{\|a_n-c\|, \|b_n-c\|\big\} &\leq \|b_n-a_n\|+\|a_n-c\|\\ &\leq \beta\theta^{{\sigma} (n-1)}+\beta+\beta(1+\theta)\sum_{k=0}^{n-2}\theta^{{\sigma} k}\leq\delta. \end{align} \end{subequations} We now consider two cases: \ensuremath{\varnothing}h{Case~1}: $b_n\in A\cap B$. Then $b_n=a_{n+1}=b_{n+1}$ and thus \eqref{e:Jb} holds. 
Moreover, $\|a_{n+1}-c\|=\|b_n-c\|$ and \eqref{e:Jc} follows from \eqref{e:Jd}. \ensuremath{\varnothing}h{Case~2}: $b_n\not\in A\cap B$. Then $b_n\in B\smallsetminus A$. Lemma~\ref{l:back}\ref{l:back-i} implies $a_n\in((\ensuremath{\operatorname{bdry}} A)\smallsetminus B)\cap\bigcup_{i\in I}(A_i\cap \wt{A}_i)$ and $b_n\in((\ensuremath{\operatorname{bdry}} B)\smallsetminus A)\cap\bigcup_{j\in J}(B_j\cap \wt{B}_j)$. Note that $\|a_n-c\|\leq\delta$ by \eqref{e:Jd}, and recall that $\ensuremath{\mathcal A}$ is $(\wt{B},\varepsilon,3\delta)$-joint-regular at $c$ by \ref{p:joint1}. It thus follows from Proposition~\ref{p:effect}\ref{p:effect-iv} (applied to $a_n,b_n,a_{n+1}$) that \begin{equation} \label{e:0325a} \|a_{n+1}-b_n\|\leq\theta\|a_n-b_n\|. \end{equation} On the one hand, if ${\sigma}=1$, then Proposition~\ref{p:easymap}\ref{p:easymap1+} yields $\|a_{n+1}-b_{n+1}\|\leq\|a_{n+1}-b_n\|=\theta^{{\sigma}-1}\|a_{n+1}-b_n\|$. On the other hand, if ${\sigma}=2$, then $\ensuremath{\mathcal B}$ is $(\wt{A},\varepsilon,3\delta)$-joint-regular at $c$ by \ref{p:joint1}; hence, Proposition~\ref{p:effect}\ref{p:effect-ii} (applied to $b_n,a_{n+1},b_{n+1}$) yields $\|a_{n+1}-b_{n+1}\|\leq\theta\|a_{n+1}-b_n\|=\theta^{{\sigma}-1}\|a_{n+1}-b_n\|$. Altogether, in either case, \begin{equation}\label{e:0325b} \|a_{n+1}-b_{n+1}\|\leq\theta^{{\sigma}-1}\|a_{n+1}-b_n\|. \end{equation} Combining \eqref{e:0325b} with \eqref{e:0325a} and \eqref{e:0327a} gives \begin{equation} \label{e:120406c} \|a_{n+1}-b_{n+1}\|\leq\theta^{{\sigma}-1}\|a_{n+1}-b_n\| \leq\theta^{\sigma}\|a_n-b_n\|\leq\beta\theta^{{\sigma} n}, \end{equation} which is \eqref{e:Jb}. Furthermore, \eqref{e:0325a}, \eqref{e:0327a} and \eqref{e:Jd} yield \begin{subequations} \begin{align} \|a_{n+1}-c\| &\leq\|a_{n+1}-b_n\|+\|b_n-c\|\\ &\leq \theta\|a_n-b_n\| + \|b_n-c\|\\ &\leq \theta\beta\theta^{{\sigma}(n-1)}+\beta\theta^{{\sigma}(n-1)} +\beta+\beta(1+\theta)\sum_{k=0}^{n-2}\theta^{{\sigma} k}\\ &=\beta+\beta(1+\theta)\sum_{k=0}^{n-1}\theta^{{\sigma} k}, \end{align} \end{subequations} which establishes \eqref{e:Jc}. Therefore, in all cases, \eqref{e:0306b} holds. Since $\|a_1-b_{1}\|=d_B(a_{1})\leq\|a_{1}-c\|=\beta$, we see that \eqref{e:0327a} holds for $n=1$. Thus, the above claim and the principle of mathematical induction principle imply that \eqref{e:0306b} holds for every $n\geq 1$. Next, \eqref{e:Jb} implies \begin{equation} \label{e:0307e} (\forall n\geq1)\quad \|a_{n+1}-b_n\|\leq \theta\|a_n-b_n\|\quad\text{and}\quad \|a_{n+1}-b_{n+1}\|\leq\theta^{{\sigma}-1}\|a_{n+1}-b_{n}\|. \end{equation} In view of \eqref{e:0307e} and $\|a_1-b_{1}\|\leq\beta$, Proposition~\ref{p:geo} yields $\bar{c}\in A\cap B$ such that \begin{align} (\forall n\geq1)\quad \max\big\{\|a_n-\bar{c}\|,\|b_n-\bar{c}\|\big\} &\leq \frac{1+\theta}{1-\theta^{\sigma}}\|a_1-b_1\|\cdot\theta^{{\sigma} (n-1)}\\ &\leq \frac{1+\theta}{1-\theta^{\sigma}}\beta\cdot\theta^{{\sigma} (n-1)}\\ &\leq\frac{\delta(1+\theta)}{2+\theta-\theta^{\sigma}}\theta^{{\sigma} (n-1)}. \end{align} On the other hand, \eqref{e:Jc} and \eqref{e:0306c} imply $(\forall n\geq 1)$ $\|a_{n+1}-c\|\leq\delta$; thus, letting $n\to\ensuremath{+\infty}$, we obtain $\|\bar{c}-c\|\leq\delta$. This completes the proof of \eqref{e:0306e}. 
\end{proof} \begin{remark} \label{r:joint} In view of Lemma~\ref{l:0305a}\ref{l:0305ai}\&\ref{l:0305aii}, an aggressive choice for use in \eqref{e:MAPsettings} is $(\forall i\in I)$ $\wt{A}_i = \ensuremath{\operatorname{bdry}} A_i$ and $(\forall j\in J)$ $\wt{B}_j = \ensuremath{\operatorname{bdry}} B_j$. \end{remark} Our main result on the linear convergence of the MAP is the following: \begin{theorem}[linear convergence of the MAP and superregularity] \label{t:jsuper} Assume that \eqref{e:MAPsettings} holds and that $\ensuremath{\mathcal A}$ is $\wt{B}$-joint-superregular at $c$ (see Definition~\ref{d:jreg}). Denote the limiting joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}) by $\overline{\theta}$, and the exact joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:exactCQn}) by $\overline{\alpha}$. Assume further that one of the following holds: \begin{enumerate} \item \label{t:jsuperi} $\overline{\theta}<1$. \item \label{t:jsuperii} $I$ and $J$ are finite, and $\overline{\alpha}<1$. \end{enumerate} Let $\theta \in \left]\overline{\theta},1\right[$ and set $\varepsilon := (\theta-\overline{\theta})/3 >0$. Then there exists $\delta>0$ such that the following hold: \begin{enumerate}[resume] \item \label{t:jsuper1} $\ensuremath{\mathcal A}$ is $(\wt{B},\varepsilon,3\delta)$-joint-regular at $c$ (see Definition~\ref{d:jreg}). \item \label{t:jsuper2} $\theta_{3\delta} \leq \overline{\theta}+\varepsilon< 1-2\varepsilon$, where $\theta_{3\delta}$ is the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}). \end{enumerate} Consequently, suppose the starting point $b_{-1}$ of the MAP satisfies $\|b_{-1}-c\|\leq (1-\theta)\delta/12$. Then $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ converge linearly to some point $\bar{c}\in A\cap B$ with $\|\bar{c}-c\|\leq\delta$ and rate $\theta$: \begin{equation} (\forall n\geq1)\ \max\{\|a_n-\bar{c}\|,\|b_n-\bar{c}\|\}\leq\frac{\delta(1+\theta)}{2}\theta^{n-1}. \end{equation} \end{theorem} \begin{proof} Observe that \ref{t:jsuperii} implies \ref{t:jsuperi} by Theorem~\ref{t:CQ1}\ref{t:CQ1iv}. The definitions of $\wt{B}$-joint-superregularity and of $\overline\theta$ allow us to find $\delta>0$ sufficiently small such that both \ref{t:jsuper1} and \ref{t:jsuper2} hold. The result thus follows from Proposition~\ref{p:joint} with ${\sigma}=1$. \end{proof} \begin{corollary} \label{c:jsuper} Assume that \eqref{e:MAPsettings} holds and that, for every $i\in I$, $A_i$ is convex. Denote the limiting joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}) by $\overline{\theta}$, and assume that $\overline{\theta}<1$. Let $\theta \in \left]\overline{\theta},1\right[$, and let $b_{-1}$, the starting point of the MAP, be sufficiently close to $c$. Then $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ converge linearly to some point in $A\cap B$ with rate $\theta$. \end{corollary} \begin{proof} Combine Theorem~\ref{t:jsuper} with Corollary~\ref{c:jsreg}.
\end{proof} \begin{example}[working with collections and joint notions is useful] Consider the setting of Example~\ref{ex:jCQn<CQn}, and suppose that $\wt{\ensuremath{\mathcal A}}=\ensuremath{\mathcal A}$ and $\wt{\ensuremath{\mathcal B}}=\ensuremath{\mathcal B}$. Note that $A_i$ is convex, for every $i\in I$. Then $\theta_\delta(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})<1=\theta_\delta(A,A,B,B) =\overline{\theta}(A,X,B,X)$. Hence Corollary~\ref{c:jsuper} guarantees linear convergence of the MAP, while it is not possible to work directly with the unions $A$ and $B$, due to their CQ-number being equal to $1$ \emph{and} because neither $A$ nor $B$ is superregular by Example~\ref{ex:badlines}! This illustrates that the main result of Lewis-Luke-Malick (see Corollary~\ref{c:LLM} below) is not applicable because two of its hypotheses fail. \end{example} The following result features an improved rate of convergence $\theta^2$ due to the additional presence of superregularity. \begin{theorem}[linear convergence of the MAP and double superregularity] \label{t:jointdoubly} Assume that \eqref{e:MAPsettings} holds, that $\ensuremath{\mathcal A}$ is $\wt{B}$-joint-superregular at $c$ and that $\ensuremath{\mathcal B}$ is $\wt{A}$-joint-superregular at $c$ (see Definition~\ref{d:jreg}). Denote the limiting joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}) by $\overline{\theta}$, and the exact joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:exactCQn}) by $\overline{\alpha}$. Assume further that {\rm (a)} $\overline{\theta}<1$, or (more restrictively) that {\rm (b)} $I$ and $J$ are finite, and $\overline{\alpha}<1$ (and hence $\overline{\theta}=\overline{\alpha}<1$). Let $\theta \in \left]\overline{\theta},1\right[$ and $\varepsilon:=\frac{\theta-\overline{\theta}}{3}$. Then there exists $\delta>0$ such that \begin{enumerate} \item\label{t:jdb-i} $\ensuremath{\mathcal A}$ is $(\wt{B},\varepsilon,3\delta)$-joint-regular at $c$; \item\label{t:jdb-ii} $\ensuremath{\mathcal B}$ is $(\wt{A},\varepsilon,3\delta)$-joint-regular at $c$; and \item\label{t:jdb-iii} $\theta_{3\delta}<\overline{\theta}+\varepsilon=\theta-2\varepsilon<1-2\varepsilon$, where $\theta_{3\delta}$ is the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}). \end{enumerate} Consequently, suppose the starting point $b_{-1}$ of the MAP satisfies $\|b_{-1}-c\|\leq\frac{(1-\theta)\delta}{6(2-\theta)}$. Then $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ converge linearly to some point $\bar{c}\in A\cap B$ with $\|\bar{c}-c\|\leq\delta$ and rate $\theta^2$; in fact, \begin{equation} (\forall n\geq1)\quad \max\big\{\|a_n-\bar{c}\|,\|b_n-\bar{c}\|\big\} \leq\frac{\delta}{2-\theta}\big(\theta^2\big)^{n-1}. \end{equation} \end{theorem} \begin{proof} The existence of $\delta>0$ such that \ref{t:jdb-i}--\ref{t:jdb-iii} hold is clear. Then apply Proposition~\ref{p:joint} with ${\sigma}=2$. \end{proof} In passing, let us point out a sharper rate of convergence under sufficient conditions stronger than superregularity.
\begin{corollary}[refined convergence rate]\label{c:jdb} Assume that \eqref{e:MAPsettings} holds and that there exists $\delta>0$ such that \begin{enumerate} \item $\ensuremath{\mathcal A}$ is $(\wt{B},0,3\delta)$-joint-regular at $c$; \item $\ensuremath{\mathcal B}$ is $(\wt{A},0,3\delta)$-joint-regular at $c$; and \item $\theta<1$, where $\theta:=\theta_{3\delta}$ is the joint-CQ-number at $c$ associated with $(\ensuremath{\mathcal A},\wt{\ensuremath{\mathcal A}},\ensuremath{\mathcal B},\wt{\ensuremath{\mathcal B}})$ (see Definition~\ref{d:jCQn}). \end{enumerate} Suppose also that the starting point of the MAP $b_{-1}$ satisfies $\|b_{-1}-c\|\leq\frac{(1-\theta)\delta}{6(2-\theta)}$. Then $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ converge linearly to some point in $\bar{c}\in A\cap B$ with $\|\bar{c}-c\|\leq\delta$ and rate $\theta^2$; in fact, \begin{equation} (\forall n\geq1)\quad \max\big\{\|a_n-\bar{c}\|,\|b_n-\bar{c}\|\big\}\leq \frac{\delta}{2-\theta}\big(\theta^2\big)^{n-1}. \end{equation} \end{corollary} \begin{proof} Apply Proposition~\ref{p:joint} with ${\sigma}=2$. \end{proof} Let us illustrate a situation where it is possible to make $\delta$ in Theorem~\ref{t:jointdoubly} precise. \begin{example}[the MAP for two spheres] Let $z_1$ and $z_2$ be in $X$, let $\rho_1$ and $\rho_2$ be in $\ensuremath{\mathbb R}$, set $A:= \sphere{z_1}{\rho_1}$ and $B := \sphere{z_2}{\rho_2}$, and assume that $\{c\}\subsetneqq A\cap B\subsetneqq A\cup B$. Then $\overline{\alpha} := |\scal{z_1-c}{z_2-c}|/(\rho_1\rho_2)<1$. Let $\theta\in\left]\overline{\alpha},1\right[$. Then the conclusion of Theorem~\ref{t:jointdoubly} holds with \begin{equation} \delta := \min \Bigg\{ \frac{\sqrt{(\rho_1+\rho_2)^2+\rho_1\rho_2(\theta-\overline{\alpha})}-(\rho_1+\rho_2)}{6},\frac{\varepsilon\rho_1}{3},\frac{\varepsilon\rho_2}{3}\Bigg\} \end{equation} \end{example} \begin{proof} Combine Example~\ref{ex:sphere1} (applied with $\varepsilon=(\theta-\overline{\alpha})/4$ there), Proposition~\ref{p:sphere2}, and Theorem~\ref{t:jointdoubly}. \end{proof} Here is a useful special case of Theorem~\ref{t:jointdoubly}: \begin{theorem}\label{t:dregAff} Assume that $A$ and $B$ are $L$-superregular, and that \begin{equation} N_A(c)\cap\big(-N_B(c)\big)\cap\big(L-c\big)=\{0\}, \end{equation} where $L:=\ensuremath{\operatorname{aff}}(A\cup B)$. Then the sequences generated by the MAP converge linearly to a point in $A\cap B$ provided that the starting point is sufficiently close to $c$. \end{theorem} \begin{proof} Combine Example~\ref{ex:compareCQ1} with Theorem~\ref{t:jointdoubly} (applied with $I$ and $J$ being singletons, and with $\wt{A}=\wt{B}=L$). \end{proof} We now obtain a well known global linear convergence result for the convex case, which does not require the starting point to be sufficiently close to $A\cap B$: \begin{theorem}[two convex sets]\label{t:globalCvex} Assume that $A$ and $B$ are convex, and $A\cap B\neq\varnothing$. Then for every starting point $b_{-1}\in X$, the sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ generated by the MAP converge to some point in $A\cap B$. The convergence of these sequences is linear provided that $\ensuremath{\operatorname{ri}} A \cap \ensuremath{\operatorname{ri}} B \neq\varnothing$. 
\end{theorem} \begin{proof} By Fact~\ref{f:convproj}\ref{f:convproj4}, we have \begin{equation}\label{e:cvex1} (\forall c\in A\cap B)\quad \|a_0-c\|\geq\|b_0-c\|\geq\|a_1-c\|\geq\|b_1-c\|\geq\cdots \end{equation} After passing to subsequences if needed, we assume that $a_{k_n}\to a\in A$ and $b_{k_n}\to b\in B$. We show that $a=b$ by contradiction, so we assume that $\varepsilon := \|a-b\|/3>0$. We have eventually $\max\{\|a_{k_n}-a\|,\|b_{k_n}-b\|\}<\varepsilon$; hence $\|a_{k_n}-b_{k_n}\|\geq \varepsilon$ eventually. By Fact~\ref{f:convproj}\ref{f:convproj3}, we have \begin{equation} \|a_{k_n}-c\|^2\geq \|a_{k_n}-b_{k_n}\|^2+\|b_{k_n}-c\|^2\geq \varepsilon^2+\|a_{k_n+1}-c\|^2 \geq \varepsilon^2+\|a_{k_{n+1}}-c\|^2 \end{equation} eventually. But this would imply that for all $n$ sufficiently large, and for every $m\in\ensuremath{\mathbb N}$, we have $\|a_{k_n}-c\|^2\geq m\varepsilon^2+\|a_{k_{n+m}}-c\|^2\geq m\varepsilon^2$, which is absurd. Hence $\bar{c} := a=b\in A\cap B$ and now \eqref{e:cvex1} (with $c=\bar{c}$) implies that $a_n\to\bar{c}$ and $b_n\to\bar{c}$. Next, assume that $\ensuremath{\operatorname{ri}} A \cap\ensuremath{\operatorname{ri}} B\neq\varnothing$, and set $L := \ensuremath{\operatorname{aff}}(A\cup B)$. By Proposition~\ref{p:0301a}, the $(A,L,B,L)$-CQ condition holds at $\bar{c}$. Thus, by Example~\ref{ex:compareCQ1}, $N_A(\bar{c})\cap(-N_B(\bar{c}))\cap(L-\bar{c})=\{0\}$. Furthermore, Corollary~\ref{c:jsreg} and Remark~\ref{r:0303a}\ref{r:0303avi}\&\ref{r:0303avii} imply that $A$ and $B$ are $L$-superregular at $\bar{c}$. The conclusion now follows from Theorem~\ref{t:dregAff}, applied to suitably chosen tails of the sequences $(a_n)_\ensuremath{{n\in{\mathbb N}}}$ and $(b_n)_\ensuremath{{n\in{\mathbb N}}}$. \end{proof} \begin{example}[the MAP for two linear subspaces] \label{ex:vN} Assume that $A$ and $B$ are linear subspaces of $X$. Since $0\in A\cap B = \ensuremath{\operatorname{ri}} A\cap \ensuremath{\operatorname{ri}} B$, Theorem~\ref{t:globalCvex} guarantees the linear convergence of the MAP to some point in $A\cap B$, where $b_{-1}\in X$ is an arbitrary starting point. On the other hand, $A$ and $B$ are $(0,+\infty)$-regular (see Remark~\ref{r:0303a}\ref{r:0303avi}). Since $(\forall\delta\in\ensuremath{\mathbb R}_{++})$ $\theta_{\delta}(A,A,B,B)=c(A,B)<1$, where $c(A,B)$ is the cosine of the Friedrichs angle between $A$ and $B$ (see Theorem~\ref{t:CQn=c}), we obtain from Corollary~\ref{c:jdb} that the rate of convergence is $c^2(A,B)$. In fact, it is well known that this is the optimal rate, and also that $\lim_{n} a_n=\lim_{n} b_n = P_{A\cap B}(b_{-1})$; see \cite[Section~3]{Deut94} and \cite[Chapter~9]{Deutsch}. \end{example} \begin{remark} For further linear convergence results for the MAP in the convex setting we refer the reader to \cite{BB93}, \cite{bb96}, \cite{BBL}, \cite{DHp1}, \cite{DHp2}, \cite{DHp3}, and the references therein. See also \cite{Luke08} and \cite{Luke12} for recent related work for the nonconvex case. \end{remark} \subsection*{Comparison to Lewis-Luke-Malick results and further examples} The main result of Lewis, Luke, and Malick arises as a special case of Theorem~\ref{t:jsuper}: \begin{corollary}[Lewis-Luke-Malick]\label{c:LLM} {\rm (See \cite[Theorem~5.16]{LLM}.)} Suppose that $N_A(c)\cap (-N_B(c))=\{0\}$ and that $A$ is superregular at $c\in A\cap B$. If the starting point of the MAP is sufficiently close to $c$, then the sequences generated by the MAP converge linearly to a point in $A\cap B$.
\end{corollary} \begin{proof} Since $N_A(c)\cap (-N_B(c))=\{0\}$, we have $\overline{\theta}<1$. Now apply Theorem~\ref{t:jsuper}(i) with $\wt{\ensuremath{\mathcal A}}:=\wt{\ensuremath{\mathcal B}}:=(X)$, $\ensuremath{\mathcal A}:=(A)$ and $\ensuremath{\mathcal B}:=(B)$. \end{proof} However, even in simple situations, Corollary~\ref{c:LLM} is not powerful enough to recover known convergence results. \begin{example}[Lewis-Luke-Malick CQ may fail even for two subspaces] \label{ex:LLMsubsp} Suppose that $A$ and $B$ are two linear subspaces of $X$, and set $L:=\ensuremath{\operatorname{aff}}(A\cup B)=A+B$. For $c\in A\cap B$, we have \begin{equation} N_{A}(c)\cap(-N_{B}(c))=A^\perp \cap B^\perp =(A+B)^\perp = L^\perp. \end{equation} Therefore, the Lewis-Luke-Malick CQ (see \cite[Theorem~5.16]{LLM} and also Corollary~\ref{c:LLM}) holds for $(A,B)$ at $c$ if and only if \begin{equation} N_{A}(c)\cap(-N_{B}(c))=\{0\}\ \Leftrightarrow\ A+B=X. \end{equation} On the other hand, the CQ provided in Theorem~\ref{t:dregAff} (see also Example~\ref{ex:vN}) \ensuremath{\varnothing}h{always holds} and we obtain linear convergence of the MAP. However, even for two lines in $\ensuremath{\mathbb R}^3$, the Lewis-Luke-Malick CQ (see Corollary~\ref{c:LLM}) is unable to achieve this. (It was this example that originally motivated us to pursue the present work.) \end{example} \begin{example}[Lewis-Luke-Malick CQ is too strong even for convex sets] \label{ex:LLMcv} Assume that $A$ and $B$ are convex (and hence superregular). Then the Lewis-Luke-Malick CQ condition is $0\in\ensuremath{\operatorname{int}}(B-A)$ (see Corollary~\ref{c:CQ3i}) while the $(A,\ensuremath{\operatorname{aff}}(A\cup B),B,\ensuremath{\operatorname{aff}}(A\cup B))$-CQ is equivalent to the much less restrictive condition $\ensuremath{\operatorname{ri}} A\cap \ensuremath{\operatorname{ri}} B\neq\varnothing$ (see Theorem~\ref{t:compareCQ2}). \end{example} \subsection*{The flexibility of choosing $(\wt{A},\wt{B})$} Often, $L=\ensuremath{\operatorname{aff}}(A\cup B)$ is a convenient choice which yields linear convergence of the MAP as in Theorem~\ref{t:dregAff}. However, there are situations when this choice for $\wt{A}$ and $\wt{B}$ is not helpful but when a different, more aggressive, choice does guarantee linear convergence: \begin{example}[$(\wt{A},\wt{B})=(A,B)$] \label{ex:AABB} Let $A$, $B$, and $c$ be as in Example~\ref{ex:CQdif(AB)}, and let $L := \ensuremath{\operatorname{aff}}(A\cup B)$. Since $A$ and $B$ are \ensuremath{\varnothing}h{convex} and hence \ensuremath{\varnothing}h{superregular}, the $(A,L,B,L)$-CQ condition is equivalent to $\ensuremath{\operatorname{ri}} A\cap \ensuremath{\operatorname{ri}} B\neq\varnothing$ (see Proposition~\ref{p:0301a}), which fails in this case. However, the $(A,A,B,B)$-CQ condition does hold; hence, the corresponding limiting CQ-number is less than 1 by Theorem~\ref{t:CQ1}\ref{t:CQ1v}. Thus linear convergence of the MAP is guaranteed by Theorem~\ref{t:jointdoubly}. \end{example} The next example illustrates a situation where the choice $(\wt{A},\wt{B})=(A,B)$ fails while the even tighter choice $(\wt{A},\wt{B})=(\ensuremath{\operatorname{bdry}} A,\ensuremath{\operatorname{bdry}} B)$ results in success: \begin{example}[${(\wt{A},\wt{B})=(\ensuremath{\operatorname{bdry}} A,\ensuremath{\operatorname{bdry}} B)}$] \label{ex:diff-choice} Suppose that $X=\ensuremath{\mathbb R}^2$, that $A=\ensuremath{\operatorname{epi}}(|\cdot|/2)$, that $B=-\ensuremath{\operatorname{epi}}(|\cdot|/3)$, and that $c=(0,0)$. 
Note that $\ensuremath{\operatorname{aff}}(A\cup B)=X$ and $\ensuremath{\operatorname{ri}} A\cap \ensuremath{\operatorname{ri}} B=\varnothing$. Then \begin{subequations} \begin{align} &\nc{A}{{B}}(c)=\nc{A}{X}(c)=N_A(c)= \menge{(u_1,u_2)\in\ensuremath{\mathbb R}^2}{u_2+2|u_1|\leq0},\\ &\nc{B}{{A}}(c)=\nc{B}{X}(c)=N_B(c)= \menge{(u_1,u_2)\in\ensuremath{\mathbb R}^2}{-u_2+3|u_1|\leq0}, \end{align} \end{subequations} and so the $(A,{A},B,{B})$-CQ condition fails because \begin{equation} \nc{A}{{B}}(c)\cap(-\nc{B}{{A}}(c))= \menge{(u_1,u_2)\in\ensuremath{\mathbb R}^2}{u_2+3|u_1|\leq0}\neq\{0\}. \end{equation} Consequently, for either $(\wt{A},\wt{B})=(A,B)$ or $(\wt{A},\wt{B})=(X,X)$, Theorem~\ref{t:jointdoubly} is not applicable because $\overline\alpha=\overline{\theta}=1$: indeed, $u=(0,-1)\in N_A(c)$ and $v=(0,-1)\in-N_B(c)$, so $1=\scal{u}{v}\leq\bar\alpha\leq 1$. On the other hand, let us now choose $(\wt{A},\wt{B})=(\ensuremath{\operatorname{bdry}} A,\ensuremath{\operatorname{bdry}} B)$, which is justified by Remark~\ref{r:joint}. Then \begin{subequations} \begin{align} &\nc{A}{\wt{B}}(c)=\menge{(u_1,u_2)\in\ensuremath{\mathbb R}^2}{u_2+2|u_1|=0},\\ &\nc{B}{\wt{A}}(c)=\menge{(u_1,u_2)\in\ensuremath{\mathbb R}^2}{-u_2+3|u_1|=0}, \end{align} \end{subequations} $\nc{A}{\wt{B}}(c)\cap(-\nc{B}{\wt{A}}(c))=\{0\}$ and the $(A,\wt{A},B,\wt{B})$-CQ condition holds. Hence, using also Theorem~\ref{t:CQ1}\ref{t:CQ1v}, Theorem~\ref{t:globalCvex} and Theorem~\ref{t:jointdoubly}, we deduce linear convergence of the MAP. \end{example} However, even the choice $(\wt{A},\wt{B})=(\ensuremath{\operatorname{bdry}} A,\ensuremath{\operatorname{bdry}} B)$ may not be applicable to yield the desired linear convergence as the following shows. In this example, we employ the tightest possibility allowed by our framework, namely $(\wt{A},\wt{B})=(P_A((\ensuremath{\operatorname{bdry}} B)\smallsetminus A),P_B((\ensuremath{\operatorname{bdry}} A)\smallsetminus B))$. \begin{example}[$(\wt{A},\wt{B})=(P_A((\ensuremath{\operatorname{bdry}} B)\smallsetminus A),P_B((\ensuremath{\operatorname{bdry}} A)\smallsetminus B))$ ] Suppose that $X=\ensuremath{\mathbb R}^2$, that $A=\ensuremath{\operatorname{epi}}(|\cdot|)$, that $B=-A$, and that $c=(0,0)$. Then $\nc{A}{\ensuremath{\operatorname{bdry}} B}(c)=\ensuremath{\operatorname{bdry}} B = -\ensuremath{\operatorname{bdry}} A$ and $\nc{B}{\ensuremath{\operatorname{bdry}} A}(c)=\ensuremath{\operatorname{bdry}} A$; hence, the $(A,\ensuremath{\operatorname{bdry}} A,B,\ensuremath{\operatorname{bdry}} B)$-CQ condition fails because $\nc{A}{\ensuremath{\operatorname{bdry}} B}(c) \cap (-\nc{B}{\ensuremath{\operatorname{bdry}} A}(c)) = \ensuremath{\operatorname{bdry}} B\neq \{0\}$. On the other hand, if $(\wt{A},\wt{B})=(P_A((\ensuremath{\operatorname{bdry}} B)\smallsetminus A),P_B((\ensuremath{\operatorname{bdry}} A)\smallsetminus B))$, then $\nc{A}{\wt{B}} = \{0\} = \nc{B}{\wt{A}}=\{0\}$ because $\wt{A}=\{c\} = \wt{B}$. Thus, the $(A,\wt{A},B,\wt{B})$-CQ conditions holds. (Note that the MAP converges in finitely many steps.) \end{example} \section*{Conclusion} We have introduced restricted normal cones which generalize classical normal cones. We have presented some of their basic properties and shown their usefulness in describing interiority conditions, constraint qualifications, and regularities. 
The corresponding results were employed to yield new powerful sufficient conditions for linear convergence of the sequences generated by the method of alternating projections applied to two sets $A$ and $B$. A key ingredient was the use of suitable restricting sets $\wt{A}$ and $\wt{B}$. The least aggressive choice, $(\wt{A},\wt{B})=(X,X)$, recovers the framework by Lewis, Luke, and Malick. The choice $(\wt{A},\wt{B})=(\ensuremath{\operatorname{aff}}(A\cup B),\ensuremath{\operatorname{aff}}(A\cup B))$ allows us to include basic settings from convex analysis into our framework. Thus, the framework provided here unifies the recent nonconvex results by Lewis, Luke, and Malick with classical convex-analytical settings. When the choice $(\wt{A},\wt{B})=(\ensuremath{\operatorname{aff}}(A\cup B),\ensuremath{\operatorname{aff}}(A\cup B))$ fails, one may also try more aggressive choices such as $(\wt{A},\wt{B})=(A,B)$ or $(\wt{A},\wt{B})=(\ensuremath{\operatorname{bdry}} A,\ensuremath{\operatorname{bdry}} B)$ to guarantee linear convergence. In a follow-up work \cite{BLPW12b} we demonstrate the power of these tools with the important problem of sparsity optimization with affine constraints. Without any assumptions on the regularity of the sets or the intersection we achieve local convergence results, with rates and radii of convergence, where all other sufficient conditions, particularly those of \cite{LM} and \cite{LLM}, fail. \subsection*{Acknowledgments} HHB was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Canada Research Chair Program. This research was initiated when HHB visited the Institut f\"ur Numerische und Angewandte Mathematik, Universit\"at G\"ottingen during his study leave in Summer~2011. HHB thanks DRL and the Institut for their hospitality. DRL was supported in part by the German Research Foundation grant SFB755-A4. HMP was partially supported by the Pacific Institute for the Mathematical Sciences and by a University of British Columbia research grant. XW was partially supported by the Natural Sciences and Engineering Research Council of Canada. \end{document}
\begin{document} \title{Numerical solutions of the generalized equal width wave equation using Petrov Galerkin method} \author{Samir Kumar Bhowmik$^1$ and Seydi Battal Gazi Karakoc$^2$ \\ $1.$ Department of Mathematics, University of Dhaka\\ Dhaka 1000, Bangladesh. \\ e-mail: [email protected] \\ $2.$ Department of Mathematics, Faculty of Science and Art, \\ Nevsehir Haci Bektas Veli University, Nevsehir, 50300, Turkey.\\ e-mail: [email protected] \\ } \maketitle \begin{abstract} In this article we consider the generalized equal width wave (GEW) equation, a significant nonlinear wave equation that can be used to model many problems arising in the applied sciences. Since analytic solutions of the GEW equation can rarely be obtained, developing numerical methods for this type of equation is of great importance and interest. Here we are interested in a Petrov-Galerkin method in which the element shape functions are quadratic B-splines and the weight functions are linear B-splines. We first investigate the existence and uniqueness of solutions of the weak form of the equation. Then we establish theoretical error bounds for the semi-discrete spatial scheme as well as for a fully discrete scheme at $t=t^{n}$. Furthermore, a Fourier analysis is applied to show that the proposed scheme is unconditionally stable. Finally, the propagation of single and double solitary waves and the evolution of solitons are analyzed, and the error norms in $L_{2}(\Omega)$ and $L_{\infty}(\Omega)$ are calculated, to demonstrate the efficiency and applicability of the proposed numerical scheme. The three invariants of motion ($I_{1}, I_{2}$ and $I_{3}$) are monitored to verify the conservation properties of the proposed algorithm. The proposed numerical scheme is compared with other published schemes and shown to be valid and effective, outperforming them. \end{abstract} \textbf{Keywords:} GEW equation; Petrov-Galerkin; B-splines; Solitary waves; Soliton. \textbf{AMS classification:} {65N30, 65D07, 74S05,74J35, 76B25.} \section{Introduction} Nonlinear partial differential equations are extensively used to explain complex phenomena in different fields of science, such as plasma physics, fluid mechanics, hydrodynamics, applied mathematics, solid state physics and optical fibers. One of the important issues for nonlinear partial differential equations is the search for exact solutions. Because of the complexity of nonlinear differential equations, their exact solutions are commonly not derivable. Since only limited classes of these equations can be solved by analytical means, numerical solutions of nonlinear partial differential equations are very useful for examining physical phenomena. The regularized long wave (RLW) equation, \begin{equation} U_{t}+U_{x}+\varepsilon UU_{x}-\mu U_{xxt}=0, \label{rlw} \end{equation} is a model equation for nonlinear long waves and can describe many important physical phenomena with weak nonlinearity and dispersion, including nonlinear transverse waves in shallow water, ion-acoustic and magneto-hydrodynamic waves in plasma, elastic media, optical fibres, acoustic-gravity waves in compressible fluids, pressure waves in liquid--gas bubbles and phonon packets in nonlinear crystals \cite{mei}. The RLW equation was first suggested to describe the behavior of the undular bore by Peregrine \cite{pereg,pereg1}, who also constructed the first numerical method for the equation using a finite difference scheme.
The RLW equation is an alternative description of nonlinear dispersive waves to the more usual Korteweg--de Vries (KdV) equation \cite{ben} \begin{equation} U_{t}+\varepsilon UU_{x}+\mu U_{xxx}=0. \label{KdV} \end{equation} This equation was first derived by Korteweg and de Vries to describe one-dimensional shallow water solitary waves \cite{khalid}. The equation has found numerous applications in the physical sciences and engineering, such as fluid and quantum mechanics, plasma physics, nonlinear optics, waves in anharmonic crystals, bubble--liquid mixtures, ion-acoustic and magneto-hydrodynamic waves in a warm plasma, as well as shallow water waves. The equal width (EW) wave equation \begin{equation} U_{t}+\varepsilon UU_{x}-\mu U_{xxt}=0, \label{ew} \end{equation} which is less well known and was introduced by Morrison et al. \cite{morrison}, is an alternative description to the more common KdV and RLW equations. This equation is named the equal width equation because its solitary wave solutions of permanent form and speed, for a given value of the parameter $\mu$, have an equal width or wavelength for all wave amplitudes \cite{hmd}. The solutions of this equation are solitary waves, called solitons, whose shapes are not changed after collisions. The GEW equation, derived for long waves propagating in the positive $x$ direction, takes the form \begin{equation} U_{t}+\varepsilon U^{p}U_{x}-\mu U_{xxt}=0, \label{gew} \end{equation} where $p$ is a positive integer, $\varepsilon$ and $\mu$ are positive parameters, $t$ is time, $x$ is the space coordinate and $U(x,t)$ is the wave amplitude. Physical boundary conditions require $U\rightarrow 0$ as $\left\vert x\right\vert \rightarrow \infty$. For this work, the boundary and initial conditions are chosen as \begin{equation} \begin{array}{l} U(a,t)=0,~~~~~~\ \ \ \ \ U(b,t)=0, \\ U_{x}(a,t)=0,~~~~\ \ \ ~~U_{x}(b,t)=0, \\ U_{xx}(a,t)=0,~~~~\ \ \ ~~U_{xx}(b,t)=0, \\ U(x,0)=f(x),~~\ \ \ \ a\leq x\leq b, \end{array} \label{Bou.Con.} \end{equation} where $f(x)$ is a localized disturbance inside the considered interval and will be specified later. As is well known, in fluid problems the quantity $U$ is associated with the vertical displacement of the water surface, while in plasma applications $U$ is the negative of the electrostatic potential. Therefore, the solitary wave solutions of Eq.$(\ref{gew})$ help us to describe many physical phenomena with weak nonlinearity and dispersion, such as nonlinear transverse waves in shallow water, ion-acoustic and magneto-hydrodynamic waves in plasma and phonon packets in nonlinear crystals \cite{sbgk}. The GEW equation which we tackle here is based on the EW equation and is related to both the generalized regularized long wave (GRLW) equation \cite{kaya,kaya1} and the generalized Korteweg--de Vries (GKdV) equation \cite{gard4}. These general equations are nonlinear wave equations with $(p+1)$th-order nonlinearity and have pulse-like solitary wave solutions. The study of the GEW equation makes it possible to investigate the creation of secondary solitary waves and/or radiation, and to gain insight into the corresponding processes of particle physics \cite{dodd,levis}. This equation has many applications in physical situations, for example unidirectional waves propagating in a water channel, long waves in near-shore zones, and many others \cite{panahipour}.
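For later reference, we also recall that Eq.$(\ref{gew})$ admits exact solitary wave solutions; as obtained, e.g., in \cite{hmd}, substituting a sech-type travelling wave into Eq.$(\ref{gew})$ and using the decay conditions above shows by direct computation that \begin{equation*} U(x,t)=\left[ \frac{c(p+1)(p+2)}{2\varepsilon }\right] ^{1/p}\mathrm{sech}^{2/p}\left[ \frac{p}{2\sqrt{\mu }}\left( x-ct-x_{0}\right) \right] \end{equation*} is a solitary wave travelling with speed $c$, where $x_{0}$ denotes the position of its initial centre. A profile of this form is the natural candidate for the initial disturbance $f(x)$ when the motion of a single solitary wave is studied.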
If $p=1$ is taken in Eq.$(\ref{gew})$, the EW equation [15-20] is obtained, and if $p=2$ is taken in Eq.$(\ref{gew})$, the resulting equation is known as the modified equal width wave (MEW) equation [21-27]. In recent years, various numerical methods have been developed for the solution of the GEW equation. Hamdi et al. \cite{hmd} derived exact solitary wave solutions of the GEW equation. Evans and Raslan \cite{evans} investigated the GEW equation by using the collocation method based on quadratic B-splines to obtain the numerical solutions of the single solitary wave, the interaction of solitary waves and the birth of solitons. The GEW equation was solved numerically by a B-spline collocation method by Raslan \cite{raslan}. The homogeneous balance method was used to construct exact travelling wave solutions of the generalized equal width equation by Taghizadeh et al. \cite{tag}. The equation was solved numerically by a meshless method based on a global collocation with standard types of radial basis functions (RBFs) in \cite{panahipour}. A quintic B-spline collocation method with two different linearization techniques and a lumped Galerkin method based on B-spline functions were employed to obtain numerical solutions of the GEW equation by Karakoc and Zeybek \cite{sbgk,sbgk6}, respectively. Roshan \cite{roshan} applied a Petrov-Galerkin method to the GEW equation using linear hat functions and quadratic B-spline functions as test and trial functions, respectively. In this study, we have constructed a lumped Petrov-Galerkin method for the GEW equation using quadratic B-splines as element shape functions and linear B-splines as weight functions. The content of this work is organized as follows: \begin{itemize} \item[-] A semi-discrete Galerkin finite element scheme for the equation, along with the corresponding error bounds, is presented in Section~2. \item[-] A fully discrete Galerkin finite element scheme is studied in Section~3. \item[-] Section~4 is concerned with the construction and implementation of the Petrov-Galerkin finite element method for the GEW equation. \item[-] Section~5 contains a linear stability analysis of the scheme. \item[-] Section~6 includes the analysis of the motion of a single solitary wave, the interaction of two solitary waves and the evolution of solitons with different initial and boundary conditions. \item[-] Finally, we conclude the study with some remarks. \end{itemize} \section{Variational formulation and its analysis} The higher order nonlinear initial boundary value problem \eqref{gew} can be written as \begin{equation} u_{t}-\mu \Delta u_{t}=\nabla \mathcal{F}(u),~~~~~ \label{grlw11} \end{equation} where $ \mathcal{F}(u) = -\frac{\varepsilon}{p+1} u^{p+1}, $ subject to the initial condition \begin{equation} u(x,0)=f_1(x),~~~~~a\leq x\leq b, \label{intl} \end{equation} and the boundary conditions \begin{equation} \begin{array}{lll} u(a,t)=0,~~~~~u(b,t)=0, & & \\ u_{x}(a,t)=0,~~~~~u_{x}(b,t)=0, & & \\ u_{xx}(a,t)=0,~~~~~u_{xx}(b,t)=0, & t>0. & \end{array} \label{bndry} \end{equation} To define the weak form of the solutions of \eqref{grlw11} and to investigate the existence and uniqueness of solutions of the weak form we define the following spaces. Here $H^k(\Omega)$, for an integer $k\ge 0$, is the usual Sobolev space of real valued functions on $\Omega$ and \begin{equation*} H_0^{k}(\Omega) = \left\{v \in H^k(\Omega): D^{i}v = 0\ \text{on } \partial\Omega,\ i = 0, 1, \cdots, k-1 \right\} \end{equation*} where $D = \frac{\partial}{\partial x}$.
We denote by $\|\cdot \|_k$ the usual norm on $H^k(\Omega)$; when $k=0$, $ \|\cdot \|_0 = \|\cdot \|$ denotes the $L_2$ norm and $(\cdot, \cdot)$ denotes the standard $L_2$ inner product~\cite{Noureddine2013, ThomeeVidar2006}. Multiplying \eqref{grlw11} by $\xi\in H_0^1(\Omega)$ and integrating over $\Omega$, we have \begin{equation*} (u_{t}, \xi) - \mu (\Delta u_{t}, \xi) = (\nabla \mathcal{F}(u), \xi). \end{equation*} Applying Green's theorem to the above inner products, we aim to find $u(\cdot, t)\in H_0^1(\Omega)$ such that \begin{equation} \label{bbmbur_v2} \left(u_{t}, \xi \right) + \mu \left(\nabla u_{t}, \nabla\xi \right) = -\left( \mathcal{F}(u), \nabla \xi \right), \ \forall \ \xi\in H_0^1(\Omega), \end{equation} with $u(0) = u_0$. Here we state the uniqueness theorem without proof; it can be established following \cite{Noureddine2013, ThomeeVidar2006}. \begin{thrm} \label{thrm01} If $u$ satisfies \eqref{bbmbur_v2} then \begin{equation*} \|u(t)\|_1 = \|u_0\|_1,\ t\in\ (0,\ T],\ \text{and }\ \|u\|_{L^\infty(L^\infty(\Omega))} \le C\|u_0\|_1 \end{equation*} holds if $u_0\in H_0^1(\Omega)$, where $C$ is a positive constant. \end{thrm} \begin{thrm} Assume that $u_0 \in H_0^1(\Omega)$ and $T >0$. Then there exists one and only one $u$ satisfying \eqref{bbmbur_v2} such that \begin{equation*} u \in L^\infty(0, T; H_0^1(\Omega)) \ \text{with }\ (u(x, 0), \xi) = (u_0, \xi),\ \xi\in H_0^1(\Omega). \end{equation*} \end{thrm} \subsection{Semi-discrete Galerkin Scheme} For any $0<h<1$, let $S_{h}$ be a finite dimensional subspace of $H_{0}^{1}(\Omega )$ such that, for $u\in H_{0}^{1}(\Omega)\cap H^{3}(\Omega )$, there exists a constant $C$ independent of $h$~\cite{Noureddine2013, ThomeeVidar2006, ciarlet} with \begin{equation} \inf_{\xi \in S_{h}}\Vert u-\xi \Vert \leq Ch^{3}\|u\|_3. \label{interp01} \end{equation} Our aim here is to look for a solution $u_{h}:[0,\ T]\rightarrow S_{h}$ of the following semi-discrete finite element formulation of \eqref{grlw11}: \begin{equation} \left( u_{ht},\xi \right) + \left( \nabla u_{ht},\nabla \xi \right) =-\left( \mathcal{F} (u_{h}),\nabla \xi \right) ,\ \ \forall\ \xi \in S_{h}, \label{bbmbur_v51} \end{equation} where $u_{h}(0)=u_{0,h}\in S_{h}$ approximates $u_{0}$. We first state an a priori bound for the solution of \eqref{bbmbur_v51} before establishing the main convergence result. \begin{thrm} \label{thrm04} Let $u_h \in S_h$ be a solution of \eqref{bbmbur_v51}. Then $ u_h$ satisfies \begin{equation*} \|u_h\|_1^2 = \|u_{0,h}\|_1^2,\quad t\in (0,\ T], \quad\text{and}\quad \|u_h\|_{L^\infty(L^\infty(\Omega))} \le C\|u_{0,h}\|_1, \end{equation*} where $C$ is a positive constant. \end{thrm} \begin{proof} The proof is straightforward; it follows from \cite{Battel_SKB_2018}. \end{proof} Our next goal is to establish a theoretical estimate of the error in the semi-discrete scheme \eqref{bbmbur_v51} for \eqref{bbmbur_v2}.
To that end, we start by considering the bilinear form \begin{equation*} \mathcal{A}(u,v)=(\nabla u,\nabla v),\ \forall\ u,\ v\in H_{0}^{1}(\Omega), \end{equation*} which satisfies the boundedness property \begin{equation} |\mathcal{A}(u,v)|\leq M\Vert u\Vert _{1}\Vert v\Vert _{1},\ \forall \ u,\ v\in H_{0}^{1}(\Omega) \label{boundedness} \end{equation} and the coercivity property (on $\Omega $) \begin{equation} \mathcal{A}(u,u)\geq \alpha \Vert u\Vert _{1}^{2},\ \forall \ u\in H_{0}^{1} (\Omega),\ \text{for some }\alpha >0. \label{coercivity} \end{equation} Let $\tilde{u}$ be an auxiliary projection of $u$ \cite{Noureddine2013, ciarlet, ThomeeVidar2006}; then $\mathcal{A}$ satisfies \begin{equation} \mathcal{A}(u-\tilde{u},\xi )=0,\ \xi \in S_{h}. \label{projection} \end{equation} Now the rate of convergence (accuracy) of the spatial approximation \eqref{bbmbur_v51} of \eqref{bbmbur_v2} is given by the following theorem. \begin{thrm} Let $u_h\in S_h$ be a solution of \eqref{bbmbur_v51} and let $u\in H_0^1(\Omega)$ be that of \eqref{bbmbur_v2}. If $\|u(0) - u_{0, h}\|\le Ch^3$ holds, then \begin{equation*} \|u - u_h\|\le C h^3, \end{equation*} where $C>0$ is a constant. \end{thrm} \begin{proof} Letting $ \mathcal{E} = u-u_h = \psi +\theta, $ where $ \psi = u - \tilde u$ and $\theta = \tilde u - u_h$, we write \begin{align*} \alpha \|u-\tilde u\|_1^2 &\le \mathcal{A}(u-\tilde u, u-\tilde u) \\ & = \mathcal{A}(u-\tilde u, u-\xi),\ \xi\in S_h. \end{align*} From \eqref{boundedness}, \eqref{projection} and \cite{ThomeeVidar2006} it follows that \begin{equation}\label{pro_bound} \|u-\tilde u\|_1 \le C\inf_{\xi\in S_h}\| u - \xi\|_1, \end{equation} and thus \eqref{interp01} and \eqref{pro_bound} confirm the inequalities \[ \|\psi\|_1 \le C h^2 \|u\|_3 \ \text{ and, by a standard duality argument, }\ \|\psi \| \le C h^3 \|u\|_3. \] Now, applying $\frac{\partial}{\partial t}$ to \eqref{projection} and simplifying yields~\cite{ThomeeVidar2006} \[ \|\psi_t\|\le C h^3 \|u_t\|_3. \] We also subtract \eqref{bbmbur_v51} from \eqref{bbmbur_v2} to obtain \begin{equation}\label{bbmbur_v544} (\theta_t, \xi) + (\nabla \theta_t, \nabla\xi) = (\psi_t, \xi) - (\mathcal{F}(u)-\mathcal{F}(u_h),\nabla \xi). \end{equation} Now we substitute $\xi = \theta$ in \eqref{bbmbur_v544} and apply the Cauchy-Schwarz inequality to obtain \[ \frac{1}{2}\frac{d}{dt} \|\theta\|_1^2 \le \|\psi_t\|\|\theta\| +\|\mathcal{F}(u) - \mathcal{F}(u_h)\| \|\nabla \theta\|. \] Here \[ \|\mathcal{F}(u) - \mathcal{F}(u_h)\| \le C(\|\psi\| + \|\theta\|) \] follows from the Lipschitz continuity of $\mathcal{F}$ on bounded sets and the boundedness of $u$ and $u_h$. Thus \[ \frac{d}{dt} \|\theta\|_1^2 \le C\left( \|\psi_t\|^2 + \|\psi\|^2 + \|\theta\|^2 + \|\nabla \theta\|^2 \right). \] So \[ \|\theta\|_1^2 \le \|\theta(0)\|_1^2+ C\int_0^t \left( \|\psi_t\|^2 + \|\psi\|^2 + \|\theta\|^2 + \|\nabla \theta\|^2 \right)dt. \] Hence Gronwall's lemma and the bounds on $\psi$ and $\psi_t$ confirm that \[ \|\theta\|_1 \le C(u) h^3 \] if $\theta(0) = 0$, which completes the proof~\cite{ThomeeVidar2006, ciarlet}. \end{proof} \section{Full discrete scheme} Here we aim to find a solution of the semi-discrete problem \eqref{bbmbur_v51} over $[0, T]$, $T>0$. Let $N$ be a positive integer and $\Delta t = \frac{T}{N}$, so that $t^n = n\Delta t$, $n = 0,\ 1,\ 2,\ 3,\cdots,\ N.$ Here we consider \begin{equation*} \phi^n = \phi(t^n),\ \quad \phi^{n-1/2} = \frac{\phi^n + \phi^{n-1}}{2}\quad \& \quad \ \partial_t\phi^n = \frac{\phi^n - \phi^{n-1}}{\Delta t }.
\end{equation*} Using the above notation, we present the time-discretized finite element Galerkin scheme \begin{equation} \left( \partial_t U^n,\xi \right) + \left( \nabla \partial_t U^n ,\nabla \xi \right) = -\left( \mathcal{F} (U^{n-1/2}),\nabla \xi \right) ,\ \xi \in S_{h}, \label{bbmbur_v5} \end{equation} where $U^0 = u_{0, h}$. \begin{thrm} If $U^n$ satisfies \eqref{bbmbur_v5} then \begin{equation*} \|U^J\|_1 = \|U^0\|_1\ \text{ for all }\ 1\le J\le N \end{equation*} and there exists a positive constant $C$ such that \begin{equation*} \|U^J\|_\infty \le C \|U^0\|_1\ \text{ for all} \ 1\le J\le N. \end{equation*} \end{thrm} \begin{proof} Substituting $\xi = U^{n-1/2}$ in \eqref{bbmbur_v5}, it is easy to see that \begin{equation}\label{bbmbur_v11} \frac{1}{2}\partial_t \left(\|U^n\|^2 + \|\nabla U^n\|^2 \right) = - \left(\mathcal{F}(U^{n-1/2}), \nabla U^{n - 1/2} \right) = 0. \end{equation} Thus the proof of the first part of the theorem follows by summing from $n = 1$ to $J$, and that of the second part follows from the Sobolev embedding theorem~\cite{ThomeeVidar2006}. \end{proof} Now we focus on establishing a theoretical upper bound for the error in the fully discrete approximation \eqref{bbmbur_v5} at $t = t^n$. \begin{thrm} Let $h$ and $\Delta t$ be sufficiently small and let $u_{0,h} = \tilde u(0)$. Then \begin{equation*} \|u^j - U^j\|_\infty \le C(u, T) (h^3 + \Delta t^2) \quad\text{ for } 1\le j \le N, \end{equation*} where $C$ is independent of $h$ and $\Delta t$. \end{thrm} \begin{proof} Let \begin{align*} \mathcal{E}^{n} &= u^n - U^n = \psi^n + \theta^n \end{align*} where $\psi^n = u^n - \tilde{u^n}$, $\theta^n = \tilde{u^n} - U^n$, $u^n = u(t^n)$, and $\tilde{u^n} = \tilde u(t^n)$. From \eqref{bbmbur_v2} and \eqref{bbmbur_v5}, along with the auxiliary projection defined in the previous section, the following equality holds: \begin{equation}\label{bbmbur_v6} (\partial_t \theta^n,\xi) + (\nabla\partial_t \theta^n, \nabla \xi ) = (\partial_t \psi^n,\xi) + (\tau^n,\xi) + (\nabla \tau^n,\nabla\xi) + \left( \mathcal{F}(u^{n-1/2}) - \mathcal{F}(U^{n-1/2}), \nabla \xi \right), \end{equation} where $\tau^n = u_t^{n-1/2} - \partial_t u^n$. Now substituting $\xi$ by $\theta^{n-1/2}$ in \eqref{bbmbur_v6} yields \begin{equation}\label{bbmbur_v7} \frac{1}{2}\partial_t \|\theta^n\|^2_1 \le C \left( \|\partial_t \psi^n\|^2 + \|\tau^n\|_1^2 + \|\theta^{n-1/2}\|_1^2 + \left\|\mathcal{F}(u^{n-1/2}) - \mathcal{F}(U^{n-1/2}) \right\|^2\right). \end{equation} Now \begin{equation}\label{bbmbur_v8} \|\tau^n\|^2 \le C \Delta t^3 \int_{t_{n-1}}^{t_n} \|u_{ttt}(s)\|^2 ds, \end{equation} and from the boundedness of $\|U^n\|_\infty$ and $\|u^n\|_\infty$ it yields \begin{equation}\label{bbmbur_v9} \left\|\mathcal{F}(u^{n-1/2}) - \mathcal{F}(U^{n-1/2}) \right\| \le C\left(\|\theta^{n-1/2}\| +\|\psi^{n-1/2}\| \right) \end{equation} since $\mathcal{F}$ is a Lipschitz function. Thus from \eqref{bbmbur_v7}, \eqref{bbmbur_v8} and \eqref{bbmbur_v9} it follows that \begin{eqnarray}\label{bbmbur_v10} \partial_t \|\theta^n\|^2_1 & \le C\|\theta^{n-1/2} \|_1^2 + C \left( \|\partial_t \psi^n\|^2 + \|\psi^n\|^2 + \|\psi^{n-1}\|^2 \right. \nonumber\\ & \qquad + \left. \Delta t^3 \int_{t_{n-1}}^{t_n} \|u_{ttt}(s)\|^2 ds \right). \end{eqnarray} So \eqref{bbmbur_v10} can be simplified as \begin{align*} (1-C\Delta t)\|\theta^n\|^2_1 & \le (1+C\Delta t)\|\theta^{n-1} \|_1^2 + C \Delta t \left( \|\partial_t \psi^n\|^2 + \right.\\ &\qquad \left. \|\psi^n\|^2 + \|\psi^{n-1}\|^2 + \Delta t^3 \int_{t_{n-1}}^{t_n} \|u_{ttt}(s)\|^2 ds \right).
\end{align*}
Choosing $\Delta t>0$ so that $1- C\Delta t > 0$ and summing over $n = 1, 2, \ldots, J$, together with the bounds on $\|\psi^n\|$ and $\|\partial_t \psi^n\|$, yields
\[
\|\theta^n\|_1 \le C(u, T) (h^3 + \Delta t^2),
\]
and the rest follows from the triangle inequality and the Sobolev embedding theorem~\cite{ThomeeVidar2006, ciarlet}.
\end{proof}
\section{Construction and Implementation of the method}
We consider a uniform partition of the solution interval $a\leq x\leq b$ by the knots $ a=x_{0}<x_{1}<...<x_{N}=b$, with $ h=x_{m+1}-x_{m},$ $m=0,1,2,...,N-1$. For this partition we use the quadratic B-splines $\phi _{m}(x)$, (\emph{m}= $-1(1)$ $N$), at the points $x_{m}$ identified by Prenter \cite{prenter}, which generate a basis over the interval $[a,b]$:
\begin{equation}
\begin{array}{l}
\phi _{m}(x)=\frac{1}{h^{2}}\left\{
\begin{array}{ll}
(x_{m+2}-x)^{2}-3(x_{m+1}-x)^{2}+3(x_{m}-x)^{2},~~ & ~x\in \lbrack x_{m-1},x_{m}), \\
(x_{m+2}-x)^{2}-3(x_{m+1}-x)^{2},~~ & ~x\in \lbrack x_{m},x_{m+1}), \\
(x_{m+2}-x)^{2},~~ & ~x\in \lbrack x_{m+1},x_{m+2}), \\
0~ & ~otherwise.
\end{array}
\right.
\end{array}
\label{3}
\end{equation}
We seek an approximation $U_{N}(x,t)$ to the solution $U(x,t)$ which uses these splines as trial functions,
\begin{equation}
U_{N}(x,t)=\sum_{j=-1}^{N}\phi _{j}(x)\delta _{j}(t),\ \label{4}
\end{equation}
in which the unknown parameters $\delta _{j}(t)$ will be computed by using the boundary and weighted residual conditions. Using the local coordinate transformation $h\eta =x-x_{m}$ $(0\leq \eta \leq 1)$ for the finite element $[x_{m},x_{m+1}],$ the quadratic B-spline shape functions $(\ref{3})$ can be rewritten in terms of $\eta $ over the interval $[0,1]$ as
\begin{equation}
\begin{array}{l}
\phi _{m-1}=(1-\eta )^{2}, \\
\phi _{m}=1+2\eta -2\eta ^{2}, \\
\phi _{m+1}=\eta ^{2}.
\end{array}
\label{5}
\end{equation}
All quadratic B-splines except $\phi _{m-1}(x),\phi _{m}(x)$ and $\phi _{m+1}(x)$ vanish over the interval $[x_{m},x_{m+1}].$ Therefore the approximation $(\ref{4})$ over this element can be written in terms of the basis functions $(\ref{5})$ as
\begin{equation}
U_{N}(\eta ,t)=\sum_{j=m-1}^{m+1}\delta _{j}\phi _{j}. \label{6}
\end{equation}
Using the quadratic B-splines $(\ref{5})$ and the approximation $(\ref{6}),$ the nodal values $U_{m}$ and $U_{m}^{\prime }$ at the knots are found in terms of the element parameters $\delta _{m}$ as follows:
\begin{equation}
\begin{array}{l}
U_{m}=U(x_{m})=\delta _{m-1}+\delta _{m}, \\
U_{m}^{\prime }=U^{\prime }(x_{m})=\frac{2}{h}(\delta _{m}-\delta _{m-1}).
\end{array}
\label{7}
\end{equation}
Linear B-splines are used as the weight functions $L_{m}$. The linear B-splines $L_{m}$ at the knots $x_{m}$ are given by \cite{prenter}:
\begin{equation}
\begin{array}{l}
L_{m}(x)=\frac{1}{h}\left\{
\begin{array}{ll}
(x_{m+1}-x)-2(x_{m}-x),~~ & ~x\in \lbrack x_{m-1},x_{m}), \\
(x_{m+1}-x),~~ & ~x\in \lbrack x_{m},x_{m+1}), \\
0~ & ~otherwise.
\end{array}
\right.
\end{array}
\label{lin}
\end{equation}
A typical finite element $[x_{m},x_{m+1}]$ is mapped onto the interval $[0,1]$ by the same local coordinate $\eta $, related to the global coordinate through $h\eta =x-x_{m}$ $(0\leq \eta \leq 1),$ so that the linear B-splines $L_{m}$ become
\begin{equation}
\begin{array}{l}
L_{m}=1-\eta \\
L_{m+1}=\eta .
\end{array}
\label{lin1}
\end{equation}
Applying the Petrov-Galerkin method to Eq.$(\ref{gew}),$ we obtain its weak form
\begin{equation}
\int_{a}^{b}L(U_{t}+\varepsilon U^{p}U_{x}-\mu U_{xxt})dx=0. \label{8}
\end{equation}
Applying the change of variable $x\rightarrow \eta $ in Eq.$(\ref{8})$ gives
\begin{equation}
\int_{0}^{1}L\left( U_{t}+\frac{\varepsilon }{h}\hat{U}^{p}U_{\eta }-\frac{\mu }{h^{2}}U_{\eta \eta t}\right) d\eta =0, \label{9}
\end{equation}
where $\hat{U}$ is taken to be constant over an element in order to simplify the integral. Integrating Eq.$(\ref{9})$ by parts and using Eq.$(\ref{gew})$ leads to
\begin{equation}
\int_{0}^{1}[L(U_{t}+\lambda U_{\eta })+\beta L_{\eta }U_{\eta t}]d\eta =\beta LU_{\eta t}|_{0}^{1}, \label{100}
\end{equation}
where $\lambda =\frac{\varepsilon \hat{U}^{p}}{h}$ and $\beta =\frac{\mu }{h^{2}}.$ Choosing the weight functions $L_{m}$ as the linear B-splines $(\ref{lin1})$ and substituting the approximation $(\ref{6})$ into Eq.$(\ref{100})$ over the element $[0,1]$ produces
\begin{equation}
\sum_{j=m-1}^{m+1}\left[\int_{0}^{1}(L_{i}\phi _{j}+\beta L_{i}^{\prime }\phi _{j}^{\prime })d\eta -\beta L_{i}\phi _{j}^{\prime }|_{0}^{1}\right]\dot{\delta}_{j}^{e}+\sum_{j=m-1}^{m+1}\left(\lambda \int_{0}^{1}L_{i}\phi _{j}^{\prime }d\eta \right)\delta _{j}^{e}=0, \label{11}
\end{equation}
which can be written in matrix form as
\begin{equation}
\lbrack A^{e}+\beta (B^{e}-C^{e})]\dot{\delta}^{e}+\lambda D^{e}\delta ^{e}=0. \label{120}
\end{equation}
In the above equations and throughout the article, the dot denotes differentiation with respect to $t$ and $\delta ^{e}=(\delta _{m-1},\delta _{m},\delta _{m+1})^{T}$ is the vector of element parameters. $A_{ij}^{e},B_{ij}^{e},C_{ij}^{e}$ and $D_{ij}^{e}$ are the $2\times 3$ rectangular element matrices given by
\begin{equation*}
A_{ij}^{e}=\int_{0}^{1}L_{i}\phi _{j}d\eta =\frac{1}{12}\left[
\begin{array}{ccc}
3 & 8 & 1 \\
1 & 8 & 3
\end{array}
\right] ,
\end{equation*}
\begin{equation*}
B_{ij}^{e}=\int_{0}^{1}L_{i}^{\prime }\phi _{j}^{\prime }d\eta =\frac{1}{2}\left[
\begin{array}{ccc}
1 & 0 & -1 \\
-1 & 0 & 1
\end{array}
\right] ,
\end{equation*}
\begin{equation*}
C_{ij}^{e}=L_{i}\phi _{j}^{\prime }|_{0}^{1}=\left[
\begin{array}{ccc}
2 & -2 & 0 \\
0 & -2 & 2
\end{array}
\right] ,
\end{equation*}
\begin{equation*}
D_{ij}^{e}=\int_{0}^{1}L_{i}\phi _{j}^{\prime }d\eta =\frac{1}{3}\left[
\begin{array}{ccc}
-2 & 1 & 1 \\
-1 & -1 & 2
\end{array}
\right]
\end{equation*}
where $i$ takes the values $m,m+1$ and $j$ takes the values $m-1,m,m+1$ for the typical element $[x_{m},x_{m+1}].$ A lumped value for $U^{p}$ is obtained from $\left(\frac{U_{m}+U_{m+1}}{2}\right)^{p}$ as
\begin{equation*}
\lambda =\frac{\varepsilon }{2^{p}h}(\delta _{m-1}+2\delta _{m}+\delta _{m+1})^{p}.
\end{equation*}
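Before assembling the element contributions, we note that the entries of $A_{ij}^{e}$ and $D_{ij}^{e}$ above can be checked by direct integration of the shape functions $(\ref{5})$ against the weights $(\ref{lin1})$. The following minimal Python sketch (ours, included for illustration only; it is not part of the algorithm) performs this check with a Gauss--Legendre rule:
{\footnotesize
\begin{verbatim}
import numpy as np

# local quadratic B-spline shape functions (5) and their derivatives on [0,1]
phi  = [lambda e: (1 - e)**2, lambda e: 1 + 2*e - 2*e**2, lambda e: e**2]
dphi = [lambda e: -2*(1 - e), lambda e: 2 - 4*e,          lambda e: 2*e]
# linear B-spline weight functions (lin1)
L = [lambda e: 1 - e, lambda e: e]

# 5-point Gauss-Legendre rule mapped to [0,1]; exact for these polynomial integrands
xg, wg = np.polynomial.legendre.leggauss(5)
eg, wg = 0.5*(xg + 1.0), 0.5*wg

A = np.array([[np.sum(wg*L[i](eg)*phi[j](eg))  for j in range(3)] for i in range(2)])
D = np.array([[np.sum(wg*L[i](eg)*dphi[j](eg)) for j in range(3)] for i in range(2)])

print(np.round(12*A))  # [[3. 8. 1.], [1. 8. 3.]]   -> 12*A^e as stated
print(np.round(3*D))   # [[-2. 1. 1.], [-1. -1. 2.]] -> 3*D^e as stated
\end{verbatim}
}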
Assembling the contributions from all elements leads to the matrix equation
\begin{equation}
\lbrack A+\beta (B-C)]\dot{\delta}+\lambda D\delta =0, \label{13}
\end{equation}
where the global parameters are $\delta =(\delta _{-1},\delta _{0},...,\delta _{N})^{T}$ and the matrices $A$, $B,C$ and $\lambda D$ are derived from the corresponding element matrices $A_{ij}^{e},B_{ij}^{e},C_{ij}^{e}$ and $D_{ij}^{e}.$ Row $m$ of each matrix has the following form:
\begin{equation*}
\begin{array}{l}
A=\frac{1}{12}\left( 1,11,11,1,0\right) ,B=\frac{1}{3}(-1,1,1,-1,0), \\
C=(0,0,0,0,0), \\
\lambda D=\frac{1}{3}\left( -\lambda _{1},-\lambda _{1}-2\lambda _{2},2\lambda _{1}+\lambda _{2},\lambda _{2},0\right)
\end{array}
\end{equation*}
where
\begin{equation*}
\lambda _{1}=\frac{\varepsilon }{2^{p}h}\left( \delta _{m-1}+2\delta _{m}+\delta _{m+1}\right) ^{p},\ \lambda _{2}=\frac{\varepsilon }{2^{p}h}\left( \delta _{m}+2\delta _{m+1}+\delta _{m+2}\right) ^{p}.
\end{equation*}
Applying the Crank--Nicolson approach $\delta =\frac{1}{2}(\delta ^{n}+\delta ^{n+1})$ and the forward finite difference $\dot{\delta}=\frac{\delta ^{n+1}-\delta ^{n}}{\Delta t}$ to Eq.$(\ref{13})$, we get the following matrix system:
\begin{equation}
\lbrack A+\beta (B-C)+\frac{\lambda \Delta t}{2}D]\delta ^{n+1}=[A+\beta (B-C)-\frac{\lambda \Delta t}{2}D]\delta ^{n} \label{14}
\end{equation}
where $\Delta t$ is the time step. Imposing the boundary conditions ($\ref{Bou.Con.})$ on the system $(\ref{14})$ makes the matrix equation square. This system is solved efficiently with a variant of the Thomas algorithm; in the solution process, two or three inner iterations $\delta ^{n\ast }=\delta ^{n}+\frac{1}{2}(\delta ^{n}-\delta ^{n-1})$ are also performed at each time step to cope with the nonlinearity. As a result, a typical row of the matrix system $(\ref{14})$ may be written in terms of the nodal parameters $\delta ^{n}$ and $\delta ^{n+1}$ as:
\begin{equation}
\begin{array}{l}
\gamma _{1}\delta _{m-1}^{n+1}+\gamma _{2}\delta _{m}^{n+1}+\gamma _{3}\delta _{m+1}^{n+1}+\gamma _{4}\delta _{m+2}^{n+1}= \\
\gamma _{4}\delta _{m-1}^{n}+\gamma _{3}\delta _{m}^{n}+\gamma _{2}\delta _{m+1}^{n}+\gamma _{1}\delta _{m+2}^{n}
\end{array}
\label{15}
\end{equation}
where
\begin{equation*}
\begin{array}{l}
\gamma _{1}=\frac{1}{12}-\frac{\beta }{3}-\frac{\lambda \Delta t}{6},\qquad \gamma _{2}=\frac{11}{12}+\frac{\beta }{3}-\frac{3\lambda \Delta t}{6}, \\
\gamma _{3}=\frac{11}{12}+\frac{\beta }{3}+\frac{3\lambda \Delta t}{6},\qquad \gamma _{4}=\frac{1}{12}-\frac{\beta }{3}+\frac{\lambda \Delta t}{6}.
\end{array}
\end{equation*}
To start the iteration, the initial vector $\delta ^{0}$ is computed by using Eqs.($\ref{Bou.Con.}).$ Using the relations $U_{N}(x_{m},0)=U(x_{m},0)$, $m=0,1,2,...,N$, at the knots, together with the derivative condition $U_{N}^{^{\prime }}(x_{N},0)=U^{^{\prime }}(x_{N},0)=0$, and again employing a variant of the Thomas algorithm, the initial vector $\delta ^{0}$ is easily obtained from the following matrix system
\begin{equation*}
\left[
\begin{array}{cccccc}
1 & 1 & & & & \\
& 1 & 1 & & & \\
& & & \ddots & & \\
& & & & 1 & 1 \\
& & & & -2 & 2
\end{array}
\right] \left[
\begin{array}{c}
\delta _{-1}^{0} \\
\delta _{0}^{0} \\
\vdots \\
\delta _{N-1}^{0} \\
\delta _{N}^{0}
\end{array}
\right] =\left[
\begin{array}{c}
U(x_{0},0) \\
U(x_{1},0) \\
\vdots \\
U(x_{N},0) \\
hU^{^{\prime }}(x_{N},0)
\end{array}
\right] .
\end{equation*}
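For concreteness, the bidiagonal system above can be set up and solved as in the following Python sketch (ours; schematic only, with a dense solve in place of the Thomas-algorithm variant used in the actual implementation):
{\footnotesize
\begin{verbatim}
import numpy as np

def initial_parameters(u0, du0_right, x):
    """Solve the system above for delta^0 = (delta_{-1}, ..., delta_N).

    u0        : callable giving the initial condition U(x,0)
    du0_right : value of U'(x_N,0) (zero for the test problems below)
    x         : uniformly spaced knots x_0, ..., x_N with spacing h
    """
    N, h = len(x) - 1, x[1] - x[0]
    M = np.zeros((N + 2, N + 2))
    rhs = np.zeros(N + 2)
    for m in range(N + 1):                    # U_N(x_m,0) = delta_{m-1} + delta_m
        M[m, m] = M[m, m + 1] = 1.0
        rhs[m] = u0(x[m])
    M[N + 1, N], M[N + 1, N + 1] = -2.0, 2.0  # 2(delta_N - delta_{N-1}) = h U'(x_N,0)
    rhs[N + 1] = h*du0_right
    return np.linalg.solve(M, rhs)
\end{verbatim}
}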
\section{Stability analysis}
In this section, to investigate the stability of the numerical method, we use the Fourier method based on von Neumann theory and assume that the quantity $U^{p}$ in the nonlinear term $U^{p}U_{x}$ of the equation $(\ref{gew})$ is locally constant. Substituting the Fourier mode $\delta _{j}^{n}=g^{n}e^{ijkh}$, where $k$ is the mode number and $h$ is the element size, into the scheme $(\ref{15})$ gives
\begin{equation}
g=\frac{a-ib}{a+ib}, \label{16}
\end{equation}
where
\begin{equation}
\begin{array}{l}
a=\left( 11+4\beta \right) \cos \left( \frac{kh}{2}\right) +\left( 1-4\beta \right) \cos \left( \frac{3kh}{2}\right) , \\
b=2\lambda \Delta t\left[ 3\sin \left( \frac{kh}{2}\right) +\sin \left( \frac{3kh}{2}\right) \right] .
\end{array}
\label{17}
\end{equation}
Since $|g|=1$, our linearized scheme is unconditionally stable.
\section{Computational results and discussions}
The objective of this section is to test the proposed algorithm on different problems involving the propagation of single solitary waves, the interaction of two solitary waves and the evolution of solitons. For the test problems, we have calculated the numerical solution of the GEW equation for $p=2,3$ and $4$ using the homogeneous boundary conditions and different initial conditions. The $\mathit{L}_{2}$
\begin{equation*}
\mathit{L}_{2}=\left\Vert U^{exact}-U_{N}\right\Vert _{2}\simeq \sqrt{h\sum_{j=0}^{N}\left\vert U_{j}^{exact}-\left( U_{N}\right) _{j}\right\vert ^{2}},
\end{equation*}
and $\mathit{L}_{\infty }$
\begin{equation*}
\mathit{L}_{\infty }=\left\Vert U^{exact}-U_{N}\right\Vert _{\infty }\simeq \max_{j}\left\vert U_{j}^{exact}-\left( U_{N}\right) _{j}\right\vert
\end{equation*}
error norms are considered to measure the efficiency and accuracy of the present algorithm and to compare our results with both the exact values, Eq.$(\ref{exct})$, and other results in the literature whenever available. The exact solution of the GEW equation is taken \cite{evans,sbgk6} to be
\begin{equation}
\mathit{U(x,t)}=\sqrt[p]{\frac{c(p+1)(p+2)}{2\varepsilon }\,\mathrm{sech}^{2}\left[\frac{p}{2\sqrt{\mu }}(x-ct-x_{0})\right]} \label{exct}
\end{equation}
which corresponds to a solitary wave of amplitude $\sqrt[p]{\frac{c(p+1)(p+2)}{2\varepsilon }}$, traveling with speed $c$ in the positive direction of the $x$-axis, with width parameter $\frac{p}{2\sqrt{\mu }}$; $x_{0}$ is an arbitrary constant. With the homogeneous boundary conditions, solutions of the GEW equation possess three invariants of the motion, given by
\begin{equation}
I_{1}=\int_{a}^{b}U(x,t)dx,~~~~~I_{2}=\int_{a}^{b}[{U^{2}(x,t)+\mu U_{x}^{2}(x,t)}]dx,~~~~~I_{3}=\int_{a}^{b}{U^{p+2}(x,t)}dx \label{invrnt}
\end{equation}
related to mass, momentum and energy, respectively.
\subsection{Propagation of single solitary waves}
For the numerical study in this case, we first select $p=2$, $c=0.5$, $h=0.1$, $\Delta t=0.2$, $\mu =1$, $\varepsilon =3$ and $x_{0}=30$ over the interval $[0,80]$, to match the choices of previous papers \cite{sbgk,sbgk6,roshan}. These parameters represent the motion of a single solitary wave of amplitude $1.0$, and the computation is run up to time $t=20$ over the solution interval. The analytical values of the conservation quantities are $I_{1}=3.1415927$, $I_{2}=2.6666667$ and $I_{3}=1.3333333.$ Values of the three invariants as well as the $\mathit{L}_{2}$- and $\mathit{L}_{\infty }$-error norms obtained with our method are recorded in Table $(\ref{400})$.
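For reference, the exact profile $(\ref{exct})$ and the discrete error norms defined above can be evaluated with the following minimal Python sketch (ours, for illustration only); with $p=2$, $c=0.5$, $\varepsilon =3$ it reproduces the stated amplitude of $1.0$:
{\footnotesize
\begin{verbatim}
import numpy as np

def exact(x, t, p=2, c=0.5, mu=1.0, eps=3.0, x0=30.0):
    """Exact solitary wave (exct)."""
    amp = (c*(p + 1)*(p + 2)/(2*eps))**(1.0/p)
    return amp*np.cosh(0.5*p/np.sqrt(mu)*(x - c*t - x0))**(-2.0/p)

def error_norms(U_num, x, t, h, **params):
    """Discrete L2 and L-infinity error norms used in the tables."""
    e = exact(x, t, **params) - U_num
    return np.sqrt(h*np.sum(e**2)), np.max(np.abs(e))

print(exact(np.array([30.0]), 0.0))   # [1.]  -> amplitude at the initial peak
\end{verbatim}
}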
Referring to Table $(\ref{400})$, the error norms $\mathit{L}_{2}$ and $\mathit{L}_{\infty }$ remain less than $1.286582\times 10^{-2}$ and $8.31346\times 10^{-3}$, respectively, and they remain small as the time is increased up to $t=20$. The invariants $I_{1},I_{2}$, $I_{3}$ change from their initial values by less than $9.8\times 10^{-6},$ $3.2\times 10^{-5}$ and $1.3\times 10^{-5},$ respectively, throughout the simulation. This table also confirms that the computed invariants remain in agreement with their exact values. So we conclude that our method is satisfactorily conservative. Comparisons of our results with the exact solution as well as with the values calculated in \cite{sbgk,sbgk6,roshan} are shown in Table $(\ref{401})$ at $t=20$. This table clearly shows that the error norms obtained by our method are marginally smaller than the others. The numerical solutions at different time levels are depicted in Fig. $(\ref{500})$. This figure shows that the single soliton travels to the right at a constant speed and, as expected, conserves its amplitude and shape with increasing time. Initially, the amplitude of the solitary wave is $1.00000$ and its peak is located at $x=30$. At $t=20,$ its amplitude is $0.999416$ with center $x=40$. Thus the absolute difference in amplitudes over the time interval $[0,20]$ is $5.84\times 10^{-4}$. The distribution of the error at discrete times is depicted in Fig. $(\ref{501})$. The error varies from $-8\times 10^{-2}$ to $1\times 10^{-2}$, and the maximum errors occur around the central position of the solitary wave.
\begin{table}[h!]
\caption{Invariants and errors for single solitary wave with $p=2,$ $c=0.5,$ $h=0.1,$ $\protect\varepsilon =3,$ $\Delta t=0.2,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] .$ }
\label{400}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccc}
\hline\hline
$Time$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $L_{2}$ & $L_{\infty }$ \\ \hline
0 & 3.1415863 & 2.6682242 & 1.3333283 & 0.00000000 & 0.00000000 \\
5 & 3.1415916 & 2.6682311 & 1.3333406 & 0.00395289 & 0.00294851 \\
10 & 3.1415934 & 2.6682352 & 1.3333413 & 0.00704492 & 0.00473785 \\
15 & 3.1415948 & 2.6682434 & 1.3333413 & 0.00995547 & 0.00651735 \\
20 & 3.1415961 & 2.6682568 & 1.3333413 & 0.01286582 & 0.00831346 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[h!]
\caption{Comparisons of results for single solitary wave with $p=2,$ $c=0.5,$ $h=0.1,$ $\protect\varepsilon =3,$ $\Delta t=0.2,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] $ at $t=20.$}
\label{401}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccc}
\hline\hline
$Method$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $L_{2}$ & $L_{\infty }$ \\ \hline
Analytic & 3.1415961 & 2.6666667 & 1.3333333 & 0.00000000 & 0.00000000 \\
Our Method & 3.1415916 & 2.6682568 & 1.3333413 & 0.01286582 & 0.00831346 \\
Cubic Galerkin\cite{sbgk} & 3.1589605 & 2.6902580 & 1.3570299 & 0.03803037 & 0.02629007 \\
Quintic Collocation First Scheme\cite{sbgk6} & 3.1250343 & 2.6445829 & 1.3113394 & 0.05132106 & 0.03416753 \\
Quintic Collocation Second Scheme\cite{sbgk6} & 3.1416722 & 2.6669051 & 1.3335718 & 0.01675092 & 0.01026391 \\
Petrov-Galerkin\cite{roshan} & 3.14159 & 2.66673 & 1.33341 & 0.0123326 & 0.0086082 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
{\normalsize
\begin{figure}
\caption{Motion of single solitary wave for $p=2$, $c=0.5$, $h=0.1$, $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ over the interval $[0,80]$ at $t=0,10,20.$}
\label{500}
\end{figure}
\begin{figure}
\caption{Error graph for $p=2,$ $c=0.5,$ $h=0.1,$ $\protect\varepsilon =3,$ $\Delta t=0.2,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] $ at $t=20$.}
\label{501}
\end{figure}
}
For our second experiment, we take the parameters $p=3,$ $c=0.3,$ $h=0.1,$ $\Delta t=0.2,$ $\varepsilon =3,$ $\mu =1$, $x_{0}=30$ over the interval $[0,80]$, to coincide with the choices of previous papers \cite{sbgk,sbgk6,roshan}. Thus the solitary wave has amplitude $1.0$ and the computations are carried out for times up to $t=20.$ The values of the error norms $L_{2},$ $L_{\infty }$ and the conservation quantities $I_{1},I_{2}$, $I_{3}$ are tabulated in Table $(\ref{402})$. According to Table $(\ref{402})$, the error norms $\mathit{L}_{2}$ and $\mathit{L}_{\infty }$ remain less than $4.48357\times 10^{-3}$ and $3.37609\times 10^{-3}$, respectively, and remain small as the time is increased up to $t=20$, while the invariants $I_{1},I_{2}$, $I_{3}$ change from their initial values by less than $1.78\times 10^{-5},$ $2.52\times 10^{-5}$ and $3.55\times 10^{-5},$ respectively. Therefore we can say our method is satisfactorily conservative. In Table $(\ref{403})$ the performance of our new method is compared with the other methods \cite{sbgk,sbgk6,roshan} at $t=20$. It is observed that the errors of the methods \cite{sbgk,sbgk6,roshan} are considerably larger than those obtained with the present scheme. The motion of the solitary wave computed with our scheme is graphed at times $t=0,10,20$ in Fig. $(\ref{502})$. As seen, the single soliton moves to the right at a constant speed and, as anticipated, preserves its amplitude and shape with increasing time. The amplitude is $1.00000$ at $t=0$ and located at $x=30$, while it is $0.999522$ at $t=20$ and located at $x=36$. Therefore the absolute difference in amplitudes over the time interval $[0,20]$ is found to be $4.78\times 10^{-4}$. The distribution of the error at discrete times is drawn in Fig. $(\ref{503})$. The error varies from $-3\times 10^{-3}$ to $4\times 10^{-3}$ and the maximum errors arise around the central position of the solitary wave.
\begin{table}[h!]
\caption{Invariants and errors for single solitary wave with $p=3,$ $c=0.3,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] .$ }
\label{402}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccc}
\hline\hline
$Time$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $L_{2}$ & $L_{\infty }$ \\ \hline
0 & 2.8043580 & 2.4664883 & 0.9855618 & 0.00000000 & 0.00000000 \\
5 & 2.8043723 & 2.4665080 & 0.9855942 & 0.00183258 & 0.00177948 \\
10 & 2.8043747 & 2.4665108 & 0.9855973 & 0.00291958 & 0.00233283 \\
15 & 2.8043753 & 2.4665119 & 0.9855973 & 0.00372417 & 0.00285444 \\
20 & 2.8043758 & 2.4665135 & 0.9855973 & 0.00448357 & 0.00337609 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[h!]
\caption{Comparisons of results for single solitary wave with $p=3,$ $c=0.3,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] $ at $t=20.$}
\label{403}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccc}
\hline\hline
$Method$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $L_{2}$ & $L_{\infty }$ \\ \hline
Our Method & 2.8043758 & 2.4665135 & 0.9855973 & 0.00448357 & 0.00337609 \\
Cubic Galerkin\cite{sbgk} & 2.8187398 & 2.4852249 & 1.0070200 & 0.01655637 & 0.01370453 \\
Quintic Collocation First Scheme\cite{sbgk6} & 2.8043570 & 2.4639086 & 0.9855602 & 0.00801470 & 0.00538237 \\
Quintic Collocation Second Scheme\cite{sbgk6} & 2.8042943 & 2.4637495 & 0.9854011 & 0.00708553 & 0.00480470 \\
Petrov-Galerkin\cite{roshan} & 2.80436 & 2.46389 & 0.98556 & 0.00484271 & 0.00370926 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
{\normalsize
\begin{figure}
\caption{Motion of single solitary wave for $p=3,$ $c=0.3,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] $ at $t=0,10,20.$}
\label{502}
\end{figure}
\begin{figure}
\caption{Error graph for $p=3,$ $c=0.3,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] $ at $t=20$.}
\label{503}
\end{figure}
}
For our final experiment, we take the parameters $p=4,$ $c=0.2,$ $h=0.1,$ $\Delta t=0.2,$ $\varepsilon =3,$ $\mu =1$, $x_{0}=30$ over the interval $[0,80]$ to enable comparisons with earlier papers \cite{sbgk,sbgk6,roshan}. Thus the solitary wave has amplitude $1.0$, and the simulations are run up to time $t=20$ to compute the error norms $L_{2}$ and $L_{\infty }$ and the numerical invariants $I_{1},I_{2}$ and $I_{3}.$ For these values of the parameters, the conservation properties and the $L_{2}$- as well as the $L_{\infty }$-error norms are listed in Table $(\ref{4040})$ for several values of the time level $t$. It can be seen from Table $(\ref{4040})$ that the error norms $\mathit{L}_{2}$ and $\mathit{L}_{\infty }$ remain less than $1.96046\times 10^{-3}$ and $1.33416\times 10^{-3}$, respectively, and remain small as the time is increased up to $t=20$, while the invariants $I_{1},I_{2}$, $I_{3}$ change from their initial values by less than $4.07\times 10^{-5},$ $5.80\times 10^{-5}$ and $6.32\times 10^{-5},$ respectively, throughout the simulation. Hence we can say our method is satisfactorily conservative. The comparison of the results obtained by the current method with those in the other papers \cite{sbgk,sbgk6,roshan} is documented in Table $(\ref{4050})$. It is clearly seen from the table that the errors of the current method are considerably smaller than those obtained with the earlier methods \cite{sbgk,sbgk6,roshan}.
For visual representation, the simulations of the single soliton for the values $p=4$, $c=0.2$, $h=0.1$, $\Delta t=0.2$ at times $t=0,10$ and $20$ are illustrated in Figure $(\ref{504})$. This figure shows that the numerical scheme reproduces the propagation of a single solitary wave, which moves to the right at a nearly constant speed and conserves its amplitude and shape with increasing time. The amplitude is $1.00000$ at $t=0$ and located at $x=30$, while it is $0.999475$ at $t=20$ and located at $x=34$. The absolute difference in amplitudes at times $t=0$ and $t=20$ is $5.25\times 10^{-4}$, so there is little change in amplitude. Error distributions at time $t=20$ are shown graphically in Figure $(\ref{505})$. As can be seen, the maximum errors lie between $-1.5\times 10^{-3}$ and $1.5\times 10^{-3}$ and occur around the central position of the solitary wave.
\begin{table}[h!]
\caption{Invariants and errors for single solitary wave with $p=4,$ $c=0.2,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in \left[ 0,80\right] .$ }
\label{4040}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccc}
\hline\hline
$Time$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $L_{2}$ & $L_{\infty }$ \\ \hline
0 & 2.6220516 & 2.3598323 & 0.7853952 & 0.00000000 & 0.00000000 \\
5 & 2.6220846 & 2.3598808 & 0.7854675 & 0.00125061 & 0.00141788 \\
10 & 2.6220915 & 2.3598891 & 0.7854783 & 0.00178634 & 0.00147002 \\
15 & 2.6220920 & 2.3598898 & 0.7854785 & 0.00193428 & 0.00139936 \\
20 & 2.6220923 & 2.3598903 & 0.7854785 & 0.00196046 & 0.00133416 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[h!]
\caption{Comparisons of results for single solitary wave with $p=4,$ $c=0.2,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in \left[ 0,100\right] $ at $t=20.$ }
\label{4050}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{cccccc}
\hline\hline
$Method$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $L_{2}$ & $L_{\infty }$ \\ \hline
Our Method & 2.6220923 & 2.3598903 & 0.7854785 & 0.00196046 & 0.00133416 \\
Cubic Galerkin\cite{sbgk} & 2.6327833 & 2.3730032 & 0.8023383 & 0.00890617 & 0.00821991 \\
Quintic Collocation First Scheme\cite{sbgk6} & 2.6220508 & 2.3561901 & 0.7853939 & 0.00421697 & 0.00297952 \\
Quintic Collocation Second Scheme\cite{sbgk6} & 2.6219284 & 2.3559327 & 0.7851364 & 0.00339086 & 0.00247031 \\
Petrov-Galerkin\cite{roshan} & 2.62206 & 2.35615 & 0.78534 & 0.00230499 & 0.00188285 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
{\normalsize
\begin{figure}
\caption{Motion of single solitary wave for $p=4,$ $c=0.2,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1,$ $x\in $ $[0,80]$ at $t=0,10,20.$}
\label{504}
\end{figure}
\begin{figure}
\caption{Error graph for $p=4,$ $c=0.2,$ $h=0.1,$ $\Delta t=0.2,$ $\protect\varepsilon =3,$ $\protect\mu =1$ at $t=20$.}
\label{505}
\end{figure}
}
\subsection{Interaction of two solitary waves}
Our second test problem concerns the interaction of two solitary wave solutions of the GEW equation having different amplitudes and traveling in the same direction. We solve the GEW equation with an initial condition given by the linear sum of two well-separated solitary waves of different amplitudes:
\begin{equation}
U(x,0)=\sum_{j=1}^{2}\sqrt[p]{\frac{c_{j}(p+1)(p+2)}{2\varepsilon }\,\mathrm{sech}^{2}\left[\frac{p}{2\sqrt{\mu }}(x-x_{j})\right]}, \label{ini. for two solit}
\end{equation}
where $c_{j}$ and $x_{j}$, $\ j=1,2$ are arbitrary constants.
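For orientation (this is simply the amplitude formula contained in $(\ref{exct})$, evaluated by us for the parameter sets used below), each term in the sum above is a solitary wave of amplitude $\sqrt[p]{c_{j}(p+1)(p+2)/(2\varepsilon )}$, so that
\[
p=3:\quad \sqrt[3]{\tfrac{0.3\cdot 4\cdot 5}{2\cdot 3}}=1,\qquad \sqrt[3]{\tfrac{0.0375\cdot 4\cdot 5}{2\cdot 3}}=\tfrac{1}{2};\qquad
p=4:\quad \sqrt[4]{\tfrac{0.2\cdot 5\cdot 6}{2\cdot 3}}=1,\qquad \sqrt[4]{\tfrac{(1/80)\cdot 5\cdot 6}{2\cdot 3}}=\tfrac{1}{2},
\]
which accounts for the $2:1$ amplitude ratios quoted below.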
For the computational work, two sets of parameters are considered, taking different values of $p$ and $c_{j}$ and the same values of $h=0.1$, $\Delta t=0.025$, $\varepsilon =3$, $\mu =1$ over the interval $0\leq x\leq 80$. We first take $p=3$, $c_{1}=0.3$, $c_{2}=0.0375$, so that the amplitudes of the two solitary waves are in the ratio $2:1$. Calculations are carried out up to $t=100$. The three invariants in this case are tabulated in Table $(\ref{4051})$. It is clear that the quantities are satisfactorily constant and very close to those of the methods \cite{sbgk,sbgk6,roshan} during the computer run. Fig. $(\ref{40510})$ illustrates the interaction of the two positive solitary waves. At $t=100$, the magnitude of the smaller wave is $0.510619$ at position $x=31.8$, and that of the larger wave is $0.999364$ at position $x=46.7$, so that the difference in amplitudes is $0.010619$ for the smaller wave and $0.000636$ for the larger wave. For the second case, we have studied the interaction of two solitary waves with the parameters $p=4$, $c_{1}=0.2$, $c_{2}=1/80$, so that the amplitudes are again in the ratio $2:1$. For this case the experiment is run until time $t=120$. The three invariants in this case are recorded in Table $(\ref{40530})$. The results in this table indicate that the numerical values of the invariants are in good agreement with those of the methods \cite{sbgk,sbgk6,roshan} during the computer run. Fig. $(\ref{4052})$ shows the development of the solitary wave interaction.
\begin{table}[h!]
\caption{Invariants for interaction of two solitary waves with $p=3.$ }
\label{4051}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{ccccccc}
\hline\hline
& & & & & & \\ \hline
& $t$ & $0$ & $30$ & $60$ & $90$ & $100$ \\ \hline
& Our Method & 4.20653 & 4.20657 & 4.20622 & 4.20502 & 4.20517 \\
$I_{1}$ & \cite{sbgk} & 4.20653 & 4.20653 & 4.20616 & 4.20490 & 4.20503 \\
& \cite{sbgk6} First & 4.20653 & 4.20653 & 4.20653 & 4.20653 & 4.20653 \\
& \cite{sbgk6} Second & 4.20653 & 4.20653 & 4.20653 & 4.20653 & 4.20653 \\
& \cite{roshan} & 4.20655 & 4.20655 & 4.20655 & 4.20655 & 4.20655 \\
& Our Method & 3.08311 & 3.08318 & 3.08309 & 3.08220 & 3.08251 \\
$I_{2}$ & \cite{sbgk} & 3.07987 & 3.07991 & 3.07947 & 3.07777 & 3.07797 \\
& \cite{sbgk6} First & 3.07988 & 3.07988 & 3.07988 & 3.07988 & 3.07988 \\
& \cite{sbgk6} Second & 3.07988 & 3.07988 & 3.07988 & 3.07988 & 3.07988 \\
& \cite{roshan} & 3.97977 & 3.07980 & 3.07987 & 3.07974 & 3.07972 \\
& Our Method & 1.01636 & 1.01644 & 1.01664 & 1.01632 & 1.01634 \\
$I_{3}$ & \cite{sbgk} & 1.01636 & 1.01638 & 1.01654 & 1.01616 & 1.01616 \\
& \cite{sbgk6} First & 1.01636 & 1.01636 & 1.01636 & 1.01636 & 1.01636 \\
& \cite{sbgk6} Second & 1.01636 & 1.01636 & 1.01636 & 1.01636 & 1.01636 \\
& \cite{roshan} & 1.01634 & 1.01634 & 1.01634 & 1.01633 & 1.01634 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{table}[h!]
\caption{Invariants for interaction of two solitary waves with $p=4.$ }
\label{40530}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{ccccccc}
\hline\hline
& & & & & & \\ \hline
& $t$ & $0$ & $30$ & $60$ & $90$ & $120$ \\ \hline
& Our Method & 3.93307 & 3.93311 & 3.93393 & 3.93229 & 3.93037 \\
$I_{1}$ & \cite{sbgk} & 3.93307 & 3.93309 & 3.93388 & 3.93222 & 3.93026 \\
& \cite{sbgk6} First & 3.93307 & 3.93307 & 3.93307 & 3.93307 & 3.93307 \\
& \cite{sbgk6} Second & 3.93307 & 3.93307 & 3.93307 & 3.93307 & 3.93307 \\
& \cite{roshan} & 3.93309 & 3.93309 & 3.93309 & 3.93309 & 3.93308 \\
& Our Method & 2.94979 & 2.94985 & 2.95122 & 2.94939 & 2.94801 \\
$I_{2}$ & \cite{sbgk} & 2.94521 & 2.94527 & 2.94703 & 2.94436 & 2.94212 \\
& \cite{sbgk6} First & 2.94524 & 2.94524 & 2.94524 & 2.94524 & 2.94524 \\
& \cite{sbgk6} Second & 2.94524 & 2.94523 & 2.94523 & 2.94523 & 2.94523 \\
& \cite{roshan} & 2.94512 & 2.94510 & 2.94505 & 2.94520 & 2.94511 \\
& Our Method & 0.79766 & 0.79775 & 0.79952 & 0.79824 & 0.79811 \\
$I_{3}$ & \cite{sbgk} & 0.79766 & 0.79770 & 0.79942 & 0.79812 & 0.79794 \\
& \cite{sbgk6} First & 0.79766 & 0.79766 & 0.79766 & 0.79766 & 0.79766 \\
& \cite{sbgk6} Second & 0.79766 & 0.79766 & 0.79766 & 0.79766 & 0.79766 \\
& \cite{roshan} & 0.79761 & 0.79761 & 0.79762 & 0.79761 & 0.79761 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}
\caption{Interaction of two solitary waves at $p=3;$ $(a)t=0,$ $(b)t=50,$ $(c)t=70,$ $(d)t=100.$}
\label{40510}
\end{figure}
\begin{figure}
\caption{Interaction of two solitary waves at $p=4;$ $(a)t=0,$ $(b)t=60,$ $(c)t=80,$ $(d)t=120.$}
\label{4052}
\end{figure}
\subsection{Evolution of solitons}
Finally, another interesting initial value problem for the GEW equation is the evolution of solitons from the Gaussian (Maxwellian) initial condition
\begin{equation}
U(x,0)=\exp (-x^{2}).
\end{equation}
Since the behavior of the solution depends on the value of $\mu $, we choose the two values $\mu =0.1$ and $\mu =0.05$ for $p=2,3,4$. The numerical computations are done up to $t=12$. Calculated numerical invariants at different values of $t$ are documented in Table $(\ref{4053})$. From this table we see that, as the value of $\mu $ increases, the variations of the invariants become smaller, and the calculated invariant values are satisfactorily constant. The evolution of the solitons is presented in Figs. $(\ref{4055})$, $(\ref{4056})$ and $(\ref{4057})$. It is clearly seen in these figures that, as the value of $\mu $ decreases, the number of stable solitary waves increases.
\begin{table}[h!]
\caption{Maxwellian initial condition for different values of $\protect\mu .$ }
\label{4053}\vskip-1.cm
\par
\begin{center}
{\scriptsize
\begin{tabular}{ccccccccccc}
\hline\hline
$\mu $ & $t$ & \multicolumn{3}{c}{p=2} & \multicolumn{3}{c}{p=3} & \multicolumn{3}{c}{p=4} \\ \hline
& & $I_{1}$ & $I_{2}$ & $I_{3}$ & $I_{1}$ & $I_{2}$ & $I_{3}$ & $I_{1}$ & $I_{2}$ & $I_{3}$ \\ \hline
& 0 & 1.7724537 & 1.3792767 & 0.8862269 & 1.7724537 & 1.3792767 & 0.7926655 & 1.7724537 & 1.3792767 & 0.7236013 \\
0.1 & 4 & 1.7724537 & 1.5760586 & 0.8862269 & 1.7724537 & 1.6168691 & 0.7926655 & 1.7724537 & 1.6360543 & 0.7236013 \\
& 8 & 1.7724537 & 1.5838481 & 0.8862269 & 1.7724537 & 1.6245008 & 0.7926655 & 1.7724537 & 1.6481131 & 0.7236013 \\
& 12 & 1.7724537 & 1.5920722 & 0.8862269 & 1.7724537 & 1.6325922 & 0.7926655 & 1.7724537 & 1.6531844 & 0.7236013 \\
\cite{sbgk6} & 12 & 1.7724 & 1.3786 & 0.8862 & 1.7724 & 1.3786 & 0.7928 & 1.7725 & 1.3786 & 0.7243 \\
\cite{roshan} & 12 & 1.7724 & 1.3785 & 0.8861 & 1.7724 & 1.3787 & 0.7926 & 1.7734 & 1.3836 & 0.7224 \\
& 0 & 1.7724537 & 1.3162954 & 0.8862269 & 1.7724537 & 1.3162954 & 0.7926655 & 1.7724537 & 1.3162954 & 0.7236013 \\
& 4 & 1.7724537 & 1.5406812 & 0.8862269 & 1.7724537 & 1.5766908 & 0.7926655 & 1.7724537 & 1.6243519 & 0.7236013 \\
0.05 & 8 & 1.7724537 & 1.6342604 & 0.8862269 & 1.7724537 & 1.6367952 & 0.7926655 & 1.7724537 & 1.6554614 & 0.7236013 \\
& 12 & 1.7724537 & 1.6835979 & 0.8862269 & 1.7724537 & 1.6372439 & 0.7926655 & 1.7724537 & 1.7079133 & 0.7236013 \\
\cite{sbgk6} & 12 & 1.7724 & 1.3159 & 0.8864 & 1.7725 & 1.3160 & 0.7940 & 1.7735 & 1.3188 & 0.7345 \\
\cite{roshan} & 12 & 1.7724 & 1.3160 & 0.8861 & 1.7724 & 1.3156 & 0.7922 & 1.7724 & 1.3177 & 0.7245 \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}
\caption{Maxwellian initial condition $p=2,$ $a)$ $\protect\mu =0.1,$ $b)$ $\protect\mu =0.05$ at $t=12.$ }
\label{4055}
\end{figure}
\begin{figure}
\caption{Maxwellian initial condition $p=3,$ $a)$ $\protect\mu =0.1,$ $b)$ $\protect\mu =0.05$ at $t=12.$ }
\label{4056}
\end{figure}
\begin{figure}
\caption{Maxwellian initial condition $p=4,$ $a)$ $\protect\mu =0.1,$ $b)$ $\protect\mu =0.05$ at $t=12.$}
\label{4057}
\end{figure}
\section{Concluding remarks}
\begin{itemize}
\item Solitary-wave solutions of the GEW equation have been successfully obtained by a Petrov-Galerkin method based on linear B-spline weight functions and quadratic B-spline trial functions.
\item Existence and uniqueness of solutions of the weak form of the given problem, as well as a proof of convergence, have been presented.
\item Solutions of a semi-discrete finite element formulation of the equation and the theoretical bound of the error in the semi-discrete scheme have been demonstrated.
\item The theoretical upper bound of the error in the fully discrete approximation at $t=t^{n}$ has been proved.
\item Our numerical algorithm has been tested on three test problems: the motion of a single solitary wave, for which the analytic solution is known, and the interaction of two solitary waves and the evolution of solitons, for which the analytic solutions are generally unknown during the interaction.
\item The proposed method has been shown to be unconditionally stable.
\item For the single soliton the $L_{2}$ and $L_{\infty }$ error norms, and for the three test problems the invariant quantities $I_{1}$, $I_{2}$ and $I_{3}$, have been computed.
From the obtained results it is clear that the error norms are sufficiently small and the invariants are nearly constant in all computer runs. We can also see that our algorithm for the GEW equation is more accurate than the earlier algorithms in the literature.
\item Our method is an effective and productive tool for studying the behavior of dispersive shallow water waves.
\end{itemize}

\end{document}
\begin{document} \author{David Joyner, Amy Ksir\thanks{Mathematics Dept, USNA, Annapolis, MD 21402, [email protected] and [email protected]}, Roger Vogeler\thanks{Mathematics Dept., Ohio State Univ., [email protected]} } \title{Group representations on \\ Riemann-Roch spaces \\ of some Hurwitz curves } \date{11-14-2006} \maketitle \begin{abstract} Let $q>1$ denote an integer relatively prime to $2,3,7$ and for which $G=PSL(2,q)$ is a Hurwitz group for a smooth projective curve $X$ defined over $\mathbb{C}$. We compute the $G$-module structure of the Riemann-Roch space $L(D)$, where $D$ is an invariant divisor on $X$ of positive degree. This depends on a computation of the ramification module, which we give explicitly. In particular, we obtain the decomposition of $H^1(X,\mathbb{C})$ as a $G$-module. \end{abstract} \vskip .5in \tableofcontents \vskip .3in \section{Introduction} Let $X$ be a smooth projective curve over an algebraically closed field $k$, and let $k(X)$ denote the function field of $X$ (the field of rational functions on $X$). If $D$ is any divisor on $X$ then the Riemann-Roch space $L(D)$ is a finite dimensional $k$-vector space given by \[ L(D)=L_X(D)= \{f\in k(X)^\times \ |\ {\rm div}(f)+D\geq 0\}\cup \{0\}, \] where ${\rm div}(f)$ denotes the (principal) divisor of the function $f\in k(X)$. If $G$ is a finite group of automorphisms of $X$, then $G$ has a natural action on $k(X)$, and on the group $\Div(X)$ of divisors on $X$. If $D$ is a $G$-invariant divisor, then $G$ also acts on the vector space $L(D)$, making it into a $k[G]$-module. The problem of finding the $k[G]$-module structure of $L(D)$ was first considered in the case where $k=\mathbb{C}$ and $D$ is canonical, i.e. $L(D)$ is the space of holomorphic differentials on $X$. This problem was solved by Hurwitz for $G$ cyclic, and then by Chevalley and Weil for general $G$. More generally, the problem has been solved by work of Ellingsrud and L{\o}nsted \cite{EL}, Kani \cite{K}, Nakajima \cite{N}, and Borne \cite{B}. This has resulted in the following equivariant Riemann-Roch formula for the class of $L(D)$ (denoted by square brackets) in the Grothendieck group $R_k(G)$, in the case where $D$ is non-special: \begin{equation} \label{eqn:Borne} [L(D)]=(1-g_{X/G})[k[G]]+[\deg_{eq}(D)]-[\tilde{\Gamma}_G]. \end{equation} Here $g_{X/G}$ is the genus of $X/G$, $\deg_{eq}(D)$ is the equivariant degree of $D$, and $\tilde{\Gamma}_G$ is the (reduced) ramification module (this notation will be defined in sections \ref{sec:rammod} and \ref{sec:equivdeg}). Explicitly computing the $k[G]$-module structure of $L(D)$ in specific cases is of interest currently due to advances in the theory of algebraic-geometric codes. Permutation decoding algorithms use this information to increase their efficiency. In this paper, we consider the case where $X$ is a Hurwitz curve with automorphism group $G=PSL(2,q)$ for some prime power $q$, over $k=\mathbb{C}$. Using the equivariant Riemann-Roch formula above (\ref{eqn:Borne}) and the representation theory of $PSL(2,q)$, we compute explicitly the $\mathbb{C}[G]$-module structure of $L(D)$ for a general invariant effective divisor $D$. In the case where $D$ is a canonical divisor, this yields an explicit computation for the $\mathbb{C}[G]$-module structure of $H^{1}(X,\mathbb{C})$. We are also interested in rationality questions. 
We find that $\tilde{\Gamma}_G$ has a $\mathbb{Q}[G]$-module structure, and therefore may be computed more simply (see Joyner and Ksir \cite{JK1}), as follows: \begin{equation} \label{eqn:JKrammod} \tilde{\Gamma}_G =\bigoplus_{\pi\in G^*} \left[ \sum_{\ell=1}^L ({\rm dim}\, \pi -{\rm dim}\, (\pi^{H_\ell})) \frac{R_\ell}{2}\right]\pi . \end{equation} The sum is over all conjugacy classes of cyclic subgroups of $G$, $H_\ell$ is a representative cyclic subgroup, $\pi^{H_\ell}$ indicates the fixed part of $\pi$ under the action of $H_\ell$, and $R_\ell$ denotes the number of branch points in $Y$ over which the decomposition group is conjugate to $H_\ell$. For some but not all divisors $D$, $L(D)$ has a $\mathbb{Q}[G]$-module structure, and may also be computed more simply. The organization of this paper is as follows. In section 2, we recall some facts about Hurwitz curves and Hurwitz groups. In section 3, we review the representation theory of $PSL(2,q)$, and compute the induced characters necessary for the following section. Our main results are in section $4$, where we compute the ramification module, the equivariant degree for any invariant divisor $D$, and thus the structure of $L(D)$. At the end of section 4 we compute the $\mathbb{C}[G]$-module structure of $H^1(X,\mathbb{C})$. In section 5, we discuss rationality questions, using the results of \cite{JK1} to give more streamlined formulas for the ramification module, and in some cases for $L(D)$. \section{Hurwitz curves} \label{sec:hurwitz} The automorphism group $G$ of a smooth projective curve of genus $g>1$ over an algebraically closed field $k$ of characteristic zero satisfies the {\it Hurwitz bound} \[ |G|\leq 84\cdot (g-1). \] A curve which attains this bound is called a {\it Hurwitz curve} and its automorphism group is called a {\it Hurwitz group}. \subsection{Classification} The number of distinct Hurwitz groups is infinite, and to each one corresponds a finite number of Hurwitz curves. Nevertheless, these curves are quite rare; in particular, the Hurwitz genus values are known to form a rather sparse set of positive integers (see Larsen \cite{L}). Hurwitz groups are precisely those groups which occur as non-trivial finite homomorphic images of the 2,3,7-triangle group \[ \Delta =\langle a,b : a^2=b^3=(ab)^7=1\rangle. \] This is most naturally viewed as the group of orientation-preserving symmetries of the tiling of the hyperbolic plane $\bold{H}$ generated by reflections in the sides of a fundamental triangle having angles $\pi/2$, $\pi/3$, and $\pi/7$. Each proper normal finite-index subgroup $K\triangleleft\Delta$ corresponds to a Hurwitz group $G=\Delta/K$. The associated Hurwitz curve now appears (with $k=\mathbb{C}$) as a compact hyperbolic surface $\bold{H}/K$ regularly tiled by a finite number of copies of the fundamental triangle. $G$ is the group of orientation-preserving symmetries of this tiling, with fundamental domain consisting of one fundamental triangle plus one reflected triangle. (From this perspective, the Hurwitz bound simply says that there is no smaller polygon which gives a regular tiling of $\bold{H}$.) We note that $\Delta$ has only a small number of torsion elements (up to conjugacy). These are the non-trivial powers of $a$, $b$, and $ab$. Each acts as a rotation of order 2, 3, or 7, and has as its fixed point one vertex of (some copy of) the fundamental triangle. 
Clearly no other point of the tiling can occur as a fixed point; this is true both for the tiling of $\bold{H}$ and the induced tilings on the quotient surfaces. In other words, all points {\it other} than the tiling vertices have trivial stabilizer. It follows easily from the above presentation for $\Delta$ that a group is Hurwitz if and only if it is generated by two elements having orders 2 and 3, and whose product has order 7. This characterization has made possible much of the work in classifying Hurwitz groups. The most relevant for our investigation is the following result of Macbeath (see \cite{M}): \begin{verse} The simple group $PSL(2,q)$ is Hurwitz in exactly three cases:\\ \quad i) $q=7$;\\ \quad ii) $q$ is prime, with $q\equiv \pm 1 \pmod{7}$;\\ \quad iii) $q=p^3$, with $p$ prime and $p\equiv \pm 2,\pm 3$ (mod 7). \end{verse} In particular, $PSL(2,8)$ and $PSL(2,27)$ are Hurwitz groups. We shall require that $q$ be relatively prime to $2\cdot 3\cdot 7$, but this excludes just three possibilities, namely $q\in \{7,8,27\}$. Note that in all of the cases we consider, $q \equiv \pm 1 \pmod{7}$. The order of $PSL(2,q)$ (for odd $q$) is $q(q^2-1)/2$. Hence we obtain \[ g=1+\frac{q(q^2-1)}{168} \] as the genus of the corresponding curve(s). For completeness, we remark that there are three distinct Hurwitz curves when $q$ is prime (apart from $q=7$), and just one when $q=p^3$. However, this has no bearing on the representations that we study. In addition, there are other known families of Hurwitz groups. For example, all Ree groups are Hurwitz, as are all but finitely many of the alternating groups. See Conder \cite{C} for a summary of such results. \subsection{Ramification data} \label{sec:ram} Let $X$ be a Hurwitz curve with automorphism group $G$ and let \begin{equation} \label{eqn:psi} \psi:X\rightarrow Y=X/G \end{equation} denote the quotient map. By again viewing $X$ as a hyperbolic surface, the ramification data are easily deduced. The quotient $Y$ is formed by one fundamental triangle and its mirror image, with the natural identifications on their boundaries. Hence it is a surface of genus 0 with 3 metric singularities. Thus $\psi$ has exactly three branch points. The stabilizer subgroups of the corresponding ramification points in $X$ are cyclic, of orders $2$, $3$, and $7$. We label the three branch points $P_1$, $P_2$, and $P_3$, so that if $P \in \psi^{-1}(P_1)$, then $P$ has stabilizer subgroup of order $2$, if $P \in \psi^{-1}(P_2)$, $P$ has stabilizer subgroup of order $3$, and if $P \in \psi^{-1}(P_3)$, $P$ has stabilizer subgroup of order $7$. \section{Representation theory of $PSL(2,q)$} \label{sec:rep thy} \subsection{General theory on representations of PSL(2,q)} We first review the representation theory of $G=PSL(2,q)$ over $\mathbb{C}$, following the treatment in \cite{FH}, to fix notation. Let ${\mathbb{F}}=GF(q)$ be the field with $q$ elements. The group $PSL(2,q)$ has $3+(q-1)/2$ conjugacy classes of elements. Let $\varepsilon \in {{\mathbb{F}}}$ be a generator for the cyclic group ${{\mathbb{F}}}^{\times}$. 
Then each conjugacy class will have a representative of exactly one of the following forms: \begin{equation} \label{eqn:conjclasses} \left( \begin{array}{cc} 1 & 0\\ 0 &1 \end{array} \right),\ \left( \begin{array}{cc} x & 0\\ 0 & x^{-1} \end{array} \right),\ \left( \begin{array}{cc} 1 & 1\\ 0 & 1 \end{array} \right),\ \left( \begin{array}{cc} 1 & \varepsilon\\ 0 & 1 \end{array} \right),\ \left( \begin{array}{cc} x & \varepsilon y\\ y & x \end{array} \right). \end{equation} The irreducible representations of $PSL(2,q)$ include the trivial representation $\mathbf{1}$ and one irreducible $V$ of dimension $q$. All but two of the others fall into two types: representations $W_{\alpha}$ of dimension $q+1$ (``principal series''), and $X_{\beta}$ of dimension $q-1$ (``discrete series''). The principal series representations $W_{\alpha}$ are indexed by homomorphisms $\alpha: {{\mathbb{F}}}^{\times} \to \mathbb{C}^{\times}$ with $\alpha(-1)=1$. The discrete series representations $X_{\beta}$ are indexed by homomorphisms $\beta: T \to \mathbb{C}^{\times}$ with $\beta(-1)=1$, where $T$ is a cyclic subgroup of order $q+1$ of ${{\mathbb{F}}}(\sqrt{\varepsilon})^{\times}$. The characters of these are as follows: {\footnotesize{ \[ \begin{array}[ht]{r||c|c|cc|c} & \left( \begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) & \left( \begin{array}{cc} x& 0 \\ 0 & x^{-1} \end{array} \right)& \left( \begin{array}{cc} 1& 1 \\ 0 & 1 \end{array} \right)& \left( \begin{array}{cc} 1& \varepsilon \\ 0 & 1 \end{array} \right)& \left( \begin{array}{cc} x& \varepsilon y \\ y & x \end{array} \right)\\ \hline \hline \mathbf{1} & 1 & 1 & 1 & 1 &1 \\ \hline X_{\beta} & q-1 & 0 & -1 & -1 & -\beta(x+\sqrt{\varepsilon}y)-\beta(x-\sqrt{\varepsilon}y) \\ \hline V & q & 1 & 0 & 0 & -1 \\ \hline W_{\alpha} & q+1 & \alpha(x) + \alpha(x^{-1}) & 1 & 1 & 0 \\ \end{array} \] }} Let $\zeta$ be a primitive $q$th root of unity in $\mathbb{C}$. Let $\xi$ and $\xi'$ be defined by \begin{equation} \label{eqn:qq'} \xi = \sum_{\left(\frac{a}{q}\right)=1} \zeta^a \mbox{ and } \xi' = \sum_{\left(\frac{a}{q}\right)=-1} \zeta^a, \end{equation} where the sums are over the quadratic residues and nonresidues $\pmod q$, respectively. If $q \equiv 1$ mod 4, then the principal series representation $W_{\alpha_0}$ corresponding to \[ \begin{array}{ccc} \alpha_0:{{\mathbb{F}}}^{\times} & \to & \mathbb{C}^{\times} \\ \varepsilon & \mapsto & -1 \end{array} \] is not irreducible, but splits into two irreducibles $W'$ and $W''$, each of dimension $(q+1)/2$. Their characters satisfy: \[ \begin{array}[ht]{r||c|c|cc|c} & \left( \begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) & \left( \begin{array}{cc} x& 0 \\ 0 & x^{-1} \end{array} \right)& \left( \begin{array}{cc} 1& 1 \\ 0 & 1 \end{array} \right)& \left( \begin{array}{cc} 1& \varepsilon \\ 0 & 1 \end{array} \right)& \left( \begin{array}{cc} x& \varepsilon y \\ y & x \end{array} \right)\\ \hline \hline W' & \frac{q+1}{2} & \alpha_0(x) & 1+\xi & 1+\xi' & 0 \\ \hline W'' & \frac{q+1}{2} & \alpha_0(x) & 1+\xi' & 1+\xi & 0 \\ \end{array} \] Let $\tau$ denote a generator of $T$. Similarly, if $q \equiv 3$ mod 4, then the discrete series representation $X_{\beta_0}$ corresponding to \[ \begin{array}{ccc} \beta_0:T & \to & \mathbb{C}^{\times} \\ \tau & \mapsto & -1 \end{array} \] splits into two irreducibles $X'$ and $X''$, each of dimension $(q-1)/2$. 
Their characters satisfy: \[ \begin{array}[ht]{r||c|c|cc|c} & \left( \begin{array}{cc} 1& 0 \\ 0 & 1 \end{array} \right) & \left( \begin{array}{cc} x& 0 \\ 0 & x^{-1} \end{array} \right)& \left( \begin{array}{cc} 1& 1 \\ 0 & 1 \end{array} \right)& \left( \begin{array}{cc} 1& \varepsilon \\ 0 & 1 \end{array} \right)& \left( \begin{array}{cc} x& \varepsilon y \\ y & x \end{array} \right)\\ \hline \hline X' & \frac{q-1}{2} & 0 & \xi & \xi' & -\beta_0(x+y\sqrt{\varepsilon}) \\ \hline X'' & \frac{q-1}{2} & 0 & \xi' & \xi & -\beta_0(x+y\sqrt{\varepsilon}) \\ \end{array} \] According to Janusz \cite{Ja}, the Schur index of each irreducible representation of $G$ is $1$. There is a ``Galois action'' on the set of equivalence classes of irreducible representations of $G$ as follows. Let $\chi$ denote an irreducible character. The character values $\chi(g)$ lie in $\mathbb{Q}(\mu)$, where $\mu$ is a primitive $m^{th}$ root of unity and $m=q(q^2-1)/4$. Let ${\cal{G}}=Gal(\mathbb{Q}(\mu)/\mathbb{Q})$ denote the Galois group. For each integer $j$ relatively prime to $m$, there is an element $\sigma_j$ of ${\cal{G}}$ taking $\mu$ to $\mu^j$. This Galois group element will act on representations by taking a representation with character values $(a_1, \ldots, a_n)$ to a representation with character values $(\sigma_j(a_1), \ldots, \sigma_j(a_n))$. Representations with rational character values will be fixed under this action. Because the Schur index of each representation is $1$, representations with rational character values will be defined over $\mathbb{Q}$. The action of the Galois group $\mathcal{G}$ can easily be seen from the character table. It will fix the trivial representation and the $q$-dimensional representation $V$. Its action permutes the set of $q-1$-dimensional ``principal series'' representations $X_{\beta}$, and the set of $q+1$-dimensional ``discrete series'' representations $W_{\alpha}$. In the case $q \equiv 1\pmod 4$, the Galois group will exchange the two $(q+1)/2$-dimensional representations $W'$ and $W''$; if $q \equiv 3 \pmod 4$, the Galois group will exchange the two $(q-1)/2$-dimensional representations $X'$ and $X''$. \subsection{Induced characters} \label{sec:indchars} We will be interested in the induced characters from subgroups of orders $2$, $3$, and $7$. For each value of $q$, each of these subgroups is unique up to conjugacy; we can choose subgroups $H_2$ of order $2$, $H_3$ of order $3$, and $H_7$ of order $7$ that are generated by elements of the form \begin{equation*} \left( \begin{array}{cc} x & 0\\ 0 & x^{-1} \end{array} \right) \mbox{ or } \left( \begin{array}{cc} x & \varepsilon y\\ y & x \end{array} \right). \end{equation*} Which of these two forms each generator will take depends on $q$ mod 4, mod 3, and mod 7, respectively. Recall that we defined generators $\varepsilon$ of the cyclic group ${\mathbb{F}}^{\times}$, of order $q-1$, and $\tau$ of the cyclic group $T \subseteq {\mathbb{F}}(\sqrt{\varepsilon})^{\times}$ of order $q+1$, respectively. We define numbers $i$, $\omega$, and $\phi$ to be primitive roots of unity as follows. When $q \equiv 1 \pmod 4$, let $i$ denote an element in ${{\mathbb{F}}}^{\times}$ whose square is $-1$ (one can take $i=\varepsilon^{(q-1)/4}$). Then the subgroup $H_2$ of order $2$ in $PSL(2,q)$ is generated by \[ \left( \begin{array}{cc} i & 0\\ 0 & i^{-1} \end{array} \right ). \] If $q \equiv 3 \pmod 4$, then we take $i=x_i+\sqrt{\varepsilon}y_i$ to be an element of $T$ whose square is $-1$ (one can take $i=\tau^{(q+1)/4}$). 
Then the subgroup $H_2$ of order $2$ in $PSL(2,q)$ is generated by \[ \left( \begin{array}{cc} x_i & \varepsilon y_i\\ y_i & x_i \end{array} \right ). \] Similarly, we define $\omega$ to be a primitive $6$th root of unity. In the case where $q \equiv 1 \pmod 3$, we can take $\omega=\varepsilon^{(q-1)/6} \in {\mathbb{F}}^{\times}$. When $q \equiv -1 \pmod 3$, we take $\omega = x_{\omega} + \sqrt{\varepsilon} y_{\omega} = \tau^{(q+1)/6} \in T$. The subgroup $H_3$ of order $3$ in $PSL(2,q)$ will then be generated by \[ \left( \begin{array}{cc} \omega & 0\\ 0 & \omega^{-1} \end{array} \right ),\mbox{ if } \ q \equiv 1 \pmod 3, \mbox{ or } \left( \begin{array}{cc} x_{\omega} & \varepsilon y_{\omega}\\ y_{\omega} & x_{\omega} \end{array} \right ),\mbox{ if } \ q \equiv -1 \pmod 3. \] Lastly, we want to define $\phi$ to be a primitive $14$th root of unity. Recall that $q \equiv \pm 1 \pmod 7$. If $q \equiv 1 \pmod 7$, then we can take $\phi = \varepsilon^{(q-1)/14} \in {\mathbb{F}}^{\times}$, and if $q \equiv -1 \pmod 7$, then we can take $\phi = x_{\phi} + \sqrt{\varepsilon} y_{\phi} = \tau^{(q+1)/14} \in T$. The subgroup $H_7$ of order $7$ in $PSL(2,q)$ will then be generated by \[ \left( \begin{array}{cc} \phi & 0\\ 0 & \phi^{-1} \end{array} \right ), \ q \equiv 1 \pmod 7, \mbox{ or } \left( \begin{array}{cc} x_{\phi} & \varepsilon y_{\phi}\\ y_{\phi} & x_{\phi} \end{array} \right ), \ q \equiv -1 \pmod 7. \] With these definitions, it is easy to compute the restrictions of the irreducible representations of $PSL(2,q)$ to the subgroups above. We omit the details, but the computations for the groups of order $2$ and $3$ are given in \cite{JK2}, and the computation for the group of order $7$ is very similar. Using Frobenius reciprocity, we then obtain the corresponding induced representations. In each case, we denote a primitive character of the cyclic group $H_k$ by $\theta_k$. \subsubsection{Induced characters from $H_2$} The induced representations from the nontrivial character of $H_2$ are given below. The multiplicities depend on $q \pmod 8$. Note that most representations have the same multiplicity as $V$. When $i \in {\mathbb{F}}^{\times}$, i.e. when $q \equiv 1 \pmod 4$, the multiplicity of a principal series representation $W_{\alpha}$ depends on the sign of $\alpha(i)$. Recall that $\alpha(-1)=1$, so $\alpha(i) = \pm 1$. The multiplicity of $W_{\alpha}$ will be the same as the multiplicity of $V$ if $\alpha(i)=1$ and one larger if $\alpha(i) = -1$. Similarly, when $q \equiv 3 \pmod 4$ and $i \in T$, the multiplicity of a discrete series representation $X_{\beta}$ depends on the sign of $\beta(i)$. In this case the multiplicity of $X_{\beta}$ will be the same as the multiplicity of $V$ when $\beta(i)=1$, and one less if $\beta(i) = -1$. Lastly, the signs of $\alpha_0(i)$ or $\beta_0(i)$ depend on $q \pmod 8$ and determine the multiplicities of $W'$ and $W''$ or $X'$ and $X''$, respectively. A similar pattern will hold for the induced representations from $H_3$ and $H_7$. For $q \equiv 1 \pmod 8$, {\footnotesize{ \begin{eqnarray*} Ind_{H_2}^{G} \theta_2 & = & \frac{q-1}{2} \left [ \frac{1}{2}(W' + W'') + \sum_{\beta} X_{\beta} + V + \sum_{\alpha(i)=1} W_{\alpha} \right ] + \frac{q+3}{2} \sum_{\alpha(i)=-1} W_{\alpha}.
\end{eqnarray*} }} For $q \equiv 3 \pmod 8$, {\footnotesize{ \begin{eqnarray*} Ind_{H_2}^{G} \theta_2 & = & \frac{q+1}{2} \left [ \sum_{\beta(i)=1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-3}{2} \left [ \frac{1}{2}(X' + X'') + \sum_{\beta(i)=-1} X_{\beta} \right ]. \end{eqnarray*} }} For $q \equiv 5 \pmod 8$, {\footnotesize{ \begin{eqnarray*} Ind_{H_2}^{G} \theta_2 & = & \frac{q-1}{2} \left [ \sum_{\beta} X_{\beta} + V + \sum_{\alpha(i)=1} W_{\alpha} \right ] + \frac{q+3}{2} \left [ \frac{1}{2}(W' + W'') + \sum_{\alpha(i)=-1} W_{\alpha} \right ]. \end{eqnarray*} }} And for $q \equiv 7 \pmod 8$, {\footnotesize{ \begin{eqnarray*} Ind_{H_2}^{G} \theta_2 & = & \frac{q+1}{2} \left [ \frac{1}{2} (X' + X'') + \sum_{\beta(i)=1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-3}{2} \sum_{\beta(i)=-1} X_{\beta}. \end{eqnarray*} }} \subsubsection{Induced characters from $H_3$} The induced representations from the two nontrivial characters $\theta_3$ and $\theta_3^2$ of $H_3$ are the same. In this case the multiplicities depend on $q \pmod {12}$, which determines whether the $6$th root of unity $\omega$ is in ${\mathbb{F}}^{\times}$, or in $T \subset {\mathbb{F}}(\sqrt{\varepsilon})^{\times}$. Now the multiplicity of a principal (resp. discrete) series representation $W_{\alpha}$ (resp. $X_{\beta}$) will be the same as the multiplicity of $V$ if $\alpha(\omega)=1$ (resp. $\beta(\omega)=1$) and one larger (resp. smaller) if $\alpha(\omega) = e^{\frac{\pm 2 \pi i}{3}}$ (resp. $\beta(\omega) = e^{\frac{\pm 2 \pi i}{3}}$). The signs of $\alpha_0(\omega)$ or $\beta_0(\omega)$ depend on $q \pmod {12}$ and determine the multiplicities of $W'$ and $W''$ or $X'$ and $X''$, respectively. If $q \equiv 1 \pmod {12}$, we have {\footnotesize{ \begin{equation*} Ind_{H_3}^{G} \theta_3 = \frac{q-1}{3} \left [ \frac{1}{2}(W' + W'') + \sum_{\beta} X_{\beta} + V + \sum_{\alpha(\omega)=1} W_{\alpha} \right ] + \frac{q+2}{3} \sum_{\alpha(\omega) = e^{\frac{\pm 2 \pi i}{3}}} W_{\alpha}. \end{equation*} }} If $q \equiv 5 \pmod {12}$, we have {\footnotesize{ \begin{equation*} Ind_{H_3}^{G} \theta_3 = \frac{q+1}{3} \left [ \frac{1}{2} (W' + W'') + \sum_{\beta(\omega)=1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-2}{3} \sum_{\beta(\omega) = e^{\frac{\pm 2 \pi i}{3}}} X_{\beta}. \end{equation*} }} If $q \equiv 7 \pmod {12}$, we have {\footnotesize{ \begin{equation*} Ind_{H_3}^{G} \theta_3 = \frac{q-1}{3} \left [ \frac{1}{2}(X' + X'') + \sum_{\beta} X_{\beta} + V + \sum_{\alpha(\omega)=1} W_{\alpha} \right ] + \frac{q+2}{3} \sum_{\alpha(\omega) = e^{\frac{\pm 2 \pi i}{3}}} W_{\alpha}. \end{equation*} }} And if $q \equiv 11 \pmod {12}$, we have {\footnotesize{ \begin{equation*} Ind_{H_3}^{G} \theta_3 = \frac{q+1}{3} \left [\frac{1}{2} (X' + X'') + \sum_{\beta(\omega)=1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-2}{3} \sum_{\beta(\omega)= e^{\frac{\pm 2 \pi i}{3}}} X_{\beta}. \end{equation*} }} \subsubsection{Induced characters from $H_7$} For $H_7$, the induced representations from the six nontrivial characters $\theta_7^k$ are not all the same, but depend on $k$. These representations also depend on $q \pmod {28}$, which determines whether the $14$th root of unity $\phi$ is in ${\mathbb{F}}^{\times}$ or ${\mathbb{F}}(\sqrt{\varepsilon})^{\times}$. For an induced nontrivial character $\Ind_{H_7}^{G} \theta_7^k$, the multiplicity of a principal (resp. discrete) series representation $W_{\alpha}$ (resp.
$X_{\beta}$) will be the same as the multiplicity of $V$ if $\alpha(\phi) \neq e^{\pm \frac{2 \pi i k}{7}}$ (resp. $\beta(\phi) \neq e^{\pm \frac{2 \pi i k}{7}}$) and one larger (resp. smaller) if $\alpha(\phi) = e^{\pm \frac{2 \pi i k}{7}}$ (resp. $\beta(\phi) = e^{\pm \frac{2 \pi i k}{7}}$). The signs of $\alpha_0(\phi)$ or $\beta_0(\phi)$ depend on $q \pmod {28}$ and determine the multiplicities of $W'$ and $W''$ or $X'$ and $X''$, respectively. If $q \equiv 1 \pmod {28}$, we have {\footnotesize{ \begin{equation*} Ind_{H_7}^{G} \theta_7^k = \frac{q-1}{7} \left [\frac{1}{2} (W' + W'') + \sum_{\beta} X_{\beta} + V + \sum_{\alpha(\phi) \neq e^{\pm \frac{2 \pi i k}{7}}} W_{\alpha} \right ] + \frac{q+6}{7} \sum_{\alpha(\phi) = e^{\pm \frac{2 \pi i k }{7}}} W_{\alpha}. \end{equation*} }} If $q \equiv 13 \pmod {28}$, we have {\footnotesize{ \begin{equation*} Ind_{H_7}^{G} \theta_7^k = \frac{q+1}{7} \left [ \frac{1}{2} (W' + W'') + \sum_{\beta(\phi) \neq e^{\pm \frac{2 \pi i k}{7}}} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-6}{7} \sum_{\beta(\phi) = e^{\pm \frac{2 \pi i k}{7}}} X_{\beta}. \end{equation*} }} If $q \equiv 15 \pmod {28}$, we have {\footnotesize{ \begin{equation*} Ind_{H_7}^{G} \theta_7^k = \frac{q-1}{7} \left [ \frac{1}{2} (X' + X'') + \sum_{\beta} X_{\beta} + V + \sum_{\alpha(\phi) \neq e^{\pm \frac{2 \pi i k}{7}}} W_{\alpha} \right ] + \frac{q+6}{7} \sum_{\alpha(\phi) = e^{\pm \frac{2 \pi i k }{7}}} W_{\alpha}. \end{equation*} }} And if $q \equiv 27 \pmod {28}$, we have {\footnotesize{ \begin{equation*} Ind_{H_7}^{G} \theta_7^k = \frac{q+1}{7} \left [ \frac{1}{2} (X' + X'') + \sum_{\beta(\phi) \neq e^{\pm \frac{2 \pi i k}{7}}} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-6}{7} \sum_{\beta(\phi) = e^{\pm \frac{2 \pi i k}{7}}} X_{\beta}. \end{equation*} }} \section{The Riemann-Roch space as a $G$-module} Now we have all of the pieces we need to compute the $G$-module structure of the Riemann-Roch space $L(D)$ of a general $G$-invariant divisor $D$. We will first compute the ramification module, which does not depend on $D$. We will then compute the equivariant degree of $D$, and use the equivariant Riemann-Roch formula (\ref{eqn:Borne}) to compute $L(D)$. \subsection{Ramification module} \label{sec:rammod} The ramification module introduced by Kani \cite{K} and Nakajima \cite{N} is defined by \[ \Gamma_G=\sum_{P\in X_{\rm{ram}}} \Ind_{G_P}^G\left( \sum_{\ell =1}^{e_P-1} \ell\theta_P^\ell \right), \] where the first sum is over the ramification points of $\psi:X\rightarrow Y=X/G$, and $\theta_P$ is the ramification character at a point $P$. Both Kani and Nakajima showed that there is a $G$-module $\tilde{\Gamma}_G$ such that $\Gamma_G \simeq \bigoplus_{|G|} \tilde{\Gamma}_G$. Because $\Gamma_G$ does not figure in our calculations, we abuse notation and refer to $\tilde{\Gamma}_G$ as the {\it ramification module}. Recall from section \ref{sec:ram} that $\psi:X\rightarrow Y=X/G$ has three branch points, $P_1$, $P_2$, and $P_3$. If $P \in \psi^{-1}(P_1)$, $G_P$ has order $2$, so there are $\frac{|G|}{2}$ ramification points where $G_P$ is conjugate to $H_2$. If $P \in \psi^{-1}(P_2)$, $G_P$ has order $3$, so there are $\frac{|G|}{3}$ ramification points where $G_P$ is conjugate to $H_3$, and if $P \in \psi^{-1}(P_3)$, $G_P$ has order $7$, so there are $\frac{|G|}{7}$ ramification points where $G_P$ is conjugate to $H_7$. 
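For a given admissible $q$ these counts are straightforward to tabulate. The following minimal Python sketch (the function name and the sample value $q=13$ are ours, included purely for illustration) returns $|G|$, the genus of $X$, and the number of ramification points lying over each of the three branch points.

\begin{verbatim}
def hurwitz_data(q):
    # Assumes q is an odd prime power with q = +/-1 (mod 7), so that
    # G = PSL(2, q) acts on the Hurwitz curve X with |G| = 84(g - 1).
    G = q * (q**2 - 1) // 2            # |PSL(2, q)| for odd q
    genus = 1 + q * (q**2 - 1) // 168
    # Fibers over P1, P2, P3: stabilizers of orders 2, 3, 7, respectively.
    ram_points = {2: G // 2, 3: G // 3, 7: G // 7}
    return G, genus, ram_points

print(hurwitz_data(13))   # (1092, 14, {2: 546, 3: 364, 7: 156})
\end{verbatim}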
Thus \begin{equation} \tilde{\Gamma}_G= \frac{1}{|G|} \left (\frac{|G|}{2}\Ind_{H_2}^{G} \theta_2 +\frac{|G|}{3} \sum_{\ell =1}^{2} \ell \Ind_{H_3}^{G}\theta_3^{\ell} +\frac{|G|}{7} \sum_{\ell=1}^{6} \ell \Ind_{H_7}^{G} \theta_7^{\ell} \right ). \end{equation} To compute this, we break it into three pieces: \begin{eqnarray*} \tilde{\Gamma}_G & = & \Gamma_{H_2} + \Gamma_{H_3} + \Gamma_{H_7}, \\ \Gamma_{H_2} & = & \frac{1}{2} \Ind_{H_2}^{G} \theta_2, \\ \Gamma_{H_3} & = & \frac{1}{3} ( \Ind_{H_3}^{G} \theta_3 + 2 \Ind_{H_3}^{G} \theta_3^2), \\ \Gamma_{H_7} & = & \frac{1}{7} ( \Ind_{H_7}^{G} \theta_7 + 2 \Ind_{H_7}^{G} \theta_7^2 + 3 \Ind_{H_7}^{G} \theta_7^3 \\ & & + 4 \Ind_{H_7}^{G} \theta_7^4 + 5 \Ind_{H_7}^{G} \theta_7^5 + 6 \Ind_{H_7}^{G} \theta_7^6 ). \end{eqnarray*} Each piece is then computed from the induced characters in section \ref{sec:indchars}. $\Gamma_{H_2}$ depends on $q \pmod 8$. For $q \equiv 1 \pmod 8$, {\footnotesize{ \begin{eqnarray*} \Gamma_{H_2} & = & \frac{q-1}{4} \left [ \frac{1}{2}(W' + W'') + \sum_{\beta} X_{\beta} + V + \sum_{\alpha(i)=1} W_{\alpha} \right ] + \frac{q+3}{4} \sum_{\alpha(i)=-1} W_{\alpha}. \end{eqnarray*} }} For $q \equiv 3 \pmod 8$, {\footnotesize{ \begin{eqnarray*} \Gamma_{H_2} & = & \frac{q+1}{4} \left [ \sum_{\beta(i)=1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-3}{4} \left [ \frac{1}{2}(X' + X'') + \sum_{\beta(i)=-1} X_{\beta} \right ]. \end{eqnarray*} }} For $q \equiv 5 \pmod 8$, {\footnotesize{ \begin{eqnarray*} \Gamma_{H_2} & = & \frac{q-1}{4} \left [ \sum_{\beta} X_{\beta} + V + \sum_{\alpha(i)=1} W_{\alpha} \right ] + \frac{q+3}{4} \left [ \frac{1}{2}(W' + W'') + \sum_{\alpha(i)=-1} W_{\alpha} \right ]. \end{eqnarray*} }} And for $q \equiv 7 \pmod 8$, {\footnotesize{ \begin{eqnarray*} \Gamma_{H_2} & = & \frac{q+1}{4} \left [ \frac{1}{2} (X' + X'') + \sum_{\beta(i)=1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right ] + \frac{q-3}{4} \sum_{\beta(i)=-1} X_{\beta}. \end{eqnarray*} }} The contribution $\Gamma_{H_3}$ of $H_3$ to the ramification module is \begin{equation*} \Gamma_{H_3} = \frac{1}{3} \left ( \Ind_{H_3}^{G} \theta_3 + 2 \Ind_{H_3}^{G} \theta_3^2 \right) = \Ind_{H_3}^{G} \theta_3, \end{equation*} since $\Ind_{H_3}^{G} \theta_3$ and $\Ind_{H_3}^{G} \theta_3^2$ are the same. This character was computed in section \ref{sec:indchars}. For $H_7$, the induced representations from the six nontrivial characters $\theta_7^k$ are not all the same. However, the representations $\Ind_{H_7}^{G} \theta_7^k$ and $\Ind_{H_7}^{G} \theta_7^{-k}$ are equal. Thus $\Gamma_{H_7}$ is \begin{eqnarray*} \Gamma_{H_7} & = & \frac{1}{7} \left (\Ind_{H_7}^{G} \theta_7 + 2 \Ind_{H_7}^{G} \theta_7^2 + \ldots + 6 \Ind_{H_7}^{G} \theta_7^6 \right ) \\ & = & \frac{1}{7} \left ( 7 \Ind_{H_7}^{G} \theta_7 + 7 \Ind_{H_7}^{G} \theta_7^2 + 7 \Ind_{H_7}^{G} \theta_7^4 \right ) \\ & = & \Ind_{H_7}^{G} \theta_7 + \Ind_{H_7}^{G} \theta_7^2 + \Ind_{H_7}^{G} \theta_7^4. \end{eqnarray*} Recall from section \ref{sec:indchars} that the multiplicities of the irreducible representations $W_\alpha$ and $X_\beta$ in the induced representation $\Ind_{H_7}^G \theta_7^k$ depend on the value of $\alpha(\phi)$ or $\beta(\phi)$, and that this value must be $e^{\frac{2 \pi i k}{7}}$ for some $k = 0, \ldots, 6$. 
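As a quick illustration of this bookkeeping, the tally can also be scripted. In the minimal Python sketch below (ours, for illustration only), a representation $W_\alpha$ is labelled by an integer $j$ with $\alpha(\phi)=e^{2\pi i j/7}$, and the contributions of $\Ind_{H_7}^{G} \theta_7$, $\Ind_{H_7}^{G} \theta_7^2$ and $\Ind_{H_7}^{G} \theta_7^4$ are summed for a sample $q \equiv 1 \pmod {28}$; the two printed values are the multiplicities $(3q-3)/7$ and $(3q+4)/7$ obtained by hand just below.

\begin{verbatim}
from fractions import Fraction

def mult_W_in_ind(q, k, j):
    # Multiplicity of W_alpha in Ind_{H_7}^G theta_7^k when q = 1 (mod 28),
    # with alpha(phi) = exp(2*pi*i*j/7); j = 0 encodes alpha(phi) = 1.
    if j % 7 in (k % 7, (-k) % 7):
        return Fraction(q + 6, 7)
    return Fraction(q - 1, 7)

def mult_W_in_Gamma_H7(q, j):
    # Gamma_{H_7} = Ind theta_7 + Ind theta_7^2 + Ind theta_7^4.
    return sum(mult_W_in_ind(q, k, j) for k in (1, 2, 4))

q = 29  # smallest prime with q = 1 (mod 28)
print(mult_W_in_Gamma_H7(q, 0))  # 12 = (3q-3)/7, case alpha(phi) = 1
print(mult_W_in_Gamma_H7(q, 3))  # 13 = (3q+4)/7, case alpha(phi) != 1
\end{verbatim}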
In the sum $ \Gamma_{H_7} = \Ind_{H_7}^{G} \theta_7 + \Ind_{H_7}^{G} \theta_7^2 + \Ind_{H_7}^{G} \theta_7^4$ we will have, for example for the multiplicities of the $W_\alpha$ when $q \equiv 1 \pmod {28}$, {\footnotesize{ \begin{eqnarray*} \Gamma_{H_7} & = & \Ind_{H_7}^{G} \theta_7 + \Ind_{H_7}^{G} \theta_7^2 + \Ind_{H_7}^{G} \theta_7^4 \\ & = & \frac{q-1}{7} \sum_{\alpha(\phi) \neq e^{\pm \frac{2 \pi i }{7}}} W_{\alpha}+ \frac{q+6}{7} \sum_{\alpha(\phi) = e^{\pm \frac{2 \pi i }{7}}} W_{\alpha} \\ & + & \frac{q-1}{7} \sum_{\alpha(\phi) \neq e^{\pm \frac{4 \pi i }{7}}} W_{\alpha} + \frac{q+6}{7} \sum_{\alpha(\phi) = e^{\pm \frac{4 \pi i }{7}}} W_{\alpha} \\ &+& \frac{q-1}{7} \sum_{\alpha(\phi) \neq e^{\pm \frac{8 \pi i }{7}}} W_{\alpha} + \frac{q+6}{7} \sum_{\alpha(\phi) = e^{\pm \frac{8 \pi i }{7}}} W_{\alpha} \\ & + & \mbox{ other characters}. \end{eqnarray*} }} This adds up to \begin{equation*} \Gamma_{H_7} = \frac{3q+4}{7} \sum_{\alpha(\phi) \neq 1} W_{\alpha}+ \frac{3q-3}{7} \sum_{\alpha(\phi) = 1} W_{\alpha} + \mbox{ other characters}. \end{equation*} The multiplicities of the other irreducible characters in $\Ind_{H_7}^G \theta_7^k$ do not depend on $k$. Adding these in, the total for the case $q \equiv 1 \pmod {28}$ is {\footnotesize{ \begin{eqnarray*} \Gamma_{H_7} = \frac{3q-3}{7} \left [ \sum_{\beta} X_{\beta} + V + \sum_{\alpha(\phi) =1} W_{\alpha} + \frac{1}{2} (W' + W'') \right ] + \frac{3q+4}{7} \sum_{\alpha(\phi) \neq 1} W_{\alpha}. \end{eqnarray*} }} Similar calculations yield the following. If $q \equiv 13 \pmod {28}$, {\footnotesize{ \begin{eqnarray*} \Gamma_{H_7} = \frac{3q+3}{7} \left [ \sum_{\beta(\phi) =1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} + \frac{1}{2} (W' + W'') \right ] + \frac{3q-4}{7} \sum_{\beta(\phi) \neq 1} X_{\beta}. \end{eqnarray*} }} If $q \equiv 15 \pmod {28}$, we have {\footnotesize{ \begin{eqnarray*} \Gamma_{H_7} = \frac{3q-3}{7} \left [ \sum_{\beta} X_{\beta} + V + \sum_{\alpha(\phi) =1} W_{\alpha} + \frac{1}{2} (X' + X'') \right ] + \frac{3q+4}{7} \sum_{\alpha(\phi) \neq 1} W_{\alpha}. \end{eqnarray*} }} And if $q \equiv 27 \pmod {28}$, we have {\footnotesize{ \begin{eqnarray*} \Gamma_{H_7} = \frac{3q+3}{7} \left [ \sum_{\beta(\phi) =1} X_{\beta} + V + \sum_{\alpha} W_{\alpha} + \frac{1}{2} (X' + X'') \right ] + \frac{3q-4}{7} \sum_{\beta(\phi) \neq 1} X_{\beta}. \end{eqnarray*} }} To compute the ramification module, we sum the components $\Gamma_{H_2}$, $\Gamma_{H_3}$, and $\Gamma_{H_7}$ listed above. The following numbers will be useful. \begin{definition} For each possible equivalence class of $q \pmod {84}$, we define a \textbf{base multiplicity} $m$, as follows: \begin{itemize} \item If $q \equiv 1, 13, 29, \mbox{ or } 43 \pmod {84}$, then $m = q + \lfloor \frac{q}{84} \rfloor$. \item If $q \equiv 41, 55, 71, \mbox{ or } 83 \pmod {84}$, then $m = q + \lceil \frac{q}{84} \rceil$. \end{itemize} \end{definition} \begin{definition} Let $\alpha: {\mathbb{F}}^{\times} \to \mathbb{C}^{\times}$ be a character of ${\mathbb{F}}^{\times}$. Then we define a number \begin{equation*} N_{\alpha} = \# \{ x \in \{i, \omega, \phi\} \ | \ x \in {\mathbb{F}}^{\times} \mbox{ and } \alpha(x) \neq 1\}. \end{equation*} \end{definition} \begin{definition} Recall that $T$ is the cyclic subgroup of ${\mathbb{F}}(\sqrt{\varepsilon})^{\times}$ of order $q+1$. Let $\beta: T \to \mathbb{C}^{\times}$ be a character of $T$. Then we define a number \begin{equation*} N_{\beta} = \# \{ x \in \{i, \omega, \phi\}\ |\ x \in T \mbox { and } \beta(x) \neq 1\}. 
\end{equation*} \end{definition} \begin{theorem} \label{thrm:main} We have the following decomposition of the ramification module: \begin{itemize} \item If $q\equiv 1\pmod {8}$, then \begin{equation*} \tilde{\Gamma}_G = \frac{m}{2}(W'+W'')+ m V + \sum_{\beta} (m - N_{\beta}) X_{\beta} + \sum_{\alpha} (m + N_{\alpha}) W_{\alpha} \end{equation*} \item If $q\equiv 3\pmod {8}$, then \begin{equation*} \tilde{\Gamma}_G = \frac{m-1}{2}(X'+X'') + m V + \sum_{\beta} (m-N_{\beta}) X_{\beta} + \sum_{\alpha} (m + N_{\alpha}) W_{\alpha} \end{equation*} \item If $q\equiv 5\pmod {8}$, then \begin{equation*} \tilde{\Gamma}_G = \frac{m+1}{2}(W'+W'')+ m V + \sum_{\beta} (m - N_{\beta}) X_{\beta} + \sum_{\alpha} (m + N_{\alpha}) W_{\alpha} \end{equation*} \item If $q\equiv 7 \pmod {8}$, then \begin{equation*} \tilde{\Gamma}_G = \frac{m}{2}(X'+X'') + m V + \sum_{\beta} (m-N_{\beta}) X_{\beta} + \sum_{\alpha} (m + N_{\alpha}) W_{\alpha} \end{equation*} \end{itemize} \end{theorem} \subsection{Equivariant degree} \label{sec:equivdeg} Now we will define and compute the equivariant degree of a $G$-invariant divisor. (See for example \cite{B} for more details). This, together with the equivariant Riemann-Roch formula (\ref{eqn:Borne}), will allow us to compute the $G$-module structure of the Riemann-Roch space $L(D)$. Fix a point $P\in X$ and let $D$ be a divisor on $X$ of the form \[ D=\frac{1}{e_P}\sum_{g\in G}g(P)=\sum_{g\in G/G_P}g(P), \] where $G_P$ denotes the stabilizer in $G$ of $P$ and $e_P=|G_P|$ denotes the ramification index at $P$. Such a divisor is called a {\it reduced orbit}; any $G$-invariant divisor on $X$ can be written as a sum of multiples of reduced orbits. The {\it equivariant degree} of a multiple $rD$ of a reduced orbit is the virtual representation \begin{equation*} \deg_{eq}(rD) = \left\{ \begin{array}{c c} \Ind_{G_P}^G\ \sum_{\ell =1}^r \theta_P^{-\ell}, & r > 0 \\ 0, & r=0 \\ -\Ind_{G_P}^G\ \sum_{\ell =0}^{|r|-1} \theta_P^{-\ell}, & r < 0 \end{array} \right. \end{equation*} where $\theta_P$ is the ramification character of $X$ at $P$ (a nontrivial character of $G_P$). In general, the equivariant degree is additive on disjointly supported divisors. Note that if $r$ is a multiple of $e_P$, then $rD$ is the pull-back of a divisor on $X/G$ via $\psi$ in (\ref{eqn:psi}), and the equivariant degree is a multiple of the regular representation $\mathbb{C}[G]$ of $G$. More generally, if $D$ is a reduced orbit and $r = e_P r' + r ''$, then \[ \deg_{eq}(rD) = r' \cdot \mathbb{C}[G] + \deg_{eq}(r''D). \] (Note that this is true even when $r'$ is negative.) On the Hurwitz curve $X$, the results of section \ref{sec:ram} tell us that there are only four types of reduced orbits to consider: the stabilizer $G_P$ of a point $P$ in the support of $D$ may have order $1$, $2$, $3$, or $7$, and therefore be either trivial or conjugate to $H_2$, $H_3$, or $H_7$. Let $D_1$, $D_2$, $D_3$, and $D_7$ denote reduced orbits of each type. There is only one choice of reduced orbit for $D_2$, $D_3$, and $D_7$; for $D_1$ we see from the definition that the equivariant degree does not depend on our choice of orbit. Given a point in $D_1$, the stabilizer is trivial, so the divisor is a pullback and the equivariant degree is \[ \deg_{eq}(D_1) = \mathbb{C}[G]. \] A general $G$-invariant divisor may be written as $r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7$.
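Reducing such a divisor to a multiple of $\mathbb{C}[G]$ plus a small residual part is purely mechanical; a minimal Python sketch (the function name and the sample coefficients are ours) is the following, where floored division handles negative coefficients, as noted above.

\begin{verbatim}
def reduce_divisor(r1, r2, r3, r7):
    # Write r_i = e_i*r_i' + r_i'' with 0 <= r_i'' < e_i for (e_2, e_3, e_7) = (2, 3, 7).
    # The primed parts contribute r1 + r2' + r3' + r7' copies of C[G].
    r2p, r2pp = divmod(r2, 2)
    r3p, r3pp = divmod(r3, 3)
    r7p, r7pp = divmod(r7, 7)
    return r1 + r2p + r3p + r7p, (r2pp, r3pp, r7pp)

print(reduce_divisor(0, 3, -1, 10))   # (1, (1, 2, 3))
\end{verbatim}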
If we write $r_2 = 2 r_2' + r_2 ''$, $r_3 = 3 r_3' + r_3 ''$, and $r_7 = 7 r_7' + r_7 ''$, then we have \begin{equation*} \begin{array}{l} \deg_{eq}(r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7) \\ \quad \quad = \deg_{eq}((r_1 + r_2' + r_3' + r_7') D_1 + r_2'' D_2 + r_3'' D_3 + r_7'' D_7)\\ \quad \quad = (r_1 + r_2' + r_3' + r_7')\mathbb{C}[G] + \deg_{eq}(r_2'' D_2 + r_3'' D_3 + r_7'' D_7). \end{array} \end{equation*} Therefore, to compute the equivariant degree of a general divisor, all that remains is to compute $\deg_{eq}(r_i D_i)$ for $i \in \{2, 3, 7\}$, where we may assume that $1 \leq r_i < i$. \begin{description} \item[Case 1]: $r_2 D_2$. Given our assumptions, the only possibility is that $r_2=1$. Given a point $P$ in the support of $D_2$, the stabilizer $G_P$ is conjugate to $H_2$. In this case, the equivariant degree of $D_2$ is \[ \deg_{eq}(D_2) = \Ind_{H_2}^G \theta_2. \] \item[Case 2]: $r_3 D_3$. Here we may have either $r_3 = 1$ or $r_3 = 2$. The stabilizer of a point in the support of $D_3$ is conjugate to $H_3$. Recall that $\Ind_{H_3}^G \theta_3^2= \Ind_{H_3}^G \theta_3$, so we have \begin{eqnarray*} \deg_{eq}(D_3) &=& \Ind_{H_3}^G \theta_3 \\ \deg_{eq}(2D_3)&=& 2 \Ind_{H_3}^G \theta_3. \end{eqnarray*} \item[Case 3]: $r_7 D_7$. In this case, we have $1 \leq r_7 \leq 6$. The stabilizer of a point in the support of $D_7$ is conjugate to $H_7$. Recall that for $k = 1, \ldots, 6$, $\Ind_{H_7}^G \theta_7^k = \Ind_{H_7}^G \theta_7^{-k}$. Therefore the equivariant degree is as follows: \begin{itemize} \item $\deg_{eq}(D_7) = \Ind_{H_7}^G \theta_7$. \item $\deg_{eq}(2 D_7) = \Ind_{H_7}^G \theta_7 + \Ind_{H_7}^G \theta_7^2$. \item $\deg_{eq}(3 D_7) = \Ind_{H_7}^G \theta_7 + \Ind_{H_7}^G \theta_7^2 + \Ind_{H_7}^G \theta_7^3$, which is the same as the $H_7$ component of the ramification module, $\Gamma_{H_7}$. \item $\deg_{eq}(4 D_7) = \Gamma_{H_7} + \Ind_{H_7}^G \theta_7^3$. \item $\deg_{eq}(5 D_7) = \Gamma_{H_7} + \Ind_{H_7}^G \theta_7^3 + \Ind_{H_7}^G \theta_7^2$. \item $\deg_{eq}(6 D_7) = 2 \Gamma_{H_7}$. \end{itemize} \end{description} Now we add these up. As in the case of the ramification module, the equivariant degree is most conveniently written in terms of a ``base multiplicity'' and modifiers. We define the base multiplicity as follows. \begin{itemize} \item If $q \equiv 1 \pmod 4$, then let $b_2 = r_2 \left ( \frac{q-1}{2} \right )$. Otherwise, if $q \equiv 3 \pmod 4$, then let $b_2 = r_2 \left ( \frac{q+1}{2} \right )$. \item If $q \equiv 1 \pmod 3$, then let $b_3 = r_3 \left ( \frac{q-1}{3} \right )$, and if $q \equiv 2 \pmod 3$, then let $b_3 = r_3 \left ( \frac{q+1}{3} \right )$. \item Similarly, if $q \equiv 1 \pmod 7$, then let $b_7 = r_7 \left ( \frac{q-1}{7} \right )$, and if $q \equiv 6 \pmod 7$, then let $b_7 = r_7 \left ( \frac{q+1}{7} \right )$. \end{itemize} The base multiplicity is then defined to be \begin{eqnarray*} b &=& b_2 + b_3 + b_7 \\ &=& r_2 \left ( \frac{q \pm 1}{2} \right ) + r_3 \left ( \frac{q \pm 1}{3} \right ) + r_7 \left ( \frac{q \pm 1}{7} \right ). \end{eqnarray*} Then the equivariant degree $\deg_{eq}(D)$ of the divisor $D=r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7$, with $0 \leq r_2 \leq 1$, $0 \leq r_3 \leq 2$, and $0 \leq r_7 \leq 6$, is \begin{equation} \label{eqn:degeq} \deg_{eq}(D) = b \left[ \sum_{\beta} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right] + \mbox{ modifiers}, \end{equation} where the modifiers are listed in the table below. For each $q$, three of the rows below will be added.
\begin{center} \begin{tabular}[ht]{|c|c|} \hline $q$ & Modifiers to equivariant degree \\ \hline \hline $q \equiv 1 \pmod 8$ & $ \displaystyle + \ r_2 \sum_{\alpha(i)=-1} W_{\alpha} + \frac{b}{2} (W'+W'')$ \\ \hline $q \equiv 3 \pmod 8$ & $ \displaystyle - \ r_2 \sum_{\beta(i)=-1} X_{\beta} + \frac{b-r_2}{2} (X'+X'')$ \\ \hline $q \equiv 5 \pmod 8$ & $ \displaystyle + \ r_2 \sum_{\alpha(i)=-1} W_{\alpha} + \frac{b+r_2}{2} (W'+W'')$ \\ \hline $q \equiv 7 \pmod 8$ & $ \displaystyle - \ r_2 \sum_{\beta(i)=-1} X_{\beta} + \frac{b}{2} (X'+X'')$ \\ \hline \hline $q \equiv 1 \pmod 3$ & $ \displaystyle + \ r_3 \sum_{\alpha(\omega) \neq 1} W_{\alpha}$ \\ \hline $q \equiv 2 \pmod 3$ & $ \displaystyle - r_3 \sum_{\beta(\omega) \neq 1} X_{\beta} $ \\ \hline \hline $q \equiv 1 \pmod 7$ & $ \displaystyle + \ \sum_{k=1}^{r_7} \sum_{\alpha(\phi)= e^{\pm \frac{2 \pi i k}{7}} } W_{\alpha}$ \\ \hline $q \equiv 6 \pmod 7$ & $ \displaystyle - \sum_{k=1}^{r_7} \sum_{\beta(\phi) = e^{\pm \frac{2 \pi i k}{7}}} X_{\beta} $ \\ \hline \hline \end{tabular} \end{center} \subsection{The Riemann-Roch space} Now we would like to compute the $G$-module structure of the Riemann-Roch space $L(D)$ for a $G$-invariant divisor $D$. First, let us consider which $G$-invariant divisors are non-special. To be non-special, it is sufficient to have $\deg D > 2g-2$, where \[ g=1+\frac{(q)(q^2-1)}{168} \] is the genus of $X$, so $2g-2 = \frac{1}{84}q(q^2-1)=\frac{1}{168}|G|$. The reduced orbits $D_1$, $D_2$, $D_3$ and $D_7$ have degrees $|G|$, $|G|/2$, $|G|/3$, and $|G|/7$, respectively. Therefore if a $G$-invariant divisor $r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7$ has positive degree, the smallest its degree could be is $|G|/42$, which is strictly larger than $2g-2$. Therefore any $G$-invariant divisor with positive degree is non-special. Thus for any $G$-invariant divisor $D$ with positive degree, we may use the equivariant Riemann-Roch formula (\ref{eqn:Borne}) to compute the $G$-module structure of the Riemann-Roch space $L(D)$: \begin{equation*} [L(D)]=(1-g_{X/G})[\mathbb{C}[G]]+[\deg_{eq}(D)]-[\tilde{\Gamma}_G]. \end{equation*} Since $X/G \cong {\mathbb{P}}^1$, its genus is zero. As in section \ref{sec:equivdeg}, we may assume that $D=r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7$, with $0 \leq r_2 \leq 1$, $0 \leq r_3 \leq 2$, and $0 \leq r_7 \leq 6$. Combining the results and notation of sections \ref{sec:rammod} and \ref{sec:equivdeg}, we obtain the following. \begin{equation*} L(D) = (1+r_1) \mathbb{C}[G] + (b-m) \left[ \sum_{\beta} X_{\beta} + V + \sum_{\alpha} W_{\alpha} \right] + \mbox{ modifiers}, \end{equation*} where the modifiers depend on $q \pmod {168}$ and are listed in the following table. Again, for each value of $q$, three of the rows below will be added. 
\begin{center} \begin{tabular}[ht]{|c|c|} \hline $q$ & Modifiers to Riemann-Roch space \\ \hline \hline $q \equiv 1 \pmod 8$ & $ \displaystyle + \ (r_2-1) \sum_{\alpha(i)=-1} W_{\alpha} + \frac{b-m}{2} (W'+W'')$ \\ \hline $q \equiv 3 \pmod 8$ & $ \displaystyle + \ (1-r_2) \sum_{\beta(i)=-1} X_{\beta} + \frac{b-m + 1 -r_2}{2} (X'+X'')$ \\ \hline $q \equiv 5 \pmod 8$ & $ \displaystyle + \ (r_2-1) \sum_{\alpha(i)=-1} W_{\alpha} + \frac{b-m+r_2-1}{2} (W'+W'')$ \\ \hline $q \equiv 7 \pmod 8$ & $ \displaystyle + \ (1-r_2) \sum_{\beta(i)=-1} X_{\beta} + \frac{b-m}{2} (X'+X'')$ \\ \hline \hline $q \equiv 1 \pmod 3$ & $ \displaystyle + \ (r_3-1) \sum_{\alpha(\omega) \neq 1} W_{\alpha}$ \\ \hline $q \equiv 2 \pmod 3$ & $ \displaystyle + \ (1- r_3) \sum_{\beta(\omega) \neq 1} X_{\beta} $ \\ \hline \hline $q \equiv 1 \pmod 7$ & $ \displaystyle + \ \sum_{k=1}^{r_7} \sum_{\alpha(\phi)= e^{\pm \frac{2 \pi i k}{7}} } W_{\alpha} - \sum_{\alpha(\phi) \neq 1} W_{\alpha} $ \\ \hline $q \equiv 6 \pmod 7$ & $ \displaystyle + \ \sum_{\beta(\phi) \neq 1} X_{\beta} - \sum_{k=1}^{r_7} \sum_{\beta(\phi) = e^{\pm \frac{2 \pi i k}{7}}} X_{\beta} $ \\ \hline \hline \end{tabular} \end{center} \subsection{Action on holomorphic differentials} \label{sec:cohomology} As a corollary, it is now an easy exercise to compute explicitly the decomposition $$ H^1(X,\mathbb{C}) = H^0(X,\Omega^1)\oplus \overline{H^0(X,\Omega^1)} =L(K_X)\oplus \overline{L(K_X)}, $$ into irreducible $G$-modules, where $K_X$ is a canonical divisor of $X$. The action of $G$ on the complex conjugate vector space $\overline{L(K_X)}$ of $L(K_X)$ will be by the complex conjugate (contragredient) representation. The Riemann-Hurwitz theorem tells us that \begin{eqnarray*} K_X &=& \psi^*(K_{{\mathbb{P}}^1}) + R\\ &=& -2 D_1 + D_2 + 2 D_3 + 6 D_7 \end{eqnarray*} where $R$ is the ramification divisor. Thus the equivariant degree of $K_X$ is $\deg_{eq}(K_X) = -2 \cdot \mathbb{C}[G] + \deg_{eq}(R)$. Note from the preliminary equivariant degree calculations that \begin{eqnarray*} \deg_{eq}(R) &=& \deg_{eq}(D_2) + \deg_{eq}(2 D_3) + \deg_{eq}(6 D_7) \\ &=& \Ind_{H_2}^{G} \theta_2 + 2 \Ind_{H_3}^{G} \theta_3 + \sum_{k=1}^6 \Ind_{H_7}^{G} \theta_7^k \\ &=& 2 \Gamma_{H_2} + 2 \Gamma_{H_3} + 2 \Gamma_{H_7} \\ &=& 2 \tilde{\Gamma}_G. \end{eqnarray*} Therefore, using the equivariant Riemann-Roch formula (\ref{eqn:Borne}), \begin{equation} L(K_X) = \tilde{\Gamma}_G - \mathbb{C}[G]. \end{equation} We will see in the next section that this is invariant under complex conjugation, so that as $G$-modules, $H^1(X,\mathbb{C}) \cong 2L(K_X)$. Using the results of section \ref{sec:rammod}, we obtain the following. \begin{theorem} The $G$-module structure of $L(K_X)=H^0(X,\Omega^1)$ is as follows: \begin{itemize} \item If $q \equiv 1$, $97$, or $113 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \frac{\lfloor \frac{q}{84} \rfloor -1}{2} (W' + W'') + \sum_{\beta} \left( \lfloor \frac{q}{84} \rfloor + 1 - N_{\beta} \right) X_{\beta} + \lfloor \frac{q}{84} \rfloor V + \sum_{\alpha} \left( \lfloor \frac{q}{84} \rfloor - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \item If $q \equiv 43 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \lfloor \frac{q}{84} \rfloor \left[ \frac{1}{2}(X' + X'') + V \right ] + \sum_{\beta} \left( \lfloor \frac{q}{84} \rfloor + 1 - N_{\beta} \right) X_{\beta} + \sum_{\alpha} \left( \lfloor \frac{q}{84} \rfloor - 1 + N_{\alpha} \right) W_{\alpha}.
\end{equation*} }} \item If $q \equiv 13$, $29$, or $85 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \lfloor \frac{q}{84} \rfloor \left[ \frac{1}{2}(W' + W'') + V \right ] + \sum_{\beta} \left( \lfloor \frac{q}{84} \rfloor + 1 - N_{\beta} \right) X_{\beta} + \sum_{\alpha} \left( \lfloor \frac{q}{84} \rfloor - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \item If $q \equiv 127 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \frac{\lfloor \frac{q}{84} \rfloor +1}{2} (X' + X'') + \sum_{\beta} \left( \lfloor \frac{q}{84} \rfloor + 1 - N_{\beta} \right) X_{\beta} + \lfloor \frac{q}{84} \rfloor V + \sum_{\alpha} \left( \lfloor \frac{q}{84} \rfloor - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \item If $q \equiv 41 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \frac{\lceil \frac{q}{84} \rceil -1}{2} (W' + W'') + \sum_{\beta} \left( \lceil \frac{q}{84} \rceil + 1 - N_{\beta} \right) X_{\beta} + \lceil \frac{q}{84} \rceil V + \sum_{\alpha} \left( \lceil \frac{q}{84} \rceil - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \item If $q \equiv 83$, $139$, or $155 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \lceil \frac{q}{84} \rceil \left[ \frac{1}{2}(X' + X'') + V \right ] + \sum_{\beta} \left( \lceil \frac{q}{84} \rceil + 1 - N_{\beta} \right) X_{\beta} + \sum_{\alpha} \left( \lceil \frac{q}{84} \rceil - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \item If $q \equiv 125 \pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \lceil \frac{q}{84} \rceil \left[ \frac{1}{2}(W' + W'') + V \right ] + \sum_{\beta} \left( \lceil \frac{q}{84} \rceil + 1 - N_{\beta} \right) X_{\beta} + \sum_{\alpha} \left( \lceil \frac{q}{84} \rceil - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \item If $q \equiv 55$, $71$, or $167$ $\pmod {168}$, then {\footnotesize{ \begin{equation*} L(K_X) = \frac{\lceil \frac{q}{84} \rceil +1}{2} (X' + X'') + \sum_{\beta} \left( \lceil \frac{q}{84} \rceil + 1 - N_{\beta} \right) X_{\beta} + \lceil \frac{q}{84} \rceil V + \sum_{\alpha} \left( \lceil \frac{q}{84} \rceil - 1 + N_{\alpha} \right) W_{\alpha}. \end{equation*} }} \end{itemize} \end{theorem} \section{Galois action} As discussed in section \ref{sec:rep thy}, there is a Galois action on the set of equivalence classes of irreducible representations of $PSL(2,q)$. One question of obvious interest is whether the modules we have computed are invariant under this action. \begin{theorem} The ramification module is Galois-invariant. \end{theorem} {\bf Proof}:\ Recall from section \ref{sec:rep thy} that the Galois group $\mathcal{G}$ permutes $m$th roots of unity, where $m = q(q^2-1)/4$. It acts on representations of $PSL(2,q)$ by permuting character values. Thus it fixes the trivial representation and the $q$-dimensional representation $V$, whose character values are rational. It will act as a permutation on the representations $W_{\alpha}$ and on the representations $X_{\beta}$. Lastly, it will act as an involution on either the representations $W'$ and $W''$ or $X'$ and $X''$. Because the multiplicities of $W'$ and $W''$ or $X'$ and $X''$ are the same in the ramification module, the Galois action will be invariant on this component. The multiplicity of a representation $W_{\alpha}$ or $X_{\beta}$ in the ramification module depends on the number $N_{\alpha}$ or $N_{\beta}$, which is determined by the value of the character $\alpha$ or $\beta$ on the special numbers $i$, $\omega$, and $\phi$. 
In fact, the numbers $N_{\alpha}$ and $N_{\beta}$ are determined only by whether these character values are equal to $1$ or not equal to $1$. Since an element of the Galois group will take a character value to a power of itself, the Galois action must preserve the numbers $N_{\alpha}$ and $N_{\beta}$. Therefore this component of the ramification module is invariant as well. $\Box$ Since the ramification module is Galois-invariant, and of course the regular representation is Galois-invariant, $L(K_X)$ will be Galois invariant. In particular, as stated in section \ref{sec:cohomology}, $L(K_X)$ will be invariant under complex conjugation. For a general divisor $D$, the Riemann-Roch space $L(D)$ will be Galois-invariant if and only if the equivariant degree of $D$ is. \begin{theorem} Let $D=r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7$ be a $G$-invariant divisor. Then the equivariant degree of $D$ is Galois-invariant if $r_7 \in \{0, 3, 6\} \pmod{7}$. \end{theorem} {\bf Proof}:\ As in section \ref{sec:equivdeg}, multiples of $2$ in $r_2$, $3$ in $r_3$, and $7$ in $r_7$ can be absorbed into the $r_1 D_1$ term without affecting the equivariant degree. Therefore we may assume that $0 \leq r_2 \leq 1$, $0 \leq r_3 \leq 2$, and $0 \leq r_7 \leq 6$. The result can again be seen by looking at the multiplicities of representations permuted by the Galois group. The multiplicities of $W'$ and $W''$ or $X'$ and $X''$ are the same. By (\ref{eqn:degeq}), the multiplicity of a representation $W_{\alpha}$ or $X_{\beta}$ depends on $r_2$, $r_3$, and $r_7$, and not on $r_1$. Again, the Galois action will not permute a representation $W_{\alpha}$ with $\alpha(i)=1$ with one with $\alpha(i) \neq 1$; similarly for $X_{\beta}$, and for $\omega$. However, it could permute for example a representation $W_{\alpha}$ with $\alpha(\phi)=e^{\frac{2 \pi i}{7}}$ with one with $\alpha(\phi)=e^{\frac{4 \pi i}{7}}$. Thus the equivariant degree may not be Galois-invariant unless the multiplicities of these representations are equal. In the cases where $r_7 \in \{0, 3, 6\}$, then these multiplicities will be equal; otherwise they will not. $\Box$ Note that for some values of $q$, the equivariant degree may be Galois-invariant even if $r_7$ is not $0$, $3$, or $6$. A previous result of the first two authors (see \cite{JK1}) gives a simpler formula (see equation \ref{eqn:JKrammod}) to compute the multiplicity of an irreducible representation in the ramification module, when the ramification module is Galois-invariant. In the example at hand, if $r_7 \in \{0, 3, 6\}$, then since the equivariant degree is a multiple of the $H_7$ component of the ramification module, a slight modification of this formula gives an easy computation of the equivariant degree and therefore the Riemann-Roch space. \begin{corollary} Let $D = r_1 D_1 + r_2 D_2 + r_3 D_3 + r_7 D_7$, with $0 \leq r_2 \leq 1$, $0 \leq r_3 \leq 2$, and $r_7 \in \{0, 3, 6\}$. Then \begin{eqnarray*} L(D) &=& \bigoplus_{\pi \in G^*} \left[ (1+ r_1 + r_2 + \frac{r_3}{2} + \frac{r_7}{6}) \dim \pi \right.\\ && \left. + (\frac{1}{2}- r_2) \dim \pi^{H_2} + (\frac{1}{2}- \frac{r_3}{2}) \dim \pi^{H_3} +(\frac{1}{2}- \frac{r_7}{6}) \dim \pi^{H_7} \right] \pi. \end{eqnarray*} \end{corollary} Note that in spite of appearances, the multiplicity of each irreducible representation will in fact be an integer. 
{\bf Proof}:\ We see from the calculations in section \ref{sec:equivdeg} that the equivariant degree of $D$ is equal to \begin{eqnarray*} \deg_{eq}(D) &=& r_1 \mathbb{C}[G] + 2 r_2 \Gamma_{H_2} + r_3 \Gamma_{H_3} + \frac{r_7}{3} \Gamma_{H_7} \\ &=& \bigoplus_{\pi \in G^*} \left[ (r_1 + r_2 + \frac{r_3}{2} + \frac{r_7}{6}) \dim \pi \right.\\ && \left. - r_2 \dim \pi^{H_2} - \frac{r_3}{2} \dim \pi^{H_3} - \frac{r_7}{6} \dim \pi^{H_7} \right] \pi. \end{eqnarray*} The ramification module is \begin{equation*} \tilde{\Gamma}_G =\bigoplus_{\pi\in G^*} \left[ \sum_{\ell \in \{2,3,7\}} \frac{1}{2}\left(\dim \pi - \dim \pi^{H_\ell}\right)\right]\pi . \end{equation*} This sum splits into $\tilde{\Gamma}_G = \Gamma_{H_2} + \Gamma_{H_3} + \Gamma_{H_7}$ in the obvious way along the inner sum. Putting these together using the equivariant Riemann-Roch formula (\ref{eqn:Borne}), we obtain the desired result. $\Box$ \end{document}
\begin{document} \title{Restoring Heisenberg scaling in noisy quantum metrology by monitoring the environment} \author{Francesco Albarelli} \affiliation{Quantum Technology Lab, Dipartimento di Fisica ``Aldo Pontremoli'', Università degli Studi di Milano, IT-20133, Milan, Italy} \affiliation{Department of Physics, University of Warwick, Coventry CV4 7AL, United Kingdom} \orcid{0000-0001-5775-168X} \author{Matteo A. C. Rossi} \affiliation{Quantum Technology Lab, Dipartimento di Fisica ``Aldo Pontremoli'', Università degli Studi di Milano, IT-20133, Milan, Italy} \affiliation{QTF Centre of Excellence, Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turun Yliopisto, Finland} \orcid{0000-0003-4665-9284} \author{Dario Tamascelli} \affiliation{Quantum Technology Lab, Dipartimento di Fisica ``Aldo Pontremoli'', Università degli Studi di Milano, IT-20133, Milan, Italy} \orcid{0000-0001-6575-4469} \author{Marco G. Genoni} \affiliation{Quantum Technology Lab, Dipartimento di Fisica ``Aldo Pontremoli'', Università degli Studi di Milano, IT-20133, Milan, Italy} \orcid{0000-0001-7270-4742} \email{[email protected]} \begin{abstract} We study quantum frequency estimation for $N$ qubits subject to independent Markovian noise via strategies based on time-continuous monitoring of the environment. Both physical intuition and the extended convexity of the quantum Fisher information (QFI) suggest that these strategies are more effective than the standard ones based on the measurement of the unconditional state after the noisy evolution. Here we focus on initial GHZ states subject to parallel or transverse noise. For parallel, i.e., dephasing noise, we show that perfectly efficient time-continuous photodetection allows us to recover the unitary (noiseless) QFI, and hence obtain Heisenberg scaling for every value of the monitoring time. For finite detection efficiency, one falls back to noisy standard quantum limit scaling, but with a constant enhancement due to an effective reduced dephasing. In the transverse noise case, Heisenberg scaling is recovered for perfectly efficient detectors, and we find that both homodyne and photodetection-based strategies are optimal. For finite detector efficiency, our numerical simulations show that, as expected, an enhancement can be observed, but we cannot give any conclusive statement regarding the scaling. We finally describe in detail the stable and compact numerical algorithm that we have developed in order to evaluate the precision of such time-continuous estimation strategies, and that may find application in other quantum metrology schemes. \end{abstract} \maketitle \section{Introduction} Quantum metrology is one of the most promising fields within the realm of quantum technologies, with applications ranging from spectroscopy and magnetometry to interferometry and gravitational waves detection~\cite{Caves1981,Holland1993,Bollinger1996,McKenzie2002}. While $N$ uncorrelated (\emph{classical}) probe states lead to an estimation precision scaling as $1/\sqrt{N}$, typically referred to as \emph{standard quantum limit} (SQL), quantum probes made of $N$ entangled particles allow to design schemes with precision scaling as $1/N$, reaching the so-called \emph{Heisenberg limit} (HL)~\cite{GiovannettiNatPhot}. The above result relies on the assumption of noiseless unitary dynamics, but the unavoidable interaction with the surrounding environment can have dramatic consequences on the performances of these protocols. 
When the dynamics is described by a dephasing dynamical semigroup, the estimation precision is bounded to follow a SQL scaling, and thus only a constant enhancement can be obtained by exploiting quantum resources~\cite{Huelga97,EscherNatPhys,KolodynskyNatComm,Koodynski2013}. Several attempts to circumvent these no-go theorems and obtain a superclassical scaling in the presence of noise have been pursued, exploiting time-inhomogeneous dynamics~\cite{Matsuzaki2011,Chin12,Smirne16,Haase2017,Gorecka2017}, noise with a particular geometry~\cite{Chaves2013,Brask15}, dynamical decoupling~\cite{Sekatski2015a}, and quantum error-correction protocols (or, more generally, the possibility to implement control Hamiltonians)~\cite{Kessler14,Arrad14,Dur14,Plenio2016,Gefen2016,Layden2017,Sekatski2017metrologyfulland,Matsuzaki2017,Zhou2018}. Our goal is to attack the problem of noisy quantum metrology by exploiting time-continuous monitoring~\cite{WisemanMilburn,SteckJacobs}. Time-continuous measurements have been often studied as a tool for quantum parameter/state estimation~\cite{Mabuchi1996,Verstraete2001,Gambetta2001,Chase2009,Ralph2011a,Six2015,Cortez2017,Ralph2017}, with a particular focus on classical time-dependent signals~\cite{Tsang2009b,Tsang2011,Ng2016}. More recently, methods to calculate classical and quantum Cramér-Rao bounds for parameter estimation via time-continuous monitoring have been proposed~\cite{Guta2011,GammelmarkCRB,GammelmarkQCRB,Macieszczak2016,Genoni2017}, and put into action~\cite{KiilerichPC,Kiilerich2015a,KiilerichHomodyne,Albarelli2017a}. In this paper, we apply such techniques to the problem of frequency estimation for an ensemble of two-level atoms (qubits), each subjected to independent Markovian noise. In particular, we consider estimation strategies based on time-continuous (\emph{weak}) measurements of each qubit via the monitoring of the corresponding environment, and a final (\emph{strong}) measurement on the conditional state of the $N$ qubits. Few similar attempts in this sense have already been discussed in the literature, but they all present substantial differences compared to our approach. For instance, in~\cite{Geremia2003,Molmer2004,Albarelli2017a} the same problem of frequency estimation (magnetometry) is addressed, but in the presence of collective transverse noise. There, a single \emph{environment} collectively interacts with all the qubits and the corresponding noise is not detrimental for the Heisenberg scaling; time-continuous monitoring is however extremely promising as, thanks to the corresponding back-action on the system, a Heisenberg-limited precision can be achieved even by starting the dynamics with an uncorrelated spin-coherent state. An estimation problem more similar to ours is presented in~\cite{Catana2014}, where independent environments interact with each probe, but the analysis is restricted to a single (discrete) step of the dynamics and to interaction Hamiltonians commuting with the generator of the phase rotation (i.e., in the presence of pure dephasing only). Here we consider a proper continuous evolution, described by a Markovian master equation in the Lindblad form, leading to a dynamics satisfying the semigroup property. We focus in particular on independent noise, either parallel or transverse to the generator of the phase rotation to be estimated. 
In the former case, which physically corresponds to pure dephasing, the unconditional dynamics leads to a standard quantum limited precision, even for an infinitesimal amount of noise~\cite{KolodynskyNatComm}. In the latter, it was shown that, by optimizing over the evolution time, it is possible to restore a super-classical scaling between SQL and HL~\cite{Chaves2013,Brask15}. Our goal is to study whether in both cases time-continuous monitoring will allow for the restoration of the HL, and to analyze in detail the effect of the monitoring efficiency on the performance of the estimation schemes. First, we will derive the ultimate limit on estimation precision, optimizing over the most general measurements on the joint degrees of freedom of system and environment. We will then restrict ourselves to the strategies briefly described above, focusing on time-continuous photodetection and homodyne detection. To achieve those aims, we develop a stable and compact numerical algorithm that allows us to calculate the effective quantum Fisher information characterizing this kind of measurement strategy, and that will find application in quantum metrology problems beyond the ones considered in this paper. The manuscript is structured as follows: In Sec.~\ref{s:qest} we introduce the basic concepts of quantum estimation theory, with a particular focus on measurement strategies based on time-continuous measurements. In Sec.~\ref{s:results} we first introduce the problem of noisy quantum frequency estimation and then we present our original results for parallel and transverse Markovian noise. The following sections are devoted to describing the methods exploited to obtain such results. In Sec.~\ref{s:UQFI} we show how to evaluate analytically the ultimate limits posed by quantum mechanics for strategies optimizing over all the possible global measurements on system and environment. In Sec.~\ref{s:algorithm} we present in detail the numerical algorithm we have developed for the calculation of the effective quantum Fisher information, pertaining to strategies based on time-continuous monitoring of the environment. Sec.~\ref{s:conclusion} concludes the paper with some final remarks and discussion of possible future directions. \section{Quantum estimation via time-continuous measurements} \label{s:qest} A classical estimation problem is typically described in terms of a conditional probability $p(x|\omega)$ of obtaining the measurement outcome $x$, given the value of the parameter $\omega$ that one wants to estimate. The classical Cram\'er-Rao bound states that the precision of any unbiased estimator is lower bounded as \begin{equation} \delta \omega \geq \frac{1}{\sqrt{M \mathcal{F}[p(x|\omega)]}} \,, \end{equation} where $M$ is the number of measurements performed, $\mathcal{F}[p(x|\omega)] = \mathbbm{E}_p[(\partial_\omega \ln p(x|\omega) )^2]$ is the classical Fisher information, and $\mathbbm{E}_p[\cdot ]$ denotes the average over the probability distribution $p(x|\omega)$. In the quantum setting, the probability is obtained via the Born rule, $p(x|\omega)=\hbox{Tr}[\varrho_\omega \Pi_x]$, where $\varrho_\omega$ is a family of quantum states parametrized by $\omega$, and $\Pi_x$ is an element of the positive-operator valued measure (POVM) describing the measuring process.
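As an elementary illustration of these classical notions (a self-contained numerical sketch of ours, not tied to the specific models studied later), consider a single qubit prepared in $|+\rangle$, rotated by the Hamiltonian $\omega \sigma_z/2$ for a time $t$ and then measured in the $\sigma_x$ basis, so that $p(\pm|\omega)=[1\pm\cos(\omega t)]/2$.

\begin{verbatim}
import numpy as np

def probs(omega, t):
    # Outcome probabilities for a sigma_x measurement on exp(-i*omega*t*sigma_z/2)|+>.
    return np.array([(1 + np.cos(omega * t)) / 2, (1 - np.cos(omega * t)) / 2])

def classical_fisher(omega, t, eps=1e-6):
    # F = sum_x (d_omega p)^2 / p, with a central finite difference for d_omega p.
    p = probs(omega, t)
    dp = (probs(omega + eps, t) - probs(omega - eps, t)) / (2 * eps)
    return np.sum(dp**2 / p)

omega, t = 1.0, 2.0
print(classical_fisher(omega, t), t**2)   # both close to 4.0
\end{verbatim}

For this pure-state model the Fisher information of the $\sigma_x$ measurement equals $t^2$ (whenever $\sin(\omega t)\neq 0$), which coincides with the quantum Fisher information introduced below, so this particular measurement is already optimal.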
It is then possible to optimize over all the possible POVMs, obtaining the quantum Cram\'er-Rao inequality \cite{CavesBraunstein} \begin{align} \delta \omega \geq \frac{1}{\sqrt{M \mathcal{F}[p(x|\omega)]}} \geq \frac{1}{\sqrt{M \mathcal{Q}[\varrho_\omega]}}\,, \label{eq:QCRB} \end{align} where \begin{align} \mathcal{Q}[\varrho_\omega] = \lim_{\epsilon \rightarrow 0} \frac{ 8 \left(1 - F \left[ \varrho_\omega, \varrho_{\omega + \epsilon} \right] \right) }{ \epsilon^2 } \end{align} is the quantum Fisher information (QFI), expressed in terms of the fidelity between quantum states $F \left[\varrho_1,\varrho_2 \right] = \hbox{Tr} \left[ \sqrt{\sqrt{\varrho_1} \varrho_2 \sqrt{\varrho_1} } \right] $. The QFI $\mathcal{Q}[\varrho_\omega]$ clearly depends only on the quantum statistical model, namely the $\omega$-parametrized family of quantum states $\varrho_\omega$, and one can always find the optimal POVM saturating the bound, i.e., such that $\mathcal{F}[p(x|\omega)]=\mathcal{Q}[\varrho_\omega]$. In several quantum parameter estimation problems, including the case of standard frequency estimation, $\varrho_\omega$ corresponds to the \emph{unconditional} state evolved according to a Markovian master equation of the form \begin{equation} \frac{d\varrho}{dt} = \mathcal{L}_\omega \varrho = -i [ \hat{H}_\omega,\varrho] + \sum_{j} \mathcal{D}[\hat{c}_j] \varrho \,, \label{eq:Markov} \end{equation} where the parameter is encoded in the Hamiltonian $\hat{H}_\omega$, the operators $\hat{c}_j$ denote different independent noisy channels, and we have defined the superoperator $\mathcal{D}[\hat{A}] \varrho = \hat{A}\varrho \hat{A}^\dag - (\hat{A}^\dag \hat{A} \varrho + \varrho \hat{A}^\dag \hat{A})/2$. The master equation above can be obtained assuming that the system is interacting with a sequence of input operators $\hat{a}^{(j)}_\mathsf{in}(t)$, one for each noise operator $\hat{c}_j$, that satisfy the bosonic commutation relation $[\hat{a}^{(j)}_\mathsf{in}(t),\hat{a}^{(k) \dag}_\mathsf{in}(t')]=\delta_{jk}\delta(t-t')$. Some information on the parameter $\omega$ is then contained in the corresponding output operators $\hat{a}^{(j)}_\mathsf{out}(t)$, i.e., the environmental modes, just after the interaction with the system. One can imagine performing a joint measurement on all output modes and the quantum system itself. The ultimate limit on the estimation of $\omega$, taking into account all these strategies based on measurements of system and output, is quantified by the QFI introduced in~\cite{GammelmarkQCRB}. It depends only on the initial state of the system and on the superoperator $\mathcal{L}_\omega$ describing the evolution. In general, it can be evaluated as \begin{align} \label{eq:ultimQFI} \overline{\mathcal{Q}}_{\mathcal{L}_\omega} = 4 \partial_{\omega_1} \partial_{\omega_2} \log \left| \hbox{Tr}[\bar\varrho ]\right| \big|_{\omega_1 = \omega_2 = \omega}\,\, , \end{align} where, in the case we are considering (i.e., a Hamiltonian parameter), the operator $\overline\varrho$ is the one solving the generalized master equation \begin{align} \label{eq:MolmerGenME} \frac{d\bar\varrho}{dt} &= \mathcal{L}_{\omega_1,\omega_2} \bar\varrho \notag \\ &= -i \left( \hat{H}_{\omega_1}\bar\varrho -\bar\varrho \hat{H}_{\omega_2} \right) + \sum_{j} \mathcal{D}[\hat{c}_j] \bar\varrho \,.
\end{align} This QFI corresponds to an optimization of the classical FI over all the possible measurement strategies described above, including the possibility of performing non-separable (entangled) measurements over the system and all the output modes at different times. However, one can restrict to more feasible strategies, where the outputs are sequentially measured continuously in time, and a final \emph{strong} measurement is performed on the conditional state of the system. Depending on the type of measurement performed on the outputs, one obtains different stochastic master equations for the conditional state of the system. In the following we will focus on the two paradigmatic cases of photodetection (PD) and homodyne detection (HD). In the first case, assuming time-continuous PD on each output with efficiency $\eta_j$, the evolution is described by the stochastic master equation~\cite{WisemanMilburn} \begin{align} d\varrho^{(c)} &= - i [\hat{H}_{\omega} , \varrho^{(c)}]\,dt + \sum_j (1-\eta_j) \mathcal{D}[\hat{c}_j] \varrho^{(c)} \,dt \notag \\ & -\frac{\eta_j}{2} (\hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j \varrho^{(c)} +\varrho^{(c)} \hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j ) \,dt + \eta_j \hbox{Tr}[\varrho^{(c)} \hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j ]\varrho^{(c)} \,dt \notag \\ & + \left( \frac{\hat{c}^{\vphantom{\dag}}_j \varrho^{(c)} \hat{c}_j^\dag}{\hbox{Tr}[\varrho^{(c)} \hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j ] } - \varrho^{(c)} \right)dN_j \,, \label{eq:photoSME} \end{align} where $dN_j$ denote Poisson increments taking value $0$ (no-click event) or $1$ (detector click event), and having average value $\mathbbm{E}[dN_j] = \eta_j \hbox{Tr}[\hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j \varrho^{(c)}]\,dt$. Likewise, for time-continuous HD on each output, the stochastic master equation reads~\cite{WisemanMilburn,Rouchon2015} \begin{align} d\varrho^{(c)} ={}& - i [\hat{H}_{\omega} , \varrho^{(c)}]\,dt + \sum_j \mathcal{D}[\hat{c}_j] \varrho^{(c)} \,dt \notag \\ & + \sum_j \sqrt{\eta_j} \mathcal{H}[\hat{c}_j]\varrho^{(c)} \, dw_j \:,\label{eq:homoSME} \end{align} where $\mathcal{H}[\hat{c}]\varrho = \hat{c} \varrho + \varrho \hat{c}^\dag - \hbox{Tr}[\varrho (\hat{c} +\hat{c}^\dag)]\varrho$, and $dw_j = dy_j - \sqrt{\eta_j} \hbox{Tr}[\varrho^{(c)} (\hat{c}^{\vphantom{\dag}}_j + \hat{c}_j^\dag )]\,dt$ represents a standard Wiener increment (s.t. $dw_j dw_k = \delta_{jk} dt$), operationally corresponding to the difference between the measurement result $dy_j$ and the rescaled average value of the operator $(\hat{c}^{\vphantom{\dag}}_j + \hat{c}_j^\dag)$. In general, different measurement strategies mathematically correspond to different possible unravellings of the Markovian master equation: any quantum state, solution of Eq.~\eqref{eq:Markov} at a certain time $t$, can be written as $\varrho_\mathsf{unc} = \sum_\mathsf{traj} p_\mathsf{traj} \varrho^{(c)}$, where $p_\mathsf{traj}$ is the probability of a certain trajectory leading to the conditional state $\varrho^{(c)}$, and where the sum is replaced by an integral in the case of time-continuous measurements with a continuous spectrum (e.g. homodyne).
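For concreteness, a single Euler step of the homodyne stochastic master equation \eqref{eq:homoSME} for one qubit with $\hat{c}=\sqrt{\kappa/2}\,\sigma_z$ can be sketched in a few lines of plain Python/NumPy (parameter values, step size and function names are ours; this naive integrator is meant only to illustrate the structure of the equation, with no attempt at the more robust discretizations needed in practice).

\begin{verbatim}
import numpy as np

sz = np.diag([1.0, -1.0]).astype(complex)

def D(c, rho):      # Lindblad dissipator D[c]rho
    cd = c.conj().T
    return c @ rho @ cd - 0.5 * (cd @ c @ rho + rho @ cd @ c)

def Hsup(c, rho):   # measurement superoperator H[c]rho
    x = c @ rho + rho @ c.conj().T
    return x - np.trace(x) * rho

def homodyne_euler_step(rho, omega, kappa, eta, dt, rng):
    H = 0.5 * omega * sz
    c = np.sqrt(kappa / 2) * sz
    dw = rng.normal(scale=np.sqrt(dt))              # Wiener increment
    drho = ((-1j * (H @ rho - rho @ H) + D(c, rho)) * dt
            + np.sqrt(eta) * Hsup(c, rho) * dw)
    rho = rho + drho
    return rho / np.trace(rho).real                 # guard against numerical drift

rng = np.random.default_rng(0)
rho = 0.5 * np.ones((2, 2), dtype=complex)          # |+><+|
for _ in range(1000):
    rho = homodyne_euler_step(rho, 1.0, 0.5, 1.0, 1e-3, rng)
print(np.trace(rho).real)                           # ~1, trace is preserved
\end{verbatim}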
One can then define an effective QFI, which poses the ultimate bounds for these kinds of estimation strategies~\cite{Catana2014,Ng2016,Albarelli2017a}, and that depends both on the specific unravelling and on the monitoring efficiencies $\eta_j$: \begin{align} \widetilde{\mathcal{Q}}_{\mathsf{unr},\eta_j} = \mathcal{F}[p_\mathsf{traj}] + \sum_\mathsf{traj} p_\mathsf{traj} \mathcal{Q}[\varrho^{(c)}] \,. \label{eq:effQFI} \end{align} As it is apparent from the formula, $\widetilde{\mathcal{Q}}_{\mathsf{unr},\eta} $ is equal to the sum of the classical Fisher information of the trajectories probability distribution $\mathcal{F}[p_\mathsf{traj}]=\sum_{\mathsf{traj}} \frac{ (\partial_\omega p_\mathsf{traj} )^2 }{p_\mathsf{traj}}$, plus the average of the QFIs of the corresponding conditional states $\mathcal{Q}[\varrho^{(c)}]$. The first term quantifies the amount of information obtained from the measurement of the output modes after the interaction with the system, while the second term quantifies the amount of information obtained from the final strong measurement on the conditional states of the system itself. We point out that an analogous figure of merit has been recognized as the appropriate one for metrology with post-selection~\cite{Combes2014,Zhang2015,Alves2015}. A method for the calculation of $\mathcal{F}[p_\mathsf{traj}]$ has been firstly proposed in \cite{GammelmarkCRB} for generic quantum states, and then in~\cite{Genoni2017} for continuous-variable Gaussian states. In Sec.~\ref{s:algorithm}, we describe in detail a more compact and stable numerical algorithm for the calculation of the terms present in Eq.~\eqref{eq:effQFI}, for both homodyne and photodetection measurements. It is reasonable to expect that, thanks to the information obtained from the time-continuous monitoring, and thanks to the, typically beneficial, effects of quantum measurements on the conditional states, one will obtain more information on the parameter via these strategies, than by solely measuring the unconditional state solution of the Markovian master equation \eqref{eq:Markov}. This intuition is confirmed by the extended convexity property of the QFI, recently proved in~\cite{Alipour2015,Ng2016}. In fact, the following chain of inequalities holds: \begin{align}\label{eq:QFIineq} \mathcal{Q}[\varrho_\mathsf{unc}] &\leq \qunr{\eta_j} \leq \overline{\mathcal{Q}}_\mathcal{L_\omega} \,. \end{align} We also conjecture that, for a fixed unravelling, and for fixed efficiency on all the output channels (i.e., for $\eta_j=\eta \, \forall j$), the effective QFI is monotonic with respect to the efficiency parameter, that is: \begin{align}\label{eq:QFIineq_conj} \qunr{\eta} \leq \qunr{\eta^\prime} \quad \Longleftrightarrow \quad \eta \leq \eta^\prime \qquad \textrm{(conjecture)}\,. \end{align} \section{Quantum frequency estimation in the presence of noise} \label{s:results} We consider a system of $N$ qubits, described by a Hamiltonian $\hat{H}_\omega = (\omega/2) \sum_{j=1}^N \sigma_z^{(j)}$, where $\omega$ is the unknown frequency to be estimated, and $\sigma_z^{(j)}$ is the Pauli-z operator acting on the $j$-th qubit. 
The system is interacting also with a Markovian environment, so that the evolution is described by the master equation \begin{align} \frac{d\varrho}{dt} &= \mathcal{L}_\omega \varrho = -i [ \hat{H}_\omega,\varrho] + \frac\kappa{2} \sum_{j=1}^N \mathcal{D}[\sigma_\alpha^{(j)}] \varrho \,, \label{eq:MarkovFreq} \end{align} where $\alpha \in \{ x , z \} $, i.e., we only consider noise parallel or transverse to the Hamiltonian. The master equation~\eqref{eq:MarkovFreq} can be easily mapped to Eq.~\eqref{eq:Markov} by considering the noise operators $\hat{c}_j = \sqrt{\kappa/2} \sigma_\alpha^{(j)}$. In what follows we will study all the quantities described in the previous section in order to assess the estimation of the parameter $\omega$. In quantum frequency estimation strategies, one considers the number of qubits $N$ and the total time of the experiment $T$ as the resources of the protocol. The quantum Cram\'er-Rao bound~\eqref{eq:QCRB} is then more efficiently rewritten as \begin{align} \delta \omega \sqrt{T} \geq \frac{1}{\sqrt{\mathcal{Q}/t}} \geq \frac{1}{\sqrt{ \max_t [\mathcal{Q}/t ]}} \label{eq:QCRBomega} \end{align} where $t=T/M$ corresponds to the duration of each round, over which one can perform a further optimization, and where $\mathcal{Q}$ corresponds here to the proper QFI characterizing the particular estimation strategy considered. In the rest of the manuscript we restrict to initial entangled GHZ states $|\psi_\mathsf{GHZ}\rangle = ( |0\rangle^{\otimes N} + |1\rangle^{\otimes N})/\sqrt{2}$. It is well known that in the noiseless case, i.e., for $\kappa=0$, the corresponding QFI is Heisenberg limited, $\mathcal{Q}_\mathsf{HL}= N^2 t^2$. This leads to a quadratic enhancement w.r.t. the ``standard quantum limited'' QFI, $\mathcal{Q}_\mathsf{SQL} = N t^2$ (obtained in the case of a factorized \emph{coherent-spin} initial state $|\psi_\mathsf{coh}\rangle = [(|0\rangle + |1\rangle)/\sqrt{2}]^{\otimes N}$). In what follows, we will consider the noisy case ($\kappa>0$); the generic QFI $\mathcal{Q}$ in Eq.~\eqref{eq:QCRBomega} will correspond to either: i) the QFI of the unconditional state $\mathcal{Q}[\varrho_\mathsf{unc}]$ corresponding to the master equation~\eqref{eq:MarkovFreq}; ii) the ultimate QFI $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}$ obtained optimizing over all the possible measurements on system and environmental outputs; iii) the effective QFI $\widetilde{\mathcal{Q}}_{\mathsf{unr},\eta}$ corresponding to a specific time-continuous (sequential) measurement of the output modes and a final strong measurement on the conditional state of the system. In particular we will focus on time-continuous photodetection and homodyne detection, with measurement efficiency $\eta$, and we label the respective effective QFIs as $\widetilde{\mathcal{Q}}_\mathsf{pd,\eta}$ and $\widetilde{\mathcal{Q}}_\mathsf{hom,\eta}$. It is important to remark that, given the master equation~\eqref{eq:MarkovFreq}, one is assuming that $N$ different (homodyne or photo-) detectors are monitoring the environment of each qubit. We will however find instances where this assumption may be relaxed. In the next subsections we address separately the two different cases of parallel and transverse noise.
For each case we start by reviewing known results for the QFI of the unconditional states; then we will present our original results regarding the ultimate QFI $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}$ and effective QFIs $\widetilde{\mathcal{Q}}_\mathsf{pd,\eta}$ and $\widetilde{\mathcal{Q}}_\mathsf{hom,\eta}$ (corresponding to the schemes pictorially represented in Fig.~\ref{fig:diagram}), without dwelling on the details of their calculation, which will be left to Secs.~\ref{s:UQFI} and~\ref{s:algorithm}. \begin{figure} \caption{Schematic representation of the metrological approach we consider in this paper. An $N$-qubit (possibly entangled) input state $| \psi_0 \rangle$ interacts with $N$ independent environments which are monitored by $N$ detectors, either by photodetection (PD) or homodyne detection (HD). In the former case each output is a binary-valued detection record $N_j (t)$, in the latter a real-valued photo-current $y_j (t)$.\label{fig:diagram}} \end{figure} \subsection{Parallel noise} Parallel noise, corresponding to the master equation~\eqref{eq:MarkovFreq} with $\sigma_\alpha^{(j)}=\sigma_z^{(j)}$, is typically considered the most detrimental noise for frequency estimation since the evolution induces dephasing in the eigenbasis of the Hamiltonian $\hat{H}_\omega$. For an initial GHZ state, the QFI of the unconditional state can be evaluated analytically~\cite{Huelga97,Alipour2014}, obtaining $\mathcal{Q}[\varrho_\mathsf{unc}] = N^2 t^2 e^{- 2 \kappa N t}$.\\ By optimizing over the single-shot duration $t$, one obtains that the optimal QFI is standard quantum limited, \begin{align} \max_t \left[\frac{\mathcal{Q}[\varrho_\mathsf{unc}]}{t} \right] = \frac{N}{2 e \kappa} \,. \label{eq:QFIparUnc} \end{align} This result is equivalent to the one obtainable via a factorized initial state. Only a constant enhancement can in fact be gained by optimizing over the initial entangled state~\cite{Huelga97}; by doing this one saturates the ultimate noisy bound derived in~\cite{EscherNatPhys,KolodynskyNatComm}, which dictates that, as soon as some parallel noise is present in the dynamics, the ultimate precision is standard quantum limited. On the other hand, the ultimate QFI $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}$ can be easily evaluated (see Sec.~\ref{s:UQFI} for details), and it turns out to be equal to the noiseless QFI, i.e. \begin{align} \overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\parallel = N^2 t^2 \,. \end{align} This shows that, by measuring also the output modes, it is in principle possible to recover not only a Heisenberg scaling for the error $\delta\omega$, but also the whole information on the parameter. For both time-continuous homodyne and photodetection, the evolution of an initial GHZ state, under parallel noise and conditioned on the measurement results, is restricted to a two-dimensional Hilbert space. As a consequence, the results obtained for $N=1$ qubit can be readily used to infer the results for generic $N$ qubits, by simply rescaling the evolution time $t \rightarrow Nt$ (see Sec.~\ref{s:algorithm} for details). We can thus obtain numerically exact values of the effective QFIs for any value of $N$; in particular we have been able to numerically verify that the strategy based on time-continuous photodetection with perfect efficiency $\eta=1$ is indeed optimal, i.e.
\begin{align} \widetilde{\mathcal{Q}}^\parallel_{\mathsf{pd},\eta=1} = \overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\parallel = N^2 t^2 \,, \end{align} showing that the noiseless Heisenberg-limited result can be recovered, without the need to perform complicated (non-local in time) measurement strategies on system and environment. The physical explanation of this result can be easily understood by studying the action of the two Kraus operators describing the conditional dynamics for initial GHZ states (see Sec.~\ref{s:algorithm} for more details on the Kraus operators). The \emph{no-jump} evolution is ruled by the infinitesimal Kraus operator $M_0 = \mathbbm{1} - i \hat{H}_\omega \,dt - (\kappa N/4)\mathbbm{1}\, dt$; as a consequence it is easy to check that the GHZ state evolves as in the unitary case, apart from an irrelevant normalization factor. On the other hand, when a jump occurs due to a photon detected in the output corresponding to one of the $N$ qubits, the only effect of the corresponding Kraus operator, $M_1^{(j)} = \sigma_z^{(j)} \sqrt{(\kappa/2) dt}$, is to induce a relative minus sign between the $|0\rangle^{\otimes N}$ and the $|1\rangle^{\otimes N}$ components of the quantum conditional state, independently of the index $j$ of the corresponding qubit. In general, after a time $t$ and $m$ detected photons from all the output channels, the conditional state reads: \begin{align} |\psi_{m\lvert\mathsf{GHZ}}\rangle = \frac{1}{\sqrt{2}}\left( |0\rangle^{\otimes N} + e^{i (N \omega t + m \pi)} |1\rangle^{\otimes N} \right) \,. \end{align} It is important to remark that, thanks to the symmetry of the GHZ state, and given that jumps may occur only one by one~\cite{WisemanMilburn}, it is not necessary to know exactly from which qubit channel the photon has been detected; as a consequence one may obtain the same result by using a single perfect photo-detector monitoring jointly all the $N$ output modes. Once the number of detected photons $m$ is known, and the corresponding ``GHZ-equivalent'' conditional state is prepared, one can estimate the frequency $\omega$ at the Heisenberg limit. As soon as the photodetection monitoring is not perfectly efficient (we always consider equal efficiency for all the qubits, $\eta_j = \eta < 1, \,\, \forall j$), the Heisenberg scaling is immediately lost. Our numerical results clearly show that the effective QFI is equal to $\widetilde{\mathcal{Q}}^\parallel_{\mathsf{pd},\eta} = N^2 t^2 e^{- 2 \kappa (1-\eta) N t}$, and the optimized QFI reads \begin{align} \max_t \left[\frac{\widetilde{\mathcal{Q}}^\parallel_{\mathsf{pd},\eta}}{t} \right] = \frac{N}{2 e \kappa (1-\eta)} \,. \end{align} Time-continuous inefficient photodetection thus leads to a constant enhancement, compared to the unconditional/classical case~\eqref{eq:QFIparUnc}, which physically corresponds to having a reduced effective dephasing parameter $\kappa_\mathsf{eff} = \kappa (1-\eta)$. We remark that also in this case it is not necessary to monitor every output channel separately, and that a single photodetector can be employed. Moreover, in this picture, the overall efficiency parameter $\eta$ corresponds to the product of the actual efficiency of the detectors and the fraction of qubits that are effectively monitored. We also notice that, for every value of $\omega$, all the information is contained in the conditional quantum states: the classical Fisher information $\mathcal{F}[p_\mathsf{traj}]$ is in fact identically equal to zero.
This follows from the unitarity of the Pauli matrices, leading to $\hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j = \kappa \mathbbm{1}_2/2$, and thus to a parameter-independent Poisson increment with average value $\mathbbm{E}[dN_j] = (\eta \kappa/2)\, dt$ in Eq. \eqref{eq:photoSME}. Nevertheless, we remark that the output from the photodetection measurement is in fact essential to know the corresponding conditional state, and thus to extract the whole information on $\omega$ via the final strong measurement. These results can be readily extended to the scenario of non-linear quantum metrology, where $k$-body Hamiltonian and $p$-body dissipators are considered ~\cite{Beau2017}. If we focus on the case where $k$ and $p$ are odd numbers, by preparing the initial state in a GHZ state, and by monitoring via photodetection each dissipation channel, one can show by a simple rescaling of the variables $\omega$ and $\kappa$ that for unit monitoring efficiency, the ultimate (noiseless) scaling $\sim N^k$ can be restored; on the other hand, as soon as the efficiency is smaller than one, one goes back to the noisy scaling $\sim N^{k-p/2}$, with only a constant enhancement, due to the finite monitoring efficiency. The results for the case of continuous homodyne monitoring are not reported here since the corresponding effective QFI, at fixed efficiency $\eta$, is always lower than the one obtained for continuous photodetection, and Heisenberg scaling is not recovered even in the case of perfect monitoring. \subsection{Transverse noise} Thanks to the symmetry of the GHZ state, the QFI for the unconditional dynamics with arbitrary collapse operators can be obtained without the need to diagonalize the full density matrix~\cite{Chaves2013}. The corresponding optimized QFI can be then numerically obtained and the scaling is found to be intermediate between SQL and Heisenberg: $\max_t \left[ \mathcal{Q} \left[ \varrho_\mathsf{unc}^\bot \right] / t \right] \approx N^{5/3} $. On the other hand, the ultimate QFI can be computed analytically (see Sec.~\ref{s:UQFI} for details on the calculation), yielding \begin{align} \label{eq:QFImolmerGHZfinal} &\overline{\mathcal{Q}}_{\mathcal{L}_{\omega}}^\bot = \frac{ N^2 \left(1- e^{-\kappa t}\right)^2 + N \left[ 2 \kappa t+ 1 - \left(2 - e^{-\kappa t} \right)^2 \right] }{\kappa ^2}. \end{align} Two main observations are in order here: on the one hand we observe that $\overline{\mathcal{Q}}_{\mathcal{L}_{\omega}}^\bot$ depends on the noise parameter $\kappa$ and is always smaller than the noiseless QFI $\mathcal{Q}_{\sf HL}=N^2t^2$. This shows how, unlike in the parallel noise case, for transverse (non-commuting) noise, part of the information leaking into the environment is irretrievably lost, and cannot be recovered even if one has at disposal all the environmental degrees of freedom. On the other hand, this expression does explicitly show Heisenberg scaling, and can be further optimized over the evolution time $t$. In contrast to the unconditional case, the optimal time $t_\mathsf{opt} (N)$ does not go to zero for $N \to \infty $, instead it tends to a constant: $\lim_{N \to \infty} t_\mathsf{opt} (N) = c / \kappa$, where $c \approx 1.26$ . The very same results can be obtained by computing the unconditional QFI in the limit $\omega \to 0$~\cite{Albarelli2018thesis}, which is already known to give rise to Heisenberg scaling (see Appendix D of~\cite{Brask15}), i.e. 
\begin{align} \overline{\mathcal{Q}}_{\mathcal{L}_{\omega}}^\bot&= \lim_{\omega \to 0} \mathcal{Q} \left[ \varrho^\bot_\mathsf{unc} \right]. \end{align} Notice that it is necessary to take the limit, instead of using directly $\omega=0$, because for this value of the parameter the density matrix changes its rank and this gives rise to a discontinuity in the QFI~\cite{Safranek2017,SevesoTBA}. The effective QFI for photodetection and homodyne detection is obtained numerically with the methods presented in Sec.~\ref{s:algorithm}. From our results we observe that for unit efficiency $\eta = 1$ the effective QFI saturates the ultimate bound in both cases \begin{align} \widetilde{\mathcal{Q}}^\bot_{\mathsf{hom},\eta=1} = \widetilde{\mathcal{Q}}^\bot_{\mathsf{pd},\eta=1} = \overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\bot. \end{align} This has been checked up to $N=14$, but we conjecture that this equality holds in general. The two terms that contribute to $\widetilde{\mathcal{Q}}^\bot_{\mathsf{hom},\eta=1}$ in Eq.~\eqref{eq:effQFI} always sum up to $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\bot$ but with $\omega$-dependent behaviors. This is shown for a particular set of parameters in Fig.~\ref{fig:contributions}. On the other hand, as explained before for the parallel case, $\widetilde{\mathcal{Q}}^\bot_{\mathsf{pd},\eta=1}$ is only equal to the average QFI of the conditional states, since the classical FI $\mathcal{F}[p_\mathsf{traj}]$ is always identically equal to zero. \begin{figure} \caption{Contributions of the classical FI $\mathcal{F}[p_\mathsf{traj}]$ and of the average QFI of the conditional states to the effective QFI $\widetilde{\mathcal{Q}}^\bot_{\mathsf{hom},\eta=1}$, for a particular set of parameters.} \label{fig:contributions} \end{figure} As expected, for $\eta < 1 $, we observe that the effective QFI lies between the unconditional QFI and the ultimate QFI, confirming the chained inequalities~\eqref{eq:QFIineq} for both detection strategies and also the conjecture regarding the monotonicity with the efficiency $\eta$. \begin{figure*} \caption{Effective QFI over time for photodetection $\widetilde{\mathcal{Q}}^\bot_{\mathsf{pd},\eta}$ and homodyne detection $\widetilde{\mathcal{Q}}^\bot_{\mathsf{hom},\eta}$ at different efficiencies $\eta$, for three different values of $N$, compared with the unconditional and ultimate QFIs.} \label{fig:qeff_over_t_vs_t} \end{figure*} As an example, in Fig.~\ref{fig:qeff_over_t_vs_t} we show the effective QFI over time for photodetection and homodyne detection at different efficiencies, for three different values of $N$. We can see that at lower efficiencies the curves tend to the unconditional QFI $\mathcal{Q}[\varrho_\mathsf{unc}^\bot]$, while for perfect efficiency they coincide with $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\bot$. Notice that in general one cannot define a hierarchy between the two strategies, and that in particular, in the case of homodyne detection, the curves become constant at large $t$, due to the non-vanishing contribution of the classical Fisher information $\mathcal{F}[p_\mathsf{traj}]$ that is linear in $t$. Since the numerical method for non-unit efficiency requires using the full Hilbert space, the complexity of the algorithm is exponential in $N$ and we have been able to obtain results only up to $N=7$. Consequently, we cannot explicitly observe a scaling different from the unconditional one. As a matter of fact the difference between $\max_t \left[ \overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\bot / t \right]$ and $\max_t \left[ \mathcal{Q} \left[ \varrho^\bot_\mathsf{unc} \right] /t \right]$ is not very significant for $N \leq 7$. This is shown in Fig.~\ref{fig:q_opt_pd} for photodetection, in the case $\omega = \kappa$. As we can see, the two quantities have a similar scaling in this range of $N$, with the effective QFI lying between them, with optimal values that are monotonic in $\eta$.
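The large-$N$ limit of the optimal time quoted after Eq.~\eqref{eq:QFImolmerGHZfinal} can also be recovered with a short calculation: for $N \to \infty$ the $N^2$ term dominates, so maximizing $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}^\bot/t \propto (1-e^{-\kappa t})^2/(\kappa^2 t)$ amounts to solving $e^{x} = 1 + 2x$ for $x = \kappa\, t_\mathsf{opt}$. A few lines of Julia (a plain bisection, shown only as a cross-check) confirm the quoted value:
\begin{verbatim}
# root of exp(x) = 1 + 2x on (0, Inf) gives x = kappa * t_opt in the large-N limit
bisect(g, a, b; iters = 60) = begin
    for _ in 1:iters
        m = (a + b) / 2
        g(m) < 0 ? (a = m) : (b = m)
    end
    (a + b) / 2
end
g(x) = exp(x) - 1 - 2x       # g(1) < 0 < g(2)
@show bisect(g, 1.0, 2.0)    # = 1.2564..., i.e. c = kappa * t_opt ~ 1.26
\end{verbatim}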
The optimal measurement time decreases with $N$, and it increases with increasing $\eta$. \begin{figure*} \caption{The maximum over time of the quantity $\mathcal{Q}/t$ for the unconditional QFI, the effective QFI for photodetection at different efficiencies $\eta$, and the ultimate QFI, as a function of the number of qubits $N$, for $\omega = \kappa$.} \label{fig:q_opt_pd} \end{figure*} \section{Evaluation of the ultimate QFI} \label{s:UQFI} In this section we show how to derive the analytical formulas for the ultimate QFI $\overline{\mathcal{Q}}_{\mathcal{L}_\omega}$ for both parallel and transverse noise. \subsection{Parallel noise} We start by showing that, for parallel noise, the ultimate QFI is equal to the QFI of the noiseless case: the proof is however more general and is valid whenever the collapse operators $\hat{c}_j$ commute with the free Hamiltonian $\hat{H}_\omega$. The master equation~\eqref{eq:Markov} is obtained by considering the following interaction Hamiltonian between the system and the input modes: $\hat{H}_\mathsf{int} (t) = \sum_{j=1}^N \left( \hat{c}_j \hat{a}^{(j) \dag}_\mathsf{in} (t) + \hat{c}^\dag_j \hat{a}^{(j)}_\mathsf{in} (t) \right)$. We remark that $t$ is merely a parameter that labels which mode interacts with the system at time $t$ and for each $t$ we have a different operator acting on a different Hilbert space. The total time-dependent Hamiltonian for the system and environment is thus $\hat{H}_\mathsf{SE} (t) = \hat{H}_\omega + \hat{H}_\mathsf{int} (t) $ and each input mode interacts with the main system for an infinitesimal time $d t$. However, for the sake of clarity, we will consider a discretization with a finite interaction time $\delta t$, so that the evolution over a total time $T= M \delta t$ involves a finite number $M$ of input modes. We also assume that the state of the input modes is the vacuum $|0 \rangle$ and that the initial state of the system $| \psi_0 \rangle$ is pure. Under these assumptions the joint state of system and environment evolves as \begin{equation} | \psi_\mathsf{SE} (\omega) \rangle = \hat{U}_{t_M} \dots \hat{U}_{t_2} \hat{U}_{t_1} \left( | \psi_0 \rangle \otimes | 0 \rangle^{\otimes M} \right), \end{equation} where $t_j = j \cdot \delta t $ and $\hat{U}_{t_j}= \exp \left[ -i \delta t \left( \hat{H}_{\omega} + \hat{H}_\mathsf{int} (t_j) \right) \right]$. This joint state is connected to the operator $\bar{\varrho}$ appearing in Eq.~\eqref{eq:MolmerGenME}, since its trace yields the overlap between the global states corresponding to two different values of the parameter~\cite{GammelmarkQCRB,Macieszczak2016}, i.e. \begin{equation} \label{eq:overlapQFImolmer} \langle \psi_\mathsf{SE} (\omega_1) | \psi_\mathsf{SE} (\omega_2) \rangle =\hbox{Tr}\left[\bar \varrho \right]. \end{equation} When all the collapse operators $\hat{c}_j$ commute with the free Hamiltonian $\hat{H}_\omega$, $\hat{H}_\mathsf{int}$ commutes as well and we have \begin{equation} \hat{U}_{t_i}= \exp \left[ - i \delta t \hat{H}_\mathsf{int} (t_i) \right] \cdot \exp \left[ -i \delta t \hat{H}_{\omega} \right]. \end{equation} Therefore, in the computation of the overlap~\eqref{eq:overlapQFImolmer} the terms due to the interaction cancel out and we have \begin{equation*} \langle \psi_\mathsf{SE} (\omega_1) | \psi_\mathsf{SE} (\omega_2) \rangle = \langle \psi_0 | \exp\left[ -i T \left( \hat{H}_{\omega_2} - \hat{H}_{\omega_1} \right) \right] | \psi_0 \rangle, \end{equation*} so that Eq.~\eqref{eq:ultimQFI} gives the QFI of the unitary case.
In particular, this is true for parallel noise, i.e., for $\hat{c}_j=\sqrt{\kappa/2} \sigma_z^{(j)}$, and, for any pure initial state of the system $|\psi_0\rangle$, we have: \begin{align} \overline{\mathcal{Q}}_{\mathcal{L}_{\omega}}^{\mathsf{\parallel}} &= \mathcal{Q} \left[ e^{-i \hat{H}_\omega t} | \psi_0\rangle \!\langle \psi_0 | e^{i \hat{H}_\omega t}\right] \\ &= 4 t^2 \left[ \langle \psi_0 | \left( \partial_\omega \hat{H}_\omega \right)^2 | \psi_0\rangle - \langle \psi_0 | \left( \partial_\omega \hat{H}_\omega \right) | \psi_0\rangle^2 \right]\,. \notag \end{align} \subsection{Transverse noise} When the collapse operators do not commute with the generator, as in the case of frequency estimation with transverse noise, we can simplify the computation by using the assumption of identical and independent noise acting on each qubit. Since Eq.~\eqref{eq:MolmerGenME} is linear in $\bar{\varrho}$ and the coefficients are time-independent, we can still write the solution as a linear map, formally $\mathcal{\tilde{E}}_{\omega_1, \omega_2}(t) = \exp \left( t \mathcal{\tilde{L}}_{\omega_1,\omega_2} \right) $. This map is only guaranteed to be linear and in general it is not even positive. Since the map acts independently on every qubit, we can still write the global action on the $N$-qubit state as the tensor product $\mathcal{\tilde{E}}^N_{\omega_1,\omega_2} (t) = \mathcal{\tilde{E}}_{\omega_1,\omega_2}(t)^{\otimes N}$, where $\mathcal{\tilde{E}}_{\omega_1,\omega_2}(t)$ is the single-qubit solution. The ultimate bound is thus obtained as \begin{equation} \label{eq:MolmerFromChannels} \overline{\mathcal{Q}}_{\mathcal{L}_{\omega}} \left[ \ket{\psi_0} \right] = 4 \left. \partial_{\omega_1} \partial_{\omega_2} \log \hbox{Tr} [ \mathcal{\tilde{E}}^N_{\omega_1,\omega_2} (t) \, \varrho_0 ] \right|_{\omega_1 = \omega_2 = \omega}, \end{equation} where $\varrho_0$ is the initial pure state. Given our choice $\varrho_0 = |\psi_\mathsf{GHZ}\rangle\!\langle \psi_\mathsf{GHZ}| $, the computation can be greatly simplified. We find that \begin{align} &\mathcal{\tilde{E}}^N_{\omega_1,\omega_2} (t) \, |\psi_\mathsf{GHZ}\rangle\!\langle \psi_\mathsf{GHZ}|= \\ \notag &= \frac{1}{2} \Biggl[ \left( \mathcal{\tilde{E}}_{\omega_1,\omega_2} (t) \, |0 \rangle \! \langle 0 | \right)^{\otimes N} + \left( \mathcal{\tilde{E}}_{\omega_1,\omega_2} (t) \, |1 \rangle \! \langle 1 | \right)^{\otimes N} \\ \notag & \quad \; \, + \left( \mathcal{\tilde{E}}_{\omega_1,\omega_2} (t) \, |0 \rangle \! \langle 1 | \right)^{\otimes N} + \left( \mathcal{\tilde{E}}_{\omega_1,\omega_2} (t) \, |1 \rangle \! \langle 0 | \right)^{\otimes N} \Biggr]. \end{align} For transverse noise and for a single qubit, the equation to be solved in order to compute $\overline{\mathcal{Q}}$ is the following \begin{equation} \begin{split} \frac{d \bar{\varrho}}{d t} &= \mathcal{ \tilde{L} }^{\bot}_{\omega_1,\omega_2} \left[ \bar{\varrho} \right] =\\ & = -\frac{i}{2} \left( \omega_1 \sigma_z \bar{\varrho} - \omega_2 \bar{\varrho} \sigma_z \right) + \frac{\kappa}{2} \left( \sigma_x \bar{\varrho} \sigma_x - \bar{\varrho} \right). \end{split} \end{equation} The solution is obtained in the same way as for canonical master equations: by choosing a basis of operators and using a matrix representation of superoperators~\cite{Andersson2007} (see also the Supplementary Material of~\cite{GammelmarkQCRB}).
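Before writing down the explicit solution, we note that this superoperator route is easy to cross-check numerically. The following Julia sketch (an independent check, shown only for illustration) builds the vectorized generator $\mathcal{\tilde{L}}^{\bot}_{\omega_1,\omega_2}$ through the column-stacking identity $\mathrm{vec}(A \varrho B) = (B^{T} \otimes A)\, \mathrm{vec}(\varrho)$, exponentiates it, and evaluates Eq.~\eqref{eq:MolmerFromChannels} by finite differences; the result reproduces Eq.~\eqref{eq:QFImolmerGHZfinal}.
\begin{verbatim}
using LinearAlgebra

sx = ComplexF64[0 1; 1 0];  sz = ComplexF64[1 0; 0 -1]
id2 = Matrix{ComplexF64}(I, 2, 2);  id4 = Matrix{ComplexF64}(I, 4, 4)

# vectorized generator of
#   d rhobar/dt = -i/2 (w1 sz rhobar - w2 rhobar sz) + kappa/2 (sx rhobar sx - rhobar)
Lt(w1, w2, kappa) = -0.5im*w1*kron(id2, sz) + 0.5im*w2*kron(sz, id2) +
                    0.5*kappa*(kron(sx, sx) - id4)

# log Tr[ E^{(x)N}_{w1,w2}(t) |GHZ><GHZ| ], exploiting the product structure
function logtr(w1, w2, kappa, t, N)
    E = exp(t * Lt(w1, w2, kappa))
    trE(r) = tr(reshape(E * vec(r), 2, 2))
    kets = (ComplexF64[1, 0], ComplexF64[0, 1])
    return log(sum(trE(a * b')^N for a in kets, b in kets) / 2)   # four |a><b| terms
end

# Qbar = 4 * d^2/(dw1 dw2) log Tr[...] at w1 = w2 = w; the mixed central
# difference carries a factor 1/(4h^2), which cancels the prefactor 4.
function Qbar(w, kappa, t, N; h = 1e-3)
    f(a, b) = real(logtr(a, b, kappa, t, N))
    return (f(w+h, w+h) - f(w+h, w-h) - f(w-h, w+h) + f(w-h, w-h)) / h^2
end

Qexact(kappa, t, N) =                       # Eq. (QFImolmerGHZfinal)
    (N^2*(1 - exp(-kappa*t))^2 + N*(2*kappa*t + 1 - (2 - exp(-kappa*t))^2)) / kappa^2

w, kappa, t, N = 1.0, 0.5, 2.0, 6
@show Qbar(w, kappa, t, N)     # should agree with the analytic expression
@show Qexact(kappa, t, N)
\end{verbatim}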
By making use of the normalized Pauli operators $\tilde{\sigma}_i = \sigma_i / \sqrt{2}$ (where $\sigma_0 = \mathbbm{1}$), so that $\hbox{Tr} \left[ \tilde{\sigma}_i \tilde{\sigma}_j \right] = \delta_{ij} $, we can find the matrix associated to the single qubit map $\mathcal{\tilde{E}}_{\omega_1,\omega_2} (t)$, obtained by matrix exponentiation. We can write the generalized density operator $\bar{\varrho}$ in Bloch form as $\bar{\varrho}= \frac{1}{\sqrt{2}} \left( a_0 \tilde{\sigma}_0 + \vec{a} \cdot \vec{\tilde{\sigma}} \right) $ such that its trace is simply $\hbox{Tr} \left[ \bar{\varrho} \right] = a_0$. In this notation, the initial states $| 0 \rangle \! \langle 0 |$ and $ | 1 \rangle \! \langle 1 |$ correspond to the vectors $e_{00} = \frac{1}{\sqrt{2}} (1,0,0,1)^\mathsf{T}$ and $e_{11} = \frac{1}{\sqrt{2}} (1,0,0,-1)^\mathsf{T}$, while the off-diagonal elements are \(e_{01/10}= \frac{1}{\sqrt{2}} (0,1,\pm i,0)^\mathsf{T} \). Since we are only interested in the coefficient $a_0$, we only need the first row of the matrix representation of $\mathcal{\tilde{E}}^{\bot}_{\omega_1,\omega_2}(t)$, which turns out to be \begin{widetext} \begin{equation} \resizebox{.9\textwidth}{!}{$ \left[ \mathcal{\tilde{E}}^{\bot}_{\omega_2,\omega_2}(t) \right]_{0,*}= e^{-\frac{\kappa t}{2}} \left( \cosh \left( \frac{t}{2} \sqrt{ \kappa^2 - \left(\omega _1-\omega _2\right)^2 }\right) + \frac{\kappa \sinh \left( \frac{t}{2} \sqrt{\kappa^2 - \left(\omega _1-\omega _2\right)^2 }\right)}{2 \sqrt{ \kappa^2 - \left(\omega _1-\omega _2\right)^2 } } , 0 , 0 , \frac{ i \left(\omega _2-\omega _1\right) \sinh \left(\frac{t}{2} \sqrt{\kappa^2 - \left(\omega _1-\omega _2\right)^2 } \right) }{\sqrt{\kappa^2 - \left(\omega _1-\omega _2\right)^2 } } \right)$} . \end{equation} \end{widetext} We can see that the off-diagonal terms are kept off-diagonal by the map, therefore they do not contribute to the trace and we can further simplify the calculation as \begin{align} \label{eq:QFImolmerGHZpre} &\hbox{Tr} \left[ \mathcal{\tilde{E}}^{\bot N}_{\omega_1,\omega_2} (t) |\psi_\mathsf{GHZ}\rangle\!\langle \psi_\mathsf{GHZ}| \right] = \\ & = \frac{1}{2} \left( \hbox{Tr} \left[ \mathcal{\tilde{E}}^{\bot}_{\omega_1,\omega_2} (t) |0 \rangle \! \langle 0 | \right]^{ N} + \hbox{Tr} \left[ \mathcal{\tilde{E}}^{\bot}_{\omega_1,\omega_2} (t) |1 \rangle \! \langle 1 | \right]^{N} \right) \notag \\ & = \frac{1}{\sqrt{2}} \Biggl\{ \left( [\mathcal{\tilde{E}}^{\bot}_{\omega_1,\omega_2}(t)]_{0,*} \cdot e_{00} \right)^N + \left( [ \mathcal{\tilde{E}}^{\bot}_{\omega_1,\omega_2}(t)]_{0,*} \cdot e_{11} \right)^N \Biggr\}. \notag \end{align} We can thus plug this result into~\eqref{eq:MolmerFromChannels} and finally obtain Eq.~\eqref{eq:QFImolmerGHZfinal}. \section{Numerical algorithm for the calculation of the effective QFI for time-continuous strategies} \label{s:algorithm} We can now describe the numerical algorithm we have implemented to calculate the effective QFI $\widetilde{\mathcal{Q}}_\mathsf{unr,\eta}$. The numerical results presented in this manuscript are obtained with the code available online at~\cite{ContinuousMeasurementFI}, written in the Julia language~\cite{julia}. We will focus on the case of homodyne detection and then we will discuss the small changes needed for photodetection. We first review the existing method to calculate one of the two key quantities in Eq.~\eqref{eq:effQFI}, namely the classical Fisher information $\mathcal{F}[p_\mathsf{traj}]$. 
In \cite{GammelmarkCRB} it is shown that it can be calculated as \begin{align} \mathcal{F}[p_\mathsf{traj}] = \mathbbm{E}\left[ \hbox{Tr}[\tau]^2 \right] , \label{eq:Fishtau} \end{align} where we have defined the operator \begin{align} \tau= \frac{ \partial_\omega \tilde{\varrho}^{(c)} }{\hbox{Tr}[\tilde{\varrho}^{(c)}]} \:, \label{eq:tau_def} \end{align} in terms of the unnormalized conditional state $\tilde{\varrho}^{(c)}$ evolving according to the SME \begin{align} d\tilde{\varrho}^{(c)} = {}& - i [\hat{H}_{\omega} , \tilde{\varrho}^{(c)}]\,dt + \sum_j \mathcal{D}[\hat{c}_j] \tilde{\varrho}^{(c)} \,dt \notag \\ & + \sum_j \sqrt{\eta_j} \left( \hat{c}_j \tilde{\varrho}^{(c)} + \tilde{\varrho}^{(c)} \hat{c}_j^\dagger \right) dy_j \,, \label{eq:unormalizedSME} \end{align} and such that $\varrho^{(c)}= \tilde{\varrho}^{(c)}/\hbox{Tr}[\tilde{\varrho}^{(c)}]$. One can then numerically integrate simultaneously the two SMEs, the one for the conditional state $\varrho^{(c)}$ in Eq.~\eqref{eq:homoSME} and the one above for $\tilde{\varrho}^{(c)}$ (and thus for $\tau$), and then evaluate the corresponding FI by averaging over a certain number of trajectories. However, it is computationally very expensive to obtain a guaranteed Hermitian $\varrho^{(c)}$ by using standard methods, such as Euler-Maruyama or Euler-Milstein, and still these methods do not guarantee the positivity of the corresponding density operator. An alternative method to integrate the SME, which is able to circumvent these problems, has been introduced in \cite{Rouchon2014, Rouchon2015}, by exploiting the Kraus operators corresponding to the (weak) measurement performed at each instant of time. After an infinitesimal time $dt$, the evolved state corresponding to~\eqref{eq:homoSME} can in fact be written as \begin{align}\label{eq:rouchon} \varrho^{(c)}_{t+dt} = \frac{ M_\mathbf{dy}^{\vphantom{\dag}}\varrho^{(c)}_t M_\mathbf{dy}^\dag + \sum_j (1-\eta_j) \hat{c}^{\vphantom{\dag}}_j \varrho_t^{(c)} \hat{c}_j^\dag \,dt} { \hbox{Tr}[ M_\mathbf{dy}^{\vphantom{\dag}}\varrho^{(c)}_t M_\mathbf{dy}^\dag +\sum_j (1-\eta_j) \hat{c}^{\vphantom{\dag}}_j \varrho_t^{(c)} \hat{c}_j^\dag \,dt ]} \, , \end{align} where we have made the time dependence of the density operators explicit and where we have defined the Kraus operator \begin{align} M_\mathbf{dy}^{\vphantom{\dag}}= \mathbbm{1} - i \hat{H}_\omega \,dt - \frac{1}{2} \sum_j \hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j \, dt + \sum_j \sqrt{\eta_j} \hat{c}_j \,dy_j \,, \label{eq:Mdyvec} \end{align} with $\mathbf{dy} = \{ dy_j \}$ being a vector of measurement results, corresponding to each output channel, \begin{equation} dy_j = \sqrt{\eta_j}\,\hbox{Tr}[\varrho_t^{(c)} (\hat{c}^{\vphantom{\dag}}_j + \hat{c}_j^\dag)] \,dt + dw_j. \end{equation} Equation \eqref{eq:rouchon} can be used for numerical purposes, where the infinitesimal time $dt$ is replaced by a finite time step $\Delta t$, the Wiener increments $dw_j$ are replaced by Gaussian random variables $\Delta w_j$ centered at zero and with variance equal to $\Delta t$, and where one can also implement the second-order Euler-Milstein corrections.
The (numerical) Kraus operators in this case read \begin{align} M_\mathbf{\Delta y} ={}& \mathbbm{1} - i \hat{H}_{\omega} \, \Delta t - \frac{1}{2} \sum_j \hat{c}_j^\dag \hat{c}^{\vphantom{\dag}}_j \, \Delta t + \sum_j \sqrt{\eta_j}\, \hat{c}_j \, \Delta y_j \notag \\ & + \sum_{j,k} \frac{\eta}{2} \hat{c}_j \hat{c}_k ( \Delta y_j \Delta y_k - \delta_{j,k} \Delta t) \, , \label{eq:numericalKraus} \end{align} with \begin{align} \Delta y_j = \sqrt{\eta_j}\, \hbox{Tr}[ \varrho_t^{(c)} (\hat{c}^\dag_j + \hat{c}_j^{\vphantom{\dag}})] \, \Delta t + \Delta w_j \,, \end{align} denoting the (finite) increments of the measurement records. In the following we will show how to extend this method to obtain an efficient and numerically stable calculation of both $\mathcal{F}[p_\mathsf{traj}]$ and $\mathcal{Q}[\varrho_c]$. We will prove everything in terms of the ``infinitesimal'' Kraus operators, which will have to be replaced by Eq.~\eqref{eq:numericalKraus} when implementing the numerical algorithm. We will start by showing the results for the most general case of SME and inefficient detection; we will then describe the more efficient algorithm that can be implemented in the case of perfect detection ($\eta_j=1$), i.e., when the dynamics can be described by a stochastic Schr\"odinger equation. \subsection{Non-unit efficiency detection (stochastic master equation)} We start by observing that the evolution of the unnormalized conditional state and of its derivative can be written in terms of the Kraus operators (in what follows we will omit the superscript $(c)$ used to denote conditional states): \begin{align} \tilde{\varrho}_{t+dt} ={}& M_\mathbf{dy}^{\vphantom{\dag}}\tilde{\varrho}_t M_\mathbf{dy}^\dag + \sum_j (1-\eta_j) \hat{c}_j^{\vphantom{\dag}} \tilde{\varrho}_t \hat{c}_j^\dag \,dt \:, \notag \\ \partial_\omega \tilde{\varrho}_{t+dt} ={}& M_\mathbf{dy}^{\vphantom{\dag}}(\partial_\omega \tilde{\varrho}_t) M_\mathbf{dy}^\dag + (\partial_\omega M_\mathbf{dy}^{\vphantom{\dag}}) \tilde{\varrho}_t M_\mathbf{dy}^\dag + \notag \\ & + M_\mathbf{dy}^{\vphantom{\dag}}\tilde{\varrho}_t (\partial_\omega M_\mathbf{dy}^\dag ) +\notag \\ & + \sum_j (1-\eta_j) \hat{c}_j^{\vphantom{\dag}} (\partial_\omega \tilde{\varrho}_t) \hat{c}_j^\dag \,dt\:, \label{eq:rho_tilde_kraus} \end{align} where $\partial_\omega M_\mathbf{dy}^{\vphantom{\dag}}= -i (\partial_\omega \hat{H}_\omega) \,dt$. The trace of the unnormalized state reads \begin{align} \hbox{Tr}[\tilde{\varrho}_{t+dt} ] &= \hbox{Tr}[M_\mathbf{dy}^{\vphantom{\dag}}\tilde{\varrho}_t M_\mathbf{dy}^\dag + \sum_j (1-\eta_j) \hat{c}_j^{\vphantom{\dag}} \tilde{\varrho}_t \hat{c}_j^\dag \,dt ] \notag \\ &= \hbox{Tr}[\tilde{\varrho}_t] \hbox{Tr}[M_\mathbf{dy}^{\vphantom{\dag}}\varrho_t M_\mathbf{dy}^\dag + \sum_j (1-\eta_j) \hat{c}_j^{\vphantom{\dag}} {\varrho}_t \hat{c}_j^\dag \,dt ]\,, \end{align} where we have used the relation $\varrho_t = \tilde{\varrho}_t / \hbox{Tr}[\tilde{\varrho}_t]$. 
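For illustration, a minimal Julia sketch of one time step of this procedure is reported below for a single qubit with transverse noise, assuming $\hat{H}_\omega = \omega \sigma_z/2$ and $\hat{c} = \sqrt{\kappa/2}\,\sigma_x$ (these explicit choices, the parameter values and the function names are ours). It propagates the physical state through Eq.~\eqref{eq:rouchon} and, anticipating the recursion for $\tau$ and the expression for $\mathcal{Q}[\varrho_t]$ derived just below, co-propagates $\tau$ and evaluates the figures of merit; for brevity only the first-order Kraus operator of Eq.~\eqref{eq:Mdyvec} is used, without the Euler-Milstein correction of Eq.~\eqref{eq:numericalKraus}.
\begin{verbatim}
using LinearAlgebra, Random

sx = ComplexF64[0 1; 1 0];  sz = ComplexF64[1 0; 0 -1]

# one homodyne step for (rho_t, tau_t); first-order Kraus operator, cf. Eq. (Mdyvec)
function kraus_step(rho, tau, omega, kappa, eta, dt)
    H  = omega/2 * sz
    c  = sqrt(kappa/2) * sx
    dy = sqrt(eta) * real(tr(rho * (c + c'))) * dt + sqrt(dt) * randn()
    M  = I - im*H*dt - 0.5*(c'*c)*dt + sqrt(eta)*c*dy
    dM = -im * (sz/2) * dt                      # dM/domega = -i (dH/domega) dt
    num_rho = M*rho*M' + (1 - eta)*(c*rho*c')*dt
    num_tau = dM*rho*M' + M*tau*M' + M*rho*dM' + (1 - eta)*(c*tau*c')*dt
    nrm = real(tr(num_rho))
    return num_rho/nrm, num_tau/nrm
end

# Tr[tau] enters F[p_traj] = E[ Tr[tau]^2 ];  drho = tau - Tr[tau] rho enters Q[rho]
function figures_of_merit(rho, tau)
    drho = tau - tr(tau)*rho
    lam, V = eigen(Hermitian((rho + rho')/2))
    Q = 0.0
    for s in eachindex(lam), r in eachindex(lam)
        if lam[s] + lam[r] > 1e-12
            Q += 2 * abs2(V[:, s]' * drho * V[:, r]) / (lam[s] + lam[r])
        end
    end
    return real(tr(tau)), Q
end

function trajectory(nsteps; omega = 1.0, kappa = 0.5, eta = 0.8, dt = 1e-3)
    rho = ComplexF64[0.5 0.5; 0.5 0.5]          # |+><+| as initial state
    tau = zeros(ComplexF64, 2, 2)               # d_omega rho_0 = 0
    for _ in 1:nsteps
        rho, tau = kraus_step(rho, tau, omega, kappa, eta, dt)
    end
    return rho, tau
end

Random.seed!(1)
rho, tau = trajectory(2000)
@show figures_of_merit(rho, tau)   # (Tr[tau_t], Q[rho_t]) for this single trajectory
\end{verbatim}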
We can now use these formulas to obtain the evolution for the operator $\tau_{t+dt}$ just in terms of the operators $\varrho_t$ and $\tau_t$ at the previous time step: \begin{widetext} \begin{align} \tau_{t+dt} ={} & \frac{1}{\hbox{Tr}[\tilde{\varrho}_{t+dt}]} \left[(\partial_\omega M_\mathbf{dy}^{\vphantom{\dag}} ) \tilde{\varrho}_t M_\mathbf{dy}^\dag + M_\mathbf{dy}^{\vphantom{\dag}}(\partial_\omega \tilde{\varrho}_t) M_\mathbf{dy}^\dag + M_\mathbf{dy}^{\vphantom{\dag}} \tilde{\varrho}_t (\partial_\omega M_\mathbf{dy}^\dag) + \sum_j (1-\eta_j) \hat{c}^{\vphantom{\dag}}_j (\partial_\omega \tilde{\varrho}_t ) \hat{c}_j^\dag \,dt \right] \notag \\ ={} & \frac{(\partial_\omega M_\mathbf{dy}^{\vphantom{\dag}} ) \varrho_t M_\mathbf{dy}^\dag + M_\mathbf{dy}^{\vphantom{\dag}}\tau_t M_\mathbf{dy}^\dag + M_\mathbf{dy}^{\vphantom{\dag}}\varrho_t (\partial_\omega M_\mathbf{dy}^\dag) + \sum_j (1-\eta_j) \hat{c}^{\vphantom{\dag}}_j \tau_t \hat{c}_j^\dag \,dt} {\hbox{Tr}\left[M_\mathbf{dy}^{\vphantom{\dag}}\varrho_t M_\mathbf{dy}^\dag + \sum_j (1-\eta_j) \hat{c}^{\vphantom{\dag}}_j {\varrho}_t \hat{c}_j^\dag \,dt \right]} \,. \label{eq:tau} \end{align} \end{widetext} One can thus evaluate the trace of this operator at each time $t$, and evaluate accordingly the classical Fisher information $\mathcal{F}[p_\mathsf{traj}]$ as in Eq.~\eqref{eq:Fishtau}. Notice that the evolution for the derivative operator $\partial_\omega \varrho_t$ can be now written in terms of the renormalized operators $\varrho_t$ and $\tau_t$: \begin{align} \partial_\omega \varrho_t &= \frac{\partial_\omega \tilde{\varrho}_t }{\hbox{Tr}[\tilde{\varrho}_t]} - \frac{\hbox{Tr}[\partial_\omega \tilde{\varrho}_t]}{\hbox{Tr}[\tilde{\varrho}_t]^2} \tilde{\varrho}_t = \tau_t - \hbox{Tr}[\tau_t] \varrho_t \,. \label{eq:drho} \end{align} The QFI $\mathcal{Q}[\varrho_t]$ can then be evaluated at each time $t$ by using the formula \cite{MatteoIJQI}: \begin{align} \mathcal{Q}[\varrho_t] = 2\sum_{\lambda_s + \lambda_t \neq 0} \frac{|\langle \psi_s | \partial_\omega \varrho_t | \psi_t \rangle |^2}{\lambda_s + \lambda_t} \,, \end{align} upon writing $\varrho_t$ in its eigenbasis, i.e., $\varrho_t =\sum_s \lambda_s |\psi_s\rangle\!\langle\psi_s| \,.$ We would like to underline the key features of our algorithm. The relevant figures of merit could naively be derived from the evolution of the unnormalized conditional state $\tilde\varrho_t$, described in Eq.~\eqref{eq:rho_tilde_kraus}. However $\hbox{Tr}[\tilde\varrho_t]$ becomes very small during the evolution, leading to numerical instabilities in the evaluation of both $\mathcal{F}[p_\mathsf{traj}]$ and $\mathcal{Q}[\varrho_t]$. Thanks to Eqs. \eqref{eq:tau} and \eqref{eq:drho}, we are able to express the above quantities only in terms of the numerically stable operators $\varrho_t$ and $\tau_t$. Besides, the formulation in terms of Kraus operators, following \cite{Rouchon2014,Rouchon2015}, ensures that the density operator remains positive, as opposed to standard numerical integration of the SME. \subsection{Unit efficiency detection (stochastic Schr\"odinger equation)} The above calculations are greatly simplified when the dynamics starts with a pure initial state and the efficiency parameters are equal to one, $\eta_j=1$. In fact the quantum conditional state $|\psi_t\rangle$ remains pure during the whole evolution and the dynamics is described by a stochastic Schr\"odinger equation. 
We can thus work with state vectors, instead of density matrices, with a consequent reduction of complexity of the numerical simulation, which allows us to reach higher values of $N$ with a given amount of memory. In terms of Kraus operators, the unnormalized and normalized conditional states are obtained respectively as: \begin{align} |\widetilde\psi_{t+dt} \rangle &= M_\mathbf{dy}^{\vphantom{\dag}}|\widetilde\psi_t \rangle \,, \\ |\psi_{t+dt} \rangle &= \frac{ M_\mathbf{dy}^{\vphantom{\dag}}|\psi_t \rangle }{\sqrt{\langle \psi_t | M_\mathbf{dy}^\dag M^{{\vphantom{\dag}}}_\mathbf{dy} | \psi_t\rangle }} = \frac{ M_\mathbf{dy}^{\vphantom{\dag}}|\widetilde\psi_t \rangle }{\sqrt{\langle \widetilde\psi_t | M_\mathbf{dy}^\dag M^{{\vphantom{\dag}}}_\mathbf{dy} | \widetilde\psi_t\rangle }} \notag \,. \end{align} The operator $\tau_t$ in this case can be written as \begin{align} \tau_t &= \frac{\partial_\omega ( |\widetilde\psi_t \rangle \! \langle \widetilde\psi_t | )}{ \langle \widetilde\psi_t |\widetilde\psi_t \rangle} = \frac{| \partial_\omega \widetilde\psi_t \rangle \! \langle \widetilde\psi_t | +| \widetilde\psi_t \rangle \! \langle \partial_\omega \widetilde\psi_t | }{ \langle \widetilde\psi_t |\widetilde\psi_t \rangle} \,, \end{align} and its trace is equal to \begin{align} \hbox{Tr}[\tau_t] &= \frac{ \langle \widetilde\psi_t | \partial_\omega \widetilde\psi_t\rangle + \hbox{h.c.} } {\langle \widetilde\psi_t | \widetilde\psi_t \rangle} = \langle \psi_t | \phi_t\rangle + \langle \phi_t | \psi_t \rangle \,. \label{eq:tracephi} \end{align} In the last equation we have introduced the vector \begin{align} |\phi_t \rangle = \frac{ |\partial_\omega \widetilde \psi_t \rangle} { \sqrt{\langle \widetilde\psi_t | \widetilde\psi_t \rangle}} \,. \end{align} At time $t+dt$, the vector $|\phi_{t+dt}\rangle$ can be obtained as \begin{align} |\phi_{t+dt} \rangle &= \frac{ (\partial_\omega M_\mathbf{dy}^{\vphantom{\dag}}) |\widetilde\psi_t \rangle + M_\mathbf{dy}^{\vphantom{\dag}}|\partial_\omega \widetilde\psi_t \rangle }{\sqrt{\langle \widetilde\psi_t | M_\mathbf{dy}^\dag M^{{\vphantom{\dag}}}_\mathbf{dy} | \widetilde\psi_t \rangle}} \notag \\ &= \frac{ (\partial_\omega M_\mathbf{dy}^{\vphantom{\dag}}) |\psi_t \rangle + M_\mathbf{dy}^{\vphantom{\dag}}|\phi_t\rangle }{\sqrt{\langle \psi_t | M_\mathbf{dy}^\dag M^{{\vphantom{\dag}}}_\mathbf{dy}| \psi_t \rangle}}\,, \label{eq:phit} \end{align} where we have exploited the identity \begin{equation} \langle \widetilde \psi_t | M_\mathbf{dy}^\dag M^{{\vphantom{\dag}}}_\mathbf{dy} | \widetilde \psi_t \rangle = \langle \widetilde\psi_t | \widetilde\psi_t \rangle \! \langle \psi_t |M_\mathbf{dy}^\dag M^{{\vphantom{\dag}}}_\mathbf{dy} | \psi_t \rangle \,. \end{equation} We notice that as in Eq.~\eqref{eq:tau}, the evolution equation for the vector $|\phi_{t+dt}\rangle$ depends only on the vectors $|\psi_t \rangle$ and $|\phi_t\rangle$ and not on the unnormalized state $|\widetilde\psi_t \rangle$, and one can readily evaluate the classical Fisher information in terms of these two vectors via Eqs.~\eqref{eq:tracephi} and~\eqref{eq:Fishtau}. 
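A corresponding sketch for the unit-efficiency case, again for a single qubit with transverse noise and homodyne detection (with the same assumed $\hat{H}_\omega$ and $\hat{c}$ as in the previous sketch), co-evolves $|\psi_t\rangle$ and $|\phi_t\rangle$ and evaluates $\hbox{Tr}[\tau_t]$; the last lines also compute $|\partial_\omega \psi_t\rangle$ and the pure-state QFI discussed in the next paragraph.
\begin{verbatim}
using LinearAlgebra, Random

sx = ComplexF64[0 1; 1 0];  sz = ComplexF64[1 0; 0 -1]

# one step of the stochastic Schroedinger equation (eta = 1) for (|psi>, |phi>)
function sse_step(psi, phi, omega, kappa, dt)
    H  = omega/2 * sz
    c  = sqrt(kappa/2) * sx
    dy = real(dot(psi, (c + c') * psi)) * dt + sqrt(dt) * randn()
    M  = I - im*H*dt - 0.5*(c'*c)*dt + c*dy
    dM = -im * (sz/2) * dt
    nrm = sqrt(real(dot(M*psi, M*psi)))         # sqrt(<psi|M'M|psi>)
    return (M*psi)/nrm, (dM*psi + M*phi)/nrm    # |psi_{t+dt}>, |phi_{t+dt}>
end

function sse_trajectory(nsteps; omega = 1.0, kappa = 0.5, dt = 1e-3)
    psi = ComplexF64[1, 1]/sqrt(2)              # |+>
    phi = zeros(ComplexF64, 2)                  # |phi_0> = d_omega |psi_0> = 0
    for _ in 1:nsteps
        psi, phi = sse_step(psi, phi, omega, kappa, dt)
    end
    return psi, phi
end

Random.seed!(2)
psi, phi = sse_trajectory(2000)
trtau = 2 * real(dot(psi, phi))                 # Tr[tau_t], cf. Eq. (tracephi)
dpsi  = phi - real(dot(psi, phi)) * psi         # |d_omega psi_t>
# pure-state QFI; <psi|dpsi> is purely imaginary, so this coincides with the
# expression 4 [ <dpsi|dpsi> + (<dpsi|psi>)^2 ] quoted in the next paragraph
Q = 4 * (real(dot(dpsi, dpsi)) - abs2(dot(psi, dpsi)))
@show trtau, Q
\end{verbatim}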
Regarding the QFI of the conditional state, we first observe that \begin{align} |\partial_\omega \psi_t\rangle &= \frac{ |\partial_\omega \widetilde{\psi}_t\rangle}{\sqrt{\langle \widetilde\psi_t | \widetilde\psi_t \rangle }} - \frac{\left( \langle \widetilde \psi_t |\partial_\omega \widetilde\psi_t\rangle + \langle \partial_\omega \widetilde\psi_t | \widetilde\psi_t\rangle \right) |\widetilde \psi _t\rangle}{2 \langle \widetilde \psi_t | \widetilde\psi_t\rangle^{3/2}} \notag \\ &= |\phi_t\rangle - \frac{\langle \psi_t | \phi_t\rangle + \langle \phi_t | \psi_t\rangle}{2} |\psi_t\rangle \,. \end{align} As we are dealing with pure states, the QFI of the conditional state can be simply evaluated as \cite{MatteoIJQI} \begin{align} \mathcal{Q}[|\psi_t\rangle ] = 4 \left[ \langle \partial_\omega \psi_t | \partial_\omega \psi_t\rangle + (\langle \partial_\omega \psi_t | \psi_t\rangle)^2 \right] \,. \end{align} The algorithms above have been derived for the case of time-continuous homodyne detection. Nonetheless one can easily extend them to time-continuous photodetection as follows. The Kraus operators $M_\mathbf{dy}^{\vphantom{\dag}}$ have to be replaced at each time step with one of the following Kraus operators corresponding to ``no detector click'' and ``detector click'' (in one of the output channels) events, respectively \begin{align} \label{eq:M0pd} M_0 &= \mathbbm{1} - i \hat{H}_\omega \, dt - \frac{1}{2} \sum_j \hat{c}_j^{\dag} \hat{c}^{{\vphantom{\dag}}}_j \,dt \, ,\\ \label{eq:M1pd} M_1^{(j)} &= \sqrt{\eta_j\, dt} \, \hat{c}_j \,. \end{align} At each time step, each Kraus operator $M_1^{(j)}$ has to be applied with probability $p_1^{(j)} = \eta_j \hbox{Tr}[\varrho_t \hat{c}_j^\dag \hat{c}^{{\vphantom{\dag}}}_j ]\,dt$ and, correspondingly, the Kraus operator $M_0$ has to be applied with probability $p_0 = 1 - \sum_j p_1^{(j)}$ \cite{WisemanMilburn}. We finally remark that in the case of frequency estimation with parallel noise and initial GHZ state, the numerical calculations are greatly simplified thanks to the symmetry of the dynamics. In fact the whole (both conditional and unconditional) evolution is equivalently described in the two-dimensional Hilbert space spanned by the vectors $|\bar 0 \rangle = |0\rangle^{\otimes N}$ and $|\bar 1 \rangle = |1 \rangle^{\otimes N}$. As regards continuous photodetection, the Kraus operators \eqref{eq:M0pd} and \eqref{eq:M1pd} are replaced by the effective Kraus operators \begin{align} \overline{M}_0 &= \mathbbm{1}_2 - i N \omega \overline{\sigma}_z \, dt - (N \kappa/4) \mathbbm{1}_2 \,dt \,,\\ \overline{M}_1 &= \sqrt{(\eta \kappa/2) \, dt} \; N \overline{\sigma}_z \,, \end{align} where $\mathbbm{1}_2$ and $\overline{\sigma}_z$ are respectively the identity operator and the Pauli-z matrix in the Hilbert space spanned by $|\bar{0}\rangle$ and $|\bar{1}\rangle$, and where the operators $\overline{M}_1$ and $\overline{M}_0$ have to be applied respectively with probabilities $\bar{p}_1 = N (\eta \kappa/2) \, dt$ (i.e., the total probability of observing a photon in one of the $N$ output channels) and $\bar{p}_0 = 1 - \bar{p}_1$. More practically, the results for generic $N$ qubits can be readily obtained by exploiting the results obtained for one qubit, and rescaling the time as $t \rightarrow Nt$. In this case it is also easy to show that the efficiency parameter $\eta$ corresponds to the product of the actual efficiency of the photodetectors (which we assume to be equal for all qubits) and the fraction of qubits that are effectively monitored.
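As an illustration of this effective two-level description, a single photodetection trajectory can be sketched in a few lines of Julia (the parameter values are arbitrary): since $\hat{c}_j^\dag \hat{c}_j \propto \mathbbm{1}$, the click probability per step is state-independent and only the total number of detected photons $m$ matters, which fixes the conditional relative phase $N\omega t + m\pi$ discussed in the subsection on parallel noise.
\begin{verbatim}
# effective two-level photodetection trajectory for the GHZ state with parallel
# noise: c_j' c_j = (kappa/2) Id, so clicks are state independent and only their
# total number m matters (small-dt Bernoulli sampling of the Poisson record)
N, omega, kappa, eta = 6, 1.0, 0.2, 1.0
dt, T = 1e-4, 2.0
steps = round(Int, T/dt)
p1 = N * (eta*kappa/2) * dt                  # click probability per time step
m = count(_ -> rand() < p1, 1:steps)         # total number of detected photons
phi = N*omega*T + m*pi                       # conditional relative phase
@show m, mod2pi(phi)
\end{verbatim}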
\section{Conclusions and remarks} \label{s:conclusion} We have discussed for the first time the usefulness of time-continuous monitoring in the context of noisy quantum metrology. We have proven that the desired Heisenberg scaling can in fact be restored by exploiting these schemes and we have obtained several conceptually and practically relevant results.\\ We have shown the fundamental difference between parallel and transverse noise with regards to the ultimate limit achievable when the environmental degrees of freedom can be measured. In the first case, i.e., when the Hamiltonian and the noise generator commute, having access to every degree of freedom of the environment allows us to restore the unitary (noiseless) Heisenberg limit. In the latter case, i.e., in the presence of transverse noise, the ultimate QFI still presents a Heisenberg (quadratic) scaling in the number of qubits $N$; however, it is always lower than the unitary QFI, thus showing that some information is irremediably lost due to the interaction with the environment. These findings complement the results obtained for frequency estimation with open quantum systems~\cite{Haase2018}, where the geometry of the noise is indeed crucial in determining the different achievable scalings. The second non-trivial and conceptually relevant result is that estimation strategies based on (sequential) time-continuous monitoring and final strong measurements on the system are optimal, i.e., the corresponding effective QFIs $\qunr{\eta_j}$ are equal to the ultimate QFIs $\overline{\mathcal{Q}}_\mathcal{L_\omega}$. This is true for both parallel and transverse noise, showing that no complicated estimation strategies based on ``entangled measurements'' on the environment and system are needed to achieve the ultimate limit on the precision. This result, which was not suggested by any mathematical or physical intuition, is particularly important from a practical point of view, as it greatly relaxes the assumptions and demands that are requested to achieve the ultimate limits. We have also discussed in detail the case of finite efficiency monitoring. In the presence of parallel noise we have shown that the estimation precision is subject to the standard quantum limit, with a behaviour ruled by a reduced effective dephasing: one can still obtain a constant enhancement, which can be made arbitrarily high by increasing the monitoring efficiency. In the presence of transverse noise, we have observed the expected enhancement with respect to the optimized (super-classical) unconditional results \cite{Chaves2013}. However, due to the computational complexity of the algorithm needed to obtain the effective QFI, we could not infer any conclusion regarding the scaling with the number of probes $N$. It is important to remark that the use of continuous measurements and feedback techniques has long been recognized as a useful tool for fighting decoherence and to prepare non-classical quantum states~\cite{WisemanMilburn,Ahn2003,Ahn2004,Akerman2012,Ganesan2007,Szigeti2014,Tempura,Levante}. Our metrological scheme follows this line of thought, but with the great advantage that it is based on continuous monitoring only, without error correction steps (or feedback), differing it from other recent approaches in noisy quantum metrology~\cite{Plenio2016,Gefen2016}. 
Moreover, while most of the literature on parameter estimation with continuous measurements focuses on the information gained from the continuous signal, the crucial part that makes our protocol able to recover Heisenberg scaling is the final strong measurement on the conditional state. A great deal of effort has also been devoted to studying the asymptotic properties of estimation via repeated/continuous measurements~\cite{Burgarth2015a,Catana2015,Guta2016}. In this approach one is usually interested in performing a single run of the experiment and thus observes the system for a long time. Our point of view is radically different and rooted in previous works on quantum frequency estimation. As a matter of fact, we are interested in the initial part of the dynamics, long before reaching the steady state. For this reason the asymptotic regime of the statistical model is reached by running the experiment many times, instead of observing the system for a long time. In order to obtain these results we have developed a stable algorithm for the evaluation of the effective quantum Fisher information that quantifies the performance of the proposed strategies. The flexibility of this algorithm and the originality of our protocols pave the way for a series of future investigations that will exploit techniques and ideas that so far have only been applied to standard noisy metrology schemes. For example, one can investigate the performance of these protocols with optimized~\cite{Frwis2014} and noisy quantum probes~\cite{Gorecka2017}, or by considering entangled ancillary systems~\cite{Demkowicz-Dobrzanski2014,Huang2018,Sbroscia2017}. Furthermore, one can also study in detail the role played by quantum coherence~\cite{Giorda2016} and the possibility of considering unravellings of non-Markovian master equations~\cite{Diosi2014}. \begin{acknowledgments} The authors acknowledge useful discussions with A. Del Campo, M. Paris, P. Rouchon, A. Smirne and T. Tufarelli. MACR was supported by the Horizon 2020 EU collaborative project QuProCS (Grant Agreement No. 641277) and by the Academy of Finland Centre of Excellence program (project 312058). MGG acknowledges support from Marie Sk{\l}odowska-Curie Action H2020-MSCA-IF-2015 (project ConAQuMe, grant no. 701154) and from a Rita Levi-Montalcini fellowship of MIUR. \\ \noindent \emph{FA and MACR have contributed equally to this work.} \end{acknowledgments} \nocite{apsrev41Control} \end{document}
\begin{document} \title{On commutator relations in $2$-spherical RGD-systems} \begin{abstract} In this paper we investigate the commutator relations for prenilpotent roots which are nested. These commutator relations are trivial in many cases. \end{abstract} \section{Introduction} In \cite{Ti92} Tits introduced RGD-systems in order to describe groups of Kac-Moody type. Let $(W, S)$ be a Coxeter system, let $\Phi$ be the associated set of roots (viewed as a set of half-spaces) and let $\Pi$ be a basis (i.e. a set of simple roots) of $\Phi$. An RGD-system of type $(W, S)$ is a pair $(G, (U_{\alpha})_{\alpha \in \Phi})$ consisting of a group $G$ and a family of subgroups $U_{\alpha}$ (called \textit{root groups}) indexed by the set of roots $\Phi$ satisfying a few axioms (for the precise definition see Section \ref{sec:rgd}). In this paper we are interested in \textit{$2$-spherical} RGD-systems (i.e. the order of $st$ in $W$ is finite for any $s, t \in S$). For $\alpha \neq \beta \in \Pi$ we define $X_{\alpha} := \langle U_{\alpha} \cup U_{-\alpha} \rangle$ and $X_{\alpha, \beta} := \langle X_{\alpha} \cup X_{\beta} \rangle$. A $2$-spherical RGD-system satisfies Condition $\costar$ if the following holds: \begin{equation} X_{\alpha, \beta} / Z(X_{\alpha, \beta}) \not\cong B_2(2), G_2(2), G_2(3), {}^2 F_4(2) \text{ for all pairs } \{ \alpha, \beta \} \subseteq \Pi. \tag{Co$^{\star}$} \end{equation} This condition was introduced by M\"uhlherr and Ronan in \cite{MR95}. In loc.cit. they give a formulation in terms of buildings. One axiom of an RGD-system provides a commutator relation between $U_{\alpha}$ and $U_{\beta}$, where $\{ \alpha, \beta \} \subseteq \Phi$ is a pair of \textit{prenilpotent roots} (i.e. a pair of distinct roots where both $\alpha \cap \beta$ and $(-\alpha) \cap (-\beta)$ are non-empty sets). In the spherical case two roots $\alpha, \beta$ are prenilpotent if and only if $\alpha \neq -\beta$. In the non-spherical case this is no longer true. For a pair $\{ \alpha, \beta \}$ of prenilpotent roots we have either $o(r_{\alpha} r_{\beta}) <\infty$ or $o(r_{\alpha} r_{\beta}) = \infty$. In the first case the commutator relations between $U_{\alpha}$ and $U_{\beta}$ are completely determined by Tits and Weiss in \cite{TW02}. In the second case the pair $\{ \alpha, \beta \}$ is \textit{nested}, i.e. $\alpha \subsetneq \beta$ or $\beta \subsetneq \alpha$. In this paper we are interested in the commutator relations of root groups corresponding to nested roots in $2$-spherical RGD-systems of rank $3$. We will show that in a certain class of RGD-systems these commutator relations are trivial. This leads to the following definition: An RGD-system satisfies Condition $\nc$ if the following holds: \begin{equation} \forall \alpha \subsetneq \beta \in \Phi: [U_{\alpha}, U_{\beta}] = 1. \tag{nc} \end{equation} We prove the following theorem (cf. Proposition \ref{Propsimplylacedaffine} and Theorem \ref{Mainresult}): \noindent \textbf{Theorem A:} Let $(W, S)$ be a $2$-spherical Coxeter system of rank $3$ and let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of type $(W, S)$ satisfying Condition $\costar$. If $\mathcal{D}$ does not satisfy Condition $\nc$ then $\mathcal{D}$ is of type $(2, 4, 4), (2, 4, 6), (2, 4, 8), (2, 6, 6)$ or $(2, 6, 8)$.
\noindent The following proposition shows that our theorem is optimal in the sense that there are RGD-systems of every type mentioned in Theorem A for which Condition $\costar$ does not imply Condition $\nc$ (see Remark \ref{ExampleC2tilde} and Section \ref{sec:Non-cyclic}): \noindent \textbf{Proposition B:} There exist RGD-systems of type $(2, 4, 4), (2, 4, 6), (2, 4, 8), (2, 6, 6), (2, 6, 8)$ which satisfy Condition $\costar$ but do not satisfy Condition $\nc$. \noindent In the higher rank case we have the following result which is contained in Theorem \ref{Mainresult}: \noindent \textbf{Theorem C:} Assume that the Coxeter diagram of $(W, S)$ is the complete graph or simply-laced (i.e. $m_{st} \in \{2, 3\}$ for all $s, t\in S$). Then in every RGD-system of type $(W, S)$ Condition $\costar$ implies Condition $\nc$. \noindent \textbf{Remarks:} \noindent $1$. We believe that we cannot drop Condition $\costar$ in general in Theorem A. But if the rank $3$ RGD-system is of affine type, we can drop Condition $\costar$ (see Proposition \ref{Propsimplylacedaffine}). Furthermore, Theorem A shows that Condition $\nc$ is a weaker condition than Condition $\costar$ for a $2$-spherical RGD-system of rank $3$ which is not of type $(2, 4, 4), (2, 4, 6), (2, 4, 8), (2, 6, 6), (2, 6, 8)$. \noindent $2$. Let $(W, S)$ be of type $(k, l, m)$ and assume that $2 \leq k \leq l \leq m \leq 8$. If $l \in \{3, 8\}$, then in an RGD-system of type $(W, S)$ Condition $\costar$ implies Condition $\nc$. Moreover, there are many $2$-spherical non-affine diagrams $(W, S)$ such that in an RGD-system of type $(W, S)$ Condition $\costar$ does not imply Condition $\nc$. Since $k, l, m \in \{2, 3, 4, 6, 8\}$, there are $125$ different types of RGD-systems of rank $3$ and in only five types Condition $\costar$ does not necessarily imply Condition $\nc$. \noindent $3$. In Theorem C we can drop Condition $\costar$ if $(W, S)$ is simply-laced, since every simply-laced RGD-system automatically satisfies Condition $\costar$. Theorem C is proved by Allcock and Carbone for RGD-systems of simply-laced hyperbolic type associated to Kac-Moody groups over fields (see Lemma $6$ in \cite{AC16}). Our theorem includes these cases. Furthermore, we learned that Ted Williams produced results in \cite{Wi20} that are similar to our Theorem A and Proposition B. We also remark that Condition $\nc$ often holds for RGD-systems which come from Kac-Moody groups over fields (cf. Remark $3.7 (f)$ in \cite{Ti87}). \renewcommand{\abstractname}{Acknowledgement} \begin{abstract} The author thanks the anonymous referee for careful reading and many helpful comments. \end{abstract} \section{Preliminaries} \subsection*{Coxeter systems} Let $(W, S)$ be a Coxeter system and let $\ell$ denote the corresponding length function. Defining $w \sim_s w'$ if and only if $w^{-1}w' \in \langle s \rangle$ we obtain a chamber system with chamber set $W$ and equivalence relations $\sim_s$ for $s\in S$, which we denote by $\Sigma(W, S)$. We call two chambers $w, w'$ \textit{$s$-adjacent} if $w \sim_s w'$ and \textit{adjacent} if they are $s$-adjacent for some $s\in S$. A \textit{gallery of length $n$} from $w_0$ to $w_n$ is a sequence $(w_0, \ldots, w_n)$ of chambers where $w_i$ and $w_{i+1}$ are adjacent for any $0 \leq i < n$. A gallery $(w_0, \ldots, w_n)$ is called \textit{minimal} if there exists no gallery from $w_0$ to $w_n$ of length $k<n$ and we denote the length of a minimal gallery between $w_0$ and $w_n$ by $d(w_0, w_n)$.
For $s, t \in S$ we denote the order of $st$ in $W$ by $m_{st}$. The \textit{Coxeter diagram} corresponding to $(W, S)$ is the labeled graph $(S, E(S))$, where $E(S) = \{ \{s, t \} \mid m_{st}>2 \}$ and where each edge $\{s,t\}$ is labeled by $m_{st}$ for all $s, t \in S$. The \textit{rank} of a Coxeter diagram is the cardinality of the set of its vertices. It is well-known that the pair $(\langle J \rangle, J)$ is a Coxeter system (cf. \cite[Ch. IV, §$1$ Th\'eor\`eme $2$]{Bo68}). For $J \subseteq S$ we define the \textit{$J$-residue} of a chamber $c\in W$ to be the set $c \langle J \rangle$. A \textit{residue} $R$ is a $J$-residue for some $J \subseteq S$; we call $J$ the \textit{type} of $R$ and the cardinality of $J$ is called the \textit{rank} of $R$. A \textit{panel} is a residue of rank $1$. It is a fact that for any chamber $x\in W$ and every residue $R$ there exists a unique chamber $z\in R$ such that $d(x, y) = d(x, z) + d(z, y)$ holds for any chamber $y\in R$. The chamber $z$ is called the \textit{projection} of $x$ onto $R$ and is denoted by $z = \proj_R x$. Let $(W, S)$ be a $2$-spherical Coxeter system of rank $3$ and let $S = \{ r, s, t \}$. Sometimes we will also call $(m_{rs}, m_{rt}, m_{st})$ the \textit{type} of $(W, S)$. The Coxeter system $(W, S)$ is called \textit{affine} if $\frac{1}{m_{rs}} + \frac{1}{m_{rt}} + \frac{1}{m_{st}} =1$, and it is called \textit{hyperbolic} if $\frac{1}{m_{rs}} + \frac{1}{m_{rt}} + \frac{1}{m_{st}} <1$. A Coxeter system of rank $3$ is called \textit{cyclic} if the underlying Coxeter diagram is the complete graph. \subsection*{Roots and walls} A \textit{reflection} is an element of $W$ that is conjugated to an element of $S$. For $s\in S$ we let $\alpha_s := \{ w\in W \mid \mathrm{e}ll(sw) > \mathrm{e}ll(w) \}$ be the \textit{simple root} corresponding to $s$. A \textit{root} is a subset $\alpha \subseteq W$ such that $\alpha = v\alpha_s$ for some $v\in W$ and $s\in S$. We denote the set of all roots by $\mathcal{P}hi(W, S)$. The set $\mathcal{P}hi(W, S)_+ = \{ \alpha \in \mathcal{P}hi(W, S) \mid 1_W \in \alpha \}$ is the set of all \textit{positive roots} and $\mathcal{P}hi(W, S)_- = \{ \alpha \in \mathcal{P}hi(W, S) \mid 1_W \notin \alpha \}$ is the set of all \textit{negative roots}. For each root $\alpha \in \mathcal{P}hi(W, S)$ we denote the \textit{opposite root} by $-\alpha$ and we denote the unique reflection which interchanges these two roots by $r_{\alpha}$. For a pair $\{ \alpha, \beta \}$ of prenilpotent roots we will write $\left[ \alpha, \beta \right] := \{ \gamma \in \mathcal{P}hi(W, S) \mid \alpha \cap \beta \subseteq \gamma \text{ and } (-\alpha) \cap (-\beta) \subseteq -\gamma \}$ and $(\alpha, \beta) := \left[ \alpha, \beta \right] \backslash \{ \alpha, \beta \}$. For $\alpha \in \mathcal{P}hi(W, S)$ we denote by $\partial \alpha$ (resp. $\partial^2 \alpha$) the set of all panels (resp. spherical residues of rank $2$) stabilized by $r_{\alpha}$. Furthermore, we define $\mathcal{C}(\partial^2 \alpha) := \bigcup_{R \in \partial^2 \alpha} R$. Some authors call a panel $P \in \partial \alpha$ a \textit{boundary panel} of the root $\alpha$. The set $\partial \alpha$ is called the \textit{wall} associated to $\alpha$. Let $G = (c_0, \ldots, c_k)$ be a gallery. We say that $G$ \textit{crosses the wall $\partial \alpha$} if there exists $1 \leq i \leq k$ such that $\{ c_{i-1}, c_i \} \in \partial \alpha$. It is a basic fact that a minimal gallery crosses a wall at most once (cf. Lemma $3.69$ in \cite{AB08}). 
\begin{comment} \begin{remark}\label{rkmingalroots} Let $(c_0, \ldots, c_k)$ and $(d_0 = c_0, \ldots, d_k = c_k)$ be two minimal galleries from $c_0$ to $c_k$. Then $\partial \alpha$ is crossed by the minimal gallery $(c_0, \ldots, c_k)$ if and only if it is crossed by the minimal gallery $(d_0, \ldots, d_k)$ for any root $\alpha \in \mathcal{P}hi$. \mathrm{e}nd{remark} \begin{lemma}\label{CM06Prop2.7} Let $\alpha \in \mathcal{P}hi$ and let $P, Q \in \partial \alpha$. Then there exist a sequences $P_0 = P, \ldots, P_n = Q$ of panels in $\partial \alpha$ and a sequence $R_1, \ldots, R_n$ of spherical rank $2$ residues in $\partial^2 \alpha$ such that $P_{i-1}, P_i$ are distinct and contained in $R_i$ and moreover, we have $\proj_{R_i} P := \{ \proj_{R_i} p \mid p \in P \} = P_{i-1}$ and $\proj_{R_i} Q = P_i$. \mathrm{e}nd{lemma} \begin{proof} This is a consequence of Proposition $2.7$ in \cite{CM06}. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{convention} For the rest of this paper we let $(W, S)$ be a $2$-spherical Coxeter system of finite rank and $\mathcal{P}hi := \mathcal{P}hi(W, S)$ (resp. $\mathcal{P}hi_+$ and $\mathcal{P}hi_-$). \mathrm{e}nd{convention} \subsection*{Reflection triangles and combinatorial triangles} This subsection is based on \cite{CM05}. A set $D$ a called \textit{fundamental domain} for the action of a group $G$ on a set $E$ containing $D$ if $\bigcup_{ g\in G } gD = E$ and $D \cap gD \neq \mathrm{e}mptyset \Rightarrow g=1$ for any $g\in G$. Let $\mathcal{P}si$ be a set of roots. We put $R(\mathcal{P}si) := \{ r_{\psi} \mid \psi \in \mathcal{P}si \}$ and $W(\mathcal{P}si) := \langle R(\mathcal{P}si) \rangle$. The set $\mathcal{P}si$ is called \textit{geometric} if $\bigcap_{\psi \in \mathcal{P}si} \psi$ is non-empty and if for all $\varphi, \psi \in \mathcal{P}si$, the set $\varphi \cap \psi$ is a fundamental domain for the action of $W(\{ \varphi, \psi \})$ on $\Sigma(W, S)$. A \textit{parabolic subgroup} of $W$ is a subgroup of the form $\Stab_W(R)$ for some residue $R$. The \textit{rank} of the parabolic subgroup is defined to be the rank of the residue $R$. A \textit{reflection triangle} is a set $T$ of three reflections such that the order of $tt'$ is finite for all $t, t' \in T$ and such that $T$ is not contained in any parabolic subgroup of rank $2$. It is a fact that given a reflection triangle $T$ there exists a reflection triangle $T'$ such that $\langle T \rangle = \langle T' \rangle$ and that $(\langle T \rangle, T')$ is a Coxeter system. This follows essentially from Proposition $4.2$ in \cite{MW02}. In particular, the Coxeter diagram of $(\langle T \rangle, T')$ is uniquely determined by $\langle T \rangle$ and we denote this diagram by $\mathcal{M}(T)$. We say that $T$ is \textit{affine} if $\mathcal{M}(T)$ is affine. A set of three roots $T$ is called \textit{combinatorial triangle} (or simply a \textit{triangle}) if the following hold: \begin{enumerate}[label=(CT\arabic*), leftmargin=*] \item The set $\{ r_{\alpha} \mid \alpha \in T \}$ is a reflection triangle. \item The set $\bigcap_{\alpha \in T} \alpha$ is geometric. \mathrm{e}nd{enumerate} A triangle $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ is called a \textit{fundamental triangle} if $(-\alpha_i, \alpha_{i+1}) = \mathrm{e}mptyset$ for any $0 \leq i \leq 2$, where the indices are taken modulo $3$. To avoid complicated subscripts we will write $r_i$ instead of $r_{\alpha_i}$ for $i\in \mathbb{N}$ and a root $\alpha_i \in \mathcal{P}hi$. 
\begin{lemma}\label{Theorem1.2CM} Let $T$ be an affine reflection triangle. Then there exists an irreducible affine parabolic subgroup $W_0 \leq W$ of rank $\geq 3$ such that $\langle T \rangle$ is conjugated to a subgroup of $W_0$. \mathrm{e}nd{lemma} \begin{proof} This is Theorem $1.2$ of \cite{CM05}, which is essentially based on a result in \cite{KDis}. \mathrm{e}nd{proof} Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ be a triangle and let $\sigma_i \in \partial^2 \alpha_{i-1} \cap \partial^2 \alpha_{i+1}$ (the indices are taken modulo $3$) which is contained in $\alpha_i$. The set $\{ \sigma_0, \sigma_1, \sigma_2 \}$ is called a \textit{set of vertices} of $T$. For $0 \leq i \neq j \leq 2$ we let $x_{i, j}$ be the unique chamber of $\proj_{\sigma_i} \sigma_j$ which belongs to $\alpha_k$ where $i \neq k \neq j$. Using Lemma $2.3$ of \cite{CM05}, there exists for each $0 \leq i \leq 2$ a minimal gallery $\Gamma_i$ from $x_{i-1, i+1}$ to $x_{i+1, i-1}$ such that every element belongs to $\mathcal{C}(\partial^2 \alpha_i) \cap \alpha_i$. Let also $\tilde{\Gamma}_i$ be the unique minimal gallery from $x_{i, i+1}$ to $x_{i, i-1}$. Finally, let $\Gamma$ be the gallery obtained by concatenating the $\Gamma_i$'s and the $\tilde{\Gamma}_i$'s, i.e. \[ \Gamma = \Gamma_0 \sim \tilde{\Gamma}_1 \sim \Gamma_2 \sim \tilde{\Gamma}_0 \sim \Gamma_1 \sim \tilde{\Gamma}_2. \] Then $\Gamma$ is a closed gallery by construction and we say that $\Gamma$ \textit{skirts around} the triangle $T$ and that the vertices $\{ \sigma_0, \sigma_1, \sigma_2 \}$ \textit{supports} $\Gamma$. The \textit{perimeter} of $T$ is the minimum of the set of all lengths of all galleries that skirt around $T$. In the next lemma we use the same induction as in Lemma $5.2$ of \cite{CM05} but we deduce different things. \begin{proposition}\label{completefundamentaltriangle} Assume that the Coxeter diagram is the complete graph and let $T$ be a triangle. Then $T$ is fundamental. \mathrm{e}nd{proposition} \begin{proof} Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}, \{ 0, 1, 2 \} = \{ i, j, k \}$, let $p$ be the perimeter of $T$, let $\Gamma$ be a closed gallery of length $p$ which skirts around $T$ and let $\{ \sigma_0, \sigma_1, \sigma_2 \}$ be a set of vertices that supports $\Gamma$. We prove the hypothesis by induction on the length $p$. If $p=0$ then $\sigma_0, \sigma_1, \sigma_2$ contain a common chamber. Then $\bigcap_{\alpha \in T} \alpha$ is a chamber and hence $T$ is fundamental. Thus we assume $p>0$. Assume that $T$ is not fundamental and that $\beta \in (-\alpha_i, \alpha_j)$. Then $o(r_{\beta} r_k) < \infty$ and we obtain the reflection triangles $T_+ := \{ \alpha_i, \beta, \alpha_k \}$ and $T_- := \{ -\beta, \alpha_j, \alpha_k \}$. Let $\sigma \in \partial^2 \alpha_k \cap \partial^2 \beta$ such that $\sigma$ is crossed by $\Gamma$ (i.e. there exists a root $\alpha$ such that $\sigma \in \partial^2 \alpha$ and $\partial \alpha$ is crossed by $\Gamma$). Then the perimeter of $T_+$ and $T_-$ is $<p$ and we can use induction. This implies that $T_+$ and $T_-$ are fundamental triangles and hence $(\beta, \alpha_k) = \mathrm{e}mptyset = (-\alpha_k, \beta)$. Since $m_{st} \neq 2$ for any $s \neq t \in S$ this yields a contradiction. \mathrm{e}nd{proof} \section{Triangles in the hyperbolic plane} In this section we consider $\Sigma(W, S)$, where $(W, S)$ is hyperbolic and of rank $3$. In this case $\Sigma(W, S)$ has a geometric realization whose underlying metric space is $\mathbb{H}^2$. 
For more details see the discussion in the proof of Theorem $14$ in \cite{CR09}. Let $\alpha, \beta \in \mathcal{P}hi$ such that $o(r_{\alpha}r_{\beta}) < \infty$. Then $r_{\alpha}$ and $r_{\beta}$ have a unique common fixed point in $\mathbb{H}^2$. We denote the angle counterclockwise between $r_{\alpha}$ and $r_{\beta}$ divided by $\pi$ by $\angle r_{\alpha} r_{\beta}$: \[ \begin{tikzpicture}[scale=0.8] \draw[line width=0.01mm, domain=-2:2, smooth] plot (\x, {pow(0.8*2.71828, \x)}); \draw (2, 4.5) node [right]{$r_{\alpha}$}; \draw[line width=0.01mm, domain=-2:2, smooth] plot (\x, {pow(0.8*2.71828, -\x)}); \draw (2, 0.1) node [right]{$r_{\beta}$}; \draw[domain=-2:2] plot (\x, 0.8*\x+1); \draw[domain=-2:2] plot (\x, -0.8*\x+1); \coordinate (A) at (-2, 2.6); \coordinate (B) at (2, 2.6); \coordinate (M) at (0, 1); \coordinate (C) at (2, -0.6); \tikzset{anglestyle/.style={angle eccentricity=1.5, draw, thick, angle radius=0.5cm}} \draw pic ["$\angle r_{\alpha} r_{\beta}$", anglestyle] {angle = B--M--A}; \draw[line width=0.01mm, domain=5:9, smooth] plot (\x, {pow(0.8*2.71828, \x-7)}); \draw (9, 4.5) node [right]{$r_{\alpha}$}; \draw[line width=0.01mm, domain=5:9, smooth] plot (\x, {pow(0.8*2.71828, -\x+7)}); \draw (9, 0.1) node [right]{$r_{\beta}$}; \draw[domain=5:9] plot (\x, 0.8*\x -5.6+1); \draw[domain=5:9] plot (\x, -0.8*\x +5.6+1); \coordinate (A1) at (5, 2.6); \coordinate (B1) at (9, 2.6); \coordinate (M1) at (7, 1); \coordinate (C1) at (9, -0.6); \tikzset{anglestyle/.style={angle eccentricity=1.5, draw, thick, angle radius=0.5cm}} \draw pic ["$\angle r_{\beta} r_{\alpha}$", anglestyle, angle eccentricity=2.5] {angle = C1--M1--B1}; \mathrm{e}nd{tikzpicture} \] Note that $\angle r_{\alpha} r_{\beta} + \angle r_{\beta} r_{\alpha} = 1$ and that in general $\angle r_{\alpha} r_{\beta} \neq \angle r_{\beta} r_{\alpha}$. Since we consider RGD-systems we can assume that every angle between two reflections having a unique common fixed point is in the following set (cf. \cite[$(17.1)$ Theorem]{TW02}): $\{ \frac{\pi}{8}, \frac{\pi}{6}, \frac{\pi}{4}, \frac{\pi}{3}, \frac{3\pi}{8}, \frac{\pi}{2}, \frac{5\pi}{8}, \frac{2\pi}{3}, \frac{3\pi}{4}, \frac{5\pi}{6}, \frac{7\pi}{8} \}$. In $\mathbb{H}^2$ every triangle has interior angle sum strictly less than $\pi$ and every quadrangle has interior angle sum strictly less than $2\pi$. It is a fact that two different lines of $\mathbb{H}^2$ intersect in at most one point of $\mathbb{H}^2$. Thus for two roots $\alpha, \beta \in \mathcal{P}hi$ such that $o(r_{\alpha} r_{\beta}) < \infty$ there exists a unique residue $R$ of rank $2$ which is stabilized by $r_{\alpha}$ and $r_{\beta}$. Let $\{ s, t \}$ be the type of $R$. Then we define $m_{\alpha, \beta} := m_{st}$ and we remark that $o(r_{\alpha} r_{\beta})$ divides $m_{\alpha, \beta}$. \begin{corollary}\label{uniquechamberhyperbolicrank3} Every triangle contains a unique chamber, if the Coxeter diagram is cyclic hyperbolic. \mathrm{e}nd{corollary} \begin{proof} This is a consequence of the classification in \cite{Fe98} (cf. Figure $8$ in $\S 5.1$ in loc.cit). \mathrm{e}nd{proof} \begin{comment} \begin{proof} Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ be a triangle. Using Proposition \ref{completefundamentaltriangle}, $T$ is fundamental. Assume that $T$ is not a chamber. 
Then there exists a root $\alpha_3$ which ''divides'' $\{ r_i \mid \alpha_i \in T \}$ into two parts: $$\begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$-\alpha_0$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.15) node [right]{$\alpha_1$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\alpha_2$}; \draw (0 , 2.6) to (10 , 2.6); \draw (10 , 2.6) to (10 , 2.3); \draw (9.9, 2.6) to (9.9, 2.3); \draw (9.8, 2.6) to (9.8, 2.3); \draw (9.8, 2.3) node [below]{$\alpha_3$}; \mathrm{e}nd{tikzpicture}$$ Using Proposition \ref{completefundamentaltriangle} and the fact that the diagram is cyclic hyperbolic, we obtain $(-\alpha_2, \alpha_3) \neq \mathrm{e}mptyset$. Let $\alpha_4 \in (-\alpha_2, \alpha_3)$. Then $r_4$ has to intersect $r_0$ because otherwise the triangle $\{ \alpha_1, \alpha_2, -\alpha_4 \}$ would not be fundamental. Using Proposition \ref{completefundamentaltriangle} we obtain $\angle r_2 r_3 = \angle r_3 r_4 = \angle r_4 r_2 = \frac{1}{3}$ (otherwise there would be a non-fundamental triangle). With the same arguments we obtain a root $\alpha_5 \in (-\alpha_1, \alpha_3)$ and we have $\angle r_1 r_5 = \angle r_5 r_3 = \angle r_3 r_1 = \frac{1}{3}$ as before. Then there are three possibilities: $r_4, r_5, r_0$ meet in one point, $r_4, r_5$ intersect in the interior of the triangle $T$ or they do not intersect in the interior of the triangle $T$ and both intersect $r_0$ in different points. In each case we obtain a contradiction to the interior angle sum of a triangle or a quadrangle in $\mathbb{H}^2$. In the case where $r_4$ and $r_5$ intersect in the interior of the triangle $T$, there must be a root $\alpha_6 \in (\alpha_5, \alpha_0)$ with $\angle r_6 r_5 \geq \frac{1}{3}$. Then $r_6$ has to intersect $r_2$ in the quadrangle $\{ r_0, r_1, r_2, r_3 \}$ but not in the intersection of $r_2$ and $r_3$. Since $m_{st} \neq 2$ for any $s \neq t \in S$ and every triangle is fundamental, this yields a contradiction. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{remark} For a fundamental triangle $T = \{ \alpha, \beta, \gamma \}$ in a Coxeter system of hyperbolic type and of rank $3$, the Coxeter system is of type $(m_{\alpha, \beta}, m_{\alpha, \gamma}, m_{\beta, \gamma})$ (cf. \cite{Fe98} and the previous corollary). In particular, $\bigcap_{\mathrm{e}psilon \in T} \mathrm{e}psilon$ is a fundamental domain for the action of $W$ on the geometric realization of $\Sigma(W, S)$. \mathrm{e}nd{remark} \begin{lemma}\label{angle1over2} Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ be a triangle and let $\{ i, j, k \} = \{ 0, 1, 2 \}$. Let $\alpha_3 \in (-\alpha_i, \alpha_j)$. Then the following hold: \begin{enumerate}[label=(\alph*)] \item We have $\angle r_k r_3 \neq \frac{5}{8} \neq \angle r_3 r_k$. \item We have $\angle r_k r_3 = \angle r_3 r_k = \frac{1}{2}$ if the diagram is not of type $(2, 3, 8)$. In particular, we have $(-\alpha_i, \alpha_k) = (-\alpha_j, \alpha_k) = \mathrm{e}mptyset$ and $(-\alpha_i, \alpha_j) = \{ \alpha_3 \}$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} This is also a consequence of the classification in \cite{Fe98} (cf. Figure $8$ in $\S 5.1$ in loc.cit). 
\mathrm{e}nd{proof} \begin{comment} \begin{proof} Let $\alpha_3 \in (-\alpha_i, \alpha_j)$: \[\begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$-\alpha_i$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.15) node [right]{$\alpha_j$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\alpha_k$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\alpha_3$}; \mathrm{e}nd{tikzpicture}\] Thus $\angle r_k r_3 + \angle r_3 r_k = 1$. W.l.o.g. we assume that $\angle r_k r_3 \geq \angle r_3 r_k$. Using the interior angle sum of a triangle, we obtain $\angle r_k r_3 \in \{ \frac{1}{2}, \frac{5}{8}, \frac{4}{6} \}$. We distinguish the following two cases: \begin{enumerate}[label=(\alph*)] \item $\angle r_k r_3 = \frac{5}{8}$: Then there exists a root $\alpha_4 \in ( \alpha_3, \alpha_k )$ such that $\angle r_k r_4 = \frac{3}{8}, \angle r_4 r_3 = \frac{2}{8}$. Then the following hold: \begin{align*} 1 > \angle r_3 r_j + \angle r_j r_4 + \angle r_4 r_3 \geq \frac{3}{8} + \angle r_j r_4 \\ 1 > \angle r_j r_k + \angle r_k r_4 + \angle r_4 r_j \geq \frac{4}{8} + \angle r_4 r_j \mathrm{e}nd{align*} Since $1 = \angle r_j r_4 + \angle r_4 r_j \leq \frac{1}{2} + \frac{3}{8}$ this yields a contradiction and hence part $(a)$ follows. \item $\angle r_k r_3 = \frac{4}{6}$: We first assume that $m_{st} \neq 3$ for any $s, t \in S$. Then there exist two roots $\alpha_4, \alpha_5 \in (\alpha_3, \alpha_k)$ such that $\angle r_k r_4 = \angle r_4 r_5 = \frac{1}{6}, \angle r_5 r_3 = \frac{2}{6}$. Then the following hold: \begin{align*} 1 > \angle r_j r_k + \angle r_k r_4 + \angle r_4 r_j \geq \frac{7}{24} + \angle r_4 r_j \\ 1 > \angle r_j r_4 + \angle r_4 r_5 + \angle r_5 r_j = \frac{1}{6} + \angle r_j r_4 + \angle r_5 r_j \\ 1 > \angle r_j r_5 + \angle r_5 r_3 + \angle r_3 r_j \geq \frac{11}{24} + \angle r_j r_5 \mathrm{e}nd{align*} Then $\angle r_j r_5 < \frac{13}{24}$ and hence $\leq \frac{1}{2}$. Since $1 > \angle r_j r_k + \angle r_k r_5 + \angle r_5 r_j \geq \frac{11}{24} + \angle r_5 r_j$ we obtain $\angle r_j r_5 = \frac{1}{2} = \angle r_5 r_j$. Furthermore, we have $\angle r_4 r_j < \frac{17}{24}$ and hence $\leq \frac{2}{3}$. This implies $\angle r_j r_4 \geq \frac{1}{3}$. Since $1 > \frac{1}{6} + \angle r_j r_4 + \angle r_5 r_j \geq 1$ this yields a contradiction. Now let $(W, S)$ be cyclic hyperbolic. Then there exists a root $\alpha_4 \in (\alpha_3, \alpha_k)$ such that $\angle r_k r_4 = \angle r_4 r_3 = \frac{1}{3}$. W.l.o.g. we assume $\angle r_j r_4 \geq \angle r_4 r_j$. Then we have $\angle r_j r_4 \in \{ \frac{1}{2}, \frac{4}{6} \}$ by part $(a)$. Since the diagram is cyclic hyperbolic, there exists a root $\alpha_5 \in (-\alpha_j, \alpha_4)$ such that $\angle r_j r_5 \geq \frac{1}{4}$ and $\angle r_5 r_4 \geq \frac{1}{4}$. But then $\angle r_3 r_5 < \frac{1}{2}$ and $\angle r_5 r_3 \leq \frac{1}{2}$. This is a contradiction to $\angle r_3 r_5 + \angle r_5 r_3 = 1$. \mathrm{e}nd{enumerate} Thus $\angle r_k r_3 = \angle r_3 r_k = \frac{1}{2}$. 
Furthermore, we obtain $1 > \angle r_j r_k + \angle r_k r_3 + \angle r_3 r_j \geq \frac{5}{8} + \angle r_j r_k$ and hence $\angle r_j r_k \in \{ \frac{1}{8}, \frac{1}{6}, \frac{1}{4}, \frac{1}{3} \}$. Assume that $(-\alpha_j, \alpha_k) \neq \mathrm{e}mptyset$. Then $\angle r_j r_k \in \{ \frac{2}{8}, \frac{2}{6} \}$ and we obtain a root $\alpha_4 \in (-\alpha_j, \alpha_k)$. Then we have $\angle r_3 r_4 = \frac{1}{2}$ as before. But this is a contradiction to $1 > \angle r_4 r_k + \angle r_k r_3 + \angle r_3 r_4 >1$ and thus we have $(-\alpha_j, \alpha_k) = \mathrm{e}mptyset$. Analogously we obtain $(-\alpha_i, \alpha_k) = \mathrm{e}mptyset$. Let $\alpha_3 \neq \alpha_4 \in (-\alpha_i, \alpha_j)$. Then w.l.o.g. we have $\alpha_4 \in (-\alpha_i, \alpha_3)$ and $\angle r_k r_4 = \frac{1}{2}$ as before. Thus $1 > \angle r_k r_4 + \angle r_4 r_3 + \angle r_3 r_k >1$ which yields a contradiction. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{comment} \begin{proposition}\label{cycliccase} Every triangle is fundamental, if the Coxeter diagram is cyclic hyperbolic. \mathrm{e}nd{proposition} \begin{proof} Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ be a triangle, let $\{ 0, 1, 2 \} = \{i, j, k\}$ and assume that $(-\alpha_i, \alpha_j) \neq \mathrm{e}mptyset$. Then there exists a root $\alpha_3 \in (-\alpha_i, \alpha_j)$. $$\begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$-\alpha_i$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.15) node [right]{$\alpha_j$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\alpha_k$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\alpha_3$}; \draw (0 , 2.6) to (10 , 2.6); \draw (10 , 2.6) to (10 , 2.3); \draw (9.9, 2.6) to (9.9, 2.3); \draw (9.8, 2.6) to (9.8, 2.3); \draw (9.8, 2.3) node [below]{$\alpha_4$}; \draw (0 , 0.8 ) to (11, 6.3); \draw (11 , 6.3 ) to (11.1, 6.1); \draw (10.9, 6.25) to (11 , 6.05); \draw (10.8, 6.2 ) to (10.9, 6); \draw (11, 5.95) node [below]{$\alpha_5$}; \mathrm{e}nd{tikzpicture}$$ By Lemma \ref{angle1over2} we obtain $\angle r_k r_3 = \angle r_3 r_k = \frac{1}{2}$. Since $(W, S)$ is cyclic hyperbolic there exists a root $\alpha_4 \in ( \alpha_3, \alpha_k )$ such that $\angle r_k r_4 \in \{ \frac{1}{4}, \frac{1}{3} \}$ and $\angle r_4 r_3 \in \{ \frac{1}{4}, \frac{1}{6} \}$. Applying Lemma \ref{angle1over2} again we have $\angle r_j r_4 = \frac{1}{2} = \angle r_4 r_j$. With the same argument we obtain a root $\alpha_5 \in (\alpha_4, \alpha_j)$ such that $\angle r_4 r_5 \in \{ \frac{1}{4}, \frac{1}{3} \}$ and $\angle r_5 r_j \in \{ \frac{1}{4}, \frac{1}{6} \}$. Using Lemma \ref{angle1over2} again we obtain $\angle r_5 r_k = \frac{1}{2}$. But $1 > \angle r_4 r_5 + \angle r_5 r_k + \angle r_k r_4 \geq 1$. Hence we obtain a contradiction. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{proposition}\label{Prop238} Assume that the Coxeter diagram is of type $(2, 3, 8)$. Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ be a triangle and let $\{ i, j, k \} = \{ 0, 1, 2 \}$. 
If there exists a root $\alpha_3 \in (-\alpha_i, \alpha_j)$ such that the angle between $r_k$ and $r_3$ in the reflection triangle $\{ r_j, r_k, r_3 \}$ is $\frac{2}{3} \pi$, then $(-\alpha_j, \alpha_k) = (-\alpha_i, \alpha_k) = \mathrm{e}mptyset, \vert (-\alpha_i, \alpha_j) \vert \leq 3$ and $m_{\alpha_i, \alpha_j} = 8$. \mathrm{e}nd{proposition} \begin{proof} The first part is a consequence of the classification in \cite{Fe98} (cf. Figure $8$ in $\S 5.1$ in loc.cit). Let $\alpha_3 \in (-\alpha_i, \alpha_j)$ such that the angle between $r_k$ and $r_3$ in the reflection triangle $\{ r_j, r_k, r_3 \}$ is $\frac{2}{3} \pi$. Then we have $\angle r_3 r_j = \angle r_j r_k = \frac{1}{8}$ because of the interior angle sum of a triangle. This implies $m_{\alpha_i, \alpha_j} = 8$. \mathrm{e}nd{proof} \begin{comment} \begin{proof} Let $\alpha_3 \in (-\alpha_i, \alpha_j)$ such that $\angle r_k r_3 = \frac{2}{3}$. Then we have $\angle r_3 r_j = \angle r_j r_k = \frac{1}{8}$ because of the interior angle sum of a triangle and hence $(-\alpha_j, \alpha_k) = (\alpha_3, \alpha_j) = \mathrm{e}mptyset$ as well as $m_{\alpha_i, \alpha_j} = 8$. Now we assume that $(-\alpha_i, \alpha_k) \neq \mathrm{e}mptyset$. Then there exists a root $\alpha_4 \in (-\alpha_i, \alpha_k)$. Applying Lemma \ref{angle1over2} $(a)$ we obtain $\angle r_4 r_j \in \{ \frac{1}{3}, \frac{1}{2}, \frac{2}{3} \}$. \[ \begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [right]{$-\alpha_i$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.15) node [right]{$\alpha_j$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\alpha_k$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\alpha_3$}; \draw (9, -0.75) to (0, 3.75); \draw (0, 3.75) to (-0.1, 3.55); \draw (0.1, 3.7) to (0, 3.5); \draw (0.2, 3.65) to (0.1, 3.45); \draw (0, 3.5) node [below]{$\alpha_4$}; \mathrm{e}nd{tikzpicture} \] Now we have to distinguish the following three cases: \begin{enumerate}[label=(\alph*)] \item $\angle r_4 r_j = \frac{1}{3}$: Then $\angle r_j r_4 = \frac{2}{3}$ and hence $\angle r_3 r_4 = \frac{7}{8}$. But this implies that the interior angle sum of the quadrangle $\{ r_j, r_4, r_3, r_k \}$ would be $2 \pi$ which is a contradiction in $\mathbb{H}^2$. \item $\angle r_4 r_j = \frac{1}{2}$: Then we have $\angle r_3 r_4 \in \{ \frac{2}{3}, \frac{6}{8}, \frac{7}{8} \}$. If $\angle r_3 r_4 \neq \frac{2}{3}$ then we obtain a contradiction as above. Thus we assume that $\angle r_3 r_4 = \frac{2}{3}$. But then there exists a root $\alpha_6 \in (\alpha_3, -\alpha_4)$ such that $\angle r_3 r_6 = \angle r_6 r_4 = \frac{1}{3}$. Then $r_6$ has to intersect $r_j$ because of the interior angle sum of a triangle. But then we have \[ 1 > \angle r_4 r_j + \angle r_j r_6 + \angle r_6 r_4 = \frac{5}{6} + \angle r_j r_6 \] This implies $\angle r_j r_6 = \frac{1}{8}$ and hence $\angle r_6 r_j = \frac{7}{8}$. Considering the quadrangle $\{ r_3, r_6, r_j, r_k \}$ we obtain a contradiction to the interior angle sum of a quadrangle in $\mathbb{H}^2$. \item $\angle r_4 r_j = \frac{2}{3}$: Let $\alpha_5 \in (\alpha_3, \alpha_k)$. 
Note that $r_5$ either intersects $r_4$ in the interior of the triangle $\{ -\alpha_3, \alpha_j, \alpha_k \}$ or $r_5$ intersects $r_j$ and $r_4$ in their intersection or $r_5$ intersects $r_j$ in the quadrangle $\{ r_3, r_4, r_j, r_k \}$ and not in the intersection of $r_j$ and $r_4$. Using the fact that $\angle r_3 r_4 \geq \frac{1}{2}$ we obtain in any case a contradiction to the interior angle sum of a triangle or a quadrangle in $\mathbb{H}^2$. \mathrm{e}nd{enumerate} Thus we have $(-\alpha_j, \alpha_k) = (-\alpha_i, \alpha_k) = \mathrm{e}mptyset$. Now let $n := \vert (-\alpha_i, \alpha_j)\vert $. Since $\angle r_i r_j \leq \frac{5}{8}$, we have $n \leq 4$. Assume that $n=4$. Then there exist roots $\alpha_5, \alpha_6, \alpha_7 \in (-\alpha_i, \alpha_j) \backslash \{ \alpha_3 \}$ such that $\angle r_i r_5 = \angle r_5 r_6 = \angle r_6 r_7 = \angle r_7 r_3 = \angle r_3 r_j = \frac{1}{8}$. Considering the angles we obtain $\angle r_5 r_k \geq \frac{7}{8}$ which is a contradiction. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{lemma}\label{Prop238auxres} Assume that the Coxeter diagram is of type $(2, 3, 8)$. Let $T = \{ \alpha_0, \alpha_1, \alpha_2 \}$ be a triangle and let $\{ i, j, k \} = \{ 0, 1, 2 \}$. Assume that $(-\alpha_i, \alpha_k) = (-\alpha_j, \alpha_k) = \mathrm{e}mptyset$ and that $m_{\alpha, \beta} = 8$ for any $\alpha \neq \beta \in T$. Then one of the following hold: \begin{enumerate}[label=(\alph*)] \item $\vert (-\alpha_i, \alpha_j) \vert = 1$ and there exists a root $\beta$ such that $o(r_i r_{\beta}) < \infty$ and $m_{\alpha_i, \beta} = 3$. \item $\angle r_i r_j = \frac{1}{2}$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} In the following proof, up to reordering, we can assume that we are in the situation displayed in the figure. By hypothesis $m_{\alpha_i, \alpha_j} = 8$ and we obtain $\angle r_i r_j \in \{ \frac{1}{8}, \ldots, \frac{5}{8} \}$. Since the diagram is not of type $(8, 8, 8)$, it follows that $\angle r_i r_j \neq \frac{1}{8}$. Thus there exists a root $\alpha_3 \in (-\alpha_i, \alpha_j)$ such that $\angle r_i r_3 = \frac{1}{8}$. Using Lemma \ref{angle1over2} we obtain $\angle r_3 r_k \in \{ \frac{1}{3}, \frac{1}{2}, \frac{2}{3} \}$. Since the diagram is not of type $(3, 8, 8)$, we obtain $\angle r_3 r_k \neq \frac{1}{3}$. We distinguish the following two cases: \begin{enumerate}[label=(\alph*)] \item $\angle r_3 r_k = \frac{2}{3}$: Then $\angle r_k r_3 = \frac{1}{3}$. Since $(W, S)$ is not of type $(3, 8, 8)$, we obtain a root $\alpha_4 \in (\alpha_3, \alpha_j)$ such that $\angle r_3 r_4 = \frac{1}{8}$. Applying Lemma \ref{angle1over2} we have $\angle r_4 r_k \in \{ \frac{1}{3}, \frac{1}{2}, \frac{2}{3} \}$. Using the interior angle sum of a triangle and the fact that the diagram is not of type $(3, 3, 8)$, we obtain $\angle r_4 r_k = \frac{1}{2}$. Assume that $(-\alpha_k, \alpha_4) \neq \mathrm{e}mptyset$. Then there exists a root $\gamma \in (-\alpha_k, \alpha_4)$ such that $\angle r_4 r_{\gamma} = \frac{1}{4} = \angle r_{\gamma} r_k$. But then $\angle r_3 r_{\gamma} < \frac{1}{2}$ and hence $\angle r_3 r_{\gamma} = \frac{1}{3}$. This implies $\angle r_3 r_4 + \angle r_4 r_{\gamma} + \angle r_{\gamma} r_3 >1$ which is a contradiction. Thus $(-\alpha_k, \alpha_4) = \mathrm{e}mptyset$. This implies $(\alpha_4, \alpha_k) = \mathrm{e}mptyset$. Since the diagram is not of type $(2, 8, 8)$, we obtain a root $\alpha_5 \in (\alpha_4, \alpha_j)$ such that $\angle r_4 r_5 = \frac{1}{8}$. Moreover, we obtain $\angle r_5 r_k = \frac{1}{3}$. 
Thus $\angle r_k r_5 = \frac{2}{3}$ and hence $\angle r_i r_j = \frac{1}{2}$. \[ \begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [right]{$-\alpha_i$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.15) node [right]{$\alpha_j$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\alpha_k$}; \draw (0 , -0.25 ) to (10, 2.25); \draw (10 , 2.25 ) to (10.1, 2); \draw (9.9, 2.225) to (10 , 1.975); \draw (9.8, 2.2 ) to (9.9, 1.95); \draw (10.5, 2.1) node [below]{$\alpha_3$}; \draw (0, -0.5) to (10, 4.5); \draw (10, 4.5) to (10.1, 4.3); \draw (9.9, 4.45) to (10, 4.25); \draw (9.8, 4.4) to (9.9, 4.2); \draw (10.2, 4.3) node [below]{$\alpha_4$}; \draw (0, -0.75) to (10, 6.75); \draw (10, 6.75) to (10.1, 6.5); \draw (9.9, 6.675) to (10, 6.425); \draw (9.8, 6.6) to (9.9, 6.35); \draw (10.5, 6.6) node [below]{$\alpha_5$}; \mathrm{e}nd{tikzpicture} \] \item $\angle r_3 r_k = \frac{1}{2}$: Since the diagram is not of type $(2, 8, 8)$, we obtain three roots $\alpha_4, \alpha_5, \alpha_6 \in (\alpha_3, \alpha_k)$ such that $\angle r_k r_4 = \angle r_4 r_5 = \angle r_5 r_6 = \angle r_6 r_3 = \frac{1}{8}$. Using similar arguments as above we obtain $\angle r_j r_6 = \frac{2}{3}$ and hence $\angle r_3 r_j = \frac{1}{8}$. Thus we have $\angle r_i r_j = \frac{2}{8}$. Furthermore, we obtain three roots $\alpha_7, \alpha_8, \alpha_9 \in (-\alpha_3, \alpha_k)$ with $\angle r_3 r_7 = \angle r_7 r_8 = \angle r_8 r_9 = \angle r_9 r_k = \frac{1}{8}$. This implies $\angle r_7 r_i = \frac{2}{3}$ as above. For $\beta = \alpha_7$ the claim follows.\qedhere \mathrm{e}nd{enumerate} \mathrm{e}nd{proof} \section{Root group data}\label{sec:rgd} An \textit{RGD-system of type $(W, S)$} is a pair $\mathcal{D} = \left( G, \left( U_{\alpha} \right)_{\alpha \in \mathcal{P}hi}\right)$ consisting of a group $G$ together with a family of subgroups $U_{\alpha}$ (called \textit{root groups}) indexed by the set of roots $\mathcal{P}hi$, which satisfies the following axioms, where $H := \bigcap_{\alpha \in \mathcal{P}hi} N_G(U_{\alpha}), U_{\pm} := \langle U_{\alpha} \mid \alpha \in \mathcal{P}hi_{\pm} \rangle$: \begin{enumerate}[label=(RGD\arabic*), leftmargin=*] \setcounter{enumi}{-1} \item For each $\alpha \in \mathcal{P}hi$, we have $U_{\alpha} \neq \{1\}$. \item For each prenilpotent pair $\{ \alpha, \beta \} \subseteq \mathcal{P}hi$, the commutator group $[U_{\alpha}, U_{\beta}]$ is contained in the group $U_{(\alpha, \beta)} := \langle U_{\gamma} \mid \gamma \in (\alpha, \beta) \rangle$. \item For every $s\in S$ and each $u\in U_{\alpha_s} \backslash \{1\}$, there exists $u', u'' \in U_{-\alpha_s}$ such that the product $m(u) := u' u u''$ conjugates $U_{\beta}$ onto $U_{s\beta}$ for each $\beta \in \mathcal{P}hi$. \item For each $s\in S$, the group $U_{-\alpha_s}$ is not contained in $U_+$. \item $G = H \langle U_{\alpha} \mid \alpha \in \mathcal{P}hi \rangle$. \mathrm{e}nd{enumerate} \begin{comment} For each $J \subseteq S$ we define the \textit{$J$-residue of $\mathcal{D}$} as the RGD-system $\mathcal{D}_J = (\langle U_{\alpha} \mid \alpha \in \mathcal{P}hi^J \rangle, (U_{\alpha})_{\alpha \in \mathcal{P}hi^J} )$. 
In this case $J$ is the \textit{type} and the cardinality of $J$ is the \textit{rank} of the $J$-residue.
\begin{lemma}\label{Trivialintersection} Let $(G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of type $(W, S)$ and let $\alpha \neq \beta \in \Phi$ be such that $o(r_{\alpha} r_{\beta}) < \infty$. Then $U_{\beta} \cap \langle U_{\gamma} \mid \gamma \in \Phi, \alpha \subseteq \gamma \rangle = 1$. \end{lemma} \begin{proof} Since $\alpha \not\subseteq \beta$, this is a consequence of Proposition $8.56$ and Theorem $8.81$ of \cite{AB08}. \end{proof} \end{comment}
\begin{lemma}\label{costarsimplerggenerate} Let $(G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of type $(W, S)$ satisfying Condition $\costar$. Then we have $\langle U_{\gamma} \mid \gamma \in [\alpha, \beta] \rangle = \langle U_{\alpha} \cup U_{\beta} \rangle$ or, equivalently, $U_{(\alpha, \beta)} = [U_{\alpha}, U_{\beta}]$ for any pair of roots $\{ \alpha, \beta \} \subseteq \Phi$ which is a basis of a finite root subsystem. \end{lemma} \begin{proof} This follows from Lemma $18$ and Proposition $7$ on page $60$ of \cite{Ab96}. \end{proof}
\begin{lemma}\label{hexagonproperties} Let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of type $(W, S)$. \begin{enumerate}[label=(\alph*)] \item If $\mathcal{D}$ is simply-laced or of type $G_2$, then every root group is abelian. \item If $\mathcal{D}$ is of type $G_2$ and if $\alpha, \beta \in \Phi$ such that $\vert (\alpha, \beta) \vert = 2$, then $[U_{\alpha}, U_{\beta}] = 1$. \end{enumerate} \end{lemma} \begin{proof} This follows from the classification in \cite{TW02}. In particular, we use Theorem $(17.1)$, Theorem $(17.5)$, Example $(16.1)$ and Example $(16.8)$ in loc.cit. \end{proof}
\subsection*{RGD-systems of type $I_2(8)$} In this subsection we let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of type $I_2(8)$. \begin{lemma}\label{octagonproperties} Let $\alpha, \beta \in \Phi$. Then the following hold: \begin{enumerate}[label=(\alph*)] \item If $\vert (\alpha, \beta) \vert = 3$, then $[U_{\alpha}, U_{\beta}] = 1$. \item If $\mathcal{D}$ satisfies Condition $\costar$, then only one of the two kinds of root groups is abelian. If, furthermore, $U_{\alpha}$ is abelian and $\vert (\alpha, \beta) \vert = 1$, then $[U_{\alpha}, U_{\beta}] = 1$. \end{enumerate} \end{lemma} \begin{proof} This follows from Examples $(16.9)$ and $(10.15)$ of \cite{TW02}.
\end{proof}
\begin{lemma}\label{octagoncommutatorrelations} Using the notations of \cite{TW02} we obtain the following commutator relation: \begin{align*} [x_1(t), x_8(u_1, u_2)] = &x_2( t^{\sigma +1}u_1 + t^{\sigma+1} u_2^{\sigma+1}, tu_2 ) \\ \cdot &x_3( t^{\sigma+1} u_2^{\sigma+2} + t^{\sigma+1} u_1 u_2 + t^{\sigma+1} u_1^{\sigma} ) \\ \cdot &x_4( t^{\sigma+2} u_1^{\sigma} u_2^{\sigma+1} + t^{\sigma +2} u_1^2 u_2 + t^{\sigma +2} u_2^{2\sigma +3}, t^{\sigma} u_1 ) \\ \cdot &x_5( t^{\sigma+1} u_1^{\sigma} u_2^{\sigma} + t^{\sigma+1} u_2^{2\sigma+2} + t^{\sigma+1} u_1^2 ) \\ \cdot &x_6( t^{\sigma+1} u_1^2 u_2 + t^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t^{\sigma+1} u_1^{\sigma+1}, tu_1 + tu_2^{\sigma+1} ) \\ \cdot &x_7( tu_1^{\sigma} + tu_1 u_2 + tu_2^{\sigma+2} ) \end{align*} \end{lemma} \begin{proof} This is a straightforward computation, using only the commutator relations in Example $(16.9)$ of \cite{TW02}. For the computation see Lemma \ref{proofofoctagoncommutatorrelations} in the appendix. \end{proof}
\subsection*{Affine RGD-systems} \begin{proposition}\label{Propsimplylacedaffine} Every RGD-system of simply-laced affine type or of type $(2, 3, 6)$ satisfies Condition $\nc$. \end{proposition} \begin{proof} Our proof uses the fact that any affine Moufang building has a spherical Moufang building at infinity. Before we start the proof, we briefly describe this fact: Let $\tilde{\mathcal{D}} = (\tilde{G}, (\tilde{U}_{\alpha})_{\alpha \in \tilde{\Phi}})$ be an RGD-system of affine type $\tilde{X}_{n-1}$. Then there exists an RGD-system $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \Phi})$ of spherical type $X_{n-1}$ such that for all $\alpha, \beta \in \tilde{\Phi}$ with $\alpha \subseteq \beta$ there is some $\gamma \in \Phi$ with $U_{\alpha}, U_{\beta} \leq U_{\gamma}$ (cf. \cite[Proposition $11.110$]{AB08}). Now we start the proof of the proposition. Let $(G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of simply-laced affine type $(W, S)$ or of type $(2, 3, 6)$ and let $\alpha \subsetneq \beta \in \Phi$. By the fact above, $U_{\alpha}$ and $U_{\beta}$ are contained in a common root group $U_{\gamma}$ of the spherical RGD-system at infinity, which is simply-laced or of type $G_2$; hence $U_{\gamma}$ is abelian by Lemma \ref{hexagonproperties}. Thus $[U_{\alpha}, U_{\beta}] = 1$. \end{proof}
\begin{remark}\label{ExampleC2tilde} The following is independent of Condition $\costar$: \begin{enumerate}[label=(\alph*)] \item There are RGD-systems of type $(2, 4, 4)$ with abelian root groups at infinity. \item There are RGD-systems of type $(2, 4, 4)$ with non-abelian root groups at infinity. \end{enumerate} \end{remark}
\section{Commutator relations} In this section we let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \Phi})$ be an RGD-system of type $(W, S)$. \begin{remark} Let $G$ be a group and let $A, B, C \leq G$ be three subgroups of $G$. If $[[A, B], C] = [[A, C], B] = 1$ then $[A, [B, C]]=1$. This is known in the literature as the \textit{three subgroup lemma} (see also $(2.3)$ of \cite{TW02}). \end{remark}
\begin{comment} \begin{lemma}\label{A2tildenc} If $(W, S)$ is of type $\tilde{A}_2$, then $\mathcal{D}$ satisfies Condition $\nc$. \end{lemma} \begin{proof} Let $\alpha \subsetneq \beta \in \Phi$. Let $(c_0, \ldots, c_k)$ be a minimal gallery and let $(\alpha_1 = \alpha, \ldots, \alpha_k = \beta)$ be the sequence of roots which are crossed by that minimal gallery. Let $R$ be the rank $2$ residue containing $c_{k-2}, c_{k-1}$ and $c_k$ and let $c_{j-1} = \proj_R c_0$.
Then there exists a root $\gamma$ in $R$ such that $\{ \gamma, \alpha_j \}$ is a root basis of the set of roots associated to $R$. Considering the euclidean plane it follows that $o(r_{\alpha} r_{\gamma}), o(r_{\alpha} r_j) < \infty$. Using the interior angle sum of a triangle in the euclidean plane and the fact that every angle between two intersecting reflections is in the set $\{ \frac{1}{3}, \frac{2}{3} \}$ we obtain that $(\alpha, \alpha_j) = (\alpha, \gamma) = \mathrm{e}mptyset$. The three subgroup lemma yields $[U_{\alpha}, [U_{\alpha_j}, U_{\gamma}]]=1$. Since $U_{\beta} = [U_{\alpha_j}, U_{\gamma}]$ by Lemma \ref{costarsimplerggenerate} the claim follows. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{lemma}\label{geometrictrianglecomplete} Assume that the Coxeter diagram is the complete graph and let $\{ \alpha, \beta, \gamma \}$ be a triangle. Then $[U_{-\alpha}, U_{\beta}] = [U_{-\alpha}, U_{\gamma}] = 1$. \mathrm{e}nd{lemma} \begin{proof} This follows directly from Proposition \ref{completefundamentaltriangle}. \mathrm{e}nd{proof} \begin{lemma}\label{geometrictrianglenotcomplete} Let $(W, S)$ be of type $(2, 6, 6), (2, 6, 8)$ or $(2, 8, 8)$ and let $\{ \alpha, \beta, \gamma \}$ be a triangle. Then we have $[[U_{-\alpha}, U_{\beta}], U_{\gamma}] = [[U_{-\alpha}, U_{\gamma}], U_{\beta}]=1$. \mathrm{e}nd{lemma} \begin{proof} The claim follows from Lemma \ref{angle1over2}, Lemma \ref{hexagonproperties} and Lemma \ref{octagonproperties}. \mathrm{e}nd{proof} \begin{comment} \begin{lemma}\label{auxlem288} Let $(W, S)$ be of type $(2, 8, 8)$. Let $\{ \alpha, \beta, \gamma \}$ be a set of three roots such that $o(r_{\alpha} r_{\beta}), o(r_{\beta} r_{\gamma}) < \infty$ and $\alpha \subsetneq \gamma$. Let $\delta \in (\alpha, \beta)$ such that $\{ -\delta, \beta, \gamma \}$ is a triangle with $(\delta, \gamma) \neq \mathrm{e}mptyset$. Then one of the following hold: \begin{enumerate}[label=(\alph*)] \item $[U_{\delta}, U_{\gamma}] = 1$. \item The $U_{\delta}$-part of $[U_{\alpha}, U_{\beta}]$ commutes with $U_{\gamma}$ and we have $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta \neq \delta' \in (\alpha, \beta)$ with $o(r_{\delta'} r_{\gamma}) < \infty$. \item $[[U_{\alpha}, U_{\beta}], U_{\gamma}] \subseteq U_{\mathrm{e}psilon}$, where $\mathrm{e}psilon \in \mathcal{P}hi, o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$ and $[U_{\mathrm{e}psilon}, U_{\beta}] = 1 = [U_{\mathrm{e}psilon}, U_{\gamma}]$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} We have $\vert (\delta, \gamma) \vert = 1, (\delta, \beta)=\mathrm{e}mptyset$ and we have $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta' \in (\alpha, \delta)$ such that $o(r_{\delta'} r_{\gamma}) < \infty$ by Lemma \ref{angle1over2}. 
$$\begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$\alpha$}; \draw (0, -1) to (5, 4); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.1) node [below]{$\beta$}; \draw (3.4, 4) to (9, 1.2); \draw (9, 1.2) to (8.9, 1.0); \draw (8.9, 1.25) to (8.8, 1.05); \draw (8.8, 1.3) to (8.7, 1.1); \draw (8.8, 1.2) node [right]{$\gamma$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\delta$}; \draw (0 , 2.6) to (10 , 2.6); \draw (0 , 2.6) to (0 , 2.3); \draw (0.1, 2.6) to (0.1, 2.3); \draw (0.2, 2.6) to (0.2, 2.3); \draw (0.1, 2.3) node [below]{}; \mathrm{e}nd{tikzpicture}$$ Since $(W, S)$ is of type $(2, 8, 8)$, we have $m_{\alpha, \beta} = m_{\delta, \gamma} = 8$. We assume that $U_{\beta}$ corresponds to $x_{2k}$ in Example $(16.9)$ of \cite{TW02}. Then $U_{\delta}$ corresponds to $x_{2k-1}$ and hence $[U_{\delta}, U_{\gamma}] =1$ by Lemma \ref{octagonproperties}. If $U_{\beta}$ corresponds to $x_{2k-1}$ we distinguish the following two cases: \begin{enumerate}[label=(\alph*)] \item $\vert (\alpha, \beta) \vert < 6$: It follows from the commutator relations of \cite{TW02}, that the $U_{\delta}$-part of $[U_{\alpha}, U_{\beta}]$ is contained in the centralizer of $U_{\delta}$ and such an element commutes with $U_{\gamma}$. \item $\vert (\alpha, \beta) \vert = 6$: Because of the angles it follows that $o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$, where $\{ \mathrm{e}psilon \} = (\delta, \gamma)$. \qedhere \mathrm{e}nd{enumerate} \mathrm{e}nd{proof} \begin{lemma}\label{auxlem238a} Let $(W, S)$ be of type $(2, 3, 8)$. Let $\{ -\alpha, \beta, \gamma \}$ be a triangle. Then one of the following hold: \begin{enumerate}[label=(\alph*)] \item $[[U_{\alpha}, U_{\gamma}], U_{\beta}] = [[U_{\alpha}, U_{\beta}], U_{\gamma}]=1$. \item $[U_{\alpha}, U_{\gamma}]=1$ and $[[U_{\alpha}, U_{\beta}], U_{\gamma}] \subseteq U_{\mathrm{e}psilon}$, where $\mathrm{e}psilon \in \mathcal{P}hi, o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$ and $[U_{\mathrm{e}psilon}, U_{\beta}] = 1 = [U_{\mathrm{e}psilon}, U_{\gamma}]$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} Assume that for any $\delta \in (\alpha, \beta)$ and any $\delta' \in (\alpha, \gamma)$ we have $[U_{\delta}, U_{\gamma}] = 1 = [U_{\delta'}, U_{\beta}]$. Then the claim follows. Now we assume that there exists $\delta \in (\alpha, \beta)$ such that $[U_{\delta}, U_{\gamma}] \neq 1$. Since $\angle r_{\gamma} r_{\delta} \in \{ \frac{1}{2}, \frac{2}{3} \}$, we have $\angle r_{\gamma} r_{\delta} = \frac{2}{3}$ by Lemma \ref{octagonproperties}. Applying Proposition \ref{Prop238} we obtain $\vert (\alpha, \beta) \vert \leq 3$ and $(\alpha, \gamma) = \mathrm{e}mptyset$. This implies $[U_{\alpha}, U_{\gamma}] = 1$. Let $n := \vert (\alpha, \beta) \vert$. If $n=1$ then the triangle $\{ -\alpha, \delta, \gamma \}$ is a fundamental triangle and hence we have $\angle r_{\gamma} r_{\alpha} = \frac{1}{2}$. Thus we have $o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$, where $\{ \mathrm{e}psilon \} = (\delta, \gamma)$. This finishes the claim. If $n=2$ then $U_{\alpha}$ corresponds to $U_{2k+1}$ of Example $(16.9)$ in \cite{TW02} since $U_{\delta}$ is abelian. 
Using the commutator relations and the fact that $(\alpha, \delta) = \{ \delta_0 \}$ and $\angle r_{\gamma} r_{\delta_0} \in \{ \frac{1}{3}, \frac{1}{2} \}$ we obtain $[[U_{\alpha}, U_{\beta}], U_{\gamma}] = 1$. If $n=3$ then we have $\angle r_{\alpha} r_{\beta} = \frac{1}{2}$ and we obtain $[U_{\alpha}, U_{\beta}]=1$ by Lemma \ref{octagonproperties}. \mathrm{e}nd{proof} \begin{lemma}\label{auxlem238b} Let $(W, S)$ be of type $(2, 3, 8)$. Let $\{ \alpha, \beta, \gamma \}$ be a set of three roots such that $o(r_{\alpha} r_{\beta}), o(r_{\beta} r_{\gamma}) < \infty$ and $\alpha \subsetneq \gamma$. Let $\delta \in (\alpha, \beta)$ such that $\{ -\delta, \beta, \gamma \}$ is a triangle with $(\delta, \gamma) \neq \mathrm{e}mptyset$. Then one of the following hold: \begin{enumerate}[label=(\alph*)] \item $[U_{\delta}, U_{\gamma}] = 1$. \item The $U_{\delta}$-part of $[U_{\alpha}, U_{\beta}]$ commutes with $U_{\gamma}$ and we have $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta \neq \delta' \in (\alpha, \beta)$ with $o(r_{\delta'} r_{\gamma}) < \infty$. \item $C := [[U_{\alpha}, U_{\beta}], U_{\gamma}] \subseteq U_{\mathrm{e}psilon}$, where $\mathrm{e}psilon \in \mathcal{P}hi, o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$ and $[C, U_{\beta}] = 1 = [C, U_{\gamma}]$. \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} Let $\delta \neq \delta' \in (\alpha, \beta)$ with $o(r_{\delta'} r_{\gamma}) < \infty$. Then $(\delta', \gamma) = \mathrm{e}mptyset$ by Proposition \ref{Prop238} and hence the second claim of $(b)$ follows. Now we prove the hypothesis. Therefore we distinguish the following cases: If $m_{\alpha, \beta} = 3$, then $\angle r_{\gamma} r_{\delta} \in \{ \frac{2}{8}, \frac{3}{8}, \frac{4}{8} \}$. Since $U_{\delta}$ is abelian we have $[U_{\delta}, U_{\gamma}]=1$ for $\angle r_{\gamma} r_{\delta} \in \{ \frac{2}{8}, \frac{4}{8} \}$ by Lemma \ref{octagonproperties}. Now we assume that $\angle r_{\gamma} r_{\delta} = \frac{3}{8}$. Let $\{\delta', \delta''\} = (\delta, \gamma)$ and assume that $\angle r_{\gamma} r_{\delta''} = \angle r_{\delta''} r_{\delta'} = \angle r_{\delta'} r_{\delta} = \frac{1}{8}$. Using the commutator relations of $(16.9)$ of \cite{TW02} one obtains that $[U_{\delta}, U_{\gamma}] \subseteq U_{\delta'}$ and $[[U_{\delta}, U_{\gamma}], U_{\gamma}] = 1$. Using the hyperbolic plane one obtains that $o(r_{\alpha} r_{\delta'}) < \infty$. Now we assume that $m_{\alpha, \beta} = 8$. If $m_{\delta, \gamma} = 8$ the claim follows by Lemma \ref{Prop238auxres} and Lemma \ref{octagonproperties}. Thus we can assume that $m_{\delta, \gamma} = 3$. This implies $\angle r_{\delta} r_{\beta} = \frac{1}{8}$ and $\angle r_{\gamma} r_{\delta} = \frac{2}{3}$. If $n := \vert (\alpha, \beta) \vert \geq 5$ then $o(r_{\alpha} r_{\delta'}) < \infty$, where $\{ \delta' \} = (\delta, \gamma)$ and the claim follows. If $n \leq 3$ we obtain $o(r_{\alpha} r_{\gamma}) < \infty$ which is a contradiction to the assumption. Thus we can assume $n=4$. Since $[U_{2i+1}, U_{2i+6}]_{2i+5} = 1$ the claim follows. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{remark} If $m_{st} \in \{4, 6\}$ there exist so called \textit{long roots} and \textit{short roots}. These are indicated by the \textit{Dynkin diagram}. An arrow from $s$ to $t$ means that the node $t$ corresponds to a short root. \mathrm{e}nd{remark} \begin{theorem}\label{Mainresult} Assume that $\mathcal{D}$ satisfies Condition $\costar$. 
Then $\mathcal{D}$ satisfies Condition $\nc$ if one of the following holds: \begin{enumerate}[label=(\alph*)] \item The Coxeter diagram of $(W, S)$ is the complete graph. \item $(W, S)$ is of type $(2, 3, 8)$ or of type $(2, 8, 8)$. \item $(W, S)$ is simply-laced. \item $(W, S)$ is of type $(2, 6, 6)$ and has Dynkin diagram \begin{tikzpicture} \node[dynkinnode] (A) at (0,0) {}; \node[dynkinnode] (B) at (1,0) {}; \node[dynkinnode] (C) at (2,0) {}; \draw (A) -- (B) -- (C); \doubleline{B}{A}; \doubleline{B}{C} \end{tikzpicture} or \begin{tikzpicture} \node[dynkinnode] (A) at (0,0) {}; \node[dynkinnode] (B) at (1,0) {}; \node[dynkinnode] (C) at (2,0) {}; \draw (A) -- (B) -- (C); \doubleline{A}{B}; \doubleline{C}{B} \end{tikzpicture}. \item $(W, S)$ is of type $(2, 6, 8)$ and has Dynkin diagram \begin{tikzpicture} \node[dynkinnode] (A) at (0,0) {}; \node[dynkinnode] (B) at (1,0) {}; \node[dynkinnode] (C) at (2,0) {}; \node[above] (bc) at (1.5, 0) {$8$}; \draw (A) -- (B) -- (C); \doubleline{B}{A} \end{tikzpicture}. \end{enumerate} \end{theorem}
\begin{proof} Let $\alpha \subsetneq \beta$ be two roots. Then there exists a minimal gallery $(c_0, \ldots, c_k)$ such that $(\alpha_1 = \alpha, \ldots, \alpha_k = \beta)$ is the sequence of roots which are crossed by that minimal gallery, i.e. $\{ c_{i-1}, c_i \} \in \partial \alpha_i$. We can assume that $k$ is minimal with this property. We prove the assertion by induction on $k$. For $k \leq 2$ there is nothing to show. For $k=3$ we have $(\alpha, \beta) = \emptyset$ and hence $[U_{\alpha}, U_{\beta}] = 1$. Now let $k>3$ and let $R$ be the rank $2$ residue containing $c_{k-2}, c_{k-1}, c_k$. Then there exists a minimal gallery $(d_0 = c_0, \ldots, d_k = c_k)$ such that $d_j = \proj_R c_0$ for some $j$. Since the set of roots which are crossed by the gallery $(c_0, \ldots, c_k)$ equals the set of roots which are crossed by $(d_0, \ldots, d_k)$, the minimality of $k$ yields that $\alpha$ (resp. $\beta$) is the first (resp. last) root which is crossed by the gallery $(d_0, \ldots, d_k)$. Thus we can assume that $\proj_R c_0$ is a chamber on the minimal gallery $(c_0, \ldots, c_k)$. Let $c_{j-1} = \proj_R c_0$. Then there exists a root $\gamma \in \Phi$ such that $\{ \gamma, \alpha_j \}$ is a root basis of the set of roots associated to $R$. Applying Lemma \ref{costarsimplerggenerate} we obtain $U_{\beta} \subseteq [U_{\alpha_j}, U_{\gamma}]$. If $o(r_{\alpha} r_{\alpha_j}) = o(r_{\alpha} r_{\gamma}) = \infty$, then $U_{\alpha}$ commutes with $U_{\alpha_j}$ and $U_{\gamma}$ by induction. Hence $U_{\alpha}$ commutes with the group generated by $U_{\alpha_j}$ and $U_{\gamma}$. Since this subgroup contains $U_{\beta}$, the claim follows. Thus we can assume that at least one of $o(r_{\alpha} r_j)$ and $o(r_{\alpha} r_{\gamma})$ is finite. In each case we will argue as before or use the three subgroup lemma. For the moment we therefore ignore the root $\beta$ and focus on the roots $\alpha, \alpha_j, \gamma$; in particular, the roles of $\alpha_j$ and $\gamma$ can be swapped. W.l.o.g. we can assume that $o(r_{\alpha} r_j) < \infty$. Then we consider the following two cases: \begin{enumerate}[label=(\alph*)] \item $o(r_{\alpha} r_{\gamma}) < \infty$: Then the set $\{ -\alpha, \alpha_j, \gamma \}$ is a triangle. If $(W, S)$ is as in $(a)$, we obtain $[U_{\alpha}, U_{\alpha_j}] = 1 = [U_{\alpha}, U_{\gamma}]$ by Lemma \ref{geometrictrianglecomplete}. As before, the claim follows.
If $(W, S)$ is of type $(2, 6, 6), (2, 6, 8)$ or $(2, 8, 8)$ we use Lemma \ref{geometrictrianglenotcomplete} and the three subgroup lemma and the claim follows. If $(W, S)$ is of type $(2, 3, 8)$ we will show that $[[U_{\alpha}, U_{\alpha_j}], U_{\gamma}] = [[U_{\alpha}, U_{\gamma}], U_{\alpha_j}] = 1$. W.l.o.g. we can assume that there exists $\delta \in (\alpha, \alpha_j)$ such that $[U_{\delta}, U_{\gamma}] \neq 1$ (otherwise we are done). Furthermore, we have $\angle r_{\gamma} r_{\delta} \in \{ \frac{1}{3}, \frac{1}{2}, \frac{2}{3} \}$. Using Lemma \ref{octagonproperties}, we obtain $m_{\delta, \gamma} = 3$. Applying Proposition \ref{Prop238} we obtain $n := \vert (\alpha, \alpha_j) \vert \leq 3$ and $(\alpha, \gamma) = \mathrm{e}mptyset$. This implies $[U_{\alpha}, U_{\gamma}] = 1$. If $n=1$ then the triangle $\{ -\alpha, \delta, \gamma \}$ is fundamental and hence $\angle r_{\gamma} r_{\alpha} = \frac{1}{2}$. Thus we have $o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$ and $(\alpha, \mathrm{e}psilon) = \mathrm{e}mptyset$, where $\{ \mathrm{e}psilon \} = (\delta, \gamma)$. Since $\angle r_j r_{\mathrm{e}psilon} = \frac{1}{2}$, we obtain $[U_{\alpha_j}, U_{\mathrm{e}psilon}] = 1$ and hence $[U_{\alpha}, U_{\beta}] \subseteq U_{\mathrm{e}psilon} \cap U_{(\alpha, \beta)} = 1$. This finishes the claim. If $n=2$ then $U_{\alpha}$ corresponds to $U_{2k+1}$ of Example $(16.9)$ in \cite{TW02}, since $U_{\delta}$ is abelian. Using the commutator relations and the fact that $(\alpha, \delta) = \{ \delta_0 \}$ and $\angle r_{\gamma} r_{\delta_0} \in \{ \frac{1}{3}, \frac{1}{2} \}$, we obtain $[[U_{\alpha}, U_{\alpha_j}], U_{\gamma}] = 1$. If $n=3$, then we have $\angle r_{\alpha} r_j = \frac{1}{2}$ and we obtain $[U_{\alpha}, U_{\alpha_j}] = 1$ by Lemma \ref{octagonproperties}. Now we assume that $(W, S)$ is simply-laced. Then we obtain that $T := \{ r_{\alpha}, r_j, r_{\gamma} \}$ is an affine reflection triangle. Using Lemma \ref{Theorem1.2CM} we obtain an irreducible affine parabolic subgroup $W_0 \leq W$ of rank at least $3$ such that $\langle T \rangle$ is conjugated to a subgroup of $W_0$. Let $g\in W$ such that $\langle T \rangle^g \leq W_0$. Then $r_{\alpha}^g, r_{\beta}^g \in W_0$. In view of Proposition \ref{Propsimplylacedaffine} the residue corresponding to $W_0$ satisfies Condition $\nc$ and we have $[U_{r_{\alpha}^g}, U_{r_{\beta}^g}] = 1$. Using $r_{\alpha}^g = r_{g^{-1} \alpha}$, we obtain \[ [U_{\alpha}, U_{\beta}]^g = [U_{\alpha}^g, U_{\beta}^g] = [U_{g^{-1}\alpha}, U_{g^{-1}\beta}] = 1 \] \item $o(r_{\alpha} r_{\gamma}) = \infty$: Then we have $\alpha \subsetneq \gamma$. Using induction we obtain $[U_{\alpha}, U_{\gamma}] = 1$. If $o(r_{\delta} r_{\gamma}) = \infty$ for any $\delta \in (\alpha, \alpha_j)$ then the claim follows by induction and the three subgroup lemma. Thus we can assume that there exists $\delta \in (\alpha, \alpha_j)$ such that $\{ -\delta, \alpha_j, \gamma \}$ is a triangle. We first assume that $(W, S)$ is as in $(a)$. By Lemma \ref{geometrictrianglecomplete} we have $[U_{\delta}, U_{\gamma}] = 1$. The claim follows now from the three subgroup lemma. Let $(W, S)$ be of type $(2, 8, 8)$ and assume that $(\delta, \gamma) \neq \mathrm{e}mptyset$. Then we have $\vert (\delta, \gamma) \vert = 1, (\delta, \alpha_j) = \mathrm{e}mptyset$ and we have $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta' \in (\alpha, \delta)$ such that $o(r_{\delta'} r_{\gamma}) < \infty$ by Lemma \ref{angle1over2}. 
Since $(W, S)$ is of type $(2, 8, 8)$, we have $m_{\alpha, \alpha_j} = m_{\delta, \gamma} = 8$. We assume that $U_{\alpha_j}$ corresponds to $x_{2k}$ in Example $(16.9)$ of \cite{TW02}. Then $U_{\delta}$ corresponds to $x_{2k-1}$ and hence $[U_{\delta}, U_{\gamma}] =1$ by Lemma \ref{octagonproperties}. If $U_{\alpha_j}$ corresponds to $x_{2k-1}$ we distinguish the following two cases: \begin{enumerate}[label=(\roman*)] \item $\vert (\alpha, \alpha_j) \vert < 6$: It follows from the commutator relations of \cite{TW02}, that the $U_{\delta}$-part of $[U_{\alpha}, U_{\beta}]$ is contained in the centralizer of $U_{\delta}$ and such an element commutes with $U_{\gamma}$. Using induction and the three subgroup lemma the claim follows. \item $\vert (\alpha, \alpha_j) \vert = 6$: Because of the angles it follows that $o(r_{\alpha} r_{\mathrm{e}psilon}) < \infty$, where $\{ \mathrm{e}psilon \} = (\delta, \gamma)$. Using induction and the three subgroup lemma, we obtain $[U_{\alpha}, U_{\beta}] \subseteq U_{\mathrm{e}psilon} \cap U_{(\alpha, \beta)} = 1$ as above. \mathrm{e}nd{enumerate} Now we assume that $(W, S)$ is of type $(2, 6, 6)$ and the Dynkin diagram is as in the statement. We use the notations of \cite{TW02}. Then the middle node is in both residues either the field or the vector space. We can assume that $(\delta, \gamma) \neq \mathrm{e}mptyset$. We obtain $m_{\alpha, \beta} = 6$ and $\vert (\delta, \gamma) \vert = 1$ by Lemma \ref{angle1over2}. We assume that $U_{\alpha_j}$ corresponds to $x_{2k-1}$ in Example $(16.8)$ of \cite{TW02}. Then $[U_{\delta}, U_{\gamma}]=1$. Now we assume that $U_{\alpha_j}$ corresponds to $x_{2k}$. If $k= 2$ we have $[U_{\alpha}, U_{\alpha_j}] = 1$. Thus we assume $k \in \{1, 3\}$. If $U_{\alpha}$ does not correspond to $U_1$ in Example $(16.9)$ of \cite{TW02} we obtain $[U_{\alpha}, U_{\alpha_j}] \subseteq \langle U_{\delta'} \mid \delta' \in (\alpha, \alpha_j) \backslash \{ \delta \} \rangle$ and $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta \neq \delta' \in (\alpha, \alpha_j)$ with $o(r_{\delta'} r_{\gamma}) < \infty$. Otherwise we obtain $o(r_{\alpha} r_{\delta'}) <\infty$, where $\delta' \in (\delta, \gamma)$ and $[U_{\delta'}, U_{\alpha_j}] = 1 = [U_{\delta'}, U_{\gamma}]$. Now the claim follows from the three subgroup lemma. Let $(W, S)$ be of type $(2, 6, 8)$ with diagram as in the statement. Again we use the notations of \cite{TW02}. We can assume that there exists $\mathrm{e}psilon \in (\delta, \gamma)$. Using Lemma \ref{angle1over2} we obtain that $\{ -\delta, \beta, \mathrm{e}psilon \}$ and $\{ -\mathrm{e}psilon, \gamma, \beta \}$ are fundamental triangles. Thus $U_{\delta}$ must be parametrized by the middle node. Using the commutator relations we obtain in both cases ($m_{\delta, \gamma} = 8$ and $m_{\delta, \gamma} = 6$) that $[U_{\delta}, U_{\gamma}] = 1$. Furthermore, we have $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta \neq \delta' \in (\alpha, \beta)$ with $o(r_{\delta'} r_{\gamma}) < \infty$. Using induction and the three subgroup lemma the claim follows. Let $(W, S)$ be of type $(2, 3, 8)$ and assume that $(\delta, \gamma) \neq \mathrm{e}mptyset$. We distinguish the following cases: If $m_{\alpha, \alpha_j} = 3$, then $\vert (\delta, \gamma) \vert \in \{1, 2, 3\}$ and $m_{\delta, \gamma} = 8$. Since $U_{\delta}$ is abelian we have $[U_{\delta}, U_{\gamma}]=1$ for $\vert (\delta, \gamma) \vert \in \{1, 3\}$ by Lemma \ref{octagonproperties}. Now we assume that $\vert (\delta, \gamma) \vert = 2$. 
Let $\{\delta', \delta''\} = (\delta, \gamma)$ and assume that $\angle r_{\gamma} r_{\delta''} = \angle r_{\delta''} r_{\delta'} = \angle r_{\delta'} r_{\delta} = \frac{1}{8}$. Using the commutator relations of $(16.9)$ of \cite{TW02} one obtains that $[U_{\delta}, U_{\gamma}] \subseteq U_{\delta'}$ and $[[U_{\delta}, U_{\gamma}], U_{\gamma}] = 1$. Using the hyperbolic plane one obtains that $o(r_{\alpha} r_{\delta'}) < \infty$ and hence $[U_{\alpha}, U_{\beta}] \subseteq U_{\delta'} \cap U_{(\alpha, \beta)} = 1$ as above. Now we assume that $m_{\alpha, \alpha_j} = 8$. If $m_{\delta, \gamma} = 8$ one can show that $(\delta, \alpha_j) = \mathrm{e}mptyset$ and the claim follows by Lemma \ref{Prop238auxres} and Lemma \ref{octagonproperties}. Thus we can assume that $m_{\delta, \gamma} = 3$. This implies $m_{\alpha, \alpha_j} = 8$ and $(\delta, \alpha_j) = \mathrm{e}mptyset$. If $n := \vert (\alpha, \alpha_j) \vert \geq 5$ then $o(r_{\alpha} r_{\delta'}) < \infty$, where $\{ \delta' \} = (\delta, \gamma)$ and the claim follows as above using Lemma \ref{Prop238auxres} and induction. If $n \leq 3$ we obtain $o(r_{\alpha} r_{\gamma}) < \infty$ which is a contradiction to the assumption. Thus we can assume $n=4$. Since $[U_{2i+1}, U_{2i+6}]_{2i+5} = 1$ the claim follows from Lemma \ref{Prop238auxres}, induction and the three subgroup lemma. If $(W, S)$ is simply-laced, then $\{ -\delta, \alpha_j, \gamma \}$ is a triangle. Using similar arguments as in the previous case the claim follows. \qedhere \mathrm{e}nd{enumerate} \mathrm{e}nd{proof} \section{Non-cyclic hyperbolic cases}\label{sec:Non-cyclic} As we have seen in Theorem \ref{Mainresult}, Condition $\costar$ implies Condition $\nc$ in a lot of cases, where $(W, S)$ is of rank $3$ and of hyperbolic type. In this section we will consider the hyperbolic cases of rank $3$ which were not mentioned in Theorem \ref{Mainresult}. In this section we let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \mathcal{P}hi})$ be an RGD-system of type $(W, S)$. In the proofs of this section we use the notations of \cite{TW02}. From now on by a Coxeter complex we mean the geometric realization of $\Sigma(W, S)$. \subsection*{The case $(2, 4, 6)$} \begin{theorem} There exists an RGD-system of type $(2, 4, 6)$, in which Condition $\costar \ldots$ \begin{enumerate}[label=(\alph*)] \item $\ldots$ implies Condition $\nc$. \item $\ldots$ does not imply Condition $\nc$. \mathrm{e}nd{enumerate} \mathrm{e}nd{theorem} \begin{proof} Let $\mathbb{K}$ be a field. We take the hexagonal system $(\mathbb{E} / \mathbb{K})^{\circ}$ of type $(1/\mathbb{K})$ (cf. Example $(15.20)$ of \cite{TW02}) and for the quadrangle we take $\mathcal{Q}_I(\mathbb{K}, \mathbb{K}_0, \sigma)$ (cf. Example $(16.2)$ of \cite{TW02}). 
We assume that the middle node of the diagram is parametrized by the field $\mathbb{K}$: \[ \begin{tikzpicture} \node[below] (a) at (0, -0.1) {$\mathbb{K}_0$}; \node[below] (b) at (2, -0.1) {$\mathbb{K}$}; \node[below] (c) at (4, -0.1) {$\mathbb{E}$}; \node[above] (d) at (1, 0) {$4$}; \node[above] (e) at (3, 0) {$6$}; \filldraw [black] (0, 0) circle (2pt) (2, 0) circle (2pt) (4, 0) circle (2pt); \draw (0, 0) -- (2, 0) -- (4, 0); \mathrm{e}nd{tikzpicture} \] In the Coxeter complex of type $(2, 4, 6)$ there exists the following figure such that $(\mathrm{e}psilon', \delta) = (\mathrm{e}psilon', \gamma) = (\alpha, \gamma) = \mathrm{e}mptyset, \angle r_{\mathrm{e}psilon '} r_{\mathrm{e}psilon} = \frac{1}{4}$ and $\angle r_{\alpha} r_{\mathrm{e}psilon} = \angle r_{\mathrm{e}psilon} r_{\delta} = \angle r_{\delta} r_{\gamma} = \frac{1}{6}$: \[ \begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$\alpha$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.1) node [below]{$\delta$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\gamma$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\mathrm{e}psilon$}; \draw (0 , 2.6) to (10 , 2.6); \draw (0 , 2.6) to (0 , 2.3); \draw (0.1, 2.6) to (0.1, 2.3); \draw (0.2, 2.6) to (0.2, 2.3); \draw (0.1, 2.3) node [below]{$\mathrm{e}psilon'$}; \mathrm{e}nd{tikzpicture} \] Thus $U_{\mathrm{e}psilon}$ is parametrized by the additive group of the field $\mathbb{K}$. Using induction and the commutator relations (cf. Chapter $16$ of \cite{TW02}) we obtain: \[ \left[ x_{\alpha}(v), \prod_{i=1}^{n} [x_{\delta}(w_i), x_{\gamma}(k_i)] \right] = x_{\mathrm{e}psilon'}\left( -\sum_{i=1}^{n} T(v, w_i)^{\sigma}k_i + k_i^{\sigma} T(v, w_i) \right) \] Now let $\sigma = \id$ and $\mathbb{E} = \mathbb{K} = \mathbb{K}_0$. Using the definition of $T$ and $\sigma$ in this example the commutator relation above reduces to \[ \left[ x_{\alpha}(v), \prod_{i=1}^{n} [x_{\delta}(w_i), x_{\gamma}(k_i)] \right] = x_{\mathrm{e}psilon'}\left( -6v \sum_{i=1}^{n} w_i k_i \right) \] Now we assume that $\mathbb{K}$ is a field of characteristic different from $2$ and $3$. Then we obtain \[ [x_{\alpha}(1), [x_{\delta}(1), x_{\gamma}(1)]] = x_{\mathrm{e}psilon'}(-6) \neq 1 \] Thus there must be a root in the residue with root basis $\{ \delta, \gamma \}$, which does not commute with $x_{\alpha}$ (otherwise the previous commutator would be trivial). Hence this RGD-system does not satisfy Condition $\nc$ and part $(b)$ follows. For part $(a)$ we could take a field of characteristic $2$ with at least $4$ elements. The existence of such an RGD-system follows from the theory of groups of Kac-Moody type. For example one can take the split Kac-Moody group over $\mathbb{K}$ of type $(W, S)$ with Cartan matrix $\begin{pmatrix} 2 & -1 & 0 \\ -2 & 2 & -1 \\ 0 & -3 & 2 \mathrm{e}nd{pmatrix}$. \mathrm{e}nd{proof} \subsection*{The case $(2, 6, 6)$} \begin{theorem} There exists an RGD-system of type $(2, 6, 6)$, in which Condition $\costar$ implies Condition $\nc$. 
\mathrm{e}nd{theorem} \begin{proof} Consider the split Kac-Moody group over $\mathbb{K} \neq \{ \mathbb{F}_2, \mathbb{F}_3 \}$ of type $(2, 6, 6)$ with Cartan matrix $\begin{pmatrix} 2 & -3 & 0 \\ -1 & 2 & -1 \\ 0 & -3 & 2 \mathrm{e}nd{pmatrix}$ or $\begin{pmatrix} 2 & -1 & 0 \\ -3 & 2 & -3 \\ 0 & -1 & 2 \mathrm{e}nd{pmatrix}$. Then the corresponding Dynkin diagram is as in Theorem \ref{Mainresult} and the claim follows. \mathrm{e}nd{proof} \begin{comment} \begin{proof} Let $\mathbb{K}, \mathbb{K}'$ be fields and let $V$ (resp. $V'$) be a $\mathbb{K}$ vector space (resp. $\mathbb{K}'$ vector space). Assume that the diagram is of one of the following forms: $$ \begin{tikzpicture} \node[below] (a) at (0, -0.1) {$\mathbb{K}$}; \node[below] (b) at (2, -0.1) {$V = V'$}; \node[below] (c) at (4, -0.1) {$\mathbb{K}'$}; \node[above] (d) at (1, 0) {$6$}; \node[above] (e) at (3, 0) {$6$}; \filldraw [black] (0, 0) circle (2pt) (2, 0) circle (2pt) (4, 0) circle (2pt); \draw (0, 0) -- (2, 0) -- (4, 0); \mathrm{e}nd{tikzpicture}$$ $$\begin{tikzpicture} \node[below] (a) at (0, -0.1) {$V$}; \node[below] (b) at (2, -0.1) {$\mathbb{K} = \mathbb{K}'$}; \node[below] (c) at (4, -0.1) {$V'$}; \node[above] (d) at (1, 0) {$6$}; \node[above] (e) at (3, 0) {$6$}; \filldraw [black] (0, 0) circle (2pt) (2, 0) circle (2pt) (4, 0) circle (2pt); \draw (0, 0) -- (2, 0) -- (4, 0); \mathrm{e}nd{tikzpicture}$$ This means that the middle node is in both cases either the field or the vector space. Let $\{ \alpha, \beta, \gamma \}$ be a set of three roots such that $o(r_{\alpha} r_{\beta}), o(r_{\beta} r_{\gamma}) < \infty$ and $\alpha \subsetneq \gamma$. Let $\delta \in (\alpha, \beta)$ such that $\{ -\delta, \beta, \gamma \}$ is a triangle with $(\delta, \gamma) \neq \mathrm{e}mptyset$. Then we obtain $m_{\alpha, \beta} = 6$ and $\vert (\delta, \gamma) \vert = 1$. We assume that $U_{\beta}$ corresponds to $x_{2k-1}$ in Example $(16.8)$ of \cite{TW02}. Then $[U_{\delta}, U_{\gamma}]=1$. Now we assume that $U_{\beta}$ corresponds to $x_{2k}$. If $k= 2$ we have $[U_{\alpha}, U_{\beta}] = 1$. Thus we assume $k=3$. If $U_{\alpha}$ does not correspond to $U_1$ in Example $(16.9)$ of \cite{TW02} we obtain $[U_{\alpha}, U_{\beta}] \subseteq \langle U_{\delta'} \mid \delta' \in (\alpha, \beta) \backslash \{ \delta \} \rangle$ and $(\delta', \gamma) = \mathrm{e}mptyset$ for any $\delta \neq \delta' \in (\alpha, \beta)$ with $o(r_{\delta'} r_{\gamma}) < \infty$. Otherwise we obtain $o(r_{\alpha} r_{\delta'}) <\infty$, where $\delta' \in (\delta, \gamma)$ and $[U_{\delta'}, U_{\beta}] = 1 = [U_{\delta'}, U_{\gamma}]$. Using the same arguments as in Theorem \ref{Mainresult}, such an RGD-system satisfies Condition $\nc$. An example is the split Kac-Moody group over $\mathbb{K} \neq \mathbb{F}_2$ of type $(W, S)$ with Cartan matrix $\begin{pmatrix} 2 & -3 & 0 \\ -1 & 2 & -1 \\ 0 & -3 & 2 \mathrm{e}nd{pmatrix}$ or $\begin{pmatrix} 2 & -1 & 0 \\ -3 & 2 & -3 \\ 0 & -1 & 2 \mathrm{e}nd{pmatrix}$. \mathrm{e}nd{proof} \mathrm{e}nd{comment} \begin{theorem} There exists an RGD-system of type $(2, 6, 6)$, in which Condition $\costar$ does not imply Condition $\nc$. \mathrm{e}nd{theorem} \begin{proof} Assume that \begin{tikzpicture} \node[dynkinnode] (A) at (0,0) {}; \node[dynkinnode] (B) at (1,0) {}; \node[dynkinnode] (C) at (2,0) {}; \draw (A) -- (B) -- (C); \doubleline{A}{B}; \doubleline{B}{C} \mathrm{e}nd{tikzpicture} is the Dynkin diagram of the RGD-system. Let $\mathbb{K}$ be a field and $V$ be a vector space over $\mathbb{K}$. 
Then the right node is parametrized by $V$ and the other two nodes are parametrized by $\mathbb{K}$. In the Coxeter complex of type $(2, 6, 6)$ there exists the following figure such that $(\mathrm{e}psilon', \delta) = (\mathrm{e}psilon', \gamma) = (\alpha, \gamma) = \mathrm{e}mptyset, \angle r_{\mathrm{e}psilon '} r_{\mathrm{e}psilon} = \angle r_{\alpha} r_{\mathrm{e}psilon} = \angle r_{\mathrm{e}psilon} r_{\delta} = \angle r_{\delta} r_{\gamma} = \frac{1}{6}$ and $o(r_{\alpha} r_{\mathrm{e}psilon'}) = o(r_{\alpha} r_{\gamma}) = \infty$: \[ \begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$\alpha$}; \draw (0, -1) to (5, 4); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.1) node [below]{$\delta$}; \draw (3.4, 4) to (9, 1.2); \draw (9, 1.2) to (8.9, 1.0); \draw (8.9, 1.25) to (8.8, 1.05); \draw (8.8, 1.3) to (8.7, 1.1); \draw (8.8, 1.2) node [right]{$\gamma$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\mathrm{e}psilon$}; \draw (0 , 2.6) to (10 , 2.6); \draw (0 , 2.6) to (0 , 2.3); \draw (0.1, 2.6) to (0.1, 2.3); \draw (0.2, 2.6) to (0.2, 2.3); \draw (0.1, 2.3) node [below]{$\mathrm{e}psilon'$}; \mathrm{e}nd{tikzpicture} \] We can assume that $U_{\alpha}$ is parametrized by the vector space $V$. Then $U_{\mathrm{e}psilon}$ is parametrized by the field $\mathbb{K}$. We remark that $[U_{\mathrm{e}psilon}, U_{\gamma}] \neq 1$ because of the diagram. We use for the hexagon the hexagonal system defined in $(15.20)$ of \cite{TW02}. Using the commutator relations (cf. Chapter $16$ of \cite{TW02}) we obtain: \[ \left[ x_{\alpha}(v), \prod_{i=1}^{n} [x_{\delta}(w_i), x_{\gamma}(k_i)] \right] = x_{\mathrm{e}psilon'}\left( \sum_{i=1}^{n} T\left( T(v, w_i), k_i \right) \right) = x_{\mathrm{e}psilon '}\left( 9v \sum_{i=1}^{n} w_i k_i \right) \] For $0 \neq w\in V$ we obtain $[x_{\alpha}(1), [x_{\delta}(w), x_{\gamma}(1)]] = x_{\mathrm{e}psilon'}(9w) \neq 1$ if the characteristic of $\mathbb{K}$ is different from $3$ and hence such an RGD-system does not satisfy Condition $\nc$. An example of such an RGD-system is provided by the split Kac-Moody group over $\mathbb{K} \neq \mathbb{F}_2$ of characteristic $\neq 3$ with Cartan matrix $\begin{pmatrix} 2 & -1 & 0 \\ -3 & 2 & -1 \\ 0 & -3 & 2 \mathrm{e}nd{pmatrix}$. \mathrm{e}nd{proof} \subsection*{The case $(2, 4, 8)$} \begin{theorem} There exists an RGD-system of type $(2, 4, 8)$ such that Condition $\costar \ldots$ \begin{enumerate}[label=(\alph*)] \item $\ldots$ implies Condition $\nc$. \item $\ldots$ does not imply Condition $\nc$. \mathrm{e}nd{enumerate} \mathrm{e}nd{theorem} \begin{proof} Let $(\mathbb{K}, \sigma)$ be as in Example $(10.12)$ of \cite{TW02}. Let $\mathbb{E}$ be a field containing $\mathbb{K}$ and let $a, b \in \mathbb{E}$ algebraically independent over $\mathbb{K}$ such that $\mathbb{E} = \mathbb{K}(a, b)$. We extend $\sigma$ to an endomorphism of $\mathbb{E}$ by setting $a^{\sigma} = b$ and $b^{\sigma} = a^2$. Then $(\mathbb{E}, \sigma)$ is an octagonal set as in Example $(10.12)$ of \cite{TW02}. For the quadrangle we take $\mathcal{Q}_I(\mathbb{E}, \mathbb{E}_0, \tau)$ (cf. Example $(16.2)$ of \cite{TW02}), where $\tau$ fixes $\mathbb{K}$ pointwise and interchanges $a$ and $b$. 
We assume that the middle node of the diagram is parametrized by the field $\mathbb{E}$: \[ \begin{tikzpicture} \node[below] (a) at (0, -0.1) {$\mathbb{E}_0$}; \node[below] (b) at (2, -0.1) {$\mathbb{E}$}; \node[below] (c) at (4, -0.1) {$\mathbb{E}\times \mathbb{E}$}; \node[above] (d) at (1, 0) {$4$}; \node[above] (e) at (3, 0) {$8$}; \filldraw [black] (0, 0) circle (2pt) (2, 0) circle (2pt) (4, 0) circle (2pt); \draw (0, 0) -- (2, 0) -- (4, 0); \mathrm{e}nd{tikzpicture} \] In the Coxeter complex of type $(2, 4, 8)$ there exists the following figure such that $(\mathrm{e}psilon', \delta) = (\mathrm{e}psilon', \gamma) = (\alpha, \gamma) = \mathrm{e}mptyset, m_{\mathrm{e}psilon, \gamma} = 4$ and $\angle r_{\alpha} r_{\mathrm{e}psilon} = \angle r_{\mathrm{e}psilon} r_{\delta} = \angle r_{\delta} r_{\gamma} = \frac{1}{8}$: \[ \begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$\alpha$}; \draw (0, -1) to (6, 5); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.1) node [below]{$\delta$}; \draw (5, 5) to (8, -1); \draw (8 , -1 ) to (7.8, -1.1); \draw (7.95, -0.9) to (7.75, -1); \draw (7.9, -0.8) to (7.7, -0.9); \draw (7.75, -1.1) node [below]{$\gamma$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\mathrm{e}psilon$}; \draw (0 , 2.6) to (10 , 2.6); \draw (0 , 2.6) to (0 , 2.3); \draw (0.1, 2.6) to (0.1, 2.3); \draw (0.2, 2.6) to (0.2, 2.3); \draw (0.1, 2.3) node [below]{$\mathrm{e}psilon'$}; \mathrm{e}nd{tikzpicture} \] Thus $U_{\mathrm{e}psilon}$ is parametrized by the additive group of the field $\mathbb{E}$. Using induction and the commutator relations (cf. Chapter $16$ of \cite{TW02}) we obtain: \[ \left[ x_{\alpha}(t, u), \prod_{i=1}^{n} [x_{\gamma}(k_i), x_{\delta}(t_i, u_i)] \right] = x_{\mathrm{e}psilon'}\left( \sum_{i=1}^{n} (uu_i)^{\tau}k_i + k_i^{\tau}uu_i \right) \] This implies that $[x_{\alpha}(0, 1), [ x_{\gamma}(1), x_{\delta}( 1, a ) ] ] = x_{\mathrm{e}psilon'}( b + a ) \neq 1$ and hence there exists $2 \leq i \leq 7$ with $[x_{\alpha}(0, 1), x_i(k)] \neq 1$ for some $k \in \mathbb{K} \cup \left( \mathbb{K} \times \mathbb{K} \right)$ (depending if $i$ is even or odd). This shows part $(b)$. For part $(a)$ we take the same example as above, except that the map $\tau = \id$ in the quadrangle. The existence of such RGD-systems is established by applying a Suzuki-Ree twist to a suitable split Kac-Moody group. The construction uses the same strategy as the construction given in \cite{He90}. \mathrm{e}nd{proof} \subsection*{The case $(2, 6, 8)$} \begin{theorem} There exists an RGD-system of type $(W, S)$, in which Condition $\costar$ implies Condition $\nc$. \mathrm{e}nd{theorem} \begin{proof} An RGD-system with diagram \begin{tikzpicture} \node[dynkinnode] (A) at (0,0) {}; \node[dynkinnode] (B) at (1,0) {}; \node[dynkinnode] (C) at (2,0) {}; \node[above] (bc) at (1.5, 0) {$8$}; \draw (A) -- (B) -- (C); \doubleline{B}{A} \mathrm{e}nd{tikzpicture} is an example. The existence of such an RGD-system can be proved along the lines in \cite{He90}. \mathrm{e}nd{proof} \begin{theorem} There exists an RGD-system of type $(2, 6, 8)$, in which Condition $\costar$ does not imply Condition $\nc$. 
\mathrm{e}nd{theorem} \begin{proof} Let $(\mathbb{K}, \sigma)$ be as in Example $(10.12)$ of \cite{TW02}. Let $\mathbb{E}$ be a field containing $\mathbb{K}$ and let $a, b \in \mathbb{E}$ algebraically independent over $\mathbb{K}$ such that $\mathbb{E} = \mathbb{K}(a, b)$. We extend $\sigma$ to an endomorphism of $\mathbb{E}$ by setting $a^{\sigma} = b$ and $b^{\sigma} = a^2$. Then $(\mathbb{E}, \sigma)$ is an octagonal set as in Example $(10.12)$ of \cite{TW02}. For the hexagon we take the hexagonal system $(\mathbb{E}/\mathbb{E})^{\circ}$ of type $(1/\mathbb{E})$ (cf. Example $(15.20)$). We assume that \begin{tikzpicture} \node[dynkinnode] (A) at (0,0) {}; \node[dynkinnode] (B) at (1,0) {}; \node[dynkinnode] (C) at (2,0) {}; \node[above] (bc) at (1.5, 0) {$8$}; \draw (A) -- (B) -- (C); \doubleline{A}{B} \mathrm{e}nd{tikzpicture} is the diagram of the RGD-system. This means that the right node is parametrized by $\mathbb{E} \times \mathbb{E}$ and the other two nodes are parametrized by $\mathbb{E}$. In the Coxeter complex of type $(2, 6, 8)$ there exists the following figure such that $(\mathrm{e}psilon', \delta) = (\mathrm{e}psilon', \gamma) = (\alpha, \gamma) = \mathrm{e}mptyset, m_{\mathrm{e}psilon, \gamma} = 6, \angle r_{\alpha} r_{\mathrm{e}psilon} = \angle r_{\mathrm{e}psilon} r_{\delta} = \angle r_{\delta} r_{\gamma} = \frac{1}{8}$ and $o(r_{\alpha} r_{\gamma}) = o(r_{\alpha} r_{\mathrm{e}psilon'}) = \infty$: \[ \begin{tikzpicture}[scale=0.5] \draw (0 , 0 ) to (10 , 0 ); \draw (10 , 0 ) to (10 , -0.3); \draw (9.9, 0 ) to (9.9, -0.3); \draw (9.8, 0 ) to (9.8, -0.3); \draw (9.9, -0.3) node [below]{$\alpha$}; \draw (0, -1) to (5, 4); \draw (0 , -1 ) to (0.2, -1.2); \draw (0.1, -0.9) to (0.3, -1.1); \draw (0.2, -0.8) to (0.4, -1 ); \draw (0.3, -1.1) node [below]{$\delta$}; \draw (3.4, 4) to (9, 1.2); \draw (9, 1.2) to (8.9, 1.0); \draw (8.9, 1.25) to (8.8, 1.05); \draw (8.8, 1.3) to (8.7, 1.1); \draw (8.8, 1.2) node [right]{$\gamma$}; \draw (0 , -0.5 ) to (9, 4); \draw (9 , 4 ) to (9.1, 3.8); \draw (8.9, 3.95) to (9 , 3.75); \draw (8.8, 3.9 ) to (8.9, 3.7); \draw (9, 3.75) node [below]{$\mathrm{e}psilon$}; \draw (0 , 2.6) to (10 , 2.6); \draw (0 , 2.6) to (0 , 2.3); \draw (0.1, 2.6) to (0.1, 2.3); \draw (0.2, 2.6) to (0.2, 2.3); \draw (0.1, 2.3) node [below]{$\mathrm{e}psilon'$}; \mathrm{e}nd{tikzpicture} \] Thus $U_{\mathrm{e}psilon}$ is parametrized by the additive group of the field $\mathbb{E}$. Using induction and the commutator relations (cf. Chapter $16$ of \cite{TW02}) we obtain: \[ \left[ x_{\alpha}(t, u), \prod_{i=1}^{n} [x_{\gamma}(k_i), x_{\delta}(t_i, u_i)] \right] = x_{\mathrm{e}psilon'}\left( \sum_{i=1}^{n} T(uu_i, k_i) \right) = x_{\mathrm{e}psilon'}\left( u\sum_{i=1}^{n} u_ik_i \right) \] This implies that $[x_{\alpha}(0, 1), [ x_{\gamma}(1), x_{\delta}( 0, 1 ) ]] = x_{\mathrm{e}psilon'}( 1 ) \neq 1$. This shows that such an RGD-system does not satisfy Condition $\nc$. The existence of such RGD-systems is established by applying a Suzuki-Ree twist to a suitable split Kac-Moody group. The construction uses the same strategy as the construction given in \cite{He90}. \mathrm{e}nd{proof} \appendix \renewcommand{Appendix \Alph{section}}{Appendix \Alph{section}} \renewcommand\thecountercheck{(\Alph{section}.\arabic{countercheck})} \section{Computations in an RGD-system of type $G_2$} Let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \mathcal{P}hi})$ be an RGD-system of type $G_2$. 
If $\mathcal{D}$ satisfies Condition $\costar$ we can write every element of $U_{\gamma}$ as a product of elements $[u_{\alpha}, u_{\beta}]$, where $u_{\alpha} \in U_{\alpha}, u_{\beta} \in U_{\beta}, \mathcal{P}i = \{ \alpha, \beta \}$ and $\gamma \in (\alpha, \beta)$. In this section we will state concrete elements $u_{\alpha}, u_{\beta}$ with the required property for some $u_{\gamma} \in U_{\gamma}$. We remark that we do not assume Condition $\costar$ in this section. \begin{lemma} Using the notations of \cite{TW02} we obtain the following commutator relations: \begin{align*} [x_1(a_1), x_6(t_1)] [x_1(a_2), x_6(t_2)] = &x_2(-t_1N(a_1)-t_2N(a_2)) \\ \cdot &x_3(t_1a_1^{\#} + t_2a_2^{\#}) \\ \cdot &x_4(t_1^2N(a_1) + t_2^2N(a_2) + T(t_1a_1, t_2a_2^{\#})) \\ \cdot &x_5(-t_1a_1 - t_2a_2) \mathrm{e}nd{align*} \mathrm{e}nd{lemma} \begin{proof} This is an elementary computation. \mathrm{e}nd{proof} \begin{remark}\label{hexagoncommutatorrelations} Using the notations of \cite{TW02} and the previous lemma we obtain the following commutator relations: \begin{enumerate}[label=(\alph*)] \item $[x_1(-k \cdot 1_V), x_6(k^{-1})] [x_1(-k \cdot 1_V), x_6(-k^{-1})] = x_4(k)$. \item $[x_1(v), x_6(k)] [x_1(-v), x_6(k)] = x_3(2kv^{\#}) x_4(3k^2 N(v))$. \item $[x_1(v), x_6(k^3)] [x_1(kv), x_6(-1)] = x_3(\left( k^3 - k^2 \right)v^{\#}) x_4(N(v) \left( k^6 + k^3 - 3k^5 \right)) x_5((k-k^3)v)$. \mathrm{e}nd{enumerate} If the characteristic of the field is different from $2$ and $v\neq 0$, we can choose $v' := v^{\#} \neq 0$ and $k' := (2N(v^{\#}))^{-1}$ and obtain the following: \[ [x_1(v'), x_6(k')] [x_1(-v'), x_6(k')] = x_3(v) x_4(3(k')^2 N(v')) \] \mathrm{e}nd{remark} \section{Computations in an RGD-system of type $I_2(8)$} In this section we let $\mathcal{D} = (G, (U_{\alpha})_{\alpha \in \mathcal{P}hi})$ be an RGD-system of type $I_2(8)$. 
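\begin{remark} Recall from \cite{TW02} that the root groups of a Moufang octagon are coordinatized by an octagonal set $(\mathbb{K}, \sigma)$: here $\mathbb{K}$ is a field of characteristic $2$ and $\sigma$ is a Tits endomorphism of $\mathbb{K}$, i.e. $t^{\sigma^2} = t^2$ for all $t \in \mathbb{K}$. Both facts are used tacitly in the computations of this section: signs can be ignored (for instance $x_4(tu)^{-1} = x_4(tu)$), and exponents are simplified by means of identities such as $\left( t^{\sigma+1} \right)^{\sigma} = t^{\sigma^2 + \sigma} = t^{\sigma + 2}$. \end{remark}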
\begin{lemma}\label{proofofoctagoncommutatorrelations} Using the notations of \cite{TW02} we obtain the following commutator relations: \begin{enumerate}[label=(\alph*)] \item The general commutator of two basis roots is given by \begin{align*} \mathrm{e}veryafterautobreak{\cdot} \begin{autobreak} [x_1(t), x_8(u_1, u_2)] = x_2( t^{\sigma +1}u_1 + t^{\sigma+1} u_2^{\sigma+1}, tu_2 ) x_3( t^{\sigma+1} u_2^{\sigma+2} + t^{\sigma+1} u_1 u_2 + t^{\sigma+1} u_1^{\sigma} ) x_4( t^{\sigma+2} u_1^{\sigma} u_2^{\sigma+1} + t^{\sigma +2} u_1^2 u_2 + t^{\sigma +2} u_2^{2\sigma +3}, t^{\sigma} u_1 ) x_5( t^{\sigma+1} u_1^{\sigma} u_2^{\sigma} + t^{\sigma+1} u_2^{2\sigma+2} + t^{\sigma+1} u_1^2 ) x_6( t^{\sigma+1} u_1^2 u_2 + t^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t^{\sigma+1} u_1^{\sigma+1}, tu_1 + tu_2^{\sigma+1} ) x_7( tu_1^{\sigma} + tu_1 u_2 + tu_2^{\sigma+2} ) \mathrm{e}nd{autobreak} \mathrm{e}nd{align*} \item The multiplication of two general commutators of basis roots is given by \begin{align*} \begin{autobreak} [x_1(t_1), x_8(u_1, u_2)] [x_1(t_2), x_8(v_1, v_2)] = x_2( t_1^{\sigma+1} \left( u_1 + u_2^{\sigma +1} \right) + t_2^{\sigma+1} \left( v_1 + v_2^{\sigma+1} \right) + t_1^{\sigma} t_2 u_2^{\sigma} v_2, t_1u_2 + t_2v_2 ) \cdot x_3( t_1^{\sigma+1} \left( u_2^{\sigma +2} + u_1 u_2 + u_1^{\sigma} \right) + t_2^{\sigma+1} \left( v_2^{\sigma+2} + v_1 v_2 + v_1^{\sigma} \right) + t_1^{\sigma} t_2 u_1 v_2 ) \cdot x_4( t_1^{\sigma+2} \left( u_1^{\sigma} u_2^{\sigma+1} + u_1^2 u_2 + u_2^{2\sigma +3} \right) + t_1 \left( u_1^{\sigma} + u_1u_2 + u_2^{\sigma+2} \right) \left( v_1 + v_2^{\sigma+1} \right) t_2^{\sigma+1} + t_1^{\sigma+1} \left( u_1^{\sigma} u_2^{\sigma} + u_2^{2\sigma +2} + u_1^2 \right) t_2v_2 + t_1 \left( u_1 + u_2^{\sigma+1} \right) \left( v_2^{\sigma+2} + v_1 v_2 + v_1^{\sigma} \right) t_2^{\sigma+1} + t_2^{\sigma+2} \left( v_1^{\sigma} v_2^{\sigma+1} + v_1^2 v_2 + v_2^{2\sigma +3} \right) + t_1^2 t_2^{\sigma} u_1^{\sigma} v_1, t_1^{\sigma}u_1 + t_2^{\sigma} v_1 ) \cdot x_5( t_1^{\sigma+1} \left( u_1^{\sigma} u_2^{\sigma} + u_2^{2\sigma+2} + u_1^2 \right) + t_1 \left( u_1^{\sigma} + u_1 u_2 + u_2^{\sigma+2} \right)t_2^{\sigma} v_2^{\sigma} + t_1 \left( u_1 + u_2^{\sigma+1} \right) t_2^{\sigma}v_1 + t_2^{\sigma+1} \left( v_1^{\sigma} v_2^{\sigma} + v_2^{2\sigma +2} + v_1^2 \right) ) \cdot x_6( t_1^{\sigma+1} \left( u_1^2 u_2 + u_1^{\sigma} u_2^{\sigma+1} + u_1^{\sigma+1} \right) + t_1^{\sigma} \left( u_1^2 + u_1^{\sigma}u_2^{\sigma} + u_2^{2\sigma+2} \right) t_2v_2 + t_1 \left( u_1^{\sigma} + u_1u_2 + u_2^{\sigma+2} \right) t_2^{\sigma} v_1 + t_2^{\sigma+1} \left( v_1^2 v_2 + v_1^{\sigma} v_2^{\sigma+1} + v_1^{\sigma+1} \right) + t_1^{\sigma} t_2 \left( u_1^{\sigma} + u_2^{\sigma +2} \right) \left( v_1 + v_2^{\sigma +1} \right), t_1 \left( u_1 + u_2^{\sigma+1} \right) + t_2 \left( v_1 + v_2^{\sigma+1} \right) ) \cdot x_7( t_1 \left( u_1^{\sigma} + u_1u_2 + u_2^{\sigma+2} \right) + t_2 \left( v_1^{\sigma} + v_1v_2 + v_2^{\sigma+2} \right) ) \mathrm{e}nd{autobreak} \mathrm{e}nd{align*} \mathrm{e}nd{enumerate} \mathrm{e}nd{lemma} \begin{proof} At first we state all non-trivial commutator relations we need which are not stated in \cite{TW02}: \begin{align*} [x_1(t), x_8(u)] &= x_2(t^{\sigma +1}u) x_3(t^{\sigma +1} u^{\sigma}) y_4(t^{\sigma}u) x_5( t^{\sigma +1}u^2 ) y_6(tu)^{-1} x_7( tu^{\sigma} ) \\ &= x_2(t^{\sigma +1}u) x_3(t^{\sigma +1} u^{\sigma}) y_4(t^{\sigma}u) x_5( t^{\sigma +1}u^2 ) x_6(t^{\sigma +1} u^{\sigma +1})y_6(tu) x_7( tu^{\sigma} ) \\ [x_7(t), x_2(u)] &= x_4(tu) \\ 
[x_5(t), y_2(u)] &= x_4(tu) \\ [x_1(t), y_6(u)] &= [x_1(t), x_6(u^{\sigma +1})y_6(u)^{-1}] = x_2(t^{\sigma}u) x_3(tu^{\sigma})x_4(tu^{\sigma +1}) [x_1(t), x_6(u^{\sigma +1})]^{y_6(u)^{-1}} \\ &= x_2(t^{\sigma}u) x_3(tu^{\sigma})x_4(tu^{\sigma +1}) x_4(tu^{\sigma +1}) = x_2(t^{\sigma}u) x_3(tu^{\sigma})\\ [x_7(t), y_2(u)] &= x_6(t^{\sigma}u) x_5(tu^{\sigma}) = x_5(tu^{\sigma}) x_6(t^{\sigma}u) \\ [y_6(u), x_3(t)] &= x_4(tu)^{-1} = x_4(tu) \\ [y_6(u), y_4(t)] &= x_5(tu)^{-1} = x_5(tu) \\ [x_7(t), y_4(u)] &= x_6(tu) \\ [y_6(t)^{-1}, y_8(u)^{-1}] &= [y_6(t), y_8(u)] = x_7(tu) \\ [x_5(t), y_8(u)^{-1}] &= [x_5(t), y_8(u)] = x_6(tu) \\ [x_3(t), y_8(u)^{-1}] &= x_4(t^{\sigma}u) x_5(tu^{\sigma}) x_6(tu^{\sigma+1}) \mathrm{e}nd{align*} We put $g := x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7)$. Using the commutator relations in \cite{TW02} we obtain the following: \begin{align*} g x_2(z) &= x_2(a_2 + z) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7) [x_7(a_7), x_2(z)] \\ &= x_2(a_2 + z) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7) x_4(a_7 z) \\ &= x_2(a_2 + z) y_2(b_2) x_3(a_3) x_4(a_4 + a_7 z) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7) \\ g y_2(z) &= x_2(a_2) y_2(b_2)y_2(z) x_3(a_3)x_4(a_4) y_4(b_4) [y_4(b_4), y_2(z)] x_5(a_5) [x_5(a_5), y_2(z)] x_6(a_6) y_6(b_6) \\ &\hspace{0.5cm} \cdot x_7(a_7) [x_7(a_7), y_2(z)] \\ &= x_2(a_2 + b_2^{\sigma}z) y_2(b_2 + z) x_3(a_3)x_4(a_4) y_4(b_4) x_3(b_4 z)^{-1} x_5(a_5) x_4(a_5 z) x_6(a_6) y_6(b_6) \\ &\hspace{0.5cm} \cdot x_7(a_7) x_5(a_7 z^{\sigma}) x_6(a_7^{\sigma} z) \\ &= x_2(a_2 + b_2^{\sigma}z) y_2(b_2 + z) x_3(a_3 + b_4 z) x_4(a_4 + a_5 z) y_4(b_4) x_5(a_5 + a_7 z^{\sigma}) x_6(a_6 + a_7^{\sigma} z) y_6(b_6) x_7(a_7) \\ g x_3(z) &= x_2(a_2) y_2(b_2) x_3(a_3 + z) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) [y_6(b_6), x_3(z)] x_7(a_7) \\ &= x_2(a_2) y_2(b_2) x_3(a_3 + z) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_4(b_6 z) x_7(a_7) \\ &= x_2(a_2) y_2(b_2) x_3(a_3 + z) x_4(a_4 + b_6 z) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7) \\ g x_4(z) &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4 + z) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7) \\ g y_4(z) &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) y_4(z) x_5(a_5) x_6(a_6) y_6(b_6) [y_6(b_6), y_4(z)] x_7(a_7) [x_7(a_7), y_4(z)] \\ &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4 + b_4^{\sigma}z) y_4(b_4+z) x_5(a_5) x_6(a_6) y_6(b_6) x_5(b_6 z) x_7(a_7) x_6(a_7 z) \\ &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4 + b_4^{\sigma}z) y_4(b_4+z) x_5(a_5 + b_6 z) x_6(a_6 + a_7 z) y_6(b_6) x_7(a_7) \\ g x_5(z) &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5 + z) x_6(a_6) y_6(b_6) x_7(a_7) \\ g x_6(z) &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6 + z) y_6(b_6) x_7(a_7) \\ gy_6(z) &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6)y_6(z) x_7(a_7) \\ &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6 + b_6^{\sigma}z) y_6(b_6 + z) x_7(a_7) \\ g x_7(z) &= x_2(a_2) y_2(b_2) x_3(a_3) x_4(a_4) y_4(b_4) x_5(a_5) x_6(a_6) y_6(b_6) x_7(a_7 + z) \mathrm{e}nd{align*} The rest of the proof is applying the previous relations. We let $u := u_1 + u_2^{\sigma +1}$. 
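In the following computation we repeatedly use the standard commutator identity $[a, bc] = [a, c]\, [a, b]^{c}$, which holds for the convention $[a, b] = a^{-1}b^{-1}ab$; it is this identity that splits the commutator $[x_1(t), x_8(u)y_8(u_2)^{-1}]$ into two factors, which are then expanded separately.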
\begin{align*} \begin{autobreak} [x_1(t), x_8(u)]^{y_8(u_2)^{-1}} = y_8(u_2) x_2(t^{\sigma +1} u) x_3(t^{\sigma +1} u^{\sigma}) y_4(t^{\sigma} u) x_5(t^{\sigma +1} u^2) y_6(tu)^{-1} x_7(tu^{\sigma}) y_8(u_2)^{-1} = y_8(u_2) x_2(t^{\sigma +1} u) x_3(t^{\sigma +1} u^{\sigma}) y_4(t^{\sigma} u) y_8(u_2)^{-1} x_5(t^{\sigma +1} u^2) [x_5(t^{\sigma +1} u^2), y_8(u_2)^{-1}] y_6(tu)^{-1} [y_6(tu)^{-1}, y_8(u_2)^{-1}] x_7(tu^{\sigma}) = x_2(t^{\sigma +1} u) [x_2(t^{\sigma +1}u), y_8(u_2)^{-1}] x_3(t^{\sigma +1} u^{\sigma}) [x_3(t^{\sigma +1} u^{\sigma}), y_8(u_2)^{-1}] y_4(t^{\sigma} u) x_5(t^{\sigma +1} u^2) x_6(t^{\sigma +1}u^2 u_2) y_6(tu)^{-1} x_7(tuu_2) x_7(tu^{\sigma}) = x_2(t^{\sigma +1} u) x_3(t^{\sigma+1}uu_2) x_4((t^{\sigma+1}u)^{\sigma} u_2^{\sigma+1}) x_6(t^{\sigma+1}u u_2^{\sigma +2}) x_3(t^{\sigma +1} u^{\sigma}) x_4((t^{\sigma+1}u^{\sigma})^{\sigma} u_2) x_5(t^{\sigma+1}u^{\sigma} u_2^{\sigma}) x_6(t^{\sigma+1}u^{\sigma} u_2^{\sigma+1}) y_4(t^{\sigma} u) x_5(t^{\sigma +1} u^2) x_6(t^{\sigma +1}u^2 u_2) y_6(tu)^{-1} x_7(tuu_2 + tu^{\sigma}) = x_2(t^{\sigma +1} u) x_3(t^{\sigma+1}uu_2 + t^{\sigma +1} u^{\sigma}) x_4((t^{\sigma+1}u)^{\sigma} u_2^{\sigma+1} + (t^{\sigma+1}u^{\sigma})^{\sigma} u_2) y_4(t^{\sigma} u) x_5(t^{\sigma+1}u^{\sigma} u_2^{\sigma} + t^{\sigma +1} u^2) x_6(t^{\sigma+1}u u_2^{\sigma +2} + t^{\sigma+1}u^{\sigma} u_2^{\sigma+1} + t^{\sigma +1}u^2 u_2 + t^{\sigma +1} u^{\sigma +1}) y_6(tu) x_7(tuu_2 + tu^{\sigma}) \mathrm{e}nd{autobreak} \mathrm{e}nd{align*} \begin{align*} \begin{autobreak} [x_1(t), x_8(u_1)y_8(u_2)] = [x_1(t), x_8(u_1 + u_2^{\sigma +1}) y_8(u_2)^{-1}] = [x_1(t), y_8(u_2)^{-1}] [x_1(t), x_8(u)]^{y_8(u_2)^{-1}} = y_2(tu_2) x_3(t^{\sigma+1} u_2^{\sigma+2}) x_4(t^{\sigma +2} u_2^{2\sigma +3}) y_4(t^{\sigma} u_2^{\sigma +1}) x_5(t^{\sigma +1} u_2^{2\sigma +2}) x_6(t^{\sigma +1} u_2^{2\sigma +3}) x_7(tu_2^{\sigma +2}) x_2(t^{\sigma+1} u) \cdots = x_2(t^{\sigma+1} u) y_2(tu_2) x_3(t^{\sigma+1} u_2^{\sigma+2}) x_4(t^{\sigma +2} u_2^{2\sigma +3} + tu_2^{\sigma +2} t^{\sigma+1} u) y_4(t^{\sigma} u_2^{\sigma +1}) x_5(t^{\sigma +1} u_2^{2\sigma +2}) x_6(t^{\sigma +1} u_2^{2\sigma +3}) x_7(tu_2^{\sigma +2}) x_3(t^{\sigma+1}uu_2 + t^{\sigma+1}u^{\sigma}) \cdots = \cdots x_3(t^{\sigma+1} u_2^{\sigma+2} + t^{\sigma+1}uu_2 + t^{\sigma+1}u^{\sigma}) x_4(t^{\sigma +2} u_2^{2\sigma +3} + tu_2^{\sigma +2} t^{\sigma+1} u) y_4(t^{\sigma} u_2^{\sigma +1}) x_5(t^{\sigma +1} u_2^{2\sigma +2}) x_6(t^{\sigma +1} u_2^{2\sigma +3}) x_7(tu_2^{\sigma +2}) x_4((t^{\sigma+1}u)^{\sigma} u_2^{\sigma+1} + (t^{\sigma+1}u^{\sigma})^{\sigma} u_2) \cdots = \cdots x_4(t^{\sigma +2} u_2^{2\sigma +3} + tu_2^{\sigma +2} t^{\sigma+1} u + (t^{\sigma+1}u)^{\sigma} u_2^{\sigma+1} + (t^{\sigma+1}u^{\sigma})^{\sigma} u_2) y_4(t^{\sigma} u_2^{\sigma +1}) x_5(t^{\sigma +1} u_2^{2\sigma +2}) x_6(t^{\sigma +1} u_2^{2\sigma +3}) x_7(tu_2^{\sigma +2}) y_4(t^{\sigma}u) \cdots = \cdots x_4(t^{\sigma +2} u_2^{2\sigma +3} + tu_2^{\sigma +2} t^{\sigma+1} u + (t^{\sigma+1}u)^{\sigma} u_2^{\sigma+1} + (t^{\sigma+1}u^{\sigma})^{\sigma} u_2 + (t^{\sigma} u_2^{\sigma +1})^{\sigma} t^{\sigma}u) y_4(t^{\sigma} u_2^{\sigma +1} + t^{\sigma}u) x_5(t^{\sigma +1} u_2^{2\sigma +2}) x_6(t^{\sigma +1} u_2^{2\sigma +3} + tu_2^{\sigma+2}t^{\sigma}u) x_7(tu_2^{\sigma +2}) x_5(t^{\sigma+1}u^{\sigma} u_2^{\sigma} + t^{\sigma +1} u^2) \cdots = \cdots x_5(t^{\sigma +1} u_2^{2\sigma +2} + t^{\sigma+1}u^{\sigma} u_2^{\sigma} + t^{\sigma +1} u^2) x_6(t^{\sigma +1} u_2^{2\sigma +3} + tu_2^{\sigma+2}t^{\sigma}u) x_7(tu_2^{\sigma +2}) 
x_6(t^{\sigma+1}u u_2^{\sigma +2} + t^{\sigma+1}u^{\sigma} u_2^{\sigma+1} + t^{\sigma +1}u^2 u_2 + t^{\sigma +1} u^{\sigma +1}) \cdots = \cdots x_6(t^{\sigma +1} u_2^{2\sigma +3} + tu_2^{\sigma+2} t^{\sigma}u + t^{\sigma+1}u u_2^{\sigma +2} + t^{\sigma+1}u^{\sigma} u_2^{\sigma+1} + t^{\sigma +1}u^2 u_2 + t^{\sigma +1} u^{\sigma +1}) x_7(tu_2^{\sigma +2}) y_6(tu) \cdots = \cdots y_6(tu) x_7(tu_2^{\sigma +2}) x_7(tuu_2 + tu^{\sigma}) = \cdots x_7( tu_2^{\sigma +2} + tuu_2 + tu^{\sigma} ) \mathrm{e}nd{autobreak} \mathrm{e}nd{align*} These relations yields us: \begin{align*} \begin{autobreak} [x_1(t), x_8(u_1, u_2)] = [x_1(t), x_8(u_1)y_8(u_2)] = x_2( t^{\sigma+1} u_1 + t^{\sigma+1} u_2^{\sigma+1} ) y_2(tu_2) \qquad \cdot x_3( t^{\sigma+1} u_2^{\sigma +2} + t^{\sigma+1} u_1 u_2 + t^{\sigma+1} u_2^{\sigma+2} + t^{\sigma+1}u_1^{\sigma} + t^{\sigma+1} u_2^{\sigma+2} ) \qquad \cdot x_4( t^{\sigma+2} u_2^{2\sigma+3} + t^{\sigma +2}u_1 u_2^{\sigma+2} + t^{\sigma+2} u_2^{2\sigma +3} + t^{\sigma+2}u_1^{\sigma}u_2^{\sigma+1} + t^{\sigma+2} u_2^{2\sigma +3} \qquad \qquad + t^{\sigma +2}u_1^2 u_2 + t^{\sigma+2} u_2^{2\sigma +3} + t^{\sigma+2}u_1 u_2^{\sigma +2} + t^{\sigma +2} u_2^{2\sigma +3} ) \qquad \cdot y_4(t^{\sigma} u_2^{\sigma+1} + t^{\sigma} u_1 + t^{\sigma}u_2^{\sigma+1}) \qquad \cdot x_5( t^{\sigma +1} u_2^{2\sigma +2} + t^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t^{\sigma+1} u_2^{2\sigma +2} + t^{\sigma +1} u_1^2 + t^{\sigma+1} u_2^{2\sigma +2} ) \qquad \cdot x_6( t^{\sigma+1}u_2^{2\sigma +3} + t^{\sigma+1}u_1u_2^{\sigma+2} + t^{\sigma+1}u_2^{2\sigma+3} + t^{\sigma+1}u_1u_2^{\sigma+2} + t^{\sigma+1}u_2^{2\sigma+3} \qquad \qquad + t^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t^{\sigma+1} u_2^{2\sigma +3} + t^{\sigma+1}u_1^2 u_2 + t^{\sigma+1} u_2^{2\sigma+3} \qquad \qquad + t^{\sigma+1}u_1^{\sigma+1} + t^{\sigma+1} u_2^{2\sigma+3} ) y_6(tu_1 + tu_2^{\sigma+1}) \qquad \cdot x_7( tu_2^{\sigma +2} + tu_1u_2 + tu_2^{\sigma +2} + tu_1^{\sigma} + tu_2^{\sigma+2} ) = x_2( t^{\sigma+1} u_1 + t^{\sigma+1} u_2^{\sigma+1}, tu_2 ) x_3( t^{\sigma+1} u_1 u_2 + t^{\sigma+1}u_1^{\sigma} + t^{\sigma+1} u_2^{\sigma +2} ) \qquad \cdot x_4( t^{\sigma+2}u_1^{\sigma}u_2^{\sigma+1} + t^{\sigma+2} u_2^{2\sigma +3} + t^{\sigma +2}u_1^2 u_2, t^{\sigma} u_1 ) \qquad \cdot x_5( t^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t^{\sigma +1} u_1^2 + t^{\sigma+1} u_2^{2\sigma +2} ) \qquad \cdot x_6( t^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t^{\sigma+1} u_1^2 u_2 + t^{\sigma+1}u_1^{\sigma+1}, tu_1 + tu_2^{\sigma+1} ) \qquad \cdot x_7( tu_1u_2 + tu_1^{\sigma} + tu_2^{\sigma +2} ) \mathrm{e}nd{autobreak} \mathrm{e}nd{align*} This finishes part $(a)$. Now we will prove part $(b)$. We put $g := [ x_1(t_1), x_8(u_1, u_2) ] [x_1(t_2), x_8(v_1, v_2)]$. 
Then the following hold: \begin{align*} \begin{autobreak} g = x_2( t_1^{\sigma+1} u_1 + t_1^{\sigma +1} u_2^{\sigma+1} + t_2^{\sigma+1} v_1 + t_2^{\sigma +1} v_2^{\sigma+1} + t_1^{\sigma}u_2^{\sigma} t_2 v_2 ) y_2(t_1 u_2 + t_2v_2) x_3( t_1^{\sigma+1} u_1 u_2 + t_1^{\sigma+1}u_1^{\sigma} + t_1^{\sigma+1} u_2^{\sigma +2} + t_1^{\sigma} u_1 t_2v_2 ) x_4( t_1^{\sigma+2}u_1^{\sigma}u_2^{\sigma+1} + t_1^{\sigma+2} u_2^{2\sigma +3} + t_1^{\sigma +2}u_1^2 u_2 + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)\left( t_2^{\sigma+1} v_1 + t_2^{\sigma +1} v_2^{\sigma+1} \right) + \left( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} \right) t_2v_2, t_1^{\sigma} u_1 ) x_5( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right) t_2^{\sigma} v_2^{\sigma} ) x_6( t_1^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t_1^{\sigma+1} u_1^2 u_2 + t_1^{\sigma+1}u_1^{\sigma+1} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)^{\sigma} t_2v_2, t_1 u_1 + t_1 u_2^{\sigma+1} ) x_7( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} ) x_3( t_2^{\sigma+1} v_1 v_2 + t_2^{\sigma+1}v_1^{\sigma} + t_2^{\sigma+1} v_2^{\sigma +2} ) \cdots = x_2(\ldots) x_3( t_1^{\sigma+1} u_1 u_2 + t_1^{\sigma+1}u_1^{\sigma} + t_1^{\sigma+1} u_2^{\sigma +2} + t_1^{\sigma} u_1 t_2v_2 + t_2^{\sigma+1} v_1 v_2 + t_2^{\sigma+1}v_1^{\sigma} + t_2^{\sigma+1} v_2^{\sigma +2}) x_4( t_1^{\sigma+2}u_1^{\sigma}u_2^{\sigma+1} + t_1^{\sigma+2} u_2^{2\sigma +3} + t_1^{\sigma +2}u_1^2 u_2 + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)\left( t_2^{\sigma+1} v_1 + t_2^{\sigma +1} v_2^{\sigma+1} \right) + \left( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} \right) t_2v_2 + \left( t_1u_1 + t_1u_2^{\sigma +1} \right) \left( t_2^{\sigma+1} v_1 v_2 + t_2^{\sigma+1}v_1^{\sigma} + t_2^{\sigma+1} v_2^{\sigma +2} \right), t_1^{\sigma} u_1 ) x_5( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right) t_2^{\sigma} v_2^{\sigma} ) x_6( t_1^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t_1^{\sigma+1} u_1^2 u_2 + t_1^{\sigma+1}u_1^{\sigma+1} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)^{\sigma} t_2v_2, t_1u_1 + t_1u_2^{\sigma+1} ) x_7( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} ) x_4( t_2^{\sigma+2}v_1^{\sigma}v_2^{\sigma+1} + t_2^{\sigma+2} v_2^{2\sigma +3} + t_2^{\sigma +2}v_1^2 v_2, t_2^{\sigma} v_1 ) \cdots = x_2(\ldots) x_3(\ldots) x_4( t_1^{\sigma+2}u_1^{\sigma}u_2^{\sigma+1} + t_1^{\sigma+2} u_2^{2\sigma +3} + t_1^{\sigma +2}u_1^2 u_2 + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)\left( t_2^{\sigma+1} v_1 + t_2^{\sigma +1} v_2^{\sigma+1} \right) + \left( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} \right) t_2v_2 + \left( t_1u_1 + t_1u_2^{\sigma +1} \right) \left( t_2^{\sigma+1} v_1 v_2 + t_2^{\sigma+1}v_1^{\sigma} + t_2^{\sigma+1} v_2^{\sigma +2} \right) + t_2^{\sigma+2}v_1^{\sigma}v_2^{\sigma+1} + t_2^{\sigma+2} v_2^{2\sigma +3} + t_2^{\sigma +2}v_1^2 v_2 + \left( t_1^{\sigma} u_1 \right)^{\sigma} t_2^{\sigma} v_1, t_1^{\sigma} u_1 + t_2^{\sigma} v_1 ) x_5( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} + \left( t_1 u_1u_2 + t_1 
u_1^{\sigma} + t_1 u_2^{\sigma +2} \right) t_2^{\sigma} v_2^{\sigma} + \left( t_1 u_1 + t_1 u_2^{\sigma +1} \right) t_2^{\sigma} v_1 ) x_6( t_1^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t_1^{\sigma+1} u_1^2 u_2 + t_1^{\sigma+1}u_1^{\sigma+1} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)^{\sigma} t_2v_2 + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right) t_2^{\sigma} v_1, t_1u_1 + t_1u_2^{\sigma+1} ) x_7( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} ) x_5( t_2^{\sigma+1}v_1^{\sigma} v_2^{\sigma} + t_2^{\sigma +1} v_1^2 + t_2^{\sigma+1} v_2^{2\sigma +2} ) x_6( t_2^{\sigma+1} v_1^{\sigma} v_2^{\sigma+1} + t_2^{\sigma+1} v_1^2 v_2 + t_2^{\sigma+1}v_1^{\sigma+1}, t_2v_1 + t_2v_2^{\sigma+1} ) x_7( t_2v_1v_2 + t_2v_1^{\sigma} + t_2v_2^{\sigma +2} ) = x_2(\ldots)x_3(\ldots)x_4(\ldots) x_5( t_1^{\sigma+1}u_1^{\sigma} u_2^{\sigma} + t_1^{\sigma +1} u_1^2 + t_1^{\sigma+1} u_2^{2\sigma +2} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right) t_2^{\sigma} v_2^{\sigma} + \left( t_1 u_1 + t_1 u_2^{\sigma +1} \right) t_2^{\sigma} v_1 + t_2^{\sigma+1}v_1^{\sigma} v_2^{\sigma} + t_2^{\sigma +1} v_1^2 + t_2^{\sigma+1} v_2^{2\sigma +2} ) x_6( t_1^{\sigma+1} u_1^{\sigma} u_2^{\sigma+1} + t_1^{\sigma+1} u_1^2 u_2 + t_1^{\sigma+1}u_1^{\sigma+1} + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right)^{\sigma} t_2v_2 + \left( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} \right) t_2^{\sigma} v_1 + t_2^{\sigma+1} v_1^{\sigma} v_2^{\sigma+1} + t_2^{\sigma+1} v_1^2 v_2 + t_2^{\sigma+1}v_1^{\sigma+1} + \left( t_1u_1 + t_1u_2^{\sigma+1} \right)^{\sigma} \left( t_2v_1 + t_2v_2^{\sigma+1} \right) ) y_6( t_1u_1 + t_1u_2^{\sigma+1} + t_2v_1 + t_2v_2^{\sigma+1} ) x_7( t_1 u_1u_2 + t_1 u_1^{\sigma} + t_1 u_2^{\sigma +2} + t_2v_1v_2 + t_2v_1^{\sigma} + t_2v_2^{\sigma +2} ) \mathrm{e}nd{autobreak} \qedhere \mathrm{e}nd{align*} \mathrm{e}nd{proof} \begin{comment} \begin{remark}\label{remarkoctagon} Using the notations of \cite{TW02} we obtain the following commutator relations: \begin{align*} [x_1(t), x_8(u, 0)]^2 &= x_4( t^{\sigma+2} u^2, 0) x_5(t^{\sigma +1} u^2), \\ [x_1(t), x_8(0, u)]^2 &= x_2( t^{\sigma +1} u^{\sigma +1}, 0 ) x_4( t^{\sigma +2} u^{2\sigma +3}, 0 ) x_5(t^{\sigma +1} u^{2\sigma +2}), \\ [x_1(t), x_8(u^{\sigma +1}, u)]^2 &= x_2( t^{\sigma +1} u^{\sigma +1}, 0 )x_3(t^{\sigma +1} u^{\sigma +2}) x_4(t^{\sigma +2} u^{2\sigma +3} + t^{\sigma +2} u^{2\sigma +2}, 0) x_5(t^{\sigma +1} u^{2\sigma +2}). \mathrm{e}nd{align*} To avoid the indices we let $[t; u_1, u_2] := [x_1(t), x_8(u_1, u_2)]$. We assume that $\mathbb{K} = \mathbb{F}_2[x] / (x^3 + x + 1)$. Then $x$ generates the multiplicative group of $\mathbb{K}$. As in example $(10.12)$ we let $\sigma$ be the map, which maps $x^i$ onto $x^{4i}$. The previous commutator relations reduces to the following: \begin{align*} [t; u, 0]^2 &= x_4( t^6 u^2, 0 ) x_5( t^5 u^2 ) \\ [t; 0, u]^2 &= x_2( t^5 u^5, 0 ) x_4( t^6 u^4, 0 ) x_5( t^5 u^3 ) \\ [t; u^5, u]^2 &= x_2( t^5 u^5, 0 ) x_3( t^5 u^6 ) x_4( t^6 u^4 + t^6 u^3, 0 ) x_5( t^5 u^3 ) \mathrm{e}nd{align*} From this we deduce $[k]_2 := [k^3; 0, 1]^2 [k^3, 1, 0]^2 = x_2(k, 0)$. Now we consider the following, where $k \neq 0$: $$ [1; 0, 1]^2 [k; k^{-2}, 0]^2 = x_2( 1, 0 ) x_5( k+1 ) $$ Thus we define $[k]_5 := [1]_2 [1; 0, 1]^2 [k-1; (k-1)^2, 0]^2 = x_5(k)$ for $k \notin \{0, 1\}$. Then $[1]_5 := [x]_5 + [x+1]_5$. 
\begin{align*} [k]_4 &:= [1; 0, k^4]^2 [k]_5 = x_4(k, 0) \\ [k]_3 &:= [k]_2 [k^3; 1, 1]^2 [k]_5 = x_3(k) \mathrm{e}nd{align*} Now one can easily check, that $$ [1; (1+x^4)^{-1}x^3, 1] [x; x^3(1+x^4)^{-1}x^3, x^6]= x_2(\ldots, 0) x_3( \ldots) x_4(\ldots, 0) x_5( \ldots) x_6(k, 0) x_7(k') $$ \tcol{Noch ausrechnen!} \mathrm{e}nd{remark} \mathrm{e}nd{comment} \mathrm{e}nd{document}
\begin{document} \title[A functional model for the Fourier--Plancherel operator]{A functional model for the Fourier--Plancherel operator truncated to the positive semiaxis} \author{V.~Katsnelson} \address{Department of Mathematics,\\ The Weizmann Institute,\\ 76100, Rehovot, Israel} \email{[email protected];\\ [email protected]} \keywords{truncated Fourier--Plancherel operator, functional model for a linear operator} \date{October 27, 2017} \dedicatory{This paper is dedicated to the memory of my colleague Mikhail Solomyak.} \begin{abstract} The truncated Fourier operator \(\mathscr{F}_{\mathbb{R^{+}}}\), \begin{equation*} (\mathscr{F}_{\mathbb{R^{+}}}x)(t)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R^{+}}}x(\xi)e^{it\xi}\,d\xi\,,\ \ \ t\in{}{\mathbb{R^{+}}}, \end{equation*} is studied. The operator \(\mathscr{F}_{\mathbb{R^{+}}}\) is viewed as an operator acting in the spa\-ce~\(L^2(\mathbb{R^{+}})\). A functional model for the operator \(\mathscr{F}_{\mathbb{R^{+}}}\) is constructed. This \break functional model is the operator of multiplication by an appropriate\break (${2\times2}$)-ma\-t\-rix function acting in the space \(L^2(\mathbb{R^{+}})\oplus{}L^2(\mathbb{R^{+}})\). Using this functional model, the spectrum of the operator \(\mathscr{F}_{\mathbb{R^{+}}}\) is found. The resolvent of the operator \(\mathscr{F}_{\mathbb{R^{+}}}\) is estimated near its spectrum. \end{abstract} \maketitle \noindent \textsf{Notation:}\\ \(\mathbb{R}\) \ \ is the set of all real numbers.\\ \(\mathbb{R}^{+}\) is the set of all positive real numbers.\\ \(\mathbb{C}\) \ \ is the set of all complex numbers.\\ \(\mathbb{Z}\) \ \ is the set of all integers.\\ \(\mathbb{N}=\{1,2,3,\,\dots\}\) is the set of all natural numbers. \setcounter{section}{0} \section{The Fourier--Plancherel operator truncated to the positive semiaxis.} In this paper we study the truncated Fourier operator \(\mathscr{F}_{\mathbb{R^{+}}}\), \begin{equation} \label{DTFTr} (\mathscr{F}_{\mathbb{R^{+}}}x)(t)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R^{+}}}x(\xi)e^{it\xi}\,d\xi\,,\ \ \ t\in{}{\mathbb{R^{+}}}. \end{equation} The operator \(\mathscr{F}_{\mathbb{R^{+}}}\) is viewed as an operator acting in the space \(L^2(\mathbb{R^{+}})\) of all square integrable complex-valued functions on \(\mathbb{R^{+}}\) equipped with the inner product \begin{equation*} \langle{}x,y\rangle_{L^2(\mathbb{R^{+}})}=\int\limits_{\mathbb{R^{+}}}x(t)\overline{y(t)}\,dt. \end{equation*} The operator \(\mathscr{F}^{ \ast}_{\mathbb{R^{+}}}\) adjoint to~\(\mathscr{F}_{\mathbb{R^{+}}}\) with respect to this inner product is \begin{equation} \label{DTFTrA} (\mathscr{F}_{\mathbb{R^{+}}}^{\ast}x)(t)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R^{+}}}x(\xi)e^{-it\xi}\,d\xi\,,\ \ \ t\in{}{\mathbb{R^{+}}}.
\end{equation} The operator \(\mathscr{F}_{\mathbb{R^{+}}}\) has the form \begin{equation} \label{UnDil} \mathscr{F}_{\mathbb{R^{+}}}= P_{\mathbb{R^{+}}}\,\mathscr{F}\,P_{\mathbb{R^{+}}}\big|_{L^2(\mathbb{R}^{+})}\,, \end{equation} where \(\mathscr{F}\) is the Fourier--Plancherel operator on the whole real axis: \begin{equation} \label{FWRA} (\mathscr{F}x)(t)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}}x(\xi)e^{it\xi}\,d\xi\,,\ \ \ t\in{}{\mathbb{R}}, \end{equation} \begin{equation*} \mathscr{F}:\,L^2(\mathbb{R})\to{}L^2(\mathbb{R})\,, \end{equation*} and \(P_{_{\scriptstyle\mathbb{R^{+}}}}\) is the natural orthogonal projector from \(L^2(\mathbb{R})\) onto \(L^2(\mathbb{R}^{+})\): \begin{equation} \label{NaPr} (P_{\mathbb{R^{+}}}x)(t)=\mathds{1}_{_{\scriptstyle\mathbb{R}^{+}}}\!(t)\,x(t),\ \ x\in{}L^2(\mathbb{R}),\quad t\in\mathbb{R}^{+}, \end{equation} where \(\mathds{1}_{_{\scriptstyle\mathbb{R^{+}}}}\!(t)\) is the indicator function of the set \(\mathbb{R}^{+}\). For any set \(E\), its indicator function \(\mathds{1}_{_{\scriptstyle E}}\) is \begin{equation} \label{IndF} \mathds{1}_E(t)= \begin{cases}1&\ \textup{if}\ t\in{}E,\\ 0&\ \textup{if}\ t\not\in{}E. \end{cases} \end{equation} It should be mentioned that the Fourier operator \(\mathscr{F}\) is a unitary operator in~\(L^2(\mathbb{R})\): \begin{equation} \label{Unit} \mathscr{F}^{\ast}\mathscr{F}=\mathscr{F}\mathscr{F}^{\ast}= \mathscr{I}_{L^2(\mathbb{R})}, \end{equation} where \(\mathscr{I}_{L^2(\mathbb{R})}\) is the identity operator in \(L^2(\mathbb{R})\) and \(\mathscr{F}^{\ast}\) is the operator adjoint to~\(\mathscr{F}\) with respect to the standard inner product in \(L^2(\mathbb{R})\). From \eqref{UnDil} and \eqref{Unit} it follows that the operators \(\mathscr{F}_{\mathbb{R^{+}}}\) and \(\mathscr{F}_{\mathbb{R^{+}}}^{\ast}\) are contractive: \(\|\mathscr{F}_{\mathbb{R^{+}}}\|\leqslant1\), \(\|\mathscr{F}_{\mathbb{R^{+}}}^{\ast}\|\leqslant1\). Later it will be shown that actually \begin{equation} \label{NorTrF} \|\mathscr{F}_{\mathbb{R^{+}}}\|=1,\quad \|\mathscr{F}_{\mathbb{R^{+}}}^{\ast}\|=1. \end{equation} Nevertheless, these operators are strictly contractive: \begin{equation} \label{StrCon} \|\mathscr{F}_{\!_{\scriptstyle\mathbb{R^{+}}}}x\|<\|x\|,\quad \|\mathscr{F}^{\ast}_{\!_{\scriptstyle\mathbb{R^{+}}}}x\|<\|x\|\, \quad \mbox{for all}\quad x\in{}L^2(\mathbb{R}^{+}),\quad x\not=0\,, \end{equation} and their spectral radii \(r(\mathscr{F}_{\!_{\scriptstyle\mathbb{R^{+}}}})\) and \(r(\mathscr{F}^{\ast}_{\!_{\scriptstyle\mathbb{R^{+}}}})\) are less than one: \begin{equation} \label{SpRad} r(\mathscr{F}_{\!_{\scriptstyle\mathbb{R^{+}}}})= r(\mathscr{F}^{\ast}_{\!_{\scriptstyle\mathbb{R^{+}}}})=1/\sqrt{2}. \end{equation} In particular, the operators \(\mathscr{F}_{\!_{\scriptstyle\mathbb{R^{+}}}}\) and \(\mathscr{F}^{\ast}_{\!_{\scriptstyle\mathbb{R^{+}}}}\) are contractions of class~\(C_{00}\) in the sense of \cite{Sz}. (See \cite[Chapter 2, \S4]{Sz}.) In \cite{Sz}, the spectral theory of contractions in Hilbert space was developed. The starting point of this theory is the representation of a given contractive operator \(A\) acting in a Hilbert space \(\mathscr{H}\) in the form \begin{equation} \label{ComCon} A=PUP, \end{equation} where \(U\) is a unitary operator acting in some \emph{ambient} Hilbert space \(\mathfrak{H}\), \(\mathscr{H}\subset\mathfrak{H}\), and \(P\) is the orthogonal projector from \(\mathfrak{H}\) onto \(\mathscr{H}\).
In the construction of \cite{Sz} it is required that not only~\eqref{ComCon} but also the series of identities \begin{equation} \label{Dilat} A^n=PU^nP,\quad n\in\mathbb{N}, \end{equation} be true. The unitary operator \(U\) acting in an ambient Hilbert space \(\mathfrak{H}\), \(\mathcal{H}\subset\mathfrak{H}\), is called a \emph{unitary dilation of the operator \(A\),} \(A:\mathcal{H}\to\mathcal{H}\), if identities \eqref{Dilat} are fulfilled. In \cite{Sz} it was shown that every contractive operator \(A\) admits a unitary dilation. By using the unitary dilation, a functional model of the operator \(A\) was constructed. This functional model is an operator acting in some Hilbert space of analytic functions. The functional model of the operator \(A\) is an operator unitarily equivalent to \(A\). The spectral theory of the original operator \(A\) is developed by analyzing its functional model. However, the functional model constructed in \cite{Sz} is not suitable for the spectral analysis of the truncated Fourier--Plancherel operator \(\mathscr{F}_{\mathbb{R^{+}}}\). Relation \eqref{UnDil} is of the form \eqref{ComCon}, where \(\mathscr{H}=L^2(\mathbb{R}^{+})\), \(\mathfrak{H}=L^2(\mathbb{R})\),\break ${U\!=\!\mathscr{F}}$, $A=\mathscr{F}_{_{\scriptstyle\mathbb{R}^{+}}}$, and \(P=P_{\mathbb{R^{+}}}\) is the orthoprojector from \(L^2(\mathbb{R})\) onto \(L^2(\mathbb{R}^{+})\), see~\eqref{NaPr}. For these objects, identities \eqref{Dilat} do not hold true for all \(n\in\mathbb{N}\), but only for \(n=1\). So, the operator \(\mathscr{F}\) is not a unitary dilation of its truncation~\(\mathscr{F}_{_{\scriptstyle\mathbb{R}^{+}}}\). Nevertheless, we succeeded in constructing a functional model of the operator~\(\mathscr{F}_{_{\scriptstyle\mathbb{R}^{+}}}\) such that it is easily analyzable. Analyzing this model, we can develop a complete spectral theory of the operator \(\mathscr{F}_{_{\scriptstyle\mathbb{R}^{+}}}\). \section{The model space.} \label{MS} \begin{defn}{}\ \\ $1.$ The \emph{model space} \(\mathfrak{M}\) is the set of all \(2\times1\) columns \(\varphi= \displaystyle \begin{bmatrix} \varphi_{+}\\ \varphi_{-} \end{bmatrix} \) whose entries \(\varphi_{+}\) and \(\varphi_{-}\) are arbitrary complex-valued functions of class~\(L^2(\mathbb{R^{+}})\). \\ \noindent $2.$ The space \(\mathfrak{M}\) is equipped by the natural linear operations. \noindent$3.$ The inner product \(\langle\varphi,\psi\rangle_{\mathfrak{M}}\) of columns \(\varphi= \displaystyle \begin{bmatrix} \varphi_{+}\\ \varphi_{-} \end{bmatrix} \) and \(\psi= \displaystyle \begin{bmatrix} \psi_{+}\\ \psi_{-} \end{bmatrix} \) belonging to this space is defined as \begin{equation} \label{InP} \langle\varphi, \psi\rangle_{\mathfrak{M}}=\langle\varphi_{+},\psi_{+}\rangle_{L^2(\mathbb{R}^{+})} +\langle\varphi_{-},\psi_{-}\rangle_{L^2(\mathbb{R}^{+})}. \end{equation} In particular, \begin{equation} \label{No} \|\varphi\|^2_{\mathfrak{M}}=\|\varphi_{+}\|^2_{L^2(\mathbb{R}^{+})}+ \|\varphi_{-}\|^2_{L^2(\mathbb{R}^{+})}. \end{equation} \end{defn} \begin{rem} The model space \(\mathfrak{M}\) is merely the orthogonal sum of two copies of the space \(L^2(\mathbb{R}^{+})\). The standard notation \(L^2(\mathbb{R^{+}})\oplus{}L^2(\mathbb{R^{+}})\) for such orthogonal sum does not reflect the fact that the elements of \(\mathfrak{M}\) are $(2\times1)$-\emph{columns}. The notation \(\begin{bmatrix} L^2(\mathbb{R^{+}})\\{}\oplus\\{}L^2(\mathbb{R^{+}}) \end{bmatrix} \) is more logical, but too bulky. 
\end{rem} We define a linear mapping \(U\) of the space \(L^2(\mathbb{R}^{+})\) into the model space. For \(x\in{}L^2(\mathbb{R}^{+})\), the formal definition is \begin{equation} \label{FoD} (Ux)(\mu)= \begin{bmatrix} (Ux)_{+}(\mu)\\[1.0ex] (Ux)_{-}(\mu) \end{bmatrix} \ccomma \quad \mu\in\mathbb{R}^{+}, \end{equation} where \begin{subequations} \label{fo} \begin{align} \label{fo+} (Ux)_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}} x(\xi)\,\xi^{-1/2}\xi^{+i\mu}\,d\xi,\qquad \mu\in\mathbb{R}^+,\\ \label{fo-} (Ux)_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}} x(\xi)\,\xi^{-1/2}\xi^{-i\mu}\,d\xi,\qquad \mu\in\mathbb{R}^+. \end{align} \end{subequations} Here and in what follows, \(\xi^\zeta=e^{\zeta\ln\xi}\), where \(\ln\xi\in\mathbb{R}\) for \(\xi\in\mathbb{R}^{+}\). If \(x\in{}L^2(\mathbb{R}^{+})\), the functions \(x(\xi)\,\xi^{-1/2}\xi^{\pm{}i\mu}\) occurring in \eqref{fo} may fail to be integrable with respect to Lebesgue measure \(d\xi\) on \(\mathbb{R}^{+}\). Therefore, the integrals in \eqref{fo} may fail to exist as Lebesgue integrals. \begin{defn} \label{alf} The \emph{set \(\mathcal{D}\)} is the set of all functions \(x\in{}L^2(\mathbb{R}^{+})\) satisfying \begin{equation} \label{exc} \int\limits_{\mathbb{R}^{+}}|x(\xi)|\xi^{-1/2}d\xi<\infty. \end{equation} \end{defn} \begin{lem} \label{ExU} If a function \(x\) belongs to \(L^2(\mathbb{R}^{+})\) and its support $\textup{supp}\,x$ lies strictly inside the positive semiaxis $\mathbb{R}^{+},$ then $x\in\mathcal{D}$. \end{lem} \begin{proof} \begin{equation*} \begin{split} \int\limits_{\mathbb{R}^{+}} |x(\xi)|\xi^{-1/2}d\xi&= \int\limits_{\xi\in\textup{supp}\,x} |x(\xi)|\xi^{-1/2}\,d\xi \\ &\leqslant \bigg\{\int\limits_{\mathbb{R}^{+}} |x(\xi)|^2d\xi\bigg\}^{1/2}\cdot\,\, \bigg\{\int\limits_{\xi\in\textup{supp}\,x} |\xi|^{-1}d\xi\bigg\}^{1/2}<\infty.\qedhere \end{split} \end{equation*} \end{proof} \noindent For \(x\in\mathcal{D}\), the integrals on the right-hand sides of \eqref{fo+} and \eqref{fo-} exist as Lebesgue integrals for every $\mu\in\mathbb{R}^{+}$. So, the functions \((Ux)_{+}(\mu)\) and \break \((Ux)_{-}(\mu)\) are well defined \emph{for every \(\mu\in\mathbb{R}^{+}\).} \begin{lem} \label{SqU} If a function \(x\) belongs to \(\mathcal{D}\), then both functions \((Ux)_{+}\) and \((Ux)_{-}\) belong to \(L^2(\mathbb{R}^{+})\). Moreover, we have \begin{equation} \label{paE} \|(Ux)_{+}\|_{L^2(\mathbb{R}^{+})}^{2}+\|(Ux)_{-}\|_{L^2(\mathbb{R}^{+})}^{2}= \|x\|^2_{L^2(\mathbb{R}^{+})}. \end{equation} \end{lem} \begin{proof} Changing the variable \(\xi=e^{\eta}\) in \eqref{fo}, we obtain \begin{subequations} \label{co} \begin{align} \label{co+} (Ux)_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_\mathbb{R} v(\eta)\,e^{+i\mu\eta}\,d\eta,\quad \mu\in\mathbb{R}^+,\\ \label{co-} (Ux)_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_\mathbb{R} v(\eta)\,e^{-i\mu\eta}\,d\eta,\quad \mu\in\mathbb{R}^+, \end{align} \end{subequations} where \begin{equation} \label{dV} v(\eta)=e^{\eta/2}x(e^{\eta}). \end{equation} We have \begin{equation} \label{cv} \int\limits_{\mathbb{R}}|v(\eta)|^2d\eta= \int\limits_{\mathbb{R}^{+}}|x(\xi)|^2d\xi. \end{equation} Put \begin{equation} \label{ca} u(\nu)= \begin{cases} (Ux)_{+}(\,\,\nu\,\,)\ &\textup{if}\ \nu\in\mathbb{R}^{+}, \\[1.0ex] (Ux)_{-}(-\nu)\ &\textup{if}\ \nu\in\mathbb{R}^{-}. \end{cases} \end{equation} It is clear that \begin{equation} \label{ic} \int\limits_{\mathbb{R}}|u(\nu)|^2d\nu= \int\limits_{\mathbb{R}^{+}}|(Ux)_{+}(\mu)|^2d\mu+ \int\limits_{\mathbb{R}^{+}}|(Ux)_{-}(\mu)|^2d\mu.
\end{equation} From \eqref{co} and \eqref{ca} it follows that \begin{equation} \label{os} u(\nu)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}}v(\eta)\,e^{i\nu\eta}\,d\eta,\quad \nu\in\mathbb{R}. \end{equation} Thus, \begin{equation} \label{hfp} u(\nu)=(\mathscr{F}v)(\nu),\quad \nu\in\mathbb{R}, \end{equation} where \(\mathscr{F}\) is the Fourier--Plancherel operator \eqref{FWRA}. The Parseval identity \begin{equation} \label{pe} \int\limits_{\mathbb{R}}|u(\nu)|^2d\nu= \int\limits_{\mathbb{R}}|v(\eta)|^2d\eta \end{equation} and formulas \eqref{cv} and \eqref{ic} yield~\eqref{paE}. \end{proof} It is clear that the set $\mathcal{D}$ is a (nonclosed) vector subspace of~\(L^2(\mathbb{R}^{+}\!)\). \begin{lem} \label{den} The set $\mathcal{D}$ is dense in $L^2(\mathbb{R}^{+})$. \end{lem} \begin{proof} Given \(x\in{}L^2(\mathbb{R}^{+})\), we define \begin{equation} \label{aps} x_n(t)=x(t)\cdot\mathds{1}_{[1/n,n]}(t),\quad n=1,2,\dots, \end{equation} where \(\mathds{1}_{[1/n,n]}(t)\) is the indicator function of the interval \([1/n,n]\). (See \eqref{IndF}.) Clearly, \begin{equation} \|x-x_n\|_{L^2(\mathbb{R}^{+})}\to0\quad\textup{as}\quad{}n\to\infty. \end{equation} Moreover, \(x_n\in\mathcal{D}\) by Lemma \ref{ExU}. \end{proof} \begin{defn} \label{DeU} Formula \eqref{paE} means that the operator~$U$ defined by\break \eqref{FoD}--\eqref{fo} for \(x\in\mathcal{D}\) maps the subspace $\mathcal{D}$ into the model space \(\mathfrak{M}\) isometrically. Therefore, the operator $U$ extends from the subspace \(\mathcal{D}\subset{}L^2(\mathbb{R}^{+})\) to its closure \(L^2(\mathbb{R}^{+})\) by continuity: \begin{equation} \label{ext} \begin{split} \textup{if}\ x\in{}L^2(\mathbb{R}^{+}),\quad x_n\in\mathcal{D},\quad n=1,2,\dots,\quad x&= \lim\limits_{n\to\infty}x_n, \\ \textup{then}\quad Ux &\stackrel{\textup{def}}{=} \lim\limits_{n\to\infty}Ux_n. \end{split} \end{equation} We preserve the notation \(U\) for the operator extended in this way. \end{defn} \emph{From now on, we deal with the operator $U$ that is already extended from~$\mathcal{D}$ to the whole space $L^2(\mathbb{R}^{+})$ in accordance with~\eqref{ext}.} It is clear that~\(U\) maps \(L^2(\mathbb{R}^{+})\) onto some closed subspace of the model space~\(\mathfrak{M}\). \begin{thm} \label{on} The operator \(U\) maps \(L^2(\mathbb{R}^{+})\) onto the whole model space~\(\mathfrak{M}\). \end{thm} \begin{proof} Let \(y= \displaystyle \begin{bmatrix} y_{+}\\ y_{-} \end{bmatrix}\) be an arbitrary element of~$\mathfrak{M}$. Both functions \(y_{+}\) and \(y_{-}\) belong to \(L^2(\mathbb{R}^{+})\). We set \begin{equation} \label{auf} u(\nu)= \begin{cases} y_{+}(\,\nu\,)\quad &\textup{if}\quad \nu>0,\\ y_{-}(-\nu)\quad &\textup{if}\quad \nu<0. \end{cases} \end{equation} Clearly, $u\in{}L^2(\mathbb{R})$. As a function of class~$L^2(\mathbb{R})$, the function~$u$ is representable in the form \begin{equation} \label{l2r} u(\nu)=\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}}v(\eta)e^{i\nu\eta}\,d\eta,\quad \nu\in\mathbb{R}, \end{equation} where $v$ is a function in $L^2(\mathbb{R})$. Relation~\eqref{l2r} can be interpreted as the following pair of formulas: \begin{subequations} \label{int} \begin{align} \label{int1} y_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}} v(\eta)e^{\,i\mu\eta\,}\,d\eta,\ \ \mu\in\mathbb{R^+},\\ \label{int2} y_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}} v(\eta)e^{-i\mu\eta}\,d\eta,\ \ \mu\in\mathbb{R^+}, \end{align} \end{subequations} where \(v\in{}L^2(\mathbb{R})\).
Changing the variable \(\xi=e^{\eta}\) in \eqref{int}, we reduce \eqref{int} to the form \begin{subequations} \label{rint} \begin{align} \label{rint1} y_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}} x(\xi)\xi^{-1/2}\xi^{\,i\mu\,}\,d\xi,\ \ \mu\in\mathbb{R^+},\\ \label{rint2} y_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}} x(\xi)\xi^{-1/2}\xi^{-i\mu}\,d\xi,\ \ \mu\in\mathbb{R^+}, \end{align} \end{subequations} where \begin{equation} \label{Cv} v(\eta)=e^{\eta/2}x(e^{\eta}). \end{equation} Moreover, \(x\in{}L^2(\mathbb{R}^{+})\): \begin{equation} \label{cvi} \int\limits_{\mathbb{R}^{+}}|x(\xi)|^2\,d\xi= \int\limits_{\mathbb{R}}|v(\eta)|^2\,d\eta. \end{equation} By the definition of the operator \(U\), formulas~\eqref{rint} mean that \begin{equation} \label{fin} y=Ux. \end{equation} \end{proof} \begin{rem} \label{Reg} The function \(v\) in \eqref{l2r} may fail to belong to \(L^1(\mathbb{R})\). To give a meaning to~\eqref{l2r}, we use the standard approximation procedure. We choose a sequence \(\{v_n\}_{n=1,2,\dots}\) such that $$ v_n\in{}L^2(\mathbb{R})\cap{}L^1(\mathbb{R}) $$ for every \(n\) and \(\|v_n-v\|_{L^2(\mathbb{R})}\to0\) as \(n\to\infty\). The sequence \(\{u_n\}_{n=1,2,\dots}\), where $$ u_n(t)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}}v_n(\xi)e^{it\xi}\,d\xi, $$ is well defined and converges to~\(u\): \(\|u_n-u\|_{L^2(\mathbb{R})}\to0\) as \(n\to\infty\). \end{rem} \begin{rem} \label{met} The transformation~\eqref{fo} \,is none other than the Mellin transform $ \int_{\mathbb{R}^{+}}x(t)\,t^{\zeta-1}dt $ restricted to the line $\textup{Re}\,\zeta=\frac{1}{2}:\zeta=\frac{1}{2}+i\mu$, $\mu\in\mathbb{R}$. For\-mu\-la~\eqref{paE} is the Parceval identity for the Mellin transform, see \cite[\S3.17, Theorem 71]{T}. However, we would like to emphasize that we view this Mellin transform not as a single function defined for \(\mu\in\mathbb{R}\), but as a pair of functions defined for \(\mu\in\mathbb{R}^{+}\). \end{rem} \section{The model of the truncated Fourier--Plancherel operator.} \label{MFP} In \S\ref{MS} we introduced the operator \(U\) that maps the space \(L^2(\mathbb{R}^+)\) onto the model space \(\mathfrak{M}\) isometrically. In this section we calculate the operator \(U\mathscr{F}_{\mathbb{R}^+}U^{-1}\), which serves as a model of the operator \(\mathscr{F}_{\mathbb{R}^+}\). Let \(x\in{}L^2(\mathbb{R}^+)\) and \(\displaystyle \begin{bmatrix} y_{+}\\ y_{-} \end{bmatrix} =Ux\), i.e., let~\eqref{fo} be true. We would like to express the pair \(\displaystyle \begin{bmatrix} z_{+}\\ z_{-} \end{bmatrix} =U\mathscr{F}_{\mathbb{R}^+}x\) in terms of the pair \(\displaystyle \begin{bmatrix} y_{+}\\ y_{-} \end{bmatrix}\). Substituting the function $$ \big(\mathscr{F}_{\mathbb{R}^+}x\big)(t)=\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}x(\xi)e^{it\xi}\,d\xi $$ for~$x$ in~\eqref{fo}, we obtain \begin{subequations} \label{sst} \begin{align} \label{sst1} z_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}} \bigg(\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}x(\xi)e^{it\xi}\,d\xi\bigg) t^{-1/2}t^{\,i{\mu}\,}\,dt,\quad \mu\in\mathbb{R^+},\\ \label{sst2} z_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}} \bigg(\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}x(\xi)e^{it\xi}\,d\xi\bigg) t^{-1/2}t^{-i\mu}\,dt,\quad \mu\in\mathbb{R^+}. 
\end{align} \end{subequations} Changing the order of integration in \eqref{sst}, we arrive at the formulas \begin{subequations} \label{cst} \begin{align} \label{cst1} z_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}}x(\xi) \bigg(\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}e^{it\xi}\, t^{-1/2}t^{\,i\mu\,}\,dt\bigg) d\xi,\quad \mu\in\mathbb{R^+},\\ \label{cst2} z_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}}x(\xi) \bigg(\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}e^{it\xi}\, t^{-1/2}t^{-i\mu}\,dt\bigg) d\xi,\quad \mu\in\mathbb{R^+}. \end{align} \end{subequations} Changing \(t\to{}t/\xi\) in \eqref{cst}, we obtain \begin{subequations} \label{fst} \begin{align} \label{fst1} z_{+}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}}x(\xi)\, \xi^{-1/2}\xi^{-i\mu}\bigg(\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}e^{it}\, t^{-1/2}t^{\,i\mu\,}\,dt\bigg) d\xi,\quad \mu\in\mathbb{R^+},\\ \label{fst2} z_{-}(\mu)=&\frac{1}{\sqrt{2\pi}}\int\limits_{\mathbb{R}^{+}}x(\xi)\, \xi^{-1/2}\xi^{\,i\mu\,}\bigg(\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}e^{it}\, t^{-1/2}t^{-i\mu}\,dt\bigg) d\xi,\quad \mu\in\mathbb{R^+}. \end{align} \end{subequations} The inner integrals in \eqref{fst} do not depend on \(\xi\). Calculating these integrals, we can present~\eqref{fst} in the form \begin{subequations} \label{fft} \begin{align} \label{fft1} z_{+}(\mu)=&F_{+-}(\mu)\,y_{-}(\mu),\quad \mu\in\mathbb{R^+},\\ \label{fft2} z_{-}(\mu)=&F_{-+}(\mu)\,y_{+}(\mu),\quad \mu\in\mathbb{R^+}, \end{align} \end{subequations} where \begin{subequations} \label{ft} \begin{align} \label{ft1} F_{+-}(\mu)=&\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}e^{it}\, t^{-1/2}t^{\,i\mu\,}\,dt,\quad \mu\in\mathbb{R^+}, \\ \label{ft2} F_{-+}(\mu)=&\frac{1}{\sqrt{2\pi}} \int\limits_{\mathbb{R}^+}e^{it}\, t^{-1/2}t^{-i\mu}\,dt,\quad \mu\in\mathbb{R^+}. \end{align} \end{subequations} The functions \(F_{+-}\) and \(F_{-+}\) can be expressed in terms of the Euler \(\Gamma\)-function: \begin{subequations} \label{Ft} \begin{align} \label{Ft1} F_{+-}(\mu)=&\frac{1}{\sqrt{2\pi}}\,e^{i\pi/4} e^{-\frac{\pi}{2}\mu}\,\Gamma\big(\tfrac{1}{2}+i\mu\big),\quad \mu\in\mathbb{R^+}, \\ \label{Ft2} F_{-+}(\mu)=&\frac{1}{\sqrt{2\pi}}\,e^{i\pi/4} e^{\,\,\frac{\pi}{2}\mu}\,\,\Gamma\big(\tfrac{1}{2}-i\mu\big),\quad \mu\in\mathbb{R^+}. \end{align} \end{subequations} We shall not justify the possibility of changing the order of integration in \eqref{sst}. The above argument, which leads from \eqref{sst} to \eqref{fft}, plays a heuristic role. Actually, we establish formulas \eqref{fft}, where the functions \(F_{+-}(\mu)\) and \(F_{-+}(\mu)\) are of the form \eqref{Ft}, in a different way. The pair of identities~\eqref{fft} can be presented in the matrix form \begin{equation} \label{meq} (U\mathscr{F}_{\mathbb{R}^{+}}x)(\mu)= F(\mu)(Ux)(\mu),\quad \mu\in\mathbb{R}^{+}, \end{equation} where \(F(\mu)\) is a $(2\times2)$-matrix: \begin{equation} \label{mF} F(\mu)=\begin{bmatrix} 0&F_{+-}(\mu)\\ F_{-+}(\mu)&0 \end{bmatrix} \ccomma \quad\mbox{for all }\,\mu\in\mathbb{R}^{+}. \end{equation} \begin{thm} \label{MaTe} Let \(x\) be an arbitrary function in~\(L^2(\mathbb{R}^{+})\) and \(\mathscr{F}_{\mathbb{R}^{+}}x\) the truncated Fourier--Plancherel transform of \(x\). Then their images \(Ux\) and \(U\mathscr{F}_{\mathbb{R}^{+}}x\) under the operator \(U\) are related by formulas~\eqref{meq}, where the entries \(F_{+-}(\mu)\) and \(F_{-+}(\mu)\) of the matrix \(F(\mu)\) are of the form \eqref{Ft}.
\end{thm} \begin{proof} It suffices to verify identities \eqref{fft} only for \(x\) of the form \(x(t)=e_{a}(t)\), where \begin{equation} \label{ea} e_{a}(t)=e^{-at},\quad t\in\mathbb{R}^{+} \end{equation} and \(a\) is an arbitrary positive number. It is well known that the linear hull of the family of function \(\{e_a(t)\}_{0<a<\infty}\) is a dense set in \(L^2(\mathbb{R}^{+})\). The function \((\mathscr{F}_{\mathbb{R}^{+}}e_a)(t)\) can be calculated explicitly: \begin{equation} \label{fea} (\mathscr{F}_{\mathbb{R}^{+}}e_a)(t)=\frac{1}{\sqrt{2\pi}}\cdot\frac{1}{a-it}. \end{equation} The corresponding elements \(\displaystyle \begin{bmatrix} y_{+}\\ y_{-} \end{bmatrix} =Ue_a\) and \(\displaystyle \begin{bmatrix} z_{+}\\ z_{-} \end{bmatrix} =(U\mathscr{F}_{\mathbb{R}^{+}})e_a\) can also be calculated explicitly. By the definition \eqref{FoD}--\eqref{fo} of the operator~$U$, we have \begin{subequations} \label{uea} \begin{align} \label{uea1} y_{+}(\mu)=&a^{-\frac{1}{2}-i\mu}\,\Gamma\big(\tfrac{1}{2}+i\mu\big),\quad \mu\in\mathbb{R^+},\\ \label{uea2} y_{-}(\mu)=&a^{-\frac{1}{2}+i\mu}\,\Gamma\big(\tfrac{1}{2}-i\mu\big),\quad \mu\in\mathbb{R^+}, \end{align} \end{subequations} and \begin{subequations} \label{uFea} \begin{align} \label{uFea1} z_{+}(\mu)=&\sqrt{\frac{\pi}{2}}\,e^{i\frac{\pi}{4}}\,a^{-\frac{1}{2}+i\mu}\, \frac{e^{-\frac{\pi}{2}\mu}}{\cosh\pi\mu},\quad \mu\in\mathbb{R^+},\\ \label{uFea2} z_{-}(\mu)=&\sqrt{\frac{\pi}{2}}\,e^{i\frac{\pi}{4}}\,a^{-\frac{1}{2}-i\mu}\, \frac{e^{\frac{\pi}{2}\mu}}{\cosh\pi\mu}, \quad \mu\in\mathbb{R^+}. \end{align} \end{subequations} Relations \eqref{fft} follow from \eqref{uea}, \eqref{uFea}, \eqref{Ft}, and the identity \footnote{ This is a special case of the identity \(\Gamma(\zeta)\cdot\Gamma(1-\zeta)=\frac{\pi}{\sin\pi\zeta}, \quad\zeta\in\mathbb{C}\setminus\mathbb{Z}\). } \begin{equation} \label{Refl} \Gamma(1/2+{}i\mu)\,\Gamma(1/2-{}i\mu)=\frac{\pi}{\cosh{\pi\mu}}\,\cdot \end{equation} \end{proof} If \begin{math} \label{ma} M=\begin{bmatrix} M_{++}&M_{+-}\\ M_{-+}&M_{--} \end{bmatrix} \end{math} is a $(2\times2)$-matrix with complex entries, then\break \(\|M\|_{\mathbb{C}^2\to\mathbb{C}^2}\) is the norm of the matrix \(M\) viewed as an operator in the space~\(\mathbb{C}^2\), where the space \(\mathbb{C}^2\) is equipped with the standard Hermitian norm. \begin{defn} Let \(M(\mu)=\begin{bmatrix} M_{++}(\mu)&M_{+-}(\mu)\\ M_{-+}(\mu)&M_{--}(\mu) \end{bmatrix}\) be a $(2\times2)$-matrix-valued function of $\mu\in\mathbb{R}^{+}$ whose entries are complex valued functions defined almost everywhere. \emph{The multiplication operator \(\mathcal{M}_M\) generated by the matrix function~\(M\)} is defined by the formula \begin{equation} \label{dmo} (\mathcal{M}_My)(\mu)=M(\mu)y(\mu),\quad y=\begin{bmatrix} y_{+}\\ y_{-} \end{bmatrix} \in\,\mathfrak{M}. \end{equation} \end{defn} \begin{lem} \label{enmo} If the matrix function \(M(\mu)\) is bounded on \(\mathbb{R}^{+}\), i.e., $$ \underset{\mu\in\mathbb{R}^{+}}{\textup{ess\,sup}} \|M(\mu)\|_{\mathbb{C}^{2}\to\mathbb{C}^{2}}<\infty, $$ then $\mathcal{M}_M$ is a bounded operator in the space $\mathfrak{M}$, and \begin{equation} \label{eqn} \|\mathcal{M}_M\|_{\mathfrak{M}\to\mathfrak{M}}= \underset{\mu\in\mathbb{R}^{+}}{\textup{ess\,sup}}\, \|M(\mu)\|_{\mathbb{C}^{2}\to\mathbb{C}^{2}}. \end{equation} \end{lem} The expression on the left-hand side of \eqref{eqn} means the norm of the multiplication operator \(\mathcal{M}_M\) in the space \(\mathfrak{M}\). 
Concerning the notion of $\textup{ess\,sup}$ see \cite[\S2.11, p.~140.]{Bo} \begin{rem} \label{cont} If the matrix function \(M(\mu)\) is continuous on \(\mathbb{R}^{+}\), then \begin{equation} \label{eqnC} \|\mathcal{M}_M\|_{\mathfrak{M}\to\mathfrak{M}}= \underset{\mu\in\mathbb{R}^{+}}{\textup{\,sup}}\, \|M(\mu)\|_{\mathbb{C}^{2}\to\mathbb{C}^{2}}. \end{equation} \end{rem} Since \(x\in{}L^2(\mathbb{R}^{+})\) in \eqref{meq} is arbitrary, we can interpret~\eqref{meq} as an identity of operators. The following theorem is the core of the present paper. \begin{thm} \label{mt} The truncated Fourier--Plancherel operator \(\mathscr{F}_{\mathbb{R}^{+}}\) is unitarily equivalent to the multiplication operator \(\mathcal{M}_F\) generated by the matrix function~$F$ of the form \eqref{mF}--\eqref{Ft} in the space \(\mathfrak{M}\). We have \begin{equation} \label{UE} \mathscr{F}_{\mathbb{R}^{+}}=U^{-1}\mathcal{M}_F\, U, \end{equation} where \(U\) is the unitary operator described in Definition~{\rm\ref{DeU}}. \end{thm} \begin{rem} \label{mop} The multiplication operator \(\mathcal{M}_F\) possesses the same spectral properties as the operator~$\mathscr{F}_{\mathbb{R}^{+}}$. However, to study~$\mathcal{M}_F$ is much easier than to study~$\mathscr{F}_{\mathbb{R}^{+}}$. By the \emph{model operator} we mean~$\mathcal{M}_F$. \end{rem} \section{The spectrum and the resolvent of the operator \(\boldsymbol{\mathscr{F}_{\mathbb{R}^{+}}}\). \label{SpARes}} The unitary equivalence \eqref{UE} allows us to reduce the spectral analysis of the operator \(\mathscr{F}_{\!_{\scriptstyle{\mathbb{R}^{+}}}}:\,L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})\) to that of the operator \(\mathcal{M}_{_{\scriptstyle F}}:\,\mathfrak{M}\to\mathfrak{M}\). To perform the spectral analysis of the \emph{operator} \(\mathcal{M}_{_{\scriptstyle F}}\) acting in the \emph{in\-fi\-ni\-te-dimen\-si\-o\-nal} space~\(\mathfrak{M}\), we need to perform the spectral analysis of the \break $(2\times2)$-mat\-rix \(F(\mu)\) acting in the \emph{two-dimensional} space \(\mathbb{C}^2\). The spectral analysis of the matrix \(F(\mu)\) can be done for \emph{each} \(\mu\in\mathbb{R}^{+}\) separately. Then we can \emph{glue} the spectrum \(\textup{\large\(\sigma\)}(\mathcal{M}_{F})\) of~\(\mathcal{M}_{_{\scriptstyle F}}\) from the spectra \(\textup{\large\(\sigma\)}(F(\mu))\) of the matrices \(F(\mu)\), and the resolvent of~\(\mathcal{M}_{_{\scriptstyle F}}\) can be glued from the resolvents of the matrices \(F(\mu)\). For \(\mu\in[0,\infty)\), let \begin{equation} \label{EiVa} \zeta(\mu)=e^{i\pi/4}\frac{1}{\sqrt{2\,\cosh\pi\mu}}, \end{equation} and let \begin{equation} \label{EiV} \zeta_{+}(\mu)=\zeta(\mu),\quad \zeta_{-}(\mu)=-\zeta(\mu). \end{equation} It is clear that \(\zeta(\mu)\not=0\), so that \(\zeta_{+}(\mu)\not=\zeta_{-}(\mu)\) for every \(\mu\in[0,\infty)\). \begin{lem}{ \ } \label{Spmm} For \(\mu\in[0,\infty)\), the spectrum \(\textup{\large\(\sigma\)}(F(\mu))\) of the matrix \(F(\mu)\) given by~\eqref{mF} is simple and consists of two different points \(\zeta_{+}(\mu)\) and \(\zeta_{-}(\mu)\), see~\eqref{EiV} and~\eqref{EiVa}{\rm:} \begin{equation} \label{SpMM} \textup{\large\(\sigma\)}(F(\mu))=\{\zeta_{+}(\mu)\,,\zeta_{-}(\mu)\}. \end{equation} \end{lem} \begin{proof} Let \(D(z,\mu)\) be the determinant of that matrix: \begin{equation} \label{detm} D(z,\mu)=\det(zI-F(\mu)). \end{equation} The structure \eqref{mF} of~\(F(\mu)\) shows that \begin{equation} \label{di} D(z,\mu)=z^2-F_{+-}(\mu)\cdot{}F_{-+}(\mu). 
\end{equation}
The product \(F_{+-}(\mu)\cdot{}F_{-+}(\mu)\) can be calculated by using \eqref{Ft} and \eqref{Refl}:
$$ F_{+-}(\mu)\cdot{}F_{-+}(\mu)=\frac{i}{2\,\cosh\pi\mu}. $$
Thus,
\begin{equation} \label{prdi} D(z,\mu)=z^2-\frac{i}{2\,\cosh\pi\mu}. \end{equation}
Since \(\big(e^{i\pi/4}\big)^2=i\), the roots of the equation \(D(z,\mu)=0\) are exactly the two points \(\zeta_{+}(\mu)\) and \(\zeta_{-}(\mu)\) defined in \eqref{EiVa}--\eqref{EiV}, which proves \eqref{SpMM}.
\end{proof}
\begin{defn} \label{dein} Let \(a\) and \(b\) be points in~\(\mathbb{C}\). By definition, the interval \([a,\,b]\) is the set \([a,\,b]=\{(1-\tau)a+\tau{}b:\,\tau\,\,\textup{runs over}\,\,[0,\,1]\}\). The open interval \((a,b)\) as well as half-open intervals are defined similarly. \end{defn}
When \(\mu\) runs over the interval \([0,\,\infty)\), the points \(\zeta_{+}(\mu)\) fill the interval \(\Big(0,\,\frac{1}{\sqrt{2}}\,e^{i\pi/4}\Big]\) and the points \(\zeta_{-}(\mu)\) fill the interval \(\Big[-e^{i\pi/4}\frac{1}{\sqrt{2}},\,0\Big)\). When \(\mu\) increases, the points \(\zeta_{+}(\mu)\), \(\zeta_{-}(\mu)\) move monotonically: the point \(\zeta_{+}(\mu)\) moves from \(e^{i\pi/4}\,\frac{1}{\sqrt{2}}\) to \(0\), and the point \(\zeta_{-}(\mu)\) moves from \(-e^{i\pi/4}\,\frac{1}{\sqrt{2}}\) to \(0\). Thus, the mapping \(\mu\to\zeta_{+}(\mu)\) is a homeomorphism of \([0,\,\infty)\) onto \(\Big(0,\,e^{i\pi/4}\,\frac{1}{\sqrt{2}}\Big]\) and the mapping \(\mu\to\zeta_{-}(\mu)\) is a homeomorphism of \([0,\,\infty)\) onto \(\Big[-e^{i\pi/4}\frac{1}{\sqrt{2}},\,0\Big)\). Moreover, \(\zeta_{+}(\infty)=\zeta_{-}(\infty)=0\). Thus, the interval \(\Big[-e^{i\pi/4}\frac{1}{\sqrt{2}},\,e^{i\pi/4}\frac{1}{\sqrt{2}}\Big]\) is naturally decomposed into the union of three nonintersecting parts:
\begin{equation} \label{nade} \Big[-e^{i\pi/4}\tfrac{1}{\sqrt{2}},\,e^{i\pi/4}\tfrac{1}{\sqrt{2}}\Big]= \Big[-e^{i\pi/4}\tfrac{1}{\sqrt{2}},\,0\Big)\cup\{0\}\cup \Big(0,\,e^{i\pi/4}\tfrac{1}{\sqrt{2}}\Big]. \end{equation}
\begin{thm} \label{spmop} The spectrum \(\textup{\large\(\sigma\)}(\mathcal{M}_{F})\) of the model operator~$\mathcal{M}_{_{\scriptstyle F}}$ is the following interval\/{\rm:}
\begin{equation} \label{dsmo} \textup{\large\(\sigma\)}(\mathcal{M}_{F})= \Big[-e^{i\pi/4}\tfrac{1}{\sqrt{2}},\,e^{i\pi/4}\tfrac{1}{\sqrt{2}}\Big]. \end{equation}
\end{thm}
In other words, Theorem \ref{spmop} claims that the spectrum \(\textup{\large\(\sigma\)}(\mathcal{M}_{F})\) of~\(\mathcal{M}_{_{\scriptstyle F}}\) is represented in the form
\begin{equation} \label{GlSp} \textup{\large\(\sigma\)}(\mathcal{M}_{_{\scriptstyle F}})= \bigcup_{\mu\in[0,\,\infty]}\textup{\large\(\sigma\)}(F(\mu)). \end{equation}
Since the spectra of unitarily equivalent operators coincide, Theorem \ref{spmop} can be reformulated as follows.
\begin{thm} \label{spfop} For the spectrum \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\) of the truncated Fourier operator~\(\mathscr{F}_{\mathbb{R}^{+}}\) we have
\begin{equation} \label{dsmot} \textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})= \Big[-e^{i\pi/4}\tfrac{1}{\sqrt{2}},\,e^{i\pi/4}\tfrac{1}{\sqrt{2}}\Big]. \end{equation}
\end{thm}
In the present section we prove the description \eqref{GlSp} of the spectrum \(\textup{\large\(\sigma\)}(\mathcal{M}_{_{\scriptstyle F}})\) of the model operator \(\mathcal{M}_{F}\). In passing, we obtain some estimates for the resolvents of the matrices \(F(\mu)\). These estimates are not entirely evident because the matrices \(F(\mu)\) are not selfadjoint. In particular, \(F(\infty)\) is a Jordan cell.
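To substantiate the last remark, note that, by the explicit formulas \eqref{av} obtained below, \(|F_{+-}(\mu)|\to0\) while \(|F_{-+}(\mu)|\to1\) as \(\mu\to\infty\), and the two eigenvalues \(\zeta_{\pm}(\mu)\) merge at the origin. Hence, for large \(\mu\), the matrix
\begin{equation*}
F(\mu)=\begin{bmatrix} 0&F_{+-}(\mu)\\ F_{-+}(\mu)&0 \end{bmatrix}
\end{equation*}
is arbitrarily close to a matrix of the form \(\begin{bmatrix} 0&0\\ c&0 \end{bmatrix}\) with \(|c|=1\), and every such matrix is similar to the \(2\times2\) Jordan cell with eigenvalue \(0\); this is the sense in which \(F(\infty)\) may be regarded as a Jordan cell.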
\begin{lem} \label{EMME} The norm of an arbitrary $(2\times2)$-matrix \(M\),
\begin{equation*} M= \begin{bmatrix} m_{11}&m_{12}\\[1.5ex] m_{21}&m_{22} \end{bmatrix}\,, \end{equation*}
viewed as an operator from \(\mathbb{C}^2\) to \(\mathbb{C}^2\), admits the estimates
\begin{equation} \label{EMMED} \tfrac{1}{2}\,\textup{trace}\,(M^{\ast}M) \leqslant\|M\|^2 \leqslant\,\textup{trace}\,(M^{\ast}M). \end{equation}
Under the assumption that \(\det\,M\not=0\), the norm of the inverse matrix \(M^{-1}\) can be estimated as follows\/{\rm:}
\begin{equation} \label{EMMEI} \begin{split} |(\det\,M)|^{-2}\,\textup{trace}\,(M^{\ast}M) &-\frac{2}{\textup{trace}\,(M^{\ast}M)} \\ &\leqslant\|M^{-1}\|^{\,2}\leqslant |(\det\,M)|^{-2}\,\textup{trace}\,(M^{\ast}M), \end{split} \end{equation}
where
\begin{equation} \label{Trac} \textup{trace}\,M^{\ast}M=|m_{11}|^2+|m_{12}|^2+|m_{21}|^2+|m_{22}|^2. \end{equation}
\end{lem}
\begin{proof} Let \(s_0\) and \(s_1\) be the singular values of the matrix \(M\), i.e.,
\begin{equation} \label{SinV} 0<s_1\leqslant{}s_0 \end{equation}
and the numbers \(s_0^2,\,s_1^2\) are eigenvalues of the matrix \(M^{\ast}M\). Then
\begin{gather*} \|M\|=s_0,\quad \|M^{-1}\|=s_1^{\,-1},\\ \textup{trace}(M^{\ast}M)=s_0^2+s_1^2, \quad{}|\det(M)|^2=\det{}(M^{\ast}M)=s_0^2\cdot{}s_1^2. \end{gather*}
Therefore, inequality \eqref{EMMED} takes the form
\[\frac{1}{2}(s_{0}^{2}+s_{1}^{2})\leqslant{}s_{0}^{2}\leqslant(s_0^{2}+s_1^{2})\,,\]
and \eqref{EMMEI} takes the form
\[(s_0s_1)^{\,-2}(s_0^{\,2}+s_{1}^{\,2})\,-\,\frac{2}{s_0^{\,2}+s_1^{\,2}}\,\leqslant\,s_1^{\,-2}\,\leqslant\, (s_0s_1)^{\,-2}(s_0^{\,2}+s_{1}^{\,2}).\]
The last inequalities are valid for arbitrary numbers \(s_0,\,s_1\) that satisfy~\eqref{SinV}. \end{proof}
Since the numbers \(\Gamma(1/2\pm{}i\mu)\) are complex conjugate, from \eqref{Refl} it follows that
\begin{equation} \label{AbGa} |\Gamma(1/2\pm{}i\mu)|^2=\frac{2\pi}{e^{\pi\mu}+e^{-\pi\mu}},\quad \mu\in\mathbb{R}^{+}. \end{equation}
Using \eqref{Ft} and \eqref{AbGa}, we calculate the absolute values \(|F_{+-}(\mu)|\) and \(|F_{-+}(\mu)|\):
\begin{subequations} \label{av}
\begin{align} \label{av1} |F_{+-}(\mu)|=&\frac{1}{\sqrt{1+e^{\,\,2\pi{}\mu}}}\,,\ \ \mu\in\mathbb{R^+}, \\ \label{av2} |F_{-+}(\mu)|=&\frac{1}{\sqrt{1+e^{-2\pi{}\mu}}},\ \ \mu\in\mathbb{R^+}. \end{align}
\end{subequations}
Note that, in particular,
\begin{equation} \label{MaFC} 1/\sqrt{2} \leqslant|F_{-+}(\mu)|<1,\quad |F_{+-}(\mu)|\leqslant 1/\sqrt{2},\quad \mu\in\mathbb{R}^{+}. \end{equation}
If \(\mu\) runs over the interval \([0,\infty)\), then \(|F_{-+}(\mu)|\) increases from \(2^{-1/2}\) to \(1\) and \(|F_{+-}(\mu)|\) decreases from \(2^{-1/2}\) to \(0\). In particular,
\begin{equation} \label{MaFCSu} \sup_{\mu\in\mathbb{R}^{+}}|F_{-+}(\mu)|=\underset{\mu\in\mathbb{R}^{+}} {\textup{ess\,sup}}|F_{-+}(\mu)|=1. \end{equation}
From \eqref{av} it follows that
\begin{equation} \label{SMc2} |F_{+-}(\mu)|^2+|F_{-+}(\mu)|^2=1. \end{equation}
For the matrix \(F(\mu)\) defined by \eqref{mF}--\eqref{Ft} we have \(F(\mu)^{\ast}F(\mu)=\textup{diag}\big(|F_{-+}(\mu)|^2,\,|F_{+-}(\mu)|^2\big)\), so the norm of \(F(\mu)\) equals \(\max\{|F_{+-}(\mu)|,\,|F_{-+}(\mu)|\}=|F_{-+}(\mu)|\), that is,
\begin{equation} \|F(\mu)\|_{\mathbb{C}^2\to\mathbb{C}^2}=\frac{1}{\sqrt{1+e^{-2\pi{}\mu}}} \quad\mbox{ for all }\ \mu\in\mathbb{R}^{+}.
\end{equation}
\begin{lem} \label{eremf} For every \(\mu\in[0,\infty)\) and every \(z\in\mathbb{C}\setminus\!\textup{\large\(\sigma\)}(F(\mu))\), the matrix \((zI-F(\mu))^{-1}\) admits the estimates
\begin{equation} \begin{split} \label{EsSRe} |D(z,\mu)|^{-2}\big(2|z|^2+1\big)&-\frac{2}{2|z|^2+1} \\ &\leqslant\|(zI-F(\mu))^{-1}\|^{2}\leqslant{}|D(z,\mu)|^{-2}\big(2|z|^2+1\big)\,, \end{split} \end{equation}
where
\begin{equation} \label{det} D(z,\mu)=\det(zI-F(\mu)) \end{equation}
and \(\textup{\large\(\sigma\)}(F(\mu))\) is the spectrum of the matrix~\(F(\mu)\). \end{lem}
\begin{proof} We apply estimate \eqref{EMMEI} to the matrix \(M=zI-F(\mu)\). We calculate the quantity \(\textup{trace}\,M^{\ast}M\) with the help of~\eqref{Trac}:
\begin{equation} \label{Etr} \textup{trace}\,(zI-F(\mu))^{\ast}(zI-F(\mu))=2|z|^{2}+|F_{+-}(\mu)|^{2}+|F_{-+}(\mu)|^{2}. \end{equation}
Using~\eqref{SMc2}, we see that
\begin{equation} \textup{trace}\,(zI-F(\mu))^{\ast}(zI-F(\mu))=2|z|^{2}+1. \end{equation}
Since \(\det(zI-F(\mu))=D(z,\mu)\), the estimates \eqref{EsSRe} now follow from \eqref{EMMEI}. \end{proof}
\begin{proof}[Proof of Theorem \ref{spmop}] When \(\mu\) runs over the interval \([0,\infty)\), the complex numbers \(\dfrac{i}{2\cosh{\pi\mu}}\), which occur on the right-hand side of~\eqref{prdi}, fill the interval~\((0,i/2]\). Therefore,
\begin{equation} \label{DistCo} \inf_{\mu\in[0,\infty)}|D(z,\mu)|=\textup{dist}(z^2,\,[0,i/2]), \end{equation}
where
\begin{equation} \label{Dist} \textup{dist}(z^2,\,[0,i/2]\,)=\min_{\zeta\in[0,i/2]}|z^2-\zeta|. \end{equation}
In particular,
\begin{equation*} \Big(\inf_{\mu\in[0,\infty)}|D(z,\mu)|>0\Big)\Leftrightarrow \big(\,z^2\not\in[0,i/2]\big), \end{equation*}
or, in other words,
\begin{equation} \label{SepCo} \Big(\inf_{\mu\in[0,\infty)}|D(z,\mu)|>0\Big)\Leftrightarrow \Big(\,z\not\in\Big[-\frac{1}{\sqrt{2}}\,e^{i\pi/4}, \frac{1}{\sqrt{2}}\,e^{i\pi/4}\Big]\Big). \end{equation}
Inequalities \eqref{EsSRe} imply
\begin{equation} \begin{split} \label{CoFRe} &\frac{2|z|^2+1}{\big(\inf_{\mu\in[0,\infty)}|D(z,\mu)|\big)^{2}}-\frac{2}{2|z|^2+1} \\ &\quad\leqslant\sup\limits_{\mu\in[0,\infty)}\|(zI-F(\mu))^{-1}\|^{2}_{\mathbb{C}^2\to\mathbb{C}^2}\leqslant \frac{2|z|^2+1}{\big(\inf_{\mu\in[0,\infty)}|D(z,\mu)|\big)^{2}}. \end{split} \end{equation}
From \eqref{SepCo} and \eqref{CoFRe} it follows that
\begin{equation} \label{CoFRe1} \Big(\sup\limits_{\mu\in[0,\infty)}\!\|(zI\!-\!F(\mu))^{-1}\|_{\mathbb{C}^2\to\mathbb{C}^2}\!<\!\infty\Big)\!\Leftrightarrow\! \Big(z\!\not\in\!\Big[\!-\!\frac{1}{\sqrt{2}}\,e^{i\pi/4},
\frac{1}{\sqrt{2}}\,e^{i\pi/4}\Big]\Big). \end{equation}
By Lemma \ref{enmo}, the supremum on the left-hand side of \eqref{CoFRe1} equals the norm of the resolvent \((z\mathscr{I}-\mathcal{M}_F)^{-1}\) of the model operator; hence \(z\) belongs to the resolvent set of \(\mathcal{M}_F\) if and only if \(z\) does not belong to the interval on the right-hand side, which proves \eqref{dsmo}. \end{proof}
By Lemma \ref{enmo}, inequalities \eqref{CoFRe} can be viewed as estimates for the norm of the resolvent of the model operator \(\mathcal{M}_F\), or, in other words, as estimates for the norm of the resolvent of the truncated Fourier--Plancherel operator~\(\mathscr{F}_{\mathbb{R}^{+}}\):
\begin{equation} \begin{split} \label{CoFReso} \frac{2|z|^2+1}{\big(\textup{dist}(z^2,[0,i/2])\big)^{2}}-\frac{2}{2|z|^2+1}&\leqslant \big\|\big(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}} \big)^{-1}\big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})}^{2} \\ &\leqslant \frac{2|z|^2+1}{\big(\textup{dist}(z^2\,,\,[0,i/2]\,)\big)^{2}}\cdot \end{split} \end{equation}
The left inequality can be presented in the form
\begin{equation*} \frac{(2|z|^2+1)^{1/2}}{\textup{dist}\,(z^2,[0,i/2]\,)} \sqrt{1-\frac{2\,\textup{dist}^2(z^2,\,[0,i/2]\,)}{(2|z|^2+1)^2}} \leqslant\big\|\big(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}} \big)^{-1}\big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})}. \end{equation*}
Here, the quantity under the square root is positive because
\begin{equation*} \label{GOH} \frac{2\,\textup{dist}^2(z^2,\,[0,i/2])}{\big(2|z|^2+1\big)^2}\leqslant \frac{2|z|^4}{\big(2|z|^2+1\big)^2}\leqslant\frac{1}{2}. \end{equation*}
Since \((1-\alpha)\leqslant\sqrt{1-\alpha}\) for \(0\leqslant\alpha\leqslant1\), we have
\[1-\frac{2\,\textup{dist}^2(z^2,\,[0,i/2])}{\big(2|z|^2+1\big)^2} \leqslant\sqrt{1-\frac{2\,\textup{dist}^2(z^2,\,[0,i/2])}{\big(2|z|^2+1\big)^2}}.\]
Thus, we get a lower estimate for the norm of the resolvent:
\begin{subequations} \label{EsRes}
\begin{equation} \label{LoEs} \frac{\big(2|z|^2+1\big)^{1/2}}{\textup{dist}(z^2,[0,i/2])}- \frac{2\,\textup{dist}(z^2,[0,i/2])}{\big(2|z|^2+1\big)^{3/2}} \leqslant\big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})}. \end{equation}
An upper estimate for the norm of the resolvent is provided by the right inequality in~\eqref{CoFReso}:
\begin{equation} \label{UpEs} \big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})} \leqslant\frac{\big(2|z|^2+1\big)^{1/2}}{\textup{dist}(z^2\,,\,[0,i/2])}\cdot \end{equation}
\end{subequations}
The smaller the value $\operatorname{dist}(z^2,[0,i/2])$ is, the closer the lower estimate \eqref{LoEs} and the upper estimate \eqref{UpEs} are to each other. However, we would like to estimate the resolvent not in terms of $$ \operatorname{dist}(z^2,\,[0,i/2]), $$ but rather in terms of $ \operatorname{dist}(z,\,\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})). $
\begin{lem} \label{DfDi} Let \(\zeta\) be a point of the spectrum \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\)\textup{:}
\begin{equation} \label{zePsp} \zeta\in\bigg[-\frac{1}{\sqrt{2}}\,e^{i\pi/4},\, \frac{1}{\sqrt{2}}\,e^{i\pi/4}\bigg]\,, \end{equation}
and let $z$ lie on the normal to the interval \(\Big[-\frac{1}{\sqrt{2}}\,e^{i\pi/4},\, \frac{1}{\sqrt{2}}\,e^{i\pi/4}\Big]\) at the point~$\zeta\!:$
\begin{equation} \label{znorspm} z=\zeta\pm{}|z-\zeta|e^{i3\pi/4}. \end{equation}
Then
\begin{equation} \label{distot} \textup{dist}\big(z^2\,,\big[0\,,i/2\big]\,\big)\,=\, \begin{cases} 2|\zeta|\,|z-\zeta|\,&\textup{if \ }|z-\zeta|\leqslant|\zeta|\,,\\ |\zeta|^2+|z-\zeta|^2\,=\,|z|^2&\textup{if \ }|z-\zeta|\geqslant|\zeta|.
\end{cases} \end{equation} \end{lem}
\begin{proof} Condition \eqref{zePsp} means that \(\zeta=\pm|\zeta|e^{i\pi/4}\). Substituting this in~\eqref{znorspm}, we obtain
\begin{equation*} z^2=\pm2|\zeta|\,|z-\zeta|+i(|\zeta|^2-|z-\zeta|^2). \end{equation*}
If \(|z-\zeta|\leqslant|\zeta|\), then the point \(i(|\zeta|^2-|z-\zeta|^2)\) lies on the interval \([0,i/2]\). In this case,
$$ \operatorname{dist}\,(z^2,\,[0,i/2])=2|\zeta|\,|z-\zeta|. $$
If \(|z-\zeta|\geqslant|\zeta|\), then the point \(i(|\zeta|^2-|z-\zeta|^2)\) lies on the half-axis \([0,-i\infty)\). In this case,
$$ \textup{dist}\,\big(z^2,[0,\,i/2]\big)=\sqrt{{\big(|\zeta|^2-|z-\zeta|^2\big)^2+ 4|\zeta|^2|z-\zeta|^2}}=|\zeta|^2+|z-\zeta|^2=|z|^2. $$
Since \(|\zeta|^2+|z-\zeta|^2\geqslant2|\zeta||z-\zeta|\), it follows that in both cases, \(|z-\zeta|\leqslant|\zeta|\) or \(|z-\zeta|\geqslant|\zeta|\), the estimate
\begin{equation} \label{ERFB} \operatorname{dist}\,(z^2\,,\,[0,\,i/2])\geqslant2|\zeta|\,|z-\zeta| \end{equation}
holds. \end{proof}
\begin{thm} \label{TEstRes} Let \(\zeta\) be a point of the spectrum \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\) of the operator \(\mathscr{F}_{\mathbb{R}^{+}}\), and let~\(z\) lie on the normal to the interval \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\) at the point \(\zeta\).\\ Then the following statements hold true.
\begin{enumerate}
\item[\textup{1.}] The resolvent \((z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\) admits the upper estimate
\begin{equation} \label{UpEsResSM} \big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\big\| _{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})}\leqslant A(z)\frac{1}{|\zeta|}\cdot\frac{1}{|z-\zeta|}\,, \end{equation}
where
\begin{equation} \label{Az} A(z)=\frac{(2|z|^2+1)^{1/2}}{2}\cdot \end{equation}
\item[\textup{2.}] If, moreover, the condition \(|z-\zeta|\leqslant|\zeta|\) is satisfied, then the resolvent \((z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\) also admits the lower estimate
\begin{equation} \label{LoEsResSM} \begin{split} \hspace{3.0ex}A(z)\frac{1}{|\zeta|}\cdot\frac{1}{|z-\zeta|} &-B(z)|\zeta||z-\zeta| \\ &\leqslant\big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1} \big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})}\,\ccomma \end{split} \end{equation}
where \(A(z)\) is the same as in \eqref{Az} and
\begin{equation} \label{Bz} B(z)=\frac{4}{(2|z|^2+1)^{3/2}}\,\cdot \end{equation}
\item[\textup{3.}] For \(\zeta=0\), the resolvent \((z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\) admits the estimates
\begin{align} \label{EZEZM} \hspace{3.0ex}2A(z)\frac{1}{|z|^2}-\frac{|z|^2}{2}\,B(z) \leqslant\big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1} \big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})} \leqslant{}2A(z)\frac{1}{|z|^2}, \end{align}
where \(A(z)\) and \(B(z)\) are the same as in \eqref{Az}, \eqref{Bz}, and \(z\) is an arbitrary point of the normal.
\end{enumerate}
In particular, if \(\zeta\not=0\), and \(z\) tends to \(\zeta\) along the normal to the interval \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\), then we have
\begin{equation} \label{AsNzpM} \big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\big\|_{L^2(\mathbb{R}^{+}) \to{}L^2(\mathbb{R}^{+})}=\frac{A(\zeta)}{|\zeta|}\,\frac{1}{|z-\zeta|} +O(1).
\end{equation}
If \(\zeta=0\) and \(z\) tends to \(\zeta\) along the normal to the interval \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\), then we have
\begin{equation} \label{AszpM} \big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\big\| _{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})}=|z|^{-2}+O(1)\,, \end{equation}
where \(O(1)\) is a quantity that remains bounded as \(z\) tends to \(\zeta\). \end{thm}
\begin{proof} The proof is based on estimates \eqref{EsRes} for the resolvent and on Lem\-ma~\ref{DfDi}. Combining inequality~\eqref{ERFB} with estimate~\eqref{UpEs}, we obtain estimate~\eqref{UpEsResSM}, which is valid for \emph{all} \(z\) lying on the normal to the interval \(\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}})\) at the point~$\zeta$. \emph{If, moreover, \(z\) is sufficiently close to \(\zeta\)}, namely, the condition \(|z-\zeta|\leqslant|\zeta|\) is satisfied, then equality occurs in~\eqref{ERFB}. Combining \emph{identity}~\eqref{ERFB} with estimate~\eqref{LoEs}, we obtain~\eqref{LoEsResSM}. The asymptotic relation \eqref{AsNzpM} is a consequence of inequalities \eqref{UpEsResSM} and \eqref{LoEsResSM} because $$ \frac{|A(z)-A(\zeta)|}{|z-\zeta|}=O(1) $$ as \(z\) tends to \(\zeta\). The asymptotic relation \eqref{AszpM} is a consequence of inequalities \eqref{EsRes} and the relation $$ \textup{dist}\,(z^2,\,[0,\,i/2])=|z|^2, $$ which is valid for all \(z\) lying on the normal to the interval \(\textup{\large\(\sigma\)}(\mathcal{M}_{F})\) at the point \(\zeta=0\). (See \eqref{distot} for \(\zeta=0\).) \end{proof}
\begin{cor} \label{NoSNO} The operator \(\mathscr{F}_{\mathbb{R}^{+}}\) is not similar to any normal operator. \end{cor}
Should the operator \(\mathscr{F}_{\mathbb{R}^{+}}\) be similar to some normal operator \(\mathcal{N}\), the resolvent of the operator \(\mathscr{F}_{\mathbb{R}^{+}}\) would admit the estimate
\[\big\|(z\mathscr{I}-\mathscr{F}_{\mathbb{R}^{+}})^{-1}\big\|_{L^2(\mathbb{R}^{+})\to{}L^2(\mathbb{R}^{+})} \leqslant\,C\,\big(\textup{dist}\,(z,\,\textup{\large\(\sigma\)}(\mathscr{F}_{\mathbb{R}^{+}}))\big)^{-1}\,,\]
with \(C<\infty\) independent of~\(z\). However, this estimate is not compatible with the asymptotic relation \eqref{AszpM}.
\end{document}
\begin{document} \preprint{} \title{Quantum broadcast communication}
\author{Jian Wang} \email{[email protected]} \affiliation{School of Electronic Science and Engineering, \\National University of Defense Technology, Changsha, 410073, China }
\author{Quan Zhang} \affiliation{School of Electronic Science and Engineering, \\National University of Defense Technology, Changsha, 410073, China }
\author{Chao-jing Tang} \affiliation{School of Electronic Science and Engineering, \\National University of Defense Technology, Changsha, 410073, China }
\begin{abstract}
Broadcast encryption allows the sender to securely distribute his/her secret to a dynamically changing group of users over a broadcast channel. In this paper, we consider a simple broadcast communication task in the quantum scenario, in which the central party broadcasts his secret to multiple receivers via a quantum channel. We present three quantum broadcast communication schemes. The first scheme utilizes entanglement swapping and Greenberger-Horne-Zeilinger states to realize a task in which the central party broadcasts the secret to a group of receivers who share a group key with him. In the second scheme, based on dense coding, the central party broadcasts the secret to multiple receivers, each of whom shares an authentication key with him. The third scheme is a quantum broadcast communication scheme with quantum encryption, in which the central party can broadcast the secret to any subset of the legal receivers.
\end{abstract}
\pacs{03.67.Dd, 03.67.Hk} \keywords{Quantum key distribution; Quantum teleportation} \maketitle
\section{Introduction}
\label{introduction}
Quantum cryptography has been one of the most remarkable applications of quantum mechanics in quantum information science. Quantum key distribution (QKD), which provides a way of exchanging a private key with unconditional security, has progressed rapidly since the first QKD protocol was proposed by Bennett and Brassard in 1984 \cite{bb84}. Many other quantum communication schemes have also been proposed and pursued, such as quantum secret sharing (QSS)\cite{hbb99,kki99,zhang,gg03,zlm05,xldp04} and quantum secure direct communication (QSDC) \cite{beige,Bostrom,Deng,denglong,cai1,cai4,jwang1,jwang2,jwang3,hlee,cw1,cw2,tg,zjz}. QSS is the generalization of classical secret sharing to the quantum scenario and can be used to share both classical and quantum messages among the sharers. Much research has been carried out, in both theoretical and experimental aspects, since the pioneering QSS scheme proposed by Hillery, Bu\v{z}ek and Berthiaume in 1999 \cite{hbb99}. Different from QKD, the objective of QSDC is to transmit the secret message directly without first establishing a key to encrypt it. QSDC can be used in some special environments, as has been shown in Refs.~\cite{Bostrom,Deng}. The works on QSDC have attracted a great deal of attention. Bostr\"{o}m and Felbinger \cite{Bostrom} proposed a Ping-Pong protocol which is quasi-secure for secure direct communication if a perfect quantum channel is used. Deng et al. \cite{Deng,denglong} put forward a two-step QSDC protocol using Einstein-Podolsky-Rosen (EPR) pairs and a QSDC scheme using batches of single photons which serves as a quantum one-time-pad cryptosystem. We proposed a multiparty controlled QSDC scheme using Greenberger-Horne-Zeilinger (GHZ) states and a QSDC scheme based on the order rearrangement of single photons \cite{jwang2,jwang3}. Lee et al. \cite{hlee} presented two QSDC protocols with user authentication.
Recently, some multiparty quantum direct communication schemes have been proposed which realize the task that many users send each of their secrets to a central party. Gao et al. \cite{gao1} proposed a QSDC scheme using GHZ states and entanglement swapping. In their scheme, the secret messages can be transmitted from two senders to a remote receiver. They also presented a simultaneous QSDC scheme between the central party and other $M$ parties using GHZ states, in which the $M$ parties can transmit their secret messages to the central party simultaneously \cite{gao2}. Jin et al. \cite{jin} put forward a simultaneous QSDC scheme by using GHZ states and dense coding.
Broadcast encryption involves a sender and multiple users (receivers) \cite{fn94}. The sender first encrypts his content and then transmits it to a dynamically changing set of users via insecure broadcasting channels. The broadcast encryption scheme ensures that only privileged receivers can recover the subscribed content, while unauthorized users cannot learn anything. In this paper, we consider a simple broadcast encryption task in the quantum scenario, called quantum broadcast communication (QBC), in which the central party broadcasts his secret message to a group of legal receivers via a quantum channel and no illegal receiver can obtain the central party's secret. A naive scheme would be that the sender first establishes a common key with the users and then encrypts the secret with the shared key; the users could then obtain the sender's secret by decrypting the cipher with the key. In the present schemes, instead, we try to allow the sender to broadcast the secret to the users directly, without first establishing a common key to encrypt it. We then present three QBC schemes based on the ideas in Refs.~\cite{hlee,gao2,jin,zlg01}. In scheme 1, a group of users share a group key with the central party. After authenticating the users, the central party broadcasts his secret message to them by using entanglement swapping \cite{zzhe93}. In scheme 2, each user shares an authentication key with the central party. The central party first authenticates the users and then broadcasts the secret to them by using dense coding \cite{bw92}. In scheme 3, based on quantum encryption \cite{zlg01}, the central party utilizes the controlled-NOT (CNOT) operation to encrypt his secret qubit by using a particle of a GHZ state, and the designated users also use CNOT operations to decrypt the central party's secret.
The aim of QBC is to broadcast the secret to the legal users directly. In our schemes, we suppose a trusted third party, Trent, broadcasts his secret to $r$ legal users, Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$. Similar to Ref. \cite{hlee}, Trent shares a secret identity number $ID_i$ ($i=1,2,\cdots,r$) and a secret hash function $h_i$ ($i=1,2,\cdots,r$) with each user. Only if the users' identities are legal can Trent broadcast his secret to them. Here the hash function is
\begin{eqnarray}
h: \{0,1\}^l\times\{0,1\}^m\rightarrow\{0,1\}^n,
\end{eqnarray}
where $l$, $m$ and $n$ denote the length of the identity number, the length of a counter and the length of the authentication key, respectively. Thus the user's authentication key can be expressed as $AK=h(ID, C)$, where $C$ is the counter of calls on the user's hash function. When the length of the authentication key is not enough to satisfy the requirements of the cryptographic task, the parties can increase the counter and then generate a new authentication key.
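For concreteness, and as one natural reading of the key-refresh rule just described (the concatenation below is only an illustration and is not prescribed explicitly above), user $i$'s authentication key is derived from the shared secrets as
\begin{eqnarray}
AK_i=h_i(ID_i, C_i)\in\{0,1\}^n,
\end{eqnarray}
and, when a given task requires more than $n$ key bits, the counter is incremented and the successive outputs are concatenated,
\begin{eqnarray}
AK_i=h_i(ID_i, C_i)\,\|\,h_i(ID_i, C_i+1)\,\|\,\cdots,
\end{eqnarray}
where $\|$ denotes concatenation of bit strings, until the required key length is reached.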
We denote the authentication keys of Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$ as $AK_{A_1}=h_{A_1}(ID_{A_1}, C_{A_1})$, $AK_{A_2}=h_{A_2}(ID_{A_2}, C_{A_2})$, $\cdots$, $AK_{A_r}=h_{A_r}(ID_{A_r}, C_{A_r})$. We then give some relations which will be used in our schemes. The four Bell states and the eight three-particle GHZ states are defined as \begin{eqnarray} \ket{\phi^+}=\frac{1}{\sqrt{2}}(\ket{00}+\ket{11}),\ket{\phi^-}=\frac{1}{\sqrt{2}}(\ket{00}-\ket{11}),\nonumber\\ \ket{\psi^+}=\frac{1}{\sqrt{2}}(\ket{01}+\ket{10}),\ket{\psi^-}=\frac{1}{\sqrt{2}}(\ket{01}-\ket{10}), \end{eqnarray} and \begin{eqnarray} \label{1} \ket{\Psi_1}=\frac{1}{\sqrt{2}}(\ket{000}+\ket{111}), \ket{\Psi_2}=\frac{1}{\sqrt{2}}(\ket{000}-\ket{111}),\nonumber\\ \ket{\Psi_3}=\frac{1}{\sqrt{2}}(\ket{100}+\ket{011}), \ket{\Psi_4}=\frac{1}{\sqrt{2}}(\ket{100}-\ket{011}),\nonumber\\ \ket{\Psi_5}=\frac{1}{\sqrt{2}}(\ket{010}+\ket{101}), \ket{\Psi_6}=\frac{1}{\sqrt{2}}(\ket{010}-\ket{101}),\nonumber\\ \ket{\Psi_7}=\frac{1}{\sqrt{2}}(\ket{110}+\ket{001}), \ket{\Psi_8}=\frac{1}{\sqrt{2}}(\ket{110}-\ket{001}),\nonumber\\ \end{eqnarray} respectively. The four unitary operations which can be encoded as two bits classical information are \begin{eqnarray} I=\ket{0}\bra{0}+\ket{1}\bra{1},\nonumber\\ \sigma_x=\ket{0}\bra{1}+\ket{1}\bra{0},\nonumber\\ i\sigma_y=\ket{0}\bra{1}-\ket{1}\bra{0},\nonumber\\ \sigma_z=\ket{0}\bra{0}-\ket{1}\bra{1}. \end{eqnarray} Here the encoding is defined as $I\rightarrow00$, $\sigma_x\rightarrow01$, $i\sigma_y\rightarrow10$, $\sigma_z\rightarrow11$. The Hadamard ($H$) operation is \begin{eqnarray} H=\frac{1}{\sqrt{2}}(\ket{0}\bra{0}-\ket{1}\bra{1}+\ket{0}\bra{1}+\ket{1}\bra{0}). \end{eqnarray} \section{Scheme1: Quantum broadcast communication using entanglement swapping} In scheme 1, Trent utilizes multi-particle GHZ states and entanglement swapping to realize quantum broadcast communication, called QBC-ES. We first present our QBC-ES scheme with two users (Alice$_1$, Alice$_2$) and then generalize it to the case with many users (Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$). In the scheme, Trent broadcasts his secret message to a group of users and all users have the same authentication key which we call group key (GK). (S1) Trent prepares an ordered $N$ three-particle GHZ states each of which is in the state $\ket{\Psi_1}=\frac{1}{\sqrt{2}}(\ket{000}+\ket{111})_{TA_1A_2}$, where the subscripts $T$, $A_1$ and $A_2$ represent the three particles of each GHZ state. Trent takes particle $T$ ($A_1$, $A_2$) for each state to form an ordered particle sequence, called $T$ ($A_1$, $A_2$) sequence. For each GHZ state, Trent performs $I$ or $H$ operation on particles $A_1$ and $A_2$ according to the group key, $GK$. If the $i$th value of $GK$ is 0 (1), he performs $I$ ($H$) operation on particles $A_1$ and $A_2$. As we have described in Sec. \ref{introduction}, here $GK=h(ID, C)$. If the length of $GK$ is not long enough to $N$, new $GK$ can be generated by increasing the counter until the length of $GK$ is no less than $N$. Trent also performs randomly one of the two operations \{$I$, $i\sigma_y$\} on particle $A_1$. He then sends $A_1$ and $A_2$ sequences to Alice$_1$ and Alice$_2$, respectively. (S2) After receiving $A_1$ and $A_2$ sequences, Alice$_1$ and Alice$_2$ perform corresponding $I$ or $H$ operations on each of their particles according to $GK$. For example, if the $i$th value of $GK$ is 0 (1), Alice$_1$ executes $I$ ($H$) operation on particle $A_1$. 
They inform Trent that they have transformed their qubit using unitary operation according to $GK$. Trent then authenticates the users and checks eavesdropping during the transmission of $A_1$ and $A_2$ sequences. (S3) The procedure of authentication and eavesdropping check is as follows. (a) After hearing from the users, Trent selects randomly a sufficiently large subset from the ordered $N$ GHZ states. (b) He measures the sampling particles in $T$ sequence, in a random measuring basis, $Z$-basis(\ket{0},\ket{1}) or $X$-basis (\ket{+}=$\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$, \ket{-}=$\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$). (c) Trent announces publicly the positions of the sampling particles and the measuring basis for each of the sampling particles. Alice$_1$ (Alice$_2$) measures the sampling particles in $A_1$ ($A_2$) sequence, in the same measuring basis as Trent. After measurements, the users publishes their measurement results. (d) Trent can then authenticate the users and check the existence of eavesdropper by comparing their measurement results. If the users are legal and the channel is safe, their results must be completely correlated. Suppose Trent's operation performed on particle $A_1$ is $I$. When Trent performs $Z$-basis measurement on his particle, Alices' result should be \ket{00} (\ket{11}) if Trent's result is \ket{0} (\ket{1}). On the contrary, Alices' result should be \ket{++} or \ket{--} (\ket{+-} or \ket{-+}) if Trent performs $X$-basis measurement on his particle and gets the result \ket{+} (\ket{-}). Thus if Trent confirms that the users are legal and there is no eavesdropping, they continue to execute the next step. Otherwise, he aborts the communication. (S4) After authenticating the users, Trent announces publicly his random operations on the particles in $A_1$ sequence and Alice$_1$ performs the same operations on them. Trent divides the remaining GHZ states into $M$ ordered groups, \{P(1)$_{TA_1A_2}$, Q(1)$_{T'A_1'A_2'}$\}, \{P(2)$_{TA_1A_2}$, Q(2)$_{T'A_1'A_2'}$\}, $\cdots$, \{P(M)$_{TA_1A_2}$, Q(M)$_{T'A_1'A_2'}$\}, where 1, 2, $\cdots$, $M$ represent the order of the group and the subscripts $T$ and $T'$ ($A_1$, $A_1'$ and $A_2$, $A_2'$) denote Trent's (Alice$_1$'s and Alice$_2$'s ) particles. Trent encodes his secret on each particle $T$ by using one of the four operations \{$I$, $\sigma_x$, $i\sigma_y$, $\sigma_z$\}. The parties agree that the four operations represent two-bit classical message, as we have described in Sec.\ref{introduction}. Alice$_1$ generates a $M$-bit random string, $a_1$. For each group, she performs one of the two unitary operations \{$I$, $\sigma_x$\} on particle $A_1$ according to $a_1$. For example, if the $i$th value of $a_1$ is 0 (1), Alice$_1$ executes $I$ ($\sigma_x$) operation on particle $A_1$. Here Alice$_1$ does not perform any operation on particle $A_1'$. (S5) Alice$_1$ (Alice$_2$) measures particles $A_1$ and $A_1'$ ($A_2$ and $A_2'$) of each group in Bell basis. After measurements, Alice$_1$ publishes her measurement results, but Alice$_2$ does not do this directly. According to $GK$, Alice$_2$ transforms her result by using corresponding unitary operation. If the $(2i-1)$th and $2i$th values of $GK$ are 00 (01, 10, 11), she performs $I$ ($\sigma_x$, $i\sigma_y$, $\sigma_z$) operation on her result and then publishes the transformed result. If Alice$_2$'s result is \ket{\phi^+} and the corresponding bits of $GK$ are 01, the published result by her is \ket{\psi^+}. 
(S6) Trent performs Bell basis measurement on particles $T$ and $T'$ of each group and publishes his measurement results. According to the published information, the users can obtain Trent's secret message. We then explain it in detail. The state of a group can be written as \begin{eqnarray} \ket{\Psi_1}_{TA_1A_2}\otimes\ket{\Psi_1}_{T'A_1'A_2'}= \frac{1}{2\sqrt{2}}(\ket{\phi^+_{TT'}}\ket{\phi^+_{A_1A_1'}}\ket{\phi^+_{A_2A_2'}}\nonumber\\ +\ket{\phi^+_{TT'}}\ket{\phi^-_{A_1A_1'}}\ket{\phi^-_{A_2A_2'}}\nonumber\\ +\ket{\phi^-_{TT'}}\ket{\phi^+_{A_1A_1'}}\ket{\phi^-_{A_2A_2'}}+\ket{\phi^-_{TT'}}\ket{\phi^-_{A_1A_1'}}\ket{\phi^+_{A_2A_2'}}\nonumber\\ +\ket{\psi^+_{TT'}}\ket{\psi^+_{A_1A_1'}}\ket{\psi^+_{A_2A_2'}}+\ket{\psi^+_{TT'}}\ket{\psi^-_{A_1A_1'}}\ket{\psi^-_{A_2A_2'}}\nonumber\\ +\ket{\psi^-_{TT'}}\ket{\psi^+_{A_1A_1'}}\ket{\psi^-_{A_2A_2'}}+\ket{\psi^-_{TT'}}\ket{\psi^-_{A_1A_1'}}\ket{\psi^+_{A_2A_2'}}). \end{eqnarray} If Trent's encoding operation is $\sigma_x$ which corresponds to secret bits 01 and Alice$_1$'s random operation is also $\sigma_x$ corresponding to bit 1, \ket{\Psi_1}$_{TA_1A_2}$ is then transformed to \ket{\Psi_7}$_{TA_1A_2}$ and the state of the group becomes \begin{eqnarray} \label{es} \ket{\Psi_7}_{TA_1A_2}\otimes\ket{\Psi_1}_{T'A_1'A_2'}= \frac{1}{2\sqrt{2}}(\ket{\psi^+_{TT'}}\ket{\psi^+_{A_1A_1'}}\ket{\phi^+_{A_2A_2'}}\nonumber\\ -\ket{\psi^+_{TT'}}\ket{\psi^-_{A_1A_1'}}\ket{\phi^-_{A_2A_2'}}\nonumber\\ -\ket{\psi^-_{TT'}}\ket{\psi^+_{A_1A_1'}}\ket{\phi^-_{A_2A_2'}}+\ket{\psi^-_{TT'}}\ket{\psi^-_{A_1A_1'}}\ket{\phi^+_{A_2A_2'}}\nonumber\\ +\ket{\phi^+_{TT'}}\ket{\phi^+_{A_1A_1'}}\ket{\psi^+_{A_2A_2'}}-\ket{\phi^+_{TT'}}\ket{\phi^-_{A_1A_1'}}\ket{\psi^-_{A_2A_2'}}\nonumber\\ -\ket{\phi^-_{TT'}}\ket{\phi^+_{A_1A_1'}}\ket{\psi^-_{A_2A_2'}}+\ket{\phi^-_{TT'}}\ket{\phi^-_{A_1A_1'}}\ket{\psi^+_{A_2A_2'}}). \end{eqnarray} From the results of Trent and Alice$_1$, Alice$_2$ can deduce Trent's secret message because the three parties' results correspond to an exclusive state. For example, the results of Trent and Alice$_1$ are each \ket{\psi^-_{TT'}} and \ket{\psi^-_{A_1A_1'}} and Alice$_2$'s original result is \ket{\phi^+_{A_2A_2'}}. According to Eq. (\ref{es}), the state of the group must be \ket{\Psi_7}$_{TA_1A_2}\otimes$\ket{\Psi_1}$_{T'A_1'A_2'}$. Alice$_2$ then knows Trent's secret must be ``01'' because only the operation $\sigma_x\otimes\sigma_x$ applied on particles $T$ and $A_1$ can change the state \ket{\Psi_1} into \ket{\Psi_7}. On the other hand, Alice$_1$ knows $GK$ and she can deduce Alice$_2$'s original result according to her published result. Similarly, she can also obtain Trent's secret. Thus Trent broadcasts his secret to two legal users. Now, let us discuss the security for the present scheme. The security requirement for the scheme is that any illegal user cannot obtain Trent's secret. As long as the procedure of authentication and eavesdropping check is secure, the whole scheme is secure. Anyone who has no $GK$ cannot obtain Trent's secret message because it is impossible to deduce a definite result about the secret from the published results. We then discuss the security for the procedure of authentication and eavesdropping check. At step (S1), Trent performs $I$ or $H$ operation on particles $A_1$ and $A_2$ according to $GK$ which is only shared by the three parties. 
If the $i$th bit of $GK$ is 0 or 1, the three-particle GHZ state becomes \begin{eqnarray} \label{security1} \ket{\Phi_1}&=&\frac{1}{\sqrt{2}}(\ket{000}+\ket{111})=\frac{1}{\sqrt{2}}(\ket{+}\ket{\phi^+}+\ket{-}\ket{\phi^-})\nonumber\\ &=&\frac{1}{2}[\ket{+}(\ket{++}+\ket{--})+\ket{-}(\ket{+-}+\ket{-+})]\nonumber\\ \end{eqnarray} or \begin{eqnarray} \label{security2} \ket{\Phi_2}&=&\frac{1}{\sqrt{2}}(\ket{0++}+\ket{1--})=\frac{1}{\sqrt{2}}(\ket{+}\ket{\phi^+}+\ket{-}\ket{\psi^+})\nonumber\\ &=&\frac{1}{2}[\ket{+}(\ket{++}+\ket{--})+\ket{-}(\ket{++}-\ket{--})]. \end{eqnarray} According to Eqs. (\ref{security1}) and (\ref{security2}), if an eavesdropper, Eve, intercepts particles $A_1$ and $A_2$ and makes a Bell basis measurement on them, she can obtain partial information of $GK$. However, Trent performs random $I$ or $i\sigma_y$ operation on particle $A_1$, which can prevent Eve from eavesdropping the information of $GK$. As a result of Trent's operation, there are four possible states \ket{\Phi_1}, \ket{\Phi_2}, \begin{eqnarray} \label{security3} \ket{\Phi_3}&=&\frac{1}{\sqrt{2}}(-\ket{010}+\ket{101})=\frac{1}{\sqrt{2}}(\ket{+}\ket{\psi^-}-\ket{-}\ket{\psi^+})\nonumber\\ &=&\frac{1}{2}[\ket{+}(\ket{-+}-\ket{+-})-\ket{-}(\ket{++}-\ket{--})]\nonumber\\ \end{eqnarray} and \begin{eqnarray} \label{security4} \ket{\Phi_4}&=&\frac{1}{\sqrt{2}}(\ket{0-+}-\ket{1+-})=\frac{1}{\sqrt{2}}(\ket{+}\ket{\psi^-}+\ket{-}\ket{\phi^-})\nonumber\\ &=&\frac{1}{2}[\ket{+}(\ket{-+}-\ket{+-})+\ket{-}(\ket{+-}+\ket{-+})].\nonumber\\ \end{eqnarray} According to Eqs. (\ref{security1})-(\ref{security4}), Eve cannot distinguish the four states by using Bell basis measurement. During the authentication and eavesdropping check, Trent measures his sampling particles in $Z$-basis or $X$-basis randomly and allows the users to measure their corresponding particles in the same measuring basis. Suppose Trent performs $Z$-basis measurement on his particle and Eve also measures particles $A_1$ and $A_2$ in $Z$-basis. Eve publishes her measurement result after measurements. If the state is \ket{\Phi_1} or \ket{\Phi_3}, Eve will not introduce any error during the process of authentication and eavesdropping check. However, If the state is \ket{\Phi_2} and \ket{\Phi_4}, Eve will obtain \ket{00}, \ket{01}, \ket{10} and \ket{11} each with probability 1/4 and the error rate introduced by her achieves 75\%. Similarly, when Trent performs $X$-basis measurement and Eve measures particles $A_1$ and $A_2$ in the same measuring basis, if the state is \ket{\Phi_2} or \ket{\Phi_4}, Eve will not introduce any error. But if the state is \ket{\Phi_1} and \ket{\Phi_3}, Eve will obtains \ket{++}, \ket{+-}, \ket{-+} and \ket{--} each with probability 1/4 and the error rate is 50\%. Suppose Trent performs $X$-basis measurement and Eve measures particles $A_1$ and $A_2$ in Bell basis. When Eve's result is \ket{\phi^+} (\ket{\psi^-}), her action will not be detected by Trent if she publishes the result $\ket{++}$ or \ket{--} (\ket{+-} or \ket{-+}). However, when her result is \ket{\phi^-} (\ket{\psi^+}), if the state is \ket{\Phi_4} (\ket{\Phi_3}), Trent will detect Eve's eavesdropping. Similarly, if Trent performs $Z$-basis measurement and Eve executes Bell basis measurement, Eve's eavesdropping will also be detected by Trent with a certain probability. According to Stinespring dilation theorem, Eve's action can be realized by a unitary operation $\hat{E}$ on a large Hilbert space, $H_{A_1A_2}\otimes H_{E}$. 
Then the state of Trent, Alice$_1$, Alice$_2$ and Eve is
\begin{eqnarray}
\ket{\Phi}=\sum_{T,A_1,A_2\in\{0,1\}}\ket{\varepsilon_{T,A_1,A_2}}\ket{T}\ket{A_1A_2},
\end{eqnarray}
where \ket{\varepsilon} denotes Eve's probe state and \ket{T} and \ket{A_1A_2} are states shared by Trent and the users. The condition on the states of Eve's probe is
\begin{eqnarray}
\sum_{T,A_1,A_2\in\{0,1\}}\langle\varepsilon_{T,A_1,A_2}\,|\,\varepsilon_{T,A_1,A_2}\rangle=1.
\end{eqnarray}
As Eve can eavesdrop on particles $A_1$ and $A_2$, her action on the system can be written as
\begin{eqnarray}
\label{security5}
\ket{\Phi}&=&\frac{1}{\sqrt{2}}[\ket{0}(\alpha_1\ket{00}\ket{\varepsilon_{000}}+\beta_1\ket{01}\ket{\varepsilon_{001}}+\gamma_1\ket{10}\ket{\varepsilon_{010}}\nonumber\\
&+&\delta_1\ket{11}\ket{\varepsilon_{011}})+\ket{1}(\delta_2\ket{11}\ket{\varepsilon_{100}}+\gamma_2\ket{10}\ket{\varepsilon_{101}}\nonumber\\
&+&\beta_2\ket{01}\ket{\varepsilon_{110}}+\alpha_2\ket{00}\ket{\varepsilon_{111}})].
\end{eqnarray}
When the state is \ket{\Phi_1}, the error rate introduced by Eve is $\epsilon=1-|\alpha_1|^2=1-|\delta_2|^2$. Here the complex numbers $\alpha$, $\beta$, $\gamma$ and $\delta$ must satisfy $\hat{E}\hat{E}^\dag=I$.
We then generalize the three-party QBC-ES scheme to a multiparty one (more than three parties). Suppose Trent wants to broadcast his secret to a group of users, \{Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$\}. He prepares an ordered $N$ $(r+1)$-particle GHZ states
\begin{eqnarray}
\frac{1}{\sqrt{2}}(\ket{00\cdots0}+\ket{11\cdots1})_{T,A_1,\cdots,A_r}.
\end{eqnarray}
The details of the multiparty QBC-ES scheme are very similar to those of the three-party one. Trent performs $I$ or $H$ operations on particles $A_1$, $A_2$, $\cdots$, $A_r$ according to the $GK$ they share. He also randomly performs an $I$ or $i\sigma_y$ operation on particles $A_1$, $A_2$, $\cdots$, $A_{(r-1)}$ and sends the $A_1$, $A_2$, $\cdots$, $A_r$ sequences to Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$, respectively. After receiving the particles, each user performs $I$ or $H$ operations on her particles according to $GK$. Similar to step (S3), Trent authenticates the users and checks eavesdropping. If all users are legal, he announces publicly his operations on particles $A_1$, $A_2$, $\cdots$, $A_{(r-1)}$, and Alice$_1$, Alice$_2$, $\cdots$, Alice$_{(r-1)}$ execute the same operations on them. Trent divides all GHZ states into $N$ ordered groups, [\{P(1)$_{TA_1\cdots A_r}$, Q(1)$_{T'A_1'\cdots A_r'}$\}, $\cdots$, \{P(N)$_{TA_1\cdots A_r}$, Q(N)$_{T'A_1'\cdots A_r'}$\}]. He encodes his secret on particles $T$ using one of the four operations, \{$I$, $\sigma_x$, $i\sigma_y$, $\sigma_z$\}. Alice$_1$, Alice$_2$, $\cdots$, Alice$_{(r-1)}$ each perform randomly one of the two operations \{$I$, $\sigma_x$\} on their particles. Each user measures particles $A_i$ and $A'_i$ ($i=1,2,\cdots,r$) in the Bell basis. After measurements, Alice$_1$, Alice$_2$, $\cdots$, Alice$_{(r-1)}$ publish their measurement results. Alice$_r$ first transforms her result according to $GK$ and then publishes the transformed result. Trent also performs a Bell basis measurement on particles $T$ and $T'$ of each group. Thus Trent broadcasts his secret to all legal users, Alice$_1$, Alice$_2$, $\cdots$, Alice$_{(r-1)}$ and Alice$_r$. The security of the multiparty QBC-ES scheme is similar to that of the three-party one. As long as the procedure of authentication and eavesdropping check is secure, the scheme is secure.
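Before turning to the authentication variant, it is worth recording the elementary correspondence on which the decoding in step (S6), and in its multiparty analogue, rests; it can be checked directly from \eqref{1}, and the signs below are global phases which do not affect any measurement statistics:
\begin{eqnarray}
(I\otimes I\otimes I)\ket{\Psi_1}=\ket{\Psi_1}, &\quad& (I\otimes \sigma_x\otimes I)\ket{\Psi_1}=\ket{\Psi_5},\nonumber\\
(\sigma_z\otimes I\otimes I)\ket{\Psi_1}=\ket{\Psi_2}, &\quad& (\sigma_z\otimes \sigma_x\otimes I)\ket{\Psi_1}=\ket{\Psi_6},\nonumber\\
(\sigma_x\otimes I\otimes I)\ket{\Psi_1}=\ket{\Psi_3}, &\quad& (\sigma_x\otimes \sigma_x\otimes I)\ket{\Psi_1}=\ket{\Psi_7},\nonumber\\
(i\sigma_y\otimes I\otimes I)\ket{\Psi_1}=-\ket{\Psi_4}, &\quad& (i\sigma_y\otimes \sigma_x\otimes I)\ket{\Psi_1}=-\ket{\Psi_8},
\end{eqnarray}
where the three factors act on particles $T$, $A_1$ and $A_2$, respectively. The eight admissible pairs consisting of Trent's encoding operation and Alice$_1$'s random operation thus produce the eight GHZ states of \eqref{1} exactly once each, so once Alice$_1$'s random bit is known, the entanglement-swapping outcomes determine Trent's two encoded bits unambiguously.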
Based on entanglement swapping, we can also obtain a multiparty simultaneous quantum authentication scheme using multi-particle GHZ states. Here each user shares his or her own authentication key with Trent. Trent prepares a batch of GHZ states $\frac{1}{\sqrt{2}}(\ket{00\cdots0}+\ket{11\cdots1})_{t,1,2,\cdots,r}$. For each GHZ state, he sends particles 1, 2, $\cdots$, $r$ to Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$, respectively, and keeps particle $t$. Similar to the above method, he divides all GHZ states into ordered groups and performs randomly one of the two unitary operations \{$I$, $i\sigma_y$\} on particle $t$. Each Alice$_i$ ($i=1,2,\cdots,r$) performs an $I$ or $i\sigma_y$ operation on her particles according to her authentication key. Similarly, the parties perform Bell basis measurements on their particles of each group. Trent lets the users publish their measurement results and can then authenticate the $r$ users simultaneously.
\section{Scheme2: Quantum broadcast communication based on dense coding}
In scheme 1, Trent can only broadcast his secret to a group of users who share a $GK$ with him. Based on dense coding, we present a quantum broadcast communication scheme, called the QBC-DC scheme, in which Trent broadcasts his secret to multiple users \{Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$\} and each user shares his or her own authentication key with Trent. We first present a three-party QBC-DC scheme and then generalize it to a multiparty one.
(S1) Trent prepares an ordered $N$ three-particle GHZ states, each of which is randomly in one of the eight GHZ states \{\ket{\Psi_1}$_{TA_1A_2}$,\ket{\Psi_2}$_{TA_1A_2}$,$\cdots$,\ket{\Psi_8}$_{TA_1A_2}$\}, where the subscripts $T$, $A_1$ and $A_2$ represent the three particles of each GHZ state. He takes particle $T$ from each of the GHZ states to form an ordered particle sequence, called $T$ sequence. Similarly, the remaining partner particles compose $A_1$ sequence and $A_2$ sequence. Trent performs one of the two operations \{$I$, $H$\} on each particle in $A_1$ sequence according to Alice$_1$'s authentication key, $AK_{A_1}$. Here $AK_{A_1}=h_{A_1}(ID_{A_1}, C_{A_1})$. That is, if the $i$th value of $AK_{A_1}$ is 0 (1), he performs $I$ ($H$) operation on particle $A_1$. Trent also executes $I$ or $H$ operation on particle $A_2$ according to Alice$_2$'s authentication key, $AK_{A_2}$. After doing these, he sends $A_1$ and $A_2$ sequences to Alice$_1$ and Alice$_2$, respectively.
(S2) Alice$_1$ (Alice$_2$) performs the corresponding $I$ or $H$ operation on each of her particles according to $AK_{A_1}$ ($AK_{A_2}$). For example, if the $i$th value of $AK_{A_1}$ is 0 (1), $I$ ($H$) operation is applied to particle $A_1$. After doing these, they inform Trent. Trent then authenticates the users and checks eavesdropping during the transmission of $A_1$ and $A_2$ sequences.
(S3) We then describe the procedure of authentication and eavesdropping check in detail. (a) Trent selects randomly a sufficiently large subset from the ordered GHZ states. (b) He measures the sampling particles in $T$ sequence in $Z$-basis or $X$-basis randomly. (c) Trent announces publicly the positions of the sampling particles and the measuring basis for each of the sampling particles. Alice$_1$ and Alice$_2$ measure their sampling particles in the same basis as Trent. After measurements, the users publish their measurement results. (d) Trent can authenticate the users and check for the existence of an eavesdropper by comparing their measurement results.
If the users are legal and the channel is safe, their results must be completely correlated. For example, the initial state is \ket{\Psi_1}. Suppose Trent performs $Z$-basis measurement on particle $T$. If Trent's result is \ket{0} (\ket{1}), the users' results must be \ket{00} (\ket{11}). If Trent performs $X$-basis measurement on his particle and gets the result \ket{+} (\ket{-}), the users' results should be \ket{++} or \ket{--} (\ket{+-} or \ket{-+}). If any user is illegal, Trent abandons the communication. Otherwise, they continue to the next step. (S4) Alice$_1$ and Alice$_2$ each generate a random string, $a_1$ and $a_2$. According to their random strings, they perform one of the two unitary operations \{$I$, $i\sigma_y$\} on particles $A_1$ and $A_2$, respectively. For example, if the $i$th value of Alice$_1$'s random string is 0 (1), she performs $I$ ($i\sigma_y$) operation on particle $A_1$. After their operations, they return $A_1$ and $A_2$ sequences to Trent. (S5) Trent selects randomly a sufficiently large subset from the ordered GHZ states and performs randomly one of the four unitary operations \{ $I$, $\sigma_x$, $i\sigma_y$, $\sigma_z$\} on each of the sampling particles in $T$ sequence. He then encodes his secret message on the remaining particles $T$ by performing one of the four unitary operations on each of them. Trent measures particles $T$, $A_1$ and $A_2$ of each GHZ state in three-particle GHZ basis. He announces publicly the positions of the sampling particles and lets each user publish their corresponding random operations on the sampling particles in $A_1$ and $A_2$ sequences. According to his measurement result, Trent can check the security for the transmission of the returned particle sequences. When the initial state is \ket{\Psi_1}, his result is \ket{\Psi_8} and his operation on particle $T$ is $\sigma_x$, he can deduce Alice$_1$'s and Alice$_2$'s operations are each $i\sigma_y$ and $I$. He then compares his conclusion with the operations published by the users and can decide the security for the transmitting particles. If he confirms that there is no eavesdropping, he publishes his measurement results and the initial GHZ states he prepared. Alice$_1$ and Alice$_2$ can then obtain Trent's secret. For example, when the initial state is \ket{\Psi_1} and Trent's operation is $\sigma_x$ (corresponds to his secret 01), $\ket{\Psi_1}$ is transformed to \ket{\Psi_8}, if Alice$_1$ and Alice$_2$ perform $i\sigma_y$ and $I$ operations on particles $A_1$ and $A_2$, respectively. That is, $\sigma_x\otimes$$i\sigma_y\otimes$$I$\ket{\Psi_1}=\ket{\Psi_8}. According to her operation performed on particle $A_1$ ($A_2$), Trent's result \ket{\Psi_8} and the initial state \ket{\Psi_1}, Alice$_1$ (Alice$_2$) can obtains Trent's secret message, 01. The security for the three-party QBC-DC scheme is based on that for the procedure of authentication and eavesdropping check. Trent prepares an ordered $N$ GHZ states each of which is in one of the eight GHZ states and performs $I$ or $H$ operation on particle $A_1$ and $A_2$ according to each user's authentication key. For each initial state, there are four possible states after Trent's operations. 
For example, the initial state is \ket{\Psi_1}, then the four possible states are \begin{eqnarray} \ket{\Omega_1}&=&\frac{1}{\sqrt{2}}(\ket{000}+\ket{111}),\nonumber\\ \ket{\Omega_2}&=&\frac{1}{\sqrt{2}}(\ket{0+0}+\ket{1-1}),\nonumber\\ &=&\frac{1}{2}[\ket{+}(\ket{\phi^-}+\ket{\psi^+})+\ket{-}(\ket{\phi^+}-\ket{\psi^-})]\nonumber\\ \ket{\Omega_3}&=&\frac{1}{\sqrt{2}}(\ket{00+}+\ket{11-}),\nonumber\\ &=&\frac{1}{2}[\ket{+}(\ket{\phi^-}+\ket{\psi^+})+\ket{-}(\ket{\phi^+}+\ket{\psi^-})]\nonumber\\ \ket{\Omega_4}&=&\frac{1}{\sqrt{2}}(\ket{0++}+\ket{1--}). \end{eqnarray} Eve has no information of the user's authentication key. If Eve intercepts particles $A_1$ and $A_2$ and performs Bell basis measurement on them, she cannot obtain the information of the user's authentication key because she cannot distinguish the four states. During the procedure of authentication and eavesdropping check, Trent performs randomly $Z$-basis or $X$-basis measurement on his particle. When Trent performs $Z$-basis measurement and Eve measures the intercepted particles in the same measuring basis as Trent, if the state is not \ket{\Omega_1}, it is possible for her to publish a wrong result after measurements. Similarly, whether Eve utilizes $Z$-basis, $X$-basis or Bell basis measurement, she will publish a wrong result with some probability and her action will be detected by Trent during the procedure of authentication and eavesdropping check. We can also describe Eve's effect on the system as Eq. (\ref{security5}). If the initial state is \ket{\Psi_5}, the error rate introduced by Eve is $\epsilon=1-|\gamma_1|^2$. We then generalize the three-party QBC-DC scheme to a multiparty (more than three parties) one which Trent broadcasts his secret to $r$ users \{Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$\}. He prepares an ordered $N$ GHZ states each of which is randomly in one of the $2^{(r+1)}$ $(r+1)$-particle GHZ states \begin{eqnarray} \frac{1}{\sqrt{2}}(\ket{ij\cdots k}+\ket{\bar{i}\bar{j}\cdots \bar{k}})_{T,A_1,\cdots,A_r}, \end{eqnarray} where $i,j,\cdots,k \in \{0,1\}$ and $\bar{i},\bar{j},\cdots,\bar{k}$ are the counterparts of $i,j,\cdots,k$. The details of the multiparty QBC-DC is very similar to those of three-party one. Trent performs $I$ or $H$ operations on each particle in $A_1$ ( $A_2$, $\cdots$, $A_r$) sequence according to $AK_{A_1}$ ($AK_{A_2}$, $\cdots$, $AK_{A_r}$). He then sends $A_1$, $A_2$, $\cdots$, $A_r$ sequences to Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$, respectively. After receiving the particle sequence, each user performs $I$ or $H$ operations on their particles according to their respective authentication key. Similar to step (S3) in the three-party QBC-DC scheme, Trent authenticates the users and checks eavesdropping. If any user is illegal, Trent aborts communication, otherwise, they continue to the next step. Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$ perform randomly $I$ or $i\sigma_y$ operation on their respective particles. After doing these, they return $A_1$, $A_2$, $\cdots$, $A_r$ sequences to Trent. Trent first chooses a sufficiently large subset to check the security for the transmitting particles and performs randomly one of the four operations on the sampling particles in $T$ sequence. He also encodes his secret on the remaining particles in $T$ sequence using the four operations. Trent measures each of $(r+1)$-particle GHZ states in $(r+1)$-particle GHZ basis. 
He publishes the positions of the sampling particles and lets the users announce publicly their operations on the sampling particles. Trent can then check the security for the transmission of the returned particle sequences. If there is no eavesdropping, he publishes his measurement results and the initial $(r+1)$-particle GHZ states she prepared. Thus Trent broadcasts his secret to Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$. \section{Scheme 3: Quantum broadcast communication based on quantum encryption} In scheme 2, Trent broadcasts his secret to a group of designated users. We then present a QBC scheme with quantum encryption, called QBC-QE scheme, which Trent can broadcast his secret to any subset of the users. The details of the QBC-QE scheme are as follows. (S1) Trent prepares an ordered $N$ $(r+1)$-particle GHZ states each of which is in the state \begin{eqnarray} \frac{1}{\sqrt{2}}(\ket{00\cdots0}+\ket{11\cdots1})_{T,A_1,\cdots,A_r}. \end{eqnarray} The $N$ particles $T$ form $T$ sequence and the $N$ particles $A_i$ ($i=1,2,\cdots,r$) form $A_i$ sequence. Trent performs one of the two operations \{$I$, $H$\} on the particles in $A_i$ sequence according to Alice$_i$'s authentication key. That is, if Alice$_i$'s $AK_{A_i}$ is 0 (1), Trent performs $I$ ($H$) on particle $A_i$. Trent sends $A_1$, $A_2$, $\cdots$, $A_r$ sequences to Alice$_1$, Alice$_2$, $\cdots$, Alice$_r$, respectively. (S2) Alice$_i$ ($i=1,2,\cdots,r$) performs corresponding $I$ or $H$ operation on each of her particles according to $AK_{A_i}$. Similar to the method of step (S3) in QBC-DC scheme, Trent authenticates the users and checks eavesdropping by performing random $Z$-basis or $X$-basis measurement. If the users are legal and the channel is safe, they continue to the next step. Otherwise, Trent stops the communication. (S3) Trent utilizes controlled-not (CNOT) operation to encrypt his secret message. For example, Trent transmits his secret $P=\{p_1, p_2, \cdots, p_m\}$, where $p_i\in\{0,1\}$ ($i=1,2,\cdots,m$) represents classical bit 0 or 1, to two users \{Alice$_j$, Alice$_k$\} ($1\leq j,k\leq r$). He prepares his secret in the state \ket{p_ip_i}$_{S_jS_k}$, where $S_j$, $S_k$ denote the two particles of the state. Trent performs CNOT operation on particles $T$, $S_j$ and $S_k$ (particle $T$ is the controller and $S_j$ and $S_k$ are the targets). Then the GHZ state of the whole quantum system becomes \begin{eqnarray} \label{qe1} \ket{\Upsilon}&=&\frac{1}{\sqrt{2}}(\ket{00\cdots0,p_i,p_i}\nonumber\\ &+&\ket{11\cdots 1,1\oplus p_i,1\oplus p_i})_{T,A_1,\cdots,A_r,S_j,S_k}. \end{eqnarray} According to Alice$_j$'s and Alice$_k$'s authentication keys, Trent performs corresponding $I$ or $H$ operation on particles $S_j$ and $S_k$. For example, if the $i$th value of Alice$_j$'s authentication key is 0 (1), he performs $I$ ($H$) operation on particle $S_j$. Here we denote the operation performed on $S_j$ ($S_k$) as $H_{{AK}^i_{A_j}}$ ($H_{{AK}^i_{A_k}}$), where ${AK}^i_{A_j}$ (${AK}^i_{A_k}$) represents the $i$th value of Alice$_j$'s (Alice$_k$'s) authentication key and $H_0$ ($H_1$) represents $I$ ($H$) operation. Thus \ket{\Upsilon} is transformed to \begin{eqnarray} \label{qe2} \ket{\Upsilon'}=\frac{1}{\sqrt{2}}[\ket{00\cdots0,H_{{AK}^i_{A_j}}(p_i),H_{{AK}^i_{A_k}}(p_i)}+\vert 11\cdots1,\nonumber\\ H_{{AK}^i_{A_j}}(1\oplus{p_i}),H_{{AK}^i_{A_k}}(1\oplus{p_i})\rangle]_{T,A_1,\cdots,A_r,S_j,S_k}.\nonumber\\ \end{eqnarray} Trent then sends $S_j$ and $S_k$ sequences to Alice$_j$ and Alice$_k$, respectively. 
(S4) After receiving the $S_j$ ($S_k$) sequence, Alice$_j$ (Alice$_k$) first performs the corresponding $I$ or $H$ operation on the transmitted particles according to $AK_{A_j}$ ($AK_{A_k}$) and then executes a CNOT operation on particles $A_j$ ($A_k$) and $S_j$ ($S_k$). For example, if the $i$th value of Alice$_j$'s authentication key is 1, she performs the $H$ operation on the corresponding particle in the $S_j$ sequence and then a CNOT operation on particles $A_j$ and $S_j$ ($A_j$ is the control and $S_j$ is the target). According to Eqs.~(\ref{qe1}) and (\ref{qe2}), Alice$_j$ and Alice$_k$ can thus obtain Trent's secret message.
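Step (S4) can be checked in the same way. In the sketch below (our own illustration, with the same assumptions and hypothetical helpers as in the previous sketch), Trent's encoding is followed by Alice$_j$'s decoding, namely the key-dependent $I$/$H$ operation on $S_j$ and the CNOT operation with $A_j$ as control and $S_j$ as target; the loop confirms that a computational-basis measurement of $S_j$ (the natural readout, although not spelled out explicitly above) then returns $p_i$ with probability one for every choice of the secret bit and of the key bits, in agreement with Eqs.~(\ref{qe1}) and (\ref{qe2}).
\begin{verbatim}
import numpy as np

k0 = np.array([1.0, 0.0]); k1 = np.array([0.0, 1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I2 = np.eye(2)

def kron(*factors):
    out = np.array([1.0])
    for f in factors:
        out = np.kron(out, f)
    return out

def cnot(n, control, target):
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for idx in range(dim):
        c = (idx >> (n - 1 - control)) & 1
        U[idx ^ (c << (n - 1 - target)), idx] = 1.0
    return U

def one_qubit(n, q, op):
    ops = [I2] * n
    ops[q] = op
    return kron(*ops)

n = 5  # qubit ordering (T, A_1, A_2, S_j, S_k), r = 2 users
Hp = [I2, H]
for p in (0, 1):          # Trent's secret bit p_i
    for a in (0, 1):      # i-th bit of AK_{A_j}
        for b in (0, 1):  # i-th bit of AK_{A_k}
            ghz = (kron(k0, k0, k0) + kron(k1, k1, k1)) / np.sqrt(2)
            psi = kron(ghz, [k0, k1][p], [k0, k1][p])
            # Trent: CNOTs T -> S_j, T -> S_k, then H_{AK} on S_j, S_k.
            psi = cnot(n, 0, 4) @ cnot(n, 0, 3) @ psi
            psi = one_qubit(n, 3, Hp[a]) @ one_qubit(n, 4, Hp[b]) @ psi
            # Alice_j: I/H according to AK_{A_j}, then CNOT A_j -> S_j.
            psi = cnot(n, 1, 3) @ one_qubit(n, 3, Hp[a]) @ psi
            # Probability that measuring S_j in the Z basis yields p.
            prob = sum(abs(psi[idx]) ** 2 for idx in range(2 ** n)
                       if ((idx >> (n - 1 - 3)) & 1) == p)
            assert np.isclose(prob, 1.0)
print("Alice_j recovers p_i deterministically in all cases.")
\end{verbatim}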
In the above scheme, we have only given an example of sending the secret to two of the $r$ users. Obviously, Trent can send his secret to any subset of the legal users in the scheme. Strictly speaking, the present scheme is not a genuine QBC scheme, because Trent must transmit a particle sequence to each user. In view of the fact that Trent can transmit his secret to multiple users directly in this scheme, we still regard it as QBC.

The security of the authentication and eavesdropping check in this scheme is the same as that in the QBC-DC scheme. After confirming that the users are legal and ensuring the security of the quantum channel, the GHZ states can be regarded as a quantum key. Trent can then encrypt his secret message using the quantum key they share. If Trent wants to transmit his secret to a subset of the users, he performs the $I$ or $H$ operation on the encoding particles according to each designated user's authentication key, which ensures that only the users in this subset can obtain Trent's secret. After ensuring the security of the transmitted particles, each designated user decrypts the secret by using the quantum key. The procedure of encryption and decryption in our scheme is the same as the quantum one-time pad, except that the quantum key consists of the GHZ states shared by the parties. In this scheme, the quantum key can be used repeatedly for the next round of the cryptographic task.

\section{Summary}

In summary, we have presented three schemes for quantum broadcast communication. In our schemes, Trent broadcasts his secret message to multiple users directly, and only the legal users can obtain Trent's secret. In scheme 1, based on the idea in Ref.~\cite{gao2}, we utilize entanglement swapping to realize a QBC scheme in which Trent sends his secret to a group of users who share a group key with Trent. In scheme 2, based on the idea in Ref.~\cite{jin}, we present a QBC scheme in which Trent broadcasts his secret, by means of dense coding, to multiple users, each of whom shares an authentication key with Trent. Scheme 3 is based on quantum encryption \cite{zlg01}, and in it Trent can broadcast his secret to any subset of the legal users. Because our schemes utilize block transmission, quantum memory is necessary. Moreover, compared with classical broadcast encryption, which allows the sender to securely distribute a secret to a dynamically changing group of users, the present schemes are not genuine quantum broadcast encryption schemes. That is why we call them quantum broadcast communication schemes.

We hope that our work will attract more attention and give impetus to further research on quantum broadcast communication.

\begin{acknowledgments}
This work is supported by the National Natural Science Foundation of China under Grant No. 60472032.
\end{acknowledgments}

\end{document}